Lu, Zhiyong
2012-01-01
Today’s biomedical research has become heavily dependent on access to the biological knowledge encoded in expert curated biological databases. As the volume of biological literature grows rapidly, it becomes increasingly difficult for biocurators to keep up with the literature because manual curation is an expensive and time-consuming endeavour. Past research has suggested that computer-assisted curation can improve efficiency, but few text-mining systems have been formally evaluated in this regard. Through participation in the interactive text-mining track of the BioCreative 2012 workshop, we developed PubTator, a PubMed-like system that assists with two specific human curation tasks: document triage and bioconcept annotation. On the basis of evaluation results from two external user groups, we find that the accuracy of PubTator-assisted curation is comparable with that of manual curation and that PubTator can significantly increase human curatorial speed. These encouraging findings warrant further investigation with a larger number of publications to be annotated. Database URL: http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/Demo/PubTator/ PMID:23160414
The Distinction Between Curative and Assistive Technology.
Stramondo, Joseph A
2018-05-01
Disability activists have sometimes claimed their disability has actually increased their well-being. Some even say they would reject a cure to keep these gains. Yet, these same activists often simultaneously propose improvements to the quality and accessibility of assistive technology. However, for any argument favoring assistive over curative technology (or vice versa) to work, there must be a coherent distinction between the two. This line is already vague and will become even less clear with the emergence of novel technologies. This paper asks and tries to answer the question: what is it about the paradigmatic examples of curative and assistive technologies that make them paradigmatic and how can these defining features help us clarify the hard cases? This analysis will begin with an argument that, while the common views of this distinction adequately explain the paradigmatic cases, they fail to accurately pick out the relevant features of those technologies that make them paradigmatic and to provide adequate guidance for parsing the hard cases. Instead, it will be claimed that these categories of curative or assistive technologies are defined by the role the technologies play in establishing a person's relational narrative identity as a member of one of two social groups: disabled people or non-disabled people.
Recommendations for Locus-Specific Databases and Their Curation
Cotton, R.G.H.; Auerbach, A.D.; Beckmann, J.S.; Blumenfeld, O.O.; Brookes, A.J.; Brown, A.F.; Carrera, P.; Cox, D.W.; Gottlieb, B.; Greenblatt, M.S.; Hilbert, P.; Lehvaslaiho, H.; Liang, P.; Marsh, S.; Nebert, D.W.; Povey, S.; Rossetti, S.; Scriver, C.R.; Summar, M.; Tolan, D.R.; Verma, I.C.; Vihinen, M.; den Dunnen, J.T.
2009-01-01
Expert curation and complete collection of mutations in genes that affect human health is essential for proper genetic healthcare and research. Expert curation is given by the curators of gene-specific mutation databases or locus-specific databases (LSDBs). While there are over 700 such databases, they vary in their content, completeness, time available for curation, and the expertise of the curator. Curation and LSDBs have been discussed, written about, and protocols have been provided for over 10 years, but there have been no formal recommendations for the ideal form of these entities. This work initiates a discussion on this topic to assist future efforts in human genetics. Further discussion is welcome. PMID:18157828
Recommendations for locus-specific databases and their curation.
Cotton, R G H; Auerbach, A D; Beckmann, J S; Blumenfeld, O O; Brookes, A J; Brown, A F; Carrera, P; Cox, D W; Gottlieb, B; Greenblatt, M S; Hilbert, P; Lehvaslaiho, H; Liang, P; Marsh, S; Nebert, D W; Povey, S; Rossetti, S; Scriver, C R; Summar, M; Tolan, D R; Verma, I C; Vihinen, M; den Dunnen, J T
2008-01-01
Expert curation and complete collection of mutations in genes that affect human health is essential for proper genetic healthcare and research. Expert curation is given by the curators of gene-specific mutation databases or locus-specific databases (LSDBs). While there are over 700 such databases, they vary in their content, completeness, time available for curation, and the expertise of the curator. Curation and LSDBs have been discussed, written about, and protocols have been provided for over 10 years, but there have been no formal recommendations for the ideal form of these entities. This work initiates a discussion on this topic to assist future efforts in human genetics. Further discussion is welcome. (c) 2007 Wiley-Liss, Inc.
The Role of Community-Driven Data Curation for Enterprises
NASA Astrophysics Data System (ADS)
Curry, Edward; Freitas, Andre; O'Riáin, Sean
With increased utilization of data within their operational and strategic processes, enterprises need to ensure data quality and accuracy. Data curation is a process that can ensure the quality of data and its fitness for use. Traditional approaches to curation are struggling with increased data volumes and near real-time demands for curated data. In response, curation teams have turned to community crowd-sourcing and semi-automated metadata tools for assistance. This chapter provides an overview of data curation, discusses the business motivations for curating data and investigates the role of community-based data curation, focusing on internal communities and pre-competitive data collaborations. The chapter is supported by case studies from Wikipedia, The New York Times, Thomson Reuters, Protein Data Bank and ChemSpider, upon which best practices for both social and technical aspects of community-driven data curation are described.
A Window to the World: Lessons Learned from NASA's Collaborative Metadata Curation Effort
NASA Astrophysics Data System (ADS)
Bugbee, K.; Dixon, V.; Baynes, K.; Shum, D.; le Roux, J.; Ramachandran, R.
2017-12-01
Well written descriptive metadata adds value to data by making data easier to discover as well as increases the use of data by providing the context or appropriateness of use. While many data centers acknowledge the importance of correct, consistent and complete metadata, allocating resources to curate existing metadata is often difficult. To lower resource costs, many data centers seek guidance on best practices for curating metadata but struggle to identify those recommendations. In order to assist data centers in curating metadata and to also develop best practices for creating and maintaining metadata, NASA has formed a collaborative effort to improve the Earth Observing System Data and Information System (EOSDIS) metadata in the Common Metadata Repository (CMR). This effort has taken significant steps in building consensus around metadata curation best practices. However, this effort has also revealed gaps in EOSDIS enterprise policies and procedures within the core metadata curation task. This presentation will explore the mechanisms used for building consensus on metadata curation, the gaps identified in policies and procedures, the lessons learned from collaborating with both the data centers and metadata curation teams, and the proposed next steps for the future.
Automating curation using a natural language processing pipeline
Alex, Beatrice; Grover, Claire; Haddow, Barry; Kabadjov, Mijail; Klein, Ewan; Matthews, Michael; Tobin, Richard; Wang, Xinglong
2008-01-01
Background: The tasks in BioCreative II were designed to approximate some of the laborious work involved in curating biomedical research papers. The approach to these tasks taken by the University of Edinburgh team was to adapt and extend the existing natural language processing (NLP) system that we have developed as part of a commercial curation assistant. Although this paper concentrates on using NLP to assist with curation, the system can be equally employed to extract types of information from the literature that is immediately relevant to biologists in general. Results: Our system was among the highest performing on the interaction subtasks, and competitive performance on the gene mention task was achieved with minimal development effort. For the gene normalization task, a string matching technique that can be quickly applied to new domains was shown to perform close to average. Conclusion: The technologies being developed were shown to be readily adapted to the BioCreative II tasks. Although high performance may be obtained on individual tasks such as gene mention recognition and normalization, and document classification, tasks in which a number of components must be combined, such as detection and normalization of interacting protein pairs, are still challenging for NLP systems. PMID:18834488
NASA Technical Reports Server (NTRS)
Zolensky, Michael; Nakamura-Messenger, Keiko; Fletcher, Lisa; See, Thomas
2008-01-01
We briefly describe some of the challenges to the Stardust mission, curation and sample preliminary analysis, from the perspective of the Curation Office at the Johnson Space Center. Our goal is to inform persons planning future sample returns, so that they may learn from both our successes and challenges (and avoid some of our mistakes). The Curation office played a role in the mission from its inception, most critically assisting in the design and implementation of the spacecraft contamination control plan, and in planning and documenting the recovery of the spacecraft reentry capsule in Utah. A unique class 100 cleanroom was built to maintain the returned comet and interstellar samples in clean comfort, and to permit dissection and allocation of samples for analysis.
On expert curation and scalability: UniProtKB/Swiss-Prot as a case study
Arighi, Cecilia N; Magrane, Michele; Bateman, Alex; Wei, Chih-Hsuan; Lu, Zhiyong; Boutet, Emmanuel; Bye-A-Jee, Hema; Famiglietti, Maria Livia; Roechert, Bernd; UniProt Consortium, The
2017-01-01
Motivation: Biological knowledgebases, such as UniProtKB/Swiss-Prot, constitute an essential component of daily scientific research by offering distilled, summarized and computable knowledge extracted from the literature by expert curators. While knowledgebases play an increasingly important role in the scientific community, their ability to keep up with the growth of biomedical literature is under scrutiny. Using UniProtKB/Swiss-Prot as a case study, we address this concern via multiple literature triage approaches. Results: With the assistance of the PubTator text-mining tool, we tagged more than 10 000 articles to assess the ratio of papers relevant for curation. We first show that curators read and evaluate many more papers than they curate, and that measuring the number of curated publications is insufficient to provide a complete picture, as demonstrated by the fact that 8000–10 000 papers are curated in UniProt each year while curators evaluate 50 000–70 000 papers per year. We show that 90% of the papers in PubMed are out of the scope of UniProt, that a maximum of 2–3% of the papers indexed in PubMed each year are relevant for UniProt curation, and that, despite appearances, expert curation in UniProt is scalable. Availability and implementation: UniProt is freely available at http://www.uniprot.org/. Contact: sylvain.poux@sib.swiss. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:29036270
Kim, Sun; Chatr-aryamontri, Andrew; Chang, Christie S.; Oughtred, Rose; Rust, Jennifer; Wilbur, W. John; Comeau, Donald C.; Dolinski, Kara; Tyers, Mike
2017-01-01
A great deal of information on the molecular genetics and biochemistry of model organisms has been reported in the scientific literature. However, this data is typically described in free text form and is not readily amenable to computational analyses. To this end, the BioGRID database systematically curates the biomedical literature for genetic and protein interaction data. This data is provided in a standardized computationally tractable format and includes structured annotation of experimental evidence. BioGRID curation necessarily involves substantial human effort by expert curators who must read each publication to extract the relevant information. Computational text-mining methods offer the potential to augment and accelerate manual curation. To facilitate the development of practical text-mining strategies, a new challenge was organized in BioCreative V for the BioC task, the collaborative Biocurator Assistant Task. This was a non-competitive, cooperative task in which the participants worked together to build BioC-compatible modules into an integrated pipeline to assist BioGRID curators. As an integral part of this task, a test collection of full text articles was developed that contained both biological entity annotations (gene/protein and organism/species) and molecular interaction annotations (protein–protein and genetic interactions (PPIs and GIs)). This collection, which we call the BioC-BioGRID corpus, was annotated by four BioGRID curators over three rounds of annotation and contains 120 full text articles curated in a dataset representing two major model organisms, namely budding yeast and human. The BioC-BioGRID corpus contains annotations for 6409 mentions of genes and their Entrez Gene IDs, 186 mentions of organism names and their NCBI Taxonomy IDs, 1867 mentions of PPIs and 701 annotations of PPI experimental evidence statements, 856 mentions of GIs and 399 annotations of GI evidence statements. The purpose, characteristics and possible future uses of the BioC-BioGRID corpus are detailed in this report. Database URL: http://bioc.sourceforge.net/BioC-BioGRID.html PMID:28077563
MET network in PubMed: a text-mined network visualization and curation system.
Dai, Hong-Jie; Su, Chu-Hsien; Lai, Po-Ting; Huang, Ming-Siang; Jonnagaddala, Jitendra; Rose Jue, Toni; Rao, Shruti; Chou, Hui-Jou; Milacic, Marija; Singh, Onkar; Syed-Abdul, Shabbir; Hsu, Wen-Lian
2016-01-01
Metastasis is the dissemination of a cancer/tumor from one organ to another, and it is the most dangerous stage during cancer progression, causing more than 90% of cancer deaths. Improving the understanding of the complicated cellular mechanisms underlying metastasis requires investigations of the signaling pathways. To this end, we developed a METastasis (MET) network visualization and curation tool to assist metastasis researchers in retrieving network information of interest while browsing through the large volume of studies in PubMed. MET can recognize relations among genes, cancers, tissues and organs of metastasis mentioned in the literature through text-mining techniques, and then produce a visualization of all mined relations in a metastasis network. To facilitate the curation process, MET is developed as a browser extension that allows curators to review and edit concepts and relations related to metastasis directly in PubMed. PubMed users can also view the metastatic networks integrated from the large collection of research papers directly through MET. For the BioCreative 2015 interactive track (IAT), a curation task was proposed to curate metastatic networks among PubMed abstracts. Six curators participated in the proposed task and a post-IAT task, curating 963 unique metastatic relations from 174 PubMed abstracts using MET. Database URL: http://btm.tmu.edu.tw/metastasisway. © The Author(s) 2016. Published by Oxford University Press.
Overview of the gene ontology task at BioCreative IV.
Mao, Yuqing; Van Auken, Kimberly; Li, Donghui; Arighi, Cecilia N; McQuilton, Peter; Hayman, G Thomas; Tweedie, Susan; Schaeffer, Mary L; Laulederkind, Stanley J F; Wang, Shur-Jen; Gobeill, Julien; Ruch, Patrick; Luu, Anh Tuan; Kim, Jung-Jae; Chiang, Jung-Hsien; Chen, Yu-De; Yang, Chia-Jung; Liu, Hongfang; Zhu, Dongqing; Li, Yanpeng; Yu, Hong; Emadzadeh, Ehsan; Gonzalez, Graciela; Chen, Jian-Ming; Dai, Hong-Jie; Lu, Zhiyong
2014-01-01
Gene ontology (GO) annotation is a common task among model organism databases (MODs) for capturing gene function data from journal articles. It is a time-consuming and labor-intensive task, and is thus often considered as one of the bottlenecks in literature curation. There is a growing need for semiautomated or fully automated GO curation techniques that will help database curators to rapidly and accurately identify gene function information in full-length articles. Despite multiple attempts in the past, few studies have proven to be useful with regard to assisting real-world GO curation. The shortage of sentence-level training data and opportunities for interaction between text-mining developers and GO curators has limited the advances in algorithm development and corresponding use in practical circumstances. To this end, we organized a text-mining challenge task for literature-based GO annotation in BioCreative IV. More specifically, we developed two subtasks: (i) to automatically locate text passages that contain GO-relevant information (a text retrieval task) and (ii) to automatically identify relevant GO terms for the genes in a given article (a concept-recognition task). With the support from five MODs, we provided teams with >4000 unique text passages that served as the basis for each GO annotation in our task data. Such evidence text information has long been recognized as critical for text-mining algorithm development but was never made available because of the high cost of curation. In total, seven teams participated in the challenge task. From the team results, we conclude that the state of the art in automatically mining GO terms from literature has improved over the past decade while much progress is still needed for computer-assisted GO curation. Future work should focus on addressing remaining technical challenges for improved performance of automatic GO concept recognition and incorporating practical benefits of text-mining tools into real-world GO annotation. http://www.biocreative.org/tasks/biocreative-iv/track-4-GO/. Published by Oxford University Press 2014. This work is written by US Government employees and is in the public domain in the US.
Gobeill, Julien; Pasche, Emilie; Vishnyakova, Dina; Ruch, Patrick
2013-01-01
The available curated data lag behind current biological knowledge contained in the literature. Text mining can assist biologists and curators to locate and access this knowledge, for instance by characterizing the functional profile of publications. Gene Ontology (GO) category assignment in free text already supports various applications, such as powering ontology-based search engines, finding curation-relevant articles (triage) or helping the curator to identify and encode functions. Popular text mining tools for GO classification are based on so-called thesaurus-based (or dictionary-based) approaches, which exploit similarities between the input text and GO terms themselves. But their effectiveness remains limited owing to the complex nature of GO terms, which rarely occur in text. In contrast, machine learning approaches exploit similarities between the input text and already curated instances contained in a knowledge base to infer a functional profile. GO Annotations (GOA) and MEDLINE make it possible to exploit a growing number of curated abstracts (97 000 in November 2012) for populating this knowledge base. Our study compares a state-of-the-art thesaurus-based system with a machine learning system (based on a k-Nearest Neighbours algorithm) for the task of proposing a functional profile for unseen MEDLINE abstracts, and shows how resources and performances have evolved. Systems are evaluated on their ability to propose for a given abstract the GO terms (2.8 on average) used for curation in GOA. We show that since 2006, although a massive effort was put into adding synonyms in GO (+300%), the effectiveness of our thesaurus-based system has remained nearly constant, improving only from 0.28 to 0.31 for Recall at 20 (R20). In contrast, thanks to the growth of its knowledge base, our machine learning system has steadily improved, from 0.38 in 2006 to 0.56 for R20 in 2012. Integrated into semi-automatic workflows or fully automatic pipelines, such systems are increasingly effective at assisting biologists. DATABASE URL: http://eagl.unige.ch/GOCat/
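The k-Nearest Neighbours strategy summarized above can be pictured with a small sketch: curated abstracts and their GO terms form the knowledge base, an unseen abstract is compared against them, and the GO terms of its nearest neighbours are ranked by vote. The snippet below is a minimal, hedged illustration only, not the GOCat implementation; the toy abstracts, GO identifiers and the propose_go_terms helper are invented for the example.

```python
# Minimal sketch of a k-nearest-neighbour functional profiling approach
# (illustrative only; not the GOCat system described above).
from collections import Counter
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

# Assumed toy knowledge base: (curated abstract text, GO terms assigned in GOA).
curated_abstracts = [
    ("Phosphorylation of the kinase activates the MAPK cascade.", ["GO:0000165"]),
    ("The transporter mediates glucose uptake across the membrane.", ["GO:0046323"]),
    ("Transcription factor binding regulates gene expression.", ["GO:0003700"]),
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform([text for text, _ in curated_abstracts])
knn = NearestNeighbors(n_neighbors=2, metric="cosine").fit(X)

def propose_go_terms(abstract, top_k=20):
    """Rank GO terms by how often they annotate the most similar curated abstracts."""
    _, idx = knn.kneighbors(vectorizer.transform([abstract]))
    votes = Counter(term for i in idx[0] for term in curated_abstracts[i][1])
    return [term for term, _ in votes.most_common(top_k)]

print(propose_go_terms("A kinase cascade is activated by phosphorylation."))
```

In a GOCat-style evaluation, Recall at 20 would then be the fraction of the GOA-assigned terms for an abstract that appear among the top 20 proposals.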
Overview of the interactive task in BioCreative V
Wang, Qinghua; S. Abdul, Shabbir; Almeida, Lara; Ananiadou, Sophia; Balderas-Martínez, Yalbi I.; Batista-Navarro, Riza; Campos, David; Chilton, Lucy; Chou, Hui-Jou; Contreras, Gabriela; Cooper, Laurel; Dai, Hong-Jie; Ferrell, Barbra; Fluck, Juliane; Gama-Castro, Socorro; George, Nancy; Gkoutos, Georgios; Irin, Afroza K.; Jensen, Lars J.; Jimenez, Silvia; Jue, Toni R.; Keseler, Ingrid; Madan, Sumit; Matos, Sérgio; McQuilton, Peter; Milacic, Marija; Mort, Matthew; Natarajan, Jeyakumar; Pafilis, Evangelos; Pereira, Emiliano; Rao, Shruti; Rinaldi, Fabio; Rothfels, Karen; Salgado, David; Silva, Raquel M.; Singh, Onkar; Stefancsik, Raymund; Su, Chu-Hsien; Subramani, Suresh; Tadepally, Hamsa D.; Tsaprouni, Loukia; Vasilevsky, Nicole; Wang, Xiaodong; Chatr-Aryamontri, Andrew; Laulederkind, Stanley J. F.; Matis-Mitchell, Sherri; McEntyre, Johanna; Orchard, Sandra; Pundir, Sangya; Rodriguez-Esteban, Raul; Van Auken, Kimberly; Lu, Zhiyong; Schaeffer, Mary; Wu, Cathy H.; Hirschman, Lynette; Arighi, Cecilia N.
2016-01-01
Fully automated text mining (TM) systems promote efficient literature searching, retrieval, and review but are not sufficient to produce ready-to-consume curated documents. These systems are not meant to replace biocurators, but instead to assist them in one or more literature curation steps. To do so, the user interface is an important aspect that needs to be considered for tool adoption. The BioCreative Interactive task (IAT) is a track designed for exploring user-system interactions, promoting development of useful TM tools, and providing a communication channel between the biocuration and the TM communities. In BioCreative V, the IAT track followed a format similar to previous interactive tracks, where the utility and usability of TM tools, as well as the generation of use cases, have been the focal points. The proposed curation tasks are user-centric and formally evaluated by biocurators. In BioCreative V IAT, seven TM systems and 43 biocurators participated. Two levels of user participation were offered to broaden curator involvement and obtain more feedback on usability aspects. The full level participation involved training on the system, curation of a set of documents with and without TM assistance, tracking of time-on-task, and completion of a user survey. The partial level participation was designed to focus on usability aspects of the interface and not the performance per se. In this case, biocurators navigated the system by performing pre-designed tasks and then were asked whether they were able to achieve the task and the level of difficulty in completing the task. In this manuscript, we describe the development of the interactive task, from planning to execution and discuss major findings for the systems tested. Database URL: http://www.biocreative.org PMID:27589961
Overview of the interactive task in BioCreative V
Wang, Qinghua; Abdul, Shabbir S.; Almeida, Lara; ...
2016-09-01
Fully automated text mining (TM) systems promote efficient literature searching, retrieval, and review but are not sufficient to produce ready-to-consume curated documents. These systems are not meant to replace biocurators, but instead to assist them in one or more literature curation steps. To do so, the user interface is an important aspect that needs to be considered for tool adoption. The BioCreative Interactive task (IAT) is a track designed for exploring user-system interactions, promoting development of useful TM tools, and providing a communication channel between the biocuration and the TM communities. In BioCreative V, the IAT track followed a format similar to previous interactive tracks, where the utility and usability of TM tools, as well as the generation of use cases, have been the focal points. The proposed curation tasks are user-centric and formally evaluated by biocurators. In BioCreative V IAT, seven TM systems and 43 biocurators participated. Two levels of user participation were offered to broaden curator involvement and obtain more feedback on usability aspects. The full level participation involved training on the system, curation of a set of documents with and without TM assistance, tracking of time-on-task, and completion of a user survey. The partial level participation was designed to focus on usability aspects of the interface and not the performance per se. In this case, biocurators navigated the system by performing pre-designed tasks and then were asked whether they were able to achieve the task and the level of difficulty in completing the task. In this manuscript, we describe the development of the interactive task, from planning to execution and discuss major findings for the systems tested.
Rocca-Serra, Philippe; Brandizi, Marco; Maguire, Eamonn; Sklyar, Nataliya; Taylor, Chris; Begley, Kimberly; Field, Dawn; Harris, Stephen; Hide, Winston; Hofmann, Oliver; Neumann, Steffen; Sterk, Peter; Tong, Weida; Sansone, Susanna-Assunta
2010-01-01
Summary: The first open source software suite for experimentalists and curators that (i) assists in the annotation and local management of experimental metadata from high-throughput studies employing one or a combination of omics and other technologies; (ii) empowers users to uptake community-defined checklists and ontologies; and (iii) facilitates submission to international public repositories. Availability and Implementation: Software, documentation, case studies and implementations at http://www.isa-tools.org Contact: isatools@googlegroups.com PMID:20679334
Sakaguchi, Masazumi; Kan, Takatsugu; Tsubono, Michihiko; Kii, Eiji
2014-04-01
Here we report 2 cases of curative resection following preoperative chemotherapy with bevacizumab for locally advanced colon cancer. Case 1 was a 62-year-old man admitted with constipation, abdominal distention, and abdominal pain. An abdominal computed tomography (CT) scan revealed an obstructive tumor of the sigmoid colon with invasion into the bladder. A diverting colostomy was performed, and chemotherapy with mFOLFOX6 (infusional 5-fluorouracil/leucovorin + oxaliplatin) plus bevacizumab was initiated. The tumor shrank markedly after 6 courses of this treatment. Thereafter, laparoscopy-assisted sigmoidectomy was successfully performed. Case 2 was a 61-year-old woman admitted with diarrhea, abdominal pain, and fever. An abdominal CT scan revealed an obstructive tumor of the sigmoid colon with invasion into the ileum, uterus and retroperitoneum. A diverting colostomy was performed, and chemotherapy with XELOX (capecitabine + oxaliplatin) plus bevacizumab was initiated. The tumor shrank markedly after 6 courses of this treatment. Thereafter, laparoscopy-assisted sigmoidectomy was successfully performed. Both cases demonstrated partial clinical responses to chemotherapy; thus, curative resection surgeries were performed. There were no perioperative complications. Therefore, we conclude that oxaliplatin-based chemotherapy plus bevacizumab and laparoscopic resection could be very effective for locally advanced colon cancer.
Overview of the interactive task in BioCreative V.
Wang, Qinghua; S Abdul, Shabbir; Almeida, Lara; Ananiadou, Sophia; Balderas-Martínez, Yalbi I; Batista-Navarro, Riza; Campos, David; Chilton, Lucy; Chou, Hui-Jou; Contreras, Gabriela; Cooper, Laurel; Dai, Hong-Jie; Ferrell, Barbra; Fluck, Juliane; Gama-Castro, Socorro; George, Nancy; Gkoutos, Georgios; Irin, Afroza K; Jensen, Lars J; Jimenez, Silvia; Jue, Toni R; Keseler, Ingrid; Madan, Sumit; Matos, Sérgio; McQuilton, Peter; Milacic, Marija; Mort, Matthew; Natarajan, Jeyakumar; Pafilis, Evangelos; Pereira, Emiliano; Rao, Shruti; Rinaldi, Fabio; Rothfels, Karen; Salgado, David; Silva, Raquel M; Singh, Onkar; Stefancsik, Raymund; Su, Chu-Hsien; Subramani, Suresh; Tadepally, Hamsa D; Tsaprouni, Loukia; Vasilevsky, Nicole; Wang, Xiaodong; Chatr-Aryamontri, Andrew; Laulederkind, Stanley J F; Matis-Mitchell, Sherri; McEntyre, Johanna; Orchard, Sandra; Pundir, Sangya; Rodriguez-Esteban, Raul; Van Auken, Kimberly; Lu, Zhiyong; Schaeffer, Mary; Wu, Cathy H; Hirschman, Lynette; Arighi, Cecilia N
2016-01-01
Fully automated text mining (TM) systems promote efficient literature searching, retrieval, and review but are not sufficient to produce ready-to-consume curated documents. These systems are not meant to replace biocurators, but instead to assist them in one or more literature curation steps. To do so, the user interface is an important aspect that needs to be considered for tool adoption. The BioCreative Interactive task (IAT) is a track designed for exploring user-system interactions, promoting development of useful TM tools, and providing a communication channel between the biocuration and the TM communities. In BioCreative V, the IAT track followed a format similar to previous interactive tracks, where the utility and usability of TM tools, as well as the generation of use cases, have been the focal points. The proposed curation tasks are user-centric and formally evaluated by biocurators. In BioCreative V IAT, seven TM systems and 43 biocurators participated. Two levels of user participation were offered to broaden curator involvement and obtain more feedback on usability aspects. The full level participation involved training on the system, curation of a set of documents with and without TM assistance, tracking of time-on-task, and completion of a user survey. The partial level participation was designed to focus on usability aspects of the interface and not the performance per se. In this case, biocurators navigated the system by performing pre-designed tasks and then were asked whether they were able to achieve the task and the level of difficulty in completing the task. In this manuscript, we describe the development of the interactive task, from planning to execution and discuss major findings for the systems tested. Database URL: http://www.biocreative.org. Published by Oxford University Press 2016. This work is written by US Government employees and is in the public domain in the US.
Research Data Management Self-Education for Librarians: A Webliography
ERIC Educational Resources Information Center
Goben, Abigail; Raszewski, Rebecca
2015-01-01
As data as a scholarly object continues to grow in importance in the research community, librarians are undertaking increasing responsibilities regarding data management and curation. New library initiatives include assisting researchers in finding data sets for reuse; locating and hosting repositories for required archiving; consultations on…
Pafilis, Evangelos; Buttigieg, Pier Luigi; Ferrell, Barbra; Pereira, Emiliano; Schnetzer, Julia; Arvanitidis, Christos; Jensen, Lars Juhl
2016-01-01
The microbial and molecular ecology research communities have made substantial progress on developing standards for annotating samples with environment metadata. However, sample manual annotation is a highly labor intensive process and requires familiarity with the terminologies used. We have therefore developed an interactive annotation tool, EXTRACT, which helps curators identify and extract standard-compliant terms for annotation of metagenomic records and other samples. Behind its web-based user interface, the system combines published methods for named entity recognition of environment, organism, tissue and disease terms. The evaluators in the BioCreative V Interactive Annotation Task found the system to be intuitive, useful, well documented and sufficiently accurate to be helpful in spotting relevant text passages and extracting organism and environment terms. Comparison of fully manual and text-mining-assisted curation revealed that EXTRACT speeds up annotation by 15-25% and helps curators to detect terms that would otherwise have been missed. Database URL: https://extract.hcmr.gr/. © The Author(s) 2016. Published by Oxford University Press.
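As a rough illustration of the dictionary-style named entity recognition that tools of this kind combine behind their interfaces, the sketch below tags environment, organism and tissue mentions against a tiny term dictionary. It is a hypothetical example, not the EXTRACT code; the term list, ontology identifiers and the tag_terms function are made up for illustration.

```python
# Minimal sketch of dictionary-based term tagging (illustrative only; not EXTRACT).
import re

# Assumed toy dictionary mapping surface forms to ontology identifiers and types.
TERM_DICTIONARY = {
    "seawater": ("ENVO:00002149", "environment"),
    "hydrothermal vent": ("ENVO:00000215", "environment"),
    "Escherichia coli": ("NCBITaxon:562", "organism"),
    "liver": ("BTO:0000759", "tissue"),
}

def tag_terms(text):
    """Return (surface form, identifier, type, start offset) for each dictionary hit."""
    hits = []
    for surface, (ident, etype) in TERM_DICTIONARY.items():
        for match in re.finditer(re.escape(surface), text, flags=re.IGNORECASE):
            hits.append((match.group(0), ident, etype, match.start()))
    return sorted(hits, key=lambda h: h[3])

sample = "Samples of seawater were collected near a hydrothermal vent and screened for Escherichia coli."
for hit in tag_terms(sample):
    print(hit)
```

A curation interface would then present such hits to the curator for acceptance or correction rather than writing them directly into the record.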
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pafilis, Evangelos; Buttigieg, Pier Luigi; Ferrell, Barbra
The microbial and molecular ecology research communities have made substantial progress on developing standards for annotating samples with environment metadata. However, sample manual annotation is a highly labor intensive process and requires familiarity with the terminologies used. We have therefore developed an interactive annotation tool, EXTRACT, which helps curators identify and extract standard-compliant terms for annotation of metagenomic records and other samples. Behind its web-based user interface, the system combines published methods for named entity recognition of environment, organism, tissue and disease terms. The evaluators in the BioCreative V Interactive Annotation Task found the system to be intuitive, useful, well documented and sufficiently accurate to be helpful in spotting relevant text passages and extracting organism and environment terms. Comparison of fully manual and text-mining-assisted curation revealed that EXTRACT speeds up annotation by 15–25% and helps curators to detect terms that would otherwise have been missed.
The BioCyc collection of microbial genomes and metabolic pathways.
Karp, Peter D; Billington, Richard; Caspi, Ron; Fulcher, Carol A; Latendresse, Mario; Kothari, Anamika; Keseler, Ingrid M; Krummenacker, Markus; Midford, Peter E; Ong, Quang; Ong, Wai Kit; Paley, Suzanne M; Subhraveti, Pallavi
2017-08-17
BioCyc.org is a microbial genome Web portal that combines thousands of genomes with additional information inferred by computer programs, imported from other databases and curated from the biomedical literature by biologist curators. BioCyc also provides an extensive range of query tools, visualization services and analysis software. Recent advances in BioCyc include an expansion in the content of BioCyc in terms of both the number of genomes and the types of information available for each genome; an expansion in the amount of curated content within BioCyc; and new developments in the BioCyc software tools including redesigned gene/protein pages and metabolite pages; new search tools; a new sequence-alignment tool; a new tool for visualizing groups of related metabolic pathways; and a facility called SmartTables, which enables biologists to perform analyses that previously would have required a programmer's assistance. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Pafilis, Evangelos; Buttigieg, Pier Luigi; Ferrell, Barbra; ...
2016-01-01
The microbial and molecular ecology research communities have made substantial progress on developing standards for annotating samples with environment metadata. However, sample manual annotation is a highly labor intensive process and requires familiarity with the terminologies used. We have therefore developed an interactive annotation tool, EXTRACT, which helps curators identify and extract standard-compliant terms for annotation of metagenomic records and other samples. Behind its web-based user interface, the system combines published methods for named entity recognition of environment, organism, tissue and disease terms. The evaluators in the BioCreative V Interactive Annotation Task found the system to be intuitive, useful, well documented and sufficiently accurate to be helpful in spotting relevant text passages and extracting organism and environment terms. Comparison of fully manual and text-mining-assisted curation revealed that EXTRACT speeds up annotation by 15–25% and helps curators to detect terms that would otherwise have been missed.
Yagub, Abdallah I A; Mtshali, Khondlo
2015-09-01
Conflict in North Darfur state, Western Sudan, started in 2003, and delivering curative health services became a growing challenge for the country's limited resources. NGOs have played an important role in providing curative health services. This study examines the role that Non-Governmental Organizations (NGOs) have played in providing curative health services, and identifies the difficulties and challenges that affect NGOs in delivering those services. Secondary data were collected from different sources, including government offices and medical organizations in Sudan and in North Darfur state. Primary data were obtained through interviews with government and NGO representatives. The interviews were conducted with (1) expatriates working for international NGOs (N=15) and (2) health professionals and administrators working in the health sector (N=45) in the period from November 2010 to January 2011. The government in North Darfur state spent 70% of its financial budget on security, while it spent less than 1% on providing health services. The international NGOs have been providing 70% of curative health services to the state's population by contributing 52.9% of the health budget and 1 390 health personnel. Since 2003, NGOs have provided technical assistance to the health staff. As a result, more than fifty nurses have been trained to provide care and treatment, more than twenty-three doctors have been trained in laboratory equipment operation, and approximately six senior doctors and hospital directors have received management training. NGOs have been managing and supporting 89 public health facilities, and have established 24 health centres in IDP camps and 20 health centres across all the districts in North Darfur state. The NGOs have played an important role in providing curative health services and in establishing good health facilities, but a future problem is how the government will run these health facilities after a peaceful settlement is reached, which might cause NGOs to leave the region.
Winslow, Ksenia; Ho, Andrew; Fortney, Kristen; Morgen, Eric
2017-01-01
Biomarkers of all-cause mortality are of tremendous clinical and research interest. Because of the long potential duration of prospective human lifespan studies, such biomarkers can play a key role in quantifying human aging and quickly evaluating any potential therapies. Decades of research into mortality biomarkers have resulted in numerous associations documented across hundreds of publications. Here, we present MortalityPredictors.org, a manually-curated, publicly accessible database, housing published, statistically-significant relationships between biomarkers and all-cause mortality in population-based or generally healthy samples. To gather the information for this database, we searched PubMed for appropriate research papers and then manually curated relevant data from each paper. We manually curated 1,576 biomarker associations, involving 471 distinct biomarkers. Biomarkers ranged in type from hematologic (red blood cell distribution width) to molecular (DNA methylation changes) to physical (grip strength). Via the web interface, the resulting data can be easily browsed, searched, and downloaded for further analysis. MortalityPredictors.org provides comprehensive results on published biomarkers of human all-cause mortality that can be used to compare biomarkers, facilitate meta-analysis, assist with the experimental design of aging studies, and serve as a central resource for analysis. We hope that it will facilitate future research into human mortality and aging. PMID:28858850
Peto, Maximus V; De la Guardia, Carlos; Winslow, Ksenia; Ho, Andrew; Fortney, Kristen; Morgen, Eric
2017-08-31
Biomarkers of all-cause mortality are of tremendous clinical and research interest. Because of the long potential duration of prospective human lifespan studies, such biomarkers can play a key role in quantifying human aging and quickly evaluating any potential therapies. Decades of research into mortality biomarkers have resulted in numerous associations documented across hundreds of publications. Here, we present MortalityPredictors.org, a manually-curated, publicly accessible database, housing published, statistically-significant relationships between biomarkers and all-cause mortality in population-based or generally healthy samples. To gather the information for this database, we searched PubMed for appropriate research papers and then manually curated relevant data from each paper. We manually curated 1,576 biomarker associations, involving 471 distinct biomarkers. Biomarkers ranged in type from hematologic (red blood cell distribution width) to molecular (DNA methylation changes) to physical (grip strength). Via the web interface, the resulting data can be easily browsed, searched, and downloaded for further analysis. MortalityPredictors.org provides comprehensive results on published biomarkers of human all-cause mortality that can be used to compare biomarkers, facilitate meta-analysis, assist with the experimental design of aging studies, and serve as a central resource for analysis. We hope that it will facilitate future research into human mortality and aging.
An overview of the BioCreative 2012 Workshop Track III: interactive text mining task
Arighi, Cecilia N.; Carterette, Ben; Cohen, K. Bretonnel; Krallinger, Martin; Wilbur, W. John; Fey, Petra; Dodson, Robert; Cooper, Laurel; Van Slyke, Ceri E.; Dahdul, Wasila; Mabee, Paula; Li, Donghui; Harris, Bethany; Gillespie, Marc; Jimenez, Silvia; Roberts, Phoebe; Matthews, Lisa; Becker, Kevin; Drabkin, Harold; Bello, Susan; Licata, Luana; Chatr-aryamontri, Andrew; Schaeffer, Mary L.; Park, Julie; Haendel, Melissa; Van Auken, Kimberly; Li, Yuling; Chan, Juancarlos; Muller, Hans-Michael; Cui, Hong; Balhoff, James P.; Chi-Yang Wu, Johnny; Lu, Zhiyong; Wei, Chih-Hsuan; Tudor, Catalina O.; Raja, Kalpana; Subramani, Suresh; Natarajan, Jeyakumar; Cejuela, Juan Miguel; Dubey, Pratibha; Wu, Cathy
2013-01-01
In many databases, biocuration primarily involves literature curation, which usually involves retrieving relevant articles, extracting information that will translate into annotations and identifying new incoming literature. As the volume of biological literature increases, the use of text mining to assist in biocuration becomes increasingly relevant. A number of groups have developed tools for text mining from a computer science/linguistics perspective, and there are many initiatives to curate some aspect of biology from the literature. Some biocuration efforts already make use of a text mining tool, but there have not been many broad-based systematic efforts to study which aspects of a text mining tool contribute to its usefulness for a curation task. Here, we report on an effort to bring together text mining tool developers and database biocurators to test the utility and usability of tools. Six text mining systems presenting diverse biocuration tasks participated in a formal evaluation, and appropriate biocurators were recruited for testing. The performance results from this evaluation indicate that some of the systems were able to improve efficiency of curation by speeding up the curation task significantly (∼1.7- to 2.5-fold) over manual curation. In addition, some of the systems were able to improve annotation accuracy when compared with the performance on the manually curated set. In terms of inter-annotator agreement, the factors that contributed to significant differences for some of the systems included the expertise of the biocurator on the given curation task, the inherent difficulty of the curation and attention to annotation guidelines. After the task, annotators were asked to complete a survey to help identify strengths and weaknesses of the various systems. The analysis of this survey highlights how important task completion is to the biocurators’ overall experience of a system, regardless of the system’s high score on design, learnability and usability. In addition, strategies to refine the annotation guidelines and systems documentation, to adapt the tools to the needs and query types the end user might have and to evaluate performance in terms of efficiency, user interface, result export and traditional evaluation metrics have been analyzed during this task. This analysis will help to plan for a more intense study in BioCreative IV. PMID:23327936
An overview of the BioCreative 2012 Workshop Track III: interactive text mining task.
Arighi, Cecilia N; Carterette, Ben; Cohen, K Bretonnel; Krallinger, Martin; Wilbur, W John; Fey, Petra; Dodson, Robert; Cooper, Laurel; Van Slyke, Ceri E; Dahdul, Wasila; Mabee, Paula; Li, Donghui; Harris, Bethany; Gillespie, Marc; Jimenez, Silvia; Roberts, Phoebe; Matthews, Lisa; Becker, Kevin; Drabkin, Harold; Bello, Susan; Licata, Luana; Chatr-aryamontri, Andrew; Schaeffer, Mary L; Park, Julie; Haendel, Melissa; Van Auken, Kimberly; Li, Yuling; Chan, Juancarlos; Muller, Hans-Michael; Cui, Hong; Balhoff, James P; Chi-Yang Wu, Johnny; Lu, Zhiyong; Wei, Chih-Hsuan; Tudor, Catalina O; Raja, Kalpana; Subramani, Suresh; Natarajan, Jeyakumar; Cejuela, Juan Miguel; Dubey, Pratibha; Wu, Cathy
2013-01-01
In many databases, biocuration primarily involves literature curation, which usually involves retrieving relevant articles, extracting information that will translate into annotations and identifying new incoming literature. As the volume of biological literature increases, the use of text mining to assist in biocuration becomes increasingly relevant. A number of groups have developed tools for text mining from a computer science/linguistics perspective, and there are many initiatives to curate some aspect of biology from the literature. Some biocuration efforts already make use of a text mining tool, but there have not been many broad-based systematic efforts to study which aspects of a text mining tool contribute to its usefulness for a curation task. Here, we report on an effort to bring together text mining tool developers and database biocurators to test the utility and usability of tools. Six text mining systems presenting diverse biocuration tasks participated in a formal evaluation, and appropriate biocurators were recruited for testing. The performance results from this evaluation indicate that some of the systems were able to improve efficiency of curation by speeding up the curation task significantly (∼1.7- to 2.5-fold) over manual curation. In addition, some of the systems were able to improve annotation accuracy when compared with the performance on the manually curated set. In terms of inter-annotator agreement, the factors that contributed to significant differences for some of the systems included the expertise of the biocurator on the given curation task, the inherent difficulty of the curation and attention to annotation guidelines. After the task, annotators were asked to complete a survey to help identify strengths and weaknesses of the various systems. The analysis of this survey highlights how important task completion is to the biocurators' overall experience of a system, regardless of the system's high score on design, learnability and usability. In addition, strategies to refine the annotation guidelines and systems documentation, to adapt the tools to the needs and query types the end user might have and to evaluate performance in terms of efficiency, user interface, result export and traditional evaluation metrics have been analyzed during this task. This analysis will help to plan for a more intense study in BioCreative IV.
Murakami, Keiko; Aida, Jun; Ohkubo, Takayoshi; Hashimoto, Hideki
2014-09-19
Preventive dental care use remains relatively low in Japan, especially among working-age adults. Universal health insurance in Japan covers curative dental care with an out-of-pocket payment limit, though its coverage of preventive dental care is limited. The aim of this study was to test the hypothesis that income inequality in dental care use is found in preventive, but not curative dental care among working-age Japanese adults. A cross-sectional survey was conducted using a computer-assisted, self-administered format for community residents aged 25-50 years. In all, 4357 residents agreed to participate and complete the questionnaire (valid response rate: 31.3%). Preventive dental care use was measured according to whether the participant had visited a dentist or a dental hygienist during the past year for dental scaling or fluoride or orthodontic treatments. Curative dental care use was assessed by dental visits for other reasons. The main explanatory variable was equivalent household income. Logistic regression analyses with linear trend tests were conducted to determine whether there were significant income-related gradients with curative or preventive dental care use. Among the respondents, 40.0% of men and 41.5% of women had used curative dental care in the past year; 24.1% of men and 34.1% of women had used preventive care. We found no significant income-related gradients of curative dental care among either men or women (p = 0.234 and p = 0.270, respectively). Significant income-related gradients of preventive care were observed among both men and women (p < 0.001 and p = 0.003, respectively). Among women, however, income-related differences were no longer significant (p = 0.126) after adjusting for education and other covariates. Compared with men with the lowest income, the highest-income group had a 1.79-fold significantly higher probability for using preventive dental care. The prevalence of preventive dental care use was lower than that of curative care. The results showed income-related inequality in preventive dental care use among men, though there were no significant income-related gradients of curative dental care use among either men or women. Educational attainment had a positive association with preventive dental care use only among women.
USDA-ARS?s Scientific Manuscript database
Ongoing regulatory changes are eliminating or restricting the use of broad-spectrum insecticides in fruit crops in the USA, and current IPM programs for plum curculio, Conotrachelus nenuphar (Herbst), in highbush blueberries, Vaccinium corymbosum L, need to address these changes. To assist in this ...
Wilbur, W. John
2012-01-01
The Comparative Toxicogenomics Database (CTD) contains manually curated literature that describes chemical–gene interactions, chemical–disease relationships and gene–disease relationships. Finding articles containing this information is the first and an important step to assist manual curation efficiency. However, the complex nature of named entities and their relationships make it challenging to choose relevant articles. In this article, we introduce a machine learning framework for prioritizing CTD-relevant articles based on our prior system for the protein–protein interaction article classification task in BioCreative III. To address new challenges in the CTD task, we explore a new entity identification method for genes, chemicals and diseases. In addition, latent topics are analyzed and used as a feature type to overcome the small size of the training set. Applied to the BioCreative 2012 Triage dataset, our method achieved 0.8030 mean average precision (MAP) in the official runs, resulting in the top MAP system among participants. Integrated with PubTator, a Web interface for annotating biomedical literature, the proposed system also received a positive review from the CTD curation team. PMID:23160415
Kim, Sun; Kim, Won; Wei, Chih-Hsuan; Lu, Zhiyong; Wilbur, W John
2012-01-01
The Comparative Toxicogenomics Database (CTD) contains manually curated literature that describes chemical-gene interactions, chemical-disease relationships and gene-disease relationships. Finding articles containing this information is the first and an important step to assist manual curation efficiency. However, the complex nature of named entities and their relationships make it challenging to choose relevant articles. In this article, we introduce a machine learning framework for prioritizing CTD-relevant articles based on our prior system for the protein-protein interaction article classification task in BioCreative III. To address new challenges in the CTD task, we explore a new entity identification method for genes, chemicals and diseases. In addition, latent topics are analyzed and used as a feature type to overcome the small size of the training set. Applied to the BioCreative 2012 Triage dataset, our method achieved 0.8030 mean average precision (MAP) in the official runs, resulting in the top MAP system among participants. Integrated with PubTator, a Web interface for annotating biomedical literature, the proposed system also received a positive review from the CTD curation team.
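The mean average precision (MAP) figure quoted above is a standard ranking metric: for each topic, precision is averaged over the ranks at which relevant articles appear, and those per-topic scores are then averaged. A minimal sketch of that computation follows; the function names and toy PubMed identifiers are illustrative assumptions, not the authors' evaluation code.

```python
# Minimal sketch of mean average precision (MAP) for a ranked triage run
# (illustrative only; not the BioCreative evaluation script).

def average_precision(ranked_ids, relevant_ids):
    """Average precision for one query: mean of precision at each relevant hit."""
    hits, precisions = 0, []
    for rank, doc_id in enumerate(ranked_ids, start=1):
        if doc_id in relevant_ids:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(relevant_ids) if relevant_ids else 0.0

def mean_average_precision(runs):
    """`runs` is a list of (ranked_ids, relevant_ids) pairs, one per query/topic."""
    return sum(average_precision(r, rel) for r, rel in runs) / len(runs)

# Toy example: two ranked lists of article IDs with their relevant sets.
print(mean_average_precision([
    (["p1", "p2", "p3"], {"p1", "p3"}),
    (["p4", "p5", "p6"], {"p5"}),
]))
```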
Rep. Barrow, John [D-GA-12
2010-05-12
Senate - 12/14/2010: Placed on Senate Legislative Calendar under General Orders. Calendar No. 694. Status: Passed House.
The Role of GIS and Data Librarians in Cyber-infrastructure Support and Governance
NASA Astrophysics Data System (ADS)
Branch, B. D.
2012-12-01
A governance road-map for cyber-infrastructure in the geosciences will include an intentionally developed librarian core with technical skills, including GIS and open-source tools, to support data curation across all aspects of data life-cycle management. Under Executive Order 12906 and other policy, spatial data, spatial literacy, and curation are critical cyber-infrastructure needs in the near future. A formal earth science and space informatics librarian may be an outcome of such development. From e-science to e-research, STEM pipelines need librarians as critical data intermediaries who provide technical assistance and collaborate on scientists' data and outreach needs. Future training should build the trans-disciplinary data science and policy skills necessary for data management support and procurement.
Kimura, Kei; Kagawa, Yoshinori; Kato, Takeshi; Ishida, Tomo; Morimoto, Yoshihiro; Matusita, Katsunori; Kusama, Hiroki; Hashimoto, Tadayoshi; Katura, Yoshiteru; Nitta, Kanae; Takeno, Atushi; Nakahira, Shin; Okishiro, Masatsugu; Sakisaka, Hideki; Taniguchi, Hirokazu; Egawa, Chiyomi; Takeda, Yutaka; Tamura, Shigeyuki
2014-11-01
A 64-year-old woman with locally advanced rectal cancer, which had invaded the vagina, was referred to our hospital. She was administered neoadjuvant chemotherapy to reduce the tumor size. After 4 courses of chemotherapy consisting of folinic acid, fluorouracil, and oxaliplatin (mFOLFOX6), an enhanced computed tomography (CT) scan and magnetic resonance imaging (MRI) indicated marked tumor shrinkage. We performed a laparoscopically assisted low anterior resection, which included total mesorectal resection, resection of the vaginal posterior wall, and right lateral lymph node resection. The chemotherapy prevented us from having to create a permanent colostomy. The efficacy of the neoadjuvant chemotherapy was Grade 1b. In summary, we report a case in which neoadjuvant chemotherapy was followed by curative resection.
Jiang, Xiangying; Ringwald, Martin; Blake, Judith; Shatkay, Hagit
2017-01-01
The Gene Expression Database (GXD) is a comprehensive online database within the Mouse Genome Informatics resource, aiming to provide available information about endogenous gene expression during mouse development. The information stems primarily from many thousands of biomedical publications that database curators must go through and read. Given the very large number of biomedical papers published each year, automatic document classification plays an important role in biomedical research. Specifically, an effective and efficient document classifier is needed for supporting the GXD annotation workflow. We present here an effective yet relatively simple classification scheme, which uses readily available tools while employing feature selection, aiming to assist curators in identifying publications relevant to GXD. We examine the performance of our method over a large manually curated dataset, consisting of more than 25 000 PubMed abstracts, of which about half are curated as relevant to GXD while the other half as irrelevant to GXD. In addition to text from title-and-abstract, we also consider image captions, an important information source that we integrate into our method. We apply a captions-based classifier to a subset of about 3300 documents, for which the full text of the curated articles is available. The results demonstrate that our proposed approach is robust and effectively addresses the GXD document classification. Moreover, using information obtained from image captions clearly improves performance, compared to title and abstract alone, affirming the utility of image captions as a substantial evidence source for automatically determining the relevance of biomedical publications to a specific subject area. www.informatics.jax.org. © The Author(s) 2017. Published by Oxford University Press.
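A simple way to picture the approach described above is a bag-of-words classifier trained on title/abstract text concatenated with figure-caption text. The sketch below is only a schematic stand-in for the authors' pipeline: the toy documents and labels are invented, and the scikit-learn components are one plausible choice rather than the tools actually used.

```python
# Minimal sketch of a relevance classifier combining abstract and caption text
# (illustrative only; not the GXD triage pipeline described above).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy records: title/abstract concatenated with image captions; label 1 = relevant.
documents = [
    "In situ hybridization shows Shh expression in the developing limb bud. Figure 1: E10.5 forelimb sections.",
    "We report a crystal structure of a bacterial protease. Figure 1: electron density map.",
    "Whole-mount staining reveals Pax6 expression in the embryonic eye. Figure 2: lens placode at E9.5.",
    "A survey of hospital readmission rates across regions. Figure 1: readmissions by state.",
]
labels = [1, 0, 1, 0]

classifier = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), stop_words="english"),
    LogisticRegression(max_iter=1000),
)
classifier.fit(documents, labels)

new_doc = "Section in situ hybridization of Hoxd13 in embryonic mouse kidney. Figure 3: E12.5 metanephros."
print(classifier.predict([new_doc]), classifier.predict_proba([new_doc]))
```

In practice, feature selection and separate weighting of the caption-derived features, as the abstract suggests, would be layered on top of a baseline like this.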
Harada, Hiroaki; Miyamoto, Kazuaki; Yamashita, Yoshinori; Taniyama, Kiyomi; Mihara, Kazuko; Nishimura, Mitsuki; Okada, Morihito
2015-10-01
Although curative resection is the current treatment of choice for localized non-small-cell lung cancer (NSCLC), patients show a wide spectrum of survival even after complete resection of pathological stage I NSCLC. Thus, identifying molecular biomarkers that help to accurately select patients at high risk of relapse is an important key to improving the treatment strategy. The purpose of this study was to evaluate the prognostic signature of protocadherin 10 (PCDH10) promoter methylation in curatively resected pathological stage I NSCLC. Using methylation-specific polymerase chain reaction assays, methylation of the PCDH10 promoter was assessed in cancer tissues of 109 patients who underwent curative resection of pathological stage I NSCLC. Associations between PCDH10 methylation status and disease outcome were analyzed. PCDH10 promoter methylation was detected in 46/109 patients (42.2%). Patients with methylated PCDH10 showed significantly worse recurrence-free, overall, and disease-specific survival compared with those without methylation (P < 0.0001, P = 0.0004, P = 0.0002, respectively). Multivariate Cox proportional hazard regression analysis revealed that adjusted hazard ratios of methylated PCDH10 were 5.159 for recurrence-free, 1.817 for overall, and 5.478 for disease-specific survival (P = 0.0005, P = 0.1475, P = 0.0109, respectively). The pattern of recurrence was not significantly different between patients with and without PCDH10 methylation (P = 0.5074). PCDH10 methylation is a potential biomarker that predicts a poor prognosis after curative resection of pathological stage I NSCLC. Assessment of PCDH10 methylation status might assist in patient stratification for determining an appropriate adjuvant treatment and follow-up strategy. © 2015 The Authors. Cancer Medicine published by John Wiley & Sons Ltd.
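As a rough illustration of the kind of group comparison reported above (survival by methylation status), the sketch below uses the lifelines package and a log-rank test; the column names and toy numbers are hypothetical and are not the study's data.

```python
# Minimal illustration (not the authors' analysis code): compare survival
# between methylated and unmethylated groups with a log-rank test, assuming
# pandas and the `lifelines` package. Values are toy data.
import pandas as pd
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "months":     [6, 10, 14, 20, 30, 45, 50, 60],   # follow-up time
    "recurrence": [1, 1, 1, 0, 0, 1, 0, 0],          # 1 = event observed
    "methylated": [1, 1, 1, 1, 0, 0, 0, 0],          # promoter methylation status
})

meth, unmeth = df[df.methylated == 1], df[df.methylated == 0]
result = logrank_test(meth.months, unmeth.months,
                      event_observed_A=meth.recurrence,
                      event_observed_B=unmeth.recurrence)
print(result.p_value)

# A multivariate analysis, as in the study, would add clinical covariates and
# fit lifelines' CoxPHFitter on the same kind of data frame.
```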
ERIC Educational Resources Information Center
Perrier, Frederic; Nsengiyumva, Jean-Baptiste
2003-01-01
Constructivist, hands-on, inquiry-based science activities may have a curative potential that could be valuable in a psychological assistance programme for child victims of violence and war. To investigate this idea, pilot sessions were performed in an orphanage located in Ruhengeri, Rwanda, with seven young adults and two groups of 11 children…
Altermann, Eric; Lu, Jingli; McCulloch, Alan
2017-01-01
Expert curated annotation remains one of the critical steps in achieving a reliable, biologically relevant annotation. Here we announce the release of GAMOLA2, a user-friendly and comprehensive software package to process, annotate and curate draft and complete bacterial, archaeal, and viral genomes. GAMOLA2 represents a wrapping tool to combine gene model determination, functional Blast, COG, Pfam, and TIGRfam analyses with structural predictions including detection of tRNAs, rRNA genes, non-coding RNAs, signal protein cleavage sites, transmembrane helices, CRISPR repeats and vector sequence contaminations. GAMOLA2 has already been validated in a wide range of bacterial and archaeal genomes, and its modular concept allows easy addition of further functionality in future releases. A modified and adapted version of the Artemis Genome Viewer (Sanger Institute) has been developed to leverage the additional features and underlying information provided by the GAMOLA2 analysis, and is part of the software distribution. In addition to genome annotations, GAMOLA2 features, among others, supplemental modules that assist in the creation of custom Blast databases, annotation transfers between genome versions, and the preparation of Genbank files for submission via the NCBI Sequin tool. GAMOLA2 is intended to be run under a Linux environment, whereas the subsequent visualization and manual curation in Artemis is mobile and platform-independent. The development of GAMOLA2 is ongoing and community driven. New functionality can easily be added upon user requests, ensuring that GAMOLA2 provides information relevant to microbiologists. The software is available free of charge for academic use. PMID:28386247
Lee, Jay S; Parashar, Vartika; Miller, Jacquelyn B; Bremmer, Samantha M; Vu, Joceline V; Waljee, Jennifer F; Dossett, Lesly A
2018-07-01
Excessive opioid prescribing is common after curative-intent surgery, but little is known about what factors influence prescribing behaviors among surgeons. To identify targets for intervention, we performed a qualitative study of opioid prescribing after curative-intent surgery using the Theoretical Domains Framework, a well-established implementation science method for identifying factors influencing healthcare provider behavior. Prior to data collection, we constructed a semi-structured interview guide to explore decision making for opioid prescribing. We then conducted interviews with surgical oncology providers at a single comprehensive cancer center. Interviews were recorded, transcribed verbatim, then independently coded by two investigators using the Theoretical Domains Framework to identify theoretical domains relevant to opioid prescribing. Relevant domains were then linked to behavior models to select targeted interventions likely to improve opioid prescribing. Twenty-one subjects were interviewed from November 2016 to May 2017, including attending surgeons, resident surgeons, physician assistants, and nurses. Five theoretical domains emerged as relevant to opioid prescribing: environmental context and resources; social influences; beliefs about consequences; social/professional role and identity; and goals. Using these domains, three interventions were identified as likely to change opioid prescribing behavior: (1) enablement (deploy nurses during preoperative visits to counsel patients on opioid use); (2) environmental restructuring (provide on-screen prompts with normative data on the quantity of opioid prescribed); and (3) education (provide prescribing guidelines). Key determinants of opioid prescribing behavior after curative-intent surgery include environmental and social factors. Interventions targeting these factors are likely to improve opioid prescribing in surgical oncology.
Cataloging the biomedical world of pain through semi-automated curation of molecular interactions
Jamieson, Daniel G.; Roberts, Phoebe M.; Robertson, David L.; Sidders, Ben; Nenadic, Goran
2013-01-01
The vast collection of biomedical literature and its continued expansion has presented a number of challenges to researchers who require structured findings to stay abreast of and analyze molecular mechanisms relevant to their domain of interest. By structuring literature content into topic-specific machine-readable databases, the aggregate data from multiple articles can be used to infer trends that can be compared and contrasted with similar findings from topic-independent resources. Our study presents a generalized procedure for semi-automatically creating a custom topic-specific molecular interaction database through the use of text mining to assist manual curation. We apply the procedure to capture molecular events that underlie ‘pain’, a complex phenomenon with a large societal burden and unmet medical need. We describe how existing text mining solutions are used to build a pain-specific corpus, extract molecular events from it, add context to the extracted events and assess their relevance. The pain-specific corpus contains 765 692 documents from Medline and PubMed Central, from which we extracted 356 499 unique normalized molecular events, with 261 438 single protein events and 93 271 molecular interactions supplied by BioContext. Event chains are annotated with negation, speculation, anatomy, Gene Ontology terms, mutations, pain and disease relevance, which collectively provide detailed insight into how that event chain is associated with pain. The extracted relations are visualized in a wiki platform (wiki-pain.org) that enables efficient manual curation and exploration of the molecular mechanisms that underlie pain. Curation of 1500 grouped event chains ranked by pain relevance revealed 613 accurately extracted unique molecular interactions that in the future can be used to study the underlying mechanisms involved in pain. Our approach demonstrates that combining existing text mining tools with domain-specific terms and wiki-based visualization can facilitate rapid curation of molecular interactions to create a custom database. Database URL: ••• PMID:23707966
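As an illustration of how ranked, context-annotated event chains can be presented to curators, here is a small hypothetical sketch; the record fields, scores, and weighting are assumptions and not the authors' actual scoring scheme.

```python
# Illustrative sketch only: ranking text-mined event chains by a topic-relevance
# score so curators review the most promising candidates first, in the spirit
# of the semi-automated workflow described above. Fields and weights are toy values.
events = [
    {"interaction": "TRPV1 activates PKC", "pain_score": 0.92, "negated": False, "speculative": False},
    {"interaction": "NGF binds TrkA",      "pain_score": 0.75, "negated": False, "speculative": True},
    {"interaction": "IL6 regulates STAT3", "pain_score": 0.40, "negated": True,  "speculative": False},
]

def curation_priority(e):
    # Down-weight negated or speculative statements before queuing for review.
    penalty = 0.5 if (e["negated"] or e["speculative"]) else 1.0
    return e["pain_score"] * penalty

for e in sorted(events, key=curation_priority, reverse=True):
    print(f'{curation_priority(e):.2f}  {e["interaction"]}')
```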
Auditing the Assignments of Top-Level Semantic Types in the UMLS Semantic Network to UMLS Concepts
He, Zhe; Perl, Yehoshua; Elhanan, Gai; Chen, Yan; Geller, James; Bian, Jiang
2018-01-01
The Unified Medical Language System (UMLS) is an important terminological system. By the policy of its curators, each concept of the UMLS should be assigned the most specific Semantic Types (STs) in the UMLS Semantic Network (SN). Hence, the Semantic Types of most UMLS concepts are assigned at or near the bottom (leaves) of the UMLS Semantic Network. While most ST assignments are correct, some errors do occur. Therefore, Quality Assurance efforts of UMLS curators for ST assignments should concentrate on automatically detected sets of UMLS concepts with higher error rates than random sets. In this paper, we investigate the assignments of top-level semantic types in the UMLS semantic network to concepts, identify potential erroneous assignments, define four categories of errors, and thus provide assistance to curators of the UMLS to avoid these assignment errors. Human experts analyzed samples of concepts assigned 10 of the top-level semantic types and categorized the erroneous ST assignments into these four logical categories. Two thirds of the concepts assigned these 10 top-level semantic types are erroneous. Our results demonstrate that reviewing top-level semantic type assignments to concepts provides an effective way for UMLS quality assurance, compared with reviewing a random selection of semantic type assignments. PMID:29375930
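A minimal sketch of the kind of automated screen described above follows: it scans a pipe-delimited, MRSTY.RRF-style concept-to-semantic-type file and flags assignments to a set of type identifiers treated as top-level. The TUI set below is a placeholder for illustration, not the ten types audited in the study.

```python
# Illustrative sketch only: flag UMLS concepts assigned a "top-level" Semantic
# Type so curators can review them. The TUI set is a hypothetical example.
TOP_LEVEL_TUIS = {"T071", "T051", "T072", "T077"}  # placeholder set, not the study's

def flag_top_level_assignments(mrsty_path):
    """Yield (CUI, TUI, semantic type name) for rows assigned a top-level type."""
    with open(mrsty_path, encoding="utf-8") as fh:
        for line in fh:
            cui, tui, _stn, sty, *_ = line.rstrip("\n").split("|")
            if tui in TOP_LEVEL_TUIS:
                yield cui, tui, sty

# for cui, tui, sty in flag_top_level_assignments("MRSTY.RRF"):
#     print(cui, tui, sty)   # candidates for manual review by UMLS curators
```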
Explicit and spontaneous retrieval of emotional scenes: electrophysiological correlates.
Weymar, Mathias; Bradley, Margaret M; El-Hinnawi, Nasryn; Lang, Peter J
2013-10-01
When event-related potentials (ERP) are measured during a recognition task, items that have previously been presented typically elicit a larger late (400-800 ms) positive potential than new items. Recent data, however, suggest that emotional, but not neutral, pictures show ERP evidence of spontaneous retrieval when presented in a free-viewing task (Ferrari, Bradley, Codispoti, Karlsson, & Lang, 2012). In two experiments, we further investigated the brain dynamics of implicit and explicit retrieval. In Experiment 1, brain potentials were measured during a semantic categorization task, which did not explicitly probe episodic memory, but which, like a recognition task, required an active decision and a button press, and were compared to those elicited during recognition and free viewing. Explicit recognition prompted a late enhanced positivity for previously presented, compared with new, pictures regardless of hedonic content. In contrast, only emotional pictures showed an old-new difference when the task did not explicitly probe episodic memory, either when making an active categorization decision regarding picture content, or when simply viewing pictures. In Experiment 2, however, neutral pictures did prompt a significant old-new ERP difference during subsequent free viewing when emotionally arousing pictures were not included in the encoding set. These data suggest that spontaneous retrieval is heightened for salient cues, perhaps reflecting heightened attention and elaborative processing at encoding.
Explicit and spontaneous retrieval of emotional scenes: Electrophysiological correlates
Weymar, Mathias; Bradley, Margaret M.; El-Hinnawi, Nasryn; Lang, Peter J.
2014-01-01
When event-related potentials are measured during a recognition task, items that have previously been presented typically elicit a larger late (400–800 ms) positive potential than new items. Recent data, however, suggest that emotional, but not neutral, pictures show ERP evidence of spontaneous retrieval when presented in a free-viewing task (Ferrari, Bradley, Codispoti & Lang, 2012). In two experiments, we further investigated the brain dynamics of implicit and explicit retrieval. In Experiment 1, brain potentials were measured during a semantic categorization task, which did not explicitly probe episodic memory, but which, like a recognition task, required an active decision and a button press, and were compared to those elicited during recognition and free viewing. Explicit recognition prompted a late enhanced positivity for previously presented, compared to new, pictures regardless of hedonic content. In contrast, only emotional pictures showed an old-new difference when the task did not explicitly probe episodic memory, either when making an active categorization decision regarding picture content, or when simply viewing pictures. In Experiment 2, however, neutral pictures did prompt a significant old-new ERP difference during subsequent free viewing when emotionally arousing pictures were not included in the encoding set. These data suggest that spontaneous retrieval is heightened for salient cues, perhaps reflecting heightened attention and elaborative processing at encoding. PMID:23795588
Archaeological Salvage of the Joso Trestle Construction Camp, 45-FR-51 Lower Monumental Project.
1981-01-01
Manager, Marketing Department, Crescent Foods, Seattle; Terry Corrado Spyrison, Corrado Cutlery, Chicago; Jacqueline Stone, Assistant Curator, U.S...gold "D. Corrado, 204 No. Clark Str., Chicago, Ill." The Corrado Cutlery Co., which has been in business for some 75 years, reports that they probably...locating parties (McHenry 1903:75; Beahan 1904:87-88; Lavis 1906:43) can give some idea of the variety of edibles that might be represented in
Literature Mining of Pathogenesis-Related Proteins in Human Pathogens for Database Annotation
2009-10-01
Salmonella, and Shigella. In most cases the host is human, but may also include other mammal species. 2. Negative literature set of PH-PPIs. Of...cis.udel.edu The objective of Gallus Reactome is to provide a curated set of metabolic and signaling pathways for the chicken. To assist annotators...interested in papers that document pathways in the chicken, abstracts are classified according to the species that were the source of the experimental
Standards for Clinical Grade Genomic Databases.
Yohe, Sophia L; Carter, Alexis B; Pfeifer, John D; Crawford, James M; Cushman-Vokoun, Allison; Caughron, Samuel; Leonard, Debra G B
2015-11-01
Next-generation sequencing performed in a clinical environment must meet clinical standards, which requires reproducibility of all aspects of the testing. Clinical-grade genomic databases (CGGDs) are required to classify a variant and to assist in the professional interpretation of clinical next-generation sequencing. Applying quality laboratory standards to the reference databases used for sequence-variant interpretation presents a new challenge for validation and curation. To define CGGD and the categories of information contained in CGGDs and to frame recommendations for the structure and use of these databases in clinical patient care. Members of the College of American Pathologists Personalized Health Care Committee reviewed the literature and existing state of genomic databases and developed a framework for guiding CGGD development in the future. Clinical-grade genomic databases may provide different types of information. This work group defined 3 layers of information in CGGDs: clinical genomic variant repositories, genomic medical data repositories, and genomic medicine evidence databases. The layers are differentiated by the types of genomic and medical information contained and the utility in assisting with clinical interpretation of genomic variants. Clinical-grade genomic databases must meet specific standards regarding submission, curation, and retrieval of data, as well as the maintenance of privacy and security. These organizing principles for CGGDs should serve as a foundation for future development of specific standards that support the use of such databases for patient care.
Li, Zhao; Li, Jin; Yu, Peng
2018-01-01
Metadata curation has become increasingly important for biological discovery and biomedical research because a large amount of heterogeneous biological data is currently freely available. To facilitate efficient metadata curation, we developed an easy-to-use web-based curation application, GEOMetaCuration, for curating the metadata of Gene Expression Omnibus datasets. It can eliminate mechanical operations that consume precious curation time and can help coordinate curation efforts among multiple curators. It improves the curation process by introducing various features that are critical to metadata curation, such as a back-end curation management system and a curator-friendly front-end. The application is based on a commonly used web development framework of Python/Django and is open-sourced under the GNU General Public License V3. GEOMetaCuration is expected to benefit the biocuration community and to contribute to computational generation of biological insights using large-scale biological data. An example use case can be found at the demo website: http://geometacuration.yubiolab.org. Database URL: https://bitbucket.com/yubiolab/GEOMetaCuration PMID:29688376
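As a rough, framework-agnostic sketch of the coordination problem such a tool addresses (the curator names and accession identifiers below are hypothetical and are not GEOMetaCuration's actual code or schema), curation work can be distributed and tracked like this:

```python
# Hypothetical sketch: distribute GEO series among several curators and track
# completion status. Not GEOMetaCuration's implementation.
from itertools import cycle

curators = ["curator_a", "curator_b", "curator_c"]
series_ids = ["GSE10001", "GSE10002", "GSE10003", "GSE10004", "GSE10005"]  # toy accessions

# Round-robin assignment so effort is spread evenly and no series is curated twice.
assignments = {gse: who for gse, who in zip(series_ids, cycle(curators))}
status = {gse: "pending" for gse in series_ids}

def mark_done(gse):
    status[gse] = "done"

mark_done("GSE10002")
print(assignments)
print(status)
```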
A curated database of cyanobacterial strains relevant for modern taxonomy and phylogenetic studies.
Ramos, Vitor; Morais, João; Vasconcelos, Vitor M
2017-04-25
The dataset herein described lays the groundwork for an online database of relevant cyanobacterial strains, named CyanoType (http://lege.ciimar.up.pt/cyanotype). It is a database that includes categorized cyanobacterial strains useful for taxonomic, phylogenetic or genomic purposes, with associated information obtained by means of a literature-based curation. The dataset lists 371 strains and represents the first version of the database (CyanoType v.1). Information for each strain includes strain synonymy and/or co-identity, strain categorization, habitat, accession numbers for molecular data, taxonomy and nomenclature notes according to three different classification schemes, hierarchical automatic classification, phylogenetic placement according to a selection of relevant studies (including this), and important bibliographic references. The database will be updated periodically, namely by adding new strains meeting the criteria for inclusion and by revising and adding up-to-date metadata for strains already listed. A global 16S rDNA-based phylogeny is provided in order to assist users when choosing the appropriate strains for their studies.
Text mining for neuroanatomy using WhiteText with an updated corpus and a new web application
French, Leon; Liu, Po; Marais, Olivia; Koreman, Tianna; Tseng, Lucia; Lai, Artemis; Pavlidis, Paul
2015-01-01
We describe the WhiteText project, and its progress towards automatically extracting statements of neuroanatomical connectivity from text. We review progress to date on the three main steps of the project: recognition of brain region mentions, standardization of brain region mentions to neuroanatomical nomenclature, and connectivity statement extraction. We further describe a new version of our manually curated corpus that adds 2,111 connectivity statements from 1,828 additional abstracts. Cross-validation classification within the new corpus replicates results on our original corpus, recalling 67% of connectivity statements at 51% precision. The resulting merged corpus provides 5,208 connectivity statements that can be used to seed species-specific connectivity matrices and to better train automated techniques. Finally, we present a new web application that allows fast interactive browsing of the over 70,000 sentences indexed by the system, as a tool for accessing the data and assisting in further curation. Software and data are freely available at http://www.chibi.ubc.ca/WhiteText/. PMID:26052282
Advanced Curation Preparation for Mars Sample Return and Cold Curation
NASA Technical Reports Server (NTRS)
Fries, M. D.; Harrington, A. D.; McCubbin, F. M.; Mitchell, J.; Regberg, A. B.; Snead, C.
2017-01-01
NASA Curation is tasked with the care and distribution of NASA's sample collections, such as the Apollo lunar samples and cometary material collected by the Stardust spacecraft. Curation is also mandated to perform Advanced Curation research and development, which includes improving the curation of existing collections as well as preparing for future sample return missions. Advanced Curation has identified a suite of technologies and techniques that will require attention ahead of Mars sample return (MSR) and missions with cold curation (CCur) requirements, perhaps including comet sample return missions.
Improve services project -- Republic of the Marshall Islands.
Langidrik, J
1995-01-01
The Republic of the Marshall Islands has 60 dispensary sites, each staffed by 1 health assistant, to cover 80-800 people/site on 34 atolls. Until the spring of 1994, only curative services were available on a regular basis, and preventive services were provided by traveling health teams from the urban centers. In 1994, the health assistants in selected outer islands were trained to administer immunizations from vaccines which are sent regularly by air. Additional project sites are being selected. In 1993, 2 dispensaries initiated a project to 1) increase the number of women with access to prenatal care during the first trimester, 2) increase immunization levels, 3) improve access to preventive services, and 4) improve reporting and record-keeping systems. This project includes an important training component for the health assistant, the wife of the health assistant, the traditional birth attendant, the youth peer educator, community leaders, and a member of the local council. By 1994, this project was expanded to 13 dispensaries on 2 atolls. In 1995, 18 more dispensaries on 4 more atolls will be able to offer these additional services.
Jung, Da Hyun; Lee, Yong Chan; Kim, Jie-Hyun; Lee, Sang Kil; Shin, Sung Kwan; Park, Jun Chul; Chung, Hyunsoo; Park, Jae Jun; Youn, Young Hoon; Park, Hyojin
2017-03-01
Endoscopic resection (ER) is accepted as a curative treatment option for selected cases of early gastric cancer (EGC). Although additional surgery is often recommended for patients who have undergone non-curative ER, clinicians are cautious when managing elderly patients with GC because of comorbid conditions. The aim of the study was to investigate clinical outcomes in elderly patients following non-curative ER with and without additive treatment. Subjects included 365 patients (>75 years old) who were diagnosed with EGC and underwent ER between 2007 and 2015. Clinical outcomes of three patient groups [curative ER (n = 246), non-curative ER with additive treatment (n = 37), non-curative ER without additive treatment (n = 82)] were compared. Among the patients who underwent non-curative ER with additive treatment, 28 received surgery, three received a repeat ER, and six experienced argon plasma coagulation. Patients who underwent non-curative ER alone were significantly older than those who underwent additive treatment. Overall 5-year survival rates in the curative ER, non-curative ER with treatment, and non-curative ER without treatment groups were 84, 86, and 69 %, respectively. No significant difference in overall survival was found between patients in the curative ER and non-curative ER with additive treatment groups. The non-curative ER groups were categorized by lymph node metastasis risk factors to create a high-risk group that exhibited positive lymphovascular invasion or deep submucosal invasion greater than SM2 and a low-risk group without risk factors. Overall 5-year survival rate was lowest (60 %) in the high-risk group with non-curative ER and no additive treatment. Elderly patients who underwent non-curative ER with additive treatment showed better survival outcome than those without treatment. Therefore, especially with LVI or deep submucosal invasion, additive treatment is recommended in patients undergoing non-curative ER, even if they are older than 75 years.
Curating NASA's Past, Present, and Future Extraterrestrial Sample Collections
NASA Technical Reports Server (NTRS)
McCubbin, F. M.; Allton, J. H.; Evans, C. A.; Fries, M. D.; Nakamura-Messenger, K.; Righter, K.; Zeigler, R. A.; Zolensky, M.; Stansbery, E. K.
2016-01-01
The Astromaterials Acquisition and Curation Office (henceforth referred to herein as NASA Curation Office) at NASA Johnson Space Center (JSC) is responsible for curating all of NASA's extraterrestrial samples. Under the governing document, NASA Policy Directive (NPD) 7100.10E "Curation of Extraterrestrial Materials", JSC is charged with "...curation of all extra-terrestrial material under NASA control, including future NASA missions." The Directive goes on to define Curation as including "...documentation, preservation, preparation, and distribution of samples for research, education, and public outreach." Here we describe some of the past, present, and future activities of the NASA Curation Office.
Capturing Data Connections within the Climate Data Initiative to Support Resiliency
NASA Astrophysics Data System (ADS)
Ramachandran, R.; Bugbee, K.; Weigel, A. M.; Tilmes, C.
2015-12-01
The Climate Data Initiative (CDI) focuses on preparing the United States for the impacts of climate change by leveraging existing federal climate-relevant data to stimulate innovation and private-sector entrepreneurship supporting national climate-change preparedness. To achieve these goals, relevant data were curated around seven thematic areas related to climate change resiliency. Data for each theme were selected by subject matter experts from various Federal agencies and collected in Data.gov at http://climate.data.gov. While the curation effort for each theme has been immensely valuable on its own, in the end the themes essentially become a long directory or list, and the valuable connections between datasets and their intended uses are lost. The user understands that the datasets in the list have been approved by the CDI subject matter experts but has less certainty when making connections between the various datasets and their possible applications. Additionally, the curated list can be overwhelming and its intended uses difficult to interpret. In order to better address the needs of the CDI data end users, the CDI team has been developing a new controlled vocabulary that will assist in capturing connections between datasets. This new vocabulary will be implemented in the Global Change Information System (GCIS), which has the capability to link individual items within the system. This presentation will highlight the methodology used to develop the controlled vocabulary that will aid end users in both understanding and locating relevant datasets for their intended use.
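As a toy illustration of how a controlled vocabulary can capture connections between datasets and their intended uses (all terms and dataset names below are hypothetical, not actual CDI or GCIS records):

```python
# Illustrative sketch only: link datasets to controlled-vocabulary terms so
# that end users can find datasets by intended use. All values are hypothetical.
controlled_vocabulary = {
    "coastal_flooding": {"theme": "Coastal Flooding",
                         "uses": ["inundation mapping", "infrastructure planning"]},
    "food_resilience":  {"theme": "Food Resilience",
                         "uses": ["crop yield projection"]},
}

dataset_links = [
    {"dataset": "Sea level rise viewer layers", "term": "coastal_flooding"},
    {"dataset": "Crop condition indices",       "term": "food_resilience"},
]

def datasets_for_use(use):
    """Return datasets whose vocabulary term lists the given intended use."""
    return [d["dataset"] for d in dataset_links
            if use in controlled_vocabulary[d["term"]]["uses"]]

print(datasets_for_use("inundation mapping"))
```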
Canto: an online tool for community literature curation.
Rutherford, Kim M; Harris, Midori A; Lock, Antonia; Oliver, Stephen G; Wood, Valerie
2014-06-15
Detailed curation of published molecular data is essential for any model organism database. Community curation enables researchers to contribute data from their papers directly to databases, supplementing the activity of professional curators and improving coverage of a growing body of literature. We have developed Canto, a web-based tool that provides an intuitive curation interface for both curators and researchers, to support community curation in the fission yeast database, PomBase. Canto supports curation using OBO ontologies, and can be easily configured for use with any species. Canto code and documentation are available under an Open Source license from http://curation.pombase.org/. Canto is a component of the Generic Model Organism Database (GMOD) project (http://www.gmod.org/). © The Author 2014. Published by Oxford University Press.
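For illustration only, the sketch below shows one simple way to load [Term] identifiers and names from an OBO-format ontology file so that submitted annotations can be checked against valid terms; it is not Canto's implementation, and the file name is a placeholder.

```python
# Illustrative sketch only: read [Term] stanzas from an OBO ontology file and
# build an id -> name lookup for validating curated annotations.
def load_obo_terms(path):
    """Return {term_id: name} for every [Term] stanza in an OBO file."""
    terms, in_term, current_id = {}, False, None
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if line.startswith("["):                 # start of a new stanza
                in_term, current_id = (line == "[Term]"), None
            elif in_term and line.startswith("id:"):
                current_id = line.split("id:", 1)[1].strip()
            elif in_term and line.startswith("name:") and current_id:
                terms[current_id] = line.split("name:", 1)[1].strip()
    return terms

# terms = load_obo_terms("go-basic.obo")   # placeholder file name
# print(len(terms), "ontology terms available for annotation")
```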
Dilmanian, F Avraham [Yaphank, NY; Anchel, David J [Rocky Point, NY; Gaudette, Glenn [Holden, MA; Romanelli, Pantaleo [Monteroduni, IT; Hainfeld, James [Shoreham, NY
2010-06-29
A method of assisting recovery of an injury site of the central nervous system (CNS) or treating a disease includes providing a therapeutic dose of X-ray radiation to a target volume through an array of parallel microplanar beams. The dose to treat CNS injury temporarily removes regeneration inhibitors from the irradiated site. Substantially unirradiated cells surviving between beams migrate to the in-beam portion and assist recovery. The dose may be staggered in fractions over sessions using angle-variable intersecting microbeam arrays (AVIMA). Additional doses are administered by varying the orientation of the beams. The method is enhanced by injecting stem cells into the injury site. One array or the AVIMA method is applied to ablate selected cells in a target volume associated with disease for palliative or curative effect. Atrial fibrillation is treated by irradiating the atrial wall to destroy myocardial cells while continuously rotating the subject.
Curating NASA's Extraterrestrial Samples - Past, Present, and Future
NASA Technical Reports Server (NTRS)
Allen, Carlton; Allton, Judith; Lofgren, Gary; Righter, Kevin; Zolensky, Michael
2011-01-01
Curation of extraterrestrial samples is the critical interface between sample return missions and the international research community. The Astromaterials Acquisition and Curation Office at the NASA Johnson Space Center (JSC) is responsible for curating NASA's extraterrestrial samples. Under the governing document, NASA Policy Directive (NPD) 7100.10E "Curation of Extraterrestrial Materials", JSC is charged with ". . . curation of all extraterrestrial material under NASA control, including future NASA missions." The Directive goes on to define Curation as including "documentation, preservation, preparation, and distribution of samples for research, education, and public outreach."
Curating NASA's Extraterrestrial Samples - Past, Present, and Future
NASA Technical Reports Server (NTRS)
Allen, Carlton; Allton, Judith; Lofgren, Gary; Righter, Kevin; Zolensky, Michael
2010-01-01
Curation of extraterrestrial samples is the critical interface between sample return missions and the international research community. The Astromaterials Acquisition and Curation Office at the NASA Johnson Space Center (JSC) is responsible for curating NASA's extraterrestrial samples. Under the governing document, NASA Policy Directive (NPD) 7100.10E "Curation of Extraterrestrial Materials," JSC is charged with ". . . curation of all extraterrestrial material under NASA control, including future NASA missions." The Directive goes on to define Curation as including documentation, preservation, preparation, and distribution of samples for research, education, and public outreach.
Curating NASA's Future Extraterrestrial Sample Collections: How Do We Achieve Maximum Proficiency?
NASA Technical Reports Server (NTRS)
McCubbin, Francis; Evans, Cynthia; Zeigler, Ryan; Allton, Judith; Fries, Marc; Righter, Kevin; Zolensky, Michael
2016-01-01
The Astromaterials Acquisition and Curation Office (henceforth referred to herein as NASA Curation Office) at NASA Johnson Space Center (JSC) is responsible for curating all of NASA's extraterrestrial samples. Under the governing document, NASA Policy Directive (NPD) 7100.10E "Curation of Extraterrestrial Materials", JSC is charged with "The curation of all extraterrestrial material under NASA control, including future NASA missions." The Directive goes on to define Curation as including "... documentation, preservation, preparation, and distribution of samples for research, education, and public outreach." Here we describe some of the ongoing efforts to ensure that the future activities of the NASA Curation Office are working towards a state of maximum proficiency.
Biocuration at the Saccharomyces genome database.
Skrzypek, Marek S; Nash, Robert S
2015-08-01
Saccharomyces Genome Database is an online resource dedicated to managing information about the biology and genetics of the model organism, yeast (Saccharomyces cerevisiae). This information is derived primarily from scientific publications through a process of human curation that involves manual extraction of data and their organization into a comprehensive system of knowledge. This system provides a foundation for further analysis of experimental data coming from research on yeast as well as other organisms. In this review we will demonstrate how biocuration and biocurators add a key component, the biological context, to our understanding of how genes, proteins, genomes and cells function and interact. We will explain the role biocurators play in sifting through the wealth of biological data to incorporate and connect key information. We will also discuss the many ways we assist researchers with their various research needs. We hope to convince the reader that manual curation is vital in converting the flood of data into organized and interconnected knowledge, and that biocurators play an essential role in the integration of scientific information into a coherent model of the cell. © 2015 Wiley Periodicals, Inc.
The impact of comorbidity on cancer and its treatment.
Sarfati, Diana; Koczwara, Bogda; Jackson, Christopher
2016-07-01
Comorbidity is common among cancer patients and, with an aging population, is becoming more so. Comorbidity potentially affects the development, stage at diagnosis, treatment, and outcomes of people with cancer. Despite the intimate relationship between comorbidity and cancer, there is limited consensus on how to record, interpret, or manage comorbidity in the context of cancer, with the result that patients who have comorbidity are less likely to receive treatment with curative intent. Evidence in this area is lacking because of the frequent exclusion of patients with comorbidity from randomized controlled trials. There is evidence that some patients with comorbidity have potentially curative treatment unnecessarily modified, compromising optimal care. Patients with comorbidity have poorer survival, poorer quality of life, and higher health care costs. Strategies to address these issues include improving the evidence base for patients with comorbidity, further development of clinical tools to assist decision making, improved integration and coordination of care, and skill development for clinicians. CA Cancer J Clin 2016;66:337-350. © 2016 American Cancer Society.
Real-time estimation of wildfire perimeters from curated crowdsourcing.
Zhong, Xu; Duckham, Matt; Chong, Derek; Tolhurst, Kevin
2016-04-11
Real-time information about the spatial extents of evolving natural disasters, such as wildfire or flood perimeters, can assist both emergency responders and the general public during an emergency. However, authoritative information sources can suffer from bottlenecks and delays, while user-generated social media data usually lacks the necessary structure and trustworthiness for reliable automated processing. This paper describes and evaluates an automated technique for real-time tracking of wildfire perimeters based on publicly available "curated" crowdsourced data about telephone calls to the emergency services. Our technique is based on established data mining tools, and can be adjusted using a small number of intuitive parameters. Experiments using data from the devastating Black Saturday wildfires (2009) in Victoria, Australia, demonstrate the potential for the technique to detect and track wildfire perimeters automatically, in real time, and with moderate accuracy. Accuracy can be further increased through combination with other authoritative demographic and environmental information, such as population density and dynamic wind fields. These results are also independently validated against data from the more recent 2014 Mickleham-Dalrymple wildfires.
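A minimal sketch of the general idea (not the authors' algorithm): cluster geocoded emergency-call reports and take the convex hull of each cluster as a rough perimeter estimate. Coordinates and parameters below are toy values.

```python
# Illustrative sketch only: estimate rough fire perimeters from curated,
# crowdsourced call locations via clustering plus convex hulls.
import numpy as np
from sklearn.cluster import DBSCAN
from scipy.spatial import ConvexHull

calls = np.array([
    [145.001, -37.501], [145.003, -37.502], [145.002, -37.499], [145.005, -37.503],
    [146.200, -36.900], [146.201, -36.901], [146.203, -36.899], [146.199, -36.902],
])  # (longitude, latitude) of calls reporting smoke or fire (toy values)

labels = DBSCAN(eps=0.01, min_samples=3).fit_predict(calls)

for cluster_id in set(labels) - {-1}:          # -1 marks noise points
    points = calls[labels == cluster_id]
    hull = ConvexHull(points)
    perimeter = points[hull.vertices]          # candidate fire-perimeter polygon
    print(f"cluster {cluster_id}: {len(perimeter)} perimeter vertices")
```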
ERIC Educational Resources Information Center
Shorish, Yasmeen
2012-01-01
This article describes the fundamental challenges to data curation, how these challenges may be compounded for smaller institutions, and how data management is an essential and manageable component of data curation. Data curation is often discussed within the confines of large research universities. As a result, master's and baccalaureate…
Data Curation: Improving Environmental Health Data Quality.
Yang, Lin; Li, Jiao; Hou, Li; Qian, Qing
2015-01-01
With the growing recognition of the influence of climate change on human health, scientists are increasingly turning their attention to analyzing the relationship between meteorological factors and adverse health effects. However, the paucity of high-quality integrated data is one of the great challenges, especially when scientific studies rely on data-intensive computing. This paper aims to design an appropriate curation process to address this problem. We present a data curation workflow that: (i) follows the guidance of the DCC Curation Lifecycle Model; (ii) combines manual curation with automatic curation; and (iii) solves the environmental health data curation problem. The workflow was applied to a medical knowledge service system and showed that it was capable of improving work efficiency and data quality.
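A minimal sketch, assuming nothing about the authors' implementation, of how automatic checks and manual curation can be combined: records that fail simple automated validation are queued for a human curator. The field names and validation rules are hypothetical.

```python
# Illustrative sketch only: automated validation feeding a manual-curation queue.
def auto_validate(record):
    """Return a list of issues found by simple automated checks."""
    issues = []
    if not (-90 <= record.get("temperature_c", 0) <= 60):
        issues.append("temperature out of plausible range")
    if record.get("station_id") is None:
        issues.append("missing station identifier")
    return issues

records = [
    {"station_id": "BJ001", "temperature_c": 23.5},   # passes automated checks
    {"station_id": None,    "temperature_c": 999.0},  # routed to a curator
]

manual_queue = [(r, auto_validate(r)) for r in records if auto_validate(r)]
for record, issues in manual_queue:
    print("needs manual curation:", record, issues)
```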
Advancing Site-Based Data Curation for Geobiology: The Yellowstone Exemplar (Invited)
NASA Astrophysics Data System (ADS)
Palmer, C. L.; Fouke, B. W.; Rodman, A.; Choudhury, G. S.
2013-12-01
While advances in the management and archiving of scientific digital data are proceeding apace, there is an urgent need for data curation services to collect and provide access to high-value data fit for reuse. The Site-Based Data Curation (SBDC) project is establishing a framework of guidelines and processes for the curation of research data generated at scientifically significant sites. The project is a collaboration among information scientists, geobiologists, data archiving experts, and resource managers at Yellowstone National Park (YNP). Based on our previous work with the Data Conservancy on indicators of value for research data, several factors made YNP an optimal site for developing the SBDC framework, including unique environmental conditions, a permitting process for data collection, and opportunities for geo-located longitudinal data and multiple data sources for triangulation and context. Stakeholder analysis is informing the SBDC requirements, through engagement with geologists, geochemists, and microbiologists conducting research at YNP and personnel from the Yellowstone Center for Resources and other YNP units. To date, results include data value indicators specific to site-based research, minimum and optimal parameters for data description and metadata, and a strategy for organizing data around sampling events. New value indicators identified by the scientists include ease of access to park locations for verification and correction of data, and stable environmental conditions important for controlling variables. Researchers see high potential for data aggregated from the many individual investigators conducting permitted research at YNP, however reuse is clearly contingent on detailed and consistent sampling records. Major applications of SBDC include identifying connections in dynamic systems, spatial temporal synthesis, analyzing variability within and across geological features, tracking site evolution, assessing anomalies, and greater awareness of complementary research and opportunities for collaboration. Moreover, making evident the range of available YNP data will inform what should be explored next, even beyond YNP. Like funding agencies and policy makers, YNP researchers and resource managers are invested in data curation for strategic purposes related to the big picture and efficiency of science. For the scientists, YNP represents an ideal, protected natural system that can serve as an indicator of world events, and SBDC provides the ability to ask and answer broader research questions and leverage an extensive store of highly applicable data. SBDC affords YNP improved coordination and transparency of data collection activities, and easier identification of trends and connections across projects. SBDC capabilities that support broader inquiry and better coordination of scientific effort have clear implications for data curation at other research intensive sites, and may also inform how data systems can provide strategic assistance to science more generally.
ERIC Educational Resources Information Center
Mihailidis, Paul
2015-01-01
Despite the increased role of digital curation tools and platforms in the daily life of social network users, little research has focused on the competencies and dispositions that young people develop to effectively curate content online. This paper details the results of a mixed method study exploring the curation competencies of young people in…
NASA Technical Reports Server (NTRS)
McCubbin, Francis M.; Zeigler, Ryan A.
2017-01-01
The Astromaterials Acquisition and Curation Office (henceforth referred to herein as NASA Curation Office) at NASA Johnson Space Center (JSC) is responsible for curating all of NASA's extraterrestrial samples. Under the governing document, NASA Policy Directive (NPD) 7100.10F, JSC is charged with curation of all extraterrestrial material under NASA control, including future NASA missions. The Directive goes on to define Curation as including documentation, preservation, preparation, and distribution of samples for research, education, and public outreach. Here we briefly describe NASA's astromaterials collections and our ongoing efforts related to enhancing the utility of our current collections as well as our efforts to prepare for future sample return missions. We collectively refer to these efforts as advanced curation.
NASA Technical Reports Server (NTRS)
McCubbin, F. M.; Evans, C. A.; Fries, M. D.; Harrington, A. D.; Regberg, A. B.; Snead, C. J.; Zeigler, R. A.
2017-01-01
The Astromaterials Acquisition and Curation Office (henceforth referred to herein as NASA Curation Office) at NASA Johnson Space Center (JSC) is responsible for curating all of NASA's extraterrestrial samples. Under the governing document, NASA Policy Directive (NPD) 7100.10F, JSC is charged with curation of all extraterrestrial material under NASA control, including future NASA missions. The Directive goes on to define Curation as including documentation, preservation, preparation, and distribution of samples for research, education, and public outreach. Here we briefly describe NASA's astromaterials collections and our ongoing efforts related to enhancing the utility of our current collections as well as our efforts to prepare for future sample return missions. We collectively refer to these efforts as advanced curation.
NASA Technical Reports Server (NTRS)
McCubbin, F. M.; Allton, J. H.; Barnes, J. J.; Boyce, J. W.; Burton, A. S.; Draper, D. S.; Evans, C. A.; Fries, M. D.; Jones, J. H.; Keller, L. P.;
2017-01-01
The Astromaterials Acquisition and Curation Office (henceforth referred to herein as NASA Curation Office) at NASA Johnson Space Center (JSC) is responsible for curating all of NASA's extraterrestrial samples. JSC presently curates 9 different astromaterials collections: (1) Apollo samples, (2) LUNA samples, (3) Antarctic meteorites, (4) Cosmic dust particles, (5) Microparticle Impact Collection [formerly called Space Exposed Hardware], (6) Genesis solar wind, (7) Stardust comet Wild-2 particles, (8) Stardust interstellar particles, and (9) Hayabusa asteroid Itokawa particles. In addition, the next missions bringing carbonaceous asteroid samples to JSC are Hayabusa 2/asteroid Ryugu and OSIRIS-REx/asteroid Bennu, in 2021 and 2023, respectively. The Hayabusa 2 samples are provided as part of an international agreement with JAXA. The NASA Curation Office plans for the requirements of future collections in an "Advanced Curation" program. Advanced Curation is tasked with developing procedures, technology, and data sets necessary for curating new types of collections as envisioned by NASA exploration goals. Here we review the science value and sample curation needs of some potential targets for sample return missions over the next 35 years.
The Importance of Contamination Knowledge in Curation - Insights into Mars Sample Return
NASA Technical Reports Server (NTRS)
Harrington, A. D.; Calaway, M. J.; Regberg, A. B.; Mitchell, J. L.; Fries, M. D.; Zeigler, R. A.; McCubbin, F. M.
2018-01-01
The Astromaterials Acquisition and Curation Office at NASA Johnson Space Center (JSC), in Houston, TX (henceforth Curation Office) manages the curation of extraterrestrial samples returned by NASA missions and shared collections from international partners, preserving their integrity for future scientific study while providing the samples to the international community in a fair and unbiased way. The Curation Office also curates flight and non-flight reference materials and other materials from spacecraft assembly (e.g., lubricants, paints and gases) of sample return missions that would have the potential to cross-contaminate a present or future NASA astromaterials collection.
NASA Technical Reports Server (NTRS)
Fletcher, L. A.; Allen, C. C.; Bastien, R.
2008-01-01
NASA's Johnson Space Center (JSC) and the Astromaterials Curator are charged by NPD 7100.10D with the curation of all of NASA's extraterrestrial samples, including those from future missions. This responsibility includes the development of new sample handling and preparation techniques; therefore, the Astromaterials Curator must begin developing procedures to preserve, prepare and ship samples at sub-freezing temperatures in order to enable future sample return missions. Such missions might include the return of future frozen samples from permanently-shadowed lunar craters, the nuclei of comets, the surface of Mars, etc. We are demonstrating the ability to curate samples under cold conditions by designing, installing and testing a cold curation glovebox. This glovebox will allow us to store, document, manipulate and subdivide frozen samples while quantifying and minimizing contamination throughout the curation process.
ERIC Educational Resources Information Center
McCoy, Floyd W.
1977-01-01
Reports on a recent meeting of marine curators in which data dissemination, standardization of marine curating techniques and methods, responsibilities of curators, funding problems, and sampling equipment were the main areas of discussion. A listing of the major deep sea sample collections in the United States is also provided. (CP)
Can we replace curation with information extraction software?
Karp, Peter D
2016-01-01
Can we use programs for automated or semi-automated information extraction from scientific texts as practical alternatives to professional curation? I show that error rates of current information extraction programs are too high to replace professional curation today. Furthermore, current IEP programs extract single narrow slivers of information, such as individual protein interactions; they cannot extract the large breadth of information extracted by professional curators for databases such as EcoCyc. They also cannot arbitrate among conflicting statements in the literature as curators can. Therefore, funding agencies should not hobble the curation efforts of existing databases on the assumption that a problem that has stymied Artificial Intelligence researchers for more than 60 years will be solved tomorrow. Semi-automated extraction techniques appear to have significantly more potential based on a review of recent tools that enhance curator productivity. But a full cost-benefit analysis for these tools is lacking. Without such analysis it is possible to expend significant effort developing information-extraction tools that automate small parts of the overall curation workflow without achieving a significant decrease in curation costs.Database URL. © The Author(s) 2016. Published by Oxford University Press.
The Role of the Curator in Modern Hospitals: A Transcontinental Perspective.
Moss, Hilary; O'Neill, Desmond
2016-12-13
This paper explores the role of the curator in hospitals. The arts play a significant role in every society; however, recent studies indicate a neglect of the aesthetic environment of healthcare. This international study explores the complex role of the curator in modern hospitals. Semi-structured interviews were conducted with ten arts specialists in hospitals across five countries and three continents for a qualitative, phenomenological study. Five themes arose from the data: (1) patient involvement and influence on the arts programme in hospital; (2) understanding the role of the curator in hospital; (3) influences on arts programming in hospital; (4) types of arts programmes; and (5) limitations to effective curation in hospital. Recommendations arising from the research included recognition of the specialised role of the curator in hospitals, building positive links with clinical staff to effect positive hospital arts programmes, and increasing formal involvement of patients in arts planning in hospital. Hospital curation can be a vibrant arena for arts development, and the role of the hospital curator is a ground-breaking specialist role that can bring benefits to hospital life. The role of curator in hospital deserves to be supported and developed by both the arts and health sectors.
Curating NASA's Past, Present, and Future Astromaterial Sample Collections
NASA Technical Reports Server (NTRS)
Zeigler, R. A.; Allton, J. H.; Evans, C. A.; Fries, M. D.; McCubbin, F. M.; Nakamura-Messenger, K.; Righter, K.; Zolensky, M.; Stansbery, E. K.
2016-01-01
The Astromaterials Acquisition and Curation Office at NASA Johnson Space Center (hereafter JSC curation) is responsible for curating all of NASA's extraterrestrial samples. JSC presently curates 9 different astromaterials collections in seven different clean-room suites: (1) Apollo Samples (ISO (International Standards Organization) class 6 + 7); (2) Antarctic Meteorites (ISO 6 + 7); (3) Cosmic Dust Particles (ISO 5); (4) Microparticle Impact Collection (ISO 7; formerly called Space-Exposed Hardware); (5) Genesis Solar Wind Atoms (ISO 4); (6) Stardust Comet Particles (ISO 5); (7) Stardust Interstellar Particles (ISO 5); (8) Hayabusa Asteroid Particles (ISO 5); (9) OSIRIS-REx Spacecraft Coupons and Witness Plates (ISO 7). Additional cleanrooms are currently being planned to house samples from two new collections, Hayabusa 2 (2021) and OSIRIS-REx (2023). In addition to the labs that house the samples, we maintain a wide variety of infrastructure facilities required to support the clean rooms: HEPA-filtered air-handling systems, ultrapure dry gaseous nitrogen systems, an ultrapure water system, and cleaning facilities to provide clean tools and equipment for the labs. We also have sample preparation facilities for making thin sections, microtome sections, and even focused ion-beam sections. We routinely monitor the cleanliness of our clean rooms and infrastructure systems, including measurements of inorganic or organic contamination, weekly airborne particle counts, compositional and isotopic monitoring of liquid N2 deliveries, and daily UPW system monitoring. In addition to the physical maintenance of the samples, we track within our databases the current and ever-changing characteristics (weight, location, etc.) of more than 250,000 individually numbered samples across our various collections, as well as more than 100,000 images, and countless "analog" records that document the processing history of each individual sample. JSC Curation is co-located with JSC's Astromaterials Research Office, which houses a world-class suite of analytical instrumentation and scientists. We leverage these labs and personnel to better curate the samples. Part of the curation process is planning for the future, and we refer to these planning efforts as "advanced curation". Advanced Curation is tasked with developing procedures, technology, and data sets necessary for curating new types of collections as envisioned by NASA exploration goals. We are (and have been) planning for future curation, including cold curation, extended curation of ices and volatiles, curation of samples with special chemical considerations such as perchlorate-rich samples, and curation of organically- and biologically-sensitive samples.
Wang, Yanchao; Sunderraman, Rajshekhar
2006-01-01
In this paper, we propose two architectures for curating PDB data to improve its quality. The first, the PDB Data Curation System, is developed by adding two components, a Checking Filter and a Curation Engine, between the User Interface and the Database. This architecture supports basic PDB data curation. The second, the PDB Data Curation System with XCML, is designed for further curation and adds four more components, PDB-XML, PDB, OODB, and Protein-OODB, to the previous one. This architecture uses the XCML language to automatically check errors in PDB data, making PDB data more consistent and accurate. These two tools can be used for cleaning existing PDB files and creating new PDB files. We also outline how to add constraints and assertions with XCML to obtain better data. In addition, we discuss data provenance, which may affect data accuracy and consistency.
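A minimal sketch, assuming a validation stage like the Checking Filter described above: it scans PDB-format text and flags ATOM/HETATM records whose fixed-width fields do not parse before any downstream curation step sees them. The function name and the specific checks are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical checking-filter sketch for PDB-format records.
def check_pdb_atom_records(lines):
    problems = []
    for n, line in enumerate(lines, start=1):
        if not line.startswith(("ATOM  ", "HETATM")):
            continue
        # Atom name occupies columns 13-16 (1-based) in the PDB format.
        if not line[12:16].strip():
            problems.append((n, "missing atom name"))
        try:
            # Coordinate columns (1-based): x = 31-38, y = 39-46, z = 47-54.
            float(line[30:38]); float(line[38:46]); float(line[46:54])
        except (ValueError, IndexError):
            problems.append((n, "unparseable coordinates"))
    return problems

# Example: a record with a truncated coordinate field is flagged.
sample = ["ATOM      1  N   MET A   1      38.5  27.9\n"]
print(check_pdb_atom_records(sample))  # [(1, 'unparseable coordinates')]
```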
Using random forests for assistance in the curation of G-protein coupled receptor databases.
Shkurin, Aleksei; Vellido, Alfredo
2017-08-18
Biology is experiencing a gradual but fast transformation from a laboratory-centred science towards a data-centred one. As such, it requires robust data engineering and the use of quantitative data analysis methods as part of database curation. This paper focuses on G protein-coupled receptors, a large and heterogeneous super-family of cell membrane proteins of interest to biology in general. One of its families, Class C, is of particular interest to pharmacology and drug design. This family is quite heterogeneous on its own, and the discrimination of its several sub-families is a challenging problem. In the absence of a known crystal structure, such discrimination must rely on their primary amino acid sequences. We are interested not so much in achieving maximum sub-family discrimination accuracy using quantitative methods as in exploring sequence misclassification behavior. Specifically, we are interested in isolating those sequences showing consistent misclassification, that is, sequences that are very often misclassified and almost always to the same wrong sub-family. Random forests are used for this analysis due to their ensemble nature, which makes them naturally suited to gauge the consistency of misclassification. This consistency is here defined through the voting scheme of their base tree classifiers. Detailed consistency results for the random forest ensemble classification were obtained for all receptors and for all data transformations of their unaligned primary sequences. Shortlists of the most consistently misclassified receptors for each subfamily and transformation, as well as an overall shortlist including those cases that were consistently misclassified across transformations, were obtained. The latter should be referred to experts for further investigation as a data curation task. The automatic discrimination of the Class C sub-families of G protein-coupled receptors from their unaligned primary sequences shows clear limits. This study has investigated in some detail the consistency of their misclassification using random forest ensemble classifiers. Different sub-families have been shown to display very different discrimination consistency behaviors. The individual identification of consistently misclassified sequences should provide a tool for quality control for GPCR database curators.
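As an illustration of the approach summarised above (not the authors' code), the following sketch computes a per-sequence misclassification consistency score from the individual tree votes of a scikit-learn random forest; the feature matrix X, labels y, and sequence identifiers ids are hypothetical placeholders for encoded, unaligned sequences.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def misclassification_consistency(X, y, ids, n_trees=500, seed=0):
    X_tr, X_te, y_tr, y_te, _, ids_te = train_test_split(
        X, y, ids, test_size=0.3, random_state=seed, stratify=y)
    forest = RandomForestClassifier(n_estimators=n_trees, random_state=seed)
    forest.fit(X_tr, y_tr)

    # Per-tree votes: each base tree predicts encoded class indices,
    # which map back to labels through forest.classes_.
    votes = np.stack([
        forest.classes_[tree.predict(X_te).astype(int)]
        for tree in forest.estimators_
    ])  # shape: (n_trees, n_test_samples)

    report = []
    for j, (true_label, seq_id) in enumerate(zip(y_te, ids_te)):
        wrong = votes[:, j][votes[:, j] != true_label]
        if len(wrong) == 0:
            continue
        # Consistency: fraction of all trees voting for the single most
        # frequent wrong sub-family for this sequence.
        labels, counts = np.unique(wrong, return_counts=True)
        top_wrong = labels[np.argmax(counts)]
        report.append((seq_id, true_label, top_wrong, round(counts.max() / n_trees, 3)))

    # Sequences most consistently pulled toward one wrong sub-family come first.
    return sorted(report, key=lambda r: r[-1], reverse=True)
```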
Identifying the Functional Requirements for an Arizona Astronomy Data Hub (AADH)
NASA Astrophysics Data System (ADS)
Stahlman, G.; Heidorn, P. B.
2015-12-01
Astronomy data represent a curation challenge for information managers, as well as for astronomers. Extracting knowledge from these heterogeneous and complex datasets is particularly complicated and requires both interdisciplinary and domain expertise to accomplish true curation, with an overall goal of facilitating reproducible science through discoverability and persistence. A group of researchers and professional staff at the University of Arizona held several meetings during the spring of 2015 about astronomy data and the role of the university in curation of that data. The group decided that it was critical to obtain a broader consensus on the needs of the community. With assistance from a Start for Success grant provided by the University of Arizona Office of Research and Discovery and funding from the American Astronomical Society (AAS), a workshop was held in early July 2015, with 28 participants plus 4 organizers in attendance. Representing university researchers as well as astronomical facilities and a scholarly society, the group verified that there is indeed a problem with the long-term curation of some astronomical data not associated with major facilities, and that a repository or "data hub" with the correct functionality could facilitate research and the preservation and use of astronomy data. The workshop members also identified a set of next steps, including the identification of possible data and metadata to be included in the Hub. The participants further helped to identify additional information that must be gathered before construction of the AADH could begin, including identifying significant datasets that do not currently have sufficient preservation and dissemination infrastructure, as well as some data associated with journal publications and the broader context of the data beyond that directly published in the journals. Workshop participants recommended that a set of grant proposals be developed that ensure community buy-in and participation. The project should be developed in an agile, incremental manner that will allow consistent community growth from the early stages of the project, building on existing iPlant infrastructure (www.iplantcollaborative.org) initially developed for the biology community.
Preparing to Receive and Handle Martian Samples When They Arrive on Earth
NASA Technical Reports Server (NTRS)
McCubbin, Francis M.
2017-01-01
The Astromaterials Acquisition and Curation Office at NASA Johnson Space Center (JSC) is responsible for curating all of NASA's extraterrestrial samples. Under the governing document, NASA Policy Directive (NPD) 7100.10F and its derivative NPR 'Curation of Extraterrestrial Materials', JSC is charged with 'The curation of all extraterrestrial material under NASA control, including future NASA missions.' The Directive goes on to define Curation as including '...documentation, preservation, preparation, and distribution of samples for research, education, and public outreach.'
Kim, Jin Cheon; Lee, Jong Lyul; Alotaibi, Abdulrahman Muaod; Yoon, Yong Sik; Kim, Chan Wook; Park, In Ja
2017-08-01
Few investigations of robot-assisted intersphincteric resection (ISR) are presently available to support this procedure as a safe and efficient option. We aimed to evaluate the utility of robot-assisted ISR by comparing ISR and abdominoperineal resection (APR) using both robot-assisted and open approaches. A total of 558 patients with lower rectal cancer (LRC) who underwent curative operations between July 2010 and June 2015 were enrolled and treated by either robot-assisted (ISR vs. APR = 310 vs. 34) or open approaches (144 vs. 70). Perioperative and functional outcomes, including urogenital and anorectal dysfunctions, were measured. Recurrence and survival were examined in 216 patients in whom >3 years had elapsed after the operation. The robot-assisted approach was the most significant parameter determining ISR achievement among the potential parameters (OR = 3.467, 95% CI = 2.095-5.738, p < 0.001). Early surgical complications occurred more frequently in the open ISR group (16 vs. 7.7%, p = 0.01). Voiding and male sexual dysfunctions were significantly more frequent with open ISR (p < 0.05). The fecal incontinence and lifestyle alteration scores were greater with open ISR than with robot-assisted ISR at 12 and 24 months, respectively (p < 0.05). However, the 3-year cumulative rates of local recurrence and survival did not differ between the two groups. Robot-assisted ISR replaced a significant portion of APR to achieve successful SSO, mostly via a transabdominal approach and double-stapled anastomosis. Robot-assisted ISR, with its minimal invasiveness, may help reduce anorectal and urogenital dysfunctions.
Liang, Weiqiang; Yao, Yuanyuan; Huang, Zixian; Chen, Yuhong; Ji, Chenyang; Zhang, Jinming
2016-07-01
The purpose of this study was to evaluate the clinical application of individual craniofacial bone fabrications using computer-assisted design (CAD)-computer-assisted manufacturing technology for the reconstruction of craniofacial bone defects. A total of 8 patients diagnosed with craniofacial bone defects were enrolled in this study between May 2007 and August 2010. After computed tomography scans were obtained, the patients were fitted with artificial bone that was created using CAD software, rapid prototyping technology, and epoxy-methyl acrylate resin and hydroxyapatite materials. The fabrication was fixed to the defect area with titanium screws, and soft tissue defects were repaired if necessary. The fabrications were precisely fixed to the defect areas, and all wounds healed well without any serious complications except for 1 case with intraoral incision dehiscence, which required further treatment. Postoperative curative effects were retrospectively observed after 6 to 48 months, acceptable anatomic and cosmetic outcomes were obtained, and no rejections or other complications occurred. The use of CAD-computer-assisted manufacturing technology-assisted epoxy-methyl acrylate resin and hydroxyapatite composite artificial bone to treat patients with craniofacial bone defects could enable the precise reconstruction of these defects and obtain good anatomic and cosmetic outcomes. Copyright © 2016 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
The MIntAct project—IntAct as a common curation platform for 11 molecular interaction databases
Orchard, Sandra; Ammari, Mais; Aranda, Bruno; Breuza, Lionel; Briganti, Leonardo; Broackes-Carter, Fiona; Campbell, Nancy H.; Chavali, Gayatri; Chen, Carol; del-Toro, Noemi; Duesbury, Margaret; Dumousseau, Marine; Galeota, Eugenia; Hinz, Ursula; Iannuccelli, Marta; Jagannathan, Sruthi; Jimenez, Rafael; Khadake, Jyoti; Lagreid, Astrid; Licata, Luana; Lovering, Ruth C.; Meldal, Birgit; Melidoni, Anna N.; Milagros, Mila; Peluso, Daniele; Perfetto, Livia; Porras, Pablo; Raghunath, Arathi; Ricard-Blum, Sylvie; Roechert, Bernd; Stutz, Andre; Tognolli, Michael; van Roey, Kim; Cesareni, Gianni; Hermjakob, Henning
2014-01-01
IntAct (freely available at http://www.ebi.ac.uk/intact) is an open-source, open data molecular interaction database populated by data either curated from the literature or from direct data depositions. IntAct has developed a sophisticated web-based curation tool, capable of supporting both IMEx- and MIMIx-level curation. This tool is now utilized by multiple additional curation teams, all of whom annotate data directly into the IntAct database. Members of the IntAct team supply appropriate levels of training, perform quality control on entries and take responsibility for long-term data maintenance. Recently, the MINT and IntAct databases decided to merge their separate efforts to make optimal use of limited developer resources and maximize the curation output. All data manually curated by the MINT curators have been moved into the IntAct database at EMBL-EBI and are merged with the existing IntAct dataset. Both IntAct and MINT are active contributors to the IMEx consortium (http://www.imexconsortium.org). PMID:24234451
Biocuration workflows and text mining: overview of the BioCreative 2012 Workshop Track II.
Lu, Zhiyong; Hirschman, Lynette
2012-01-01
Manual curation of data from the biomedical literature is a rate-limiting factor for many expert curated databases. Despite the continuing advances in biomedical text mining and the pressing needs of biocurators for better tools, few existing text-mining tools have been successfully integrated into production literature curation systems such as those used by the expert curated databases. To close this gap and better understand all aspects of literature curation, we invited submissions of written descriptions of curation workflows from expert curated databases for the BioCreative 2012 Workshop Track II. We received seven qualified contributions, primarily from model organism databases. Based on these descriptions, we identified commonalities and differences across the workflows, the common ontologies and controlled vocabularies used and the current and desired uses of text mining for biocuration. Compared to a survey done in 2009, our 2012 results show that many more databases are now using text mining in parts of their curation workflows. In addition, the workshop participants identified text-mining aids for finding gene names and symbols (gene indexing), prioritization of documents for curation (document triage) and ontology concept assignment as those most desired by the biocurators. DATABASE URL: http://www.biocreative.org/tasks/bc-workshop-2012/workflow/.
The curation of genetic variants: difficulties and possible solutions.
Pandey, Kapil Raj; Maden, Narendra; Poudel, Barsha; Pradhananga, Sailendra; Sharma, Amit Kumar
2012-12-01
The curation of genetic variants from biomedical articles is required for various clinical and research purposes. Nowadays, the establishment of variant databases that include overall information about variants is becoming quite popular. These databases have immense utility, serving as a user-friendly information storehouse of variants for information seekers. While manual curation is the gold standard method for the curation of variants, it can turn out to be time-consuming on a large scale, thus necessitating the need for automation. Curation of variants described in the biomedical literature may not be straightforward, mainly due to various nomenclature and expression issues. Though the current trend in writing papers on variants inclines towards standard nomenclature, such that variants can easily be retrieved, we have a massive store of variants in the literature that are present under non-standard names, and the online search engines that are predominantly used may not be capable of finding them. For effective curation of variants, knowledge about the overall process of curation, the nature and types of difficulties in curation, and ways to tackle the difficulties during the task are crucial. Only by effective curation can variants be correctly interpreted. This paper presents the process and difficulties of curation of genetic variants with possible solutions and suggestions from our work experience in the field, including literature support. The paper also highlights aspects of the interpretation of genetic variants and the importance of writing papers on variants following standard and retrievable methods. Copyright © 2012. Published by Elsevier Ltd. PMID:23317699
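A minimal sketch of the retrieval point made above: a simple pattern recovers HGVS-style variant mentions but misses legacy, non-standard names, which is one reason such variants are hard to find and curate automatically. The pattern is illustrative only, far from a complete HGVS grammar.

```python
import re

# Matches DNA/RNA-level changes such as "c.76A>T" and protein-level changes
# such as "p.Ser26Cys"; legacy forms like "S26C" are deliberately not covered.
HGVS_LIKE = re.compile(
    r"\b(?:"
    r"[cgmnr]\.\d+(?:_\d+)?(?:[ACGT]+>[ACGT]+|del[ACGT]*|ins[ACGT]+|dup[ACGT]*)"
    r"|p\.[A-Z][a-z]{2}\d+(?:[A-Z][a-z]{2}|\*)"
    r")"
)

text = "We observed c.76A>T (p.Ser26Cys) in exon 2, historically reported as S26C."
print(HGVS_LIKE.findall(text))  # ['c.76A>T', 'p.Ser26Cys'] -- the legacy name is missed
```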
Nair, C K; Patil, V M; Raghavan, V; Babu, S; Nayanar, S
2015-01-01
There are limited data from India regarding elderly non-Hodgkin's lymphoma (NHL) patients. Hence, this audit was planned to study the clinico-pathological features and treatment outcomes in elderly NHL patients. A retrospective analysis of all NHL patients above the age of 59 years treated at the authors' institute between December 2010 and December 2013 was done. Case records were reviewed for baseline details, staging details, prognostic factors, treatment delivered, response, toxicity and efficacy. SPSS version 16 (IBM, New York) was used for analysis. Descriptive statistics were performed. Kaplan-Meier survival analysis was done for estimation of progression-free survival (PFS) and overall survival (OS). Univariate analysis was done to identify factors affecting PFS and OS. Out of 141 NHL patients, 67 met the inclusion criteria. The median age was 68 years (range 60-92). The majority were B-cell NHL (86.6%). The commonest B-cell subtype was diffuse large B-cell lymphoma (55.2%). Fifty-four patients took treatment. The treatment intent was curative in 41 patients (61.2%). Among the patients receiving curative treatment, 16 could not receive treatment in accordance with NCCN guidelines due to financial issues. Two-year PFS was 55%. Two-year PFS for B-cell NHL and T-cell NHL was 55% and 50%, respectively (P = 0.982). Two-year PFS for standard and nonstandard treatment was 62% and 50%, respectively, but this did not reach statistical significance (P = 0.537). Two-year OS for the entire cohort was 84%. Standard treatment in accordance with guidelines can be delivered in elderly patients irrespective of age. There is a need to create financial assistance for patients so that potentially curative treatments are not denied.
Investigating Astromaterials Curation Applications for Dexterous Robotic Arms
NASA Technical Reports Server (NTRS)
Snead, C. J.; Jang, J. H.; Cowden, T. R.; McCubbin, F. M.
2018-01-01
The Astromaterials Acquisition and Curation office at NASA Johnson Space Center is currently investigating tools and methods that will enable the curation of future astromaterials collections. Size and temperature constraints for astromaterials to be collected by current and future proposed missions will require the development of new robotic sample and tool handling capabilities. NASA Curation has investigated the application of robot arms in the past, and robotic 3-axis micromanipulators are currently in use for small particle curation in the Stardust and Cosmic Dust laboratories. While 3-axis micromanipulators have been extremely successful for activities involving the transfer of isolated particles in the 5-20 micron range (e.g. from microscope slide to epoxy bullet tip, beryllium SEM disk), their limited ranges of motion and lack of yaw, pitch, and roll degrees of freedom restrict their utility in other applications. For instance, curators removing particles from cosmic dust collectors by hand often employ scooping and rotating motions to successfully free trapped particles from the silicone oil coatings. Similar scooping and rotating motions are also employed when isolating a specific particle of interest from an aliquot of crushed meteorite. While cosmic dust curators have been remarkably successful with these kinds of particle manipulations using handheld tools, operator fatigue limits the number of particles that can be removed during a given extraction session. The challenges for curation of small particles will be exacerbated by mission requirements that samples be processed in N2 sample cabinets (i.e. gloveboxes). We have been investigating the use of compact robot arms to facilitate sample handling within gloveboxes. Six-axis robot arms potentially have applications beyond small particle manipulation. For instance, future sample return missions may involve biologically sensitive astromaterials that can be easily compromised by physical interaction with a curator; other potential future returned samples may require cryogenic curation. Robot arms may be combined with high-resolution cameras within a sample cabinet and controlled remotely by a curator. Sophisticated robot arm and hand combination systems can be programmed to mimic the movements of a curator wearing a data glove; successful implementation of such a system may ultimately allow a curator to virtually operate in a nitrogen, cryogenic, or biologically sensitive environment with dexterity comparable to that of a curator physically handling samples in a glove box.
The Astromaterials X-Ray Computed Tomography Laboratory at Johnson Space Center
NASA Technical Reports Server (NTRS)
Zeigler, R. A.; Coleff, D. M.; McCubbin, F. M.
2017-01-01
The Astromaterials Acquisition and Curation Office at NASA's Johnson Space Center (hereafter JSC curation) is the past, present, and future home of all of NASA's astromaterials sample collections. JSC curation currently houses all or part of nine different sample collections: (1) Apollo samples (1969), (2) Luna samples (1972), (3) Antarctic meteorites (1976), (4) Cosmic Dust particles (1981), (5) Microparticle Impact Collection (1985), (6) Genesis solar wind atoms (2004), (7) Stardust comet Wild-2 particles (2006), (8) Stardust interstellar particles (2006), and (9) Hayabusa asteroid Itokawa particles (2010). Each sample collection is housed in a dedicated clean room, or suite of clean rooms, that is tailored to the requirements of that sample collection. Our primary goals are to maintain the long-term integrity of the samples and ensure that the samples are distributed for scientific study in a fair, timely, and responsible manner, thus maximizing the return on each sample. Part of the curation process is planning for the future, and we also perform fundamental research in advanced curation initiatives. Advanced Curation is tasked with developing procedures, technology, and data sets necessary for curating new types of sample collections, or getting new results from existing sample collections [2]. We are (and have been) planning for future curation, including cold curation, extended curation of ices and volatiles, curation of samples with special chemical considerations such as perchlorate-rich samples, and curation of organically- and biologically-sensitive samples. As part of these advanced curation efforts we are augmenting our analytical facilities as well. A micro X-ray computed tomography (micro-XCT) laboratory dedicated to the study of astromaterials will be coming online this spring within the JSC Curation office, and we plan to add additional facilities that will enable nondestructive (or minimally-destructive) analyses of astromaterials in the near future (micro-XRF, confocal imaging Raman spectroscopy). These facilities will be available to: (1) develop sample handling and storage techniques for future sample return missions; (2) be utilized by Preliminary Examination Teams (PET) for future sample return missions; (3) be used for retroactive PET-style analyses of our existing collections; and (4) for periodic assessments of the existing sample collections. Here we describe the new micro-XCT system, as well as some of the ongoing or anticipated applications of the instrument.
miRiaD: A Text Mining Tool for Detecting Associations of microRNAs with Diseases.
Gupta, Samir; Ross, Karen E; Tudor, Catalina O; Wu, Cathy H; Schmidt, Carl J; Vijay-Shanker, K
2016-04-29
MicroRNAs are increasingly being appreciated as critical players in human diseases, and questions concerning the role of microRNAs arise in many areas of biomedical research. There are several manually curated databases of microRNA-disease associations gathered from the biomedical literature; however, it is difficult for curators of these databases to keep up with the explosion of publications in the microRNA-disease field. Moreover, automated literature mining tools that assist manual curation of microRNA-disease associations currently capture only one microRNA property (expression) in the context of one disease (cancer). Thus, there is a clear need to develop more sophisticated automated literature mining tools that capture a variety of microRNA properties and relations in the context of multiple diseases to provide researchers with fast access to the most recent published information and to streamline and accelerate manual curation. We have developed miRiaD (microRNAs in association with Disease), a text-mining tool that automatically extracts associations between microRNAs and diseases from the literature. These associations are often not directly linked, and the intermediate relations are often highly informative for the biomedical researcher. Thus, miRiaD extracts the miR-disease pairs together with an explanation for their association. We also developed a procedure that assigns scores to sentences, marking their informativeness, based on the microRNA-disease relation observed within the sentence. miRiaD was applied to the entire Medline corpus, identifying 8301 PMIDs with miR-disease associations. These abstracts and the miR-disease associations are available for browsing at http://biotm.cis.udel.edu/miRiaD . We evaluated the recall and precision of miRiaD with respect to information of high interest to public microRNA-disease database curators (expression and target gene associations), obtaining a recall of 88.46-90.78. When we expanded the evaluation to include sentences with a wide range of microRNA-disease information that may be of interest to biomedical researchers, miRiaD also performed very well, with an F-score of 89.4. The informativeness ranking of sentences was evaluated in terms of nDCG (0.977) and correlation metrics (0.678-0.727) when compared to an annotator's ranked list. miRiaD, a high-performance system that can capture a wide variety of microRNA-disease related information, extends beyond the scope of existing microRNA-disease resources. It can be incorporated into manual curation pipelines and serve as a resource for biomedical researchers interested in the role of microRNAs in disease. In our ongoing work we are developing an improved miRiaD web interface that will facilitate complex queries about microRNA-disease relationships, such as "In what diseases does microRNA regulation of apoptosis play a role?" or "Is there overlap in the sets of genes targeted by microRNAs in different types of dementia?"
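The informativeness ranking above is evaluated with nDCG. For reference, a minimal sketch of that metric (not the miRiaD code), applied to a hypothetical list of annotator grades given in the system's ranked order:

```python
import math

def dcg(grades):
    # Discounted cumulative gain: higher grades placed earlier count more.
    return sum(g / math.log2(i + 2) for i, g in enumerate(grades))

def ndcg(relevance):
    # Normalise by the DCG of the ideal (descending) ordering of the grades.
    ideal = dcg(sorted(relevance, reverse=True))
    return dcg(relevance) / ideal if ideal > 0 else 0.0

# Example: five sentences in the system's ranked order, graded 0-3 by an annotator.
print(round(ndcg([3, 2, 3, 0, 1]), 3))  # close to 1.0 for a near-ideal ranking
```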
Experiments with an EVA Assistant Robot
NASA Technical Reports Server (NTRS)
Burridge, Robert R.; Graham, Jeffrey; Shillcutt, Kim; Hirsh, Robert; Kortenkamp, David
2003-01-01
Human missions to the Moon or Mars will likely be accompanied by many useful robots that will assist in all aspects of the mission, from construction to maintenance to surface exploration. Such robots might scout terrain, carry tools, take pictures, curate samples, or provide status information during a traverse. At NASA/JSC, the EVA Robotic Assistant (ERA) project has developed a robot testbed for exploring the issues of astronaut-robot interaction. Together with JSC's Advanced Spacesuit Lab, the ERA team has been developing robot capabilities and testing them with space-suited test subjects at planetary surface analog sites. In this paper, we describe the current state of the ERA testbed and two weeks of remote field tests in Arizona in September 2002. A number of teams with a broad range of interests participated in these experiments to explore different aspects of what must be done to develop a program for robotic assistance to surface EVA. Technologies explored in the field experiments included a fuel cell, new mobility platform and manipulator, novel software and communications infrastructure for multi-agent modeling and planning, a mobile science lab, an "InfoPak" for monitoring the spacesuit, and delayed satellite communication to a remote operations team. In this paper, we will describe this latest round of field tests in detail.
Annotation of phenotypic diversity: decoupling data curation and ontology curation using Phenex.
Balhoff, James P; Dahdul, Wasila M; Dececchi, T Alexander; Lapp, Hilmar; Mabee, Paula M; Vision, Todd J
2014-01-01
Phenex (http://phenex.phenoscape.org/) is a desktop application for semantically annotating the phenotypic character matrix datasets common in evolutionary biology. Since its initial publication, we have added new features that address several major bottlenecks in the efficiency of the phenotype curation process: allowing curators during the data curation phase to provisionally request terms that are not yet available from a relevant ontology; supporting quality control against annotation guidelines to reduce later manual review and revision; and enabling the sharing of files for collaboration among curators. We decoupled data annotation from ontology development by creating an Ontology Request Broker (ORB) within Phenex. Curators can use the ORB to request a provisional term for use in data annotation; the provisional term can be automatically replaced with a permanent identifier once the term is added to an ontology. We added a set of annotation consistency checks to prevent common curation errors, reducing the need for later correction. We facilitated collaborative editing by improving the reliability of Phenex when used with online folder sharing services, via file change monitoring and continual autosave. With the addition of these new features, and in particular the Ontology Request Broker, Phenex users have been able to focus more effectively on data annotation. Phenoscape curators using Phenex have reported a smoother annotation workflow, with much reduced interruptions from ontology maintenance and file management issues.
[Rescue cryotherapy for prostate cancer after radiotherapy].
García, Erique Lledó; Amo, Felipe Herranz; San Segundo, Carmen González; Fagundo, Eva Paños; Escudero, Roberto Molina; Alonso, Adrian Husillos; Piniés, Gabriel Ogaya; Rascón, Jose Jara; Fernández, Carlos Hernández
2012-01-01
Radical radiotherapy constitutes a useful therapeutic option for localized prostate cancer. Almost one third of prostate cancer patients choose this alternative to treat the disease. Despite modifications to the technique, such as intensity modulation, 3D conformal radiotherapy or computer-assisted brachytherapy, a significant percentage of these patients will show an increase in PSA values after radiation. Patients with local relapse, no distant disease and PSA less than 10 ng/ml are candidates for salvage therapy. Cryotherapy has already become a curative treatment option in this group of patients. Recent technological and surgical advances in salvage cryotherapy have dramatically reduced complications and progressively increased interest in this alternative.
The Ties That Bind: Materiality, Identity, and the Life Course in the "Things" Families Keep.
Gloyn, Liz; Crewe, Vicky; King, Laura; Woodham, Anna
2018-04-01
Using an interdisciplinary research methodology across three archaeological and historical case studies, this article explores "family archives." Four themes illustrate how objects held in family archives, curation practices, and intergenerational narratives reinforce a family's sense of itself: people-object interactions, gender, socialization and identity formation, and the "life course." These themes provide a framework for professional archivists to assist communities and individuals working with their own family archives. We argue that the family archive, broadly defined, encourages a more egalitarian approach to history. We suggest a multiperiod analysis draws attention to historical forms of knowledge and meaning-making practices over time. PMID:29593371
BioCreative III interactive task: an overview
2011-01-01
Background The BioCreative challenge evaluation is a community-wide effort for evaluating text mining and information extraction systems applied to the biological domain. The biocurator community, as an active user of biomedical literature, provides a diverse and engaged end user group for text mining tools. Earlier BioCreative challenges involved many text mining teams in developing basic capabilities relevant to biological curation, but they did not address the issues of system usage, insertion into the workflow and adoption by curators. Thus in BioCreative III (BC-III), the InterActive Task (IAT) was introduced to address the utility and usability of text mining tools for real-life biocuration tasks. To support the aims of the IAT in BC-III, involvement of both developers and end users was solicited, and the development of a user interface to address the tasks interactively was requested. Results A User Advisory Group (UAG) actively participated in the IAT design and assessment. The task focused on gene normalization (identifying gene mentions in the article and linking these genes to standard database identifiers), gene ranking based on the overall importance of each gene mentioned in the article, and gene-oriented document retrieval (identifying full text papers relevant to a selected gene). Six systems participated and all processed and displayed the same set of articles. The articles were selected based on content known to be problematic for curation, such as ambiguity of gene names, coverage of multiple genes and species, or introduction of a new gene name. Members of the UAG curated three articles for training and assessment purposes, and each member was assigned a system to review. A questionnaire related to the interface usability and task performance (as measured by precision and recall) was answered after systems were used to curate articles. Although the limited number of articles analyzed and users involved in the IAT experiment precluded rigorous quantitative analysis of the results, a qualitative analysis provided valuable insight into some of the problems encountered by users when using the systems. The overall assessment indicates that the system usability features appealed to most users, but the system performance was suboptimal (mainly due to low accuracy in gene normalization). Some of the issues included failure of species identification and gene name ambiguity in the gene normalization task leading to an extensive list of gene identifiers to review, which, in some cases, did not contain the relevant genes. The document retrieval suffered from the same shortfalls. The UAG favored achieving high performance (measured by precision and recall), but strongly recommended the addition of features that facilitate the identification of correct gene and its identifier, such as contextual information to assist in disambiguation. Discussion The IAT was an informative exercise that advanced the dialog between curators and developers and increased the appreciation of challenges faced by each group. A major conclusion was that the intended users should be actively involved in every phase of software development, and this will be strongly encouraged in future tasks. The IAT Task provides the first steps toward the definition of metrics and functional requirements that are necessary for designing a formal evaluation of interactive curation systems in the BioCreative IV challenge. PMID:22151968
Curating NASA's future extraterrestrial sample collections: How do we achieve maximum proficiency?
NASA Astrophysics Data System (ADS)
McCubbin, Francis; Evans, Cynthia; Allton, Judith; Fries, Marc; Righter, Kevin; Zolensky, Michael; Zeigler, Ryan
2016-07-01
Introduction: The Astromaterials Acquisition and Curation Office (henceforth referred to as the NASA Curation Office) at NASA Johnson Space Center (JSC) is responsible for curating all of NASA's extraterrestrial samples. Under the governing document, NASA Policy Directive (NPD) 7100.10E "Curation of Extraterrestrial Materials", JSC is charged with "The curation of all extraterrestrial material under NASA control, including future NASA missions." The Directive goes on to define Curation as including "…documentation, preservation, preparation, and distribution of samples for research, education, and public outreach." Here we describe some of the ongoing efforts to ensure that the future activities of the NASA Curation Office are working towards a state of maximum proficiency. Founding Principle: Curatorial activities began at JSC (Manned Spacecraft Center before 1973) as soon as design and construction planning for the Lunar Receiving Laboratory (LRL) began in 1964 [1], not with the return of the Apollo samples in 1969, nor with the completion of the LRL in 1967. This practice has since proven that curation begins as soon as a sample return mission is conceived, and this founding principle continues to return dividends today [e.g., 2]. The Next Decade: Part of the curation process is planning for the future, and we refer to these planning efforts as "advanced curation" [3]. Advanced Curation is tasked with developing procedures, technology, and data sets necessary for curating new types of collections as envisioned by NASA exploration goals. We are (and have been) planning for future curation, including cold curation, extended curation of ices and volatiles, curation of samples with special chemical considerations such as perchlorate-rich samples, curation of organically- and biologically-sensitive samples, and the use of minimally invasive analytical techniques (e.g., micro-CT, [4]) to characterize samples. These efforts will be useful for Mars Sample Return, Lunar South Pole-Aitken Basin Sample Return, and Comet Surface Sample Return, all of which were named in the NRC Planetary Science Decadal Survey 2013-2022. We are fully committed to pushing the boundaries of curation protocol as humans continue to push the boundaries of space exploration and sample return. However, to improve our ability to curate astromaterials collections of the future and to provide maximum protection to any returned samples, it is imperative that curation involvement commences at the time of mission conception. When curation involvement is at the ground floor of mission planning, it provides a mechanism by which the samples can be protected against project-level decisions that could undermine the scientific value of the returned samples. A notable example of one of the benefits of early curation involvement in mission planning is in the acquisition of contamination knowledge (CK). CK capture strategies are designed during the initial planning stages of a sample return mission, and they are to be implemented during all phases of the mission from assembly, test, and launch operations (ATLO), through cruise and mission operations, to the point of preliminary examination after Earth return. CK is captured by witness materials and coupons exposed to the contamination environment in the assembly labs and on the spacecraft during launch, cruise, and operations.
These materials, along with any procedural blanks and returned flight hardware, represent our CK capture for the returned samples and serve as a baseline against which analytical results can be vetted. Collection of CK is a critical part of being able to conduct and interpret data from organic geochemistry and biochemistry investigations of returned samples. The CK samples from a given mission are treated as part of the sample collection of that mission; hence they are part of the permanent archive that is maintained by the NASA Curation Office. We are in the midst of collecting witness plates and coupons for the OSIRIS-REx mission, and we are in the planning stages for similar activities for the Mars 2020 rover mission, which is going to be the first step in a multi-stage campaign to return martian samples to Earth. Concluding Remarks: The return of every extraterrestrial sample is a scientific investment, and the CK samples and any procedural blanks represent an insurance policy against imperfections in the sample-collection and sample-return process. The curation facilities and personnel are the primary managers of that investment, and the scientific community, at large, is the beneficiary. The NASA Curation Office at JSC has the assigned task of maintaining the long-term integrity of all of NASA's astromaterials and ensuring that the samples are distributed for scientific study in a fair, timely, and responsible manner. It is only through this openness and global collaboration in the study of astromaterials that the return on our scientific investments can be maximized. For information on requesting samples and becoming part of the global study of astromaterials, please visit curator.jsc.nasa.gov. References: [1] Mangus, S. & Larsen, W. (2004) NASA/CR-2004-208938, NASA, Washington, DC. [2] Allen, C. et al., (2011) Chemie Der Erde-Geochemistry, 71, 1-20. [3] McCubbin, F.M. et al., (2016) 47th LPSC #2668. [4] Zeigler, R.A. et al., (2014) 45th LPSC #2665.
NASA Astrophysics Data System (ADS)
Hou, C. Y.; Dattore, R.; Peng, G. S.
2014-12-01
The National Center for Atmospheric Research's Global Climate Four-Dimensional Data Assimilation (CFDDA) Hourly 40km Reanalysis dataset is a dynamically downscaled dataset with high temporal and spatial resolution. The dataset contains three-dimensional hourly analyses in netCDF format for the global atmospheric state from 1985 to 2005 on a 40km horizontal grid (0.4° grid increment) with 28 vertical levels, providing good representation of local forcing and diurnal variation of processes in the planetary boundary layer. This project aimed to make the dataset publicly available, accessible, and usable in order to provide a unique resource to allow and promote studies of new climate characteristics. When the curation project started, it had been five years since the data files were generated. Also, although the Principal Investigator (PI) had generated a user document at the end of the project in 2009, the document had not been maintained. Furthermore, the PI had moved to a new institution, and the remaining team members were reassigned to other projects. These factors made data curation especially challenging in the areas of verifying data quality, harvesting metadata descriptions, and documenting provenance information. As a result, the project's curation process found that the data curator's skill and knowledge helped make decisions, such as on file format, structure, and workflow documentation, that had a significant, positive impact on the ease of the dataset's management and long-term preservation; that the use of data curation tools, such as the Data Curation Profiles Toolkit's guidelines, revealed important information for promoting the data's usability and enhancing preservation planning; and that involving data curators during each stage of the data curation life cycle, instead of only at the end, could improve the efficiency of the curation process. Overall, the project showed that proper resources invested in the curation process would give datasets the best chance to fulfill their potential to help with new climate pattern discovery.
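As a companion to the access goals described above, a minimal sketch of the kind of inspection a curator or user might run on one of the hourly netCDF analyses; the file name and the assumption that the files follow standard conventions readable by xarray are illustrative, not the dataset's documented schema.

```python
import xarray as xr

# Hypothetical file name for one hourly analysis; the real naming scheme may differ.
ds = xr.open_dataset("cfdda_1990010100.nc")

print(dict(ds.dims))             # expect lat/lon on a ~0.4 degree grid and 28 vertical levels
print(list(ds.data_vars))        # inspect which atmospheric state variables are present

# Basic curation-style check: flag any field that contains no valid data at all.
for name, var in ds.data_vars.items():
    if var.isnull().all():
        print(f"WARNING: {name} contains no valid data")
```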
How should the completeness and quality of curated nanomaterial data be evaluated?
NASA Astrophysics Data System (ADS)
Marchese Robinson, Richard L.; Lynch, Iseult; Peijnenburg, Willie; Rumble, John; Klaessig, Fred; Marquardt, Clarissa; Rauscher, Hubert; Puzyn, Tomasz; Purian, Ronit; Åberg, Christoffer; Karcher, Sandra; Vriens, Hanne; Hoet, Peter; Hoover, Mark D.; Hendren, Christine Ogilvie; Harper, Stacey L.
2016-05-01
Nanotechnology is of increasing significance. Curation of nanomaterial data into electronic databases offers opportunities to better understand and predict nanomaterials' behaviour. This supports innovation in, and regulation of, nanotechnology. It is commonly understood that curated data need to be sufficiently complete and of sufficient quality to serve their intended purpose. However, assessing data completeness and quality is non-trivial in general and is arguably especially difficult in the nanoscience area, given its highly multidisciplinary nature. The current article, part of the Nanomaterial Data Curation Initiative series, addresses how to assess the completeness and quality of (curated) nanomaterial data. In order to address this key challenge, a variety of related issues are discussed: the meaning and importance of data completeness and quality, existing approaches to their assessment and the key challenges associated with evaluating the completeness and quality of curated nanomaterial data. Considerations which are specific to the nanoscience area and lessons which can be learned from other relevant scientific disciplines are considered. Hence, the scope of this discussion ranges from physicochemical characterisation requirements for nanomaterials and interference of nanomaterials with nanotoxicology assays to broader issues such as minimum information checklists, toxicology data quality schemes and computational approaches that facilitate evaluation of the completeness and quality of (curated) data. This discussion is informed by a literature review and a survey of key nanomaterial data curation stakeholders. Finally, drawing upon this discussion, recommendations are presented concerning the central question: how should the completeness and quality of curated nanomaterial data be evaluated? Electronic supplementary information (ESI) available: (1) Detailed information regarding issues raised in the main text; (2) original survey responses. See DOI: 10.1039/c5nr08944a
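A minimal, hypothetical sketch of one completeness check of the kind discussed above: scoring curated nanomaterial records against a minimum-information checklist. The field names are illustrative placeholders, not a published checklist.

```python
# Hypothetical checklist fields; a real scheme would be agreed by the community.
REQUIRED_FIELDS = ["core_composition", "primary_size_nm", "surface_chemistry",
                   "assay_type", "endpoint_value"]

def completeness(record: dict) -> float:
    """Fraction of checklist fields that are present and non-empty."""
    filled = sum(1 for f in REQUIRED_FIELDS if record.get(f) not in (None, "", []))
    return filled / len(REQUIRED_FIELDS)

record = {"core_composition": "TiO2", "primary_size_nm": 21, "assay_type": "MTT"}
print(completeness(record))   # 0.6 -> incomplete record, flag for follow-up curation
```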
Directly e-mailing authors of newly published papers encourages community curation
Bunt, Stephanie M.; Grumbling, Gary B.; Field, Helen I.; Marygold, Steven J.; Brown, Nicholas H.; Millburn, Gillian H.
2012-01-01
Much of the data within Model Organism Databases (MODs) comes from manual curation of the primary research literature. Given limited funding and an increasing density of published material, a significant challenge facing all MODs is how to efficiently and effectively prioritize the most relevant research papers for detailed curation. Here, we report recent improvements to the triaging process used by FlyBase. We describe an automated method to directly e-mail corresponding authors of new papers, requesting that they list the genes studied and indicate (‘flag’) the types of data described in the paper using an online tool. Based on the author-assigned flags, papers are then prioritized for detailed curation and channelled to appropriate curator teams for full data extraction. The overall response rate has been 44% and the flagging of data types by authors is sufficiently accurate for effective prioritization of papers. In summary, we have established a sustainable community curation program, with the result that FlyBase curators now spend less time triaging and can devote more effort to the specialized task of detailed data extraction. Database URL: http://flybase.org/ PMID:22554788
Astromaterials Curation Online Resources for Principal Investigators
NASA Technical Reports Server (NTRS)
Todd, Nancy S.; Zeigler, Ryan A.; Mueller, Lina
2017-01-01
The Astromaterials Acquisition and Curation office at NASA Johnson Space Center curates all of NASA's extraterrestrial samples, the most extensive set of astromaterials samples available to the research community worldwide. The office allocates 1500 individual samples to researchers and students each year and has served the planetary research community for 45+ years. The Astromaterials Curation office provides access to its sample data repository and digital resources to support the research needs of sample investigators and to aid in the selection and request of samples for scientific study. These resources can be found on the Astromaterials Acquisition and Curation website at https://curator.jsc.nasa.gov. To better serve our users, we have engaged in several activities to enhance the data available for astromaterials samples, to improve the accessibility and performance of the website, and to address user feedback. We have also put plans in place for continuing improvements to our existing data products.
Gene regulation knowledge commons: community action takes care of DNA binding transcription factors
Tripathi, Sushil; Vercruysse, Steven; Chawla, Konika; Christie, Karen R.; Blake, Judith A.; Huntley, Rachael P.; Orchard, Sandra; Hermjakob, Henning; Thommesen, Liv; Lægreid, Astrid; Kuiper, Martin
2016-01-01
A large gap remains between the amount of knowledge in scientific literature and the fraction that gets curated into standardized databases, despite many curation initiatives. Yet the availability of comprehensive knowledge in databases is crucial for exploiting existing background knowledge, both for designing follow-up experiments and for interpreting new experimental data. Structured resources also underpin the computational integration and modeling of regulatory pathways, which further aids our understanding of regulatory dynamics. We argue how cooperation between the scientific community and professional curators can increase the capacity of capturing precise knowledge from literature. We demonstrate this with a project in which we mobilize biological domain experts who curate large amounts of DNA binding transcription factors, and show that they, although new to the field of curation, can make valuable contributions by harvesting reported knowledge from scientific papers. Such community curation can enhance the scientific epistemic process. Database URL: http://www.tfcheckpoint.org PMID:27270715
PPDMs-a resource for mapping small molecule bioactivities from ChEMBL to Pfam-A protein domains.
Kruger, Felix A; Gaulton, Anna; Nowotka, Michal; Overington, John P
2015-03-01
PPDMs is a resource that maps small molecule bioactivities to protein domains from the Pfam-A collection of protein families. Small molecule bioactivities mapped to protein domains add important precision to approaches that use protein sequence searches and alignments to assist applications in computational drug discovery and systems and chemical biology. We have previously proposed a heuristic that maps a subset of bioactivities stored in ChEMBL to the Pfam-A domain most likely to mediate small molecule binding. We have since refined this mapping using a manual procedure. Here, we present a resource that provides up-to-date mappings and the possibility to review assigned mappings as well as to participate in their assignment and curation. We also describe how mappings provided through the PPDMs resource are made accessible through the main schema of the ChEMBL database. The PPDMs resource and curation interface is available at https://www.ebi.ac.uk/chembl/research/ppdms/pfam_maps. The source-code for PPDMs is available under the Apache license at https://github.com/chembl/pfam_maps. Source code is available at https://github.com/chembl/pfam_map_loader to demonstrate the integration process with the main schema of ChEMBL. © The Author 2014. Published by Oxford University Press.
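The abstract above describes the mapping only at a high level; the Python sketch below is a hedged illustration of that kind of assignment step, using an invented target-to-domain lookup table rather than the actual ChEMBL/PPDMs procedure.

# Illustrative sketch: assign each bioactivity to a single Pfam-A domain per
# target when the assignment is unambiguous, and defer the rest to manual
# curation. Identifiers and the lookup table below are invented examples.
def map_bioactivities(bioactivities, target_to_domains):
    mapped, needs_curation = [], []
    for act in bioactivities:
        domains = target_to_domains.get(act["target_id"], [])
        if len(domains) == 1:
            mapped.append({**act, "pfam_a": domains[0]})
        else:
            # zero or several candidate domains: cannot be assigned automatically
            needs_curation.append(act)
    return mapped, needs_curation

acts = [{"activity_id": 1, "target_id": "T1"}, {"activity_id": 2, "target_id": "T2"}]
lookup = {"T1": ["PF00001"], "T2": ["PF00002", "PF00069"]}
print(map_bioactivities(acts, lookup))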
Hepatoprotective Effects of Chinese Medicine Herbs Decoction on Liver Cirrhosis in Rats
Lim, Tong-Hye; Nor-Amdan, Nur-Asyura
2017-01-01
Hepatoprotective and curative activities of an aqueous extract of a decoction containing 10 Chinese medicinal herbs (HPE-XA-08) were evaluated in Sprague–Dawley albino rats with liver damage induced by thioacetamide (TAA). These activities were assessed by measuring liver enzyme levels and by histopathological examination. Increases in alkaline phosphatase (ALP) and gamma-glutamyl transferase (GGT) levels were observed in rats with cirrhotic liver. No significant alterations of the liver enzymes were observed following treatment with HPE-XA-08. Histopathology examination of rats treated with HPE-XA-08 at 250 mg/kg body weight, however, exhibited moderate liver protective effects. Reduced extracellular matrix (ECM) proteins within the hepatocytes were noted in comparison to the cirrhotic liver. The curative effects of HPE-XA-08 were observed as a marked decrease in the level of ALP (more than 3x) and the level of GGT (more than 2x) in cirrhotic rats treated with 600 mg/kg body weight HPE-XA-08 in comparison to cirrhotic rats treated with just the water diluent. Reversion of cirrhotic liver to normal liver condition in rats treated with HPE-XA-08 was observed. Results from the present study suggest that HPE-XA-08 treatment assisted in the protection from liver cirrhosis and improved the recovery of cirrhotic liver. PMID:28280515
The shift to early palliative care: a typology of illness journeys and the role of nursing.
Wittenberg-Lyles, Elaine; Goldsmith, Joy; Ragan, Sandra
2011-06-01
For the current study, clinical observations of communication between patients, families, and clinicians during chronic, serious, or terminal illness in a cancer care trajectory were examined for patterns and trends. Five communication characteristics were identified, which informed a typology of illness journeys experienced by patients with cancer and their families. The isolated journey characterizes an illness path in which communication about terminal prognosis and end-of-life care options is not present; communication is restricted by a curative-only approach to diagnosis as well as the structure of medical care. The rescued journey signifies a transition from curative care (hospital narrative) to noncurative care (hospice narrative), challenging patients and their families with an awareness of dying. The rescued journey allows communication about prognosis and care options, establishes productive experiences through open awareness, and affords patients and families opportunities to experience end-of-life care preferences. Finally, palliative care prior to hospice provides patients and families with an illness journey more readily characterized by open awareness and community, which facilitates a comforted journey. Nurses play a pivotal role in communicating about disease progression and plans of care. The typology presented can inform a structured communication curriculum for nurses and assist in the implementation of early palliative care.
Lee, Jessica J Y; Gottlieb, Michael M; Lever, Jake; Jones, Steven J M; Blau, Nenad; van Karnebeek, Clara D M; Wasserman, Wyeth W
2018-05-01
Phenomics is the comprehensive study of phenotypes at every level of biology: from metabolites to organisms. With high throughput technologies increasing the scope of biological discoveries, the field of phenomics has been developing rapid and precise methods to collect, catalog, and analyze phenotypes. Such methods have allowed phenotypic data to be widely used in medical applications, from assisting clinical diagnoses to prioritizing genomic diagnoses. To channel the benefits of phenomics into the field of inborn errors of metabolism (IEM), we have recently launched IEMbase, an expert-curated knowledgebase of IEM and their disease-characterizing phenotypes. While our efforts with IEMbase have realized benefits, taking full advantage of phenomics requires a comprehensive curation of IEM phenotypes in core phenomics projects, which is dependent upon contributions from the IEM clinical and research community. Here, we assess the inclusion of IEM biochemical phenotypes in a core phenomics project, the Human Phenotype Ontology. We then demonstrate the utility of biochemical phenotypes using a text-based phenomics method to predict gene-disease relationships, showing that the prediction of IEM genes is significantly better using biochemical rather than clinical profiles. The findings herein provide a motivating goal for the IEM community to expand the computationally accessible descriptions of biochemical phenotypes associated with IEM in phenomics resources.
Research resources: curating the new eagle-i discovery system
Vasilevsky, Nicole; Johnson, Tenille; Corday, Karen; Torniai, Carlo; Brush, Matthew; Segerdell, Erik; Wilson, Melanie; Shaffer, Chris; Robinson, David; Haendel, Melissa
2012-01-01
Development of biocuration processes and guidelines for new data types or projects is a challenging task. Each project finds its way toward defining annotation standards and ensuring data consistency with varying degrees of planning and different tools to support and/or report on consistency. Further, this process may be data type specific even within the context of a single project. This article describes our experiences with eagle-i, a 2-year pilot project to develop a federated network of data repositories in which unpublished, unshared or otherwise ‘invisible’ scientific resources could be inventoried and made accessible to the scientific community. During the course of eagle-i development, the main challenges we experienced related to the difficulty of collecting and curating data while the system and the data model were simultaneously built, and a deficiency and diversity of data management strategies in the laboratories from which the source data was obtained. We discuss our approach to biocuration and the importance of improving information management strategies to the research process, specifically with regard to the inventorying and usage of research resources. Finally, we highlight the commonalities and differences between eagle-i and similar efforts with the hope that our lessons learned will assist other biocuration endeavors. Database URL: www.eagle-i.net PMID:22434835
FAF-Drugs3: a web server for compound property calculation and chemical library design
Lagorce, David; Sperandio, Olivier; Baell, Jonathan B.; Miteva, Maria A.; Villoutreix, Bruno O.
2015-01-01
Drug attrition late in preclinical or clinical development is a serious economic problem in the field of drug discovery. These problems can be linked, in part, to the quality of the compound collections used during the hit generation stage and to the selection of compounds undergoing optimization. Here, we present FAF-Drugs3, a web server that can be used for drug discovery and chemical biology projects to help in preparing compound libraries and to assist decision-making during the hit selection/lead optimization phase. Since it was first described in 2006, FAF-Drugs has been significantly modified. The tool now applies an enhanced structure curation procedure, can filter or analyze molecules with user-defined or eight predefined physicochemical filters as well as with several simple ADMET (absorption, distribution, metabolism, excretion and toxicity) rules. In addition, compounds can be filtered using an updated list of 154 hand-curated structural alerts while Pan Assay Interference compounds (PAINS) and other, generally unwanted groups are also investigated. FAF-Drugs3 offers access to user-friendly html result pages and the possibility to download all computed data. The server requires as input an SDF file of the compounds; it is open to all users and can be accessed without registration at http://fafdrugs3.mti.univ-paris-diderot.fr. PMID:25883137
How should the completeness and quality of curated nanomaterial data be evaluated?†
Marchese Robinson, Richard L.; Lynch, Iseult; Peijnenburg, Willie; Rumble, John; Klaessig, Fred; Marquardt, Clarissa; Rauscher, Hubert; Puzyn, Tomasz; Purian, Ronit; Åberg, Christoffer; Karcher, Sandra; Vriens, Hanne; Hoet, Peter; Hoover, Mark D.; Hendren, Christine Ogilvie; Harper, Stacey L.
2016-01-01
Nanotechnology is of increasing significance. Curation of nanomaterial data into electronic databases offers opportunities to better understand and predict nanomaterials’ behaviour. This supports innovation in, and regulation of, nanotechnology. It is commonly understood that curated data need to be sufficiently complete and of sufficient quality to serve their intended purpose. However, assessing data completeness and quality is non-trivial in general and is arguably especially difficult in the nanoscience area, given its highly multidisciplinary nature. The current article, part of the Nanomaterial Data Curation Initiative series, addresses how to assess the completeness and quality of (curated) nanomaterial data. In order to address this key challenge, a variety of related issues are discussed: the meaning and importance of data completeness and quality, existing approaches to their assessment and the key challenges associated with evaluating the completeness and quality of curated nanomaterial data. Considerations which are specific to the nanoscience area and lessons which can be learned from other relevant scientific disciplines are considered. Hence, the scope of this discussion ranges from physicochemical characterisation requirements for nanomaterials and interference of nanomaterials with nanotoxicology assays to broader issues such as minimum information checklists, toxicology data quality schemes and computational approaches that facilitate evaluation of the completeness and quality of (curated) data. This discussion is informed by a literature review and a survey of key nanomaterial data curation stakeholders. Finally, drawing upon this discussion, recommendations are presented concerning the central question: how should the completeness and quality of curated nanomaterial data be evaluated? PMID:27143028
The Astromaterials X-Ray Computed Tomography Laboratory at Johnson Space Center
NASA Astrophysics Data System (ADS)
Zeigler, R. A.; Blumenfeld, E. H.; Srinivasan, P.; McCubbin, F. M.; Evans, C. A.
2018-04-01
The Astromaterials Curation Office has recently begun incorporating X-ray CT data into the curation processes for lunar and meteorite samples, and long-term curation of that data and serving it to the public represent significant technical challenges.
NASA Astrophysics Data System (ADS)
Benedict, K. K.; Lenhardt, W. C.; Young, J. W.; Gordon, L. C.; Hughes, S.; Santhana Vannan, S. K.
2017-12-01
The planning for and development of efficient workflows for the creation, reuse, sharing, documentation, publication and preservation of research data is a general challenge that research teams of all sizes face. In response to (1) requirements from funding agencies for full-lifecycle data management plans that will result in well-documented, preserved, and shared research data products, (2) increasing requirements from publishers for shared data in conjunction with submitted papers, (3) interdisciplinary research teams' needs for efficient data sharing within projects, and (4) increasing reuse of research data for replication and new, unanticipated research, policy development, and public use, alternative strategies to traditional data life cycle approaches must be developed and shared that enable research teams to meet these requirements while meeting the core science objectives of their projects within the available resources. In support of achieving these goals, the concept of Agile Data Curation has been developed, in which there have been parallel activities in support of 1) identifying a set of shared values and principles that underlie the objectives of agile data curation, 2) soliciting case studies from the Earth science and other research communities that illustrate aspects of what the contributors consider agile data curation methods and practices, and 3) identifying or developing design patterns that are high-level abstractions from successful data curation practice that are related to common data curation problems for which common solution strategies may be employed. This paper provides a collection of case studies that have been contributed by the Earth science community, and an initial analysis of those case studies to map them to emerging shared data curation problems and their potential solutions. Following the initial analysis of these problems and potential solutions, existing design patterns from software engineering and related disciplines are identified as a starting point for the development of a catalog of data curation design patterns that may be reused in the design and execution of new data curation processes.
Natural Language Processing in aid of FlyBase curators
Karamanis, Nikiforos; Seal, Ruth; Lewin, Ian; McQuilton, Peter; Vlachos, Andreas; Gasperin, Caroline; Drysdale, Rachel; Briscoe, Ted
2008-01-01
Background: Despite increasing interest in applying Natural Language Processing (NLP) to biomedical text, whether this technology can facilitate tasks such as database curation remains unclear. Results: PaperBrowser is the first NLP-powered interface that was developed under a user-centered approach to improve the way in which FlyBase curators navigate an article. In this paper, we first discuss how observing curators at work informed the design and evaluation of PaperBrowser. Then, we present how we appraise PaperBrowser's navigational functionalities in a user-based study using a text highlighting task and evaluation criteria of Human-Computer Interaction. Our results show that PaperBrowser reduces the amount of interactions between two highlighting events and therefore improves navigational efficiency by about 58% compared to the navigational mechanism that was previously available to the curators. Moreover, PaperBrowser is shown to provide curators with enhanced navigational utility by over 74% irrespective of the different ways in which they highlight text in the article. Conclusion: We show that state-of-the-art performance in certain NLP tasks such as Named Entity Recognition and Anaphora Resolution can be combined with the navigational functionalities of PaperBrowser to support curation quite successfully. PMID:18410678
Text Mining to Support Gene Ontology Curation and Vice Versa.
Ruch, Patrick
2017-01-01
In this chapter, we explain how text mining can support the curation of molecular biology databases dealing with protein functions. We also show how curated data can play a disruptive role in the development of text mining methods. We review a decade of efforts to improve the automatic assignment of Gene Ontology (GO) descriptors, the reference ontology for the characterization of genes and gene products. To illustrate the high potential of this approach, we evaluate the performance of an automatic text categorizer and show a large improvement of +225% in both precision and recall on benchmarked data. We argue that automatic text categorization functions can ultimately be embedded into a Question-Answering (QA) system to answer questions related to protein functions. Because GO descriptors can be relatively long and specific, traditional QA systems cannot answer such questions. A new type of QA system, so-called Deep QA, which uses machine learning methods trained with curated contents, is thus emerging. Finally, future advances of text mining instruments are directly dependent on the availability of high-quality annotated contents at every curation step. Database workflows must start recording explicitly all the data they curate and ideally also some of the data they do not curate.
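As a hedged illustration of what automatic assignment of GO descriptors to text can look like (not the categorizer evaluated in the chapter), the short scikit-learn sketch below trains a nearest-centroid classifier over TF-IDF features; the training passages and labels are invented.

# Minimal illustrative text categorizer: label a passage with a GO-like
# descriptor by TF-IDF similarity to labelled training passages.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestCentroid
from sklearn.pipeline import make_pipeline

train_texts = [
    "kinase activity phosphorylates serine residues on the substrate",
    "transcription factor binds the promoter and activates transcription",
]
train_labels = ["GO:0016301 kinase activity", "GO:0003700 DNA-binding TF activity"]

categorizer = make_pipeline(TfidfVectorizer(), NearestCentroid())
categorizer.fit(train_texts, train_labels)
print(categorizer.predict(["the enzyme phosphorylates serine residues"]))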
Stvilia, Besiki
2017-01-01
The importance of managing research data has been emphasized by the government, funding agencies, and scholarly communities. Increased access to research data increases the impact and efficiency of scientific activities and funding. Thus, many research institutions have established or plan to establish research data curation services as part of their Institutional Repositories (IRs). However, in order to design effective research data curation services in IRs, and to build active research data providers and user communities around those IRs, it is essential to study current data curation practices and provide rich descriptions of the sociotechnical factors and relationships shaping those practices. Based on 13 interviews with 15 IR staff members from 13 large research universities in the United States, this paper provides a rich, qualitative description of research data curation and use practices in IRs. In particular, the paper identifies data curation and use activities in IRs, as well as their structures, roles played, skills needed, contradictions and problems present, solutions sought, and workarounds applied. The paper can inform the development of best practice guides, infrastructure and service templates, as well as education in research data curation in Library and Information Science (LIS) schools. PMID:28301533
Lee, Dong Joon; Stvilia, Besiki
2017-01-01
The importance of managing research data has been emphasized by the government, funding agencies, and scholarly communities. Increased access to research data increases the impact and efficiency of scientific activities and funding. Thus, many research institutions have established or plan to establish research data curation services as part of their Institutional Repositories (IRs). However, in order to design effective research data curation services in IRs, and to build active research data providers and user communities around those IRs, it is essential to study current data curation practices and provide rich descriptions of the sociotechnical factors and relationships shaping those practices. Based on 13 interviews with 15 IR staff members from 13 large research universities in the United States, this paper provides a rich, qualitative description of research data curation and use practices in IRs. In particular, the paper identifies data curation and use activities in IRs, as well as their structures, roles played, skills needed, contradictions and problems present, solutions sought, and workarounds applied. The paper can inform the development of best practice guides, infrastructure and service templates, as well as education in research data curation in Library and Information Science (LIS) schools.
Improving the Acquisition and Management of Sample Curation Data
NASA Technical Reports Server (NTRS)
Todd, Nancy S.; Evans, Cindy A.; Labasse, Dan
2011-01-01
This paper discusses the current sample documentation processes used during and after a mission, examines the challenges and special considerations needed for designing effective sample curation data systems, and looks at the results of a simulated sample return mission and the lessons learned from this simulation. In addition, it introduces a new data architecture for an integrated sample curation data system being implemented at the NASA Astromaterials Acquisition and Curation department and discusses how it improves on existing data management systems.
caNanoLab: data sharing to expedite the use of nanotechnology in biomedicine
Gaheen, Sharon; Hinkal, George W.; Morris, Stephanie A.; Lijowski, Michal; Heiskanen, Mervi
2014-01-01
The use of nanotechnology in biomedicine involves the engineering of nanomaterials to act as therapeutic carriers, targeting agents and diagnostic imaging devices. The application of nanotechnology in cancer aims to transform early detection, targeted therapeutics and cancer prevention and control. To assist in expediting and validating the use of nanomaterials in biomedicine, the National Cancer Institute (NCI) Center for Biomedical Informatics and Information Technology, in collaboration with the NCI Alliance for Nanotechnology in Cancer (Alliance), has developed a data sharing portal called caNanoLab. caNanoLab provides access to experimental and literature curated data from the NCI Nanotechnology Characterization Laboratory, the Alliance and the greater cancer nanotechnology community. PMID:25364375
Southan, Christopher; Williams, Antony J; Ekins, Sean
2013-01-01
There is an expanding amount of interest directed at the repurposing and repositioning of drugs, as well as how in silico methods can assist these endeavors. Recent repurposing project tendering calls by the National Center for Advancing Translational Sciences (USA) and the Medical Research Council (UK) have included compound information and pharmacological data. However, none of the internal company development code names were assigned to chemical structures in the official documentation. This not only abrogates in silico analysis to support repurposing but consequently necessitates data gathering and curation to assign structures. Here, we describe the approaches, results and major challenges associated with this. Copyright © 2012 Elsevier Ltd. All rights reserved.
Sievertsen, Niels; Carreira, Erick M
2018-02-01
Mobile devices such as smartphones are carried in the pockets of university students around the globe and are increasingly cheap to come by. These portable devices have evolved into powerful and interconnected handheld computers, which, among other applications, can be used as advanced learning tools and providers of targeted, curated content. Herein, we describe Apoc Social (Advanced Problems in Organic Chemistry Social), a mobile application that assists both learning and teaching college-level organic chemistry both in the classroom and on the go. With more than 750 chemistry exercises available, Apoc Social facilitates collaborative learning through discussion boards and fosters enthusiasm for complex organic chemistry.
Surgery for Locally Recurrent Rectal Cancer: Tips, Tricks, and Pitfalls.
Warrier, Satish K; Heriot, Alexander G; Lynch, Andrew Craig
2016-06-01
Rectal cancer can recur locally in up to 10% of the patients who undergo definitive resection for their primary cancer. Surgical salvage is considered appropriate in the curative setting as well as select cases with palliative intent. Disease-free survival following salvage resection is dependent upon achieving an R0 resection margin. A clear understanding of applied surgical anatomy, appropriate preoperative planning, and a multidisciplinary approach to aggressive soft tissue, bony, and vascular resection with appropriate reconstruction is necessary. Technical tips, tricks, and pitfalls that may assist in managing these cancers are discussed and the roles of additional boost radiation and intraoperative radiation therapy in the management of such cancers are also discussed.
Johnson, Robin J.; Lay, Jean M.; Lennon-Hopkins, Kelley; Saraceni-Richards, Cynthia; Sciaky, Daniela; Murphy, Cynthia Grondin; Mattingly, Carolyn J.
2013-01-01
The Comparative Toxicogenomics Database (CTD; http://ctdbase.org/) is a public resource that curates interactions between environmental chemicals and gene products, and their relationships to diseases, as a means of understanding the effects of environmental chemicals on human health. CTD provides a triad of core information in the form of chemical-gene, chemical-disease, and gene-disease interactions that are manually curated from scientific articles. To increase the efficiency, productivity, and data coverage of manual curation, we have leveraged text mining to help rank and prioritize the triaged literature. Here, we describe our text-mining process that computes and assigns each article a document relevancy score (DRS), wherein a high DRS suggests that an article is more likely to be relevant for curation at CTD. We evaluated our process by first text mining a corpus of 14,904 articles triaged for seven heavy metals (cadmium, cobalt, copper, lead, manganese, mercury, and nickel). Based upon initial analysis, a representative subset corpus of 3,583 articles was then selected from the 14,904 articles and sent to five CTD biocurators for review. The resulting curation of these 3,583 articles was analyzed for a variety of parameters, including article relevancy, novel data content, interaction yield rate, mean average precision, and biological and toxicological interpretability. We show that for all measured parameters, the DRS is an effective indicator for scoring and improving the ranking of literature for the curation of chemical-gene-disease information at CTD. Here, we demonstrate how fully incorporating text-mining-based DRS scoring into our curation pipeline enhances manual curation by prioritizing more relevant articles, thereby increasing data content, productivity, and efficiency. PMID:23613709
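The abstract does not specify how the document relevancy score (DRS) is computed; purely as an illustration of ranking a triaged corpus by a weighted-term score, the sketch below uses invented keywords and weights, not CTD's actual model.

# Illustrative only: rank triaged articles by a simple weighted-keyword score.
from collections import Counter

WEIGHTS = {"cadmium": 3.0, "exposure": 1.5, "gene": 1.0, "apoptosis": 2.0}

def drs(text: str) -> float:
    counts = Counter(text.lower().split())
    return sum(weight * counts[term] for term, weight in WEIGHTS.items())

corpus = {
    "PMID:1": "cadmium exposure induces apoptosis through altered gene expression",
    "PMID:2": "a survey of museum curation practices",
}
for pmid, text in sorted(corpus.items(), key=lambda kv: drs(kv[1]), reverse=True):
    print(pmid, round(drs(text), 2))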
The Nanomaterial Data Curation Initiative (NDCI) explores the critical aspect of data curation within the development of informatics approaches to understanding nanomaterial behavior. Data repositories and tools for integrating and interrogating complex nanomaterial datasets are...
The art and science of data curation: Lessons learned from constructing a virtual collection
NASA Astrophysics Data System (ADS)
Bugbee, Kaylin; Ramachandran, Rahul; Maskey, Manil; Gatlin, Patrick
2018-03-01
A digital, or virtual, collection is a value-added service developed by libraries that curates information and resources around a topic, theme or organization. Adoption of the virtual collection concept as an Earth science data service improves the discoverability, accessibility and usability of data both within individual data centers and across data centers and disciplines. In this paper, we introduce a methodology for systematically and rigorously curating Earth science data and information into a cohesive virtual collection. This methodology builds on the geocuration model of searching, selecting and synthesizing Earth science data, metadata and other information into a single and useful collection. We present our experiences curating a virtual collection for one of NASA's twelve Distributed Active Archive Centers (DAACs), the Global Hydrology Resource Center (GHRC), and describe lessons learned as a result of this curation effort. We also provide recommendations and best practices for data centers and data providers who wish to curate virtual collections for the Earth sciences.
NASA Technical Reports Server (NTRS)
Todd, N. S.; Evans, C.
2015-01-01
The Astromaterials Acquisition and Curation Office at NASA's Johnson Space Center (JSC) is the designated facility for curating all of NASA's extraterrestrial samples. The suite of collections includes the lunar samples from the Apollo missions, cosmic dust particles falling into the Earth's atmosphere, meteorites collected in Antarctica, comet and interstellar dust particles from the Stardust mission, asteroid particles from the Japanese Hayabusa mission, and solar wind atoms collected during the Genesis mission. To support planetary science research on these samples, NASA's Astromaterials Curation Office hosts the Astromaterials Curation Digital Repository, which provides descriptions of the missions and collections, and critical information about each individual sample. Our office is implementing several informatics initiatives with the goal of better serving the planetary research community. One of these initiatives aims to increase the availability and discoverability of sample data and images through the use of a newly designed common architecture for Astromaterials Curation databases.
NASA Technical Reports Server (NTRS)
Fries, M. D.; Allen, C. C.; Calaway, M. J.; Evans, C. A.; Stansbery, E. K.
2015-01-01
Curation of NASA's astromaterials sample collections is a demanding and evolving activity that supports valuable science from NASA missions for generations, long after the samples are returned to Earth. For example, NASA continues to loan hundreds of Apollo program samples to investigators every year and those samples are often analyzed using instruments that did not exist at the time of the Apollo missions themselves. The samples are curated in a manner that minimizes overall contamination, enabling clean, new high-sensitivity measurements and new science results over 40 years after their return to Earth. As our exploration of the Solar System progresses, upcoming and future NASA sample return missions will return new samples with stringent contamination control, sample environmental control, and Planetary Protection requirements. Therefore, an essential element of a healthy astromaterials curation program is a research and development (R&D) effort that characterizes and employs new technologies to maintain current collections and enable new missions - an Advanced Curation effort. JSC's Astromaterials Acquisition & Curation Office is continually performing Advanced Curation research, identifying and defining knowledge gaps about research, development, and validation/verification topics that are critical to support current and future NASA astromaterials sample collections. The following are highlighted knowledge gaps and research opportunities.
Networks of genetic loci and the scientific literature
NASA Astrophysics Data System (ADS)
Semeiks, J. R.; Grate, L. R.; Mian, I. S.
This work considers biological information graphs, networks in which nodes correspond to genetic loci (or "genes") and an (undirected) edge signifies that two genes are discussed in the same article(s) in the scientific literature ("documents"). Operations that utilize the topology of these graphs can assist researchers in the scientific discovery process. For example, a shortest path between two nodes defines an ordered series of genes and documents that can be used to explore the relationship(s) between genes of interest. This work (i) describes how topologies in which edges are likely to reflect genuine relationship(s) can be constructed from human-curated corpora of genes annotated with documents (or vice versa), and (ii) illustrates the potential of biological information graphs in synthesizing knowledge in order to formulate new hypotheses and generate novel predictions for subsequent experimental study. In particular, the well-known LocusLink corpus is used to construct a biological information graph consisting of 10,297 nodes and 21,910 edges. The large-scale statistical properties of this gene-document network suggest that it is a new example of a power-law network. The segregation of genes on the basis of species and encoded protein molecular function indicates the presence of assortativity, the preference for nodes with similar attributes to be neighbors in a network. The practical utility of a gene-document network is illustrated by using measures such as shortest paths and centrality to analyze a subset of nodes corresponding to genes implicated in aging. Each release of a curated biomedical corpus defines a particular static graph. The topology of a gene-document network changes over time as curators add and/or remove nodes and/or edges. Such a dynamic, evolving corpus provides both the foundation for analyzing the growth and behavior of large complex networks and a substrate for examining trends in biological research.
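A small NetworkX sketch, using invented gene names and PMIDs, illustrates the shortest-path and centrality operations described above; it is not the LocusLink-derived graph analysed in the work.

# Illustrative gene-document graph: nodes are genes; an edge means the two
# genes are discussed in the same document(s). All identifiers are invented.
import networkx as nx

G = nx.Graph()
G.add_edge("SIR2", "FOXO3", documents={"PMID:111"})
G.add_edge("FOXO3", "IGF1", documents={"PMID:222", "PMID:333"})
G.add_edge("IGF1", "TP53", documents={"PMID:444"})

# an ordered series of genes linking two genes of interest
print(nx.shortest_path(G, "SIR2", "TP53"))
# a simple centrality measure highlighting genes that bridge the literature
print(nx.betweenness_centrality(G))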
Text-mining-assisted biocuration workflows in Argo
Rak, Rafal; Batista-Navarro, Riza Theresa; Rowley, Andrew; Carter, Jacob; Ananiadou, Sophia
2014-01-01
Biocuration activities have been broadly categorized into the selection of relevant documents, the annotation of biological concepts of interest and identification of interactions between the concepts. Text mining has been shown to have a potential to significantly reduce the effort of biocurators in all the three activities, and various semi-automatic methodologies have been integrated into curation pipelines to support them. We investigate the suitability of Argo, a workbench for building text-mining solutions with the use of a rich graphical user interface, for the process of biocuration. Central to Argo are customizable workflows that users compose by arranging available elementary analytics to form task-specific processing units. A built-in manual annotation editor is the single most used biocuration tool of the workbench, as it allows users to create annotations directly in text, as well as modify or delete annotations created by automatic processing components. Apart from syntactic and semantic analytics, the ever-growing library of components includes several data readers and consumers that support well-established as well as emerging data interchange formats such as XMI, RDF and BioC, which facilitate the interoperability of Argo with other platforms or resources. To validate the suitability of Argo for curation activities, we participated in the BioCreative IV challenge whose purpose was to evaluate Web-based systems addressing user-defined biocuration tasks. Argo proved to have the edge over other systems in terms of flexibility of defining biocuration tasks. As expected, the versatility of the workbench inevitably lengthened the time the curators spent on learning the system before taking on the task, which may have affected the usability of Argo. The participation in the challenge gave us an opportunity to gather valuable feedback and identify areas of improvement, some of which have already been introduced. Database URL: http://argo.nactem.ac.uk PMID:25037308
Versieux, Leonardo M; Dávila, Nállarett; Delgado, Geadelande C; de Sousa, Valdeci F; de Moura, Edweslley Otaviano; Filgueiras, Tarciso; Alves, Marccus V; Carvalho, Eric; Piotto, Daniel; Forzza, Rafaela C; Calvente, Alice; Jardim, Jomar G
2017-01-01
A National Forest Inventory (NFI) encompassing the entire territory of Brazil is in progress. It is coordinated and promoted by the Brazilian Forest Service of the Ministry of Environment. In each state, the NFI collaborates with local herbaria by receiving collected plant material and performing species identification. Consultants are hired by the NFI and work at the local herbaria under the supervision of a curator. In exchange for curatorial assistance, the NFI provides equipment and consumables for the herbarium. Other public projects collaborating with NFI are Reflora and the Brazilian Biodiversity Information System (SiBBr). Both projects have online platforms that seek to connect herbaria and make all their data freely available, including high quality digital images of specimens. Through inter-institutional collaboration, the joint interests of NFI, Reflora, SiBBr and local herbaria have improved collections, expanded the online Reflora database, and provided the NFI with verified species lists. These strategic uses of public funding are positively affecting Botany, particularly during a period of economic crisis and cuts in research. Here, we illustrate the increase in floristic knowledge through the improvement of a herbarium collection in Rio Grande do Norte (RN) - the Brazilian state with the lowest levels of plant richness. We report 71 new occurrences of vascular plants for RN, belonging mainly to the Poaceae, Fabaceae and Malvaceae. Most of the species with new occurrences have a Neotropical distribution (21 spp.) and only seven are restricted to the Brazilian Northeast. Our findings highlight previous gaps in RN's floristic knowledge. The partnership NFI, Reflora, SiBBr and the UFRN herbarium improved herbarium curation, digital collection, and quality of data. Finally, a fellowship provided by Reflora and SiBBr allowed improving curation by distributing duplicates and incorporating the Herbarium of Câmara Cascudo Museum.
Versieux, Leonardo M.; Dávila, Nállarett; Delgado, Geadelande C.; de Sousa, Valdeci F.; de Moura, Edweslley Otaviano; Filgueiras, Tarciso; Alves, Marccus V.; Carvalho, Eric; Piotto, Daniel; Forzza, Rafaela C.; Calvente, Alice; Jardim, Jomar G.
2017-01-01
Abstract A National Forest Inventory (NFI) encompassing the entire territory of Brazil is in progress. It is coordinated and promoted by the Brazilian Forest Service of the Ministry of Environment. In each state, the NFI collaborates with local herbaria by receiving collected plant material and performing species identification. Consultants are hired by the NFI and work at the local herbaria under the supervision of a curator. In exchange for curatorial assistance, the NFI provides equipment and consumables for the herbarium. Other public projects collaborating with NFI are Reflora and the Brazilian Biodiversity Information System (SiBBr). Both projects have online platforms that seek to connect herbaria and make all their data freely available, including high quality digital images of specimens. Through inter-institutional collaboration, the joint interests of NFI, Reflora, SiBBr and local herbaria have improved collections, expanded the online Reflora database, and provided the NFI with verified species lists. These strategic uses of public funding are positively affecting Botany, particularly during a period of economic crisis and cuts in research. Here, we illustrate the increase in floristic knowledge through the improvement of a herbarium collection in Rio Grande do Norte (RN) – the Brazilian state with the lowest levels of plant richness. We report 71 new occurrences of vascular plants for RN, belonging mainly to the Poaceae, Fabaceae and Malvaceae. Most of the species with new occurrences have a Neotropical distribution (21 spp.) and only seven are restricted to the Brazilian Northeast. Our findings highlight previous gaps in RN’s floristic knowledge. The partnership NFI, Reflora, SiBBr and the UFRN herbarium improved herbarium curation, digital collection, and quality of data. Finally, a fellowship provided by Reflora and SiBBr allowed improving curation by distributing duplicates and incorporating the Herbarium of Câmara Cascudo Museum. PMID:29033668
Give, Celso Soares; Sidat, Mohsin; Ormel, Hermen; Ndima, Sozinho; McCollum, Rosalind; Taegtmeyer, Miriam
2015-09-01
Mozambique launched its revitalized community health programme in 2010 in response to inequitable coverage and quality of health services. The programme is focused on health promotion and disease prevention, with 20 % of community health workers' (known in Mozambique as Agentes Polivalentes Elementares (APEs)) time spent on curative services and 80 % on activities promoting health and preventing illness. We set out to conduct a health system and equity analysis, exploring experiences and expectations of APEs, community members and healthcare workers supervising APEs. This exploratory qualitative study captured the perspectives of a range of participants including women caring for children under 5 years (service clients), community leaders, service providers (APEs) and their supervisors. Participants in the Moamba and Manhiça districts, located in Maputo Province (Mozambique), were selected purposively. In total, 29 in-depth interviews and 9 focus group discussions were conducted in the local language and/or Portuguese. A framework approach was used for analysis, assisted by NVivo10 software. Our analysis revealed that health equity is viewed as linked to the quality and coverage of the APE programme. Demand and supply factors interplay to shape health equity. The availability of responsive and appropriate services led to tensions between community expectations for curative services (and APEs' willingness to perform them) and official policy focusing APE efforts mainly on preventive services and health promotion. The demand for more curative services by community members is a result of having limited access to healthcare services other than those offered by APEs. This study highlights the need to pay attention to the determinants of demand and supply of community interventions in health, to understand the opportunities and challenges of the difficult interface role played by APEs and to create communication among stakeholders in order to build a stronger, more effective and equitable community programme.
Foerster, Hartmut; Bombarely, Aureliano; Battey, James N D; Sierro, Nicolas; Ivanov, Nikolai V; Mueller, Lukas A
2018-01-01
Abstract SolCyc is the entry portal to pathway/genome databases (PGDBs) for major species of the Solanaceae family hosted at the Sol Genomics Network. Currently, SolCyc comprises six organism-specific PGDBs for tomato, potato, pepper, petunia, tobacco and one Rubiaceae, coffee. The metabolic networks of those PGDBs have been computationally predicted by the PathoLogic component of the Pathway Tools software using the manually curated multi-domain database MetaCyc (http://www.metacyc.org/) as reference. SolCyc has been recently extended by taxon-specific databases, i.e. the family-specific SolanaCyc database, containing only curated data pertinent to species of the nightshade family, and NicotianaCyc, a genus-specific database that stores all relevant metabolic data of the Nicotiana genus. Through manual curation of the published literature, new metabolic pathways have been created in those databases, which are complemented by the continuously updated, relevant species-specific pathways from MetaCyc. At present, SolanaCyc comprises 199 pathways and 29 superpathways and NicotianaCyc accounts for 72 pathways and 13 superpathways. Curator-maintained, taxon-specific databases such as SolanaCyc and NicotianaCyc are characterized by an enrichment of data specific to these taxa and free of falsely predicted pathways. Both databases have been used to update recently created Nicotiana-specific databases for Nicotiana tabacum, Nicotiana benthamiana, Nicotiana sylvestris and Nicotiana tomentosiformis by propagating verifiable data into those PGDBs. In addition, in-depth curation of the pathways in N. tabacum has been carried out which resulted in the elimination of 156 pathways from the 569 pathways predicted by Pathway Tools. Together, in-depth curation of the predicted pathway network and the supplementation with curated data from taxon-specific databases has substantially improved the curation status of the species-specific N. tabacum PGDB. The implementation of this strategy will significantly advance the curation status of all organism-specific databases in SolCyc resulting in the improvement of database accuracy, data analysis and visualization of biochemical networks in those species. Database URL https://solgenomics.net/tools/solcyc/ PMID:29762652
Wu, Honghan; Oellrich, Anika; Girges, Christine; de Bono, Bernard; Hubbard, Tim J P; Dobson, Richard J B
2017-01-01
Neurodegenerative disorders such as Parkinson's and Alzheimer's disease are devastating and costly illnesses, a source of major global burden. In order to provide successful interventions for patients and reduce costs, both causes and pathological processes need to be understood. The ApiNATOMY project aims to contribute to our understanding of neurodegenerative disorders by manually curating and abstracting data from the vast body of literature amassed on these illnesses. As curation is labour-intensive, we aimed to speed up the process by automatically highlighting those parts of the PDF document of primary importance to the curator. Using techniques similar to those of summarisation, we developed an algorithm that relies on linguistic, semantic and spatial features. Employing this algorithm on a test set manually corrected for tool imprecision, we achieved a macro F1-measure of 0.51, which is an increase of 132% compared to the best bag-of-words baseline model. A user-based evaluation was also conducted to assess the usefulness of the methodology on 40 unseen publications, which reveals that in 85% of cases all highlighted sentences are relevant to the curation task and in about 65% of the cases, the highlights are sufficient to support the knowledge curation task without needing to consult the full text. In conclusion, we believe that these are promising results for a step in automating the recognition of curation-relevant sentences. Refining our approach to pre-digest papers will lead to faster processing and cost reduction in the curation process. https://github.com/KHP-Informatics/NapEasy. © The Author(s) 2017. Published by Oxford University Press.
Oellrich, Anika; Girges, Christine; de Bono, Bernard; Hubbard, Tim J.P.; Dobson, Richard J.B.
2017-01-01
Abstract Neurodegenerative disorders such as Parkinson’s and Alzheimer’s disease are devastating and costly illnesses, a source of major global burden. In order to provide successful interventions for patients and reduce costs, both causes and pathological processes need to be understood. The ApiNATOMY project aims to contribute to our understanding of neurodegenerative disorders by manually curating and abstracting data from the vast body of literature amassed on these illnesses. As curation is labour-intensive, we aimed to speed up the process by automatically highlighting those parts of the PDF document of primary importance to the curator. Using techniques similar to those of summarisation, we developed an algorithm that relies on linguistic, semantic and spatial features. Employing this algorithm on a test set manually corrected for tool imprecision, we achieved a macro F1-measure of 0.51, which is an increase of 132% compared to the best bag-of-words baseline model. A user based evaluation was also conducted to assess the usefulness of the methodology on 40 unseen publications, which reveals that in 85% of cases all highlighted sentences are relevant to the curation task and in about 65% of the cases, the highlights are sufficient to support the knowledge curation task without needing to consult the full text. In conclusion, we believe that these are promising results for a step in automating the recognition of curation-relevant sentences. Refining our approach to pre-digest papers will lead to faster processing and cost reduction in the curation process. Database URL: https://github.com/KHP-Informatics/NapEasy PMID:28365743
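For readers unfamiliar with the reported metric, the sketch below computes a macro-averaged F1 over per-document sentence-highlighting decisions; the documents and sentence identifiers are toy examples, and the feature-based highlighting algorithm itself is not reproduced.

# Macro F1: average the per-document F1 of highlighted vs. truly relevant sentences.
def f1(predicted: set, relevant: set) -> float:
    tp = len(predicted & relevant)
    if tp == 0:
        return 0.0
    precision = tp / len(predicted)
    recall = tp / len(relevant)
    return 2 * precision * recall / (precision + recall)

docs = [
    ({"s1", "s3"}, {"s1", "s2", "s3"}),  # (highlighted, curation-relevant)
    ({"s2"}, {"s2"}),
]
macro_f1 = sum(f1(pred, rel) for pred, rel in docs) / len(docs)
print(round(macro_f1, 2))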
The Library as Partner in University Data Curation: A Case Study in Collaboration
ERIC Educational Resources Information Center
Latham, Bethany; Poe, Jodi Welch
2012-01-01
Data curation is a concept with many facets. Curation goes beyond research-generated data, and its principles can support the preservation of institutions' historical data. Libraries are well-positioned to bring relevant expertise to such problems, especially those requiring collaboration, because of their experience as neutral caretakers and…
ERIC Educational Resources Information Center
Schiano, Deborah
2013-01-01
Curation: to gather, organize, and present resources in a way that meets information needs and interests, makes sense for virtual as well as physical resources. A Northern New Jersey middle school library made the decision to curate its physical resources according to the needs of its users, and, in so doing, created a shelving system that is,…
ERIC Educational Resources Information Center
Mallon, Melissa, Ed.
2012-01-01
In their Top Trends of 2012, the Association of College and Research Libraries (ACRL) named data curation as one of the issues to watch in academic libraries in the near future (ACRL, 2012, p. 312). Data curation can be summarized as "the active and ongoing management of data through its life cycle of interest and usefulness to scholarship,…
Health Care Reform and Concurrent Curative Care for Terminally Ill Children: A Policy Analysis
Lindley, Lisa C.
2012-01-01
Within the Patient Protection and Affordable Care Act of 2010, or health care reform, is a relatively small provision about concurrent curative care that significantly affects terminally ill children. Effective on March 23, 2010, terminally ill children, who are enrolled in a Medicaid or state Children’s Health Insurance Plans (CHIP) hospice benefit, may concurrently receive curative care related to their terminal health condition. The purpose of this article was to conduct a policy analysis of the concurrent curative care legislation by examining the intended goals of the policy to improve access to care and enhance quality of end of life care for terminally ill children. In addition, the policy analysis explored the political feasibility of implementing concurrent curative care at the state level. Based on this policy analysis, the federal policy of concurrent curative care for children would generally achieve its intended goals. However, important policy omissions focus attention on the need for further federal end of life care legislation for children. These findings have implications for nurses. PMID:22822304
Distilling Design Patterns From Agile Curation Case Studies
NASA Astrophysics Data System (ADS)
Benedict, K. K.; Lenhardt, W. C.; Young, J. W.
2016-12-01
In previous work the authors have argued that there is a need to take a new look at the data management lifecycle. Our core argument is that the data management lifecycle needs to be in essence deconstructed and rebuilt. As part of this process we also argue that much can be gained from applying ideas, concepts, and principles from agile software development methods. To be sure we are not arguing for a rote application of these agile software approaches, however, given various trends related to data and technology, it is imperative to update our thinking about how to approach the data management lifecycle, recognize differing project scales, corresponding variations in structure, and alternative models for solving the problems of scientific data curation. In this paper we will describe what we term agile curation design patterns, borrowing the concept of design patterns from the software world and we will present some initial thoughts on agile curation design patterns as informed by a sample of data curation case studies solicited from participants in agile data curation meeting sessions conducted in 2015-16.
Vos, Pieter; De Cock, Paul; Munde, Vera; Petry, Katja; Van Den Noortgate, Wim; Maes, Bea
2012-01-01
Identifying emotions in people with severe and profound intellectual disabilities is a difficult challenge. Since self-reports are not available, behaviour is the most used source of information. Given the limitations and caveats associated with using behaviour as the sole source of information about their emotions, it is important to supplement behavioural information with information from another source. As it is accepted that emotions consist of language, behaviour and physiology, in this article we investigated if physiology could give information about the emotions of people with severe and profound intellectual disabilities. To this aim we tested hypotheses derived from the motivational model of Bradley, Codispoti, Cuthbert, and Lang (2001) about the relation between heart rate and the valence of emotions and between heart rate, skin conductance and skin temperature and behavioural expressions of emotions of people with severe and profound intellectual disability. We presented 27 participants with 4 staff-selected negative and 4 staff-selected positive stimuli. The situations were videotaped and their heart rate, skin conductance and skin temperature were measured. Each behaviour of the participant was coded using the observational method developed by Petry and Maes (2006). As hypothesized, we found a lower heart rate when participants were presented with negative stimuli than when they were presented with positive stimuli in the first 6 s of stimuli presentation. Their skin temperature was higher for the expression of low intensity negative emotions compared to the expression of low intensity positive emotions. The results suggest that, as with people without disability, heart rate and skin temperature can give information about the emotions of persons with severe and profound ID. Copyright © 2012 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Allton, J. H.; Burkett, P. J.
2011-01-01
NASA Johnson Space Center operates clean curation facilities for Apollo lunar, Antarctic meteorite, stratospheric cosmic dust, Stardust comet and Genesis solar wind samples. Each of these collections is curated separately due to unique requirements. The purpose of this abstract is to highlight the technical tensions between providing particulate cleanliness and molecular cleanliness, illustrated using data from curation laboratories. Strict control of three components is required for curating samples cleanly: a clean environment; clean containers and tools that touch samples; and use of non-shedding materials of cleanable chemistry and smooth surface finish. This abstract focuses on environmental cleanliness and the technical tension between achieving particulate and molecular cleanliness. An environment in which a sample is manipulated or stored can be a room, an enclosed glovebox (or robotic isolation chamber) or an individual sample container.
Powers, Christina M; Hoover, Mark D; Harper, Stacey L
2015-01-01
Summary The Nanomaterial Data Curation Initiative (NDCI), a project of the National Cancer Informatics Program Nanotechnology Working Group (NCIP NanoWG), explores the critical aspect of data curation within the development of informatics approaches to understanding nanomaterial behavior. Data repositories and tools for integrating and interrogating complex nanomaterial datasets are gaining widespread interest, with multiple projects now appearing in the US and the EU. Even in these early stages of development, a single common aspect shared across all nanoinformatics resources is that data must be curated into them. Through exploration of sub-topics related to all activities necessary to enable, execute, and improve the curation process, the NDCI will provide a substantive analysis of nanomaterial data curation itself, as well as a platform for multiple other important discussions to advance the field of nanoinformatics. This article outlines the NDCI project and lays the foundation for a series of papers on nanomaterial data curation. The NDCI purpose is to: 1) present and evaluate the current state of nanomaterial data curation across the field on multiple specific data curation topics, 2) propose ways to leverage and advance progress for both individual efforts and the nanomaterial data community as a whole, and 3) provide opportunities for similar publication series on the details of the interactive needs and workflows of data customers, data creators, and data analysts. Initial responses from stakeholder liaisons throughout the nanoinformatics community reveal a shared view that it will be critical to focus on integration of datasets with specific orientation toward the purposes for which the individual resources were created, as well as the purpose for integrating multiple resources. Early acknowledgement and undertaking of complex topics such as uncertainty, reproducibility, and interoperability is proposed as an important path to addressing key challenges within the nanomaterial community, such as reducing collateral negative impacts and decreasing the time from development to market for this new class of technologies. PMID:26425427
NASA Technical Reports Server (NTRS)
Calaway, Michael J.
2013-01-01
In preparation for OSIRIS-REx and other future sample return missions concerned with analyzing organics, we conducted an Organic Contamination Baseline Study for JSC Curation Laboratories in FY12. For FY12 testing, the organic baseline study focused only on molecular organic contamination in JSC curation gloveboxes: presumably future collections (i.e. Lunar, Mars, asteroid missions) would use isolation containment systems rather than cleanrooms alone for primary sample storage. This decision was made due to limited historical data on curation gloveboxes and limited IR&D funds; Genesis routinely monitors organics in its ISO class 4 cleanrooms.
A semi-automated methodology for finding lipid-related GO terms.
Fan, Mengyuan; Low, Hong Sang; Wenk, Markus R; Wong, Limsoon
2014-01-01
Although semantic similarity in Gene Ontology (GO) and other approaches may be used to find similar GO terms, there is as yet no method to systematically find a class of GO terms sharing a common property with high accuracy (e.g., involving human curation). We have developed a methodology to address this issue and applied it to identify lipid-related GO terms, owing to the important and varied roles of lipids in many biological processes. Our methodology finds lipid-related GO terms in a semi-automated manner, requiring only moderate manual curation. We first obtain a list of lipid-related gold-standard GO terms by keyword search and manual curation. Then, based on the hypothesis that co-annotated GO terms share similar properties, we develop a machine learning method that expands the list of lipid-related terms from the gold standard. Those terms predicted most likely to be lipid related are examined by a human curator following specific curation rules to confirm the class labels. The structure of GO is also exploited to help reduce the curation effort. The prediction and curation cycle is repeated until no further lipid-related term is found. Our approach has covered a high proportion, if not all, of lipid-related terms with relatively high efficiency. Available at: http://compbio.ddns.comp.nus.edu.sg/~lipidgo. © The Author(s) 2014. Published by Oxford University Press.
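The co-annotation hypothesis and the predict-then-curate cycle described above can be pictured with a short sketch. The snippet below is a simplified illustration only, assuming gene-to-GO annotations in a plain dictionary and a curator review step supplied as a callback; the function names and the scoring rule are hypothetical, not the authors' implementation.

```python
from collections import defaultdict

def co_annotation_scores(annotations, seed_terms):
    """Score candidate GO terms by how often they are co-annotated with seed (lipid-related) terms."""
    scores = defaultdict(int)
    for gene, terms in annotations.items():
        terms = set(terms)
        if terms & seed_terms:                       # gene carries at least one confirmed lipid term
            for candidate in terms - seed_terms:
                scores[candidate] += 1               # candidate co-occurs with the confirmed set
    return scores

def expand_lipid_terms(annotations, gold_standard, review, min_score=2):
    """Repeat the predict-then-curate cycle until the curator confirms no further term."""
    confirmed = set(gold_standard)
    while True:
        scores = co_annotation_scores(annotations, confirmed)
        candidates = [t for t, s in scores.items() if s >= min_score and t not in confirmed]
        newly_confirmed = {t for t in candidates if review(t)}   # manual curation step
        if not newly_confirmed:
            return confirmed
        confirmed |= newly_confirmed
```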
Davis, Allan Peter; Wiegers, Thomas C.; Murphy, Cynthia G.; Mattingly, Carolyn J.
2011-01-01
The Comparative Toxicogenomics Database (CTD) is a public resource that promotes understanding about the effects of environmental chemicals on human health. CTD biocurators read the scientific literature and convert free-text information into a structured format using official nomenclature, integrating third party controlled vocabularies for chemicals, genes, diseases and organisms, and a novel controlled vocabulary for molecular interactions. Manual curation produces a robust, richly annotated dataset of highly accurate and detailed information. Currently, CTD describes over 349 000 molecular interactions between 6800 chemicals, 20 900 genes (for 330 organisms) and 4300 diseases that have been manually curated from over 25 400 peer-reviewed articles. These manually curated data are further integrated with other third party data (e.g. Gene Ontology, KEGG and Reactome annotations) to generate a wealth of toxicogenomic relationships. Here, we describe our approach to manual curation that uses a powerful and efficient paradigm involving mnemonic codes. This strategy allows biocurators to quickly capture detailed information from articles by generating simple statements using codes to represent the relationships between data types. The paradigm is versatile, expandable, and able to accommodate new data challenges that arise. We have incorporated this strategy into a web-based curation tool to further increase efficiency and productivity, implement quality control in real-time and accommodate biocurators working remotely. Database URL: http://ctd.mdibl.org PMID:21933848
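The abstract does not reproduce the actual codes, so the fragment below only illustrates the general idea of expanding terse, curator-typed statements into structured chemical-gene interaction records; the code table and the example statement are invented for illustration and are not CTD's real vocabulary.

```python
# Hypothetical mnemonic codes (not CTD's actual shorthand) mapping codes to interaction phrases.
ACTION_CODES = {
    "exp+": "increases expression",
    "exp-": "decreases expression",
    "bind": "binds to",
}

def parse_statement(statement):
    """Expand a 'chemical code gene' shorthand statement into a structured record."""
    chemical, code, gene = statement.split()
    return {"chemical": chemical, "interaction": ACTION_CODES[code], "gene": gene}

print(parse_statement("phenacetin exp- CYP1A2"))
# -> {'chemical': 'phenacetin', 'interaction': 'decreases expression', 'gene': 'CYP1A2'}
```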
ERIC Educational Resources Information Center
Lage, Kathryn; Losoff, Barbara; Maness, Jack
2011-01-01
Increasingly libraries are expected to play a role in scientific data curation initiatives, i.e., "the management and preservation of digital data over the long-term." This case study offers a novel approach for identifying researchers who are receptive toward library involvement in data curation. The authors interviewed researchers at…
Co-Curate: Working with Schools and Communities to Add Value to Open Collections
ERIC Educational Resources Information Center
Cotterill, Simon; Hudson, Martyn; Lloyd, Katherine; Outterside, James; Peterson, John; Coburn, John; Thomas, Ulrike; Tiplady, Lucy; Robinson, Phil; Heslop, Phil
2016-01-01
Co-Curate North East is a cross-disciplinary initiative involving Newcastle University and partner organisations, working with schools and community groups in the North East of England. Co-curation builds on the concept of the "ecomuseum" model for heritage based around a virtual territory, social memory and participative input from the…
ERIC Educational Resources Information Center
Ungerer, Leona M.
2016-01-01
Digital curation may be regarded as a core competency in higher education since it contributes to establishing a sense of metaliteracy (an essential requirement for optimally functioning in a modern media environment) among students. Digital curation is gradually finding its way into higher education curricula aimed at fostering social media…
Winsor, Geoffrey L; Griffiths, Emma J; Lo, Raymond; Dhillon, Bhavjinder K; Shay, Julie A; Brinkman, Fiona S L
2016-01-04
The Pseudomonas Genome Database (http://www.pseudomonas.com) is well known for the application of community-based annotation approaches for producing a high-quality Pseudomonas aeruginosa PAO1 genome annotation, and facilitating whole-genome comparative analyses with other Pseudomonas strains. To aid analysis of potentially thousands of complete and draft genome assemblies, this database and analysis platform was upgraded to integrate curated genome annotations and isolate metadata with enhanced tools for larger scale comparative analysis and visualization. Manually curated gene annotations are supplemented with improved computational analyses that help identify putative drug targets and vaccine candidates or assist with evolutionary studies by identifying orthologs, pathogen-associated genes and genomic islands. The database schema has been updated to integrate isolate metadata that will facilitate more powerful analysis of genomes across datasets in the future. We continue to place an emphasis on providing high-quality updates to gene annotations through regular review of the scientific literature and using community-based approaches including a major new Pseudomonas community initiative for the assignment of high-quality gene ontology terms to genes. As we further expand from thousands of genomes, we plan to provide enhancements that will aid data visualization and analysis arising from whole-genome comparative studies including more pan-genome and population-based approaches. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
Defibrotide in Severe Sinusoidal Obstruction Syndrome: Medicine and Economic Issues.
Steelandt, Julie; Bocquet, François; Cordonnier, Anne-Laure; De Courtivron, Charlotte; Fusier, Isabelle; Paubel, Pascal
2017-02-01
In Europe, Defitelio (defibrotide) has a Market Authorization in curative treatment of severe sinusoidal obstruction syndrome (SOS) but not in prophylaxis (2013). In France, defibrotide has had a compassionate-use program since 2009. Today, the high cost of defibrotide remains a major hurdle for hospital budgets. Medicine and economic issues were evaluated for the 39 hospitals of the French Public Assistance-Hospitals of Paris (AP-HP). We analyzed literature reviews, consumption, and expenditures through AP-HP data in 2014 and patient profiles with defibrotide in the corresponding diagnostic-related groups (DRGs) and consulted a board of hematologists. Finally, 18 publications were selected. Between 2011 and 2014 consumption increased to €5.2M. In 2014, 80 patients receiving defibrotide were mainly ascribed to the DRG "hematopoietic stem cell transplantation" levels 3 or 4. The tariffs attributed to drugs (€3544 to 4084) cover a small part of treatment costs (€97,524 for an adult). French experts thus recommended a harmonization of indications in prophylaxis (off-label use), improvement of pretransplant care, and optimization of the number of vials used. The economic impact led experts to change their practices. They recommended the restriction of defibrotide use to SOS curative treatment and to high-risk situations in prophylaxis. Copyright © 2017 The American Society for Blood and Marrow Transplantation. Published by Elsevier Inc. All rights reserved.
The preventive-curative conflict in primary health care.
De Sa, C
1993-04-01
Approximately 80% of the rural population in developing countries do not have access to appropriate curative care. The primary health care (PHC) approach emphasizes promotive and preventive services. Yet most people in developing countries consider curative care to be more important. Thus, PHC should include curative and rehabilitative care along with preventive and promotive care. The conflict between preventive and curative care is apparent at the community level, among health workers from all levels of the health system, and among policy makers. Community members are sometimes willing to pay for curative services but not preventive services. Further, they believe that they already know enough to prevent illness. Community health workers (CHWs), the mainstays of most PHC projects, are trained in preventive efforts, but this preventive focus hinders their effectiveness, since the community expects curative care. Besides, 66% of villagers' health problems require curative care. Further, CHWs are isolated from health professionals, adding to their inability to effect positive change. Health professionals are often unable to set up a relationship of trust with the community, largely due to their urban-based medical education. They tend not to explain treatment to patients, or they simplify explanations in a condescending manner. They also mystify diseases, preventing people from understanding their own bodies and managing their illnesses. National governments often misinterpret national health policies promoting PHC and implement them from a top-down approach rather than from the bottom-up PHC-advocated approach. Nongovernmental organizations (NGOs) and international agencies also interpret PHC in different ways. Still, strong partnerships between government, NGOs, the private sector, and international agencies are needed for effective implementation of PHC. Yet, many countries continue to have complex hierarchical social structures, inequitable distribution, and inadequate resources, making it difficult to implement effective PHC.
Lunar and Meteorite Thin Sections for Undergraduate and Graduate Studies
NASA Astrophysics Data System (ADS)
Allen, J.; Allen, C.
2012-12-01
The Johnson Space Center (JSC) has the unique responsibility to curate NASA's extraterrestrial samples from past and future missions. Curation includes documentation, preservation, preparation, and distribution of samples for research, education, and public outreach. Studies of rock and soil samples from the Moon and meteorites continue to yield useful information about the early history of the Moon, the Earth, and the inner solar system. Petrographic Thin Section Packages containing polished thin sections of samples from either the Lunar or Meteorite collections have been prepared. Each set of twelve sections of Apollo lunar samples or twelve sections of meteorites is available for loan from JSC. The thin section sets are designed for use in domestic college and university courses in petrology. The loan period is strictly limited to two weeks. Contact Ms. Mary Luckey, Education Sample Curator, email: mary.k.luckey@nasa.gov. Each set of slides is accompanied by teaching materials and a sample disk of representative lunar or meteorite samples. It is important to note that the samples in these sets are not exactly the same as the ones listed here; this list represents one set of samples. A key education resource available on the Curation website is the Antarctic Meteorite Teaching Collection: Educational Meteorite Thin Sections, originally compiled by Bevan French, Glenn McPherson, and Roy Clarke and revised by Kevin Righter in 2010. College and university staff and students are encouraged to access the Lunar Petrographic Thin Section Set Publication and the Meteorite Petrographic Thin Section Package Resource, which feature many thin section images and detailed descriptions of the samples and research results. Curation website: http://curator.jsc.nasa.gov/Education/index.cfm Request research samples: http://curator.jsc.nasa.gov/ Education disk requests: JSC-CURATION-EDUCATION-DISKS@mail.nasa.gov
Actinomycosis: a frequently forgotten disease.
Boyanova, Lyudmila; Kolarov, Rossen; Mateva, Lyudmila; Markovska, Rumyana; Mitov, Ivan
2015-01-01
Actinomycosis is a rare subacute or chronic, endogenous infection mainly by Actinomyces species, showing low virulence through fimbriae and biofilms. Cervicofacial, thoracic, abdominal, pelvic and sometimes cerebral, laryngeal, urinary and other regions can be affected. Actinomycosis mimics other diseases, often malignancy. Disease risk in immunocompromised subjects needs clarification. Diagnosis is often delayed and 'sulfur granules' are helpful but nonspecific. Culture requires immediate specimen transport and prolonged anaerobic incubation. Imaging, histology, cytology, matrix-assisted laser desorption ionization time-of-flight mass spectrometry and molecular methods improve the diagnosis. Actinomycetes are β-lactam susceptible, occasionally resistant. Treatment includes surgery and/or long-term parenteral then oral antibiotics, but some 1-4-week regimens or oral therapy alone were curative. For prophylaxis, oral hygiene and regular intrauterine device replacement are important.
Mello, Guilherme Arantes; Viana, Ana Luiza d'Ávila
2011-12-01
Health Centers appeared in the United States around 1910. They provided social assistance in conjunction with some type of medical care. Their original separation between preventive and curative medicine was superseded by the concept of whole health in the 1940s, when Health Center discourse became part of medical education. In the 1960s, the notion of community medicine arose out of the war on poverty. These ideas spread through Brazil in the 1920s and were strengthened under the Vargas policy of national construction, but it was the Serviço Especial de Saúde Pública (Special Public Health Service) that was primarily responsible for lending them their practical and conceptual shape in this country.
Levin, A; Rahman, M A; Quayyum, Z; Routh, S; Barkat-e-Khuda
2001-01-01
This paper seeks to investigate the determinants of child health care seeking behaviours in rural Bangladesh. In particular, the effects of income, women's access to income, and the prices of obtaining child health care are examined. Data on the use of child curative care were collected in two rural areas of Bangladesh--Abhoynagar Thana of Jessore District and Mirsarai Thana of Chittagong District--in March 1997. In estimating the use of child curative care, the nested multinomial logit specification was used. The results of the analysis indicate that a woman's involvement in a credit union or income generation affected the likelihood that curative child care was used. Household wealth decreased the likelihood that the child had an illness episode and affected the likelihood that curative child care was sought. Among facility characteristics, travel time was statistically significant and was negatively associated with the use of a provider.
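For readers unfamiliar with the model named above, the standard textbook form of the nested multinomial logit choice probability is sketched below; the paper's exact nesting structure and covariates are not given in the abstract, so this is only a generic reference form.

```latex
P(j) = P(j \mid B_k)\, P(B_k), \qquad
P(j \mid B_k) = \frac{e^{V_j/\lambda_k}}{\sum_{i \in B_k} e^{V_i/\lambda_k}}, \qquad
P(B_k) = \frac{e^{\lambda_k IV_k}}{\sum_{m} e^{\lambda_m IV_m}}, \qquad
IV_k = \ln \sum_{i \in B_k} e^{V_i/\lambda_k}
```

Here V_j is the indirect utility of care alternative j (in this setting, a function of income, prices and travel time), B_k is the nest containing j, and the parameter lambda_k governs the correlation of unobserved utility within the nest.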
Reflections on curative health care in Nicaragua.
Slater, R G
1989-01-01
Improved health care in Nicaragua is a major priority of the Sandinista revolution; it has been pursued by major reforms of the national health care system, something few developing countries have attempted. In addition to its internationally recognized advances in public health, considerable progress has been made in health care delivery by expanding curative medical services through training more personnel and building more facilities to fulfill a commitment to free universal health coverage. The very uneven quality of medical care is the leading problem facing curative medicine now. Underlying factors include the difficulty of adequately training the greatly increased number of new physicians. Misdiagnosis and mismanagement continue to be major problems. The curative medical system is not well coordinated with the preventive sector. Recent innovations include initiation of a "medicina integral" residency, similar to family practice. Despite its inadequacies and the handicaps of war and poverty, the Nicaraguan curative medical system has made important progress. PMID:2705603
Is adhesive paper-tape closure of video assisted thoracoscopic port-sites safe?
Luckraz, Heyman; Rammohan, Kandadai S; Phillips, Mabel; O'Keefe, Peter A
2007-07-01
Video assisted thoracoscopic surgery (VATS) is used in lung surgery for diagnostic, staging, curative and palliative purposes. The port-sites are usually sutured with dissolvable sutures. The use of adhesive paper-tape for port-site closure was assessed in a prospective randomised double-blind controlled trial comparing sutured with adhesive paper-tape closure. The following outcomes were assessed: incidence of clinically significant pneumothorax, wound healing using the ASEPSIS score, patient comfort (pain score using a visual analog scale), the time difference between the two techniques of wound closure and cost savings. Thirty patients were recruited in each group. No clinically significant pneumothoraces occurred in either group. There were no significant differences between the two groups in terms of immediate post-operative pain scores, wound cosmesis and wound complications. It was quicker to close the wound with adhesive paper-tape, with a mean closure time per unit length of wound of 9.3 s/mm for the sutured group and 2.2 s/mm for the adhesive paper-tape group. The cost for wound closure (per patient) was $0.80 for the adhesive paper-tape group and $4.00 for the sutured group.
Resources available to the family of the child with cancer.
Monaco, G P
1986-07-15
Progressive and continuing advances in the care of the child with cancer have resulted in the potential cure of over 50% of our children. However, no matter how encouraging these statistics, nearly one half of our children still die from their disease. To bring the family through the cancer experience, we must meet the challenge of attending to their practical, spiritual, emotional and experiential requirements from diagnosis and treatment through possible relapse, death, or hoped-for cure and survival as an adult, for whom the stigmata of a history of cancer can be an obstacle to jobs, insurance, and productive lives, with the further shadow of a possible late second cancer caused by curative treatment. Families require access to a firm, unfragmented foundation of support, incorporating a multidisciplinary network of resources and involving the combined efforts of the primary health care team and the family's community. Medical and emotional counseling, peer support, spiritual guidance, and special community services contribute to the optimal care of both patient and family. In addition, legal advisory assistance and help with financial planning are important ingredients in assisting families.
NASA Astrophysics Data System (ADS)
Keck, N. N.; Macduff, M.; Martin, T.
2017-12-01
The Atmospheric Radiation Measurement (ARM) Data Management Facility (DMF) plays a critical support role in processing and curating data generated by the Department of Energy's ARM Program. Data are collected in near real time from hundreds of observational instruments spread out all over the globe. Data are then ingested hourly to produce time series data in NetCDF (Network Common Data Form) that include standardized metadata. Based on automated processes and a variety of user reviews, the data may need to be reprocessed. Final data sets are then stored and accessed by users through the ARM Archive. Over the course of 20 years, a suite of data visualization tools has been developed to facilitate the operational processes used to manage and maintain the more than 18,000 real-time events that move 1.3 TB of data each day through the various stages of the DMF's data system. This poster will present the resources and methodology used to capture metadata and the tools that assist in routine data management and discoverability.
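As a concrete illustration of the hourly NetCDF time-series files mentioned above, the sketch below writes a minimal file with a few standardized global attributes using the netCDF4 Python library; the variable names and attribute values are placeholders, not ARM's production conventions.

```python
import numpy as np
from netCDF4 import Dataset

with Dataset("example_ingest.nc", "w", format="NETCDF4") as nc:
    # Illustrative global metadata; real ingest files follow the program's own standards.
    nc.title = "Example hourly ingest file"
    nc.institution = "Example data management facility"

    nc.createDimension("time", None)                    # unlimited, so hourly records can be appended
    time = nc.createVariable("time", "f8", ("time",))
    time.units = "seconds since 1970-01-01 00:00:00"
    temp = nc.createVariable("temperature", "f4", ("time",), fill_value=-9999.0)
    temp.units = "degC"

    time[:] = np.arange(0, 24 * 3600, 3600)             # one day of hourly timestamps
    temp[:] = 20.0 + np.random.randn(24)                # placeholder measurements
```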
Data and the Shift in Systems, Services, and Literacy
ERIC Educational Resources Information Center
Mitchell, Erik T.
2012-01-01
This month, the "Journal of Web Librarianship" is exploring the idea of data curation and its uses in libraries. The word "data" is as universal now as the word "cloud" was last year, and it is no accident that libraries are exploring how best to support data curation services. Data curation involves library activities in just about every way,…
ERIC Educational Resources Information Center
Hodge, Zach
2017-01-01
Tullahoma City Schools, a rural district in Middle Tennessee, recently switched from traditional static textbooks to an online, open educational resource platform. As a result of this change the role of curator, a teacher who creates the Flexbook by compiling and organizing content, was created. This research project sought to add to the limited…
Jointly creating digital abstracts: dealing with synonymy and polysemy
2012-01-01
Background Ideally each Life Science article should get a ‘structured digital abstract’. This is a structured summary of the paper’s findings that is both human-verified and machine-readable. But articles can contain a large variety of information types and contextual details that all need to be reconciled with appropriate names, terms and identifiers, which poses a challenge to any curator. Current approaches mostly use tagging or limited entry-forms for semantic encoding. Findings We implemented a ‘controlled language’ as a more expressive representation method. We studied how usable this format was for wet-lab biologists who volunteered as curators. We assessed some issues that arise with the usability of ontologies and other controlled vocabularies for the encoding of structured information by ‘untrained’ curators. We take a user-oriented viewpoint and make recommendations that may prove useful for creating a better curation environment: one that can engage a large community of volunteer curators. Conclusions Entering information in a biocuration environment could improve in expressiveness and user-friendliness if curators were enabled to use synonymous and polysemous terms literally, with each term staying linked to an identifier. PMID:23110757
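A toy fragment may make the last point concrete: a curator types whichever synonymous surface form is natural, while the stored statement keeps the resolved identifier. The lexicon and the identifiers below are invented for illustration and do not come from the paper.

```python
# Surface form -> (identifier, preferred label); identifiers here are made up.
LEXICON = {
    "p53": ("EX:0001", "TP53"),
    "TP53": ("EX:0001", "TP53"),
    "tumor protein p53": ("EX:0001", "TP53"),
}

def encode(term):
    """Keep the term as typed by the curator, but link it to a stable identifier."""
    identifier, label = LEXICON[term]
    return {"as_typed": term, "id": identifier, "preferred_label": label}

print(encode("p53"))
# -> {'as_typed': 'p53', 'id': 'EX:0001', 'preferred_label': 'TP53'}
```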
NASA Astrophysics Data System (ADS)
Palmer, C. L.; Mayernik, M. S.; Weber, N.; Baker, K. S.; Kelly, K.; Marlino, M. R.; Thompson, C. A.
2013-12-01
The need for data curation is being recognized in numerous institutional settings as national research funding agencies extend data archiving mandates to cover more types of research grants. Data curation, however, is not only a practical challenge. It presents many conceptual and theoretical challenges that must be investigated to design appropriate technical systems, social practices and institutions, policies, and services. This presentation reports on outcomes from an investigation of research problems in data curation conducted as part of the Data Curation Education in Research Centers (DCERC) program. DCERC is developing a new model for educating data professionals to contribute to scientific research. The program is organized around foundational courses and field experiences in research and data centers for both master's and doctoral students. The initiative is led by the Graduate School of Library and Information Science at the University of Illinois at Urbana-Champaign, in collaboration with the School of Information Sciences at the University of Tennessee, and library and data professionals at the National Center for Atmospheric Research (NCAR). At the doctoral level DCERC is educating future faculty and researchers in data curation and establishing a research agenda to advance the field. The doctoral seminar, Research Problems in Data Curation, was developed and taught in 2012 by the DCERC principal investigator and two doctoral fellows at the University of Illinois. It was designed to define the problem space of data curation, examine relevant concepts and theories related to both technical and social perspectives, and articulate research questions that are either unexplored or under theorized in the current literature. There was a particular emphasis on the Earth and environmental sciences, with guest speakers brought in from NCAR, National Snow and Ice Data Center (NSIDC), and Rensselaer Polytechnic Institute. Through the assignments, students constructed dozens of research questions informed by class readings, presentations, and discussions. A technical report is in progress on the resulting research agenda covering: data standards; infrastructure; research context; data reuse; sharing and access; preservation; and conceptual foundations. This presentation will discuss the agenda and its importance for the geosciences, highlighting high priority research questions. It will also introduce the related research to be undertaken by two DCERC doctoral students at NCAR during the 2013-2014 academic year and other data curation research in progress by the doctoral DCERC team.
NASA Technical Reports Server (NTRS)
Snead, C. J.; McCubbin, F. M.; Nakamura-Messenger, K.; Righter, K.
2018-01-01
The Astromaterials Acquisition and Curation office at NASA Johnson Space Center has established an Advanced Curation program that is tasked with developing procedures, technologies, and data sets necessary for the curation of future astromaterials collections as envisioned by NASA exploration goals. One particular objective of the Advanced Curation program is the development of new methods for the collection, storage, handling and characterization of small (less than 100 micrometer) particles. Astromaterials Curation currently maintains four small particle collections: Cosmic Dust that has been collected in Earth's stratosphere by ER2 and WB-57 aircraft, Comet 81P/Wild 2 dust returned by NASA's Stardust spacecraft, interstellar dust that was returned by Stardust, and asteroid Itokawa particles that were returned by JAXA's Hayabusa spacecraft. NASA Curation is currently preparing for the anticipated return of two new astromaterials collections - asteroid Ryugu regolith to be collected by the Hayabusa2 spacecraft in 2021 (samples will be provided by JAXA as part of an international agreement), and asteroid Bennu regolith to be collected by the OSIRIS-REx spacecraft and returned in 2023. A substantial portion of these returned samples is expected to consist of small particle components, and mission requirements necessitate the development of new processing tools and methods in order to maximize the scientific yield from these valuable acquisitions. Here we describe initial progress towards the development of applicable sample handling methods for the successful curation of future small particle collections.
Data Curation Education Grounded in Earth Sciences and the Science of Data
NASA Astrophysics Data System (ADS)
Palmer, C. L.
2015-12-01
This presentation looks back over ten years of experience advancing data curation education at two Information Schools, highlighting the vital role of earth science case studies, expertise, and collaborations in the development of curriculum and internships. We also consider current data curation practices and workforce demand in data centers in the geosciences, drawing on studies conducted in the Data Curation Education in Research Centers (DCERC) initiative and the Site-Based Data Curation project. Outcomes from this decade of data curation research and education have reinforced the importance of key areas of information science in preparing data professionals to respond to the needs of user communities, provide services across disciplines, invest in standards and interoperability, and promote open data practices. However, a serious void remains in principles to guide education and practice that are distinct to the development of data systems and services that meet both local and global aims. We identify principles emerging from recent empirical studies on the reuse value of data in the earth sciences and propose an approach for advancing data curation education that depends on systematic coordination with data intensive research and propagation of current best practices from data centers into curriculum. This collaborative model can increase both domain-based and cross-disciplinary expertise among data professionals, ultimately improving data systems and services in our universities and data centers while building the new base of knowledge needed for a foundational science of data.
The Physician’s Attitude towards the End of the Existence
PĂTRU, EMILIA; CĂLINA, DANIELA CORNELIA; PĂTRU, C.L.; DOCEA, ANCA OANA; PASCU, ROXANA MARIA
2014-01-01
The physician's attitude towards death, a phenomenon he frequently encounters in his work, is most often ambiguous and uncertain, lacking a sufficiently coherent philosophical grounding. During the transition from life to death, when the human being who is about to relinquish life for good still lives, suffers, understands and needs assistance, most physicians adopt a posture of particular detachment. The physician's participation in assisting the patient, until then constant and natural, sharply decreases the moment the diagnosis becomes "there is nothing else to be done". This phrase should be only the conclusion of one phase of the assistance given by a physician, the curative, healing assistance, and the beginning of another, the phase of "assisting the dying person", which has to be an integral part of the physician's mission and represents a more difficult medicine, much more demanding for the physician. At this point, assistance and treatments depend on the ability of the person providing assistance to endure the fear of death in which he himself is included. The necessity of meeting the needs of dying people has led to the drafting of "a charter of the rights of the dying". Such a charter was drafted during the symposium "Terminally ill patient and helping person" organized by Wayne State University, Detroit, USA. Taking into account the idea that the dying person "has the right to live until the end" within the best possible conditions, palliative care has been developed. According to the French Society of Palliative Care (1996), the aim of palliative care is to ensure the quality of life of the patient (and not to extend it by any means) and of his family. In these conditions, pain control and psychological, social and spiritual support are essential. PMID:25729595
Astromaterials Acquisition and Curation Office (KT) Overview
NASA Technical Reports Server (NTRS)
Allen, Carlton
2014-01-01
The Astromaterials Acquisition and Curation Office has the unique responsibility to curate NASA's extraterrestrial samples - from past and forthcoming missions - into the indefinite future. Currently, curation includes documentation, preservation, physical security, preparation, and distribution of samples from the Moon, asteroids, comets, the solar wind, and the planet Mars. Each of these sample sets has a unique history and comes from a unique environment. The curation laboratories and procedures developed over 40 years have proven both necessary and sufficient to serve the evolving needs of a worldwide research community. A new generation of sample return missions to destinations across the solar system is being planned and proposed. The curators are developing the tools and techniques to meet the challenges of these new samples. Extraterrestrial samples pose unique curation requirements. These samples were formed and exist under conditions strikingly different from those on the Earth's surface. Terrestrial contamination would destroy much of the scientific significance of extraterrestrial materials. To preserve the research value of these precious samples, contamination must be minimized, understood, and documented. In addition, the samples must be preserved - as far as possible - from physical and chemical alteration. The elaborate curation facilities at JSC were designed and constructed, and have been operated for many years, to keep sample contamination and alteration to a minimum. Currently, JSC curates seven collections of extraterrestrial samples: (a) Lunar rocks and soils collected by the Apollo astronauts, (b) Meteorites collected on dedicated expeditions to Antarctica, (c) Cosmic dust collected by high-altitude NASA aircraft, (d) Solar wind atoms collected by the Genesis spacecraft, (e) Comet particles collected by the Stardust spacecraft, (f) Interstellar dust particles collected by the Stardust spacecraft, and (g) Asteroid soil particles collected by the Japan Aerospace Exploration Agency (JAXA) Hayabusa spacecraft. Each of these sample sets has a unique history and comes from a unique environment. We have developed specialized laboratories and practices over many years to preserve and protect the samples, not only for current research but for studies that may be carried out in the indefinite future.
Pigazzi, Alessio; Marshall, Helen; Croft, Julie; Corrigan, Neil; Copeland, Joanne; Quirke, Phil; West, Nick; Rautio, Tero; Thomassen, Niels; Tilney, Henry; Gudgeon, Mark; Bianchi, Paolo Pietro; Edlin, Richard; Hulme, Claire; Brown, Julia
2017-01-01
Importance Robotic rectal cancer surgery is gaining popularity, but limited data are available regarding safety and efficacy. Objective To compare robotic-assisted vs conventional laparoscopic surgery for risk of conversion to open laparotomy among patients undergoing resection for rectal cancer. Design, Setting, and Participants Randomized clinical trial comparing robotic-assisted vs conventional laparoscopic surgery among 471 patients with rectal adenocarcinoma suitable for curative resection conducted at 29 sites across 10 countries, including 40 surgeons. Recruitment of patients was from January 7, 2011, to September 30, 2014, follow-up was conducted at 30 days and 6 months, and final follow-up was on June 16, 2015. Interventions Patients were randomized to robotic-assisted (n = 237) or conventional (n = 234) laparoscopic rectal cancer resection, performed by either high (upper rectum) or low (total rectum) anterior resection or abdominoperineal resection (rectum and perineum). Main Outcomes and Measures The primary outcome was conversion to open laparotomy. Secondary end points included intraoperative and postoperative complications, circumferential resection margin positivity (CRM+) and other pathological outcomes, quality of life (36-Item Short Form Survey and 20-item Multidimensional Fatigue Inventory), bladder and sexual dysfunction (International Prostate Symptom Score, International Index of Erectile Function, and Female Sexual Function Index), and oncological outcomes. Results Among 471 randomized patients (mean [SD] age, 64.9 [11.0] years; 320 [67.9%] men), 466 (98.9%) completed the study. The overall rate of conversion to open laparotomy was 10.1%: 19 of 236 patients (8.1%) in the robotic-assisted laparoscopic group and 28 of 230 patients (12.2%) in the conventional laparoscopic group (unadjusted risk difference = 4.1% [95% CI, −1.4% to 9.6%]; adjusted odds ratio = 0.61 [95% CI, 0.31 to 1.21]; P = .16). The overall CRM+ rate was 5.7%; CRM+ occurred in 14 (6.3%) of 224 patients in the conventional laparoscopic group and 12 (5.1%) of 235 patients in the robotic-assisted laparoscopic group (unadjusted risk difference = 1.1% [95% CI, −3.1% to 5.4%]; adjusted odds ratio = 0.78 [95% CI, 0.35 to 1.76]; P = .56). Of the other 8 reported prespecified secondary end points, including intraoperative complications, postoperative complications, plane of surgery, 30-day mortality, bladder dysfunction, and sexual dysfunction, none showed a statistically significant difference between groups. Conclusions and Relevance Among patients with rectal adenocarcinoma suitable for curative resection, robotic-assisted laparoscopic surgery, as compared with conventional laparoscopic surgery, did not significantly reduce the risk of conversion to open laparotomy. These findings suggest that robotic-assisted laparoscopic surgery, when performed by surgeons with varying experience with robotic surgery, does not confer an advantage in rectal cancer resection. Trial Registration isrctn.org Identifier: ISRCTN80500123 PMID:29067426
NASA Technical Reports Server (NTRS)
Calaway, Michael J.; Allen, Carlton C.; Allton, Judith H.
2014-01-01
Future robotic and human spaceflight missions to the Moon, Mars, asteroids, and comets will require curating astromaterial samples with minimal inorganic and organic contamination to preserve the scientific integrity of each sample. 21st century sample return missions will focus on strict protocols for reducing organic contamination that have not been seen since the Apollo manned lunar landing program. To properly curate these materials, the Astromaterials Acquisition and Curation Office under the Astromaterial Research and Exploration Science Directorate at NASA Johnson Space Center houses and protects all extraterrestrial materials brought back to Earth that are controlled by the United States government. During fiscal year 2012, we conducted a year-long project to compile historical documentation and laboratory tests involving organic investigations at these facilities. In addition, we developed a plan to determine the current state of organic cleanliness in curation laboratories housing astromaterials. This was accomplished by focusing on current procedures and protocols for cleaning, sample handling, and storage. While the intention of this report is to give a comprehensive overview of the current state of organic cleanliness in JSC curation laboratories, it also provides a baseline for determining whether our cleaning procedures and sample handling protocols need to be adapted and/or augmented to meet the new requirements for future human spaceflight and robotic sample return missions.
OntoMate: a text-mining tool aiding curation at the Rat Genome Database
Liu, Weisong; Laulederkind, Stanley J. F.; Hayman, G. Thomas; Wang, Shur-Jen; Nigam, Rajni; Smith, Jennifer R.; De Pons, Jeff; Dwinell, Melinda R.; Shimoyama, Mary
2015-01-01
The Rat Genome Database (RGD) is the premier repository of rat genomic, genetic and physiologic data. Converting data from free text in the scientific literature to a structured format is one of the main tasks of all model organism databases. RGD spends considerable effort manually curating gene, Quantitative Trait Locus (QTL) and strain information. The rapidly growing volume of biomedical literature and the active research in the biological natural language processing (bioNLP) community have given RGD the impetus to adopt text-mining tools to improve curation efficiency. Recently, RGD has initiated a project to use OntoMate, an ontology-driven, concept-based literature search engine developed at RGD, as a replacement for the PubMed (http://www.ncbi.nlm.nih.gov/pubmed) search engine in the gene curation workflow. OntoMate tags abstracts with gene names, gene mutations, organism names and most of the 16 ontologies/vocabularies used at RGD. All terms/entities tagged to an abstract are listed with the abstract in the search results. All listed terms are linked both to data entry boxes and to a term browser in the curation tool. OntoMate also provides user-activated filters for species, date and other parameters relevant to the literature search. Using the system for literature search and import has streamlined the process compared to using PubMed. The system was built with a scalable and open architecture, including features specifically designed to accelerate the RGD gene curation process. With the use of bioNLP tools, RGD has added more automation to its curation workflow. Database URL: http://rgd.mcw.edu PMID:25619558
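The abstract describes concept-tagged abstracts and user-activated filters; the sketch below only illustrates that retrieval pattern over a made-up in-memory record shape, not OntoMate's actual data model or API.

```python
# Made-up record shape: each abstract carries the concept tags assigned to it.
abstracts = [
    {"pmid": "100001", "species": {"rat"},   "tags": {"Gene:Bdnf", "Disease:hypertension"}},
    {"pmid": "100002", "species": {"mouse"}, "tags": {"Gene:Bdnf"}},
]

def search(records, required_tags, species=None):
    """Return abstracts carrying all required concept tags, optionally filtered by species."""
    return [r["pmid"] for r in records
            if required_tags <= r["tags"] and (species is None or species in r["species"])]

print(search(abstracts, {"Gene:Bdnf"}, species="rat"))   # -> ['100001']
```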
A hybrid human and machine resource curation pipeline for the Neuroscience Information Framework
Bandrowski, A. E.; Cachat, J.; Li, Y.; Müller, H. M.; Sternberg, P. W.; Ciccarese, P.; Clark, T.; Marenco, L.; Wang, R.; Astakhov, V.; Grethe, J. S.; Martone, M. E.
2012-01-01
The breadth of information resources available to researchers on the Internet continues to expand, particularly in light of recently implemented data-sharing policies required by funding agencies. However, the nature of dense, multifaceted neuroscience data and the design of contemporary search engine systems make efficient, reliable and relevant discovery of such information a significant challenge. This challenge is specifically pertinent for online databases, whose dynamic content is ‘hidden’ from search engines. The Neuroscience Information Framework (NIF; http://www.neuinfo.org) was funded by the NIH Blueprint for Neuroscience Research to address the problem of finding and utilizing neuroscience-relevant resources such as software tools, data sets, experimental animals and antibodies across the Internet. From the outset, NIF sought to provide an accounting of available resources, while developing technical solutions to finding, accessing and utilizing them. The curators, therefore, are tasked with identifying and registering resources, examining data, writing configuration files to index and display data and keeping the contents current. In the initial phases of the project, all aspects of the registration and curation processes were manual. However, as the number of resources grew, manual curation became impractical. This report describes our experiences and successes with developing automated resource discovery and semiautomated type characterization with text-mining scripts that facilitate curation team efforts to discover, integrate and display new content. We also describe the DISCO framework, a suite of automated web services that significantly reduce the manual curation effort of periodically checking for resource updates. Lastly, we discuss DOMEO, a semi-automated annotation tool that improves the discovery and curation of resources that are not necessarily website-based (i.e. reagents, software tools). Although the ultimate goal of automation was to reduce the workload of the curators, it has resulted in valuable analytic by-products that address accessibility, use and citation of resources that can now be shared with resource owners and the larger scientific community. Database URL: http://neuinfo.org PMID:22434839
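The periodic update checks mentioned above can be pictured with a small sketch; it is not the DISCO framework itself, only the underlying idea of fingerprinting registered resources and flagging the ones whose content has changed since the last pass.

```python
import hashlib
import urllib.request

def fingerprint(url):
    """Fetch a resource and return a hash of its current content."""
    with urllib.request.urlopen(url, timeout=30) as resp:
        return hashlib.sha256(resp.read()).hexdigest()

def check_for_updates(registry):
    """registry maps url -> previously stored fingerprint; return URLs needing curator review."""
    changed = []
    for url, old_fp in registry.items():
        try:
            if fingerprint(url) != old_fp:
                changed.append(url)
        except OSError:
            changed.append(url)        # unreachable resources also deserve a curator's attention
    return changed
```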
Körner, Philipp; Ehrmann, Katja; Hartmannsgruber, Johann; Metz, Michaela; Steigerwald, Sabrina; Flentje, Michael; van Oorschot, Birgitt
2017-07-01
The benefits of patient-reported symptom assessment combined with integrated palliative care are well documented. This study assessed the symptom burden of palliative and curative-intent radiation oncology patients. Prior to the first consultation and at the end of radiotherapy (RT), all adult cancer patients planned to receive fractionated percutaneous RT were asked to complete the Edmonton Symptom Assessment Scale (ESAS; nine symptoms scored from 0 = no symptoms to 10 = worst possible symptoms). Mean values were used for curative vs. palliative and pre-post comparisons, and clinical relevance was evaluated (symptom values ≥ 4). Of 163 participating patients, 151 patients (90.9%) completed both surveys (116 curative and 35 palliative patients). Before beginning RT, 88.6% of palliative and 72.3% of curative patients showed at least one clinically relevant symptom. Curative patients most frequently named decreased general wellbeing (38.6%), followed by tiredness (35.0%), anxiety (32.4%), depression (30.0%), pain (26.3%), lack of appetite (23.5%), dyspnea (17.8%), drowsiness (8.0%) and nausea (6.1%). Palliative patients most frequently named decreased general wellbeing (62.8%), followed by pain (62.8%), tiredness (60.0%), lack of appetite (40.0%), anxiety (38.0%), depression (33.3%), dyspnea (28.5%), drowsiness (25.7%) and nausea (14.2%). At the end of RT, the proportion of curative and palliative patients with a clinically relevant symptom had increased significantly to 79.8% and 91.4%, respectively, whereas the proportion of patients reporting clinically relevant pain had decreased significantly (42.8 vs. 62.8%, respectively). Palliative patients had significantly increased tiredness. Curative patients reported significant increases in pain, tiredness, nausea, drowsiness, lack of appetite and restrictions in general wellbeing. Assessment of patient-reported symptoms was successfully implemented in routine radiation oncology practice. Overall, both groups showed a high symptom burden. The results demonstrate the need for systematic symptom assessment and programs for early integrated supportive and palliative care in radiation oncology.
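A brief sketch of the reported analysis conventions follows, assuming one ESAS record per patient and time point with the nine symptoms as 0-10 columns; the column names and the pandas layout are assumptions, while the ≥ 4 relevance threshold and the use of mean scores come from the abstract.

```python
import pandas as pd

SYMPTOMS = ["pain", "tiredness", "nausea", "depression", "anxiety",
            "drowsiness", "lack_of_appetite", "wellbeing", "dyspnea"]

def clinically_relevant_share(df, threshold=4):
    """Proportion of patients with at least one symptom scored at or above the threshold."""
    return (df[SYMPTOMS] >= threshold).any(axis=1).mean()

def mean_scores(df):
    """Mean score per symptom, as used for the curative vs. palliative comparisons."""
    return df[SYMPTOMS].mean()

# usage, given hypothetical DataFrames `pre_rt` and `post_rt` with one row per patient:
# print(clinically_relevant_share(pre_rt), clinically_relevant_share(post_rt))
```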
Fourches, Denis; Muratov, Eugene; Tropsha, Alexander
2010-01-01
Molecular modelers and cheminformaticians typically analyze experimental data generated by other scientists. Consequently, when it comes to data accuracy, cheminformaticians are always at the mercy of data providers who may inadvertently publish (partially) erroneous data. Thus, dataset curation is crucial for any cheminformatics analysis such as similarity searching, clustering, QSAR modeling, virtual screening, etc., especially now that the availability of chemical datasets in the public domain has skyrocketed in recent years. Despite the obvious importance of this preliminary step in the computational analysis of any dataset, there appears to be no commonly accepted guidance or set of procedures for chemical data curation. The main objective of this paper is to emphasize the need for a standardized chemical data curation strategy that should be followed at the onset of any molecular modeling investigation. Herein, we discuss several simple but important steps for cleaning chemical records in a database, including the removal of a fraction of the data that cannot be appropriately handled by conventional cheminformatics techniques. Such steps include the removal of inorganic and organometallic compounds, counterions, salts and mixtures; structure validation; ring aromatization; normalization of specific chemotypes; curation of tautomeric forms; and the deletion of duplicates. To emphasize the importance of data curation as a mandatory step in data analysis, we discuss several case studies where chemical curation of the original “raw” database enabled the successful modeling study (specifically, QSAR analysis) or resulted in a significant improvement of the model's prediction accuracy. We also demonstrate that in some cases rigorously developed QSAR models could even be used to correct erroneous biological data associated with chemical compounds. We believe that good practices for curation of chemical records outlined in this paper will be of value to all scientists working in the fields of molecular modeling, cheminformatics, and QSAR studies. PMID:20572635
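Several of the listed cleaning steps can be pictured with a short RDKit sketch; this is one possible present-day implementation, not the authors' original workflow, and the element whitelist and salt handling are deliberately simplified.

```python
from rdkit import Chem
from rdkit.Chem.SaltRemover import SaltRemover

ORGANIC_ELEMENTS = {"C", "H", "N", "O", "S", "P", "F", "Cl", "Br", "I", "B"}
remover = SaltRemover()          # uses RDKit's default salt/counterion definitions

def curate(smiles_list):
    seen, curated = set(), []
    for smi in smiles_list:
        mol = Chem.MolFromSmiles(smi)                    # structure validation
        if mol is None:
            continue
        mol = remover.StripMol(mol)                      # strip common counterions/salts
        if mol.GetNumAtoms() == 0:
            continue
        symbols = {atom.GetSymbol() for atom in mol.GetAtoms()}
        if not symbols <= ORGANIC_ELEMENTS:              # drop inorganics/organometallics
            continue
        canonical = Chem.MolToSmiles(mol)                # canonical, aromatized representation
        if canonical in seen:                            # delete duplicates
            continue
        seen.add(canonical)
        curated.append(canonical)
    return curated

print(curate(["CCO", "CCO", "[Na+].[Cl-]", "c1ccccc1C(=O)O"]))
```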
Recovery, Transportation and Acceptance to the Curation Facility of the Hayabusa Re-Entry Capsule
NASA Technical Reports Server (NTRS)
Abe, M.; Fujimura, A.; Yano, H.; Okamoto, C.; Okada, T.; Yada, T.; Ishibashi, Y.; Shirai, K.; Nakamura, T.; Noguchi, T.;
2011-01-01
The "Hayabusa" re-entry capsule was safely carried into the clean room of Sagamihara Planetary Sample Curation Facility in JAXA on June 18, 2010. After executing computed tomographic (CT) scanning, removal of heat shield, and surface cleaning of sample container, the sample container was enclosed into the clean chamber. After opening the sample container and residual gas sampling in the clean chamber, optical observation, sample recovery, sample separation for initial analysis will be performed. This curation work is continuing for several manths with some selected member of Hayabusa Asteroidal Sample Preliminary Examination Team (HASPET). We report here on the 'Hayabusa' capsule recovery operation, and transportation and acceptance at the curation facility of the Hayabusa re-entry capsule.
Lynx: a database and knowledge extraction engine for integrative medicine
Sulakhe, Dinanath; Balasubramanian, Sandhya; Xie, Bingqing; Feng, Bo; Taylor, Andrew; Wang, Sheng; Berrocal, Eduardo; Dave, Utpal; Xu, Jinbo; Börnigen, Daniela; Gilliam, T. Conrad; Maltsev, Natalia
2014-01-01
We have developed Lynx (http://lynx.ci.uchicago.edu)—a web-based database and a knowledge extraction engine, supporting annotation and analysis of experimental data and generation of weighted hypotheses on molecular mechanisms contributing to human phenotypes and disorders of interest. Its underlying knowledge base (LynxKB) integrates various classes of information from >35 public databases and private collections, as well as manually curated data from our group and collaborators. Lynx provides advanced search capabilities and a variety of algorithms for enrichment analysis and network-based gene prioritization to assist the user in extracting meaningful knowledge from LynxKB and experimental data, whereas its service-oriented architecture provides public access to LynxKB and its analytical tools via user-friendly web services and interfaces. PMID:24270788
How Can Gastric Cancer Molecular Profiling Guide Future Therapies?
Corso, Simona; Giordano, Silvia
2016-07-01
Gastric cancer is the third greatest global cause of cancer-related deaths. Despite its high prevalence, only recently have comprehensive genomic surveys shed light on its molecular alterations. As surgery is the only curative treatment strategy and chemotherapy has shown limited efficacy, new treatments are urgently needed. Many molecular therapies for gastric cancer have entered clinical trials but-apart from Trastuzumab and Ramucirumab-all have failed. We analyze the current knowledge of the genetic 'landscape' of gastric cancers, elaborating on novel, preclinical approaches. We posit that this knowledge lays the basis for identifying bona fide molecular targets and developing solid therapeutic approaches, requiring accurate patient selection and taking advantage of preclinical models to assist clinical development of novel combination strategies. Copyright © 2016 Elsevier Ltd. All rights reserved.
Sulaiman, Akebaier; Wan, Xuefeng; Fan, Junwei; Kasimu, Hadiliya; Dong, Xiaoyang; Wang, Xiaodong; Zhang, Lijuan; Abliz, Paride; Upur, Halmurat
2017-05-01
This paper analyzes and evaluates the curative effect and safety of 2% liranaftate ointment in treating patients with tinea pedis and tinea cruris. A total of 1,100 patients with tinea pedis or tinea corporis & cruris were selected and divided into two groups using a random number table: 550 patients were treated with 2% liranaftate ointment for external use (observation group) and the remaining 550 patients were treated with 1% bifonazole cream (control group). The treatment period was two weeks for patients with tinea corporis & cruris and four weeks for those with tinea pedis. A one-month follow-up was conducted to compare the curative effects of the two groups. After medication, the effectiveness rate was 87.65% (482/550) in the observation group and 84.91% (467/550) in the control group. After an average follow-up of 15.5±2.4, the effectiveness rate was 96.55% (531/550) in the observation group and 91.45% (503/550) in the control group. Patients in both groups recovered well with a low incidence of adverse reactions, and the between-group difference in overall curative effect was not statistically significant (P>0.05). The curative effect of 2% liranaftate ointment is safe and evident in treating tinea pedis and tinea corporis & cruris, so it is valuable for clinical popularization and application.
Colak, Emine; Ustuner, Mehmet Cengiz; Tekin, Neslihan; Colak, Ertugrul; Burukoglu, Dilek; Degirmenci, Irfan; Gunes, Hasan Veysi
2016-01-01
Cynara scolymus is a pharmacologically important medicinal plant containing phenolic acids and flavonoids. Experimental studies indicate antioxidant and hepatoprotective effects of C. scolymus, but its therapeutic effects in liver disease have not yet been studied. In the present study, the hepatocurative effects of C. scolymus leaf extract on carbon tetrachloride (CCl4)-induced oxidative stress and hepatic injury in rats were investigated using serum hepatic enzyme levels, an oxidative stress indicator (malondialdehyde, MDA), endogenous antioxidants, DNA fragmentation, p53, caspase 3 and histopathology. Animals were divided into six groups: control, olive oil, CCl4, C. scolymus leaf extract, recovery and curative. CCl4 was administered at a dose of 0.2 mL/kg twice daily in the CCl4, recovery and curative groups. Cynara scolymus extract was given orally for 2 weeks at a dose of 1.5 g/kg after CCl4 application in the curative group. Significant decreases in serum alanine aminotransferase (ALT) and aspartate aminotransferase (AST) levels were determined in the curative group. MDA levels were significantly lower in the curative group. A significant increase in superoxide dismutase (SOD) and catalase (CAT) activity was determined in the curative group. In the curative group, C. scolymus leaf extract application shifted the DNA fragmentation (%), p53 and caspase 3 levels of liver tissues towards the normal range. Our results indicated that C. scolymus leaf extract has hepatocurative effects on CCl4-induced oxidative stress and hepatic injury by reducing lipid peroxidation and restoring affected antioxidant systems towards the normal range. It also had positive effects on the regulatory mechanisms allowing repair of DNA damage in CCl4-induced hepatotoxicity.
van Putten, Margreet; Koëter, Marijn; van Laarhoven, Hanneke W M; Lemmens, Valery E P P; Siersema, Peter D; Hulshof, Maarten C C M; Verhoeven, Rob H A; Nieuwenhuijzen, Grard A P
2018-02-01
The aim of this article was to study the influence of hospital of diagnosis on the probability of receiving curative treatment and its impact on survival among patients with esophageal cancer (EC). Although EC surgery is centralized in the Netherlands, the disease is often diagnosed in hospitals that do not perform this procedure. Patients with potentially curable esophageal or gastroesophageal junction tumors (cT1-3,X, any N, M0,X) diagnosed between 2005 and 2013 were selected from the Netherlands Cancer Registry. Multilevel logistic regression was performed to examine the probability of undergoing curative treatment (resection with or without neoadjuvant treatment, definitive chemoradiotherapy, or local tumor excision) according to hospital of diagnosis. Effects of variation in the probability of undergoing curative treatment among these hospitals on survival were investigated by Cox regression. All 13,017 patients with potentially curable EC, diagnosed in 91 hospitals, were included. The proportion of patients receiving curative treatment ranged from 37% to 83% in 2005-2009 and from 45% to 86% in 2010-2013, depending on hospital of diagnosis. After adjustment for patient- and hospital-related characteristics, these proportions ranged from 41% to 77% and from 50% to 82%, respectively (both P < 0.001). Multivariable survival analyses showed that patients diagnosed in hospitals with a low probability of undergoing curative treatment had worse overall survival (hazard ratio = 1.13, 95% confidence interval 1.06-1.20; hazard ratio = 1.15, 95% confidence interval 1.07-1.24). The variation between hospitals of diagnosis in the probability of undergoing potentially curative treatment for EC, and its impact on survival, indicates that treatment decision making in EC may be improved.
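The registry analysis above relies on multilevel (random-intercept) logistic regression with patients nested in hospitals of diagnosis. As a rough, hedged illustration of the same idea on synthetic data, the sketch below uses a generalized estimating equation (GEE) with an exchangeable working correlation, which accounts for clustering by hospital but is not the authors' exact multilevel model; all variable names, coefficients and data are invented.

```python
# Hedged sketch (not the authors' code): a clustering-aware logistic model for
# the probability of receiving curative treatment, with patients grouped by
# hospital of diagnosis. GEE with an exchangeable working correlation stands in
# for the multilevel logistic regression used in the paper.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "hospital": rng.integers(0, 91, n),   # 91 hospitals of diagnosis
    "age": rng.normal(67, 10, n),
    "stage": rng.integers(1, 4, n),       # cT1-3 (toy coding)
})
# hospital-specific baseline propensities plus patient-level effects
hospital_effect = rng.normal(0, 0.5, 91)[df["hospital"]]
logit = 1.0 - 0.03 * (df["age"] - 67) - 0.3 * (df["stage"] - 1) + hospital_effect
df["curative"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = smf.gee("curative ~ age + C(stage)", groups="hospital", data=df,
                family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())
```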
Automatic reconstruction of a bacterial regulatory network using Natural Language Processing
Rodríguez-Penagos, Carlos; Salgado, Heladia; Martínez-Flores, Irma; Collado-Vides, Julio
2007-01-01
Background Manual curation of biological databases, an expensive and labor-intensive process, is essential for high quality integrated data. In this paper we report the implementation of a state-of-the-art Natural Language Processing system that creates computer-readable networks of regulatory interactions directly from different collections of abstracts and full-text papers. Our major aim is to understand how automatic annotation using Text-Mining techniques can complement manual curation of biological databases. We implemented a rule-based system to generate networks from different sets of documents dealing with regulation in Escherichia coli K-12. Results Performance evaluation is based on the most comprehensive transcriptional regulation database for any organism, the manually-curated RegulonDB, 45% of which we were able to recreate automatically. From our automated analysis we were also able to find some new interactions from papers not already curated, or that were missed in the manual filtering and review of the literature. We also put forward a novel Regulatory Interaction Markup Language better suited than SBML for simultaneously representing data of interest for biologists and text miners. Conclusion Manual curation of the output of automatic processing of text is a good way to complement a more detailed review of the literature, either for validating the results of what has been already annotated, or for discovering facts and information that might have been overlooked at the triage or curation stages. PMID:17683642
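As a hedged illustration of the general idea of rule-based interaction extraction (not the published system, whose grammar and entity recognition are far more sophisticated), the sketch below matches simple regulator-verb-target patterns in example sentences; the patterns, gene names and sentences are invented.

```python
import re

# Minimal sketch of rule-based extraction of regulatory interactions from
# abstract sentences. The pattern and example sentences are illustrative only.
PATTERN = re.compile(
    r"(?P<regulator>[A-Z][a-zA-Z0-9]+)\s+"
    r"(?P<verb>activates|represses|regulates)\s+"
    r"(?:the\s+)?(?:expression\s+of\s+)?"
    r"(?P<target>[a-z][a-zA-Z0-9]+)"
)

def extract_interactions(sentences):
    """Return (regulator, effect, target) triples found by the pattern."""
    triples = []
    for sentence in sentences:
        for match in PATTERN.finditer(sentence):
            triples.append((match["regulator"], match["verb"], match["target"]))
    return triples

if __name__ == "__main__":
    demo = [
        "CRP activates the expression of araBAD in the absence of glucose.",
        "LexA represses recA under normal growth conditions.",
    ]
    for triple in extract_interactions(demo):
        print(triple)
```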
DOE Office of Scientific and Technical Information (OSTI.GOV)
Putman, Tim E.; Lelong, Sebastien; Burgstaller-Muehlbacher, Sebastian
With the advancement of genome-sequencing technologies, new genomes are being sequenced daily. Although these sequences are deposited in publicly available data warehouses, their functional and genomic annotations (beyond genes which are predicted automatically) mostly reside in the text of primary publications. Professional curators are hard at work extracting those annotations from the literature for the most studied organisms and depositing them in structured databases. However, the resources don’t exist to fund the comprehensive curation of the thousands of newly sequenced organisms in this manner. Here, we describe WikiGenomes (wikigenomes.org), a web application that facilitates the consumption and curation of genomic data by the entire scientific community. WikiGenomes is based on Wikidata, an openly editable knowledge graph with the goal of aggregating published knowledge into a free and open database. WikiGenomes empowers the individual genomic researcher to contribute their expertise to the curation effort and integrates the knowledge into Wikidata, enabling it to be accessed by anyone without restriction.
Yoshimatsu, Kayo; Kawano, Noriaki; Kawahara, Nobuo; Akiyama, Hiroshi; Teshima, Reiko; Nishijima, Masahiro
2012-01-01
Developments in the use of genetically modified plants for human and livestock health and for phytoremediation were surveyed using information retrieved from Entrez PubMed, Chemical Abstracts Service, Google, congress abstracts and proceedings of related scientific societies, scientific journals, etc. The information obtained was classified into 8 categories according to the research objective and the usage of the transgenic plants: 1) nutraceuticals (functional foods), 2) oral vaccines, 3) edible curatives, 4) vaccine antigens, 5) therapeutic antibodies, 6) curatives, 7) diagnostic agents and reagents, and 8) phytoremediation. In total, 405 cases were collected from 2006 to 2010. The numbers of cases were 120 for nutraceuticals, 65 for oral vaccines, 25 for edible curatives, 36 for vaccine antigens, 36 for therapeutic antibodies, 76 for curatives, 15 for diagnostic agents and reagents, and 40 for phytoremediation (the category counts sum to 413 because some reports were related to several categories). Nutraceuticals, oral vaccines and curatives were predominant. The most frequently used edible crop was rice (51 cases), followed by tomato (28), lettuce (22), potato (18) and corn (15).
Putman, Tim E.; Lelong, Sebastien; Burgstaller-Muehlbacher, Sebastian; ...
2017-03-06
With the advancement of genome-sequencing technologies, new genomes are being sequenced daily. Although these sequences are deposited in publicly available data warehouses, their functional and genomic annotations (beyond genes which are predicted automatically) mostly reside in the text of primary publications. Professional curators are hard at work extracting those annotations from the literature for the most studied organisms and depositing them in structured databases. However, the resources don’t exist to fund the comprehensive curation of the thousands of newly sequenced organisms in this manner. Here, we describe WikiGenomes (wikigenomes.org), a web application that facilitates the consumption and curation of genomic data by the entire scientific community. WikiGenomes is based on Wikidata, an openly editable knowledge graph with the goal of aggregating published knowledge into a free and open database. WikiGenomes empowers the individual genomic researcher to contribute their expertise to the curation effort and integrates the knowledge into Wikidata, enabling it to be accessed by anyone without restriction.
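Because WikiGenomes stores its curated statements in Wikidata, the data can in principle be read back with a SPARQL query. The sketch below is a hedged illustration of such a query against the public Wikidata endpoint using the real properties P351 (Entrez Gene ID) and P703 (found in taxon); the taxon item is a placeholder to be replaced with the organism of interest, and this is not the application's own code.

```python
# Hedged sketch (not WikiGenomes' code): read gene annotations out of Wikidata
# with SPARQL. TAXON_QID is a placeholder and must be replaced with the
# Wikidata item of the organism of interest.
import requests

ENDPOINT = "https://query.wikidata.org/sparql"
TAXON_QID = "Q00000"   # placeholder organism item

query = f"""
SELECT ?gene ?geneLabel ?entrez WHERE {{
  ?gene wdt:P703 wd:{TAXON_QID} .      # found in taxon
  ?gene wdt:P351 ?entrez .             # Entrez Gene ID
  SERVICE wikibase:label {{ bd:serviceParam wikibase:language "en". }}
}}
LIMIT 10
"""

response = requests.get(ENDPOINT,
                        params={"query": query, "format": "json"},
                        headers={"User-Agent": "curation-sketch/0.1"},
                        timeout=30)
for row in response.json()["results"]["bindings"]:
    print(row["entrez"]["value"], row.get("geneLabel", {}).get("value", ""))
```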
STOP using just GO: a multi-ontology hypothesis generation tool for high throughput experimentation
2013-01-01
Background Gene Ontology (GO) enrichment analysis remains one of the most common methods for hypothesis generation from high throughput datasets. However, we believe that researchers strive to test other hypotheses that fall outside of GO. Here, we developed and evaluated a tool for hypothesis generation from gene or protein lists using ontological concepts present in manually curated text that describes those genes and proteins. Results As a consequence we have developed the method Statistical Tracking of Ontological Phrases (STOP), which expands the realm of testable hypotheses in gene set enrichment analyses by integrating automated annotations of genes to terms from over 200 biomedical ontologies. While not as precise as manually curated terms, the additional enriched concepts have value when coupled with traditional enrichment analyses using curated terms. Conclusion Multiple ontologies have been developed for gene and protein annotation; by using a dataset of both manually curated GO terms and automatically recognized concepts from curated text, we can expand the realm of hypotheses that can be discovered. The web application STOP is available at http://mooneygroup.org/stop/. PMID:23409969
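At the core of enrichment analyses like GO enrichment or STOP's phrase-level statistics is a test of whether a term is over-represented in a gene list. A minimal sketch of such a test, assuming made-up counts, is shown below using the hypergeometric distribution.

```python
# Minimal sketch of a term-enrichment test: a hypergeometric (one-sided
# Fisher) test asking whether an ontology term is annotated to more genes in
# the study list than expected by chance. All counts are invented.
from scipy.stats import hypergeom

population = 20000   # genes with any annotation
term_hits = 300      # genes annotated with the term in the population
study_size = 150     # genes in the submitted list
study_hits = 12      # genes in the list annotated with the term

# P(X >= study_hits) under sampling without replacement
p_value = hypergeom.sf(study_hits - 1, population, term_hits, study_size)
print(f"enrichment p-value = {p_value:.3g}")
```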
An Analysis of the Climate Data Initiative's Data Collection
NASA Astrophysics Data System (ADS)
Ramachandran, R.; Bugbee, K.
2015-12-01
The Climate Data Initiative (CDI) is a broad multi-agency effort of the U.S. government that seeks to leverage the extensive existing federal climate-relevant data to stimulate innovation and private-sector entrepreneurship to support national climate-change preparedness. The CDI project is a systematic effort to manually curate and share openly available climate data from various federal agencies. To date, the CDI has curated seven themes, or topics, relevant to climate change resiliency. These themes include Coastal Flooding, Food Resilience, Water, Ecosystem Vulnerability, Human Health, Energy Infrastructure, and Transportation. Each theme was curated by subject matter experts who selected datasets relevant to the topic at hand. An analysis of the entire Climate Data Initiative data collection and the data curated for each theme offers insights into which datasets are considered most relevant in addressing climate resiliency. Other aspects of the data collection will be examined including which datasets were the most visited or popular and which datasets were the most sought after for curation by the theme teams. Results from the analysis of the CDI collection will be presented in this talk.
CARD 2017: expansion and model-centric curation of the comprehensive antibiotic resistance database
Jia, Baofeng; Raphenya, Amogelang R.; Alcock, Brian; Waglechner, Nicholas; Guo, Peiyao; Tsang, Kara K.; Lago, Briony A.; Dave, Biren M.; Pereira, Sheldon; Sharma, Arjun N.; Doshi, Sachin; Courtot, Mélanie; Lo, Raymond; Williams, Laura E.; Frye, Jonathan G.; Elsayegh, Tariq; Sardar, Daim; Westman, Erin L.; Pawlowski, Andrew C.; Johnson, Timothy A.; Brinkman, Fiona S.L.; Wright, Gerard D.; McArthur, Andrew G.
2017-01-01
The Comprehensive Antibiotic Resistance Database (CARD; http://arpcard.mcmaster.ca) is a manually curated resource containing high quality reference data on the molecular basis of antimicrobial resistance (AMR), with an emphasis on the genes, proteins and mutations involved in AMR. CARD is ontologically structured, model centric, and spans the breadth of AMR drug classes and resistance mechanisms, including intrinsic, mutation-driven and acquired resistance. It is built upon the Antibiotic Resistance Ontology (ARO), a custom built, interconnected and hierarchical controlled vocabulary allowing advanced data sharing and organization. Its design allows the development of novel genome analysis tools, such as the Resistance Gene Identifier (RGI) for resistome prediction from raw genome sequence. Recent improvements include extensive curation of additional reference sequences and mutations, development of a unique Model Ontology and accompanying AMR detection models to power sequence analysis, new visualization tools, and expansion of the RGI for detection of emergent AMR threats. CARD curation is updated monthly based on an interplay of manual literature curation, computational text mining, and genome analysis. PMID:27789705
Payao: a community platform for SBML pathway model curation
Matsuoka, Yukiko; Ghosh, Samik; Kikuchi, Norihiro; Kitano, Hiroaki
2010-01-01
Summary: Payao is a community-based, collaborative web service platform for gene-regulatory and biochemical pathway model curation. The system combines Web 2.0 technologies and online model visualization functions to enable a collaborative community to annotate and curate biological models. Payao reads models in Systems Biology Markup Language format, displays them with CellDesigner, a process diagram editor that complies with the Systems Biology Graphical Notation, and provides an interface for model enrichment (adding tags and comments to the models) for access-controlled community members. Availability and implementation: Freely available for model curation service at http://www.payaologue.org. Web site implemented in Seasar Framework 2.0 with S2Flex2, MySQL 5.0 and Tomcat 5.5, with all major browsers supported. Contact: kitano@sbi.jp PMID:20371497
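A hedged sketch of working with the same exchange format Payao curates: reading an SBML file with the python-libsbml bindings and listing its species and reactions. The file name is a placeholder, and this is not Payao's implementation.

```python
# Hedged sketch: load an SBML model with python-libsbml and summarize it.
import libsbml

def summarize_sbml(path):
    document = libsbml.readSBMLFromFile(path)
    if document.getNumErrors() > 0:
        document.printErrors()
    model = document.getModel()
    if model is None:
        print("no model could be read")
        return
    print(f"model id:  {model.getId()}")
    print(f"species:   {model.getNumSpecies()}")
    print(f"reactions: {model.getNumReactions()}")
    for i in range(model.getNumSpecies()):
        species = model.getSpecies(i)
        print(" ", species.getId(), species.getName())

if __name__ == "__main__":
    summarize_sbml("pathway_model.xml")  # placeholder file name
```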
NASA Technical Reports Server (NTRS)
Calaway, M. J.; Allton, J. H.; Zeigler, R. A.; McCubbin, F. M.
2017-01-01
The Apollo program's Lunar Receiving Laboratory (LRL), building 37 at NASA's Manned Spaceflight Center (MSC), now Johnson Space Center (JSC), in Houston, TX, was the world's first astronaut and extraterrestrial sample quarantine facility (Fig. 1). It was constructed by Warrior Construction Co. and Warrior-Natkin-National at a cost of $8.1M between August 10, 1966 and June 26, 1967. In 1969, the LRL received and curated the first collection of extraterrestrial samples returned to Earth: the rock and soil samples of the Apollo 11 mission. This year, the JSC Astromaterials Acquisition and Curation Office (hereafter JSC curation) celebrates 50 years since the opening of the LRL and its legacy of laying the foundation for modern curation of extraterrestrial samples.
Curating and Integrating Data from Multiple Sources to Support Healthcare Analytics.
Ng, Kenney; Kakkanatt, Chris; Benigno, Michael; Thompson, Clay; Jackson, Margaret; Cahan, Amos; Zhu, Xinxin; Zhang, Ping; Huang, Paul
2015-01-01
As the volume and variety of healthcare related data continues to grow, the analysis and use of this data will increasingly depend on the ability to appropriately collect, curate and integrate disparate data from many different sources. We describe our approach to and highlight our experiences with the development of a robust data collection, curation and integration infrastructure that supports healthcare analytics. This system has been successfully applied to the processing of a variety of data types including clinical data from electronic health records and observational studies, genomic data, microbiomic data, self-reported data from surveys and self-tracked data from wearable devices from over 600 subjects. The curated data is currently being used to support healthcare analytic applications such as data visualization, patient stratification and predictive modeling.
Building an efficient curation workflow for the Arabidopsis literature corpus
Li, Donghui; Berardini, Tanya Z.; Muller, Robert J.; Huala, Eva
2012-01-01
TAIR (The Arabidopsis Information Resource) is the model organism database (MOD) for Arabidopsis thaliana, a model plant with a literature corpus of about 39 000 articles in PubMed, with over 4300 new articles added in 2011. We have developed a literature curation workflow incorporating both automated and manual elements to cope with this flood of new research articles. The current workflow can be divided into two phases: article selection and curation. Structured controlled vocabularies, such as the Gene Ontology and Plant Ontology are used to capture free text information in the literature as succinct ontology-based annotations suitable for the application of computational analysis methods. We also describe our curation platform and the use of text mining tools in our workflow. Database URL: www.arabidopsis.org PMID:23221298
Digital Curation of Earth Science Samples Starts in the Field
NASA Astrophysics Data System (ADS)
Lehnert, K. A.; Hsu, L.; Song, L.; Carter, M. R.
2014-12-01
Collection of physical samples in the field is an essential part of research in the Earth Sciences. Samples provide a basis for progress across many disciplines, from the study of global climate change now and over the Earth's history, to present and past biogeochemical cycles, to magmatic processes and mantle dynamics. The types of samples, methods of collection, and scope and scale of sampling campaigns are highly diverse, ranging from large-scale programs to drill rock and sediment cores on land, in lakes, and in the ocean, to environmental observation networks with continuous sampling, to single investigator or small team expeditions to remote areas around the globe or trips to local outcrops. Cyberinfrastructure for sample-related fieldwork needs to cater to the different needs of these diverse sampling activities, aligning with specific workflows, regional constraints such as connectivity or climate, and processing of samples. In general, digital tools should assist with capture and management of metadata about the sampling process (location, time, method) and the sample itself (type, dimension, context, images, etc.), management of the physical objects (e.g., sample labels with QR codes), and the seamless transfer of sample metadata to data systems and software relevant to the post-sampling data acquisition, data processing, and sample curation. In order to optimize CI capabilities for samples, tools and workflows need to adopt community-based standards and best practices for sample metadata, classification, identification and registration. This presentation will provide an overview and updates of several ongoing efforts that are relevant to the development of standards for digital sample management: the ODM2 project that has generated an information model for spatially-discrete, feature-based earth observations resulting from in-situ sensors and environmental samples, aligned with OGC's Observation & Measurements model (Horsburgh et al, AGU FM 2014); implementation of the IGSN (International Geo Sample Number) as a globally unique sample identifier via a distributed system of allocating agents and a central registry; and the EarthCube Research Coordination Network iSamplES (Internet of Samples in the Earth Sciences) that aims to improve sharing and curation of samples through the use of CI.
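As a small, hedged illustration of the field-side tooling described above (not any specific system named in the abstract), the sketch below serializes minimal sample metadata and renders a QR-code label that points at a placeholder IGSN landing page; the identifier, URL pattern, metadata fields and the choice of the qrcode library are all assumptions for illustration.

```python
# Hedged sketch of field-side digital sample labelling: record minimal sample
# metadata and render a QR code pointing at a placeholder landing page.
import json
import qrcode

sample = {
    "igsn": "IEXXX0001",                      # placeholder identifier
    "collector": "A. Geologist",              # placeholder name
    "collected": "2014-08-12T14:32:00Z",
    "latitude": 40.7128,
    "longitude": -74.0060,
    "material": "basalt",
    "method": "hand sample from outcrop",
}

# Persist the metadata alongside the physical sample record ...
with open(f"{sample['igsn']}.json", "w") as handle:
    json.dump(sample, handle, indent=2)

# ... and produce a printable label that resolves to the sample's landing page.
image = qrcode.make(f"https://igsn.org/{sample['igsn']}")  # placeholder URL pattern
image.save(f"{sample['igsn']}_label.png")
```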
Hayabusa Reentry and Recovery of Its Capsule -Quick Report
NASA Astrophysics Data System (ADS)
Kawaguchi, Junichiro; Yoshikawa, Makoto; Kuninaka, Hitoshi
The Hayabusa spacecraft successfully returned to the Earth and re-entered the atmosphere for sample recovery, following its successful touchdowns on the NEO Itokawa in 2005. The reentry occurred on June 13th and took place in the Woomera Prohibited Area (WPA) of Australia. This paper presents how the reentry and recovery operations were performed, and also reports the current status of the sample curation activity. The Hayabusa mission aims at demonstrating key technologies required for future full-scale sample-and-return missions. Nevertheless, the spacecraft adopted an actual sample-and-return flight sequence and was designed to make the world's first round trip to an extraterrestrial object, including touchdown and lift-off. It is propelled by onboard ion engines for interplanetary cruise. Launched in May of 2003, the Hayabusa spacecraft reached the NEO Itokawa in September of 2005 via an Earth gravity assist in May of 2004. It stayed there for about two and a half months and performed detailed scientific observation, mapping and determination of the shape. In November of 2005, the spacecraft made two touchdowns and lift-offs, attempting collection of surface samples. At the second opportunity, the spacecraft was commanded to shoot a projectile, but due to a programming problem the projectile was presumably not fired. However, the spacecraft may have captured a small amount of sample particles in an onboard catcher when it actually touched down on the surface. The spacecraft suffered a fuel leak in December of 2005, and communication resumed after seven weeks of hiatus. All of the ion engines had reached the end of their lives by November of 2009, but the project team devised an alternative drive configuration and successfully coped with the difficulty. Despite many hardships, the spacecraft was operated for the return cruise, and it made its reentry for sample recovery this June. The sample catcher was retrieved at the WPA and transported back to the curation facility of JAXA. The curators are currently examining and analyzing the recovered catcher. This presentation briefly reports the recent status of the spacecraft, the capsule and the sample analysis.
Application of curative therapy in the ward. 1920.
Marble, Henry Chase
2009-06-01
This Classic article is a reprint of the original work by Henry Chase Marble, Application of Curative Therapy in the Ward. An accompanying biographical sketch on Henry Chase Marble, MD, is available at DOI 10.1007/s11999-009-0789-7 . The Classic Article is (c)1920 by the Journal of Bone and Joint Surgery, Inc. and is reprinted with permission from Marble HC. Application of curative therapy in the ward. J Bone Joint Surg Am. 1920;2:136-138.
Capizzi, R L
1996-01-01
In the bench to bedside development of drugs to treat patients with cancer, the common guide to dose and schedule selection is toxicity to normal organs patterned after the preclinical profile of the drug. An understanding of the cellular pharmacology of the drug and specifically the cellular targets linked to the drug's effect is of substantial value in assisting the clinical investigator in selecting the proper dose and schedule of drug administration. The clinical development of ara-C for the treatment of acute myeloid leukemia (AML) provides a useful paradigm for the study of this process. An understanding of the cellular pharmacology, cytokinetics and pharmacokinetics of ara-C in leukemic mice showed substantial schedule-dependency. Exposure to high doses for a short duration (C x t) resulted in a palliative therapeutic outcome. In marked contrast, exposure to lower doses for a protracted period (c x T) was curative. Clinical use of ara-C in patients with AML patterned after the murine experience, c x T approach, has been of limited benefit in terms of long-term disease-free survival. Studies with human leukemia blasts from patients have shown that for the majority of patients, the initial rate-limiting step is membrane transport, the characteristics of which are substantially affected by extracellular drug concentration (dose). This pharmacologic impediment is eliminated with the blood levels attained during the infusion of gram doses (1-3 gm/m2) of the drug (high-dose ara-C, HiDaC) for shorter periods of time, a C x t approach. Clinical confirmation of these pharmacologic observations is evident in the therapeutic efficacy of HiDaC in patients with relapsed or SDaC-refractory acute leukemia. This is further emphasized by the significantly improved leukemia-free survival of patients with AML treated with HiDaC intensification during remission compared to those patients treated with milligram doses typical of SDaC protocols. Thus, the identification and monitoring of important parameters of drug action in tumors during the course of a clinical trial can be of substantial assistance in optimizing drug dose and schedule so as to attain the best therapeutic index.
NASA Astrophysics Data System (ADS)
Grimaldi, David; Engel, Michael S.
2005-05-01
This book chronicles the complete evolutionary history of insects--their living diversity and relationships as well as 400 million years of fossils. Introductory sections cover the living species diversity of insects, methods of reconstructing evolutionary relationships, basic insect structure, and the diverse modes of insect fossilization and major fossil deposits. Major sections then explore the relationships and evolution of each order of hexapods. The volume also chronicles major episodes in the evolutionary history of insects from their modest beginnings in the Devonian and the origin of wings hundreds of millions of years before pterosaurs and birds to the impact of mass extinctions and the explosive radiation of angiosperms on insects, and how they evolved into the most complex societies in nature. Whereas other volumes focus on either living species or fossils, this is the first comprehensive synthesis of all aspects of insect evolution. Illustrated with 955 photo- and electron- micrographs, drawings, diagrams, and field photos, many in full color and virtually all of them original, this reference will appeal to anyone engaged with insect diversity--professional entomologists and students, insect and fossil collectors, and naturalists. David Grimaldi and Michael S. Engel have collectively published over 200 scientific articles and monographs on the relationships and fossil record of insects, including 10 articles in the journals Science, Nature, and Proceedings of the National Academy of Sciences. David Grimaldi is curator in the Division of Invertebrate Zoology, American Museum of Natural History and adjunct professor at Cornell University, Columbia University, and the City University of New York. David Grimaldi has traveled in 40 countries on 6 continents, collecting and studying recent species of insects and conducting fossil excavations. He is the author of Amber: Window to the Past (Abrams, 2003). Michael S. Engel is an assistant professor in the Division of Entomology at the University of Kansas; assistant curator at the Natural History Museum, University of Kansas; research associate of the American Museum of Natural History; and fellow of the Linnean Society of London. Engel has visited numerous countries for entomological and paleontological studies, doing most of his fieldwork in Central Asia, Asia Minor, and the Western Hemisphere.
Garboś, Sławomir; Swiecicka, Dorota
2011-01-01
The maximum admissible concentration level (MACL) of barium in natural mineral waters, natural spring waters and potable waters was set at 1 mg/l, while the MACLs of this element in natural curative waters intended for drinking therapies and for inhalations were set at 1.0 mg/l and 10.0 mg/l, respectively. These requirements apply to therapies lasting longer than one month. The above-mentioned maximum admissible concentration levels of barium in consumed waters were established taking into account the current criteria of the World Health Organization, which set the guideline value for this element in water intended for human consumption at 0.7 mg/l. In this work, a developed and validated method for the determination of barium by inductively coupled plasma emission spectrometry was applied to 45 natural curative waters sampled from 24 spa districts located in Poland. The barium concentrations determined ranged from 0.0036 mg/l to 24.0 mg/l. Natural curative waters with barium concentrations in the ranges 0.0036-0.073 mg/l, 0.0036-1.31 mg/l and 0.0036-24.0 mg/l were applied to drinking therapy, inhalations and balneotherapy, respectively (some of the waters analyzed were used simultaneously for drinking therapy, inhalations and balneotherapy). In 11 natural curative waters, concentrations exceeding the 1 mg/l limit were observed; however, these were classified mainly as waters applied to balneotherapy and, in two cases, to inhalation therapies (barium concentrations of 1.08 mg/l and 1.31 mg/l). The procedure for classifying curative waters for appropriate therapies, based among other things on barium concentrations, meets the requirements of the Decree of the Minister of Health of 13 April 2006 on the range of studies indispensable for establishing the medicinal properties of natural curative materials and the curative properties of climate, the criteria for their assessment, and the specimen certificate confirming those properties.
neXtA5: accelerating annotation of articles via automated approaches in neXtProt.
Mottin, Luc; Gobeill, Julien; Pasche, Emilie; Michel, Pierre-André; Cusin, Isabelle; Gaudet, Pascale; Ruch, Patrick
2016-01-01
The rapid increase in the number of published articles poses a challenge for curated databases to remain up-to-date. To help the scientific community and database curators deal with this issue, we have developed an application, neXtA5, which prioritizes the literature for specific curation requirements. Our system, neXtA5, is a curation service composed of three main elements. The first component is a named-entity recognition module, which annotates MEDLINE over some predefined axes. This report focuses on three axes: Diseases, the Molecular Function and Biological Process sub-ontologies of the Gene Ontology (GO). The automatic annotations are then stored in a local database, BioMed, for each annotation axis. Additional entities such as species and chemical compounds are also identified. The second component is an existing search engine, which retrieves the most relevant MEDLINE records for any given query. The third component uses the content of BioMed to generate an axis-specific ranking, which takes into account the density of named-entities as stored in the BioMed database. The two ranked lists are ultimately merged using a linear combination, which has been specifically tuned to support the annotation of each axis. The fine-tuning of the coefficients is formally reported for each axis-driven search. Compared with PubMed, which is the system used by most curators, the improvement is the following: +231% for Diseases, +236% for Molecular Functions and +3153% for Biological Process when measuring the precision of the top-returned PMID (P0 or mean reciprocal rank). The current search methods significantly improve the search effectiveness of curators for three important curation axes. Further experiments are being performed to extend the curation types, in particular protein-protein interactions, which require specific relationship extraction capabilities. In parallel, user-friendly interfaces powered with a set of JSON web services are currently being implemented into the neXtProt annotation pipeline. Available on: http://babar.unige.ch:8082/neXtA5. Database URL: http://babar.unige.ch:8082/neXtA5/fetcher.jsp. © The Author(s) 2016. Published by Oxford University Press.
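A minimal sketch of the final fusion step described above, assuming invented PMIDs, scores and a single mixing coefficient: two rankings keyed by PMID are merged with a linear combination and re-sorted. The real neXtA5 coefficients are tuned separately for each curation axis.

```python
# Minimal sketch of merging a generic search ranking with an axis-specific,
# entity-density ranking via a linear combination. Scores and PMIDs are invented.
def merge_rankings(search_scores, density_scores, alpha=0.6):
    """Combine two score dicts keyed by PMID: alpha*search + (1-alpha)*density."""
    pmids = set(search_scores) | set(density_scores)
    combined = {
        pmid: alpha * search_scores.get(pmid, 0.0)
        + (1.0 - alpha) * density_scores.get(pmid, 0.0)
        for pmid in pmids
    }
    return sorted(combined.items(), key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    search = {"100001": 0.92, "100002": 0.75, "100003": 0.40}
    density = {"100002": 0.95, "100003": 0.80, "100004": 0.60}
    for pmid, score in merge_rankings(search, density):
        print(pmid, round(score, 3))
```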
Sineshaw, Helmneh M; Wu, Xiao-Cheng; Flanders, W Dana; Osarogiagbon, Raymond Uyiosa; Jemal, Ahmedin
2016-06-01
Previous studies reported racial and socioeconomic disparities in receipt of curative-intent surgery for early-stage non-small cell lung cancer (NSCLC) in the United States. We examined variation in receipt of surgery and whether the racial disparity varies by state. Patients in whom stage I or II NSCLC was diagnosed from 2007 to 2011 were identified from 38 state and the District of Columbia population-based cancer registries compiled by the North American Association of Central Cancer Registries. The percentage of patients receiving curative-intent surgery was calculated for each registry. Adjusted risk ratios were generated by using modified Poisson regression to control for sociodemographic (e.g., age, sex, race, insurance) and clinical (e.g., grade, stage) factors. Non-Hispanic (NH) whites and Massachusetts were used as references for comparisons because they had the lowest uninsured rates. In all registries combined, 66.4% of patients with early-stage NSCLC (73,475 of 110,711) received curative-intent surgery. Receipt of curative-intent surgery for early-stage NSCLC varied substantially by state, ranging from 52.2% to 56.1% in Wyoming, Louisiana, and New Mexico to 75.2% to 77.2% in Massachusetts, New Jersey, and Utah. In a multivariable analysis, the likelihood of receiving curative-intent surgery was significantly lower in all but nine states/registries compared with Massachusetts, ranging from 7% lower in California to 25% lower in Wyoming. Receipt of curative-intent surgery for early-stage NSCLC was lower for NH blacks than for NH whites in every state, although statistically significant only in Florida and Texas. Receipt of curative-intent surgery for early-stage NSCLC varies substantially across states in the United States, with northeastern states generally showing the highest rates. Further, receipt of treatment appeared to be lower in NH blacks than in NH whites in every state, although statistically significant only in Florida and Texas. Copyright © 2016 International Association for the Study of Lung Cancer. Published by Elsevier Inc. All rights reserved.
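For readers unfamiliar with "modified Poisson regression", the hedged sketch below shows the usual recipe on simulated data: a Poisson GLM fit to a binary outcome with robust (sandwich) standard errors, whose exponentiated coefficients are adjusted risk ratios. The data frame, variable names and coding are invented and do not reproduce the registry analysis.

```python
# Hedged sketch of a "modified Poisson" model for adjusted risk ratios:
# Poisson GLM on a binary outcome with robust variance. Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "surgery": rng.integers(0, 2, n),   # 1 = received curative-intent surgery
    "age": rng.normal(68, 9, n),
    "black": rng.integers(0, 2, n),     # toy race coding for illustration
    "stage_ii": rng.integers(0, 2, n),
})

model = smf.glm("surgery ~ age + black + stage_ii", data=df,
                family=sm.families.Poisson())
result = model.fit(cov_type="HC1")      # robust variance -> valid inference
print(np.exp(result.params))            # adjusted risk ratios
print(result.summary())
```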
Advancing the application of systems thinking in health: why cure crowds out prevention.
Bishai, David; Paina, Ligia; Li, Qingfeng; Peters, David H; Hyder, Adnan A
2014-06-16
This paper presents a system dynamics computer simulation model to illustrate unintended consequences of apparently rational allocations to curative and preventive services. A modeled population is subject to only two diseases. Disease A is a curable disease that can be shortened by curative care. Disease B is an instantly fatal but preventable disease. Curative care workers are financed by public spending and private fees to cure disease A. Non-personal, preventive services are delivered by public health workers supported solely by public spending to prevent disease B. Each type of worker tries to tilt the balance of government spending towards their interests. Their influence on the government is proportional to their accumulated revenue. The model demonstrates effects on lost disability-adjusted life years and costs over the course of several epidemics of each disease. Policy interventions are tested including: i) an outside donor rationally donates extra money to each type of disease precisely in proportion to the size of epidemics of each disease; ii) lobbying is eliminated; iii) fees for personal health services are eliminated; iv) the government continually rebalances the funding for prevention by ring-fencing it to protect it from lobbying. The model exhibits a "spend more get less" equilibrium in which higher revenue by the curative sector is used to influence government allocations away from prevention towards cure. Spending more on curing disease A leads paradoxically to a higher overall disease burden of unprevented cases of disease B. This paradoxical behavior of the model can be stopped by eliminating lobbying, eliminating fees for curative services, and ring-fencing public health funding. We have created an artificial system as a laboratory to gain insights about the trade-offs between curative and preventive health allocations, and the effect of indicative policy interventions. The underlying dynamics of this artificial system resemble features of modern health systems where a self-perpetuating industry has grown up around disease-specific curative programs like HIV/AIDS or malaria. The model shows how the growth of curative care services can crowd out both fiscal and policy space for the practice of population-level prevention work, requiring dramatic interventions to overcome these trends.
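The sketch below is a deliberately tiny, discrete-time caricature of the dynamic described above, with all parameters invented: curative revenue buys lobbying influence, the prevention share of the budget erodes unless it is ring-fenced, and the burden of the preventable disease rises as a result. It is not the authors' model, only an illustration of the feedback loop.

```python
# Toy caricature of "cure crowds out prevention": lobbying proportional to
# accumulated curative revenue shifts the budget away from prevention.
def simulate(years=30, budget=100.0, ring_fenced=False):
    curative_revenue = 10.0
    prevention_share = 0.5
    burden = []
    for _ in range(years):
        if not ring_fenced:
            # lobbying pressure proportional to accumulated curative revenue
            prevention_share = max(0.1, prevention_share - 0.002 * curative_revenue)
        prevention_spend = budget * prevention_share
        curative_spend = budget * (1.0 - prevention_share)
        prevented_cases = 5.0 * prevention_spend          # toy dose-response
        unprevented_burden = max(0.0, 1000.0 - prevented_cases)
        curative_revenue += 0.3 * curative_spend          # fees + public funds accumulate
        burden.append(unprevented_burden)
    return burden

print("free allocation :", round(simulate()[-1], 1))
print("ring-fenced     :", round(simulate(ring_fenced=True)[-1], 1))
```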
A statistical approach to identify, monitor, and manage incomplete curated data sets.
Howe, Douglas G
2018-04-02
Many biological knowledge bases gather data through expert curation of published literature. High data volume, selective partial curation, delays in access, and publication of data prior to the ability to curate it can result in incomplete curation of published data. Knowing which data sets are incomplete and how incomplete they are remains a challenge. Awareness that a data set may be incomplete is important for proper interpretation, helps to avoid flawed hypothesis generation, and can justify further exploration of published literature for additional relevant data. Computational methods to assess data set completeness are needed. One such method is presented here. In this work, a multivariate linear regression model was used to identify genes in the Zebrafish Information Network (ZFIN) Database having incomplete curated gene expression data sets. Starting with 36,655 gene records from ZFIN, data aggregation, cleansing, and filtering reduced the set to 9870 gene records suitable for training and testing the model to predict the number of expression experiments per gene. Feature engineering and selection identified the following predictive variables: the number of journal publications; the number of journal publications already attributed for gene expression annotation; the percent of journal publications already attributed for expression data; the gene symbol; and the number of transgenic constructs associated with each gene. Twenty-five percent of the gene records (2483 genes) were used to train the model. The remaining 7387 genes were used to test the model. One hundred and twenty-two and 165 of the 7387 tested genes were identified as missing expression annotations based on their residuals being outside the model's lower or upper 95% confidence interval, respectively. The model had precision of 0.97 and recall of 0.71 at the negative 95% confidence interval and precision of 0.76 and recall of 0.73 at the positive 95% confidence interval. This method can be used to identify data sets that are incompletely curated, as demonstrated using the gene expression data set from ZFIN. This information can help both database resources and data consumers gauge when it may be useful to look further for published data to augment the existing expertly curated information.
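A hedged re-creation of the approach on synthetic data (the feature names mirror those in the abstract, but the numbers are simulated and the ZFIN data are not used): fit a linear regression for the expected number of expression experiments per gene and flag genes whose observed counts fall below the lower 95% prediction bound.

```python
# Hedged sketch on simulated data: regression-based flagging of possibly
# under-curated genes via residuals outside the 95% prediction interval.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 2000
df = pd.DataFrame({
    "publications": rng.poisson(12, n),
    "attributed_pubs": rng.poisson(5, n),
    "constructs": rng.poisson(2, n),
})
df["expression_experiments"] = (
    0.8 * df["attributed_pubs"] + 0.2 * df["constructs"] + rng.normal(0, 1.5, n)
)
# simulate under-curation for a handful of genes
df.loc[:20, "expression_experiments"] = 0

model = smf.ols(
    "expression_experiments ~ publications + attributed_pubs + constructs",
    data=df,
).fit()
predictions = model.get_prediction(df).summary_frame(alpha=0.05)
under_curated = df[df["expression_experiments"] < predictions["obs_ci_lower"]]
print(f"{len(under_curated)} genes flagged as possibly under-curated")
```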
neXtA5: accelerating annotation of articles via automated approaches in neXtProt
Mottin, Luc; Gobeill, Julien; Pasche, Emilie; Michel, Pierre-André; Cusin, Isabelle; Gaudet, Pascale; Ruch, Patrick
2016-01-01
The rapid increase in the number of published articles poses a challenge for curated databases to remain up-to-date. To help the scientific community and database curators deal with this issue, we have developed an application, neXtA5, which prioritizes the literature for specific curation requirements. Our system, neXtA5, is a curation service composed of three main elements. The first component is a named-entity recognition module, which annotates MEDLINE over some predefined axes. This report focuses on three axes: Diseases, the Molecular Function and Biological Process sub-ontologies of the Gene Ontology (GO). The automatic annotations are then stored in a local database, BioMed, for each annotation axis. Additional entities such as species and chemical compounds are also identified. The second component is an existing search engine, which retrieves the most relevant MEDLINE records for any given query. The third component uses the content of BioMed to generate an axis-specific ranking, which takes into account the density of named-entities as stored in the Biomed database. The two ranked lists are ultimately merged using a linear combination, which has been specifically tuned to support the annotation of each axis. The fine-tuning of the coefficients is formally reported for each axis-driven search. Compared with PubMed, which is the system used by most curators, the improvement is the following: +231% for Diseases, +236% for Molecular Functions and +3153% for Biological Process when measuring the precision of the top-returned PMID (P0 or mean reciprocal rank). The current search methods significantly improve the search effectiveness of curators for three important curation axes. Further experiments are being performed to extend the curation types, in particular protein–protein interactions, which require specific relationship extraction capabilities. In parallel, user-friendly interfaces powered with a set of JSON web services are currently being implemented into the neXtProt annotation pipeline. Available on: http://babar.unige.ch:8082/neXtA5 Database URL: http://babar.unige.ch:8082/neXtA5/fetcher.jsp PMID:27374119
Data Curation and Visualization for MuSIASEM Analysis of the Nexus
NASA Astrophysics Data System (ADS)
Renner, Ansel
2017-04-01
A novel software-based approach to relational analysis applying recent theoretical advancements of the Multi-Scale Integrated Analysis of Societal and Ecosystem Metabolism (MuSIASEM) accounting framework is presented. This research explores and explains underutilized ways in which software can assist complex system analysis across the stages of data collection, exploration, analysis and dissemination, in a transparent and collaborative manner. This work is being conducted as part of, and in support of, the four-year European Commission H2020 project Moving Towards Adaptive Governance in Complexity: Informing Nexus Security (MAGIC). In MAGIC, theoretical advancements to MuSIASEM propose a powerful new approach to spatial-temporal WEFC relational analysis in accordance with a structural-functional scaling mechanism appropriate for biophysically relevant complex system analyses. The software is designed primarily in JavaScript using the Angular2 model-view-controller framework and the Data-Driven Documents (D3) library. These design choices clarify and modularize data flow, simplify research practitioners' work, allow for and assist stakeholder involvement, and advance collaboration at all stages. Data requirements and a scalable, robust yet lightweight data structuring approach will first be explained. Following, algorithms to process this data will be explored. Data interfaces and data visualization approaches will lastly be presented and described.
The ECOTOXicology Knowledgebase (ECOTOX) is a comprehensive, curated database that summarizes toxicology data from single chemical exposure studies to aquatic life, terrestrial plants, and wildlife. The ECOTOX Knowledgebase currently has curated data from over 47,000 references a...
Ci4SeR--curation interface for semantic resources--evaluation with adverse drug reactions.
Souvignet, Julien; Asfari, Hadyl; Declerck, Gunnar; Lardon, Jérémy; Trombert-Paviot, Béatrice; Jaulent, Marie-Christine; Bousquet, Cédric
2014-01-01
Evaluation and validation have become a crucial problem for the development of semantic resources. We developed Ci4SeR, a graphical user interface to optimize curation work (not taking structural aspects into account), suitable for any type of resource with lightweight description logic. We tested it on OntoADR, an ontology of adverse drug reactions. A single curator reviewed 326 terms (1020 axioms) in an estimated 120 hours (2.71 concepts and 8.5 axioms reviewed per hour) and added 1874 new axioms (15.6 axioms per hour). Compared with previous manual endeavours, the interface increased the rate of concept review by 68% and the rate of axiom addition by 486%. A wider use of Ci4SeR would help the curation of semantic resources and improve the completeness of knowledge modelling.
Text mining for the biocuration workflow
Hirschman, Lynette; Burns, Gully A. P. C; Krallinger, Martin; Arighi, Cecilia; Cohen, K. Bretonnel; Valencia, Alfonso; Wu, Cathy H.; Chatr-Aryamontri, Andrew; Dowell, Karen G.; Huala, Eva; Lourenço, Anália; Nash, Robert; Veuthey, Anne-Lise; Wiegers, Thomas; Winter, Andrew G.
2012-01-01
Molecular biology has become heavily dependent on biological knowledge encoded in expert curated biological databases. As the volume of biological literature increases, biocurators need help in keeping up with the literature; (semi-) automated aids for biocuration would seem to be an ideal application for natural language processing and text mining. However, to date, there have been few documented successes for improving biocuration throughput using text mining. Our initial investigations took place for the workshop on ‘Text Mining for the BioCuration Workflow’ at the third International Biocuration Conference (Berlin, 2009). We interviewed biocurators to obtain workflows from eight biological databases. This initial study revealed high-level commonalities, including (i) selection of documents for curation; (ii) indexing of documents with biologically relevant entities (e.g. genes); and (iii) detailed curation of specific relations (e.g. interactions); however, the detailed workflows also showed many variabilities. Following the workshop, we conducted a survey of biocurators. The survey identified biocurator priorities, including the handling of full text indexed with biological entities and support for the identification and prioritization of documents for curation. It also indicated that two-thirds of the biocuration teams had experimented with text mining and almost half were using text mining at that time. Analysis of our interviews and survey provide a set of requirements for the integration of text mining into the biocuration workflow. These can guide the identification of common needs across curated databases and encourage joint experimentation involving biocurators, text mining developers and the larger biomedical research community. PMID:22513129
Text mining for the biocuration workflow.
Hirschman, Lynette; Burns, Gully A P C; Krallinger, Martin; Arighi, Cecilia; Cohen, K Bretonnel; Valencia, Alfonso; Wu, Cathy H; Chatr-Aryamontri, Andrew; Dowell, Karen G; Huala, Eva; Lourenço, Anália; Nash, Robert; Veuthey, Anne-Lise; Wiegers, Thomas; Winter, Andrew G
2012-01-01
Molecular biology has become heavily dependent on biological knowledge encoded in expert curated biological databases. As the volume of biological literature increases, biocurators need help in keeping up with the literature; (semi-) automated aids for biocuration would seem to be an ideal application for natural language processing and text mining. However, to date, there have been few documented successes for improving biocuration throughput using text mining. Our initial investigations took place for the workshop on 'Text Mining for the BioCuration Workflow' at the third International Biocuration Conference (Berlin, 2009). We interviewed biocurators to obtain workflows from eight biological databases. This initial study revealed high-level commonalities, including (i) selection of documents for curation; (ii) indexing of documents with biologically relevant entities (e.g. genes); and (iii) detailed curation of specific relations (e.g. interactions); however, the detailed workflows also showed many variabilities. Following the workshop, we conducted a survey of biocurators. The survey identified biocurator priorities, including the handling of full text indexed with biological entities and support for the identification and prioritization of documents for curation. It also indicated that two-thirds of the biocuration teams had experimented with text mining and almost half were using text mining at that time. Analysis of our interviews and survey provide a set of requirements for the integration of text mining into the biocuration workflow. These can guide the identification of common needs across curated databases and encourage joint experimentation involving biocurators, text mining developers and the larger biomedical research community.
Névéol, Aurélie; Wilbur, W John; Lu, Zhiyong
2012-01-01
High-throughput experiments and bioinformatics techniques are creating an exploding volume of data that are becoming overwhelming to keep track of for biologists and researchers who need to access, analyze and process existing data. Much of the available data are being deposited in specialized databases, such as the Gene Expression Omnibus (GEO) for microarrays or the Protein Data Bank (PDB) for protein structures and coordinates. Data sets are also being described by their authors in publications archived in literature databases such as MEDLINE and PubMed Central. Currently, the curation of links between biological databases and the literature mainly relies on manual labour, which makes it a time-consuming and daunting task. Herein, we analysed the current state of link curation between GEO, PDB and MEDLINE. We found that the link curation is heterogeneous depending on the sources and databases involved, and that overlap between sources is low, <50% for PDB and GEO. Furthermore, we showed that text-mining tools can automatically provide valuable evidence to help curators broaden the scope of articles and database entries that they review. As a result, we made recommendations to improve the coverage of curated links, as well as the consistency of information available from different databases while maintaining high-quality curation. Database URLs: http://www.ncbi.nlm.nih.gov/PubMed, http://www.ncbi.nlm.nih.gov/geo/, http://www.rcsb.org/pdb/
Névéol, Aurélie; Wilbur, W. John; Lu, Zhiyong
2012-01-01
High-throughput experiments and bioinformatics techniques are creating an exploding volume of data that are becoming overwhelming to keep track of for biologists and researchers who need to access, analyze and process existing data. Much of the available data are being deposited in specialized databases, such as the Gene Expression Omnibus (GEO) for microarrays or the Protein Data Bank (PDB) for protein structures and coordinates. Data sets are also being described by their authors in publications archived in literature databases such as MEDLINE and PubMed Central. Currently, the curation of links between biological databases and the literature mainly relies on manual labour, which makes it a time-consuming and daunting task. Herein, we analysed the current state of link curation between GEO, PDB and MEDLINE. We found that the link curation is heterogeneous depending on the sources and databases involved, and that overlap between sources is low, <50% for PDB and GEO. Furthermore, we showed that text-mining tools can automatically provide valuable evidence to help curators broaden the scope of articles and database entries that they review. As a result, we made recommendations to improve the coverage of curated links, as well as the consistency of information available from different databases while maintaining high-quality curation. Database URLs: http://www.ncbi.nlm.nih.gov/PubMed, http://www.ncbi.nlm.nih.gov/geo/, http://www.rcsb.org/pdb/ PMID:22685160
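One hedged way to audit such literature-database links programmatically is through NCBI E-utilities; the sketch below uses Biopython's Entrez.elink to list GEO DataSets (gds) records linked to a placeholder PMID. The email address and PMID are placeholders, and the exact structure of the returned record may vary, so the code only prints whatever linked identifiers it finds.

```python
# Hedged sketch: check PubMed-to-GEO links via NCBI E-utilities with Biopython.
from Bio import Entrez

Entrez.email = "curator@example.org"   # required by NCBI; placeholder address

def linked_geo_ids(pmid):
    handle = Entrez.elink(dbfrom="pubmed", db="gds", id=pmid)
    record = Entrez.read(handle)
    handle.close()
    ids = []
    for linkset in record:
        for linksetdb in linkset.get("LinkSetDb", []):
            ids.extend(link["Id"] for link in linksetdb.get("Link", []))
    return ids

print(linked_geo_ids("20000000"))      # placeholder PMID
```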
Favourable prognosis of cystadeno- over adenocarcinoma of the pancreas after curative resection.
Ridder, G J; Maschek, H; Klempnauer, J
1996-06-01
This report details nine patients after curative surgical resection of histologically proven mucinous cystadenocarcinoma of the pancreas and compares their prognosis with that of ductal adenocarcinomas. Cystadenocarcinomas represented 2.1% (10/466) of the 466 patients who underwent surgical exploration, and 5.5% of all curatively resected carcinomas of the exocrine pancreas, at Hanover Medical School from 1971 to 1994. Forty percent of adenocarcinomas and 90% of cystadenocarcinomas were resectable. A curative R0 resection was possible in all patients with cystadenocarcinoma and in 85% of those with adenocarcinoma. Six of the patients with cystadenocarcinoma were female and three were male. Their median age was 54 +/- 12 years (range: 44 to 81 years). Four cystic neoplasms were located in the head, one in the head and body, three in the tail, and one in the body and tail of the pancreas. There was no hospital mortality in this group. The prognosis after resection of cystadenocarcinomas was significantly better than after resection of ductal adenocarcinomas of the pancreas. Kaplan-Meier survival was 89% vs 52% after 1 year, and 56% vs 13% at 5 years. Our results indicate the favourable prognosis of cystadeno- over ductal adenocarcinomas of the pancreas in a cohort of patients with curative tumour resection.
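Purely as an illustration of how such survival contrasts are computed (the toy data below are invented and unrelated to the study), the sketch estimates Kaplan-Meier survival by histology group with the lifelines package.

```python
# Illustrative only: Kaplan-Meier survival by histology group on invented data.
import pandas as pd
from lifelines import KaplanMeierFitter

df = pd.DataFrame({
    "months":    [14, 60, 72, 9, 30, 84, 48, 6, 18, 40, 3, 66],
    "event":     [1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0],   # 1 = death observed
    "histology": ["cyst"] * 6 + ["ductal"] * 6,
})

for label, group in df.groupby("histology"):
    kmf = KaplanMeierFitter()
    kmf.fit(group["months"], group["event"], label=label)
    # survival estimates at 12 and 60 months after curative resection
    print(label, kmf.survival_function_at_times([12, 60]).round(2).to_dict())
```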
Standards-based curation of a decade-old digital repository dataset of molecular information.
Harvey, Matthew J; Mason, Nicholas J; McLean, Andrew; Murray-Rust, Peter; Rzepa, Henry S; Stewart, James J P
2015-01-01
The desirable curation of 158,122 molecular geometries derived from the NCI set of reference molecules together with associated properties computed using the MOPAC semi-empirical quantum mechanical method and originally deposited in 2005 into the Cambridge DSpace repository as a data collection is reported. The procedures involved in the curation included annotation of the original data using new MOPAC methods, updating the syntax of the CML documents used to express the data to ensure schema conformance and adding new metadata describing the entries together with a XML schema transformation to map the metadata schema to that used by the DataCite organisation. We have adopted a granularity model in which a DataCite persistent identifier (DOI) is created for each individual molecule to enable data discovery and data metrics at this level using DataCite tools. We recommend that the future research data management (RDM) of the scientific and chemical data components associated with journal articles (the "supporting information") should be conducted in a manner that facilitates automatic periodic curation. Graphical abstract: Standards and metadata-based curation of a decade-old digital repository dataset of molecular information.
Mansouri, K; Grulke, C M; Richard, A M; Judson, R S; Williams, A J
2016-11-01
The increasing availability of large collections of chemical structures and associated experimental data provides an opportunity to build robust QSAR models for applications in different fields. One common concern is the quality of both the chemical structure information and associated experimental data. Here we describe the development of an automated KNIME workflow to curate and correct errors in the structure and identity of chemicals using the publicly available PHYSPROP physicochemical properties and environmental fate datasets. The workflow first assembles structure-identity pairs using up to four provided chemical identifiers, including chemical name, CASRNs, SMILES, and MolBlock. Problems detected included errors and mismatches in chemical structure formats, identifiers and various structure validation issues, including hypervalency and stereochemistry descriptions. Subsequently, a machine learning procedure was applied to evaluate the impact of this curation process. The performance of QSAR models built on only the highest-quality subset of the original dataset was compared with the larger curated and corrected dataset. The latter showed statistically improved predictive performance. The final workflow was used to curate the full list of PHYSPROP datasets, and is being made publicly available for further usage and integration by the scientific community.
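One representative step in such a curation workflow is checking that the different structure representations supplied for a record agree. The hedged sketch below compares canonical SMILES derived from a record's SMILES and MolBlock fields with RDKit; the record is invented, and the published workflow is a KNIME pipeline rather than this script.

```python
# Hedged sketch: flag records whose SMILES and MolBlock disagree, using RDKit
# canonical SMILES as the comparison key. The record contents are invented.
from rdkit import Chem

record = {
    "name": "benzene",
    "casrn": "71-43-2",
    "smiles": "c1ccccc1",
    "molblock": Chem.MolToMolBlock(Chem.MolFromSmiles("c1ccccc1")),  # stand-in MolBlock
}

mol_from_smiles = Chem.MolFromSmiles(record["smiles"])
mol_from_block = Chem.MolFromMolBlock(record["molblock"])

if mol_from_smiles is None or mol_from_block is None:
    print(f"{record['casrn']}: unparsable structure, flag for manual curation")
elif Chem.MolToSmiles(mol_from_smiles) != Chem.MolToSmiles(mol_from_block):
    print(f"{record['casrn']}: SMILES and MolBlock disagree, flag for manual curation")
else:
    print(f"{record['casrn']}: identifiers consistent")
```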
OntoBrowser: a collaborative tool for curation of ontologies by subject matter experts.
Ravagli, Carlo; Pognan, Francois; Marc, Philippe
2017-01-01
The lack of controlled terminology and ontology usage leads to incomplete search results and poor interoperability between databases. One of the major underlying challenges of data integration is curating data to adhere to controlled terminologies and/or ontologies. Finding subject matter experts with the time and skills required to perform data curation is often problematic. In addition, existing tools are not designed for continuous data integration and collaborative curation. This results in time-consuming curation workflows that often become unsustainable. The primary objective of OntoBrowser is to provide an easy-to-use online collaborative solution for subject matter experts to map reported terms to preferred ontology (or code list) terms and facilitate ontology evolution. Additional features include web service access to data, visualization of ontologies in hierarchical/graph format and a peer review/approval workflow with alerting. The source code is freely available under the Apache v2.0 license. Source code and installation instructions are available at http://opensource.nibr.com. This software is designed to run on a Java EE application server and store data in a relational database. Contact: philippe.marc@novartis.com. © The Author 2016. Published by Oxford University Press.
OntoBrowser: a collaborative tool for curation of ontologies by subject matter experts
Ravagli, Carlo; Pognan, Francois
2017-01-01
Summary: The lack of controlled terminology and ontology usage leads to incomplete search results and poor interoperability between databases. One of the major underlying challenges of data integration is curating data to adhere to controlled terminologies and/or ontologies. Finding subject matter experts with the time and skills required to perform data curation is often problematic. In addition, existing tools are not designed for continuous data integration and collaborative curation. This results in time-consuming curation workflows that often become unsustainable. The primary objective of OntoBrowser is to provide an easy-to-use online collaborative solution for subject matter experts to map reported terms to preferred ontology (or code list) terms and facilitate ontology evolution. Additional features include web service access to data, visualization of ontologies in hierarchical/graph format and a peer review/approval workflow with alerting. Availability and implementation: The source code is freely available under the Apache v2.0 license. Source code and installation instructions are available at http://opensource.nibr.com. This software is designed to run on a Java EE application server and store data in a relational database. Contact: philippe.marc@novartis.com PMID:27605099
Rohel, Eric A; Laurent, Paul; Fraaije, Bart A; Cavelier, Nadine; Hollomon, Derek W
2002-03-01
Quantitative PCR and visual monitoring of Mycosphaerella graminicola epidemics were performed to investigate the effect of curative and preventative applications of azoxystrobin in wheat field crops. A non-systemic protectant and a systemic curative fungicide, chlorothalonil and epoxiconazole, respectively, were used as references. PCR diagnosis detected leaf infection by M graminicola 3 weeks before symptom appearance, thereby allowing a clear distinction between curative and preventative treatments. When applied 1 week after the beginning of infection, azoxystrobin curative activity was intermediate between chlorothalonil (low effect) and epoxiconazole. When applied preventatively, none of the fungicides completely prevented leaf infection. There was some indication that azoxystrobin preventative treatments may delay fungal DNA increase more than epoxiconazole at the beginning of leaf infection. Both curative and preventative treatments increased the time lapse between the earliest PCR detection and the measurement of a 10% necrotic leaf area. Azoxystrobin only slightly decreased the speed of necrotic area increase compared with epoxiconazole. Hence, azoxystrobin activity toward M graminicola mainly resides in lengthening the time lapse between the earliest PCR detection and the measurement of a 10% necrotic leaf area. Information generated in this way is useful for optimal positioning of azoxystrobin treatments on M graminicola.
[Curative Effects of Hydroxyurea on Patients with β-thalassaemia Intermedia].
Huang, Li; Yao, Hong-Xia
2016-06-01
To investigate the clinical features of β-thalassaemia intermedia (TI) patients and the curative effect and side effects of hydroxyurea therapy. Twenty-nine patients with TI were divided into a hydroxyurea therapy group and a no-hydroxyurea therapy group; the curative effect and side effects in the 2 groups were compared, and blood transfusion requirements in the 2 groups were evaluated. In the hydroxyurea therapy group, the hemoglobin level increased after 3 months of treatment, the reticulocyte percentage decreased markedly after 12 months of treatment, and serum ferritin was maintained at a low level; in the no-hydroxyurea therapy group, hemoglobin and reticulocyte levels were not significantly improved after treatment and the serum ferritin level gradually increased. In the hydroxyurea therapy group, 12 patients no longer required blood transfusion after 12 months of treatment, for a treatment response rate of 85.71%; in the no-hydroxyurea therapy group, blood transfusion dependency was not improved after treatment. No serious side effects were found in any of the hydroxyurea-treated patients. Hydroxyurea shows a good curative effect in TI patients, and no serious side effects occurred in any of the treated patients, but the long-term curative effect and side effects require continued observation.
NASA Technical Reports Server (NTRS)
Shum, Dana; Bugbee, Kaylin
2017-01-01
This talk explains the ongoing metadata curation activities in the Common Metadata Repository. It explores existing tools that are useful for building quality metadata and opens the floor for discussion of other potentially useful tools.
Müller, H-M; Van Auken, K M; Li, Y; Sternberg, P W
2018-03-09
The biomedical literature continues to grow at a rapid pace, making the challenge of knowledge retrieval and extraction ever greater. Tools that provide a means to search and mine the full text of literature thus represent an important way by which the efficiency of these processes can be improved. We describe the next generation of the Textpresso information retrieval system, Textpresso Central (TPC). TPC builds on the strengths of the original system by expanding the full text corpus to include the PubMed Central Open Access Subset (PMC OA), as well as the WormBase C. elegans bibliography. In addition, TPC allows users to create a customized corpus by uploading and processing documents of their choosing. TPC is UIMA compliant, to facilitate compatibility with external processing modules, and takes advantage of Lucene indexing and search technology for efficient handling of millions of full text documents. Like Textpresso, TPC searches can be performed using keywords and/or categories (semantically related groups of terms), but to provide better context for interpreting and validating queries, search results may now be viewed as highlighted passages in the context of full text. To facilitate biocuration efforts, TPC also allows users to select text spans from the full text and annotate them, create customized curation forms for any data type, and send resulting annotations to external curation databases. As an example of such a curation form, we describe integration of TPC with the Noctua curation tool developed by the Gene Ontology (GO) Consortium. Textpresso Central is an online literature search and curation platform that enables biocurators and biomedical researchers to search and mine the full text of literature by integrating keyword and category searches with viewing search results in the context of the full text. It also allows users to create customized curation interfaces, use those interfaces to make annotations linked to supporting evidence statements, and then send those annotations to any database in the world. Textpresso Central URL: http://www.textpresso.org/tpc.
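For readers unfamiliar with category-based querying, the following toy sketch illustrates the idea of combining a keyword with a category (a semantically related group of terms). It is purely illustrative: TPC itself relies on Lucene indexing and UIMA-compliant processing rather than anything like this code, and the category and documents below are invented.

```python
# Minimal illustration of combining keyword and category search, in the spirit of
# Textpresso-style queries; not TPC code, and the category/documents are invented.
CATEGORIES = {
    "regulation": {"regulates", "activates", "represses", "inhibits"},
}

def matches(sentence, keywords=(), categories=()):
    tokens = set(sentence.lower().split())
    if any(k.lower() not in tokens for k in keywords):
        return False
    # every requested category must be represented by at least one member term
    return all(tokens & CATEGORIES[c] for c in categories)

docs = ["daf-16 activates stress response genes",
        "the daf-16 locus maps to chromosome I"]
hits = [d for d in docs if matches(d, keywords=["daf-16"], categories=["regulation"])]
print(hits)   # only the first sentence matches both the keyword and the category
```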
Davis, Allan Peter; Johnson, Robin J.; Lennon-Hopkins, Kelley; Sciaky, Daniela; Rosenstein, Michael C.; Wiegers, Thomas C.; Mattingly, Carolyn J.
2012-01-01
The Comparative Toxicogenomics Database (CTD) is a public resource that promotes understanding about the effects of environmental chemicals on human health. CTD biocurators read the scientific literature and manually curate a triad of chemical–gene, chemical–disease and gene–disease interactions. Typically, articles for CTD are selected using a chemical-centric approach by querying PubMed to retrieve a corpus containing the chemical of interest. Although this technique ensures adequate coverage of knowledge about the chemical (i.e. data completeness), it does not necessarily reflect the most current state of all toxicological research in the community at large (i.e. data currency). Keeping databases current with the most recent scientific results, as well as providing a rich historical background from legacy articles, is a challenging process. To address this issue of data currency, CTD designed and tested a journal-centric approach of curation to complement our chemical-centric method. We first identified priority journals based on defined criteria. Next, over 7 weeks, three biocurators reviewed 2425 articles from three consecutive years (2009–2011) of three targeted journals. From this corpus, 1252 articles contained relevant data for CTD and 52 752 interactions were manually curated. Here, we describe our journal selection process, two methods of document delivery for the biocurators and the analysis of the resulting curation metrics, including data currency, and both intra-journal and inter-journal comparisons of research topics. Based on our results, we expect that curation by select journals can (i) be easily incorporated into the curation pipeline to complement our chemical-centric approach; (ii) build content more evenly for chemicals, genes and diseases in CTD (rather than biasing data by chemicals-of-interest); (iii) reflect developing areas in environmental health and (iv) improve overall data currency for chemicals, genes and diseases. Database URL: http://ctdbase.org/ PMID:23221299
Closing the loop: from paper to protein annotation using supervised Gene Ontology classification.
Gobeill, Julien; Pasche, Emilie; Vishnyakova, Dina; Ruch, Patrick
2014-01-01
Gene function curation of the literature with Gene Ontology (GO) concepts is one particularly time-consuming task in genomics, and help from bioinformatics is in high demand to keep up with the flow of publications. In 2004, the first BioCreative challenge already designed a task of automatic GO concept assignment from full text. At that time, results were judged to be far from the performance required by real curation workflows. In particular, supervised approaches produced the most disappointing results because of a lack of training data. Ten years later, the available curation data have massively grown. In 2013, the BioCreative IV GO task revisited the automatic GO assignment task. For this task, we investigated the power of our supervised classifier, GOCat. GOCat computes similarities between an input text and already curated instances contained in a knowledge base to infer GO concepts. Subtask A consisted in selecting GO evidence sentences for a relevant gene in a full text. For this, we designed a state-of-the-art supervised statistical approach, using a naïve Bayes classifier and the official training set, and obtained fair results. Subtask B consisted in predicting GO concepts from the previous output. For this, we applied GOCat and achieved leading results, with hierarchical recall of up to 65% for the top 20 returned concepts. Contrary to previous competitions, machine learning has this time outperformed standard dictionary-based approaches. Thanks to BioCreative IV, we were able to design a complete workflow for curation: given a gene name and a full text, this system is able to select evidence sentences for curation and to deliver highly relevant GO concepts. The observed performance is sufficient for use in a real semi-automatic curation workflow. GOCat is available at http://eagl.unige.ch/GOCat/ and http://eagl.unige.ch/GOCat4FT/. © The Author(s) 2014. Published by Oxford University Press.
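The core idea behind GOCat, inferring GO concepts from the most similar already-curated instances, can be sketched as a nearest-neighbour vote. The snippet below is only a schematic illustration of that reading, not GOCat's implementation; the curated texts and GO terms are invented.

```python
# Schematic of a similarity-based GO concept classifier: find already-curated
# abstracts most similar to the input text and vote on their GO terms.
from collections import Counter
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

curated_texts = [
    "the kinase phosphorylates target proteins at the membrane",
    "transcription factor binds promoter DNA and activates transcription",
    "protein kinase activity regulates signal transduction",
]
curated_go = [
    {"GO:0016301 kinase activity"},
    {"GO:0003700 DNA-binding transcription factor activity"},
    {"GO:0016301 kinase activity", "GO:0007165 signal transduction"},
]

vec = TfidfVectorizer().fit(curated_texts)

def predict_go(text, k=2):
    sims = cosine_similarity(vec.transform([text]), vec.transform(curated_texts))[0]
    top = sims.argsort()[::-1][:k]                  # k most similar curated instances
    votes = Counter(go for i in top for go in curated_go[i])
    return votes.most_common()

print(predict_go("this kinase transduces the signal by phosphorylating substrates"))
```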
EURO-CARES as Roadmap for a European Sample Curation Facility
NASA Astrophysics Data System (ADS)
Brucato, J. R.; Russell, S.; Smith, C.; Hutzler, A.; Meneghin, A.; Aléon, J.; Bennett, A.; Berthoud, L.; Bridges, J.; Debaille, V.; Ferrière, L.; Folco, L.; Foucher, F.; Franchi, I.; Gounelle, M.; Grady, M.; Leuko, S.; Longobardo, A.; Palomba, E.; Pottage, T.; Rettberg, P.; Vrublevskis, J.; Westall, F.; Zipfel, J.; Euro-Cares Team
2018-04-01
EURO-CARES is a three-year multinational project funded under the European Commission Horizon2020 research program to develop a roadmap for a European Extraterrestrial Sample Curation Facility for samples returned from solar system missions.
Sample Transport for a European Sample Curation Facility
NASA Astrophysics Data System (ADS)
Berthoud, L.; Vrublevskis, J. B.; Bennett, A.; Pottage, T.; Bridges, J. C.; Holt, J. M. C.; Dirri, F.; Longobardo, A.; Palomba, E.; Russell, S.; Smith, C.
2018-04-01
This work has looked at the recovery of a Mars Sample Return capsule once it arrives on Earth. It covers possible landing sites, planetary protection requirements, and transportation from the landing site to a European Sample Curation Facility.
Situation Model for Situation-Aware Assistance of Dementia Patients in Outdoor Mobility
Yordanova, Kristina; Koldrack, Philipp; Heine, Christina; Henkel, Ron; Martin, Mike; Teipel, Stefan; Kirste, Thomas
2017-01-01
Background: Dementia impairs spatial orientation and route planning, thus often affecting the patient’s ability to move outdoors and maintain social activities. Situation-aware deliberative assistive technology devices (ATD) can substitute impaired cognitive function in order to maintain one’s level of social activity. To build such a system, one needs domain knowledge about the patient’s situation and needs. We call this collection of knowledge situation model. Objective: To construct a situation model for the outdoor mobility of people with dementia (PwD). The model serves two purposes: 1) as a knowledge base from which to build an ATD describing the mobility of PwD; and 2) as a codebook for the annotation of the recorded behavior. Methods: We perform systematic knowledge elicitation to obtain the relevant knowledge. The OBO Edit tool is used for implementing and validating the situation model. The model is evaluated by using it as a codebook for annotating the behavior of PwD during a mobility study and interrater agreement is computed. In addition, clinical experts perform manual evaluation and curation of the model. Results: The situation model consists of 101 concepts with 11 relation types between them. The results from the annotation showed substantial overlapping between two annotators (Cohen’s kappa of 0.61). Conclusion: The situation model is a first attempt to systematically collect and organize information related to the outdoor mobility of PwD for the purposes of situation-aware assistance. The model is the base for building an ATD able to provide situation-aware assistance and to potentially improve the quality of life of PwD. PMID:29060937
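The inter-rater agreement reported above (Cohen's kappa of 0.61) can be computed as follows. This is a generic implementation applied to toy labels, not the authors' analysis code.

```python
# Cohen's kappa for two annotators labelling the same items (toy example).
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(freq_a) | set(freq_b)
    expected = sum(freq_a[l] * freq_b[l] for l in labels) / n ** 2
    return (observed - expected) / (1 - expected)

# Toy annotation of six behaviour segments by two annotators
a = ["walk", "stop", "walk", "orient", "walk", "stop"]
b = ["walk", "stop", "orient", "orient", "walk", "walk"]
print(round(cohens_kappa(a, b), 2))   # ≈ 0.48 on this toy example
```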
I A Yagub, Abdallah
2014-05-01
North Darfur State has been affected by conflict since 2003 and the government has not been able to provide adequate curative health services to the people. The government has come to rely on Non-Governmental Organizations (NGOs) to provide curative health services. This study was conducted to examine the existing collaboration between government and NGOs in curative health service delivery in North Darfur State, and to identify the challenges that affect their collaboration. Documentary data were collected from government offices and medical organizations. Primary data were obtained through interviews with government and NGOs representatives. The interviews were conducted with (1) expatriates working for international NGOs (N=15) and (2), health professionals and administrators working in the health sector (N= 45). The collaboration between the government and NGOs has been very weak because of security issues and lack of trust. The NGOs collaborate by providing human and financial resources, material and equipment, and communication facilities. The NGOs supply 70% of curative health services, and contribute 52.9% of the health budget in North Darfur State. The NGOs have employed 1 390 health personnel, established 44 health centres and manage and support 83 health facilities across the State. The NGOs have played a positive role in collaborating with the government in North Darfur State in delivering curative health services, while government's role has been negative. The problem that faces the government in future is how health facilities will be run should a peaceful settlement be reached and NGOs leave the region.
I A YAGUB, Abdallah
2014-01-01
Abstract Background North Darfur State has been affected by conflict since 2003 and the government has not been able to provide adequate curative health services to the people. The government has come to rely on Non-Governmental Organizations (NGOs) to provide curative health services. This study was conducted to examine the existing collaboration between government and NGOs in curative health service delivery in North Darfur State, and to identify the challenges that affect their collaboration. Methods Documentary data were collected from government offices and medical organizations. Primary data were obtained through interviews with government and NGOs representatives. The interviews were conducted with (1) expatriates working for international NGOs (N=15) and (2), health professionals and administrators working in the health sector (N= 45). Results The collaboration between the government and NGOs has been very weak because of security issues and lack of trust. The NGOs collaborate by providing human and financial resources, material and equipment, and communication facilities. The NGOs supply 70% of curative health services, and contribute 52.9% of the health budget in North Darfur State. The NGOs have employed 1 390 health personnel, established 44 health centres and manage and support 83 health facilities across the State. Conclusion The NGOs have played a positive role in collaborating with the government in North Darfur State in delivering curative health services, while government’s role has been negative. The problem that faces the government in future is how health facilities will be run should a peaceful settlement be reached and NGOs leave the region. PMID:26056656
Endoscopic submucosal dissection for early esophageal neoplasms using the stag beetle knife
Kuwai, Toshio; Yamaguchi, Toshiki; Imagawa, Hiroki; Miura, Ryoichi; Sumida, Yuki; Takasago, Takeshi; Miyasako, Yuki; Nishimura, Tomoyuki; Iio, Sumio; Yamaguchi, Atsushi; Kouno, Hirotaka; Kohno, Hiroshi; Ishaq, Sauid
2018-01-01
AIM To determine short- and long-term outcomes of endoscopic submucosal dissection (ESD) using the stag beetle (SB) knife, a scissor-shaped device. METHODS Seventy consecutive patients with 96 early esophageal neoplasms, who underwent ESD using a SB knife at Kure Medical Center and Chugoku Cancer Center, Japan, between April 2010 and August 2016, were retrospectively evaluated. Clinicopathological characteristics of lesions and procedural adverse events were assessed. Therapeutic success was evaluated on the basis of en bloc, histologically complete, and curative or non-curative resection rates. Overall and tumor-specific survival, local or distant recurrence, and 3- and 5-year cumulative overall metachronous cancer rates were also assessed. RESULTS Eligible patients had dysplasia/intraepithelial neoplasia (22%) or early cancers (squamous cell carcinoma, 78%). The median procedural time was 60 min and on average, the lesions measured 24 mm in diameter, yielding 33-mm tissue defects. The en bloc resection rate was 100%, with 95% and 81% of dissections deemed histologically complete and curative, respectively. All procedures were completed without accidental incisions/perforations or delayed bleeding. During follow-up (mean, 35 ± 23 mo), no local recurrences or metastases were observed. The 3- and 5-year survival rates were 83% and 70%, respectively, with corresponding rates of 85% and 75% for curative resections and 74% and 49% for non-curative resections. The 3- and 5-year cumulative rates of metachronous cancer in the patients with curative resections were 14% and 26%, respectively. CONCLUSION ESD procedures using the SB knife are feasible, safe, and effective for treating early esophageal neoplasms, yielding favorable short- and long-term outcomes. PMID:29686470
Endoscopic submucosal dissection for early esophageal neoplasms using the stag beetle knife.
Kuwai, Toshio; Yamaguchi, Toshiki; Imagawa, Hiroki; Miura, Ryoichi; Sumida, Yuki; Takasago, Takeshi; Miyasako, Yuki; Nishimura, Tomoyuki; Iio, Sumio; Yamaguchi, Atsushi; Kouno, Hirotaka; Kohno, Hiroshi; Ishaq, Sauid
2018-04-21
To determine short- and long-term outcomes of endoscopic submucosal dissection (ESD) using the stag beetle (SB) knife, a scissor-shaped device. Seventy consecutive patients with 96 early esophageal neoplasms, who underwent ESD using a SB knife at Kure Medical Center and Chugoku Cancer Center, Japan, between April 2010 and August 2016, were retrospectively evaluated. Clinicopathological characteristics of lesions and procedural adverse events were assessed. Therapeutic success was evaluated on the basis of en bloc , histologically complete, and curative or non-curative resection rates. Overall and tumor-specific survival, local or distant recurrence, and 3- and 5-year cumulative overall metachronous cancer rates were also assessed. Eligible patients had dysplasia/intraepithelial neoplasia (22%) or early cancers (squamous cell carcinoma, 78%). The median procedural time was 60 min and on average, the lesions measured 24 mm in diameter, yielding 33-mm tissue defects. The en bloc resection rate was 100%, with 95% and 81% of dissections deemed histologically complete and curative, respectively. All procedures were completed without accidental incisions/perforations or delayed bleeding. During follow-up (mean, 35 ± 23 mo), no local recurrences or metastases were observed. The 3- and 5-year survival rates were 83% and 70%, respectively, with corresponding rates of 85% and 75% for curative resections and 74% and 49% for non-curative resections. The 3- and 5-year cumulative rates of metachronous cancer in the patients with curative resections were 14% and 26%, respectively. ESD procedures using the SB knife are feasible, safe, and effective for treating early esophageal neoplasms, yielding favorable short- and long-term outcomes.
Sequencing Data Discovery and Integration for Earth System Science with MetaSeek
NASA Astrophysics Data System (ADS)
Hoarfrost, A.; Brown, N.; Arnosti, C.
2017-12-01
Microbial communities play a central role in biogeochemical cycles. Sequencing data resources from environmental sources have grown exponentially in recent years, and represent a singular opportunity to investigate microbial interactions with Earth system processes. Carrying out such meta-analyses depends on our ability to discover and curate sequencing data into large-scale integrated datasets. However, such integration efforts are currently challenging and time-consuming, with sequencing data scattered across multiple repositories and metadata that is not easily or comprehensively searchable. MetaSeek is a sequencing data discovery tool that integrates sequencing metadata from all the major data repositories, allowing the user to search and filter on datasets in a lightweight application with an intuitive, easy-to-use web-based interface. Users can save and share curated datasets, while other users can browse these data integrations or use them as a jumping off point for their own curation. Missing and/or erroneous metadata are inferred automatically where possible, and where not possible, users are prompted to contribute to the improvement of the sequencing metadata pool by correcting and amending metadata errors. Once an integrated dataset has been curated, users can follow simple instructions to download their raw data and quickly begin their investigations. In addition to the online interface, the MetaSeek database is easily queryable via an open API, further enabling users and facilitating integrations of MetaSeek with other data curation tools. This tool lowers the barriers to curation and integration of environmental sequencing data, clearing the path forward to illuminating the ecosystem-scale interactions between biological and abiotic processes.
[Thoracoscopic diagnosis and treatment of postoperative residual cavities].
Ioffe, D Ts; Dashiev, V A; Amanov, S A
1987-03-01
Investigations performed in 41 patients with postoperative residual cavities after surgical interventions of different extent have shown the high value of thoracoscopy as an additional diagnostic and curative method. The endoscopic findings determine further curative tactics: surgery or conservative therapy.
Advancing the application of systems thinking in health: why cure crowds out prevention
2014-01-01
Introduction This paper presents a system dynamics computer simulation model to illustrate unintended consequences of apparently rational allocations to curative and preventive services. Methods A modeled population is subject to only two diseases. Disease A is a curable disease that can be shortened by curative care. Disease B is an instantly fatal but preventable disease. Curative care workers are financed by public spending and private fees to cure disease A. Non-personal, preventive services are delivered by public health workers supported solely by public spending to prevent disease B. Each type of worker tries to tilt the balance of government spending towards their interests. Their influence on the government is proportional to their accumulated revenue. Results The model demonstrates effects on lost disability-adjusted life years and costs over the course of several epidemics of each disease. Policy interventions are tested including: i) an outside donor rationally donates extra money to each type of disease precisely in proportion to the size of epidemics of each disease; ii) lobbying is eliminated; iii) fees for personal health services are eliminated; iv) the government continually rebalances the funding for prevention by ring-fencing it to protect it from lobbying. The model exhibits a "spend more get less" equilibrium in which higher revenue by the curative sector is used to influence government allocations away from prevention towards cure. Spending more on curing disease A leads paradoxically to a higher overall disease burden of unprevented cases of disease B. This paradoxical behavior of the model can be stopped by eliminating lobbying, eliminating fees for curative services, and ring-fencing public health funding. Conclusions We have created an artificial system as a laboratory to gain insights about the trade-offs between curative and preventive health allocations, and the effect of indicative policy interventions. The underlying dynamics of this artificial system resemble features of modern health systems where a self-perpetuating industry has grown up around disease-specific curative programs like HIV/AIDS or malaria. The model shows how the growth of curative care services can crowd out both the fiscal and policy space available for population-level prevention work, requiring dramatic interventions to overcome these trends. PMID:24935344
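The reinforcing loop described in the abstract (sector revenue fuels lobbying, which tilts the next allocation further towards cure) can be caricatured in a few lines. The sketch below is a deliberately simplified, invented discrete-time illustration, not the authors' system dynamics model; all parameter values and functional forms are assumptions.

```python
# Highly simplified illustration of the "spend more get less" feedback loop; the
# parameters and functional forms are invented, not taken from the paper's model.
def simulate(years=30, budget=100.0, fee_rate=0.5):
    cure_share = 0.5                      # fraction of public budget going to cure
    cure_rev = prev_rev = 0.0             # accumulated revenue of each sector
    yearly_burden_b = []
    for _ in range(years):
        cure_spend = budget * cure_share * (1.0 + fee_rate)   # public money plus fees
        prev_spend = budget * (1 - cure_share)
        cure_rev += cure_spend
        prev_rev += prev_spend
        # unprevented disease B burden falls with prevention spending
        yearly_burden_b.append(100.0 / (1.0 + 0.05 * prev_spend))
        # lobbying tilts next year's allocation toward the richer sector
        cure_share = cure_rev / (cure_rev + prev_rev)
    return cure_share, sum(yearly_burden_b)

# The cure share drifts upward year on year, prevention is squeezed, and the
# yearly burden of unprevented disease B rises.
print(simulate())
```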
DiMeX: A Text Mining System for Mutation-Disease Association Extraction.
Mahmood, A S M Ashique; Wu, Tsung-Jung; Mazumder, Raja; Vijay-Shanker, K
2016-01-01
The number of published articles describing associations between mutations and diseases is increasing at a fast pace. There is a pressing need to gather such mutation-disease associations into public knowledge bases, but manual curation slows down the growth of such databases. We have addressed this problem by developing a text-mining system (DiMeX) to extract mutation to disease associations from publication abstracts. DiMeX consists of a series of natural language processing modules that preprocess input text and apply syntactic and semantic patterns to extract mutation-disease associations. DiMeX achieves high precision and recall with F-scores of 0.88, 0.91 and 0.89 when evaluated on three different datasets for mutation-disease associations. DiMeX includes a separate component that extracts mutation mentions in text and associates them with genes. This component has been also evaluated on different datasets and shown to achieve state-of-the-art performance. The results indicate that our system outperforms the existing mutation-disease association tools, addressing the low precision problems suffered by most approaches. DiMeX was applied on a large set of abstracts from Medline to extract mutation-disease associations, as well as other relevant information including patient/cohort size and population data. The results are stored in a database that can be queried and downloaded at http://biotm.cis.udel.edu/dimex/. We conclude that this high-throughput text-mining approach has the potential to significantly assist researchers and curators to enrich mutation databases.
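Mutation mention extraction of the kind DiMeX performs is often seeded with surface patterns for common mutation nomenclatures. The patterns below are an illustrative sketch only, not DiMeX's actual rules, which add syntactic and semantic constraints to link mutations to genes and diseases.

```python
import re

# Illustrative patterns only; real systems use richer grammars and post-processing.
MUTATION_RE = re.compile(
    r"\b(?:"
    r"[ACDEFGHIKLMNPQRSTVWY]\d+[ACDEFGHIKLMNPQRSTVWY]"   # protein substitutions, e.g. V600E
    r"|p\.[A-Z][a-z]{2}\d+[A-Z][a-z]{2}"                 # HGVS protein, e.g. p.Val600Glu
    r"|c\.\d+[ACGT]>[ACGT]"                              # HGVS coding DNA, e.g. c.1799T>A
    r")\b"
)

text = ("The BRAF V600E mutation (c.1799T>A, p.Val600Glu) is frequently "
        "observed in melanoma.")
print(MUTATION_RE.findall(text))   # ['V600E', 'c.1799T>A', 'p.Val600Glu']
```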
Guadagnolo, B Ashleigh; Boylan, Amy; Sargent, Michele; Koop, David; Brunette, Deb; Kanekar, Shalini; Shortbull, Vanessa; Molloy, Kevin; Petereit, Daniel G
2011-06-15
A study was undertaken to assess patient navigation utilization and its impact on treatment interruptions and clinical trial enrollment among American Indian cancer patients. Between February 2004 and September 2009, 332 American Indian cancer patients received patient navigation services throughout cancer treatment. The patient navigation program provided culturally competent navigators to assist patients with navigating cancer therapy, obtaining medications, insurance issues, communicating with medical providers, and travel and lodging logistics. Data on utilization and trial enrollment were prospectively collected. Data for a historical control group of 70 American Indian patients who did not receive patient navigation services were used to compare treatment interruptions among those undergoing patient navigation during curative radiation therapy (subgroup of 123 patients). The median number of contacts with a navigator was 12 (range, 1-119). The median time spent with the navigator at first contact was 40 minutes (range, 10-250 minutes), and it was 15 minutes for subsequent contacts. Patients treated with radiation therapy with curative intent who underwent patient navigation had fewer days of treatment interruption (mean, 1.7 days; 95% confidence interval [CI], 1.1-2.2 days) than historical controls who did not receive patient navigation services (mean, 4.9 days; 95% CI, 2.9-6.9 days). Of the 332 patients, 72 (22%; 95% CI, 17%-26%) were enrolled on a clinical treatment trial or cancer control protocol. Patient navigation was associated with fewer treatment interruptions and relatively high rates of clinical trial enrollment among American Indian cancer patients compared with national reports. Copyright © 2010 American Cancer Society.
DiMeX: A Text Mining System for Mutation-Disease Association Extraction
Mahmood, A. S. M. Ashique; Wu, Tsung-Jung; Mazumder, Raja; Vijay-Shanker, K.
2016-01-01
The number of published articles describing associations between mutations and diseases is increasing at a fast pace. There is a pressing need to gather such mutation-disease associations into public knowledge bases, but manual curation slows down the growth of such databases. We have addressed this problem by developing a text-mining system (DiMeX) to extract mutation to disease associations from publication abstracts. DiMeX consists of a series of natural language processing modules that preprocess input text and apply syntactic and semantic patterns to extract mutation-disease associations. DiMeX achieves high precision and recall with F-scores of 0.88, 0.91 and 0.89 when evaluated on three different datasets for mutation-disease associations. DiMeX includes a separate component that extracts mutation mentions in text and associates them with genes. This component has been also evaluated on different datasets and shown to achieve state-of-the-art performance. The results indicate that our system outperforms the existing mutation-disease association tools, addressing the low precision problems suffered by most approaches. DiMeX was applied on a large set of abstracts from Medline to extract mutation-disease associations, as well as other relevant information including patient/cohort size and population data. The results are stored in a database that can be queried and downloaded at http://biotm.cis.udel.edu/dimex/. We conclude that this high-throughput text-mining approach has the potential to significantly assist researchers and curators to enrich mutation databases. PMID:27073839
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-21
... Science and Industry at the address below by August 22, 2011. ADDRESSES: Lori Erickson, Curator, Oregon... the human remains should contact Lori Erickson, Curator, Oregon Museum of Science and Industry, 1945...
Planning Related to the Curation and Processing of Returned Martian Samples
NASA Astrophysics Data System (ADS)
McCubbin, F. M.; Harrington, A. D.
2018-04-01
Many of the planning activities in the NASA Astromaterials Acquisition and Curation Office at JSC are centered around Mars Sample Return. The importance of contamination knowledge and the benefits of a mobile/modular receiving facility are discussed.
[Biochemical failure after curative treatment for localized prostate cancer].
Zouhair, Abderrahim; Jichlinski, Patrice; Mirimanoff, René-Olivier
2005-12-07
Biochemical failure after curative treatment for localized prostate cancer is frequent. The diagnosis of biochemical failure is clear when PSA levels rise after radical prostatectomy, but may be more difficult after external beam radiation therapy. The main difficulty once biochemical failure is diagnosed is to distinguish between local and distant failure, given the low sensitivity of standard work-up exams. Metabolic imaging techniques currently under evaluation may in the future help us to localize the site of failures. There are several therapeutic options depending on the initial curative treatment, each with morbidity risks that should be considered in multidisciplinary decision-making.
Momani, Tha'er G; Hathaway, Donna K; Mandrell, Belinda N
2016-01-01
Health-related quality of life (HRQoL) is an important measure to evaluate a child's reported treatment experience. Although there are numerous studies of HRQoL in children undergoing curative cancer treatment, there is limited literature on factors that influence this. To review published studies that describe the HRQoL and associated factors in children undergoing curative cancer treatment. Full-text publications in English from January 2005 to March 2013 were searched in PubMed, PsychINFO, and CINAHL for children ≤18 years of age undergoing curative cancer treatment. HRQoL-associated factors were categorized as cancer diagnosis, treatment, child, family, and community. Twenty-six studies met the inclusion criteria. The most frequently used generic and cancer-specific instruments were PedsQL (Pediatric Quality of Life Inventory) Generic and PedsQL Cancer, respectively. Cancer diagnosis and treatment were the most frequently identified variables; fewer studies measured family and community domains. Gender, treatment intensity, type of cancer treatments, time in treatment, and cancer diagnosis were correlated with HRQoL. Our study highlights the need to develop interventions based on diagnosis and treatment regimen to improve the HRQoL in children undergoing curative cancer treatment. © 2015 by Association of Pediatric Hematology/Oncology Nurses.
Curation of food-relevant chemicals in ToxCast.
Karmaus, Agnes L; Trautman, Thomas D; Krishan, Mansi; Filer, Dayne L; Fix, Laurel A
2017-05-01
High-throughput in vitro assays and exposure prediction efforts are paving the way for modeling chemical risk; however, the utility of such extensive datasets can be limited or misleading when annotation fails to capture current chemical usage. To address this data gap and provide context for food-use in the United States (US), manual curation of food-relevant chemicals in ToxCast was conducted. Chemicals were categorized into three food-use categories: (1) direct food additives, (2) indirect food additives, or (3) pesticide residues. Manual curation resulted in 30% of chemicals having new annotation as well as the removal of 319 chemicals, most due to cancellation or only foreign usage. These results highlight that manual curation of chemical use information provided significant insight affecting the overall inventory and chemical categorization. In total, 1211 chemicals were confirmed as current day food-use in the US by manual curation; 1154 of these chemicals were also identified as food-related in the globally sourced chemical use information from Chemical/Product Categories database (CPCat). The refined list of food-use chemicals and the sources highlighted for compiling annotated information required to confirm food-use are valuable resources for providing needed context when evaluating large-scale inventories such as ToxCast. Copyright © 2017 The Authors. Published by Elsevier Ltd.. All rights reserved.
Simone, Giuseppe; Tuderti, Gabriele; Misuraca, Leonardo; Anceschi, Umberto; Ferriero, Mariaconsiglia; Minisola, Francesco; Guaglianone, Salvatore; Gallucci, Michele
2018-04-17
In this study, we compared perioperative and oncologic outcomes of patients treated with either open or robot-assisted radical cystectomy and intracorporeal neobladder at a tertiary care center. The institutional prospective bladder cancer database was queried for "cystectomy with curative intent" and "neobladder". All patients who underwent robot-assisted radical cystectomy and intracorporeal neobladder or open radical cystectomy and orthotopic neobladder for high-grade non-muscle invasive bladder cancer or muscle invasive bladder cancer with a follow-up length ≥2 years were included. A 1:1 propensity score matching analysis was used. The Kaplan-Meier method was used to compare oncologic outcomes of the selected cohorts. Survival rates were computed at 1, 2, 3 and 4 years after surgery and the log-rank test was applied to assess statistical significance between the matched groups. Overall, 363 patients (299 open and 64 robotic) were included. Open radical cystectomy patients were more frequently male (p = 0.08), with higher pT stages (p = 0.003), lower incidence of urothelial histologies (p = 0.05) and lesser adoption of neoadjuvant chemotherapy (p < 0.001). After applying the propensity score matching, 64 robot-assisted radical cystectomy patients were matched with 46 open radical cystectomy cases (all p ≥ 0.22). The open cohort showed a higher rate of perioperative overall complications (91.3% vs 42.2%, p 0.001). At Kaplan-Meier analysis the robotic and open cohorts displayed comparable disease-free survival (log-rank p = 0.746), cancer-specific survival (p = 0.753) and overall survival rates (p = 0.909). Robot-assisted radical cystectomy with intracorporeal neobladder provides oncologic outcomes comparable to those of open radical cystectomy and orthotopic neobladder at intermediate-term survival analysis. Copyright © 2018 Elsevier Ltd, BASO ~ The Association for Cancer Surgery, and the European Society of Surgical Oncology. All rights reserved.
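The 1:1 propensity score matching used in studies like this one is a standard technique: fit a model of treatment assignment on baseline covariates, then pair each treated patient with the nearest-scoring unmatched control. The sketch below shows one generic way to do this on toy data; it is not the authors' code, and the covariate names are hypothetical.

```python
# Illustrative 1:1 nearest-neighbour propensity-score matching on invented data.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def match_1to1(df, treatment_col, covariates):
    model = LogisticRegression(max_iter=1000).fit(df[covariates], df[treatment_col])
    df = df.assign(pscore=model.predict_proba(df[covariates])[:, 1])
    treated = df[df[treatment_col] == 1]
    controls = df[df[treatment_col] == 0].copy()
    pairs = []
    for _, row in treated.iterrows():
        if controls.empty:
            break
        j = (controls.pscore - row.pscore).abs().idxmin()   # closest unused control
        pairs.append((row.name, j))
        controls = controls.drop(index=j)                    # match without replacement
    return pairs

# Toy data: treatment = robotic (1) vs open (0), with a few baseline covariates
rng = np.random.default_rng(0)
toy = pd.DataFrame({
    "robotic": rng.integers(0, 2, 100),
    "age": rng.normal(65, 8, 100),
    "pT_stage": rng.integers(1, 5, 100),
})
print(len(match_1to1(toy, "robotic", ["age", "pT_stage"])))   # number of matched pairs
```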
NASA Astrophysics Data System (ADS)
Hedstrom, M. L.; Kumar, P.; Myers, J.; Plale, B. A.
2012-12-01
In data science, the most common sequence of steps for data curation are to 1) curate data, 2) enable data discovery, and 3) provide for data reuse. The Sustainable Environments - Actionable Data (SEAD) project, funded through NSF's DataNet program, is creating an environment for sustainability scientists to discover data first, reuse data next, and curate data though an on-going process that we call Active and Social Curation. For active curation we are developing tools and services that support data discovery, data management, and data enhancement for the community while the data is still being used actively for research. We are creating an Active Content Repository, using drop box, semantic web technologies, and a Flickr-like interface for researchers to "drop" data into a repository where it will be replicated and minimally discoverable. For social curation, we are deploying a social networking tool, VIVO, which will allow researchers to discover data-publications-people (e.g. expertise) through a route that can start at any of those entry points. The other dimension of social curation is developing mechanisms to open data for community input, for example, using ranking and commenting mechanisms for data sets and a community-sourcing capability to add tags, clean up and validate data sets. SEAD's strategies and services are aimed at the sustainability science community, which faces numerous challenges including discovery of useful data, cleaning noisy observational data, synthesizing data of different types, defining appropriate models, managing and preserving their research data, and conveying holistic results to colleagues, students, decision makers, and the public. Sustainability researchers make significant use of centrally managed data from satellites and national sensor networks, national scientific and statistical agencies, and data archives. At the same time, locally collected data and custom derived data products that combine observations and measurements from local, national, and global sources are critical resources that have disproportionately high value relative to their size. Sustainability science includes a diverse and growing community of domain scientists, policy makers, private sector investors, green manufacturers, citizen scientists, and informed consumers. These communities need actionable data in order to assess the impacts of alternate scenarios, evaluate the cost-benefit tradeoffs of different solutions, and defend their recommendations and decisions. SEAD's goal is to extend its services to other communities in the "long tail" that may benefit from new approaches to infrastructure development which take into account the social and economic characteristics of diverse and dispersed data producers and consumers. For example, one barrier to data reuse is the difficulty of discovering data that might be valuable for a particular study, model, or decision. Making data minimally discoverable saves the community time expended on futile searches and creates a market, of sorts, for the data. Creating very low barriers to entry to a network where data can be discovered and acted upon vastly reduces this disincentive to sharing data. SEAD's approach allows communities to make small incremental improvements in data curation based on their own priorities and needs.
Making Metadata Better with CMR and MMT
NASA Technical Reports Server (NTRS)
Gilman, Jason Arthur; Shum, Dana
2016-01-01
Ensuring complete, consistent and high quality metadata is a challenge for metadata providers and curators. The CMR and MMT systems provide providers and curators options to build in metadata quality from the start and also assess and improve the quality of already existing metadata.
Davis, Allan Peter; Wiegers, Thomas C.; Roberts, Phoebe M.; King, Benjamin L.; Lay, Jean M.; Lennon-Hopkins, Kelley; Sciaky, Daniela; Johnson, Robin; Keating, Heather; Greene, Nigel; Hernandez, Robert; McConnell, Kevin J.; Enayetallah, Ahmed E.; Mattingly, Carolyn J.
2013-01-01
Improving the prediction of chemical toxicity is a goal common to both environmental health research and pharmaceutical drug development. To improve safety detection assays, it is critical to have a reference set of molecules with well-defined toxicity annotations for training and validation purposes. Here, we describe a collaboration between safety researchers at Pfizer and the research team at the Comparative Toxicogenomics Database (CTD) to text mine and manually review a collection of 88 629 articles relating over 1 200 pharmaceutical drugs to their potential involvement in cardiovascular, neurological, renal and hepatic toxicity. In 1 year, CTD biocurators curated 254 173 toxicogenomic interactions (152 173 chemical–disease, 58 572 chemical–gene, 5 345 gene–disease and 38 083 phenotype interactions). All chemical–gene–disease interactions are fully integrated with public CTD, and phenotype interactions can be downloaded. We describe Pfizer’s text-mining process to collate the articles, and CTD’s curation strategy, performance metrics, enhanced data content and new module to curate phenotype information. As well, we show how data integration can connect phenotypes to diseases. This curation can be leveraged for information about toxic endpoints important to drug safety and help develop testable hypotheses for drug–disease events. The availability of these detailed, contextualized, high-quality annotations curated from seven decades’ worth of the scientific literature should help facilitate new mechanistic screening assays for pharmaceutical compound survival. This unique partnership demonstrates the importance of resource sharing and collaboration between public and private entities and underscores the complementary needs of the environmental health science and pharmaceutical communities. Database URL: http://ctdbase.org/ PMID:24288140
Davis, Allan Peter; Wiegers, Thomas C; Roberts, Phoebe M; King, Benjamin L; Lay, Jean M; Lennon-Hopkins, Kelley; Sciaky, Daniela; Johnson, Robin; Keating, Heather; Greene, Nigel; Hernandez, Robert; McConnell, Kevin J; Enayetallah, Ahmed E; Mattingly, Carolyn J
2013-01-01
Improving the prediction of chemical toxicity is a goal common to both environmental health research and pharmaceutical drug development. To improve safety detection assays, it is critical to have a reference set of molecules with well-defined toxicity annotations for training and validation purposes. Here, we describe a collaboration between safety researchers at Pfizer and the research team at the Comparative Toxicogenomics Database (CTD) to text mine and manually review a collection of 88,629 articles relating over 1,200 pharmaceutical drugs to their potential involvement in cardiovascular, neurological, renal and hepatic toxicity. In 1 year, CTD biocurators curated 254,173 toxicogenomic interactions (152,173 chemical-disease, 58,572 chemical-gene, 5,345 gene-disease and 38,083 phenotype interactions). All chemical-gene-disease interactions are fully integrated with public CTD, and phenotype interactions can be downloaded. We describe Pfizer's text-mining process to collate the articles, and CTD's curation strategy, performance metrics, enhanced data content and new module to curate phenotype information. As well, we show how data integration can connect phenotypes to diseases. This curation can be leveraged for information about toxic endpoints important to drug safety and help develop testable hypotheses for drug-disease events. The availability of these detailed, contextualized, high-quality annotations curated from seven decades' worth of the scientific literature should help facilitate new mechanistic screening assays for pharmaceutical compound survival. This unique partnership demonstrates the importance of resource sharing and collaboration between public and private entities and underscores the complementary needs of the environmental health science and pharmaceutical communities. Database URL: http://ctdbase.org/
Text mining and expert curation to develop a database on psychiatric diseases and their genes
Gutiérrez-Sacristán, Alba; Bravo, Àlex; Portero-Tresserra, Marta; Valverde, Olga; Armario, Antonio; Blanco-Gandía, M.C.; Farré, Adriana; Fernández-Ibarrondo, Lierni; Fonseca, Francina; Giraldo, Jesús; Leis, Angela; Mané, Anna; Mayer, M.A.; Montagud-Romero, Sandra; Nadal, Roser; Ortiz, Jordi; Pavon, Francisco Javier; Perez, Ezequiel Jesús; Rodríguez-Arias, Marta; Serrano, Antonia; Torrens, Marta; Warnault, Vincent; Sanz, Ferran
2017-01-01
Abstract Psychiatric disorders constitute one of the main causes of disability worldwide. During the past years, considerable research has been conducted on the genetic architecture of such diseases, although little understanding of their etiology has been achieved. The difficulty to access up-to-date, relevant genotype-phenotype information has hampered the application of this wealth of knowledge to translational research and clinical practice in order to improve diagnosis and treatment of psychiatric patients. PsyGeNET (http://www.psygenet.org/) has been developed with the aim of supporting research on the genetic architecture of psychiatric diseases, by providing integrated and structured accessibility to their genotype–phenotype association data, together with analysis and visualization tools. In this article, we describe the protocol developed for the sustainable update of this knowledge resource. It includes the recruitment of a team of domain experts in order to perform the curation of the data extracted by text mining. Annotation guidelines and a web-based annotation tool were developed to support the curators’ tasks. A curation workflow was designed including a pilot phase and two rounds of curation and analysis phases. Negative evidence from the literature on gene–disease associations (GDAs) was taken into account in the curation process. We report the results of the application of this workflow to the curation of GDAs for PsyGeNET, including the analysis of the inter-annotator agreement and suggest this model as a suitable approach for the sustainable development and update of knowledge resources. Database URL: http://www.psygenet.org PsyGeNET corpus: http://www.psygenet.org/ds/PsyGeNET/results/psygenetCorpus.tar PMID:29220439
Entomopathogen ID: a curated sequence resource for entomopathogenic fungi
USDA-ARS?s Scientific Manuscript database
We report the development of a publicly accessible, curated database of Hypocrealean entomopathogenic fungi sequence data. The goal is to provide a platform for users to easily access sequence data from reference strains. The database can be used to accurately identify unknown entomopathogenic fungi...
Cognitive Curations of Collaborative Curricula
ERIC Educational Resources Information Center
Ackerman, Amy S.
2015-01-01
Assuming the role of learning curators, 22 graduate students (in-service teachers) addressed authentic problems (challenges) within their respective classrooms by selecting digital tools as part of implementation of interdisciplinary lesson plans. Students focused on formative assessment tools as a means to gather evidence to make improvements in…
MaizeGDB: New tools and resource
USDA-ARS?s Scientific Manuscript database
MaizeGDB, the USDA-ARS genetics and genomics database, is a highly curated, community-oriented informatics service to researchers focused on the crop plant and model organism Zea mays. MaizeGDB facilitates maize research by curating, integrating, and maintaining a database that serves as the central...
Teacher Training in Curative Education.
ERIC Educational Resources Information Center
Juul, Kristen D.; Maier, Manfred
1992-01-01
This article considers the application of the philosophical and educational principles of Rudolf Steiner, called "anthroposophy," to the training of teachers and curative educators in the Waldorf schools. Special emphasis is on the Camphill movement which focuses on therapeutic schools and communities for children with special needs. (DB)
[Curative effect of ozone hydrotherapy for pemphigus].
Jiang, Fuqiong; Deng, Danqi; Li, Xiaolan; Wang, Wenfang; Xie, Hong; Wu, Yongzhuo; Luan, Chunyan; Yang, Binbin
2018-02-28
To determine the clinical curative effects of ozone therapy for pemphigus vulgaris. Methods: Ozone hydrotherapy was used as an adjunctive treatment for 32 patients with pemphigus vulgaris. Hydropathic compresses of potassium permanganate solution in 34 patients with pemphigus vulgaris served as the control. The main treatment for both groups was glucocorticoids and immunosuppressants. Skin lesions, bacterial infection, antibiotic use, patient satisfaction, and clinical curative effect were evaluated in the 2 groups. Results: There was no significant difference in curative effect or average length of hospital stay between the 2 groups (P>0.05), but the rate of antibiotic use was significantly reduced in the ozone hydrotherapy group (P=0.039). Patients were more satisfied with ozone hydrotherapy than with the potassium permanganate solution after 7 days of therapy (P>0.05). Conclusion: Ozone hydrotherapy is a safe and effective adjunctive method for pemphigus vulgaris and can reduce the use of antibiotics.
Outcomes of the 'Data Curation for Geobiology at Yellowstone National Park' Workshop
NASA Astrophysics Data System (ADS)
Thomer, A.; Palmer, C. L.; Fouke, B. W.; Rodman, A.; Choudhury, G. S.; Baker, K. S.; Asangba, A. E.; Wickett, K.; DiLauro, T.; Varvel, V.
2013-12-01
The continuing proliferation of geological and biological data generated at scientifically significant sites (such as hot springs, coral reefs, volcanic fields and other unique, data-rich locales) has created a clear need for the curation and active management of these data. However, there has been little exploration of what these curation processes and policies would entail. To that end, the Site-Based Data Curation (SBDC) project is developing a framework of guidelines and processes for the curation of research data generated at scientifically significant sites. A workshop was held in April 2013 at Yellowstone National Park (YNP) to gather input from scientists and stakeholders. Workshop participants included nine researchers actively conducting geobiology research at YNP, and seven YNP representatives, including permitting staff and information professionals from the YNP research library and archive. Researchers came from a range of research areas -- geology, molecular and microbial biology, ecology, environmental engineering, and science education. Through group discussions, breakout sessions and hands-on activities, we sought to generate policy recommendations and curation guidelines for the collection, representation, sharing and quality control of geobiological datasets. We report on key themes that emerged from workshop discussions, including: - participants' broad conceptions of the long-term usefulness, reusability and value of data. - the benefits of aggregating site-specific data in general, and geobiological data in particular. - the importance of capturing a dataset's originating context, and the potential usefulness of photographs as a reliable and easy way of documenting context. - researchers' and resource managers' overlapping priorities with regards to 'big picture' data collection and management in the long-term. Overall, we found that workshop participants were enthusiastic and optimistic about future collaboration and development of community approaches to data sharing. We hope to continue discussion of geobiology data curation challenges and potential strategies at AGU. Outcomes from the workshop are guiding next steps in the SBDC project, led by investigators at the Center for Informatics Research in Science and Scholarship and Institute for Genomic Biology at the University of Illinois, in collaboration with partners at Johns Hopkins University and YNP.
Swaan, Corien M; Öry, Alexander V; Schol, Lianne G C; Jacobi, André; Richardus, Jan Hendrik; Timen, Aura
During the Ebola outbreak in West Africa in 2014-2015, close cooperation between the curative sector and the public health sector in the Netherlands was necessary for timely identification, referral, and investigation of patients with suspected Ebola virus disease (EVD). In this study, we evaluated experiences in preparedness among stakeholders of both the curative and public health sectors to formulate recommendations for optimizing preparedness protocols. Timeliness of referred patients with suspected EVD was used as an indicator for preparedness. In focus group sessions and semistructured interviews, the experiences of curative and public health stakeholders with the regional and national process of preparedness and response were recorded. Timeliness data for all referred patients with suspected EVD (n = 13) were collected from the first date of illness until arrival in the referral academic hospital. Ebola preparedness was considered extensive compared with the risk of an actual patient presenting, but nevertheless necessary. Regional coordination varied between regions. More standardization of regional preparation and operational guidelines was requested, as well as nationally standardized contingency criteria, and the National Centre for Infectious Disease Control was expected to coordinate the development of these guidelines. For the referred patients with suspected EVD, the median delay between the first date of illness and triage was 2.0 days (range: 0-10 days), and between triage and arrival in the referral hospital it was 5.0 hours (range: 2-7.5 hours). Ebola infection was not confirmed in any of these patients. Coordination between the public health sector and the curative sector needs improvement to reduce delay in patient management in emerging infectious diseases. Standardization of preparedness and response practices, through guidelines for institutional preparedness and blueprints for regional and national coordination, is necessary, as preparedness for emerging infectious diseases needs a multidisciplinary approach spanning both the public health sector and the curative sector. In the Netherlands, a national platform for preparedness has been established, in which both the curative and public health sectors participate, in order to implement the outcomes of this study.
ITEP: an integrated toolkit for exploration of microbial pan-genomes.
Benedict, Matthew N; Henriksen, James R; Metcalf, William W; Whitaker, Rachel J; Price, Nathan D
2014-01-03
Comparative genomics is a powerful approach for studying variation in physiological traits as well as the evolution and ecology of microorganisms. Recent technological advances have enabled sequencing large numbers of related genomes in a single project, requiring computational tools for their integrated analysis. In particular, accurate annotations and identification of gene presence and absence are critical for understanding and modeling the cellular physiology of newly sequenced genomes. Although many tools are available to compare the gene contents of related genomes, new tools are necessary to enable close examination and curation of protein families from large numbers of closely related organisms, to integrate curation with the analysis of gain and loss, and to generate metabolic networks linking the annotations to observed phenotypes. We have developed ITEP, an Integrated Toolkit for Exploration of microbial Pan-genomes, to curate protein families, compute similarities to externally-defined domains, analyze gene gain and loss, and generate draft metabolic networks from one or more curated reference network reconstructions in groups of related microbial species in which the combination of core and variable genes constitutes their "pan-genomes". The ITEP toolkit consists of: (1) a series of modular command-line scripts for identification, comparison, curation, and analysis of protein families and their distribution across many genomes; (2) a set of Python libraries for programmatic access to the same data; and (3) pre-packaged scripts to perform common analysis workflows on a collection of genomes. ITEP's capabilities include de novo protein family prediction, ortholog detection, analysis of functional domains, identification of core and variable genes and gene regions, sequence alignments and tree generation, annotation curation, and the integration of cross-genome analysis and metabolic networks for study of metabolic network evolution. ITEP is a powerful, flexible toolkit for generation and curation of protein families. ITEP's modular design allows for straightforward extension as analysis methods and tools evolve. By integrating comparative genomics with the development of draft metabolic networks, ITEP harnesses the power of comparative genomics to build confidence in links between genotype and phenotype and helps disambiguate gene annotations when they are evaluated in both evolutionary and metabolic network contexts.
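One of the analyses ITEP supports, splitting protein families into core and variable sets, reduces to simple operations on a gene presence/absence matrix. The sketch below illustrates that step on toy data; it is not ITEP's own code, whose family calling and ortholog detection are considerably more involved.

```python
# Minimal sketch of a core/variable gene split from a presence/absence matrix.
import pandas as pd

# rows = protein families (clusters), columns = genomes, values = gene counts
presence = pd.DataFrame(
    {"genomeA": [1, 1, 0, 2], "genomeB": [1, 1, 1, 0], "genomeC": [1, 0, 1, 0]},
    index=["fam1", "fam2", "fam3", "fam4"],
)

present = presence > 0
core = present.all(axis=1)                 # families found in every genome
variable = present.any(axis=1) & ~core     # families found in only some genomes

print("core families:    ", list(presence.index[core]))       # ['fam1']
print("variable families:", list(presence.index[variable]))   # ['fam2', 'fam3', 'fam4']
```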
Manasa, Justen; Lessells, Richard; Rossouw, Theresa; Naidu, Kevindra; Van Vuuren, Cloete; Goedhals, Dominique; van Zyl, Gert; Bester, Armand; Skingsley, Andrew; Stott, Katharine; Danaviah, Siva; Chetty, Terusha; Singh, Lavanya; Moodley, Pravi; Iwuji, Collins; McGrath, Nuala; Seebregts, Christopher J.; de Oliveira, Tulio
2014-01-01
Substantial amounts of data have been generated from patient management and academic exercises designed to better understand the human immunodeficiency virus (HIV) epidemic and design interventions to control it. A number of specialized databases have been designed to manage huge data sets from HIV cohort, vaccine, host genomic and drug resistance studies. Besides databases from cohort studies, most of the online databases contain limited curated data and are thus sequence repositories. HIV drug resistance has been shown to have a great potential to derail the progress made thus far through antiretroviral therapy. Thus, considerable resources have been invested in generating drug resistance data for patient management and surveillance purposes. Unfortunately, most of the data currently available relate to subtype B even though >60% of the epidemic is caused by HIV-1 subtype C. A consortium of clinicians, scientists, public health experts and policy makers working in southern Africa came together and formed a network, the Southern African Treatment and Resistance Network (SATuRN), with the aim of increasing curated HIV-1 subtype C and tuberculosis drug resistance data. This article describes the HIV-1 data curation process using the SATuRN Rega database. The data curation is a manual and time-consuming process done by clinical, laboratory and data curation specialists. Access to the highly curated data sets is through applications that are reviewed by the SATuRN executive committee. Examples of research outputs from the analysis of the curated data include trends in the level of transmitted drug resistance in South Africa, analysis of the levels of acquired resistance among patients failing therapy and factors associated with the absence of genotypic evidence of drug resistance among patients failing therapy. All these studies have been important for informing first- and second-line therapy. This database is a free, password-protected, open-source database available at www.bioafrica.net. Database URL: http://www.bioafrica.net/regadb/ PMID:24504151
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tseng, Yolanda D., E-mail: ydt2@uw.edu; Chen, Yu-Hui; Catalano, Paul J.
Purpose: To evaluate the response rate (RR) and time to local recurrence (TTLR) among patients who received salvage radiation therapy for relapsed or refractory aggressive non-Hodgkin lymphoma (NHL) and investigate whether RR and TTLR differed according to disease characteristics. Methods and Materials: A retrospective review was performed for all patients who completed a course of salvage radiation therapy between January 2001 and May 2011 at Brigham and Women's Hospital/Dana-Farber Cancer Institute. Separate analyses were conducted for patients treated with palliative and curative intent. Predictors of RR for each subgroup were assessed using a generalized estimating equation model. For patients treated with curative intent, local control (LC) and progression-free survival were estimated with the Kaplan-Meier method; predictors for TTLR were evaluated using a Cox proportional hazards regression model. Results: Salvage radiation therapy was used to treat 110 patients to 121 sites (76 curative, 45 palliative). Salvage radiation therapy was given as part of consolidation in 18% of patients treated with curative intent. Median dose was 37.8 Gy, with 58% and 36% of curative and palliative patients, respectively, receiving 39.6 Gy or higher. The RR was high (86% curative, 84% palliative). With a median follow-up of 4.8 years among living patients, 5-year LC and progression-free survival for curative patients were 66% and 34%, respectively. Refractory disease (hazard ratio 3.3; P=.024) and lack of response to initial chemotherapy (hazard ratio 4.3; P=.007) but not dose (P=.93) were associated with shorter TTLR. Despite doses of 39.6 Gy or higher, 2-year LC was only 61% for definitive patients with refractory disease or disease that did not respond to initial chemotherapy. Conclusions: Relapsed or refractory aggressive NHL is responsive to salvage radiation therapy, and durable LC can be achieved in some cases. However, refractory disease is associated with a shorter TTLR, suggesting that radiation dose escalation, addition of radiosensitizers, or a combination of both may be indicated in these patients.
Advances in Astromaterials Curation: Supporting Future Sample Return Missions
NASA Technical Reports Server (NTRS)
Evans, C. A.; Zeigler, R. A.; Fries, M. D..; Righter, K.; Allton, J. H.; Zolensky, M. E.; Calaway, M. J.; Bell, M. S.
2015-01-01
NASA's Astromaterials, curated at the Johnson Space Center in Houston, are the most extensive, best-documented, and least contaminated extraterrestrial samples that are provided to the worldwide research community. These samples include lunar samples from the Apollo missions, meteorites collected over nearly 40 years of expeditions to Antarctica (providing samples of dozens of asteroid bodies, the Moon, and Mars), Genesis solar wind samples, cosmic dust collected by NASA's high altitude airplanes, Comet Wild 2 and interstellar dust samples from the Stardust mission, and asteroid samples from JAXA's Hayabusa mission. A full account of NASA's curation efforts for these collections is provided by Allen et al. [1]. On average, we annually allocate about 1500 individual samples from NASA's astromaterials collections to hundreds of researchers from around the world, including graduate students and post-doctoral scientists; our allocation rate has roughly doubled over the past 10 years. The curation protocols developed for the lunar samples returned from the Apollo missions remain relevant and are adapted to new and future missions. Several lessons from the Apollo missions, including the need for early involvement of curation scientists in mission planning [1], have been applied to all subsequent sample return campaigns. From the 2013 National Academy of Sciences report [2]: "Curation is the critical interface between sample return missions and laboratory research. Proper curation has maintained the scientific integrity and utility of the Apollo, Antarctic meteorite, and cosmic dust collections for decades. Each of these collections continues to yield important new science. In the past decade, new state-of-the-art curatorial facilities for the Genesis and Stardust missions were key to the scientific breakthroughs provided by these missions." The results speak for themselves: research on NASA's astromaterials results in hundreds of papers annually, yields fundamental discoveries about the evolution of the solar system (e.g. [3] and references contained therein), and serves the global scientific community as ground truth for current and planned missions such as NASA's Dawn mission to Vesta and Ceres, and the future OSIRIS REx mission to asteroid Bennu [1,3]
THPdb: Database of FDA-approved peptide and protein therapeutics.
Usmani, Salman Sadullah; Bedi, Gursimran; Samuel, Jesse S; Singh, Sandeep; Kalra, Sourav; Kumar, Pawan; Ahuja, Anjuman Arora; Sharma, Meenu; Gautam, Ankur; Raghava, Gajendra P S
2017-01-01
THPdb (http://crdd.osdd.net/raghava/thpdb/) is a manually curated repository of Food and Drug Administration (FDA) approved therapeutic peptides and proteins. The information in THPdb has been compiled from 985 research publications, 70 patents and other resources like DrugBank. The current version of the database holds a total of 852 entries, providing comprehensive information on 239 US-FDA approved therapeutic peptides and proteins and their 380 drug variants. The information on each peptide and protein includes its sequence, chemical properties, composition, disease area, mode of activity, physical appearance, category or pharmacological class, pharmacodynamics, route of administration, toxicity, target of activity, etc. In addition, we have annotated the structures of most of the proteins and peptides. A number of user-friendly tools have been integrated to facilitate easy browsing and data analysis. To assist the scientific community, a web interface and mobile app have also been developed.
The liquid biopsy in lung cancer.
Ansari, Junaid; Yun, Jungmi W; Kompelli, Anvesh R; Moufarrej, Youmna E; Alexander, Jonathan S; Herrera, Guillermo A; Shackelford, Rodney E
2016-11-01
The incidence of lung cancer has increased significantly over the last century, largely due to smoking, and lung cancer remains the most common cause of cancer death worldwide. This is largely because lung cancer often first presents at late stages, when curative therapeutic options are lacking. Delayed diagnoses, inadequate tumor sampling, and lung cancer misdiagnoses are also not uncommon due to the limitations of the tissue biopsy. Our better understanding of the tumor microenvironment and the systemic actions of tumors, combined with the recent advent of the liquid biopsy, may allow molecular diagnostics to be done on circulating tumor markers, particularly circulating tumor DNA. Multiple liquid biopsy molecular methods are presently being examined to determine their efficacy as surrogates to the tumor tissue biopsy. This review will focus on new liquid biopsy technologies and how they may assist in lung cancer detection, diagnosis, and treatment.
Sex workers and AIDS in Pakistan. NCIH's Women's Reproductive Health Initiative.
Khalji, T
1997-01-01
A recent study interviewing several sex workers and health care providers in Lahore's red light district found that sex workers seem to prefer curative care over preventive health measures. That preference, together with the lack of acknowledgement of any kind of commercial sex work, has led to an increase in the incidence of HIV infection and other sexually transmitted diseases (STDs). The commercial sex business is thriving in Lahore, but the stigma of female sexuality has hampered widespread awareness of the causes and treatment of AIDS. Many prostitutes take medication used by a colleague or use herbal concoctions to treat STDs. Language and culture are two of several obstacles to educating these women. Sex workers in Lahore's red light district have neither regular gynecological examinations nor general medical check-ups. Donor assistance for education and HIV/STD prevention interventions among sex workers is lacking.
Current Approaches and Recent Developments in the Management of Head and Neck Paragangliomas
Kaliski, Alexandre; Boedeker, Carsten C.; Martucci, Victoria; Fojo, Tito; Adler, John R.
2014-01-01
Head and neck paragangliomas (HNPGLs) are rare neuroendocrine tumors belonging to the family of pheochromocytoma/paraganglioma neoplasms. Despite advances in understanding the pathogenesis of these tumors, the growth potential and clinical outcome of individual cases remain largely unpredictable. For several decades, surgical resection has been the treatment of choice for HNPGLs. However, increasing experience in various forms of radiosurgery has been reported to result in curative-like outcomes, even for tumors localized in the most inaccessible anatomical areas. The emergence of such new therapies challenges the traditional paradigm for the management of HNPGLs. This review will assist and guide physicians who encounter patients with such tumors, either from a diagnostic or therapeutic standpoint. This review will also particularly emphasize current and emerging knowledge in genetics, imaging, and therapeutic options as well as the health-related quality of life for patients with HNPGLs. PMID:25033281
Through their own eyes: a media-based group approach to adolescent trauma.
Tosone, Carol; Gelman, Caroline Rosenthal; McVeigh, Lynne
2005-07-01
This paper describes the process of two groups of students from high schools located in the immediate vicinity of the World Trade Center grappling to make sense of the events of September 11 through the creation of a documentary chronicling their experiences. The process of creating these videos mirrored the process and curative factors of a psychotherapy group in a non-stigmatizing, innovative, and accessible format, one generated by the students themselves with the assistance of professionals in the visual and performing arts. After reviewing the literature on the potential impact of violence on adolescents and the use of group treatment, especially in school settings, as an optimal choice for this population, we describe the distinctive process of the two separate groups of students, each culminating in different expressions of their very personal experience of September 11. We understand and contextualize their process through the lens of the therapeutic dynamics and elements of group work.
Organic Contamination Baseline Study on NASA JSC Astromaterial Curation Gloveboxes
NASA Technical Reports Server (NTRS)
Calaway, Michael J.; Allton, J. H.; Allen, C. C.; Burkett, P. J.
2013-01-01
Planned sample return missions to carbon-rich asteroids and Mars in the next two decades will require strict handling and curation protocols as well as new procedures for reducing organic contamination. Since the Apollo program, astromaterials collections have mainly been concerned with inorganic contamination [1-4]. However, future isolation containment systems for astromaterials, possibly nitrogen-enriched gloveboxes, must be able to reduce organic and inorganic cross-contamination. In 2012, a baseline study was conducted to establish the current state of organic cleanliness in gloveboxes used by the NASA JSC astromaterials curation labs, which can serve as a benchmark for future mission designs.
Lunar and Meteorite Thin Sections for Undergraduate and Graduate Studies
NASA Technical Reports Server (NTRS)
Allen, J.; Galindo, C.; Luckey, M.; Reustle, J.; Todd, N.; Allen, C.
2012-01-01
The Johnson Space Center (JSC) has the unique responsibility to curate NASA's extraterrestrial samples from past and future missions. Curation includes documentation, preservation, preparation, and distribution of samples for research, education, and public outreach. Between 1969 and 1972, six Apollo missions brought back 382 kilograms of lunar rocks, core samples, pebbles, sand and dust from the lunar surface. JSC also curates meteorites collected on US expeditions to Antarctica, including rocks from the Moon, Mars, and many asteroids such as Vesta. Studies of rock and soil samples from the Moon and meteorites continue to yield useful information about the early history of the Moon, the Earth, and the inner solar system.
16. VIEW OF ROBERT VOGEL, CURATOR, DIVISION OF MECHANICAL & ...
16. VIEW OF ROBERT VOGEL, CURATOR, DIVISION OF MECHANICAL & CIVIL ENGINEER, NATIONAL MUSEUM OF AMERICAN HISTORY, SMITHSONIAN INSTITUTION, SITTING IN ELEVATOR CAR. MR. VOGEL IS RESPONSIBLE FOR THE RELOCATION OF THE ELEVATOR TO THE SMITHSONIAN INSTITUTION - 72 Marlborough Street, Residential Hydraulic Elevator, Boston, Suffolk County, MA
Triage by ranking to support the curation of protein interactions
Pasche, Emilie; Gobeill, Julien; Rech de Laval, Valentine; Gleizes, Anne; Michel, Pierre-André; Bairoch, Amos
2017-01-01
Today, molecular biology databases are the cornerstone of knowledge sharing for life and health sciences. The curation and maintenance of these resources are labour intensive. Although text mining is gaining impetus among curators, its integration into curation workflows has not yet been widely adopted. The Swiss Institute of Bioinformatics Text Mining and CALIPHO groups joined forces to design a new curation support system named nextA5. In this report, we explore the integration of novel triage services to support the curation of two types of biological data: protein–protein interactions (PPIs) and post-translational modifications (PTMs). The recognition of PPIs and PTMs poses a special challenge, as it not only requires the identification of biological entities (proteins or residues), but also that of particular relationships (e.g. binding or position). These relationships cannot be described with onto-terminological descriptors such as the Gene Ontology for molecular functions, which makes the triage task more challenging. Prioritizing papers for these tasks thus requires the development of different approaches. In this report, we propose a new method to prioritize articles containing information specific to PPIs and PTMs. The new resources (RESTful APIs, semantically annotated MEDLINE library) enrich the neXtA5 platform. We tuned the article prioritization model on a set of 100 proteins previously annotated by the CALIPHO group. The effectiveness of the triage service was tested with a dataset of 200 annotated proteins. We defined two sets of descriptors to support automatic triage: the first set to enrich for papers with PPI data, and the second for PTMs. All occurrences of these descriptors were marked up in MEDLINE and indexed, thus constituting a semantically annotated version of MEDLINE. These annotations were then used to estimate the relevance of a particular article with respect to the chosen annotation type. This relevance score was combined with a local vector-space search engine to generate a ranked list of PMIDs. We also evaluated a query refinement strategy, which adds specific keywords (such as 'binds' or 'interacts') to the original query. Compared to PubMed, the search effectiveness of the nextA5 triage service is improved by 190% for the prioritization of papers with PPI information and by 260% for papers with PTM information. Combining advanced retrieval and query refinement strategies with automatically enriched MEDLINE contents is effective for improving triage in complex curation tasks such as the curation of protein PPIs and PTMs. Database URL: http://candy.hesge.ch/nextA5 PMID:29220432
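A minimal sketch of the kind of ranking described above, combining a descriptor-based relevance score with vector-space similarity and a refined query. This is not the nextA5 implementation; the documents, descriptor list and equal weighting below are hypothetical.

# Illustrative triage-by-ranking sketch (not the nextA5 code): combine a
# descriptor score with TF-IDF cosine similarity to rank candidate PMIDs.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = {  # hypothetical abstracts keyed by PMID
    "PMID:111": "protein A binds protein B in a yeast two-hybrid assay",
    "PMID:222": "phosphorylation of Ser15 regulates kinase activity",
    "PMID:333": "expression profiling of a transcription factor family",
}
ppi_descriptors = {"binds", "interacts", "interaction", "complex"}  # PPI cues

def descriptor_score(text):
    return len(set(text.lower().split()) & ppi_descriptors) / len(ppi_descriptors)

query = "protein protein interaction binds"  # query refinement: 'binds' added
vectorizer = TfidfVectorizer().fit(list(docs.values()) + [query])
similarities = cosine_similarity(vectorizer.transform([query]),
                                 vectorizer.transform(docs.values()))[0]

ranked = sorted(((0.5 * sim + 0.5 * descriptor_score(text), pmid)
                 for sim, (pmid, text) in zip(similarities, docs.items())),
                reverse=True)
for score, pmid in ranked:
    print(f"{pmid}\t{score:.3f}")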
See, William A
2014-11-01
To compare perioperative morbidity and oncological outcomes of robot-assisted laparoscopic radical cystectomy (RARC) to open RC (ORC) at a single institution. A retrospective analysis was performed on a consecutive series of patients undergoing RC (100 RARC and 100 ORC) at Wake Forest University with curative intent from 2006 until 2010. Complication data using the Clavien system were collected for 90 days postoperatively. Complications and other perioperative outcomes were compared between patient groups. Patients in both groups had comparable preoperative characteristics. The overall and major complication (Clavien ≥ 3) rates were lower for RARC patients at 35 vs 57% (P = 0.001) and 10 vs 22% (P = 0.019), respectively. There were no significant differences between groups for pathological outcomes, including stage, number of nodes harvested or positive margin rates. Our data suggest that patients undergoing RARC have perioperative oncological outcomes comparable with ORC, with fewer overall or major complications. Definitive claims about comparative outcomes with RARC require results from larger, randomised controlled trials. Copyright © 2014 Elsevier Inc. All rights reserved.
Miyata, Tatsunori; Yamashita, Yo-Ichi; Yamao, Takanobu; Umezaki, Naoki; Tsukamoto, Masayo; Kitano, Yuki; Yamamura, Kensuke; Arima, Kota; Kaida, Takayoshi; Nakagawa, Shigeki; Imai, Katsunori; Hashimoto, Daisuke; Chikamoto, Akira; Ishiko, Takatoshi; Baba, Hideo
2017-06-01
Postoperative complications are an indicator of poor prognosis in patients with several gastroenterological cancers after curative operations. We herein examined the prognostic impact of postoperative complications in patients with intrahepatic cholangiocarcinoma after curative operations. We retrospectively analyzed 60 patients with intrahepatic cholangiocarcinoma who underwent primary curative operations from June 2002 to February 2016. The prognostic impact of postoperative complications was analyzed using the log-rank test and a Cox proportional hazards model. Postoperative complications (Clavien-Dindo classification grade 3 or more) occurred in 13 patients (21.7%). Overall survival of patients without postoperative complications was significantly better than that of patients with postoperative complications (p = 0.025). Postoperative complications were an independent prognostic factor for overall survival (hazard ratio 3.02; p = 0.030). In addition, bile duct resection and reconstruction (odds ratio 59.1; p = 0.002), hepatitis C virus antibody positivity (odds ratio 7.14; p = 0.022), and lymph node dissection (odds ratio 6.28; p = 0.040) were independent predictors of postoperative complications. Postoperative complications may be an independent predictor of poorer survival in patients with intrahepatic cholangiocarcinoma after curative operations. Lymph node dissection and bile duct resection and reconstruction were risk factors for postoperative complications; therefore, particular attention should be paid when performing lymph node dissection or bile duct resection and reconstruction in patients with intrahepatic cholangiocarcinoma.
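A minimal sketch of the survival analysis described above (log-rank comparison of overall survival with versus without complications, and a Cox proportional hazards model), using the Python lifelines package. The column names and toy follow-up data are invented, not the study's dataset.

# Sketch of a log-rank test and Cox proportional hazards fit with lifelines.
# The DataFrame below is a toy example, not the study data.
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "months":       [12, 30, 45, 8, 60, 24, 18, 50, 10, 36],  # follow-up time
    "death":        [1, 0, 1, 1, 0, 1, 0, 0, 1, 1],           # 1 = event observed
    "complication": [1, 0, 0, 1, 0, 1, 1, 0, 0, 0],           # Clavien-Dindo grade >= 3
})

with_c = df[df.complication == 1]
without_c = df[df.complication == 0]
lr = logrank_test(with_c.months, without_c.months,
                  event_observed_A=with_c.death, event_observed_B=without_c.death)
print("log-rank p =", lr.p_value)

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="death")  # 'complication' is the covariate
cph.print_summary()  # exp(coef) is the hazard ratio for complications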
Schmedes, Sarah E; King, Jonathan L; Budowle, Bruce
2015-01-01
Whole-genome data are invaluable for large-scale comparative genomic studies. Current sequencing technologies have made it feasible to sequence entire bacterial genomes with relative ease and speed, at a substantially reduced cost per nucleotide and hence per genome. More than 3,000 bacterial genomes have been sequenced and are available at the finished status. Publicly available genomes can be readily downloaded; however, it is challenging to verify the specific supporting data contained within the download and to identify errors and inconsistencies that may be present within the organizational data content and metadata. AutoCurE, an automated tool for bacterial genome database curation in Excel, was developed to facilitate local database curation of supporting data that accompany downloaded genomes from the National Center for Biotechnology Information. AutoCurE provides an automated approach to curate local genomic databases by flagging inconsistencies or errors by comparing the downloaded supporting data to the genome reports to verify genome name, RefSeq accession numbers, the presence of archaea, BioProject/UIDs, and sequence file descriptions. Flags are generated for nine metadata fields if there are inconsistencies between the downloaded genomes and genome reports and if erroneous or missing data are evident. AutoCurE is an easy-to-use tool for local database curation for large-scale genome data prior to downstream analyses.
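The flagging step can be illustrated with a short sketch in the spirit of AutoCurE (the actual tool runs in Excel, so this is not its implementation); the field names and example records below are hypothetical.

# Illustrative consistency check between a downloaded record and its genome
# report. Not AutoCurE itself; field names and values are invented.
FIELDS = ["organism_name", "refseq_accession", "bioproject", "sequence_description"]

downloaded = {
    "organism_name": "Escherichia coli K-12",
    "refseq_accession": "NC_000913.3",
    "bioproject": "PRJNA57779",
    "sequence_description": "complete genome",
}
genome_report = {
    "organism_name": "Escherichia coli str. K-12 substr. MG1655",
    "refseq_accession": "NC_000913.3",
    "bioproject": "PRJNA57779",
    "sequence_description": "complete genome",
}

def flag_inconsistencies(record, report, fields):
    """Return a flag for every field that is missing or differs between sources."""
    flags = []
    for field in fields:
        a, b = record.get(field), report.get(field)
        if a is None or b is None:
            flags.append((field, "missing value"))
        elif a.strip().lower() != b.strip().lower():
            flags.append((field, f"mismatch: {a!r} vs {b!r}"))
    return flags

for field, issue in flag_inconsistencies(downloaded, genome_report, FIELDS):
    print(f"FLAG [{field}]: {issue}")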
Urban, Martin; Cuzick, Alayne; Rutherford, Kim; Irvine, Alistair; Pedro, Helder; Pant, Rashmi; Sadanadan, Vidyendra; Khamari, Lokanath; Billal, Santoshkumar; Mohanty, Sagar; Hammond-Kosack, Kim E.
2017-01-01
The pathogen–host interactions database (PHI-base) is available at www.phi-base.org. PHI-base contains expertly curated molecular and biological information on genes proven to affect the outcome of pathogen–host interactions reported in peer-reviewed research articles. In addition, literature that indicates specific gene alterations that did not affect the disease interaction phenotype is curated to provide complete datasets for comparative purposes. Viruses are not included. Here we describe a revised PHI-base Version 4 data platform with improved search, filtering and extended data display functions. A PHIB-BLAST search function is provided, as well as a link to PHI-Canto, a tool for authors to directly curate their own published data into PHI-base. The new release of PHI-base Version 4.2 (October 2016) has an increased data content containing information from 2219 manually curated references. The data provide information on 4460 genes from 264 pathogens tested on 176 hosts in 8046 interactions. Prokaryotic and eukaryotic pathogens are represented in almost equal numbers. Approximately 70% of host species are plants and 30% are other species of medical and/or environmental importance. Additional data types included in PHI-base 4 are the direct targets of pathogen effector proteins in experimental and natural host organisms. The curation problems encountered and the future directions of the PHI-base project are briefly discussed. PMID:27915230
The BioGRID Interaction Database: 2011 update
Stark, Chris; Breitkreutz, Bobby-Joe; Chatr-aryamontri, Andrew; Boucher, Lorrie; Oughtred, Rose; Livstone, Michael S.; Nixon, Julie; Van Auken, Kimberly; Wang, Xiaodong; Shi, Xiaoqi; Reguly, Teresa; Rust, Jennifer M.; Winter, Andrew; Dolinski, Kara; Tyers, Mike
2011-01-01
The Biological General Repository for Interaction Datasets (BioGRID) is a public database that archives and disseminates genetic and protein interaction data from model organisms and humans (http://www.thebiogrid.org). BioGRID currently holds 347 966 interactions (170 162 genetic, 177 804 protein) curated from both high-throughput data sets and individual focused studies, as derived from over 23 000 publications in the primary literature. Complete coverage of the entire literature is maintained for budding yeast (Saccharomyces cerevisiae), fission yeast (Schizosaccharomyces pombe) and thale cress (Arabidopsis thaliana), and efforts to expand curation across multiple metazoan species are underway. The BioGRID houses 48 831 human protein interactions that have been curated from 10 247 publications. Current curation drives are focused on particular areas of biology to enable insights into conserved networks and pathways that are relevant to human health. The BioGRID 3.0 web interface contains new search and display features that enable rapid queries across multiple data types and sources. An automated Interaction Management System (IMS) is used to prioritize, coordinate and track curation across international sites and projects. BioGRID provides interaction data to several model organism databases, resources such as Entrez-Gene and other interaction meta-databases. The entire BioGRID 3.0 data collection may be downloaded in multiple file formats, including PSI MI XML. Source code for BioGRID 3.0 is freely available without any restrictions. PMID:21071413
2013-01-01
Background The objective of this study was to compare the socioeconomic and family characteristics of underprivileged schoolchildren with and without curative dental needs participating in a dental health program. Methods A random sample of 1411 8- to 10-year-old Brazilian schoolchildren was examined, and two sample groups were included in the cross-sectional study: 544 presented curative dental needs and the other 867 schoolchildren were without curative dental needs. The schoolchildren were examined for the presence of caries lesions using the DMFT index, and their parents were asked to answer questions about socioenvironmental characteristics of their families. Logistic regression models were fitted to estimate odds ratios (OR), their 95% confidence intervals (CI), and significance levels. Results After adjusting for potential confounders, it was found that family income above one Brazilian minimum wage, fewer than four residents in the household, home ownership, and children living with both biological parents were protective factors against the presence of dental caries and, consequently, curative dental needs. Conclusions Socioeconomic status and family structure influence the curative dental needs of children from underprivileged communities. In this sense, dental health programs should plan and implement strategic efforts to reduce inequities in oral health status and access to oral health services among vulnerable schoolchildren and their families. PMID:24138683
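For readers unfamiliar with the analysis, the adjusted odds ratios and confidence intervals reported above come from logistic regression; a minimal sketch with the Python statsmodels package follows, using invented data and hypothetical predictor names rather than the study's variables.

# Minimal logistic-regression sketch: odds ratios with 95% CIs.
# The data and variable names are invented for illustration only.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({
    "curative_need": [1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0],
    "income_gt_min": [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1],  # > 1 minimum wage
    "own_home":      [0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1],
})
X = sm.add_constant(df[["income_gt_min", "own_home"]])
fit = sm.Logit(df["curative_need"], X).fit(disp=0)

odds_ratios = np.exp(fit.params)   # OR = exp(beta); values < 1 suggest protection
conf_int = np.exp(fit.conf_int())  # 95% CI on the odds-ratio scale
print(pd.concat([odds_ratios.rename("OR"), conf_int], axis=1))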
Physicians' evaluations of patients' decisions to refuse oncological treatment
van Kleffens, T; van Leeuwen, E
2005-01-01
Objective: To gain insight into the standards of rationality that physicians use when evaluating patients' treatment refusals. Design of the study: Qualitative design with in-depth interviews. Participants: The study sample included 30 patients with cancer and 16 physicians (oncologists and general practitioners). All patients had refused a recommended oncological treatment. Results: Patients base their treatment refusals mainly on personal values and/or experience. Physicians mainly emphasise the medical perspective when evaluating patients' treatment refusals. From a medical perspective, a patient's treatment refusal based on personal values and experience is generally evaluated as irrational and difficult to accept, especially when it concerns a curative treatment. Physicians have a different attitude towards non-curative treatments and have less difficulty accepting a patient's refusal of these treatments. Thus, an important factor in the physician's evaluation of a treatment refusal is whether the treatment refused is curative or non-curative. Conclusion: Physicians mainly use goal-oriented and patients mainly value-oriented rationality, but in the case of non-curative treatment refusal, physicians give more emphasis to value-oriented rationality. A consensus between the value-oriented approaches of patient and physician may then emerge, leading to the patient's decision being understood and accepted by the physician. The physician's acceptance is crucial to his or her attitude towards the patient. It contributes to the patient's feeling free to decide, and being understood and respected, and thus to a better physician–patient relationship. PMID:15738431
Liao, Rui; Fu, Yi-Peng; Wang, Ting; Deng, Zhi-Gang; Li, De-Wei; Fan, Jia; Zhou, Jian; Feng, Gen-Sheng; Qiu, Shuang-Jian; Du, Cheng-You
2017-01-03
Although Metavir and Fibrosis-4 (FIB-4) scores are typically used to assess the severity of liver fibrosis, the relationship between these scores and patient outcome in hepatocellular carcinoma (HCC) is unclear. The aim of this study was to evaluate the prognostic value of the severity of hepatic fibrosis in HBV-related HCC patients after curative resection. We examined the prognostic roles of the Metavir and preoperative FIB-4 scores in 432 HBV-HCC patients who underwent curative resection at two different medical centers located in western (Chongqing) and eastern (Shanghai) China. In the testing set (n = 108), the Metavir, FIB-4, and combined Metavir/FIB-4 scores were predictive of overall survival (OS) and recurrence-free survival (RFS). Additionally, they were associated with several clinicopathologic variables. In the validation set (n = 324), the Metavir, FIB-4, and combined Metavir/FIB-4 scores were associated with poor prognosis in HCC patients after curative resection. Importantly, in the negative alpha-fetoprotein subgroup (≤ 20 ng/mL), the FIB-4 index (I vs. II) could discriminate between patient outcomes (high or low OS and RFS). Thus Metavir, preoperative FIB-4, and combined Metavir/FIB-4 scores are prognostic markers in HBV-HCC patients after curative hepatectomy.
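The abstract does not define the FIB-4 index; for reference, the widely used formulation (general background knowledge, not stated in this record) combines age, aminotransferases and platelet count:

\mathrm{FIB\text{-}4} = \frac{\text{age (years)} \times \text{AST (U/L)}}{\text{platelet count } (10^{9}/\mathrm{L}) \times \sqrt{\text{ALT (U/L)}}}

For example, a 60-year-old patient with AST 80 U/L, ALT 64 U/L and platelets 150 × 10^9/L would have FIB-4 = (60 × 80) / (150 × 8) = 4.0; higher values indicate more advanced fibrosis.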
2014-01-01
Background Children with cancer, parents, and clinicians face difficult decisions when cure is no longer possible. Little is known about decision-making processes, how agreement is reached, or the perspectives of different actors. Professionals voice concerns about managing parental expectations and beliefs, which can be contrary to their own and may change over time. We conducted the first systematic review to determine what constitutes best medico-legal practice for children under 19 years as context to exploring the perspectives of actors who make judgements and decisions when cancer treatment is no longer curative. Methods Theory-informed mixed-method thematic systematic review with theory development. Results Eight legal/ethical guidelines and 18 studies were included. Whilst there were no unresolved dilemmas, actors had different perspectives and motives. In line with guidelines, the best interests of the individual child informed decisions, although how different actors conceptualized 'best interests' when treatment was no longer curative varied. Respect for autonomy was understood as following child/parent preferences, which varied from case to case. Doctors generally shared information so that parents alone could make an informed decision. When parents received reliable information, and personalized interest in their child, they were more likely to achieve shared trust and a clearer transition to palliation. Although under-represented in research studies, young people's perspectives showed some differences from those of parents and professionals. For example, young people preferred to be informed even when prognosis was poor, and they had an altruistic desire to help others by participating in research. Conclusion There needs to be fresh impetus to more effectively and universally implement the ethics of professionalism into daily clinical practice in order to reinforce humanitarian attitudes. Ethical guidelines and regulations attempt to bring professionals together by articulating shared values. While important, ethics training must be supported by institutions/organizations to assist doctors to maintain good professional standards. Findings will hopefully stimulate further normative and descriptive lines of research in this complex under-researched field. Future research needs to be undertaken through a more deliberative cultural lens that includes children's and multi-disciplinary team members' perspectives to more fully characterize and understand the dynamics of the decision-making process in this specific end-of-life context. PMID:24884514
NASA Astrophysics Data System (ADS)
Noren, A.; Brady, K.; Myrbo, A.; Ito, E.
2007-12-01
Lacustrine sediment cores are an integral archive for reconstructing continental paleoclimate, owing to their potentially high temporal resolution and their ability to resolve spatial variability in climate across vast sections of the globe. Researchers studying these archives now have a large, nationally-funded, public facility dedicated to the support of their efforts. The LRC LacCore Facility, funded by NSF and the University of Minnesota, provides free or low-cost assistance to any portion of research projects, depending on the specific needs of the project. A large collection of field equipment (site survey equipment, coring devices, boats/platforms, water sampling devices) for nearly any lacustrine setting is available for rental, and Livingstone-type corers and drive rods may be purchased. LacCore staff can accompany field expeditions to operate these devices and curate samples, or provide training prior to device rental. The Facility maintains strong connections to experienced shipping agents and customs brokers, which vastly improves transport and importation of samples. In the lab, high-end instrumentation (e.g., multisensor loggers, high-resolution digital linescan cameras) provides a baseline of fundamental analyses before any sample material is consumed. LacCore staff provide support and training in lithological description, including smear-slide, XRD, and SEM analyses. The LRC botanical macrofossil reference collection is a valuable resource for both core description and detailed macrofossil analysis. Dedicated equipment and space for various subsample analyses streamlines these endeavors; subsamples for several analyses may be submitted for preparation or analysis by Facility technicians for a fee (e.g., carbon and sulfur coulometry, grain size, pollen sample preparation and analysis, charcoal, biogenic silica, LOI, freeze drying). The National Lacustrine Core Repository now curates ~9 km of sediment cores from expeditions around the world, and stores metadata and analytical data for all cores processed at the facility. Any researcher may submit sample requests for material in archived cores. Supplies for field (e.g., polycarbonate pipe, endcaps), lab (e.g., sample containers, pollen sample spike), and curation (e.g., D-tubes) are sold at cost. In collaboration with facility users, staff continually develop new equipment, supplies, and procedures as needed in order to provide the best and most comprehensive set of services to the research community.
NASA Astrophysics Data System (ADS)
Lawrence, B.; Bennett, V.; Callaghan, S.; Juckes, M. N.; Pepler, S.
2013-12-01
The UK Centre for Environmental Data Archival (CEDA) hosts a number of formal data centres, including the British Atmospheric Data Centre (BADC), and is a partner in a range of national and international data federations, including the InfraStructure for the European Network for Earth system Simulation, the Earth System Grid Federation, and the distributed IPCC Data Distribution Centres. The mission of CEDA is to formally curate data from, and facilitate the doing of, environmental science. The twin aims are symbiotic: data curation helps facilitate science, and facilitating science helps with data curation. Here we cover how CEDA delivers this strategy through established internal processes supplemented by short-term projects, supported by staff with a range of roles. We show how CEDA adds value to data in the curated archive, and how it supports science, and show examples of the aforementioned symbiosis. We begin by discussing curation: CEDA has the formal responsibility for curating the data products of atmospheric science and earth observation research funded by the UK Natural Environment Research Council (NERC). However, curation is not just about the provider community; the consumer communities matter too, and the consumers of these data cross the boundaries of science, including engineers and medics as well as the gamut of the environmental sciences. There is a small but growing cohort of non-science users. For both producers and consumers of data, information about data is crucial, and a range of CEDA staff have long worked on tools and techniques for creating, managing, and delivering metadata (as well as data). CEDA "science support" staff work with scientists to help them prepare and document data for curation. As one of a spectrum of activities, CEDA has worked on data Publication as a method of both adding value to some data, and rewarding the effort put into the production of quality datasets. As such, we see this activity as both a curation and a facilitation activity. A range of more focused facilitation activities are carried out, from providing a computing platform suitable for big-data analytics (the Joint Analysis System, JASMIN), to working on distributed data analysis (EXARCH), and the acquisition of third party data to support science and impact (e.g. in the context of the facility for Climate and Environmental Monitoring from Space, CEMS). We conclude by confronting the view of Parsons and Fox (2013) that metaphors such as Data Publication, Big Iron, Science Support etc. are limiting, and suggest the CEDA experience is that these sorts of activities can and do co-exist, much as they conclude they should. However, we also believe that within co-existing metaphors, production systems need to be limited in their scope, even if they are on a road to a more joined-up infrastructure. We shouldn't confuse what we can do now with what we might want to do in the future.
Mantovani, Giovanni; Massa, Elena; Astara, Giorgio; Murgia, Viviana; Gramignano, Giulia; Lusso, Maria Rita; Camboni, Paolo; Ferreli, Luca; Mocci, Miria; Perboni, Simona; Mura, Loredana; Madeddu, Clelia; Macciò, Antonio
2003-01-01
In the present open non-randomized phase II study we looked for effectiveness, safety, tolerability and costs of locally applied GM-CSF in preventing or treating mucositis in patients receiving chemotherapy or chemoradiotherapy for head and neck cancer. In addition to clinical mucositis scoring system, the effects of treatment with GM-CSF were evaluated by its impact on patient quality of life and by laboratory immunological assays such as serum proinflammatory cytokines, IL-2 and leptin. The trial was designed to assess the effectiveness of local GM-CSF treatment in two different settings: i) prophylaxis of mucositis; ii) treatment of mucositis. Prophylaxis was chosen for chemoradiotherapy treatments of high mucosatoxic potential, while curative treatment was reserved for chemotherapy or chemoradiotherapy treatments of lesser potential of inducing mucositis. From January 1998 to December 2001, 68 patients entered the study. The great majority of patients of both groups had head and neck cancer, were stage IV, PS ECOG 0-1, were habitual smokers and were treated with chemotherapy and concomitant (or sequential) chemoradiotherapy. Forty-six patients were included in the 'prophylactic' setting and 22 patients in the 'curative' setting. The main findings of our study are: only 50% of patients included in the 'prophylactic' setting developed mucositis; the duration of oral mucositis from appearance until complete remission was significantly shorter in the 'prophylactic' than in the 'curative' setting; the mean grade of oral mucositis at baseline, on day 3 of therapy and on day 6 of therapy was significantly lower in the 'prophylactic' than in the 'curative' setting; 24 (55.82%) patients in the 'prophylactic' setting had grade 3/4 oral mucositis at baseline compared to 25 (80.60%) patients in the 'curative' setting (p=0.048). Thirteen (30.23%) patients in the 'prophylactic' setting had grade 3/4 oral mucositis on day 3 of therapy compared to 19 (61.29%) patients in the 'curative' setting (p=0.015); 'prophylactic' setting was able to shorten grade 3/4 oral mucositis to grade 0/1 more effectively than the 'curative' one on day 6 of therapy (p=0.05). The present clinical trial is to date by far the largest study assessing the effectiveness of topical GM-CSF and it is the first study comparing the efficacy of topical GM-CSF in the 'prophylactic' setting, i.e., with the aim to prevent the chemoradiotherapy-induced oral mucositis, with that in the 'curative' treatment, i.e., the therapy for established oral mucositis. The topical application of GM-CSF was demonstrated to be effective for oral mucositis induced by chemotherapy and chemoradiotherapy regimens. Moreover, the 'prophylactic' setting was demonstrated to be more effective than the 'curative' one.
Changing the Curation Equation: A Data Lifecycle Approach to Lowering Costs and Increasing Value
NASA Astrophysics Data System (ADS)
Myers, J.; Hedstrom, M.; Plale, B. A.; Kumar, P.; McDonald, R.; Kooper, R.; Marini, L.; Kouper, I.; Chandrasekar, K.
2013-12-01
What if everything that researchers know about their data, and everything their applications know, were directly available to curators? What if all the information that data consumers discover and infer about data were also available? What if curation and preservation activities occurred incrementally, during research projects instead of after they end, and could be leveraged to make it easier to manage research data from the moment of its creation? These are questions that the Sustainable Environments - Actionable Data (SEAD) project, funded as part of the National Science Foundation's DataNet partnership, was designed to answer. Data curation is challenging, but it is made more difficult by the historical separation of data production, data use, and formal curation activities across organizations, locations, and applications, and across time. Modern computing and networking technologies allow a much different approach in which data and metadata can easily flow between these activities throughout the data lifecycle, and in which heterogeneous and evolving data and metadata can be managed. Sustainability research, SEAD's initial focus area, is a clear example of an area where the nature of the research (cross-disciplinary, integrating heterogeneous data from independent sources, small teams, rapid evolution of sensing and analysis techniques) and the barriers and costs inherent in traditional methods have limited adoption of existing curation tools and techniques, to the detriment of overall scientific progress. To explore these ideas and create a sustainable curation capability for communities such as sustainability research, the SEAD team has developed and is now deploying an interacting set of open source data services that demonstrate this approach. These services provide end-to-end support for management of data during research projects; publication of that data into long-term archives; and integration of it into community networks of publications, research center activities, and synthesis efforts. They build on a flexible 'semantic content management' architecture and incorporate notions of 'active' and 'social' curation - continuous, incremental curation activities performed by the data producers (active) and the community (social) that are motivated by a range of direct benefits. Examples include the use of metadata (tags) to allow generation of custom geospatial maps, automated metadata extraction to generate rich data pages for known formats, and the use of information about data authorship to allow automatic updates of personal and project research profiles when data is published. In this presentation, we describe the core capabilities of SEAD's services and their application in sustainability research. We also outline the key features of the SEAD architecture - the use of global semantic identifiers, extensible data and metadata models, web services to manage context shifts, scalable cloud storage - and highlight how this approach is particularly well suited to extension by independent third parties. We conclude with thoughts on how this approach can be applied to challenging issues such as exposing 'dark' data and reducing duplicate creation of derived data products, and can provide a new level of analytics for community analysis and coordination.
Competency-Based Curriculum: An Effective Approach to Digital Curation Education
ERIC Educational Resources Information Center
Kim, Jeonghyun
2015-01-01
The University of North Texas conducted a project involving rigorous curriculum development and instructional design to address the goal of building capacity in the Library and Information Sciences curriculum. To prepare information professionals with the competencies needed for digital curation and data management practice, the project developed…
CARD 2017: expansion and model-centric curation of the Comprehensive Antibiotic Resistance Database
USDA-ARS?s Scientific Manuscript database
The Comprehensive Antibiotic Resistance Database (CARD; http://arpcard.mcmaster.ca) is a manually curated resource containing high quality reference data on the molecular basis of antimicrobial resistance (AMR), with an emphasis on the genes, proteins, and mutations involved in AMR. CARD is ontologi...
7 CFR 504.4 - Exemptions from user fee charges.
Code of Federal Regulations, 2012 CFR
2012-01-01
... OF AGRICULTURE USER FEES § 504.4 Exemptions from user fee charges. (a) USDA laboratories and ARS cooperators designated by the Curator of the ARS Patent Culture Collection are exempt from fee assessments. (b) The Curator of the ARS Patent Culture Collection is delegated the authority to approve and revoke...
7 CFR 504.4 - Exemptions from user fee charges.
Code of Federal Regulations, 2013 CFR
2013-01-01
... OF AGRICULTURE USER FEES § 504.4 Exemptions from user fee charges. (a) USDA laboratories and ARS cooperators designated by the Curator of the ARS Patent Culture Collection are exempt from fee assessments. (b) The Curator of the ARS Patent Culture Collection is delegated the authority to approve and revoke...
7 CFR 504.4 - Exemptions from user fee charges.
Code of Federal Regulations, 2010 CFR
2010-01-01
... OF AGRICULTURE USER FEES § 504.4 Exemptions from user fee charges. (a) USDA laboratories and ARS cooperators designated by the Curator of the ARS Patent Culture Collection are exempt from fee assessments. (b) The Curator of the ARS Patent Culture Collection is delegated the authority to approve and revoke...
7 CFR 504.4 - Exemptions from user fee charges.
Code of Federal Regulations, 2011 CFR
2011-01-01
... OF AGRICULTURE USER FEES § 504.4 Exemptions from user fee charges. (a) USDA laboratories and ARS cooperators designated by the Curator of the ARS Patent Culture Collection are exempt from fee assessments. (b) The Curator of the ARS Patent Culture Collection is delegated the authority to approve and revoke...
7 CFR 504.4 - Exemptions from user fee charges.
Code of Federal Regulations, 2014 CFR
2014-01-01
... OF AGRICULTURE USER FEES § 504.4 Exemptions from user fee charges. (a) USDA laboratories and ARS cooperators designated by the Curator of the ARS Patent Culture Collection are exempt from fee assessments. (b) The Curator of the ARS Patent Culture Collection is delegated the authority to approve and revoke...
The art of curation at a biological database: principles and application
USDA-ARS?s Scientific Manuscript database
The variety and quantity of data being produced by biological research has grown dramatically in recent years, resulting in an expansion of our understanding of biological systems. However, this abundance of data has brought new challenges, especially in curation. The role of biocurators is in part ...
Curating and Nudging in Virtual CLIL Environments
ERIC Educational Resources Information Center
Nielsen, Helle Lykke
2014-01-01
Foreign language teachers can benefit substantially from the notions of curation and nudging when scaffolding CLIL activities on the internet. This article shows how these principles can be integrated into CLILstore, a free multimedia-rich learning tool with seamless access to online dictionaries, and presents feedback from first and second year…
Kids as Curators: Virtual Art at the Seattle Museum.
ERIC Educational Resources Information Center
Scanlan, Laura Wolff
2000-01-01
Discusses the use of technology at the Seattle Art Museum (Washington). Includes a Web site that enables students in grades six through ten to act as curators and offers integrations of technology in the exhibition "Leonardo Lives: The Codex Leicester and Leonardo da Vinci's Legacy of Art and Science." (CMK)
Curation of US Martian Meteorites Collected in Antarctica
NASA Technical Reports Server (NTRS)
Lindstrom, M.; Satterwhite, C.; Allton, J.; Stansbury, E.
1998-01-01
To date, the ANSMET field team has collected five Martian meteorites (see below) in Antarctica and returned them for curation at the Johnson Space Center (JSC) Meteorite Processing Laboratory (MPL). The meteorites were collected with the clean procedures used by ANSMET in collecting all meteorites: they were handled with JSC-cleaned tools, packaged in clean bags, and shipped frozen to JSC. The five Martian meteorites vary significantly in size (12-7942 g) and rock type (basalts, lherzolites, and orthopyroxenite). Detailed descriptions are provided in the Mars Meteorite Compendium, which describes classification, curation and research results. A table gives the names, classifications and original and curatorial masses of the Martian meteorites. The MPL and measures for contamination control are described.
Curated protein information in the Saccharomyces genome database.
Hellerstedt, Sage T; Nash, Robert S; Weng, Shuai; Paskov, Kelley M; Wong, Edith D; Karra, Kalpana; Engel, Stacia R; Cherry, J Michael
2017-01-01
Due to recent advancements in the production of experimental proteomic data, the Saccharomyces genome database (SGD; www.yeastgenome.org) has been expanding its protein curation activities to make new data types available to its users. Because of broad interest in post-translational modifications (PTMs) and their importance to protein function and regulation, we have recently started incorporating expertly curated PTM information on individual protein pages. Here we also present the inclusion of new abundance and protein half-life data obtained from high-throughput proteome studies. These new data types have been included with the aim of facilitating cellular biology research. Database URL: www.yeastgenome.org. © The Author(s) 2017. Published by Oxford University Press.
Contracting for health and curative care use in Afghanistan between 2004 and 2005
Arur, Aneesa; Peters, David; Hansen, Peter; Mashkoor, Mohammad Ashraf; Steinhardt, Laura C.; Burnham, Gilbert
2010-01-01
Afghanistan has used several approaches to contracting as part of its national strategy to increase access to basic health services. This study compares changes in the utilization of outpatient curative services from 2004 to 2005 between the different approaches for contracting-out services to non-governmental service providers, contracting-in technical assistance at public sector facilities, and public sector facilities that did not use contracting. We find that both contracting-in and contracting-out approaches are associated with substantial double difference increases in service use from 2004 to 2005 compared with non-contracted facilities. The double difference increase in contracting-out facilities for outpatient visits is 29% (P < 0.01), while outpatient visits from female patients increased 41% (P < 0.01), use by the poorest quintile increased 68% (P < 0.01) and use by children aged under 5 years increased 27% (P < 0.05). Comparing the individual contracting-out approaches, we find similar increases in outpatient visits when contracts are managed directly by the Ministry of Public Health compared with when contracts are managed by an experienced international non-profit organization. Finally, contracting-in facilities show even larger increases in all the measures of utilization other than visits from children under 5. Although there are minor differences in the results between contracting-out approaches, these differences cannot be attributed to a specific contracting-out approach because of factors limiting the comparability of the groups. It is nonetheless clear that the government was able to manage contracts effectively despite early concerns about their lack of experience, and that contracting has helped to improve utilization of basic health services. PMID:19850664
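The 'double difference' estimates reported above follow the standard difference-in-differences logic: the change over time in contracted facilities is compared with the change over time in non-contracted facilities, so that common time trends cancel out. In symbols (notation ours, not the paper's):

DD = \left(\bar{Y}_{\mathrm{contracted},\,2005} - \bar{Y}_{\mathrm{contracted},\,2004}\right) - \left(\bar{Y}_{\mathrm{comparison},\,2005} - \bar{Y}_{\mathrm{comparison},\,2004}\right)

where \bar{Y} is average utilization in the indicated group and year. As a purely illustrative calculation, if visits rise by 40% in contracted facilities and by 11% in comparison facilities over the same period, the double difference is 29 percentage points.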
Bouadjenek, Mohamed Reda; Verspoor, Karin; Zobel, Justin
2017-07-01
We investigate and analyse the data quality of nucleotide sequence databases with the objective of automatic detection of data anomalies and suspicious records. Specifically, we demonstrate that the published literature associated with each data record can be used to automatically evaluate its quality, by cross-checking the consistency of the key content of the database record with the referenced publications. Focusing on GenBank, we describe a set of quality indicators based on the relevance paradigm of information retrieval (IR). Then, we use these quality indicators to train an anomaly detection algorithm to classify records as "confident" or "suspicious". Our experiments on the PubMed Central collection show that assessing the coherence between the literature and database records, through our algorithms, is an effective mechanism for assisting curators in performing data cleansing. Although fewer than 0.25% of the records in our data set are known to be faulty, we would expect that there are many more in GenBank that have not yet been identified. By automated comparison with the literature, they can be identified with a precision of up to 10% and a recall of up to 30%, while strongly outperforming several baselines. While these results leave substantial room for improvement, they reflect both the very imbalanced nature of the data and the limited explicitly labelled data that is available. Overall, the obtained results show promise for the development of a new kind of approach to detecting low-quality and suspicious sequence records based on literature analysis and consistency. From a practical point of view, this will greatly help curators in identifying inconsistent records in large-scale sequence databases by highlighting records that are likely to be inconsistent with the literature. Copyright © 2017 Elsevier Inc. All rights reserved.
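One simple way to act on indicators like those described above is to compute a few literature-consistency features per record and flag outliers. The sketch below uses scikit-learn's IsolationForest and is only illustrative of the idea, not the paper's algorithm; the feature names, values and record identifiers are made up.

# Illustrative anomaly-detection sketch (not the paper's method): flag sequence
# records whose literature-consistency indicators look unusual.
from sklearn.ensemble import IsolationForest

# One row per record: [share of record keywords found in the cited abstract,
#                      similarity of the definition line to the article title,
#                      organism name mentioned in the article (0/1)]
features = [
    [0.90, 0.85, 1],
    [0.80, 0.70, 1],
    [0.95, 0.90, 1],
    [0.10, 0.05, 0],   # record poorly supported by its cited publication
    [0.85, 0.75, 1],
]
record_ids = ["REC0001", "REC0002", "REC0003", "REC0004", "REC0005"]

model = IsolationForest(contamination=0.2, random_state=0).fit(features)
for rid, label in zip(record_ids, model.predict(features)):
    print(rid, "suspicious" if label == -1 else "confident")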
Use of New Treatment Modalities for Non-small Cell Lung Cancer Care in the Medicare Population
Vest, Michael T.; Herrin, Jeph; Soulos, Pamela R.; Decker, Roy H.; Tanoue, Lynn; Michaud, Gaetane; Kim, Anthony W.; Detterbeck, Frank; Morgensztern, Daniel
2013-01-01
Background: Many older patients with early stage non-small cell lung cancer (NSCLC) do not receive curative therapy. New surgical techniques and radiation therapy modalities, such as video-assisted thoracoscopic surgery (VATS), potentially allow more patients to receive treatment. The adoption of these techniques and their impact on access to cancer care among Medicare beneficiaries with stage I NSCLC are unknown. Methods: We used the Surveillance, Epidemiology and End Results-Medicare database to identify patients with stage I NSCLC diagnosed between 1998 and 2007. We assessed temporal trends and created hierarchical generalized linear models of the relationship between patient, clinical, and regional factors and type of treatment. Results: The sample comprised 13,458 patients with a mean age of 75.7 years. The proportion of patients not receiving any local treatment increased from 14.6% in 1998 to 18.3% in 2007. The overall use of surgical resection declined from 75.2% to 67.3% (P < .001), although the proportion of patients undergoing VATS increased from 11.3% to 32.0%. Similarly, although the use of new radiation modalities increased from 0% to 5.2%, the overall use of radiation remained stable. The oldest patients were less likely to receive surgical vs no treatment (OR, 0.12; 95% CI, 0.09-0.16) and more likely to receive radiation vs surgery (OR, 13.61; 95% CI, 9.75-19.0). Conclusion: From 1998 to 2007, the overall proportion of older patients with stage I NSCLC receiving curative local therapy decreased, despite the dissemination of newer, less-invasive forms of surgery and radiation. PMID:23187634
The Five Cs of Digital Curation: Supporting Twenty-First-Century Teaching and Learning
ERIC Educational Resources Information Center
Deschaine, Mark E.; Sharma, Sue Ann
2015-01-01
Digital curation is a process that allows university professors to adapt and adopt resources from multidisciplinary fields to meet the educational needs of twenty-first-century learners. Looking through the lens of new media literacy studies (Vasquez, Harste, & Albers, 2010) and new literacies studies (Gee, 2010), we propose that university…
BC4GO: a full-text corpus for the BioCreative IV GO Task
USDA-ARS?s Scientific Manuscript database
Gene function curation via Gene Ontology (GO) annotation is a common task among Model Organism Database (MOD) groups. Due to its manual nature, this task is time-consuming and labor-intensive, and thus considered one of the bottlenecks in literature curation. There have been many previous attempts a...
Participants' Perception of Therapeutic Factors in Groups for Incest Survivors.
ERIC Educational Resources Information Center
Wheeler, Inese; And Others
1992-01-01
Investigated member-perceived curative factors in an incest-survivor group, comparing therapeutic factors reported in closed, time-limited incest survivor group to those in Bonney et al.'s open, long-term survivor group and to Yalom's therapy groups. Findings suggest that relative importance of curative factors may be related to group stages.…
Edited Excerpts from a Smithsonian Seminar Series: Part I: The Arts.
ERIC Educational Resources Information Center
Zilczar, Judith K.; And Others
1991-01-01
In this first of three excerpts from seminars sponsored by the Smithsonian Institution on collaborative knowledge generation in the arts, the sciences, and the humanities, two art curators and a filmmaker discuss the meaning of collaboration in their fields. Topics discussed include twentieth-century artists and art curators, Chinese art, and…
Current Trends and Future Directions in Data Curation Research and Education
ERIC Educational Resources Information Center
Weber, Nicholas M.; Palmer, Carole L.; Chao, Tiffany C.
2012-01-01
Digital research data have introduced a new set of collection, preservation, and service demands into the tradition of digital librarianship. Consequently, the role of an information professional has evolved to include the activities of data curation. This new field more specifically addresses the needs of stewarding and preserving digital…
Social Media Selves: College Students' Curation of Self and Others through Facebook
ERIC Educational Resources Information Center
Kasch, David Michael
2013-01-01
This qualitative study used cyber-ethnography and grounded theory to explore the ways in which 35 undergraduate students crafted and refined self-presentations on the social network site Facebook. Findings included the identification of two unique forms of self-presentation that students enacted: a "curated self" and a "commodified…
Geospatial Data Curation at the University of Idaho
ERIC Educational Resources Information Center
Kenyon, Jeremy; Godfrey, Bruce; Eckwright, Gail Z.
2012-01-01
The management and curation of digital geospatial data has become a central concern for many academic libraries. Geospatial data is a complex type of data critical to many different disciplines, and its use has become more expansive in the past decade. The University of Idaho Library maintains a geospatial data repository called the Interactive…
A Study of Faculty Data Curation Behaviors and Attitudes at a Teaching-Centered University
ERIC Educational Resources Information Center
Scaramozzino, Jeanine Marie; Ramírez, Marisa L.; McGaughey, Karen J.
2012-01-01
Academic libraries need reliable information on researcher data needs, data curation practices, and attitudes to identify and craft appropriate services that support outreach and teaching. This paper describes information gathered from a survey distributed to the College of Science and Mathematics faculty at California Polytechnic State…
New Roles for New Times: Digital Curation for Preservation
ERIC Educational Resources Information Center
Walters, Tyler; Skinner, Katherine
2011-01-01
Digital curation refers to the actions people take to maintain and add value to digital information over its lifecycle, including the processes used when creating digital content. Digital preservation focuses on the "series of managed activities necessary to ensure continued access to digital materials for as long as necessary." In this…
USDA-ARS?s Scientific Manuscript database
The Maize Genetics and Genomics Database (MaizeGDB) team prepared a survey to identify breeders’ needs for visualizing pedigrees, diversity data, and haplotypes in order to prioritize tool development and curation efforts at MaizeGDB. The survey was distributed to the maize research community on beh...
Curating Media Learning: Towards a Porous Expertise
ERIC Educational Resources Information Center
McDougall, Julian; Potter, John
2015-01-01
This article combines research results from a range of projects with two consistent themes. Firstly, we explore the potential for curation to offer a productive metaphor for the convergence of digital media learning across and between home/lifeworld and formal educational/system-world spaces--or between the public and private spheres. Secondly, we…
Code of Federal Regulations, 2010 CFR
2010-10-01
§ 674.5, Requirements for collection, documentation, and curation of Antarctic meteorites (Public Welfare; National Science Foundation, Antarctic Meteorites). Provisions include sample thawing in a clean, dry, non-reactive gas environment, such as nitrogen or argon.
Code of Federal Regulations, 2012 CFR
2012-10-01
§ 674.5, Requirements for collection, documentation, and curation of Antarctic meteorites (Public Welfare; National Science Foundation, Antarctic Meteorites). Provisions include sample thawing in a clean, dry, non-reactive gas environment, such as nitrogen or argon.
Code of Federal Regulations, 2014 CFR
2014-10-01
§ 674.5, Requirements for collection, documentation, and curation of Antarctic meteorites (Public Welfare; National Science Foundation, Antarctic Meteorites). Provisions include sample thawing in a clean, dry, non-reactive gas environment, such as nitrogen or argon.
Code of Federal Regulations, 2013 CFR
2013-10-01
§ 674.5, Requirements for collection, documentation, and curation of Antarctic meteorites (Public Welfare; National Science Foundation, Antarctic Meteorites). Provisions include sample thawing in a clean, dry, non-reactive gas environment, such as nitrogen or argon.
Student-Curated Exhibits: A Vehicle towards Student Engagement, Retention, and Success
ERIC Educational Resources Information Center
Marsee, Mickey; Davies-Wilson, Dennis
2014-01-01
In looking for ways to combine course content literacy and information literacy with active learning, in 2007, the English Department and Library at The University of New Mexico-Los Alamos combined efforts and created a course project for students to curate exhibits that would demonstrate their understanding of course material through library…
A Relevancy Algorithm for Curating Earth Science Data Around Phenomenon
NASA Technical Reports Server (NTRS)
Maskey, Manil; Ramachandran, Rahul; Li, Xiang; Weigel, Amanda; Bugbee, Kaylin; Gatlin, Patrick; Miller, J. J.
2017-01-01
Earth science data are being collected for various science needs and applications, processed using different algorithms at multiple resolutions and coverages, and then archived at different archiving centers for distribution and stewardship, causing difficulty in data discovery. Curation, which typically occurs in museums, art galleries, and libraries, is traditionally defined as the process of collecting and organizing information around a common subject matter or a topic of interest. Curating data sets around topics or areas of interest addresses some of the data discovery needs in the field of Earth science, especially for unanticipated users of data. This paper describes a methodology to automate search and selection of data around specific phenomena. Different components of the methodology including the assumptions, the process, and the relevancy ranking algorithm are described. The paper makes two unique contributions to improving data search and discovery capabilities. First, the paper describes a novel methodology developed for automatically curating data around a topic using Earth science metadata records. Second, the methodology has been implemented as a standalone web service that is utilized to augment search and usability of data in a variety of tools.
A relevancy algorithm for curating earth science data around phenomenon
NASA Astrophysics Data System (ADS)
Maskey, Manil; Ramachandran, Rahul; Li, Xiang; Weigel, Amanda; Bugbee, Kaylin; Gatlin, Patrick; Miller, J. J.
2017-09-01
Earth science data are being collected for various science needs and applications, processed using different algorithms at multiple resolutions and coverages, and then archived at different archiving centers for distribution and stewardship causing difficulty in data discovery. Curation, which typically occurs in museums, art galleries, and libraries, is traditionally defined as the process of collecting and organizing information around a common subject matter or a topic of interest. Curating data sets around topics or areas of interest addresses some of the data discovery needs in the field of Earth science, especially for unanticipated users of data. This paper describes a methodology to automate search and selection of data around specific phenomena. Different components of the methodology including the assumptions, the process, and the relevancy ranking algorithm are described. The paper makes two unique contributions to improving data search and discovery capabilities. First, the paper describes a novel methodology developed for automatically curating data around a topic using Earth science metadata records. Second, the methodology has been implemented as a stand-alone web service that is utilized to augment search and usability of data in a variety of tools.
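Editorial note: a minimal sketch of the idea of relevancy ranking metadata records against a phenomenon, using simple keyword overlap rather than the authors' actual algorithm; the dataset names and keyword lists are hypothetical.

    # Toy relevancy ranking: score each metadata record by how many of the
    # phenomenon's keywords it matches, then sort. The real algorithm is richer.
    def relevancy(record_keywords, phenomenon_keywords):
        """Fraction of the phenomenon's keywords matched by a metadata record."""
        record = {k.lower() for k in record_keywords}
        target = {k.lower() for k in phenomenon_keywords}
        return len(record & target) / len(target)

    phenomenon = ["hurricane", "precipitation", "sea surface temperature", "wind"]
    metadata_records = {  # hypothetical dataset short names and keyword lists
        "GPM_3IMERGHH": ["precipitation", "rain rate"],
        "MODIS_SST": ["sea surface temperature", "ocean"],
        "ASTER_DEM": ["elevation", "topography"],
    }

    ranked = sorted(metadata_records.items(),
                    key=lambda kv: relevancy(kv[1], phenomenon), reverse=True)
    for name, keywords in ranked:
        print(f"{name}: {relevancy(keywords, phenomenon):.2f}")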
Mimatsu, Kenji; Fukino, Nobutada; Ogasawara, Yasuo; Saino, Yoko; Oida, Takatsugu
2017-08-01
The present study aimed to compare the utility of various inflammatory marker- and nutritional status-based prognostic factors, including many previously established prognostic factors, for predicting the prognosis of stage IV gastric cancer patients undergoing non-curative surgery. A total of 33 patients with stage IV gastric cancer who had undergone palliative gastrectomy and gastrojejunostomy were included in the study. Univariate and multivariate analyses were performed to evaluate the relationships between the mGPS, PNI, NLR, PLR, the CONUT, various clinicopathological factors and cancer-specific survival (CS). Among patients who received non-curative surgery, univariate analysis of CS identified the following significant risk factors: chemotherapy, mGPS and NLR, and multivariate analysis revealed that the mGPS was independently associated with CS. The mGPS was a more useful prognostic factor than the PNI, NLR, PLR and CONUT in patients undergoing non-curative surgery for stage IV gastric cancer. Copyright© 2017, International Institute of Anticancer Research (Dr. George J. Delinasios), All rights reserved.
Data Albums: An Event Driven Search, Aggregation and Curation Tool for Earth Science
NASA Technical Reports Server (NTRS)
Ramachandran, Rahul; Kulkarni, Ajinkya; Maskey, Manil; Bakare, Rohan; Basyal, Sabin; Li, Xiang; Flynn, Shannon
2014-01-01
Approaches used in Earth science research such as case study analysis and climatology studies involve discovering and gathering diverse data sets and information to support the research goals. To gather relevant data and information for case studies and climatology analysis is both tedious and time-consuming. Current Earth science data systems are designed with the assumption that researchers access data primarily by instrument or geophysical parameter. In cases where researchers are interested in studying a significant event, they have to manually assemble a variety of datasets relevant to it by searching the different distributed data systems. This paper presents a specialized search, aggregation and curation tool for Earth science to address these challenges. The search tool automatically creates curated 'Data Albums', aggregated collections of information related to a specific event, containing links to relevant data files [granules] from different instruments, tools and services for visualization and analysis, and information about the event contained in news reports, images or videos to supplement research analysis. Curation in the tool is driven via an ontology-based relevancy ranking algorithm to filter out non-relevant information and data.
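Editorial note: a minimal sketch of the ontology-driven filtering idea behind a "Data Album": map an event type to relevant parameters and keep only matching resources. The event types, parameters and resources are hypothetical, not the system's actual ontology.

    # Toy event ontology mapping event types to relevant geophysical parameters.
    EVENT_ONTOLOGY = {
        "hurricane": {"precipitation", "wind speed", "sea surface temperature"},
        "wildfire": {"aerosol optical depth", "land surface temperature"},
    }

    resources = [
        {"title": "GPM precipitation granules", "parameter": "precipitation"},
        {"title": "MODIS aerosol product", "parameter": "aerosol optical depth"},
        {"title": "News report on the storm", "parameter": None},  # kept as event context
    ]

    def build_album(event_type, resources):
        """Keep contextual items and any data resource whose parameter fits the event."""
        relevant = EVENT_ONTOLOGY[event_type]
        return [r for r in resources if r["parameter"] is None or r["parameter"] in relevant]

    for item in build_album("hurricane", resources):
        print(item["title"])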
The transprofessional model: blending intents in terminal care of AIDS.
Cherin, D A; Simmons, W J; Hillary, K
1998-01-01
Current terminal care services present dying patients and their families with a dichotomy in service delivery and the intent of care between curative treatments and palliative treatments. This arbitrary dichotomy reduces patients' quality of life in many cases and robs patients and families of benefiting from the psychosocial aspects of treatment until the last few weeks of life. This article presents a blended model of care, the Transprofessional Model, in which patients receive both curative and palliative services throughout their care process. The blended intent model differs from traditional home care in that services are provided by a care coordination team composed of nurses and social workers; the traditional model of care is often case managed by a single registered nurse. The combination of the multi-disciplinary approach to care coordination and training in both curative and palliative services in the Transprofessional Model demonstrates that this blended model of care produces a bio-psychosocial focus to terminal care as compared to a primary focus on curative services present in the traditional model of home care.
The immune epitope database (IEDB) 3.0
Vita, Randi; Overton, James A.; Greenbaum, Jason A.; Ponomarenko, Julia; Clark, Jason D.; Cantrell, Jason R.; Wheeler, Daniel K.; Gabbard, Joseph L.; Hix, Deborah; Sette, Alessandro; Peters, Bjoern
2015-01-01
The IEDB, www.iedb.org, contains information on immune epitopes—the molecular targets of adaptive immune responses—curated from the published literature and submitted by National Institutes of Health funded epitope discovery efforts. From 2004 to 2012 the IEDB curation of journal articles published since 1960 has caught up to the present day, with >95% of relevant published literature manually curated amounting to more than 15 000 journal articles and more than 704 000 experiments to date. The revised curation target since 2012 has been to make recent research findings quickly available in the IEDB and thereby ensure that it continues to be an up-to-date resource. Having gathered a comprehensive dataset in the IEDB, a complete redesign of the query and reporting interface has been performed in the IEDB 3.0 release to improve how end users can access this information in an intuitive and biologically accurate manner. We here present this most recent release of the IEDB and describe the user testing procedures as well as the use of external ontologies that have enabled it. PMID:25300482
Disease model curation improvements at Mouse Genome Informatics
Bello, Susan M.; Richardson, Joel E.; Davis, Allan P.; Wiegers, Thomas C.; Mattingly, Carolyn J.; Dolan, Mary E.; Smith, Cynthia L.; Blake, Judith A.; Eppig, Janan T.
2012-01-01
Optimal curation of human diseases requires an ontology or structured vocabulary that contains terms familiar to end users, is robust enough to support multiple levels of annotation granularity, is limited to disease terms and is stable enough to avoid extensive reannotation following updates. At Mouse Genome Informatics (MGI), we currently use disease terms from Online Mendelian Inheritance in Man (OMIM) to curate mouse models of human disease. While OMIM provides highly detailed disease records that are familiar to many in the medical community, it lacks structure to support multilevel annotation. To improve disease annotation at MGI, we evaluated the merged Medical Subject Headings (MeSH) and OMIM disease vocabulary created by the Comparative Toxicogenomics Database (CTD) project. Overlaying MeSH onto OMIM provides hierarchical access to broad disease terms, a feature missing from the OMIM. We created an extended version of the vocabulary to meet the genetic disease-specific curation needs at MGI. Here we describe our evaluation of the CTD application, the extensions made by MGI and discuss the strengths and weaknesses of this approach. Database URL: http://www.informatics.jax.org/ PMID:22434831
WormBase 2014: new views of curated biology
Harris, Todd W.; Baran, Joachim; Bieri, Tamberlyn; Cabunoc, Abigail; Chan, Juancarlos; Chen, Wen J.; Davis, Paul; Done, James; Grove, Christian; Howe, Kevin; Kishore, Ranjana; Lee, Raymond; Li, Yuling; Muller, Hans-Michael; Nakamura, Cecilia; Ozersky, Philip; Paulini, Michael; Raciti, Daniela; Schindelman, Gary; Tuli, Mary Ann; Auken, Kimberly Van; Wang, Daniel; Wang, Xiaodong; Williams, Gary; Wong, J. D.; Yook, Karen; Schedl, Tim; Hodgkin, Jonathan; Berriman, Matthew; Kersey, Paul; Spieth, John; Stein, Lincoln; Sternberg, Paul W.
2014-01-01
WormBase (http://www.wormbase.org/) is a highly curated resource dedicated to supporting research using the model organism Caenorhabditis elegans. With an electronic history predating the World Wide Web, WormBase contains information ranging from the sequence and phenotype of individual alleles to genome-wide studies generated using next-generation sequencing technologies. In recent years, we have expanded the contents to include data on additional nematodes of agricultural and medical significance, bringing the knowledge of C. elegans to bear on these systems and providing support for underserved research communities. Manual curation of the primary literature remains a central focus of the WormBase project, providing users with reliable, up-to-date and highly cross-linked information. In this update, we describe efforts to organize the original atomized and highly contextualized curated data into integrated syntheses of discrete biological topics. Next, we discuss our experiences coping with the vast increase in available genome sequences made possible through next-generation sequencing platforms. Finally, we describe some of the features and tools of the new WormBase Web site that help users better find and explore data of interest. PMID:24194605
The Internet of Scientific Research Things
NASA Astrophysics Data System (ADS)
Chandler, Cynthia; Shepherd, Adam; Arko, Robert; Leadbetter, Adam; Groman, Robert; Kinkade, Danie; Rauch, Shannon; Allison, Molly; Copley, Nancy; Gegg, Stephen; Wiebe, Peter; Glover, David
2016-04-01
The sum of the parts is greater than the whole, but for scientific research how do we identify the parts when they are curated at distributed locations? Results from environmental research represent an enormous investment and constitute essential knowledge required to understand our planet in this time of rapid change. The Biological and Chemical Oceanography Data Management Office (BCO-DMO) curates data from US NSF Ocean Sciences funded research awards, but BCO-DMO is only one repository in a landscape that includes many other sites that carefully curate results of scientific research. Recent efforts to use persistent identifiers (PIDs), most notably Open Researcher and Contributor ID (ORCiD) for person, Digital Object Identifier (DOI) for publications including data sets, and Open Funder Registry (FundRef) codes for research grants and awards are realizing success in unambiguously identifying the pieces that represent results of environmental research. This presentation uses BCO-DMO as a test case for adding PIDs to the locally-curated information published out as standards compliant metadata records. We present a summary of progress made thus far; what has worked and why, and thoughts on logical next steps.
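Editorial note: a minimal sketch of what embedding persistent identifiers (ORCID, dataset DOI, funder identifier) in a locally curated metadata record can look like; the field names and identifier values are illustrative, not BCO-DMO's actual schema.

    import json

    # Illustrative metadata record with PIDs; every value below is an example,
    # including the ORCID (a documented example ID), a DOI under the DataCite
    # test prefix 10.5072, and a funder identifier in Crossref Funder Registry DOI form.
    dataset_record = {
        "title": "CTD profiles from cruise XY1234",
        "creator": {
            "name": "Jane Researcher",
            "orcid": "https://orcid.org/0000-0002-1825-0097",
        },
        "dataset_doi": "https://doi.org/10.5072/example-doi",
        "funding": {
            "funder_id": "https://doi.org/10.13039/000000000",  # placeholder funder DOI
            "award_number": "OCE-0000000",
        },
    }

    print(json.dumps(dataset_record, indent=2))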
NASA Astrophysics Data System (ADS)
McDonald, R. H.; Kumar, P.; Plale, B. A.; Myers, J.; Hedstrom, M. L.
2012-12-01
Effective long-term curation and preservation of data for community use have historically been limited to high-value and homogeneous collections produced by mission-oriented organizations. The technologies and practices that have been applied in these cases, e.g. relational databases, development of comprehensive standardized vocabularies, and centralized support for reference data collections, are arguably applicable to the much broader range of data generated by the long tail of investigator-led research, with the logical conclusion of such an argument leading to the call for training, evangelism, and vastly increased funding as the best means of broadening community-scale data management. In this paper, we question this reasoning and explore how alternative approaches focused on the overall data lifecycle and the sociological and business realities of distributed multi-disciplinary research communities might dramatically lower costs, increase value, and consequently drive dramatic advances in our ability to use and re-use data, and ultimately enable more rapid scientific advance. Specifically, we introduce the concepts of active and social curation as a means to decrease coordination costs, align costs and values for individual data producers and data consumers, and improve the immediacy of returns for data curation investments. Further, we describe the specific architecture and services for active and social curation that are being prototyped within the Sustainable Environment - Actionable Data (SEAD) project within NSF's DataNet network and discuss how they are motivated by the long-tail dynamics in the cross-disciplinary sustainability research community.
Urban, Martin; Cuzick, Alayne; Rutherford, Kim; Irvine, Alistair; Pedro, Helder; Pant, Rashmi; Sadanadan, Vidyendra; Khamari, Lokanath; Billal, Santoshkumar; Mohanty, Sagar; Hammond-Kosack, Kim E
2017-01-04
The pathogen-host interactions database (PHI-base) is available at www.phi-base.org. PHI-base contains expertly curated molecular and biological information on genes proven to affect the outcome of pathogen-host interactions reported in peer-reviewed research articles. In addition, literature that indicates specific gene alterations that did not affect the disease interaction phenotype is curated to provide complete datasets for comparative purposes. Viruses are not included. Here we describe a revised PHI-base Version 4 data platform with improved search, filtering and extended data display functions. A PHIB-BLAST search function is provided and a link to PHI-Canto, a tool for authors to directly curate their own published data into PHI-base. The new release of PHI-base Version 4.2 (October 2016) has an increased data content containing information from 2219 manually curated references. The data provide information on 4460 genes from 264 pathogens tested on 176 hosts in 8046 interactions. Prokaryotic and eukaryotic pathogens are represented in almost equal numbers. Host species are ∼70% plants and 30% other species of medical and/or environmental importance. Additional data types included in PHI-base 4 are the direct targets of pathogen effector proteins in experimental and natural host organisms. The curation problems encountered and the future directions of the PHI-base project are briefly discussed. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
Lee, Ji Wan; Cho, Charles J; Kim, Do Hoon; Ahn, Ji Yong; Lee, Jeong Hoon; Choi, Kee Don; Song, Ho June; Park, Sook Ryun; Lee, Hyun Joo; Kim, Yong Hee; Lee, Gin Hyug; Jung, Hwoon-Yong; Kim, Sung-Bae; Kim, Jong Hoon; Park, Seung-Il
2018-06-01
To report the long-term survival and tumor recurrence outcomes in patients with superficial esophageal cancer (SEC) after complete non-curative endoscopic resection (ER). We retrieved ER data for 24 patients with non-curatively resected SEC. Non-curative resection was defined as the presence of submucosal and/or lymphovascular invasion on ER pathology. Relevant clinical and tumor-specific parameters were reviewed. The mean age of the 24 study patients was 66.3±8.3 years. Ten patients were closely followed up without treatment, while 14 received additional treatment. During a mean follow-up of 59.0±33.2 months, the 3- and 5-year survival rates of all cases were 90.7% and 77.6%, respectively. The 5-year overall survival rates were 72.9% in the close observation group and 82.1% in the additional treatment group (p=0.958). The 5-year cumulative incidences of all cases of recurrence (25.0% vs. 43.3%, p=0.388), primary EC recurrence (10.0% vs. 16.4%, p=0.558), and metachronous EC recurrence (16.7% vs. 26.7%, p=0.667) were similar between the two groups. Patients with non-curatively resected SEC showed good long-term survival outcomes. Given the similar oncologic outcomes, close observation may be an option with appropriate caution taken for patients who are medically unfit to receive additional therapy.
Collaborative biocuration--text-mining development task for document prioritization for curation.
Wiegers, Thomas C; Davis, Allan Peter; Mattingly, Carolyn J
2012-01-01
The Critical Assessment of Information Extraction systems in Biology (BioCreAtIvE) challenge evaluation is a community-wide effort for evaluating text mining and information extraction systems for the biological domain. The 'BioCreative Workshop 2012' subcommittee identified three areas, or tracks, that comprised independent, but complementary aspects of data curation in which they sought community input: literature triage (Track I); curation workflow (Track II) and text mining/natural language processing (NLP) systems (Track III). Track I participants were invited to develop tools or systems that would effectively triage and prioritize articles for curation and present results in a prototype web interface. Training and test datasets were derived from the Comparative Toxicogenomics Database (CTD; http://ctdbase.org) and consisted of manuscripts from which chemical-gene-disease data were manually curated. A total of seven groups participated in Track I. For the triage component, the effectiveness of participant systems was measured by aggregate gene, disease and chemical 'named-entity recognition' (NER) across articles; the effectiveness of 'information retrieval' (IR) was also measured based on 'mean average precision' (MAP). Top recall scores for gene, disease and chemical NER were 49, 65 and 82%, respectively; the top MAP score was 80%. Each participating group also developed a prototype web interface; these interfaces were evaluated based on functionality and ease-of-use by CTD's biocuration project manager. In this article, we present a detailed description of the challenge and a summary of the results.
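Editorial note: the "mean average precision" (MAP) metric used above for the information retrieval component can be computed as follows; the rankings and relevance labels are toy examples, not BioCreative data.

    def average_precision(ranked_docs, relevant):
        """Average of precision values at each rank where a relevant document appears."""
        hits, score = 0, 0.0
        for i, doc in enumerate(ranked_docs, start=1):
            if doc in relevant:
                hits += 1
                score += hits / i
        return score / len(relevant) if relevant else 0.0

    def mean_average_precision(runs):
        return sum(average_precision(r, rel) for r, rel in runs) / len(runs)

    runs = [  # (system ranking, set of truly curation-relevant documents) -- toy data
        (["d1", "d3", "d2", "d5"], {"d1", "d2"}),
        (["d4", "d1", "d6"], {"d6"}),
    ]
    print(f"MAP = {mean_average_precision(runs):.3f}")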
Fang, Yong; Xiao, Heping; Hu, Haili
2018-01-01
Background This study aimed to compare the efficacy of closed-chest drainage with rib resection closed drainage for chronic tuberculous empyema. Methods This retrospective study reviewed 86 patients with tuberculous empyema treated in Shanghai Pulmonary Hospital from August 2010 to November 2015. Among the included patients, 22 received closed-chest drainage and 64 received rib resection closed drainage. Results After intercostal closed-chest drain treatment, 2 (9.09%) patients recovered, 13 (59.09%) had a significant curative effect, 6 (27.27%) had a partial curative effect, and 1 (4.55%) had a negative effect. After rib resection closed drainage, 9 (14.06%) patients recovered, 31 (48.44%) had a significant curative effect, 19 (29.69%) had a partial curative effect, and 5 (7.81%) had a negative effect. There was no significant difference in curative effect (P>0.05), while the average catheterization time with rib resection closed drainage (130.05±13.12 days) was significantly longer than that with the intercostal closed-chest drain (126.14±36.84 days) (P<0.05). Conclusions This study demonstrated that closed-chest drainage was an effective procedure for treating empyema in young patients. It was less invasive than rib resection closed drainage and was associated with less severe pain. We advocate closed-chest drainage for the majority of young patients with empyema, except for those with other diseases. PMID:29600066
Does preventive health care have a chance in the changing health sector in Tanzania?
Msuya, J M; Nyaruhucha, C N M; Kaswahili, J
2003-03-01
To investigate the status and practice of preventive health care (relative to curative) in the health delivery system at the time when the health sector reforms are taking place. A cross-sectional, descriptive study. The study was conducted in Morogoro District between January and May 1999. Eighty-six medical personnel and two hospital administrators from thirty-four health facilities. The health facilities included twenty-five dispensaries, five health centres and four hospitals. Care was also taken to include health facilities owned by various institutions and organisations, including governmental and non-governmental. Generally, preventive health received little attention compared to curative health measures, with more than 80% of the medical personnel in some of the facilities assigned to curative services. Health personnel reported spending an average of up to six hours per day providing curative services such as chemotherapy, surgical treatment, psychotherapy and radiography. In contrast, they spent about four hours or less on providing child immunisation and education on nutrition, health and family planning. As expected, the type of ownership of a health facility influenced the extent to which preventive measures were included. For example, while all the government-owned facilities did provide child immunisation, nutrition education and family planning services, some non-governmental facilities were lacking such services. It is obvious that while the provision of curative health care can be left to the hands of private suppliers, that of preventive health care needs strong government involvement. It is suggested that deliberate efforts be taken to shift resources from curative to preventive measures. One way in which such a strategy can be attained is for the government to set, as a condition for private operators, a minimum level of preventive measures to be provided by every operator before a permit is issued. However, caution should be taken to ensure that such deliberations do not discourage investors in the health sector.
Argo: an integrative, interactive, text mining-based workbench supporting curation
Rak, Rafal; Rowley, Andrew; Black, William; Ananiadou, Sophia
2012-01-01
Curation of biomedical literature is often supported by the automatic analysis of textual content that generally involves a sequence of individual processing components. Text mining (TM) has been used to enhance the process of manual biocuration, but has been focused on specific databases and tasks rather than an environment integrating TM tools into the curation pipeline, catering for a variety of tasks, types of information and applications. Processing components usually come from different sources and often lack interoperability. The well established Unstructured Information Management Architecture is a framework that addresses interoperability by defining common data structures and interfaces. However, most of the efforts are targeted towards software developers and are not suitable for curators, or are otherwise inconvenient to use on a higher level of abstraction. To overcome these issues we introduce Argo, an interoperable, integrative, interactive and collaborative system for text analysis with a convenient graphic user interface to ease the development of processing workflows and boost productivity in labour-intensive manual curation. Robust, scalable text analytics follow a modular approach, adopting component modules for distinct levels of text analysis. The user interface is available entirely through a web browser that saves the user from going through often complicated and platform-dependent installation procedures. Argo comes with a predefined set of processing components commonly used in text analysis, while giving the users the ability to deposit their own components. The system accommodates various areas and levels of user expertise, from TM and computational linguistics to ontology-based curation. One of the key functionalities of Argo is its ability to seamlessly incorporate user-interactive components, such as manual annotation editors, into otherwise completely automatic pipelines. As a use case, we demonstrate the functionality of an in-built manual annotation editor that is well suited for in-text corpus annotation tasks. Database URL: http://www.nactem.ac.uk/Argo PMID:22434844
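Editorial note: a minimal sketch of the modular-pipeline idea behind workbenches such as Argo, in which independent processing components share a common document structure and are chained into a workflow; the components and lexicon below are deliberately toy, not Argo's actual components.

    # Each component reads and enriches a shared document dict, then hands it on.
    def sentence_splitter(doc):
        doc["sentences"] = [s.strip() for s in doc["text"].split(".") if s.strip()]
        return doc

    def gene_tagger(doc, lexicon=("BRCA1", "TP53")):
        doc["annotations"] = [g for s in doc["sentences"] for g in lexicon if g in s]
        return doc

    def run_workflow(doc, components):
        for component in components:
            doc = component(doc)
        return doc

    doc = {"text": "BRCA1 is mutated in many tumours. TP53 regulates the cell cycle."}
    print(run_workflow(doc, [sentence_splitter, gene_tagger])["annotations"])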
Ravikumar, Komandur Elayavilli; Wagholikar, Kavishwar B; Li, Dingcheng; Kocher, Jean-Pierre; Liu, Hongfang
2015-06-06
Advances in next-generation sequencing technology have accelerated the pace of individualized medicine (IM), which aims to incorporate genetic/genomic information into medicine. One immediate need in interpreting sequencing data is the assembly of information about genetic variants and their corresponding associations with other entities (e.g., diseases or medications). Even with dedicated effort to capture such information in biological databases, much of this information remains 'locked' in the unstructured text of biomedical publications. There is a substantial lag between the publication and the subsequent abstraction of such information into databases. Multiple text mining systems have been developed, but most of them focus on sentence-level association extraction with performance evaluation based on gold standard text annotations specifically prepared for text mining systems. We developed and evaluated a text mining system, MutD, which extracts protein mutation-disease associations from MEDLINE abstracts by incorporating discourse-level analysis, using a benchmark data set extracted from curated database records. MutD achieves an F-measure of 64.3% for reconstructing protein mutation disease associations in curated database records. The discourse-level analysis component of MutD contributed to a gain of more than 10% in F-measure when compared against sentence-level association extraction. Our error analysis indicates that 23 of the 64 precision errors are true associations that were not captured by database curators and 68 of the 113 recall errors are caused by the absence of associated disease entities in the abstract. After adjusting for the defects in the curated database, the revised F-measure of MutD in association detection reaches 81.5%. Our quantitative analysis reveals that MutD can effectively extract protein mutation disease associations when benchmarking based on curated database records. The analysis also demonstrates that incorporating discourse-level analysis significantly improved the performance of extracting the protein-mutation-disease association. Future work includes the extension of MutD for full-text articles.
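Editorial note: benchmarking extracted associations against curated database records, as described above, reduces to set comparison with precision, recall and F-measure; the association tuples below are invented for illustration.

    def prf(extracted, curated):
        """Precision, recall and F-measure of an extracted set against a curated gold set."""
        tp = len(extracted & curated)
        precision = tp / len(extracted) if extracted else 0.0
        recall = tp / len(curated) if curated else 0.0
        f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        return precision, recall, f

    extracted = {("V600E", "melanoma"), ("G12D", "pancreatic cancer"), ("R175H", "asthma")}
    curated = {("V600E", "melanoma"), ("G12D", "pancreatic cancer"), ("C282Y", "hemochromatosis")}

    p, r, f = prf(extracted, curated)
    print(f"precision={p:.2f} recall={r:.2f} F={f:.2f}")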
Integrating text mining into the MGI biocuration workflow
Dowell, K.G.; McAndrews-Hill, M.S.; Hill, D.P.; Drabkin, H.J.; Blake, J.A.
2009-01-01
A major challenge for functional and comparative genomics resource development is the extraction of data from the biomedical literature. Although text mining for biological data is an active research field, few applications have been integrated into production literature curation systems such as those of the model organism databases (MODs). Not only are most available biological natural language (bioNLP) and information retrieval and extraction solutions difficult to adapt to existing MOD curation workflows, but many also have high error rates or are unable to process documents available in those formats preferred by scientific journals. In September 2008, Mouse Genome Informatics (MGI) at The Jackson Laboratory initiated a search for dictionary-based text mining tools that we could integrate into our biocuration workflow. MGI has rigorous document triage and annotation procedures designed to identify appropriate articles about mouse genetics and genome biology. We currently screen ∼1000 journal articles a month for Gene Ontology terms, gene mapping, gene expression, phenotype data and other key biological information. Although we do not foresee that curation tasks will ever be fully automated, we are eager to implement named entity recognition (NER) tools for gene tagging that can help streamline our curation workflow and simplify gene indexing tasks within the MGI system. Gene indexing is an MGI-specific curation function that involves identifying which mouse genes are being studied in an article, then associating the appropriate gene symbols with the article reference number in the MGI database. Here, we discuss our search process, performance metrics and success criteria, and how we identified a short list of potential text mining tools for further evaluation. We provide an overview of our pilot projects with NCBO's Open Biomedical Annotator and Fraunhofer SCAI's ProMiner. In doing so, we prove the potential for the further incorporation of semi-automated processes into the curation of the biomedical literature. PMID:20157492
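Editorial note: a minimal sketch of dictionary-based gene tagging for the gene indexing task described above: match gene symbols and synonyms from a lexicon against article text and return the canonical symbols to associate with the article; the lexicon and abstract are toy data, not MGI's nomenclature files.

    import re

    # Toy lexicon mapping canonical mouse gene symbols to synonyms.
    GENE_LEXICON = {"Pax6": ["Pax6", "Sey"], "Trp53": ["Trp53", "p53"]}

    def index_genes(text):
        """Return canonical symbols whose synonyms occur as whole words in the text."""
        found = set()
        for symbol, synonyms in GENE_LEXICON.items():
            for name in synonyms:
                if re.search(rf"\b{re.escape(name)}\b", text):
                    found.add(symbol)
                    break
        return sorted(found)

    abstract = "Mice lacking Sey show eye defects, and p53 loss accelerates tumourigenesis."
    print(index_genes(abstract))  # ['Pax6', 'Trp53']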
Shindoh, J; Tzeng, C-W D; Aloia, T A; Curley, S A; Zimmitti, G; Wei, S H; Huang, S Y; Gupta, S; Wallace, M J; Vauthey, J-N
2013-12-01
Most patients requiring an extended right hepatectomy (ERH) have an inadequate standardized future liver remnant (sFLR) and need preoperative portal vein embolization (PVE). However, the clinical and oncological impact of PVE in such patients remains unclear. All consecutive patients presenting at the M. D. Anderson Cancer Center with colorectal liver metastases (CLM) requiring ERH at presentation from 1995 to 2012 were studied. Surgical and oncological outcomes were compared between patients with adequate and inadequate sFLRs at presentation. Of the 265 patients requiring ERH, 126 (47·5 per cent) had an adequate sFLR at presentation, of whom 123 underwent a curative resection. Of the 139 patients (52·5 per cent) who had an inadequate sFLR and underwent PVE, 87 (62·6 per cent) had a curative resection. Thus, the curative resection rate was increased from 46·4 per cent (123 of 265) at baseline to 79·2 per cent (210 of 265) following PVE. Among patients who underwent ERH, major complication and 90-day mortality rates were similar in the no-PVE and PVE groups (22·0 and 4·1 per cent versus 31 and 7 per cent respectively); overall and disease-free survival rates were also similar in these two groups. Of patients with an inadequate sFLR at presentation, those who underwent ERH had a significantly better median overall survival (50·2 months) than patients who had non-curative surgery (21·3 months) or did not undergo surgery (24·7 months) (P = 0·002). PVE enabled curative resection in two-thirds of patients with CLM who had an inadequate sFLR and were unable to tolerate ERH at presentation. Patients who underwent curative resection after PVE had overall and disease-free survival rates equivalent to those of patients who did not need PVE. © 2013 British Journal of Surgery Society Ltd. Published by John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Radhakrishnan, A.; Balaji, V.; Schweitzer, R.; Nikonov, S.; O'Brien, K.; Vahlenkamp, H.; Burger, E. F.
2016-12-01
There are distinct phases in the development cycle of an Earth system model. During the model development phase, scientists make changes to code and parameters and require rapid access to results for evaluation. During the production phase, scientists may make an ensemble of runs with different settings, and produce large quantities of output that must be further analyzed and quality controlled for scientific papers and submission to international projects such as the Climate Model Intercomparison Project (CMIP). During this phase, provenance is a key concern: being able to track back from outputs to inputs. We will discuss one of the paths taken at GFDL in delivering tools across this lifecycle, offering on-demand analysis of data by integrating the use of GFDL's in-house FRE-Curator, Unidata's THREDDS and NOAA PMEL's Live Access Servers (LAS). Experience over this lifecycle suggests that a major difficulty in developing analysis capabilities lies only partially in the scientific content; much of the effort is devoted to answering the questions "where is the data?" and "how do I get to it?". "FRE-Curator" is the name of a database-centric paradigm used at NOAA GFDL to ingest information about the model runs into an RDBMS (Curator database). The components of FRE-Curator are integrated into the Flexible Runtime Environment workflow and can be invoked during climate model simulation. The front end to FRE-Curator, known as the Model Development Database Interface (MDBI), provides in-house web-based access to GFDL experiments: metadata, analysis output and more. In order to provide on-demand visualization, MDBI uses the Live Access Server, a highly configurable web server designed to provide flexible access to geo-referenced scientific data that makes use of OPeNDAP. Model output saved in GFDL's tape archive, the size of the database and experiments, and continuous model development initiatives with more dynamic configurations all add complexity and challenges in providing an on-demand visualization experience to our GFDL users.
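Editorial note: a minimal sketch of the database-centric idea, ingesting model-run metadata into a relational database so that later tools can answer "where is the data?"; the schema and fields are illustrative, not GFDL's actual Curator schema.

    import sqlite3

    # Toy in-memory experiment catalog; a production system would use a full RDBMS.
    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE experiments (
        name TEXT, model TEXT, variable TEXT, start_year INTEGER,
        end_year INTEGER, archive_path TEXT)""")

    conn.execute("INSERT INTO experiments VALUES (?, ?, ?, ?, ?, ?)",
                 ("control_run_01", "ESM-toy", "tas", 1850, 2100,
                  "/archive/model/control_run_01/tas.nc"))

    # Answer "where is the data?" for a given variable.
    for row in conn.execute(
            "SELECT name, archive_path FROM experiments WHERE variable = 'tas'"):
        print(row)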
Bravo, Àlex; Piñero, Janet; Queralt-Rosinach, Núria; Rautschka, Michael; Furlong, Laura I
2015-02-21
Current biomedical research needs to leverage and exploit the large amount of information reported in scientific publications. Automated text mining approaches, in particular those aimed at finding relationships between entities, are key for identification of actionable knowledge from free text repositories. We present the BeFree system aimed at identifying relationships between biomedical entities with a special focus on genes and their associated diseases. By exploiting morpho-syntactic information of the text, BeFree is able to identify gene-disease, drug-disease and drug-target associations with state-of-the-art performance. The application of BeFree to real-case scenarios shows its effectiveness in extracting information relevant for translational research. We show the value of the gene-disease associations extracted by BeFree through a number of analyses and integration with other data sources. BeFree succeeds in identifying genes associated to a major cause of morbidity worldwide, depression, which are not present in other public resources. Moreover, large-scale extraction and analysis of gene-disease associations, and integration with current biomedical knowledge, provided interesting insights on the kind of information that can be found in the literature, and raised challenges regarding data prioritization and curation. We found that only a small proportion of the gene-disease associations discovered by using BeFree is collected in expert-curated databases. Thus, there is a pressing need to find alternative strategies to manual curation, in order to review, prioritize and curate text-mining data and incorporate it into domain-specific databases. We present our strategy for data prioritization and discuss its implications for supporting biomedical research and applications. BeFree is a novel text mining system that performs competitively for the identification of gene-disease, drug-disease and drug-target associations. Our analyses show that mining only a small fraction of MEDLINE results in a large dataset of gene-disease associations, and only a small proportion of this dataset is actually recorded in curated resources (2%), raising several issues on data prioritization and curation. We propose that joint analysis of text mined data with data curated by experts appears as a suitable approach to both assess data quality and highlight novel and interesting information.
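Editorial note: the comparison described above, of text-mined gene-disease associations against an expert-curated resource, amounts to a set overlap; a minimal sketch with invented association sets:

    # Toy sets of (gene, disease) pairs; real runs involve hundreds of thousands of pairs.
    text_mined = {("SLC6A4", "depression"), ("BDNF", "depression"),
                  ("APOE", "alzheimer disease"), ("FTO", "obesity")}
    curated = {("APOE", "alzheimer disease")}

    overlap = text_mined & curated
    print(f"{len(overlap)} of {len(text_mined)} mined associations "
          f"({len(overlap) / len(text_mined):.0%}) are in the curated resource")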
Advanced Curation: Solving Current and Future Sample Return Problems
NASA Technical Reports Server (NTRS)
Fries, M.; Calaway, M.; Evans, C.; McCubbin, F.
2015-01-01
Advanced Curation is a wide-ranging and comprehensive research and development effort at NASA Johnson Space Center that identifies and remediates sample-related issues. For current collections, Advanced Curation investigates new cleaning, verification, and analytical techniques to assess their suitability for improving curation processes. Specific needs are also assessed for future sample return missions. For each need, a written plan is drawn up to achieve the requirement. The plan draws upon current curation practices, input from curators, the analytical expertise of the Astromaterials Research and Exploration Science (ARES) team, and suitable standards maintained by ISO, IEST, NIST and other institutions. Additionally, new technologies are adopted on the basis of need and availability. Implementation plans are tested using customized trial programs with statistically robust courses of measurement, and are iterated if necessary until an implementable protocol is established. Upcoming and potential NASA missions such as OSIRIS-REx, the Asteroid Retrieval Mission (ARM), sample return missions in the New Frontiers program, and Mars sample return (MSR) all feature new difficulties and specialized sample handling requirements. The Mars 2020 mission in particular poses a suite of challenges since the mission will cache martian samples for possible return to Earth. In anticipation of future MSR, the following problems are among those under investigation: What is the most efficient means to achieve the less than 1.0 ng/sq cm total organic carbon (TOC) cleanliness required for all sample handling hardware? How do we maintain and verify cleanliness at this level? The Mars 2020 Organic Contamination Panel (OCP) predicts that organic carbon, if present, will be present at the "one to tens" of ppb level in martian near-surface samples. The same samples will likely contain wt% perchlorate salts, or approximately 1,000,000x as much perchlorate oxidizer as organic carbon. The chemical kinetics of this reaction are poorly understood at present under the conditions of cached or curated martian samples. Among other parameters, what is the maximum temperature allowed during storage in order to preserve native martian organic compounds for analysis? What is the best means to collect headspace gases from cached martian (and other) samples? This gas will contain not only martian atmosphere but also off-gassed volatiles from the cached solids.
Pieces of Other Worlds - Extraterrestrial Samples for Education and Public Outreach
NASA Technical Reports Server (NTRS)
Allen, Carlton C.
2010-01-01
During the Year of the Solar System, spacecraft from NASA and our international partners will encounter two comets, orbit the asteroid Vesta, continue to explore Mars with rovers, and launch robotic explorers to the Moon and Mars. We have pieces of all these worlds in our laboratories, and their continued study provides incredibly valuable "ground truth" to complement space exploration missions. Extensive information about these unique materials, as well as actual lunar samples and meteorites, is available for display and education. The Johnson Space Center (JSC) has the unique responsibility to curate NASA's extraterrestrial samples from past and future missions. Curation includes documentation, preservation, preparation, and distribution of samples for research, education, and public outreach. At the current time JSC curates six types of extraterrestrial samples: (1) Moon rocks and soils collected by the Apollo astronauts (2) Meteorites collected on US expeditions to Antarctica (including rocks from the Moon, Mars, and many asteroids including Vesta) (3) "Cosmic dust" (asteroid and comet particles) collected by high-altitude aircraft (4) Solar wind atoms collected by the Genesis spacecraft (5) Comet particles collected by the Stardust spacecraft (6) Interstellar dust particles collected by the Stardust spacecraft These rocks, soils, dust particles, and atoms continue to be studied intensively by scientists around the world. Descriptions of the samples, research results, thousands of photographs, and information on how to request research samples are on the JSC Curation website: http://curator.jsc.nasa.gov/ NASA provides a limited number of Moon rock samples for either short-term or long-term displays at museums, planetariums, expositions, and professional events that are open to the public. The JSC Public Affairs Office handles requests for such display samples. Requestors should apply in writing to Mr. Louis Parker, JSC Exhibits Manager. Mr. Parker will advise successful applicants regarding provisions for receipt, display, and return of the samples. All loans will be preceded by a signed loan agreement executed between NASA and the requestor's organization. Email address: louis.a.parker@nasa.gov Sets of twelve thin sections of Apollo lunar samples and sets of twelve thin sections of meteorites are available for short-term loan from JSC Curation. The thin sections are designed for use in college and university courses where petrographic microscopes are available for viewing. Requestors should contact Ms. Mary Luckey, Education Sample Curator. Email address: mary.k.luckey@nasa.gov
Advanced Curation Protocols for Mars Returned Sample Handling
NASA Astrophysics Data System (ADS)
Bell, M.; Mickelson, E.; Lindstrom, D.; Allton, J.
Introduction: Johnson Space Center has over 30 years of experience handling precious samples, including lunar rocks and Antarctic meteorites. However, we recognize that future curation of samples from such missions as Genesis, Stardust, and Mars Sample Return will require a high degree of biosafety combined with extremely low levels of inorganic, organic, and biological contamination. To satisfy these requirements, research in the JSC Advanced Curation Lab is currently focused on two major areas: preliminary examination techniques, and cleaning and verification techniques. Preliminary Examination Techniques: In order to minimize the number of paths for contamination, we are exploring the synergy between human and robotic sample handling in a controlled environment to help determine the limits of clean curation. Within the Advanced Curation Laboratory is a prototype, next-generation glovebox, which contains a robotic micromanipulator. The remotely operated manipulator has six degrees of freedom and can be programmed to perform repetitive sample handling tasks. Protocols are being tested and developed to perform curation tasks such as rock splitting, weighing, imaging, and storing. Techniques for sample transfer enabling more detailed remote examination without compromising the integrity of sample science are also being developed. The glovebox is equipped with a rapid transfer port through which samples can be passed without exposure. The transfer is accomplished by using a unique seal and engagement system which allows passage between containers while maintaining a first seal to the outside environment and a second seal to prevent the outside of the container cover and port door from becoming contaminated by the material being transferred. Cleaning and Verification Techniques: As part of the contamination control effort, innovative cleaning techniques are being identified and evaluated in conjunction with sensitive cleanliness verification methods. Towards this end, cleaning techniques such as ultrasonication in ultra-pure water (UPW), oxygen (O2) plasma, and carbon dioxide (CO2) "snow" are being used to clean a variety of different contaminants on a variety of different surfaces. Additionally, once cleaned, techniques to directly verify surface cleanliness are being developed. These include X-ray photoelectron spectroscopy (XPS) quantification and screening with contact angle measurements, which can be correlated with XPS standards. Methods developed in the Advanced Curation Laboratory will determine the extent to which inorganic and biological contamination can be controlled and minimized.
Zhu, Xue-liang; Tan, Zhan-na; Li, Bo-ying; Wang, Jian-ling; Shi, Jing; Sun, Yan-hui; Li, Xiao-feng; Xu, Jing; Zhang, Xuan-ping; Zhang, Xin; Du, Yu-zhu; Jia, Chun-shieng
2014-09-01
To explore the specific efficacy of different moxibustion techniques in the treatment of common diseases and clinical indications, and to compare the specificity of clinical indications and efficacy among different moxibustion techniques so as to better guide clinical practice. Modern computerization and data mining technology were adopted to set up a moxibustion literature database. The relevant literature on moxibustion techniques from the past 60 years was collected, screened, examined, extracted and analyzed statistically to explore the advantages of different moxibustion techniques in clinical treatment. (1) Among 2,516 articles, the moxa stick, moxa cone and moxa device were used most frequently in the internal medicine department, 730, 278 and 102 times respectively. The warm needling technique was used most frequently, 70 times, in the surgical department. (2) In the dermatology department, the curative rate with the moxa cone was the highest, 75%. In the ear-nose-throat department, the warm needling technique and the moxa device achieved the highest curative rate, 49% for both. In the internal medicine and surgical departments, the curative rate of the warm needling technique was 53% and 58% respectively. In the gynecology department, the curative rate of the moxa device was the highest, 59%. In the pediatrics department, the curative rate of the moxa cone was the highest, 80%. (3) The numbers of priority disorders (frequency ≥20 times) were 24 kinds of disease for the moxa stick, 5 for the moxa cone, 2 for the warm needling technique and 1 for the moxa device. Facial paralysis, diarrhea, lumbar and leg pain, and elbow and knee swelling pain were of the highest priority, treated with all 4 moxibustion techniques, and of particular literature research value. (4) The warm needling technique achieved better efficacy and a higher curative rate for elbow and knee swelling pain, lumbar and leg pain and diarrhea compared with the other three techniques. The moxa device achieved a higher curative rate for facial paralysis compared with the other three techniques. Through comparison of the application frequency, curative rate, clinical application frequency in disorders and efficacy for priority disorders among the different moxibustion techniques, it is found that the moxa stick, moxa cone and moxa device are simple in manipulation, safe and effective. Hence, they can be widely used in the treatment of common disorders in every clinical department. The warm needling technique acts on the body through the combined stimulation of needling and the warmth of mugwort. Compared with the other three techniques it achieves a particular effect on disorders with complicated etiologies, and it can be chosen preferentially for disorders caused by blockage of the meridians and collaterals and stagnation of qi and blood.
Managing Quality, Identity and Adversaries in Public Discourse with Machine Learning
ERIC Educational Resources Information Center
Brennan, Michael
2012-01-01
Automation can mitigate issues when scaling and managing quality and identity in public discourse on the web. Discourse needs to be curated and filtered. Anonymous speech has to be supported while handling adversaries. Reliance on human curators or analysts does not scale and content can be missed. These scaling and management issues include the…
Marine Mammal Program: Division of Mammals: Department of Vertebrate
The first person who could be termed a curator of marine mammals was Frederick W. True. He had previously Berlin Fisheries Exposition of 1880. True was hired as librarian and acting curator of mammals in the United States National Museum in 1881. True was very active in both exhibits and research, largely
ERIC Educational Resources Information Center
Daugherty, Lindsay; Dossani, Rafiq; Johnson, Erin-Elizabeth; Wright, Cameron
2014-01-01
Providers of early childhood education (ECE) are well positioned to help ensure that technology is used effectively in ECE settings. Indeed, the successful integration of technology into ECE depends on providers who have the ability to curate the most appropriate devices and content, "facilitate" effective patterns of use, guide families…
Smart Mobility Stakeholders - Curating Urban Data & Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sperling, Joshua
This presentation provides an overview of the curation of urban data and models through engaging SMART mobility stakeholders. SMART Mobility Urban Science Efforts are helping to expose key data sets, models, and roles for the U.S. Department of Energy in engaging across stakeholders to ensure useful insights. This will help to support other Urban Science and broader SMART initiatives.
ERIC Educational Resources Information Center
Francis, Leslie J.; Smith, Greg
2016-01-01
Psychological type theory suggests that introverts and extraverts may approach Christian ministry somewhat differently. This theory was tested within the context of a residential workshop attended by 15 curates, 12 of whom were accompanied by their training incumbents. Twelve themes were identified within responses to the question, "What does…
The 12th Curative Factor: Love as an Agent of Healing in Group Psychotherapy.
ERIC Educational Resources Information Center
Bemak, Fred; Epp, Lawrence R.
1996-01-01
Proposes love as a curative factor in group psychotherapy. Transference within a group may originate with needs and desires for love. By unmasking transference, subsequent healing may arise from a process of mourning in which group members recognize how their projection of past love onto other group members and onto the psychotherapist is…
Husereau, Don
2015-01-01
Future perceptions of the value of curative therapies will likely reflect debates happening today about preferences for funding of preventive versus treatment programs, as well as funding of orphan drugs. Little is known about how society will value curative therapies versus standard treatments, and the significant role of a host of psychological factors compared to overarching concerns about opportunity costs will likely lead to significant tension between payers and the public. More research will be required to clarify societal preferences and healthcare goals in regard to curative therapies, in light of the potential for significant opportunity costs. Given what we know about preferences for the funding of prevention and treatment measures, we should expect that cures will not be held to a different measure.
OriDB, the DNA replication origin database updated and extended.
Siow, Cheuk C; Nieduszynska, Sian R; Müller, Carolin A; Nieduszynski, Conrad A
2012-01-01
OriDB (http://www.oridb.org/) is a database containing collated genome-wide mapping studies of confirmed and predicted replication origin sites. The original database collated and curated Saccharomyces cerevisiae origin mapping studies. Here, we report that the OriDB database and web site have been revamped to improve user accessibility to curated data sets, to greatly increase the number of curated origin mapping studies, and to include the collation of replication origin sites in the fission yeast Schizosaccharomyces pombe. The revised database structure underlies these improvements and will facilitate further expansion in the future. The updated OriDB for S. cerevisiae is available at http://cerevisiae.oridb.org/ and for S. pombe at http://pombe.oridb.org/.
Clean and Cold Sample Curation
NASA Technical Reports Server (NTRS)
Allen, C. C.; Agee, C. B.; Beer, R.; Cooper, B. L.
2000-01-01
Curation of Mars samples includes both samples that are returned to Earth, and samples that are collected, examined, and archived on Mars. Both kinds of curation operations will require careful planning to ensure that the samples are not contaminated by the instruments that are used to collect and contain them. In both cases, sample examination and subdivision must take place in an environment that is organically, inorganically, and biologically clean. Some samples will need to be prepared for analysis under ultra-clean or cryogenic conditions. Inorganic and biological cleanliness are achievable separately by cleanroom and biosafety lab techniques. Organic cleanliness to the <50 ng/sq cm level requires material control and sorbent removal - techniques being applied in our Class 10 cleanrooms and sample processing gloveboxes.
Curation of Microscopic Astromaterials by NASA: "Gathering Dust Since 1981"
NASA Technical Reports Server (NTRS)
Frank, D. R.; Bastien, R. K.; Rodriguez, M.; Gonzalez, C.; Zolensky, M. E.
2013-01-01
Employing the philosophy that "Small is Beautiful", NASA has been collecting and curating microscopic astromaterials since 1981. These active collections now include interplanetary dust collected in Earth's stratosphere by U-2, ER-2 and WB-57F aircraft (the Cosmic Dust Program - our motto is "Gathering dust since 1981"), comet Wild-2 coma dust (the Stardust Mission), modern interstellar dust (also the Stardust Mission), asteroid Itokawa regolith dust (the Hayabusa Mission - joint curation with JAXA-ISAS), and interplanetary dust impact features on recovered portions of the following spacecraft: Skylab, the Solar Maximum Satellite, the Palapa Satellite, the Long Duration Exposure Facility (LDEF), the MIR Space Station, the International Space Station, and the Hubble Space Telescope (all in the Space Exposed Hardware Laboratory).
Phrenic nerve reconstruction in complete video-assisted thoracic surgery.
Kawashima, Shun; Kohno, Tadasu; Fujimori, Sakashi; Yokomakura, Naoya; Ikeda, Takeshi; Harano, Takashi; Suzuki, Souichiro; Iida, Takahiro; Sakai, Emi
2015-01-01
Primary or metastatic lung cancer or mediastinal tumours may at times involve the phrenic nerve and pericardium. To remove the pathology en bloc, the phrenic nerve must be resected. This results in phrenic nerve paralysis, which in turn reduces pulmonary function and quality of life. As a curative measure of this paralysis and thus a preventive measure against decreased pulmonary function and quality of life, we have performed immediate phrenic nerve reconstruction under complete video-assisted thoracic surgery, and with minimal additional stress to the patient. This study sought to ascertain the utility of this procedure from an evaluation of the cases experienced to date. We performed 6 cases of complete video-assisted thoracic surgery phrenic nerve reconstruction from October 2009 to December 2013 in patients who had undergone phrenic nerve resection or separation to remove tumours en bloc. In all cases, it was difficult to separate the phrenic nerve from the tumour. Reconstruction involved direct anastomosis in 3 cases and intercostal nerve interposition anastomosis in the remaining 3 cases. In the 6 patients (3 men, 3 women; mean age 50.8 years), we performed two right-sided and four left-sided procedures. The mean anastomosis time was 5.3 min for direct anastomosis and 35.3 min for intercostal nerve interposition anastomosis. Postoperative phrenic nerve function was measured on chest X-ray during inspiration and expiration. Direct anastomosis was effective in 2 of the 3 patients, and intercostal nerve interposition anastomosis was effective in all 3 patients. Diaphragm function was confirmed on X-ray to be improved in these 5 patients. Complete video-assisted thoracic surgery phrenic nerve reconstruction was effective for direct anastomosis as well as for intercostal nerve interposition anastomosis in a small sample of selected patients. The procedure shows promise for phrenic nerve reconstruction and further data should be accumulated over time. © The Author 2014. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.
Kim, Jin Cheon; Lee, Jong Lyul; Park, Seong Ho
2017-04-01
Since the introduction of indocyanine green angiography more than 25 years ago, few studies have presented interpretative guidelines for indocyanine green fluorescent imaging. We aimed to provide interpretative guidelines for indocyanine green fluorescent imaging through quantitative analysis and to suggest possible indications for indocyanine green fluorescent imaging during robot-assisted sphincter-saving operations. This is a retrospective observational study. This study was conducted at a single center. A cohort of 657 patients with rectal cancer who consecutively underwent curative robot-assisted sphincter-saving operations was enrolled between 2010 and 2016, including 310 patients with indocyanine green imaging (indocyanine green fluorescent imaging+ group) and 347 patients without indocyanine green imaging (indocyanine green fluorescent imaging- group). We tried to quantitatively define the indocyanine green fluorescent imaging findings based on perfusion (mesocolic and colic) time and perfusion intensity (5 grades) to provide probable indications. The anastomotic leakage rate was significantly lower in the indocyanine green fluorescent imaging+ group than in the indocyanine green fluorescent imaging- group (0.6% vs 5.2%) (OR, 0.123; 95% CI, 0.028-0.544; p = 0.006). Anastomotic stricture was closely correlated with anastomotic leakage (p = 0.002) and a short descending mesocolon (p = 0.003). Delayed perfusion (>60 s) and low perfusion intensity (1-2) were more frequently detected in patients with anastomotic stricture and marginal artery defects than in those without these factors (p ≤ 0.001). In addition, perfusion times greater than the mean were more frequently observed in patients aged >58 years, whereas low perfusion intensity was seen more in patients with short descending mesocolon and high ASA classes (≥3). The 300 patients in the indocyanine green fluorescent imaging- group underwent operations 3 years before indocyanine green fluorescent imaging. Quantitative analysis of indocyanine green fluorescent imaging may help prevent anastomotic complications during robot-assisted sphincter-saving operations, and may be of particular value in high-class ASA patients, older patients, and patients with a short descending mesocolon.
Zimmitti, Giuseppe; Manzoni, Alberto; Addeo, Pietro; Garatti, Marco; Zaniboni, Alberto; Bachellier, Philippe; Rosso, Edoardo
2016-04-01
Laparoscopic pancreatoduodenectomy (LPD) is a complex procedure. Critical steps are achieving a negative retroperitoneal margin and re-establishing pancreatoenteric continuity while minimizing postoperative pancreatic leak risk. Aiming at increasing the rate of R0 resection during pancreatoduodenectomy, many experienced teams have recommended the superior mesenteric artery (SMA)-first approach, consisting of early identification of the SMA at its origin, with further resection guided by the SMA's anatomic course. We describe our technique of LPD with SMA-first approach and pancreatogastrostomy assisted by mini-laparotomy. The video concerns a 77-year-old man undergoing our variant of LPD for a 2.5-cm pancreatic head mass. After kocherization, the SMA is identified above the left renocaval confluence and dissected free from the surrounding tissue. Dissection of the posterior pancreatic aspect exposes the confluence between splenic vein, superior mesenteric vein (SMV), and portal vein. Following duodenal section, the common hepatic artery is dissected and the gastroduodenal artery sectioned at the origin. The first jejunal loop is divided, skeletonized, and passed behind the superior mesenteric vessels. Following pancreatic transection, the uncinate process is dissected from the SMV and the SMA is cleared from retroportal tissue, rejoining the previously dissected plane. Laparoscopic choledochojejunostomy is followed by a mini-laparotomy-assisted pancreatogastrostomy, performed as previously described, and a terminolateral gastrojejunostomy. Twelve patients underwent our variant of LPD (July 2013-May 2015). The female/male ratio was 3:1, median age 65 years (range 57-79), median operation duration 590 min (580-690), and intraoperative blood loss 150 cl (100-250). The R0 resection rate was 100%, and the median number of resected lymph nodes was 24 (22-28). Postoperative complications were grade II in two patients and IIIa in one. Median postoperative length of stay was 16 days (14-21). LPD with an SMA-first approach and pancreatogastrostomy assisted by a mini-laparotomy combines the benefits of laparoscopy with a low risk of postoperative complications and a high rate of curative resection.
Community Intelligence in Knowledge Curation: An Application to Managing Scientific Nomenclature
Zou, Dong; Li, Ang; Liu, Guocheng; Chen, Fei; Wu, Jiayan; Xiao, Jingfa; Wang, Xumin; Yu, Jun; Zhang, Zhang
2013-01-01
Harnessing community intelligence in knowledge curation bears significant promise in dealing with communication and education in the flood of scientific knowledge. As knowledge is accumulated at ever-faster rates, scientific nomenclature, a particular kind of knowledge, is concurrently generated in all kinds of fields. Since nomenclature is a system of terms used to name things in a particular discipline, accurate translation of scientific nomenclature in different languages is of critical importance, not only for communications and collaborations with English-speaking people, but also for knowledge dissemination among people in the non-English-speaking world, particularly young students and researchers. However, translation of scientific nomenclature from English to other languages often lacks accuracy and standardization, especially for languages that do not belong to the same language family as English. To address this issue, here we propose for the first time the application of community intelligence in scientific nomenclature management, namely, harnessing collective intelligence for translation of scientific nomenclature from English to other languages. As community intelligence applied to knowledge curation is primarily aided by wiki and Chinese is the native language for about one-fifth of the world’s population, we put the proposed application into practice, by developing a wiki-based English-to-Chinese Scientific Nomenclature Dictionary (ESND; http://esnd.big.ac.cn). ESND is a wiki-based, publicly editable and open-content platform, exploiting the whole power of the scientific community in collectively and collaboratively managing scientific nomenclature. Based on community curation, ESND is capable of achieving accurate, standard, and comprehensive scientific nomenclature, demonstrating a valuable application of community intelligence in knowledge curation. PMID:23451119
Curating Big Data Made Simple: Perspectives from Scientific Communities.
Sowe, Sulayman K; Zettsu, Koji
2014-03-01
The digital universe is exponentially producing an unprecedented volume of data that has brought benefits as well as fundamental challenges for enterprises and scientific communities alike. This trend is inherently exciting for the development and deployment of cloud platforms to support scientific communities curating big data. The excitement stems from the fact that scientists can now access and extract value from the big data corpus, establish relationships between bits and pieces of information from many types of data, and collaborate with a diverse community of researchers from various domains. However, despite these perceived benefits, to date, little attention is focused on the people or communities who are both beneficiaries and, at the same time, producers of big data. The technical challenges posed by big data are as big as understanding the dynamics of communities working with big data, whether scientific or otherwise. Furthermore, the big data era also means that big data platforms for data-intensive research must be designed in such a way that research scientists can easily search and find data for their research, upload and download datasets for onsite/offsite use, perform computations and analysis, share their findings and research experience, and seamlessly collaborate with their colleagues. In this article, we present the architecture and design of a cloud platform that meets some of these requirements, and a big data curation model that describes how a community of earth and environmental scientists is using the platform to curate data. Motivation for developing the platform, lessons learnt in overcoming some challenges associated with supporting scientists to curate big data, and future research directions are also presented.
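As a concrete illustration of the kind of record such a curation platform might keep, the minimal sketch below (in Python, with field names that are assumptions rather than the platform's actual schema) shows a dataset entry that carries its own provenance so other researchers can trace how the data were handled:

    # Minimal sketch of a curated-dataset record; field names are illustrative
    # assumptions, not the platform's actual data model.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class CuratedDataset:
        dataset_id: str
        title: str
        contributor: str
        keywords: List[str] = field(default_factory=list)
        provenance: List[str] = field(default_factory=list)  # curation history

        def record_step(self, note: str) -> None:
            """Append a curation step so later users can trace data handling."""
            self.provenance.append(note)

    ds = CuratedDataset("ds-0001", "Soil moisture, site A", "earth-science group",
                        keywords=["soil", "moisture"])
    ds.record_step("uploaded raw sensor files")
    ds.record_step("quality-checked and flagged outliers")
    print(ds.provenance)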
Ahmadi, Marzieh; Rad, Abolfazl Khajavi; Rajaei, Ziba; Hadjzadeh, Mousa-Al-Reza; Mohammadian, Nema; Tabasi, Nafiseh Sadat
2012-01-01
Introduction: Alcea rosea L. is used in Asian folk medicine as a remedy for a wide range of ailments. The aim of the present study was to investigate the effect of hydroalcoholic extract of Alcea rosea roots on ethylene glycol-induced kidney calculi in rats. Materials and Methods: Male Wistar rats were randomly divided into control, ethylene glycol (EG), curative and preventive groups. The control group received tap drinking water for 28 days. The ethylene glycol (EG), curative and preventive groups received 1% ethylene glycol for induction of calcium oxalate (CaOx) calculus formation; the preventive and curative groups also received the hydroalcoholic extract of Alcea rosea roots in drinking water at a dose of 170 mg/kg, from day 0 or day 14, respectively. Urinary oxalate concentration was measured by spectrophotometer on days 0, 14 and 28. On day 28, the kidneys were removed and examined histopathologically under light microscopy for counting the calcium oxalate deposits in 50 microscopic fields. Results: In both the preventive and curative protocols, treatment of rats with hydroalcoholic extract of Alcea rosea roots significantly reduced the number of kidney calcium oxalate deposits compared to the ethylene glycol group. Administration of Alcea rosea extract also reduced the elevated urinary oxalate due to ethylene glycol. Conclusion: Alcea rosea showed a beneficial effect in preventing and eliminating calcium oxalate deposition in the rat kidney. This effect is possibly due to diuretic and anti-inflammatory effects or the presence of mucilaginous polysaccharides in the plant. It may also be related to lowering the urinary concentration of stone-forming constituents. PMID:22701236
Community intelligence in knowledge curation: an application to managing scientific nomenclature.
Dai, Lin; Xu, Chao; Tian, Ming; Sang, Jian; Zou, Dong; Li, Ang; Liu, Guocheng; Chen, Fei; Wu, Jiayan; Xiao, Jingfa; Wang, Xumin; Yu, Jun; Zhang, Zhang
2013-01-01
Harnessing community intelligence in knowledge curation bears significant promise in dealing with communication and education in the flood of scientific knowledge. As knowledge is accumulated at ever-faster rates, scientific nomenclature, a particular kind of knowledge, is concurrently generated in all kinds of fields. Since nomenclature is a system of terms used to name things in a particular discipline, accurate translation of scientific nomenclature in different languages is of critical importance, not only for communications and collaborations with English-speaking people, but also for knowledge dissemination among people in the non-English-speaking world, particularly young students and researchers. However, translation of scientific nomenclature from English to other languages often lacks accuracy and standardization, especially for languages that do not belong to the same language family as English. To address this issue, here we propose for the first time the application of community intelligence in scientific nomenclature management, namely, harnessing collective intelligence for translation of scientific nomenclature from English to other languages. As community intelligence applied to knowledge curation is primarily aided by wiki and Chinese is the native language for about one-fifth of the world's population, we put the proposed application into practice, by developing a wiki-based English-to-Chinese Scientific Nomenclature Dictionary (ESND; http://esnd.big.ac.cn). ESND is a wiki-based, publicly editable and open-content platform, exploiting the whole power of the scientific community in collectively and collaboratively managing scientific nomenclature. Based on community curation, ESND is capable of achieving accurate, standard, and comprehensive scientific nomenclature, demonstrating a valuable application of community intelligence in knowledge curation.
Sharing Responsibility for Data Stewardship Between Scientists and Curators
NASA Astrophysics Data System (ADS)
Hedstrom, M. L.
2012-12-01
Data stewardship is becoming increasingly important to support accurate conclusions from new forms of data, integration of and computation across heterogeneous data types, interactions between models and data, replication of results, data governance and long-term archiving. In addition to increasing recognition of the importance of data management, data science, and data curation by US and international scientific agencies, the National Academies of Science Board on Research Data and Information is sponsoring a study on Data Curation Education and Workforce Issues. Effective data stewardship requires a distributed effort among scientists who produce data, IT staff and/or vendors who provide data storage and computational facilities and services, and curators who enhance data quality, manage data governance, provide access to third parties, and assume responsibility for long-term archiving of data. The expertise necessary for scientific data management includes a mix of knowledge of the scientific domain; an understanding of domain data requirements, standards, ontologies and analytical methods; facility with leading edge information technology; and knowledge of data governance, standards, and best practices for long-term preservation and access that rarely are found in a single individual. Rather than developing data science and data curation as new and distinct occupations, this paper examines the set of tasks required for data stewardship. The paper proposes an alternative model that embeds data stewardship in scientific workflows and coordinates hand-offs between instruments, repositories, analytical processing, publishers, distributors, and archives. This model forms the basis for defining knowledge and skill requirements for specific actors in the processes required for data stewardship and the corresponding educational and training needs.
Müller, Bettina; Sola, José A; Carcamo, Marcela; Ciudad, Ana M; Trujillo, Cristian; Cerda, Berta
2013-01-01
Gallbladder cancer (GBC) is the second leading cause of cancer death in women in Chile. Even after curative surgery, prognosis is grim. To evaluate acute and late toxicity and efficacy of adjuvant chemoradiation (CRT) after curatively resected GBC. We retrospectively analyzed the cohort of patients diagnosed between January 1999 and December 2009 and treated with adjuvant CRT at our institution. The treatment protocol considered external beam radiation (RT) (45-54 Gy) to the tumor bed and regional lymph nodes with or without concurrent 5-fluorouracil (5-FU) (500 mg/m2/day by 120-hour continuous infusion on days 1-5 and 29-33). Data were obtained from medical records, and mortality from death certificates. Survival was estimated by Kaplan-Meier curves. Forty-six patients with curatively resected GBC received adjuvant CRT. Median age was 57 years (range 33-76); 39 patients were female. After diagnosis, a second surgery was performed in 42 patients. Cholecystectomy with hepatic segmentectomy and lymphadenectomy was the curative surgery in 41 patients. All patients received RT with a planned dose of 45 Gy in 25 fractions; 11 patients received a boost to the tumor bed up to 54 Gy, and 34 patients had concurrent 5-FU. Therapy was well tolerated. Five patients experienced grade 3 toxicities. No grade 4 or 5 toxicity was observed. No grade >2 late toxicity was observed. Three- and 5-year overall survival (OS) were 57% and 51%, respectively. Adjuvant chemoradiation is well tolerated and might impact favorably on survival in patients with curatively resected GBC.
Curation and Analysis of Samples from Comet Wild-2 Returned by NASA's Stardust Mission
NASA Technical Reports Server (NTRS)
Nakamura-Messenger, Keiko; Walker, Robert M.
2015-01-01
The NASA Stardust mission returned the first direct samples of a cometary coma from comet 81P/Wild-2 in 2006. Intact capture of samples encountered at 6 km/s was enabled by the use of aerogel, an ultralow-density silica polymer. Approximately 1000 particles were captured, with micron and submicron materials distributed along mm-scale tracks. This sample collection method and the fine scale of the samples posed new challenges to the curation and cosmochemistry communities. Sample curation involved extensive, detailed photo-documentation and delicate micro-surgery to remove particles without loss from the aerogel tracks. This work had to be performed in a highly clean facility to minimize the potential for contamination. JSC Curation provided samples ranging from entire tracks to micrometer-sized particles to external investigators. From the analysis perspective, distinguishing cometary materials from aerogel and identifying potential alteration from the capture process were essential. Here, transmission electron microscopy (TEM) proved to be the key technique that would make this possible. Based on TEM work by ourselves and others, a variety of surprising findings were reported, such as the observation of high-temperature phases resembling those found in meteorites, rare intact presolar grains, and scarce organic grains and submicrometer silicates. An important lesson from this experience is that curation and analysis teams must work closely together to understand the requirements and challenges of each task. The Stardust Mission has also laid an important foundation for future sample returns, including OSIRIS-REx, Hayabusa II, and future cometary nucleus sample return missions.
Agile Data Curation: A conceptual framework and approach for practitioner data management
NASA Astrophysics Data System (ADS)
Young, J. W.; Benedict, K. K.; Lenhardt, W. C.
2015-12-01
Data management occurs across a range of science and related activities such as decision-support. Exemplars within the science community operate data management systems that are extensively planned before implementation, staffed with robust data management expertise, equipped with appropriate services and technologies, and often highly structured. However, this is not the only approach to data management and almost certainly not the typical experience. The other end of the spectrum is often an ad hoc practitioner team, with changing requirements, limited training in data management, and constrained resources for both equipment and personnel. Much of the existing data management literature serves the exemplar community and ignores the ad hoc practitioners. Somewhere in the middle are examples where data are repurposed for new uses, thereby generating new data management challenges. This submission presents a conceptualization of an Agile Data Curation approach that provides foundational principles for data management efforts operating across the spectrum of data generation and use, from large science systems to efforts with constrained resources, limited expertise, and evolving requirements. The underlying principles of Agile Data Curation are a reapplication of agile software development principles to data management. The historical reality for many data management efforts is operating in a practitioner environment, so Agile Data Curation utilizes historical and current case studies to validate the foundational principles and, through comparison, learn lessons for future application. This submission will provide an overview of Agile Data Curation, cover the foundational principles of the approach, and introduce a framework for gathering, classifying, and applying lessons from case studies of practitioner data management.
NASA Technical Reports Server (NTRS)
Maurette, Michel; Hammer, C.; Harvey, R.; Immel, G.; Kurat, G.; Taylor, S.
1994-01-01
In a companion paper, Zolensky discusses interplanetary dust particles (IDP's) collected in the stratosphere. Here, we describe the recovery of much larger unmelted to partially melted IDP's from the Greenland and Antarctica ice sheet, and discuss problems arising in their collection and curation, as well as future prospects for tackling these problems.
With the increasing need to leverage data and models to perform cutting edge analyses within the environmental science community, collection and organization of that data into a readily accessible format for consumption is a pressing need. The EPA CompTox chemical dashboard is i...
Interview with Smithsonian NASM Spacesuit Curator Dr. Cathleen Lewis
NASA Technical Reports Server (NTRS)
Lewis, Cathleen; Wright, Rebecca
2012-01-01
Dr. Cathleen Lewis was interviewed by Rebecca Wright during the presentation of an "Interview with Smithsonian NASM Spacesuit Curator Dr. Cathleen Lewis" on May 14, 2012. Topics included the care, size, and history of the spacesuit collection at the Smithsonian and the recent move to the state-of-the-art permanent storage facility at the Udvar-Hazy facility in Virginia.
ERIC Educational Resources Information Center
Church, Earnie Mitchell, Jr.
2013-01-01
In the last couple of years, a new aspect of online social networking has emerged, in which the strength of social network connections is based not on social ties but mutually shared interests. This dissertation studies these "curation-based" online social networks (CBN) and their suitability for the diffusion of electronic word-of-mouth…
ERIC Educational Resources Information Center
Schrand, Tom; Jones, Katharine; Hanson, Valerie
2018-01-01
By embedding an ePortfolio process in a general education core that culminates with a senior capstone course, Thomas Jefferson University has created an opportunity for students to use their completed ePortfolios as archives of primary sources that they can curate to produce narratives about their intellectual development. The result was a…
Plant Reactome: a resource for plant pathways and comparative analysis
Naithani, Sushma; Preece, Justin; D'Eustachio, Peter; Gupta, Parul; Amarasinghe, Vindhya; Dharmawardhana, Palitha D.; Wu, Guanming; Fabregat, Antonio; Elser, Justin L.; Weiser, Joel; Keays, Maria; Fuentes, Alfonso Munoz-Pomer; Petryszak, Robert; Stein, Lincoln D.; Ware, Doreen; Jaiswal, Pankaj
2017-01-01
Plant Reactome (http://plantreactome.gramene.org/) is a free, open-source, curated plant pathway database portal, provided as part of the Gramene project. The database provides intuitive bioinformatics tools for the visualization, analysis and interpretation of pathway knowledge to support genome annotation, genome analysis, modeling, systems biology, basic research and education. Plant Reactome employs the structural framework of a plant cell to show metabolic, transport, genetic, developmental and signaling pathways. We manually curate molecular details of pathways in these domains for reference species Oryza sativa (rice) supported by published literature and annotation of well-characterized genes. Two hundred twenty-two rice pathways, 1025 reactions associated with 1173 proteins, 907 small molecules and 256 literature references have been curated to date. These reference annotations were used to project pathways for 62 model, crop and evolutionarily significant plant species based on gene homology. Database users can search and browse various components of the database, visualize curated baseline expression of pathway-associated genes provided by the Expression Atlas and upload and analyze their Omics datasets. The database also offers data access via Application Programming Interfaces (APIs) and in various standardized pathway formats, such as SBML and BioPAX. PMID:27799469
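Since the database exports pathways in standard formats such as SBML, a downloaded pathway file can be inspected programmatically. The sketch below assumes the python-libsbml package and uses a hypothetical export file name; it is not an official Plant Reactome client:

    # Sketch: inspect an SBML pathway export with python-libsbml.
    # The file name below is a hypothetical placeholder.
    import libsbml

    doc = libsbml.readSBML("rice_pathway_export.sbml")
    if doc.getNumErrors() > 0:
        doc.printErrors()

    model = doc.getModel()
    if model is not None:
        print("species:", model.getNumSpecies())
        print("reactions:", model.getNumReactions())
        for reaction in model.getListOfReactions():
            print(reaction.getId(), reaction.getName())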
Chandonia, John-Marc; Fox, Naomi K; Brenner, Steven E
2017-02-03
SCOPe (Structural Classification of Proteins-extended, http://scop.berkeley.edu) is a database of relationships between protein structures that extends the Structural Classification of Proteins (SCOP) database. SCOP is an expert-curated ordering of domains from the majority of proteins of known structure in a hierarchy according to structural and evolutionary relationships. SCOPe classifies the majority of protein structures released since SCOP development concluded in 2009, using a combination of manual curation and highly precise automated tools, aiming to have the same accuracy as fully hand-curated SCOP releases. SCOPe also incorporates and updates the ASTRAL compendium, which provides several databases and tools to aid in the analysis of the sequences and structures of proteins classified in SCOPe. SCOPe continues high-quality manual classification of new superfamilies, a key feature of SCOP. Artifacts such as expression tags are now separated into their own class, in order to distinguish them from the homology-based annotations in the remainder of the SCOPe hierarchy. SCOPe 2.06 contains 77,439 Protein Data Bank entries, double the 38,221 structures classified in SCOP. Copyright © 2016 The Author(s). Published by Elsevier Ltd. All rights reserved.
Guidelines for the functional annotation of microRNAs using the Gene Ontology
D'Eustachio, Peter; Smith, Jennifer R.; Zampetaki, Anna
2016-01-01
MicroRNA regulation of developmental and cellular processes is a relatively new field of study, and the available research data have not been organized to enable its inclusion in pathway and network analysis tools. The association of gene products with terms from the Gene Ontology is an effective method to analyze functional data, but until recently there has been no substantial effort dedicated to applying Gene Ontology terms to microRNAs. Consequently, when performing functional analysis of microRNA data sets, researchers have had to rely instead on the functional annotations associated with the genes encoding microRNA targets. In consultation with experts in the field of microRNA research, we have created comprehensive recommendations for the Gene Ontology curation of microRNAs. This curation manual will enable provision of a high-quality, reliable set of functional annotations for the advancement of microRNA research. Here we describe the key aspects of the work, including development of the Gene Ontology to represent this data, standards for describing the data, and guidelines to support curators making these annotations. The full microRNA curation guidelines are available on the GO Consortium wiki (http://wiki.geneontology.org/index.php/MicroRNA_GO_annotation_manual). PMID:26917558
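To make the shape of such an annotation concrete, the schematic record below lists the usual elements (gene product, GO term, evidence code, literature reference); the identifiers are illustrative placeholders and this is not the GO Consortium's official file format:

    # Schematic GO functional annotation for a microRNA; identifiers are
    # illustrative placeholders, not curated data.
    mirna_annotation = {
        "gene_product": "hsa-mir-21",            # the annotated microRNA (example)
        "go_id": "GO:0000000",                   # placeholder GO term identifier
        "evidence_code": "IMP",                  # inferred from mutant phenotype
        "reference": "PMID:00000000",            # placeholder literature reference
        "assigned_by": "example_curation_group",
    }
    print(mirna_annotation["gene_product"], "->", mirna_annotation["go_id"])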
Kubo, Shoji; Takemura, Shigekazu; Tanaka, Shogo; Shinkawa, Hiroji; Nishioka, Takayoshi; Nozawa, Akinori; Kinoshita, Masahiko; Hamano, Genya; Ito, Tokuji; Urata, Yorihisa
2015-01-01
Although liver resection is considered the most effective treatment for hepatocellular carcinoma (HCC), treatment outcomes are unsatisfactory because of the high rate of HCC recurrence. Since we reported in the early 2000s that hepatitis B e-antigen positivity and high serum hepatitis B virus (HBV) DNA concentrations are strong risk factors for HCC recurrence after curative resection of HBV-related HCC, many investigators have demonstrated the effects of viral status on HCC recurrence and post-treatment outcomes. These findings suggest that controlling viral status is important to prevent HCC recurrence and improve survival after curative treatment for HBV-related HCC. Antiviral therapy after curative treatment aims to improve prognosis by preventing HCC recurrence and maintaining liver function. Therapy with interferon and nucleos(t)ide analogs may be useful for preventing HCC recurrence and improving overall survival in patients who have undergone curative resection for HBV-related HCC. In addition, reactivation of viral replication can occur after liver resection for HBV-related HCC. Antiviral therapy can be recommended for patients to prevent HBV reactivation. Nevertheless, further studies are required to establish treatment guidelines for patients with HBV-related HCC. PMID:26217076
Zhang, Z
1989-12-01
Following the methodology and diagnostic standard of the 1982 "Epidemiological investigation" of 12 collaborative units, we conducted a follow-up investigation of the relation between outcome and maintenance drug therapy in 324 community-dwelling patients with schizophrenia. The results showed that in the group that adhered to medication the cure rate was 25.21% and the effective rate was 97.48%, whereas in the group that took medication irregularly the cure rate was 6.63% and the effective rate was 68.37%; the differences in cure rate and effective rate between the two groups were remarkable. We also compared the condition of the adherent group before and during the investigation and found no remarkable change, indicating that maintaining drug therapy after leaving the hospital is of important meaning for the curative effect of the disease. The curative effect was also compared between patients with a single episode and those with multiple episodes; statistical analysis showed no remarkable difference among the groups. This suggests that we should never lose confidence in these patients and should treat them actively.
A computational platform to maintain and migrate manual functional annotations for BioCyc databases.
Walsh, Jesse R; Sen, Taner Z; Dickerson, Julie A
2014-10-12
BioCyc databases are an important resource for information on biological pathways and genomic data. Such databases represent the accumulation of biological data, some of which has been manually curated from literature. An essential feature of these databases is the continuing data integration as new knowledge is discovered. As functional annotations are improved, scalable methods are needed for curators to manage annotations without detailed knowledge of the specific design of the BioCyc database. We have developed CycTools, a software tool which allows curators to maintain functional annotations in a model organism database. This tool builds on existing software to improve and simplify annotation data imports of user provided data into BioCyc databases. Additionally, CycTools automatically resolves synonyms and alternate identifiers contained within the database into the appropriate internal identifiers. Automating steps in the manual data entry process can improve curation efforts for major biological databases. The functionality of CycTools is demonstrated by transferring GO term annotations from MaizeCyc to matching proteins in CornCyc, both maize metabolic pathway databases available at MaizeGDB, and by creating strain specific databases for metabolic engineering.
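The identifier-resolution step that CycTools automates can be sketched in a few lines; the mapping table and identifiers below are made-up placeholders meant only to show the general idea, not CycTools' actual implementation or API:

    # Sketch of resolving user-supplied synonyms/alternate identifiers to a
    # database's internal identifiers before an annotation import.
    synonym_to_internal = {
        "GENE-ALT-0001": "G-0001",   # alternate identifier -> internal frame id
        "abc1":          "G-0001",   # gene symbol synonym
        "GENE-ALT-0002": "G-0002",
    }

    user_annotations = [
        ("abc1", "GO:0003700"),           # (identifier as supplied, GO term)
        ("GENE-ALT-0002", "GO:0005634"),
        ("unknown-id", "GO:0008150"),
    ]

    for supplied_id, go_term in user_annotations:
        internal = synonym_to_internal.get(supplied_id)
        if internal is None:
            print("skipping %s: no internal identifier found" % supplied_id)
        else:
            print("import %s for %s (matched via %s)" % (go_term, internal, supplied_id))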
Distribution and utilization of curative primary healthcare services in Lahej, Yemen.
Bawazir, A A; Bin Hawail, T S; Al-Sakkaf, K A Z; Basaleem, H O; Muhraz, A F; Al-Shehri, A M
2013-09-01
No evidence-based data exist on the availability, accessibility and utilization of healthcare services in Lahej Governorate, Yemen. The aim of this study was to assess the distribution and utilization of curative services in primary healthcare units and centres in Lahej. Cross-sectional study (clustering sample). This study was conducted in three of the 15 districts in Lahej between December 2009 and August 2010. Household members were interviewed using a questionnaire to determine sociodemographic characteristics and types of healthcare services available in the area. The distribution of health centres, health units and hospitals did not match the size of the populations or areas of the districts included in this study. Geographical accessibility was the main obstacle to utilization. Factors associated with the utilization of curative services were significantly related to the time required to reach the nearest facility, seeking curative services during illness and awareness of the availability of health facilities (P < 0.01). There is an urgent need to look critically and scientifically at the distribution of healthcare services in the region in order to ensure accessibility and quality of services. Copyright © 2013 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.
LitVar: a semantic search engine for linking genomic variant data in PubMed and PMC.
Allot, Alexis; Peng, Yifan; Wei, Chih-Hsuan; Lee, Kyubum; Phan, Lon; Lu, Zhiyong
2018-05-14
The identification and interpretation of genomic variants play a key role in the diagnosis of genetic diseases and related research. These tasks increasingly rely on accessing relevant manually curated information from domain databases (e.g. SwissProt or ClinVar). However, due to the sheer volume of medical literature and high cost of expert curation, curated variant information in existing databases are often incomplete and out-of-date. In addition, the same genetic variant can be mentioned in publications with various names (e.g. 'A146T' versus 'c.436G>A' versus 'rs121913527'). A search in PubMed using only one name usually cannot retrieve all relevant articles for the variant of interest. Hence, to help scientists, healthcare professionals, and database curators find the most up-to-date published variant research, we have developed LitVar for the search and retrieval of standardized variant information. In addition, LitVar uses advanced text mining techniques to compute and extract relationships between variants and other associated entities such as diseases and chemicals/drugs. LitVar is publicly available at https://www.ncbi.nlm.nih.gov/CBBresearch/Lu/Demo/LitVar.
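The normalization problem that motivates LitVar can be illustrated with a toy alias table; the mapping below is hard-coded for illustration only, whereas LitVar computes such links with text mining against standard variant resources:

    # Toy illustration: the same variant written three ways, so a literal
    # query on one surface form would miss the other two.
    variant_aliases = {
        "rs121913527": {"A146T", "c.436G>A", "rs121913527"},
    }

    def expand_query(term):
        """Return every known alias for a variant mention, or the term itself."""
        for canonical, aliases in variant_aliases.items():
            if term in aliases:
                return aliases
        return {term}

    print(expand_query("A146T"))   # retrieves all three surface forms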
[Two Cases of Curative Resection of Locally Advanced Rectal Cancer after Preoperative Chemotherapy].
Mitsuhashi, Noboru; Shimizu, Yoshiaki; Kuboki, Satoshi; Yoshitomi, Hideyuki; Kato, Atsushi; Ohtsuka, Masayuki; Shimizu, Hiroaki; Miyazaki, Masaru
2015-11-01
Reports of conversion in cases of locally advanced colorectal cancer have been increasing. Here, we present 2 cases in which curative resection of locally advanced rectal cancer accompanied by intestinal obstruction was achieved after establishing a stoma and administering chemotherapy. The first case was of a 46-year-old male patient diagnosed with upper rectal cancer and intestinal obstruction. Because of a high level of retroperitoneal invasion, after establishing a sigmoid colostomy, 13 courses of mFOLFOX6 plus Pmab were administered. Around 6 months after the initial surgery, low anterior resection for rectal cancer and surgery to close the stoma were performed. Fourteen days after curative resection, the patient was discharged from the hospital. The second case was of a 66-year-old male patient with a circumferential tumor extending from Rs to R, accompanied by right ureter infiltration and sub-intestinal obstruction. After establishing a sigmoid colostomy, 11 courses of mFOLFOX6 plus Pmab were administered. Five months after the initial surgery, anterior resection of the rectum and surgery to close the stoma were performed. Twenty days after curative resection, the patient was released from the hospital. No recurrences have been detected in either case.
Nuts and Bolts - Techniques for Genesis Sample Curation
NASA Technical Reports Server (NTRS)
Burkett, Patti J.; Rodriquez, M. C.; Allton, J. H.
2011-01-01
The Genesis curation staff at NASA Johnson Space Center provides samples and data for analysis to the scientific community, following allocation approval by the Genesis Oversight Committee, a sub-committee of CAPTEM (Curation Analysis Planning Team for Extraterrestrial Materials). We are often asked by investigators within the scientific community how we choose samples to best fit the requirements of the request. Here we will demonstrate our techniques for characterizing samples and satisfying allocation requests. Even with a systematic approach, every allocation is unique. We also provide an updated status of the cataloging and characterization of solar wind collectors as of January 2011. The collection consists of 3721 inventoried samples, each consisting of a single fragment or of multiple fragments, containerized or pressed between post-it notes, or in jars or vials of various sizes.
PomBase: a comprehensive online resource for fission yeast
Wood, Valerie; Harris, Midori A.; McDowall, Mark D.; Rutherford, Kim; Vaughan, Brendan W.; Staines, Daniel M.; Aslett, Martin; Lock, Antonia; Bähler, Jürg; Kersey, Paul J.; Oliver, Stephen G.
2012-01-01
PomBase (www.pombase.org) is a new model organism database established to provide access to comprehensive, accurate, and up-to-date molecular data and biological information for the fission yeast Schizosaccharomyces pombe to effectively support both exploratory and hypothesis-driven research. PomBase encompasses annotation of genomic sequence and features, comprehensive manual literature curation and genome-wide data sets, and supports sophisticated user-defined queries. The implementation of PomBase integrates a Chado relational database that houses manually curated data with Ensembl software that supports sequence-based annotation and web access. PomBase will provide user-friendly tools to promote curation by experts within the fission yeast community. This will make a key contribution to shaping its content and ensuring its comprehensiveness and long-term relevance. PMID:22039153
Korpan, Nikolai N; Xu, Kecheng; Schwarzinger, Philipp; Watanabe, Masashi; Breitenecker, Gerhard; Patrick, Le Pivert
2018-01-01
The aim of the study was to perform cryosurgery on a primary breast tumor, coupled with simultaneous peritumoral and intratumoral tracer injection of a blue dye, to evaluate lymphatic mapping. We explored the ability of our strategy to prevent tumor cells, but not the injected tracers, from migrating into the lymphovascular drainage during conventional resection of frozen breast malignancies. Seventeen patients aged 51 (14) years (mean [standard deviation]), presenting with stage I to IV primary breast cancer, were randomly selected and treated in The Rudolfinerhaus Private Clinic in Vienna, Austria, and included in this preliminary clinical study. Under intraoperative ultrasound, 14 patients underwent curative cryo-assisted tumor resection en bloc, coupled with peritumoral tracer injection, which consisted of complete tumor freezing and concomitant peritumor injection with a blue dye, before resection and sentinel lymph node dissection (group A). Group B consisted of 3 patients who had previously refused any standard therapy and underwent palliative tumor cryoablation in situ combined with intratumoral tracer injection. The intraoperative ultrasound facilitated needle positioning and dye injection timing. In group A, the frozen site extruded the dye, which was distributed through the unfrozen tumor, the breast tissue, and the resection cavity in 12 patients. One to 4 lymph nodes were stained in 10 of 14 patients. The resection margin was evaluable. Intraoperative ultrasound guidance revealed the injection and migration of the blue dye during en bloc frozen resection and in situ cryoablation of primary breast tumors. Sentinel lymph node mapping, pathological determination of the tumor, and resection margins were achievable. The study paves the way for intraoperative cryo-assisted therapeutic strategies for breast cancer.
Drug Evaluation in the Plasmodium Falciparum - Aotus Model
1986-10-01
blood schizonticidal/curative activity of experimental antimalarial drugs. WR 245082, an acridineamine, at similar doses cured infections of chloroquine ...Guinea - Chesson strain). The curative activity of WR 245082, an acridineamine, for chloroquine-sensitive and chloroquine-resistant strains of P...antimalarial activity of two analogues of the amino acid histidine was assessed against infections of the Uganda Palo Alto strain. WR 251853, 2-fluoro-l
ERIC Educational Resources Information Center
Carlson, Jake; Stowell-Bracke, Marianne
2013-01-01
Libraries are actively seeking to identify and respond to the data management and curation needs of researchers. One important perspective often overlooked in acquiring an understanding is that of graduate students. This case study used the Data Curation Profile Toolkit to interview six graduate students working for agronomy researchers at the…
Apollo Missions to the Lunar Surface
NASA Technical Reports Server (NTRS)
Graff, Paige V.
2018-01-01
Six Apollo missions to the Moon, from 1969-1972, enabled astronauts to collect and bring lunar rocks and materials from the lunar surface to Earth. Apollo lunar samples are curated by NASA Astromaterials at the NASA Johnson Space Center in Houston, TX. Samples continue to be studied and provide clues about our early Solar System. Learn more and view collected samples at: https://curator.jsc.nasa.gov/lunar.
Gaĭdarov, G M; Alekseeva, N Iu; Latysheva, E A
2010-01-01
The article deals with the technique of economic analysis of the effectiveness of a multi-profile curative and preventive medical institution under the transition to payment according to the completed case of treatment. The necessity of measures targeted at preventing financial losses under the new form of payment for hospital care is demonstrated.
[The mean timing of periodontic care rendering].
Zorina, O A; Abaev, Z M; Domashev, D I; Boriskina, O A
2012-01-01
The time studies demonstrated that the periodontologist spends 30.3 +/- 2.6 minutes on a patient's primary ambulatory visit and 16.4 +/- 0.9 minutes on a revisit (not counting time spent on preventive and curative activities). Time spent on curative and preventive activities in each group of patients with periodontal diseases depended on both the severity of inflammatory destructive processes in the periodontium and the therapy stage.
Zhang, Xu-Feng; Bagante, Fabio; Chen, Qinyu; Beal, Eliza W; Lv, Yi; Weiss, Matthew; Popescu, Irinel; Marques, Hugo P; Aldrighetti, Luca; Maithel, Shishir K; Pulitano, Carlo; Bauer, Todd W; Shen, Feng; Poultsides, George A; Soubrane, Olivier; Martel, Guillaume; Koerkamp, B Groot; Guglielmi, Alfredo; Itaru, Endo; Pawlik, Timothy M
2018-05-01
Intrahepatic cholangiocarcinoma with hepatic hilus involvement has been either classified as intrahepatic cholangiocarcinoma or hilar cholangiocarcinoma. The present study aimed to investigate the clinicopathologic characteristics and short- and long-term outcomes after curative resection for hilar type intrahepatic cholangiocarcinoma in comparison with peripheral intrahepatic cholangiocarcinoma and hilar cholangiocarcinoma. A total of 912 patients with mass-forming peripheral intrahepatic cholangiocarcinoma, 101 patients with hilar type intrahepatic cholangiocarcinoma, and 159 patients with hilar cholangiocarcinoma undergoing curative resection from 2000 to 2015 were included from two multi-institutional databases. Clinicopathologic characteristics and short- and long-term outcomes were compared among the 3 groups. Patients with hilar type intrahepatic cholangiocarcinoma had more aggressive tumor characteristics (eg, higher frequency of vascular invasion and lymph nodes metastasis) and experienced more extensive resections in comparison with either peripheral intrahepatic cholangiocarcinoma or hilar cholangiocarcinoma patients. The odds of lymphadenectomy and R0 resection rate among patients with hilar type intrahepatic cholangiocarcinoma were comparable with hilar cholangiocarcinoma patients, but higher than peripheral intrahepatic cholangiocarcinoma patients (lymphadenectomy incidence, 85.1% vs 42.5%, P < .001; R0 rate, 75.2% vs 88.8%, P < .001). After curative surgery, patients with hilar type intrahepatic cholangiocarcinoma experienced a higher rate of technical-related complications compared with peripheral intrahepatic cholangiocarcinoma patients. Of note, hilar type intrahepatic cholangiocarcinoma was associated with worse disease-specific survival and recurrence-free survival after curative resection versus peripheral intrahepatic cholangiocarcinoma (median disease-specific survival, 26.0 vs 54.0 months, P < .001; median recurrence-free survival, 13.0 vs 18.0 months, P = .021) and hilar cholangiocarcinoma (median disease-specific survival, 26.0 vs 49.0 months, P = .003; median recurrence-free survival, 13.0 vs 33.4 months, P < .001). Mass-forming intrahepatic cholangiocarcinoma with hepatic hilus involvement is a more aggressive type of cholangiocarcinoma, which showed distinct clinicopathologic characteristics, worse long-term outcomes after curative resection, in comparison with peripheral intrahepatic cholangiocarcinoma and hilar cholangiocarcinoma. Copyright © 2018 Elsevier Inc. All rights reserved.
Dahdul, Wasila M; Balhoff, James P; Engeman, Jeffrey; Grande, Terry; Hilton, Eric J; Kothari, Cartik; Lapp, Hilmar; Lundberg, John G; Midford, Peter E; Vision, Todd J; Westerfield, Monte; Mabee, Paula M
2010-05-20
The wealth of phenotypic descriptions documented in the published articles, monographs, and dissertations of phylogenetic systematics is traditionally reported in a free-text format, and it is therefore largely inaccessible for linkage to biological databases for genetics, development, and phenotypes, and difficult to manage for large-scale integrative work. The Phenoscape project aims to represent these complex and detailed descriptions with rich and formal semantics that are amenable to computation and integration with phenotype data from other fields of biology. This entails reconceptualizing the traditional free-text characters into the computable Entity-Quality (EQ) formalism using ontologies. We used ontologies and the EQ formalism to curate a collection of 47 phylogenetic studies on ostariophysan fishes (including catfishes, characins, minnows, knifefishes) and their relatives with the goal of integrating these complex phenotype descriptions with information from an existing model organism database (zebrafish, http://zfin.org). We developed a curation workflow for the collection of character, taxonomic and specimen data from these publications. A total of 4,617 phenotypic characters (10,512 states) for 3,449 taxa, primarily species, were curated into EQ formalism (for a total of 12,861 EQ statements) using anatomical and taxonomic terms from teleost-specific ontologies (Teleost Anatomy Ontology and Teleost Taxonomy Ontology) in combination with terms from a quality ontology (Phenotype and Trait Ontology). Standards and guidelines for consistently and accurately representing phenotypes were developed in response to the challenges that were evident from two annotation experiments and from feedback from curators. The challenges we encountered and many of the curation standards and methods for improving consistency that we developed are generally applicable to any effort to represent phenotypes using ontologies. This is because an ontological representation of the detailed variations in phenotype, whether between mutant or wildtype, among individual humans, or across the diversity of species, requires a process by which a precise combination of terms from domain ontologies are selected and organized according to logical relations. The efficiencies that we have developed in this process will be useful for any attempt to annotate complex phenotypic descriptions using ontologies. We also discuss some ramifications of EQ representation for the domain of systematics.
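As an illustration of the Entity-Quality (EQ) formalism described above, the following minimal Python sketch shows how a single free-text character might be captured as a structured annotation. The dataclass, field names and ontology term IDs are illustrative assumptions for exposition only, not the actual Phenoscape data model or verified TAO/PATO/TTO identifiers.

```python
from dataclasses import dataclass

@dataclass
class EQStatement:
    """One Entity-Quality annotation: an anatomical entity, a quality and the taxon described."""
    entity: str       # anatomical entity, e.g. a Teleost Anatomy Ontology (TAO) term ID
    quality: str      # quality, e.g. a Phenotype and Trait Ontology (PATO) term ID
    taxon: str        # taxon exhibiting the phenotype, e.g. a Teleost Taxonomy Ontology (TTO) term ID
    provenance: str   # source study, character and state, for provenance

# Hypothetical curation of the free-text character "anal fin: absent" for some species
# (all IDs below are placeholders, not verified ontology terms):
example = EQStatement(
    entity="TAO:0000000",
    quality="PATO:0000462",
    taxon="TTO:0000000",
    provenance="Author et al. 2005, character 12, state 0",
)
print(example)
```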
The Hayabusa Curation Facility at Johnson Space Center
NASA Technical Reports Server (NTRS)
Zolensky, M.; Bastien, R.; McCann, B.; Frank, D.; Gonzalez, C.; Rodriguez, M.
2013-01-01
The Japan Aerospace Exploration Agency (JAXA) Hayabusa spacecraft made contact with the asteroid 25143 Itokawa and collected regolith dust from the Muses Sea region of smooth terrain [1]. The spacecraft returned to Earth with more than 10,000 grains ranging in size from just over 300 µm to less than 10 µm [2, 3]. These grains represent the only collection of material returned from an asteroid by a spacecraft. As part of the joint agreement between JAXA and NASA for the mission, 10% of the Hayabusa grains are being transferred to NASA for parallel curation and allocation. In order to properly receive, process, and curate these samples, a new curation facility was established at Johnson Space Center (JSC). Since the Hayabusa samples within the JAXA curation facility have been stored free from exposure to terrestrial atmosphere and contamination [4], one of the goals of the new NASA curation facility was to continue this treatment. An existing lab space at JSC was transformed into a 120 sq.ft. ISO class 4 (equivalent to the original class 10 standard) clean room. Hayabusa samples are stored, observed, processed, and packaged for allocation inside a stainless steel glove box under dry N2. Construction of the clean laboratory was completed in 2012. Currently, 25 Itokawa particles are housed in NASA's Hayabusa Lab. Special care has been taken during lab construction to remove or contain materials that may contribute contaminant particles in the same size range as the Hayabusa grains. Several witness plates of various materials are installed around the clean lab and within the glove box to permit characterization of local contaminants at regular intervals by SEM and mass spectrometry, and particle counts of the lab environment are frequently acquired. Of particular interest is anodized aluminum, which contains copious sub-mm grains of a multitude of different materials embedded in its upper surface. Unfortunately, the use of anodized aluminum was necessary in the construction of the clean room frame to strengthen it and eliminate corrosion and wear over time. All anodized aluminum interior to the lab was thus covered or replaced by minimally contaminating materials.
Automatic vs. manual curation of a multi-source chemical dictionary: the impact on text mining.
Hettne, Kristina M; Williams, Antony J; van Mulligen, Erik M; Kleinjans, Jos; Tkachenko, Valery; Kors, Jan A
2010-03-23
Previously, we developed a combined dictionary dubbed Chemlist for the identification of small molecules and drugs in text based on a number of publicly available databases and tested it on an annotated corpus. To achieve an acceptable recall and precision we used a number of automatic and semi-automatic processing steps together with disambiguation rules. However, it remained to be investigated what impact an extensive manual curation of a multi-source chemical dictionary would have on chemical term identification in text. ChemSpider is a chemical database that has undergone extensive manual curation aimed at establishing valid chemical name-to-structure relationships. We acquired the component of ChemSpider containing only manually curated names and synonyms. Rule-based term filtering, semi-automatic manual curation, and disambiguation rules were applied. We tested the dictionary from ChemSpider on an annotated corpus and compared the results with those for the Chemlist dictionary. The ChemSpider dictionary of ca. 80 k names was only a third to a quarter of the size of Chemlist, which contains around 300 k names. The ChemSpider dictionary had a precision of 0.43 and a recall of 0.19 before the application of filtering and disambiguation and a precision of 0.87 and a recall of 0.19 after filtering and disambiguation. The Chemlist dictionary had a precision of 0.20 and a recall of 0.47 before the application of filtering and disambiguation and a precision of 0.67 and a recall of 0.40 after filtering and disambiguation. We conclude the following: (1) The ChemSpider dictionary achieved the best precision but the Chemlist dictionary had a higher recall and the best F-score; (2) Rule-based filtering and disambiguation are necessary to achieve a high precision for both the automatically generated and the manually curated dictionary. ChemSpider is available as a web service at http://www.chemspider.com/ and the Chemlist dictionary is freely available as an XML file in Simple Knowledge Organization System format on the web at http://www.biosemantics.org/chemlist.
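The F-scores implied by the precision and recall figures above can be checked with the standard F1 formula (the harmonic mean of precision and recall); the numbers below are taken directly from the abstract, and the small script is only a worked check, not the authors' evaluation code.

```python
def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Figures after filtering and disambiguation, as reported above:
print(f"ChemSpider F1: {f1(0.87, 0.19):.2f}")  # ~0.31
print(f"Chemlist   F1: {f1(0.67, 0.40):.2f}")  # ~0.50, i.e. the higher F-score, as concluded
```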
NASA Astrophysics Data System (ADS)
Pignol, C.; Arnaud, F.; Godinho, E.; Galabertier, B.; Caillo, A.; Billy, I.; Augustin, L.; Calzas, M.; Rousseau, D. D.; Crosta, X.
2016-12-01
Managing scientific data is probably one of the most challenging issues in modern science. In the paleosciences the question is made even more sensitive by the need to preserve and manage high-value, fragile geological samples: cores. Large international scientific programs, such as IODP or ICDP, have led intense efforts to solve this problem and have proposed detailed, high-standard work- and dataflows for core handling and curation. However, many paleoscience results derive from small-scale research programs in which data and sample management is too often handled only locally, when it is handled at all. In this paper we present a national effort led in France to develop an integrated system for curating ice and sediment cores. Under the umbrella of the national excellence equipment program CLIMCOR, we began a reflection on core curation and the management of associated fieldwork data. Our aim was to conserve all fieldwork data in an integrated cyber-environment that will evolve toward laboratory-acquired data storage in the near future. To do so, we worked closely with field operators as well as laboratory core curators in order to propose user-oriented solutions. The national core curating initiative proposes a single web portal in which all teams can store their fieldwork data. This portal is used as a national hub to attribute IGSNs. For legacy samples, this requires the establishment of a dedicated core list with associated metadata. For forthcoming core data, however, we developed a mobile application to capture technical and scientific data directly in the field. This application is linked to a unique coring-tools library and is adapted to most coring devices (gravity, drilling, percussion, etc.), including coring operations with multiple sections and holes. These field data can be uploaded automatically to the national portal, referenced through international standards (IGSN and INSPIRE) and displayed in international portals (currently, NOAA's IMLGS). In this paper, we present the architecture of the integrated system, future perspectives and the approach we adopted to reach our goals. We also present our mobile application through didactic examples.
NASA Astrophysics Data System (ADS)
Ramdeen, S.; Hangsterfer, A.; Stanley, V. L.
2017-12-01
There is growing enthusiasm for curation of physical samples in the Earth Science community (see sessions at AGU, GSA, ESIP). Multiple federally funded efforts aim to develop best practices for curation of physical samples; however, these efforts have not yet been consolidated. Harmonizing these concurrent efforts would enable the community as a whole to build the necessary tools and community standards to move forward together. Preliminary research indicates that the various groups focused on this topic are working in isolation, and that the development of standards needs to come from the broadest view of `community'. We will investigate the gaps between communities by collecting information about preservation policies and practices from curators, who can provide a diverse cross-section of the grand challenges to the overall community. We will look at existing reports and study results to identify example cases, then develop a survey to gather large-scale data to reinforce or clarify the example cases. We will target the various community groups that are working on similar issues and use the survey to improve the visibility of developed best practices. Given that preservation and digital collection management for physical samples are both important and difficult at present (GMRWG, 2015; NRC, 2002), barriers to both need to be addressed in order to achieve open science goals for the entire community. To address these challenges, EarthCube's iSamples, a research coordination network established to advance discoverability, access, and curation of physical samples using cyberinfrastructure, has formed a working group to collect use cases to examine the breadth of earth scientists' work with physical samples. This research team includes curators of state survey and oceanographic geological collections, and a researcher from information science. In our presentation, we will share our research and the design of the proposed survey. Our goal is to engage the audience in a discussion on next steps towards building this community. References: The Geologic Materials Repository Working Group, 2015, USGS Circular 1410; National Research Council, 2002, Geoscience Data and Collections: National Resources in Peril.
Phylesystem: a git-based data store for community-curated phylogenetic estimates.
McTavish, Emily Jane; Hinchliff, Cody E; Allman, James F; Brown, Joseph W; Cranston, Karen A; Holder, Mark T; Rees, Jonathan A; Smith, Stephen A
2015-09-01
Phylogenetic estimates from published studies can be archived using general platforms like Dryad (Vision, 2010) or TreeBASE (Sanderson et al., 1994). Such services fulfill a crucial role in ensuring transparency and reproducibility in phylogenetic research. However, digital tree data files often require some editing (e.g. rerooting) to improve the accuracy and reusability of the phylogenetic statements. Furthermore, establishing the mapping between tip labels used in a tree and taxa in a single common taxonomy dramatically improves the ability of other researchers to reuse phylogenetic estimates. As the process of curating a published phylogenetic estimate is not error-free, retaining a full record of the provenance of edits to a tree is crucial for openness, allowing editors to receive credit for their work and making errors introduced during curation easier to correct. Here, we report the development of software infrastructure to support the open curation of phylogenetic data by the community of biologists. The backend of the system provides an interface for the standard database operations of creating, reading, updating and deleting records by making commits to a git repository. The record of the history of edits to a tree is preserved by git's version control features. Hosting this data store on GitHub (http://github.com/) provides open access to the data store using tools familiar to many developers. We have deployed a server running the 'phylesystem-api', which wraps the interactions with git and GitHub. The Open Tree of Life project has also developed and deployed a JavaScript application that uses the phylesystem-api and other web services to enable input and curation of published phylogenetic statements. Source code for the web service layer is available at https://github.com/OpenTreeOfLife/phylesystem-api. The data store can be cloned from: https://github.com/OpenTreeOfLife/phylesystem. A web application that uses the phylesystem web services is deployed at http://tree.opentreeoflife.org/curator. Code for that tool is available from https://github.com/OpenTreeOfLife/opentree. mtholder@gmail.com. © The Author 2015. Published by Oxford University Press.
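The core idea, backing create/read/update/delete operations with git commits so that the full edit history is retained, can be sketched in a few lines of Python. This is a minimal illustration using the plain git command line via subprocess; the repository layout, file naming and commit conventions are assumptions for exposition and do not reproduce the actual phylesystem-api implementation.

```python
import json
import subprocess
from pathlib import Path

REPO = Path("phylesystem")  # assumed local clone of the data store

def _git(*args: str) -> None:
    """Run a git command inside the data-store repository."""
    subprocess.run(["git", "-C", str(REPO), *args], check=True)

def save_study(study_id: str, document: dict, author: str, message: str) -> None:
    """Create or update a curated study by writing its JSON file and committing it;
    git's version control preserves the full record of edits. author is "Name <email>"."""
    path = REPO / "study" / f"{study_id}.json"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(document, indent=2))
    _git("add", str(path.relative_to(REPO)))
    _git("commit", f"--author={author}", "-m", message)

def delete_study(study_id: str, author: str, message: str) -> None:
    """Delete a study; the record remains recoverable from the repository history."""
    _git("rm", f"study/{study_id}.json")
    _git("commit", f"--author={author}", "-m", message)
```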
Iyappan, Anandhi; Kawalia, Shweta Bagewadi; Raschka, Tamara; Hofmann-Apitius, Martin; Senger, Philipp
2016-07-08
Neurodegenerative diseases are incurable and debilitating indications with huge social and economic impact, where much is still to be learnt about the underlying molecular events. Mechanistic disease models could offer a knowledge framework to help decipher the complex interactions that occur at molecular and cellular levels. This motivates the need for an approach that integrates highly curated and heterogeneous data into a disease model spanning different regulatory data layers. Although several disease models exist, they often do not consider the quality of the underlying data. Moreover, even with the current advancements in semantic web technology, we still do not have a cure for complex diseases like Alzheimer's disease. One of the key reasons for this could be the increasing gap between generated data and the knowledge derived from it. In this paper, we describe an approach, called NeuroRDF, to develop an integrative framework for modeling curated knowledge in the area of complex neurodegenerative diseases. The core of this strategy lies in the use of well curated and context-specific data integrated into one single semantic web-based framework, RDF. This increases the probability that the derived knowledge will be novel and reliable in a specific disease context. This infrastructure integrates highly curated data from databases (Bind, IntAct, etc.), literature (PubMed), and gene expression resources (such as GEO and ArrayExpress). We illustrate the effectiveness of our approach by asking real-world biomedical questions that link these resources to prioritize plausible biomarker candidates. Among the 13 prioritized candidate genes, we identified MIF as a potential emerging candidate due to its role as a pro-inflammatory cytokine. We additionally report on the effort and challenges faced during the generation of such an indication-specific knowledge base comprising curated and quality-controlled data. Although many alternative approaches have been proposed and practiced for modeling diseases, semantic web technology is a flexible and well established solution for harmonized aggregation. The benefit of this work, using high quality and context-specific data, becomes apparent in surfacing previously unattended biomarker candidates around a well-known mechanism, which can be further leveraged for experimental investigation.
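The general pattern of harmonizing curated assertions from heterogeneous sources as RDF triples and then querying across them can be sketched with rdflib; the namespace, predicates and the MIF example below are illustrative assumptions and do not reproduce the actual NeuroRDF vocabulary or data.

```python
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/neuro/")  # illustrative namespace, not the NeuroRDF schema

g = Graph()
# Curated assertions from different source types, harmonized as triples:
g.add((EX.MIF, EX.interactsWith, EX.CD74))                    # e.g. from an interaction database
g.add((EX.MIF, EX.differentiallyExpressedIn, EX.Alzheimers))  # e.g. from gene expression resources
g.add((EX.MIF, EX.coMentionedWith, EX.Alzheimers))            # e.g. from literature mining

# Cross-source question: which genes are linked to Alzheimer's disease by more than one evidence type?
query = """
SELECT ?gene (COUNT(DISTINCT ?p) AS ?evidence)
WHERE { ?gene ?p ex:Alzheimers }
GROUP BY ?gene
"""
for gene, evidence in g.query(query, initNs={"ex": EX}):
    print(gene, evidence)
```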
The Papillomavirus Episteme: a major update to the papillomavirus sequence database.
Van Doorslaer, Koenraad; Li, Zhiwen; Xirasagar, Sandhya; Maes, Piet; Kaminsky, David; Liou, David; Sun, Qiang; Kaur, Ramandeep; Huyen, Yentram; McBride, Alison A
2017-01-04
The Papillomavirus Episteme (PaVE) is a database of curated papillomavirus genomic sequences, accompanied by web-based sequence analysis tools. This update describes the addition of major new features. The papillomavirus genomes within PaVE have been further annotated and now include the major spliced mRNA transcripts. Viral genes and transcripts can be visualized on both linear and circular genome browsers. Evolutionary relationships among PaVE reference protein sequences can be analysed using multiple sequence alignments and phylogenetic trees. To assist in viral discovery, PaVE offers a typing tool: a simplified algorithm to determine whether a newly sequenced virus is novel. PaVE also now contains an image library of gross clinical and histopathological images of papillomavirus-infected lesions. Database URL: https://pave.niaid.nih.gov/. Published by Oxford University Press on behalf of Nucleic Acids Research 2016. This work is written by (a) US Government employee(s) and is in the public domain in the US.
A unified architecture for biomedical search engines based on semantic web technologies.
Jalali, Vahid; Matash Borujerdi, Mohammad Reza
2011-04-01
There has been huge growth in the volume of published biomedical research in recent years. Many medical search engines have been designed and developed to address the ever-growing information needs of biomedical experts and curators. Significant progress has been made in utilizing the knowledge embedded in medical ontologies and controlled vocabularies to assist these engines. However, the lack of a common architecture for the ontologies used and for the overall retrieval process hampers the evaluation of different search engines and interoperability between them under unified conditions. In this paper, a unified architecture for medical search engines is introduced. The proposed model contains standard schemas, declared in semantic web languages, for the ontologies and documents used by search engines. Unified models for the annotation and retrieval processes are other parts of the introduced architecture. A sample search engine is also designed and implemented based on the proposed architecture. The search engine is evaluated using two test collections, and results are reported in terms of precision vs. recall and mean average precision for the different approaches used by this search engine.
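The evaluation metrics mentioned above, precision vs. recall and mean average precision (MAP), follow standard information-retrieval definitions; a minimal sketch of how MAP is computed over ranked results is given below, using toy data rather than the paper's test collections.

```python
def average_precision(ranked_ids, relevant_ids):
    """Average of the precision values at each rank where a relevant document is retrieved."""
    relevant = set(relevant_ids)
    hits, precisions = 0, []
    for rank, doc_id in enumerate(ranked_ids, start=1):
        if doc_id in relevant:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(relevant) if relevant else 0.0

def mean_average_precision(runs):
    """runs: iterable of (ranked_ids, relevant_ids) pairs, one pair per query."""
    scores = [average_precision(ranked, relevant) for ranked, relevant in runs]
    return sum(scores) / len(scores) if scores else 0.0

# Toy example with two queries:
print(mean_average_precision([
    (["d1", "d2", "d3"], {"d1", "d3"}),  # AP = (1/1 + 2/3) / 2 = 0.83
    (["d4", "d5"], {"d5"}),              # AP = (1/2) / 1 = 0.50
]))                                      # MAP = 0.67
```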
NASA Astrophysics Data System (ADS)
Perrier, Frédéric; Nsengiyumva, Jean-Baptiste
2003-09-01
Constructivist, hands-on, inquiry-based science activities may have a curative potential that could be valuable in a psychological assistance programme for child victims of violence and war. To investigate this idea, pilot sessions were performed in an orphanage located in Ruhengeri, Rwanda, with seven young adults and two groups of 11 children aged from 9 to 16 years. Despite a number of imperfections in this attempt, significant observations were made. First, sound communication was established with all participants, even with the young adults, who at the beginning were not as enthusiastic as the children. Furthermore, some children, originally isolated, silent and sad, displayed a high degree of happiness during the activities and an overall increasingly positive change of attitude. In addition, they assimilated some principles of experimental science well. This suggests that a joint development of science literacy and joy may be an interesting approach, both in education and in therapy.
ElemeNT: a computational tool for detecting core promoter elements.
Sloutskin, Anna; Danino, Yehuda M; Orenstein, Yaron; Zehavi, Yonathan; Doniger, Tirza; Shamir, Ron; Juven-Gershon, Tamar
2015-01-01
Core promoter elements play a pivotal role in the transcriptional output, yet they are often detected manually within sequences of interest. Here, we present 2 contributions to the detection and curation of core promoter elements within given sequences. First, the Elements Navigation Tool (ElemeNT) is a user-friendly web-based, interactive tool for prediction and display of putative core promoter elements and their biologically-relevant combinations. Second, the CORE database summarizes ElemeNT-predicted core promoter elements near CAGE and RNA-seq-defined Drosophila melanogaster transcription start sites (TSSs). ElemeNT's predictions are based on biologically-functional core promoter elements, and can be used to infer core promoter compositions. ElemeNT does not assume prior knowledge of the actual TSS position, and can therefore assist in annotation of any given sequence. These resources, freely accessible at http://lifefaculty.biu.ac.il/gershon-tamar/index.php/resources, facilitate the identification of core promoter elements as active contributors to gene expression.
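To illustrate the kind of scan such a tool performs, the sketch below searches a sequence for a degenerate consensus motif (a textbook TATA-box consensus is used as the example). It is a toy illustration only: ElemeNT's actual models, elements and score thresholds are not reproduced here.

```python
import re

# IUPAC degenerate codes needed for this toy consensus:
IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T", "W": "[AT]", "R": "[AG]", "N": "[ACGT]"}

def find_element(sequence: str, consensus: str):
    """Return (position, match) pairs for every occurrence of a degenerate consensus motif."""
    pattern = "".join(IUPAC[base] for base in consensus.upper())
    return [(m.start(), m.group()) for m in re.finditer(pattern, sequence.upper())]

# Toy promoter sequence scanned for a textbook TATA-box consensus (TATAWAAR):
sequence = "GGCTATAAAAGGCGCCGCTCAGTATATAAG"
print(find_element(sequence, "TATAWAAR"))  # [(3, 'TATAAAAG'), (22, 'TATATAAG')]
```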
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doll, D.C.; Ringenberg, Q.S.; Yarbro, J.W.
Although cancer during pregnancy is infrequent, its management is difficult for patients, their families, and their physicians. When termination of the pregnancy is unacceptable, decisions regarding the use of irradiation and chemotherapy are complicated by the well-known high risks of abortion and fetal malformation. This risk is concentrated in the first trimester and varies with the choice of chemotherapeutic agents or combinations of agents. There is only minimal evidence of increased risk of malformation or abortion in the second or third trimester. Recent progress in cancer therapy has made cure a reasonable goal, and for some malignant neoplasms, cure is still possible even when initial therapy is modified or delayed. When cure is a reasonable goal, curative therapy should not be compromised by modification or delay. When treatment for cure or significant palliation is not possible, however, the goal should shift to protection of the fetus from damage by the injudicious use of teratogenic cancer therapy. This report will review the available data that may assist in these difficult decisions. 114 references.
Tracing health system challenges in post-conflict Côte d'Ivoire from 1893 to 2013.
Gaber, Sabrina; Patel, Preeti
2013-07-01
While scholarship on health in conflict-affected countries is growing, there has been relatively little analysis of how armed conflict affects health systems in specific African countries, especially former French colonies. There is even less literature on the role of history in shaping health systems and how historical factors such as inequity may influence health impacts of armed conflict. Based on Côte d'Ivoire, this article argues that historical multidisciplinary analysis can provide valuable insight into the macro-level political, economic and social determinants of the health system over time. It explores how armed conflict has affected health services and exacerbates historically inherited challenges to the health system including unequal distribution of health services, bias towards curative care in urban areas, inadequate human resources and weak health governance. In the post-conflict period, this understanding may assist governments and other stakeholders to develop more appropriate health policies that address both urgent and long-term health needs.
Diplomatic Assistance: Can Helminth-Modulated Macrophages Act as Treatment for Inflammatory Disease?
Steinfelder, Svenja; O’Regan, Noëlle Louise; Hartmann, Susanne
2016-01-01
Helminths have evolved numerous pathways to prevent their expulsion or elimination from the host to ensure long-term survival. During infection, they target numerous host cells, including macrophages, to induce an alternatively activated phenotype, which aids elimination of infection, tissue repair, and wound healing. Multiple animal-based studies have demonstrated a significant reduction or complete reversal of disease by helminth infection, treatment with helminth products, or helminth-modulated macrophages in models of allergy, autoimmunity, and sepsis. Experimental studies of macrophage and helminth therapies are being translated into clinical benefits for patients undergoing transplantation and those with multiple sclerosis. Thus, helminths or helminth-modulated macrophages present great possibilities as therapeutic applications for inflammatory diseases in humans. Macrophage-based helminth therapies and the underlying mechanisms of their therapeutic or curative effects represent an under-researched area with the potential to open new avenues of treatment. This review explores the application of helminth-modulated macrophages as a new therapy for inflammatory diseases. PMID:27101372
McQuilton, Peter; Gonzalez-Beltran, Alejandra; Rocca-Serra, Philippe; Thurston, Milo; Lister, Allyson; Maguire, Eamonn; Sansone, Susanna-Assunta
2016-01-01
BioSharing (http://www.biosharing.org) is a manually curated, searchable portal of three linked registries. These resources cover standards (terminologies, formats and models, and reporting guidelines), databases, and data policies in the life sciences, broadly encompassing the biological, environmental and biomedical sciences. Launched in 2011 and built by the same core team as the successful MIBBI portal, BioSharing harnesses community curation to collate and cross-reference resources across the life sciences from around the world. BioSharing makes these resources findable and accessible (the core of the FAIR principle). Every record is designed to be interlinked, providing a detailed description not only of the resource itself, but also of its relations with other life science infrastructures. Serving a variety of stakeholders, BioSharing cultivates a growing community, to which it offers diverse benefits. It is a resource for funding bodies and journal publishers to navigate the metadata landscape of the biological sciences; an educational resource for librarians and information advisors; a publicising platform for standard and database developers/curators; and a research tool for bench and computer scientists to plan their work. BioSharing is working with an increasing number of journals and other registries, for example linking standards and databases to training material and tools. Driven by an international Advisory Board, the BioSharing user-base has grown by over 40% (by unique IP address) in the last year, thanks to successful engagement with researchers, publishers, librarians, developers and other stakeholders via several routes, including a joint RDA/Force11 working group and a collaboration with the International Society for Biocuration. In this article, we describe BioSharing, with a particular focus on community-led curation. Database URL: https://www.biosharing.org. © The Author(s) 2016. Published by Oxford University Press.
de Visscher, S A H J; Dijkstra, P U; Tan, I B; Roodenburg, J L N; Witjes, M J H
2013-03-01
Photodynamic therapy (PDT) is used in curative and palliative treatment of head and neck squamous cell carcinoma (HNSCC). To evaluate available evidence on the use of mTHPC (Foscan®)-mediated PDT, we conducted a review of the literature. A systematic review was performed by searching seven bibliographic databases using database-specific MeSH terms and free-text words in the categories "head and neck neoplasms", "Photodynamic Therapy" and "Foscan". Papers identified were assessed on several criteria by two independent reviewers. The search identified 566 unique papers. Twelve studies were included in our review. Six studies reported PDT with curative intent and six studies reported PDT with palliative intent, of which three studies used interstitial PDT. The studies did not compare PDT to other treatments and none exceeded level 3 using the Oxford levels of evidence. Pooling of data (n=301) was possible for four of the six studies with curative intent. T1 tumors showed higher complete response rates compared to T2 (86% vs 63%). PDT with palliative intent was predominantly used in patients unsuitable for further conventional treatment. After PDT, substantial tumor response and an increase in quality of life were observed. Complications of PDT were mostly related to non-compliance with light restriction guidelines. The studies on mTHPC-mediated PDT for HNSCC are not sufficient for an adequate assessment of its efficacy with curative intent. To assess the efficacy of PDT with curative intent, high quality comparative, randomized studies are needed. Palliative treatment with PDT seems to increase the quality of life in otherwise untreatable patients. Copyright © 2012 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Evans, Cindy; Todd, Nancy
2014-01-01
The Astromaterials Acquisition & Curation Office at NASA's Johnson Space Center (JSC) is the designated facility for curating all of NASA's extraterrestrial samples. Today, the suite of collections includes the lunar samples from the Apollo missions, cosmic dust particles falling into the Earth's atmosphere, meteorites collected in Antarctica, comet and interstellar dust particles from the Stardust mission, asteroid particles from Japan's Hayabusa mission, solar wind atoms collected during the Genesis mission, and space-exposed hardware from several missions. To support planetary science research on these samples, JSC's Astromaterials Curation Office hosts NASA's Astromaterials Curation digital repository and data access portal [http://curator.jsc.nasa.gov/], providing descriptions of the missions and collections, and critical information about each individual sample. Our office is designing and implementing several informatics initiatives to better serve the planetary research community. First, we are re-hosting the basic database framework by consolidating legacy databases for individual collections and providing a uniform access point for information (descriptions, imagery, classification) on all of our samples. Second, we continue to upgrade and host digital compendia that summarize and highlight published findings on the samples (e.g., lunar samples, meteorites from Mars). We host high resolution imagery of samples as it becomes available, including newly scanned images of historical prints from the Apollo missions. Finally we are creating plans to collect and provide new data, including 3D imagery, point cloud data, micro CT data, and external links to other data sets on selected samples. Together, these individual efforts will provide unprecedented digital access to NASA's Astromaterials, enabling preservation of the samples through more specific and targeted requests, and supporting new planetary science research and collaborations on the samples.
Gautham, Meenakshi; Binnendijk, Erika; Koren, Ruth; Dror, David M
2011-11-01
Against the backdrop of insufficient public supply of primary care and reports of informal providers, the present study sought to collect descriptive evidence on first-contact curative health care seeking choices among rural communities in two States of India - Andhra Pradesh (AP) and Orissa. The cross-sectional study design combined a Household Survey (1,810 households in AP; 5,342 in Orissa), 48 Focus Group Discussions (19 in AP; 29 in Orissa), and 61 Key Informant Interviews with healthcare providers (22 in AP; 39 in Orissa). In AP, 69.5 per cent of respondents accessed non-degree allopathic practitioners (NDAPs) practicing in or near their village; in Orissa, 40.2 per cent chose first curative contact with NDAPs and 36.2 per cent with traditional healers. In AP, all NDAPs were private practitioners; in Orissa, some pharmacists and nurses employed in health facilities also practiced privately. Respondents explained their choice by proximity and providers' readiness to make house-calls when needed. Less than a quarter of respondents chose qualified doctors as their first point of call: mostly private practitioners in AP, and public practitioners in Orissa. Amongst those who chose a qualified practitioner, the most frequent reason was doctors' quality rather than proximity. The results of this study show that most rural persons seek the first level of curative healthcare close to home and pay for a composite, convenient service of consulting-cum-dispensing of medicines. NDAPs fill a huge demand for primary curative care which the public system does not satisfy, and are the de facto first level of access in most cases.
Linder, Gustav; Sandin, Fredrik; Johansson, Jan; Lindblad, Mats; Lundell, Lars; Hedberg, Jakob
2018-02-01
Low socioeconomic status and poor education elevate the risk of developing esophageal and junctional cancer. High education level also increases survival after curative surgery. The present study aimed to investigate associations, if any, between patient education level and treatment allocation after diagnosis of esophageal and junctional cancer and its subsequent impact on survival. A nation-wide cohort study was undertaken. Data from a Swedish national quality register for esophageal cancer (NREV) was linked to the National Cancer Register, National Patient Register, Prescribed Drug Register, Cause of Death Register and educational data from Statistics Sweden. The effect of education level (low: ≤9 years; intermediate: 10-12 years; high: >12 years) on the probability of allocation to curative treatment was analyzed with logistic regression. The Kaplan-Meier method and Cox proportional hazard models were used to assess the effect of education on survival. A total of 4112 patients were included. In a multivariate logistic regression model, high education level was associated with greater probability of allocation to curative treatment (adjusted OR: 1.48, 95% CI: 1.08-2.03, p = 0.014), as was adherence to a multidisciplinary treatment conference (adjusted OR: 3.13, 95% CI: 2.40-4.08, p < 0.001). High education level was associated with improved survival in the patients allocated to curative treatment (HR: 0.82, 95% CI: 0.69-0.99, p = 0.036). In this nation-wide cohort of esophageal and junctional cancer patients, including data regarding many confounders, high education level was associated with greater probability of being offered curative treatment and improved survival. Copyright © 2017 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Milano, Michael T.; Philip, Abraham; Okunieff, Paul
2009-03-01
Purpose: A subset of patients treated with curative-intent stereotactic radiotherapy (RT) for limited metastases (defined as five lesions or fewer) develop local failure and/or a small number of new lesions. We hypothesized that these patients would remain amenable to curative-intent treatment with additional RT courses. Methods and Materials: Of 121 prospective patients with five lesions or fewer treated with stereotactic RT, 32 underwent additional RT courses for local failure (n = 9) and/or new lesions (n = 29). Ten patients underwent three or more courses of RT. Results: The treated local failures developed a median of 20 months after RT completion. For the new oligometastases, the interval between the first and second RT course was 1-71 months (median, 8). Of the 32 patients undergoing multiple courses of curative-intent RT, the 2-year overall survival and progression-free survival rate was 65% and 54%, respectively. The corresponding 4-year rates were 33% and 28%. Compared with the 89 patients who underwent one RT course, these patients experienced a trend toward improved overall survival (median, 32 vs. 21 months, p = 0.13) and significantly greater progression-free survival (median, 28 vs. 9 months, p = 0.008). Conclusion: The results of our study have shown that patients fare well with respect to survival and disease control with aggressive RT for limited metastases, even after local failure and/or the development of new metastases. Although patients amenable to multiple courses of curative-intent RT are arguably selected for more indolent disease, our hypothesis-generating analysis supports the notion of aggressively treating limited metastases, which, in some patients, might be curable and/or represent a chronic disease state.
Shindoh, Junichi; Tzeng, Ching-Wei D.; Aloia, Thomas A.; Curley, Steven A.; Zimmitti, Giuseppe; Wei, Steven H.; Huang, Steven Y.; Gupta, Sanjay; Wallace, Michael J.; Vauthey, Jean-Nicolas
2017-01-01
Background Most patients requiring an extended right hepatectomy (ERH) have an inadequate standardized future liver remnant (sFLR) and need preoperative portal vein embolization (PVE). However, the clinical and oncologic impact of PVE in such patients remains unclear. Methods All consecutive patients from MD Anderson Cancer Center with colorectal liver metastases (CLM) requiring ERH at presentation from 1995 through 2012 were included. The surgical and oncologic outcomes were compared between patients with adequate and inadequate sFLRs at presentation. Results Of the 265 patients requiring ERH, 126 (47.5%) had an adequate sFLR at presentation, and 123 of them underwent curative resection. Of the 139 patients (52.5%) who had an inadequate sFLR and underwent PVE, 87 (62.6%) underwent curative resection. Thus, PVE increased the curative resection rate from 123/265 (46.4%) at baseline to 210/265 (79.2%). Among patients who underwent ERH, rates of major complications and 90-day mortality were similar in the non-PVE and PVE groups (22.0% and 4.1% vs. 31% and 7%, respectively); overall survival (OS) and disease-free survival (DFS) were also similar in these 2 groups. Among patients with an inadequate sFLR at presentation, patients who underwent ERH had significantly better median OS (50.2 months) than patients who underwent noncurative surgery (21.3 months) or did not undergo surgery (24.7 months) (p=0.002). Conclusions PVE enables curative resection in two-thirds of patients with CLM who have an inadequate sFLR to tolerate ERH at presentation. Patients who undergo curative resection after PVE have OS and DFS equivalent to that of patients who never needed PVE. PMID:24227364
Qiao, Xiaojuan; Zhai, Xiaoran; Wang, Jinghui; Zhao, Xiaoting; Yang, Xinjie; Lv, Jialin; Ma, Li; Zhang, Lina; Wang, Yue; Zhang, Shucai; Yue, Wentao
2016-01-01
Matrix metalloproteinase 9 (MMP-9) plays an important role in tumor invasion and metastasis, including in lung cancer. However, whether variations in serum MMP-9 levels can serve as a biomarker for monitoring chemotherapy curative effect remains unclear. This study was designed to investigate the association between variations in serum MMP-9 levels and chemotherapy curative effect in patients with lung cancer. A total of 82 patients with advanced lung cancer were included. All newly diagnosed patients were treated with platinum-based doublet chemotherapy. Serial measurements of serum MMP-9 levels were performed by enzyme-linked immunosorbent assay. Four time points were chosen to examine the association: before chemotherapy, and 3 weeks after the beginning of the first, second, and fourth cycles of chemotherapy. Compared with the serum level of MMP-9 before progressive disease, patients with progressive disease had elevated serum levels of MMP-9. Compared with the previous time point of specimen collection, the serum levels of MMP-9 in patients with a complete response/partial response/stable disease decreased or remained stable. The differences in the variation of serum MMP-9 levels among patients with different chemotherapy curative effects were all statistically significant after one, two, and four cycles (after one cycle: P<0.001; after two cycles: P<0.001; after four cycles: P=0.01). However, patients with small-cell lung cancer did not exhibit similar results. The variation in serum MMP-9 levels during chemotherapy in patients with non-small-cell lung cancer was closely related to chemotherapy curative effect and could be useful for monitoring chemotherapy curative effect in a subset of patients.
The baladi curative system of Cairo, Egypt.
Early, E A
1988-03-01
The article explores the symbolic structure of the baladi (traditional) cultural system as revealed in everyday narratives, with a focus on baladi curative action. The everyday illness narrative provides a cultural window to the principles of fluidity and restorative balance of baladi curative practices. The body is seen as a dynamic organism through which both foreign objects and physiological entities can move. The body should be in balance, as with any humorally-influenced system, and so baladi cures aim to restore normal balance and functioning of the body. The article examines in detail a narrative on treatment of a sick child, and another on treatment of fertility problems. It traces such cultural oppositions as insider:outsider, authentic:inauthentic, and home remedy:cosmopolitan medicine. In the social as well as the medical arena these themes organize social/medical judgements about correct action and explanations of events.
An emerging role: the nurse content curator.
Brooks, Beth A
2015-01-01
A new phenomenon, the inverted or "flipped" classroom, assumes that students are no longer acquiring knowledge exclusively through textbooks or lectures. Instead, they are seeking out the vast amount of free information available to them online (the very essence of open source) to supplement learning gleaned in textbooks and lectures. With so much open-source content available to nursing faculty, it benefits the faculty to use readily available, technologically advanced content. The nurse content curator supports nursing faculty in its use of such content. Even more importantly, the highly paid, time-strapped faculty is not spending an inordinate amount of effort surfing for and evaluating content. The nurse content curator does that work, while the faculty uses its time more effectively to help students vet the truth, make meaning of the content, and learn to problem-solve. Brooks. © 2014 Wiley Periodicals, Inc.
The Curative and Prophylactic Effects of Xylopic Acid on Plasmodium berghei Infection in Mice
Boampong, J. N.; Ameyaw, E. O.; Aboagye, B.; Asare, K.; Kyei, S.; Donfack, J. H.; Woode, E.
2013-01-01
Efforts have been intensified to search for more effective antimalarial agents because of the observed failure of some artemisinin-based combination therapy (ACT) treatments of malaria in Ghana. Xylopic acid, a pure compound isolated from the fruits of the Xylopia aethiopica, was investigated to establish its attributable prophylactic, curative antimalarial, and antipyretic properties. The antimalarial properties were determined by employing xylopic acid (10–100 mg/kg) in ICR mice infected with Plasmodium berghei. Xylopic acid exerted significant (P < 0.05) effects on P. berghei infection similar to artemether/lumefantrine, the standard drug. Furthermore, it significantly (P < 0.05) reduced the lipopolysaccharide- (LPS-) induced fever in Sprague-Dawley rats similar to prednisolone. Xylopic acid therefore possesses prophylactic and curative antimalarial as well as antipyretic properties which makes it an ideal antimalarial agent. PMID:23970953
The IntAct molecular interaction database in 2012
Kerrien, Samuel; Aranda, Bruno; Breuza, Lionel; Bridge, Alan; Broackes-Carter, Fiona; Chen, Carol; Duesbury, Margaret; Dumousseau, Marine; Feuermann, Marc; Hinz, Ursula; Jandrasits, Christine; Jimenez, Rafael C.; Khadake, Jyoti; Mahadevan, Usha; Masson, Patrick; Pedruzzi, Ivo; Pfeiffenberger, Eric; Porras, Pablo; Raghunath, Arathi; Roechert, Bernd; Orchard, Sandra; Hermjakob, Henning
2012-01-01
IntAct is an open-source, open data molecular interaction database populated by data either curated from the literature or from direct data depositions. Two levels of curation are now available within the database, with both IMEx-level annotation and less detailed MIMIx-compatible entries currently supported. As from September 2011, IntAct contains approximately 275 000 curated binary interaction evidences from over 5000 publications. The IntAct website has been improved to enhance the search process and in particular the graphical display of the results. New data download formats are also available, which will facilitate the inclusion of IntAct's data in the Semantic Web. IntAct is an active contributor to the IMEx consortium (http://www.imexconsortium.org). IntAct source code and data are freely available at http://www.ebi.ac.uk/intact. PMID:22121220
Apollo Lunar Sample Photographs: Digitizing the Moon Rock Collection
NASA Technical Reports Server (NTRS)
Lofgren, Gary E.; Todd, Nancy S.; Runco, S. K.; Stefanov, W. L.
2011-01-01
The Acquisition and Curation Office at JSC has undertaken a 4-year data restoration effort for the lunar science community, funded by the LASER program (Lunar Advanced Science and Exploration Research), to digitize photographs of the Apollo lunar rock samples and create high resolution digital images. These sample photographs are not easily accessible outside of JSC and currently exist only on degradable film in the Curation Data Storage Facility.
Goldweber, Scott; Theodore, Jamal; Torcivia-Rodriguez, John; Simonyan, Vahan; Mazumder, Raja
2017-01-01
Services such as Facebook, Amazon, and eBay were once solely accessed from stationary computers. These web services are now being used increasingly on mobile devices. We acknowledge this new reality by providing users a way to access publications and a curated cancer mutation database on their mobile device with daily automated updates. http://hive.biochemistry.gwu.edu/tools/HivePubcast.
Khajavi Rad, Abolfazl; Hadjzadeh, Mousa-Al-Reza; Rajaei, Ziba; Mohammadian, Nema; Valiollahi, Saleh; Sonei, Mehdi
2011-01-01
To assess the beneficial effect of different fractions of Cynodon dactylon (C. dactylon) on ethylene glycol-induced kidney calculi in rats. Male Wistar rats were randomly divided into control, ethylene glycol, curative, and preventive groups. The control group received tap drinking water for 35 days. The ethylene glycol, curative, and preventive groups received 1% ethylene glycol for induction of calcium oxalate (CaOx) calculus formation. Preventive and curative subjects also received different fractions of C. dactylon extract in drinking water at 12.8 mg/kg, from day 0 and day 14, respectively. After 35 days, the kidneys were removed and examined for histopathological findings, and the CaOx deposits were counted in 50 microscopic fields. In the curative protocol, treatment of rats with the C. dactylon N-butanol fraction and the N-butanol phase remnant significantly reduced the number of kidney CaOx deposits compared with the ethylene glycol group. In the preventive protocol, treatment of rats with the C. dactylon ethyl acetate fraction significantly decreased the number of CaOx deposits compared with the ethylene glycol group. Fractions of C. dactylon showed a beneficial effect in preventing and eliminating CaOx deposition in the rat kidney. These results provide a scientific rationale for the preventive and treatment roles of C. dactylon in human kidney stone disease.
Jayakrishnan, Thejus T; Nadeem, Hasan; Groeschl, Ryan T; George, Ben; Thomas, James P; Ritch, Paul S; Christians, Kathleen K; Tsai, Susan; Evans, Douglas B; Pappas, Sam G; Gamblin, T Clark; Turaga, Kiran K
2015-02-01
Laparoscopy is recommended to detect radiographically occult metastases in patients with pancreatic cancer before curative resection. This study was conducted to test the hypothesis that diagnostic laparoscopy (DL) is cost-effective in patients undergoing curative resection with or without neoadjuvant therapy (NAT). Decision tree modelling compared routine DL with exploratory laparotomy (ExLap) at the time of curative resection in resectable cancer treated with surgery first (SF), and in borderline resectable cancer treated with NAT. Costs (US$) from the payer's perspective, quality-adjusted life months (QALMs) and incremental cost-effectiveness ratios (ICERs) were calculated. Base case estimates and multi-way sensitivity analyses were performed. Willingness to pay (WtP) was US$4166/QALM (or US$50,000/quality-adjusted life year). Base case costs were US$34,921 for ExLap and US$33,442 for DL in SF patients, and US$39,633 for ExLap and US$39,713 for DL in NAT patients. Routine DL is the dominant (preferred) strategy in both treatment types: it allows for cost reductions of US$10,695/QALM in SF and US$4158/QALM in NAT patients. The present analysis supports the cost-effectiveness of routine DL before curative resection in pancreatic cancer patients treated with either SF or NAT. © 2014 International Hepato-Pancreato-Biliary Association.
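The decision rule being applied here can be made concrete with a short worked sketch: compute the incremental cost and effect of routine DL versus ExLap and compare the result with the willingness-to-pay threshold. The strategy costs and the WtP below are taken from the abstract; the QALM values are hypothetical placeholders, since the abstract reports only the resulting per-QALM savings.

```python
WTP = 4166  # willingness-to-pay threshold, US$ per quality-adjusted life month (from the abstract)

# Surgery-first (SF) base case: strategy costs from the abstract; QALM values are hypothetical.
cost_exlap, qalm_exlap = 34_921, 10.0
cost_dl, qalm_dl = 33_442, 10.1

d_cost = cost_dl - cost_exlap     # incremental cost of routine DL
d_effect = qalm_dl - qalm_exlap   # incremental effectiveness of routine DL

if d_cost <= 0 and d_effect >= 0:
    # Cheaper and no less effective: DL "dominates" ExLap and is preferred at any willingness to pay.
    print("routine DL is dominant")
else:
    icer = d_cost / d_effect      # incremental cost-effectiveness ratio, US$ per QALM
    print("prefer DL" if icer <= WTP else "prefer ExLap", f"(ICER = {icer:,.0f} US$/QALM)")
```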
Measuring the Value of Research Data: A Citation Analysis of Oceanographic Data Sets
Belter, Christopher W.
2014-01-01
Evaluation of scientific research is becoming increasingly reliant on publication-based bibliometric indicators, which may result in the devaluation of other scientific activities, such as data curation, that do not necessarily result in the production of scientific publications. This issue may undermine the movement to openly share and cite data sets in scientific publications because researchers are unlikely to devote the effort necessary to curate their research data if they are unlikely to receive credit for doing so. This analysis attempts to demonstrate the bibliometric impact of properly curated and openly accessible data sets by attempting to generate citation counts for three data sets archived at the National Oceanographic Data Center. My findings suggest that all three data sets are highly cited, with estimated citation counts in most cases higher than 99% of all the journal articles published in Oceanography during the same years. I also find that methods of citing and referring to these data sets in scientific publications are highly inconsistent, despite the fact that a formal citation format is suggested for each data set. These findings have important implications for developing a data citation format, encouraging researchers to properly curate their research data, and evaluating the bibliometric impact of individuals and institutions. PMID:24671177
Ruusmann, Villu; Maran, Uko
2013-07-01
The scientific literature is an important source of experimental and chemical structure data. Very often these data have been harvested into smaller or bigger data collections, leaving the data quality and curation issues on the shoulders of users. The current research presents a systematic and reproducible workflow for collecting series of data points from the scientific literature and assembling a database that is suitable for the purposes of high quality modelling and decision support. The quality assurance aspect of the workflow is concerned with the curation of both chemical structures and associated toxicity values at (1) the single data point level and (2) the collection of data points level. The assembly of the database employs a novel "timeline" approach. The workflow is implemented as a software solution and its applicability is demonstrated on the example of the Tetrahymena pyriformis acute aquatic toxicity endpoint. A literature collection of 86 primary publications for T. pyriformis was found to contain 2,072 chemical compounds and 2,498 unique toxicity values, which divide into 2,440 numerical and 58 textual values. Every chemical compound was assigned a preferred toxicity value. Examples of the most common chemical and toxicological data curation scenarios are discussed.
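The step of assigning every compound a preferred toxicity value from multiple curated data points can be sketched as follows; the grouping key (a structure identifier) and the selection rule (median of the numeric values) are assumptions chosen for illustration, not the authors' actual workflow.

```python
from collections import defaultdict
from statistics import median

# Each curated data point: (compound identifier, numeric toxicity value or None if textual, source).
data_points = [
    ("InChIKey-AAA", 1.25, "Paper 12"),
    ("InChIKey-AAA", 1.31, "Paper 47"),
    ("InChIKey-AAA", None, "Paper 03"),  # textual value, excluded from the numeric aggregate
    ("InChIKey-BBB", 0.42, "Paper 12"),
]

def preferred_values(points):
    """Group data points per compound and pick a preferred numeric toxicity value (median here)."""
    by_compound = defaultdict(list)
    for compound, value, _source in points:
        if value is not None:
            by_compound[compound].append(value)
    return {compound: median(values) for compound, values in by_compound.items()}

print(preferred_values(data_points))  # {'InChIKey-AAA': 1.28, 'InChIKey-BBB': 0.42}
```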
VIOLIN: vaccine investigation and online information network.
Xiang, Zuoshuang; Todd, Thomas; Ku, Kim P; Kovacic, Bethany L; Larson, Charles B; Chen, Fang; Hodges, Andrew P; Tian, Yuying; Olenzek, Elizabeth A; Zhao, Boyang; Colby, Lesley A; Rush, Howard G; Gilsdorf, Janet R; Jourdian, George W; He, Yongqun
2008-01-01
Vaccines are among the most efficacious and cost-effective tools for reducing morbidity and mortality caused by infectious diseases. The vaccine investigation and online information network (VIOLIN) is a web-based central resource, allowing easy curation, comparison and analysis of vaccine-related research data across various human pathogens (e.g. Haemophilus influenzae, human immunodeficiency virus (HIV) and Plasmodium falciparum) of medical importance and across humans, other natural hosts and laboratory animals. Vaccine-related peer-reviewed literature data have been downloaded into the database from PubMed and are searchable through various literature search programs. Vaccine data are also annotated, edited and submitted to the database through a web-based interactive system that integrates efficient computational literature mining and accurate manual curation. Curated information includes general microbial pathogenesis and host protective immunity, vaccine preparation and characteristics, stimulated host responses after vaccination and protection efficacy after challenge. Vaccine-related pathogen and host genes are also annotated and available for searching through customized BLAST programs. All VIOLIN data are available for download in an eXtensible Markup Language (XML)-based data exchange format. VIOLIN is expected to become a centralized source of vaccine information and to provide investigators in basic and clinical sciences with curated data and bioinformatics tools for vaccine research and development. VIOLIN is publicly available at http://www.violinet.org.
Curation accuracy of model organism databases
Keseler, Ingrid M.; Skrzypek, Marek; Weerasinghe, Deepika; Chen, Albert Y.; Fulcher, Carol; Li, Gene-Wei; Lemmer, Kimberly C.; Mladinich, Katherine M.; Chow, Edmond D.; Sherlock, Gavin; Karp, Peter D.
2014-01-01
Manual extraction of information from the biomedical literature—or biocuration—is the central methodology used to construct many biological databases. For example, the UniProt protein database, the EcoCyc Escherichia coli database and the Candida Genome Database (CGD) are all based on biocuration. Biological databases are used extensively by life science researchers, as online encyclopedias, as aids in the interpretation of new experimental data and as gold standards for the development of new bioinformatics algorithms. Although manual curation has been assumed to be highly accurate, we are aware of only one previous study of biocuration accuracy. We assessed the accuracy of EcoCyc and CGD by manually selecting curated assertions within randomly chosen EcoCyc and CGD gene pages and by then validating that the data found in the referenced publications supported those assertions. A database assertion is considered to be in error if that assertion could not be found in the publication cited for that assertion. We identified 10 errors in the 633 facts that we validated across the two databases, for an overall error rate of 1.58%, and individual error rates of 1.82% for CGD and 1.40% for EcoCyc. These data suggest that manual curation of the experimental literature by Ph.D.-level scientists is highly accurate. Database URL: http://ecocyc.org/, http://www.candidagenome.org// PMID:24923819
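The error rates quoted above are simple proportions; the short sketch below redoes that arithmetic in Python and adds an approximate 95% confidence interval (normal approximation). The interval is an illustrative addition, not a statistic reported by the study.

```python
# Worked arithmetic from the abstract's figures (10 errors in 633 validated facts),
# plus an approximate 95% confidence interval that is our own addition.
from math import sqrt

def error_rate(errors, total, z=1.96):
    p = errors / total
    half_width = z * sqrt(p * (1 - p) / total)  # normal (Wald) approximation
    return p, (max(0.0, p - half_width), p + half_width)

p, (lo, hi) = error_rate(10, 633)
print(f"overall error rate: {p:.2%} (approx. 95% CI {lo:.2%} to {hi:.2%})")  # ~1.58%
```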
Risk factors of early recurrence after curative hepatectomy in hepatocellular carcinoma.
Hong, Young Mi; Cho, Mong; Yoon, Ki Tae; Chu, Chong Woo; Yang, Kwang Ho; Park, Yong Mok; Rhu, Je Ho
2017-10-01
Early recurrence is common after curative hepatectomy for hepatocellular carcinoma and is associated with poor prognosis. This study aimed to identify risk factors of early recurrence after curative hepatectomy in hepatocellular carcinoma. Overall, 63 patients who underwent curative hepatectomy for hepatocellular carcinoma were enrolled. Patients were divided into the early recurrence group, who developed recurrence within 12 months after hepatectomy (n = 10), and the non-early recurrence group (n = 53). Clinicopathological factors of early recurrence were retrospectively analyzed. Among the 63 patients, 10 (15.9%) patients experienced early recurrence. Univariate analysis showed tumor necrosis (p = 0.012), level of PIVKA-II (prothrombin induced by vitamin K absence or antagonist-II; p = 0.002), and microvascular invasion (p = 0.029) to be associated with early recurrence. By multivariate analysis, there were significant differences in high PIVKA-II (p < 0.001) and tumor necrosis (p = 0.012) in patients with early recurrence. The optimal cutoff values of PIVKA-II and tumor necrosis were 46 mAU/mL and 3% of total tumor volume, respectively. Patients with a high preoperative PIVKA-II level and extent of tumor necrosis, which are independent risk factors for early recurrence, should be actively treated and monitored closely after hepatectomy.
Measuring the value of research data: a citation analysis of oceanographic data sets.
Belter, Christopher W
2014-01-01
Evaluation of scientific research is becoming increasingly reliant on publication-based bibliometric indicators, which may result in the devaluation of other scientific activities--such as data curation--that do not necessarily result in the production of scientific publications. This issue may undermine the movement to openly share and cite data sets in scientific publications because researchers are unlikely to devote the effort necessary to curate their research data if they are unlikely to receive credit for doing so. This analysis attempts to demonstrate the bibliometric impact of properly curated and openly accessible data sets by attempting to generate citation counts for three data sets archived at the National Oceanographic Data Center. My findings suggest that all three data sets are highly cited, with estimated citation counts in most cases higher than 99% of all the journal articles published in Oceanography during the same years. I also find that methods of citing and referring to these data sets in scientific publications are highly inconsistent, despite the fact that a formal citation format is suggested for each data set. These findings have important implications for developing a data citation format, encouraging researchers to properly curate their research data, and evaluating the bibliometric impact of individuals and institutions.
Klevebro, Fredrik; Ekman, Simon; Nilsson, Magnus
2017-09-01
Multimodality treatment has now been widely introduced in the curatively intended treatment of esophageal and gastroesophageal junction cancer. We aim to give an overview of the scientific evidence for the available treatment strategies and to describe the trends that are currently developing. We conducted a review of the scientific evidence for the different curatively intended treatment strategies that are available today. Relevant articles of randomized controlled trials, cohort studies, and meta-analyses were included. After a systematic search of relevant papers, we included 64 articles in the review. The results show that adenocarcinomas and squamous cell carcinomas of the esophagus and gastroesophageal junction are two separate entities and should be analysed and studied as two different diseases. Neoadjuvant treatment followed by surgical resection is the gold standard of the curatively intended treatment today. There is no scientific evidence to support the use of chemoradiotherapy over chemotherapy in the neoadjuvant setting for esophageal or junctional adenocarcinoma. There is reasonable evidence to support definitive chemoradiotherapy as a treatment option for squamous cell carcinoma of the esophagus. The evidence base for curatively intended treatments of esophageal and gastroesophageal junction cancer is not very strong. Several ongoing trials have the potential to change the gold standard treatments of today. Copyright © 2017 Elsevier Ltd. All rights reserved.
The BioGRID interaction database: 2013 update.
Chatr-Aryamontri, Andrew; Breitkreutz, Bobby-Joe; Heinicke, Sven; Boucher, Lorrie; Winter, Andrew; Stark, Chris; Nixon, Julie; Ramage, Lindsay; Kolas, Nadine; O'Donnell, Lara; Reguly, Teresa; Breitkreutz, Ashton; Sellam, Adnane; Chen, Daici; Chang, Christie; Rust, Jennifer; Livstone, Michael; Oughtred, Rose; Dolinski, Kara; Tyers, Mike
2013-01-01
The Biological General Repository for Interaction Datasets (BioGRID: http://thebiogrid.org) is an open access archive of genetic and protein interactions that are curated from the primary biomedical literature for all major model organism species. As of September 2012, BioGRID houses more than 500 000 manually annotated interactions from more than 30 model organisms. BioGRID maintains complete curation coverage of the literature for the budding yeast Saccharomyces cerevisiae, the fission yeast Schizosaccharomyces pombe and the model plant Arabidopsis thaliana. A number of themed curation projects in areas of biomedical importance are also supported. BioGRID has established collaborations and/or shares data records for the annotation of interactions and phenotypes with most major model organism databases, including Saccharomyces Genome Database, PomBase, WormBase, FlyBase and The Arabidopsis Information Resource. BioGRID also actively engages with the text-mining community to benchmark and deploy automated tools to expedite curation workflows. BioGRID data are freely accessible through both a user-defined interactive interface and in batch downloads in a wide variety of formats, including PSI-MI2.5 and tab-delimited files. BioGRID records can also be interrogated and analyzed with a series of new bioinformatics tools, which include a post-translational modification viewer, a graphical viewer, a REST service and a Cytoscape plugin.
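The REST service mentioned above can be queried programmatically. The hedged Python sketch below shows what a request against the publicly documented web service at webservice.thebiogrid.org might look like; the endpoint, parameter names and JSON field names reflect the public documentation as we recall it and should be verified against the current documentation, and the access key is a placeholder.

```python
# Hedged sketch of a BioGRID REST query. Parameter and field names are assumed
# from the public web service documentation; confirm before relying on them.
import requests

BASE = "https://webservice.thebiogrid.org/interactions/"
params = {
    "accessKey": "YOUR_ACCESS_KEY",   # placeholder; obtain a key from BioGRID
    "geneList": "CDC27|APC",          # pipe-separated gene identifiers
    "searchNames": "true",
    "taxId": "559292",                # S. cerevisiae
    "format": "json",
    "max": "25",
}
resp = requests.get(BASE, params=params, timeout=30)
resp.raise_for_status()
for interaction in resp.json().values():
    # field names assumed from the documented JSON/tab2 schema
    print(interaction["OFFICIAL_SYMBOL_A"], "-", interaction["OFFICIAL_SYMBOL_B"])
```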
NASA Astrophysics Data System (ADS)
Stevens, T.
2016-12-01
NASA's Global Change Master Directory (GCMD) curates a hierarchical set of controlled vocabularies (keywords) covering Earth sciences and associated information (data centers, projects, platforms, and instruments). The purpose of the keywords is to describe Earth science data and services in a consistent and comprehensive manner, allowing for precise metadata search and subsequent retrieval of data and services. The keywords are accessible in a standardized SKOS/RDF/OWL representation and are used as an authoritative taxonomy, as a source for developing ontologies, and to search and access Earth Science data within online metadata catalogs. The keyword curation approach involves: (1) receiving community suggestions; (2) triaging community suggestions; (3) evaluating keywords against a set of criteria coordinated by the NASA Earth Science Data and Information System (ESDIS) Standards Office; (4) implementing the keywords; and (5) publication/notification of keyword changes. This approach emphasizes community input, which helps ensure a high quality, normalized, and relevant keyword structure that will evolve with users' changing needs. The Keyword Community Forum, which promotes a responsive, open, and transparent process, is an area where users can discuss keyword topics and make suggestions for new keywords. Others could potentially use this formalized approach as a model for keyword curation.
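Because the keywords are distributed in a SKOS/RDF representation, a small script can walk the hierarchy using standard SKOS properties only. The sketch below, in Python with rdflib, assumes a local copy of the keyword export under a hypothetical file name; nothing beyond skos:prefLabel, skos:broader and skos:narrower is assumed.

```python
# Minimal sketch: traverse a SKOS keyword file such as the GCMD export.
# The file name is a placeholder for a locally downloaded copy.
from rdflib import Graph
from rdflib.namespace import SKOS

g = Graph()
g.parse("gcmd_keywords.rdf")  # hypothetical local copy of the SKOS/RDF export

def print_tree(concept, depth=0, max_depth=2):
    label = g.value(concept, SKOS.prefLabel)
    print("  " * depth + str(label))
    if depth < max_depth:
        for child in g.objects(concept, SKOS.narrower):
            print_tree(child, depth + 1, max_depth)

# Treat concepts without a broader concept in the file as top-level entries.
for concept in g.subjects(SKOS.prefLabel, None):
    if g.value(concept, SKOS.broader) is None:
        print_tree(concept)
```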
Plant Reactome: a resource for plant pathways and comparative analysis.
Naithani, Sushma; Preece, Justin; D'Eustachio, Peter; Gupta, Parul; Amarasinghe, Vindhya; Dharmawardhana, Palitha D; Wu, Guanming; Fabregat, Antonio; Elser, Justin L; Weiser, Joel; Keays, Maria; Fuentes, Alfonso Munoz-Pomer; Petryszak, Robert; Stein, Lincoln D; Ware, Doreen; Jaiswal, Pankaj
2017-01-04
Plant Reactome (http://plantreactome.gramene.org/) is a free, open-source, curated plant pathway database portal, provided as part of the Gramene project. The database provides intuitive bioinformatics tools for the visualization, analysis and interpretation of pathway knowledge to support genome annotation, genome analysis, modeling, systems biology, basic research and education. Plant Reactome employs the structural framework of a plant cell to show metabolic, transport, genetic, developmental and signaling pathways. We manually curate molecular details of pathways in these domains for reference species Oryza sativa (rice) supported by published literature and annotation of well-characterized genes. Two hundred twenty-two rice pathways, 1025 reactions associated with 1173 proteins, 907 small molecules and 256 literature references have been curated to date. These reference annotations were used to project pathways for 62 model, crop and evolutionarily significant plant species based on gene homology. Database users can search and browse various components of the database, visualize curated baseline expression of pathway-associated genes provided by the Expression Atlas and upload and analyze their Omics datasets. The database also offers data access via Application Programming Interfaces (APIs) and in various standardized pathway formats, such as SBML and BioPAX. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
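Since Plant Reactome exports pathways in standard formats such as SBML, an exported pathway can be inspected with the python-libsbml bindings. This is a minimal sketch, not a description of the Plant Reactome API itself; the file name is a placeholder for an export obtained from the site or its API.

```python
# Hedged sketch: inspect a pathway exported in SBML format using python-libsbml.
import libsbml

doc = libsbml.readSBML("rice_pathway.sbml")  # hypothetical exported pathway file
if doc.getNumErrors() > 0:
    doc.printErrors()
model = doc.getModel()
print("species:", model.getNumSpecies(), "reactions:", model.getNumReactions())
for i in range(model.getNumReactions()):
    rxn = model.getReaction(i)
    reactants = [rxn.getReactant(j).getSpecies() for j in range(rxn.getNumReactants())]
    products = [rxn.getProduct(j).getSpecies() for j in range(rxn.getNumProducts())]
    print(rxn.getId(), ":", " + ".join(reactants), "->", " + ".join(products))
```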
Ning, Shangwei; Zhang, Jizhou; Wang, Peng; Zhi, Hui; Wang, Jianjian; Liu, Yue; Gao, Yue; Guo, Maoni; Yue, Ming; Wang, Lihua; Li, Xia
2016-01-01
Lnc2Cancer (http://www.bio-bigdata.net/lnc2cancer) is a manually curated database of cancer-associated long non-coding RNAs (lncRNAs) with experimental support that aims to provide a high-quality and integrated resource for exploring lncRNA deregulation in various human cancers. LncRNAs represent a large category of functional RNA molecules that play a significant role in human cancers. A curated collection and summary of deregulated lncRNAs in cancer is essential to thoroughly understand the mechanisms and functions of lncRNAs. Here, we developed the Lnc2Cancer database, which contains 1057 manually curated associations between 531 lncRNAs and 86 human cancers. Each association includes lncRNA and cancer name, the lncRNA expression pattern, experimental techniques, a brief functional description, the original reference and additional annotation information. Lnc2Cancer provides a user-friendly interface to conveniently browse, retrieve and download data. Lnc2Cancer also offers a submission page for researchers to submit newly validated lncRNA-cancer associations. With the rapidly increasing interest in lncRNAs, Lnc2Cancer will significantly improve our understanding of lncRNA deregulation in cancer and has the potential to be a timely and valuable resource. PMID:26481356
Dimitrov, Dobromir T; Kiem, Hans-Peter; Jerome, Keith R; Johnston, Christine; Schiffer, Joshua T
2016-02-24
HIV curative strategies currently under development aim to eradicate latent provirus, or prevent viral replication, progression to AIDS, and transmission. The impact of implementing curative programs on HIV epidemics has not been considered. We developed a mathematical model of heterosexual HIV transmission to evaluate the independent and synergistic impact of ART, HIV prevention interventions and cure on HIV prevalence and incidence. The basic reproduction number was calculated to study the potential for the epidemic to be eliminated. We explored scenarios with and without the assumption that patients enrolled into HIV cure programs need to be on antiretroviral treatment (ART). In our simulations, curative regimes had limited impact on HIV incidence if only ART patients were eligible for cure. Cure implementation had a significant impact on HIV incidence if ART-untreated patients were enrolled directly into cure programs. Concurrent HIV prevention programs moderately decreased the percent of ART treated or cured patients needed to achieve elimination. We project that widespread implementation of HIV cure would decrease HIV prevalence under all scenarios but would only lower rate of new infections if ART-untreated patients were targeted. Current efforts to identify untreated HIV patients will gain even further relevance upon availability of an HIV cure.
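The abstract does not give the model equations, so the following is only an illustrative compartmental sketch, not the authors' model: it shows how enrolling untreated infectious individuals directly into a cure programme lowers a toy basic reproduction number and shrinks the simulated epidemic. All parameter values are hypothetical and chosen only to make the example run.

```python
# Illustrative toy model (not the published model): S = susceptible, I = untreated
# infectious, A = on ART (assumed non-infectious here), C = cured.
import numpy as np
from scipy.integrate import solve_ivp

beta, mu = 0.30, 0.02        # transmission rate, background exit rate (per year)
tau, kappa = 0.20, 0.10      # ART initiation rate, direct cure rate for untreated
kappa_art = 0.10             # cure rate for people already on ART

def rhs(t, y):
    S, I, A, C = y
    N = S + I + A + C
    new_inf = beta * S * I / N           # only untreated I assumed infectious
    return [mu * N - new_inf - mu * S,
            new_inf - (mu + tau + kappa) * I,
            tau * I - (mu + kappa_art) * A,
            kappa * I + kappa_art * A - mu * C]

R0 = beta / (mu + tau + kappa)           # next-generation value for this toy model
print(f"toy-model R0 = {R0:.2f}")

sol = solve_ivp(rhs, (0, 50), [9900, 100, 0, 0], t_eval=np.linspace(0, 50, 11))
print("untreated infectious over time:", np.round(sol.y[1], 1))
```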
Grigorie, Răzvan; Alexandrescu, Sorin; Smira, Gabriela; Ionescu, Mihnea; Hrehoreţ, Doina; Braşoveanu, Vladislav; Dima, Simona; Ciurea, Silviu; Boeţi, Patricia; Dudus, Ionut; Picu, Nausica; Zamfir, Radu; David, Leonard; Botea, Florin; Gheorghe, Liana; Tomescu, Dana; Lupescu, Ioana; Boroş, Mirela; Grasu, Mugur; Dumitru, Radu; Toma, Mihai; Croitoru, Adina; Herlea, Vlad; Pechianu, Cătălin; Năstase, Anca; Popescu, Irinel
2017-01-01
Background: The objective of this study is to assess the outcome of patients treated for hepatocellular carcinoma (HCC) in a General Surgery and Liver Transplantation Center. Methods: This retrospective study includes 844 patients diagnosed with HCC and surgically treated with curative intent methods. Curative intent treatment is mainly based on surgery, consisting of liver resection (LR) or liver transplantation (LT). Tumor ablation can become the treatment of choice in HCC cases not manageable by surgery (LT or LR). 518 patients underwent LR, 162 patients benefited from LT and in 164 patients radiofrequency ablation (RFA) was performed. 615 patients (73%) presented with liver cirrhosis. Results: The morbidity rate of patients treated for HCC was 30% and mortality was 4.3% for the entire study population. The five-year overall survival rate was 39%, with statistically significant differences between transplanted, resected and ablated patients (p < 0.05) and better results for LT, followed by LR and RFA. Conclusions: In HCC patients without liver cirrhosis, liver resection is the treatment of choice. For early HCC occurring on cirrhosis, LT offers the best outcome in terms of overall and disease-free survival. RFA could be a curative method for HCC patients not amenable to LT or LR.
Is an aggressive surgical approach to the patient with gastric lymphoma warranted?
Rosen, C B; van Heerden, J A; Martin, J K; Wold, L E; Ilstrup, D M
1987-01-01
At the Mayo Clinic, from 1970 through 1979, 84 patients (52 males and 32 females) had abdominal exploration for primary gastric lymphoma. All patients were observed a minimum of 5 years or until death. The histologic findings for all 84 patients were reviewed. Forty-four patients had "curative resection," and 40 patients had either biopsy alone or a palliative procedure. The probability of surviving 5 years was 75% for patients after potentially curative resection and 32% for patients after biopsy and palliation (p less than 0.001). The operative mortality rate was 5% overall and 2% after potentially curative resection. Increased tumor size (p less than 0.02), increased tumor penetration (p less than 0.01), and lymph node involvement (p less than 0.02) decreased the probability of survival, whereas histologic classification did not affect survival. Radiation therapy after surgery did not significantly affect the survival rate for the entire group or the survival rate for patients who had potentially curative resection. Resectability was associated with increased patient survival--independent of other prognostic factors--when our experience was analyzed by the Cox proportional-hazards model (p less than 0.005). It was concluded that an aggressive surgical attitude in the treatment of primary gastric lymphoma is warranted. The role of radiotherapy remains in question. PMID:3592805
Llorens, Eugenio; Agustí-Brisach, Carlos; González-Hernández, Ana I; Troncho, Pilar; Vicedo, Begonya; Yuste, Teresa; Orero, Mayte; Ledó, Carlos; García-Agustín, Pilar; Lapeña, Leonor
2017-05-01
Development of alternatives to the use of chemical pesticides for pest control is focused on the induction of natural plant defences. The study of new compounds based on liquid bioassimilable sulphur and their effect as an inducer of the immune system of plants would provide farmers with an alternative option to enhance plant resistance against pathogen attacks such as powdery mildew. In order to elucidate the efficacy of this compound in tomato against powdery mildew, we tested several treatments: curative foliar, preventive foliar, preventive in soil drench, and a combination of preventive in soil drench and curative foliar. In all cases, treated plants showed lower infection development, better physiological parameters and a higher level of chlorophyll. We also observed better performance in parameters involved in plant resistance such as antioxidant response, callose deposition and hormonal levels. The results indicate that preventive and curative treatments can be highly effective for the prevention and control of powdery mildew in tomato plants. Foliar treatments are able to stop pathogen development when applied curatively. Soil drench treatments induce immune response mechanisms of plants, significantly increasing callose deposition and promoting plant development. © 2016 Society of Chemical Industry.
Recent advances in multidisciplinary management of hepatocellular carcinoma
Gomaa, Asmaa I; Waked, Imam
2015-01-01
The incidence of hepatocellular carcinoma (HCC) is increasing, and it is currently the second leading cause of cancer-related death worldwide. Potentially curative treatment options for HCC include resection, transplantation, and percutaneous ablation, whereas palliative treatments include trans-arterial chemoembolization (TACE), radioembolization, and systemic treatments. Due to the diversity of available treatment options and patients’ presentations, a multidisciplinary team should decide clinical management of HCC, according to tumor characteristics and stage of liver disease. Potentially curative treatments are suitable for very-early- and early-stage HCC. However, the vast majority of HCC patients are diagnosed in later stages, where the tumor characteristics or progress of liver disease prevent curative interventions. For patients with intermediate-stage HCC, TACE and radioembolization improve survival and are being evaluated in addition to potentially curative therapies or with systemic targeted therapy. There is currently no effective systemic chemotherapy, immunologic, or hormonal therapy for HCC, and sorafenib is the only approved molecular-targeted treatment for advanced HCC. Other targeted agents are under investigation; trials comparing new agents in combination with sorafenib are ongoing. Combinations of systemic targeted therapies with local treatments are being evaluated for further improvements in HCC patient outcomes. This article provides an updated and comprehensive overview of the current standards and trends in the treatment of HCC. PMID:25866604
Lovell, Peter V; Huizinga, Nicole A; Getachew, Abel; Mees, Brianna; Friedrich, Samantha R; Wirthlin, Morgan; Mello, Claudio V
2018-05-18
Zebra finches are a major model organism for investigating mechanisms of vocal learning, a trait that enables spoken language in humans. The development of cDNA collections with expressed sequence tags (ESTs) and microarrays has allowed for extensive molecular characterizations of circuitry underlying vocal learning and production. However, poor database curation can lead to errors in transcriptome and bioinformatics analyses, limiting the impact of these resources. Here we used genomic alignments and synteny analysis for orthology verification to curate and reannotate ~35% of the oligonucleotides and corresponding ESTs/cDNAs that make up Agilent microarrays for gene expression analysis in finches. We found that: (1) 5475 out of 43,084 oligos (a) failed to align to the zebra finch genome, (b) aligned to multiple loci, or (c) aligned to Chr_un only, and thus need to be flagged until a better genome assembly is available, or (d) reflect cloning artifacts; (2) out of 9635 valid oligos examined further, 3120 were incorrectly named, including 1533 with no known orthologs; and (3) 2635 oligos required a name update. The resulting curated dataset provides a reference for correcting gene identification errors in previous finch microarray studies, and avoiding such errors in future studies.
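The flagging categories listed above (no alignment, multiple loci, Chr_un only) translate naturally into a small classification pass over per-oligo genome alignment hits. The sketch below is an illustration under assumed data structures (e.g., hits parsed from BLAT/BLAST tabular output), not the authors' exact pipeline.

```python
# Hedged sketch: classify oligos by their genome alignment hits, mirroring the
# categories named in the abstract. The hit structure and example values are
# illustrative assumptions.
from collections import defaultdict

hits = {
    "oligo_0001": [("chr5", 1_203_400)],
    "oligo_0002": [("chr2", 88_200), ("chr7", 4_410_115)],
    "oligo_0003": [],
    "oligo_0004": [("chrUn", 55_002)],
}

def classify(oligo_hits):
    if not oligo_hits:
        return "no_alignment"
    chroms = {chrom for chrom, _ in oligo_hits}
    if chroms == {"chrUn"}:
        return "chr_un_only"
    if len(oligo_hits) > 1:
        return "multiple_loci"
    return "ok"

summary = defaultdict(list)
for oligo, oligo_hits in hits.items():
    summary[classify(oligo_hits)].append(oligo)
for flag, oligos in summary.items():
    print(flag, oligos)
```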
Clinical study on the treatment of vertigo by an anti-vertigo granule
NASA Astrophysics Data System (ADS)
Liu, Xiaobin; Li, Chongxian; Hao, Shaojun; Lian, Linlin; Chen, Weiliang; Wang, Hongyu; Guan, Zhijiang; Zhang, Zhengchen
2018-04-01
To observe the clinical curative effect of the anti-vertigo granule in the treatment of vertigo caused by hypertension, cerebral arteriosclerosis, vertebrobasilar artery insufficiency, Meniere's disease and autonomic dysfunction, 300 patients with vertigo due to cerebral arteriosclerosis, vertebrobasilar artery insufficiency or Meniere's disease were randomly divided into three groups: a treatment group and control groups 1 and 2. Treatment was given 3 times a day, with 30 days constituting one course, for two courses, and the treatment effect was observed. The control groups received conventional doses of Yangxue Qingnao Granule and enteric-coated aspirin, respectively, on the same schedule. After 2 courses of treatment, changes in the degree of vertigo and in TCM symptoms were observed and recorded, and outcomes were compared. The total effective rate in the treatment group was 96%, compared with 69.7% in control group 1 and 71.7% in control group 2. The curative effect of the treatment group on vertigo caused by hypertension, cerebral arteriosclerosis, vertebrobasilar artery insufficiency and Meniere's disease, as well as on head heaviness and unsteadiness when walking, was better than that of Yangxue Qingnao Granule and enteric-coated aspirin. The anti-vertigo granule has a good clinical curative effect on vertigo caused by hypertension, cerebral arteriosclerosis, vertebrobasilar artery insufficiency, Meniere's disease and autonomic dysfunction.
NASA Astrophysics Data System (ADS)
Elger, Kirsten; Ulbricht, Damian; Bertelmann, Roland
2017-04-01
Open access to research data is an increasing international demand and includes not only data underlying scholarly publications, but also raw and curated data. Especially in the framework of the observed shift in many scientific fields towards data science and data mining, data repositories are becoming important players as data archives and access points to curated research data. While general and institutional data repositories are available across all scientific disciplines, domain-specific data repositories are specialised for particular disciplines, e.g. the bio- or geosciences, with the possibility of using more discipline-specific and richer metadata models than general repositories. Data publication is increasingly regarded as an important scientific achievement, and datasets with a digital object identifier (DOI) are now fully citable in journal articles. Moreover, following their signature of the "Statement of Commitment of the Coalition on Publishing Data in the Earth and Space Sciences" (COPDESS), many publishers have adapted their data policies and recommend, or even require, that data underlying scholarly publications be stored and published in (domain-specific) data repositories rather than as classical supplementary material attached directly to the respective article. The curation of large dynamic data from global networks in, e.g., seismology, magnetics or geodesy has always required a high degree of professional, IT-supported data management, simply to be able to store and access the huge number of files and to manage dynamic datasets. In contrast, the vast amount of research data acquired by individual investigators or small teams, known as 'long-tail data', has often not been the focus of data curation infrastructures. Nevertheless, even though these datasets are small in size and highly variable, in total they represent a significant portion of the scientific output. The curation of long-tail data requires more individual approaches and personal involvement of the data curator, especially regarding the data description. Here we introduce best practices for the publication of long-tail data that help to reduce the individual effort and improve the quality of the data description. The data repository of GFZ Data Services, hosted at the GFZ German Research Centre for Geosciences in Potsdam, is a domain-specific data repository for the geosciences. In addition to large dynamic datasets from different disciplines, it has a strong focus on the DOI-referenced publication of long-tail data, with the aim of reaching a high degree of reusability through comprehensive data description while at the same time providing and distributing standardised, machine-actionable metadata for data discovery (FAIR data). The development of templates for data reports, metadata provision by scientists via an XML Metadata Editor, and discipline-specific DOI landing pages help both the data curators to handle all kinds of datasets and the scientists, i.e. the users, to quickly decide whether a published dataset fulfils their needs. In addition, GFZ Data Services has developed DOI-registration services for several international networks (e.g. ICGEM, World Stress Map, IGETS, etc.) as well as project- or network-specific designs of the DOI landing pages carrying the logo or design of the networks or projects.
Synthesis and Screening of New Antimalarial Drugs
1987-10-30
correlate well with the known pharmacokinetics of the drug. 3. The blood schizonticidal properties of chloroquine (active at 3 mg/kg/day x 7 days) were...Reference drug chloroquine has shown consistently curative action at 3 mg/kg (base) x 7 days. No escalation of the chloroquine curative dose has been...from patent infection has been used from time to time for standardization of the blood schizontocidal test using chloroquine diphosphate as the reference
Primary leiomyosarcoma of the innominate vein.
Illuminati, Giulio; Miraldi, Fabio; Mazzesi, Giuseppe; D'urso, Antonio; Ceccanei, Gianluca; Bezzi, Marcello
2007-01-01
Primary venous leiomyosarcoma is rare. We report the case of a primary leiomyosarcoma of the left innominate vein, with neoplastic thrombus extending into the left jugular and subclavian veins. The tumor was curatively resected en bloc with anterior mediastinal and laterocervical lymphatics, through a median sternotomy prolonged into left cervicotomy. Primary venous sarcomas may be associated with prolonged survival in individual cases, with curative resection recommended as the standard treatment, in the absence of distant spread.
Regulation and Function of Cytokines that Predict Prostate Cancer Metastasis
2012-08-01
one given the potential for attempts at local curative therapy (whether it be surgery, radiation or cryotherapy) to subject the patient to both short...from the prostate cancer [2]. Next, following local curative therapy the issue of requirement and timing for second-line adjuvant therapy becomes...with locally advanced disease. W81XWH-09-1-0503 Bhowmick, Neil A. 6 f. References [1] Jemal A, Tiwari RC, Murray T, Ghafoor
[Pain as a stimulator of protective and curative processes (the theory of pain)].
Uglov, F G; Kopylov, V A
1985-06-01
The work deals with a theoretical approach to the problem of pain. The mechanism of the appearance of pain is considered as a lack of correspondence between the functional capacity of the nervous system and the presented load. The function of pain as a reparative agent and stimulator of the defensive forces of the organism in pathological processes is disclosed. The possible employment of pain as a curative factor in practical medicine is discussed.
Sharing and community curation of mass spectrometry data with GNPS
Nguyen, Don Duy; Watrous, Jeramie; Kapono, Clifford A; Luzzatto-Knaan, Tal; Porto, Carla; Bouslimani, Amina; Melnik, Alexey V; Meehan, Michael J; Liu, Wei-Ting; Crüsemann, Max; Boudreau, Paul D; Esquenazi, Eduardo; Sandoval-Calderón, Mario; Kersten, Roland D; Pace, Laura A; Quinn, Robert A; Duncan, Katherine R; Hsu, Cheng-Chih; Floros, Dimitrios J; Gavilan, Ronnie G; Kleigrewe, Karin; Northen, Trent; Dutton, Rachel J; Parrot, Delphine; Carlson, Erin E; Aigle, Bertrand; Michelsen, Charlotte F; Jelsbak, Lars; Sohlenkamp, Christian; Pevzner, Pavel; Edlund, Anna; McLean, Jeffrey; Piel, Jörn; Murphy, Brian T; Gerwick, Lena; Liaw, Chih-Chuang; Yang, Yu-Liang; Humpf, Hans-Ulrich; Maansson, Maria; Keyzers, Robert A; Sims, Amy C; Johnson, Andrew R.; Sidebottom, Ashley M; Sedio, Brian E; Klitgaard, Andreas; Larson, Charles B; P., Cristopher A Boya; Torres-Mendoza, Daniel; Gonzalez, David J; Silva, Denise B; Marques, Lucas M; Demarque, Daniel P; Pociute, Egle; O'Neill, Ellis C; Briand, Enora; Helfrich, Eric J. N.; Granatosky, Eve A; Glukhov, Evgenia; Ryffel, Florian; Houson, Hailey; Mohimani, Hosein; Kharbush, Jenan J; Zeng, Yi; Vorholt, Julia A; Kurita, Kenji L; Charusanti, Pep; McPhail, Kerry L; Nielsen, Kristian Fog; Vuong, Lisa; Elfeki, Maryam; Traxler, Matthew F; Engene, Niclas; Koyama, Nobuhiro; Vining, Oliver B; Baric, Ralph; Silva, Ricardo R; Mascuch, Samantha J; Tomasi, Sophie; Jenkins, Stefan; Macherla, Venkat; Hoffman, Thomas; Agarwal, Vinayak; Williams, Philip G; Dai, Jingqui; Neupane, Ram; Gurr, Joshua; Rodríguez, Andrés M. C.; Lamsa, Anne; Zhang, Chen; Dorrestein, Kathleen; Duggan, Brendan M; Almaliti, Jehad; Allard, Pierre-Marie; Phapale, Prasad; Nothias, Louis-Felix; Alexandrov, Theodore; Litaudon, Marc; Wolfender, Jean-Luc; Kyle, Jennifer E; Metz, Thomas O; Peryea, Tyler; Nguyen, Dac-Trung; VanLeer, Danielle; Shinn, Paul; Jadhav, Ajit; Müller, Rolf; Waters, Katrina M; Shi, Wenyuan; Liu, Xueting; Zhang, Lixin; Knight, Rob; Jensen, Paul R; Palsson, Bernhard O; Pogliano, Kit; Linington, Roger G; Gutiérrez, Marcelino; Lopes, Norberto P; Gerwick, William H; Moore, Bradley S; Dorrestein, Pieter C; Bandeira, Nuno
2017-01-01
The potential of the diverse chemistries present in natural products (NP) for biotechnology and medicine remains untapped because NP databases are not searchable with raw data and the NP community has no way to share data other than in published papers. Although mass spectrometry techniques are well-suited to high-throughput characterization of natural products, there is a pressing need for an infrastructure to enable sharing and curation of data. We present Global Natural Products Social molecular networking (GNPS, http://gnps.ucsd.edu), an open-access knowledge base for community-wide organization and sharing of raw, processed or identified tandem mass (MS/MS) spectrometry data. In GNPS, crowdsourced curation of freely available community-wide reference MS libraries will underpin improved annotations. Data-driven social-networking should facilitate identification of spectra and foster collaborations. We also introduce the concept of ‘living data’ through continuous reanalysis of deposited data. PMID:27504778
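Spectral library matching and molecular networking of the kind GNPS provides rest on similarity scores between MS/MS spectra. The sketch below computes a plain cosine similarity with greedy peak matching within an m/z tolerance; the modified cosine used in practice also aligns peaks shifted by the precursor mass difference, which is omitted here, and the two spectra are made-up values.

```python
# Simplified cosine-similarity comparison of two MS/MS spectra given as
# (m/z, intensity) pairs. Illustrative only; real scoring is more elaborate.
from math import sqrt

spec_a = [(105.03, 0.2), (136.06, 1.0), (178.05, 0.4)]
spec_b = [(105.04, 0.3), (136.06, 0.9), (193.07, 0.2)]

def cosine_score(a, b, tol=0.02):
    used_b, dot = set(), 0.0
    for mz_a, int_a in a:
        best = None
        for j, (mz_b, int_b) in enumerate(b):
            if j not in used_b and abs(mz_a - mz_b) <= tol:
                if best is None or abs(mz_a - mz_b) < abs(mz_a - b[best][0]):
                    best = j
        if best is not None:
            used_b.add(best)
            dot += int_a * b[best][1]
    norm_a = sqrt(sum(i * i for _, i in a))
    norm_b = sqrt(sum(i * i for _, i in b))
    return dot / (norm_a * norm_b)

print(f"cosine similarity: {cosine_score(spec_a, spec_b):.2f}")
```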
Sun, Da-Xin; Tan, Xiao-Dong; Gao, Feng; Xu, Jin; Cui, Dong-Xu; Dai, Xian-Wei
2015-01-01
Postoperative bile leak is a major surgical morbidity after curative resection with hepaticojejunostomy for hilar cholangiocarcinoma, especially in Bismuth-Corlette types III and IV. This retrospective study assessed the effectiveness and safety of an autologous hepatic round ligament flap (AHRLF) for reducing bile leak after hilar hepaticojejunostomy. Nine type III and IV hilar cholangiocarcinoma patients were consecutively hospitalized for elective perihilar partial hepatectomy with hilar hepaticojejunostomy using an AHRLF between October 2009 and September 2013. The AHRLF was harvested to reinforce the perihilar hepaticojejunostomy. Main outcome measures included operative time, blood loss, postoperative recovery times, morbidity, bile leak, R0 resection rate, and overall survival. All patients underwent uneventful R0 resection with hilar hepaticojejunostomy. No patient experienced postoperative bile leak. The AHRLF was associated with lack of bile leak after curative perihilar hepatectomy with hepaticojejunostomy for hilar cholangiocarcinoma, without compromising oncologic safety, and is recommended in selected patients.
[Diagnostic and curative bronchoscopy for purulent-destructive pulmonary diseases].
Pinchuk, T P; Yasnogorodsky, O O; Guryanova, Yu V; Taldykin, M V; Kachikin, A S; Catane, Yu A
To assess an efficacy of diagnostic and curative bronchoscopy in patients with purulent-destructive pulmonary diseases. Diagnosis and treatment of 34 patients with purulent-destructive pulmonary diseases including small-focal destruction (14) and lung abscesses (19) were analyzed. 33 patients underwent diagnostic fibrobronchoscopy (FBS) with brush and transbronchial biopsy. Curative endoscopy included bronchial tree sanation, peribronchial administration of antibiotics (5) and transbronchial drainage of abscess (14). Atrophic bronchitis and cicatricial deformity of the 2-3rd segmental bronchi were revealed in 81.8% and 15.2% respectively. Transbronchial biopsy confirmed malignant neoplasms (15.2%) and pulmonary tuberculosis (6.1%). Peribronchial administration of amikacin in patients with small-focal pulmonary destruction and transbronchial drainage of abscesses accelerated pulmonary tissue repair and complete recovery. Transbronchial biopsy in patients with destructive pulmonary diseases verifies pathological process and excludes malignant and specific pulmonary damage. Complex use of endoscopic methods is associated with positive clinical result in all patients with pulmonary destruction.
Wang, Mingxun; Carver, Jeremy J; Phelan, Vanessa V; Sanchez, Laura M; Garg, Neha; Peng, Yao; Nguyen, Don Duy; Watrous, Jeramie; Kapono, Clifford A; Luzzatto-Knaan, Tal; Porto, Carla; Bouslimani, Amina; Melnik, Alexey V; Meehan, Michael J; Liu, Wei-Ting; Crüsemann, Max; Boudreau, Paul D; Esquenazi, Eduardo; Sandoval-Calderón, Mario; Kersten, Roland D; Pace, Laura A; Quinn, Robert A; Duncan, Katherine R; Hsu, Cheng-Chih; Floros, Dimitrios J; Gavilan, Ronnie G; Kleigrewe, Karin; Northen, Trent; Dutton, Rachel J; Parrot, Delphine; Carlson, Erin E; Aigle, Bertrand; Michelsen, Charlotte F; Jelsbak, Lars; Sohlenkamp, Christian; Pevzner, Pavel; Edlund, Anna; McLean, Jeffrey; Piel, Jörn; Murphy, Brian T; Gerwick, Lena; Liaw, Chih-Chuang; Yang, Yu-Liang; Humpf, Hans-Ulrich; Maansson, Maria; Keyzers, Robert A; Sims, Amy C; Johnson, Andrew R; Sidebottom, Ashley M; Sedio, Brian E; Klitgaard, Andreas; Larson, Charles B; P, Cristopher A Boya; Torres-Mendoza, Daniel; Gonzalez, David J; Silva, Denise B; Marques, Lucas M; Demarque, Daniel P; Pociute, Egle; O'Neill, Ellis C; Briand, Enora; Helfrich, Eric J N; Granatosky, Eve A; Glukhov, Evgenia; Ryffel, Florian; Houson, Hailey; Mohimani, Hosein; Kharbush, Jenan J; Zeng, Yi; Vorholt, Julia A; Kurita, Kenji L; Charusanti, Pep; McPhail, Kerry L; Nielsen, Kristian Fog; Vuong, Lisa; Elfeki, Maryam; Traxler, Matthew F; Engene, Niclas; Koyama, Nobuhiro; Vining, Oliver B; Baric, Ralph; Silva, Ricardo R; Mascuch, Samantha J; Tomasi, Sophie; Jenkins, Stefan; Macherla, Venkat; Hoffman, Thomas; Agarwal, Vinayak; Williams, Philip G; Dai, Jingqui; Neupane, Ram; Gurr, Joshua; Rodríguez, Andrés M C; Lamsa, Anne; Zhang, Chen; Dorrestein, Kathleen; Duggan, Brendan M; Almaliti, Jehad; Allard, Pierre-Marie; Phapale, Prasad; Nothias, Louis-Felix; Alexandrov, Theodore; Litaudon, Marc; Wolfender, Jean-Luc; Kyle, Jennifer E; Metz, Thomas O; Peryea, Tyler; Nguyen, Dac-Trung; VanLeer, Danielle; Shinn, Paul; Jadhav, Ajit; Müller, Rolf; Waters, Katrina M; Shi, Wenyuan; Liu, Xueting; Zhang, Lixin; Knight, Rob; Jensen, Paul R; Palsson, Bernhard O; Pogliano, Kit; Linington, Roger G; Gutiérrez, Marcelino; Lopes, Norberto P; Gerwick, William H; Moore, Bradley S; Dorrestein, Pieter C; Bandeira, Nuno
2016-08-09
The potential of the diverse chemistries present in natural products (NP) for biotechnology and medicine remains untapped because NP databases are not searchable with raw data and the NP community has no way to share data other than in published papers. Although mass spectrometry (MS) techniques are well-suited to high-throughput characterization of NP, there is a pressing need for an infrastructure to enable sharing and curation of data. We present Global Natural Products Social Molecular Networking (GNPS; http://gnps.ucsd.edu), an open-access knowledge base for community-wide organization and sharing of raw, processed or identified tandem mass (MS/MS) spectrometry data. In GNPS, crowdsourced curation of freely available community-wide reference MS libraries will underpin improved annotations. Data-driven social-networking should facilitate identification of spectra and foster collaborations. We also introduce the concept of 'living data' through continuous reanalysis of deposited data.
Public health and primary care: struggling to "win friends and influence people".
Mayes, Rick; McKenna, Sean
2011-01-01
Why are the goals of public health and primary care less politically popular and financially supported than those of curative medicine? A major part of the answer to this question lies in the fact that humans often worry wrongly by assessing risk poorly. This reality is a significant obstacle to the adequate promotion of and investment in public health, primary care, and prevention. Also, public health's tendency to infringe on personal privacy, as well as to call for difficult behavioral change, often sparks intense controversy and interest group opposition that discourage broader political support. Finally, in contrast to curative medicine, both the cost-benefit structure of public health (costs now, benefits later) and the way in which the profession operates make it largely invisible to and, thus, underappreciated by the general public. When curative medicine works well, most everybody notices. When public health and primary care work well, virtually nobody notices.
El Bacha, H; Salihoun, M; Kabbaj, N; Benkabbou, A
2017-01-04
Hepatocellular carcinoma has a poor prognosis; few patients can undergo surgical curative treatment according to Barcelona Clinic Liver Cancer guidelines. Progress in surgical techniques has led to operations for more patients outside these guidelines. Our case shows a patient with intermediate-stage hepatocellular carcinoma who had a good outcome after curative treatment. We report the case of an 80-year-old Moroccan man, positive for hepatitis C virus, presenting with intermediate-stage hepatocellular carcinoma (three lesions between 20 and 60 mm). He showed complete tumor necrosis after portal vein embolization and achieved 24-month disease-free survival after surgery. Perioperative care in liver surgery and multidisciplinary discussion can help to extend indications for liver resection for hepatocellular carcinoma outside European Association for the Study of the Liver/American Association for the Study of Liver Diseases recommendations and offer a curative approach to selected patients with intermediate and advanced stage hepatocellular carcinoma.
Saccharomyces genome database informs human biology
Skrzypek, Marek S; Nash, Robert S; Wong, Edith D; MacPherson, Kevin A; Karra, Kalpana; Binkley, Gail; Simison, Matt; Miyasato, Stuart R
2018-01-01
Abstract The Saccharomyces Genome Database (SGD; http://www.yeastgenome.org) is an expertly curated database of literature-derived functional information for the model organism budding yeast, Saccharomyces cerevisiae. SGD constantly strives to synergize new types of experimental data and bioinformatics predictions with existing data, and to organize them into a comprehensive and up-to-date information resource. The primary mission of SGD is to facilitate research into the biology of yeast and to provide this wealth of information to advance, in many ways, research on other organisms, even those as evolutionarily distant as humans. To build such a bridge between biological kingdoms, SGD is curating data regarding yeast-human complementation, in which a human gene can successfully replace the function of a yeast gene, and/or vice versa. These data are manually curated from published literature, made available for download, and incorporated into a variety of analysis tools provided by SGD. PMID:29140510
NASA Technical Reports Server (NTRS)
Blumenfeld, E. H.; Evans, C. A.; Oshel, E. R.; Liddle, D. A.; Beaulieu, K. R.; Zeigler, R. A.; Righter, K.; Hanna, R. D.; Ketcham, R. A.
2017-01-01
NASA's vast and growing collections of astromaterials are both scientifically and culturally significant, requiring unique preservation strategies that need to be recurrently updated to contemporary technological capabilities and increasing accessibility demands. New technologies have made it possible to advance documentation and visualization practices that can enhance conservation and curation protocols for NASA's Astromaterials Collections. Our interdisciplinary team has developed a method to create 3D Virtual Astromaterials Samples (VAS) of the existing collections of Apollo Lunar Samples and Antarctic Meteorites. Research-grade 3D VAS will virtually put these samples in the hands of researchers and educators worldwide, increasing accessibility and visibility of these significant collections. With new sample return missions on the horizon, it is of primary importance to develop advanced curation standards for documentation and visualization methodologies.
TREATING HEMOGLOBINOPATHIES USING GENE CORRECTION APPROACHES: PROMISES AND CHALLENGES
Cottle, Renee N.; Lee, Ciaran M.; Bao, Gang
2016-01-01
Hemoglobinopathies are genetic disorders caused by aberrant hemoglobin expression or structure changes, resulting in severe mortality and health disparities worldwide. Sickle cell disease (SCD) and β-thalassemia, the most common forms of hemoglobinopathies, are typically treated using transfusions and pharmacological agents. Allogeneic hematopoietic stem cell transplantation is the only curative therapy, but has limited clinical applicability. Although gene therapy approaches have been proposed based on the insertion and forced expression of wild-type or anti-sickling β-globin variants, safety concerns may impede their clinical application. A novel curative approach is nuclease-based gene correction, which involves the application of precision genome editing tools to correct the disease-causing mutation. This review describes the development and potential application of gene therapy and precision genome editing approaches for treating SCD and β-thalassemia. The opportunities and challenges in advancing a curative therapy for hemoglobinopathies are also discussed. PMID:27314256
NASA Astrophysics Data System (ADS)
Wang, T.; Branch, B. D.
2013-12-01
Earth science research data, their management, informatics processing and curation are valuable in allowing earth scientists to make new discoveries. But how to actively manage these research assets to ensure they remain safe and secure, accessible and reusable for the long term is a big challenge. Nowadays, the data deluge makes this challenge even more difficult. To address the growing demand for managing earth science data, the Council on Library and Information Resources (CLIR) partners with the Library and Technology Services (LTS) of Lehigh University and Purdue University Libraries (PUL) to host postdoctoral fellows in data curation. This interdisciplinary fellowship program, funded by the Sloan Foundation, innovatively connects university libraries and earth science departments and provides earth science Ph.D.s with opportunities to use their research experience in earth science and the data curation training received during their fellowship to explore best practices for research data management in earth science. In the process of exploring best practices for data curation in earth science, the CLIR Data Curation Fellows have accumulated rich experience and insight into the data management behaviors and needs of earth scientists. Specifically, Ting Wang, the postdoctoral fellow at Lehigh University, has worked together with the LTS support team for the College of Arts and Sciences, Web Specialists and the High Performance Computing Team to assess and meet the data management needs of researchers at the Department of Earth and Environmental Sciences (EES). By interviewing faculty members and graduate students at EES, the fellow has identified a variety of data-related challenges in different research fields of earth science, such as climate, ecology, geochemistry and geomorphology. The investigation findings of the fellow also support LTS in developing campus infrastructure for long-term data management in the sciences. Likewise, Benjamin D. Branch, the postdoctoral fellow at PUL, conducted GIS (Geographic Information Systems) data curation interviews and worked closely with the GIS Information Specialist on GIS-related instructional programs in order to recognize the data management needs in GIS research. Conceptually, the research implemented a grounded theory approach of campus-wide interviews for spatial GIS inquiry. To date, analysis of a subset of 32 individual interviews with faculty, graduate students and geospatial staff users is underway with the intent of publication. Collectively, the CLIR fellowship program should work to expand the capacity and job resiliency of the library as a necessary vehicle of institutional competitiveness via its prominence in data services, for future consideration in the areas of data science, data curation, data rescue and collaborative support of the scientific community. In addition, the digital data service aspects of library transformation may be showcased in the results of the fellows' accomplishments.
A justification for semantic training in data curation frameworks development
NASA Astrophysics Data System (ADS)
Ma, X.; Branch, B. D.; Wegner, K.
2013-12-01
In the complex data curation activities involving proper data access, data use optimization and data rescue, opportunities exist where underlying skills in semantics may play a crucial role for data curation professionals ranging from data scientists, to informaticists, to librarians. Here, we provide a conceptualization of semantics use in the education data curation framework (EDCF) [1] under development by Purdue University and endorsed by the GLOBE program [2] for further development and application. Our work shows that comprehensive data science training includes both spatial and non-spatial data, where both categories are promoted by the standards efforts of organizations such as the Open Geospatial Consortium (OGC) and the World Wide Web Consortium (W3C), as well as organizations such as the Federation of Earth Science Information Partners (ESIP) that share knowledge and propagate best practices in applications. Outside the context of the EDCF, semantics training may be just as critical for such data scientists, informaticists or librarians in other types of data curation activity. Past work by the authors has suggested that such data science should be augmented by an ontological literacy through which data science may become sustainable as a discipline. As more datasets are published as open data [3] and linked to each other, i.e., in the Resource Description Framework (RDF) format, or at least have their metadata published in such a way, vocabularies and ontologies of various domains are being created and used in data management, such as AGROVOC [4] for agriculture and the GCMD keywords [5] and CLEAN vocabulary [6] for the climate sciences. The new generation of data scientists should be aware of these technologies and receive training, where appropriate, to incorporate them into their daily work. References: [1] Branch, B.D., Fosmire, M., 2012. The role of interdisciplinary GIS and data curation librarians in enhancing authentic scientific research in the classroom. American Geophysical Union 2013 Fall Meeting, San Francisco, CA, USA. Abstract# ED43A-0727. [2] http://www.globe.gov [3] http://www.whitehouse.gov/sites/default/files/omb/memoranda/2013/m-13-13.pdf [4] http://aims.fao.org/standards/agrovoc [5] http://gcmd.nasa.gov/learn/keyword_list.html [6] http://cleanet.org/clean/about/climate_energy_.html
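As a small illustration of the linked-data practices described above, the sketch below publishes minimal machine-actionable metadata for a dataset as RDF using rdflib. The DCTERMS and DCAT vocabularies are standard; the dataset URI and all field values are placeholders, and the subject could equally point to AGROVOC or GCMD concept URIs.

```python
# Hedged sketch: describe a dataset in RDF with Dublin Core terms and DCAT.
# The dataset URI and literal values are placeholders.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDF

DCAT = Namespace("http://www.w3.org/ns/dcat#")
g = Graph()
g.bind("dcterms", DCTERMS)
g.bind("dcat", DCAT)

ds = URIRef("https://example.org/dataset/soil-moisture-2013")  # placeholder URI
g.add((ds, RDF.type, DCAT.Dataset))
g.add((ds, DCTERMS.title, Literal("Soil moisture observations, field campaign 2013")))
g.add((ds, DCTERMS.creator, Literal("Example Research Group")))
g.add((ds, DCTERMS.subject, Literal("soil moisture")))  # could be a concept URI instead
g.add((ds, DCAT.keyword, Literal("hydrology")))

print(g.serialize(format="turtle"))
```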
Dow, Geoffrey S; Gettayacamin, Montip; Hansukjariya, Pranee; Imerbsin, Rawiwan; Komcharoen, Srawuth; Sattabongkot, Jetsumon; Kyle, Dennis; Milhous, Wilbur; Cozens, Simon; Kenworthy, David; Miller, Anne; Veazey, Jim; Ohrt, Colin
2011-07-29
Tafenoquine is an 8-aminoquinoline being developed for radical cure (blood and liver stage elimination) of Plasmodium vivax. During monotherapy treatment, the compound exhibits slow parasite and fever clearance times, and toxicity in glucose-6-phosphate dehydrogenase (G6PD) deficiency is a concern. Combination with other antimalarials may mitigate these concerns. In 2005, the radical curative efficacy of tafenoquine combinations was investigated in Plasmodium cynomolgi-infected naïve Indian-origin Rhesus monkeys. In the first cohort, groups of two monkeys were treated with a three-day regimen of tafenoquine at different doses alone and in combination with a three-day chloroquine regimen to determine the minimum curative dose (MCD). In the second cohort, the radical curative efficacy of a single-day regimen of tafenoquine-mefloquine was compared to that of two three-day regimens comprising tafenoquine at its MCD with chloroquine or artemether-lumefantrine in groups of six monkeys. In a final cohort, the efficacy of the MCD of tafenoquine against hypnozoites alone and in combination with chloroquine was investigated in groups of six monkeys after quinine pre-treatment to eliminate asexual parasites. Plasma tafenoquine, chloroquine and desethylchloroquine concentrations were determined by LC-MS in order to compare doses of the drugs to those used clinically in humans. The total MCD of tafenoquine required in combination regimens for radical cure was ten-fold lower (1.8 mg/kg versus 18 mg/kg) than for monotherapy. This regimen (1.8 mg/kg) was equally efficacious as monotherapy or in combination with chloroquine after quinine pre-treatment to eliminate asexual stages. The same dose (1.8 mg/kg) was radically curative in combination with artemether-lumefantrine. Tafenoquine was also radically curative when combined with mefloquine. The MCD of tafenoquine monotherapy for radical cure (18 mg/kg) appears to be biologically equivalent to a 600-1200 mg dose in humans. At its MCD in combination with blood schizonticidal drugs (1.8 mg/kg), the maximum observed plasma concentrations were substantially lower (20-84 versus 550-1,100 ng/ml) than after administration of 1,200 mg in clinical studies. Ten-fold lower clinical doses of tafenoquine than used in prior studies may be effective against P. vivax hypnozoites if the drug is deployed in combination with effective blood-schizonticidal drugs.
2011-01-01
Background Tafenoquine is an 8-aminoquinoline being developed for radical cure (blood and liver stage elimination) of Plasmodium vivax. During monotherapy treatment, the compound exhibits slow parasite and fever clearance times, and toxicity in glucose-6-phosphate dehydrogenase (G6PD) deficiency is a concern. Combination with other antimalarials may mitigate these concerns. Methods In 2005, the radical curative efficacy of tafenoquine combinations was investigated in Plasmodium cynomolgi-infected naïve Indian-origin Rhesus monkeys. In the first cohort, groups of two monkeys were treated with a three-day regimen of tafenoquine at different doses alone and in combination with a three-day chloroquine regimen to determine the minimum curative dose (MCD). In the second cohort, the radical curative efficacy of a single-day regimen of tafenoquine-mefloquine was compared to that of two three-day regimens comprising tafenoquine at its MCD with chloroquine or artemether-lumefantrine in groups of six monkeys. In a final cohort, the efficacy of the MCD of tafenoquine against hypnozoites alone and in combination with chloroquine was investigated in groups of six monkeys after quinine pre-treatment to eliminate asexual parasites. Plasma tafenoquine, chloroquine and desethylchloroquine concentrations were determined by LC-MS in order to compare doses of the drugs to those used clinically in humans. Results The total MCD of tafenoquine required in combination regimens for radical cure was ten-fold lower (1.8 mg/kg versus 18 mg/kg) than for monotherapy. This regimen (1.8 mg/kg) was equally efficacious as monotherapy or in combination with chloroquine after quinine pre-treatment to eliminate asexual stages. The same dose (1.8 mg/kg) was radically curative in combination with artemether-lumefantrine. Tafenoquine was also radically curative when combined with mefloquine. The MCD of tafenoquine monotherapy for radical cure (18 mg/kg) appears to be biologically equivalent to a 600-1200 mg dose in humans. At its MCD in combination with blood schizonticidal drugs (1.8 mg/kg), the maximum observed plasma concentrations were substantially lower (20-84 versus 550-1,100 ng/ml) than after administration of 1,200 mg in clinical studies. Conclusions Ten-fold lower clinical doses of tafenoquine than used in prior studies may be effective against P. vivax hypnozoites if the drug is deployed in combination with effective blood-schizonticidal drugs. PMID:21801400
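The conclusion's ten-fold scaling can be made explicit with back-of-the-envelope arithmetic that uses only the equivalence stated in the abstract (18 mg/kg in the monkey model corresponding to roughly 600-1200 mg in humans). This is an illustrative projection, not a pharmacokinetic calculation.

```python
# Illustrative arithmetic based solely on the abstract's stated equivalence.
mono_mcd_mg_per_kg = 18.0
combo_mcd_mg_per_kg = 1.8
human_equiv_mono_mg = (600, 1200)          # stated equivalence for the monotherapy MCD

scale = combo_mcd_mg_per_kg / mono_mcd_mg_per_kg   # 0.1, the ten-fold reduction
human_equiv_combo_mg = tuple(round(d * scale) for d in human_equiv_mono_mg)
print(f"projected human-equivalent combination dose: {human_equiv_combo_mg} mg")  # (60, 120)
```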
Körner, Hartwig; Söreide, Kjetil; Stokkeland, Pål J; Söreide, Jon Arne
2005-03-01
In this study, we analyzed the Norwegian guidelines for systematic follow-up after curative colorectal cancer surgery in a large single institution. Three hundred fourteen consecutive unselected patients undergoing curative surgery for colorectal cancer between 1996 and 1999 were studied with regard to asymptomatic curable recurrence, compliance with the program, and cost. Follow-up included carcinoembryonic antigen (CEA) interval measurements, colonoscopy, ultrasonography of the liver, and radiography of the chest. In 194 (62%) of the patients, follow-up was conducted according to the Norwegian guidelines. Twenty-one patients (11%) were operated on for curable recurrence, and 18 patients (9%) were disease free after curative surgery for recurrence at evaluation. Four metachronous tumors (2%) were found. CEA interval measurement had to be made most frequently (534 tests needed) to detect one asymptomatic curable recurrence. Follow-up program did not influence cancer-specific survival. Overall compliance with the surveillance program was 66%, being lowest for colonoscopy (55%) and highest for ultrasonography of the liver (85%). The total program cost was 228,117 euro (US 280,994 dollars), translating into 20,530 euro (US 25,289 dollars) for one surviving patient after surgery for recurrence. The total diagnosis yield with regard to disease-free survival after surgery for recurrence was 9%. Compliance was moderate. Whether the continuing implementation of such program and cost are justified should be debated.
NASA Astrophysics Data System (ADS)
Pereira, Stephen P.; Matull, W. Rudiger; Dhar, Dipok K.; Ayaru, Laskshmana; Sandanayake, Neomal S.; Chapman, Michael H.
2009-06-01
There is a need for better management strategies to improve survival and quality of life in patients with biliary tract cancer (BTC). We compared treatment outcomes in 321 patients (median age 65 years, range 29-102; F:M 1:1) with a final diagnosis of BTC (cholangiocarcinoma n=237, gallbladder cancer n=84) seen in a tertiary referral cancer centre between 1998 and 2007. Of 89 (28%) patients who underwent surgical intervention with curative intent, 38% had R0 resections and had the most favourable outcome, with a 3-year survival of 57%. Even though patients treated with photodynamic therapy (PDT) had more advanced clinical T-stages, their survival was similar to that of patients whose attempted curative surgery resulted in R1/R2 resections (median survival 12 vs. 13 months, ns). In a subgroup of 36 patients with locally advanced BTC treated with PDT as part of a prospective phase II study, the median survival was 12 (range 2-51) months, compared with 5 months in matched historical controls treated with stenting alone (p < 0.0001). In this large UK series, long-term survival with BTC was only achieved in surgical patients with R0 resection margins. Palliative PDT resulted in survival similar to that achieved with curatively intended R1/R2 resections.
Jayakrishnan, Thejus T; Nadeem, Hasan; Groeschl, Ryan T; George, Ben; Thomas, James P; Ritch, Paul S; Christians, Kathleen K; Tsai, Susan; Evans, Douglas B; Pappas, Sam G; Gamblin, T Clark; Turaga, Kiran K
2015-01-01
Objectives Laparoscopy is recommended to detect radiographically occult metastases in patients with pancreatic cancer before curative resection. This study was conducted to test the hypothesis that diagnostic laparoscopy (DL) is cost-effective in patients undergoing curative resection with or without neoadjuvant therapy (NAT). Methods Decision tree modelling compared routine DL with exploratory laparotomy (ExLap) at the time of curative resection in resectable cancer treated with surgery first (SF), and borderline resectable cancer treated with NAT. Costs (US$) from the payer's perspective, quality-adjusted life months (QALMs) and incremental cost-effectiveness ratios (ICERs) were calculated. Base case estimates and multi-way sensitivity analyses were performed. Willingness to pay (WtP) was US$4166/QALM (or US$50 000/quality-adjusted life year). Results Base case costs were US$34 921 for ExLap and US$33 442 for DL in SF patients, and US$39 633 for ExLap and US$39 713 for DL in NAT patients. Routine DL is the dominant (preferred) strategy in both treatment types: it allows for cost reductions of US$10 695/QALM in SF and US$4158/QALM in NAT patients. Conclusions The present analysis supports the cost-effectiveness of routine DL before curative resection in pancreatic cancer patients treated with either SF or NAT. PMID:25123702
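As an illustrative aside to the record above, the dominance and incremental cost-effectiveness logic used in decision analyses of this kind can be sketched in a few lines. The sketch below is not the study's model: the effectiveness values (QALMs) are hypothetical placeholders, and only the willingness-to-pay threshold of US$4166/QALM is taken from the abstract.

```python
# Minimal sketch of an ICER/dominance comparison between two strategies.
# All numbers passed in the example call are illustrative, not the study's inputs.

def compare_strategies(cost_a, qalm_a, cost_b, qalm_b, wtp_per_qalm=4166.0):
    """Compare strategy A against strategy B.

    A 'dominates' B if it is no more expensive and at least as effective;
    otherwise the incremental cost-effectiveness ratio (ICER) is compared
    with the willingness-to-pay (WtP) threshold.
    """
    d_cost = cost_a - cost_b
    d_eff = qalm_a - qalm_b          # effectiveness difference in QALMs
    if d_cost <= 0 and d_eff >= 0:
        return "A dominates B (no more costly and at least as effective)"
    if d_cost >= 0 and d_eff <= 0:
        return "B dominates A"
    icer = d_cost / d_eff            # dollars per additional QALM
    verdict = "cost-effective" if icer <= wtp_per_qalm else "not cost-effective"
    return f"ICER = ${icer:,.0f}/QALM ({verdict} at WtP ${wtp_per_qalm:,.0f}/QALM)"

# Hypothetical example: diagnostic laparoscopy (A) vs exploratory laparotomy (B)
print(compare_strategies(cost_a=33442, qalm_a=18.2, cost_b=34921, qalm_b=18.0))
```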
NASA Astrophysics Data System (ADS)
le Roux, J.; Baker, A.; Caltagirone, S.; Bugbee, K.
2017-12-01
The Common Metadata Repository (CMR) is a high-performance, high-quality repository for Earth science metadata records, and serves as the primary way to search NASA's growing 17.5 petabytes of Earth science data holdings. Released in 2015, CMR has the capability to support several different metadata standards already being utilized by NASA's combined network of Earth science data providers, or Distributed Active Archive Centers (DAACs). The Analysis and Review of CMR (ARC) Team located at Marshall Space Flight Center is working to improve the quality of records already in CMR with the goal of making records optimal for search and discovery. This effort entails a combination of automated and manual review, where each NASA record in CMR is checked for completeness, accuracy, and consistency. This effort is highly collaborative in nature, requiring communication and transparency of findings amongst NASA personnel, DAACs, the CMR team and other metadata curation teams. Through the evolution of this project it has become apparent that there is a need to document and report findings, as well as track metadata improvements in a more efficient manner. The ARC team has collaborated with Element 84 in order to develop a metadata curation tool to meet these needs. In this presentation, we will provide an overview of this metadata curation tool and its current capabilities. Challenges and future plans for the tool will also be discussed.
Ning, Shangwei; Zhang, Jizhou; Wang, Peng; Zhi, Hui; Wang, Jianjian; Liu, Yue; Gao, Yue; Guo, Maoni; Yue, Ming; Wang, Lihua; Li, Xia
2016-01-04
Lnc2Cancer (http://www.bio-bigdata.net/lnc2cancer) is a manually curated database of cancer-associated long non-coding RNAs (lncRNAs) with experimental support that aims to provide a high-quality and integrated resource for exploring lncRNA deregulation in various human cancers. LncRNAs represent a large category of functional RNA molecules that play a significant role in human cancers. A curated collection and summary of deregulated lncRNAs in cancer is essential to thoroughly understand the mechanisms and functions of lncRNAs. Here, we developed the Lnc2Cancer database, which contains 1057 manually curated associations between 531 lncRNAs and 86 human cancers. Each association includes lncRNA and cancer name, the lncRNA expression pattern, experimental techniques, a brief functional description, the original reference and additional annotation information. Lnc2Cancer provides a user-friendly interface to conveniently browse, retrieve and download data. Lnc2Cancer also offers a submission page for researchers to submit newly validated lncRNA-cancer associations. With the rapidly increasing interest in lncRNAs, Lnc2Cancer will significantly improve our understanding of lncRNA deregulation in cancer and has the potential to be a timely and valuable resource. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
Enabling Data-as-a-Service (DaaS) - Biggest Challenge of Geoscience Australia
NASA Astrophysics Data System (ADS)
Bastrakova, I.; Kemp, C.; Car, N. J.
2016-12-01
Geoscience Australia (GA) is recognised and respected as the national repository and steward of multiple data collections of national significance, providing geoscience information, services and capability to the Australian Government, industry and stakeholders. Provision of Data-as-a-Service is both GA's key responsibility and core business. Through the Science First Transformation Program, GA is undergoing a significant rethinking of its data architecture, curation and access to support its Digital Science capability, for which DaaS is both a dependency and an underpinning of the implementation. DaaS, being a service, means we can deliver its outputs in multiple ways, thus providing users with data on demand in ready-for-consumption forms. We can then reuse prebuilt data constructions to allow self-serviced integration of data underpinned by dynamic query tools. In GA's context, examples of DaaS are the Australian Geoscience Data Cube, the Foundation Spatial Data Framework and data served through several Virtual Laboratories. We have implemented a three-layered architecture for DaaS in order to store and manage the data while honouring the semantics of Scientific Data Models defined by subject matter experts and GA's Enterprise Data Architecture, as well as to retain that delivery flexibility. The foundation layer of DaaS is Canonical Datasets, which are optimised for long-term data stewardship and curation. Data is well structured, standardised, described and audited. All data creation and editing happen within this layer. The middle Data Transformation layer assists with transformation of data from Canonical Datasets to the Data Integration layer. It provides mechanisms for multi-format and multi-technology data transformation. The top Data Integration layer is optimised for data access. Data can be easily reused and repurposed; data formats made available are optimised for scientific computing and adjusted for access by multiple applications, tools and libraries. Moving to DaaS enables GA to increase data alertness, generate new capabilities and be prepared for emerging technological challenges.
OzFlux data: network integration from collection to curation
NASA Astrophysics Data System (ADS)
Isaac, Peter; Cleverly, James; McHugh, Ian; van Gorsel, Eva; Ewenz, Cacilia; Beringer, Jason
2017-06-01
Measurement of the exchange of energy and mass between the surface and the atmospheric boundary-layer by the eddy covariance technique has undergone great change in the last 2 decades. Early studies of these exchanges were confined to brief field campaigns in carefully controlled conditions followed by months of data analysis. Current practice is to run tower-based eddy covariance systems continuously over several years due to the need for continuous monitoring as part of a global effort to develop local-, regional-, continental- and global-scale budgets of carbon, water and energy. Efficient methods of processing the increased quantities of data are needed to maximise the time available for analysis and interpretation. Standardised methods are needed to remove differences in data processing as possible contributors to observed spatial variability. Furthermore, public availability of these data sets assists with undertaking global research efforts. The OzFlux data path has been developed (i) to provide a standard set of quality control and post-processing tools across the network, thereby facilitating inter-site integration and spatial comparisons; (ii) to increase the time available to researchers for analysis and interpretation by reducing the time spent collecting and processing data; (iii) to propagate both data and metadata to the final product; and (iv) to facilitate the use of the OzFlux data by adopting a standard file format and making the data available from web-based portals. Discovery of the OzFlux data set is facilitated through incorporation in FLUXNET data syntheses and the publication of collection metadata via the RIF-CS format. This paper serves two purposes. The first is to describe the data sets, along with their quality control and post-processing, for the other papers of this Special Issue. The second is to provide an example of one solution to the data collection and curation challenges that are encountered by similar flux tower networks worldwide.
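To make the idea of a standard, self-describing file format concrete, the short sketch below shows how a standardized flux-tower product might be read and summarized. It assumes the files are netCDF with a time coordinate and flux variables named Fc, Fe and Fh; the file name, variable names and layout are assumptions for illustration, not details taken from the paper.

```python
# Hedged sketch: read a standardized flux-tower netCDF file and compute monthly means.
# Assumes a "time" coordinate and hypothetical variable names Fc (CO2 flux),
# Fe (latent heat flux) and Fh (sensible heat flux).
import xarray as xr

ds = xr.open_dataset("ExampleSite_L3.nc")        # hypothetical file name
fluxes = ds[["Fc", "Fe", "Fh"]]                   # hypothetical variable names
monthly = fluxes.resample(time="1MS").mean()      # monthly means from sub-daily data
monthly.to_dataframe().to_csv("ExampleSite_monthly.csv")
```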
Defining the Chance of Statistical Cure Among Patients with Extrahepatic Biliary Tract Cancer.
Spolverato, Gaya; Bagante, Fabio; Ethun, Cecilia G; Poultsides, George; Tran, Thuy; Idrees, Kamran; Isom, Chelsea A; Fields, Ryan C; Krasnick, Bradley; Winslow, Emily; Cho, Clifford; Martin, Robert C G; Scoggins, Charles R; Shen, Perry; Mogal, Harveshp D; Schmidt, Carl; Beal, Eliza; Hatzaras, Ioannis; Shenoy, Rivfka; Maithel, Shishir K; Pawlik, Timothy M
2017-01-01
While surgery offers the best curative-intent treatment, many patients with biliary tract malignancies have poor long-term outcomes. We sought to apply a non-mixture cure model to calculate the cure fraction and the time to cure after surgery of patients with peri-hilar cholangiocarcinoma (PHCC) or gallbladder cancer (GBC). Using the Extrahepatic Biliary Malignancy Consortium, 576 patients who underwent curative-intent surgery for gallbladder carcinoma or peri-hilar cholangiocarcinoma between 1998 and 2014 at 10 major hepatobiliary institutions were identified and included in the analysis. A non-mixture cure model was adopted to compare mortality after surgery to the mortality expected for the general population matched by sex and age. The median and 5-year overall survival (OS) were 1.9 years (IQR, 0.9-4.9) and 23.9 % (95 % CI, 19.6-28.6). Among all patients with PHCC or GBC, the probability of being cured after surgery was 14.5 % (95 % CI, 8.7-23.2); the time to cure was 9.7 years and the median survival of uncured patients was 1.8 years. Determinants of cure probabilities included lymph node metastasis and CA 19.9 level (p ≤ 0.05). The cure fraction for patients with a CA 19.9 < 50 U/ml and no lymph node metastases was 39.0 %, versus only 5.1 % among patients with a CA 19.9 ≥ 50 U/ml who also had lymph node metastasis. Examining an "all comer" cohort, <15 % of patients with PHCC or GBC could be considered cured after surgery. Factors such as CA 19.9 level and lymph node metastasis independently predicted long-term outcome. Estimating the odds of statistical cure following surgery for biliary tract cancer can assist in decision-making as well as inform discussions around survivorship.
Defining the Chance of Statistical Cure Among Patients with Extrahepatic Biliary Tract Cancer
Spolverato, Gaya; Bagante, Fabio; Ethun, Cecilia G.; Poultsides, George; Tran, Thuy; Idrees, Kamran; Isom, Chelsea A.; Fields, Ryan C.; Krasnick, Bradley; Winslow, Emily; Cho, Clifford; Martin, Robert C. G.; Scoggins, Charles R.; Shen, Perry; Mogal, Harveshp D.; Schmidt, Carl; Beal, Eliza; Hatzaras, Ioannis; Shenoy, Rivfka; Maithel, Shishir K.; Pawlik, Timothy M.
2017-01-01
Background While surgery offers the best curative-intent treatment, many patients with biliary tract malignancies have poor long-term outcomes. We sought to apply a non-mixture cure model to calculate the cure fraction and the time to cure after surgery of patients with peri-hilar cholangiocarcinoma (PHCC) or gallbladder cancer (GBC). Methods Using the Extrahepatic Biliary Malignancy Consortium, 576 patients who underwent curative-intent surgery for gallbladder carcinoma or peri-hilar cholangiocarcinoma between 1998 and 2014 at 10 major hepatobiliary institutions were identified and included in the analysis. A non-mixture cure model was adopted to compare mortality after surgery to the mortality expected for the general population matched by sex and age. Results The median and 5-year overall survival (OS) were 1.9 years (IQR, 0.9–4.9) and 23.9 % (95 % CI, 19.6–28.6). Among all patients with PHCC or GBC, the probability of being cured after surgery was 14.5 % (95 % CI, 8.7–23.2); the time to cure was 9.7 years and the median survival of uncured patients was 1.8 years. Determinants of cure probabilities included lymph node metastasis and CA 19.9 level (p ≤ 0.05). The cure fraction for patients with a CA 19.9 < 50 U/ml and no lymph node metastases was 39.0 %, versus only 5.1 % among patients with a CA 19.9 ≥ 50 U/ml who also had lymph node metastasis. Conclusions Examining an “all comer” cohort, <15 % of patients with PHCC or GBC could be considered cured after surgery. Factors such as CA 19.9 level and lymph node metastasis independently predicted long-term outcome. Estimating the odds of statistical cure following surgery for biliary tract cancer can assist in decision-making as well as inform discussions around survivorship. PMID:27549595
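Both records above rely on a non-mixture cure model. For readers unfamiliar with the approach, a generic sketch of its form in a relative-survival setting follows; this is the standard textbook parameterization, not necessarily the exact specification or covariate model used in the study.

```latex
% Generic non-mixture (promotion-time) cure model in a relative-survival setting.
% S^*(t): expected survival of the matched general population
% F(t):   a proper distribution function describing excess (disease-related) mortality
% \pi:    the cure fraction
\[
  S_{\text{obs}}(t) \;=\; S^{*}(t)\,\pi^{F(t)}
  \;=\; S^{*}(t)\,\exp\{\ln(\pi)\,F(t)\},
  \qquad
  \lim_{t\to\infty} \frac{S_{\text{obs}}(t)}{S^{*}(t)} \;=\; \pi .
\]
```

In words, as follow-up time grows and F(t) approaches 1, the survival of the cohort relative to the matched general population levels off at the cure fraction π, and the time at which this plateau is effectively reached is reported as the time to cure.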
The Role of Content Aggregators In GEOValue
NASA Astrophysics Data System (ADS)
Wright, D. J.; Breyer, S.; Hogeweg, M.; Foust, J.; Jordan, L.; Plunkett, G.
2016-12-01
Data (aka content) in the form of numbers or layers, transformed into information by way of maps, images, graphs, charts, tables, and even stories, are foundational for a myriad of decision-makers. Recent advances in information technology, as well as civil remote sensing of the Earth, are rapidly allowing us to advance beyond mere static data collection and archiving, further enabling information awareness and understanding, and leading us towards knowledge and better decision making. However, such volumes, velocities, and varieties of data streams also bring with them serious dilemmas with regard to effective organization, cataloging, and easy access. This is where the role of the aggregator comes in: providing the necessary sustainability and reliability of information via proven, well-engineered platforms, with interoperability and openness guaranteed through the adoption of established standards. Information technology (IT) giants such as Google, Facebook, IBM and Apple are well known for aggregating just about every aspect of life in modern society, from our music to our mood swings. A use case of the Environmental Systems Research Institute (aka Esri) as a geospatial aggregator is presented. It has, over the years, compiled, assembled and produced a carefully curated library of public content into a global "Living Atlas of the World," organized into different themes such as Earth observation, transportation, demographics, natural hazards, ecological land units, elevation, and more. Among the many decision scenarios to be presented with this content are tracking sea ice in the Arctic, estimating potential impacts to shipping lanes or coastal infrastructure, and forecasting future conditions. Esri, as the main aggregator of Living Atlas content, continues to welcome not only contributors who will publish new content to be included in this global Atlas, but also fellow curators who will assist in reviewing, organizing, and even approving that content, thereby helping to increase and ensure its quality over time.
Kumar, Jayant; Reccia, Isabella; Sodergren, Mikael H; Kusano, Tomokazu; Zanellato, Artur; Pai, Madhava; Spalding, Duncan; Zacharoulis, Dimitris; Habib, Nagy
2018-03-20
Despite careful patient selection and preoperative investigations, the curative (R0) resection rate in pancreaticoduodenectomy ranges from 15% to 87%. Here we describe a new palliative approach for pancreaticoduodenectomy using a radiofrequency energy device to ablate tumor in situ in patients undergoing R1/R2 resections for locally advanced pancreatic ductal adenocarcinoma where vascular reconstruction was not feasible. There was neither postoperative mortality nor significant morbidity. In each case, the ablation lasted less than 15 minutes. Following radiofrequency ablation, it was observed that the tumor remnant attached to the vessel had shrunk significantly. In four patients this allowed easier separation and dissection of the ablated tumor from the adherent vessel, leading to R1 resection. In the other two patients, the ablated tumor did not separate from the vessel due to true tumor invasion, and these patients had an R2 resection. The ablated remnant part of the tumor was left in situ. Whenever pancreaticoduodenectomy with R0 resection cannot be achieved, this new palliative procedure could be considered in order to facilitate resection and enable maximum destruction of remnant tumor. Six patients with suspected tumor infiltration of the vessels, in whom vascular reconstruction was not warranted, underwent radiofrequency-assisted pancreaticoduodenectomy for locally advanced pancreatic ductal adenocarcinoma. Radiofrequency was applied across the tumor vertically 5-10 mm from the edge of the mesenteric and portal veins. Following ablation, the duodenum and the head of the pancreas were removed after knife excision along the ablated line. The remaining ablated tissue was left in situ attached to the vessel.
[Surgical treatment of pulmonary metastases from colon and rectal cancer].
Togashi, Ken-ichi; Aoki, K; Hirahara, H; Sugawara, M; Oguma, F
2004-09-01
We retrospectively studied surgical treatment of pulmonary metastases from colon and rectal cancer. A total of 24 patients (9 males and 15 females; mean age 61 years) underwent 29 thoracotomies for metastatic colon carcinoma, while 22 patients (16 males and 6 females; mean age 63 years) underwent 29 thoracotomies for metastatic rectal cancer. The median interval between the primary procedure and lung resection for metastases was 26 months in the patients with colon carcinoma and 32 months in the patients with rectal cancer. In the patients with colon carcinoma, 16 underwent wedge resection or segmentectomy (including 4 video-assisted procedures) and 13 (54%) underwent lobectomy or pneumonectomy. In the patients with rectal cancer, 15 underwent wedge resection or segmentectomy (including 1 video-assisted procedure), 13 (59%) underwent lobectomy or pneumonectomy, and 1 underwent exploratory thoracotomy. All procedures except exploratory thoracotomy were curative operations. There was no mortality. Overall 5-year survival was 56% (n=46). Five-year survival was 65% for patients with metastases from colon cancer (n=24) and 45% for patients with metastases from rectal cancer (n=22), with no significant difference. Sites of recurrence were lung (n=4, 36%), liver (n=4, 36%), bone (n=1), uterus (n=1), and peritoneum (n=1) in patients with colon carcinoma, and lung (n=10, 43%), brain (n=5, 22%), liver (n=3, 13%), bone (n=1), and vagina (n=1) in patients with rectal cancer. Pulmonary resection for metastases from colon carcinoma may carry a better prognosis than resection of metastases from rectal cancer. However, further investigation may be required to obtain convincing conclusions.
The Principles for Successful Scientific Data Management Revisited
NASA Astrophysics Data System (ADS)
Walker, R. J.; King, T. A.; Joy, S. P.
2005-12-01
It has been 23 years since the National Research Council's Committee on Data Management and Computation (CODMAC) published its famous list of principles for successful scientific data management that have provided the framework for modern space science data management. CODMAC outlined seven principles: 1. Scientific Involvement in all aspects of space science missions. 2. Scientific Oversight of all scientific data-management activities. 3. Data Availability - Validated data should be made available to the scientific community in a timely manner. They should include appropriate ancillary data, and complete documentation. 4. Facilities - A proper balance between cost and scientific productivity should be maintained. 5. Software - Transportable well documented software should be available to process and analyze the data. 6. Scientific Data Storage - The data should be preserved in retrievable form. 7. Data System Funding - Adequate data funding should be made available at the outset of missions and protected from overruns. In this paper we will review the lessons learned in trying to apply these principles to space derived data. The Planetary Data System created the concept of data curation to carry out the CODMAC principles. Data curators are scientists and technologists who work directly with the mission scientists to create data products. The efficient application of the CODMAC principles requires that data curators and the mission team start early in a mission to plan for data access and archiving. To build the data products the planetary discipline adopted data access and documentation standards and has adhered to them. The data curators and mission team work together to produce data products and make them available. However even with early planning and agreement on standards the needs of the science community frequently far exceed the available resources. This is especially true for smaller principal investigator run missions. We will argue that one way to make data systems for small missions more effective is for the data curators to provide software tools to help develop the mission data system.
NeuroTransDB: highly curated and structured transcriptomic metadata for neurodegenerative diseases.
Bagewadi, Shweta; Adhikari, Subash; Dhrangadhariya, Anjani; Irin, Afroza Khanam; Ebeling, Christian; Namasivayam, Aishwarya Alex; Page, Matthew; Hofmann-Apitius, Martin; Senger, Philipp
2015-01-01
Neurodegenerative diseases are chronic debilitating conditions, characterized by progressive loss of neurons, that represent a significant health care burden as the global elderly population continues to grow. Over the past decade, high-throughput technologies such as the Affymetrix GeneChip microarrays have provided new perspectives into the pathomechanisms underlying neurodegeneration. Public transcriptomic data repositories, namely Gene Expression Omnibus and curated ArrayExpress, enable researchers to conduct integrative meta-analyses, increasing the power to detect differentially regulated genes in disease and explore patterns of gene dysregulation across biologically related studies. The reliability of retrospective, large-scale integrative analyses depends on an appropriate combination of related datasets, in turn requiring detailed meta-annotations capturing the experimental setup. In most cases, we observe huge variation in compliance with defined standards for submitted metadata in public databases. Much of the information needed to complete or refine meta-annotations is distributed across the associated publications. For example, tissue preparation or comorbidity information is frequently described in an article's supplementary tables. Several value-added databases have employed additional manual efforts to overcome this limitation. However, none of these databases explicate annotations that distinguish human and animal models in the context of neurodegeneration. Therefore, adopting a more specific disease focus, in combination with dedicated disease ontologies, will better empower the selection of comparable studies with refined annotations to address the research question at hand. In this article, we describe the detailed development of NeuroTransDB, a manually curated database containing metadata annotations for neurodegenerative studies. The database contains more than 20 dimensions of metadata annotations within 31 mouse, 5 rat and 45 human studies, defined in collaboration with domain disease experts. We elucidate the step-by-step guidelines used to critically prioritize studies from public archives and to curate their metadata, and discuss the key challenges encountered. Curated metadata for Alzheimer's disease gene expression studies are available for download. Database URL: www.scai.fraunhofer.de/NeuroTransDB.html. © The Author(s) 2015. Published by Oxford University Press.
NeuroTransDB: highly curated and structured transcriptomic metadata for neurodegenerative diseases
Bagewadi, Shweta; Adhikari, Subash; Dhrangadhariya, Anjani; Irin, Afroza Khanam; Ebeling, Christian; Namasivayam, Aishwarya Alex; Page, Matthew; Hofmann-Apitius, Martin
2015-01-01
Neurodegenerative diseases are chronic debilitating conditions, characterized by progressive loss of neurons, that represent a significant health care burden as the global elderly population continues to grow. Over the past decade, high-throughput technologies such as the Affymetrix GeneChip microarrays have provided new perspectives into the pathomechanisms underlying neurodegeneration. Public transcriptomic data repositories, namely Gene Expression Omnibus and curated ArrayExpress, enable researchers to conduct integrative meta-analyses, increasing the power to detect differentially regulated genes in disease and explore patterns of gene dysregulation across biologically related studies. The reliability of retrospective, large-scale integrative analyses depends on an appropriate combination of related datasets, in turn requiring detailed meta-annotations capturing the experimental setup. In most cases, we observe huge variation in compliance with defined standards for submitted metadata in public databases. Much of the information needed to complete or refine meta-annotations is distributed across the associated publications. For example, tissue preparation or comorbidity information is frequently described in an article’s supplementary tables. Several value-added databases have employed additional manual efforts to overcome this limitation. However, none of these databases explicate annotations that distinguish human and animal models in the context of neurodegeneration. Therefore, adopting a more specific disease focus, in combination with dedicated disease ontologies, will better empower the selection of comparable studies with refined annotations to address the research question at hand. In this article, we describe the detailed development of NeuroTransDB, a manually curated database containing metadata annotations for neurodegenerative studies. The database contains more than 20 dimensions of metadata annotations within 31 mouse, 5 rat and 45 human studies, defined in collaboration with domain disease experts. We elucidate the step-by-step guidelines used to critically prioritize studies from public archives and to curate their metadata, and discuss the key challenges encountered. Curated metadata for Alzheimer’s disease gene expression studies are available for download. Database URL: www.scai.fraunhofer.de/NeuroTransDB.html PMID:26475471
Won, Young-Woong; Joo, Jungnam; Yun, Tak; Lee, Geon-Kook; Han, Ji-Youn; Kim, Heung Tae; Lee, Jin Soo; Kim, Moon Soo; Lee, Jong Mog; Lee, Hyun-Sung; Zo, Jae Ill; Kim, Sohee
2015-05-01
Development of brain metastasis results in a significant reduction in overall survival. However, there is no effective tool to predict brain metastasis in non-small cell lung cancer (NSCLC) patients. We conducted this study to develop a feasible nomogram that can predict metastasis to the brain as the first relapse site in patients with curatively resected NSCLC. A retrospective review of NSCLC patients who had received curative surgery at the National Cancer Center (Goyang, South Korea) between 2001 and 2008 was performed. We chose metastasis to the brain as the first relapse site after curative surgery as the primary endpoint of the study. A nomogram was modeled using logistic regression. Among 1218 patients, brain metastasis as the first relapse developed in 87 patients (7.14%) during a median follow-up of 43.6 months. Occurrence rates of brain metastasis were higher in patients with adenocarcinoma or in those with high pT and pN stages. Younger age appeared to be associated with brain metastasis, but this result was not statistically significant. The final prediction model included histology, smoking status, pT stage, and the interaction between adenocarcinoma and pN stage. The model showed fairly good discriminatory ability with a C-statistic of 69.3% and 69.8% for predicting brain metastasis within 2 years and 5 years, respectively. Internal validation using 2000 bootstrap samples resulted in C-statistics of 67.0% and 67.4%, which still indicated good discriminatory performance. The nomogram presented here provides the individual risk estimate of developing metastasis to the brain as the first relapse site in patients with NSCLC who have undergone curative surgery. Surveillance programs or preventive treatment strategies for brain metastasis could be established based on this nomogram. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
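As an illustrative companion to the modelling described above, the sketch below fits a logistic model containing an interaction term and estimates its discriminatory ability (C-statistic) with a simple bootstrap. The predictor names, coefficients and synthetic data are hypothetical stand-ins; the published nomogram's actual variables, coefficients and validation procedure are those reported in the paper.

```python
# Hedged sketch: logistic model with a histology-by-pN interaction, plus a
# bootstrap estimate of the C-statistic. Data below are synthetic illustrations.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1218
df = pd.DataFrame({
    "adenocarcinoma": rng.integers(0, 2, n),
    "smoker": rng.integers(0, 2, n),
    "pT": rng.integers(1, 5, n),
    "pN": rng.integers(0, 3, n),
})
df["adeno_x_pN"] = df["adenocarcinoma"] * df["pN"]     # interaction term
# synthetic outcome with a low event rate, purely for illustration
logit = -3.4 + 0.5 * df["adenocarcinoma"] + 0.3 * df["pT"] + 0.4 * df["adeno_x_pN"]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = LogisticRegression(max_iter=1000).fit(df, y)
apparent_c = roc_auc_score(y, model.predict_proba(df)[:, 1])

# bootstrap internal validation: refit on resamples, score on the original data
boot_c = []
for _ in range(200):                                   # 2000 resamples in the paper
    idx = rng.integers(0, n, n)
    m = LogisticRegression(max_iter=1000).fit(df.iloc[idx], y[idx])
    boot_c.append(roc_auc_score(y, m.predict_proba(df)[:, 1]))

print(f"apparent C-statistic: {apparent_c:.3f}")
print(f"bootstrap mean C-statistic: {np.mean(boot_c):.3f}")
```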
Morbidity of curative cancer surgery and suicide risk.
Jayakrishnan, Thejus T; Sekigami, Yurie; Rajeev, Rahul; Gamblin, T Clark; Turaga, Kiran K
2017-11-01
Curative cancer operations lead to debility and loss of autonomy in a population vulnerable to suicide death. The extent to which operative intervention impacts suicide risk is not well studied. To examine the effects of morbidity of curative cancer surgeries and prognosis of disease on the risk of suicide in patients with solid tumors. Retrospective cohort study using Surveillance, Epidemiology, and End Results data from 2004 to 2011; multilevel systematic review. General US population. Participants were 482 781 patients diagnosed with malignant neoplasm between 2004 and 2011 who underwent curative cancer surgeries. Death by suicide or self-inflicted injury. Among 482 781 patients who underwent curative cancer surgery, 231 committed suicide (16.58/100 000 person-years [95% confidence interval, CI, 14.54-18.82]). Factors significantly associated with suicide risk included male sex (incidence rate [IR], 27.62; 95% CI, 23.82-31.86) and age >65 years (IR, 22.54; 95% CI, 18.84-26.76). When stratified by 30-day overall postoperative morbidity, a significantly higher incidence of suicide was found for high-morbidity surgeries (IR, 33.30; 95% CI, 26.50-41.33) vs moderate morbidity (IR, 24.27; 95% CI, 18.92-30.69) and low morbidity (IR, 9.81; 95% CI, 7.90-12.04). Each unit increase in morbidity was significantly associated with death by suicide (odds ratio, 1.01; 95% CI, 1.00-1.03; P = .02) and with decreased suicide-specific survival (hazard ratio, 1.02; 95% CI, 1.00-1.03; P = .01) in prognosis-adjusted models. In this sample of cancer patients in the Surveillance, Epidemiology, and End Results database, patients who undergo high-morbidity surgeries appear most vulnerable to death by suicide. The identification of this high-risk cohort should motivate health care providers, and particularly surgeons, to adopt screening measures during the postoperative follow-up period for these patients. Copyright © 2016 John Wiley & Sons, Ltd.
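For readers who want to reproduce the style of rate reported above (events per 100,000 person-years with a confidence interval), a minimal sketch follows. The person-years figure is back-calculated from the reported rate purely for illustration and is not stated in the abstract.

```python
# Hedged sketch: person-time incidence rate with an exact (Garwood) Poisson CI.
from scipy.stats import chi2

def incidence_rate(events, person_years, per=100_000, alpha=0.05):
    """Return the rate per `per` person-years with an exact Poisson CI."""
    rate = events / person_years * per
    lo = chi2.ppf(alpha / 2, 2 * events) / 2 / person_years * per
    hi = chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / 2 / person_years * per
    return rate, lo, hi

# 231 suicide deaths; roughly 1.39 million person-years are implied by the
# reported rate of 16.58 per 100,000 person-years (illustrative back-calculation).
print(incidence_rate(events=231, person_years=1_393_000))
```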
NASA Technical Reports Server (NTRS)
Blumenfeld, E. H.; Evans, C. A.; Oshel, E. R.; Liddle, D. A.; Beaulieu, K.; Zeigler, R. A.; Hanna, R. D.; Ketcham, R. A.
2015-01-01
Established contemporary conservation methods within the fields of Natural and Cultural Heritage encourage an interdisciplinary approach to preservation of heritage material (both tangible and intangible) that holds "Outstanding Universal Value" for our global community. NASA's lunar samples were acquired from the Moon for the primary purpose of intensive scientific investigation. These samples, however, also invoke cultural significance, as evidenced by the millions of people per year who visit lunar displays in museums and heritage centers around the world. Being both scientifically and culturally significant, the lunar samples require a unique conservation approach. Government mandate dictates that NASA's Astromaterials Acquisition and Curation Office develop and maintain protocols for "documentation, preservation, preparation and distribution of samples for research, education and public outreach" for both current and future collections of astromaterials. Documentation, considered the first stage within the conservation methodology, has evolved many new techniques since curation protocols for the lunar samples were first implemented, and the development of new documentation strategies for current and future astromaterials is beneficial to keeping curation protocols up to date. We have developed and tested a comprehensive non-destructive documentation technique using high-resolution image-based 3D reconstruction and X-ray CT (XCT) data in order to create interactive 3D models of lunar samples that would ultimately be served to both researchers and the public. These data enhance preliminary scientific investigations including targeted sample requests, and also provide a new visual platform for the public to experience and interact with the lunar samples. We intend to serve these data as they are acquired on NASA's Astromaterials Acquisition and Curation website at http://curator.jsc.nasa.gov/. Providing 3D interior and exterior documentation of astromaterial samples addresses the increasing demand for accessibility of data and contemporary techniques for documentation, which can be realized for both current collections and future sample return missions.
Interoperability Across the Stewardship Spectrum in the DataONE Repository Federation
NASA Astrophysics Data System (ADS)
Jones, M. B.; Vieglais, D.; Wilson, B. E.
2016-12-01
Thousands of earth and environmental science repositories serve many researchers and communities, each with its own community and legal mandates, sustainability models, and historical infrastructure. These repositories span the stewardship spectrum from highly curated collections that employ large numbers of staff members to review and improve data, to small, minimal-budget repositories that accept data caveat emptor and where all responsibility for quality lies with the submitter. Each repository fills a niche, providing services that meet the stewardship tradeoffs of one or more communities. We have reviewed these stewardship tradeoffs for several DataONE member repositories ranging from minimally (KNB) to highly curated (Arctic Data Center), as well as general purpose (Dryad) to highly discipline or project specific (NEON). The rationale behind different levels of stewardship reflects the resolution of these tradeoffs. Some repositories aim to encourage extensive uptake by keeping processes simple and minimizing the amount of information collected, but this limits the long-term utility of the data and the search, discovery, and integration systems that are possible. Other repositories require extensive metadata input, review, and assessment, allowing for excellent preservation, discovery, and integration but at the cost of significant time for submitters and expense for curatorial staff. DataONE recognizes these different levels of curation, and attempts to embrace them to create a federation that is useful across the stewardship spectrum. DataONE provides a tiered model for repositories with growing utility of DataONE services at higher tiers of curation. The lowest tier supports read-only access to data and requires little more than title and contact metadata. Repositories can gradually phase in support for higher levels of metadata and services as needed. These tiered capabilities are possible through flexible support for multiple metadata standards and services, where repositories can incrementally increase their requirements as they want to satisfy more use cases. Within DataONE, metadata search services support minimal metadata models, but significantly expanded precision and recall become possible when repositories provide more extensively curated metadata.
Li-Shuai, Qu; Yu-Yan, Chen; Hai-Feng, Zhang; Jin-Xia, Liu; Cui-Hua, Lu
2017-01-01
The relationship between hepatitis B virus (HBV) and the prognosis of hepatocellular carcinoma (HCC) after surgery remains uncertain. A retrospective cohort study was performed to evaluate the impact of pre-S deletions, T1762/A1764, and A1896 mutations on the prognosis of HCC after curative resection. A total of 113 patients with positive serum HBV DNA (>200 IU/mL) who had undergone curative resection of pathologically proven HCC were recruited to determine the risk factors affecting the prognosis. The median follow-up time was 36.5 months, and recurrence was detected in 67 patients (59.3%). The cumulative recurrence rates and overall survival rates at 1, 3, and 5 years after curative resection were 18.0%, 49.7%, and 70.3%, and 93.7%, 61.0%, and 42.5%, respectively. Patients with pre-S deletions showed significantly higher recurrence rates compared with those with wild-type infection (HR: 1.822, P = .018), but pre-S deletions were not associated with significantly poorer survival (HR: 1.388, P = .235). Subgroup analysis indicated that patients with a type III deletion had significantly higher tumor recurrence rates than those with other deletion types (HR: 2.211, 95% confidence interval [CI]: 1.008–4.846, P = .048). Multivariate analysis revealed that pre-S deletion, tumor size >3 cm in diameter, and the presence of microvascular invasion were independent risk factors for tumor recurrence. HBV pre-S deletions were found to be clustered primarily at the 5′ end of the pre-S2 region and were more often found between amino acids 120 and 142 of the pre-S2 domain. The domains most frequently potentially involved were the transactivator domain in pre-S2 and the polymerized human serum albumin binding site. Our cohort showed that pre-S deletions at the time of resection could predict tumor recurrence in HCC patients after curative resection. PMID:29069001
[Lettuce, Lactuca sp., as a medicinal plant in Polish publications of the 19th century].
Trojanowska, Anna
2005-01-01
Mentions of lettuce, Lactuca sp., that have appeared since antiquity contained similar information on its curative properties, but such properties were ascribed to different species or varieties. Apart from the wild, poisonous lettuce, garden or common lettuce was also identified as having curative action, and some publications lacked information enabling the precise identification of the lettuce in question. In the 19th century, attempts were made to put some order into the knowledge of lettuce as a medicinal plant. Information contained in Polish medical studies of the 19th century on lettuce points to the poisonous species, Lactuca virosa, and the common or garden lettuce, Lactuca sativa v. Lactuca hortensis, as being used as medicinal plants. In that period, lettuce, and especially the desiccated lactescent juice obtained from it, lactucarium, were considered to be an intoxicant, and were used as a sedative and an analgesic. The action of the substance was weaker than that of opium but free of its side effects, and medical practice showed that in some cases lactucarium produced better curative effects than opium. To corroborate those properties of lettuce and its lactescent juice, studies were undertaken to find the substance responsible for the curative effects of the juice. However, such studies failed to produce the expected results, and the component responsible for the curative properties of lettuce was not identified. Medical practice thus had to restrict itself to the use of the desiccated lactescent juice and extracts obtained from it. The possibility of obtaining lactucarium from plants cultivated in Poland caused Polish pharmacists and physicians to take an interest in the substance and to launch their own research on lettuce and the lactescent juice obtained from it. Results of research on lettuce were published in 19th-century journals by, among others, Jan Fryderyk Wolfgang, Florian Sawiczewski and Józef Orkisz.
Data Curation Education in Research Centers (DCERC)
NASA Astrophysics Data System (ADS)
Marlino, M. R.; Mayernik, M. S.; Kelly, K.; Allard, S.; Tenopir, C.; Palmer, C.; Varvel, V. E., Jr.
2012-12-01
Digital data both enable and constrain scientific research. Scientists are enabled by digital data to develop new research methods, utilize new data sources, and investigate new topics, but they also face new data collection, management, and preservation burdens. The current data workforce consists primarily of scientists who receive little formal training in data management and data managers who are typically educated through on-the-job training. The Data Curation Education in Research Centers (DCERC) program is investigating a new model for educating data professionals to contribute to scientific research. DCERC is a collaboration between the University of Illinois at Urbana-Champaign Graduate School of Library and Information Science, the University of Tennessee School of Information Sciences, and the National Center for Atmospheric Research. The program is organized around a foundations course in data curation and provides field experiences in research and data centers for both master's and doctoral students. This presentation will outline the aims and the structure of the DCERC program and discuss results and lessons learned from the first set of summer internships in 2012. Four master's students participated and worked with both data mentors and science mentors, gaining first-hand experience in the issues, methods, and challenges of scientific data curation. They engaged in a diverse set of topics, including climate model metadata, observational data management workflows, and data cleaning, documentation, and ingest processes within a data archive. The students learned current data management practices and challenges while developing expertise and conducting research. They also made important contributions to NCAR data and science teams by evaluating data management workflows and processes, preparing data sets to be archived, and developing recommendations for particular data management activities. The master's student interns will return in the summer of 2013, and two Ph.D. students will conduct data curation-related dissertation fieldwork during the 2013-2014 academic year.
NetPath: a public resource of curated signal transduction pathways
2010-01-01
We have developed NetPath as a resource of curated human signaling pathways. As an initial step, NetPath provides detailed maps of a number of immune signaling pathways, which include approximately 1,600 reactions annotated from the literature and more than 2,800 instances of transcriptionally regulated genes - all linked to over 5,500 published articles. We anticipate NetPath to become a consolidated resource for human signaling pathways that should enable systems biology approaches. PMID:20067622
2013-01-01
Background It is widely recognized that spiritual care plays an important role in the physical and psychosocial well-being of cancer patients, but there is little evidence-based research on the effects of spiritual care. We will conduct a randomized controlled trial on spiritual care using a brief structured interview scheme supported by an e-application. The aim is to examine whether an assisted reflection on life events and ultimate life goals can improve the quality of life of cancer patients. Methods/Design Based on the findings of our previous research, we have developed a brief interview model that allows spiritual counsellors to explore, explicate and discuss life events and ultimate life goals with cancer patients. To support the interview, we created an e-application for a PC or tablet. To examine whether this assisted reflection improves quality of life, we will conduct a randomized trial. Patients with advanced cancer not amenable to curative treatment options will be randomized to either the intervention or the control group. The intervention group will have two consultations with a spiritual counsellor using the interview scheme supported by the e-application. The control group will receive care as usual. At baseline and at one and three months after randomization, all patients will complete questionnaires regarding quality of life, spiritual well-being, empowerment, satisfaction with life, anxiety and depression, and health care consumption. Discussion Having insight into one's ultimate life goals may help integrate a life event such as cancer into one's life story. This is the first randomized controlled trial to evaluate the role of an assisted structured reflection on ultimate life goals to improve patients' quality of life and spiritual well-being. The intervention is brief and based on concepts and skills that spiritual counsellors are familiar with; it can be easily implemented in routine patient care and incorporated into guidelines on spiritual care. Trial registration The study is registered at ClinicalTrials.gov: NCT01830075 PMID:23889978
NASA Technical Reports Server (NTRS)
Zolensky, Michael E.
2011-01-01
I describe lessons learned from my participation on the Hayabusa Mission, which returned regolith grains from asteroid Itokawa in 2010 [1], comparing this with the recently returned Stardust spacecraft, which sampled the Jupiter Family comet Wild 2. Spacecraft Recovery Operations: The mission Science and Curation teams must actively participate in planning, testing and implementing spacecraft recovery operations. The crash of the Genesis spacecraft underscored the importance of thinking through multiple contingency scenarios and practicing field recovery for these potential circumstances. Having the contingency supplies on-hand was critical, and at least one full year of planning for Stardust and Hayabusa recovery operations was necessary. Care must be taken to coordinate recovery operations with local organizations and inform relevant government bodies well in advance. Recovery plans for both Stardust and Hayabusa had to be adjusted for unexpectedly wet landing site conditions. Documentation of every step of spacecraft recovery and deintegration was necessary, and collection and analysis of launch and landing site soils was critical. We found the operation of the Woomera Test Range (South Australia) to be excellent in the case of Hayabusa, and in many respects this site is superior to the Utah Test and Training Range (used for Stardust) in the USA. Recovery operations for all recovered spacecraft suffered from the lack of a hermetic seal for the samples. Mission engineers should be pushed to provide hermetic seals for returned samples. Sample Curation Issues: More than two full years were required to prepare curation facilities for Stardust and Hayabusa. Despite this seemingly adequate lead time, major changes to curation procedures were required once the actual state of the returned samples became apparent. Sample databases must be fully implemented before sample return; for Stardust, we did not adequately think through all of the possible subsampling and analytical activities before settling on a database design - Hayabusa has done a better job of this. Also, analysis teams must not be permitted to devise their own sample naming schemes. The sample handling and storage facilities for Hayabusa are the finest that exist, and we are now modifying Stardust curation to take advantage of the Hayabusa facilities. Remote storage of a sample subset is desirable. Preliminary Examination (PE) of Samples: There must be some determination of the state and quantity of the returned samples, to provide a necessary guide to persons requesting samples and oversight committees tasked with sample curation oversight. Hayabusa's sample PE, which is called HASPET, was designed so that late additions to the analysis protocols were possible, as new analytical techniques became available. A small but representative number of recovered grains are being subjected to in-depth characterization. The bulk of the recovered samples are being left untouched, to limit contamination. The HASPET plan takes maximum advantage of the unique strengths of sample return missions.
Quality of Computationally Inferred Gene Ontology Annotations
Škunca, Nives; Altenhoff, Adrian; Dessimoz, Christophe
2012-01-01
Gene Ontology (GO) has established itself as the undisputed standard for protein function annotation. Most annotations are inferred electronically, i.e. without individual curator supervision, but they are widely considered unreliable. At the same time, we crucially depend on those automated annotations, as most newly sequenced genomes are non-model organisms. Here, we introduce a methodology to systematically and quantitatively evaluate electronic annotations. By exploiting changes in successive releases of the UniProt Gene Ontology Annotation database, we assessed the quality of electronic annotations in terms of specificity, reliability, and coverage. Overall, we not only found that electronic annotations have significantly improved in recent years, but also that their reliability now rivals that of annotations inferred by curators when they use evidence other than experiments from primary literature. This work provides the means to identify the subset of electronic annotations that can be relied upon—an important outcome given that >98% of all annotations are inferred without direct curation. PMID:22693439
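The core of the methodology above is to follow electronic annotations across successive releases of the UniProt GOA database and see whether they are later supported by experimental evidence. A simplified sketch of that idea is given below; the file names are placeholders, and this is not the paper's full methodology, only the bookkeeping step of comparing two GAF releases (tab-separated files in which column 2 is the protein accession, column 5 the GO term and column 7 the evidence code).

```python
# Hedged sketch: track what happens to electronic (IEA) annotations between
# two GO annotation (GAF) releases. File names below are placeholders.
import csv

EXPERIMENTAL = {"EXP", "IDA", "IPI", "IMP", "IGI", "IEP"}

def read_gaf(path):
    """Return {(protein accession, GO term): set of evidence codes} from a GAF file."""
    annotations = {}
    with open(path) as handle:
        for row in csv.reader(handle, delimiter="\t"):
            if not row or row[0].startswith("!") or len(row) < 7:
                continue                      # skip header/comment/short lines
            key = (row[1], row[4])
            annotations.setdefault(key, set()).add(row[6])
    return annotations

old = read_gaf("goa_release_old.gaf")         # placeholder file names
new = read_gaf("goa_release_new.gaf")

iea_old = {k for k, ev in old.items() if "IEA" in ev}
confirmed = sum(1 for k in iea_old if EXPERIMENTAL & new.get(k, set()))
removed = sum(1 for k in iea_old if k not in new)
print(f"electronic annotations in old release: {len(iea_old)}")
print(f"later supported by experimental evidence: {confirmed}")
print(f"no longer present in the new release: {removed}")
```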
Sun, Da-Xin; Tan, Xiao-Dong; Gao, Feng; Xu, Jin; Cui, Dong-Xu; Dai, Xian-Wei
2015-01-01
Background Postoperative bile leak is a major surgical morbidity after curative resection with hepaticojejunostomy for hilar cholangiocarcinoma, especially in Bismuth-Corlette types III and IV. This retrospective study assessed the effectiveness and safety of an autologous hepatic round ligament flap (AHRLF) for reducing bile leak after hilar hepaticojejunostomy. Methods Nine type III and IV hilar cholangiocarcinoma patients were consecutively hospitalized for elective perihilar partial hepatectomy with hilar hepaticojejunostomy using an AHRLF between October 2009 and September 2013. The AHRLF was harvested to reinforce the perihilar hepaticojejunostomy. Main outcome measures included operative time, blood loss, postoperative recovery times, morbidity, bile leak, R0 resection rate, and overall survival. Results All patients underwent uneventful R0 resection with hilar hepaticojejunostomy. No patient experienced postoperative bile leak. Conclusions The AHRLF was associated with the absence of postoperative bile leak after curative perihilar hepatectomy with hepaticojejunostomy for hilar cholangiocarcinoma, without compromising oncologic safety, and is recommended in selected patients. PMID:25938440
Virtual Collections: An Earth Science Data Curation Service
NASA Astrophysics Data System (ADS)
Bugbee, K.; Ramachandran, R.; Maskey, M.; Gatlin, P. N.
2016-12-01
The role of Earth science data centers has traditionally been to maintain central archives that serve openly available Earth observation data. However, in order to ensure data are as useful as possible to a diverse user community, Earth science data centers must move beyond simply serving as an archive to offering innovative data services to user communities. A virtual collection, the end product of a curation activity that searches, selects, and synthesizes diffuse data and information resources around a specific topic or event, is a data curation service that improves the discoverability, accessibility and usability of Earth science data and also supports the needs of unanticipated users. Virtual collections minimize the amount of time and effort needed to begin research by maximizing certainty of reward and by providing a trustworthy source of data for unanticipated users. This presentation will define a virtual collection in the context of an Earth science data center and will highlight a virtual collection case study created at the Global Hydrology Resource Center data center.
A large-scale solar dynamics observatory image dataset for computer vision applications.
Kucuk, Ahmet; Banda, Juan M; Angryk, Rafal A
2017-01-01
The National Aeronautics and Space Administration (NASA) Solar Dynamics Observatory (SDO) mission has given us unprecedented insight into the Sun's activity. By capturing approximately 70,000 images a day, this mission has created one of the richest and biggest repositories of solar image data available to mankind. With such massive amounts of information, researchers have been able to produce great advances in detecting solar events. In this resource, we compile SDO solar data into a single repository in order to provide the computer vision community with a standardized and curated large-scale dataset of several hundred thousand solar events found on high-resolution solar images. This publicly available resource, along with the generation source code, will accelerate computer vision research on NASA's solar image data by reducing the amount of time spent performing data acquisition and curation from the multiple sources we have compiled. By improving the quality of the data with thorough curation, we anticipate a wider adoption and interest from both the computer vision and solar physics communities.
BioModels: expanding horizons to include more modelling approaches and formats
Nguyen, Tung V N; Graesslin, Martin; Hälke, Robert; Ali, Raza; Schramm, Jochen; Wimalaratne, Sarala M; Kothamachu, Varun B; Rodriguez, Nicolas; Swat, Maciej J; Eils, Jurgen; Eils, Roland; Laibe, Camille; Chelliah, Vijayalakshmi
2018-01-01
Abstract BioModels serves as a central repository of mathematical models representing biological processes. It offers a platform to make mathematical models easily shareable across the systems modelling community, thereby supporting model reuse. To facilitate hosting a broader range of model formats derived from diverse modelling approaches and tools, a new infrastructure for BioModels has been developed that is available at http://www.ebi.ac.uk/biomodels. This new system allows submitting and sharing of a wide range of models with improved support for formats other than SBML. It also offers a version-control backed environment in which authors and curators can work collaboratively to curate models. This article summarises the features available in the current system and discusses the potential benefit they offer to the users over the previous system. In summary, the new portal broadens the scope of models accepted in BioModels and supports collaborative model curation which is crucial for model reproducibility and sharing. PMID:29106614
Virtual Collections: An Earth Science Data Curation Service
NASA Technical Reports Server (NTRS)
Bugbee, Kaylin; Ramachandran, Rahul; Maskey, Manil; Gatlin, Patrick
2016-01-01
The role of Earth science data centers has traditionally been to maintain central archives that serve openly available Earth observation data. However, in order to ensure data are as useful as possible to a diverse user community, Earth science data centers must move beyond simply serving as an archive to offering innovative data services to user communities. A virtual collection, the end product of a curation activity that searches, selects, and synthesizes diffuse data and information resources around a specific topic or event, is a data curation service that improves the discoverability, accessibility, and usability of Earth science data and also supports the needs of unanticipated users. Virtual collections minimize the amount of time and effort needed to begin research by maximizing certainty of reward and by providing a trustworthy source of data for unanticipated users. This presentation will define a virtual collection in the context of an Earth science data center and will highlight a virtual collection case study created at the Global Hydrology Resource Center data center.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karcher, Sandra; Willighagen, Egon L.; Rumble, John
Many groups within the broad field of nanoinformatics are already developing data repositories and analytical tools driven by their individual organizational goals. Integrating these data resources across disciplines and with non-nanotechnology resources can support multiple objectives by enabling the reuse of the same information. Integration can also serve as the impetus for novel scientific discoveries by providing the framework to support deeper data analyses. This article discusses current data integration practices in nanoinformatics and in comparable mature fields, and nanotechnology-specific challenges impacting data integration. Based on results from a nanoinformatics-community-wide survey, recommendations for achieving integration of existing operational nanotechnology resources are presented. Nanotechnology-specific data integration challenges, if effectively resolved, can foster the application and validation of nanotechnology within and across disciplines. This paper is one of a series of articles by the Nanomaterial Data Curation Initiative that address data issues such as data curation workflows, data completeness and quality, curator responsibilities, and metadata.
Saccharomyces genome database informs human biology.
Skrzypek, Marek S; Nash, Robert S; Wong, Edith D; MacPherson, Kevin A; Hellerstedt, Sage T; Engel, Stacia R; Karra, Kalpana; Weng, Shuai; Sheppard, Travis K; Binkley, Gail; Simison, Matt; Miyasato, Stuart R; Cherry, J Michael
2018-01-04
The Saccharomyces Genome Database (SGD; http://www.yeastgenome.org) is an expertly curated database of literature-derived functional information for the model organism budding yeast, Saccharomyces cerevisiae. SGD constantly strives to synergize new types of experimental data and bioinformatics predictions with existing data, and to organize them into a comprehensive and up-to-date information resource. The primary mission of SGD is to facilitate research into the biology of yeast and to provide this wealth of information to advance, in many ways, research on other organisms, even those as evolutionarily distant as humans. To build such a bridge between biological kingdoms, SGD is curating data regarding yeast-human complementation, in which a human gene can successfully replace the function of a yeast gene, and/or vice versa. These data are manually curated from published literature, made available for download, and incorporated into a variety of analysis tools provided by SGD. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
CDD/SPARCLE: functional classification of proteins via subfamily domain architectures.
Marchler-Bauer, Aron; Bo, Yu; Han, Lianyi; He, Jane; Lanczycki, Christopher J; Lu, Shennan; Chitsaz, Farideh; Derbyshire, Myra K; Geer, Renata C; Gonzales, Noreen R; Gwadz, Marc; Hurwitz, David I; Lu, Fu; Marchler, Gabriele H; Song, James S; Thanki, Narmada; Wang, Zhouxi; Yamashita, Roxanne A; Zhang, Dachuan; Zheng, Chanjuan; Geer, Lewis Y; Bryant, Stephen H
2017-01-04
NCBI's Conserved Domain Database (CDD) aims at annotating biomolecular sequences with the location of evolutionarily conserved protein domain footprints, and functional sites inferred from such footprints. An archive of pre-computed domain annotation is maintained for proteins tracked by NCBI's Entrez database, and live search services are offered as well. CDD curation staff supplements a comprehensive collection of protein domain and protein family models, which have been imported from external providers, with representations of selected domain families that are curated in-house and organized into hierarchical classifications of functionally distinct families and sub-families. CDD also supports comparative analyses of protein families via conserved domain architectures, and a recent curation effort focuses on providing functional characterizations of distinct subfamily architectures using SPARCLE: Subfamily Protein Architecture Labeling Engine. CDD can be accessed at https://www.ncbi.nlm.nih.gov/Structure/cdd/cdd.shtml. Published by Oxford University Press on behalf of Nucleic Acids Research 2016. This work is written by (a) US Government employee(s) and is in the public domain in the US.
Hydronephrosis does not preclude curative resection of pelvic recurrences after colorectal surgery.
Henry, Leonard R; Sigurdson, Elin; Ross, Eric; Hoffman, John P
2005-10-01
In one third of patients who die of rectal cancer, a pelvic recurrence after resection represents isolated disease for which re-resection may provide cure. These extensive resections can carry high morbidity. Proper patient selection is desirable but difficult. Hydronephrosis has been documented previously to portend a poor prognosis, and some consider it a contraindication to attempted resection. It was our goal to review our experience and either confirm or refute these conclusions. We performed a retrospective analysis of 90 patients resected with curative intent for pelvic recurrence at our center from 1988 through 2003. Seventy-one records documented the preoperative presence or absence of hydronephrosis. Clinical and pathologic data were recorded. The groups with and without hydronephrosis were compared. There were 15 patients with hydronephrosis in this study and 56 without. Although patients with hydronephrosis had shorter overall survival, disease-free survival, and rate of local control, none of these differences was statistically significant. Patients in the hydronephrosis group were younger and had higher-stage primary tumors and larger recurrent tumors. Subsequently, they underwent more extensive resections and were more likely to be treated with adjuvant therapies. There was no difference in the rate of margin-negative resections between the groups. Hydronephrosis correlates with younger patients with larger recurrent tumors undergoing more extensive operations and multimodality therapy but does not preclude curative (R0) resection or independently affect overall survival, disease-free survival, or local control. We believe that it should not be considered a contraindication to attempting curative resection.
Alasmary, Fatmah A S; Awaad, Amani S; Alafeefy, Ahmed M; El-Meligy, Reham M; Alqasoumi, Saleh I
2018-01-01
Two novel quinazoline derivatives, 3-[(4-hydroxy-3-methoxy-benzylidene)-amino]-2-p-tolyl-3H-quinazolin-4-one (5) and 2-p-tolyl-3-[3,4,5-trimethoxy-benzylidene-amino]-3H-quinazolin-4-one (6), in addition to one acetamide derivative, 2-(2-hydroxycarbonylphenylamino)-N-(4-aminosulphonylphenyl) (11), were synthesized and evaluated for their anti-ulcerogenic and anti-ulcerative colitis activities. All three compounds showed curative activity against the acetic acid-induced ulcer model at a dose of 50 mg/kg, producing curative ratios of 65%, 85%, and 57.74% for compounds 5, 6, and 11, respectively. At 50 mg/kg, compounds 5, 6, and 11 were significantly (P < 0.01) more effective than dexamethasone (0.1 mg/kg) in reducing all parameters. The compounds also showed curative activity in a peptic ulcer model induced by absolute alcohol: at a dose of 50 mg/kg they produced curative ratios relative to control ulcers of 56.00%, 61.70%, and 87.1% for compounds 5, 6, and 11, respectively, while the standard drug (omeprazole, 20 mg/kg) produced 33.3%. In both tests, the activity of the target compounds was higher than that of the standard drugs used for the treatment of peptic ulcer and ulcerative colitis. No side effects on liver or kidney function were observed upon prolonged oral administration of these compounds.
O'Leary, Nuala A; Wright, Mathew W; Brister, J Rodney; Ciufo, Stacy; Haddad, Diana; McVeigh, Rich; Rajput, Bhanu; Robbertse, Barbara; Smith-White, Brian; Ako-Adjei, Danso; Astashyn, Alexander; Badretdin, Azat; Bao, Yiming; Blinkova, Olga; Brover, Vyacheslav; Chetvernin, Vyacheslav; Choi, Jinna; Cox, Eric; Ermolaeva, Olga; Farrell, Catherine M; Goldfarb, Tamara; Gupta, Tripti; Haft, Daniel; Hatcher, Eneida; Hlavina, Wratko; Joardar, Vinita S; Kodali, Vamsi K; Li, Wenjun; Maglott, Donna; Masterson, Patrick; McGarvey, Kelly M; Murphy, Michael R; O'Neill, Kathleen; Pujar, Shashikant; Rangwala, Sanjida H; Rausch, Daniel; Riddick, Lillian D; Schoch, Conrad; Shkeda, Andrei; Storz, Susan S; Sun, Hanzhen; Thibaud-Nissen, Francoise; Tolstoy, Igor; Tully, Raymond E; Vatsan, Anjana R; Wallin, Craig; Webb, David; Wu, Wendy; Landrum, Melissa J; Kimchi, Avi; Tatusova, Tatiana; DiCuccio, Michael; Kitts, Paul; Murphy, Terence D; Pruitt, Kim D
2016-01-04
The RefSeq project at the National Center for Biotechnology Information (NCBI) maintains and curates a publicly available database of annotated genomic, transcript, and protein sequence records (http://www.ncbi.nlm.nih.gov/refseq/). The RefSeq project leverages the data submitted to the International Nucleotide Sequence Database Collaboration (INSDC) against a combination of computation, manual curation, and collaboration to produce a standard set of stable, non-redundant reference sequences. The RefSeq project augments these reference sequences with current knowledge including publications, functional features and informative nomenclature. The database currently represents sequences from more than 55,000 organisms (>4800 viruses, >40,000 prokaryotes and >10,000 eukaryotes; RefSeq release 71), ranging from a single record to complete genomes. This paper summarizes the current status of the viral, prokaryotic, and eukaryotic branches of the RefSeq project, reports on improvements to data access and details efforts to further expand the taxonomic representation of the collection. We also highlight diverse functional curation initiatives that support multiple uses of RefSeq data including taxonomic validation, genome annotation, comparative genomics, and clinical testing. We summarize our approach to utilizing available RNA-Seq and other data types in our manual curation process for vertebrate, plant, and other species, and describe a new direction for prokaryotic genomes and protein name management. Published by Oxford University Press on behalf of Nucleic Acids Research 2015. This work is written by (a) US Government employee(s) and is in the public domain in the US.
Toward Lower Organic Environments in Astromaterial Sample Curation for Diverse Collections
NASA Technical Reports Server (NTRS)
Allton, J. H.; Allen, C. C.; Burkett, P. J.; Calaway, M. J.; Oehler, D. Z.
2012-01-01
Great interest was taken during the frenzied pace of the Apollo lunar sample return in achieving and monitoring organic cleanliness. Yet the first mission resulted in higher organic contamination of samples than desired; improvements were accomplished by Apollo 12 [1]. Quarantine complicated the goal of achieving organic cleanliness by requiring negative-pressure glovebox containment environments, proximity of animal, plant, and microbial organic sources, and use of organic sterilants in protocols. A special low-organic laboratory was set up at the University of California Berkeley (UCB) to cleanly subdivide a subset of samples [2, 3, 4]. Nevertheless, the basic approach of handling rocks and regolith inside a positive-pressure stainless steel glovebox and restricting the tool and container materials allowed in the gloveboxes was established by the last Apollo sample return. In the last 40 years, the collections have grown to encompass Antarctic meteorites, Cosmic Dust, Genesis solar wind, Stardust comet grains, and Hayabusa asteroid grains. Each of these collections has unique curation requirements for organic contamination monitoring and control. Here we describe some changes allowed by improved technology or driven by changes in environmental regulations and economics, concluding with comments on organic witness wafers. Future sample return missions (OSIRIS-REx; Mars; comets) will require extremely low levels of organic contamination in spacecraft collection and thus similarly low levels in curation. JSC Curation is undertaking a program to document organic baseline levels in current operations and devise ways to reduce those levels.
Lucidi, Valerio; Hendlisz, Alain; Van Laethem, Jean-Luc; Donckier, Vincent
2016-04-21
In the oncosurgical approach to colorectal liver metastases, surgery is still considered the only potentially curative option, while chemotherapy alone represents a strictly palliative treatment. However, missing metastases, defined as metastases disappearing after chemotherapy, represent a unique model to evaluate the curative potential of chemotherapy and to challenge current therapeutic algorithms. We reviewed recent series on missing colorectal liver metastases to evaluate the incidence of this phenomenon, predictive factors, and rates of cure, defined by complete pathologic response in resected missing metastases and sustained clinical response when they were left unresected. With progress in the efficacy of chemotherapeutic regimens, the incidence of missing liver metastases has increased steadily in recent years. The main predictive factors are small tumor size, low marker level, duration of chemotherapy, and use of intra-arterial chemotherapy. Initial series showed low rates of complete pathologic response in resected missing metastases and high recurrence rates when left unresected. However, recent reports describe complete pathologic responses and sustained clinical responses reaching 50%, suggesting that chemotherapy could be curative in some cases. Accordingly, in the case of missing colorectal liver metastases, the classical recommendation to resect initial tumor sites may have become partially obsolete. Furthermore, the curative effect of chemotherapy in selected cases could lead to a change of paradigm in patients with unresectable liver-only metastases, using intensive first-line chemotherapy to intentionally induce missing metastases, followed by adjuvant surgery on remnant chemoresistant tumors and close surveillance of initial sites left unresected.
Abdou, Rania H.; Saleh, Sherif Y.; Khalil, Waleed F.
2015-01-01
Background: Recently, many efforts have been made to discover new products of natural origin that can limit xenobiotic-induced hepatic injury. Carbon tetrachloride (CCl4) is a highly toxic chemical that is widely used to study hepatotoxicity in animal models. Objective: The present study was conducted to investigate the curative and protective effects of Schinus terebinthifolius ethanolic extract against CCl4-induced acute hepatotoxicity in rats. Materials and Methods: S. terebinthifolius extract was orally administered at a dose of 350 mg dried extract/kg b.wt. before and after intoxication with CCl4 for the curative and protective experiments, respectively. A group of hepatotoxicity-indicative enzymes, oxidant-antioxidant capacity, DNA oxidation, and apoptosis markers were measured. Results: CCl4 increased liver enzyme leakage, oxidative stress, hepatic apoptosis, DNA oxidation, and inflammatory markers. Administration of S. terebinthifolius, either before or after CCl4 intoxication, significantly decreased the elevated serum liver enzymes and reinstated the antioxidant capacity. Interestingly, S. terebinthifolius extract inhibited hepatocyte apoptosis, as revealed by an approximately 20-fold down-regulation of caspase-3 expression compared to the untreated CCl4 group. On the other hand, there was neither a protective nor a curative effect of S. terebinthifolius against the DNA damage caused by CCl4. Conclusion: The present study suggests that S. terebinthifolius extract could be a promising hepatoprotective agent against CCl4 toxic effects, and possibly against other hepatotoxic chemicals or drugs. PMID:26109780
NASA Curation Preparation for Ryugu Sample Returned by JAXA's Hayabusa2 Mission
NASA Technical Reports Server (NTRS)
Nakamura-Messenger, Keiko; Righter, Kevin; Snead, Christopher J.; McCubbin, Francis M.; Pace, Lisa F.; Zeigler, Ryan A.; Evans, Cindy
2017-01-01
The NASA OSIRIS-REx and JAXA Hayabusa2 missions to near-Earth asteroids Bennu and Ryugu share similar mission goals of understanding the origins of primitive, organic-rich asteroids. Under an agreement between JAXA and NASA, there is an on-going and productive collaboration between the science teams of the Hayabusa2 and OSIRIS-REx missions. Under this agreement, a portion of each of the returned sample masses will be exchanged between the agencies, and the scientific results of their study will be shared. NASA’s portion of the returned Hayabusa2 sample, consisting of 10% of the returned mass, will be jointly separated by NASA and JAXA. The sample will be legally and physically transferred to NASA’s dedicated Hayabusa2 curation facility at Johnson Space Center (JSC) no later than one year after the return of the Hayabusa2 sample to Earth (December 2020). The JSC Hayabusa2 curation cleanroom facility design has now been completed. In the same manner, JAXA will receive 0.5% of the total returned OSIRIS-REx sample (minimum required sample to return 60 g, maximum sample return capacity of 2 kg) from the rest of the specimen. No later than one year after the return of the OSIRIS-REx sample to Earth (September 2023), legal, physical, and permanent custody of this sample subset will be transferred to JAXA, and the sample subset will be brought to JAXA’s Extraterrestrial Sample Curation Center (ESCuC) at the Institute of Space and Astronautical Science, Sagamihara City, Japan.
Jeong, Soo Cheol; Aikata, Hiroshi; Katamura, Yoshio; Azakami, Takahiro; Kawaoka, Tomokazu; Saneto, Hiromi; Uka, Kiminori; Mori, Nami; Takaki, Shintaro; Kodama, Hideaki; Waki, Koji; Imamura, Michio; Shirakawa, Hiroo; Kawakami, Yoshiiku; Takahashi, Shoichi; Chayama, Kazuaki
2007-01-01
AIM: To assess whether a 24-wk course of interferon (IFN) could prevent hepatocellular carcinoma (HCC) recurrence and worsening of liver function in hepatitis C virus (HCV)-infected patients after receiving curative treatment for primary HCC. METHODS: Outcomes in 42 patients with HCV infection treated with IFN-α after curative treatment for primary HCC (IFN group) were compared with 42 matched, curatively treated historical controls not given IFN (non-IFN group). RESULTS: Although the rate of initial recurrence did not differ significantly between the IFN and non-IFN groups (0%, 44%, 61%, and 67% vs 4.8%, 53%, 81%, and 87% at 1, 3, 5, and 7 years, respectively; P = 0.153), the IFN group showed a lower rate of second recurrence than the non-IFN group (0%, 10.4%, 28%, and 35% vs 0%, 30%, 59%, and 66% at 1, 3, 5, and 7 years, respectively; P = 0.022). Among the IFN group, patients with a sustained virologic response (SVR) were less likely to have a second HCC recurrence than IFN patients without an SVR or non-IFN patients. Multivariate analysis identified the lack of SVR as the only independent risk factor for a second recurrence, while SVR and Child-Pugh class A independently favored overall survival. CONCLUSION: Most intrahepatic recurrences of HCV-related HCC occurred during persistent viral infection. Eradication of HCV is essential for the prevention of HCC recurrence and improvement of survival. PMID:17879404
Passot, Guillaume; Vaudoyer, Delphine; Cotte, Eddy; You, Benoit; Isaac, Sylvie; Noël Gilly, François; Mohamed, Faheez; Glehen, Olivier
2012-07-01
The objective of this retrospective study was to evaluate the influence of neoadjuvant systemic chemotherapy on patients with colorectal carcinomatosis before a curative procedure. Peritoneal carcinomatosis (PC) from colorectal cancer may be treated with curative intent by cytoreductive surgery (CRS) and hyperthermic intraperitoneal chemotherapy (HIPEC). The role of perioperative systemic chemotherapy for this particular metastatic disease remains unclear. One hundred twenty patients with PC from colorectal cancer were consecutively treated by 131 procedures combining CRS with HIPEC. The response to neoadjuvant systemic chemotherapy was assessed using data from previous explorative surgery and/or radiological imaging. Ninety patients (75%) were treated with neoadjuvant systemic chemotherapy, of whom 32 (36%) were considered to have responded, 19 (21%) had stable disease, and 19 (21%) developed disease progression. Response could not be evaluated in 20 patients (22%). On univariate analysis, the use of neoadjuvant systemic chemotherapy had a significant positive prognostic influence (P = 0.042). On multivariate analysis, the completeness of CRS and the use of adjuvant systemic chemotherapy were the only significant prognostic factors (P < 0.001 and P = 0.049, respectively). Response to neoadjuvant systemic chemotherapy had no significant prognostic impact, with a median survival of 31.4 months in patients showing disease progression. In patients with PC from colorectal cancer without extraperitoneal metastases, failure of neoadjuvant systemic chemotherapy should not constitute an absolute contraindication to a curative procedure combining CRS and HIPEC.
Dataset of breath research manuscripts curated using PubMed search strings from 1995-2016.
Geer Wallace, M Ariel; Pleil, Joachim D
2018-06-01
The data contained in this article are PubMed search strings and search string builders used to curate breath research manuscripts published from 1995-2016 and the respective number of articles found that satisfied the search requirements for selected categories. Breath sampling represents a non-invasive technique that has gained usefulness for public health, clinical, diagnostic, and environmental exposure assessment applications over the years. This data article includes search strings that were utilized to retrieve publications through the PubMed database for different breath research-related topics that were related to the analysis of exhaled breath, exhaled breath condensate (EBC), and exhaled breath aerosol (EBA) as well as the analysis of cellular headspace. Manuscripts were curated for topics including EBC, EBA, Direct MS, GC-MS, LC-MS, alcohol, and sensors. A summary of the number of papers published per year for the data retrieved using each of the search strings is also included. These data can be utilized to discern trends in the number of breath research publications in each of the different topics over time. A supplementary Appendix A containing the titles, author lists, journal names, publication dates, PMID numbers, and EntrezUID numbers for each of the journal articles curated using the finalized search strings for the seven breath research-related topics can also be found within this article. The selected manuscripts can be used to explore the impact that breath research has had on expanding the scientific knowledge in each of the investigated topics.
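The curation workflow described above maps directly onto NCBI's public E-utilities interface. Below is a minimal sketch, assuming the Python requests library; the search term shown is purely illustrative and is not one of the article's curated search strings.

# Hypothetical example: count and list PubMed records matched by a search string,
# restricted to the 1995-2016 publication window used in the dataset.
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_search(term, mindate="1995", maxdate="2016", retmax=100):
    """Query PubMed via NCBI E-utilities; return (hit count, list of PMIDs)."""
    params = {
        "db": "pubmed",
        "term": term,
        "datetype": "pdat",   # filter on publication date
        "mindate": mindate,
        "maxdate": maxdate,
        "retmax": retmax,
        "retmode": "json",
    }
    result = requests.get(ESEARCH, params=params, timeout=30).json()["esearchresult"]
    return int(result["count"]), result["idlist"]

# Illustrative term only, not taken from the article's Appendix A.
count, pmids = pubmed_search('"exhaled breath condensate" AND biomarkers')
print(count, pmids[:5])

Running such a query per search string and per year yields exactly the kind of per-topic, per-year publication counts summarized in the dataset.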
Okamura, Yukiyasu; Ashida, Ryo; Yamamoto, Yusuke; Ito, Takaaki; Sugiura, Teiichi; Bekku, Emima; Aramaki, Takeshi; Uesaka, Katsuhiko
2016-03-01
The aspartate aminotransferase to platelet ratio index (APRI) and fibrosis-4 (FIB-4) index were developed as non-invasive parameters for predicting liver fibrosis. This study aimed to validate the APRI and FIB-4 indexes in patients treated with curative therapy for non-B non-C (NBNC) hepatocellular carcinoma (HCC). An accumulated database comprising 399 patients who underwent hepatectomy was reviewed retrospectively. Analyses were performed to evaluate whether the APRI and FIB-4 indexes are predictors of liver cirrhosis and/or the prognosis in patients with NBNC-HCC. Forty-seven patients with NBNC-HCC who underwent curative radiofrequency ablation therapy (RFA) in the same period were enrolled as the validation set. The APRI and FIB-4 indexes were significantly higher in the cirrhosis group than in the no-cirrhosis group (P = 0.001 and P < 0.001, respectively). A receiver operating characteristic curve analysis showed that the FIB-4 index was more accurate than the APRI in predicting background liver cirrhosis. According to a multivariate analysis, an FIB-4 index larger than 2.7 (hazard ratio 2.11 and 2.21, 95% confidence interval 1.06-4.18 and 1.38-3.54, P = 0.033 and P = 0.001) remained a significant independent predictor of overall and recurrence-free survival, respectively. The present findings showed that the FIB-4 index is a significant predictor of background liver cirrhosis and of the prognosis after curative resection for NBNC-HCC.
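For reference, the two indexes follow their standard published definitions, which the abstract does not restate; a sketch in LaTeX, assuming AST and ALT in U/L, platelet count in 10^9/L, and age in years:

\[
\mathrm{APRI} \;=\; \frac{\mathrm{AST}\,/\,\mathrm{AST_{ULN}}}{\text{platelet count}\ (10^{9}/\mathrm{L})} \times 100
\]
\[
\mathrm{FIB\text{-}4} \;=\; \frac{\text{age (years)} \times \mathrm{AST}\ (\mathrm{U/L})}{\text{platelet count}\ (10^{9}/\mathrm{L}) \times \sqrt{\mathrm{ALT}\ (\mathrm{U/L})}}
\]

Here AST_ULN denotes the upper limit of normal for AST; the study's cirrhosis and survival analyses use the FIB-4 cutoff of 2.7 quoted above.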
Thompson, Bryony A; Spurdle, Amanda B; Plazzer, John-Paul; Greenblatt, Marc S; Akagi, Kiwamu; Al-Mulla, Fahd; Bapat, Bharati; Bernstein, Inge; Capellá, Gabriel; den Dunnen, Johan T; du Sart, Desiree; Fabre, Aurelie; Farrell, Michael P; Farrington, Susan M; Frayling, Ian M; Frebourg, Thierry; Goldgar, David E; Heinen, Christopher D; Holinski-Feder, Elke; Kohonen-Corish, Maija; Robinson, Kristina Lagerstedt; Leung, Suet Yi; Martins, Alexandra; Moller, Pal; Morak, Monika; Nystrom, Minna; Peltomaki, Paivi; Pineda, Marta; Qi, Ming; Ramesar, Rajkumar; Rasmussen, Lene Juel; Royer-Pokora, Brigitte; Scott, Rodney J; Sijmons, Rolf; Tavtigian, Sean V; Tops, Carli M; Weber, Thomas; Wijnen, Juul; Woods, Michael O; Macrae, Finlay; Genuardi, Maurizio
2014-02-01
The clinical classification of hereditary sequence variants identified in disease-related genes directly affects clinical management of patients and their relatives. The International Society for Gastrointestinal Hereditary Tumours (InSiGHT) undertook a collaborative effort to develop, test and apply a standardized classification scheme to constitutional variants in the Lynch syndrome-associated genes MLH1, MSH2, MSH6 and PMS2. Unpublished data submission was encouraged to assist in variant classification and was recognized through microattribution. The scheme was refined by multidisciplinary expert committee review of the clinical and functional data available for variants, applied to 2,360 sequence alterations, and disseminated online. Assessment using validated criteria altered classifications for 66% of 12,006 database entries. Clinical recommendations based on transparent evaluation are now possible for 1,370 variants that were not obviously protein truncating from nomenclature. This large-scale endeavor will facilitate the consistent management of families suspected to have Lynch syndrome and demonstrates the value of multidisciplinary collaboration in the curation and classification of variants in public locus-specific databases.
Reconstruction of Tissue-Specific Metabolic Networks Using CORDA
Schultz, André; Qutub, Amina A.
2016-01-01
Human metabolism involves thousands of reactions and metabolites. To interpret this complexity, computational modeling becomes an essential experimental tool. One of the most popular techniques to study human metabolism as a whole is genome-scale modeling. A key challenge in applying genome-scale modeling is identifying critical metabolic reactions across diverse human tissues. Here we introduce a novel algorithm called Cost Optimization Reaction Dependency Assessment (CORDA) to build genome-scale models in a tissue-specific manner. CORDA is computationally more efficient, shows better agreement with experimental data, and displays better model functionality and capacity than previous algorithms. CORDA also returns reaction associations that can greatly assist in any manual curation to be performed following the automated reconstruction process. Using CORDA, we developed a library of 76 healthy and 20 cancer tissue-specific reconstructions. These reconstructions identified which metabolic pathways are shared across diverse human tissues. Moreover, we identified changes in reactions and pathways that are differentially included and present different capacity profiles in cancer compared to healthy tissues, including the up-regulation of folate metabolism, the down-regulation of thiamine metabolism, and tight regulation of oxidative phosphorylation. PMID:26942765
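To illustrate the kind of reaction dependency check that underlies tissue-specific reconstruction, a minimal sketch follows using the COBRApy toolbox. This is not the published CORDA implementation (which adds cost-weighted dependency assessment and confidence tiers); the SBML file name and reaction IDs are placeholders.

# Minimal sketch: test whether each high-confidence reaction can carry flux in a
# draft genome-scale model; reactions that cannot would need supporting reactions
# to be added during reconstruction.
import cobra

model = cobra.io.read_sbml_model("generic_human_model.xml")  # placeholder path
high_confidence = ["HEX1", "PGI", "PFK"]                      # placeholder reaction IDs

supported = {}
for rid in high_confidence:
    with model:                    # changes inside the block are reverted on exit
        model.objective = rid      # maximize flux through this single reaction
        max_flux = model.optimize().objective_value or 0.0
    # A (near-)zero maximum means the reaction cannot carry flux with the current
    # reaction set alone.
    supported[rid] = abs(max_flux) > 1e-6

print(supported)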
Diversion colitis and pouchitis: A mini-review
Tominaga, Kentaro; Kamimura, Kenya; Takahashi, Kazuya; Yokoyama, Junji; Yamagiwa, Satoshi; Terai, Shuji
2018-01-01
Diversion colitis is characterized by inflammation of the mucosa in the defunctioned segment of the colon after colostomy or ileostomy. Similar to diversion colitis, diversion pouchitis is an inflammatory disorder occurring in the ileal pouch, resulting from the exclusion of the fecal stream and a subsequent lack of nutrients from luminal bacteria. Although the vast majority of patients with surgically-diverted gastrointestinal tracts remain asymptomatic, it has been reported that diversion colitis and pouchitis might occur in almost all patients with diversion. Surgical closure of the stoma, with reestablishment of gut continuity, is the only curative intervention available for patients with diversion disease. Pharmacologic treatments using short-chain fatty acids, mesalamine, or corticosteroids are reportedly effective for those who are not candidates for surgical reestablishment; however, there are no established assessment criteria for determining the severity of diversion colitis, and no management strategies to date. Therefore, in this mini-review, we summarize and review various recently-reported treatments for diversion disease. We are hopeful that the information summarized here will assist physicians who treat patients with diversion colitis and pouchitis, leading to better case management. PMID:29713128