Mashima, Jun; Kodama, Yuichi; Fujisawa, Takatomo; Katayama, Toshiaki; Okuda, Yoshihiro; Kaminuma, Eli; Ogasawara, Osamu; Okubo, Kousaku; Nakamura, Yasukazu; Takagi, Toshihisa
2017-01-01
The DNA Data Bank of Japan (DDBJ) (http://www.ddbj.nig.ac.jp) has been providing public data services for thirty years (since 1987). As a member of the International Nucleotide Sequence Database Collaboration (INSDC, http://www.insdc.org), we collect nucleotide sequence data from researchers in collaboration with the US National Center for Biotechnology Information (NCBI) and the European Bioinformatics Institute (EBI). The DDBJ Center also operates the Japanese Genotype-phenotype Archive (JGA) with the National Bioscience Database Center to collect data from human subjects submitted by Japanese researchers. Here, we report our database activities for INSDC and JGA over the past year, and introduce retrieval and analytical services running on our supercomputer system and their recent modifications. Furthermore, together with the Database Center for Life Science, the DDBJ Center applies semantic web technologies to integrate and share biological data, providing the RDF version of the sequence data. PMID:27924010
DNA Data Bank of Japan: 30th anniversary.
Kodama, Yuichi; Mashima, Jun; Kosuge, Takehide; Kaminuma, Eli; Ogasawara, Osamu; Okubo, Kousaku; Nakamura, Yasukazu; Takagi, Toshihisa
2018-01-04
The DNA Data Bank of Japan (DDBJ) Center (http://www.ddbj.nig.ac.jp) has been providing public data services for 30 years since 1987. We are collecting nucleotide sequence data and associated biological information from researchers as a member of the International Nucleotide Sequence Database Collaboration (INSDC), in collaboration with the US National Center for Biotechnology Information and the European Bioinformatics Institute. The DDBJ Center also services the Japanese Genotype-phenotype Archive (JGA) with the National Bioscience Database Center to collect genotype and phenotype data of human individuals. Here, we outline our database activities for INSDC and JGA over the past year, and introduce submission, retrieval and analysis services running on our supercomputer system and their recent developments. Furthermore, we highlight our responses to the amended Japanese rules for the protection of personal information and the launch of the DDBJ Group Cloud service for sharing pre-publication data among research groups. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
Chen, Qingyu; Zobel, Justin; Verspoor, Karin
2017-01-01
GenBank, the EMBL European Nucleotide Archive and the DNA DataBank of Japan, known collectively as the International Nucleotide Sequence Database Collaboration or INSDC, are the three most significant nucleotide sequence databases. Their records are derived from laboratory work undertaken by different individuals, by different teams, with a range of technologies and assumptions and over a period of decades. As a consequence, they contain a great many duplicates, redundancies and inconsistencies, but neither the prevalence nor the characteristics of various types of duplicates have been rigorously assessed. Existing duplicate detection methods in bioinformatics only address specific duplicate types, with inconsistent assumptions; and the impact of duplicates in bioinformatics databases has not been carefully assessed, making it difficult to judge the value of such methods. Our goal is to assess the scale, kinds and impact of duplicates in bioinformatics databases, through a retrospective analysis of merged groups in INSDC databases. Our outcomes are threefold: (1) We analyse a benchmark dataset consisting of duplicates manually identified in INSDC (a dataset of 67 888 merged groups with 111 823 duplicate pairs across 21 organisms from INSDC databases) in terms of the prevalence, types and impacts of duplicates. (2) We categorize duplicates at both sequence and annotation level, with supporting quantitative statistics, showing that different organisms have different prevalence of distinct kinds of duplicate. (3) We show that the presence of duplicates has practical impact via a simple case study on duplicates, in terms of GC content and melting temperature. We demonstrate that duplicates not only introduce redundancy, but can lead to inconsistent results for certain tasks. Our findings lead to a better understanding of the problem of duplication in biological databases. Database URL: the merged records are available at https://cloudstor.aarnet.edu.au/plus/index.php/s/Xef2fvsebBEAv9w. © The Author(s) 2017. Published by Oxford University Press.
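To make the GC content and melting temperature case study above concrete, here is a minimal illustrative sketch (not code from the paper): two hypothetical duplicate records for the same locus differ by a few trailing bases, and the derived GC fraction and Wallace-rule melting temperature estimates diverge accordingly. The sequences, function names and the Wallace rule itself are stand-ins for whatever procedure the authors actually used.

```python
# Illustrative sketch only: how two "duplicate" records with slightly
# different sequences yield different GC content and Tm estimates.

def gc_content(seq: str) -> float:
    """Fraction of G/C bases in a nucleotide sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def wallace_tm(seq: str) -> float:
    """Rough melting temperature (Wallace rule), intended for short oligos."""
    seq = seq.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc

# Hypothetical duplicate pair: same locus, one record has extra trailing bases.
record_a = "ATGCGTACGTTAGC"
record_b = "ATGCGTACGTTAGCGGC"

for name, seq in [("record_a", record_a), ("record_b", record_b)]:
    print(name, f"GC={gc_content(seq):.2f}", f"Tm={wallace_tm(seq):.1f}C")
```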
MetaBar - a tool for consistent contextual data acquisition and standards compliant submission.
Hankeln, Wolfgang; Buttigieg, Pier Luigi; Fink, Dennis; Kottmann, Renzo; Yilmaz, Pelin; Glöckner, Frank Oliver
2010-06-30
Environmental sequence datasets are increasing at an exponential rate; however, the vast majority of them lack appropriate descriptors like sampling location, time and depth/altitude, generally referred to as metadata or contextual data. The consistent capture and structured submission of these data is crucial for integrated data analysis and ecosystems modeling. The application MetaBar has been developed to support consistent contextual data acquisition. MetaBar is a spreadsheet and web-based software tool designed to assist users in the consistent acquisition, electronic storage, and submission of contextual data associated with their samples. A preconfigured Microsoft Excel spreadsheet is used to initiate structured contextual data storage in the field or laboratory. Each sample is given a unique identifier and at any stage the sheets can be uploaded to the MetaBar database server. To label samples, identifiers can be printed as barcodes. An intuitive web interface provides quick access to the contextual data in the MetaBar database as well as user and project management capabilities. Export functions facilitate contextual and sequence data submission to the International Nucleotide Sequence Database Collaboration (INSDC), comprising the DNA DataBase of Japan (DDBJ), the European Molecular Biology Laboratory database (EMBL) and GenBank. MetaBar requests and stores contextual data in compliance with the Genomic Standards Consortium specifications. The MetaBar open source code base for local installation is available under the GNU General Public License version 3 (GNU GPL3). The MetaBar software supports the typical workflow from data acquisition and field-sampling to contextual data enriched sequence submission to an INSDC database. The integration with the megx.net marine Ecological Genomics database and portal facilitates georeferenced data integration and metadata-based comparisons of sampling sites as well as interactive data visualization. The ample export functionalities and the INSDC submission support enable exchange of data across disciplines and safeguarding of contextual data.
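As a rough illustration of the kind of contextual data MetaBar is designed to capture, the sketch below builds a single sample row with GSC-style descriptors and serializes it as a spreadsheet-like CSV record. The field names and values are hypothetical and only loosely modeled on MIGS/MIMS descriptors; they are not MetaBar's actual schema.

```python
# Minimal sketch (hypothetical field names, loosely inspired by GSC MIGS/MIMS
# contextual-data descriptors; not MetaBar's real schema): one spreadsheet row
# per sample, keyed by a unique sample identifier.
import csv
import io

sample_row = {
    "sample_id": "MB-2009-0042",      # would be printed as a barcode label
    "collection_date": "2009-07-14",
    "latitude": 54.18,
    "longitude": 7.90,
    "depth_m": 10.0,
    "environment": "coastal sea water",
    "investigator": "J. Doe",
}

# Serialize as CSV, mimicking upload of a filled-in spreadsheet sheet.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(sample_row))
writer.writeheader()
writer.writerow(sample_row)
print(buf.getvalue())
```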
NCBI-compliant genome submissions: tips and tricks to save time and money.
Pirovano, Walter; Boetzer, Marten; Derks, Martijn F L; Smit, Sandra
2017-03-01
Genome sequences nowadays play a central role in molecular biology and bioinformatics. These sequences are shared with the scientific community through sequence databases. The sequence repositories of the International Nucleotide Sequence Database Collaboration (INSDC, comprising GenBank, ENA and DDBJ) are the largest in the world. Preparing an annotated sequence in such a way that it will be accepted by the database is challenging because many validation criteria apply. In our opinion, it is undesirable that researchers who want to submit a sequence need either extensive experience or help from partners to get the job done. To save valuable time and money, we list a number of recommendations for people who want to submit an annotated genome to a sequence database, as well as for tool developers, who could help to ease the process. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
Kodama, Yuichi; Mashima, Jun; Kaminuma, Eli; Gojobori, Takashi; Ogasawara, Osamu; Takagi, Toshihisa; Okubo, Kousaku; Nakamura, Yasukazu
2012-01-01
The DNA Data Bank of Japan (DDBJ; http://www.ddbj.nig.ac.jp) maintains and provides archival, retrieval and analytical resources for biological information. The central DDBJ resource consists of public, open-access nucleotide sequence databases including raw sequence reads, assembly information and functional annotation. Database content is exchanged with EBI and NCBI within the framework of the International Nucleotide Sequence Database Collaboration (INSDC). In 2011, DDBJ launched two new resources: the 'DDBJ Omics Archive' (DOR; http://trace.ddbj.nig.ac.jp/dor) and BioProject (http://trace.ddbj.nig.ac.jp/bioproject). DOR is an archival database of functional genomics data generated by microarray and highly parallel new generation sequencers. Data are exchanged between the ArrayExpress at EBI and DOR in the common MAGE-TAB format. BioProject provides an organizational framework to access metadata about research projects and the data from the projects that are deposited into different databases. In this article, we describe major changes and improvements introduced to the DDBJ services, and the launch of two new resources: DOR and BioProject.
O'Leary, Nuala A; Wright, Mathew W; Brister, J Rodney; Ciufo, Stacy; Haddad, Diana; McVeigh, Rich; Rajput, Bhanu; Robbertse, Barbara; Smith-White, Brian; Ako-Adjei, Danso; Astashyn, Alexander; Badretdin, Azat; Bao, Yiming; Blinkova, Olga; Brover, Vyacheslav; Chetvernin, Vyacheslav; Choi, Jinna; Cox, Eric; Ermolaeva, Olga; Farrell, Catherine M; Goldfarb, Tamara; Gupta, Tripti; Haft, Daniel; Hatcher, Eneida; Hlavina, Wratko; Joardar, Vinita S; Kodali, Vamsi K; Li, Wenjun; Maglott, Donna; Masterson, Patrick; McGarvey, Kelly M; Murphy, Michael R; O'Neill, Kathleen; Pujar, Shashikant; Rangwala, Sanjida H; Rausch, Daniel; Riddick, Lillian D; Schoch, Conrad; Shkeda, Andrei; Storz, Susan S; Sun, Hanzhen; Thibaud-Nissen, Francoise; Tolstoy, Igor; Tully, Raymond E; Vatsan, Anjana R; Wallin, Craig; Webb, David; Wu, Wendy; Landrum, Melissa J; Kimchi, Avi; Tatusova, Tatiana; DiCuccio, Michael; Kitts, Paul; Murphy, Terence D; Pruitt, Kim D
2016-01-04
The RefSeq project at the National Center for Biotechnology Information (NCBI) maintains and curates a publicly available database of annotated genomic, transcript, and protein sequence records (http://www.ncbi.nlm.nih.gov/refseq/). The RefSeq project leverages the data submitted to the International Nucleotide Sequence Database Collaboration (INSDC) against a combination of computation, manual curation, and collaboration to produce a standard set of stable, non-redundant reference sequences. The RefSeq project augments these reference sequences with current knowledge including publications, functional features and informative nomenclature. The database currently represents sequences from more than 55,000 organisms (>4800 viruses, >40,000 prokaryotes and >10,000 eukaryotes; RefSeq release 71), ranging from a single record to complete genomes. This paper summarizes the current status of the viral, prokaryotic, and eukaryotic branches of the RefSeq project, reports on improvements to data access and details efforts to further expand the taxonomic representation of the collection. We also highlight diverse functional curation initiatives that support multiple uses of RefSeq data including taxonomic validation, genome annotation, comparative genomics, and clinical testing. We summarize our approach to utilizing available RNA-Seq and other data types in our manual curation process for vertebrate, plant, and other species, and describe a new direction for prokaryotic genomes and protein name management. Published by Oxford University Press on behalf of Nucleic Acids Research 2015. This work is written by (a) US Government employee(s) and is in the public domain in the US.
Enriching public descriptions of marine phages using the Genomic Standards Consortium MIGS standard
Duhaime, Melissa Beth; Kottmann, Renzo; Field, Dawn; Glöckner, Frank Oliver
2011-01-01
In any sequencing project, the possible depth of comparative analysis is determined largely by the amount and quality of the accompanying contextual data. The structure, content, and storage of this contextual data should be standardized to ensure consistent coverage of all sequenced entities and facilitate comparisons. The Genomic Standards Consortium (GSC) has developed the “Minimum Information about Genome/Metagenome Sequences (MIGS/MIMS)” checklist for the description of genomes, and here we annotate all 30 publicly available marine bacteriophage sequences to the MIGS standard. These annotations build on existing International Nucleotide Sequence Database Collaboration (INSDC) records, and confirm, as expected, that current submissions lack most MIGS fields. MIGS fields were manually curated from the literature and placed in XML format as specified by the Genomic Contextual Data Markup Language (GCDML). These “machine-readable” reports were then analyzed to highlight patterns describing this collection of genomes. Completed reports are provided in GCDML. This work represents one step towards the annotation of our complete collection of genome sequences and shows the utility of capturing richer metadata along with raw sequences. PMID:21677864
Assembly: a resource for assembled genomes at NCBI
Kitts, Paul A.; Church, Deanna M.; Thibaud-Nissen, Françoise; Choi, Jinna; Hem, Vichet; Sapojnikov, Victor; Smith, Robert G.; Tatusova, Tatiana; Xiang, Charlie; Zherikov, Andrey; DiCuccio, Michael; Murphy, Terence D.; Pruitt, Kim D.; Kimchi, Avi
2016-01-01
The NCBI Assembly database (www.ncbi.nlm.nih.gov/assembly/) provides stable accessioning and data tracking for genome assembly data. The model underlying the database can accommodate a range of assembly structures, including sets of unordered contig or scaffold sequences, bacterial genomes consisting of a single complete chromosome, or complex structures such as a human genome with modeled allelic variation. The database provides an assembly accession and version to unambiguously identify the set of sequences that make up a particular version of an assembly, and tracks changes to updated genome assemblies. The Assembly database reports metadata such as assembly names, simple statistical reports of the assembly (number of contigs and scaffolds, contiguity metrics such as contig N50, total sequence length and total gap length) as well as the assembly update history. The Assembly database also tracks the relationship between an assembly submitted to the International Nucleotide Sequence Database Collaboration (INSDC) and the assembly represented in the NCBI RefSeq project. Users can find assemblies of interest by querying the Assembly Resource directly or by browsing available assemblies for a particular organism. Links in the Assembly Resource allow users to easily download sequence and annotations for current versions of genome assemblies from the NCBI genomes FTP site. PMID:26578580
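The contiguity metrics mentioned above (contig counts, total length, contig N50) are simple to compute from a list of contig lengths; the sketch below shows one way to do so. It is an illustrative calculation only, not NCBI's implementation, and the contig lengths are invented.

```python
# Sketch (not NCBI code): assembly summary statistics from contig lengths.

def n50(lengths: list[int]) -> int:
    """Smallest length L such that contigs of length >= L cover at least
    half of the total assembly length."""
    total = sum(lengths)
    running = 0
    for length in sorted(lengths, reverse=True):
        running += length
        if running * 2 >= total:
            return length
    return 0

contigs = [1200, 850, 400, 300, 150]   # hypothetical contig lengths (bp)
print("number of contigs:", len(contigs))
print("total length:", sum(contigs))
print("contig N50:", n50(contigs))     # -> 850
```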
SCPortalen: human and mouse single-cell centric database
Noguchi, Shuhei; Böttcher, Michael; Hasegawa, Akira; Kouno, Tsukasa; Kato, Sachi; Tada, Yuhki; Ura, Hiroki; Abe, Kuniya; Shin, Jay W; Plessy, Charles; Carninci, Piero
2018-01-01
Published single-cell datasets are rich resources for investigators who want to address questions not originally asked by the creators of the datasets. These datasets may have been generated with different protocols and analyzed with diverse strategies. The main challenge in utilizing such single-cell data is making the various large-scale datasets comparable and reusable in a different context. To address this issue, we developed the single-cell centric database ‘SCPortalen’ (http://single-cell.clst.riken.jp/). The current version of the database covers human and mouse single-cell transcriptomics datasets that are publicly available from the INSDC sites. The original metadata were manually curated and single-cell samples were annotated with standard ontology terms. Common quality assessment procedures were then applied to check the quality of the raw sequence data. Furthermore, primary processing of the raw data, followed by advanced analyses and interpretation, was performed from scratch using our pipeline. In addition to the transcriptomics data, SCPortalen provides access to single-cell image files whenever available. The target users of SCPortalen are all researchers interested in specific cell types or population heterogeneity. Through the web interface of SCPortalen, users can easily search, explore and download the single-cell datasets of interest. PMID:29045713
Standards and the INSDC: Submission of MIGS, MIMS, MIENS (GSC8 Meeting)
Mizrachi, Ilene
2017-12-21
The Genomic Standards Consortium was formed in September 2005. It is an international, open-membership working body which promotes standardization in the description of genomes and the exchange and integration of genomic data. The 2009 meeting was an activity of a five-year Research Coordination Network funded by the National Science Foundation and was held at the DOE Joint Genome Institute with organizational support provided by the JGI and by the University of California - San Diego. Ilene Mizrachi of the NCBI talks about submission of MIGS/MIMS/MIENS information at the Genomic Standards Consortium's 8th meeting at the DOE JGI in Walnut Creek, Calif. on Sept. 9, 2009.
Chen, Mingyang; Stott, Amanda C; Li, Shenggang; Dixon, David A
2012-04-01
A robust metadata database called the Collaborative Chemistry Database Tool (CCDBT) for massive amounts of computational chemistry raw data has been designed and implemented. It performs data synchronization and simultaneously extracts the metadata. Computational chemistry data in various formats from different computing sources, software packages, and users can be parsed into uniform metadata for storage in a MySQL database. Parsing is performed by a parsing pyramid, including parsers written for different levels of data types and sets created by the parser loader after loading parser engines and configurations. Copyright © 2011 Elsevier Inc. All rights reserved.
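The parser-pyramid idea described above (format-specific parsers feeding uniform metadata into a MySQL store) can be illustrated with a small dispatch registry. This is purely a toy sketch under assumed names; it is not the CCDBT code, and the "package_a" format and its energy line are invented.

```python
# Illustrative sketch only (not the actual CCDBT implementation): a registry
# mapping output formats from different computational chemistry packages to
# parser functions, each returning a uniform metadata dictionary that could
# then be inserted into a MySQL table.
from typing import Callable, Dict

PARSERS: Dict[str, Callable[[str], dict]] = {}

def register(fmt: str):
    """Decorator used by a 'parser loader' to register format-specific parsers."""
    def wrap(func: Callable[[str], dict]) -> Callable[[str], dict]:
        PARSERS[fmt] = func
        return func
    return wrap

@register("package_a")                      # hypothetical package name
def parse_package_a(text: str) -> dict:
    # A real parser would extract method, basis set, geometries, energies, etc.
    return {"code": "package_a", "energy": float(text.split()[-1])}

def extract_metadata(fmt: str, raw_output: str) -> dict:
    """Dispatch raw output to the matching parser and return uniform metadata."""
    return PARSERS[fmt](raw_output)

print(extract_metadata("package_a", "FINAL ENERGY -76.026"))
```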
Very Large Data Volumes Analysis of Collaborative Systems with Finite Number of States
ERIC Educational Resources Information Center
Ivan, Ion; Ciurea, Cristian; Pavel, Sorin
2010-01-01
The collaborative system with finite number of states is defined. A very large database is structured. Operations on large databases are identified. Repetitive procedures for collaborative systems operations are derived. The efficiency of such procedures is analyzed. (Contains 6 tables, 5 footnotes and 3 figures.)
Facilitating Collaboration, Knowledge Construction and Communication with Web-Enabled Databases.
ERIC Educational Resources Information Center
McNeil, Sara G.; Robin, Bernard R.
This paper presents an overview of World Wide Web-enabled databases that dynamically generate Web materials and focuses on the use of this technology to support collaboration, knowledge construction, and communication. Database applications have been used in classrooms to support learning activities for over a decade, but, although business and…
The Primate Life History Database: A unique shared ecological data resource
Strier, Karen B.; Altmann, Jeanne; Brockman, Diane K.; Bronikowski, Anne M.; Cords, Marina; Fedigan, Linda M.; Lapp, Hilmar; Liu, Xianhua; Morris, William F.; Pusey, Anne E.; Stoinski, Tara S.; Alberts, Susan C.
2011-01-01
The importance of data archiving, data sharing, and public access to data has received considerable attention. Awareness is growing among scientists that collaborative databases can facilitate these activities. We provide a detailed description of the collaborative life history database developed by our Working Group at the National Evolutionary Synthesis Center (NESCent) to address questions about life history patterns and the evolution of mortality and demographic variability in wild primates. Examples from each of the seven primate species included in our database illustrate the range of data incorporated and the challenges, decision-making processes, and criteria applied to standardize data across diverse field studies. In addition to the descriptive and structural metadata associated with our database, we also describe the process metadata (how the database was designed and delivered) and the technical specifications of the database. Our database provides a useful model for other researchers interested in developing similar types of databases for other organisms, while our process metadata may be helpful to other groups of researchers interested in developing databases for other types of collaborative analyses. PMID:21698066
A Toolkit for Active Object-Oriented Databases with Application to Interoperability
NASA Technical Reports Server (NTRS)
King, Roger
1996-01-01
In our original proposal we stated that our research would 'develop a novel technology that provides a foundation for collaborative information processing.' The essential ingredient of this technology is the notion of 'deltas,' which are first-class values representing collections of proposed updates to a database. The Heraclitus framework provides a variety of algebraic operators for building up, combining, inspecting, and comparing deltas. Deltas can be directly applied to the database to yield a new state, or used 'hypothetically' in queries against the state that would arise if the delta were applied. The central point here is that the step of elevating deltas to 'first-class' citizens in database programming languages will yield tremendous leverage on the problem of supporting updates in collaborative information processing. In short, our original intention was to develop the theoretical and practical foundation for a technology based on deltas in an object-oriented database context, develop a toolkit for active object-oriented databases, and apply this toward collaborative information processing.
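The abstract above describes "deltas" as first-class values that can be combined, applied to a database state, or used hypothetically in queries. The toy sketch below mimics that idea with a minimal Delta class; it is an illustration of the concept only, not the Heraclitus algebra or its actual operators.

```python
# Minimal illustration of the 'delta' idea: a delta is a first-class value
# holding proposed updates; deltas can be composed and either applied to the
# database state or used hypothetically without committing anything.

class Delta:
    def __init__(self, updates=None):
        self.updates = dict(updates or {})     # key -> proposed new value

    def compose(self, other: "Delta") -> "Delta":
        """Later updates win, mirroring sequential composition of deltas."""
        merged = dict(self.updates)
        merged.update(other.updates)
        return Delta(merged)

    def apply(self, state: dict) -> dict:
        """Produce the new database state; the old state is left untouched."""
        new_state = dict(state)
        new_state.update(self.updates)
        return new_state

db = {"balance": 100, "owner": "alice"}
d1 = Delta({"balance": 120})
d2 = Delta({"owner": "bob"})

# Hypothetical query: inspect the state that *would* arise, without committing.
would_be = d1.compose(d2).apply(db)
print(would_be["balance"], would_be["owner"])   # 120 bob
print(db)                                       # original state unchanged
```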
A National Virtual Specimen Database for Early Cancer Detection
NASA Technical Reports Server (NTRS)
Crichton, Daniel; Kincaid, Heather; Kelly, Sean; Thornquist, Mark; Johnsey, Donald; Winget, Marcy
2003-01-01
Access to biospecimens is essential for enabling cancer biomarker discovery. The National Cancer Institute's (NCI) Early Detection Research Network (EDRN) comprises and integrates a large number of laboratories into a network in order to establish a collaborative scientific environment to discover and validate disease markers. The diversity of both the institutions and the collaborative focus has created the need for establishing cross-disciplinary teams focused on integrating expertise in biomedical research, computational biology and biostatistics, and computer science. Given the collaborative design of the network, the EDRN needed an informatics infrastructure. The Fred Hutchinson Cancer Research Center, the National Cancer Institute, and NASA's Jet Propulsion Laboratory (JPL) teamed up to build an informatics infrastructure creating a collaborative, science-driven research environment despite the geographic and structural differences of the information systems that existed within the diverse network. EDRN investigators identified the need to share biospecimen data captured across the country and managed in disparate databases. As a result, the informatics team initiated an effort to create a virtual tissue database whereby scientists could search and locate details about specimens located at collaborating laboratories. Each database, however, was locally implemented and integrated into collection processes and methods unique to each institution. This meant that efforts to integrate databases needed to be done in a manner that did not require redesign or re-implementation of existing systems.
Global and Local Collaborators: A Study of Scientific Collaboration.
ERIC Educational Resources Information Center
Pao, Miranda Lee
1992-01-01
Describes an empirical study that was conducted to examine the relationship among scientific co-authorship (i.e., collaboration), research funding, and productivity. Bibliographic records from the MEDLINE database that used the subject heading for schistosomiasis are analyzed, global and local collaborators are discussed, and scientific…
The Transformation of Schools' Social Networks during a Data-Based Decision Making Reform
ERIC Educational Resources Information Center
Keuning, Trynke
2016-01-01
Context: Collaboration within school teams is considered to be important to build the capacity school teams need to work in a data-based way. In a school characterized by a strong collaborative culture, teachers may have more access to the knowledge and skills for analyzing data, teachers have more opportunity to discuss the performance goals to…
Database integration of protocol-specific neurological imaging datasets
Pacurar, Emil E.; Sethi, Sean K.; Habib, Charbel; Laze, Marius O.; Martis-Laze, Rachel; Haacke, E. Mark
2016-01-01
For many years now, Magnetic Resonance Innovations (MR Innovations), a magnetic resonance imaging (MRI) software development, technology, and research company, has been aggregating a multitude of MRI data from different scanning sites through its collaborations and research contracts. The majority of the data has adhered to neuroimaging protocols developed by our group which has helped ensure its quality and consistency. The protocols involved include the study of: traumatic brain injury, extracranial venous imaging for multiple sclerosis and Parkinson's disease, and stroke. The database has proven invaluable in helping to establish disease biomarkers, validate findings across multiple data sets, develop and refine signal processing algorithms, and establish both public and private research collaborations. Myriad Master's and PhD dissertations have been possible thanks to the availability of this database. As an example of a project that cuts across diseases, we have used the data and specialized software to develop new guidelines for detecting cerebral microbleeds. Ultimately, the database has been vital in our ability to provide tools and information for researchers and radiologists in diagnosing their patients, and we encourage collaborations and welcome sharing of similar data in this database. PMID:25959660
Primate Info Net Related Databases (NCRR). PrimateLit: A bibliographic database for primatology. The PrimateLit database is no longer being updated. Supported by the National Center for Research Resources, National Institutes of Health, the database is a collaborative project of the Wisconsin Primate…
The Fabric for Frontier Experiments Project at Fermilab
NASA Astrophysics Data System (ADS)
Kirby, Michael
2014-06-01
The FabrIc for Frontier Experiments (FIFE) project is a new, far-reaching initiative within the Fermilab Scientific Computing Division to drive the future of computing services for experiments at FNAL and elsewhere. It is a collaborative effort between computing professionals and experiment scientists to produce an end-to-end, fully integrated set of services for computing on the grid and clouds, managing data, accessing databases, and collaborating within experiments. FIFE includes 1) easy to use job submission services for processing physics tasks on the Open Science Grid and elsewhere; 2) an extensive data management system for managing local and remote caches, cataloging, querying, moving, and tracking the use of data; 3) custom and generic database applications for calibrations, beam information, and other purposes; 4) collaboration tools including an electronic log book, speakers bureau database, and experiment membership database. All of these aspects will be discussed in detail. FIFE sets the direction of computing at Fermilab experiments now and in the future, and therefore is a major driver in the design of computing services worldwide.
Collaborative and Multilingual Approach to Learn Database Topics Using Concept Maps
Calvo, Iñaki
2014-01-01
The authors report on a study using the concept mapping technique in computer engineering education for learning theoretical introductory database topics. In addition, the learning of multilingual technical terminology by means of the collaborative drawing of a concept map is also pursued in this experiment. The main characteristics of a study carried out in the database subject at the University of the Basque Country during the 2011/2012 course are described. This study contributes to the field of concept mapping, as these kinds of cognitive tools have proved to be valid to support learning in computer engineering education. It contributes to the field of computer engineering education, providing a technique that can be incorporated with several educational purposes within the discipline. Results reveal the potential that a collaborative concept map editor offers to fulfil the above-mentioned objectives. PMID:25538957
Basner, Jodi E; Theisz, Katrina I; Jensen, Unni S; Jones, C David; Ponomarev, Ilya; Sulima, Pawel; Jo, Karen; Eljanne, Mariam; Espey, Michael G; Franca-Koh, Jonathan; Hanlon, Sean E; Kuhn, Nastaran Z; Nagahara, Larry A; Schnell, Joshua D; Moore, Nicole M
2013-12-01
Development of effective quantitative indicators and methodologies to assess the outcomes of cross-disciplinary collaborative initiatives has the potential to improve scientific program management and scientific output. This article highlights an example of a prospective evaluation that has been developed to monitor and improve progress of the National Cancer Institute Physical Sciences-Oncology Centers (PS-OC) program. Study data, including collaboration information, was captured through progress reports and compiled using the web-based analytic database: Interdisciplinary Team Reporting, Analysis, and Query Resource. Analysis of collaborations was further supported by data from the Thomson Reuters Web of Science database, MEDLINE database, and a web-based survey. Integration of novel and standard data sources was augmented by the development of automated methods to mine investigator pre-award publications, assign investigator disciplines, and distinguish cross-disciplinary publication content. The results highlight increases in cross-disciplinary authorship collaborations from pre- to post-award years among the primary investigators and confirm that a majority of cross-disciplinary collaborations have resulted in publications with cross-disciplinary content that rank in the top third of their field. With these evaluation data, PS-OC Program officials have provided ongoing feedback to participating investigators to improve center productivity and thereby facilitate a more successful initiative. Future analysis will continue to expand these methods and metrics to adapt to new advances in research evaluation and changes in the program.
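One of the automated steps described above, distinguishing cross-disciplinary collaborations, can be illustrated with a toy calculation: given an assumed mapping of investigators to disciplines, count co-author pairs whose disciplines differ. This is not the PS-OC evaluation code; the investigator names and discipline labels are invented.

```python
# Toy sketch: counting cross-disciplinary author pairs on a publication,
# given an assumed investigator-to-discipline mapping.
from itertools import combinations

discipline = {                 # hypothetical investigator -> discipline map
    "Investigator A": "physical sciences",
    "Investigator B": "oncology",
    "Investigator C": "oncology",
}

def cross_disciplinary_pairs(authors: list[str]) -> int:
    """Number of co-author pairs whose assigned disciplines differ."""
    known = [a for a in authors if a in discipline]
    return sum(1 for x, y in combinations(known, 2)
               if discipline[x] != discipline[y])

paper_authors = ["Investigator A", "Investigator B", "Investigator C"]
print(cross_disciplinary_pairs(paper_authors))   # -> 2 cross-disciplinary pairs
```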
Vesco, Umberto; Knap, Nataša; Labruna, Marcelo B; Avšič-Županc, Tatjana; Estrada-Peña, Agustín; Guglielmone, Alberto A; Bechara, Gervasio H; Gueye, Arona; Lakos, Andras; Grindatto, Anna; Conte, Valeria; De Meneghi, Daniele
2011-05-01
Tick-borne zoonoses (TBZ) are emerging diseases worldwide. A large amount of information (e.g. case reports, results of epidemiological surveillance, etc.) is dispersed through various reference sources (ISI and non-ISI journals, conference proceedings, technical reports, etc.). An integrated database, derived from the ICTTD-3 project (http://www.icttd.nl), was developed in order to gather TBZ records in the (sub-)tropics, collected both by the authors and by collaborators worldwide. A dedicated website (http://www.tickbornezoonoses.org) was created to promote collaboration and circulate information. Data collected are made freely available to researchers for analysis by spatial methods, integrating mapped ecological factors for predicting TBZ risk. The authors present the assembly process of the TBZ database: the compilation of an updated list of TBZ relevant for the (sub-)tropics, the database design and its structure, the method of bibliographic search, and the assessment of spatial precision of geo-referenced records. At the time of writing, 725 records extracted from 337 publications related to 59 countries in the (sub-)tropics have been entered in the database. TBZ distribution maps were also produced. Imported cases have also been accounted for. The most important datasets with geo-referenced records were those on Spotted Fever Group rickettsiosis in Latin America and Crimean-Congo Haemorrhagic Fever in Africa. The authors stress the need for international collaboration in data collection to update and improve the database. Supervision of data entered remains necessary. Means to foster collaboration are discussed. The paper is also intended to describe the challenges encountered in assembling spatial data from various sources and to help develop similar data collections.
The EMBL nucleotide sequence database
Stoesser, Guenter; Baker, Wendy; van den Broek, Alexandra; Camon, Evelyn; Garcia-Pastor, Maria; Kanz, Carola; Kulikova, Tamara; Lombard, Vincent; Lopez, Rodrigo; Parkinson, Helen; Redaschi, Nicole; Sterk, Peter; Stoehr, Peter; Tuli, Mary Ann
2001-01-01
The EMBL Nucleotide Sequence Database (http://www.ebi.ac.uk/embl/) is maintained at the European Bioinformatics Institute (EBI) in an international collaboration with the DNA Data Bank of Japan (DDBJ) and GenBank at the NCBI (USA). Data is exchanged amongst the collaborating databases on a daily basis. The major contributors to the EMBL database are individual authors and genome project groups. Webin is the preferred web-based submission system for individual submitters, whilst automatic procedures allow incorporation of sequence data from large-scale genome sequencing centres and from the European Patent Office (EPO). Database releases are produced quarterly. Network services allow free access to the most up-to-date data collection via ftp, email and World Wide Web interfaces. EBI’s Sequence Retrieval System (SRS), a network browser for databanks in molecular biology, integrates and links the main nucleotide and protein databases plus many specialized databases. For sequence similarity searching a variety of tools (e.g. Blitz, Fasta, BLAST) are available which allow external users to compare their own sequences against the latest data in the EMBL Nucleotide Sequence Database and SWISS-PROT. PMID:11125039
Thomas, Roger E; Spragins, Wendy; Lorenzetti, Diane L
2013-12-16
To perform a systematic review of all serious adverse events (SAEs) after yellow fever vaccination and to assess them according to Brighton Collaboration criteria. Nine electronic databases were searched with the terms "yellow fever vaccine" and "adverse events" to 10 July 2013 (no language/date limits). Two reviewers independently assessed studies, entered data, and assessed cases with Brighton Collaboration criteria. One hundred and thirty-one cases met Brighton Collaboration criteria: 32 anaphylaxis, 41 neurologic (one death), 56 viscerotropic (24 deaths), and 2 meeting both neurologic and viscerotropic criteria. All SAEs occurred following first yellow fever (YF) vaccination. Two additional cases which met Brighton Collaboration criteria were proven to be due to wild virus. An additional 345 cases were presented with insufficient detail to meet Brighton Collaboration criteria: 173 neurological, 68 viscerotropic (24 deaths), 67 anaphylaxis, and 34 cases from a UK database and 3 from a Swiss database described as "serious adverse events" but not further classified as neurologic or viscerotropic. A further 253 cases were excluded as presenting insufficient data to be regarded as yellow fever vaccine (YFV) related SAEs. One hundred and thirty-one cases met Brighton Collaboration criteria for serious adverse events after yellow fever vaccination. Another 345 cases did not meet Brighton criteria and 253 were excluded as presenting insufficient data to be regarded as serious adverse events after YFV. There are likely to be cases in areas that are remote or with insufficient diagnostic resources that are neither correctly assessed nor published. Copyright © 2013 Elsevier Ltd. All rights reserved.
Gražulis, Saulius; Daškevič, Adriana; Merkys, Andrius; Chateigner, Daniel; Lutterotti, Luca; Quirós, Miguel; Serebryanaya, Nadezhda R.; Moeck, Peter; Downs, Robert T.; Le Bail, Armel
2012-01-01
Using an open-access distribution model, the Crystallography Open Database (COD, http://www.crystallography.net) collects all known ‘small molecule / small to medium sized unit cell’ crystal structures and makes them available freely on the Internet. As of today, the COD has aggregated ∼150 000 structures, offering basic search capabilities and the possibility to download the whole database, or parts thereof using a variety of standard open communication protocols. A newly developed website provides capabilities for all registered users to deposit published and so far unpublished structures as personal communications or pre-publication depositions. Such a setup enables extension of the COD database by many users simultaneously. This increases the possibilities for growth of the COD database, and is the first step towards establishing a world wide Internet-based collaborative platform dedicated to the collection and curation of structural knowledge. PMID:22070882
Yousefy, Alireza; Malekahmadi, Parisa
2013-01-01
Research is essential for development; the scientific development of a country can be gauged by its researchers' scientific output. Understanding and assessing the activities of researchers is essential for planning and policy making. The significance of collaboration in the production of scientific publications is very apparent in today's complex, technology-driven world. Scientists have realized that for their work to be widely used and cited by experts, they must collaborate. Collaboration among researchers results in the development of scientific knowledge and, hence, the attainment of wider information. The main objective of this research is to survey scientific production and the collaboration rate in philosophy and theoretical bases of medical library and information sciences in the ISI, SCOPUS, and PubMed databases during 2001-2010. This is a descriptive survey using scientometric methods. Data were gathered via a checklist and analyzed with the SPSS software. The collaboration rate was calculated according to the formula. Among the 294 related abstracts on philosophy and theoretical bases of medical library and information science in the ISI, SCOPUS, and PubMed databases during 2001-2010, the year 2007 had the most related collaborative articles (45) and the year 2003 had the fewest (16). "B. Hjorland", with eight collaborative articles, was the most collaborative author among Library and Information Sciences (LIS) professionals in ISI, SCOPUS, and PubMed. The Journal of Documentation, with 29 articles and 12 collaborative articles, had the most related articles. Medical library and information science challenges, with 150 articles, ranked first in number of articles. Results also show that the most collaborative country, in terms of both collaboration and number of articles, was the US. The University of Washington and the University of Western Ontario were the most collaborative affiliations. The average collaboration rate between researchers in this field during the years studied is 0.25. Most of the reviewed articles were single-authored (60.54% of all articles); only 30.46% of articles had two or more authors.
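The abstract states that the collaboration rate "was calculated according to the formula" without naming it. A measure often used for this purpose in scientometrics is Subramanyam's degree of collaboration; the sketch below shows that formula purely as an assumption about what such a calculation might look like, with invented counts rather than the study's data.

```python
# Subramanyam's degree of collaboration: C = Nm / (Nm + Ns), where Nm is the
# number of multi-authored papers and Ns the number of single-authored papers.
# Assumed formula for illustration only; the paper does not state which
# formula it used, and the counts below are invented.

def degree_of_collaboration(multi_authored: int, single_authored: int) -> float:
    return multi_authored / (multi_authored + single_authored)

print(round(degree_of_collaboration(25, 75), 2))   # -> 0.25
```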
Development and Application of the DORIAN (Dose-Response Information Analysis) System
A database application for wilderness character monitoring
Ashley Adams; Peter Landres; Simon Kingston
2012-01-01
The National Park Service (NPS) Wilderness Stewardship Division, in collaboration with the Aldo Leopold Wilderness Research Institute and the NPS Inventory and Monitoring Program, developed a database application to facilitate tracking and trend reporting in wilderness character. The Wilderness Character Monitoring Database allows consistent, scientifically based...
A Web-based Tool for SDSS and 2MASS Database Searches
NASA Astrophysics Data System (ADS)
Hendrickson, M. A.; Uomoto, A.; Golimowski, D. A.
We have developed a web site using HTML, PHP, Python, and MySQL that extracts, processes, and displays data from the Sloan Digital Sky Survey (SDSS) and the Two-Micron All-Sky Survey (2MASS). The goal is to locate brown dwarf candidates in the SDSS database by looking at color cuts; however, this site could also be useful for targeted searches of other databases. MySQL databases are created from broad searches of SDSS and 2MASS data. Broad queries on the SDSS and 2MASS database servers are run weekly so that observers have the most up-to-date information from which to select candidates for observation. Observers can look at detailed information about specific objects, including finding charts, images, and available spectra. In addition, updates from previous observations can be added by any collaborator; this format makes observational collaboration simple. Observers can also restrict the database search, just before or during an observing run, to select objects of special interest.
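The color-cut selection described above can be illustrated with a small query against a local table of the kind the site builds from its broad weekly searches. Everything here is hypothetical: the table name, column names, magnitudes and the i - z > 1.5 threshold are placeholders, and SQLite stands in for the MySQL backend.

```python
# Hedged sketch: a color-cut selection of red (brown dwarf candidate) objects
# from a local table built from broad survey queries. Table/column names and
# the threshold are hypothetical, not the site's actual schema or cuts.
import sqlite3   # in-memory stand-in for the MySQL backend described above

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE photo_objects (objid TEXT, imag REAL, zmag REAL, jmag REAL)")
conn.executemany(
    "INSERT INTO photo_objects VALUES (?, ?, ?, ?)",
    [("obj1", 21.8, 19.9, 17.5),   # red object: i - z = 1.9
     ("obj2", 20.1, 19.8, 18.9)],  # blue object: i - z = 0.3
)

candidates = conn.execute(
    "SELECT objid, imag - zmag AS iz FROM photo_objects WHERE imag - zmag > 1.5"
).fetchall()
print(candidates)   # only the red object survives the cut
```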
Malignant tumors of the liver in children.
Aronson, Daniel C; Meyers, Rebecka L
2016-10-01
This article aims to give an overview of pediatric liver tumors, in particular the two most frequently occurring groups, hepatoblastomas and hepatocellular carcinomas. The focus lies on achievements gained through worldwide collaboration. We present recent advances in insight, treatment results, and future questions to be asked. Increasing international collaboration between the four major Pediatric Liver Tumor Study Groups (SIOPEL/GPOH, COG, and JPLT) may serve as a paradigm for approaching rare tumors. This international effort has been catalyzed by the Children's Hepatic tumor International Collaboration (CHIC) through the formation of a large collaborative database. Interrogation of this database has led to a new universal risk stratification system for hepatoblastoma using PRETEXT/POSTTEXT staging as a backbone. Pathologists in this international collaboration have established a new histopathological consensus classification for pediatric liver tumors. Concomitantly there have been advances in chemotherapy options, an increased role of liver transplantation for unresectable tumors, and a web portal system developed at www.siopel.org for international education, consultation, and collaboration. These achievements will be further tested and validated in the upcoming Paediatric Hepatic International Tumour Trial (PHITT). Copyright © 2016 Elsevier Inc. All rights reserved.
Lavin, Jennifer; Shah, Rahul; Greenlick, Hannah; Gaudreau, Philip; Bedwell, Joshua
2016-01-01
Given the low frequency of adverse events after tracheostomy, individual institutions struggle to collect outcome data to generate effective quality improvement protocols. The Global Tracheostomy Collaborative (GTC) is a multi-institutional, multi-disciplinary organization that utilizes a prospective database to collect data on patients undergoing tracheostomy. We describe our institution's preliminary experience with this collaborative. It was hypothesized that entry into the database would be non-burdensome and could be easily and accurately initiated by skilled specialists at the time of tracheostomy placement and completed at time of patient discharge. Demographic, diagnostic, and outcome data on children undergoing tracheostomy at our institution from January 2013 to June 2015 were entered into the GTC database, a database collected and managed by REDCap (Research Electronic Data Capture). All data entry was performed by pediatric otolaryngology fellows and all post-operative updates were completed by a skilled tracheostomy nurse. Tracked outcomes included accidental decannulation, failed decannulation, tracheostomy tube obstruction, bleeding/tracheoinnominate fistula, and tracheocutaneous fistula. Data from 79 patients undergoing tracheostomy at our institution were recorded. Database entry was straightforward, and entry of patient demographic information, medical comorbidities, surgical indications, and date of tracheostomy placement was completed in less than 5 min per patient. The most common indication for surgery was facilitation of ventilation in 65 patients (82.3%). Average time from admission to tracheostomy was 62.6 days (range 0-246). Stomal breakdown was seen in 1 patient. A total of 72 patients were tracked to hospital discharge with 53 patients surviving (88.3%). No mortalities were tracheostomy-related. The Global Tracheostomy Collaborative is a multi-institutional, multi-disciplinary collaborative that collects data on patients undergoing tracheostomy. Our experience provides proof of concept of entering demographic and outcome data into the GTC database in a manner that was both accurate and not burdensome to those participating in data entry. In our tertiary care, pediatric academic medical center, tracheostomy continues to be a safe procedure with no major tracheostomy-related morbidities occurring in this patient population. Involvement with the GTC has shown opportunities for improvement in communication and coordination with other tracheostomy-related disciplines. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
The EPA DSSTox website (http://www.epa.gov/nheerl/dsstox) publishes standardized, structure-annotated toxicity databases, covering a broad range of toxicity disciplines. Each DSSTox database features documentation written in collaboration with the source authors and toxicity expe...
International contributions to IAEA-NEA heat transfer databases for supercritical fluids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leung, L. K. H.; Yamada, K.
2012-07-01
An IAEA Coordinated Research Project on 'Heat Transfer Behaviour and Thermohydraulics Code Testing for SCWRs' is being conducted to facilitate collaboration and interaction among participants from 15 organizations. While the project covers several key technology areas relevant to the development of SCWR concepts, it focuses mainly on the heat transfer aspect, which has been identified as the most challenging. Through this collaborative effort, large heat-transfer databases have been compiled for supercritical water and surrogate fluids in tubes, annuli, and bundle subassemblies of various orientations over a wide range of flow conditions. Assessments of several supercritical heat-transfer correlations were performed using the compiled databases. The assessment results are presented. (authors)
Collaborative care for depression in European countries: a systematic review and meta-analysis.
Sighinolfi, Cecilia; Nespeca, Claudia; Menchetti, Marco; Levantesi, Paolo; Belvederi Murri, Martino; Berardi, Domenico
2014-10-01
This is a systematic review and meta-analysis of randomized controlled trials (RCTs) investigating the effectiveness of collaborative care compared to Primary Care Physicians' (PCPs') usual care in the treatment of depression, focusing on European countries. A systematic review of English and non-English articles, from inception to March 2014, was performed using the PubMed, British Nursing Index and Archive, Ovid Medline (R), PsychINFO, Books@Ovid, PsycARTICLES Full Text, EMBASE Classic+Embase, DARE (Database of Abstracts of Reviews of Effectiveness) and Cochrane Library electronic databases. Search terms included depression, collaborative care, physician family and allied health professional. RCTs comparing collaborative care to usual care for depression in primary care were included. Titles and abstracts were independently examined by two reviewers, who extracted from the included trials information on participants' characteristics, type of intervention, features of collaborative care and type of outcome measure. The 17 papers included, covering 15 RCTs, involved 3240 participants. Primary analyses showed that collaborative care models were associated with greater improvement in depression outcomes in the short term, within 3 months (standardized mean difference (SMD) -0.19, 95% CI=-0.33; -0.05; p=0.006), medium term, between 4 and 11 months (SMD -0.24, 95% CI=-0.39; -0.09; p=0.001) and medium-long term, from 12 months and over (SMD -0.21, 95% CI=-0.37; -0.04; p=0.01), compared to usual care. The present review, specifically focusing on European countries, shows that collaborative care is more effective than treatment as usual in improving depression outcomes. Copyright © 2014 Elsevier Inc. All rights reserved.
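For readers unfamiliar with the pooled outcome reported above, the sketch below computes a standardized mean difference (Hedges' g) and an approximate 95% CI for a single hypothetical trial comparing collaborative care with usual care. The means, SDs and sample sizes are invented, and this is a per-trial calculation, not the review's random-effects pooling.

```python
# Worked sketch of a standardized mean difference (Hedges' g) with an
# approximate 95% CI for one hypothetical two-arm trial. Invented numbers.
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    # Pooled SD, Cohen's d, small-sample correction, and a standard SE formula.
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    j = 1 - 3 / (4 * (n1 + n2) - 9)
    g = j * d
    se = math.sqrt((n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2)))
    return g, (g - 1.96 * se, g + 1.96 * se)

# Depression score means/SDs (lower is better), hypothetical data:
g, ci = hedges_g(m1=12.1, sd1=5.0, n1=110, m2=13.4, sd2=5.2, n2=108)
print(f"SMD = {g:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```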
ERIC Educational Resources Information Center
Cavaleri, Piero
2008-01-01
Purpose: The purpose of this paper is to describe the use of AJAX for searching the Biblioteche Oggi database of bibliographic records. Design/methodology/approach: The paper is a demonstration of how bibliographic database single page interfaces allow the implementation of more user-friendly features for social and collaborative tasks. Findings:…
[Experience and present situation of Western China Gastric Cancer Collaboration].
Hu, Jiankun; Zhang, Weihan; Western China Gastric Cancer Collaboration, China
2017-03-25
The Western China Gastric Cancer Collaboration (WCGCC) was founded in Chongqing, China in 2011. At the early stage of the collaboration, there were only about 20 centers; there are now 36 centers from the western area of China, including Sichuan, Chongqing, Yunnan, Shanxi, Guizhou, Gansu, Qinghai, Xinjiang, Ningxia and Tibet. During the past few years, the WCGCC has routinely organized standardized gastric cancer treatment tours and training courses on minimally invasive surgical treatment of gastric cancer and on clinical research methodology for members of the collaboration. Meanwhile, the WCGCC has maintained a multicenter gastric cancer database since 2011, with data entry and management modeled on the national gastric cancer registration system of the Japan Gastric Cancer Association. For data entry and collection, 190 data items have unified definitions and entry standards derived from the Japanese Gastric Cancer Guidelines. The database now includes about 11 872 gastric cancer cases, and in this paper we introduce initial results from these cases. Next, the collaboration will conduct retrospective studies based on this database to analyze the clinicopathological characteristics of patients in the western area of China. The WCGCC has also launched a prospective study: the first randomized clinical trial of the collaboration aims to compare postoperative quality of life between different reconstruction methods for total gastrectomy (WCGCC-1202, ClinicalTrials.gov Identifier: NCT02110628), which began in 2015 and is now in the recruitment period. In the next steps, we will improve the quality of the database and optimize its management processes. Meanwhile, we will engage in more exchanges and cooperation with the Chinese Cochrane Center and reinforce the foundation of clinical trials research methodology. With respect to standardized surgical treatment of gastric cancer, we will further strengthen communication with other international centers in order to improve both the treatment and research levels of gastric cancer in Western China.
Waugh, M; Hraber, P; Weller, J; Wu, Y; Chen, G; Inman, J; Kiphart, D; Sobral, B
2000-01-01
The Phytophthora Genome Initiative (PGI) is a distributed collaboration to study the genome and evolution of a particularly destructive group of plant pathogenic oomycetes, with the goal of understanding the mechanisms of infection and resistance. NCGR provides informatics support for the collaboration as well as a centralized data repository. In the pilot phase of the project, several investigators prepared Phytophthora infestans and Phytophthora sojae EST and Phytophthora sojae BAC libraries and sent them to another laboratory for sequencing. Data from sequencing reactions were transferred to NCGR for analysis and curation. An analysis pipeline transforms raw data by performing simple analyses (i.e., vector removal and similarity searching) that are stored and can be retrieved by investigators using a web browser. Here we describe the database and access tools, provide an overview of the data therein and outline future plans. This resource has provided a unique opportunity for the distributed, collaborative study of a genus from which relatively little sequence data are available. Results may lead to insight into how better to control these pathogens. The homepage of PGI can be accessed at http://www.ncgr.org/pgi, with database access through the database access hyperlink.
NASA Astrophysics Data System (ADS)
Minnett, R.; Koppers, A. A. P.; Jarboe, N.; Tauxe, L.; Constable, C.; Jonestrask, L.; Shaar, R.
2014-12-01
Earth science grand challenges often require interdisciplinary and geographically distributed scientific collaboration to make significant progress. However, this organic collaboration between researchers, educators, and students only flourishes with the reduction or elimination of technological barriers. The Magnetics Information Consortium (http://earthref.org/MagIC/) is a grass-roots cyberinfrastructure effort envisioned by the geo-, paleo-, and rock magnetic scientific community to archive their wealth of peer-reviewed raw data and interpretations from studies on natural and synthetic samples. MagIC is dedicated to facilitating scientific progress towards several highly multidisciplinary grand challenges and the MagIC Database team is currently beta testing a new MagIC Search Interface and API designed to be flexible enough for the incorporation of large heterogeneous datasets and for horizontal scalability to tens of millions of records and hundreds of requests per second. In an effort to reduce the barriers to effective collaboration, the search interface includes a simplified data model and upload procedure, support for online editing of datasets amongst team members, commenting by reviewers and colleagues, and automated contribution workflows and data retrieval through the API. This web application has been designed to generalize to other databases in MagIC's umbrella website (EarthRef.org) so the Geochemical Earth Reference Model (http://earthref.org/GERM/) portal, Seamount Biogeosciences Network (http://earthref.org/SBN/), EarthRef Digital Archive (http://earthref.org/ERDA/) and EarthRef Reference Database (http://earthref.org/ERR/) will benefit from its development.
NASA Aeronautics and Space Database for bibliometric analysis
NASA Technical Reports Server (NTRS)
Powers, R.; Rudman, R.
2004-01-01
The authors use the NASA Aeronautics and Space Database to perform bibliometric analysis of citations. This paper explains their research methodology and gives some sample results showing collaboration trends between NASA Centers and other institutions.
Use of XML and Java for collaborative petroleum reservoir modeling on the Internet
NASA Astrophysics Data System (ADS)
Victorine, John; Watney, W. Lynn; Bhattacharya, Saibal
2005-11-01
GEMINI (Geo-Engineering Modeling through INternet Informatics) is public-domain, web-based freeware made up of an integrated suite of 14 Java-based software tools for on-line, real-time geologic and engineering reservoir modeling. GEMINI facilitates distant collaborations for small company and academic clients, negotiating analyses of both single and multiple wells. The system operates on a single server and an enterprise database. External data sets must be uploaded into this database. Feedback from GEMINI users provided the impetus to develop Stand Alone Web Start Applications of GEMINI modules that reside in and operate from the user's PC. In this version, the GEMINI modules run as applets, which may reside on local user PCs, on the server, or be launched via Java Web Start. In this enhanced version, XML-based data handling procedures are used to access data from remote and local databases and save results for later access and analyses. The XML data handling process also integrates different stand-alone GEMINI modules, enabling the user(s) to access multiple databases. It provides flexibility to the user to customize the analytical approach, database location, and level of collaboration. An example integrated field study using GEMINI modules and Stand Alone Web Start Applications is provided to demonstrate the versatile applicability of this freeware for cost-effective reservoir modeling.
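As a rough illustration of the kind of XML-based data handling described above (not GEMINI's actual schema, whose element and attribute names are not given here), a sketch using Python's standard ElementTree to read well records from a hypothetical XML payload:

```python
import xml.etree.ElementTree as ET

# Hypothetical XML payload; GEMINI's real element names may differ.
payload = """
<wells>
  <well id="KGS-101"><depth units="ft">3200</depth><porosity>0.18</porosity></well>
  <well id="KGS-102"><depth units="ft">2950</depth><porosity>0.21</porosity></well>
</wells>
"""

def parse_wells(xml_text):
    """Parse well records from XML into plain dictionaries for further analysis."""
    root = ET.fromstring(xml_text)
    wells = []
    for w in root.findall("well"):
        wells.append({
            "id": w.get("id"),
            "depth_ft": float(w.findtext("depth")),
            "porosity": float(w.findtext("porosity")),
        })
    return wells

print(parse_wells(payload))
```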
MetPetDB: A database for metamorphic geochemistry
NASA Astrophysics Data System (ADS)
Spear, Frank S.; Hallett, Benjamin; Pyle, Joseph M.; Adalı, Sibel; Szymanski, Boleslaw K.; Waters, Anthony; Linder, Zak; Pearce, Shawn O.; Fyffe, Matthew; Goldfarb, Dennis; Glickenhouse, Nickolas; Buletti, Heather
2009-12-01
We present a data model for the initial implementation of MetPetDB, a geochemical database specific to metamorphic rock samples. The database is designed around the concept of preservation of spatial relationships, at all scales, of chemical analyses and their textural setting. Objects in the database (samples) represent physical rock samples; each sample may contain one or more subsamples with associated geochemical and image data. Samples, subsamples, geochemical data, and images are described with attributes (some required, some optional); these attributes also serve as search delimiters. All data in the database are classified as published (i.e., archived or published data), public or private. Public and published data may be freely searched and downloaded. All private data is owned; permission to view, edit, download and otherwise manipulate private data may be granted only by the data owner; all such editing operations are recorded by the database to create a data version log. The sharing of data permissions among a group of collaborators researching a common sample is done by the sample owner through the project manager. User interaction with MetPetDB is hosted by a web-based platform based upon the Java servlet application programming interface, with the PostgreSQL relational database. The database web portal includes modules that allow the user to interact with the database: registered users may save and download public and published data, upload private data, create projects, and assign permission levels to project collaborators. An Image Viewer module provides for spatial integration of image and geochemical data. A toolkit consisting of plotting and geochemical calculation software for data analysis and a mobile application for viewing the public and published data is being developed. Future issues to address include population of the database, integration with other geochemical databases, development of the analysis toolkit, creation of data models for derivative data, and building a community-wide user base. It is believed that this and other geochemical databases will enable more productive collaborations, generate more efficient research efforts, and foster new developments in basic research in the field of solid earth geochemistry.
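To make the sample/subsample hierarchy and visibility rules concrete, here is a minimal, hypothetical sketch of such a data model using Python dataclasses; the field names are illustrative and are not taken from MetPetDB's actual PostgreSQL schema.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional

class Visibility(Enum):            # published and public data are freely searchable
    PUBLISHED = "published"
    PUBLIC = "public"
    PRIVATE = "private"            # private data require an owner-granted permission

@dataclass
class ChemicalAnalysis:
    mineral: str
    oxide_wt_pct: dict             # e.g. {"SiO2": 37.1, "Al2O3": 21.0}
    x_um: Optional[float] = None   # spatial position within the subsample image
    y_um: Optional[float] = None

@dataclass
class Subsample:
    name: str
    analyses: List[ChemicalAnalysis] = field(default_factory=list)
    images: List[str] = field(default_factory=list)   # file references

@dataclass
class Sample:
    number: str
    owner: str
    visibility: Visibility
    rock_type: Optional[str] = None                    # optional search attribute
    subsamples: List[Subsample] = field(default_factory=list)

    def readable_by(self, user: str, granted: set) -> bool:
        """Published/public samples are open; private ones need an explicit grant."""
        return (self.visibility is not Visibility.PRIVATE
                or user == self.owner or user in granted)
```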
NASA Astrophysics Data System (ADS)
Willmes, C.
2017-12-01
In the frame of the Collaborative Research Centre 806 (CRC 806), an interdisciplinary research project that needs to manage data, information and knowledge from heterogeneous domains such as archaeology, cultural sciences and the geosciences, a collaborative internal knowledge base system was developed. The system is based on the open source MediaWiki software, best known as the software behind Wikipedia, because it provides a web-based collaborative knowledge and information management platform. This software is enhanced with the Semantic MediaWiki (SMW) extension, which allows structured data to be stored and managed within the wiki platform and provides query and API interfaces to the structured data stored in the SMW database. Using an additional open source tool called mobo, it is possible to improve the data model development process and to automate data imports, from small spreadsheets to large relational databases. Mobo is a command line tool that helps build and deploy SMW structures in an agile, schema-driven development way, and allows the data model formalizations, written in JSON-Schema format, to be managed and collaboratively developed using version control systems like git. The combination of a well equipped collaborative web platform facilitated by MediaWiki, the ability to store and query structured data in this collaborative database provided by SMW, and the automated data import and data model development enabled by mobo results in a powerful but flexible system for building and developing a collaborative knowledge base. Furthermore, SMW allows the application of Semantic Web technology: the structured data can be exported into RDF, so it is possible to set up a triple store, including a SPARQL endpoint, on top of the database. The JSON-Schema based data models can be enhanced into JSON-LD to profit from the possibilities of Linked Data technology.
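As an illustration of the query interface mentioned above, the following is a minimal sketch of calling Semantic MediaWiki's 'ask' API from Python with the requests library; the wiki URL, property names and query condition are hypothetical placeholders and do not reflect the actual CRC 806 data model.

```python
import requests

# Hypothetical wiki endpoint and semantic properties; adjust to a real SMW wiki.
API_URL = "https://wiki.example.org/w/api.php"
ASK_QUERY = "[[Category:Site]] [[Has epoch::Upper Palaeolithic]]|?Has latitude|?Has longitude"

def ask(query):
    """Run an SMW 'ask' query and return the matching pages with their printouts."""
    params = {"action": "ask", "query": query, "format": "json"}
    response = requests.get(API_URL, params=params, timeout=30)
    response.raise_for_status()
    return response.json().get("query", {}).get("results", {})

for page, data in ask(ASK_QUERY).items():
    print(page, data.get("printouts", {}))
```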
BioMart: a data federation framework for large collaborative projects.
Zhang, Junjun; Haider, Syed; Baran, Joachim; Cros, Anthony; Guberman, Jonathan M; Hsu, Jack; Liang, Yong; Yao, Long; Kasprzyk, Arek
2011-01-01
BioMart is a freely available, open source, federated database system that provides a unified access to disparate, geographically distributed data sources. It is designed to be data agnostic and platform independent, such that existing databases can easily be incorporated into the BioMart framework. BioMart allows databases hosted on different servers to be presented seamlessly to users, facilitating collaborative projects between different research groups. BioMart contains several levels of query optimization to efficiently manage large data sets and offers a diverse selection of graphical user interfaces and application programming interfaces to ensure that queries can be performed in whatever manner is most convenient for the user. The software has now been adopted by a large number of different biological databases spanning a wide range of data types and providing a rich source of annotation available to bioinformaticians and biologists alike.
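For flavor, here is a minimal sketch of querying a BioMart server over its REST 'martservice' interface with an XML query document; the server URL, dataset, filter and attribute names below are placeholders for illustration and would need to match a real BioMart deployment.

```python
import requests

# Hypothetical BioMart endpoint and dataset/attribute names.
MART_URL = "https://biomart.example.org/biomart/martservice"

QUERY_XML = """<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE Query>
<Query virtualSchemaName="default" formatter="TSV" header="1" uniqueRows="1">
  <Dataset name="example_gene_dataset">
    <Filter name="chromosome_name" value="21"/>
    <Attribute name="gene_id"/>
    <Attribute name="gene_name"/>
  </Dataset>
</Query>"""

def run_query(xml):
    """POST an XML query to the martservice endpoint and return parsed TSV rows."""
    response = requests.post(MART_URL, data={"query": xml}, timeout=60)
    response.raise_for_status()
    return [line.split("\t") for line in response.text.strip().splitlines()]

for row in run_query(QUERY_XML)[:5]:
    print(row)
```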
Use of a secure Internet Web site for collaborative medical research.
Marshall, W W; Haley, R W
2000-10-11
Researchers who collaborate on clinical research studies from diffuse locations need a convenient, inexpensive, secure way to record and manage data. The Internet, with its World Wide Web, provides a vast network that enables researchers with diverse types of computers and operating systems anywhere in the world to log data through a common interface. Development of a Web site for scientific data collection can be organized into 10 steps, including planning the scientific database, choosing a database management software system, setting up database tables for each collaborator's variables, developing the Web site's screen layout, choosing a middleware software system to tie the database software to the Web site interface, embedding data editing and calculation routines, setting up the database on the central server computer, obtaining a unique Internet address and name for the Web site, applying security measures to the site, and training staff who enter data. Ensuring the security of an Internet database requires limiting the number of people who have access to the server, setting up the server on a stand-alone computer, requiring user-name and password authentication for server and Web site access, installing a firewall computer to prevent break-ins and block bogus information from reaching the server, verifying the identity of the server and client computers with certification from a certificate authority, encrypting information sent between server and client computers to avoid eavesdropping, establishing audit trails to record all accesses into the Web site, and educating Web site users about security techniques. When these measures are carefully undertaken, in our experience, information for scientific studies can be collected and maintained on Internet databases more efficiently and securely than through conventional systems of paper records protected by filing cabinets and locked doors. JAMA. 2000;284:1843-1849.
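The checklist above maps naturally onto a small password-protected data entry service. The following is a minimal, hypothetical Flask sketch showing only the user-name/password authentication step; it does not reproduce the original study's middleware, database or certificate setup, and a production system would add TLS, a real user store and audit logging.

```python
from functools import wraps
from hashlib import pbkdf2_hmac
from flask import Flask, request, Response

app = Flask(__name__)

# Illustrative in-memory user store: username -> (salt, PBKDF2 password hash).
SALT = b"demo-salt"
USERS = {"collab1": (SALT, pbkdf2_hmac("sha256", b"s3cret", SALT, 100_000))}

def check_auth(username, password):
    record = USERS.get(username)
    if not record:
        return False
    salt, stored = record
    return pbkdf2_hmac("sha256", password.encode(), salt, 100_000) == stored

def requires_auth(view):
    @wraps(view)
    def wrapped(*args, **kwargs):
        auth = request.authorization
        if not auth or not check_auth(auth.username, auth.password):
            return Response("Login required", 401,
                            {"WWW-Authenticate": 'Basic realm="study"'})
        return view(*args, **kwargs)
    return wrapped

@app.route("/enter", methods=["POST"])
@requires_auth
def enter_record():
    # A real system would validate the submission and store it in the study database.
    return {"status": "accepted", "fields": sorted(request.form.keys())}

if __name__ == "__main__":
    app.run()  # run behind HTTPS and a firewall in any real deployment
```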
SQL is Dead; Long-live SQL: Relational Database Technology in Science Contexts
NASA Astrophysics Data System (ADS)
Howe, B.; Halperin, D.
2014-12-01
Relational databases are often perceived as a poor fit in science contexts: Rigid schemas, poor support for complex analytics, unpredictable performance, significant maintenance and tuning requirements --- these idiosyncrasies often make databases unattractive in science contexts characterized by heterogeneous data sources, complex analysis tasks, rapidly changing requirements, and limited IT budgets. In this talk, I'll argue that although the value proposition of typical relational database systems is weak in science, the core ideas that power relational databases have become incredibly prolific in open source science software, and are emerging as a universal abstraction for both big data and small data. In addition, I'll talk about two open source systems we are building to "jailbreak" the core technology of relational databases and adapt them for use in science. The first is SQLShare, a Database-as-a-Service system supporting collaborative data analysis and exchange by reducing database use to an Upload-Query-Share workflow with no installation, schema design, or configuration required. The second is Myria, a service that supports much larger scale data and complex analytics, and supports multiple back end systems. Finally, I'll describe some of the ways our collaborators in oceanography, astronomy, biology, fisheries science, and more are using these systems to replace script-based workflows for reasons of performance, flexibility, and convenience.
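The Upload-Query-Share workflow described for SQLShare can be mimicked locally. This is a toy sketch using Python's built-in sqlite3 module (not SQLShare's own service API) in which 'upload' is a table load, 'query' is plain SQL, and 'share' is modeled as a saved, named view a collaborator could query.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Upload: load a small dataset without designing a schema up front.
rows = [("stn-1", 12.4), ("stn-2", 15.1), ("stn-1", 13.0)]
conn.execute("CREATE TABLE readings (station TEXT, temp_c REAL)")
conn.executemany("INSERT INTO readings VALUES (?, ?)", rows)

# Query: ordinary SQL over the uploaded data.
cur = conn.execute(
    "SELECT station, AVG(temp_c) FROM readings GROUP BY station ORDER BY station")
print(cur.fetchall())

# Share: publish the result as a named view that others could query by name.
conn.execute("CREATE VIEW mean_temp_by_station AS "
             "SELECT station, AVG(temp_c) AS mean_temp FROM readings GROUP BY station")
print(conn.execute("SELECT * FROM mean_temp_by_station").fetchall())
```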
Global Collaboration Enhances Technology Literacy
ERIC Educational Resources Information Center
Cook, Linda A.; Bell, Meredith L.; Nugent, Jill; Smith, Walter S.
2016-01-01
Today's learners routinely use technology outside of school to communicate, collaborate, and gather information about the world around them. Classroom learning experiences are relevant when they include communication technologies such as social networking, blogging, and video conferencing, and information technologies such as databases, browsers,…
We discuss the initial design and application of the National Urban Database and Access Portal Tool (NUDAPT). This new project is sponsored by the USEPA and involves collaborations and contributions from many groups from federal and state agencies, and from private and academic i...
Factors Influencing Error Recovery in Collections Databases: A Museum Case Study
ERIC Educational Resources Information Center
Marty, Paul F.
2005-01-01
This article offers an analysis of the process of error recovery as observed in the development and use of collections databases in a university museum. It presents results from a longitudinal case study of the development of collaborative systems and practices designed to reduce the number of errors found in the museum's databases as museum…
de Carvalho, Elias César Araujo; Batilana, Adelia Portero; Simkins, Julie; Martins, Henrique; Shah, Jatin; Rajgor, Dimple; Shah, Anand; Rockart, Scott; Pietrobon, Ricardo
2010-02-19
Sharing of epidemiological and clinical data sets among researchers is poor at best, to the detriment of science and the community at large. The purpose of this paper is therefore to (1) describe a novel Web application designed to share information on study data sets focusing on epidemiological clinical research in a collaborative environment and (2) create a policy model placing this collaborative environment into the current scientific social context. The Database of Databases application was developed based on feedback from epidemiologists and clinical researchers requiring a Web-based platform that would allow for sharing of information about epidemiological and clinical study data sets in a collaborative environment. This platform should ensure that researchers can modify the information. Model-based predictions of the number of publications and funding resulting from combinations of different policy implementation strategies (for metadata and data sharing) were generated using System Dynamics modeling. The application allows researchers to easily upload information about clinical study data sets, which is searchable and modifiable by other users in a wiki environment. All modifications are filtered by the database principal investigator in order to maintain quality control. The application has been extensively tested and currently contains 130 clinical study data sets from the United States, Australia, China and Singapore. Model results indicated that any policy implementation would be better than the current strategy, that metadata sharing is better than data sharing, and that combined policies achieve the best results in terms of publications. Based on our empirical observations and the resulting model, the social network environment surrounding the application can help epidemiologists and clinical researchers contribute and search for metadata in a collaborative environment, thus potentially facilitating collaboration efforts among research communities distributed around the globe.
Internet-based distributed collaborative environment for engineering education and design
NASA Astrophysics Data System (ADS)
Sun, Qiuli
2001-07-01
This research investigates the use of the Internet for engineering education, design, and analysis through the presentation of a Virtual City environment. The main focus of this research was to provide an infrastructure for engineering education, test the concept of distributed collaborative design and analysis, develop and implement the Virtual City environment, and assess the environment's effectiveness in the real world. A three-tier architecture was adopted in the development of the prototype, which contains an online database server, a Web server as well as multi-user servers, and client browsers. The environment is composed of five components: a 3D virtual world, multiple Internet-based multimedia modules, an online database, a collaborative geometric modeling module, and a collaborative analysis module. The environment was designed using multiple Internet-based technologies, such as Shockwave, Java, Java 3D, VRML, Perl, ASP, SQL, and a database. These various technologies together formed the basis of the environment and were programmed to communicate smoothly with each other. Three assessments were conducted over a period of three semesters. The Virtual City is open to the public at www.vcity.ou.edu. The online database was designed to manage the changeable data related to the environment. The virtual world was used to implement 3D visualization and tie the multimedia modules together. Students are allowed to build segments of the 3D virtual world upon completion of appropriate undergraduate courses in civil engineering. The end result is a complete virtual world that contains designs from all of their coursework and is viewable on the Internet. The environment is a content-rich educational system, which can be used to teach multiple engineering topics with the help of 3D visualization, animations, and simulations. The concept of collaborative design and analysis using the Internet was investigated and implemented. Geographically dispersed users can build the same geometric model simultaneously over the Internet and communicate with each other through a chat room. They can also conduct finite element analysis collaboratively on the same object over the Internet. They can mesh the same object, apply and edit the same boundary conditions and forces, obtain the same analysis results, and then discuss the results through the Internet.
Concept Similarity in Publications Precedes Cross-disciplinary Collaboration
Post, Andrew R.; Harrison, James H.
2008-01-01
Innovative science frequently occurs as a result of cross-disciplinary collaboration, the importance of which is reflected by recent NIH funding initiatives that promote communication and collaboration. If shared research interests between collaborators are important for the formation of collaborations, methods for identifying these shared interests across scientific domains could potentially reveal new and useful collaboration opportunities. MEDLINE represents a comprehensive database of collaborations and research interests, as reflected by article co-authors and concept content. We analyzed six years of citations using information retrieval-based methods to compute articles’ conceptual similarity, and found that articles by basic and clinical scientists who later collaborated had significantly higher average similarity than articles by similar scientists who did not collaborate. Refinement of these methods and characterization of found conceptual overlaps could allow automated discovery of collaboration opportunities that are currently missed. PMID:18999254
Bookey-Bassett, Sue; Markle-Reid, Maureen; Mckey, Colleen A; Akhtar-Danesh, Noori
2017-01-01
To report a concept analysis of interprofessional collaboration in the context of chronic disease management, for older adults living in communities. Increasing prevalence of chronic disease among older adults is creating significant burden for patients, families and healthcare systems. Managing chronic disease for older adults living in the community requires interprofessional collaboration across different health and other care providers, organizations and sectors. However, there is a lack of consensus about the definition and use of interprofessional collaboration for community-based chronic disease management. Concept analysis. Electronic databases CINAHL, Medline, HealthStar, EMBASE, PsychINFO, Ageline and Cochrane Database were searched from 2000 - 2013. Rodgers' evolutionary method for concept analysis. The most common surrogate term was interdisciplinary collaboration. Related terms were interprofessional team, multidisciplinary team and teamwork. Attributes included: an evolving interpersonal process; shared goals, decision-making and care planning; interdependence; effective and frequent communication; evaluation of team processes; involving older adults and family members in the team; and diverse and flexible team membership. Antecedents comprised: role awareness; interprofessional education; trust between team members; belief that interprofessional collaboration improves care; and organizational support. Consequences included impacts on team composition and function, care planning processes and providers' knowledge, confidence and job satisfaction. Interprofessional collaboration is a complex evolving concept. Key components of interprofessional collaboration in chronic disease management for community-living older adults are identified. Implications for nursing practice, education and research are proposed. © 2016 John Wiley & Sons Ltd.
Evolving the US Army Research Laboratory (ARL) Technical Communication Strategy
2016-10-01
of added value and enhanced tech transfer, and strengthened relationships with academic and industry collaborators. In support of increasing ARL’s...communication skills; and Prong 3: Promote a Stakeholder Database to implement a stakeholder database (including names and preferences) and use a...
Lee, Jong Woo; LaRoche, Suzette; Choi, Hyunmi; Rodriguez Ruiz, Andres A; Fertig, Evan; Politsky, Jeffrey M; Herman, Susan T; Loddenkemper, Tobias; Sansevere, Arnold J; Korb, Pearce J; Abend, Nicholas S; Goldstein, Joshua L; Sinha, Saurabh R; Dombrowski, Keith E; Ritzl, Eva K; Westover, Michael B; Gavvala, Jay R; Gerard, Elizabeth E; Schmitt, Sarah E; Szaflarski, Jerzy P; Ding, Kan; Haas, Kevin F; Buchsbaum, Richard; Hirsch, Lawrence J; Wusthoff, Courtney J; Hopp, Jennifer L; Hahn, Cecil D
2016-04-01
The rapid expansion of the use of continuous critical care electroencephalogram (cEEG) monitoring and resulting multicenter research studies through the Critical Care EEG Monitoring Research Consortium has created the need for a collaborative data sharing mechanism and repository. The authors describe the development of a research database incorporating the American Clinical Neurophysiology Society standardized terminology for critical care EEG monitoring. The database includes flexible report generation tools that allow for daily clinical use. Key clinical and research variables were incorporated into a Microsoft Access database. To assess its utility for multicenter research data collection, the authors performed a 21-center feasibility study in which each center entered data from 12 consecutive intensive care unit monitoring patients. To assess its utility as a clinical report generating tool, three large volume centers used it to generate daily clinical critical care EEG reports. A total of 280 subjects were enrolled in the multicenter feasibility study. The duration of recording (median, 25.5 hours) varied significantly between the centers. The incidence of seizure (17.6%), periodic/rhythmic discharges (35.7%), and interictal epileptiform discharges (11.8%) was similar to previous studies. The database was used as a clinical reporting tool by 3 centers that entered a total of 3,144 unique patients covering 6,665 recording days. The Critical Care EEG Monitoring Research Consortium database has been successfully developed and implemented with a dual role as a collaborative research platform and a clinical reporting tool. It is now available for public download to be used as a clinical data repository and report generating tool.
Data Mining Research with the LSST
NASA Astrophysics Data System (ADS)
Borne, Kirk D.; Strauss, M. A.; Tyson, J. A.
2007-12-01
The LSST catalog database will exceed 10 petabytes, comprising several hundred attributes for 5 billion galaxies, 10 billion stars, and over 1 billion variable sources (optical variables, transients, or moving objects), extracted from over 20,000 square degrees of deep imaging in 5 passbands with thorough time domain coverage: 1000 visits over the 10-year LSST survey lifetime. The opportunities are enormous for novel scientific discoveries within this rich time-domain ultra-deep multi-band survey database. Data Mining, Machine Learning, and Knowledge Discovery research opportunities with the LSST are now under study, with a potential for new collaborations to develop to contribute to these investigations. We will describe features of the LSST science database that are amenable to scientific data mining, object classification, outlier identification, anomaly detection, image quality assurance, and survey science validation. We also give some illustrative examples of current scientific data mining research in astronomy, and point out where new research is needed. In particular, the data mining research community will need to address several issues in the coming years as we prepare for the LSST data deluge. The data mining research agenda includes: scalability (at petabytes scales) of existing machine learning and data mining algorithms; development of grid-enabled parallel data mining algorithms; designing a robust system for brokering classifications from the LSST event pipeline (which may produce 10,000 or more event alerts per night); multi-resolution methods for exploration of petascale databases; visual data mining algorithms for visual exploration of the data; indexing of multi-attribute multi-dimensional astronomical databases (beyond RA-Dec spatial indexing) for rapid querying of petabyte databases; and more. Finally, we will identify opportunities for synergistic collaboration between the data mining research group and the LSST Data Management and Science Collaboration teams.
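To give a concrete sense of the kinds of data mining tasks listed above, here is a small, hypothetical scikit-learn sketch combining outlier detection and object classification on made-up catalog attributes; it is purely illustrative and has no connection to the actual LSST pipelines or data model.

```python
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Made-up catalog: columns = [g-r color, r-i color, variability amplitude].
features = rng.normal(size=(5000, 3))
labels = (features[:, 0] + 0.5 * features[:, 2] > 0).astype(int)  # toy class label

# Anomaly detection: flag sources far from the bulk of the attribute distribution.
outlier_flags = IsolationForest(random_state=0).fit_predict(features)
print("flagged outliers:", int((outlier_flags == -1).sum()))

# Object classification: train/test split on the toy labels.
X_tr, X_te, y_tr, y_te = train_test_split(features, labels, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", round(clf.score(X_te, y_te), 3))
```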
Collaboration systems for classroom instruction
NASA Astrophysics Data System (ADS)
Chen, C. Y. Roger; Meliksetian, Dikran S.; Chang, Martin C.
1996-01-01
In this paper we discuss how classroom instruction can benefit from state-of-the-art technologies in networks, worldwide web access through Internet, multimedia, databases, and computing. Functional requirements for establishing such a high-tech classroom are identified, followed by descriptions of our current experimental implementations. The focus of the paper is on the capabilities of distributed collaboration, which supports both synchronous multimedia information sharing as well as a shared work environment for distributed teamwork and group decision making. Our ultimate goal is to achieve the concept of 'living world in a classroom' such that live and dynamic up-to-date information and material from all over the world can be integrated into classroom instruction on a real-time basis. We describe how we incorporate application developments in a geography study tool, worldwide web information retrievals, databases, and programming environments into the collaborative system.
A Communication Framework for Collaborative Defense
2009-02-28
been able to provide sufficient automation to be able to build up the most extensive application signature database in the world with a fraction of...perceived. ...that are well understood in the context of databases. These techniques allow users to quickly scan for the existence of a key in a database.
BRISK--research-oriented storage kit for biology-related data.
Tan, Alan; Tripp, Ben; Daley, Denise
2011-09-01
In genetic science, large-scale international research collaborations represent a growing trend. These collaborations have demanding and challenging database, storage, retrieval and communication needs. These studies typically involve demographic and clinical data, in addition to the results from numerous genomic studies (omics studies) such as gene expression, eQTL, genome-wide association and methylation studies, which present numerous challenges; hence the need for data integration platforms that can handle these complex data structures. Inefficient methods of data transfer and access control still plague research collaboration. As science becomes more and more collaborative in nature, the need for a system that adequately manages data sharing becomes paramount. Biology-Related Information Storage Kit (BRISK) is a package of several web-based data management tools that provide a cohesive data integration and management platform. It was specifically designed to provide the architecture necessary to promote collaboration and expedite data sharing between scientists. The software, documentation, Java source code and demo are available at http://genapha.icapture.ubc.ca/brisk/index.jsp. BRISK was developed in Java, and tested on an Apache Tomcat 6 server with a MySQL database. denise.daley@hli.ubc.ca.
Collaborative Processes in Species Identification Using an Internet-Based Taxonomic Resource
ERIC Educational Resources Information Center
Kontkanen, Jani; Kärkkäinen, Sirpa; Dillon, Patrick; Hartikainen-Ahia, Anu; Åhlberg, Mauri
2016-01-01
Visual databases are increasingly important resources through which individuals and groups can undertake species identification. This paper reports research on the collaborative processes undertaken by pre-service teacher students when working in small groups to identify birds using an Internet-based taxonomic resource. The student groups are…
Teleconferencing Technology Facilitates Collaboration. Spotlight Feature
ERIC Educational Resources Information Center
Dopke-Wilson, MariRae
2006-01-01
Big, comprehensive projects involving multiple teachers, components, and electronic media can daunt the most ambitious educator. But for Library Media Specialist Bonnie French, big projects are no problem! A pioneer SOS database contributor, Bonnie can be aptly dubbed the "queen of collaboration." In this article, the author discusses how Bonnie…
Liu, Yu; Hong, Yang; Lin, Chun-Yuan; Hung, Che-Lun
2015-01-01
The Smith-Waterman (SW) algorithm has been widely utilized for searching biological sequence databases in bioinformatics. Recently, several works have adopted graphics cards with Graphics Processing Units (GPUs) and their associated CUDA model to enhance the performance of SW computations. However, these works mainly focused on protein database search using the intertask parallelization technique, only using the GPU capability to do the SW computations one by one. Hence, in this paper, we propose an efficient SW alignment method, called CUDA-SWfr, for protein database search using the intratask parallelization technique based on a CPU-GPU collaborative system. Before doing the SW computations on the GPU, a procedure is applied on the CPU using the frequency distance filtration scheme (FDFS) to eliminate unnecessary alignments. The experimental results indicate that CUDA-SWfr runs 9.6 times and 96 times faster than the CPU-based SW method without and with FDFS, respectively.
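The filter-then-align idea can be sketched on the CPU alone. The following Python sketch pairs a plain Smith-Waterman scorer with a residue-composition distance used as a cheap pre-filter; the filter is a simplified stand-in for the paper's FDFS (not its exact formula), there is no GPU parallelization, and the sequences and threshold are invented for illustration.

```python
from collections import Counter

def freq_distance(query, subject):
    """Residue-composition (frequency vector) distance used as a cheap pre-filter;
    a simplified stand-in for FDFS, not the published formula."""
    q, s = Counter(query), Counter(subject)
    return sum(abs(q[a] - s[a]) for a in set(q) | set(s))

def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Plain CPU Smith-Waterman local alignment score."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
            best = max(best, H[i][j])
    return best

def search(query, database, threshold=6):
    """Filter candidates by frequency distance, then align only the survivors."""
    hits = []
    for name, seq in database:
        if freq_distance(query, seq) <= threshold:   # skip unpromising sequences
            hits.append((name, smith_waterman(query, seq)))
    return sorted(hits, key=lambda h: -h[1])

db = [("p1", "MKTAYIAKQR"), ("p2", "GGGGGGGGGG"), ("p3", "MKTAYLAKQR")]
print(search("MKTAYIAKQR", db))
```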
Planned and ongoing projects (pop) database: development and results.
Wild, Claudia; Erdös, Judit; Warmuth, Marisa; Hinterreiter, Gerda; Krämer, Peter; Chalon, Patrice
2014-11-01
The aim of this study was to present the development, structure and results of a database on planned and ongoing health technology assessment (HTA) projects (POP Database) in Europe. The POP Database (POP DB) was set up in an iterative process from a basic Excel sheet to a multifunctional electronic online database. The functionalities, such as the search terminology, the procedures to fill and update the database, the access rules to enter the database, as well as the maintenance roles, were defined in a multistep participatory feedback loop with EUnetHTA Partners. The POP Database has become an online database that hosts not only the titles and MeSH categorizations, but also some basic information on status and contact details about the listed projects of EUnetHTA Partners. Currently, it stores more than 1,200 planned, ongoing or recently published projects of forty-three EUnetHTA Partners from twenty-four countries. Because the POP Database aims to facilitate collaboration, it also provides a matching system to assist in identifying similar projects. Overall, more than 10 percent of the projects in the database are identical both in terms of pathology (indication or disease) and technology (drug, medical device, intervention). In addition, approximately 30 percent of the projects are similar, meaning that they have at least some overlap in content. Although the POP DB is successful concerning regular updates of most national HTA agencies within EUnetHTA, little is known about its actual effects on collaborations in Europe. Moreover, many non-nationally nominated HTA producing agencies neither have access to the POP DB nor can share their projects.
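The project-matching idea can be illustrated with a tiny sketch: given each project's MeSH-style keyword set, compute pairwise Jaccard overlap and flag pairs above a threshold as similar. This is a hypothetical illustration, not the POP DB's actual matching system; the project records are invented.

```python
from itertools import combinations

# Hypothetical project records: title -> set of MeSH-like terms.
projects = {
    "HTA-A": {"Neoplasms", "Proton Therapy"},
    "HTA-B": {"Neoplasms", "Proton Therapy", "Cost-Benefit Analysis"},
    "HTA-C": {"Diabetes Mellitus", "Telemedicine"},
}

def jaccard(a, b):
    return len(a & b) / len(a | b)

def similar_pairs(records, threshold=0.5):
    """Return project pairs whose keyword overlap meets the threshold."""
    return [(p, q, round(jaccard(records[p], records[q]), 2))
            for p, q in combinations(records, 2)
            if jaccard(records[p], records[q]) >= threshold]

print(similar_pairs(projects))   # HTA-A and HTA-B overlap on pathology and technology
```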
DOE Office of Scientific and Technical Information (OSTI.GOV)
Springer, Everett P.
Collaborations are critical to the science, technology, and engineering achievements at Los Alamos National Laboratory (LANL). This report analyzed LANL collaborations as measured through peer-reviewed publications from the Web of Science (WoS) database for the 1990-2015 period. Both a cumulative analysis over the entire time period and annual analyses were performed. The results found that the Department of Energy national laboratories, University of California campuses, and other academic institutions collaborate with LANL on a regular basis. Results provide insights into trends in peer-reviewed publication collaborations for LANL.
Capturing Qualitative Data: Northwestern University Special Libraries' Acknowledgments Database
ERIC Educational Resources Information Center
Stigberg, Sara; Guittar, Michelle; Morse, Geoffrey
2015-01-01
Assessment and supporting data have become of increasing interest in librarianship. In this paper, we describe the development and implementation of the Northwestern University Library Acknowledgments Database tool, which gathers and documents qualitative data, as well as its component reporting function. This collaborative project and resulting…
TOPSAN: a dynamic web database for structural genomics.
Ellrott, Kyle; Zmasek, Christian M; Weekes, Dana; Sri Krishna, S; Bakolitsa, Constantina; Godzik, Adam; Wooley, John
2011-01-01
The Open Protein Structure Annotation Network (TOPSAN) is a web-based collaboration platform for exploring and annotating structures determined by structural genomics efforts. Characterization of those structures presents a challenge since the majority of the proteins themselves have not yet been characterized. Responding to this challenge, the TOPSAN platform facilitates collaborative annotation and investigation via a user-friendly web-based interface pre-populated with automatically generated information. Semantic web technologies expand and enrich TOPSAN's content through links to larger sets of related databases, and thus, enable data integration from disparate sources and data mining via conventional query languages. TOPSAN can be found at http://www.topsan.org.
Stahl, Olivier; Duvergey, Hugo; Guille, Arnaud; Blondin, Fanny; Vecchio, Alexandre Del; Finetti, Pascal; Granjeaud, Samuel; Vigy, Oana; Bidaut, Ghislain
2013-06-06
With the advance of post-genomic technologies, the need for tools to manage large scale data in biology becomes more pressing. This involves annotating and storing data securely, as well as granting permissions flexibly with several technologies (all array types, flow cytometry, proteomics) for collaborative work and data sharing. This task is not easily achieved with most systems available today. We developed Djeen (Database for Joomla!'s Extensible Engine), a new Research Information Management System (RIMS) for collaborative projects. Djeen is a user-friendly application, designed to streamline data storage and annotation collaboratively. Its database model, kept simple, is compliant with most technologies and allows storing and managing of heterogeneous data with the same system. Advanced permissions are managed through different roles. Templates allow Minimum Information (MI) compliance. Djeen allows managing projects associated with heterogeneous data types while enforcing annotation integrity and minimum information. Projects are managed within a hierarchy and user permissions are finely-grained for each project, user and group. Djeen Component source code (version 1.5.1) and installation documentation are available under CeCILL license from http://sourceforge.net/projects/djeen/files and supplementary material.
ERIC Educational Resources Information Center
Chen, Bodong; Resendes, Monica; Chai, Ching Sing; Hong, Huang-Yao
2017-01-01
As collaborative learning is actualized through evolving dialogues, temporality inevitably matters for the analysis of collaborative learning. This study attempts to uncover sequential patterns that distinguish "productive" threads of knowledge-building discourse. A database of Grade 1-6 knowledge-building discourse was first coded for…
Factors Promoting and Hindering Data-Based Decision Making in Schools
ERIC Educational Resources Information Center
Schildkamp, Kim; Poortman, Cindy; Luyten, Hans; Ebbeler, Johanna
2017-01-01
Although data-based decision making can lead to improved student achievement, data are often not used effectively in schools. This paper therefore focuses on conditions for effective data use. We studied the extent to which school organizational characteristics, data characteristics, user characteristics, and collaboration influenced data use for…
Development of the Community Health Improvement Navigator Database of Interventions.
Roy, Brita; Stanojevich, Joel; Stange, Paul; Jiwani, Nafisa; King, Raymond; Koo, Denise
2016-02-26
With the passage of the Patient Protection and Affordable Care Act, the requirements for hospitals to achieve tax-exempt status include performing a triennial community health needs assessment and developing a plan to address identified needs. To address community health needs, multisector collaborative efforts to improve both health care and non-health care determinants of health outcomes have been the most effective and sustainable. In 2015, CDC released the Community Health Improvement Navigator to facilitate the development of these efforts. This report describes the development of the database of interventions included in the Community Health Improvement Navigator. The database of interventions allows the user to easily search for multisector, collaborative, evidence-based interventions to address the underlying causes of the greatest morbidity and mortality in the United States: tobacco use and exposure, physical inactivity, unhealthy diet, high cholesterol, high blood pressure, diabetes, and obesity.
The ATLAS conditions database architecture for the Muon spectrometer
NASA Astrophysics Data System (ADS)
Verducci, Monica; ATLAS Muon Collaboration
2010-04-01
The Muon System, facing the challenging requirements of conditions data storage, has started to make extensive use of the conditions database project 'COOL' as the basis for all its conditions data storage, both at CERN and throughout the worldwide collaboration, as decided by the ATLAS Collaboration. The management of the Muon COOL conditions database will be one of the most challenging applications for the Muon System, both in terms of data volumes and rates, and in terms of the variety of data stored. The Muon conditions database is responsible for almost all of the 'non-event' data and detector quality flags storage needed for debugging of the detector operations and for performing reconstruction and analysis. The COOL database allows database applications to be written independently of the underlying database technology and ensures long-term compatibility with the entire ATLAS Software. COOL implements an interval-of-validity database: objects stored or referenced in COOL have an associated start and end time between which they are valid. The data are stored in folders, which are themselves arranged in a hierarchical structure of folder sets. The structure is simple and mainly optimized to store and retrieve the object(s) associated with a particular time. In this work, an overview of the entire Muon conditions database architecture is given, including the different sources of the data and the storage model used. In addition, the software interfaces used to access the conditions data are described, with emphasis on the Offline Reconstruction framework ATHENA and the services developed to provide the conditions data to the reconstruction.
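The interval-of-validity idea described above can be sketched in a few lines: conditions payloads are stored in a named folder with [since, until) validity ranges, and retrieval means finding the payload valid at a given time. This is a schematic Python illustration, not the COOL API itself; the folder contents are invented.

```python
import bisect

class ConditionsFolder:
    """Toy interval-of-validity store: payloads valid over [since, until)."""
    def __init__(self):
        self._since = []     # kept sorted so lookups can use bisect
        self._entries = []   # parallel list of (since, until, payload)

    def store(self, since, until, payload):
        i = bisect.bisect_left(self._since, since)
        self._since.insert(i, since)
        self._entries.insert(i, (since, until, payload))

    def retrieve(self, time):
        """Return the payload whose validity interval contains `time`."""
        i = bisect.bisect_right(self._since, time) - 1
        if i >= 0:
            since, until, payload = self._entries[i]
            if since <= time < until:
                return payload
        raise KeyError(f"no conditions valid at t={time}")

# Hypothetical usage: alignment constants for two validity ranges in one folder.
folder = ConditionsFolder()
folder.store(1000, 2000, {"alignment": "v1"})
folder.store(2000, 3000, {"alignment": "v2"})
print(folder.retrieve(1500), folder.retrieve(2500))
```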
Collaborative learning: A next step in the training of peer support providers.
Cronise, Rita
2016-09-01
This column explores how peer support provider training is enhanced through collaborative learning. Collaborative learning is an approach that draws upon the "real life" experiences of individual learners and encompasses opportunities to explore varying perspectives and collectively construct solutions that enrich the practice of all participants. This description draws upon published articles and examples of collaborative learning in training and communities of practice of peer support providers. Similar to person-centered practices that enhance the recovery experience of individuals receiving services, collaborative learning enhances the experience of peer support providers as they explore relevant "real world" issues, offer unique contributions, and work together toward improving practice. Three examples of collaborative learning approaches are provided that have resulted in successful collaborative learning opportunities for peer support providers. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
A Linked Data-Based Collaborative Annotation System for Increasing Learning Achievements
ERIC Educational Resources Information Center
Zarzour, Hafed; Sellami, Mokhtar
2017-01-01
With the emergence of the Web 2.0, collaborative annotation practices have become more mature in the field of learning. In this context, several recent studies have shown the powerful effects of the integration of annotation mechanism in learning process. However, most of these studies provide poor support for semantically structured resources,…
Collaborative and Cooperative Learning Techniques. Learning Package No. 6.
ERIC Educational Resources Information Center
Compton, Joe; Smith, Carl, Comp.
Originally developed for the Department of Defense Schools (DoDDS) system, this learning package on collaborative and cooperative learning techniques is designed for teachers who wish to upgrade or expand their teaching skills on their own. The package includes a comprehensive search of the ERIC database; a lecture giving an overview on the topic;…
ERIC Educational Resources Information Center
Eagle, John W.; Dowd-Eagle, Shannon E.; Snyder, Andrew; Holtzman, Elizabeth Gibbons
2015-01-01
Current educational reform mandates the implementation of school-based models for early identification and intervention, progress monitoring, and data-based assessment of student progress. This article provides an overview of interdisciplinary collaboration for systems-level consultation within a Multi-Tiered System of Support (MTSS) framework.…
Examination of Studies on Technology-Assisted Collaborative Learning Published between 2010-2014
ERIC Educational Resources Information Center
Arnavut, Ahmet; Özdamli, Fezile
2016-01-01
This study is a content analysis of the articles about technology-assisted collaborative learning published in Science Direct database between the years of 2010 and 2014. Developing technology has become a topic that we encounter in every aspect of our lives. Educators deal with the contribution and integration of technology into education.…
About BTTC | Center for Cancer Research
The Brain Tumor Trials Collaborative (BTTC) was created in 2003 as a combined effort of many professionals, entities and organizations to help those suffering from brain tumors. The National Cancer Institute's (NCI) Center for Cancer Research serves as the lead institution and provides the administrative infrastructure, clinical database and oversight for the collaborative.
Wain, Karen E; Riggs, Erin; Hanson, Karen; Savage, Melissa; Riethmaier, Darlene; Muirhead, Andrea; Mitchell, Elyse; Packard, Bethanny Smith; Faucett, W Andrew
2012-10-01
The International Standards for Cytogenomic Arrays (ISCA) Consortium is a worldwide collaborative effort dedicated to optimizing patient care by improving the quality of chromosomal microarray testing. The primary effort of the ISCA Consortium has been the development of a database of copy number variants (CNVs) identified during the course of clinical microarray testing. This database is a powerful resource for clinicians, laboratories, and researchers, and can be utilized for a variety of applications, such as facilitating standardized interpretations of certain CNVs across laboratories or providing phenotypic information for counseling purposes when published data is sparse. A recognized limitation to the clinical utility of this database, however, is the quality of clinical information available for each patient. Clinical genetic counselors are uniquely suited to facilitate the communication of this information to the laboratory by virtue of their existing clinical responsibilities, case management skills, and appreciation of the evolving nature of scientific knowledge. We intend to highlight the critical role that genetic counselors play in ensuring optimal patient care through contributing to the clinical utility of the ISCA Consortium's database, as well as the quality of individual patient microarray reports provided by contributing laboratories. Current tools, paper and electronic forms, created to maximize this collaboration are shared. In addition to making a professional commitment to providing complete clinical information, genetic counselors are invited to become ISCA members and to become involved in the discussions and initiatives within the Consortium.
CALINVASIVES: a revolutionary tool to monitor invasive threats
M. Garbelotto; S. Drill; C. Powell; J. Malpas
2017-01-01
CALinvasives is a web-based relational database and content management system (CMS) cataloging the statewide distribution of invasive pathogens and pests and the plant hosts they impact. The database has been developed as a collaboration between the Forest Pathology and Mycology Laboratory at UC Berkeley and Calflora. CALinvasives will combine information on the...
Proposal for Implementing Multi-User Database (MUD) Technology in an Academic Library.
ERIC Educational Resources Information Center
Filby, A. M. Iliana
1996-01-01
Explores the use of MOO (multi-user object oriented) virtual environments in academic libraries to enhance reference services. Highlights include the development of multi-user database (MUD) technology from gaming to non-recreational settings; programming issues; collaborative MOOs; MOOs as distinguished from other types of virtual reality; audio…
[Interagency collaboration in Spanish scientific production in nursing: social network analysis].
Almero-Canet, Amparo; López-Ferrer, Mayte; Sales-Orts, Rafael
2013-01-01
The objectives of this paper are to analyze the Spanish scientific production in nursing, define its temporal evolution and its geographical and institutional distribution, and examine interinstitutional collaboration. We analyzed a comprehensive sample of Spanish scientific production in the nursing area extracted from the multidisciplinary database SciVerse Scopus. Nursing scientific production grows over time. The collaboration rate is 3.7 authors per paper, and 61% of the authors publish only one paper. Barcelona and Madrid are the provinces with the highest number of authors. Most authors belong to the hospital environment, followed closely by authors belonging to the university. The institutions that most often collaborate by sharing authorship of articles are the University of Barcelona, the Autonomous University of Barcelona and the Clinic Hospital of Barcelona. Nursing scientific production has been increasing since the discipline's incorporation into the university. The collaboration rate found is higher than that reported in other papers, and the share of occasional authors shows a slight decrease. The paper discusses the outlook of scientific collaboration in nursing in Spain at the institutional level, based on co-authorship of papers, through a network graph, observing the distribution and importance of institutions and their interactions or lack thereof. There is a strong need to use international databases for research, care and teaching, in addition to national specialized information resources. Professionals are encouraged to normalize the signature of their papers, both surnames and the institutions to which they belong. The study confirms limited cooperation with foreign institutions, although there is an increasing trend of collaboration between Spanish authors in this discipline. Three clearly defined interinstitutional collaboration patterns are observed. Copyright © 2012 Elsevier España, S.L. All rights reserved.
The Danish Microbiology Database (MiBa) 2010 to 2013.
Voldstedlund, M; Haarh, M; Mølbak, K
2014-01-09
The Danish Microbiology Database (MiBa) is a national database that receives copies of reports from all Danish departments of clinical microbiology. The database was launched in order to provide healthcare personnel with nationwide access to microbiology reports and to enable real-time surveillance of communicable diseases and microorganisms. The establishment and management of MiBa has been a collaborative process among stakeholders, and the present paper summarises lessons learned from this nationwide endeavour which may be relevant to similar projects in the rapidly changing landscape of health informatics.
International forensic automotive paint database
NASA Astrophysics Data System (ADS)
Bishea, Gregory A.; Buckle, Joe L.; Ryland, Scott G.
1999-02-01
The Technical Working Group for Materials Analysis (TWGMAT) is supporting an international forensic automotive paint database. The Federal Bureau of Investigation and the Royal Canadian Mounted Police (RCMP) are collaborating on this effort through TWGMAT. This paper outlines the support and further development of the RCMP's Automotive Paint Database, 'Paint Data Query'. This cooperative agreement augments and supports a current, validated, searchable, automotive paint database that is used to identify make(s), model(s), and year(s) of questioned paint samples in hit-and-run fatalities and other associated investigations involving automotive paint.
Final Report of the Mid-Atlantic Marine Wildlife Surveys, Modeling, and Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saracino-Brown, Jocelyn; Smith, Courtney; Gilman, Patrick
The Wind Program hosted a two-day workshop on July 24-25, 2012 with scientists and regulators engaged in marine ecological survey, modeling, and database efforts pertaining to the waters of the Mid-Atlantic region. The workshop was planned by Federal agency, academic, and private partners to promote collaboration between ongoing offshore ecological survey efforts, and to promote the collaborative development of complementary predictive models and compatible databases. The meeting primarily focused on efforts to establish and predict marine mammal, seabird, and sea turtle abundance, density, and distributions extending from the shoreline to the edge of the Exclusive Economic Zone between Nantucket Sound, Massachusetts and Cape Hatteras, North Carolina.
Indian Renewable Energy and Energy Efficiency Policy Database (Fact Sheet)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bushe, S.
2013-09-01
This fact sheet provides an overview of the Indian Renewable Energy and Energy Efficiency Policy Database (IREEED), developed collaboratively by the United States Department of Energy and India's Ministry of New and Renewable Energy. IREEED provides succinct summaries of India's central and state government policies and incentives related to renewable energy and energy efficiency. The online, public database was developed under the U.S.-India Energy Dialogue and the Clean Energy Solution Center.
A Collaborative Recommend Algorithm Based on Bipartite Community
Fu, Yuchen; Liu, Quan; Cui, Zhiming
2014-01-01
The recommendation algorithm based on bipartite networks is superior to traditional methods in accuracy and diversity, which shows that considering the network topology of recommendation systems can help improve recommendation results. However, existing algorithms mainly focus on the overall topology structure, while local characteristics can also play an important role in collaborative recommendation. Therefore, taking into account the data characteristics and application requirements of collaborative recommendation systems, we proposed a link community partitioning algorithm based on label propagation and a collaborative recommendation algorithm based on the bipartite community. We then designed numerical experiments to verify the validity of the algorithm on benchmark and real databases. PMID:24955393
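The bipartite-network recommendation idea underlying this work can be illustrated with a small sketch. The Python example below implements the generic two-step mass-diffusion style of bipartite recommendation on a toy user-item matrix, not the authors' community-based refinement; the adjacency matrix and function name are invented for illustration.

import numpy as np

# Rows are users, columns are items; 1 means the user has collected the item (toy data).
A = np.array([
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 1, 1],
], dtype=float)

def mass_diffusion_scores(A, user):
    """Two-step resource spreading on the user-item bipartite graph."""
    item_deg = A.sum(axis=0)          # degree of each item
    user_deg = A.sum(axis=1)          # degree of each user
    f0 = A[user].copy()               # initial resource sits on the user's items
    # Step 1: items -> users (each item splits its resource among its users)
    users_resource = A @ (f0 / np.where(item_deg > 0, item_deg, 1))
    # Step 2: users -> items (each user splits the resource among their items)
    scores = A.T @ (users_resource / np.where(user_deg > 0, user_deg, 1))
    scores[A[user] > 0] = 0           # do not recommend already-collected items
    return scores

print(mass_diffusion_scores(A, user=0))  # highest score = top recommendation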
GhaedAmini, Hossein; Okhovati, Maryam; Zare, Morteza; Saghafi, Zahra; Bazrafshan, Azam; GhaedAmini, Alireza; GhaedAmini, Mohammadreza
2016-05-01
The aim of this study was to provide a research and collaboration overview of Iranian research efforts in the field of traditional medicine during 2010-2014. This is a bibliometric study using the Scopus database as the data source, with affiliation addresses relevant to traditional medicine and Iran as the search strategy. Subject and geographical overlay maps were also applied to visualize the network activities of the Iranian authors. Highly cited articles (citations >10) were further explored to highlight the impact of research domains more specifically. About 3,683 articles were published by Iranian authors in the Scopus database. The compound annual growth rate of Iranian publications was 0.14% during 2010-2014. Tehran University of Medical Sciences (932 articles), Shiraz University of Medical Sciences (404 articles) and Tabriz Islamic Medical University (391 articles) were the leading institutions in the field of traditional medicine. Medicinal plants (72%), digestive system diseases (21%), basics of traditional medicine (13%), and mental disorders (8%) were the major research topics. The United States (7%), the Netherlands (3%), and Canada (2.6%) were the most important collaborators of Iranian authors. Iranian research efforts in the field of traditional medicine have increased slightly over recent years. Yet, joint multi-disciplinary collaborations are needed to cover inadequately described areas of traditional medicine in the country.
Relational databases for rare disease study: application to vascular anomalies.
Perkins, Jonathan A; Coltrera, Marc D
2008-01-01
To design a relational database integrating clinical and basic science data needed for multidisciplinary treatment and research in the field of vascular anomalies. Based on data points agreed on by the American Society of Pediatric Otolaryngology (ASPO) Vascular Anomalies Task Force. The database design enables sharing of data subsets in a Health Insurance Portability and Accountability Act (HIPAA)-compliant manner for multisite collaborative trials. Vascular anomalies pose diagnostic and therapeutic challenges. Our understanding of these lesions and treatment improvement is limited by nonstandard terminology, severity assessment, and measures of treatment efficacy. The rarity of these lesions places a premium on coordinated studies among multiple participant sites. The relational database design is conceptually centered on subjects having 1 or more lesions. Each anomaly can be tracked individually along with its treatment outcomes. This design allows for differentiation between treatment responses and untreated lesions' natural course. The relational database design eliminates data entry redundancy and results in extremely flexible search and data export functionality. Vascular anomaly programs in the United States. A relational database correlating clinical findings and photographic, radiologic, histologic, and treatment data for vascular anomalies was created for stand-alone and multiuser networked systems. Proof of concept for independent site data gathering and HIPAA-compliant sharing of data subsets was demonstrated. The collaborative effort by the ASPO Vascular Anomalies Task Force to create the database helped define a common vascular anomaly data set. The resulting relational database software is a powerful tool to further the study of vascular anomalies and the development of evidence-based treatment innovation.
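A minimal sketch of the subject-centred relational design described above, using SQLite from Python; the table and column names are illustrative and are not the ASPO task force's actual data points. Each subject can have several lesions, and each lesion carries its own treatment rows, so treated and untreated lesions (natural course) can be distinguished by a simple join.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE subject (
    subject_id   INTEGER PRIMARY KEY,
    study_code   TEXT        -- de-identified code, no direct identifiers (HIPAA)
);
CREATE TABLE lesion (
    lesion_id    INTEGER PRIMARY KEY,
    subject_id   INTEGER NOT NULL REFERENCES subject(subject_id),
    diagnosis    TEXT,       -- e.g. standardized vascular anomaly terminology
    site         TEXT
);
CREATE TABLE treatment (
    treatment_id INTEGER PRIMARY KEY,
    lesion_id    INTEGER NOT NULL REFERENCES lesion(lesion_id),
    modality     TEXT,
    outcome      TEXT        -- per-lesion response / severity measure
);
""")

# Untreated lesions (natural course) are simply lesions with no treatment rows.
untreated = conn.execute("""
    SELECT l.lesion_id FROM lesion l
    LEFT JOIN treatment t ON t.lesion_id = l.lesion_id
    WHERE t.treatment_id IS NULL
""").fetchall()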
Bedside Back to Bench: Building Bridges between Basic and Clinical Genomic Research.
Manolio, Teri A; Fowler, Douglas M; Starita, Lea M; Haendel, Melissa A; MacArthur, Daniel G; Biesecker, Leslie G; Worthey, Elizabeth; Chisholm, Rex L; Green, Eric D; Jacob, Howard J; McLeod, Howard L; Roden, Dan; Rodriguez, Laura Lyman; Williams, Marc S; Cooper, Gregory M; Cox, Nancy J; Herman, Gail E; Kingsmore, Stephen; Lo, Cecilia; Lutz, Cathleen; MacRae, Calum A; Nussbaum, Robert L; Ordovas, Jose M; Ramos, Erin M; Robinson, Peter N; Rubinstein, Wendy S; Seidman, Christine; Stranger, Barbara E; Wang, Haoyi; Westerfield, Monte; Bult, Carol
2017-03-23
Genome sequencing has revolutionized the diagnosis of genetic diseases. Close collaborations between basic scientists and clinical genomicists are now needed to link genetic variants with disease causation. To facilitate such collaborations, we recommend prioritizing clinically relevant genes for functional studies, developing reference variant-phenotype databases, adopting phenotype description standards, and promoting data sharing. Published by Elsevier Inc.
Bedside Back to Bench: Building Bridges between Basic and Clinical Genomic Research
Manolio, Teri A.; Fowler, Douglas M.; Starita, Lea M.; Haendel, Melissa A.; MacArthur, Daniel G.; Biesecker, Leslie G.; Worthey, Elizabeth; Chisholm, Rex L.; Green, Eric D.; Jacob, Howard J.; McLeod, Howard L.; Roden, Dan; Rodriguez, Laura Lyman; Williams, Marc S.; Cooper, Gregory M.; Cox, Nancy J.; Herman, Gail E.; Kingsmore, Stephen; Lo, Cecilia; Lutz, Cathleen; MacRae, Calum A.; Nussbaum, Robert L.; Ordovas, Jose M.; Ramos, Erin M.; Robinson, Peter N.; Rubinstein, Wendy S.; Seidman, Christine; Stranger, Barbara E.; Wang, Haoyi; Westerfield, Monte; Bult, Carol
2017-01-01
Summary Genome sequencing has revolutionized the diagnosis of genetic diseases. Close collaborations between basic scientists and clinical genomicists are now needed to link genetic variants with disease causation. To facilitate such collaborations we recommend prioritizing clinically relevant genes for functional studies, developing reference variant-phenotype databases, adopting phenotype description standards, and promoting data sharing. PMID:28340351
Data harmonization and federated analysis of population-based studies: the BioSHaRE project
2013-01-01
Background: Individual-level data pooling of large population-based studies across research centres in international research projects faces many hurdles. The BioSHaRE (Biobank Standardisation and Harmonisation for Research Excellence in the European Union) project aims to address these issues by building a collaborative group of investigators and developing tools for data harmonization, database integration and federated data analyses. Methods: Eight population-based studies in six European countries were recruited to participate in the BioSHaRE project. Through workshops, teleconferences and electronic communications, participating investigators identified a set of 96 variables targeted for harmonization to answer research questions of interest. Using each study's questionnaires, standard operating procedures, and data dictionaries, harmonization potential was assessed. Whenever harmonization was deemed possible, processing algorithms were developed and implemented in an open-source software infrastructure to transform study-specific data into the target (i.e. harmonized) format. Harmonized datasets located on servers in each research centre across Europe were interconnected through a federated database system to perform statistical analysis. Results: Retrospective harmonization led to the generation of common-format variables for 73% of the matches considered (96 targeted variables across 8 studies). Authenticated investigators can now perform complex statistical analyses of harmonized datasets stored on distributed servers without actually sharing individual-level data, using the DataSHIELD method. Conclusion: New Internet-based networking technologies and database management systems are providing the means to support collaborative, multi-center research in an efficient and secure manner. The results from this pilot project show that, given a strong collaborative relationship between participating studies, it is possible to seamlessly co-analyse internationally harmonized research databases while allowing each study to retain full control over individual-level data. We encourage additional collaborative research networks in epidemiology, public health, and the social sciences to make use of the open source tools presented herein. PMID:24257327
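The "processing algorithm" step can be pictured as a small per-variable mapping function. The hedged Python sketch below converts one study's smoking question into an invented common target format; the variable name, codes and values are made up for illustration and this is not the project's actual open-source infrastructure.

# Target (harmonized) variable: current_smoker coded 0 = no, 1 = yes, None = missing.
# Study A asked "How often do you smoke?" with codes 1=daily, 2=occasionally, 3=never.

def harmonize_current_smoker_study_a(raw_value):
    """Study-specific processing algorithm mapping raw codes to the target format."""
    mapping = {1: 1, 2: 1, 3: 0}        # daily/occasionally -> smoker, never -> non-smoker
    return mapping.get(raw_value)        # anything else -> None (not harmonizable)

# Each participating study implements its own function for the same target variable,
# so federated analyses can run on the common format without pooling raw data.
study_a_records = [1, 3, 2, 9]           # 9 is an unexpected code
harmonized = [harmonize_current_smoker_study_a(v) for v in study_a_records]
print(harmonized)                        # [1, 0, 1, None]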
Havelin, Leif I; Robertsson, Otto; Fenstad, Anne M; Overgaard, Søren; Garellick, Göran; Furnes, Ove
2011-12-21
The Nordic (Scandinavian) countries have had working arthroplasty registers for several years. However, the small numbers of inhabitants and the conformity within each country with respect to preferred prosthesis brands and techniques have limited register research. A collaboration called NARA (Nordic Arthroplasty Register Association) was started in 2007, resulting in a common database for Denmark, Norway, and Sweden with regard to hip replacements in 2008 and primary knee replacements in 2009. Finland joined the project in 2010. A code set was defined for the parameters that all registers had in common, and data were re-coded, within each national register, according to the common definitions. After de-identification of the patients, the anonymous data were merged into a common database. The first study based on this common database included 280,201 hip arthroplasties and the second, 151,814 knee arthroplasties. Kaplan-Meier and Cox multiple regression analyses, with adjustment for age, sex, and diagnosis, were used to calculate prosthesis survival, with any revision as the end point. In later studies, specific reasons for revision were also used as end points. We found differences among the countries concerning patient demographics, preferred surgical approaches, fixation methods, and prosthesis brands. Prosthesis survival was best in Sweden, where cement implant fixation was used more commonly than it was in the other countries. As the comparison of national results was one of the main initial aims of this collaboration, only parameters and data that all three registers could deliver were included in the database. Compared with each separate register, this combined register resulted in reduced numbers of parameters and details. In future collaborations of registers with a focus on comparing the performances of prostheses and articulations, we should probably include only the data needed specifically for the predetermined purposes, from registers that can deliver these data, rather than compiling all data from all registers that are willing to participate.
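As a rough illustration of the survival analysis described above (prosthesis survival with any revision as the end point), the following Python sketch computes a Kaplan-Meier estimate from a toy follow-up table. The data are invented, and the registers' Cox regression adjusted for age, sex and diagnosis is not reproduced here.

import numpy as np

# Toy follow-up data: years to revision or censoring (purely illustrative).
time    = np.array([1.2, 3.5, 5.0, 2.1, 7.3, 4.4, 6.0, 0.8])
revised = np.array([1,   0,   1,   0,   0,   1,   0,   1])  # 1 = revision, 0 = censored

def kaplan_meier(time, event):
    """Return (event time, survival estimate) pairs for the Kaplan-Meier curve."""
    order = np.argsort(time)
    time, event = time[order], event[order]
    surv, s = [], 1.0
    at_risk = len(time)
    for t, e in zip(time, event):
        if e == 1:
            s *= (at_risk - 1) / at_risk   # survival drops only at revision events
            surv.append((t, s))
        at_risk -= 1                       # events and censorings both leave the risk set
    return surv

for t, s in kaplan_meier(time, revised):
    print(f"t = {t:.1f} y, S(t) = {s:.3f}")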
USDA-ARS's Scientific Manuscript database
For nearly 20 years, the National Food and Nutrient Analysis Program (NFNAP) has expanded and improved the quantity and quality of data in US Department of Agriculture’s (USDA) food composition databases through the collection and analysis of nationally representative food samples. This manuscript d...
Li, Huayan; Fuller, Jeffrey; Sun, Mei; Wang, Yong; Xu, Shuang; Feng, Hui
2014-11-01
To evaluate the state of chronic disease management in China and to identify methods for improving the collaborative management of chronic diseases in the community. We searched literature published between January 2008 and November 2013 in databases such as the China Academic Journal Full-Text Database and PubMed. Screening was conducted strictly in accordance with the inclusion and exclusion criteria, and the selected literature was summarized on the basis of a collaboration model. We obtained 698 articles after initial screening and finally selected 33. All studies involved patient self-management support, but only 9 studies mentioned communication within the team, and 11 showed a clear division of labor within the team. Community management of chronic disease in China displays some deficiencies. A general service team with clear roles and responsibilities for its members is needed to improve the team's service ability and to provide patients with various forms of self-management services.
Bachelor, Alexandra; Laverdière, Olivier; Gamache, Dominick; Bordeleau, Vincent
2007-06-01
To gain a closer understanding of client collaboration and its determinants, the first goal of this study involved the investigation of clients' perceptions of collaboration using a discovery-oriented methodology. Content analysis of 30 clients' written descriptions revealed three different modes of client collaboration, labeled active, mutual, and therapist-dependent, which emphasized client initiative and active participation, joint participation, and reliance on therapists' contributions to the work and change process, respectively. The majority of clients valued the therapist's active involvement and also emphasized the helpfulness of their collaborative experiences. In general, the therapist actions and attitudes involved in clients' views of good collaboration varied among clients. A second goal was to examine the relationships between client psychological functioning, quality of interpersonal relationships, and motivation, and clients' collaborative contributions, as rated by clients and therapists. Of these, only motivation was significantly associated with client collaboration, particularly in the perceptions of therapists. (PsycINFO Database Record (c) 2010 APA, all rights reserved).
Huamaní, Charles; Mayta-Tristán, Percy
2010-09-01
To describe the Peruvian scientific production in journals indexed by the Institute for Scientific Information (ISI) and the characteristics of the institutional collaborative networks. All papers published in the ISI database (Clinical Medicine collection) from 2000 to 2009 with at least one author with a Peruvian affiliation were included. The publication trend, address of the corresponding author, type of article, institution, city (only for Peru), and country were evaluated. The collaborative networks were analyzed using the Pajek® software. A total of 1,210 papers were found, increasing from 61 in 2000 to 200 in 2009 (average of 121 articles/year). In 30.4% of the articles the corresponding author was from a Peruvian institution. The average number of authors per article was 8.3. Original articles represented 82.1% of the total. Infectious diseases-related journals concentrated most of the articles. The main countries collaborating with Peru are the USA (60.4%), England (12.9%), and Brazil (8.0%). Lima concentrated 94.7% of the publications, and three regions (Huancavelica, Moquegua and Tacna) did not register any publication. Only two universities published more than one article/year and four institutions published more than 10 articles/year. Universidad Peruana Cayetano Heredia published 45% of the total number of articles, being the most productive institution and the one with the most collaborations with foreign institutions. The Ministry of Health, including all its dependencies, published 37.3% of the total number of publications. There is a higher level of collaboration with foreign institutions than with local institutions. The Peruvian scientific production in medicine represented in the ISI database is very low but growing, and is concentrated in Lima and in a few institutions. The most productive Peruvian institutions collaborate more intensively with foreign institutions than with local ones.
Databases for multilevel biophysiology research available at Physiome.jp.
Asai, Yoshiyuki; Abe, Takeshi; Li, Li; Oka, Hideki; Nomura, Taishin; Kitano, Hiroaki
2015-01-01
Physiome.jp (http://physiome.jp) is a portal site inaugurated in 2007 to support model-based research in physiome and systems biology. At Physiome.jp, several tools and databases are available to support construction of physiological, multi-hierarchical, large-scale models. There are three databases in Physiome.jp, housing mathematical models, morphological data, and time-series data. In late 2013, the site was fully renovated, and in May 2015, new functions were implemented to provide information infrastructure to support collaborative activities for developing models and performing simulations within the database framework. This article describes updates to the databases implemented since 2013, including cooperation among the three databases, interactive model browsing, user management, version management of models, management of parameter sets, and interoperability with applications.
Nano-enabled drug delivery: a research profile.
Zhou, Xiao; Porter, Alan L; Robinson, Douglas K R; Shim, Min Suk; Guo, Ying
2014-07-01
Nano-enabled drug delivery (NEDD) systems are rapidly emerging as a key area for nanotechnology application. Understanding the status and developmental prospects of this area around the world is important to determine research priorities, and to evaluate and direct progress. Global research publication and patent databases provide a reservoir of information that can be tapped to provide intelligence for such needs. Here, we present a process for extracting NEDD-related information from these databases by involving topical experts. This process incorporates in-depth analysis of NEDD literature review papers to identify key subsystems and major topics. We then use these to structure a global analysis of NEDD research topical trends and collaborative patterns, and to inform future innovation directions. This paper describes the process of deriving nano-enabled drug delivery-related information from global research and patent databases in an effort to perform a comprehensive global analysis of research trends and directions, along with collaborative patterns. Copyright © 2014 Elsevier Inc. All rights reserved.
Bridges, Sharon
2014-07-01
Collaboration in the healthcare setting is a multifaceted process that calls for deliberate knowledge sharing and mutual accountability for patient care. The purpose of this analysis is to offer an increased understanding of the concept of collaboration within the context of nurse practitioner (NP)-physician (MD) collaborative practice. The evolutionary method of concept analysis was utilized to explore the concept of collaboration. The process of literature retrieval and data collection was discussed. The search of several nursing and medicine databases resulted in 31 articles, including 17 qualitative and quantitative studies, which met criteria for inclusion in the concept analysis. Collaboration is a complex, sophisticated process that requires commitment of all parties involved. The data analysis identified the surrogate and related terms, antecedents, attributes, and consequences of collaboration within the selected context, which were recognized by major themes presented in the literature and these were discussed. An operational definition was proposed. Increasing collaborative efforts among NPs and MDs may reduce hospital length of stays and healthcare costs, while enhancing professional relationships. Further research is needed to evaluate collaboration and collaborative efforts within the context of NP-MD collaborative practice. ©2013 American Association of Nurse Practitioners.
Education and training column: the learning collaborative.
MacDonald-Wilson, Kim L; Nemec, Patricia B
2015-03-01
This column describes the key components of a learning collaborative, with examples from the experience of 1 organization. A learning collaborative is a method for management, learning, and improvement of products or processes, and is a useful approach to implementation of a new service design or approach. This description draws from published material on learning collaboratives and the authors' experiences. The learning collaborative approach offers an effective method to improve service provider skills, provide support, and structure environments to result in lasting change for people using behavioral health services. This approach is consistent with psychiatric rehabilitation principles and practices, and serves to increase the overall capacity of the mental health system by structuring a process for discovering and sharing knowledge and expertise across provider agencies. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
Burgarella, Sarah; Cattaneo, Dario; Masseroli, Marco
2006-01-01
We developed MicroGen, a multi-database Web based system for managing all the information characterizing spotted microarray experiments. It supports information gathering and storing according to the Minimum Information About Microarray Experiments (MIAME) standard. It also allows easy sharing of information and data among all multidisciplinary actors involved in spotted microarray experiments. PMID:17238488
The risk of paradoxical embolism (RoPE) study: initial description of the completed database.
Thaler, David E; Di Angelantonio, Emanuele; Di Tullio, Marco R; Donovan, Jennifer S; Griffith, John; Homma, Shunichi; Jaigobin, Cheryl; Mas, Jean-Louis; Mattle, Heinrich P; Michel, Patrik; Mono, Marie-Luise; Nedeltchev, Krassen; Papetti, Federica; Ruthazer, Robin; Serena, Joaquín; Weimar, Christian; Elkind, Mitchell S V; Kent, David M
2013-12-01
Detecting a benefit from closure of patent foramen ovale in patients with cryptogenic stroke is hampered by low rates of stroke recurrence and uncertainty about the causal role of patent foramen ovale in the index event. A method to predict patent foramen ovale-attributable recurrence risk is needed. However, individual databases generally have too few stroke recurrences to support risk modeling. Prior studies of this population have been limited by low statistical power for examining factors related to recurrence. The aim of this study was to develop a database to support modeling of patent foramen ovale-attributable recurrence risk by combining extant data sets. We identified investigators with extant databases including subjects with cryptogenic stroke investigated for patent foramen ovale, determined the availability and characteristics of data in each database, collaboratively specified the variables to be included in the Risk of Paradoxical Embolism database, harmonized the variables across databases, and collected new primary data when necessary and feasible. The Risk of Paradoxical Embolism database has individual clinical, radiologic, and echocardiographic data from 12 component databases, including subjects with cryptogenic stroke both with (n = 1925) and without (n = 1749) patent foramen ovale. In the patent foramen ovale subjects, a total of 381 outcomes (stroke, transient ischemic attack, death) occurred (median follow-up 2·2 years). While there were substantial variations in data collection between studies, there was sufficient overlap to define a common set of variables suitable for risk modeling. While individual studies are inadequate for modeling patent foramen ovale-attributable recurrence risk, collaboration between investigators has yielded a database with sufficient power to identify those patients at highest risk for a patent foramen ovale-related stroke recurrence who may have the greatest potential benefit from patent foramen ovale closure. © 2012 The Authors. International Journal of Stroke © 2012 World Stroke Organization.
Defending against Attribute-Correlation Attacks in Privacy-Aware Information Brokering
NASA Astrophysics Data System (ADS)
Li, Fengjun; Luo, Bo; Liu, Peng; Squicciarini, Anna C.; Lee, Dongwon; Chu, Chao-Hsien
Nowadays, increasing needs for information sharing arise due to extensive collaborations among organizations. Organizations desire to provide data access to their collaborators while preserving full control over the data and comprehensive privacy of their users. A number of information systems have been developed to provide efficient and secure information sharing. However, most of the solutions proposed so far are built atop conventional data warehousing or distributed database technologies.
2008-11-24
Approved for public release. One survey item was revised for current usage; it now reads: "My organization has committed adequate budget and resources to interorganizational collaboration." The related item "My organization commits adequate human and financial resources to training with other organizations" had a mean of 3.3 and a standard deviation of 1.4.
Blok, Amanda C
2017-04-01
To report an analysis of the concept of self-management behaviors. Self-management behaviors are typically associated with disease management, with frequent use by nurse researchers in relation to chronic illness management and by international health organizations for the development of disease management interventions. A concept analysis was conducted within the context of Orem's self-care framework. Walker and Avant's eight-step concept analysis approach guided the analysis. Academic databases were searched for relevant literature, including CINAHL, the Cochrane Database of Systematic Reviews and Register of Controlled Trials, MEDLINE, PsycARTICLES and PsycINFO, and SocINDEX. Literature using the term "self-management behavior" and published between April 2001 and March 2015 was analyzed for attributes, antecedents, and consequences. A total of 189 journal articles were reviewed. Self-management behaviors are defined as proactive actions related to lifestyle, a problem, planning, collaborating, and mental support, as well as reactive actions related to a circumstantial change, undertaken to achieve a goal and influenced by the antecedents of physical, psychological, socioeconomic, and cultural characteristics, as well as collaborative and received support. The theoretical definition and middle-range explanatory theory of self-management behaviors will guide future collaborative research and clinical practice for disease management. © 2016 Wiley Periodicals, Inc.
US Astronomers Access to SIMBAD in Strasbourg, France
NASA Technical Reports Server (NTRS)
Eichhorn, G.; Oliverson, Ronald J. (Technical Monitor)
2003-01-01
During the last year the US SIMBAD Gateway Project continued to provide services like user registration to the US users of the SIMBAD database in France. Currently there are over 4300 US users registered. We also provided user support by answering questions from users and handling requests for lost passwords when still necessary. Even though almost all users now access SIMBAD without a password, based on hostnames/IP addresses, there are still some users that need individual passwords. We continued to maintain the mirror copy of the SIMBAD database on a server at SAO. This allows much faster access for the US users. During the past year we moved this mirror to a faster server to improve access for the US users. We again supported a demonstration of the SIMBAD database at the meeting of the American Astronomical Society in January. We provided support for the demonstration activities at the SIMBAD booth. We paid part of the fee for the SIMBAD demonstration. We continued to improve the cross-linking between the SIMBAD project and the Astrophysics Data System. This cross-linking between these systems is very much appreciated by the users of both the SIMBAD database and the ADS Abstract Service. The mirror of the SIMBAD database at SAO makes this connection faster for the US astronomers. We exchange information between the ADS and SIMBAD on a daily basis. The close cooperation between the CDS in Strasbourg and SAO, facilitated by this project, is an important part of the astronomy-wide digital library initiative. It has proven to be a model in how different data centers can collaborate and enhance the value of their products by linking with other data centers. We continue this collaboration in order to provide better services to both the US and European astronomical community. This collaboration is even more important in light of the developments for the Virtual Observatory projects in the different countries.
A DICOM based radiotherapy plan database for research collaboration and reporting
NASA Astrophysics Data System (ADS)
Westberg, J.; Krogh, S.; Brink, C.; Vogelius, I. R.
2014-03-01
Purpose: To create a central radiotherapy (RT) plan database for dose analysis and reporting, capable of calculating and presenting statistics on user defined patient groups. The goal is to facilitate multi-center research studies with easy and secure access to RT plans and statistics on protocol compliance. Methods: RT institutions are able to send data to the central database using DICOM communications on a secure computer network. The central system is composed of a number of DICOM servers, an SQL database and in-house developed software services to process the incoming data. A web site within the secure network allows the user to manage their submitted data. Results: The RT plan database has been developed in Microsoft .NET and users are able to send DICOM data between RT centers in Denmark. Dose-volume histogram (DVH) calculations performed by the system are comparable to those of conventional RT software. A permission system was implemented to ensure access control and easy, yet secure, data sharing across centers. The reports contain DVH statistics for structures in user defined patient groups. The system currently contains over 2200 patients in 14 collaborations. Conclusions: A central RT plan repository for use in multi-center trials and quality assurance was created. The system provides an attractive alternative to dummy runs by enabling continuous monitoring of protocol conformity and plan metrics in a trial.
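A cumulative dose-volume histogram of the kind such a system reports can be computed from a dose grid and a structure mask along the following lines. This is a simplified numpy sketch with synthetic data, not the system's actual DICOM-based calculation; the array shapes, dose values and the V50 metric shown are chosen purely for illustration.

import numpy as np

# Synthetic 3D dose grid (Gy) and a boolean mask marking voxels inside one structure.
rng = np.random.default_rng(0)
dose = rng.normal(loc=50.0, scale=5.0, size=(40, 40, 20)).clip(min=0)
mask = np.zeros_like(dose, dtype=bool)
mask[10:30, 10:30, 5:15] = True            # stand-in for a contoured structure

def cumulative_dvh(dose, mask, bin_width=0.5):
    """Fraction of structure volume receiving at least each dose level."""
    d = dose[mask]
    bins = np.arange(0.0, d.max() + bin_width, bin_width)
    volume_fraction = np.array([(d >= b).mean() for b in bins])
    return bins, volume_fraction

bins, vf = cumulative_dvh(dose, mask)
# e.g. V50: fraction of the structure receiving at least 50 Gy
print(f"V50 = {vf[np.searchsorted(bins, 50.0)]:.2%}")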
Just-in-time Database-Driven Web Applications
2003-01-01
"Just-in-time" database-driven Web applications are inexpensive, quickly-developed software that can be put to many uses within a health care organization. Database-driven Web applications garnered 73873 hits on our system-wide intranet in 2002. They enabled collaboration and communication via user-friendly Web browser-based interfaces for both mission-critical and patient-care-critical functions. Nineteen database-driven Web applications were developed. The application categories that comprised 80% of the hits were results reporting (27%), graduate medical education (26%), research (20%), and bed availability (8%). The mean number of hits per application was 3888 (SD = 5598; range, 14-19879). A model is described for just-in-time database-driven Web application development and an example given with a popular HTML editor and database program. PMID:14517109
Zink, Angela; Huscher, Dörte; Listing, Joachim
2003-01-01
The national database of the German Collaborative Arthritis Centres is a well-established tool for the observation and assessment of health care delivery to patients with rheumatic diseases in Germany. The discussion of variations in treatment practices contributes to the internal quality assessment in the participating arthritis centres. This documentation has shown deficits in primary health care, including late referral to a rheumatologist and undertreatment with disease-modifying drugs and complementary therapies. In rheumatology, there is a trend towards early, intensive medical treatment including combination therapy. The frequency and length of inpatient hospital and rehabilitation treatments are decreasing, while active physiotherapy in outpatient care has increased. Specific deficits have been identified concerning the provision of occupational therapy services and patient education.
Update of the FANTOM web resource: high resolution transcriptome of diverse cell types in mammals
Lizio, Marina; Harshbarger, Jayson; Abugessaisa, Imad; Noguchi, Shuei; Kondo, Atsushi; Severin, Jessica; Mungall, Chris; Arenillas, David; Mathelier, Anthony; Medvedeva, Yulia A.; Lennartsson, Andreas; Drabløs, Finn; Ramilowski, Jordan A.; Rackham, Owen; Gough, Julian; Andersson, Robin; Sandelin, Albin; Ienasescu, Hans; Ono, Hiromasa; Bono, Hidemasa; Hayashizaki, Yoshihide; Carninci, Piero; Forrest, Alistair R.R.; Kasukawa, Takeya; Kawaji, Hideya
2017-01-01
Upon the first publication of the fifth iteration of the Functional Annotation of Mammalian Genomes collaborative project, FANTOM5, we gathered a series of primary data and database systems into the FANTOM web resource (http://fantom.gsc.riken.jp) to help researchers explore transcriptional regulation and cellular states. In the course of the collaboration, primary data and analysis results have been expanded, and functionalities of the database systems enhanced. We believe that our data and web systems are invaluable resources, and we think the scientific community will benefit from this recent update to deepen their understanding of mammalian cellular organization. We introduce the contents of FANTOM5 here, report recent updates in the web resource and provide future perspectives. PMID:27794045
Piehl, Janet H; Green, Sally; McDonald, Steve
2003-01-01
Background Despite the growing reputation and subject coverage of the Cochrane Database of Systematic Reviews, many systematic reviews continue to be published solely in paper-based health care journals. This study was designed to determine why authors choose to publish their systematic reviews outside of the Cochrane Collaboration and if they might be interested in converting their reviews to Cochrane format for publication in the Cochrane Database of Systematic Reviews. Methods Cross-sectional survey of Australian primary authors of systematic reviews not published on the Cochrane Database of Systematic Reviews identified from the Database of Abstracts of Reviews of Effectiveness. Results We identified 88 systematic reviews from the Database of Abstracts of Reviews of Effectiveness with an Australian as the primary author. We surveyed 52 authors for whom valid contact information was available. The response rate was 88 per cent (46/52). Ten authors replied without completing the survey, leaving 36 valid surveys for analysis. The most frequently cited reasons for not undertaking a Cochrane review were: lack of time (78%), the need to undergo specific Cochrane training (46%), unwillingness to update reviews (36%), difficulties with the Cochrane process (26%) and the review topic already registered with the Cochrane Collaboration (21%). (Percentages based on completed responses to individual questions.) Nearly half the respondents would consider converting their review to Cochrane format. Dedicated time emerged as the most important factor in facilitating the potential conversion process. Other factors included navigating the Cochrane system, assistance with updating and financial support. Eighty-six per cent were willing to have their review converted to Cochrane format by another author. Conclusion Time required to complete a Cochrane review and the need for specific training are the primary reasons why some authors publish systematic reviews outside of the Cochrane Collaboration. Encouragingly, almost half of the authors would consider converting their review to Cochrane format. Based on the current number of reviews in the Database of Abstracts of Reviews of Effectiveness, this could result in more than 700 additional Cochrane reviews. Ways of supporting these authors and how to provide dedicated time to convert systematic reviews needs further consideration. PMID:12533194
Moscucci, Mauro; Share, David; Kline-Rogers, Eva; O'Donnell, Michael; Maxwell-Eward, Ann; Meengs, William L; Clark, Vivian L; Kraft, Phillip; De Franco, Anthony C; Chambers, James L; Patel, Kirit; McGinnity, John G; Eagle, Kim A
2002-10-01
The past decade has been characterized by increased scrutiny of outcomes of surgical and percutaneous coronary interventions (PCIs). This increased scrutiny has led to the development of regional, state, and national databases for outcome assessment and for public reporting. This report describes the initial development of a regional, collaborative, cardiovascular consortium and the progress made so far by this collaborative group. In 1997, a group of hospitals in the state of Michigan agreed to create a regional collaborative consortium for the development of a quality improvement program in interventional cardiology. The project included the creation of a comprehensive database of PCIs to be used for risk assessment, feedback on absolute and risk-adjusted outcomes, and sharing of information. To date, information from nearly 20,000 PCIs has been collected. A risk prediction tool for death in the hospital and additional risk prediction tools for other outcomes have been developed from the data collected, and are currently used by the participating centers for risk assessment and for quality improvement. As the project enters its fifth year, the participating centers are deeply engaged in the quality improvement phase, and expansion to a total of 17 hospitals with active PCI programs is in process. In conclusion, the Blue Cross Blue Shield of Michigan Cardiovascular Consortium is an example of a regional collaborative effort to assess and improve quality of care and outcomes, overcoming the barriers of traditional market and academic competition.
Zeni, Mary Beth
2012-03-01
The purpose of this study was to evaluate if paediatric asthma educational intervention studies included in the Cochrane Collaboration database incorporated concepts of health literacy. Inclusion criteria were established to identify review categories in the Cochrane Collaboration database specific to paediatric asthma educational interventions. Articles that met the inclusion criteria were selected from the Cochrane Collaboration database in 2010. The health literacy definition from Healthy People 2010 was used to develop a 4-point a priori rating scale to determine the extent a study reported aspects of health literacy in the development of an educational intervention for parents and/or children. Five Cochrane review categories met the inclusion criteria; 75 studies were rated for health literacy content regarding educational interventions with families and children living with asthma. A priori criteria were used for the rating process. While 52 (69%) studies had no information pertaining to health literacy, 23 (31%) reported an aspect of health literacy. Although all studies maintained the rigorous standards of randomized clinical trials, a model of health literacy was not reported regarding the design and implementation of interventions. While a more comprehensive health literacy model for the development of educational interventions with families and children may have been available after the reviewed studies were conducted, general literacy levels still could have been addressed. The findings indicate a need to incorporate health literacy in the design of client-centred educational interventions and in the selection criteria of relevant Cochrane reviews. Inclusion assures that health literacy is as important as randomization and statistical analyses in the research design of educational interventions and may even assure participation of people with literacy challenges. © 2012 The Author. International Journal of Evidence-Based Healthcare © 2012 The Joanna Briggs Institute.
Recent Efforts in Data Compilations for Nuclear Astrophysics
NASA Astrophysics Data System (ADS)
Dillmann, Iris
2008-05-01
Some recent efforts in compiling data for astrophysical purposes are introduced, which were discussed during a JINA-CARINA Collaboration meeting on "Nuclear Physics Data Compilation for Nucleosynthesis Modeling" held at the ECT* in Trento, Italy, from May 29 to June 3, 2007. The main goal of this collaboration is to develop an updated and unified nuclear reaction database for modeling a wide variety of stellar nucleosynthesis scenarios. Presently a large number of different reaction libraries (REACLIB) are used by the astrophysics community. The "JINA Reaclib Database" at http://www.nscl.msu.edu/~nero/db/ aims to merge and fit the latest experimental stellar cross sections and reaction rate data of various compilations, e.g. NACRE and its extension for Big Bang nucleosynthesis, Caughlan and Fowler, Iliadis et al., and KADoNiS. The KADoNiS (Karlsruhe Astrophysical Database of Nucleosynthesis in Stars, http://nuclear-astrophysics.fzk.de/kadonis) project is an online database for neutron capture cross sections relevant to the s process. The present version v0.2 is already included in a REACLIB file from Basel University (http://download.nucastro.org/astro/reaclib). The present status of experimental stellar (n,γ) cross sections in KADoNiS is shown. It contains recommended cross sections for 355 isotopes between ¹H and ²¹⁰Bi, over 80% of them deduced from experimental data. A "high priority list" for measurements and evaluations for light charged-particle reactions, set up by the JINA-CARINA collaboration, is presented. The central web access point to submit and evaluate new data is provided by the Oak Ridge group via the http://www.nucastrodata.org homepage. "Workflow tools" aim to make the evaluation process transparent and allow users to follow the progress.
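For context, REACLIB-style libraries commonly store each reaction rate as a seven-parameter fit in temperature. The Python sketch below evaluates that standard parameterization; the coefficient values are made up for illustration and are not taken from the JINA Reaclib Database.

import numpy as np

def reaclib_rate(a, t9):
    """Evaluate the standard REACLIB seven-parameter rate fit at temperature T9 (GK)."""
    a0, a1, a2, a3, a4, a5, a6 = a
    return np.exp(a0 + a1 / t9 + a2 * t9**(-1.0 / 3.0) + a3 * t9**(1.0 / 3.0)
                  + a4 * t9 + a5 * t9**(5.0 / 3.0) + a6 * np.log(t9))

a_set = [12.3, -1.1, 0.0, -3.2, 0.5, -0.04, 1.5]   # illustrative coefficients only
for t9 in (0.1, 1.0, 3.0):
    print(t9, reaclib_rate(a_set, t9))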
Active surveillance of postmarket medical product safety in the Federal Partners' Collaboration.
Robb, Melissa A; Racoosin, Judith A; Worrall, Chris; Chapman, Summer; Coster, Trinka; Cunningham, Francesca E
2012-11-01
After half a century of monitoring voluntary reports of medical product adverse events, the Food and Drug Administration (FDA) has launched a long-term project to build an adverse events monitoring system, the Sentinel System, which can access and evaluate electronic health care data to help monitor the safety of regulated medical products once they are marketed. On the basis of experience gathered through a number of collaborative efforts, the Federal Partners' Collaboration pilot project, involving FDA, the Centers for Medicare & Medicaid Services, the Department of Veteran Affairs, and the Department of Defense, is already enabling FDA to leverage the power of large public health care databases to assess, in near real time, the utility of analytical tools and methodologies that are being developed for use in the Sentinel System. Active medical product safety surveillance is enhanced by use of these large public health databases because specific populations of exposed patients can be identified and analyzed, and can be further stratified by key variables such as age, sex, race, socioeconomic status, and basis for eligibility to examine important subgroups.
Collaborative WiFi Fingerprinting Using Sensor-Based Navigation on Smartphones.
Zhang, Peng; Zhao, Qile; Li, You; Niu, Xiaoji; Zhuang, Yuan; Liu, Jingnan
2015-07-20
This paper presents a method that trains the WiFi fingerprint database using sensor-based navigation solutions. Since micro-electromechanical systems (MEMS) sensors provide only a short-term accuracy but suffer from the accuracy degradation with time, we restrict the time length of available indoor navigation trajectories, and conduct post-processing to improve the sensor-based navigation solution. Different middle-term navigation trajectories that move in and out of an indoor area are combined to make up the database. Furthermore, we evaluate the effect of WiFi database shifts on WiFi fingerprinting using the database generated by the proposed method. Results show that the fingerprinting errors will not increase linearly according to database (DB) errors in smartphone-based WiFi fingerprinting applications.
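The basic fingerprinting step that such a trajectory-trained database supports can be sketched as a nearest-neighbour search in received-signal-strength space. The Python example below uses invented reference points and RSS values and a plain weighted k-NN estimator; it is not the paper's sensor-aided training pipeline.

import numpy as np

# Fingerprint database: reference points with (x, y) position and RSS per access point (dBm).
# Access-point order: [AP1, AP2, AP3]; all values are invented for illustration.
positions = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0], [5.0, 5.0]])
rss_db    = np.array([[-40, -70, -65],
                      [-70, -42, -68],
                      [-66, -69, -41],
                      [-72, -55, -50]], dtype=float)

def knn_position(rss_query, k=2):
    """Weighted k-nearest-neighbour position estimate in signal space."""
    dists = np.linalg.norm(rss_db - rss_query, axis=1)
    nearest = np.argsort(dists)[:k]
    weights = 1.0 / (dists[nearest] + 1e-6)          # closer fingerprints weigh more
    return (weights[:, None] * positions[nearest]).sum(axis=0) / weights.sum()

print(knn_position(np.array([-45.0, -65.0, -60.0])))  # lands near the first reference point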
Collaborative WiFi Fingerprinting Using Sensor-Based Navigation on Smartphones
Zhang, Peng; Zhao, Qile; Li, You; Niu, Xiaoji; Zhuang, Yuan; Liu, Jingnan
2015-01-01
This paper presents a method that trains the WiFi fingerprint database using sensor-based navigation solutions. Since micro-electromechanical systems (MEMS) sensors provide only a short-term accuracy but suffer from the accuracy degradation with time, we restrict the time length of available indoor navigation trajectories, and conduct post-processing to improve the sensor-based navigation solution. Different middle-term navigation trajectories that move in and out of an indoor area are combined to make up the database. Furthermore, we evaluate the effect of WiFi database shifts on WiFi fingerprinting using the database generated by the proposed method. Results show that the fingerprinting errors will not increase linearly according to database (DB) errors in smartphone-based WiFi fingerprinting applications. PMID:26205269
Thermophysics Universal Research Framework (TURF) Tutorial Package
2017-03-02
...of modules that replicate the functionality of Coliseum/HPHall, it must be emphasized that the TURF-IR is intended to stimulate academic collaboration and does not provide... charge using an internal database of elements. Any values defined in M and Z will be ignored. The second method is to manually set the mass and charge via...
Ackerman, Katherine V.; Mixon, David M.; Sundquist, Eric T.; Stallard, Robert F.; Schwarz, Gregory E.; Stewart, David W.
2009-01-01
The Reservoir Sedimentation Survey Information System (RESIS) database, originally compiled by the Soil Conservation Service (now the Natural Resources Conservation Service) in collaboration with the Texas Agricultural Experiment Station, is the most comprehensive compilation of data from reservoir sedimentation surveys throughout the conterminous United States (U.S.). The database is a cumulative historical archive that includes data from as early as 1755 and as late as 1993. The 1,823 reservoirs included in the database range in size from farm ponds to the largest U.S. reservoirs (such as Lake Mead). Results from 6,617 bathymetric surveys are available in the database. This Data Series provides an improved version of the original RESIS database, termed RESIS-II, and a report describing RESIS-II. The RESIS-II relational database is stored in Microsoft Access and includes more precise location coordinates for most of the reservoirs than the original database but excludes information on reservoir ownership. RESIS-II is anticipated to be a template for further improvements in the database.
Huamaní, Charles; González A, Gregorio; Curioso, Walter H; Pacheco-Romero, José
2012-04-01
International collaboration is increasingly used in biomedical research. To describe the characteristics of scientific production in Latin America and the main international collaboration networks for the period 2000 to 2009. We searched for papers generated in Latin American countries in the Clinical Medicine database of ISI Web of Knowledge v.4.10 - Current Contents Connect. The country of origin of the corresponding author was considered the producing country of the paper. International collaboration was analyzed by calculating the number of countries that contributed to the generation of a particular paper. Collaboration networks were graphed to determine the centrality of each network. Twelve Latin American countries participated in the production of 253,362 papers. The corresponding author was South American in 79% of these papers. Sixteen percent of papers were on clinical medicine and 36% of these were carried out in collaboration. Brazil had the highest production (22,442 papers) and the lowest percentage of international collaboration (31%). North American countries account for 63% of collaborating countries. Only 8% of collaboration is between South American countries. Brazil has the highest tendency to collaborate with other South American countries. Brazil is the South American country with the highest scientific production and the highest indicators of centrality in South America. The most common collaboration networks are with North American countries.
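Centrality in co-authorship networks of this kind can be computed along the following lines. The Python sketch uses the networkx library and an invented toy edge list rather than the ISI data analysed in the study, which used different tooling.

import networkx as nx

# Toy international co-authorship network: an edge means at least one joint paper,
# with a weight counting joint papers (all values invented for illustration).
edges = [("Brazil", "USA", 120), ("Mexico", "USA", 80), ("Argentina", "Brazil", 30),
         ("Chile", "USA", 25), ("Brazil", "Mexico", 15), ("Argentina", "Chile", 10)]

G = nx.Graph()
G.add_weighted_edges_from(edges)

# Degree centrality: share of the other countries a given country collaborates with.
degree = nx.degree_centrality(G)
# Betweenness centrality: how often a country sits on shortest collaboration paths.
betweenness = nx.betweenness_centrality(G)

for country in sorted(G, key=degree.get, reverse=True):
    print(f"{country:10s} degree={degree[country]:.2f} betweenness={betweenness[country]:.2f}")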
Thompson, Bryony A; Spurdle, Amanda B; Plazzer, John-Paul; Greenblatt, Marc S; Akagi, Kiwamu; Al-Mulla, Fahd; Bapat, Bharati; Bernstein, Inge; Capellá, Gabriel; den Dunnen, Johan T; du Sart, Desiree; Fabre, Aurelie; Farrell, Michael P; Farrington, Susan M; Frayling, Ian M; Frebourg, Thierry; Goldgar, David E; Heinen, Christopher D; Holinski-Feder, Elke; Kohonen-Corish, Maija; Robinson, Kristina Lagerstedt; Leung, Suet Yi; Martins, Alexandra; Moller, Pal; Morak, Monika; Nystrom, Minna; Peltomaki, Paivi; Pineda, Marta; Qi, Ming; Ramesar, Rajkumar; Rasmussen, Lene Juel; Royer-Pokora, Brigitte; Scott, Rodney J; Sijmons, Rolf; Tavtigian, Sean V; Tops, Carli M; Weber, Thomas; Wijnen, Juul; Woods, Michael O; Macrae, Finlay; Genuardi, Maurizio
2014-02-01
The clinical classification of hereditary sequence variants identified in disease-related genes directly affects clinical management of patients and their relatives. The International Society for Gastrointestinal Hereditary Tumours (InSiGHT) undertook a collaborative effort to develop, test and apply a standardized classification scheme to constitutional variants in the Lynch syndrome-associated genes MLH1, MSH2, MSH6 and PMS2. Unpublished data submission was encouraged to assist in variant classification and was recognized through microattribution. The scheme was refined by multidisciplinary expert committee review of the clinical and functional data available for variants, applied to 2,360 sequence alterations, and disseminated online. Assessment using validated criteria altered classifications for 66% of 12,006 database entries. Clinical recommendations based on transparent evaluation are now possible for 1,370 variants that were not obviously protein truncating from nomenclature. This large-scale endeavor will facilitate the consistent management of families suspected to have Lynch syndrome and demonstrates the value of multidisciplinary collaboration in the curation and classification of variants in public locus-specific databases.
Plazzer, John-Paul; Greenblatt, Marc S.; Akagi, Kiwamu; Al-Mulla, Fahd; Bapat, Bharati; Bernstein, Inge; Capellá, Gabriel; den Dunnen, Johan T.; du Sart, Desiree; Fabre, Aurelie; Farrell, Michael P.; Farrington, Susan M.; Frayling, Ian M.; Frebourg, Thierry; Goldgar, David E.; Heinen, Christopher D.; Holinski-Feder, Elke; Kohonen-Corish, Maija; Robinson, Kristina Lagerstedt; Leung, Suet Yi; Martins, Alexandra; Moller, Pal; Morak, Monika; Nystrom, Minna; Peltomaki, Paivi; Pineda, Marta; Qi, Ming; Ramesar, Rajkumar; Rasmussen, Lene Juel; Royer-Pokora, Brigitte; Scott, Rodney J.; Sijmons, Rolf; Tavtigian, Sean V.; Tops, Carli M.; Weber, Thomas; Wijnen, Juul; Woods, Michael O.; Macrae, Finlay; Genuardi, Maurizio
2015-01-01
Clinical classification of sequence variants identified in hereditary disease genes directly affects clinical management of patients and their relatives. The International Society for Gastrointestinal Hereditary Tumours (InSiGHT) undertook a collaborative effort to develop, test and apply a standardized classification scheme to constitutional variants in the Lynch Syndrome genes MLH1, MSH2, MSH6 and PMS2. Unpublished data submission was encouraged to assist variant classification, and recognized by microattribution. The scheme was refined by multidisciplinary expert committee review of clinical and functional data available for variants, applied to 2,360 sequence alterations, and disseminated online. Assessment using validated criteria altered classifications for 66% of 12,006 database entries. Clinical recommendations based on transparent evaluation are now possible for 1,370 variants not obviously protein-truncating from nomenclature. This large-scale endeavor will facilitate consistent management of suspected Lynch Syndrome families, and demonstrates the value of multidisciplinary collaboration for curation and classification of variants in public locus-specific databases. PMID:24362816
Databases applicable to quantitative hazard/risk assessment-Towards a predictive systems toxicology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Waters, Michael; Jackson, Marcus
2008-11-15
The Workshop on The Power of Aggregated Toxicity Data addressed the requirement for distributed databases to support quantitative hazard and risk assessment. The authors have conceived and constructed with federal support several databases that have been used in hazard identification and risk assessment. The first of these databases, the EPA Gene-Tox Database was developed for the EPA Office of Toxic Substances by the Oak Ridge National Laboratory, and is currently hosted by the National Library of Medicine. This public resource is based on the collaborative evaluation, by government, academia, and industry, of short-term tests for the detection of mutagens and presumptive carcinogens. The two-phased evaluation process resulted in more than 50 peer-reviewed publications on test system performance and a qualitative database on thousands of chemicals. Subsequently, the graphic and quantitative EPA/IARC Genetic Activity Profile (GAP) Database was developed in collaboration with the International Agency for Research on Cancer (IARC). A chemical database driven by consideration of the lowest effective dose, GAP has served IARC for many years in support of hazard classification of potential human carcinogens. The Toxicological Activity Profile (TAP) prototype database was patterned after GAP and utilized acute, subchronic, and chronic data from the Office of Air Quality Planning and Standards. TAP demonstrated the flexibility of the GAP format for air toxics, water pollutants and other environmental agents. The GAP format was also applied to developmental toxicants and was modified to represent quantitative results from the rodent carcinogen bioassay. More recently, the authors have constructed: 1) the NIEHS Genetic Alterations in Cancer (GAC) Database which quantifies specific mutations found in cancers induced by environmental agents, and 2) the NIEHS Chemical Effects in Biological Systems (CEBS) Knowledgebase that integrates genomic and other biological data including dose-response studies in toxicology and pathology. Each of the public databases has been discussed in prior publications. They will be briefly described in the present report from the perspective of aggregating datasets to augment the data and information contained within them.
EPA has multiple ways for the public to engage with the Agency's innovative solutions and technologies, including cooperative research and development agreements, internships, student competitions, and EPA databases that developers can use to make mobile apps.
Prototype Packaged Databases and Software in Health
Gardenier, Turkan K.
1980-01-01
This paper describes the recent demand for packaged databases and software for health applications in light of developments in mini- and micro-computer technology. Specific features for defining prospective user groups are discussed; criticisms generated for large-scale epidemiological data use as a means of replacing clinical trials and associated controls are posed to the reader. The available collaborative efforts for access and analysis of jointly structured health data are stressed, with recommendations for new analytical techniques specifically geared to monitoring data such as the CTSS (Cumulative Transitional State Score) generated for tracking ongoing patient status over time in clinical trials. Examples of graphic display are given from the Domestic Information Display System (DIDS), which is a collaborative multi-agency effort to computerize and make accessible user-specified U.S. and local maps relating to health, environment, socio-economic and energy data.
Needed: Global Collaboration for Comparative Research on Cities and Health
Gusmano, Michael K.; Rodwin, Victor G.
2016-01-01
Over half of the world’s population lives in cities and United Nations (UN) demographers project an increase of 2.5 billion more urban dwellers by 2050. Yet there is too little systematic comparative research on the practice of urban health policy and management (HPAM), particularly in the megacities of middle-income and developing nations. We make a case for creating a global database on cities, population health and healthcare systems. The expenses involved in data collection would be difficult to justify without some review of previous work, some agreement on indicators worth measuring, conceptual and methodological considerations to guide the construction of the global database, and a set of research questions and hypotheses to test. We, therefore, address these issues in a manner that we hope will stimulate further discussion and collaboration. PMID:27694667
Needed: Global Collaboration for Comparative Research on Cities and Health.
Gusmano, Michael K; Rodwin, Victor G
2016-04-16
Over half of the world's population lives in cities and United Nations (UN) demographers project an increase of 2.5 billion more urban dwellers by 2050. Yet there is too little systematic comparative research on the practice of urban health policy and management (HPAM), particularly in the megacities of middle-income and developing nations. We make a case for creating a global database on cities, population health and healthcare systems. The expenses involved in data collection would be difficult to justify without some review of previous work, some agreement on indicators worth measuring, conceptual and methodological considerations to guide the construction of the global database, and a set of research questions and hypotheses to test. We, therefore, address these issues in a manner that we hope will stimulate further discussion and collaboration. © 2016 by Kerman University of Medical Sciences.
Update of the FANTOM web resource: high resolution transcriptome of diverse cell types in mammals.
Lizio, Marina; Harshbarger, Jayson; Abugessaisa, Imad; Noguchi, Shuei; Kondo, Atsushi; Severin, Jessica; Mungall, Chris; Arenillas, David; Mathelier, Anthony; Medvedeva, Yulia A; Lennartsson, Andreas; Drabløs, Finn; Ramilowski, Jordan A; Rackham, Owen; Gough, Julian; Andersson, Robin; Sandelin, Albin; Ienasescu, Hans; Ono, Hiromasa; Bono, Hidemasa; Hayashizaki, Yoshihide; Carninci, Piero; Forrest, Alistair R R; Kasukawa, Takeya; Kawaji, Hideya
2017-01-04
Upon the first publication of the fifth iteration of the Functional Annotation of Mammalian Genomes collaborative project, FANTOM5, we gathered a series of primary data and database systems into the FANTOM web resource (http://fantom.gsc.riken.jp) to help researchers explore transcriptional regulation and cellular states. In the course of the collaboration, primary data and analysis results have been expanded, and functionalities of the database systems enhanced. We believe that our data and web systems are invaluable resources, and we think the scientific community will benefit from this recent update to deepen their understanding of mammalian cellular organization. We introduce the contents of FANTOM5 here, report recent updates in the web resource and provide future perspectives. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
Bazm, Soheila; Kalantar, Seyyed Mehdi; Mirzaei, Masoud
2016-06-01
To meet the future challenges in the field of reproductive medicine in Iran, better understanding of published studies is needed. Bibliometric methods and social network analysis have been used to measure the scope and illustrate the scientific output of researchers in this field. This study provides insight into the structure of the network of Iranian papers published in the field of reproductive medicine through 2010-2014. In this cross-sectional study, all relevant scientific publications were retrieved from the Scopus database and were analyzed according to document type, journal of publication, hot topics, authors and institutions. The results were mapped and clustered by VosViewer software. In total, 3141 papers from Iranian researchers were identified in the Scopus database between 2010 and 2014. The number of publications per year increased from 461 in 2010 to 749 in 2014. Tehran University of Medical Sciences and "Soleimani M" occupied the top positions based on the productivity indicator. Likewise, "Soleimani M" obtained the first rank among authors according to degree centrality, betweenness centrality and collaboration criteria. In addition, among institutions, the Iranian Academic Center for Education, Culture and Research (ACECR) was the leader based on degree centrality, betweenness centrality and collaboration indicators. Publications of Iranian researchers in the field of reproductive medicine showed steady growth during 2010-2014. It seems that, in addition to quantity, Iranian authors have to improve the quality of their articles and their collaboration, which will help them advance their efforts.
Moran, Jean M; Feng, Mary; Benedetti, Lisa A; Marsh, Robin; Griffith, Kent A; Matuszak, Martha M; Hess, Michael; McMullen, Matthew; Fisher, Jennifer H; Nurushev, Teamour; Grubb, Margaret; Gardner, Stephen; Nielsen, Daniel; Jagsi, Reshma; Hayman, James A; Pierce, Lori J
A database in which patient data are compiled allows analytic opportunities for continuous improvements in treatment quality and comparative effectiveness research. We describe the development of a novel, web-based system that supports the collection of complex radiation treatment planning information from centers that use diverse techniques, software, and hardware for radiation oncology care in a statewide quality collaborative, the Michigan Radiation Oncology Quality Consortium (MROQC). The MROQC database seeks to enable assessment of physician- and patient-reported outcomes and quality improvement as a function of treatment planning and delivery techniques for breast and lung cancer patients. We created tools to collect anonymized data based on all plans. The MROQC system representing 24 institutions has been successfully deployed in the state of Michigan. Since 2012, dose-volume histogram and Digital Imaging and Communications in Medicine-radiation therapy plan data and information on simulation, planning, and delivery techniques have been collected. Audits indicated >90% accurate data submission and spurred refinements to data collection methodology. This model web-based system captures detailed, high-quality radiation therapy dosimetry data along with patient- and physician-reported outcomes and clinical data for a radiation therapy collaborative quality initiative. The collaborative nature of the project has been integral to its success. Our methodology can be applied to setting up analogous consortiums and databases. Copyright © 2016 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
Collaborative Resource Allocation
NASA Technical Reports Server (NTRS)
Wang, Yeou-Fang; Wax, Allan; Lam, Raymond; Baldwin, John; Borden, Chester
2007-01-01
Collaborative Resource Allocation Networking Environment (CRANE) Version 0.5 is a prototype created to prove the newest concept of using a distributed environment to schedule Deep Space Network (DSN) antenna times in a collaborative fashion. This program is for all space-flight and terrestrial science project users and DSN schedulers to perform scheduling activities and conflict resolution, both synchronously and asynchronously. Project schedulers can, for the first time, participate directly in scheduling their tracking times into the official DSN schedule, and negotiate directly with other projects in an integrated scheduling system. A master schedule covers long-range, mid-range, near-real-time, and real-time scheduling time frames all in one, rather than the current method of separate functions that are supported by different processes and tools. CRANE also provides private workspaces (both dynamic and static), data sharing, scenario management, user control, rapid messaging (based on Java Message Service), data/time synchronization, workflow management, notification (including emails), conflict checking, and a linkage to a schedule generation engine. The data structure, with its corresponding database design, combines object trees with multiple associated mortal instances and a relational database to provide unprecedented traceability and to simplify the existing DSN XML schedule representation. These technologies are used to provide traceability, schedule negotiation, conflict resolution, and load forecasting from real-time operations to long-range loading analysis up to 20 years in the future. CRANE includes a database, a stored procedure layer, an agent-based middle tier, a Web service wrapper, a Windows Integrated Analysis Environment (IAE), a Java application, and a Web page interface.
Bazm, Soheila; Kalantar, Seyyed Mehdi; Mirzaei, Masoud
2016-01-01
Background: To meet the future challenges in the field of reproductive medicine in Iran, better understanding of published studies is needed. Bibliometric methods and social network analysis have been used to measure the scope and illustrate the scientific output of researchers in this field. Objective: This study provides insight into the structure of the network of Iranian papers published in the field of reproductive medicine through 2010-2014. Materials and Methods: In this cross-sectional study, all relevant scientific publications were retrieved from the Scopus database and were analyzed according to document type, journal of publication, hot topics, authors and institutions. The results were mapped and clustered by VosViewer software. Results: In total, 3141 papers from Iranian researchers were identified in the Scopus database between 2010 and 2014. The number of publications per year increased from 461 in 2010 to 749 in 2014. Tehran University of Medical Sciences and "Soleimani M" occupied the top positions based on the productivity indicator. Likewise, "Soleimani M" obtained the first rank among authors according to degree centrality, betweenness centrality and collaboration criteria. In addition, among institutions, the Iranian Academic Center for Education, Culture and Research (ACECR) was the leader based on degree centrality, betweenness centrality and collaboration indicators. Conclusion: Publications of Iranian researchers in the field of reproductive medicine showed steady growth during 2010-2014. It seems that, in addition to quantity, Iranian authors have to improve the quality of their articles and their collaboration, which will help them advance their efforts. PMID:27525320
Lee, Seohyun; Cho, Yoon-Min; Kim, Sun-Young
2017-08-22
Mobile health (mHealth), a term used for healthcare delivery via mobile devices, has gained attention as an innovative technology for better access to healthcare and support for performance of health workers in the global health context. Despite large expansion of mHealth across sub-Saharan Africa, regional collaboration for scale-up has not made progress over the last decade. As groundwork for strategic planning for regional collaboration, the study attempted to identify spatial patterns of mHealth implementation in sub-Saharan Africa using an exploratory spatial data analysis. In order to obtain comprehensive data on the total number of mHealth programs implemented between 2006 and 2016 in each of the 48 sub-Saharan Africa countries, we performed a systematic data collection from various sources, including: the WHO eHealth Database, the World Bank Projects & Operations Database, and the USAID mHealth Database. Additional spatial analysis was performed for mobile cellular subscriptions per 100 people to suggest strategic regional collaboration for improving mobile penetration rates along with the mHealth initiative. Global Moran's I and Local Indicator of Spatial Association (LISA) were calculated for mHealth programs and mobile subscriptions per 100 population to investigate spatial autocorrelation, which indicates the presence of local clustering and spatial disparities. From our systematic data collection, the total number of mHealth programs implemented in sub-Saharan Africa between 2006 and 2016 was 487 (same programs implemented in multiple countries were counted separately). Of these, the eastern region with 17 countries and the western region with 16 countries had 287 and 145 mHealth programs, respectively. Despite low levels of global autocorrelation, LISA enabled us to detect meaningful local clusters. Overall, the eastern part of sub-Saharan Africa shows high-high association for mHealth programs. As for mobile subscription rates per 100 population, the northern area shows extensive low-low association. This study aimed to shed some light on the potential for strategic regional collaboration for scale-up of mHealth and mobile penetration. Firstly, countries in the eastern area with much experience can take the lead role in pursuing regional collaboration for mHealth programs in sub-Saharan Africa. Secondly, collective effort in improving mobile penetration rates for the northern area is recommended.
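The global spatial statistic named above is standard and easy to reproduce. The sketch below shows one common formulation of global Moran's I applied to per-country program counts under a binary contiguity weight matrix; values near +1 indicate clustering of similar values, values near 0 indicate spatial randomness. The counts and weight matrix are invented toy values, not the study's data, and the local variant (LISA) would additionally decompose the statistic country by country.

```python
import numpy as np

def morans_i(x, W):
    """Global Moran's I for attribute values x and spatial weight matrix W.

    x : 1-D array of values (e.g. mHealth program counts per country)
    W : 2-D array of spatial weights (e.g. 1 if two countries are neighbours, else 0)
    """
    x = np.asarray(x, dtype=float)
    W = np.asarray(W, dtype=float)
    n = x.size
    z = x - x.mean()                     # deviations from the mean
    num = np.sum(W * np.outer(z, z))     # sum_ij w_ij * z_i * z_j (w_ii = 0)
    den = np.sum(z ** 2)                 # sum_i z_i^2
    return (n / W.sum()) * (num / den)

# Hypothetical toy example: four countries and a symmetric contiguity matrix.
counts = [40, 35, 5, 3]
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
print(round(morans_i(counts, W), 3))     # positive value: high counts cluster together
```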
Collaboration spotting for dental science.
Leonardi, E; Agocs, A; Fragkiskos, S; Kasfikis, N; Le Goff, J M; Cristalli, M P; Luzzi, V; Polimeni, A
2014-10-06
The goal of the Collaboration Spotting project is to create an automatic system to collect information about publications and patents related to a given technology, to identify the key players involved, and to highlight collaborations and related technologies. The collected information can be visualized in a web browser as interactive graphical maps showing in an intuitive way the players and their collaborations (Sociogram) and the relations among the technologies (Technogram). We propose to use the system to study technologies related to Dental Science. In order to create a Sociogram, we create a logical filter based on a set of keywords related to the technology under study. This filter is used to extract a list of publications from the Web of Science™ database. The list is validated by an expert in the technology and sent to CERN where it is inserted in the Collaboration Spotting database. Here, an automatic software system uses the data to generate the final maps. We studied a set of recent technologies related to bone regeneration procedures of oro-maxillo-facial critical size defects, namely the use of Porous HydroxyApatite (HA) as a bone substitute alone (bone graft) or as a tridimensional support (scaffold) for insemination and differentiation ex vivo of Mesenchymal Stem Cells. We produced the Sociograms for these technologies and the resulting maps are now accessible on-line. The Collaboration Spotting system allows the automatic creation of interactive maps to show the current and historical state of research on a specific technology. These maps are an ideal tool both for researchers who want to assess the state-of-the-art in a given technology, and for research organizations who want to evaluate their contribution to the technological development in a given field. We demonstrated that the system can be used for Dental Science and produced the maps for an initial set of technologies in this field. We now plan to enlarge the set of mapped technologies in order to make the Collaboration Spotting system a useful reference tool for Dental Science research.
Collaboration Spotting for oral medicine.
Leonardi, E; Agocs, A; Fragkiskos, S; Kasfikis, N; Le Goff, J M; Cristalli, M P; Luzzi, V; Polimeni, A
2014-09-01
The goal of the Collaboration Spotting project is to create an automatic system to collect information about publications and patents related to a given technology, to identify the key players involved, and to highlight collaborations and related technologies. The collected information can be visualized in a web browser as interactive graphical maps showing in an intuitive way the players and their collaborations (Sociogram) and the relations among the technologies (Technogram). We propose to use the system to study technologies related to oral medicine. In order to create a sociogram, we create a logical filter based on a set of keywords related to the technology under study. This filter is used to extract a list of publications from the Web of Science™ database. The list is validated by an expert in the technology and sent to CERN where it is inserted in the Collaboration Spotting database. Here, an automatic software system uses the data to generate the final maps. We studied a set of recent technologies related to bone regeneration procedures of oro-maxillo-facial critical size defects, namely the use of porous hydroxyapatite (HA) as a bone substitute alone (bone graft) or as a tridimensional support (scaffold) for insemination and differentiation ex vivo of mesenchymal stem cells. We produced the sociograms for these technologies and the resulting maps are now accessible on-line. The Collaboration Spotting system allows the automatic creation of interactive maps to show the current and historical state of research on a specific technology. These maps are an ideal tool both for researchers who want to assess the state-of-the-art in a given technology, and for research organizations who want to evaluate their contribution to the technological development in a given field. We demonstrated that the system can be used in oral medicine and produced the maps for an initial set of technologies in this field. We now plan to enlarge the set of mapped technologies in order to make the Collaboration Spotting system a useful reference tool for oral medicine research.
Evolution of Database Replication Technologies for WLCG
NASA Astrophysics Data System (ADS)
Baranowski, Zbigniew; Lobato Pardavila, Lorena; Blaszczyk, Marcin; Dimitrov, Gancho; Canali, Luca
2015-12-01
In this article we summarize several years of experience with database replication technologies used at WLCG and provide a short review of the available Oracle technologies and their key characteristics. One of the notable changes and improvements in this area in the recent past has been the introduction of Oracle GoldenGate as a replacement for Oracle Streams. We report on the preparation and later upgrades for remote replication done in collaboration with ATLAS and Tier 1 database administrators, including the experience from running Oracle GoldenGate in production. Moreover, we report on another key technology in this area, Oracle Active Data Guard, which has been adopted in several of the mission-critical use cases for database replication between online and offline databases for the LHC experiments.
Methods for structuring scientific knowledge from many areas related to aging research.
Zhavoronkov, Alex; Cantor, Charles R
2011-01-01
Aging and age-related disease represent a substantial share of current natural, social and behavioral science research efforts. Presently, no centralized system exists for tracking aging research projects across numerous research disciplines. The multidisciplinary nature of this research complicates the understanding of underlying project categories, the establishment of project relations, and the development of a unified project classification scheme. We have developed a highly visual database, the International Aging Research Portfolio (IARP), available at AgingPortfolio.org to address this issue. The database integrates information on research grants, peer-reviewed publications, and issued patent applications from multiple sources. Additionally, the database uses flexible project classification mechanisms and tools for analyzing project associations and trends. This system enables scientists to search the centralized project database, to classify and categorize aging projects, and to analyze the funding aspects across multiple research disciplines. The IARP is designed to improve the allocation and prioritization of scarce research funding, to reduce project overlap and to improve scientific collaboration, thereby accelerating scientific and medical progress in a rapidly growing area of research. Grant applications often precede publications and some grants do not result in publications; thus, this system provides an earlier and broader view of research activity in many research disciplines. This project is a first attempt to provide a centralized database system for research grants and to categorize aging research projects into multiple subcategories utilizing both advanced machine algorithms and a hierarchical environment for scientific collaboration.
Katayama, Toshiaki; Arakawa, Kazuharu; Nakao, Mitsuteru; Ono, Keiichiro; Aoki-Kinoshita, Kiyoko F; Yamamoto, Yasunori; Yamaguchi, Atsuko; Kawashima, Shuichi; Chun, Hong-Woo; Aerts, Jan; Aranda, Bruno; Barboza, Lord Hendrix; Bonnal, Raoul Jp; Bruskiewich, Richard; Bryne, Jan C; Fernández, José M; Funahashi, Akira; Gordon, Paul Mk; Goto, Naohisa; Groscurth, Andreas; Gutteridge, Alex; Holland, Richard; Kano, Yoshinobu; Kawas, Edward A; Kerhornou, Arnaud; Kibukawa, Eri; Kinjo, Akira R; Kuhn, Michael; Lapp, Hilmar; Lehvaslaiho, Heikki; Nakamura, Hiroyuki; Nakamura, Yasukazu; Nishizawa, Tatsuya; Nobata, Chikashi; Noguchi, Tamotsu; Oinn, Thomas M; Okamoto, Shinobu; Owen, Stuart; Pafilis, Evangelos; Pocock, Matthew; Prins, Pjotr; Ranzinger, René; Reisinger, Florian; Salwinski, Lukasz; Schreiber, Mark; Senger, Martin; Shigemoto, Yasumasa; Standley, Daron M; Sugawara, Hideaki; Tashiro, Toshiyuki; Trelles, Oswaldo; Vos, Rutger A; Wilkinson, Mark D; York, William; Zmasek, Christian M; Asai, Kiyoshi; Takagi, Toshihisa
2010-08-21
Web services have become a key technology for bioinformatics, since life science databases are globally decentralized and the exponential increase in the amount of available data demands efficient systems without the need to transfer entire databases for every step of an analysis. However, various incompatibilities among database resources and analysis services make it difficult to connect and integrate these into interoperable workflows. To resolve this situation, we invited domain specialists from web service providers, client software developers, Open Bio* projects, the BioMoby project and researchers of emerging areas where a standard exchange data format is not well established, for an intensive collaboration entitled the BioHackathon 2008. The meeting was hosted by the Database Center for Life Science (DBCLS) and Computational Biology Research Center (CBRC) and was held in Tokyo from February 11th to 15th, 2008. In this report we highlight the work accomplished and the common issues that arose from this event, including the standardization of data exchange formats and services in the emerging fields of glycoinformatics, biological interaction networks, text mining, and phyloinformatics. In addition, common shared object development based on BioSQL, as well as technical challenges in large data management, asynchronous services, and security are discussed. Consequently, we improved interoperability of web services in several fields; however, further cooperation among major database centers and continued collaborative efforts between service providers and software developers are still necessary for an effective advance in bioinformatics web service technologies.
2010-01-01
Web services have become a key technology for bioinformatics, since life science databases are globally decentralized and the exponential increase in the amount of available data demands efficient systems without the need to transfer entire databases for every step of an analysis. However, various incompatibilities among database resources and analysis services make it difficult to connect and integrate these into interoperable workflows. To resolve this situation, we invited domain specialists from web service providers, client software developers, Open Bio* projects, the BioMoby project and researchers of emerging areas where a standard exchange data format is not well established, for an intensive collaboration entitled the BioHackathon 2008. The meeting was hosted by the Database Center for Life Science (DBCLS) and Computational Biology Research Center (CBRC) and was held in Tokyo from February 11th to 15th, 2008. In this report we highlight the work accomplished and the common issues that arose from this event, including the standardization of data exchange formats and services in the emerging fields of glycoinformatics, biological interaction networks, text mining, and phyloinformatics. In addition, common shared object development based on BioSQL, as well as technical challenges in large data management, asynchronous services, and security are discussed. Consequently, we improved interoperability of web services in several fields; however, further cooperation among major database centers and continued collaborative efforts between service providers and software developers are still necessary for an effective advance in bioinformatics web service technologies. PMID:20727200
Ozcan, Sercan; Islam, Nazrul
2017-01-01
Many challenges still remain in the processing of explicit technological knowledge documents such as patents. Given the limitations and drawbacks of the existing approaches, this research sets out to develop an improved method for searching patent databases and extracting patent information to increase the efficiency and reliability of nanotechnology patent information retrieval process and to empirically analyse patent collaboration. A tech-mining method was applied and the subsequent analysis was performed using Thomson data analyser software. The findings show that nations such as Korea and Japan are highly collaborative in sharing technological knowledge across academic and corporate organisations within their national boundaries, and China presents, in some cases, a great illustration of effective patent collaboration and co-inventorship. This study also analyses key patent strengths by country, organisation and technology.
Ruohoalho, Johanna; Østvoll, Eirik; Bratt, Mette; Bugten, Vegard; Bäck, Leif; Mäkitie, Antti; Ovesen, Therese; Stalfors, Joacim
2018-06-01
Surgical quality registers provide tools to measure and improve the outcome of surgery. International register collaboration creates an opportunity to assess and critically evaluate national practices, and increases the size of available datasets. Even though millions of tonsillectomies and tonsillotomies are performed worldwide every year, clinical practices are variable and the evidence regarding best clinical practice is inconsistent. The need for quality improvement actions is evident. We aimed to systematically investigate the existing tonsil surgery quality registers found in the literature, and to provide a thorough presentation of the planned Nordic Tonsil Surgery Register Collaboration. A systematic literature search of MEDLINE and EMBASE databases (from January 1990 to December 2016) was conducted to identify registers, databases, quality improvement programs or comprehensive audit programs addressing tonsil surgery. We identified two active registers and three completed audit programs focusing on tonsil surgery quality registration. Recorded variables were fairly similar, but considerable variation in coverage, number of operations included and length of time period for inclusion was discovered. Considering that tonsillectomies and tonsillotomies are among the most commonly performed surgical procedures in otorhinolaryngology, it is surprising that only two active registers could be identified. We present the Nordic Tonsil Surgery Register Collaboration, an international tonsil surgery quality register project aiming to provide accurate benchmarks and enhance the quality of tonsil surgery in Denmark, Finland, Norway and Sweden.
Overview of NASA MSFC IEC Federated Engineering Collaboration Capability
NASA Technical Reports Server (NTRS)
Moushon, Brian; McDuffee, Patrick
2005-01-01
The MSFC IEC federated engineering framework is currently developing a single collaborative engineering framework across independent NASA centers. The federated approach allows NASA centers to maintain diversity and uniqueness while providing interoperability. These systems are integrated together in a federated framework without compromising individual center capabilities. MSFC IEC's Federation Framework will have a direct effect on how engineering data are managed across the Agency. The approach is a direct response to the Columbia Accident Investigation Board (CAIB) finding F7.4-11, which states that the Space Shuttle Program has a wealth of data tucked away in multiple databases without a convenient way to integrate and use the data for management, engineering, or safety decisions. IEC's federated capability is further supported by OneNASA recommendation 6, which identifies the need to enhance cross-Agency collaboration by putting in place common engineering and collaborative tools and databases, processes, and knowledge-sharing structures. MSFC's IEC Federated Framework is loosely connected to other engineering applications that can provide users with the integration needed to achieve an Agency view of the entire product definition and development process, while allowing work to be distributed across NASA Centers and contractors. The IEC DDMS federation framework eliminates the need to develop a single, enterprise-wide data model, since the goal of having a common data model shared between NASA centers and contractors is very difficult to achieve.
Diabetes research in Middle East countries; a scientometrics study from 1990 to 2012.
Peykari, Niloofar; Djalalinia, Shirin; Kasaeian, Amir; Naderimagham, Shohreh; Hasannia, Tahereh; Larijani, Bagher; Farzadfar, Farshad
2015-03-01
The burden of diabetes is a serious warning that urgent action plans are needed across the world. Knowledge production in this context could provide evidence for more efficient interventions. To that aim, we quantified the trend of diabetes research outputs of Middle East countries, focusing on scientific publication numbers, citations, and international collaboration. This scientometrics study was performed as a systematic analysis of three international databases, ISI, PubMed, and Scopus, from 1990 to 2012. International collaboration of Middle East countries and citations were analyzed based on Scopus. Diabetes publications in Iran specifically were assessed, and frequently used terms were mapped with VOSviewer software. Over the 23-year period, the number of diabetes publications and related citations in Middle East countries showed an increasing trend. The numbers of articles on diabetes in ISI, PubMed, and Scopus were 13,994, 11,336, and 20,707, respectively. Turkey, Israel, Iran, Saudi Arabia, and Egypt occupied the top five positions. In addition, Israel, Turkey, and Iran were the leading countries in the citation analysis. The most collaborative country with Middle East countries was the USA, and within the region the most collaborative country was Saudi Arabia. Iran stands in third position in all databases and produced 12.7% of diabetes publications within the region. Regarding diabetes research, the frequently used terms in Iranian articles were "effect," "woman," and "metabolic syndrome." The ascending trend of diabetes research outputs in Middle East countries is appreciated, but strategic planning is needed to maintain this trend, and more collaboration between researchers is needed for regional health promotion.
A Collaboration Network Model Of Cytokine-Protein Network
NASA Astrophysics Data System (ADS)
Zou, Sheng-Rong; Zhou, Ta; Peng, Yu-Jing; Guo, Zhong-Wei; Gu, Chang-Gui; He, Da-Ren
2008-03-01
Complex networks provide us a new view for investigation of immune systems. We collect data through the STRING database and present a network description with a cooperation network model. The cytokine-protein network model we consider is constituted by two kinds of nodes: one is immune cytokine types, which can be regarded as collaboration acts; the other is protein types, which can be regarded as collaboration actors. From the act degree distribution, which can be well described by typical SPL (shifted power law) functions [1], we find that HRAS, TNFRSF13C, S100A8, S100A1, MAPK8, S100A7, LIF, CCL4, CXCL13 are highly collaborative with other proteins. It reveals that these mediators are important in the cytokine-protein network to regulate immune activity. A dyad in the collaboration network can be defined as two proteins that appear in one cytokine collaboration relationship. The dyad act degree distribution can also be well described by typical SPL functions. [1] Assortativity and act degree distribution of some collaboration networks, Hui Chang, Bei-Bei Su, Yue-Ping Zhou, Daren He, Physica A, 383 (2007) 687-702
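As an illustration of the act-degree idea used in this cooperation-network model, the following sketch computes the act degree of each actor (protein) as the number of acts (cytokines) it participates in, together with the resulting distribution. The cytokine/protein assignments below are invented toy data, not the STRING-derived dataset of the study, and fitting the shifted power law itself is omitted.

```python
from collections import Counter

# Hypothetical act/actor data: each cytokine (act) maps to the set of
# proteins (actors) participating in it; assignments are illustrative only.
acts = {
    "IL6":    {"HRAS", "MAPK8", "S100A8"},
    "TNF":    {"HRAS", "TNFRSF13C", "MAPK8"},
    "CXCL13": {"CCL4", "HRAS"},
}

# Act degree of a protein = number of acts (cytokines) it takes part in.
act_degree = Counter()
for proteins in acts.values():
    for p in proteins:
        act_degree[p] += 1

# Act degree distribution P(h): fraction of proteins with act degree h.
n = len(act_degree)
distribution = {h: c / n for h, c in Counter(act_degree.values()).items()}
print(dict(act_degree))   # e.g. HRAS has the highest act degree
print(distribution)
```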
ERIC Educational Resources Information Center
Lancaster, F. W.
1989-01-01
Describes various stages involved in the applications of electronic media to the publishing industry. Highlights include computer typesetting, or photocomposition; machine-readable databases; the distribution of publications in electronic form; computer conferencing and electronic mail; collaborative authorship; hypertext; hypermedia publications;…
2017-01-01
The input-output table is comprehensive and detailed in describing the national economic system with complex economic relationships, which embodies information of supply and demand among industrial sectors. This paper aims to scale the degree of competition/collaboration on the global value chain from the perspective of econophysics. Global Industrial Strongest Relevant Network models were established by extracting the strongest and most immediate industrial relevance in the global economic system with inter-country input-output tables and then transformed into Global Industrial Resource Competition Network/Global Industrial Production Collaboration Network models embodying the competitive/collaborative relationships based on bibliographic coupling/co-citation approach. Three indicators well suited for these two kinds of weighted and non-directed networks with self-loops were introduced, including unit weight for competitive/collaborative power, disparity in the weight for competitive/collaborative amplitude and weighted clustering coefficient for competitive/collaborative intensity. Finally, these models and indicators were further applied to empirically analyze the function of sectors in the latest World Input-Output Database, to reveal inter-sector competitive/collaborative status during the economic globalization. PMID:28873432
Xing, Lizhi
2017-01-01
The input-output table is comprehensive and detailed in describing the national economic system with complex economic relationships, which embodies information of supply and demand among industrial sectors. This paper aims to scale the degree of competition/collaboration on the global value chain from the perspective of econophysics. Global Industrial Strongest Relevant Network models were established by extracting the strongest and most immediate industrial relevance in the global economic system with inter-country input-output tables and then transformed into Global Industrial Resource Competition Network/Global Industrial Production Collaboration Network models embodying the competitive/collaborative relationships based on bibliographic coupling/co-citation approach. Three indicators well suited for these two kinds of weighted and non-directed networks with self-loops were introduced, including unit weight for competitive/collaborative power, disparity in the weight for competitive/collaborative amplitude and weighted clustering coefficient for competitive/collaborative intensity. Finally, these models and indicators were further applied to empirically analyze the function of sectors in the latest World Input-Output Database, to reveal inter-sector competitive/collaborative status during the economic globalization.
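The three indicators named in this abstract correspond to well-known weighted-network measures. The minimal sketch below uses the standard definitions (strength over degree for unit weight, the disparity Y_i, and a Barrat-style weighted clustering coefficient) and ignores self-loops for simplicity; the paper adapts these to weighted networks with self-loops, so this is only an approximation, and the weight matrix is an invented toy example rather than an input-output table.

```python
import numpy as np

def node_indicators(W, i):
    """Common weighted-network indicators for node i of a symmetric weight matrix W.

    Self-loops are ignored here; the published indicators extend these ideas
    to networks with self-loops.
    """
    W = np.asarray(W, dtype=float)
    n = W.shape[0]
    neighbors = [j for j in range(n) if j != i and W[i, j] > 0]
    k = len(neighbors)                               # degree
    s = sum(W[i, j] for j in neighbors)              # strength
    unit_weight = s / k if k else 0.0                # average weight per link ("power")
    disparity = sum((W[i, j] / s) ** 2 for j in neighbors) if s else 0.0  # "amplitude"
    # Barrat-style weighted clustering coefficient ("intensity"):
    # sum over ordered neighbor pairs (a, b) that close a triangle with i.
    cw = 0.0
    for a in neighbors:
        for b in neighbors:
            if a != b and W[a, b] > 0:
                cw += (W[i, a] + W[i, b]) / 2.0
    cw = cw / (s * (k - 1)) if k > 1 and s else 0.0
    return unit_weight, disparity, cw

# Toy symmetric weight matrix for three sectors (illustrative values only).
W = [[0.0, 2.0, 1.0],
     [2.0, 0.0, 4.0],
     [1.0, 4.0, 0.0]]
print(node_indicators(W, 0))   # fully connected triangle -> clustering of 1.0
```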
The Chicago Thoracic Oncology Database Consortium: A Multisite Database Initiative
Carey, George B; Tan, Yi-Hung Carol; Bokhary, Ujala; Itkonen, Michelle; Szeto, Kyle; Wallace, James; Campbell, Nicholas; Hensing, Thomas; Salgia, Ravi
2016-01-01
Objective: An increasing amount of clinical data is available to biomedical researchers, but specifically designed database and informatics infrastructures are needed to handle this data effectively. Multiple research groups should be able to pool and share this data in an efficient manner. The Chicago Thoracic Oncology Database Consortium (CTODC) was created to standardize data collection and facilitate the pooling and sharing of data at institutions throughout Chicago and across the world. We assessed the CTODC by conducting a proof of principle investigation on lung cancer patients who took erlotinib. This study does not look into epidermal growth factor receptor (EGFR) mutations and tyrosine kinase inhibitors, but rather it discusses the development and utilization of the database involved. Methods: We have implemented the Thoracic Oncology Program Database Project (TOPDP) Microsoft Access, the Thoracic Oncology Research Program (TORP) Velos, and the TORP REDCap databases for translational research efforts. Standard operating procedures (SOPs) were created to document the construction and proper utilization of these databases. These SOPs have been made available freely to other institutions that have implemented their own databases patterned on these SOPs. Results: A cohort of 373 lung cancer patients who took erlotinib was identified. The EGFR mutation statuses of patients were analyzed. Out of the 70 patients that were tested, 55 had mutations while 15 did not. In terms of overall survival and duration of treatment, the cohort demonstrated that EGFR-mutated patients had a longer duration of erlotinib treatment and longer overall survival compared to their EGFR wild-type counterparts who received erlotinib. Discussion: The investigation successfully yielded data from all institutions of the CTODC. While the investigation identified challenges, such as the difficulty of data transfer and potential duplication of patient data, these issues can be resolved with greater cross-communication between institutions of the consortium. Conclusion: The investigation described herein demonstrates the successful data collection from multiple institutions in the context of a collaborative effort. The data presented here can be utilized as the basis for further collaborative efforts and/or development of larger and more streamlined databases within the consortium. PMID:27092293
The Chicago Thoracic Oncology Database Consortium: A Multisite Database Initiative.
Won, Brian; Carey, George B; Tan, Yi-Hung Carol; Bokhary, Ujala; Itkonen, Michelle; Szeto, Kyle; Wallace, James; Campbell, Nicholas; Hensing, Thomas; Salgia, Ravi
2016-03-16
An increasing amount of clinical data is available to biomedical researchers, but specifically designed database and informatics infrastructures are needed to handle this data effectively. Multiple research groups should be able to pool and share this data in an efficient manner. The Chicago Thoracic Oncology Database Consortium (CTODC) was created to standardize data collection and facilitate the pooling and sharing of data at institutions throughout Chicago and across the world. We assessed the CTODC by conducting a proof of principle investigation on lung cancer patients who took erlotinib. This study does not look into epidermal growth factor receptor (EGFR) mutations and tyrosine kinase inhibitors, but rather it discusses the development and utilization of the database involved. We have implemented the Thoracic Oncology Program Database Project (TOPDP) Microsoft Access, the Thoracic Oncology Research Program (TORP) Velos, and the TORP REDCap databases for translational research efforts. Standard operating procedures (SOPs) were created to document the construction and proper utilization of these databases. These SOPs have been made available freely to other institutions that have implemented their own databases patterned on these SOPs. A cohort of 373 lung cancer patients who took erlotinib was identified. The EGFR mutation statuses of patients were analyzed. Out of the 70 patients that were tested, 55 had mutations while 15 did not. In terms of overall survival and duration of treatment, the cohort demonstrated that EGFR-mutated patients had a longer duration of erlotinib treatment and longer overall survival compared to their EGFR wild-type counterparts who received erlotinib. The investigation successfully yielded data from all institutions of the CTODC. While the investigation identified challenges, such as the difficulty of data transfer and potential duplication of patient data, these issues can be resolved with greater cross-communication between institutions of the consortium. The investigation described herein demonstrates the successful data collection from multiple institutions in the context of a collaborative effort. The data presented here can be utilized as the basis for further collaborative efforts and/or development of larger and more streamlined databases within the consortium.
Collaborative Data Publication Utilizing the Open Data Repository's (ODR) Data Publisher
NASA Technical Reports Server (NTRS)
Stone, N.; Lafuente, B.; Bristow, T.; Keller, R. M.; Downs, R. T.; Blake, D.; Fonda, M.; Dateo, C.; Pires, A.
2017-01-01
Introduction: For small communities in diverse fields such as astrobiology, publishing and sharing data can be a difficult challenge. While large, homogeneous fields often have repositories and existing data standards, small groups of independent researchers have few options for publishing standards and data that can be utilized within their community. In conjunction with teams at NASA Ames and the University of Arizona, the Open Data Repository's (ODR) Data Publisher has been conducting ongoing pilots to assess the needs of diverse research groups and to develop software to allow them to publish and share their data collaboratively. Objectives: The ODR's Data Publisher aims to provide an easy-to-use and easy-to-implement software tool that will allow researchers to create and publish database templates and related data. The end product will facilitate both human-readable interfaces (web-based with embedded images, files, and charts) and machine-readable interfaces utilizing semantic standards. Characteristics: The Data Publisher software runs on the standard LAMP (Linux, Apache, MySQL, PHP) stack to provide the widest server base available. The software is based on Symfony (www.symfony.com), which provides a robust framework for creating extensible, object-oriented software in PHP. The software interface consists of a template designer where individual or master database templates can be created. A master database template can be shared by many researchers to provide a common metadata standard that will set a compatibility standard for all derivative databases. Individual researchers can then extend their instance of the template with custom fields, file storage, or visualizations that may be unique to their studies. This allows groups to create compatible databases for data discovery and sharing purposes while still providing the flexibility needed to meet the needs of scientists in rapidly evolving areas of research. Research: As part of this effort, a number of ongoing pilot and test projects are currently in progress. The Astrobiology Habitable Environments Database Working Group is developing a shared database standard using the ODR's Data Publisher and has a number of example databases where astrobiology data are shared. Soon these databases will be integrated via the template-based standard. Work with this group helps determine what data researchers in these diverse fields need to share and archive. Additionally, this pilot helps determine what standards are viable for sharing these types of data, from internally developed standards to existing open standards such as the Dublin Core (http://dublincore.org) and Darwin Core (http://rs.tdwg.org) metadata standards. Further studies are ongoing with the University of Arizona Department of Geosciences where a number of mineralogy databases are being constructed within the ODR Data Publisher system. Conclusions: Through the ongoing pilots and discussions with individual researchers and small research teams, a definition of the tools desired by these groups is coming into focus. As the software development moves forward, the goal is to meet the publication and collaboration needs of these scientists in an unobtrusive and functional way.
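A hedged illustration of the master-template idea described above: a shared template defines the community's common fields, and an individual researcher's template extends it without redefining them, keeping derivative databases compatible. The field names and rules below are invented for illustration and are not the actual ODR or AHED schema.

```python
# Shared master template: common metadata fields agreed by the community
# (field names here are hypothetical examples).
master_template = {
    "sample_id":    {"type": "string", "required": True},
    "mineral_name": {"type": "string", "required": True},
    "locality":     {"type": "string", "required": False},
}

def extend_template(master, custom_fields):
    """Derive an individual database template from the shared master.

    Custom fields may add to, but not redefine, master fields, so derivative
    databases stay compatible with the community standard.
    """
    clashes = set(master) & set(custom_fields)
    if clashes:
        raise ValueError(f"custom fields redefine master fields: {clashes}")
    derived = dict(master)
    derived.update(custom_fields)
    return derived

# A researcher extends the standard with a study-specific field.
my_template = extend_template(master_template, {
    "raman_spectrum_file": {"type": "file", "required": False},
})
print(sorted(my_template))
```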
OntoBrowser: a collaborative tool for curation of ontologies by subject matter experts.
Ravagli, Carlo; Pognan, Francois; Marc, Philippe
2017-01-01
The lack of controlled terminology and ontology usage leads to incomplete search results and poor interoperability between databases. One of the major underlying challenges of data integration is curating data to adhere to controlled terminologies and/or ontologies. Finding subject matter experts with the time and skills required to perform data curation is often problematic. In addition, existing tools are not designed for continuous data integration and collaborative curation. This results in time-consuming curation workflows that often become unsustainable. The primary objective of OntoBrowser is to provide an easy-to-use online collaborative solution for subject matter experts to map reported terms to preferred ontology (or code list) terms and facilitate ontology evolution. Additional features include web service access to data, visualization of ontologies in hierarchical/graph format and a peer review/approval workflow with alerting. The source code is freely available under the Apache v2.0 license. Source code and installation instructions are available at http://opensource.nibr.com. This software is designed to run on a Java EE application server and store data in a relational database. philippe.marc@novartis.com. © The Author 2016. Published by Oxford University Press.
OntoBrowser: a collaborative tool for curation of ontologies by subject matter experts
Ravagli, Carlo; Pognan, Francois
2017-01-01
Summary: The lack of controlled terminology and ontology usage leads to incomplete search results and poor interoperability between databases. One of the major underlying challenges of data integration is curating data to adhere to controlled terminologies and/or ontologies. Finding subject matter experts with the time and skills required to perform data curation is often problematic. In addition, existing tools are not designed for continuous data integration and collaborative curation. This results in time-consuming curation workflows that often become unsustainable. The primary objective of OntoBrowser is to provide an easy-to-use online collaborative solution for subject matter experts to map reported terms to preferred ontology (or code list) terms and facilitate ontology evolution. Additional features include web service access to data, visualization of ontologies in hierarchical/graph format and a peer review/approval workflow with alerting. Availability and implementation: The source code is freely available under the Apache v2.0 license. Source code and installation instructions are available at http://opensource.nibr.com. This software is designed to run on a Java EE application server and store data in a relational database. Contact: philippe.marc@novartis.com PMID:27605099
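A minimal sketch of the core curation step described above: reported (verbatim) terms are mapped to preferred ontology terms via curated synonyms, and anything unmapped is queued for review by a subject matter expert. The ontology content and function names are invented examples, not OntoBrowser's actual API or data model.

```python
# Tiny invented ontology: preferred terms with curated synonyms.
ontology = {
    "liver":  {"synonyms": {"hepatic tissue", "hepar"}},
    "kidney": {"synonyms": {"renal tissue"}},
}

def map_reported_term(reported, ontology, review_queue):
    """Return the preferred term for a reported term, or queue it for curation."""
    term = reported.strip().lower()
    for preferred, entry in ontology.items():
        if term == preferred or term in entry["synonyms"]:
            return preferred
    review_queue.append(reported)   # needs expert review / a new synonym mapping
    return None

queue = []
print(map_reported_term("Hepatic tissue", ontology, queue))  # -> "liver"
print(map_reported_term("adrenal gland", ontology, queue))   # -> None, queued
print(queue)
```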
Database integration in a multimedia-modeling environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dorow, Kevin E.
2002-09-02
Integration of data from disparate remote sources has direct applicability to modeling, which can support Brownfield assessments. To accomplish this task, a data integration framework needs to be established. A key element in this framework is the metadata that creates the relationship between the pieces of information that are important in the multimedia modeling environment and the information that is stored in the remote data source. The design philosophy is to allow modelers and database owners to collaborate by defining this metadata in such a way that allows interaction between their components. The main parts of this framework include tools to facilitate metadata definition, database extraction plan creation, automated extraction plan execution / data retrieval, and a central clearing house for metadata and modeling / database resources. Cross-platform compatibility (using Java) and standard communications protocols (http / https) allow these parts to run in a wide variety of computing environments (Local Area Networks, Internet, etc.), and, therefore, this framework provides many benefits. Because of the specific data relationships described in the metadata, the amount of data that have to be transferred is kept to a minimum (only the data that fulfill a specific request are provided as opposed to transferring the complete contents of a data source). This allows for real-time data extraction from the actual source. Also, the framework sets up collaborative responsibilities such that the different types of participants have control over the areas in which they have domain knowledge: the modelers are responsible for defining the data relevant to their models, while the database owners are responsible for mapping the contents of the database using the metadata definitions. Finally, the data extraction mechanism allows for the ability to control access to the data and what data are made available.
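The metadata-driven extraction described above can be illustrated with a small sketch: metadata records which remote table and column hold each model variable, and an extraction plan then requests only the columns a model actually needs. Table, column, and variable names below are hypothetical, not the framework's actual schema (which the abstract notes is Java-based).

```python
# Invented metadata: maps each model variable to its source table and column.
metadata = {
    "groundwater_depth": {"table": "site_hydrology", "column": "gw_depth_m"},
    "soil_ph":           {"table": "site_chemistry", "column": "ph"},
}

def build_extraction_plan(variables, metadata, site_id):
    """Return one parameterized SQL query per source table for the requested variables,
    so only the data that fulfill the specific request are transferred."""
    by_table = {}
    for var in variables:
        entry = metadata[var]
        by_table.setdefault(entry["table"], []).append(entry["column"])
    return [
        (f"SELECT {', '.join(cols)} FROM {table} WHERE site_id = ?", (site_id,))
        for table, cols in by_table.items()
    ]

for sql, params in build_extraction_plan(["groundwater_depth", "soil_ph"], metadata, "BF-001"):
    print(sql, params)
```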
Page, Kimberly; Mirzazadeh, Ali; Rice, Thomas M; Grebely, Jason; Kim, Arthur Y; Cox, Andrea L; Morris, Meghan D; Hellard, Margaret; Bruneau, Julie; Shoukry, Naglaa H; Dore, Gregory J; Maher, Lisa; Lloyd, Andrew R; Lauer, Georg; Prins, Maria; McGovern, Barbara H
2016-01-01
Symptomatic acute HCV infection and interferon lambda 4 (IFNL4) genotypes are important predictors of spontaneous viral clearance. Using data from a multicohort database (Injecting Cohorts [InC3] Collaborative), we establish an independent association between host IFNL4 genotype and symptoms of acute hepatitis C virus infection. This association potentially explains the higher spontaneous clearance observed in some patients with symptomatic disease.
Page, Kimberly; Mirzazadeh, Ali; Rice, Thomas M.; Grebely, Jason; Kim, Arthur Y.; Cox, Andrea L.; Morris, Meghan D.; Hellard, Margaret; Bruneau, Julie; Shoukry, Naglaa H.; Dore, Gregory J.; Maher, Lisa; Lloyd, Andrew R.; Lauer, Georg; Prins, Maria; McGovern, Barbara H.
2016-01-01
Symptomatic acute HCV infection and interferon lambda 4 (IFNL4) genotypes are important predictors of spontaneous viral clearance. Using data from a multicohort database (Injecting Cohorts [InC3] Collaborative), we establish an independent association between host IFNL4 genotype and symptoms of acute hepatitis C virus infection. This association potentially explains the higher spontaneous clearance observed in some patients with symptomatic disease. PMID:26973850
ERIC Educational Resources Information Center
Kahle, Brewster; Prelinger, Rick; Jackson, Mary E.; Boyack, Kevin W.; Wylie, Brian N.; Davidson, George S.; Witten, Ian H.; Bainbridge, David; Boddie, Stefan J.; Garrison, William A.; Cunningham, Sally Jo; Borgman, Christine L.; Hessel, Heather
2001-01-01
These six articles discuss various issues relating to digital libraries. Highlights include public access to digital materials; intellectual property concerns; the need for collaboration across disciplines; Greenstone software for construction and presentation of digital information collections; the Colorado Digitization Project; and conferences…
Towards a collaborative, global infrastructure for biodiversity assessment
Guralnick, Robert P; Hill, Andrew W; Lane, Meredith
2007-01-01
Biodiversity data are rapidly becoming available over the Internet in common formats that promote sharing and exchange. Currently, these data are somewhat problematic, primarily with regard to geographic and taxonomic accuracy, for use in ecological research, natural resources management and conservation decision-making. However, web-based georeferencing tools that utilize best practices and gazetteer databases can be employed to improve geographic data. Taxonomic data quality can be improved through web-enabled valid taxon names databases and services, as well as more efficient mechanisms to return systematic research results and taxonomic misidentification rates back to the biodiversity community. Both of these are under construction. A separate but related challenge will be developing web-based visualization and analysis tools for tracking biodiversity change. Our aim was to discuss how such tools, combined with data of enhanced quality, will help transform today's portals to raw biodiversity data into nexuses of collaborative creation and sharing of biodiversity knowledge. PMID:17594421
Therapy Decision Support Based on Recommender System Methods
Gräßer, Felix; Beckert, Stefanie; Küster, Denise; Schmitt, Jochen; Abraham, Susanne; Malberg, Hagen
2017-01-01
We present a system for data-driven therapy decision support based on techniques from the field of recommender systems. Two methods for therapy recommendation, namely, Collaborative Recommender and Demographic-based Recommender, are proposed. Both algorithms aim to predict the individual response to different therapy options using diverse patient data and recommend the therapy which is assumed to provide the best outcome for a specific patient and time, that is, consultation. The proposed methods are evaluated using a clinical database incorporating patients suffering from the autoimmune skin disease psoriasis. The Collaborative Recommender proves to generate both better outcome predictions and recommendation quality. However, due to sparsity in the data, this approach cannot provide recommendations for the entire database. In contrast, the Demographic-based Recommender performs worse on average but covers more consultations. Consequently, both methods profit from a combination into an overall recommender system. PMID:29065657
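To make the collaborative-filtering idea concrete, a minimal sketch follows that predicts a patient's response to an untried therapy from the responses of similar patients. The outcome matrix and the cosine-similarity weighting are illustrative assumptions, not the paper's actual model or data.

```python
# Toy collaborative recommender: predict an unobserved therapy outcome from
# similarity-weighted outcomes of other patients. Data are invented.
import numpy as np

# Rows = patients, columns = therapy options; entries = observed outcome score
# (higher is better), NaN = therapy not tried by that patient.
R = np.array([
    [0.8, np.nan, 0.4],
    [0.7, 0.9,    np.nan],
    [np.nan, 0.8, 0.3],
    [0.6, 0.7,    0.5],
])

def predict(R, patient, therapy):
    """Similarity-weighted average of other patients' outcomes for `therapy`."""
    target = R[patient]
    scores, weights = [], []
    for other in range(R.shape[0]):
        if other == patient or np.isnan(R[other, therapy]):
            continue
        shared = ~np.isnan(target) & ~np.isnan(R[other])
        if not shared.any():
            continue
        a, b = target[shared], R[other][shared]
        # Cosine similarity on co-observed therapies as the patient-patient weight.
        sim = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
        scores.append(R[other, therapy])
        weights.append(sim)
    if not weights:
        return np.nan  # data sparsity: no basis for a collaborative prediction
    return float(np.average(scores, weights=weights))

print(predict(R, patient=0, therapy=1))  # predicted response to an untried therapy
```

The early return on an empty weight list mirrors the coverage problem mentioned above: with sparse data, a purely collaborative approach sometimes cannot recommend at all.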
Social inertia and diversity in collaboration networks
NASA Astrophysics Data System (ADS)
Ramasco, J. J.
2007-04-01
Random graphs are useful tools to study social interactions. In particular, the use of weighted random graphs makes it possible to handle a high level of information concerning which agents interact and to what degree the interactions take place. Taking advantage of this representation, we recently defined a quantity, the Social Inertia, that measures the eagerness of agents to keep ties with previous partners. To study this quantity, we used collaboration networks, which are especially appropriate for obtaining valid statistical results due to the large size of publicly available databases. In this work, I study the Social Inertia in two of these empirical networks, the IMDB movie database and condmat. More specifically, I focus on how the Inertia relates to other properties of the graphs, and show that the Inertia provides information on how the weights of neighboring edges correlate. A social interpretation of this effect is also offered.
Czauderna, Piotr; Haeberle, Beate; Hiyama, Eiso; Rangaswami, Arun; Krailo, Mark; Maibach, Rudolf; Rinaldi, Eugenia; Feng, Yurong; Aronson, Daniel; Malogolowkin, Marcio; Yoshimura, Kenichi; Leuschner, Ivo; Lopez-Terrada, Dolores; Hishiki, Tomoro; Perilongo, Giorgio; von Schweinitz, Dietrich; Schmid, Irene; Watanabe, Kenichiro; Derosa, Marisa; Meyers, Rebecka
2016-01-01
Introduction: Contemporary state-of-the-art management of cancer is increasingly defined by individualized treatment strategies. For very rare tumors, like hepatoblastoma, the development of biologic markers, and the identification of reliable prognostic risk factors for tailoring treatment, remains very challenging. The Children's Hepatic tumors International Collaboration (CHIC) is a novel international response to this challenge. Methods: Four multicenter trial groups in the world, which have performed prospective controlled studies of hepatoblastoma over the past two decades (COG; SIOPEL; GPOH; and JPLT), joined forces to form the CHIC consortium. With the support of the data management group CINECA, CHIC developed a centralized online platform where data from eight completed hepatoblastoma trials were merged to form a database of 1605 hepatoblastoma cases treated between 1988 and 2008. The resulting dataset is described and the relationships between selected patient and tumor characteristics, and risk for adverse disease outcome (event-free survival; EFS) are examined. Results: Significantly increased risk for EFS-event was noted for advanced PRETEXT group, macrovascular venous or portal involvement, contiguous extrahepatic disease, primary tumor multifocality and tumor rupture at enrollment. Higher age (≥8 years), low AFP (<100 ng/ml) and metastatic disease were associated with the worst outcome. Conclusion: We have identified novel prognostic factors for hepatoblastoma, as well as confirmed established factors, that will be used to develop a future common global risk stratification system. The mechanics of developing the globally accessible web-based portal, building and refining the database, and performing this first statistical analysis has laid the foundation for future collaborative efforts. This is an important step in refining the risk-based grouping and the approach to future treatment stratification; thus we think our collaboration offers a template for others to follow in the study of rare tumors and diseases. PMID:26655560
Skill complementarity enhances heterophily in collaboration networks
Xie, Wen-Jie; Li, Ming-Xia; Jiang, Zhi-Qiang; Tan, Qun-Zhao; Podobnik, Boris; Zhou, Wei-Xing; Stanley, H. Eugene
2016-01-01
Much empirical evidence shows that individuals usually exhibit significant homophily in social networks. We demonstrate, however, that skill complementarity enhances heterophily in the formation of collaboration networks, in which people prefer to forge social ties with people whose professions differ from their own. We construct a model to quantify the heterophily by assuming that individuals choose collaborators to maximize utility. Using a huge database of online societies, we find evidence of heterophily in collaboration networks. The results of model calibration confirm the presence of heterophily. Both empirical analysis and model calibration show that the heterophilous feature is persistent throughout the evolution of online societies. Furthermore, the degree of skill complementarity among collaborators is positively correlated with their production output. Our work sheds new light on the scientific research utility of virtual worlds for studying human behaviors in complex socioeconomic systems. PMID:26743687
Transparent Global Seismic Hazard and Risk Assessment
NASA Astrophysics Data System (ADS)
Smolka, Anselm; Schneider, John; Pinho, Rui; Crowley, Helen
2013-04-01
Vulnerability to earthquakes is increasing, yet advanced reliable risk assessment tools and data are inaccessible to most, despite being a critical basis for managing risk. Also, there are few, if any, global standards that allow us to compare risk between various locations. The Global Earthquake Model (GEM) is a unique collaborative effort that aims to provide organizations and individuals with tools and resources for transparent assessment of earthquake risk anywhere in the world. By pooling data, knowledge and people, GEM acts as an international forum for collaboration and exchange, and leverages the knowledge of leading experts for the benefit of society. Sharing of data and risk information, best practices, and approaches across the globe is key to assessing risk more effectively. Through global projects, open-source IT development and collaborations with more than 10 regions, leading experts are collaboratively developing unique global datasets, best practice, open tools and models for seismic hazard and risk assessment. Guided by the needs and experiences of governments, companies and citizens at large, they work in continuous interaction with the wider community. A continuously expanding public-private partnership constitutes the GEM Foundation, which drives the collaborative GEM effort. An integrated and holistic approach to risk is key to GEM's risk assessment platform, OpenQuake, which integrates all of the above-mentioned contributions and will become available towards the end of 2014. Stakeholders worldwide will be able to calculate, visualise and investigate earthquake risk, capture new data and share their findings for joint learning. Homogenized information on hazard can be combined with data on exposure (buildings, population) and data on their vulnerability, for loss assessment around the globe. Furthermore, for a truly integrated view of seismic risk, users can add social vulnerability and resilience indices to maps and estimate the costs and benefits of different risk management measures. The following global data, models and methodologies will be available in the platform; some will be released to the public even earlier, such as the ISC-GEM global instrumental catalogue (released in January 2013).
Datasets:
• Global Earthquake History Catalogue [1000-1903]
• Global Instrumental Catalogue [1900-2009]
• Global Geodetic Strain Rate Model
• Global Active Fault Database
• Tectonic Regionalisation
• Buildings and Population Database
• Earthquake Consequences Database
• Physical Vulnerability Database
• Socio-Economic Vulnerability and Resilience Indicators
Models:
• Seismic Source Models
• Ground Motion (Attenuation) Models
• Physical Exposure Models
• Physical Vulnerability Models
• Composite Index Models (social vulnerability, resilience, indirect loss)
The aforementioned models developed under the GEM framework will be combined to produce estimates of hazard and risk at a global scale. Furthermore, building on many ongoing efforts and the knowledge of scientists worldwide, GEM will integrate state-of-the-art data, models, results and open-source tools into a single platform that is to serve as a "clearinghouse" on seismic risk. The platform will continue to increase in value, in particular for use in local contexts, through contributions and collaborations with scientists and organisations worldwide.
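As a toy illustration of the hazard-exposure-vulnerability logic such a platform combines, the sketch below convolves a discretised hazard curve with a vulnerability function to estimate an expected annual loss. All numbers are invented, and this is not the OpenQuake engine's actual API.

```python
# Toy expected-annual-loss calculation from hazard, vulnerability and exposure.
import numpy as np

# Hazard curve: annual probability of exceeding each ground-motion level (PGA, g).
pga_levels = np.array([0.1, 0.2, 0.3, 0.4, 0.5])
annual_prob_exceedance = np.array([0.05, 0.02, 0.008, 0.003, 0.001])

# Vulnerability: mean damage ratio (fraction of value lost) at each level.
mean_damage_ratio = np.array([0.02, 0.08, 0.20, 0.40, 0.65])

# Exposure: replacement value of the building stock at the site (currency units).
replacement_value = 5_000_000

# Occurrence probability per level = difference of exceedance probabilities
# (the last bin keeps its exceedance probability in this crude discretisation).
occurrence_prob = np.append(-np.diff(annual_prob_exceedance), annual_prob_exceedance[-1])

expected_annual_loss = float(np.sum(occurrence_prob * mean_damage_ratio) * replacement_value)
print("PGA bins (g):", pga_levels)
print(f"Expected annual loss: {expected_annual_loss:,.0f}")
```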
YTPdb: a wiki database of yeast membrane transporters.
Brohée, Sylvain; Barriot, Roland; Moreau, Yves; André, Bruno
2010-10-01
Membrane transporters constitute one of the largest functional categories of proteins in all organisms. In the yeast Saccharomyces cerevisiae, this represents about 300 proteins (approximately 5% of the proteome). We here present the Yeast Transport Protein database (YTPdb), a user-friendly collaborative resource dedicated to the precise classification and annotation of yeast transporters. YTPdb exploits an evolution of the MediaWiki web engine used for popular collaborative databases like Wikipedia, allowing every registered user to edit the data in a user-friendly manner. Proteins in YTPdb are classified on the basis of functional criteria such as subcellular location or their substrate compounds. These classifications are hierarchical, allowing queries to be performed at various levels, from highly specific (e.g. ammonium as a substrate or the vacuole as a location) to broader (e.g. cation as a substrate or inner membranes as location). Other resources accessible for each transporter via YTPdb include post-translational modifications, K(m) values, a permanently updated bibliography, and a hierarchical classification into families. The YTPdb concept can be extrapolated to other organisms and could even be applied for other functional categories of proteins. YTPdb is accessible at http://homes.esat.kuleuven.be/ytpdb/. Copyright © 2010 Elsevier B.V. All rights reserved.
CHIP Demonstrator: Semantics-Driven Recommendations and Museum Tour Generation
NASA Astrophysics Data System (ADS)
Aroyo, Lora; Stash, Natalia; Wang, Yiwen; Gorgels, Peter; Rutledge, Lloyd
The main objective of the CHIP project is to demonstrate how Semantic Web technologies can be deployed to provide personalized access to digital museum collections. We illustrate our approach with the digital database ARIA of the Rijksmuseum Amsterdam. For the semantic enrichment of the Rijksmuseum ARIA database we collaborated with the CATCH STITCH project to produce mappings to Iconclass, and with the MultimediaN E-culture project to produce the RDF/OWL of the ARIA and Adlib databases. The main focus of CHIP is on exploring the potential of applying adaptation techniques to provide personalized experience for the museum visitors both on the Web site and in the museum.
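A minimal sketch of this kind of semantic enrichment, assuming rdflib is available: an artwork record is typed, described with Dublin Core, and linked to a subject-classification concept. The namespaces and the Iconclass code are placeholders, not the actual ARIA, Adlib or Iconclass URIs.

```python
# Illustrative RDF enrichment of a museum record; URIs are placeholders.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import DC, RDF

ARIA = Namespace("http://example.org/aria/")        # placeholder namespace
ICON = Namespace("http://example.org/iconclass/")   # placeholder namespace

g = Graph()
g.bind("dc", DC)
g.bind("aria", ARIA)

artwork = ARIA["artwork/SK-C-5"]                    # illustrative identifier
g.add((artwork, RDF.type, ARIA.Artwork))
g.add((artwork, DC.title, Literal("The Night Watch")))
g.add((artwork, DC.creator, Literal("Rembrandt van Rijn")))
# The enrichment step maps the record to a subject concept (e.g. from Iconclass).
g.add((artwork, DC.subject, ICON["45"]))            # placeholder concept code

print(g.serialize(format="turtle"))
```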
Epistemonikos: a free, relational, collaborative, multilingual database of health evidence.
Rada, Gabriel; Pérez, Daniel; Capurro, Daniel
2013-01-01
Epistemonikos (www.epistemonikos.org) is a free, multilingual database of the best available health evidence. This paper describes the design, development and implementation of the Epistemonikos project. Using several web technologies to store systematic reviews, their included articles, overviews of reviews and structured summaries, Epistemonikos is able to provide a simple and powerful search tool to access health evidence for sound decision making. Currently, Epistemonikos stores more than 115,000 unique documents and more than 100,000 relationships between documents. In addition, since its database is translated into 9 different languages, Epistemonikos ensures that non-English speaking decision-makers can access the best available evidence without language barriers.
Advancing Precambrian palaeomagnetism with the PALEOMAGIA and PINT(QPI) databases
Veikkolainen, Toni H.; Biggin, Andrew J.; Pesonen, Lauri J.; Evans, David A.; Jarboe, Nicholas A.
2017-01-01
State-of-the-art measurements of the direction and intensity of Earth’s ancient magnetic field have made important contributions to our understanding of the geology and palaeogeography of Precambrian Earth. The PALEOMAGIA and PINT(QPI) databases provide thorough public collections of important palaeomagnetic data of this kind. They comprise more than 4,100 observations in total and have been essential in supporting our international collaborative efforts to understand Earth's magnetic history on a timescale far longer than that of the present Phanerozoic Eon. Here, we provide an overview of the technical structure and applications of both databases, paying particular attention to recent improvements and discoveries. PMID:28534869
A DBMS-based Medical Teleconferencing System
Chun, Jonghoon; Kim, Hanjoon; Lee, Sang-goo; Choi, Jinwook; Cho, Hanik
2001-01-01
This article presents the design of a medical teleconferencing system that is integrated with a multimedia patient database and incorporates easy-to-use tools and functions to effectively support collaborative work between physicians in remote locations. The design provides a virtual workspace that allows physicians to collectively view various kinds of patient data. By integrating the teleconferencing function into this workspace, physicians are able to conduct conferences using the same interface and have real-time access to the database during conference sessions. The authors have implemented a prototype based on this design. The prototype uses a high-speed network test bed and a manually created substitute for the integrated patient database. PMID:11522766
Diabetes research in Middle East countries; a scientometrics study from 1990 to 2012
Peykari, Niloofar; Djalalinia, Shirin; Kasaeian, Amir; Naderimagham, Shohreh; Hasannia, Tahereh; Larijani, Bagher; Farzadfar, Farshad
2015-01-01
Background: The burden of diabetes is a serious warning that urgent action plans are needed across the world. Knowledge production in this context could provide evidence for more efficient interventions. To that end, we quantified the trend of diabetes research outputs of Middle East countries, focusing on scientific publication numbers, citations, and international collaboration. Materials and Methods: This scientometrics study was based on a systematic analysis of three international databases (ISI, PubMed, and Scopus) from 1990 to 2012. International collaboration of Middle East countries and citations were analyzed based on Scopus. Diabetes publications in Iran were specifically assessed, and frequently used terms were mapped with the VOSviewer software. Results: Over the 23-year period, the number of diabetes publications and related citations in Middle East countries showed an increasing trend. The numbers of articles on diabetes in ISI, PubMed, and Scopus were, respectively, 13,994, 11,336, and 20,707. Turkey, Israel, Iran, Saudi Arabia, and Egypt occupied the top five positions. In addition, Israel, Turkey, and Iran were the leading countries in the citation analysis. The country collaborating most with Middle East countries was the USA, and within the region the most collaborative country was Saudi Arabia. Iran ranked third in all databases and produced 12.7% of diabetes publications within the region. The most frequently used terms in Iranian diabetes articles were “effect,” “woman,” and “metabolic syndrome.” Conclusion: The rising trend of diabetes research outputs in Middle East countries is encouraging, but strategic planning is needed to maintain this trend, and more collaboration between researchers is needed for regional health promotion. PMID:26109972
Dwyer, Johanna T.; Picciano, Mary Frances; Betz, Joseph M.; Fisher, Kenneth D.; Saldanha, Leila G.; Yetley, Elizabeth A.; Coates, Paul M.; Milner, John A.; Whitted, Jackie; Burt, Vicki; Radimer, Kathy; Wilger, Jaimie; Sharpless, Katherine E.; Holden, Joanne M.; Andrews, Karen; Roseland, Janet; Zhao, Cuiwei; Schweitzer, Amy; Harnly, James; Wolf, Wayne R.; Perry, Charles R.
2013-01-01
Although an estimated 50% of adults in the United States consume dietary supplements, analytically substantiated data on their bioactive constituents are sparse. Several programs funded by the Office of Dietary Supplements (ODS) at the National Institutes of Health enhance dietary supplement database development and help to better describe the quantitative and qualitative contributions of dietary supplements to total dietary intakes. ODS, in collaboration with the United States Department of Agriculture, is developing a Dietary Supplement Ingredient Database (DSID) verified by chemical analysis. The products chosen initially for analytical verification are adult multivitamin-mineral supplements (MVMs). These products are widely used, analytical methods are available for determining key constituents, and a certified reference material is in development. Also MVMs have no standard scientific, regulatory, or marketplace definitions and have widely varying compositions, characteristics, and bioavailability. Furthermore, the extent to which actual amounts of vitamins and minerals in a product deviate from label values is not known. Ultimately, DSID will prove useful to professionals in permitting more accurate estimation of the contribution of dietary supplements to total dietary intakes of nutrients and better evaluation of the role of dietary supplements in promoting health and well-being. ODS is also collaborating with the National Center for Health Statistics to enhance the National Health and Nutrition Examination Survey dietary supplement label database. The newest ODS effort explores the feasibility and practicality of developing a database of all dietary supplement labels marketed in the US. This article describes these and supporting projects. PMID:25346570
Virtual Teaching on the Tundra.
ERIC Educational Resources Information Center
McAuley, Alexander
1998-01-01
Describes how a teacher and a distance-learning consultant collaborate in using the Internet and the Computer Supported Intentional Learning Environment (CSILE) to connect multicultural students on harsh Baffin Island (Canada). Discusses the creation of the class's database and future implications. (AEF)
USDA dietary supplement ingredient database, release 2
USDA-ARS?s Scientific Manuscript database
The Nutrient Data Laboratory (NDL),Beltsville Human Nutrition Research Center (BHNRC), Agricultural Research Service (ARS), USDA, in collaboration with the Office of Dietary Supplements, National Institutes of Health (ODS/NIH) and other federal agencies has developed a Dietary Supplement Ingredient ...
NASA Astrophysics Data System (ADS)
Haubt, R.
2016-06-01
This paper explores a Radical Collaborative Approach in the global and centralized Rock-Art Database (RADB) project to find new ways to look at rock-art by making information more accessible and more visible through public contributions. It looks at rock-art through the Key Performance Indicator (KPI) identified in the latest Australian State of the Environment Reports to help develop a better understanding of rock-art within a broader Cultural and Indigenous Heritage context. Using a practice-led approach, the project develops a conceptual collaborative model that is deployed within the RADB Management System. Exploring learning theory, human-based computation and participant motivation, the paper develops a procedure for deploying collaborative functions within the interface design of the RADB Management System. The paper presents the results of the collaborative model implementation and discusses considerations for the next iteration of the RADB Universe within an Agile Development Approach.
NASA Astrophysics Data System (ADS)
Deshpande, Ruchi; Thuptimdang, Wanwara; DeMarco, John; Liu, Brent J.
2014-03-01
We have built a decision support system that provides recommendations for customizing radiation therapy treatment plans, based on patient models generated from a database of retrospective planning data. This database consists of relevant metadata and information derived from the following DICOM objects - CT images, RT Structure Set, RT Dose and RT Plan. The usefulness and accuracy of such patient models partly depends on the sample size of the learning data set. Our current goal is to increase this sample size by expanding our decision support system into a collaborative framework to include contributions from multiple collaborators. Potential collaborators are often reluctant to upload even anonymized patient files to repositories outside their local organizational network in order to avoid any conflicts with HIPAA Privacy and Security Rules. We have circumvented this problem by developing a tool that can parse DICOM files on the client's side and extract de-identified numeric and text data from DICOM RT headers for uploading to a centralized system. As a result, the DICOM files containing PHI remain local to the client side. This is a novel workflow that results in adding only relevant yet valuable data from DICOM files to the centralized decision support knowledge base in such a way that the DICOM files never leave the contributor's local workstation in a cloud-based environment. Such a workflow serves to encourage clinicians to contribute data for research endeavors by ensuring protection of electronic patient data.
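A minimal sketch of the client-side step, assuming pydicom is available: the DICOM header is parsed locally, a whitelist of non-identifying fields is extracted, and only the resulting JSON would be uploaded. The field list and file name are illustrative; the authors' actual extraction rules are not reproduced here.

```python
# Illustrative client-side extraction of de-identified DICOM RT header values.
import json
import pydicom

# Keywords assumed safe to share (no protected health information).
SAFE_KEYWORDS = [
    "Modality", "SliceThickness", "DoseGridScaling",
    "DoseUnits", "NumberOfFractionsPlanned",
]

def extract_deidentified(path):
    ds = pydicom.dcmread(path, stop_before_pixels=True)  # read header only
    record = {}
    for keyword in SAFE_KEYWORDS:
        value = ds.get(keyword)        # None if the element is absent
        if value is not None:
            record[keyword] = str(value)
    return record

if __name__ == "__main__":
    # Placeholder path; point this at a local RT Dose or RT Plan file.
    payload = json.dumps(extract_deidentified("example_rtdose.dcm"), indent=2)
    print(payload)  # this JSON, not the DICOM file, would leave the workstation
```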
GPCALMA: A Tool For Mammography With A GRID-Connected Distributed Database
NASA Astrophysics Data System (ADS)
Bottigli, U.; Cerello, P.; Cheran, S.; Delogu, P.; Fantacci, M. E.; Fauci, F.; Golosio, B.; Lauria, A.; Lopez Torres, E.; Magro, R.; Masala, G. L.; Oliva, P.; Palmiero, R.; Raso, G.; Retico, A.; Stumbo, S.; Tangaro, S.
2003-09-01
The GPCALMA (Grid Platform for Computer Assisted Library for MAmmography) collaboration involves several departments of physics, INFN (National Institute of Nuclear Physics) sections, and Italian hospitals. The aim of this collaboration is to develop a tool that can help radiologists in the early detection of breast cancer. GPCALMA has built a large distributed database of digitised mammographic images (about 5500 images corresponding to 1650 patients) and developed a CAD (Computer Aided Detection) software which is integrated in a station that can also be used to acquire new images, serve as an archive and perform statistical analysis. The images (18×24 cm², digitised by a CCD linear scanner with an 85 μm pitch and 4096 gray levels) are completely described: pathological ones have a consistent characterization with the radiologist's diagnosis and histological data, while non-pathological ones correspond to patients with a follow-up of at least three years. The distributed database is realized through the connection of all the hospitals and research centers using GRID technology. In each hospital, local patients' digital images are stored in the local database. Using the GRID connection, GPCALMA will allow each node to work on distributed database data as well as local database data. Using its database, the GPCALMA tools perform several analyses. A texture analysis, i.e. an automated classification into adipose, dense or glandular texture, can be provided by the system. The GPCALMA software also allows classification of pathological features, in particular the analysis of massive lesions (both opacities and spiculated lesions) and of microcalcification clusters. The detection of pathological features is made using neural network software that provides a selection of areas showing a given "suspicion level" of lesion occurrence. The performance of the GPCALMA system will be presented in terms of ROC (Receiver Operating Characteristic) curves. The results of the GPCALMA system as a "second reader" will also be presented.
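Because the system's performance is summarised with ROC curves, the short sketch below shows how such a curve is typically computed from detector "suspicion levels" using scikit-learn; the labels and scores are synthetic, not GPCALMA output.

```python
# ROC curve from synthetic CAD suspicion scores (illustrative data only).
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

y_true = np.array([1, 1, 0, 1, 0, 0, 1, 0, 0, 1])          # 1 = lesion present
y_score = np.array([0.92, 0.71, 0.40, 0.85, 0.15, 0.55,
                    0.63, 0.30, 0.48, 0.77])               # detector suspicion level

fpr, tpr, thresholds = roc_curve(y_true, y_score)
print("AUC =", roc_auc_score(y_true, y_score))
for f, t, thr in zip(fpr, tpr, thresholds):
    print(f"threshold {thr:.2f}: sensitivity {t:.2f}, 1 - specificity {f:.2f}")
```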
The ALICE Glance Shift Accounting Management System (SAMS)
NASA Astrophysics Data System (ADS)
Martins Silva, H.; Abreu Da Silva, I.; Ronchetti, F.; Telesca, A.; Maidantchik, C.
2015-12-01
ALICE (A Large Ion Collider Experiment) is an experiment at the CERN LHC (Large Hadron Collider) studying the physics of strongly interacting matter and the quark-gluon plasma. Operating the experiment requires a 24-hours-a-day, 7-days-a-week shift crew at the experimental site, composed of ALICE collaboration members. Shift duties are calculated for each institute according to its affiliated members. In order to ensure full coverage of the experiment operation as well as its good quality, the ALICE Shift Accounting Management System (SAMS) is used to manage the shift bookings as well as the needed training. ALICE SAMS is the result of a joint effort between the Federal University of Rio de Janeiro (UFRJ) and the ALICE Collaboration. The Glance technology, developed by the UFRJ and the ATLAS experiment, sits at the basis of the system as an intermediate layer isolating the particularities of the databases. In this paper, we describe the ALICE SAMS development process and functionalities. The database has been modelled according to the collaboration's needs and is fully integrated with the ALICE Collaboration repository to access member information and the corresponding roles and activities. Run, period and training coordinators can manage their subsystem operation and ensure efficient personnel management. Members of the ALICE collaboration can book shifts and on-call duties according to pre-defined rights. ALICE SAMS features a user profile containing all the statistics and user contact information, as well as an institute profile. Both the user and institute profiles are public (within the scope of the collaboration) and show the credit balance in real time. A shift calendar allows the Run Coordinator to plan data-taking periods in terms of which subsystem shifts are enabled or disabled and of the responsible on-call people and slots. An overview display presents the shift crew present in the control room and allows the Run Coordination team to confirm the presence of both regular and trainee shift personnel, which is necessary for credit accounting.
The Sequenced Angiosperm Genomes and Genome Databases
Chen, Fei; Dong, Wei; Zhang, Jiawei; Guo, Xinyue; Chen, Junhao; Wang, Zhengjia; Lin, Zhenguo; Tang, Haibao; Zhang, Liangsheng
2018-01-01
Angiosperms, the flowering plants, provide the essential resources for human life, such as food, energy, oxygen, and materials. They also promoted the evolution of humans, animals, and the planet Earth. Despite the numerous advances in genome reports and sequencing technologies, no review has covered all the released angiosperm genomes and the genome databases available for data sharing. Based on the rapid advances and innovations in database reconstruction in the last few years, here we provide a comprehensive review of three major types of angiosperm genome databases, including databases for a single species, for a specific angiosperm clade, and for multiple angiosperm species. The scope, tools, and data of each type of database and their features are concisely discussed. Genome databases for a single species or a clade of species are especially popular with specific groups of researchers, while a regularly updated comprehensive database is more powerful for addressing major scientific questions at the genome scale. Considering the low coverage of flowering plants in any available database, we propose the construction of a comprehensive database to facilitate large-scale comparative studies of angiosperm genomes and to promote collaborative studies of important questions in plant biology. PMID:29706973
Greenlee, Dave
2007-01-01
A week after Hurricane Katrina made landfall in Louisiana, a collaboration among multiple organizations began building a database called the Geographic Information System for the Gulf, shortened to "GIS for the Gulf," to support the geospatial data needs of people in the hurricane-affected area. Data were gathered from diverse sources and entered into a consistent and standardized data model in a manner that is Web accessible.
ERIC Educational Resources Information Center
Paul, Karin; Kuhlthau, Carol C.; Branch, Jennifer L.; Solowan, Diane Galloway; Case, Roland; Abilock, Debbie; Eisenberg, Michael B.; Koechlin, Carol; Zwaan, Sandi; Hughes, Sandra; Low, Ann; Litch, Margaret; Lowry, Cindy; Irvine, Linda; Stimson, Margaret; Schlarb, Irene; Wilson, Janet; Warriner, Emily; Parsons, Les; Luongo-Orlando, Katherine; Hamilton, Donald
2003-01-01
Includes 19 articles that address issues related to library skills and Canadian school libraries. Topics include information literacy; inquiry learning; critical thinking and electronic research; collaborative inquiry; information skills and the Big 6 approach to problem solving; student use of online databases; library skills; Internet accuracy;…
De Groote, Sandra L; Shultz, Mary; Blecic, Deborah D
2014-07-01
The research assesses the information-seeking behaviors of health sciences faculty, including their use of online databases, journals, and social media. A survey was designed and distributed via email to 754 health sciences faculty at a large urban research university with 6 health sciences colleges. Twenty-six percent (198) of faculty responded. MEDLINE was the primary database utilized, with 78.5% of respondents indicating that they use the database at least once a week. Compared to MEDLINE, Google was utilized more often on a daily basis. Other databases showed much lower usage. Low use of online databases other than MEDLINE, of link-out tools to online journals, and of online social media and collaboration tools demonstrates a need for meaningful promotion of online resources and informatics literacy instruction for faculty. Library resources are plentiful and perhaps somewhat overwhelming. Librarians need to help faculty discover and utilize the resources and tools that libraries have to offer.
Information resources at the National Center for Biotechnology Information.
Woodsmall, R M; Benson, D A
1993-01-01
The National Center for Biotechnology Information (NCBI), part of the National Library of Medicine, was established in 1988 to perform basic research in the field of computational molecular biology as well as build and distribute molecular biology databases. The basic research has led to new algorithms and analysis tools for interpreting genomic data and has been instrumental in the discovery of human disease genes for neurofibromatosis and Kallmann syndrome. The principal database responsibility is the National Institutes of Health (NIH) genetic sequence database, GenBank. NCBI, in collaboration with international partners, builds, distributes, and provides online and CD-ROM access to over 112,000 DNA sequences. Another major program is the integration of multiple sequences databases and related bibliographic information and the development of network-based retrieval systems for Internet access. PMID:8374583
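Programmatic access to GenBank has evolved considerably since this 1993 article; as one present-day illustration, the sketch below retrieves a GenBank record through the NCBI E-utilities using Biopython. The accession number is only an example.

```python
# Fetch a GenBank record via NCBI E-utilities (Biopython); accession is an example.
from Bio import Entrez, SeqIO

Entrez.email = "you@example.org"  # NCBI asks for a contact address

handle = Entrez.efetch(db="nucleotide", id="NM_000546", rettype="gb", retmode="text")
record = SeqIO.read(handle, "genbank")
handle.close()

print(record.id, len(record.seq), "bp")
print(record.description)
```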
Botto, Lorenzo D.; Robert-Gnansia, Elisabeth; Siffel, Csaba; Harris, John; Borman, Barry; Mastroiacovo, Pierpaolo
2006-01-01
The International Clearinghouse for Birth Defects Surveillance and Research, formerly known as the International Clearinghouse for Birth Defects Monitoring Systems, consists of 40 registries worldwide that collaborate in monitoring 40 types of birth defects. Clearinghouse activities include the sharing and joint monitoring of birth defect data, epidemiologic and public health research, and capacity building, with the goal of reducing disease and promoting healthy birth outcomes through primary prevention. We discuss 3 of these activities: the collaborative assessment of the potential teratogenicity of first-trimester use of medications (the MADRE project), an example of the intersection of surveillance and research; the international databases of people with orofacial clefts, an example of the evolution from surveillance to outcome research; and the study of genetic polymorphisms, an example of collaboration in public health genetics. PMID:16571708
Analysis of scientific collaboration in Chinese psychiatry research.
Wu, Ying; Jin, Xing
2016-05-26
In recent decades, China has changed profoundly, becoming the country with the world's second-largest economy. The proportion of the Chinese population suffering from mental disorders has grown in parallel with the rapid economic development, as social stresses have increased. The aim of this study is to shed light on the status of collaboration in the Chinese psychiatry field, on which there is currently limited research. We sampled 16,224 publications (2003-2012) from 10 core psychiatry journals in the Chinese National Knowledge Infrastructure (CNKI) and the WanFang Database. We used various social network analysis (SNA) methods, such as centrality analysis and core-periphery analysis, to study collaboration, and we also used hierarchical clustering analysis. From 2003 to 2012, collaboration increased at the level of authors, institutions and regions in the Chinese psychiatry field. Geographically, these collaborations were distributed unevenly. The 100 most prolific authors and institutions and 32 regions were used to construct the collaboration map, from which we detected the core author, institution and region. Collaborative behavior was affected by economic development. We should encourage collaborative behavior in the Chinese psychiatry field, as this facilitates knowledge distribution, resource sharing and information acquisition. Collaboration has also helped the field narrow its current research focus, providing further evidence to inform policymakers to fund research in order to tackle the increase in mental disorder facing modern China.
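The centrality and core-periphery measures used in studies like this can be sketched with networkx on a toy co-authorship network; the authors and edge weights below are invented for illustration.

```python
# Toy co-authorship network analysis with networkx (invented data).
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([      # weight = number of joint papers
    ("Author A", "Author B", 5),
    ("Author A", "Author C", 2),
    ("Author B", "Author C", 3),
    ("Author C", "Author D", 1),
    ("Author E", "Author F", 4),
])

degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G)
giant = max(nx.connected_components(G), key=len)   # the "giant component"

print("Degree centrality:", degree)
print("Betweenness centrality:", betweenness)
print("Giant component:", sorted(giant), f"({len(giant)}/{G.number_of_nodes()} authors)")
```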
A Proposed Collaborative Framework for Prefabricated Housing Construction Using RFID Technology
NASA Astrophysics Data System (ADS)
Charnwasununth, Phatsaphan; Yabuki, Nobuyoshi; Tongthong, Tanit
Despite the popularity of prefabricated housing construction in Thailand and many other countries, current practice suffers from a lack of collaboration, leading to undesirably low productivity and a number of mistakes on site. This research proposes a framework to raise the level of collaboration in order to improve productivity and reduce the occurrence of mistakes at sites. In this framework, an RFID system bridges the gap between the actual site situation and the design, and the proposed system can cope with unexpected construction conditions by generating suitable alternatives. This system is composed of PDAs, RFID readers, laptop PCs, and a desktop PC. Six main modules and a database system are implemented on the laptop PCs for recording actual site conditions, generating working alternatives, providing related information, and evaluating the work.
A scientometrics and social network analysis of Malaysian research in physics
NASA Astrophysics Data System (ADS)
Tan, H. X.; Ujum, E. A.; Ratnavelu, K.
2014-03-01
This conference proceeding presents an empirical assessment of the domestic publication output and the structure of scientific collaboration of Malaysian authors in the field of physics. Journal articles with Malaysian addresses for the subject area "Physics" and other sub-disciplines of physics were retrieved from the Thomson Reuters Web of Knowledge database spanning the years 1980 to 2011. A scientometrics and social network analysis of the Malaysian physics field was conducted to examine the publication growth and the distribution of domestic collaborative publications; the giant component; and the degree, closeness, and betweenness centralisation scores for the domestic co-authorship networks. Using these methods, we are able to gain insights into the evolution of collaboration and the scientometric dimensions of Malaysian research in physics over time.
An Object-Relational Ifc Storage Model Based on Oracle Database
NASA Astrophysics Data System (ADS)
Li, Hang; Liu, Hua; Liu, Yong; Wang, Yuan
2016-06-01
As building models become increasingly complicated, collaboration across professionals attracts more attention in the architecture, engineering and construction (AEC) industry. In order to adapt to this change, buildingSMART developed the Industry Foundation Classes (IFC) to facilitate interoperability between software platforms. However, IFC data are currently shared in the form of text files, which has drawbacks. In this paper, considering the object-based inheritance hierarchy of IFC and the storage features of different database management systems (DBMS), we propose a novel object-relational storage model that uses an Oracle database to store IFC data. Firstly, we establish the mapping rules between the data types in the IFC specification and those of the Oracle database. Secondly, we design the IFC database according to the relationships among IFC entities. Thirdly, we parse the IFC file and extract the IFC data. Lastly, we store the IFC data in the corresponding tables of the IFC database. In the experiments, three different building models are selected to demonstrate the effectiveness of our storage model. The comparison of experimental statistics confirms that no IFC data are lost during data exchange.
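A simplified sketch of the first step described above, mapping rules between IFC attribute types and Oracle-style column types and then generating one table per entity; the entity description and the type mapping are illustrative, not the paper's actual schema.

```python
# Illustrative IFC-to-Oracle mapping; types and attributes are simplified.
TYPE_MAP = {
    "STRING": "VARCHAR2(255)",
    "REAL": "NUMBER",
    "INTEGER": "NUMBER(10)",
    "BOOLEAN": "CHAR(1)",
    "ENTITY_REF": "NUMBER(10)",   # reference (foreign key) to another entity table
}

# A (simplified) description of one IFC entity and its attribute types.
ifc_wall = {
    "entity": "IfcWall",
    "attributes": [
        ("GlobalId", "STRING"),
        ("Name", "STRING"),
        ("ObjectPlacement", "ENTITY_REF"),
        ("OverallHeight", "REAL"),
    ],
}

def entity_to_ddl(entity):
    """Generate a CREATE TABLE statement for one IFC entity."""
    cols = ",\n  ".join(f"{name} {TYPE_MAP[type_]}" for name, type_ in entity["attributes"])
    return f"CREATE TABLE {entity['entity']} (\n  id NUMBER(10) PRIMARY KEY,\n  {cols}\n)"

print(entity_to_ddl(ifc_wall))
```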
Pruitt, Kim D.; Tatusova, Tatiana; Maglott, Donna R.
2005-01-01
The National Center for Biotechnology Information (NCBI) Reference Sequence (RefSeq) database (http://www.ncbi.nlm.nih.gov/RefSeq/) provides a non-redundant collection of sequences representing genomic data, transcripts and proteins. Although the goal is to provide a comprehensive dataset representing the complete sequence information for any given species, the database pragmatically includes sequence data that are currently publicly available in the archival databases. The database incorporates data from over 2400 organisms and includes over one million proteins representing significant taxonomic diversity spanning prokaryotes, eukaryotes and viruses. Nucleotide and protein sequences are explicitly linked, and the sequences are linked to other resources including the NCBI Map Viewer and Gene. Sequences are annotated to include coding regions, conserved domains, variation, references, names, database cross-references, and other features using a combined approach of collaboration and other input from the scientific community, automated annotation, propagation from GenBank and curation by NCBI staff. PMID:15608248
Collaboration in health technology assessment (EUnetHTA joint action, 2010-2012): four case studies.
Huić, Mirjana; Nachtnebel, Anna; Zechmeister, Ingrid; Pasternak, Iris; Wild, Claudia
2013-07-01
The aim of this study was to present the first four collaborative health technology assessment (HTA) processes on health technologies of different types and life cycles, targeted toward diverse HTA users, as well as the facilitators of and barriers to these collaborations. Retrospective analysis, through four case studies, was performed on the first four collaboration experiences of agencies participating in the EUnetHTA Joint Action project (2010-12), comprising different types and life cycles of health technologies for a diverse target audience, and different types of collaboration. The methods used to initiate collaboration, partner contributions, the assessment methodology, report structure, time frame, and factors acting as possible barriers to and facilitators of this collaboration were described. Two ways were used to initiate collaboration in the first four collaborative HTA processes: active brokering of information, so-called "calls for collaboration," and individual contact between agencies after identifying a topic common to two agencies in the Planned and Ongoing Projects database. Several success factors are recognized: predefined project management; a high degree of commitment to the project; adherence to timelines; high relevance of the technology; a common understanding of the methods applied and advanced experience in HTA; and, finally, acceptance of English-language reports by decision makers in non-English-speaking countries. Barriers such as late identification of collaborative partners, nonacceptance of English-language reports and differences in assessment methodology should be overcome. Timely and efficient collaborative HTA processes on the relative efficacy/effectiveness and safety of different types and life cycles of health technologies, targeted toward diverse HTA users in Europe, are possible. There are still barriers to overcome.
Protein Information Resource: a community resource for expert annotation of protein data
Barker, Winona C.; Garavelli, John S.; Hou, Zhenglin; Huang, Hongzhan; Ledley, Robert S.; McGarvey, Peter B.; Mewes, Hans-Werner; Orcutt, Bruce C.; Pfeiffer, Friedhelm; Tsugita, Akira; Vinayaka, C. R.; Xiao, Chunlin; Yeh, Lai-Su L.; Wu, Cathy
2001-01-01
The Protein Information Resource, in collaboration with the Munich Information Center for Protein Sequences (MIPS) and the Japan International Protein Information Database (JIPID), produces the most comprehensive and expertly annotated protein sequence database in the public domain, the PIR-International Protein Sequence Database. To provide timely and high quality annotation and promote database interoperability, the PIR-International employs rule-based and classification-driven procedures based on controlled vocabulary and standard nomenclature and includes status tags to distinguish experimentally determined from predicted protein features. The database contains about 200 000 non-redundant protein sequences, which are classified into families and superfamilies and their domains and motifs identified. Entries are extensively cross-referenced to other sequence, classification, genome, structure and activity databases. The PIR web site features search engines that use sequence similarity and database annotation to facilitate the analysis and functional identification of proteins. The PIR-International databases and search tools are accessible on the PIR web site at http://pir.georgetown.edu/ and at the MIPS web site at http://www.mips.biochem.mpg.de. The PIR-International Protein Sequence Database and other files are also available by FTP. PMID:11125041
Research for and by Practitioners.
ERIC Educational Resources Information Center
Templin, Thomas J.; And Others
1992-01-01
Seven articles discuss research by and for practitioners. The topics include demystification of research for practitioners, experiences with helping teacher researchers, an application of a collaborative action research model, one health practitioner's experience, creating a dance research database, basic data analysis for nonresearchers, and why…
AIR QUALITY FORECAST DATABASE AND ANALYSIS
In 2003, NOAA and EPA signed a Memorandum of Agreement to collaborate on the design and implementation of a capability to produce daily air quality modeling forecast information for the U.S. NOAA's ETA meteorological model and EPA's Community Multiscale Air Quality (CMAQ) model ...
Savige, Judy; Dagher, Hayat; Povey, Sue
2014-07-01
This study examined whether gene-specific DNA variant databases for inherited diseases of the kidney fulfilled the Human Variome Project recommendations of being complete, accurate, clinically relevant and freely available. A recent review identified 60 inherited renal diseases caused by mutations in 132 genes. The disease name, MIM number, gene name, together with "mutation" or "database," were used to identify web-based databases. Fifty-nine diseases (98%) due to mutations in 128 genes had a variant database. Altogether there were 349 databases (a median of 3 per gene, range 0-6), but no gene had two databases with the same number of variants, and 165 (50%) databases included fewer than 10 variants. About half the databases (180, 54%) had been updated in the previous year. Few (77, 23%) were curated by "experts" but these included nine of the 11 with the most variants. Even fewer databases (41, 12%) included clinical features apart from the name of the associated disease. Most (223, 67%) could be accessed without charge, including those for 50 genes (40%) with the maximum number of variants. Future efforts should focus on encouraging experts to collaborate on a single database for each gene affected in inherited renal disease, including both unpublished variants, and clinical phenotypes. © 2014 WILEY PERIODICALS, INC.
Creating collaborative learning environments for transforming primary care practices now.
Miller, William L; Cohen-Katz, Joanne
2010-12-01
The renewal of primary care waits just ahead. The patient-centered medical home (PCMH) movement and a refreshing breeze of collaboration signal its arrival with demonstration projects and pilots appearing across the country. An early message from this work suggests that the development of collaborative, cross-disciplinary teams may be essential for the success of the PCMH. Our focus in this article is on training existing health care professionals toward being thriving members of this transformed clinical care team in a relationship-centered PCMH. Our description of the optimal conditions for collaborative training begins with delineating three types of teams and how they relate to levels of collaboration. We then describe how to create a supportive, safe learning environment for this type of training, using a different model of professional socialization, and tools for building culture. Critical skills related to practice development and the cross-disciplinary collaborative processes are also included. Despite significant obstacles in readying current clinicians to be members of thriving collaborative teams, a few next steps toward implementing collaborative training programs for existing professionals are possible using competency-based and adult learning approaches. Grasping the long-awaited arrival of collaborative primary health care will also require delivery system and payment reform. Until that happens, there is an abundance of work to be done envisioning new collaborative training programs and initiating a nationwide effort to motivate and reeducate our colleagues. PsycINFO Database Record (c) 2010 APA, all rights reserved.
Durack, Jeremy C.; Chao, Chih-Chien; Stevenson, Derek; Andriole, Katherine P.; Dev, Parvati
2002-01-01
Medical media collections are growing at a pace that exceeds the value they currently provide as research and educational resources. To address this issue, the Stanford MediaServer was designed to promote innovative multimedia-based application development. The nucleus of the MediaServer platform is a digital media database strategically designed to meet the information needs of many biomedical disciplines. Key features include an intuitive web-based interface for collaboratively populating the media database, flexible creation of media collections for diverse and specialized purposes, and the ability to construct a variety of end-user applications from the same database to support biomedical education and research. PMID:12463820
RNAcentral: A comprehensive database of non-coding RNA sequences
Williams, Kelly Porter; Lau, Britney Yan
2016-10-28
RNAcentral is a database of non-coding RNA (ncRNA) sequences that aggregates data from specialised ncRNA resources and provides a single entry point for accessing ncRNA sequences of all ncRNA types from all organisms. Since its launch in 2014, RNAcentral has integrated twelve new resources, taking the total number of collaborating databases to 22, and began importing new types of data, such as modified nucleotides from MODOMICS and PDB. We created new species-specific identifiers that refer to unique RNA sequences within the context of a single species. Furthermore, the website has been subject to continuous improvements focusing on text and sequence similarity searches as well as genome browsing functionality.
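As a small usage illustration, the sketch below fetches one entry from RNAcentral's public REST API with requests; the endpoint layout and the response field names are assumptions about the web API and may differ between releases.

```python
# Fetch one RNAcentral entry over HTTP; endpoint and fields are assumptions.
import requests

rnacentral_id = "URS0000000001"  # an example RNAcentral accession
url = f"https://rnacentral.org/api/v1/rna/{rnacentral_id}"  # assumed endpoint layout

response = requests.get(url, headers={"Accept": "application/json"}, timeout=30)
response.raise_for_status()
entry = response.json()

# Field names are guarded with .get() since they are assumptions about the payload.
for key in ("rnacentral_id", "length", "md5"):
    print(key, "=", entry.get(key))
```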
NASA Astrophysics Data System (ADS)
Bowring, J. F.; McLean, N. M.; Walker, J. D.; Gehrels, G. E.; Rubin, K. H.; Dutton, A.; Bowring, S. A.; Rioux, M. E.
2015-12-01
The Cyber Infrastructure Research and Development Lab for the Earth Sciences (CIRDLES.org) has worked collaboratively for the last decade with geochronologists from EARTHTIME and EarthChem to build cyberinfrastructure geared to ensuring transparency and reproducibility in geoscience workflows and is engaged in refining and extending that work to serve additional geochronology domains during the next decade. ET_Redux (formerly U-Pb_Redux) is a free open-source software system that provides end-to-end support for the analysis of U-Pb geochronological data. The system reduces raw mass spectrometer (TIMS and LA-ICPMS) data to U-Pb dates, allows users to interpret ages from these data, and then facilitates the seamless federation of the results from one or more labs into a community web-accessible database using standard and open techniques. This EarthChem database - GeoChron.org - depends on keyed references to the System for Earth Sample Registration (SESAR) database that stores metadata about registered samples. These keys are each a unique International Geo Sample Number (IGSN) assigned to a sample and to its derivatives. ET_Redux provides for interaction with this archive, allowing analysts to store, maintain, retrieve, and share their data and analytical results electronically with whomever they choose. This initiative has created an open standard for the data elements of a complete reduction and analysis of U-Pb data, and is currently working to complete the same for U-series geochronology. We have demonstrated the utility of interdisciplinary collaboration between computer scientists and geoscientists in achieving a working and useful system that provides transparency and supports reproducibility, allowing geochemists to focus on their specialties. The software engineering community also benefits by acquiring research opportunities to improve development process methodologies used in the design, implementation, and sustainability of domain-specific software.
Formal ontology for natural language processing and the integration of biomedical databases.
Simon, Jonathan; Dos Santos, Mariana; Fielding, James; Smith, Barry
2006-01-01
The central hypothesis underlying this communication is that the methodology and conceptual rigor of a philosophically inspired formal ontology can bring significant benefits in the development and maintenance of application ontologies [A. Flett, M. Dos Santos, W. Ceusters, Some Ontology Engineering Procedures and their Supporting Technologies, EKAW2002, 2003]. This hypothesis has been tested in the collaboration between Language and Computing (L&C), a company specializing in software for supporting natural language processing especially in the medical field, and the Institute for Formal Ontology and Medical Information Science (IFOMIS), an academic research institution concerned with the theoretical foundations of ontology. In the course of this collaboration L&C's ontology, LinKBase, which is designed to integrate and support reasoning across a plurality of external databases, has been subjected to a thorough auditing on the basis of the principles underlying IFOMIS's Basic Formal Ontology (BFO) [B. Smith, Basic Formal Ontology, 2002. http://ontology.buffalo.edu/bfo]. The goal is to transform a large terminology-based ontology into one with the ability to support reasoning applications. Our general procedure has been the implementation of a meta-ontological definition space in which the definitions of all the concepts and relations in LinKBase are standardized in the framework of first-order logic. In this paper we describe how this principles-based standardization has led to a greater degree of internal coherence of the LinKBase structure, and how it has facilitated the construction of mappings between external databases using LinKBase as translation hub. We argue that the collaboration here described represents a new phase in the quest to solve the so-called "Tower of Babel" problem of ontology integration [F. Montayne, J. Flanagan, Formal Ontology: The Foundation for Natural Language Processing, 2003. http://www.landcglobal.com/].
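As an illustration of the kind of meta-ontological standardization described, a definitional axiom in first-order logic might take the temporally indexed form below; this is our own illustrative example, not a definition drawn from LinKBase or the BFO documentation.

    \forall x \,\forall t \,\bigl( \mathrm{inst}(x, \mathrm{Heart}, t) \rightarrow \exists y \,( \mathrm{inst}(y, \mathrm{Organism}, t) \wedge \mathrm{part\_of}(x, y, t) ) \bigr)

Standardizing every concept and relation in such a first-order framework is what allows a terminology-based ontology to serve as a translation hub between external databases.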
NASA Astrophysics Data System (ADS)
Steigies, C. T.
2015-12-01
Since the International Geophysical Year (IGY) in 1957-58, cosmic rays have been routinely measured by many ground-based Neutron Monitors (NM) around the world. The World Data Center for Cosmic Rays (WDCCR) was established as a part of this activity and provides a database of cosmic-ray neutron observations in unified formats. However, that standard data comprises only one-hour averages, whereas most NM stations were enhanced at the end of the 20th century to provide data in one-minute resolution or even better. These data were only available on the websites of the institutes operating the stations, and every station invented its own data format for the high-resolution measurements. There were some efforts to collect data from several stations and to make the data available on FTP servers, but none of these efforts could provide real-time data for all stations. The EU FP7 project NMDB (real-time database for high-resolution Neutron Monitor measurements, http://nmdb.eu) was funded by the European Commission, and a new database was set up by several Neutron Monitor stations in Europe and Asia to store high-resolution data and to provide access to the data in real time (i.e. with less than five minutes delay). By storing the measurements in a database, a standard format for the high-resolution measurements is enforced. This database is complementary to the WDCCR, as it does not (yet) provide all historical data, but the creation of this effort has spurred new collaboration between Neutron Monitor scientists worldwide, (new) stations have gone online (again), new projects are building on the results of NMDB, and new users outside of the cosmic ray community are starting to use NM data for new applications such as soil moisture measurements using cosmic rays. These applications are facilitated by easy access to the data through the http://nest.nmdb.eu interface, which offers access to all NMDB data for all users.
The Astrobiology Habitable Environments Database (AHED)
NASA Astrophysics Data System (ADS)
Lafuente, B.; Stone, N.; Downs, R. T.; Blake, D. F.; Bristow, T.; Fonda, M.; Pires, A.
2015-12-01
The Astrobiology Habitable Environments Database (AHED) is a central, high-quality, long-term searchable repository for archiving and collaborative sharing of astrobiologically relevant data, including morphological, textural and contextual images, and chemical, biochemical, isotopic, sequencing, and mineralogical information. The aim of AHED is to foster long-term innovative research by supporting integration and analysis of diverse datasets in order to: 1) help understand and interpret planetary geology; 2) identify and characterize habitable environments and pre-biotic/biotic processes; 3) interpret returned data from present and past missions; 4) provide a citable database of NASA-funded published and unpublished data (after an agreed-upon embargo period). AHED uses the online open-source software "The Open Data Repository's Data Publisher" (ODR - http://www.opendatarepository.org) [1], which provides a user-friendly interface that research teams or individual scientists can use to design, populate and manage their own database according to the characteristics of their data and the need to share data with collaborators or the broader scientific community. This platform can also be used as a laboratory notebook. The database will have the capability to import and export in a variety of standard formats. Advanced graphics will be implemented, including 3D graphing, multi-axis graphs, error bars, and similar scientific data functions, together with advanced online tools for data analysis (e.g., the statistical package R). A permissions system will be put in place so that as data are being actively collected and interpreted, they will remain proprietary. A citation system will allow research data to be used and appropriately referenced by other researchers after the data are made public. This project is supported by the Science-Enabling Research Activity (SERA) and NASA NNX11AP82A, Mars Science Laboratory Investigations. [1] Nate et al. (2015) AGU, submitted.
PREPping Students for Authentic Science
ERIC Educational Resources Information Center
Dolan, Erin L.; Lally, David J.; Brooks, Eric; Tax, Frans E.
2008-01-01
In this article, the authors describe a large-scale research collaboration, the Partnership for Research and Education in Plants (PREP), which has capitalized on publicly available databases that contain massive amounts of biological information; stock centers that house and distribute inexpensive organisms with different genotypes; and the…
ERIC Educational Resources Information Center
DiConsiglio, John
2012-01-01
In this article, the author talks about the significance of the collaboration between alumni relations and student affairs offices in overcoming misinformation and silos. Each has something the other wants. For the alumni office, student affairs offers a treasure trove of resources. They have databases with contact information, affinity-based…
Medical informatics and telemedicine: A vision
NASA Technical Reports Server (NTRS)
Clemmer, Terry P.
1991-01-01
The goal of medical informatics is to improve care. This requires the commitment and harmonious collaboration between the computer scientists and clinicians and an integrated database. The vision described is how medical information systems are going to impact the way medical care is delivered in the future.
Secure web book to store structural genomics research data.
Manjasetty, Babu A; Höppner, Klaus; Mueller, Uwe; Heinemann, Udo
2003-01-01
Recently established collaborative structural genomics programs aim at significantly accelerating the crystal structure analysis of proteins. These large-scale projects require efficient data management systems to ensure seamless collaboration between different groups of scientists working towards the same goal. Within the Berlin-based Protein Structure Factory, the synchrotron X-ray data collection and the subsequent crystal structure analysis tasks are located at BESSY, a third-generation synchrotron source. To organize file-based communication and data transfer at the BESSY site of the Protein Structure Factory, we have developed the web-based BCLIMS, the BESSY Crystallography Laboratory Information Management System. BCLIMS is a relational data management system which is powered by MySQL as the database engine and Apache HTTP as the web server. The database interface routines are written in the Python programming language. The software is freely available to academic users. Here we describe the storage, retrieval and manipulation of laboratory information, mainly pertaining to the synchrotron X-ray diffraction experiments and the subsequent protein structure analysis, using BCLIMS.
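As a rough illustration of the kind of Python database interface routine described, the sketch below records one synchrotron data collection run; the table and column names are hypothetical, and sqlite3 is used only to keep the example self-contained (BCLIMS itself is backed by MySQL behind an Apache web server).

    # Hypothetical BCLIMS-style interface routine; schema names are illustrative only.
    import sqlite3
    from datetime import datetime

    conn = sqlite3.connect("bclims_demo.db")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS xray_dataset (
            id INTEGER PRIMARY KEY,
            project TEXT,          -- e.g. a target or project code
            beamline TEXT,         -- BESSY beamline identifier
            wavelength_a REAL,     -- X-ray wavelength in Angstrom
            resolution_a REAL,     -- high-resolution limit in Angstrom
            collected_at TEXT
        )
    """)

    def store_dataset(project, beamline, wavelength_a, resolution_a):
        """Record one X-ray data collection run with a timestamp."""
        conn.execute(
            "INSERT INTO xray_dataset (project, beamline, wavelength_a, resolution_a, collected_at) "
            "VALUES (?, ?, ?, ?, ?)",
            (project, beamline, wavelength_a, resolution_a, datetime.now().isoformat()),
        )
        conn.commit()

    store_dataset("PSF-0123", "BL14.1", 0.9184, 1.8)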
Schutt, Michelle A; Hightower, Barbara
2009-02-01
The American Association of Colleges of Nursing advocates that professional nurses have the information literacy skills essential for evidence-based practice. As nursing schools embrace evidence-based models to prepare students for nursing careers, faculty can collaborate with librarians to create engaging learning activities focused on the development of information literacy skills. Instructional technology tools such as course management systems, virtual classrooms, and online tutorials provide opportunities to reach students outside the traditional campus classroom. This article discusses the collaborative process between faculty and a library instruction coordinator and strategies used to create literacy learning activities focused on the development of basic database search skills for a Computers in Nursing course. The activities and an online tutorial were included in a library database module incorporated into WebCT. In addition, synchronous classroom meeting software was used by the librarian to reach students in the distance learning environment. Recommendations for module modifications and faculty, librarian, and student evaluations are offered.
C-ME: A 3D Community-Based, Real-Time Collaboration Tool for Scientific Research and Training
Kolatkar, Anand; Kennedy, Kevin; Halabuk, Dan; Kunken, Josh; Marrinucci, Dena; Bethel, Kelly; Guzman, Rodney; Huckaby, Tim; Kuhn, Peter
2008-01-01
The need for effective collaboration tools is growing as multidisciplinary proteome-wide projects and distributed research teams become more common. The resulting data is often quite disparate, stored in separate locations, and not contextually related. Collaborative Molecular Modeling Environment (C-ME) is an interactive community-based collaboration system that allows researchers to organize information, visualize data on a two-dimensional (2-D) or three-dimensional (3-D) basis, and share and manage that information with collaborators in real time. C-ME stores the information in industry-standard databases that are immediately accessible by appropriate permission within the computer network directory service or anonymously across the internet through the C-ME application or through a web browser. The system addresses two important aspects of collaboration: context and information management. C-ME allows a researcher to use a 3-D atomic structure model or a 2-D image as a contextual basis on which to attach and share annotations to specific atoms or molecules or to specific regions of a 2-D image. These annotations provide additional information about the atomic structure or image data that can then be evaluated, amended or added to by other project members. PMID:18286178
Evidence for the impact of quality improvement collaboratives: systematic review
2008-01-01
Objective To evaluate the effectiveness of quality improvement collaboratives in improving the quality of care. Data sources Relevant studies through Medline, Embase, PsycINFO, CINAHL, and Cochrane databases. Study selection Two reviewers independently extracted data on topics, participants, setting, study design, and outcomes. Data synthesis Of 1104 articles identified, 72 were included in the study. Twelve reports representing nine studies (including two randomised controlled trials) used a controlled design to measure the effects of the quality improvement collaborative intervention on care processes or outcomes of care. Systematic review of these nine studies showed moderate positive results. Seven studies (including one randomised controlled trial) reported an effect on some of the selected outcome measures. Two studies (including one randomised controlled trial) did not show any significant effect. Conclusions The evidence underlying quality improvement collaboratives is positive but limited and the effects cannot be predicted with great certainty. Considering that quality improvement collaboratives seem to play a key part in current strategies focused on accelerating improvement, but may have only modest effects on outcomes at best, further knowledge of the basic components effectiveness, cost effectiveness, and success factors is crucial to determine the value of quality improvement collaboratives. PMID:18577559
Wang, Yanli; Bryant, Stephen H.; Cheng, Tiejun; Wang, Jiyao; Gindulyte, Asta; Shoemaker, Benjamin A.; Thiessen, Paul A.; He, Siqian; Zhang, Jian
2017-01-01
PubChem's BioAssay database (https://pubchem.ncbi.nlm.nih.gov) has served as a public repository for small-molecule and RNAi screening data since 2004, providing open access to its data content to the community. PubChem accepts data submissions from researchers worldwide in academia, industry and government agencies. PubChem also collaborates with other chemical biology database stakeholders through data exchange. With over a decade's development effort, it has become an important information resource supporting drug discovery and chemical biology research. To facilitate data discovery, PubChem is integrated with all other databases at NCBI. In this work, we provide an update for the PubChem BioAssay database describing several recent developments, including added sources of research data, a redesigned BioAssay record page, a new BioAssay classification browser and new features in the Upload system facilitating data sharing. PMID:27899599
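The BioAssay data described here can also be retrieved programmatically. The sketch below assumes a PUG REST style URL; the exact path and the layout of the returned JSON are assumptions that should be verified against PubChem's current documentation.

    # Hedged sketch: retrieving a BioAssay description via an assumed PUG REST URL pattern.
    import json
    import urllib.request

    def get_assay_description(aid):
        url = f"https://pubchem.ncbi.nlm.nih.gov/rest/pug/assay/aid/{aid}/description/JSON"
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)

    desc = get_assay_description(1000)          # 1000 is an arbitrary example assay identifier
    print(json.dumps(desc, indent=2)[:400])     # show the start of the returned record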
The Brain Database: A Multimedia Neuroscience Database for Research and Teaching
Wertheim, Steven L.
1989-01-01
The Brain Database is an information tool designed to aid in the integration of clinical and research results in neuroanatomy and regional biochemistry. It can handle a wide range of data types including natural images, 2 and 3-dimensional graphics, video, numeric data and text. It is organized around three main entities: structures, substances and processes. The database will support a wide variety of graphical interfaces. Two sample interfaces have been made. This tool is intended to serve as one component of a system that would allow neuroscientists and clinicians 1) to represent clinical and experimental data within a common framework 2) to compare results precisely between experiments and among laboratories, 3) to use computing tools as an aid in collaborative work and 4) to contribute to a shared and accessible body of knowledge about the nervous system.
Sharing and executing linked data queries in a collaborative environment.
García Godoy, María Jesús; López-Camacho, Esteban; Navas-Delgado, Ismael; Aldana-Montes, José F
2013-07-01
Life Sciences have emerged as a key domain in the Linked Data community because of the diversity of data semantics and formats available through a great variety of databases and web technologies. Thus, they have served as an ideal domain for applications in the web of data. Unfortunately, bioinformaticians are not exploiting the full potential of this already available technology, and experts in Life Sciences have real problems discovering, understanding and devising how to take advantage of these interlinked (integrated) data. In this article, we present Bioqueries, a wiki-based portal that is aimed at community building around biological Linked Data. This tool has been designed to aid bioinformaticians in developing SPARQL queries to access biological databases exposed as Linked Data, and also to help biologists gain a deeper insight into the potential use of this technology. This public space offers several services and a collaborative infrastructure to stimulate the consumption of biological Linked Data and, therefore, contribute to implementing the benefits of the web of data in this domain. Bioqueries currently contains 215 query entries grouped by database and theme, 230 registered users and 44 endpoints that contain biological Resource Description Framework information. The Bioqueries portal is freely accessible at http://bioqueries.uma.es. Supplementary data are available at Bioinformatics online.
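To give a concrete flavour of the SPARQL queries the portal collects, the sketch below submits a simple query from Python; the endpoint URL and the query are illustrative examples chosen by us, not entries taken from Bioqueries.

    # Illustrative sketch: submitting a SPARQL query to a Life Sciences endpoint.
    import json
    import urllib.parse
    import urllib.request

    ENDPOINT = "https://sparql.uniprot.org/sparql"  # assumed public endpoint
    QUERY = """
    PREFIX up: <http://purl.uniprot.org/core/>
    SELECT ?protein WHERE { ?protein a up:Protein . } LIMIT 5
    """

    params = urllib.parse.urlencode({"query": QUERY})
    req = urllib.request.Request(f"{ENDPOINT}?{params}",
                                 headers={"Accept": "application/sparql-results+json"})
    with urllib.request.urlopen(req) as resp:
        results = json.load(resp)

    for row in results["results"]["bindings"]:
        print(row["protein"]["value"])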
Moser, Richard P.; Hesse, Bradford W.; Shaikh, Abdul R.; Courtney, Paul; Morgan, Glen; Augustson, Erik; Kobrin, Sarah; Levin, Kerry; Helba, Cynthia; Garner, David; Dunn, Marsha; Coa, Kisha
2011-01-01
Scientists are taking advantage of the Internet and collaborative web technology to accelerate discovery in a massively connected, participative environment, a phenomenon referred to by some as Science 2.0. As a new way of doing science, this phenomenon has the potential to push science forward in a more efficient manner than was previously possible. The Grid-Enabled Measures (GEM) database has been conceptualized as an instantiation of Science 2.0 principles by the National Cancer Institute with two overarching goals: (1) Promote the use of standardized measures, which are tied to theoretically based constructs; and (2) Facilitate the ability to share harmonized data resulting from the use of standardized measures. This is done by creating an online venue connected to the Cancer Biomedical Informatics Grid (caBIG®) where a virtual community of researchers can collaborate and come to consensus on measures by rating, commenting and viewing metadata about the measures and associated constructs. This paper will describe the Web 2.0 principles on which the GEM database is based, describe its functionality, and discuss some of the important issues involved with creating the GEM database, such as the role of mutually agreed-on ontologies (i.e., knowledge categories and the relationships among these categories) for data sharing. PMID:21521586
A Conceptual Model and Database to Integrate Data and Project Management
NASA Astrophysics Data System (ADS)
Guarinello, M. L.; Edsall, R.; Helbling, J.; Evaldt, E.; Glenn, N. F.; Delparte, D.; Sheneman, L.; Schumaker, R.
2015-12-01
Data management is critically foundational to doing effective science in our data-intensive research era and, done well, can enhance collaboration, increase the value of research data, and support requirements by funding agencies to make scientific data and other research products available through publicly accessible online repositories. However, there are few examples (but see the Long-term Ecological Research Network Data Portal) of these data being provided in such a manner that allows exploration within the context of the research process: What specific research questions do these data seek to answer? What data were used to answer these questions? What data would have been helpful to answer these questions but were not available? We propose an agile conceptual model and database design, as well as example results, that integrate data management with project management not only to maximize the value of research data products but to enhance collaboration during the project and the process of project management itself. In our project, which we call 'Data Map,' we used agile principles by adopting a user-focused approach and by designing our database to be simple, responsive, and expandable. We initially designed Data Map for the Idaho EPSCoR project "Managing Idaho's Landscapes for Ecosystem Services (MILES)" (see https://www.idahoecosystems.org//) and will present example results for this work. We consulted with our primary users (project managers, data managers, and researchers) to design the Data Map. Results will be useful to project managers and to funding agencies reviewing progress because they will readily provide answers to the questions "For which research projects/questions are data available and/or being generated by MILES researchers?" and "Which research projects/questions are associated with each of the 3 primary questions from the MILES proposal?" To be responsive to the needs of the project, we chose to streamline our design for the prototype database and build it in a way that is modular and can be changed or expanded to meet user needs. Our hope is that others, especially those managing large collaborative research grants, will be able to use our project model and database design to enhance the value of their project and data management both during and following the active research period.
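A minimal relational sketch of the project/question/data linkage described above is given below; the table and column names, and the sample rows, are our own illustration and not the actual Data Map schema.

    # Illustrative-only schema linking research questions to data products, in the spirit of Data Map.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE research_question (
            id INTEGER PRIMARY KEY,
            text TEXT NOT NULL,
            parent_id INTEGER REFERENCES research_question(id)  -- ties sub-questions to primary questions
        );
        CREATE TABLE dataset (
            id INTEGER PRIMARY KEY,
            title TEXT NOT NULL,
            status TEXT CHECK (status IN ('available', 'being generated', 'needed but unavailable'))
        );
        CREATE TABLE question_dataset (
            question_id INTEGER REFERENCES research_question(id),
            dataset_id INTEGER REFERENCES dataset(id),
            PRIMARY KEY (question_id, dataset_id)
        );
    """)
    # Hypothetical rows, for illustration only.
    conn.execute("INSERT INTO research_question (id, text) VALUES (1, 'How do land-use changes alter ecosystem services?')")
    conn.execute("INSERT INTO dataset (id, title, status) VALUES (1, 'LiDAR canopy height, 2014', 'available')")
    conn.execute("INSERT INTO question_dataset VALUES (1, 1)")

    # "For which research questions are data available or being generated?"
    for text, title, status in conn.execute("""
            SELECT q.text, d.title, d.status
            FROM research_question q
            JOIN question_dataset qd ON qd.question_id = q.id
            JOIN dataset d ON d.id = qd.dataset_id"""):
        print(text, "->", title, f"({status})")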
Development, deployment and operations of ATLAS databases
NASA Astrophysics Data System (ADS)
Vaniachine, A. V.; Schmitt, J. G. v. d.
2008-07-01
In preparation for ATLAS data taking, a coordinated shift from development towards operations has occurred in ATLAS database activities. In addition to development and commissioning activities in databases, ATLAS is active in the development and deployment (in collaboration with the WLCG 3D project) of the tools that allow the worldwide distribution and installation of databases and related datasets, as well as the actual operation of this system on ATLAS multi-grid infrastructure. We describe development and commissioning of major ATLAS database applications for online and offline. We present the first scalability test results and ramp-up schedule over the initial LHC years of operations towards the nominal year of ATLAS running, when the database storage volumes are expected to reach 6.1 TB for the Tag DB and 1.0 TB for the Conditions DB. ATLAS database applications require robust operational infrastructure for data replication between online and offline at Tier-0, and for the distribution of the offline data to Tier-1 and Tier-2 computing centers. We describe ATLAS experience with Oracle Streams and other technologies for coordinated replication of databases in the framework of the WLCG 3D services.
Co-PylotDB - A Python-Based Single-Window User Interface for Transmitting Information to a Database
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnette, Daniel W.
2012-01-05
Co-PylotDB, written completely in Python, provides a user interface (UI) with which to select user and data file(s), directories, and file content, and provide or capture various other information for sending data collected from running any computer program to a pre-formatted database table for persistent storage. The interface allows the user to select input, output, make, source, executable, and qsub files. It also provides fields for specifying the machine name on which the software was run, capturing compile and execution lines, and listing relevant user comments. Data automatically captured by Co-PylotDB and sent to the database are user, current directory, local hostname, current date, and time of send. The UI provides fields for logging into a local or remote database server, specifying a database and a table, and sending the information to the selected database table. If a server is not available, the UI provides for saving the command that would have saved the information to a database table for either later submission or for sending via email to a collaborator who has access to the desired database.
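A hedged sketch of the automatic capture step described above follows; the table layout and helper function are hypothetical, and sqlite3 stands in for the pre-formatted MySQL table that Co-PylotDB actually targets.

    # Sketch of capturing run metadata in the manner Co-PylotDB describes.
    import getpass
    import os
    import socket
    import sqlite3
    from datetime import datetime

    def capture_run(comments, compile_line, execute_line):
        """Gather the automatically captured fields plus user-supplied ones."""
        now = datetime.now()
        return (getpass.getuser(), os.getcwd(), socket.gethostname(),
                now.date().isoformat(), now.time().isoformat(timespec="seconds"),
                comments, compile_line, execute_line)

    conn = sqlite3.connect("copylot_demo.db")
    conn.execute("""CREATE TABLE IF NOT EXISTS run_log
                    (user TEXT, cwd TEXT, hostname TEXT, run_date TEXT, run_time TEXT,
                     comments TEXT, compile_line TEXT, execute_line TEXT)""")
    conn.execute("INSERT INTO run_log VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
                 capture_run("baseline run", "gcc -O2 app.c", "./a.out input.dat"))
    conn.commit()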
Towards a framework for geospatial tangible user interfaces in collaborative urban planning
NASA Astrophysics Data System (ADS)
Maquil, Valérie; Leopold, Ulrich; De Sousa, Luís Moreira; Schwartz, Lou; Tobias, Eric
2018-04-01
The increasing complexity of urban planning projects today requires new approaches to better integrate stakeholders with different professional backgrounds throughout a city. Traditional tools used in urban planning are designed for experts and offer little opportunity for participation and collaborative design. This paper introduces the concept of geospatial tangible user interfaces (GTUI) and reports on the design and implementation as well as the usability of such a GTUI to support stakeholder participation in collaborative urban planning. The proposed system uses physical objects to interact with large digital maps and geospatial data projected onto a tabletop. It is implemented using a PostGIS database, a web map server providing OGC web services, the computer vision framework reacTIVision, a Java-based TUIO client, and GeoTools. We describe how a GTUI has been instantiated and evaluated within the scope of two case studies related to real-world collaborative urban planning scenarios. Our results confirm the feasibility of our proposed GTUI solutions to (a) instantiate different urban planning scenarios, (b) support collaboration, and (c) ensure acceptable usability.
Gorbunkova, Angelina; Pagni, Giorgio; Brizhak, Anna; Farronato, Giampietro; Rasperini, Giulio
2016-01-01
The aim of this review is to describe the most commonly observed changes in the periodontium caused by orthodontic treatment, in order to facilitate specialists' collaboration and communication. An electronic database search was carried out using the PubMed abstract and citation database, and bibliographic material was then used to find other appropriate sources. Soft and hard periodontal tissue changes during orthodontic treatment and maintenance of patients are discussed in order to provide an exhaustive picture of the possible interactions between these two interwoven disciplines. PMID:26904120
NASA-ONERA Collaboration on Human Factors in Aviation Accidents and Incidents
NASA Technical Reports Server (NTRS)
Srivastava, Ashok N.; Fabiani, Patrick
2012-01-01
This is the first annual report jointly prepared by NASA and ONERA on the work performed under the agreement to collaborate on a study of the human factors entailed in aviation accidents and incidents, particularly focused on the consequences of decreases in human performance associated with fatigue. The objective of this agreement is to generate reliable, automated procedures that improve understanding of the levels and characteristics of flight-crew fatigue factors whose confluence will likely result in unacceptable crew performance. This study entails the analyses of numerical and textual data collected during operational flights. NASA and ONERA are collaborating on the development and assessment of automated capabilities for extracting operationally significant information from very large, diverse (textual and numerical) databases; much larger than can be handled practically by human experts.
The USA National Phenology Network's Model for Collaborative Data Generation and Dissemination
NASA Astrophysics Data System (ADS)
Rosemartin, A.; Lincicome, A.; Denny, E. G.; Marsh, L.; Wilson, B. E.
2010-12-01
The USA National Phenology Network (USA-NPN) serves science and society by promoting a broad understanding of plant and animal phenology and the relationships among phenological patterns and all aspects of environmental change. The Network was founded as an NSF-funded Research Coordination Network for the purpose of fostering collaboration among scientists, policy-makers and the general public to address the challenges posed by global change and its impact on ecosystems and human health. With this mission in mind, the USA-NPN has developed an Information Management System (IMS) to facilitate collaboration and participatory data collection and digitization. The IMS includes components for data storage, such as the National Phenology Database, as well as a Drupal website for information-sharing and data visualization, and a Java application for collection of contemporary observational data. The National Phenology Database is designed to efficiently accommodate large quantities of phenology data and to be flexible to the changing needs of the network. The database allows for the collection, storage and output of phenology data from multiple sources (e.g., partner organizations, researchers and citizen observers), as well as integration with legacy data sets. Participants in the network can submit records (as Drupal content types) for publications, legacy data sets and phenology-related festivals. The USA-NPN’s contemporary phenology data collection effort, Nature’s Notebook, also draws on the contributions of participants. Citizen scientists around the country submit data through this Java application (paired with the Drupal site through a shared login) on the life cycle stages of plants and animals in their yards and parks. The North American Bird Phenology Program, now a part of the USA-NPN, also relies on web-based crowdsourcing. Participants in this program are transcribing 6 million scanned paper cards of migratory bird arrivals that were collected by observers across the United States from 1880 to 1970. The USA-NPN’s Information Management System represents a collaborative effort to collect, store, synthesize and output phenological data and information for plants, animals and the environment, and is poised to play a key role in understanding phenological response to environmental and climatic change at the local, regional and national scale.
Aquino, Maria Raisa Jessica Ryc V; Olander, Ellinor K; Needle, Justin J; Bryar, Rosamund M
2016-10-01
Interprofessional collaboration between midwives and health visitors working in maternal and child health services is widely encouraged. This systematic review aimed to identify existing and potential areas for collaboration between midwives and health visitors; explore the methods through which collaboration is and can be achieved; assess the effectiveness of this relationship between these groups, and ascertain whether the identified examples of collaboration are in line with clinical guidelines and policy. A narrative synthesis of qualitative and quantitative studies. Fourteen electronic databases, research mailing lists, recommendations from key authors and reference lists and citations of included papers. Papers were included if they explored one or a combination of: the areas of practice in which midwives and health visitors worked collaboratively; the methods that midwives and health visitors employed when communicating and collaborating with each other; the effectiveness of collaboration between midwives and health visitors; and whether collaborative practice between midwives and health visitors meet clinical guidelines. Papers were assessed for study quality. Eighteen papers (sixteen studies) met the inclusion criteria. The studies found that midwives and health visitors reported valuing interprofessional collaboration, however this was rare in practice. Findings show that collaboration could be useful across the service continuum, from antenatal care, transition of care/handover, to postnatal care. Evidence for the effectiveness of collaboration between these two groups was equivocal and based on self-reported data. In relation, multiple enablers and barriers to collaboration were identified. Communication was reportedly key to interprofessional collaboration. Interprofessional collaboration was valuable according to both midwives and health visitors, however, this was made challenging by several barriers such as poor communication, limited resources, and poor understanding of each other's role. Structural barriers such as physical distance also featured as a challenge to interprofessional collaboration. Although the findings are limited by variable methodological quality, these were consistent across time, geographical locations, and health settings, indicating transferability and reliability. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.
Collaborative research in the model spinal cord injury systems: process and outcomes.
Richards, J Scott
2002-01-01
To review the way in which collaborative research has been conducted under the National Institute on Disability and Rehabilitation Research (NIDRR)-funded Model Spinal Cord Injury Systems (MSCIS) Program, changes made in that process, and significant outcomes. A comparison of changes by NIDRR in the way collaborative research was competed and funded in the 1995 and 2001 competitions. A review of outcomes of the 1995 collaborative projects was based on queries to lead centers. Collaborative research through the model SCI systems has been conducted and continues to be conducted through two main venues: the National Spinal Cord Injury Statistical Center (NSCISC) database, which has provided data for a number of collaborative studies, and specifically funded proposals for collaborative research. In the 1995 competition for NIDRR funding, collaborative research proposals were submitted as part of the Model SCI Systems competitive applications. In the 2001 competition, collaborative research was parceled out and a separate competition held. There have been a number of publications stemming from the 1995 competition; some of the data from these projects are still being explored and used for manuscripts. The outcomes for the 2001 competition will not be known for several years. Collaborative research has the advantage of generating larger numbers more quickly than any one center can typically generate, producing a more broadly based sample and, therefore, more generalizable results, and facilitating the use of expertise not always available in a single center. Collaborative research activities have been among the most productive aspects of the Model Systems program; the change in the way this component is competed in the most recent competition is yet to be evaluated in terms of its efficacy compared with methods used for funding collaborative research in past competitions.
Varda, Danielle M.; Retrum, Jessica H.
2012-01-01
While the benefits of collaboration have become widely accepted and the practice of collaboration is growing within the public health system, a paucity of research exists that examines factors and mechanisms related to effective collaboration between public health agencies and their partner organizations. The purpose of this paper is to address this gap by exploring the structural and organizational characteristics of public health collaboratives. Design and Methods. Using both social network analysis and traditional statistical methods, we conduct an exploratory secondary data analysis of 11 public health collaboratives chosen from across the United States. All collaboratives are part of the PARTNER (www.partnertool.net) database. We analyze data to identify relational patterns by exploring the structure (the way that organizations connect and exchange relationships), in relation to perceptions of value and trust, explanations for varying reports of success, and factors related to outcomes. We describe the characteristics of the collaboratives, types of resource contributions, outcomes of the collaboratives, perceptions of success, and reasons for success. We found high variation and significant differences within and between these collaboratives, including perceptions of success. There were significant relationships among various factors such as resource contributions, reasons cited for success, and trust and value perceived by organizations. We find that although the unique structure of each collaborative makes it challenging to identify a specific set of factors to determine when a collaborative will be successful, the organizational characteristics and interorganizational dynamics do appear to impact outcomes. We recommend a quality improvement process that suggests matching assessment to goals and developing action steps for performance improvement. Acknowledgements: The authors would like to thank the Robert Wood Johnson Foundation’s Public Health Program for funding this research. PMID:25170462
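To give a sense of the network measures such an analysis involves, the sketch below computes two basic ones with networkx on a made-up collaborative; the organizations and ties are invented and are not PARTNER data.

    # Illustrative social network analysis of one hypothetical collaborative.
    import networkx as nx

    G = nx.Graph()
    # Edges represent reported working relationships between partner organizations (invented here).
    G.add_edges_from([
        ("Health Dept", "Hospital A"), ("Health Dept", "School District"),
        ("Health Dept", "Nonprofit X"), ("Hospital A", "Nonprofit X"),
    ])

    print("density:", nx.density(G))                       # how connected the collaborative is overall
    print("degree centrality:", nx.degree_centrality(G))   # which organizations hold the most ties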
Halonen, Jaana I; Atkins, Salla; Hakulinen, Hanna; Pesonen, Sanna; Uitti, Jukka
2017-01-05
Employees are major contributors to economic development, and occupational health services (OHS) can have an important role in supporting their health. Key to this is collaboration between employers and OHS. We reviewed the evidence regarding the characteristics of good collaboration between employers and OHS providers that is essential to construct more effective collaboration and services. A systematic review of the factors of good collaboration between employers and OHS providers was conducted. We searched five databases between January 2000 and March 2016 and back referenced included articles. Two reviewers evaluated 639 titles, 63 abstracts and 20 full articles, and agreed that six articles, all on qualitative studies, met the predetermined relevance and publication criteria and were included. Data were extracted by one reviewer and checked by a second reviewer and analysed using thematic analysis. Three themes and nine subthemes related to good collaboration were identified. The first theme included time, space and contract requirements for effective collaboration with three subthemes (i.e., key characteristics): flexible OHS/flexible contracts including tailor-made services accounting for the needs of the employer, geographical proximity of the stakeholders allowing easy access to services, and long-term contracts as collaboration develops over time. The second theme was related to characteristics of the dialogue in effective collaboration that consisted of shared goals, reciprocity, frequent contact and trust. According to the third theme the definition of roles of the stakeholders was important; OHS providers should have competence and knowledge about the workplace, become strategic partners with the employers as well as provide quality services. Although literature regarding collaboration between the employers and OHS providers was limited, we identified several key factors that contribute to effective collaboration. This information is useful in developing indicators of effective collaboration that will enable organisation of more effective OHS practices.
Liu, Tongzhu; Shen, Aizong; Hu, Xiaojian; Tong, Guixian; Gu, Wei
2017-06-01
We aimed to apply a collaborative business intelligence (BI) system to the hospital supply, processing and distribution (SPD) logistics management model. We searched the Engineering Village database, the China National Knowledge Infrastructure (CNKI) and Google for articles (published from 2011 to 2016), books, web pages, etc., to understand SPD- and BI-related theories and the recent research status. To apply collaborative BI technology to the hospital SPD logistics management model, we leveraged data mining techniques to discover knowledge from complex data and collaborative techniques to improve business process theory. For the application of the BI system, we: (i) proposed a layered structure of a collaborative BI system for intelligent management in hospital logistics; (ii) built a data warehouse for the collaborative BI system; (iii) improved data mining techniques such as support vector machines (SVM) and the swarm-intelligence firefly algorithm to solve key problems in the hospital logistics collaborative BI system; and (iv) researched collaborative techniques oriented to data and business process optimization to improve the business processes of hospital logistics management. A proper combination of the SPD model and the BI system will improve the management of logistics in hospitals. Successful implementation of the study requires: (i) innovating and improving the traditional SPD model and making appropriate implementation plans and schedules for the application of the BI system according to the actual situations of hospitals; (ii) the collaborative participation of internal hospital departments, including information, logistics, nursing, medical and financial departments; and (iii) the timely response of external suppliers.
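As a rough illustration of the SVM component mentioned above, the sketch below fits a stock scikit-learn classifier on synthetic data; the features and labels are invented, and the improved SVM and firefly-algorithm methods of the actual system are not reproduced here.

    # Toy sketch of an SVM classifier of the kind that might support replenishment decisions in SPD logistics.
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))                   # e.g. recent usage, season index, ward count (invented)
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # 1 = replenish item, 0 = hold (synthetic rule)

    clf = SVC(kernel="rbf", C=1.0).fit(X[:150], y[:150])
    print("held-out accuracy:", clf.score(X[150:], y[150:]))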
Gramene 2016: comparative plant genomics and pathway resources
USDA-ARS?s Scientific Manuscript database
Gramene (http://www.gramene.org) is an online resource for comparative functional genomics in crops and model plant species. Its two main frameworks are genomes (collaboration with Ensembl Plants) and pathways (The Plant Reactome and archival BioCyc databases). Since our last NAR update, the data...
USDA Nutrient Data Set for Retail Veal Cuts
USDA-ARS?s Scientific Manuscript database
The U.S. Department of Agriculture (USDA) Nutrient Data Laboratory (NDL), in collaboration with Colorado State University, conducted a research study designed to update and expand the data on veal cuts in the USDA National Nutrient Database for Standard Reference (SR). This research has been necess...
Internet Issues and Applications, 1997-1998.
ERIC Educational Resources Information Center
Dempsey, Bert J., Ed.; Jones, Paul, Ed.
This book gives an overview of the leading-edge Internet application areas (streaming multimedia, collaborative tools, Web databases) and key information policy issues (privacy, censorship, information quality, and more). The text serves as a primer on understanding the forces--economic, legal, social, as well as technological--that are shaping…
Research on architecture of intelligent transportation cloud platform for Guangxi expressway
NASA Astrophysics Data System (ADS)
Hua, Pan; Huang, Zhongxiang; He, Zengzhen
2017-04-01
In view of the practical needs of intelligent transportation business collaboration, a model of intelligent traffic business collaboration is established. An architecture of an intelligent traffic cloud platform for expressways is proposed that realizes the loose coupling of the individual intelligent traffic business modules. Based on customization technology in the database design, the platform realizes dynamic customization of business functions, meaning that different roles can dynamically add business functions according to their needs. Through its application in the development and implementation of the actual business system, the architecture is shown to be effective and feasible.
NASA Astrophysics Data System (ADS)
Walker, J. D.; Ash, J. M.; Bowring, J.; Bowring, S. A.; Deino, A. L.; Kislitsyn, R.; Koppers, A. A.
2009-12-01
One of the most onerous tasks in the rigorous development of data reporting and databases for geochronological and thermochronological studies is to fully capture all of the metadata needed to completely document both the analytical work and the interpretation effort. This information is available in the data reduction programs used by researchers, but has proven difficult to harvest into either publications or databases. For this reason, the EarthChem and EARTHTIME efforts are collaborating to foster the next generation of data management and discovery for age information by integrating data reporting with data reduction. EarthChem is a community-driven effort to facilitate the discovery, access, and preservation of geochemical data of all types and to support research and enable new and better science. EARTHTIME is also a community-initiated project whose aim is to foster the next generation of high-precision geochronology and thermochronology. In addition, collaboration with the CRONUS effort for cosmogenic radionuclides is in progress. EarthChem workers have met with groups working on the Ar-Ar, U-Pb, and (U-Th)/He systems to establish data reporting requirements as well as XML schemas to be used for transferring data from reduction programs to databases. At present, we have prototype systems working for the U-Pb_Redux, ArArCalc, MassSpec, and Helios programs. In each program, the user can select to upload data and metadata to the GEOCHRON system hosted at EarthChem. There are two additional requirements for upload. The first is a unique identifier (IGSN) for the sample, obtained from the SESAR system either manually or via web services built into the reduction program. The second is that the user selects whether the sample is to be available for discovery (public) or remain hidden (private). Searches for data at the GEOCHRON portal can be done using age, method, mineral, or location parameters. Data can be downloaded in the full XML format for ingestion back into the reduction program or as abbreviated tables.
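A hedged sketch of assembling an upload payload of the general kind described is shown below; the element names and structure are hypothetical illustrations, not the actual EARTHTIME/EarthChem XML schema, and the IGSN shown is a placeholder.

    # Hypothetical metadata payload for a GEOCHRON-style upload; element names are illustrative only.
    import xml.etree.ElementTree as ET

    sample = ET.Element("Sample", attrib={"igsn": "IEXXX0001", "public": "false"})
    ET.SubElement(sample, "Mineral").text = "zircon"
    ET.SubElement(sample, "Method").text = "U-Pb TIMS"
    age = ET.SubElement(sample, "InterpretedAge")
    ET.SubElement(age, "ValueMa").text = "252.17"
    ET.SubElement(age, "TwoSigmaMa").text = "0.06"

    print(ET.tostring(sample, encoding="unicode"))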
Lynx: a database and knowledge extraction engine for integrative medicine.
Sulakhe, Dinanath; Balasubramanian, Sandhya; Xie, Bingqing; Feng, Bo; Taylor, Andrew; Wang, Sheng; Berrocal, Eduardo; Dave, Utpal; Xu, Jinbo; Börnigen, Daniela; Gilliam, T Conrad; Maltsev, Natalia
2014-01-01
We have developed Lynx (http://lynx.ci.uchicago.edu)--a web-based database and a knowledge extraction engine, supporting annotation and analysis of experimental data and generation of weighted hypotheses on molecular mechanisms contributing to human phenotypes and disorders of interest. Its underlying knowledge base (LynxKB) integrates various classes of information from >35 public databases and private collections, as well as manually curated data from our group and collaborators. Lynx provides advanced search capabilities and a variety of algorithms for enrichment analysis and network-based gene prioritization to assist the user in extracting meaningful knowledge from LynxKB and experimental data, whereas its service-oriented architecture provides public access to LynxKB and its analytical tools via user-friendly web services and interfaces.
Dwyer, Johanna T.; Picciano, Mary Frances; Betz, Joseph M.; Fisher, Kenneth D.; Saldanha, Leila G.; Yetley, Elizabeth A.; Coates, Paul M.; Radimer, Kathy; Bindewald, Bernadette; Sharpless, Katherine E.; Holden, Joanne; Andrews, Karen; Zhao, Cuiwei; Harnly, James; Wolf, Wayne R.; Perry, Charles R.
2013-01-01
Several activities of the Office of Dietary Supplements (ODS) at the National Institutes of Health involve enhancement of dietary supplement databases. These include an initiative with US Department of Agriculture to develop an analytically substantiated dietary supplement ingredient database (DSID) and collaboration with the National Center for Health Statistics to enhance the dietary supplement label database in the National Health and Nutrition Examination Survey (NHANES). The many challenges that must be dealt with in developing an analytically supported DSID include categorizing product types in the database, identifying nutrients, and other components of public health interest in these products and prioritizing which will be entered in the database first. Additional tasks include developing methods and reference materials for quantifying the constituents, finding qualified laboratories to measure the constituents, developing appropriate sample handling procedures, and finally developing representative sampling plans. Developing the NHANES dietary supplement label database has other challenges such as collecting information on dietary supplement use from NHANES respondents, constant updating and refining of information obtained, developing default values that can be used if the respondent cannot supply the exact supplement or strength that was consumed, and developing a publicly available label database. Federal partners and the research community are assisting in making an analytically supported dietary supplement database a reality. PMID:25309034
McCann, Liza J; Kirkham, Jamie J; Wedderburn, Lucy R; Pilkington, Clarissa; Huber, Adam M; Ravelli, Angelo; Appelbe, Duncan; Williamson, Paula R; Beresford, Michael W
2015-06-12
Juvenile dermatomyositis (JDM) is a rare autoimmune inflammatory disorder associated with significant morbidity and mortality. International collaboration is necessary to better understand the pathogenesis of the disease, response to treatment and long-term outcome. To aid international collaboration, it is essential to have a core set of data that all researchers and clinicians collect in a standardised way for clinical purposes and for research. This should include demographic details, diagnostic data and measures of disease activity, investigations and treatment. Variables in existing clinical registries have been compared to produce a provisional data set for JDM. We now aim to develop this into a consensus-approved minimum core dataset, tested in a wider setting, with the objective of achieving international agreement. A two-stage bespoke Delphi-process will engage the opinion of a large number of key stakeholders through Email distribution via established international paediatric rheumatology and myositis organisations. This, together with a formalised patient/parent participation process will help inform a consensus meeting of international experts that will utilise a nominal group technique (NGT). The resulting proposed minimal dataset will be tested for feasibility within existing database infrastructures. The developed minimal dataset will be sent to all internationally representative collaborators for final comment. The participants of the expert consensus group will be asked to draw together these comments, ratify and 'sign off' the final minimal dataset. An internationally agreed minimal dataset has the potential to significantly enhance collaboration, allow effective communication between groups, provide a minimal standard of care and enable analysis of the largest possible number of JDM patients to provide a greater understanding of this disease. The final approved minimum core dataset could be rapidly incorporated into national and international collaborative efforts, including existing prospective databases, and be available for use in randomised controlled trials and for treatment/protocol comparisons in cohort studies.
Watanabe, Yoshinori; Hirano, Yoko; Asami, Yuko; Okada, Maki; Fujita, Kazuya
2017-11-01
A unique database named 'AN-SAPO' was developed by Iwato Corp. and Japan Brain Corp. in collaboration with the psychiatric clinics run by Himorogi Group in Japan. The AN-SAPO database includes patients' depression/anxiety score data from a mobile app named AN-SAPO and medical records from medical prescription software named 'ORCA'. On the mobile app, depression/anxiety severity can be evaluated by answering 20 brief questions and the scores are transferred to the AN-SAPO database together with the patients' medical records on ORCA. Currently, this database is used at the Himorogi Group's psychiatric clinics and has over 2000 patients' records accumulated since November 2013. Since the database covers patients' demographic data, prescribed drugs, and the efficacy and safety information, it could be a useful supporting tool for decision-making in clinical practice. We expect it to be utilised in wider areas of medical fields and for future pharmacovigilance and pharmacoepidemiological studies.
The Internet and World-Wide-Web: Potential Benefits to Rural Schools.
ERIC Educational Resources Information Center
Barker, Bruce O.
The Internet is a decentralized collection of computer networks managed by separate groups using a common set of technical standards. The Internet has tremendous potential as an educational resource by providing access to networking through worldwide electronic mail, various databases, and electronic bulletin boards; collaborative investigation…
The Two-Communities Theory and Knowledge Utilization.
ERIC Educational Resources Information Center
Caplan, Nathan
1979-01-01
Discusses strategies to improve policy makers' utilization of research based on the "two-communities" theory that social scientists and policy makers live in two different worlds. Notes that for high level decision making, collaboration must involve more general problems and a decision to use either data-based or nonresearch knowledge for solving…
Evaluating Assessment Using N-Dimensional Filtering.
ERIC Educational Resources Information Center
Dron, Jon; Boyne, Chris; Mitchell, Richard
This paper describes the use of the CoFIND (Collaborative Filter in N Dimensions) system to evaluate two assessment styles. CoFIND is a resource database that organizes itself around its users' needs. Learners enter resources, categorize, then rate them using "qualities," aspects of resources which learners find worthwhile, the n…
Development of a Regional U.S. MARKAL Database for Energy and Emissions Modeling
The U.S. Climate Change Science Program (CCSP) is a collaborative effort among 13 agencies of the U.S. federal government. From the CCSP's 2003 strategic plan, its mission is to: "facilitate the creation and application of knowledge of the earth's global environment through resea...
2009-08-24
of some defined population […].” (Thorndike, 1971, p. 533). Norms in this context would allow an organization to understand its relative standing on... Thorndike, R. L. (1971). Educational Measurement (2nd Ed.). Washington, DC: American Council on Education.
NREL Supercomputer Tackles Grid Challenges | News | NREL
"Big data", including imagery and large-scale simulation data, is playing a growing role beyond traditional database processes; Peregrine provides much of the computing capability needed to handle it. Collaboration is key, and it is hard-wired into the ESIF's core.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-07
... statistical and other methodological consultation to this collaborative project. Discussion: Grantees under... and technical assistance must be designed to contribute to the following outcomes: (a) Maintenance of... methodological consultation available for research projects that use the BMS Database, as well as site-specific...
Self-Assembling Texts & Courses of Study.
ERIC Educational Resources Information Center
Gibson, David
This paper describes the development of an interoperable meta-database system--a system of applications using metadata--that is intended to facilitate learner-centered collaboration, access to learning resources, and the fitness of channels of information to the emerging needs of learners at both individual and group levels. Highlights include:…
About the Cancer Biomarkers Research Group | Division of Cancer Prevention
The Cancer Biomarkers Research Group promotes research to identify, develop, and validate biological markers for early cancer detection and cancer risk assessment. Activities include development and validation of promising cancer biomarkers, collaborative databases and informatics systems, and new technologies or the refinement of existing technologies. NCI DCP News Note
Memory in the Information Age: New Tools for Second Language Acquisition.
ERIC Educational Resources Information Center
Chapin, Alex
2003-01-01
Describes a Middlebury College second language vocabulary learning database that goes well beyond flashcards, because it keeps track of what students learn. Discusses further expansion of the system through collaborative filtering software to establish learner profiles. A learner profile could then be used to create instructional materials just…
ERIC Educational Resources Information Center
Parker, Ben; Turner, William
2014-01-01
Objective: To assess the effectiveness of psychoanalytic/psychodynamic psychotherapy for children and adolescents who have been sexually abused. Method: The Cochrane Collaboration's criteria for data synthesis and study quality assessment were used. Electronic bibliographic databases and web searches were used to identify randomized and…
Developments in Transnational Research Linkages: Evidence from U.S. Higher-Education Activity
ERIC Educational Resources Information Center
Koehn, Peter H.
2014-01-01
In our knowledge-driven era, multiple and mutual benefits accrue from transnational research linkages. The article identifies important directions in transnational research collaborations involving U.S. universities revealed by key dimensions of 369 projects profiled on a U.S. higher-education association's database. Project initiators, principal…
A Call for Feminist Research: A Limited Client Perspective
ERIC Educational Resources Information Center
Murray, Kirsten
2006-01-01
Feminist approaches embrace a counselor stance that is both collaborative and supportive, seeking client empowerment. On review of feminist family and couple counseling literature of the past 20 years using several academic databases, no research was found that explored a client's experience of feminist-informed family and couple counseling. The…
Harvesting small trees and forest residues
Bryce J. Stokes
1992-01-01
Eight countries collaborated and shared technical information on the harvesting of small trees and forest residues in a three year program. Proceedings and reports from workshops and reviews are summarized in a review of activities and harvesting systems of the participating countries. Four databases were developed for harvesting and transportation of these materials...
Intersubjectivity in Primary and Secondary Education: A Review Study
ERIC Educational Resources Information Center
Beraldo, Rossana Mary Fujarra; Ligorio, M. Beatrice; Barbato, Silviane
2018-01-01
In this literature review on the dynamics of intersubjectivity in primary and secondary education, we summarise and examine articles published in the last 10 years. This concept is considered relevant in education, in particular to enhance different types of collaborative learning situations. The articles were selected from several databases, all…
Real-World Examples: Developing a Departmental Alumni Network
ERIC Educational Resources Information Center
Ashline, George
2017-01-01
We describe the context for and implementation of a departmental alumni network. More than a database compiling facts about graduates, this network provides students with information and inspiration. It also offers a wonderful opportunity to support lifelong learning through the development of collaborative relationships between alumni and faculty…
Multi-Disciplinary Learning through a Database Development Project
ERIC Educational Resources Information Center
Ng, Vincent; Lau, Chloe; Shum, Pearl
2012-01-01
Recently, there are many good examples of how multi-disciplinary learning can support students to learn collaboratively and not solely focus on a single professional sector. During the Fall 2011 and Spring 2012 semesters, we have attempted to gather students studying different professional domains together. Students from the Department of…
Cracking the Egg: The South Carolina Digital Library's New Perspective
ERIC Educational Resources Information Center
Vinson, Christopher G.; Boyd, Kate Foster
2008-01-01
This article explores the historical foundations of the South Carolina Digital Library, a collaborative statewide program that ties together academic special collections and archives, public libraries, state government archives, and other cultural resource institutions in an effort to provide the state with a comprehensive database of online…
Developmental toxicity is one of the most important non-cancer endpoints for both environmental and human health. Despite the fact that numerous developmental studies are being conducted, as required for regulatory decisions, there are not yet sufficient data available to develop...
Consortial IT Services: Collaborating To Reduce the Pain.
ERIC Educational Resources Information Center
Klonoski, Ed
The Connecticut Distance Learning Consortium (CTDLC) provides its 32 members with Information Technologies (IT) services including a portal Web site, course management software, course hosting and development, faculty training, a help desk, online assessment, and a student financial aid database. These services are supplied to two- and four-year…
Valentijn, Pim P; Bruijnzeels, Marc A; de Leeuw, Rob J; Schrijvers, Guus J.P
2012-01-01
Purpose Capacity problems and political pressures have led to a rapid change in the organization of primary care from monodisciplinary small businesses to complex inter-organizational relationships. It is assumed that inter-organizational collaboration is the driving force to achieve integrated (primary) care. Despite the importance of collaboration and integration of services in primary care, there is no unambiguous definition of either concept. The purpose of this study is to examine and link the conceptualisation and validation of the terms inter-organizational collaboration and integrated primary care using a theoretical framework. Theory The theoretical framework is based on the complex collaboration process of negotiation among multiple stakeholder groups in primary care. Methods A literature review of health sciences and business databases, and targeted grey literature sources. Based on the literature review we operationalized the constructs of inter-organizational collaboration and integrated primary care in a theoretical framework. The framework is being validated in an explorative study of 80 primary care projects in the Netherlands. Results and conclusions Integrated primary care is considered a multidimensional construct based on a continuum of integration, extending from segregation to integration. The synthesis of the current theories and concepts of inter-organizational collaboration is insufficient to deal with the complexity of collaborative issues in primary care. One coherent and integrated theoretical framework was found that could make the complex collaboration process in primary care transparent. The theoretical framework presented in this study is a first step towards understanding the patterns of successful collaboration and integration in primary care services. These patterns can give insights into the organizational forms needed to create a well-functioning integrated (primary) care system that fits the local needs of a population. Preliminary data on the patterns of collaboration and integration will be presented.
Kennedy, Amy E.; Khoury, Muin J.; Ioannidis, John P.A.; Brotzman, Michelle; Miller, Amy; Lane, Crystal; Lai, Gabriel Y.; Rogers, Scott D.; Harvey, Chinonye; Elena, Joanne W.; Seminara, Daniela
2017-01-01
Background We report on the establishment of a web-based Cancer Epidemiology Descriptive Cohort Database (CEDCD). The CEDCD’s goals are to enhance awareness of resources, facilitate interdisciplinary research collaborations, and support existing cohorts for the study of cancer-related outcomes. Methods Comprehensive descriptive data were collected from large cohorts established to study cancer as primary outcome using a newly developed questionnaire. These included an inventory of baseline and follow-up data, biospecimens, genomics, policies, and protocols. Additional descriptive data extracted from publicly available sources were also collected. This information was entered in a searchable and publicly accessible database. We summarized the descriptive data across cohorts and reported the characteristics of this resource. Results As of December 2015, the CEDCD includes data from 46 cohorts representing more than 6.5 million individuals (29% ethnic/racial minorities). Overall, 78% of the cohorts have collected blood at least once, 57% at multiple time points, and 46% collected tissue samples. Genotyping has been performed by 67% of the cohorts, while 46% have performed whole-genome or exome sequencing in subsets of enrolled individuals. Information on medical conditions other than cancer has been collected in more than 50% of the cohorts. More than 600,000 incident cancer cases and more than 40,000 prevalent cases are reported, with 24 cancer sites represented. Conclusions The CEDCD assembles detailed descriptive information on a large number of cancer cohorts in a searchable database. Impact Information from the CEDCD may assist the interdisciplinary research community by facilitating identification of well-established population resources and large-scale collaborative and integrative research. PMID:27439404
Bull, Janet; Zafar, S Yousuf; Wheeler, Jane L; Harker, Matthew; Gblokpor, Agbessi; Hanson, Laura; Hulihan, Deirdre; Nugent, Rikki; Morris, John; Abernethy, Amy P
2010-08-01
Outpatient palliative care, an evolving delivery model, seeks to improve continuity of care across settings and to increase access to services in hospice and palliative medicine (HPM). It can provide a critical bridge between inpatient palliative care and hospice, filling the gap in community-based supportive care for patients with advanced life-limiting illness. Low capacities for data collection and quantitative research in HPM have impeded assessment of the impact of outpatient palliative care. In North Carolina, a regional database for community-based palliative care has been created through a unique partnership between a HPM organization and academic medical center. This database flexibly uses information technology to collect patient data, entered at the point of care (e.g., home, inpatient hospice, assisted living facility, nursing home). HPM physicians and nurse practitioners collect data; data are transferred to an academic site that assists with analyses and data management. Reports to community-based sites, based on data they provide, create a better understanding of local care quality. The data system was developed and implemented over a 2-year period, starting with one community-based HPM site and expanding to four. Data collection methods were collaboratively created and refined. The database continues to grow. Analyses presented herein examine data from one site and encompass 2572 visits from 970 new patients, characterizing the population, symptom profiles, and change in symptoms after intervention. A collaborative regional approach to HPM data can support evaluation and improvement of palliative care quality at the local, aggregated, and statewide levels.
Virtual Atomic and Molecular Data Center (VAMDC) and Stark-B Database
NASA Astrophysics Data System (ADS)
Dimitrijevic, M. S.; Sahal-Brechot, S.; Kovacevic, A.; Jevremovic, D.; Popovic, L. C.; VAMDC Consortium; Dubernet, Marie-Lise
2012-01-01
The Virtual Atomic and Molecular Data Center (VAMDC) is a European FP7 project that aims to build a flexible and interoperable e-science environment providing an interface to existing atomic and molecular data. The VAMDC will be built upon the expertise of existing atomic and molecular databases, data producers and service providers, with the specific aim of creating an infrastructure that is easily tuned to the requirements of a wide variety of users in academic, governmental, industrial or public communities. VAMDC will also include the STARK-B database, which contains Stark broadening parameters for a large number of lines, obtained by the semiclassical perturbation method during more than 30 years of collaboration between two of the authors of this work (MSD and SSB) and their co-workers. In this contribution we review the VAMDC project and the STARK-B database, and discuss the benefits of both for the corresponding data users.
Increasing access to Latin American social medicine resources: a preliminary report.
Buchanan, Holly Shipp; Waitzkin, Howard; Eldredge, Jonathan; Davidson, Russ; Iriart, Celia; Teal, Janis
2003-10-01
This preliminary report describes the development and implementation of a project to improve access to literature in Latin American social medicine (LASM). The University of New Mexico project team collaborated with participants from Argentina, Brazil, Chile, and Ecuador to identify approximately 400 articles and books in Latin American social medicine. Structured abstracts were prepared, translated into English, Spanish, and Portuguese, assigned Medical Subject Headings (MeSH), and loaded into a Web-based database for public searching. The project has initiated Web-based publication for two LASM journals. Evaluation included measures of use and content. The LASM Website (http://hsc.unm.edu/lasm) and database create access to formerly little-known literature that addresses problems relevant to current medicine and public health. This Website offers a unique resource for researchers, practitioners, and teachers who seek to understand the links between socioeconomic conditions and health. The project provides a model for collaboration between librarians and health care providers. Challenges included procurement of primary material; preparation of concise abstracts; working with trilingual translations of abstracts, metadata, and indexing; and the work processes of the multidisciplinary team. The literature of Latin American social medicine has become more readily available to researchers worldwide. The LASM project serves as a collaborative model for the creation of sustainable solutions for disseminating information that is difficult to access through traditional methods.
The GHEP–EMPOP collaboration on mtDNA population data—A new resource for forensic casework
Prieto, L.; Zimmermann, B.; Goios, A.; Rodriguez-Monge, A.; Paneto, G.G.; Alves, C.; Alonso, A.; Fridman, C.; Cardoso, S.; Lima, G.; Anjos, M.J.; Whittle, M.R.; Montesino, M.; Cicarelli, R.M.B.; Rocha, A.M.; Albarrán, C.; de Pancorbo, M.M.; Pinheiro, M.F.; Carvalho, M.; Sumita, D.R.; Parson, W.
2011-01-01
Mitochondrial DNA (mtDNA) population data for forensic purposes are still scarce for some populations, which may limit the evaluation of forensic evidence, especially when the rarity of a haplotype needs to be determined in a database search. In order to improve the collection of mtDNA lineages from the Iberian and South American subcontinents, we here report the results of a collaborative study involving nine laboratories from the Spanish and Portuguese Speaking Working Group of the International Society for Forensic Genetics (GHEP-ISFG) and EMPOP. The individual laboratories contributed population data that were generated throughout the past 10 years, but in the majority of cases had not been made available to the scientific community. A total of 1019 haplotypes from Iberia (Basque Country, 2 general Spanish populations, 2 North and 1 Central Portugal populations), and Latin America (3 populations from São Paulo) were collected, reviewed and harmonized according to defined EMPOP criteria. The majority of data ambiguities that were found during the reviewing process (41 in total) were transcription errors, confirming that the documentation process is still the most error-prone stage in reporting mtDNA population data, especially when performed manually. This GHEP–EMPOP collaboration has significantly improved the quality of the individual mtDNA datasets and adds mtDNA population data as a valuable resource to the EMPOP database (www.empop.org). PMID:21075696
A prototype molecular interactive collaborative environment (MICE).
Bourne, P; Gribskov, M; Johnson, G; Moreland, J; Wavra, S; Weissig, H
1998-01-01
Illustrations of macromolecular structure in the scientific literature contain a high level of semantic content through which the authors convey, among other features, the biological function of that macromolecule. We refer to these illustrations as molecular scenes. Such scenes, if available electronically, are not readily accessible for further interactive interrogation. The basic PDB format does not retain features of the scene; formats like PostScript retain the scene but are not interactive; and the many formats used by individual graphics programs, while capable of reproducing the scene, are neither interchangeable nor can they be stored in a database and queried for features of the scene. MICE defines a Molecular Scene Description Language (MSDL) which allows scenes to be stored in a relational database (a molecular scene gallery) and queried. Scenes retrieved from the gallery are rendered in Virtual Reality Modeling Language (VRML) and currently displayed in WebView, a VRML browser modified to support the Virtual Reality Behavior System (VRBS) protocol. VRBS provides communication between multiple client browsers, each capable of manipulating the scene. This level of collaboration works well over standard Internet connections and holds promise for collaborative research at a distance and distance learning. Further, via VRBS, the VRML world can be used as a visual cue to trigger an application such as a remote MEME search. MICE is very much work in progress. Current work seeks to replace WebView with Netscape, Cosmoplayer, a standard VRML plug-in, and a Java-based console. The console consists of a generic kernel suitable for multiple collaborative applications and additional application-specific controls. Further details of the MICE project are available at http://mice.sdsc.edu.
Databases as policy instruments. About extending networks as evidence-based policy.
de Bont, Antoinette; Stoevelaar, Herman; Bal, Roland
2007-12-07
This article seeks to identify the role of databases in health policy. Access to information and communication technologies has changed traditional relationships between the state and professionals, creating new systems of surveillance and control. As a result, databases may have a profound effect on controlling clinical practice. We conducted three case studies to reconstruct the development and use of databases as policy instruments. Each database was intended to be employed to control the use of one particular pharmaceutical in the Netherlands (growth hormone, antiretroviral drugs for HIV and Taxol, respectively). We studied the archives of the Dutch Health Insurance Board, conducted in-depth interviews with key informants and organized two focus groups, all focused on the use of databases both in policy circles and in clinical practice. Our results demonstrate that policy makers hardly used the databases, neither for cost control nor for quality assurance. Further analysis revealed that these databases facilitated self-regulation and quality assurance by (national) bodies of professionals, resulting in restrictive prescription behavior amongst physicians. The databases fulfill control functions that were formerly located within the policy realm. The databases facilitate collaboration between policy makers and physicians, since they enable quality assurance by professionals. Delegating regulatory authority downwards into a network of physicians who control the use of pharmaceuticals seems to be a good alternative for centralized control on the basis of monitoring data.
Schendel, Diana E; Bresnahan, Michaeline; Carter, Kim W; Francis, Richard W; Gissler, Mika; Grønborg, Therese K; Gross, Raz; Gunnes, Nina; Hornig, Mady; Hultman, Christina M; Langridge, Amanda; Lauritsen, Marlene B; Leonard, Helen; Parner, Erik T; Reichenberg, Abraham; Sandin, Sven; Sourander, Andre; Stoltenberg, Camilla; Suominen, Auli; Surén, Pål; Susser, Ezra
2013-11-01
The International Collaboration for Autism Registry Epidemiology (iCARE) is the first multinational research consortium (Australia, Denmark, Finland, Israel, Norway, Sweden, USA) to promote research in autism geographical and temporal heterogeneity, phenotype, family and life course patterns, and etiology. iCARE devised solutions to challenges in multinational collaboration concerning data access security, confidentiality and management. Data are obtained by integrating existing national or state-wide, population-based, individual-level data systems and undergo rigorous harmonization and quality control processes. Analyses are performed using database federation via a computational infrastructure with a secure, web-based, interface. iCARE provides a unique, unprecedented resource in autism research that will significantly enhance the ability to detect environmental and genetic contributions to the causes and life course of autism.
NASA Astrophysics Data System (ADS)
Feng, Guang; Li, Hengjian; Dong, Jiwen; Chen, Xi; Yang, Huiru
2018-04-01
In this paper, we proposed a joint and collaborative representation with Volterra kernel convolution features (JCR-VK) for face recognition. Firstly, the candidate face images are divided into sub-blocks of equal size. Features are extracted from the blocks using two-dimensional Volterra kernel discriminant analysis, which better captures the discriminative information of different faces. Next, the proposed joint and collaborative representation is employed to optimize and classify the local Volterra kernel features (JCR-VK) individually. JCR-VK is very efficient because its implementation depends only on matrix multiplication. Finally, recognition is completed by using the majority voting principle. Extensive experiments on the Extended Yale B and AR face databases are conducted, and the results show that the proposed approach can outperform other recently presented similar dictionary algorithms in recognition accuracy.
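The collaborative-representation step described above reduces to a regularized least-squares coding of each block feature against a dictionary of training features, followed by class-wise residual comparison and a majority vote across blocks. Below is a minimal NumPy sketch of that step only; it assumes the Volterra kernel features have already been extracted, and all names (crc_classify, D, labels) are illustrative rather than the authors' implementation.

    import numpy as np

    def crc_classify(y, D, labels, lam=0.01):
        # Collaborative representation: ridge-regularized coding of feature y over the
        # dictionary D (d x n matrix of training feature columns); the closed-form
        # solution uses only matrix products. labels: length-n NumPy array of class ids.
        n = D.shape[1]
        x = np.linalg.solve(D.T @ D + lam * np.eye(n), D.T @ y)
        # Assign the class whose training columns best reconstruct y.
        residuals = {c: np.linalg.norm(y - D[:, labels == c] @ x[labels == c])
                     for c in np.unique(labels)}
        return min(residuals, key=residuals.get)

    def classify_face(block_features, block_dicts, labels, lam=0.01):
        # Classify each sub-block independently, then take the majority vote.
        votes = [crc_classify(y, D, labels, lam) for y, D in zip(block_features, block_dicts)]
        vals, counts = np.unique(votes, return_counts=True)
        return vals[np.argmax(counts)]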
Accelerating Cancer Systems Biology Research through Semantic Web Technology
Wang, Zhihui; Sagotsky, Jonathan; Taylor, Thomas; Shironoshita, Patrick; Deisboeck, Thomas S.
2012-01-01
Cancer systems biology is an interdisciplinary, rapidly expanding research field in which collaborations are a critical means to advance the field. Yet the prevalent database technologies often isolate data rather than making it easily accessible. The Semantic Web has the potential to help facilitate web-based collaborative cancer research by presenting data in a manner that is self-descriptive, human and machine readable, and easily sharable. We have created a semantically linked online Digital Model Repository (DMR) for storing, managing, executing, annotating, and sharing computational cancer models. Within the DMR, distributed, multidisciplinary, and inter-organizational teams can collaborate on projects, without forfeiting intellectual property. This is achieved by the introduction of a new stakeholder to the collaboration workflow, the institutional licensing officer, part of the Technology Transfer Office. Furthermore, the DMR has achieved silver level compatibility with the National Cancer Institute’s caBIG®, so users can not only interact with the DMR through a web browser but also through a semantically annotated and secure web service. We also discuss the technology behind the DMR leveraging the Semantic Web, ontologies, and grid computing to provide secure inter-institutional collaboration on cancer modeling projects, online grid-based execution of shared models, and the collaboration workflow protecting researchers’ intellectual property. PMID:23188758
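As a rough illustration of the "self-descriptive, human and machine readable" data that the Semantic Web approach above enables, the sketch below publishes metadata for a hypothetical model record as RDF with the Python rdflib library; the namespace, property names and identifiers are invented for the example and are not the DMR's actual vocabulary.

    from rdflib import Graph, Literal, Namespace, RDF, URIRef

    EX = Namespace("http://example.org/dmr/")      # illustrative namespace, not the real DMR one
    g = Graph()

    model = EX["model/angiogenesis-v2"]            # hypothetical model identifier
    g.add((model, RDF.type, EX.ComputationalCancerModel))
    g.add((model, EX.title, Literal("2D tumour angiogenesis model")))
    g.add((model, EX.contributor, Literal("Example Lab")))
    g.add((model, EX.license, URIRef("http://example.org/licenses/academic-use")))

    # Turtle serialization is both human readable and easily shared between machines.
    print(g.serialize(format="turtle"))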
Afzelius, Maria; Östman, Margareta; Råstam, Maria; Priebe, Gisela
2018-01-01
A parental mental illness affects all family members and should warrant a need for support. To investigate the extent to which psychiatric patients with underage children are the recipients of child-focused interventions and involved in interagency collaboration. Data were retrieved from a psychiatric services medical record database consisting of data regarding 29,972 individuals in southern Sweden and indicating the patients' main diagnoses, comorbidity, children below the age of 18, and child-focused interventions. Among the patients surveyed, 12.9% had registered underage children. One-fourth of the patients received child-focused interventions from adult psychiatry, and out of these 30.7% were involved in interagency collaboration as compared to 7.7% without child-focused interventions. Overall, collaboration with child and adolescent psychiatric services was low for all main diagnoses. If a patient received child-focused interventions from psychiatric services, the likelihood of being involved in interagency collaboration was five times greater as compared to patients receiving no child-focused intervention when controlled for gender, main diagnosis, and inpatient care. Psychiatric services play a significant role in identifying the need for and initiating child-focused interventions in families with a parental mental illness, and need to develop and support strategies to enhance interagency collaboration with other welfare services.
Dynamics of a Global Zoonotic Research Network Over 33 Years (1980-2012).
Hossain, Liaquat; Karimi, Faezeh; Wigand, Rolf T
2015-10-01
The increasing rate of outbreaks in humans of zoonotic diseases requires detailed examination of the education, research, and practice of animal health and its connection to human health. This study investigated the collaboration network of different fields engaged in conducting zoonotic research from a transdisciplinary perspective. Examination of the dynamics of this network for a 33-year period from 1980 to 2012 is presented through the development of a large scientometric database from Scopus. In our analyses we compared several properties of these networks, including density, clustering coefficient, giant component, and centrality measures over time. We also elicited patterns in different fields of study collaborating with various other fields for zoonotic research. We discovered that the strongest collaborations across disciplines are formed among the fields of medicine; biochemistry, genetics, and molecular biology; immunology and microbiology; veterinary; agricultural and biological sciences; and social sciences. Furthermore, the affiliation network is growing overall in terms of collaborative research among different fields of study such that more than two-thirds of all possible collaboration links among disciplines have already been formed. Our findings indicate that zoonotic research scientists in different fields (human or animal health, social science, earth and environmental sciences, engineering) have been actively collaborating with each other over the past 11 years.
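The network properties compared in this study (density, clustering coefficient, giant component and centrality) are standard graph measures; the sketch below computes them with the Python networkx library on a small made-up field-collaboration graph rather than the Scopus-derived network.

    import networkx as nx

    # Toy affiliation network: nodes are fields of study, edges are observed collaborations.
    G = nx.Graph()
    G.add_edges_from([
        ("medicine", "immunology and microbiology"),
        ("medicine", "veterinary"),
        ("medicine", "social sciences"),
        ("veterinary", "agricultural and biological sciences"),
        ("immunology and microbiology", "biochemistry, genetics and molecular biology"),
        ("engineering", "environmental sciences"),
    ])

    density = nx.density(G)                              # fraction of possible links realized
    clustering = nx.average_clustering(G)                # mean clustering coefficient
    giant = max(nx.connected_components(G), key=len)     # node set of the giant component
    centrality = nx.degree_centrality(G)                 # degree centrality per field

    print(density, clustering, len(giant), max(centrality, key=centrality.get))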
LIU, Tongzhu; SHEN, Aizong; HU, Xiaojian; TONG, Guixian; GU, Wei
2017-01-01
Background: We aimed to apply a collaborative business intelligence (BI) system to the hospital supply, processing and distribution (SPD) logistics management model. Methods: We searched the Engineering Village database, the China National Knowledge Infrastructure (CNKI) and Google for articles (published from 2011 to 2016), books, web pages, etc., to understand SPD- and BI-related theories and the current state of research. We applied collaborative BI technology to the hospital SPD logistics management model by leveraging data mining techniques to discover knowledge from complex data and collaborative techniques to improve business processes. Results: For the application of the BI system, we: (i) proposed a layered structure of a collaborative BI system for intelligent management in hospital logistics; (ii) built a data warehouse for the collaborative BI system; (iii) improved data mining techniques such as support vector machines (SVM) and the swarm-intelligence firefly algorithm to solve key problems in the hospital logistics collaborative BI system; (iv) researched collaborative techniques oriented to data and business process optimization to improve the business processes of hospital logistics management. Conclusion: A proper combination of the SPD model and the BI system will improve logistics management in hospitals. The successful implementation of the study requires: (i) innovating and improving the traditional SPD model and making appropriate implementation plans and schedules for the application of the BI system according to the actual situation of each hospital; (ii) the collaborative participation of internal hospital departments, including the information, logistics, nursing, medical and financial departments; (iii) the timely response of external suppliers. PMID:28828316
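As an illustration of the data mining component named above, the sketch below trains a support vector machine with scikit-learn on made-up hospital-supply records; the feature names, values and labels are invented for the example and are unrelated to the study's data, and the firefly-algorithm tuning step is not reproduced.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    # Made-up supply records: [weekly consumption, unit cost, lead time in days]
    X = np.array([[120, 2.5, 1], [15, 40.0, 7], [300, 0.8, 1],
                  [8, 95.0, 14], [210, 1.2, 2], [12, 60.0, 10]])
    y = np.array([0, 1, 0, 1, 0, 1])   # 0 = routine replenishment, 1 = needs manual review

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
    clf.fit(X, y)
    print(clf.predict([[25, 55.0, 9]]))   # classify a new supply item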
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Yubin; Shankar, Mallikarjun; Park, Byung H.
Designing a database system for both efficient data management and data services has been one of the enduring challenges in the healthcare domain. In many healthcare systems, data services and data management are often viewed as two orthogonal tasks; data services refer to retrieval and analytic queries such as search, joins, statistical data extraction, and simple data mining algorithms, while data management refers to building error-tolerant and non-redundant database systems. The gap between service and management has resulted in rigid database systems and schemas that do not support effective analytics. We compose a rich graph structure from an abstracted healthcare RDBMS to illustrate how we can fill this gap in practice. We show how a healthcare graph can be automatically constructed from a normalized relational database using the proposed 3NF Equivalent Graph (3EG) transformation. We discuss a set of real world graph queries such as finding self-referrals, shared providers, and collaborative filtering, and evaluate their performance over a relational database and its 3EG-transformed graph. Experimental results show that the graph representation serves as multiple de-normalized tables, thus reducing complexity in a database and enhancing data accessibility of users. Based on this finding, we propose an ensemble framework of databases for healthcare applications.
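The exact 3EG transformation rules are not given in the abstract; the sketch below only illustrates the general idea of turning foreign-key references in a normalized schema into edges of a graph, using sqlite3 and networkx with hypothetical table and column names, so that a join such as "shared providers" becomes a neighborhood query.

    import sqlite3
    import networkx as nx

    # Hypothetical normalized tables: patient(id), provider(id), visit(id, patient_id, provider_id);
    # "healthcare.db" is an assumed example database, not the system described above.
    conn = sqlite3.connect("healthcare.db")
    G = nx.Graph()

    # Each foreign-key reference in the visit table becomes an edge between typed nodes.
    for visit_id, patient_id, provider_id in conn.execute(
            "SELECT id, patient_id, provider_id FROM visit"):
        G.add_edge(("patient", patient_id), ("provider", provider_id), visit=visit_id)

    def shared_providers(p1, p2):
        # A relational self-join becomes an intersection of graph neighborhoods.
        return set(G.neighbors(("patient", p1))) & set(G.neighbors(("patient", p2)))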
Re-thinking organisms: The impact of databases on model organism biology.
Leonelli, Sabina; Ankeny, Rachel A
2012-03-01
Community databases have become crucial to the collection, ordering and retrieval of data gathered on model organisms, as well as to the ways in which these data are interpreted and used across a range of research contexts. This paper analyses the impact of community databases on research practices in model organism biology by focusing on the history and current use of four community databases: FlyBase, Mouse Genome Informatics, WormBase and The Arabidopsis Information Resource. We discuss the standards used by the curators of these databases for what counts as reliable evidence, acceptable terminology, appropriate experimental set-ups and adequate materials (e.g., specimens). On the one hand, these choices are informed by the collaborative research ethos characterising most model organism communities. On the other hand, the deployment of these standards in databases reinforces this ethos and gives it concrete and precise instantiations by shaping the skills, practices, values and background knowledge required of the database users. We conclude that the increasing reliance on community databases as vehicles to circulate data is having a major impact on how researchers conduct and communicate their research, which affects how they understand the biology of model organisms and its relation to the biology of other species. Copyright © 2011 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Vileikis, O.; Escalante Carrillo, E.; Allayarov, S.; Feyzulayev, A.
2017-08-01
The historic cities of Uzbekistan are an irreplaceable legacy of the Silk Roads. Currently, Uzbekistan has four UNESCO World Heritage properties, with hundreds of historic monuments and traditional historic houses. However, the lack of documentation, systematic monitoring and a digital database of the historic buildings and dwellings within the historic centers is threatening the World Heritage properties and delaying the development of a proper management mechanism for preserving the heritage alongside the interwoven development of the cities. Unlike the monuments, the traditional historic houses are being demolished without any enforced legal protection, leaving no documentation from which to understand the cities' history and urban fabric, as well as their way of life, traditions and customs over the past centuries. To fill this gap, from 2008 to 2015, the Principal Department for Preservation and Utilization of Cultural Objects of the Ministry of Culture and Sports of Uzbekistan, with support from the UNESCO Office in Tashkent and in collaboration with several international and local universities and institutions, carried out a survey of the Historic Centre of Bukhara, Itchan Kala and Samarkand Crossroad of Cultures. The collaborative work over these years has helped to consolidate a methodology and to build a GIS database that is currently contributing to the understanding of the outstanding heritage values of these cities, as well as to the development of preservation and management strategies with a solid base of heritage documentation.
Jain, Anubhav; Persson, Kristin A.; Ceder, Gerbrand
2016-03-24
Materials innovations enable new technological capabilities and drive major societal advancements but have historically required long and costly development cycles. The Materials Genome Initiative (MGI) aims to greatly reduce this time and cost. Here, we focus on data reuse in the MGI and, in particular, discuss the impact of three different computational databases based on density functional theory methods to the research community. Finally, we discuss and provide recommendations on technical aspects of data reuse, outline remaining fundamental challenges, and present an outlook on the future of MGI's vision of data sharing.
Marion, Stéphanie B; Thorley, Craig
2016-11-01
The retrieval strategy disruption hypothesis (Basden, Basden, Bryner, & Thomas, 1997) is the most widely cited theoretical explanation for why the memory performance of collaborative groups is inferior to the pooled performance of individual group members remembering alone (i.e., collaborative inhibition). This theory also predicts that several variables will moderate collaborative inhibition. This meta-analysis tests the veracity of the theory by systematically examining whether or not these variables do moderate the presence and strength of collaborative inhibition. A total of 75 effect sizes from 64 studies were included in the analysis. Collaborative inhibition was found to be a robust effect. Moreover, it was enhanced when remembering took place in larger groups, when uncategorized content items were retrieved, when group members followed free-flowing and free-order procedures, and when group members did not know one another. These findings support the retrieval strategy disruption hypothesis as a general theoretical explanation for the collaborative inhibition effect. Several additional analyses were also conducted to elucidate the potential contributions of other cognitive mechanisms to collaborative inhibition. Some results suggest that a contribution of retrieval inhibition is possible, but we failed to find any evidence to suggest retrieval blocking and encoding specificity impact upon collaborative inhibition effects. In a separate analysis (27 effect sizes), moderating factors of postcollaborative memory performance were examined. Generally, collaborative remembering tends to benefit later individual retrieval. Moderator analyses suggest that reexposure to study material may be partly responsible for this postcollaborative memory enhancement. Some applied implications of the meta-analyses are discussed. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
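Pooling effect sizes across studies, as in this meta-analysis, is commonly done with a random-effects model; the sketch below implements the DerSimonian-Laird estimator in NumPy on made-up effect sizes and variances, not the 75 effects analysed here.

    import numpy as np

    def random_effects_pool(effects, variances):
        # DerSimonian-Laird random-effects pooled effect size and its standard error.
        effects = np.asarray(effects, float)
        variances = np.asarray(variances, float)
        w = 1.0 / variances                                  # fixed-effect weights
        fixed = np.sum(w * effects) / np.sum(w)
        q = np.sum(w * (effects - fixed) ** 2)               # Cochran's Q heterogeneity statistic
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (q - (len(effects) - 1)) / c)        # between-study variance estimate
        w_star = 1.0 / (variances + tau2)                    # random-effects weights
        pooled = np.sum(w_star * effects) / np.sum(w_star)
        return pooled, np.sqrt(1.0 / np.sum(w_star))

    # Illustrative (made-up) effect sizes and sampling variances:
    print(random_effects_pool([0.45, 0.60, 0.30, 0.52], [0.02, 0.05, 0.03, 0.04]))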
Barber, Sarah J; Harris, Celia B; Rajaram, Suparna
2015-03-01
Although a group of people working together remembers more than any one individual, they recall less than their predicted potential. This finding is known as collaborative inhibition and is generally thought to arise due to retrieval disruption. However, there is growing evidence that is inconsistent with the retrieval disruption account, suggesting that additional mechanisms also contribute to collaborative inhibition. In the current studies, we examined 2 alternate mechanisms: retrieval inhibition and retrieval blocking. To identify the contributions of retrieval disruption, retrieval inhibition, and retrieval blocking, we tested how collaborative recall of entirely unshared information influences subsequent individual recall and individual recognition memory. If collaborative inhibition is due solely to retrieval disruption, then there should be a release from the negative effects of collaboration on subsequent individual recall and recognition tests. If it is due to retrieval inhibition, then the negative effects of collaboration should persist on both individual recall and recognition memory tests. Finally, if it is due to retrieval blocking, then the impairment should persist on subsequent individual free recall, but not recognition, tests. Novel to the current study, results suggest that retrieval inhibition plays a role in the collaborative inhibition effect. The negative effects of collaboration persisted on a subsequent, always-individual, free-recall test (Experiment 1) and also on a subsequent, always-individual, recognition test (Experiment 2). However, consistent with the retrieval disruption account, this deficit was attenuated (Experiment 1). Together, these results suggest that, in addition to retrieval disruption, multiple mechanisms play a role in collaborative inhibition. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
Berg, Cynthia A; Smith, Timothy W; Ko, Kelly J; Beveridge, Ryan M; Story, Nathan; Henry, Nancy J M; Florsheim, Paul; Pearce, Gale; Uchino, Bert N; Skinner, Michelle A; Glazer, Kelly
2007-09-01
Collaborative problem solving may be used by older couples to optimize cognitive functioning, with some suggestion that older couples exhibit greater collaborative expertise. The study explored age differences in 2 aspects of collaborative expertise: spouses' knowledge of their own and their spouse's cognitive abilities and the ability to fit task control to these cognitive abilities. The participants were 300 middle-aged and older couples who completed a hypothetical errand task. The interactions were coded for control asserted by husbands and wives. Fluid intelligence was assessed, and spouses rated their own and their spouse's cognitive abilities. The results revealed no age differences in couple expertise, either in the ability to predict their own and their spouse's cognitive abilities or in the ability to fit task control to abilities. However, gender differences were found. Women fit task control to their own and their spouse's cognitive abilities; men only fit task control to their spouse's cognitive abilities. For women only, the fit between control and abilities was associated with better performance. The results indicate no age differences in couple expertise but point to gender as a factor in optimal collaboration. (PsycINFO Database Record (c) 2007 APA, all rights reserved).
Lynx: a database and knowledge extraction engine for integrative medicine
Sulakhe, Dinanath; Balasubramanian, Sandhya; Xie, Bingqing; Feng, Bo; Taylor, Andrew; Wang, Sheng; Berrocal, Eduardo; Dave, Utpal; Xu, Jinbo; Börnigen, Daniela; Gilliam, T. Conrad; Maltsev, Natalia
2014-01-01
We have developed Lynx (http://lynx.ci.uchicago.edu)—a web-based database and a knowledge extraction engine, supporting annotation and analysis of experimental data and generation of weighted hypotheses on molecular mechanisms contributing to human phenotypes and disorders of interest. Its underlying knowledge base (LynxKB) integrates various classes of information from >35 public databases and private collections, as well as manually curated data from our group and collaborators. Lynx provides advanced search capabilities and a variety of algorithms for enrichment analysis and network-based gene prioritization to assist the user in extracting meaningful knowledge from LynxKB and experimental data, whereas its service-oriented architecture provides public access to LynxKB and its analytical tools via user-friendly web services and interfaces. PMID:24270788
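Enrichment analysis of the kind Lynx provides is commonly based on the hypergeometric test; a minimal sketch with scipy follows, using invented gene counts rather than LynxKB data (this is not the Lynx implementation).

    from scipy.stats import hypergeom

    def enrichment_p(total_genes, annotated_in_term, selected_genes, selected_in_term):
        # P(at least `selected_in_term` of the selected genes carry the annotation),
        # i.e. the hypergeometric survival function evaluated at k - 1.
        return hypergeom.sf(selected_in_term - 1, total_genes,
                            annotated_in_term, selected_genes)

    # Illustrative numbers: 20000 genes, 300 annotated to a pathway,
    # 150 genes in the experimental hit list, 12 of them annotated.
    print(enrichment_p(20000, 300, 150, 12))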
Outreach and online training services at the Saccharomyces Genome Database.
MacPherson, Kevin A; Starr, Barry; Wong, Edith D; Dalusag, Kyla S; Hellerstedt, Sage T; Lang, Olivia W; Nash, Robert S; Skrzypek, Marek S; Engel, Stacia R; Cherry, J Michael
2017-01-01
The Saccharomyces Genome Database (SGD; www.yeastgenome.org), the primary genetics and genomics resource for the budding yeast S. cerevisiae, provides free public access to expertly curated information about the yeast genome and its gene products. As the central hub for the yeast research community, SGD engages in a variety of social outreach efforts to inform our users about new developments, promote collaboration, increase public awareness of the importance of yeast to biomedical research, and facilitate scientific discovery. Here we describe these various outreach methods, from networking at scientific conferences to the use of online media such as blog posts and webinars, and include our perspectives on the benefits provided by outreach activities for model organism databases. http://www.yeastgenome.org. © The Author(s) 2017. Published by Oxford University Press.
The Evaluation of Forms of Assessment Using N-Dimensional Filtering
ERIC Educational Resources Information Center
Dron, Jon; Boyne, Chris; Mitchell, Richard
2004-01-01
This paper describes the use of the CoFIND (Collaborative Filter in N Dimensions) system to evaluate two assessment styles. CoFIND is a resource database which organizes itself around its users' needs. Learners enter resources, categorize, then rate them using "qualities," aspects of resources which learners find worthwhile, the n dimensions of…
2014-01-01
unprecedented efficiencies in global business collaboration through communication, information distribution, and fast electronic monetary transactions...tudes (which peaks in free electron density at 300–400 km but extends to just above 1,000 km). At GEO, surface charging occurs intermittently
Children's Concerns about Their Parents' Health and Well-Being: Researching with ChildLine Scotland
ERIC Educational Resources Information Center
Backett-Milburn, Kathryn; Jackson, Sharon
2012-01-01
This paper reports on collaborative research conducted with ChildLine Scotland, a free, confidential, telephone counselling service, using their database. We focussed on children's calls about parental health and well-being and how this affected their own lives. Children's concerns emerged within multi-layered calls in which they discussed…
Diet History Questionnaire: Canadian Version
The Diet History Questionnaire (DHQ) and the DHQ nutrient database were modified for use in Canada through the collaborative efforts of Dr. Amy Subar and staff at the Risk Factor Monitoring and Methods Branch, and Dr. Ilona Csizmadi and colleagues in the Division of Population Health and Information at the Alberta Cancer Board in Canada.
About BTTC | Center for Cancer Research
About Combined Forces Drive BTTC The Brain Tumor Trials Collaborative (BTTC) was created in 2003 - a combined effort of many professionals, entities and organizations to help those suffering from brain tumors. The National Cancer Institute's (NCI) Center for Cancer Research serves as the lead institution and provides the administrative infrastructure, clinical database and
Improving Collaborative School-Agency Transition Planning: A Statewide DBMS Approach.
ERIC Educational Resources Information Center
Peterson, Randolph L.; Roessler, Richard T.
1997-01-01
Describes the development and components of a referral database management system developed by the Arkansas Transition Project. The system enables individualized-education-plan team members to refer students with disabilities directly to adult agencies and to receive a monitoring report describing the agency response to the referral. The system is…
ERIC Educational Resources Information Center
Cedeno, David L.; Jones, Marjorie A.; Friesen, Jon A.; Wirtz, Mark W.; Rios, Luz Amalia; Ocampo, Gonzalo Taborda
2010-01-01
At the Universidad de Caldas, Manizales, Colombia, we used their new computer facilities to introduce chemistry graduate students to biochemical database mining and quantum chemistry calculations using freeware. These hands-on workshops allowed the students a strong introduction to easily accessible software and how to use this software to begin…
The 21st Century Writing Program: Collaboration for the Common Good
ERIC Educational Resources Information Center
Moberg, Eric
2010-01-01
The purpose of this report is to review the literature on theoretical frameworks, best practices, and conceptual models for the 21st century collegiate writing program. Methods include electronic database searches for recent and historical peer-reviewed scholarly literature on collegiate writing programs. The author analyzed over 65 sources from…
Collaborative SDOCT Segmentation and Analysis Software.
Yun, Yeyi; Carass, Aaron; Lang, Andrew; Prince, Jerry L; Antony, Bhavna J
2017-02-01
Spectral domain optical coherence tomography (SDOCT) is routinely used in the management and diagnosis of a variety of ocular diseases. This imaging modality also finds widespread use in research, where quantitative measurements obtained from the images are used to track disease progression. In recent years, the number of available scanners and imaging protocols has grown, and there is a distinct absence of a unified tool that is capable of visualizing, segmenting, and analyzing the data. This is especially noteworthy in longitudinal studies, where data from older scanners and/or protocols may need to be analyzed. Here, we present a graphical user interface (GUI) that allows users to visualize and analyze SDOCT images obtained from two commonly used scanners. The retinal surfaces in the scans can be segmented using a previously described method, and the retinal layer thicknesses can be compared to a normative database. If necessary, the segmented surfaces can also be corrected and the changes applied. The interface also allows users to import and export retinal layer thickness data to an SQL database, thereby allowing for the collation of data from a number of collaborating sites.
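A minimal sketch of the kind of SQL-backed thickness storage and normative comparison described above, using Python's sqlite3 with a hypothetical table layout and invented normative values (not the tool's actual schema or database):

    import sqlite3

    conn = sqlite3.connect("layer_thickness.db")             # assumed example database file
    conn.execute("""CREATE TABLE IF NOT EXISTS layer_thickness
                    (scan_id TEXT, layer TEXT, thickness_um REAL)""")

    def z_score(value, normative_mean, normative_sd):
        # Compare a measured layer thickness to a normative mean and standard deviation.
        return (value - normative_mean) / normative_sd

    # Store a measurement, then flag it if it falls outside +/- 2 SD of the normative range.
    conn.execute("INSERT INTO layer_thickness VALUES (?, ?, ?)", ("scan_001", "RNFL", 81.5))
    conn.commit()
    print(abs(z_score(81.5, 95.0, 9.0)) > 2)                 # illustrative normative values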
Salehi, Ali; Jimenez-Berni, Jose; Deery, David M; Palmer, Doug; Holland, Edward; Rozas-Larraondo, Pablo; Chapman, Scott C; Georgakopoulos, Dimitrios; Furbank, Robert T
2015-01-01
To our knowledge, there is no software or database solution that supports large volumes of biological time series sensor data efficiently and enables data visualization and analysis in real time. Existing solutions for managing data typically use unstructured file systems or relational databases. These systems are not designed to provide instantaneous response to user queries. Furthermore, they do not support rapid data analysis and visualization to enable interactive experiments. In large scale experiments, this behaviour slows research discovery, discourages the widespread sharing and reuse of data that could otherwise inform critical decisions in a timely manner and encourage effective collaboration between groups. In this paper we present SensorDB, a web based virtual laboratory that can manage large volumes of biological time series sensor data while supporting rapid data queries and real-time user interaction. SensorDB is sensor agnostic and uses web-based, state-of-the-art cloud and storage technologies to efficiently gather, analyse and visualize data. Collaboration and data sharing between different agencies and groups is thereby facilitated. SensorDB is available online at http://sensordb.csiro.au.
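A sketch of the kind of time-series aggregation such a system must serve quickly, written with pandas on synthetic sensor readings (this is not SensorDB's own API or storage layer):

    import numpy as np
    import pandas as pd

    # Synthetic canopy-temperature readings at 10-minute intervals over one day.
    idx = pd.date_range("2015-01-01", periods=6 * 24, freq="10min")
    values = 20 + 5 * np.sin(np.linspace(0, 2 * np.pi, len(idx)))   # simple diurnal cycle
    readings = pd.DataFrame({"sensor": "canopy_temp_01", "value": values}, index=idx)

    # Hourly mean/min/max: the kind of rapid aggregation a browser dashboard would plot.
    hourly = readings["value"].resample("1H").agg(["mean", "min", "max"])
    print(hourly.head())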
Yu, Hwan-Jeu; Lai, Hong-Shiee; Chen, Kuo-Hsin; Chou, Hsien-Cheng; Wu, Jin-Ming; Dorjgochoo, Sarangerel; Mendjargal, Adilsaikhan; Altangerel, Erdenebaatar; Tien, Yu-Wen; Hsueh, Chih-Wen; Lai, Feipei
2013-08-01
Pancreaticoduodenectomy (PD) is a major operation with a high complication rate. Thereafter, patients may develop morbidity because of the complex reconstruction and loss of pancreatic parenchyma. A well-designed database is very important to address both the short-term and long-term outcomes after PD. The objective of this research was to build an international PD database implemented with security and clinical rule supporting functions, which makes data-sharing easier and improves the accuracy of the data. The proposed system is a cloud-based application. To fulfill its requirements, the system comprises four subsystems: a data management subsystem, a clinical rule supporting subsystem, a short message notification subsystem, and an information security subsystem. After completing the surgery, the physicians input the data retrospectively, which are analyzed to study factors associated with post-PD common complications (delayed gastric emptying and pancreatic fistula) to validate the clinical value of this system. Currently, this database contains data from nearly 500 subjects. Five medical centers in Taiwan and two cancer centers in Mongolia are participating in this study. A data mining model based on decision tree analysis showed that elderly patients (>76 years) with pylorus-preserving PD (PPPD) have a higher proportion of delayed gastric emptying. Regarding pancreatic fistula, the decision tree analysis revealed that cases with non-pancreaticogastrostomy (PG) reconstruction and body mass index (BMI) > 29.65, or with PG reconstruction, BMI > 23.7 and non-classic PD, have a higher proportion of pancreatic fistula after PD. The proposed system allows medical staff to collect and store clinical data in a cloud, sharing the data with other physicians in a secure manner to achieve collaboration in research. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
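A decision-tree analysis of the kind described above can be sketched with scikit-learn; the records, feature names and labels below are invented for illustration and are not the study's data or thresholds.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Made-up patient records: [age, BMI, pylorus-preserving PD (1/0), PG reconstruction (1/0)]
    X = np.array([[78, 24.1, 1, 0], [65, 30.2, 0, 0], [80, 22.3, 1, 1],
                  [55, 28.4, 0, 1], [77, 25.0, 1, 0], [60, 23.7, 0, 0]])
    y = np.array([1, 1, 0, 0, 1, 0])    # 1 = post-PD complication, e.g. delayed gastric emptying

    tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
    print(export_text(tree, feature_names=["age", "bmi", "pppd", "pg_reconstruction"]))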
2009-01-01
Background Insertional mutagenesis is an effective method for functional genomic studies in various organisms. It can rapidly generate easily tractable mutations. A large-scale insertional mutagenesis with the piggyBac (PB) transposon is currently performed in mice at the Institute of Developmental Biology and Molecular Medicine (IDM), Fudan University in Shanghai, China. This project is carried out via collaborations among multiple groups overseeing interconnected experimental steps and generates a large volume of experimental data continuously. Therefore, the project calls for an efficient database system for recording, management, statistical analysis, and information exchange. Results This paper presents a database application called MP-PBmice (insertional mutation mapping system of PB Mutagenesis Information Center), which was developed to serve the on-going large-scale PB insertional mutagenesis project. A lightweight enterprise-level development framework, Struts-Spring-Hibernate, is used to ensure flexible and maintainable support for the application. The MP-PBmice database system has three major features: strict access control, efficient workflow control, and good expandability. It supports the collaboration among different groups that enter data and exchange information on a daily basis, and is capable of providing real-time progress reports for the whole project. MP-PBmice can be easily adapted for other large-scale insertional mutation mapping projects, and the source code of this software is freely available at http://www.idmshanghai.cn/PBmice. Conclusion MP-PBmice is a web-based application for large-scale insertional mutation mapping onto the mouse genome, implemented with the widely used framework Struts-Spring-Hibernate. This system is already in use by the on-going genome-wide PB insertional mutation mapping project at IDM, Fudan University. PMID:19958505
NASA Astrophysics Data System (ADS)
Roach, Colin; Carlsson, Johan; Cary, John R.; Alexander, David A.
2002-11-01
The National Transport Code Collaboration (NTCC) has developed an array of software, including a data client/server. The data server, which is written in C++, serves local data (in the ITER Profile Database format) as well as remote data (by accessing one or several MDS+ servers). The client, a web-invocable Java applet, provides a uniform, intuitive, user-friendly, graphical interface to the data server. The uniformity of the interface relieves the user from the trouble of mastering the differences between different data formats and lets them focus on the essentials: plotting and viewing the data. The user runs the client by visiting a web page using any Java-capable Web browser. The client is automatically downloaded and run by the browser. A reference to the data server is then retrieved via the standard Web protocol (HTTP). The communication between the client and the server is then handled by the mature, industry-standard CORBA middleware. CORBA has bindings for all common languages and many high-quality implementations are available (both Open Source and commercial). The NTCC data server has been installed at the ITPA International Multi-tokamak Confinement Profile Database, which is hosted by the UKAEA at Culham Science Centre. The installation of the data server is protected by an Internet firewall. To make it accessible to clients outside the firewall, some modifications of the server were required. The working version of the ITPA confinement profile database is not open to the public. Authentication of legitimate users is done utilizing built-in Java security features to demand a password to download the client. We present an overview of the NTCC data client/server and some details of how the CORBA firewall-traversal issues were resolved and how the user authentication is implemented.
Goldacre, Ben; Gray, Jonathan
2016-04-08
OpenTrials is a collaborative and open database for all available structured data and documents on all clinical trials, threaded together by individual trial. With a versatile and expandable data schema, it is initially designed to host and match the following documents and data for each trial: registry entries; links, abstracts, or texts of academic journal papers; portions of regulatory documents describing individual trials; structured data on methods and results extracted by systematic reviewers or other researchers; clinical study reports; and additional documents such as blank consent forms, blank case report forms, and protocols. The intention is to create an open, freely re-usable index of all such information and to increase discoverability, facilitate research, identify inconsistent data, enable audits on the availability and completeness of this information, support advocacy for better data and drive up standards around open data in evidence-based medicine. The project has phase I funding. This will allow us to create a practical data schema and populate the database initially through web-scraping, basic record linkage techniques, crowd-sourced curation around selected drug areas, and import of existing sources of structured data and documents. It will also allow us to create user-friendly web interfaces onto the data and conduct user engagement workshops to optimise the database and interface designs. Where other projects have set out to manually and perfectly curate a narrow range of information on a smaller number of trials, we aim to use a broader range of techniques and attempt to match a very large quantity of information on all trials. We are currently seeking feedback and additional sources of structured data.
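The abstract refers to "basic record linkage techniques" for threading documents to individual trials without specifying them; the toy sketch below illustrates one plausible approach, matching on normalised registry identifiers with a fuzzy-title fallback. The field names, threshold and helper functions are assumptions for illustration only, not the OpenTrials implementation.

```python
import re
from difflib import SequenceMatcher

def norm_id(registry_id):
    """Normalise a trial registry identifier, e.g. 'nct 01234567' -> 'NCT01234567'."""
    return re.sub(r"\s+", "", registry_id).upper()

def same_trial(rec_a, rec_b, title_threshold=0.9):
    """Link two records if registry IDs match, else fall back to a fuzzy title match."""
    if norm_id(rec_a["registry_id"]) == norm_id(rec_b["registry_id"]):
        return True
    ratio = SequenceMatcher(
        None, rec_a["title"].lower(), rec_b["title"].lower()).ratio()
    return ratio >= title_threshold

a = {"registry_id": "nct 01234567", "title": "A trial of drug X in condition Y"}
b = {"registry_id": "NCT01234567", "title": "Trial of Drug X in Condition Y"}
print(same_trial(a, b))  # True, linked via the registry-ID rule
```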
Kennedy, Amy E; Khoury, Muin J; Ioannidis, John P A; Brotzman, Michelle; Miller, Amy; Lane, Crystal; Lai, Gabriel Y; Rogers, Scott D; Harvey, Chinonye; Elena, Joanne W; Seminara, Daniela
2016-10-01
We report on the establishment of a web-based Cancer Epidemiology Descriptive Cohort Database (CEDCD). The CEDCD's goals are to enhance awareness of resources, facilitate interdisciplinary research collaborations, and support existing cohorts for the study of cancer-related outcomes. Comprehensive descriptive data were collected from large cohorts established to study cancer as primary outcome using a newly developed questionnaire. These included an inventory of baseline and follow-up data, biospecimens, genomics, policies, and protocols. Additional descriptive data extracted from publicly available sources were also collected. This information was entered in a searchable and publicly accessible database. We summarized the descriptive data across cohorts and reported the characteristics of this resource. As of December 2015, the CEDCD includes data from 46 cohorts representing more than 6.5 million individuals (29% ethnic/racial minorities). Overall, 78% of the cohorts have collected blood at least once, 57% at multiple time points, and 46% collected tissue samples. Genotyping has been performed by 67% of the cohorts, while 46% have performed whole-genome or exome sequencing in subsets of enrolled individuals. Information on medical conditions other than cancer has been collected in more than 50% of the cohorts. More than 600,000 incident cancer cases and more than 40,000 prevalent cases are reported, with 24 cancer sites represented. The CEDCD assembles detailed descriptive information on a large number of cancer cohorts in a searchable database. Information from the CEDCD may assist the interdisciplinary research community by facilitating identification of well-established population resources and large-scale collaborative and integrative research. Cancer Epidemiol Biomarkers Prev; 25(10); 1392-401. ©2016 AACR. ©2016 American Association for Cancer Research.
Enhancements to the NASA Astrophysics Science Information and Abstract Service
NASA Astrophysics Data System (ADS)
Kurtz, M. J.; Eichhorn, G.; Accomazzi, A.; Grant, C. S.; Murray, S. S.
1995-05-01
The NASA Astrophysics Data System Astrophysics Science Information and Abstract Service, the extension of the ADS Abstract Service, continues to expand rapidly in both use and capabilities. Each month the service is used by about 4,000 different people, and returns about 1,000,000 pieces of bibliographic information. Among the recent additions to the system are: 1. Whole Text Access. In addition to the ApJ Letters we now have whole text for the ApJ on-line; soon we will have AJ and Rev. Mexicana. Discussions with other publishers are in progress. 2. Space Instrumentation Database. We now provide a second abstract service, covering papers related to space instruments. This is larger than the astronomy and astrophysics database in terms of total abstracts. 3. Reference Books and Historical Journals. We have begun putting the SAO Annals and the HCO Annals on-line. We have put the Handbook of Space Astronomy and Astrophysics by M.V. Zombeck (Cambridge U.P.) on-line. 4. Author Abstracts. We can now include original abstracts in addition to those we get from the NASA STI Abstracts Database. We have included abstracts for A&A in collaboration with the CDS in Strasbourg, and are collaborating with the AAS and the ASP on others. We invite publishers and editors of journals and conference proceedings to include their original abstracts in our service; send inquiries via e-mail to ads@cfa.harvard.edu. 5. Author Notes. We now accept notes and comments from authors of articles in our database. These are arbitrary html files and may contain pointers to other WWW documents; they are listed along with the abstracts, whole text, and data available in the index listing for every reference. The ASIAS is available at: http://adswww.harvard.edu/
Pert, Petina L; Ens, Emilie J; Locke, John; Clarke, Philip A; Packer, Joanne M; Turpin, Gerry
2015-11-15
With growing international calls for the enhanced involvement of Indigenous peoples and their biocultural knowledge in managing conservation and the sustainable use of the physical environment, it is timely to review the available literature and develop cross-cultural approaches to the management of biocultural resources. Online spatial databases are becoming common tools for educating land managers about Indigenous Biocultural Knowledge (IBK), specifically to raise a broad awareness of issues, identify knowledge gaps and opportunities, and to promote collaboration. Here we describe a novel approach to the application of internet and spatial analysis tools that provide an overview of publicly available documented Australian IBK (AIBK) and outline the processes used to develop the online resource. By funding an AIBK working group, the Australian Centre for Ecological Analysis and Synthesis (ACEAS) provided a unique opportunity to bring together cross-cultural, cross-disciplinary and trans-organizational contributors who developed these resources. Without such an intentionally collaborative process, this unique tool would not have been developed. The tool developed through this process is derived from a spatial and temporal literature review, case studies and a compilation of methods, as well as other relevant AIBK papers. The online resource illustrates the depth and breadth of documented IBK and identifies opportunities for further work, partnerships and investment for the benefit of not only Indigenous Australians, but all Australians. The database currently includes links to over 1500 publicly available IBK documents, of which 568 are geo-referenced and were mapped. It is anticipated that as awareness of the online resource grows, more documents will be provided through the website to build the database. It is envisaged that this will become a well-used tool, integral to future natural and cultural resource management and maintenance. Copyright © 2015. Published by Elsevier B.V.
Using collaborative tools for energy corridor planning.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuiper, J. A.; Cantwell, B.; Hlohowskyj, I.
2008-01-01
In November 2007, the Draft Programmatic Environmental Impact Statement on Designation of Energy Corridors on Federal Land in the Western 11 States was released. The draft proposes a network of 6055 miles of energy corridors on lands managed by seven different federal agencies. Determining the proposed locations of the corridors was a large collaborative effort among the agencies and included local, state, and federal land managers. To connect this geographically dispersed group of people, the project team employed a variety of approaches to communicate corridor siting issues, including sharing GIS layers and electronic maps, a downloadable GIS database and ArcReader project, workshops, and Internet webcast teleconferences. This collaborative approach allowed difficult siting issues to be understood and discussed from many perspectives, which resulted in rapid and effective decision making. The result was a proposed corridor system that avoids many sensitive resources and protected lands while accommodating expected energy development.
Community collaboration as a disaster mental health competency: a systematic literature review.
Lebowitz, Adam Jon
2015-02-01
Disasters impact the mental health of entire communities through destruction and physical displacement. There is growing recognition of the need for disaster mental health competencies. Professional organizations such as the AAFP and the ASPH recommend engaging with communities in equal partnership for their recovery. This systematic study was undertaken for the purpose of reviewing published disaster medicine competencies to determine whether core competencies included community cooperation and collaboration. A search of Internet databases was conducted using the major keywords "disaster" and "competencies". Eligible articles contained lists of basic core competency curricula beyond emergency response. Data were qualitatively analyzed to identify the types of competencies and the degree of community cooperation. A total of 12 studies were reviewed. Only one study listed competencies specifying community cooperation, although others refer to it indirectly. Findings suggest competency-based education programs could do more to educate future disaster health professionals about the importance of community collaboration.
The Evolution of Your Success Lies at the Centre of Your Co-Authorship Network
Servia-Rodríguez, Sandra; Noulas, Anastasios; Mascolo, Cecilia; Fernández-Vilas, Ana; Díaz-Redondo, Rebeca P.
2015-01-01
Collaboration among scholars and institutions is progressively becoming essential to the success of research grant procurement and to the emergence and evolution of scientific disciplines. Our work focuses on analysing whether the volume of an author's collaborations, together with the relevance of their collaborators, is related to their research performance over time. To test this relation we collected the temporal distributions of scholars’ publications and citations from the Google Scholar platform and the co-authorship network (of Computer Scientists) underlying the well-known DBLP bibliographic database. By applying time series clustering, social network analysis and non-parametric statistics, we observe that scholars with similar publication (citation) patterns also tend to have a similar centrality in the co-authorship network. To our knowledge, this is the first work that considers success evolution with respect to co-authorship. PMID:25760732
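As a rough illustration of the kind of comparison described (co-authorship centrality versus research output, assessed non-parametrically), the snippet below builds a toy co-authorship graph with networkx and correlates betweenness centrality with publication counts. The edge list, publication counts and choice of centrality measure are assumptions for illustration, not the paper's actual data or pipeline.

```python
import networkx as nx
from scipy.stats import spearmanr

# Toy co-authorship edges and per-author publication counts (illustrative only).
edges = [("a", "b"), ("a", "c"), ("a", "d"), ("b", "c"), ("d", "e"), ("e", "f")]
pubs = {"a": 40, "b": 25, "c": 22, "d": 15, "e": 9, "f": 4}

G = nx.Graph(edges)
centrality = nx.betweenness_centrality(G)

# Non-parametric association between network position and output.
authors = sorted(pubs)
rho, p = spearmanr([centrality[x] for x in authors], [pubs[x] for x in authors])
print(f"Spearman rho between centrality and output: {rho:.2f} (p={p:.2f})")
```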
Technology Commercialization Effects on the Conduct of Research in Higher Education
Powers, Joshua B.; Campbell, Eric G.
2012-01-01
The objective of this study was to investigate the effects of technology commercialization on researcher practice and productivity at U.S. universities. Using data drawn from licensing contract documents and databases of university-industry linkages and faculty research output, the study findings suggest that the common practice of licensing technologies exclusively to singular firms may have a dampening effect on faculty inventor propensity to conduct published research and to collaborate with others on research. Furthermore, faculty who are more actively engaged in patenting may be less likely to collaborate with outsiders on research while faculty at public universities may experience particularly strong norms to engage in commercialization vis-à-vis traditional routes to research dissemination. These circumstances appear to be hindering innovation via the traditional mechanisms (research publication and collaboration), questioning the success of policymaking to date for the purpose of speeding the movement of research from the lab bench to society. PMID:22427717
Menditto, Enrica; Bolufer De Gea, Angela; Cahir, Caitriona; Marengoni, Alessandra; Riegler, Salvatore; Fico, Giuseppe; Costa, Elisio; Monaco, Alessandro; Pecorelli, Sergio; Pani, Luca; Prados-Torres, Alexandra
2016-01-01
Computerized health care databases have been widely described as an excellent opportunity for research. The availability of "big data" has brought about a wave of innovation in projects when conducting health services research. Most of the available secondary data sources are restricted to the geographical scope of a given country and present heterogeneous structure and content. Under the umbrella of the European Innovation Partnership on Active and Healthy Ageing, collaborative work conducted by the partners of the group on "adherence to prescription and medical plans" identified the use of observational and large-population databases to monitor medication-taking behavior in the elderly. This article describes the methodology used to gather the information from available databases among the Adherence Action Group partners with the aim of improving data sharing on a European level. A total of six databases belonging to three different European countries (Spain, Republic of Ireland, and Italy) were included in the analysis. Preliminary results suggest that there are some similarities. However, these results should be applied in different contexts and European countries, supporting the idea that large European studies should be designed in order to get the most out of already available databases.
The Russian effort in establishing large atomic and molecular databases
NASA Astrophysics Data System (ADS)
Presnyakov, Leonid P.
1998-07-01
The database activities in Russia have been developed in connection with UV and soft X-ray spectroscopic studies of extraterrestrial and laboratory (magnetically confined and laser-produced) plasmas. Two forms of database production are used: i) a set of computer programs to calculate radiative and collisional data for a general atom or ion, and ii) development of numeric database systems with the data stored in the computer. The first form is preferable for collisional data. At the Lebedev Physical Institute, an appropriate set of codes has been developed. It includes all electronic processes at collision energies from the threshold up to the relativistic limit. The ion-atom (and ion-ion) collisional data are calculated with recently developed methods. The program for calculating level populations and line intensities is used for spectral diagnostics of transparent plasmas. The second form of database production is widely used at the Institute of Physico-Technical Measurements (VNIIFTRI) and the Troitsk Center: the Institute of Spectroscopy and TRINITI. The main results obtained at the centers above are reviewed. Plans for future developments jointly with international collaborations are discussed.
A database of the coseismic effects following the 30 October 2016 Norcia earthquake in Central Italy
Villani, Fabio; Civico, Riccardo; Pucci, Stefano; Pizzimenti, Luca; Nappi, Rosa; De Martini, Paolo Marco; Villani, Fabio; Civico, Riccardo; Pucci, Stefano; Pizzimenti, Luca; Nappi, Rosa; De Martini, Paolo Marco; Agosta, F.; Alessio, G.; Alfonsi, L.; Amanti, M.; Amoroso, S.; Aringoli, D.; Auciello, E.; Azzaro, R.; Baize, S.; Bello, S.; Benedetti, L.; Bertagnini, A.; Binda, G.; Bisson, M.; Blumetti, A.M.; Bonadeo, L.; Boncio, P.; Bornemann, P.; Branca, S.; Braun, T.; Brozzetti, F.; Brunori, C.A.; Burrato, P.; Caciagli, M.; Campobasso, C.; Carafa, M.; Cinti, F.R.; Cirillo, D.; Comerci, V.; Cucci, L.; De Ritis, R.; Deiana, G.; Del Carlo, P.; Del Rio, L.; Delorme, A.; Di Manna, P.; Di Naccio, D.; Falconi, L.; Falcucci, E.; Farabollini, P.; Faure Walker, J.P.; Ferrarini, F.; Ferrario, M.F.; Ferry, M.; Feuillet, N.; Fleury, J.; Fracassi, U.; Frigerio, C.; Galluzzo, F.; Gambillara, R.; Gaudiosi, G.; Goodall, H.; Gori, S.; Gregory, L.C.; Guerrieri, L.; Hailemikael, S.; Hollingsworth, J.; Iezzi, F.; Invernizzi, C.; Jablonská, D.; Jacques, E.; Jomard, H.; Kastelic, V.; Klinger, Y.; Lavecchia, G.; Leclerc, F.; Liberi, F.; Lisi, A.; Livio, F.; Lo Sardo, L.; Malet, J.P.; Mariucci, M.T.; Materazzi, M.; Maubant, L.; Mazzarini, F.; McCaffrey, K.J.W.; Michetti, A.M.; Mildon, Z.K.; Montone, P.; Moro, M.; Nave, R.; Odin, M.; Pace, B.; Paggi, S.; Pagliuca, N.; Pambianchi, G.; Pantosti, D.; Patera, A.; Pérouse, E.; Pezzo, G.; Piccardi, L.; Pierantoni, P.P.; Pignone, M.; Pinzi, S.; Pistolesi, E.; Point, J.; Pousse, L.; Pozzi, A.; Proposito, M.; Puglisi, C.; Puliti, I.; Ricci, T.; Ripamonti, L.; Rizza, M.; Roberts, G.P.; Roncoroni, M.; Sapia, V.; Saroli, M.; Sciarra, A.; Scotti, O.; Skupinski, G.; Smedile, A.; Soquet, A.; Tarabusi, G.; Tarquini, S.; Terrana, S.; Tesson, J.; Tondi, E.; Valentini, A.; Vallone, R.; Van der Woerd, J.; Vannoli, P.; Venuti, A.; Vittori, E.; Volatili, T.; Wedmore, L.N.J.; Wilkinson, M.; Zambrano, M.
2018-01-01
We provide a database of the coseismic geological surface effects following the Mw 6.5 Norcia earthquake that hit central Italy on 30 October 2016. This was one of the strongest seismic events to occur in Europe in the past thirty years, causing complex surface ruptures over an area of >400 km2. The database originated from the collaboration of several European teams (Open EMERGEO Working Group; about 130 researchers) coordinated by the Istituto Nazionale di Geofisica e Vulcanologia. The observations were collected by performing detailed field surveys in the epicentral region in order to describe the geometry and kinematics of surface faulting, and subsequently of landslides and other secondary coseismic effects. The resulting database consists of homogeneous georeferenced records identifying 7323 observation points, each of which contains 18 numeric and string fields of relevant information. This database will impact future earthquake studies focused on modelling of the seismic processes in active extensional settings, updating probabilistic estimates of slip distribution, and assessing the hazard of surface faulting. PMID:29583143
Planetary Data Archiving Plan at JAXA
NASA Astrophysics Data System (ADS)
Shinohara, Iku; Kasaba, Yasumasa; Yamamoto, Yukio; Abe, Masanao; Okada, Tatsuaki; Imamura, Takeshi; Sobue, Shinichi; Takashima, Takeshi; Terazono, Jun-Ya
After the successful rendezvous of Hayabusa with the small asteroid Itokawa, and the successful launch of Kaguya to the Moon, the Japanese planetary community has obtained its own full-scale datasets. However, at this moment, these datasets are only available from the data sites managed by each mission team. The databases are individually constructed in different formats, and the user interfaces of these data sites are not compatible with foreign databases. To improve the usability of the planetary archives at JAXA and to enable smooth international data exchange, we are investigating the creation of a new planetary database. Within the coming decade, Japan will have fruitful datasets in the planetary science field from Venus (Planet-C), Mercury (BepiColombo), and several small-body missions in the planning phase. In order to strongly support international scientific collaboration using these mission archive data, the planned planetary data archive at JAXA should be managed in a unified manner and the database should be constructed in the international planetary database standard style. In this presentation, we will show the current status and future plans of planetary data archiving at JAXA.
NASA Astrophysics Data System (ADS)
Madin, Joshua S.; Anderson, Kristen D.; Andreasen, Magnus Heide; Bridge, Tom C. L.; Cairns, Stephen D.; Connolly, Sean R.; Darling, Emily S.; Diaz, Marcela; Falster, Daniel S.; Franklin, Erik C.; Gates, Ruth D.; Hoogenboom, Mia O.; Huang, Danwei; Keith, Sally A.; Kosnik, Matthew A.; Kuo, Chao-Yang; Lough, Janice M.; Lovelock, Catherine E.; Luiz, Osmar; Martinelli, Julieta; Mizerek, Toni; Pandolfi, John M.; Pochon, Xavier; Pratchett, Morgan S.; Putnam, Hollie M.; Roberts, T. Edward; Stat, Michael; Wallace, Carden C.; Widman, Elizabeth; Baird, Andrew H.
2016-03-01
Trait-based approaches advance ecological and evolutionary research because traits provide a strong link to an organism’s function and fitness. Trait-based research might lead to a deeper understanding of the functions of, and services provided by, ecosystems, thereby improving management, which is vital in the current era of rapid environmental change. Coral reef scientists have long collected trait data for corals; however, these are difficult to access and often under-utilized in addressing large-scale questions. We present the Coral Trait Database initiative that aims to bring together physiological, morphological, ecological, phylogenetic and biogeographic trait information into a single repository. The database houses species- and individual-level data from published field and experimental studies alongside contextual data that provide important framing for analyses. In this data descriptor, we release data for 56 traits for 1547 species, and present a collaborative platform on which other trait data are being actively federated. Our overall goal is for the Coral Trait Database to become an open-source, community-led data clearinghouse that accelerates coral reef research.
Madin, Joshua S.; Anderson, Kristen D.; Andreasen, Magnus Heide; Bridge, Tom C.L.; Cairns, Stephen D.; Connolly, Sean R.; Darling, Emily S.; Diaz, Marcela; Falster, Daniel S.; Franklin, Erik C.; Gates, Ruth D.; Hoogenboom, Mia O.; Huang, Danwei; Keith, Sally A.; Kosnik, Matthew A.; Kuo, Chao-Yang; Lough, Janice M.; Lovelock, Catherine E.; Luiz, Osmar; Martinelli, Julieta; Mizerek, Toni; Pandolfi, John M.; Pochon, Xavier; Pratchett, Morgan S.; Putnam, Hollie M.; Roberts, T. Edward; Stat, Michael; Wallace, Carden C.; Widman, Elizabeth; Baird, Andrew H.
2016-01-01
Trait-based approaches advance ecological and evolutionary research because traits provide a strong link to an organism’s function and fitness. Trait-based research might lead to a deeper understanding of the functions of, and services provided by, ecosystems, thereby improving management, which is vital in the current era of rapid environmental change. Coral reef scientists have long collected trait data for corals; however, these are difficult to access and often under-utilized in addressing large-scale questions. We present the Coral Trait Database initiative that aims to bring together physiological, morphological, ecological, phylogenetic and biogeographic trait information into a single repository. The database houses species- and individual-level data from published field and experimental studies alongside contextual data that provide important framing for analyses. In this data descriptor, we release data for 56 traits for 1547 species, and present a collaborative platform on which other trait data are being actively federated. Our overall goal is for the Coral Trait Database to become an open-source, community-led data clearinghouse that accelerates coral reef research. PMID:27023900
Villani, Fabio; Civico, Riccardo; Pucci, Stefano; Pizzimenti, Luca; Nappi, Rosa; De Martini, Paolo Marco
2018-03-27
We provide a database of the coseismic geological surface effects following the Mw 6.5 Norcia earthquake that hit central Italy on 30 October 2016. This was one of the strongest seismic events to occur in Europe in the past thirty years, causing complex surface ruptures over an area of >400 km2. The database originated from the collaboration of several European teams (Open EMERGEO Working Group; about 130 researchers) coordinated by the Istituto Nazionale di Geofisica e Vulcanologia. The observations were collected by performing detailed field surveys in the epicentral region in order to describe the geometry and kinematics of surface faulting, and subsequently of landslides and other secondary coseismic effects. The resulting database consists of homogeneous georeferenced records identifying 7323 observation points, each of which contains 18 numeric and string fields of relevant information. This database will impact future earthquake studies focused on modelling of the seismic processes in active extensional settings, updating probabilistic estimates of slip distribution, and assessing the hazard of surface faulting.
Madin, Joshua S; Anderson, Kristen D; Andreasen, Magnus Heide; Bridge, Tom C L; Cairns, Stephen D; Connolly, Sean R; Darling, Emily S; Diaz, Marcela; Falster, Daniel S; Franklin, Erik C; Gates, Ruth D; Harmer, Aaron; Hoogenboom, Mia O; Huang, Danwei; Keith, Sally A; Kosnik, Matthew A; Kuo, Chao-Yang; Lough, Janice M; Lovelock, Catherine E; Luiz, Osmar; Martinelli, Julieta; Mizerek, Toni; Pandolfi, John M; Pochon, Xavier; Pratchett, Morgan S; Putnam, Hollie M; Roberts, T Edward; Stat, Michael; Wallace, Carden C; Widman, Elizabeth; Baird, Andrew H
2016-03-29
Trait-based approaches advance ecological and evolutionary research because traits provide a strong link to an organism's function and fitness. Trait-based research might lead to a deeper understanding of the functions of, and services provided by, ecosystems, thereby improving management, which is vital in the current era of rapid environmental change. Coral reef scientists have long collected trait data for corals; however, these are difficult to access and often under-utilized in addressing large-scale questions. We present the Coral Trait Database initiative that aims to bring together physiological, morphological, ecological, phylogenetic and biogeographic trait information into a single repository. The database houses species- and individual-level data from published field and experimental studies alongside contextual data that provide important framing for analyses. In this data descriptor, we release data for 56 traits for 1547 species, and present a collaborative platform on which other trait data are being actively federated. Our overall goal is for the Coral Trait Database to become an open-source, community-led data clearinghouse that accelerates coral reef research.
A database of the coseismic effects following the 30 October 2016 Norcia earthquake in Central Italy
NASA Astrophysics Data System (ADS)
Villani, Fabio; Civico, Riccardo; Pucci, Stefano; Pizzimenti, Luca; Nappi, Rosa; de Martini, Paolo Marco; Villani, Fabio; Civico, Riccardo; Pucci, Stefano; Pizzimenti, Luca; Nappi, Rosa; de Martini, Paolo Marco; Agosta, F.; Alessio, G.; Alfonsi, L.; Amanti, M.; Amoroso, S.; Aringoli, D.; Auciello, E.; Azzaro, R.; Baize, S.; Bello, S.; Benedetti, L.; Bertagnini, A.; Binda, G.; Bisson, M.; Blumetti, A. M.; Bonadeo, L.; Boncio, P.; Bornemann, P.; Branca, S.; Braun, T.; Brozzetti, F.; Brunori, C. A.; Burrato, P.; Caciagli, M.; Campobasso, C.; Carafa, M.; Cinti, F. R.; Cirillo, D.; Comerci, V.; Cucci, L.; de Ritis, R.; Deiana, G.; Del Carlo, P.; Del Rio, L.; Delorme, A.; di Manna, P.; di Naccio, D.; Falconi, L.; Falcucci, E.; Farabollini, P.; Faure Walker, J. P.; Ferrarini, F.; Ferrario, M. F.; Ferry, M.; Feuillet, N.; Fleury, J.; Fracassi, U.; Frigerio, C.; Galluzzo, F.; Gambillara, R.; Gaudiosi, G.; Goodall, H.; Gori, S.; Gregory, L. C.; Guerrieri, L.; Hailemikael, S.; Hollingsworth, J.; Iezzi, F.; Invernizzi, C.; Jablonská, D.; Jacques, E.; Jomard, H.; Kastelic, V.; Klinger, Y.; Lavecchia, G.; Leclerc, F.; Liberi, F.; Lisi, A.; Livio, F.; Lo Sardo, L.; Malet, J. P.; Mariucci, M. T.; Materazzi, M.; Maubant, L.; Mazzarini, F.; McCaffrey, K. J. W.; Michetti, A. M.; Mildon, Z. K.; Montone, P.; Moro, M.; Nave, R.; Odin, M.; Pace, B.; Paggi, S.; Pagliuca, N.; Pambianchi, G.; Pantosti, D.; Patera, A.; Pérouse, E.; Pezzo, G.; Piccardi, L.; Pierantoni, P. P.; Pignone, M.; Pinzi, S.; Pistolesi, E.; Point, J.; Pousse, L.; Pozzi, A.; Proposito, M.; Puglisi, C.; Puliti, I.; Ricci, T.; Ripamonti, L.; Rizza, M.; Roberts, G. P.; Roncoroni, M.; Sapia, V.; Saroli, M.; Sciarra, A.; Scotti, O.; Skupinski, G.; Smedile, A.; Soquet, A.; Tarabusi, G.; Tarquini, S.; Terrana, S.; Tesson, J.; Tondi, E.; Valentini, A.; Vallone, R.; van der Woerd, J.; Vannoli, P.; Venuti, A.; Vittori, E.; Volatili, T.; Wedmore, L. N. J.; Wilkinson, M.; Zambrano, M.
2018-03-01
We provide a database of the coseismic geological surface effects following the Mw 6.5 Norcia earthquake that hit central Italy on 30 October 2016. This was one of the strongest seismic events to occur in Europe in the past thirty years, causing complex surface ruptures over an area of >400 km2. The database originated from the collaboration of several European teams (Open EMERGEO Working Group; about 130 researchers) coordinated by the Istituto Nazionale di Geofisica e Vulcanologia. The observations were collected by performing detailed field surveys in the epicentral region in order to describe the geometry and kinematics of surface faulting, and subsequently of landslides and other secondary coseismic effects. The resulting database consists of homogeneous georeferenced records identifying 7323 observation points, each of which contains 18 numeric and string fields of relevant information. This database will impact future earthquake studies focused on modelling of the seismic processes in active extensional settings, updating probabilistic estimates of slip distribution, and assessing the hazard of surface faulting.
Bichutskiy, Vadim Y.; Colman, Richard; Brachmann, Rainer K.; Lathrop, Richard H.
2006-01-01
Complex problems in life science research give rise to multidisciplinary collaboration, and hence, to the need for heterogeneous database integration. The tumor suppressor p53 is mutated in close to 50% of human cancers, and a small drug-like molecule with the ability to restore native function to cancerous p53 mutants is a long-held medical goal of cancer treatment. The Cancer Research DataBase (CRDB) was designed in support of a project to find such small molecules. As a cancer informatics project, the CRDB involved small molecule data, computational docking results, functional assays, and protein structure data. As an example of the hybrid strategy for data integration, it combined the mediation and data warehousing approaches. This paper uses the CRDB to illustrate the hybrid strategy as a viable approach to heterogeneous data integration in biomedicine, and provides a design method for those considering similar systems. More efficient data sharing implies increased productivity, and, hopefully, improved chances of success in cancer research. (Code and database schemas are freely downloadable, http://www.igb.uci.edu/research/research.html.) PMID:19458771
Lin, Hongli; Wang, Weisheng; Luo, Jiawei; Yang, Xuedong
2014-12-01
The aim of this study was to develop a personalized training system using the Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI) database, because collecting, annotating, and marking a large number of appropriate computed tomography (CT) scans, and providing the capability of dynamically selecting suitable training cases based on the performance levels of trainees and the characteristics of cases, are critical for developing an efficient training system. A novel approach is proposed to develop a personalized radiology training system for the interpretation of lung nodules in CT scans using the LIDC/IDRI database. The system provides a Content-Boosted Collaborative Filtering (CBCF) algorithm for predicting the difficulty level of each case for each trainee, so that suitable cases can be selected to meet individual needs, and a diagnostic simulation tool that enables trainees to analyze and diagnose lung nodules with the help of an image processing tool and a nodule retrieval tool. Preliminary evaluation of the system shows that developing a personalized training system for the interpretation of lung nodules is needed and useful for enhancing the professional skills of trainees. The approach of developing personalized training systems using the LIDC/IDRI database is a feasible solution to the challenges of constructing specific training programs in terms of cost and training efficiency. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.
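The abstract names a Content-Boosted Collaborative Filtering (CBCF) algorithm but gives no formulation; the sketch below shows the generic CBCF idea on made-up numbers: densify the trainee-by-case difficulty matrix with content-based estimates, then apply user-based collaborative filtering to the densified matrix. The matrices, similarity choice and weighting are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Difficulty ratings: trainees x cases, scores 1-5, 0 = case not yet attempted.
R = np.array([[5, 4, 0, 1],
              [4, 0, 3, 1],
              [0, 4, 4, 2]], dtype=float)
# Content-based predictions for every case (e.g. from nodule size/texture features).
C = np.array([[4.5, 4.0, 3.5, 1.5],
              [4.0, 3.5, 3.0, 1.0],
              [4.5, 4.0, 3.5, 2.0]])

# 1) Densify: replace unobserved entries with the content-based estimate.
V = np.where(R > 0, R, C)

# 2) User-based collaborative filtering on the densified matrix.
def predict(u, i):
    sims = np.array([np.corrcoef(V[u], V[v])[0, 1] for v in range(V.shape[0])])
    sims[u] = 0.0  # exclude self-similarity
    return float(np.dot(sims, V[:, i]) / np.abs(sims).sum())

print(round(predict(0, 2), 2))  # predicted difficulty of case 2 for trainee 0
```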
Oral cancer databases: A comprehensive review.
Sarode, Gargi S; Sarode, Sachin C; Maniyar, Nikunj; Anand, Rahul; Patil, Shankargouda
2017-11-29
A cancer database is a systematic collection and analysis of information on various human cancers at the genomic and molecular level that can be utilized to understand various steps in carcinogenesis and for therapeutic advancement in the cancer field. Oral cancer is one of the leading causes of morbidity and mortality all over the world. The current research efforts in this field are aimed at cancer etiology and therapy. Advanced genomic technologies, including microarrays, proteomics, transcriptomics, and developments in gene sequencing, have culminated in the generation of extensive data and the identification of several genes and microRNAs that are differentially expressed, and this information is stored in the form of various databases. Extensive data from various resources have brought the need for collaboration and data sharing to make effective use of this new knowledge. The current review provides comprehensive information on various publicly accessible databases that contain information pertinent to oral squamous cell carcinoma (OSCC) and databases designed exclusively for OSCC. The databases discussed in this paper are Protein-Coding Gene Databases and microRNA Databases. This paper also describes gene overlap in various databases, which will help researchers to reduce redundancy and focus on only those genes that are common to more than one database. We hope such an introduction will promote awareness and facilitate the usage of these resources in the cancer research community, so that researchers can explore the molecular mechanisms involved in the development of cancer, which can help in the subsequent crafting of therapeutic strategies. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
The Protein Information Resource: an integrated public resource of functional annotation of proteins
Wu, Cathy H.; Huang, Hongzhan; Arminski, Leslie; Castro-Alvear, Jorge; Chen, Yongxing; Hu, Zhang-Zhi; Ledley, Robert S.; Lewis, Kali C.; Mewes, Hans-Werner; Orcutt, Bruce C.; Suzek, Baris E.; Tsugita, Akira; Vinayaka, C. R.; Yeh, Lai-Su L.; Zhang, Jian; Barker, Winona C.
2002-01-01
The Protein Information Resource (PIR) serves as an integrated public resource of functional annotation of protein data to support genomic/proteomic research and scientific discovery. The PIR, in collaboration with the Munich Information Center for Protein Sequences (MIPS) and the Japan International Protein Information Database (JIPID), produces the PIR-International Protein Sequence Database (PSD), the major annotated protein sequence database in the public domain, containing about 250 000 proteins. To improve protein annotation and the coverage of experimentally validated data, a bibliography submission system is developed for scientists to submit, categorize and retrieve literature information. Comprehensive protein information is available from iProClass, which includes family classification at the superfamily, domain and motif levels, structural and functional features of proteins, as well as cross-references to over 40 biological databases. To provide timely and comprehensive protein data with source attribution, we have introduced a non-redundant reference protein database, PIR-NREF. The database consists of about 800 000 proteins collected from PIR-PSD, SWISS-PROT, TrEMBL, GenPept, RefSeq and PDB, with composite protein names and literature data. To promote database interoperability, we provide XML data distribution and open database schema, and adopt common ontologies. The PIR web site (http://pir.georgetown.edu/) features data mining and sequence analysis tools for information retrieval and functional identification of proteins based on both sequence and annotation information. The PIR databases and other files are also available by FTP (ftp://nbrfa.georgetown.edu/pir_databases). PMID:11752247
Supporting tactical intelligence using collaborative environments and social networking
NASA Astrophysics Data System (ADS)
Wollocko, Arthur B.; Farry, Michael P.; Stark, Robert F.
2013-05-01
Modern military environments place an increased emphasis on the collection and analysis of intelligence at the tactical level. The deployment of analytical tools at the tactical level helps support the Warfighter's need for rapid collection, analysis, and dissemination of intelligence. However, given the lack of experience and staffing at the tactical level, most of the available intelligence is not exploited. Tactical environments are staffed by a new generation of intelligence analysts who are well-versed in modern collaboration environments and social networking. An opportunity exists to enhance tactical intelligence analysis by exploiting these personnel strengths, but is dependent on appropriately designed information sharing technologies. Existing social information sharing technologies enable users to publish information quickly, but do not unite or organize information in a manner that effectively supports intelligence analysis. In this paper, we present an alternative approach to structuring and supporting tactical intelligence analysis that combines the benefits of existing concepts, and provide detail on a prototype system embodying that approach. Since this approach employs familiar collaboration support concepts from social media, it enables new-generation analysts to identify the decision-relevant data scattered among databases and the mental models of other personnel, increasing the timeliness of collaborative analysis. Also, the approach enables analysts to collaborate visually to associate heterogeneous and uncertain data within the intelligence analysis process, increasing the robustness of collaborative analyses. Utilizing this familiar dynamic collaboration environment, we hope to achieve a significant reduction of time and skill required to glean actionable intelligence in these challenging operational environments.
Collaborative development for setup, execution, sharing and analytics of complex NMR experiments.
Irvine, Alistair G; Slynko, Vadim; Nikolaev, Yaroslav; Senthamarai, Russell R P; Pervushin, Konstantin
2014-02-01
Factory settings of NMR pulse sequences are rarely ideal for every scenario in which they are utilised. The optimisation of NMR experiments has for many years been performed locally, with implementations often specific to an individual spectrometer. Furthermore, these optimised experiments are normally retained solely for the use of an individual laboratory, spectrometer or even single user. Here we introduce a web-based service that provides a database for the deposition, annotation and optimisation of NMR experiments. The application uses a Wiki environment to enable the collaborative development of pulse sequences. It also provides a flexible mechanism to automatically generate NMR experiments from deposited sequences. Multidimensional NMR experiments of proteins and other macromolecules consume significant resources, in terms of both spectrometer time and the effort required to analyse the results. Systematic analysis of simulated experiments can enable optimal allocation of NMR resources for structural analysis of proteins. Our web-based application (http://nmrplus.org) provides all the necessary information, including auxiliaries (waveforms, decoupling sequences, etc.), for the analysis of experiments by accurate numerical simulation of multidimensional NMR experiments. The online database of NMR experiments, together with a systematic evaluation of their sensitivity, provides a framework for selection of the most efficient pulse sequences. The development of such a framework provides a basis for the collaborative optimisation of pulse sequences by the NMR community, with the benefits of this collective effort being available to the whole community. Copyright © 2013 Elsevier Inc. All rights reserved.
Sino-Canadian collaborations in stem cell research: a scientometric analysis.
Ali-Khan, Sarah E; Ray, Monali; McMahon, Dominique S; Thorsteinsdóttir, Halla
2013-01-01
International collaboration (IC) is essential for the advance of stem cell research, a field characterized by marked asymmetries in knowledge and capacity between nations. China is emerging as a global leader in the stem cell field. However, knowledge on the extent and characteristics of IC in stem cell science, particularly China's collaboration with developed economies, is lacking. We provide a scientometric analysis of the China-Canada collaboration in stem cell research, placing this in the context of other leading producers in the field. We analyze stem cell research published from 2006 to 2010 from the Scopus database, using co-authored papers as a proxy for collaboration. We examine IC levels, collaboration preferences, scientific impact, the collaborating institutions in China and Canada, areas of mutual interest, and funding sources. Our analysis shows rapid global expansion of the field with a 48% increase in papers from 2006 to 2010. China now ranks second globally after the United States. China has the lowest IC rate of countries examined, while Canada has one of the highest. China-Canada collaboration is rising steadily, more than doubling during 2006-2010. China-Canada collaboration enhances impact compared to papers authored solely by China-based researchers. This difference remained significant even when comparing only papers published in English. While China is increasingly courted in IC by developed countries as a partner in stem cell research, it is clear that it has reached its status in the field largely through domestic publications. Nevertheless, IC enhances the impact of stem cell research in China, and in the field in general. This study establishes an objective baseline for comparison with future studies, setting the stage for in-depth exploration of the dynamics and genesis of IC in stem cell research.
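Since the analysis uses co-authored papers as a proxy for collaboration, a useful derived quantity is the international collaboration (IC) rate, i.e. the share of a country's papers involving at least one other country. The toy sketch below computes it from per-paper author-country sets; the record structure and country codes are invented stand-ins for a bibliographic export, not the study's actual data.

```python
# Illustrative computation of an international collaboration (IC) rate from
# author-affiliation country lists (toy records standing in for a Scopus export).
papers = [
    {"year": 2010, "countries": {"CN"}},
    {"year": 2010, "countries": {"CN", "CA"}},
    {"year": 2010, "countries": {"CN", "US", "CA"}},
    {"year": 2009, "countries": {"CA"}},
]

def ic_rate(records, country):
    """Share of a country's papers co-authored with at least one other country."""
    own = [p for p in records if country in p["countries"]]
    intl = [p for p in own if len(p["countries"]) > 1]
    return len(intl) / len(own) if own else 0.0

print(ic_rate(papers, "CN"))  # approximately 0.67 for the toy data
```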
Sino-Canadian Collaborations in Stem Cell Research: A Scientometric Analysis
Ali-Khan, Sarah E.; Ray, Monali; McMahon, Dominique S.; Thorsteinsdóttir, Halla
2013-01-01
Background International collaboration (IC) is essential for the advance of stem cell research, a field characterized by marked asymmetries in knowledge and capacity between nations. China is emerging as a global leader in the stem cell field. However, knowledge on the extent and characteristics of IC in stem cell science, particularly China’s collaboration with developed economies, is lacking. Methods and Findings We provide a scientometric analysis of the China–Canada collaboration in stem cell research, placing this in the context of other leading producers in the field. We analyze stem cell research published from 2006 to 2010 from the Scopus database, using co-authored papers as a proxy for collaboration. We examine IC levels, collaboration preferences, scientific impact, the collaborating institutions in China and Canada, areas of mutual interest, and funding sources. Our analysis shows rapid global expansion of the field with a 48% increase in papers from 2006 to 2010. China now ranks second globally after the United States. China has the lowest IC rate of countries examined, while Canada has one of the highest. China–Canada collaboration is rising steadily, more than doubling during 2006–2010. China–Canada collaboration enhances impact compared to papers authored solely by China-based researchers. This difference remained significant even when comparing only papers published in English. Conclusions While China is increasingly courted in IC by developed countries as a partner in stem cell research, it is clear that it has reached its status in the field largely through domestic publications. Nevertheless, IC enhances the impact of stem cell research in China, and in the field in general. This study establishes an objective baseline for comparison with future studies, setting the stage for in-depth exploration of the dynamics and genesis of IC in stem cell research. PMID:23468927
Content-based video indexing and searching with wavelet transformation
NASA Astrophysics Data System (ADS)
Stumpf, Florian; Al-Jawad, Naseer; Du, Hongbo; Jassim, Sabah
2006-05-01
Biometric databases form an essential tool in the fight against international terrorism, organised crime and fraud. Various government and law enforcement agencies have their own biometric databases consisting of combinations of fingerprints, iris codes, face images/videos and speech records for an increasing number of persons. In many cases personal data linked to biometric records are incomplete and/or inaccurate. Besides, biometric data in different databases for the same individual may be recorded with different personal details. Following the recent terrorist atrocities, law enforcement agencies collaborate more than before and have greater reliance on database sharing. In such an environment, reliable biometric-based identification must not only determine who you are but also who else you are. In this paper we propose a compact content-based video signature and indexing scheme that can facilitate retrieval of multiple records in face biometric databases that belong to the same person even if their associated personal data are inconsistent. We shall assess the performance of our system using a benchmark audio-visual face biometric database that has multiple videos for each subject but with different identity claims. We shall demonstrate that retrieval of a relatively small number of videos that are nearest, in terms of the proposed index, to any video in the database results in a significant proportion of that individual's biometric data.
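The abstract proposes a compact wavelet-based video signature without giving its construction; the fragment below sketches one plausible variant, averaging the coarse 2-D DWT approximation coefficients of each frame, using PyWavelets on random placeholder frames. The wavelet, decomposition level and distance measure are assumptions for illustration, not the authors' scheme.

```python
import numpy as np
import pywt

def frame_signature(frame, wavelet="haar", level=3):
    """Compact signature: the coarse approximation coefficients of a 2-D DWT."""
    coeffs = pywt.wavedec2(frame.astype(float), wavelet, level=level)
    return coeffs[0].ravel()  # keep only the low-frequency sub-band

def video_signature(frames):
    """Average the per-frame signatures into one vector per clip."""
    return np.mean([frame_signature(f) for f in frames], axis=0)

def distance(sig_a, sig_b):
    return float(np.linalg.norm(sig_a - sig_b))

# Toy 'videos': stacks of random 64x64 grey-level frames (placeholders for face clips).
rng = np.random.default_rng(1)
clip_a = rng.integers(0, 256, (10, 64, 64))
clip_b = clip_a + rng.integers(-5, 5, clip_a.shape)  # near-duplicate clip
print(distance(video_signature(clip_a), video_signature(clip_b)))
```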
Bigger data, collaborative tools and the future of predictive drug discovery
NASA Astrophysics Data System (ADS)
Ekins, Sean; Clark, Alex M.; Swamidass, S. Joshua; Litterman, Nadia; Williams, Antony J.
2014-10-01
Over the past decade we have seen a growth in the provision of chemistry data and cheminformatics tools as either free websites or software as a service commercial offerings. These have transformed how we find molecule-related data and use such tools in our research. There have also been efforts to improve collaboration between researchers either openly or through secure transactions using commercial tools. A major challenge in the future will be how such databases and software approaches handle larger amounts of data as it accumulates from high throughput screening and enables the user to draw insights, enable predictions and move projects forward. We now discuss how information from some drug discovery datasets can be made more accessible and how privacy of data should not overwhelm the desire to share it at an appropriate time with collaborators. We also discuss additional software tools that could be made available and provide our thoughts on the future of predictive drug discovery in this age of big data. We use some examples from our own research on neglected diseases, collaborations, mobile apps and algorithm development to illustrate these ideas.
Huamaní, Charles; Romaní, Franco; González-Alcaide, Gregorio; Mejia, Miluska O; Ramos, José Manuel; Espinoza, Manuel; Cabezas, César
2014-01-01
Evaluate the production and the research collaborative network on Leishmaniasis in South America. A bibliometric study was carried out using the SCOPUS database. The analysis unit was original research articles published from 2000 to 2011 that dealt with leishmaniasis and included at least one South American author. The following items were obtained for each article: journal name, language, year of publication, number of authors, institutions, countries, and other variables. 3,174 articles were published, of which 2,272 were original articles. 1,160 different institutional signatures, 58 different countries and 398 scientific journals were identified. Brazil was the country with the most articles (60.7%), and the Oswaldo Cruz Foundation (FIOCRUZ) accounted for 18% of Brazilian production, making it the South American nucleus of the major scientific network in Leishmaniasis. South American scientific production on Leishmaniasis published in journals indexed in SCOPUS is focused on Brazilian research activity. It is necessary to strengthen the collaboration networks. The first step is to identify the institutions with the highest production, in order to perform collaborative research according to the priorities of each country.
Collaborating across organizational boundaries to improve the quality of care.
Plsek, P E
1997-04-01
The paradigm of modern quality management is in wide use in health care. Although much of the initial effort in health care has focused on improving service, administrative, and support processes, many organizations are also using these concepts to improve clinical care. The analysis of data on clinical outcomes has undoubtedly led to many local improvements, but such analysis is inevitably limited by three issues: small samples, lack of detailed knowledge of what others are doing, and paradigm paralysis. These issues can be partially overcome when multiple health care organizations work together on focused clinical quality improvement efforts. Through the use of multiorganizational collaborative groups, literature reviews, expert panels, best-practice conferences, multiorganizational databases, and bench-marking groups, organizations can effectively pool data and learn from the many natural experiments constantly underway in the health care community. This article outlines the key concepts behind such collaborative improvement efforts and describes pioneering work in the application of these techniques in health care. A better understanding and wider use of collaborative improvement efforts may lead to dramatic breakthroughs in clinical outcomes in the coming years.
Huamaní, Charles; Romaní, Franco; González-Alcaide, Gregorio; Mejia, Miluska O.; Ramos, José Manuel; Espinoza, Manuel; Cabezas, César
2014-01-01
Objectives: Evaluate the production and the research collaborative network on Leishmaniasis in South America. Methods: A bibliometric study was carried out using the SCOPUS database. The analysis unit was original research articles published from 2000 to 2011 that dealt with leishmaniasis and included at least one South American author. The following items were obtained for each article: journal name, language, year of publication, number of authors, institutions, countries, and other variables. Results: 3,174 articles were published, of which 2,272 were original articles. 1,160 different institutional signatures, 58 different countries and 398 scientific journals were identified. Brazil was the country with the most articles (60.7%), and the Oswaldo Cruz Foundation (FIOCRUZ) accounted for 18% of Brazilian production, making it the South American nucleus of the major scientific network in Leishmaniasis. Conclusions: South American scientific production on Leishmaniasis published in journals indexed in SCOPUS is focused on Brazilian research activity. It is necessary to strengthen the collaboration networks. The first step is to identify the institutions with the highest production, in order to perform collaborative research according to the priorities of each country. PMID:25229217
Impact of mentoring medical students on scholarly productivity.
Svider, Peter F; Husain, Qasim; Mauro, Kevin M; Folbe, Adam J; Baredes, Soly; Eloy, Jean Anderson
2014-02-01
Our objectives were to evaluate collaboration with medical students and other nondoctoral authors, and assess whether mentoring such students influences the academic productivity of senior authors. Six issues of the Laryngoscope and International Forum of Allergy & Rhinology (IFAR) were examined for the corresponding author of each manuscript, and whether any students were involved in authorship. The h-index of all corresponding authors was calculated using the Scopus database to compare the scholarly impact of authors collaborating with students and those collaborating exclusively with other physicians or doctoral-level researchers. Of 261 Laryngoscope manuscripts, 71.6% had exclusively physician or doctoral-level authors, 9.2% had "students" (nondoctoral-level authors) as first authors, and another 19.2% involved "student" authors. Corresponding values for IFAR manuscripts were 57.1%, 6.3%, and 36.5%. Corresponding authors who collaborated with students had higher scholarly impact, as measured by the h-index, than those collaborating exclusively with physicians and doctoral-level scientists in both journals. Collaboration with individuals who do not have doctoral-level degrees, presumably medical students, has a strong association with scholarly impact among researchers publishing in the Laryngoscope and IFAR. Research mentorship of medical students interested in otolaryngology may allow a physician-scientist to evaluate the students' effectiveness and functioning in a team setting, a critical component of success in residency training, and may have beneficial effects on research productivity for the senior author. © 2013 ARS-AAOA, LLC.
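Because the comparison above rests on the h-index computed from Scopus citation counts, a brief reminder of the metric may help; the function below implements the standard Hirsch definition on a plain list of citation counts. It does not query Scopus, and the example numbers are invented.

```python
def h_index(citations):
    """Largest h such that the author has h papers with at least h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# Example: five papers cited 10, 8, 5, 4 and 3 times -> h-index 4.
print(h_index([10, 8, 5, 4, 3]))
```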
Navy MANTECH 2008 Project Book
2008-01-01
work center. The project team kept statistics on the actual operations in that work center and assessed the fidelity of those operations with...base, a powerful analytic and predictive capability, and a database of validated industry best practices. The BMPCOE-developed Collaborative Work ...Today, the EMPF operates as a national electronics manufacturing COE focused on the development, application and
Brohée, Sylvain; Barriot, Roland; Moreau, Yves
2010-09-01
In recent years, the number of knowledge bases developed using Wiki technology has exploded. Unfortunately, alongside their numerous advantages, classical Wikis present a critical limitation: the invaluable knowledge they gather is represented as free text, which hinders its computational exploitation. This is in sharp contrast with the current practice for biological databases, where the data are made available in a structured way. Here, we present WikiOpener, an extension for the classical MediaWiki engine that augments Wiki pages by allowing on-the-fly querying and formatting of resources external to the Wiki. Those resources may provide data extracted from databases or DAS tracks, or even results returned by local or remote bioinformatics analysis tools. This also implies that structured data can be edited via dedicated forms. Hence, this generic resource combines the structure of biological databases with the flexibility of collaborative Wikis. The source code and its documentation are freely available on the MediaWiki website: http://www.mediawiki.org/wiki/Extension:WikiOpener.
GenomeHubs: simple containerized setup of a custom Ensembl database and web server for any species
Kumar, Sujai; Stevens, Lewis; Blaxter, Mark
2017-01-01
Abstract As the generation and use of genomic datasets is becoming increasingly common in all areas of biology, the need for resources to collate, analyse and present data from one or more genome projects is becoming more pressing. The Ensembl platform is a powerful tool to make genome data and cross-species analyses easily accessible through a web interface and a comprehensive application programming interface. Here we introduce GenomeHubs, which provide a containerized environment to facilitate the setup and hosting of custom Ensembl genome browsers. This simplifies mirroring of existing content and import of new genomic data into the Ensembl database schema. GenomeHubs also provide a set of analysis containers to decorate imported genomes with results of standard analyses and functional annotations and support export to flat files, including EMBL format for submission of assemblies and annotations to International Nucleotide Sequence Database Collaboration. Database URL: http://GenomeHubs.org PMID:28605774
NASA Astrophysics Data System (ADS)
Wang, Jian
2017-01-01
To move beyond the traditional PE teaching mode and achieve the interconnection, interworking and sharing of PE teaching resources, a distance PE teaching platform based on a broadband network is designed and a PE teaching information resource database is set up. The database design uses Windows NT 4/2000 Server as the operating system platform and Microsoft SQL Server 7.0 as the RDBMS, with NAS technology for data storage and streaming technology for the video service. Analysis of the system design and implementation shows that the dynamic PE teaching information resource sharing platform based on Web Services supports loosely coupled collaboration and dynamic, active integration, and offers good integration, openness and encapsulation. The Web Service-based distance PE teaching platform and the design scheme of the PE teaching information resource database can effectively realize the interconnection, interworking and sharing of PE teaching resources and meet the demands of informatization in PE teaching.
A Comprehensive Opacities/Atomic Database for the Analysis of Astrophysical Spectra and Modeling
NASA Technical Reports Server (NTRS)
Pradhan, Anil K. (Principal Investigator)
1997-01-01
The main goals of this ADP award have been accomplished. The electronic database TOPBASE, consisting of the large volume of atomic data from the Opacity Project, has been installed and is operational at a NASA site at the Laboratory for High Energy Astrophysics Science Research Center (HEASRC) at the Goddard Space Flight Center. The database will be continually maintained and updated by the PI and collaborators. TOPBASE is publicly accessible from IP: topbase.gsfc.nasa.gov. During the last six months (since the previous progress report), considerable work has been carried out to: (1) incorporate new data for the low ionization stages of iron (Fe I - V), beginning with Fe II; (2) merge high-energy photoionization cross sections computed by Dr. Hong Lin Zhang (consultant on the Project) with the current Opacity Project data and input them into TOPbase; and (3) lay out plans for a further extension of TOPbase to include TIPbase, the database for collisional data that complements the radiative data in TOPbase.
Generation and validation of a universal perinatal database and biospecimen repository: PeriBank.
Antony, K M; Hemarajata, P; Chen, J; Morris, J; Cook, C; Masalas, D; Gedminas, M; Brown, A; Versalovic, J; Aagaard, K
2016-11-01
There is a dearth of biospecimen repositories available to perinatal researchers. In order to address this need, here we describe the methodology used to establish such a resource. With the collaboration of MedSci.net, we generated an online perinatal database with 847 fields of clinical information. Simultaneously, we established a biospecimen repository of the same clinical participants. The demographic and clinical outcomes data are described for the first 10 000 participants enrolled. The demographic characteristics are consistent with the demographics of the delivery hospitals. Quality analysis of the biospecimens reveals variation in very few analytes. Furthermore, since the creation of PeriBank, we have demonstrated validity of the database and tissue integrity of the biospecimen repository. Here we establish that the creation of a universal perinatal database and biospecimen collection is not only possible, but allows for the performance of state-of-the-science translational perinatal research and is a potentially valuable resource to academic perinatal researchers.
Database Design to Ensure Anonymous Study of Medical Errors: A Report from the ASIPS collaborative
Pace, Wilson D.; Staton, Elizabeth W.; Higgins, Gregory S.; Main, Deborah S.; West, David R.; Harris, Daniel M.
2003-01-01
Medical error reporting systems are important information sources for designing strategies to improve the safety of health care. Applied Strategies for Improving Patient Safety (ASIPS) is a multi-institutional, practice-based research project that collects and analyzes data on primary care medical errors and develops interventions to reduce error. The voluntary ASIPS Patient Safety Reporting System captures anonymous and confidential reports of medical errors. Confidential reports, which are quickly de-identified, provide better detail than do anonymous reports; however, concerns exist about the confidentiality of those reports should the database be subject to legal discovery or other security breaches. Standard database elements, for example, serial ID numbers, date/time stamps, and backups, could enable an outsider to link an ASIPS report to a specific medical error. The authors present the design and implementation of a database and administrative system that reduce this risk, facilitate research, and maintain near anonymity of the events, practices, and clinicians. PMID:12925548
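The abstract notes that standard database elements such as serial ID numbers and date/time stamps could let an outsider link a report back to a specific medical error. The sketch below is illustrative only, not the ASIPS implementation: it shows two generic mitigations of that kind (random, non-sequential report identifiers and coarsened timestamps), with all names chosen for the example.

```python
# Illustrative sketch only -- not the ASIPS implementation. It shows two generic
# mitigations the abstract alludes to: random (non-serial) report identifiers and
# coarsened timestamps, so stored records cannot be matched to a specific event
# by insertion order or exact time.
import secrets
from datetime import datetime

def new_report_id() -> str:
    """Return a random, non-sequential identifier instead of a serial ID."""
    return secrets.token_hex(8)

def coarsen_timestamp(ts: datetime) -> str:
    """Keep only year and month, dropping day and time, to blur the event date."""
    return ts.strftime("%Y-%m")

def store_report(narrative: str, event_time: datetime) -> dict:
    # The returned record carries no serial number and no exact date/time stamp.
    return {
        "report_id": new_report_id(),
        "event_month": coarsen_timestamp(event_time),
        "narrative": narrative,
    }

if __name__ == "__main__":
    print(store_report("wrong dose documented", datetime(2003, 4, 17, 14, 30)))
```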
NASA Astrophysics Data System (ADS)
Maracle, B. K.; Schuster, P. F.
2008-12-01
The U.S. Geological Survey (USGS) recently concluded a five-year water quality study (2001-2005) of the Yukon River and its major tributaries. One component of the study was to establish a water quality baseline providing a frame of reference to assess changes in the basin that may result from climate change. As the study neared its conclusion, the USGS began to foster a relationship with the Yukon River Inter-Tribal Watershed Council (YRITWC). The YRITWC was in the process of building a steward-based Yukon River water quality program. Both the USGS and the YRITWC recognized the importance of collaboration resulting in mutual benefits. Through the guidance, expertise, and training provided by the USGS, the YRITWC developed and implemented a basin-wide water quality program. The YRITWC program began in March 2006, utilizing USGS protocols, techniques, and in-kind services. To date, more than 300 samplings and field measurements at more than 25 locations throughout the basin (twice the size of California) have been completed by more than 50 trained volunteers. The Yukon River Basin baseline water quality database has been extended from 5 to 8 years due to the efforts of the YRITWC-USGS collaboration. Basic field measurements include field pH, specific conductance, dissolved oxygen, and water temperature. Samples taken for laboratory analyses include major ions, dissolved organic carbon, greenhouse gases, nutrients, stable isotopes of hydrogen and oxygen, and selected trace elements. Field replicates and blanks were introduced into the program in 2007 for quality assurance. Building toward a long-term dataset is critical to understanding the effects of climate change on river basins. Thus, relaying the importance of long-term water-quality databases is a main focus of the training workshops. Consistencies in data populations between the USGS 5-year database and the YRITWC 3-year database indicate protocols and procedures made a successful transition. This reflects the success of the YRITWC-USGS sponsored water-quality training workshops for water technicians representing more than 18 Tribal Councils and First Nations throughout the Yukon River Basin. The collaborative approach to outreach and education will be described along with discussion of future opportunities using this model.
Informatics and data quality at collaborative multicenter Breast and Colon Cancer Family Registries.
McGarvey, Peter B; Ladwa, Sweta; Oberti, Mauricio; Dragomir, Anca Dana; Hedlund, Erin K; Tanenbaum, David Michael; Suzek, Baris E; Madhavan, Subha
2012-06-01
Quality control and harmonization of data is a vital and challenging undertaking for any successful data coordination center and a responsibility shared between the multiple sites that produce, integrate, and utilize the data. Here we describe a coordinated effort between scientists and data managers in the Cancer Family Registries to implement a data governance infrastructure consisting of both organizational and technical solutions. The technical solution uses a rule-based validation system that facilitates error detection and correction for data centers submitting data to a central informatics database. Validation rules comprise both standard checks on allowable values and a crosscheck of related database elements for logical and scientific consistency. Evaluation over a 2-year timeframe showed a significant decrease in the number of errors in the database and a concurrent increase in data consistency and accuracy.
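The validation system described above combines standard checks on allowable values with crosschecks of related database elements for logical and scientific consistency. The sketch below illustrates that general pattern; the field names and rules are hypothetical, not the registries' actual schema.

```python
# Minimal sketch of a rule-based validation pattern: allowable-value checks plus
# crosschecks between related fields. Field names and rules are hypothetical.

ALLOWED = {
    "sex": {"M", "F", "U"},
    "vital_status": {"alive", "deceased"},
}

def check_allowed_values(record: dict) -> list[str]:
    return [
        f"{field}: '{record[field]}' not in {sorted(values)}"
        for field, values in ALLOWED.items()
        if field in record and record[field] not in values
    ]

def check_crossfield(record: dict) -> list[str]:
    errors = []
    # Logical consistency: a death date requires vital_status == "deceased".
    if record.get("death_date") and record.get("vital_status") != "deceased":
        errors.append("death_date present but vital_status is not 'deceased'")
    # Scientific consistency: age at diagnosis cannot exceed current age
    # (lenient default of 200 if age is missing).
    if record.get("age_at_dx", 0) > record.get("age", 200):
        errors.append("age_at_dx exceeds age")
    return errors

def validate(record: dict) -> list[str]:
    return check_allowed_values(record) + check_crossfield(record)

if __name__ == "__main__":
    rec = {"sex": "X", "vital_status": "alive", "death_date": "2010-01-01",
           "age": 50, "age_at_dx": 62}
    for err in validate(rec):
        print("ERROR:", err)
```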
Informatics and data quality at collaborative multicenter Breast and Colon Cancer Family Registries
McGarvey, Peter B; Ladwa, Sweta; Oberti, Mauricio; Dragomir, Anca Dana; Hedlund, Erin K; Tanenbaum, David Michael; Suzek, Baris E
2012-01-01
Quality control and harmonization of data is a vital and challenging undertaking for any successful data coordination center and a responsibility shared between the multiple sites that produce, integrate, and utilize the data. Here we describe a coordinated effort between scientists and data managers in the Cancer Family Registries to implement a data governance infrastructure consisting of both organizational and technical solutions. The technical solution uses a rule-based validation system that facilitates error detection and correction for data centers submitting data to a central informatics database. Validation rules comprise both standard checks on allowable values and a crosscheck of related database elements for logical and scientific consistency. Evaluation over a 2-year timeframe showed a significant decrease in the number of errors in the database and a concurrent increase in data consistency and accuracy. PMID:22323393
DoSSiER: Database of scientific simulation and experimental results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wenzel, Hans; Yarba, Julia; Genser, Krzystof
The Geant4, GeantV and GENIE collaborations regularly perform validation and regression tests for simulation results. DoSSiER (Database of Scientific Simulation and Experimental Results) is being developed as a central repository to store the simulation results as well as the experimental data used for validation. DoSSiER can be easily accessed via a web application. In addition, a web service allows for programmatic access to the repository to extract records in json or xml exchange formats. In this paper, we describe the functionality and the current status of various components of DoSSiER as well as the technology choices we made.
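The abstract states that DoSSiER records can be retrieved programmatically through a web service in JSON or XML. The sketch below shows that general access pattern in Python; the endpoint URL and query parameters are hypothetical placeholders, not the actual DoSSiER API.

```python
# Generic sketch of programmatic access to a validation-results web service that
# returns JSON, as the abstract describes. The endpoint URL and query parameters
# are hypothetical placeholders, not the actual DoSSiER API.
import json
import urllib.parse
import urllib.request

BASE_URL = "https://example.org/dossier/api/records"  # hypothetical endpoint

def fetch_records(**params) -> list:
    url = BASE_URL + "?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url, timeout=30) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # e.g. request validation records for a given simulation code and observable
    records = fetch_records(code="Geant4", observable="neutron_yield", format="json")
    print(f"retrieved {len(records)} records")
```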
Breast Imaging in the Era of Big Data: Structured Reporting and Data Mining.
Margolies, Laurie R; Pandey, Gaurav; Horowitz, Eliot R; Mendelson, David S
2016-02-01
The purpose of this article is to describe structured reporting and the development of large databases for use in data mining in breast imaging. The results of millions of breast imaging examinations are reported with structured tools based on the BI-RADS lexicon. Much of these data are stored in accessible media. Robust computing power creates great opportunity for data scientists and breast imagers to collaborate to improve breast cancer detection and optimize screening algorithms. Data mining can create knowledge, but the questions asked and their complexity require extremely powerful and agile databases. New data technologies can facilitate outcomes research and precision medicine.
DoSSiER: Database of scientific simulation and experimental results
Wenzel, Hans; Yarba, Julia; Genser, Krzystof; ...
2016-08-01
The Geant4, GeantV and GENIE collaborations regularly perform validation and regression tests for simulation results. DoSSiER (Database of Scientific Simulation and Experimental Results) is being developed as a central repository to store the simulation results as well as the experimental data used for validation. DoSSiER can be easily accessed via a web application. In addition, a web service allows for programmatic access to the repository to extract records in json or xml exchange formats. In this paper, we describe the functionality and the current status of various components of DoSSiER as well as the technology choices we made.
A collaborative computer auditing system under SOA-based conceptual model
NASA Astrophysics Data System (ADS)
Cong, Qiushi; Huang, Zuoming; Hu, Jibing
2013-03-01
Some of the current challenges of computer auditing are the obstacles to retrieving, converting and translating data from different database schemas. During the last few years, many data exchange standards have been under continuous development, such as the Extensible Business Reporting Language (XBRL). These XML document standards can be used for data exchange among companies, financial institutions, and audit firms. However, for many companies it is still expensive and time-consuming to translate and provide XML messages with commercial application packages, because it is complicated and laborious to search and transform data from thousands of tables in ERP databases. How to transfer transaction documents between audit firms and their client companies to support continuous auditing or real-time auditing is an important topic. In this paper, a collaborative computer auditing system under an SOA-based conceptual model is proposed. By utilizing the widely used XML document standards and existing data transformation applications developed by different companies and software vendors, we can wrap these applications as commercial web services that can be easily implemented in the forthcoming application environment: service-oriented architecture (SOA). In the SOA environment, the multiagency mechanism will help data assurance services over the Internet mature and gain popularity. Wrapping data transformation components for heterogeneous databases or platforms will create new component markets composed of many software vendors and assurance service companies that provide data assurance services for audit firms, regulators or third parties.
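To make the "wrap a transformation as a web service" idea concrete, the sketch below converts a few ledger rows into a simple XML document and serves it over HTTP using only the Python standard library. It is a minimal illustration under stated assumptions: the element names are invented for the example and are not real XBRL taxonomy tags, and a production deployment would sit behind a proper SOA/ESB layer.

```python
# Minimal sketch of wrapping a data transformation as a web service.
# Converts sample ledger rows into a simple XML document and serves it over
# HTTP. Element names are illustrative only, not an XBRL taxonomy.
import xml.etree.ElementTree as ET
from http.server import BaseHTTPRequestHandler, HTTPServer

LEDGER_ROWS = [  # stand-in for data pulled from ERP tables
    {"account": "4000", "name": "Sales revenue", "amount": "125000.00"},
    {"account": "5000", "name": "Cost of goods sold", "amount": "80000.00"},
]

def rows_to_xml(rows) -> bytes:
    root = ET.Element("ledgerExtract")
    for row in rows:
        entry = ET.SubElement(root, "entry", account=row["account"])
        ET.SubElement(entry, "name").text = row["name"]
        ET.SubElement(entry, "amount").text = row["amount"]
    return ET.tostring(root, encoding="utf-8", xml_declaration=True)

class ExtractHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = rows_to_xml(LEDGER_ROWS)
        self.send_response(200)
        self.send_header("Content-Type", "application/xml")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # An audit client could then fetch http://localhost:8080/ to retrieve the extract.
    HTTPServer(("localhost", 8080), ExtractHandler).serve_forever()
```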
Grid-enabled measures: using Science 2.0 to standardize measures and share data.
Moser, Richard P; Hesse, Bradford W; Shaikh, Abdul R; Courtney, Paul; Morgan, Glen; Augustson, Erik; Kobrin, Sarah; Levin, Kerry Y; Helba, Cynthia; Garner, David; Dunn, Marsha; Coa, Kisha
2011-05-01
Scientists are taking advantage of the Internet and collaborative web technology to accelerate discovery in a massively connected, participative environment, a phenomenon referred to by some as Science 2.0. As a new way of doing science, this phenomenon has the potential to push science forward in a more efficient manner than was previously possible. The Grid-Enabled Measures (GEM) database has been conceptualized as an instantiation of Science 2.0 principles by the National Cancer Institute (NCI) with two overarching goals: (1) promote the use of standardized measures, which are tied to theoretically based constructs; and (2) facilitate the ability to share harmonized data resulting from the use of standardized measures. The first is accomplished by creating an online venue where a virtual community of researchers can collaborate and come to consensus on measures by rating, commenting on, and viewing meta-data about the measures and associated constructs. The second is accomplished by connecting the constructs and measures to an ontological framework with data standards and common data elements such as the NCI Enterprise Vocabulary System (EVS) and the cancer Data Standards Repository (caDSR). This paper will describe the Web 2.0 principles on which the GEM database is based, describe its functionality, and discuss some of the important issues involved with creating the GEM database, such as the role of mutually agreed-on ontologies (i.e., knowledge categories and the relationships among these categories) for data sharing. Published by Elsevier Inc.
Towards Semantic e-Science for Traditional Chinese Medicine
Chen, Huajun; Mao, Yuxin; Zheng, Xiaoqing; Cui, Meng; Feng, Yi; Deng, Shuiguang; Yin, Aining; Zhou, Chunying; Tang, Jinming; Jiang, Xiaohong; Wu, Zhaohui
2007-01-01
Background: Recent advances in Web and information technologies, together with the increasing decentralization of organizational structures, have resulted in massive amounts of information resources and domain-specific services in Traditional Chinese Medicine (TCM). The massive volume and diversity of information and services available have made it difficult to achieve seamless and interoperable e-Science for knowledge-intensive disciplines like TCM. Therefore, information integration and service coordination are two major challenges in e-Science for TCM. We still lack sophisticated approaches to integrate scientific data and services for TCM e-Science. Results: We present a comprehensive approach to building dynamic and extendable e-Science applications for knowledge-intensive disciplines like TCM based on semantic and knowledge-based techniques. The semantic e-Science infrastructure for TCM supports large-scale database integration and service coordination in a virtual organization. We use domain ontologies to integrate TCM database resources and services in a semantic cyberspace and deliver a semantically superior experience, including browsing, searching, querying and knowledge discovery, to users. We have developed a collection of semantic-based toolkits to facilitate TCM scientists and researchers in information sharing and collaborative research. Conclusion: Semantic and knowledge-based techniques are well suited to knowledge-intensive disciplines like TCM. It is possible to build an on-demand e-Science system for TCM based on existing semantic and knowledge-based techniques. The approach presented in this paper integrates heterogeneous distributed TCM databases and services, and provides scientists with a semantically superior experience to support collaborative research in the TCM discipline. PMID:17493289
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blom, Philip Stephen; Marcillo, Omar Eduardo; Euler, Garrett Gene
InfraPy is a Python-based analysis toolkit being developed at LANL. The algorithms are intended for ground-based nuclear detonation detection applications to detect, locate, and characterize explosive sources using infrasonic observations. The implementation is usable as a stand-alone Python library or as a command-line-driven tool operating directly on a database. With multiple scientists working on the project, we have begun using a LANL git repository for collaborative development and version control. Current and planned work on InfraPy focuses on the development of new algorithms and propagation models. Collaboration with Southern Methodist University (SMU) has helped identify bugs and limitations of the algorithms. Current usage development focuses on library imports and the CLI.
Robinson, Joel E.; Eakins, Barry W.; Kanamatsu, Toshiya; Naka, Jiro; Takahashi, Eiichi; Satake, Kenji; Smith, John R.; Clague, David A.; Yokose, Hisayoshi
2006-01-01
This database release, USGS Data Series 171, contains data collected during four Japan-USA collaborative cruises that characterize the seafloor around the Hawaiian Islands. The Japan Agency for Marine-Earth Science and Technology (JAMSTEC) sponsored cruises in 1998, 1999, 2001, and 2002, to build a greater understanding of the deep marine geology around the Hawaiian Islands. During these cruises, scientists surveyed over 600,000 square kilometers of the seafloor with a hull-mounted multibeam seafloor-mapping sonar system (SEA BEAM® 2112), observed the seafloor and collected samples using robotic and manned submersible dives, collected dredge and piston-core samples, and performed single-channel seismic surveys.
Ferdynus, C; Huiart, L
2016-09-01
Administrative health databases such as the French National Health Insurance Database (SNIIRAM) are a major tool to answer numerous public health research questions. However, the use of such data requires complex and time-consuming data management. Our objective was to develop and make available a tool to optimize cohort constitution within administrative health databases. We developed a process to extract, transform and load (ETL) data from various heterogeneous sources into a standardized data warehouse. This data warehouse is architected as a star schema corresponding to an i2b2 star schema model. We then evaluated the performance of this ETL using data from a pharmacoepidemiology research project conducted in the SNIIRAM database. The ETL we developed comprises a set of functionalities for creating SAS scripts. Data can be integrated into a standardized data warehouse. As part of the performance assessment of this ETL, we achieved integration of a dataset from the SNIIRAM comprising more than 900 million lines in less than three hours using a desktop computer. This enables patient selection from the standardized data warehouse within seconds of the request. The ETL described in this paper provides a tool which is effective and compatible with all administrative health databases, without requiring complex database servers. This tool should simplify cohort constitution in health databases; the standardization of warehouse data facilitates collaborative work between research teams. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
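The abstract describes loading claims data into an i2b2-style star schema so that cohorts can then be selected in seconds. The sketch below illustrates the general shape of such a load and query with sqlite3; the table and column names are loose assumptions modeled on the public i2b2 convention, not the authors' SAS-based implementation or the SNIIRAM schema.

```python
# Sketch of loading claims rows into a simplified i2b2-style star schema and
# selecting a cohort from it. Table/column names are assumptions modeled on the
# public i2b2 schema; the real tool generates SAS scripts for the SNIIRAM.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE patient_dimension (patient_num INTEGER PRIMARY KEY, birth_year INT, sex TEXT);
CREATE TABLE observation_fact  (patient_num INT, concept_cd TEXT, start_date TEXT);
CREATE INDEX idx_fact_concept ON observation_fact(concept_cd);
""")

# Extract/transform step: source claims mapped to coded observations.
patients = [(1, 1950, "F"), (2, 1962, "M")]
claims = [(1, "ATC:C10AA05", "2015-03-02"),   # statin dispensing
          (2, "ICD10:I21",   "2015-06-11")]   # myocardial infarction diagnosis

conn.executemany("INSERT INTO patient_dimension VALUES (?, ?, ?)", patients)
conn.executemany("INSERT INTO observation_fact VALUES (?, ?, ?)", claims)

# Cohort selection: patients with at least one statin dispensing.
cohort = conn.execute("""
    SELECT DISTINCT p.patient_num
    FROM patient_dimension p
    JOIN observation_fact f ON f.patient_num = p.patient_num
    WHERE f.concept_cd LIKE 'ATC:C10AA%'
""").fetchall()
print("cohort:", [row[0] for row in cohort])
```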
Choi, Hae-Yoon; Kensinger, Elizabeth A; Rajaram, Suparna
2017-09-01
Social transmission of memory and its consequence on collective memory have generated enduring interdisciplinary interest because of their widespread significance in interpersonal, sociocultural, and political arenas. We tested the influence of 3 key factors (emotional salience of information, group structure, and information distribution) on mnemonic transmission, social contagion, and collective memory. Participants individually studied emotionally salient (negative or positive) and nonemotional (neutral) picture-word pairs that were completely shared, partially shared, or unshared within participant triads, and then completed 3 consecutive recalls in 1 of 3 conditions: individual-individual-individual (control), collaborative-collaborative (identical group; insular structure)-individual, and collaborative-collaborative (reconfigured group; diverse structure)-individual. Collaboration enhanced negative memories especially in insular group structure and especially for shared information, and promoted collective forgetting of positive memories. Diverse group structure reduced this negativity effect. Unequally distributed information led to social contagion that creates false memories; diverse structure propagated a greater variety of false memories whereas insular structure promoted confidence in false recognition and false collective memory. A simultaneous assessment of network structure, information distribution, and emotional valence breaks new ground to specify how network structure shapes the spread of negative memories and false memories, and the emergence of collective memory. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Sweileh, Waleed M; Zyoud, Sa'ed H; Al-Jabi, Samah W; Sawalha, Ansam F
2015-01-01
The objective of this study was to analyze quantity, assess quality, and investigate international collaboration in research from Arab countries in the field of public, environmental and occupational health. Original scientific articles and reviews published from the 22 Arab countries in the category "public, environmental & occupational health" during the study period (1900 - 2012) were screened using the ISI Web of Science database. The total number of original and review research articles published in the category of "public, environmental & occupational health" from Arab countries was 4673. The main area of research was tropical medicine (1862 documents; 39.85%). Egypt, with 1200 documents (25.86%), ranked first in both quantity and quality of publications (h-index = 51). The study identified 2036 (43.57%) documents with international collaboration. Arab countries actively collaborated with authors in Western Europe (22.91%) and North America (21.04%). Most of the documents (79.9%) were published in public health-related journals, while 21% of the documents were published in journals pertaining to preventive medicine, environmental and occupational health, and epidemiology. Research in public, environmental and occupational health in Arab countries is on the rise. Public health research was dominant while environmental and occupational health research was relatively low. International collaboration was a good tool for increasing research quantity and quality.
Chiang, Rachelle Johnsson; Meagher, Whitney; Slade, Sean
2015-01-01
BACKGROUND The Whole School, Whole Community, Whole Child (WSCC) model calls for greater collaboration across the community, school, and health sectors to meet the needs and support the full potential of each child. This article reports on how 3 states and 2 local school districts have implemented aspects of the WSCC model through collaboration, leadership and policy creation, alignment, and implementation. METHODS We searched state health and education department websites, local school district websites, state legislative databases, and sources of peer-reviewed and gray literature to identify materials demonstrating adoption and implementation of coordinated school health, the WSCC model, and associated policies and practices in identified states and districts. We conducted informal interviews in each state and district to reinforce the document review. RESULTS States and local school districts have been able to strategically increase collaboration, integration, and alignment of health and education through the adoption and implementation of policy and practice supporting the WSCC model. Successful utilization of the WSCC model has led to substantial positive changes in school health environments, policies, and practices. CONCLUSIONS Collaboration among health and education sectors to integrate and align services may lead to improved efficiencies and better health and education outcomes for students. PMID:26440819
Brawer, Peter A; Martielli, Richard; Pye, Patrice L; Manwaring, Jamie; Tierney, Anna
2010-06-01
The primary care health setting is in crisis. Increasing demand for services, with dwindling numbers of providers, has resulted in decreased access and decreased satisfaction for both patients and providers. Moreover, the overwhelming majority of primary care visits are for behavioral and mental health concerns rather than issues of a purely medical etiology. Integrated-collaborative models of health care delivery offer possible solutions to this crisis. The purpose of this article is to review the existing data available after 2 years of the St. Louis Initiative for Integrated Care Excellence; an example of integrated-collaborative care on a large scale model within a regional Veterans Affairs Health Care System. There is clear evidence that the SLI(2)CE initiative rather dramatically increased access to health care, and modified primary care practitioners' willingness to address mental health issues within the primary care setting. In addition, data suggests strong fidelity to a model of integrated-collaborative care which has been successful in the past. Integrated-collaborative care offers unique advantages to the traditional view and practice of medical care. Through careful implementation and practice, success is possible on a large scale model. PsycINFO Database Record (c) 2010 APA, all rights reserved.
NASA Astrophysics Data System (ADS)
Ferré, Hélène; Belmahfoud, Nizar; Boichard, Jean-Luc; Brissebrat, Guillaume; Cloché, Sophie; Descloitres, Jacques; Fleury, Laurence; Focsa, Loredana; Henriot, Nicolas; Mière, Arnaud; Ramage, Karim; Vermeulen, Anne; Boulanger, Damien
2015-04-01
The Chemistry-Aerosol Mediterranean Experiment (ChArMEx, http://charmex.lsce.ipsl.fr/) aims at a scientific assessment of the present and future state of the atmospheric environment in the Mediterranean Basin, and of its impacts on the regional climate, air quality, and marine biogeochemistry. The project includes long term monitoring of environmental parameters, intensive field campaigns, use of satellite data and modelling studies. Therefore ChArMEx scientists produce and need to access a wide diversity of data. In this context, the objective of the database task is to organize data management, distribution system and services, such as facilitating the exchange of information and stimulating the collaboration between researchers within the ChArMEx community, and beyond. The database relies on a strong collaboration between ICARE, IPSL and OMP data centers and has been set up in the framework of the Mediterranean Integrated Studies at Regional And Locals Scales (MISTRALS) program data portal. ChArMEx data, either produced or used by the project, are documented and accessible through the database website: http://mistrals.sedoo.fr/ChArMEx. The website offers the usual user-friendly functionalities: a data catalog, a user registration procedure, a search tool to select and access data... The metadata (data description) are standardized, and comply with international standards (ISO 19115-19139; INSPIRE European Directive; Global Change Master Directory Thesaurus). A Digital Object Identifier (DOI) assignment procedure allows the datasets to be registered automatically, in order to make them easier to access, cite, reuse and verify. At present, the ChArMEx database contains about 120 datasets, including more than 80 in situ datasets (2012, 2013 and 2014 summer campaigns, background monitoring station of Ersa...), 25 model output sets (dust model intercomparison, MEDCORDEX scenarios...), a high resolution emission inventory over the Mediterranean... Many in situ datasets have been inserted in a relational database, in order to enable more accurate selection and download of different datasets in a shared format. Many dedicated satellite products (SEVIRI, TRIMM, PARASOL...) are processed and will soon be accessible through the database website. In order to meet the operational needs of the airborne and ground-based observational teams during the ChArMEx campaigns, a day-to-day chart display website has been developed and operated: http://choc.sedoo.org. It offers a convenient way to browse weather conditions and chemical composition during the campaign periods. Every scientist is invited to visit the ChArMEx websites, to register and to request data. Feel free to contact charmex-database@sedoo.fr for any questions.
NASA Astrophysics Data System (ADS)
Boichard, Jean-Luc; Brissebrat, Guillaume; Cloche, Sophie; Eymard, Laurence; Fleury, Laurence; Mastrorillo, Laurence; Moulaye, Oumarou; Ramage, Karim
2010-05-01
The AMMA project includes aircraft, ground-based and ocean measurements, an intensive use of satellite data and diverse modelling studies. Therefore, the AMMA database aims at storing a great amount and a large variety of data, and at providing the data as rapidly and safely as possible to the AMMA research community. In order to stimulate the exchange of information and collaboration between researchers from different disciplines or using different tools, the database provides a detailed description of the products and uses standardized formats. The AMMA database contains: - AMMA field campaign datasets; - historical data in West Africa from 1850 (operational networks and previous scientific programs); - satellite products from past and future satellites, (re-)mapped on a regular latitude/longitude grid and stored in NetCDF format (CF Convention); - model outputs from atmosphere or ocean operational (re-)analysis and forecasts, and from research simulations. The outputs are processed as the satellite products are. Before accessing the data, any user has to sign the AMMA data and publication policy. This charter only covers the use of data in the framework of scientific objectives and categorically excludes the redistribution of data to third parties and the usage for commercial applications. Some collaboration between data producers and users, and the mention of the AMMA project in any publication, are also required. The AMMA database and the associated on-line tools have been fully developed and are managed by two teams in France (IPSL Database Centre, Paris and OMP, Toulouse). Users can access data from both data centres using a single web portal. This website is composed of different modules: - Registration: forms to register, and to read and sign the data use charter when a user visits for the first time. - Data access interface: a user-friendly tool for building a data extraction request by selecting various criteria such as location, time and parameters... The request can concern local, satellite and model data. - Documentation: a catalogue of all the available data and their metadata. These tools have been developed using standard and free languages and software: - a Linux system with an Apache web server and a Tomcat application server; - J2EE tools: JSF and Struts frameworks, Hibernate; - relational database management systems: PostgreSQL and MySQL; - an OpenLDAP directory. In order to facilitate access to the data by African scientists, the complete system has been mirrored at the AGRHYMET Regional Centre in Niamey and has been operational there since January 2009. Users can now access metadata and request data through either of two equivalent portals: http://database.amma-international.org or http://amma.agrhymet.ne/amma-data.
NASA Astrophysics Data System (ADS)
Ferré, Hélène; Descloitres, Jacques; Fleury, Laurence; Boichard, Jean-Luc; Brissebrat, Guillaume; Focsa, Loredana; Henriot, Nicolas; Mastrorillo, Laurence; Mière, Arnaud; Vermeulen, Anne
2013-04-01
The Chemistry-Aerosol Mediterranean Experiment (ChArMEx, http://charmex.lsce.ipsl.fr/) aims at a scientific assessment of the present and future state of the atmospheric environment in the Mediterranean Basin, and of its impacts on the regional climate, air quality, and marine biogeochemistry. The project includes long term monitoring of environmental parameters, intensive field campaigns, use of satellite data and modelling studies. Therefore ChArMEx scientists produce and need to access a wide diversity of data. In this context, the objective of the database task is to organize data management, distribution system and services, such as facilitating the exchange of information and stimulating the collaboration between researchers within the ChArMEx community, and beyond. The database relies on a strong collaboration between OMP and ICARE data centres and falls within the scope of the Mediterranean Integrated Studies at Regional And Locals Scales (MISTRALS) program data portal. All the data produced by or of interest for the ChArMEx community will be documented in the data catalogue and accessible through the database website: http://mistrals.sedoo.fr/ChArMEx. The database website offers different tools: - A registration procedure which enables any scientist to accept the data policy and apply for a user database account. - Forms to document observations or products that will be provided to the database in compliance with international metadata standards (ISO 19115-19139; INSPIRE; Global Change Master Directory Thesaurus). - A search tool to browse the catalogue using thematic, geographic and/or temporal criteria. - Sorted lists of the datasets by thematic keywords, by measured parameters, by instruments or by platform type. - A shopping-cart web interface to order in situ data files. At present, datasets from the background monitoring station of Ersa, Cape Corsica, and from the 2012 ChArMEx pre-campaign are available. - User-friendly access to satellite products (SEVIRI, TRIMM, PARASOL...) stored in the ICARE data archive using the OPeNDAP protocol. The website will soon offer new facilities. In particular, many in situ datasets will be homogenized and inserted in a relational database, in order to enable more accurate data selection and download of different datasets in a shared format. In order to meet the operational needs of the airborne and ground-based observational teams during the ChArMEx 2012 pre-campaign and the 2013 experiment, a day-to-day quick-look and report display website has also been developed: http://choc.sedoo.org. It offers a convenient way to browse weather conditions and chemical composition during the campaign periods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Semkova, Valentina; Otuka, Naohiko; Mikhailiukova, Marina
Members of the International Network of Nuclear Reaction Data Centres (NRDC) have collaborated since the 1960s on the worldwide collection, compilation and dissemination of experimental nuclear reaction data. New publications are systematically compiled, and all agreed data are assembled and incorporated within the EXFOR database. Here, recent upgrades to achieve greater completeness of the contents are described, along with reviews and adjustments of the compilation rules for specific types of data.
The Forum, 1998-2002. Research Forum on Children, Families, and the New Federalism.
ERIC Educational Resources Information Center
Oshinsky, Carole J., Ed.
2002-01-01
This document contains 16 issues of the first 5 years of a newsletter encouraging collaborative research and informed policy on welfare reform and focusing on the use of an on-line database of child welfare research projects, as well as research and policy issues related to implementation studies, indicators of well-being, and administrative data.…
Caram, L B; Linefsky, J P; Read, K M; Murdoch, D R; Lalani, T; Woods, C W; Reller, L B; Kanj, S S; Premru, M M; Ryan, S; Al-Hegelan, M; Donnio, P Y; Orezzi, C; Paiva, M G; Tribouilloy, C; Watkin, R; Harris, O; Eisen, D P; Corey, G R; Cabell, C H; Petti, C A
2008-02-01
Leptotrichia species typically colonize the oral cavity and genitourinary tract. We report the first two cases of endocarditis secondary to L. goodfellowii sp. nov. Both cases were identified using 16S rRNA gene sequencing. Review of the English literature revealed only two other cases of Leptotrichia sp. endocarditis.
National Software Reference Library (NSRL)
National Institute of Standards and Technology Data Gateway
National Software Reference Library (NSRL) (PC database for purchase) A collaboration of the National Institute of Standards and Technology (NIST), the National Institute of Justice (NIJ), the Federal Bureau of Investigation (FBI), the Defense Computer Forensics Laboratory (DCFL), the U.S. Customs Service, software vendors, and state and local law enforcement organizations, the NSRL is a tool to assist in fighting crime involving computers.
ERIC Educational Resources Information Center
Coffee, Gina; Newell, Markeda L.; Kennedy, Adam S.
2014-01-01
The purpose of this article is to provide an explanation of how effective reading interventions are identified. Through a review of the National Reading Panel's general findings, along with a review of systems currently used to evaluate and disseminate specific reading interventions, a discussion of what works in reading is presented. The…
A Literature Review of Randomized Controlled Trials of the Organization of Care at the End of Life
ERIC Educational Resources Information Center
Thomas, Roger E.; Wilson, Donna; Sheps, Sam
2006-01-01
We searched nine electronic databases for randomized controlled trials (RCTs) about care at the end of life and found 23 RCTs. We assessed their quality using the criteria of the Cochrane Collaboration. The RCTs researched three themes: (a) the effect of providing palliative care through dedicated community teams on quality of life, on the…
Daniel, Gregory W; Cazé, Alexis; Romine, Morgan H; Audibert, Céline; Leff, Jonathan S; McClellan, Mark B
2015-02-01
New drugs and biologics have had a tremendous impact on the treatment of many diseases. However, available measures suggest that pharmaceutical innovation has remained relatively flat, despite substantial growth in research and development spending. We review recent literature on pharmaceutical innovation to identify limitations in measuring and assessing innovation, and we describe the framework and collaborative approach we are using to develop more comprehensive, publicly available metrics for innovation. Our research teams at the Brookings Institution and Deerfield Institute are collaborating with experts from multiple areas of drug development and regulatory review to identify and collect comprehensive data elements related to key development and regulatory characteristics for each new molecular entity approved over the past several decades in the United States and the European Union. Subsequent phases of our effort will add data on downstream product use and patient outcomes and will also include drugs that have failed or been abandoned in development. Such a database will enable researchers to better analyze the drivers of drug innovation, trends in the output of new medicines, and the effect of policy efforts designed to improve innovation. Project HOPE—The People-to-People Health Foundation, Inc.
TheHiveDB image data management and analysis framework.
Muehlboeck, J-Sebastian; Westman, Eric; Simmons, Andrew
2014-01-06
The hive database system (theHiveDB) is a web-based brain imaging database, collaboration, and activity system which has been designed as an imaging workflow management system capable of handling cross-sectional and longitudinal multi-center studies. It can be used to organize and integrate existing data from heterogeneous projects as well as data from ongoing studies. It has been conceived to guide and assist the researcher throughout the entire research process, integrating all relevant types of data across modalities (e.g., brain imaging, clinical, and genetic data). TheHiveDB is a modern activity and resource management system capable of scheduling image processing on both private compute resources and the cloud. The activity component supports common image archival and management tasks as well as established pipeline processing (e.g., Freesurfer for extraction of scalar measures from magnetic resonance images). Furthermore, via theHiveDB activity system algorithm developers may grant access to virtual machines hosting versioned releases of their tools to collaborators and the imaging community. The application of theHiveDB is illustrated with a brief use case based on organizing, processing, and analyzing data from the publically available Alzheimer Disease Neuroimaging Initiative.
A collaborative platform for consensus sessions in pathology over Internet.
Zapletal, Eric; Le Bozec, Christel; Degoulet, Patrice; Jaulent, Marie-Christine
2003-01-01
The design of valid databases in pathology faces the problem of diagnostic disagreement between pathologists. Organizing consensus sessions between experts to reduce this variability is a difficult task. The TRIDEM platform addresses the issue of organizing consensus sessions in pathology over the Internet. In this paper, we present the basis for achieving such a collaborative platform. On the one hand, the platform integrates the functionalities of the IDEM consensus module, which alleviates the consensus task by presenting a preliminary computed consensus to pathologists through ergonomic interfaces (automatic step). On the other hand, a set of lightweight interaction tools, such as vocal annotations, is implemented to ease communication between experts as they discuss a case (interactive step). The architecture of the TRIDEM platform is based on a JavaServer Pages web server that communicates with the ObjectStore PSE/PRO database used for object storage. The HTML pages generated by the web server run Java applets to perform the different steps (automatic and interactive) of the consensus. The current limitation of the platform is that it only handles a synchronous process. Moreover, improvements such as re-writing the consensus workflow with a protocol such as BPML are already planned.
Harnessing Nutrigenomics: Development of web-based communication, databases, resources, and tools.
Kaput, Jim; Astley, Siân; Renkema, Marten; Ordovas, Jose; van Ommen, Ben
2006-03-01
Nutrient-gene interactions are responsible for maintaining health and preventing or delaying disease. Unbalanced diets for a given genotype lead to chronic diseases such as obesity, diabetes, and cardiovascular disease, and are likely to contribute to increased severity and/or early onset of many age-related diseases. Many nutrition and genetics studies still fail to properly include both variables in the design, execution, and analyses of human, laboratory animal, or cell culture experiments. The complexity of nutrient-gene interactions has led to the realization that strategic international alliances are needed to improve the completeness of nutrigenomic studies, a task beyond the capabilities of a single laboratory team. Eighty-eight researchers from 22 countries recently outlined the issues and challenges for harnessing nutritional genomics for public and personal health. The next step in the process of forming productive international alliances is the development of a virtual center for organizing collaborations and communications that fosters resource sharing, best-practice improvements, and the creation of databases. We describe here plans and initial efforts toward creating the Nutrigenomics Information Portal, a web-based resource for the international nutrigenomics society. This portal aims to become the prime source of information and interaction for nutrigenomics scientists through a collaborative effort.
ChemBank: a small-molecule screening and cheminformatics resource database.
Seiler, Kathleen Petri; George, Gregory A; Happ, Mary Pat; Bodycombe, Nicole E; Carrinski, Hyman A; Norton, Stephanie; Brudz, Steve; Sullivan, John P; Muhlich, Jeremy; Serrano, Martin; Ferraiolo, Paul; Tolliday, Nicola J; Schreiber, Stuart L; Clemons, Paul A
2008-01-01
ChemBank (http://chembank.broad.harvard.edu/) is a public, web-based informatics environment developed through a collaboration between the Chemical Biology Program and Platform at the Broad Institute of Harvard and MIT. This knowledge environment includes freely available data derived from small molecules and small-molecule screens and resources for studying these data. ChemBank is unique among small-molecule databases in its dedication to the storage of raw screening data, its rigorous definition of screening experiments in terms of statistical hypothesis testing, and its metadata-based organization of screening experiments into projects involving collections of related assays. ChemBank stores an increasingly varied set of measurements derived from cells and other biological assay systems treated with small molecules. Analysis tools are available and are continuously being developed that allow the relationships between small molecules, cell measurements, and cell states to be studied. Currently, ChemBank stores information on hundreds of thousands of small molecules and hundreds of biomedically relevant assays that have been performed at the Broad Institute by collaborators from the worldwide research community. The goal of ChemBank is to provide life scientists unfettered access to biomedically relevant data and tools heretofore available primarily in the private sector.
TheHiveDB image data management and analysis framework
Muehlboeck, J-Sebastian; Westman, Eric; Simmons, Andrew
2014-01-01
The hive database system (theHiveDB) is a web-based brain imaging database, collaboration, and activity system which has been designed as an imaging workflow management system capable of handling cross-sectional and longitudinal multi-center studies. It can be used to organize and integrate existing data from heterogeneous projects as well as data from ongoing studies. It has been conceived to guide and assist the researcher throughout the entire research process, integrating all relevant types of data across modalities (e.g., brain imaging, clinical, and genetic data). TheHiveDB is a modern activity and resource management system capable of scheduling image processing on both private compute resources and the cloud. The activity component supports common image archival and management tasks as well as established pipeline processing (e.g., Freesurfer for extraction of scalar measures from magnetic resonance images). Furthermore, via theHiveDB activity system algorithm developers may grant access to virtual machines hosting versioned releases of their tools to collaborators and the imaging community. The application of theHiveDB is illustrated with a brief use case based on organizing, processing, and analyzing data from the publically available Alzheimer Disease Neuroimaging Initiative. PMID:24432000
Pereira, R; Alves, C; Aler, M; Amorim, A; Arévalo, C; Betancor, E; Braganholi, D; Bravo, M L; Brito, P; Builes, J J; Burgos, G; Carvalho, E F; Castillo, A; Catanesi, C I; Cicarelli, R M B; Coufalova, P; Dario, P; D'Amato, M E; Davison, S; Ferragut, J; Fondevila, M; Furfuro, S; García, O; Gaviria, A; Gomes, I; González, E; Gonzalez-Liñan, A; Gross, T E; Hernández, A; Huang, Q; Jiménez, S; Jobim, L F; López-Parra, A M; Marino, M; Marques, S; Martínez-Cortés, G; Masciovecchio, V; Parra, D; Penacino, G; Pinheiro, M F; Porto, M J; Posada, Y; Restrepo, C; Ribeiro, T; Rubio, L; Sala, A; Santurtún, A; Solís, L S; Souto, L; Streitemberger, E; Torres, A; Vilela-Lamego, C; Yunis, J J; Yurrebaso, I; Gusmão, L
2018-01-01
A collaborative effort was carried out by the Spanish and Portuguese Speaking Working Group of the International Society for Forensic Genetics (GHEP-ISFG) to promote knowledge exchange between associate laboratories interested in the implementation of indel-based methodologies and build allele frequency databases of 38 indels for forensic applications. These databases include populations from different countries that are relevant for identification and kinship investigations undertaken by the participating laboratories. Before compiling population data, participants were asked to type the 38 indels in blind samples from annual GHEP-ISFG proficiency tests, using an amplification protocol previously described. Only laboratories that reported correct results contributed population data to this study. A total of 5839 samples were genotyped from 45 different populations from Africa, America, East Asia, Europe and the Middle East. Population differentiation analysis showed significant differences between most populations studied from Africa and America, as well as between two Asian populations from China and East Timor. Low FST values were detected among most European populations. Overall diversities and parameters of forensic efficiency were high in populations from all continents. Copyright © 2017 Elsevier B.V. All rights reserved.
Ko, Seung Hyun; Han, Kyungdo; Lee, Yong Ho; Noh, Junghyun; Park, Cheol Young; Kim, Dae Jung; Jung, Chang Hee; Lee, Ki Up; Ko, Kyung Soo
2018-04-01
Korea's National Healthcare Program, the National Health Insurance Service (NHIS), a government-affiliated agency under the Korean Ministry of Health and Welfare, covers the entire Korean population. The NHIS supervises all medical services in Korea and maintains a systematic National Health Information database (DB). A health information DB system including all of the claims, medications, death information, and health check-ups, both in the general population and in patients with various diseases, is not common worldwide. On June 9, 2014, the NHIS signed a memorandum of understanding with the Korean Diabetes Association (KDA) to provide limited open access to its DB. By October 31, 2017, seven papers had been published through this collaborative research project. These studies were conducted to investigate the past and current status of type 2 diabetes mellitus and its complications and management in Korea. This review is a brief summary of the collaborative projects between the KDA and the NHIS over the last 3 years. Depending on the analysis, either the national health check-up DB or the claims DB was used, and the age categories and study periods differed between studies. Copyright © 2018 Korean Diabetes Association.
NASA Astrophysics Data System (ADS)
Ferré, Helene; Belmahfoud, Nizar; Boichard, Jean-Luc; Brissebrat, Guillaume; Descloitres, Jacques; Fleury, Laurence; Focsa, Loredana; Henriot, Nicolas; Mastrorillo, Laurence; Mière, Arnaud; Vermeulen, Anne
2014-05-01
The Chemistry-Aerosol Mediterranean Experiment (ChArMEx, http://charmex.lsce.ipsl.fr/) aims at a scientific assessment of the present and future state of the atmospheric environment in the Mediterranean Basin, and of its impacts on the regional climate, air quality, and marine biogeochemistry. The project includes long term monitoring of environmental parameters, intensive field campaigns, use of satellite data and modelling studies. Therefore ChArMEx scientists produce and need to access a wide diversity of data. In this context, the objective of the database task is to organize data management, distribution system and services, such as facilitating the exchange of information and stimulating the collaboration between researchers within the ChArMEx community, and beyond. The database relies on a strong collaboration between OMP and ICARE data centres and has been set up in the framework of the Mediterranean Integrated Studies at Regional And Locals Scales (MISTRALS) program data portal. All the data produced by or of interest for the ChArMEx community will be documented in the data catalogue and accessible through the database website: http://mistrals.sedoo.fr/ChArMEx. At present, the ChArMEx database contains about 75 datasets, including 50 in situ datasets (2012 and 2013 campaigns, Ersa background monitoring station), 25 model outputs (dust model intercomparison, MEDCORDEX scenarios), and a high resolution emission inventory over the Mediterranean. Many in situ datasets have been inserted in a relational database, in order to enable more accurate data selection and download of different datasets in a shared format. The database website offers different tools: - A registration procedure which enables any scientist to accept the data policy and apply for a user database account. - A data catalogue that complies with international metadata standards (ISO 19115-19139; INSPIRE European Directive; Global Change Master Directory Thesaurus). - Metadata forms to document observations or products that will be provided to the database. - A search tool to browse the catalogue using thematic, geographic and/or temporal criteria. - A shopping-cart web interface to order in situ data files. - A web interface to select and access homogenized datasets. Interoperability between the two data centres is being set up using the OPeNDAP protocol. The data portal will soon offer user-friendly access to satellite products managed by the ICARE data centre (SEVIRI, TRIMM, PARASOL...). In order to meet the operational needs of the airborne and ground-based observational teams during the ChArMEx 2012 and 2013 campaigns, a day-to-day chart and report display website has also been developed: http://choc.sedoo.org. It offers a convenient way to browse weather conditions and chemical composition during the campaign periods.
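Since the portal is said to serve homogenized datasets and to interoperate via OPeNDAP, the sketch below shows the generic pattern of reading one variable from an OPeNDAP endpoint with the netCDF4 library. The URL and variable name are placeholders for illustration, not actual ChArMEx dataset paths.

```python
# Generic sketch of reading one variable from an OPeNDAP endpoint with the
# netCDF4 library. The URL and variable name are placeholders, not actual
# ChArMEx dataset paths.
from netCDF4 import Dataset  # pip install netCDF4 (built with OPeNDAP support)

OPENDAP_URL = "https://example.org/thredds/dodsC/charmex/aerosol_demo.nc"  # hypothetical

with Dataset(OPENDAP_URL) as ds:            # opens the remote dataset lazily
    print(list(ds.variables.keys()))        # list available variables
    aod = ds.variables["aod_550nm"][:]      # only this subset is fetched over the network
    print("mean AOD:", float(aod.mean()))
```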
Surviving the Glut: The Management of Event Streams in Cyberphysical Systems
NASA Astrophysics Data System (ADS)
Buchmann, Alejandro
Alejandro Buchmann is Professor in the Department of Computer Science, Technische Universität Darmstadt, where he heads the Databases and Distributed Systems Group. He received his MS (1977) and PhD (1980) from the University of Texas at Austin. He was an Assistant/Associate Professor at the Institute for Applied Mathematics and Systems IIMAS/UNAM in Mexico, doing research on databases for CAD, geographic information systems, and object-oriented databases. At Computer Corporation of America (later Xerox Advanced Information Systems) in Cambridge, Mass., he worked in the areas of active databases and real-time databases, and at GTE Laboratories, Waltham, in the areas of distributed object systems and the integration of heterogeneous legacy systems. In 1991 he returned to academia and joined T.U. Darmstadt. His current research interests are at the intersection of middleware, databases, event-based distributed systems, ubiquitous computing, and very large distributed systems (P2P, WSN). Much of the current research is concerned with guaranteeing quality of service and reliability properties in these systems, for example, scalability, performance, transactional behaviour, consistency, and end-to-end security. Many research projects involve collaboration with industry and cover a broad spectrum of application domains. Further information can be found at http://www.dvs.tu-darmstadt.de
Kobayashi, Norio; Ishii, Manabu; Takahashi, Satoshi; Mochizuki, Yoshiki; Matsushima, Akihiro; Toyoda, Tetsuro
2011-07-01
As global cloud frameworks for bioinformatics research databases become huge and heterogeneous, solutions face conflicting challenges of cross-integration, retrieval, security and openness. To address this, as of March 2011 organizations including RIKEN had published 192 mammalian, plant and protein life sciences databases containing 8.2 million data records, integrated as Linked Open or Private Data (LOD/LPD) using SciNetS.org, the Scientists' Networking System. This database integration framework covers a huge quantity of linked data based on the Semantic Web, where researchers collaborate by managing metadata across public and private databases in a secured data space. This outstripped the data query capacity of existing interface tools such as SPARQL. Actual research also requires specialized tools for data analysis using raw original data. To solve these challenges, in December 2009 we developed the lightweight Semantic-JSON interface to access each fragment of linked and raw life sciences data securely from programming languages popular among bioinformaticians, such as Perl and Ruby. Researchers successfully used the interface across 28 million semantic relationships for biological applications including genome design, sequence processing, inference over phenotype databases, full-text search indexing and human-readable contents such as ontology and LOD tree viewers. Semantic-JSON services of SciNetS.org are provided at http://semanticjson.org.
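The abstract describes Semantic-JSON as a lightweight HTTP/JSON interface callable from ordinary scripting languages. The following sketch only illustrates that style of access; the endpoint path, query parameters and response shape are assumptions, not the documented SciNetS.org API.

```python
# Illustrative sketch of the kind of lightweight JSON-over-HTTP access the
# Semantic-JSON interface is described as providing. The endpoint path and
# query parameters are hypothetical, not the documented SciNetS.org API.
import requests

BASE = "http://semanticjson.org/api"           # base host is named in the abstract; path is assumed
params = {"subject": "riken:gene/Example001",  # hypothetical resource identifier
          "predicate": "hasPhenotype",
          "format": "json"}

resp = requests.get(f"{BASE}/links", params=params, timeout=30)
resp.raise_for_status()
for record in resp.json():                     # assumed: a JSON list of linked-data records
    print(record)
```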
Menditto, Enrica; Bolufer De Gea, Angela; Cahir, Caitriona; Marengoni, Alessandra; Riegler, Salvatore; Fico, Giuseppe; Costa, Elisio; Monaco, Alessandro; Pecorelli, Sergio; Pani, Luca; Prados-Torres, Alexandra
2016-01-01
Computerized health care databases have been widely described as an excellent opportunity for research. The availability of “big data” has brought about a wave of innovation in projects when conducting health services research. Most of the available secondary data sources are restricted to the geographical scope of a given country and present heterogeneous structure and content. Under the umbrella of the European Innovation Partnership on Active and Healthy Ageing, collaborative work conducted by the partners of the group on “adherence to prescription and medical plans” identified the use of observational and large-population databases to monitor medication-taking behavior in the elderly. This article describes the methodology used to gather the information from available databases among the Adherence Action Group partners with the aim of improving data sharing on a European level. A total of six databases belonging to three different European countries (Spain, Republic of Ireland, and Italy) were included in the analysis. Preliminary results suggest that there are some similarities. However, these results should be applied in different contexts and European countries, supporting the idea that large European studies should be designed in order to get the most of already available databases. PMID:27358570
Translation from the collaborative OSM database to cartography
NASA Astrophysics Data System (ADS)
Hayat, Flora
2018-05-01
The OpenStreetMap (OSM) database includes original items very useful for geographical analysis and for creating thematic maps. Contributors record in the open database various themes regarding amenities, leisure, transport, buildings and boundaries. The Michelin mapping department develops map prototypes to test the feasibility of mapping based on OSM. A research project is under development to translate the OSM database structure into a database structure fitted to Michelin graphic guidelines; it aims at defining the right structure for Michelin's uses. The research project relies on the analysis of semantic and geometric heterogeneities in OSM data. To that end, Michelin implements methods to transform the input geographical database into a cartographic image dedicated to specific uses (routing and tourist maps). The paper focuses on the mapping tools available to produce a personalised spatial database. Based on the processed data, paper and Web maps can be displayed. Two prototypes are described in this article: a vector tile web map and a mapping method to produce paper maps on a regional scale. The vector tile mapping method offers easy navigation within the map and within the graphic and thematic guidelines. Paper maps can be partly drawn automatically. The drawing automation and data management are part of the map creation, as well as the final hand-drawing phase. Both prototypes have been set up using the OSM technical ecosystem.
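At the core of such a translation is a mapping from OSM key/value tags to the target cartographic schema. The toy sketch below illustrates the idea with an invented mapping table and style identifiers; it does not reproduce Michelin's actual graphic guidelines or tooling.

```python
# Toy illustration of translating OSM tags into a target cartographic schema.
# The mapping table and style identifiers are invented for illustration; they do
# not reproduce Michelin's actual graphic guidelines.
OSM_TO_STYLE = {
    ("highway", "motorway"): {"layer": "roads", "style": "major_road", "min_zoom": 5},
    ("highway", "residential"): {"layer": "roads", "style": "minor_road", "min_zoom": 13},
    ("tourism", "viewpoint"): {"layer": "tourism", "style": "poi_view", "min_zoom": 12},
}

def translate(osm_tags: dict) -> list[dict]:
    """Return the target-schema records matched by a feature's OSM tags."""
    return [rule for (key, value), rule in OSM_TO_STYLE.items()
            if osm_tags.get(key) == value]

print(translate({"highway": "motorway", "ref": "A7"}))
```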
Hymenoptera Genome Database: integrating genome annotations in HymenopteraMine
Elsik, Christine G.; Tayal, Aditi; Diesh, Colin M.; Unni, Deepak R.; Emery, Marianne L.; Nguyen, Hung N.; Hagen, Darren E.
2016-01-01
We report an update of the Hymenoptera Genome Database (HGD) (http://HymenopteraGenome.org), a model organism database for insect species of the order Hymenoptera (ants, bees and wasps). HGD maintains genomic data for 9 bee species, 10 ant species and 1 wasp, including the versions of genome and annotation data sets published by the genome sequencing consortiums and those provided by NCBI. A new data-mining warehouse, HymenopteraMine, based on the InterMine data warehousing system, integrates the genome data with data from external sources and facilitates cross-species analyses based on orthology. New genome browsers and annotation tools based on JBrowse/WebApollo provide easy genome navigation, and viewing of high throughput sequence data sets and can be used for collaborative genome annotation. All of the genomes and annotation data sets are combined into a single BLAST server that allows users to select and combine sequence data sets to search. PMID:26578564
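Because HymenopteraMine is built on InterMine, it should be queryable with the standard intermine Python client, as sketched below; the service URL and data-model paths are assumptions to verify against the site's API documentation.

```python
# Sketch of querying an InterMine-based warehouse such as HymenopteraMine with
# the standard `intermine` Python client (pip install intermine). The service URL
# and the exact data-model paths are assumptions to be checked against the site.
from intermine.webservice import Service

service = Service("http://hymenopteramine.org/hymenopteramine/service")  # assumed URL
query = service.new_query("Gene")
query.add_view("Gene.primaryIdentifier", "Gene.symbol", "Gene.organism.name")
query.add_constraint("Gene.organism.name", "=", "Apis mellifera")

for row in query.rows(size=10):
    print(row["Gene.primaryIdentifier"], row["Gene.symbol"])
```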
A decade of Web Server updates at the Bioinformatics Links Directory: 2003-2012.
Brazas, Michelle D; Yim, David; Yeung, Winston; Ouellette, B F Francis
2012-07-01
The 2012 Bioinformatics Links Directory update marks the 10th special Web Server issue from Nucleic Acids Research. Beginning with content from their 2003 publication, the Bioinformatics Links Directory in collaboration with Nucleic Acids Research has compiled and published a comprehensive list of freely accessible, online tools, databases and resource materials for the bioinformatics and life science research communities. The past decade has exhibited significant growth and change in the types of tools, databases and resources being put forth, reflecting both technology changes and the nature of research over that time. With the addition of 90 web server tools and 12 updates from the July 2012 Web Server issue of Nucleic Acids Research, the Bioinformatics Links Directory at http://bioinformatics.ca/links_directory/ now contains an impressive 134 resources, 455 databases and 1205 web server tools, mirroring the continued activity and efforts of our field.
Yellow fever vaccine-associated viscerotropic disease: current perspectives.
Thomas, Roger E
2016-01-01
To assess those published cases of yellow fever (YF) vaccine-associated viscerotropic disease that meet the Brighton Collaboration criteria and to assess the safety of YF vaccine with respect to viscerotropic disease. Ten electronic databases, with no restriction of date or language, and the reference lists of retrieved articles were searched. All abstracts and titles were independently read by two reviewers and data were independently entered by two reviewers. All serious adverse events that met the Brighton Collaboration criteria were associated with first YF vaccinations. Sixty-two published cases (35 died) met the Brighton Collaboration viscerotropic criteria, with 32 from the US, six from Brazil, five from Peru, three from Spain, two from the People's Republic of China, one each from Argentina, Australia, Belgium, Ecuador, France, Germany, Ireland, New Zealand, Portugal, and the UK, and four with no country stated. Two cases met both the viscerotropic and the YF vaccine-associated neurologic disease criteria. Seventy cases proposed by authors as viscerotropic disease did not meet any Brighton Collaboration viscerotropic level of diagnostic certainty or any YF vaccine-associated viscerotropic disease causality criteria (37 died). Viscerotropic disease is rare in the published literature and in pharmacovigilance databases. All published cases were from developed countries. Because the symptoms are usually very severe and life threatening, it is unlikely that cases would not come to medical attention (but they might not be published). Because viscerotropic disease has a highly predictable pathologic course, it is likely that viscerotropic disease post-YF vaccine occurs in low-income countries with the same incidence as in developed countries. YF vaccine is a very safe vaccine that likely confers lifelong immunity.
A database of charged cosmic rays
NASA Astrophysics Data System (ADS)
Maurin, D.; Melot, F.; Taillet, R.
2014-09-01
Aims: This paper gives a description of a new online database and associated online tools (data selection, data export, plots, etc.) for charged cosmic-ray measurements. The experimental setups (type, flight dates, techniques) from which the data originate are included in the database, along with the references to all relevant publications. Methods: The database relies on the MySQL5 engine. The web pages and queries are based on PHP, AJAX and the jquery, jquery.cluetip, jquery-ui, and table-sorter third-party libraries. Results: In this first release, we restrict ourselves to Galactic cosmic rays with Z ≤ 30 and a kinetic energy per nucleon up to a few tens of TeV/n. This corresponds to more than 200 different sub-experiments (i.e., different experiments, or data from the same experiment flying at different times) in as many publications. Conclusions: We set up a cosmic-ray database (CRDB) and provide tools to sort and visualise the data. New data can be submitted, providing the community with a collaborative tool to archive past and future cosmic-ray measurements. http://lpsc.in2p3.fr/crdb; Contact: crdatabase@lpsc.in2p3.fr
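The selection criteria offered by the web forms (quantity, energy range, experiment) ultimately translate into relational queries against the MySQL back end. The self-contained sketch below illustrates that kind of query against a hypothetical, simplified schema, using SQLite only so the example runs without a database server; the table, columns and values are placeholders.

```python
# Self-contained sketch of the kind of relational selection the CRDB web forms
# issue. The schema and rows are hypothetical placeholders; the production system
# uses MySQL, and SQLite is used here only to keep the example runnable.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE flux_data
               (experiment TEXT, quantity TEXT, ekn_gev_per_n REAL, value REAL)""")
con.executemany("INSERT INTO flux_data VALUES (?, ?, ?, ?)", [
    ("EXP-A", "B/C", 10.0, 0.21),     # illustrative placeholder values
    ("EXP-B", "B/C", 100.0, 0.12),
    ("EXP-C", "B/C", 1000.0, 0.06),
])

# Select a ratio between 1 and 500 GeV/n, ordered by energy.
rows = con.execute("""SELECT experiment, ekn_gev_per_n, value FROM flux_data
                      WHERE quantity = 'B/C' AND ekn_gev_per_n BETWEEN 1 AND 500
                      ORDER BY ekn_gev_per_n""").fetchall()
print(rows)
```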
García-Sancho, Miguel
2011-01-01
This paper explores the introduction of professional systems engineers and information management practices into the first centralized DNA sequence database, developed at the European Molecular Biology Laboratory (EMBL) during the 1980s. In so doing, it complements the literature on the emergence of an information discourse after World War II and its subsequent influence in biological research. By analyzing the careers of the database creators and the computer algorithms they designed, I show that from the mid-1960s onwards information in biology gradually shifted from a pervasive metaphor to being embodied in practices and professionals such as those incorporated at the EMBL. I then investigate the reception of these database professionals by the EMBL biological staff, which evolved from initial disregard to necessary collaboration as the relationship between DNA, genes, and proteins turned out to be more complex than expected. The trajectories of the database professionals at the EMBL suggest that the initial subject matter of the historiography of genomics should be the long-standing practices that emerged after World War II and to a large extent originated outside biomedicine and academia. Only after addressing these practices may historians turn to their further disciplinary assemblage in fields such as bioinformatics or biotechnology.
Analysing and Rationalising Molecular and Materials Databases Using Machine-Learning
NASA Astrophysics Data System (ADS)
de, Sandip; Ceriotti, Michele
Computational materials design promises to greatly accelerate the process of discovering new or more performant materials. Several collaborative efforts are contributing to this goal by building databases of structures, containing between thousands and millions of distinct hypothetical compounds, whose properties are computed by high-throughput electronic-structure calculations. The complexity and sheer amount of information has made manual exploration, interpretation and maintenance of these databases a formidable challenge, making it necessary to resort to automatic analysis tools. Here we will demonstrate how, starting from a measure of (dis)similarity between database items built from a combination of local environment descriptors, it is possible to apply hierarchical clustering algorithms, as well as dimensionality reduction methods such as sketchmap, to analyse, classify and interpret trends in molecular and materials databases, as well as to detect inconsistencies and errors. Thanks to the agnostic and flexible nature of the underlying metric, we will show how our framework can be applied transparently to different kinds of systems ranging from organic molecules and oligopeptides to inorganic crystal structures as well as molecular crystals. Funded by National Center for Computational Design and Discovery of Novel Materials (MARVEL) and Swiss National Science Foundation.
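As a minimal illustration of the workflow described above, the sketch below applies average-linkage hierarchical clustering to a pairwise dissimilarity matrix; a random symmetric matrix stands in for a real structural metric built from local environment descriptors, and sketch-map itself is not reproduced here.

```python
# Minimal sketch of the general workflow described above: given a pairwise
# dissimilarity matrix between database entries (here random placeholders standing
# in for a structural metric built from local environment descriptors),
# apply hierarchical clustering to group similar structures.
import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
n = 20
d = rng.random((n, n))
dist = (d + d.T) / 2.0          # symmetrize to get a valid dissimilarity matrix
np.fill_diagonal(dist, 0.0)     # zero self-distance

condensed = squareform(dist, checks=False)   # condensed form required by linkage
tree = linkage(condensed, method="average")  # average-linkage hierarchical clustering
labels = fcluster(tree, t=3, criterion="maxclust")  # cut the tree into 3 clusters
print(labels)
```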
Official crime data versus collaborative crime mapping at a Brazilian city
NASA Astrophysics Data System (ADS)
Brito, P. L.; Jesus, E. G. V.; Sant'Ana, R. M. S.; Martins, C.; Delgado, J. P. M.; Fernandes, V. O.
2014-11-01
In July 2013 a group of undergraduate students from the Federal University of Bahia, Brazil, published a collaborative web map called "Where I Was Robbed". Their initial efforts to publicize the web map were restricted to announcing it on a local radio station as a tool of social interest. In two months the map had almost 10,000 reports (155 reports per day), and people from more than 350 cities had already reported a crime. The present study investigates the spatial correlation between this collaborative web map and the official robbery data registered in the Secretary of Public Safety database for the city of Salvador, Bahia. A kernel density estimator combined with map algebra was used for the investigation. Spatial correlations with official robbery data for the city of Salvador were not found initially, but after standardizing the collaborative data and mining the official registers, both data sets pointed at very similar areas as the main hot spots for pedestrian robbery. Both areas are located in two of the most economically active areas of the city, although web map crime reports were more concentrated in an area with a higher-income population. These results indicate that this collaborative application is used mainly by the middle- and upper-class segments of the city population, but it can still provide significant information on public safety priority areas. Therefore, extended dissemination of the collaborative crime map application in local papers, radio and TV, and partnership with official agencies, are strongly recommended.
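A kernel density surface of report locations is the basic ingredient of such a hot-spot comparison. The sketch below estimates one with scipy's Gaussian KDE on synthetic point coordinates; it uses no actual report data and is only meant to show the mechanics.

```python
# Illustrative kernel density estimate over point locations, the kind of surface
# used in the study to compare collaborative and official robbery hot spots.
# Coordinates here are synthetic placeholders, not actual report data.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(42)
# Synthetic report locations (projected x, y in km) around two artificial hot spots.
points = np.vstack([rng.normal((2, 3), 0.3, (200, 2)),
                    rng.normal((7, 8), 0.5, (150, 2))]).T   # shape (2, n) for gaussian_kde

kde = gaussian_kde(points)

# Evaluate the density on a regular grid; hot spots are the cells with highest density.
xg, yg = np.meshgrid(np.linspace(0, 10, 100), np.linspace(0, 10, 100))
density = kde(np.vstack([xg.ravel(), yg.ravel()])).reshape(xg.shape)
print("hot-spot cell:", np.unravel_index(density.argmax(), density.shape))
```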
Burisch, Johan; Cukovic-Cavka, Silvija; Kaimakliotis, Ioannis; Shonová, Olga; Andersen, Vibeke; Dahlerup, Jens F; Elkjaer, Margarita; Langholz, Ebbe; Pedersen, Natalia; Salupere, Riina; Kolho, Kaija-Leena; Manninen, Pia; Lakatos, Peter Laszlo; Shuhaibar, Mary; Odes, Selwyn; Martinato, Matteo; Mihu, Ion; Magro, Fernando; Belousova, Elena; Fernandez, Alberto; Almer, Sven; Halfvarson, Jonas; Hart, Ailsa; Munkholm, Pia
2011-08-01
The EpiCom study investigates a possible East-West gradient in Europe in the incidence of IBD and the association with environmental factors. A secured web-based database is used to facilitate and centralize data registration. To construct and validate a web-based inception cohort database available in both English and Russian. The EpiCom database has been constructed in collaboration with all 34 participating centers. The database was translated into Russian using forward translation; patient questionnaires were translated by simplified forward-backward translation. Data insertion requires fulfillment of international diagnostic criteria and covers disease activity, medical therapy, quality of life, work productivity and activity impairment, outcome of pregnancy, surgery, cancer and death. Data are secured by the WinLog3 System, developed in cooperation with the Danish Data Protection Agency. Validation of the database has been performed in two consecutive rounds, each followed by corrections in accordance with comments. The EpiCom database fulfills the requirements of the participating countries' local data security agencies by being stored at a single location. The database was found overall to be "good" or "very good" by 81% of the participants after the second validation round, and the general applicability of the database was evaluated as "good" or "very good" by 77%. In the inclusion period (January 1st to December 31st, 2010), 1336 IBD patients were included in the database. A user-friendly, tailor-made and secure web-based inception cohort database has been successfully constructed, facilitating remote data input. The incidence of IBD in 23 European countries can be found at www.epicom-ecco.eu. Copyright © 2011 European Crohn's and Colitis Organisation. All rights reserved.
Very large database of lipids: rationale and design.
Martin, Seth S; Blaha, Michael J; Toth, Peter P; Joshi, Parag H; McEvoy, John W; Ahmed, Haitham M; Elshazly, Mohamed B; Swiger, Kristopher J; Michos, Erin D; Kwiterovich, Peter O; Kulkarni, Krishnaji R; Chimera, Joseph; Cannon, Christopher P; Blumenthal, Roger S; Jones, Steven R
2013-11-01
Blood lipids have major cardiovascular and public health implications. Lipid-lowering drugs are prescribed based in part on categorization of patients into normal or abnormal lipid metabolism, yet relatively little emphasis has been placed on: (1) the accuracy of current lipid measures used in clinical practice, (2) the reliability of current categorizations of dyslipidemia states, and (3) the relationship of advanced lipid characterization to other cardiovascular disease biomarkers. To these ends, we developed the Very Large Database of Lipids (NCT01698489), an ongoing database protocol that harnesses deidentified data from the daily operations of a commercial lipid laboratory. The database includes individuals who were referred for clinical purposes for a Vertical Auto Profile (Atherotech Inc., Birmingham, AL), which directly measures cholesterol concentrations of low-density lipoprotein, very low-density lipoprotein, intermediate-density lipoprotein, high-density lipoprotein, their subclasses, and lipoprotein(a). Individual Very Large Database of Lipids studies, ranging from studies of measurement accuracy, to dyslipidemia categorization, to biomarker associations, to characterization of rare lipid disorders, are investigator-initiated and utilize peer-reviewed statistical analysis plans to address a priori hypotheses/aims. In the first database harvest (Very Large Database of Lipids 1.0) from 2009 to 2011, there were 1 340 614 adult and 10 294 pediatric patients; the adult sample had a median age of 59 years (interquartile range, 49-70 years) with even representation by sex. Lipid distributions closely matched those from the population-representative National Health and Nutrition Examination Survey. The second harvest of the database (Very Large Database of Lipids 2.0) is underway. Overall, the Very Large Database of Lipids database provides an opportunity for collaboration and new knowledge generation through careful examination of granular lipid data on a large scale. © 2013 Wiley Periodicals, Inc.
Quebec Trophoblastic Disease Registry: how to make an easy-to-use dynamic database.
Sauthier, Philippe; Breguet, Magali; Rozenholc, Alexandre; Sauthier, Michaël
2015-05-01
To create an easy-to-use dynamic database designed specifically for the Quebec Trophoblastic Disease Registry (RMTQ). It is now well established that much of the success in managing trophoblastic diseases comes from the development of national and regional reference centers. Computerized databases allow the optimal use of data stored in these centers. We have created an electronic data registration system by producing a database using FileMaker Pro 12. It uses 11 external tables associated with a unique identification number for each patient. Each table allows specific data to be recorded, incorporating demographics, diagnosis, automated staging, laboratory values, pathological diagnosis, and imaging parameters. From January 1, 2009, to December 31, 2013, we used our database to register 311 patients with 380 diseases and have seen a 39.2% increase in registrations each year between 2009 and 2012. This database allows the automatic generation of semilogarithmic curves, which take into account β-hCG values as a function of time, complete with graphic markers for applied treatments (chemotherapy, radiotherapy, or surgery). It generates a summary sheet for a synthetic vision in real time. We have created, at a low cost, an easy-to-use database specific to trophoblastic diseases that dynamically integrates staging and monitoring. We propose a 10-step procedure for a successful trophoblastic database. It improves patient care, research, and education on trophoblastic diseases in Quebec and leads to an opportunity for collaboration on a national Canadian registry.
The Practice Integration Profile: Rationale, development, method, and research.
Macchi, C R; Kessler, Rodger; Auxier, Andrea; Hitt, Juvena R; Mullin, Daniel; van Eeghen, Constance; Littenberg, Benjamin
2016-12-01
Insufficient knowledge exists regarding how to measure the presence and degree of integrated care. Prior estimates of integration levels are neither grounded in theory nor psychometrically validated. They provide scant guidance to inform improvement activities, compare integration efforts, discriminate among practices by degree of integration, measure the effect of integration on quadruple aim outcomes, or address the needs of clinicians, regulators, and policymakers seeking new models of health care delivery and funding. We describe the development of the Practice Integration Profile (PIP), a novel instrument designed to measure levels of integrated behavioral health care within a primary care clinic. The PIP draws upon the Agency for Healthcare Research and Quality's (AHRQ) Lexicon of Collaborative Care, which provides theoretic justification for a paradigm case of collaborative care. We used the key clauses of the Lexicon to derive domains of integration and generate measures corresponding to those key clauses. After reviewing currently used methods for identifying collaborative care, or integration, and identifying the need to improve on them, we describe a national collaboration to describe and evaluate the PIP. We also describe its potential use in practice improvement, research, responsiveness to multiple stakeholder needs, and other future directions. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Collaborative care: Using six thinking hats for decision making.
Cioffi, Jane Marie
2017-12-01
To apply the six thinking hats technique to decision making in collaborative care. In collaborative partnerships, effective communication needs to occur in patient, family, and health care professional meetings. The effectiveness of these meetings depends on the engagement of participants and the quality of the meeting process. The use of the six thinking hats technique to engage all participants in effective dialogue is proposed. Discussion paper. The electronic databases CINAHL, PubMed, and ScienceDirect were searched for the years 1990 to 2017. Using the six thinking hats technique in patient and family meetings, nurses can guide a process of dialogue that focuses decision making to build equal care partnerships inclusive of all participants. Nurses will need to develop the skills for using the six thinking hats technique and provide support to all participants during the meeting process. Collaborative decision making can be augmented by the six thinking hats technique to provide patients, families, and health professionals with opportunities to make informed decisions about care that consider key issues for all involved. Nurses, who are most often advocates for patients and their families, are in a unique position to lead this initiative in meetings as they network with all health professionals. © 2017 John Wiley & Sons Australia, Ltd.
The importance of international collaboration for rare diseases research: a European perspective
Julkowska, D; Austin, C P; Cutillo, C M; Gancberg, D; Hager, C; Halftermeyer, J; Jonker, A H; Lau, L P L; Norstedt, I; Rath, A; Schuster, R; Simelyte, E; van Weely, S
2017-01-01
Over the last two decades, important contributions were made at national, European and international levels to foster collaboration in rare diseases research. The European Union (EU) has put much effort into funding rare diseases research, encouraging national funding organizations to collaborate in the E-Rare program, setting up European Reference Networks for rare diseases and complex conditions, and initiating the International Rare Diseases Research Consortium (IRDiRC) together with the National Institutes of Health in the USA. Co-ordination of the activities of funding agencies, academic researchers, companies, regulatory bodies, and patient advocacy organizations, and partnerships with, for example, the European Research Infrastructures, maximize the collective impact of global investments in rare diseases research. This contributes to accelerating progress, for example, in faster diagnosis through enhanced discovery of causative genes, better understanding of the natural history of rare diseases through the creation of common registries and databases, and the boosting of innovative therapeutic approaches. Several examples of funded pre-clinical and clinical gene therapy projects show that integration of multinational and multidisciplinary expertise generates new knowledge and can result in multicentre gene therapy trials. International collaboration in rare diseases research is key to improving the life of people living with a rare disease. PMID:28440796
Problem formulation, metrics, open government, and on-line collaboration
NASA Astrophysics Data System (ADS)
Ziegler, C. R.; Schofield, K.; Young, S.; Shaw, D.
2010-12-01
Problem formulation leading to effective environmental management, including synthesis and application of science by government agencies, may benefit from collaborative on-line environments. This is illustrated by two interconnected projects: 1) literature-based evidence tools that support causal assessment and problem formulation, and 2) development of output, outcome, and sustainability metrics for tracking environmental conditions. Specifically, peer-production mechanisms allow for global contribution to science-based causal evidence databases, and subsequent crowd-sourced development of causal networks supported by that evidence. In turn, science-based causal networks may inform problem formulation and selection of metrics or indicators to track environmental condition (or problem status). Selecting and developing metrics in a collaborative on-line environment may improve stakeholder buy-in, the explicit relevance of metrics to planning, and the ability to approach problem apportionment or accountability, and to define success or sustainability. Challenges include contribution governance, data-sharing incentives, linking on-line interfaces to data service providers, and the intersection of environmental science and social science. Degree of framework access and confidentiality may vary by group and/or individual, but may ultimately be geared at demonstrating connections between science and decision making and supporting a culture of open government, by fostering transparency, public engagement, and collaboration.
Bigger Data, Collaborative Tools and the Future of Predictive Drug Discovery
Clark, Alex M.; Swamidass, S. Joshua; Litterman, Nadia; Williams, Antony J.
2014-01-01
Over the past decade we have seen a growth in the provision of chemistry data and cheminformatics tools as either free websites or software as a service (SaaS) commercial offerings. These have transformed how we find molecule-related data and use such tools in our research. There have also been efforts to improve collaboration between researchers either openly or through secure transactions using commercial tools. A major challenge in the future will be how such databases and software approaches handle larger amounts of data as it accumulates from high throughput screening and enables the user to draw insights, enable predictions and move projects forward. We now discuss how information from some drug discovery datasets can be made more accessible and how privacy of data should not overwhelm the desire to share it at an appropriate time with collaborators. We also discuss additional software tools that could be made available and provide our thoughts on the future of predictive drug discovery in this age of big data. We use some examples from our own research on neglected diseases, collaborations, mobile apps and algorithm development to illustrate these ideas. PMID:24943138
Bookey-Bassett, Sue; Markle-Reid, Maureen; McKey, Colleen; Akhtar-Danesh, Noori
2016-01-01
It is acknowledged internationally that chronic disease management (CDM) for community-living older adults (CLOA) is an increasingly complex process. CDM for older adults, who are often living with multiple chronic conditions, requires coordination of various health and social services. Coordination is enabled through interprofessional collaboration (IPC) among individual providers, community organizations, and health sectors. Measuring IPC is complicated given there are multiple conceptualisations and measures of IPC. A literature review of several healthcare, psychological, and social science electronic databases was conducted to locate instruments that measure IPC at the team level and have published evidence of their reliability and validity. Five instruments met the criteria and were critically reviewed to determine their strengths and limitations as they relate to CDM for CLOA. A comparison of the characteristics, psychometric properties, and overall concordance of each instrument with salient attributes of IPC found the Collaborative Practice Assessment Tool to be the most appropriate instrument for measuring IPC for CDM in CLOA.
Technical note: The Linked Paleo Data framework - a common tongue for paleoclimatology
NASA Astrophysics Data System (ADS)
McKay, Nicholas P.; Emile-Geay, Julien
2016-04-01
Paleoclimatology is a highly collaborative scientific endeavor, increasingly reliant on online databases for data sharing. Yet there is currently no universal way to describe, store and share paleoclimate data: in other words, no standard. Data standards are often regarded by scientists as mere technicalities, though they underlie much scientific and technological innovation, as well as facilitating collaborations between research groups. In this article, we propose a preliminary data standard for paleoclimate data, general enough to accommodate all the archive and measurement types encountered in a large international collaboration (PAGES 2k). We also introduce a vehicle for such structured data (Linked Paleo Data, or LiPD), leveraging recent advances in knowledge representation (Linked Open Data). The LiPD framework enables quick querying and extraction, and we expect that it will facilitate the writing of open-source community codes to access, analyze, model and visualize paleoclimate observations. We welcome community feedback on this standard, and encourage paleoclimatologists to experiment with the format for their own purposes.
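LiPD stores dataset metadata and measurement tables as structured, linked-data-friendly documents. The fragment below is a deliberately simplified, hypothetical illustration of that shape; field names approximate published LiPD examples but should be checked against the specification rather than taken as the authoritative schema.

```python
# Deliberately simplified sketch of the kind of structured record LiPD stores:
# dataset-level metadata plus a measurement table with per-column descriptions.
# Field names approximate published LiPD examples but are not the authoritative
# schema, and all values are hypothetical placeholders.
import json

record = {
    "dataSetName": "ExampleLake.Smith.2015",        # hypothetical dataset
    "archiveType": "lake sediment",
    "geo": {"type": "Feature",
            "geometry": {"type": "Point", "coordinates": [-110.5, 44.2, 2357]}},
    "paleoData": [{
        "measurementTable": [{
            "columns": [
                {"variableName": "depth", "units": "cm",     "values": [0.5, 1.5, 2.5]},
                {"variableName": "d18O",  "units": "permil", "values": [-3.1, -2.8, -3.4]},
            ]
        }]
    }],
}

print(json.dumps(record, indent=2))
```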
Diversity of social ties in scientific collaboration networks
NASA Astrophysics Data System (ADS)
Shi, Quan; Xu, Bo; Xu, Xiaomin; Xiao, Yanghua; Wang, Wei; Wang, Hengshan
2011-11-01
Diversity is one of the important perspectives for characterizing the behavior of individuals in social networks. It is intuitively believed that diversity of social ties accounts for competitive advantage and idea innovation. However, quantitative evidence in a real large social network could rarely be found in previous research. Thanks to the availability of scientific publication records on the WWW, we can now construct a large scientific collaboration network, which provides us with a chance to gain insight into the diversity of relationships in a real social network through statistical analysis. In this article, we dedicate our efforts to performing, in a systematic way, an empirical analysis of a scientific collaboration network extracted from DBLP, an online bibliographic database in computer science, finding the following: distributions of diversity indices tend to decay in an exponential or Gaussian way; diversity indices are not trivially correlated with existing vertex importance measures; and authors with diverse social ties tend to connect to each other, and these authors are generally more competitive than others.
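One simple, concrete way to quantify the diversity of an author's ties is the Shannon entropy of some category (venue, topic, affiliation) over their co-authors. The sketch below computes such an index on a toy co-authorship graph with networkx; it is an illustrative measure and not necessarily one of the indices used in the study.

```python
# One simple diversity measure for an author's collaborations: the Shannon entropy
# of the categories (here, a toy "affil" attribute) of their co-authors.
# This is an illustrative index, not necessarily the exact measures used in the study.
import math
from collections import Counter

import networkx as nx

G = nx.Graph()
# Toy co-authorship graph; the "affil" attribute stands in for any category
# (venue, topic, institution) over which diversity could be measured.
G.add_nodes_from([("A", {"affil": "db"}), ("B", {"affil": "db"}),
                  ("C", {"affil": "ml"}), ("D", {"affil": "networks"}),
                  ("E", {"affil": "ml"})])
G.add_edges_from([("A", "B"), ("A", "C"), ("A", "D"), ("A", "E")])

def neighbor_entropy(g: nx.Graph, node: str, attr: str = "affil") -> float:
    """Shannon entropy of the attribute distribution over a node's neighbors."""
    counts = Counter(g.nodes[n][attr] for n in g.neighbors(node))
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())

print(round(neighbor_entropy(G, "A"), 3))
```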
Current trends in nursing theories.
Im, Eun-Ok; Chang, Sun Ju
2012-06-01
To explore current trends in nursing theories through an integrated literature review. The literature related to nursing theories during the past 10 years was searched through multiple databases and reviewed to determine themes reflecting current trends in nursing theories. The trends can be categorized into six themes: (a) foci on specifics; (b) coexistence of various types of theories; (c) close links to research; (d) international collaborative works; (e) integration into practice; and (f) selective evolution. We need to continue our efforts to link research and practice to theories, to identify the specifics of our theories, to develop diverse types of theories, and to conduct international collaborative work. Our paper gives implications for future theoretical development in diverse clinical areas of nursing research and practice. © 2012 Sigma Theta Tau International.
Foundations of teamwork and collaboration.
Driskell, James E; Salas, Eduardo; Driskell, Tripp
2018-01-01
The term teamwork has graced countless motivational posters and office walls. However, although teamwork is often easy to observe, it is somewhat more difficult to describe and yet more difficult to produce. At a broad level, teamwork is the process through which team members collaborate to achieve task goals. Teamwork refers to the activities through which team inputs translate into team outputs such as team effectiveness and satisfaction. In this article, we describe foundational research underlying current research on teamwork. We examine the evolution of team process models and outline primary teamwork dimensions. We discuss selection, training, and design approaches to enhancing teamwork, and note current applications of teamwork research in real-world settings. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
EarthChem: International Collaboration for Solid Earth Geochemistry in Geoinformatics
NASA Astrophysics Data System (ADS)
Walker, J. D.; Lehnert, K. A.; Hofmann, A. W.; Sarbas, B.; Carlson, R. W.
2005-12-01
The current on-line information systems for igneous rock geochemistry - PetDB, GEOROC, and NAVDAT - convincingly demonstrate the value of rigorous scientific data management of geochemical data for research and education. The next generation of hypothesis formulation and testing can be vastly facilitated by enhancing these electronic resources through integration of available datasets, expansion of data coverage in location, time, and tectonic setting, timely updates with new data, and through intuitive and efficient access and data analysis tools for the broader geosciences community. PetDB, GEOROC, and NAVDAT have therefore formed the EarthChem consortium (www.earthchem.org) as an international collaborative effort to address these needs and serve the larger earth science community by facilitating the compilation, communication, serving, and visualization of geochemical data, and their integration with other geological, geochronological, geophysical, and geodetic information to maximize their scientific application. We report on the status of and future plans for EarthChem activities. EarthChem's development plan includes: (1) expanding the functionality of the web portal to become a 'one-stop shop for geochemical data' with search capability across databases, standardized and integrated data output, generally applicable tools for data quality assessment, and data analysis/visualization including plotting methods and an information-rich map interface; and (2) expanding data holdings by generating new datasets as identified and prioritized through community outreach, and facilitating data contributions from the community by offering web-based data submission capability and technical assistance for design, implementation, and population of new databases and their integration with all EarthChem data holdings. Such federated databases and datasets will retain their identity within the EarthChem system. We also plan on working with publishers to ease the assimilation of geochemical data into the EarthChem database. As a community resource, EarthChem will address user concerns and respond to broad scientific and educational needs. EarthChem will hold yearly workshops, town hall meetings, and/or exhibits at major meetings. The group has established a two-tier committee structure to help ease the communication and coordination of database and IT issues between existing data management projects, and to receive feedback and support from individuals and groups from the larger geosciences community.
Monitoring of IaaS and scientific applications on the Cloud using the Elasticsearch ecosystem
NASA Astrophysics Data System (ADS)
Bagnasco, S.; Berzano, D.; Guarise, A.; Lusso, S.; Masera, M.; Vallero, S.
2015-05-01
The private Cloud at the Torino INFN computing centre offers IaaS services to different scientific computing applications. The infrastructure is managed with the OpenNebula cloud controller. The main stakeholders of the facility are a grid Tier-2 site for the ALICE collaboration at the LHC, an interactive analysis facility for the same experiment, and a grid Tier-2 site for the BES-III collaboration, plus an increasing number of other small tenants. Besides keeping track of the usage, the automation of dynamic allocation of resources to tenants requires detailed monitoring and accounting of the resource usage. As a first investigation towards this, we set up a monitoring system to inspect the site activities both in terms of IaaS and of the applications running on the hosted virtual instances. For this purpose we used the Elasticsearch, Logstash and Kibana stack. In the current implementation, the heterogeneous accounting information is fed to different MySQL databases and sent to Elasticsearch via a custom Logstash plugin. For the IaaS metering, we developed sensors for the OpenNebula API. The IaaS-level information gathered through the API is sent to the MySQL database through an ad hoc RESTful web service, which is also used for other accounting purposes. Concerning the application level, we used the ROOT plugin TProofMonSenderSQL to collect accounting data from the interactive analysis facility. The BES-III virtual instances used to be monitored with Zabbix; as a proof of concept, we also retrieve the information contained in the Zabbix database. Each of these three cases is indexed separately in Elasticsearch. We are now starting to consider dismissing the intermediate level provided by the SQL databases and evaluating a NoSQL option as a unique central database for all the monitoring information. We set up a set of Kibana dashboards with pre-defined queries in order to monitor the relevant information in each case. In this way we have achieved a uniform monitoring interface for both the IaaS and the scientific applications, mostly leveraging off-the-shelf tools.
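The last step of such a pipeline is indexing each accounting record into Elasticsearch. The sketch below does this through the plain REST API (rather than a custom Logstash plugin), assuming a recent Elasticsearch release, a local endpoint, and invented index and field names.

```python
# Sketch of feeding one accounting record into Elasticsearch through its REST API,
# the final step of the pipeline described above (here via plain HTTP instead of a
# custom Logstash plugin). Host, index name, and fields are assumptions, and the
# /_doc endpoint assumes a recent Elasticsearch release (7.x or later).
import requests

ES_URL = "http://localhost:9200"        # assumed Elasticsearch endpoint
doc = {
    "timestamp": "2015-03-01T12:00:00Z",
    "tenant": "alice-tier2",            # hypothetical tenant name
    "vm_id": "one-1234",
    "cpu_hours": 4.2,
    "source": "opennebula-api",
}

# POST /<index>/_doc lets Elasticsearch assign the document id.
resp = requests.post(f"{ES_URL}/iaas-accounting/_doc", json=doc, timeout=10)
resp.raise_for_status()
print(resp.json()["result"])            # e.g. "created"
```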
Barnes-Daly, Mary Ann; Pun, Brenda T; Harmon, Lori A; Byrum, Diane G; Kumar, Vishakha K; Devlin, John W; Stollings, Joanna L; Puntillo, Kathleen A; Engel, Heidi J; Posa, Patricia J; Barr, Juliana; Schweickert, William D; Esbrook, Cheryl L; Hargett, Ken D; Carson, Shannon S; Aldrich, J Matthew; Ely, E Wesley; Balas, Michele C
2018-06-01
Patients admitted to intensive care units (ICUs) often experience pain, oversedation, prolonged mechanical ventilation, delirium, and weakness. These conditions are important in that they often lead to protracted physical, neurocognitive, and mental health sequelae now termed postintensive care syndrome. Changing current ICU practice will not only require the adoption of evidence-based interventions but the development of effective and reliable teams to support these new practices. To build on the success of bundled care and bridge an ongoing evidence-practice gap, the Society of Critical Care Medicine (SCCM) recently launched the ICU Liberation ABCDEF Bundle Improvement Collaborative. The Collaborative aimed to foster the bedside application of the SCCM's Pain, Agitation, and Delirium Guidelines via the ABCDEF bundle. The purpose of this paper is to describe the history of the Collaborative, the evidence-based implementation strategies used to foster change and teamwork, and the performance and outcome metrics used to monitor progress. Collaborative participants were required to attend four in-person meetings, monthly colearning calls, database training sessions, an e-Community listserv, and select in-person site visits. Teams submitted patient-level data and completed pre- and postimplementation questionnaires focused on the assessment of teamwork and collaboration, work environment, and overall ICU care. Faculty shared the evidence used to derive each bundle element as well as team-based implementation strategies for improvement and sustainment. Retention in the Collaborative was high, with 67 of 69 adult and eight of nine pediatric ICUs fully completing the program. Baseline and prospective data were collected on over 17,000 critically ill patients. A variety of evidence-based professional behavioral change interventions and novel implementation techniques were utilized and shared among Collaborative members. Hospitals and health systems can use the Collaborative structure, strategies, and tools described in this paper to help successfully implement the ABCDEF bundle in their ICUs. © 2018 Sigma Theta Tau International.
OpenFluDB, a database for human and animal influenza virus
Liechti, Robin; Gleizes, Anne; Kuznetsov, Dmitry; Bougueleret, Lydie; Le Mercier, Philippe; Bairoch, Amos; Xenarios, Ioannis
2010-01-01
Although research on influenza has lasted for more than 100 years, it is still one of the most prominent diseases, causing half a million human deaths every year. With the recent observation of new highly pathogenic H5N1 and H7N7 strains, and the appearance of the influenza pandemic caused by the H1N1 swine-like lineage, a collaborative effort to share observations on the evolution of this virus in both animals and humans has been established. The OpenFlu database (OpenFluDB) is a part of this collaborative effort. It contains genomic and protein sequences, as well as epidemiological data, from more than 27 000 isolates. The isolate annotations include virus type, host, geographical location and experimentally tested antiviral resistance. Putative enhanced pathogenicity as well as human adaptation propensity are computed from protein sequences. Each virus isolate can be associated with the laboratories that collected, sequenced and submitted it. Several analysis tools, including multiple sequence alignment, phylogenetic analysis and sequence similarity maps, enable rapid and efficient mining. The contents of OpenFluDB are supplied by direct user submission, as well as by a daily automatic procedure importing data from public repositories. Additionally, a simple mechanism facilitates the export of OpenFluDB records to GenBank. This resource has been successfully used to rapidly and widely distribute the sequences collected during the recent human swine flu outbreak and also as an exchange platform during the vaccine selection procedure. Database URL: http://openflu.vital-it.ch. PMID:20624713
NASA Astrophysics Data System (ADS)
Loveless, R.; Erhard, P.; Ficenec, J.; Gather, K.; Heath, G.; Iacovacci, M.; Kehres, J.; Mobayyen, M.; Notz, D.; Orr, R.; Orr, R.; Sephton, A.; Stroili, R.; Tokushuku, K.; Vogel, W.; Whitmore, J.; Wiggers, L.
1989-12-01
The ZEUS collaboration is building a system to monitor, control and document the hardware of the ZEUS detector. This system is based on a network of VAX computers and microprocessors connected via ethernet. The database for the hardware values will be ADAMO tables; the ethernet connection will be DECNET, TCP/IP, or RPC. Most of the documentation will also be kept in ADAMO tables for easy access by users.
Accident/Mishap Investigation System
NASA Technical Reports Server (NTRS)
Keller, Richard; Wolfe, Shawn; Gawdiak, Yuri; Carvalho, Robert; Panontin, Tina; Williams, James; Sturken, Ian
2007-01-01
InvestigationOrganizer (IO) is a Web-based collaborative information system that integrates the generic functionality of a database, a document repository, a semantic hypermedia browser, and a rule-based inference system with specialized modeling and visualization functionality to support accident/mishap investigation teams. This accessible, online structure is designed to support investigators by allowing them to make explicit, shared, and meaningful links among evidence, causal models, findings, and recommendations.
Peter S. Murdoch; John L. Hom; Yude Pan; Jeffrey M. Fischer
2008-01-01
To complete the collaborative monitoring study of forested landscapes within the DRB, a regional perspective is needed on the cumulative effect of different disturbances on overall ecosystem health. This section describes two modeling activities used as integrating tools for the CEMRI database and a validation system that used nested river monitoring stations.
Supply Chain Collaboration: Information Sharing in a Tactical Operating Environment
2013-06-01
In the proposed architecture there are four tiers: Client (Web application clients), Presentation (Web server), Processing (Application server), and Data (Database). Data collected from each organization in each period, including notes, outstanding deliveries, and inventory, will be analyzed and validated with statistical tests and Pareto analyses.
SPARCCS - Smartphone-Assisted Readiness, Command and Control System
2012-06-01
We propose SPARCCS (Smartphone-Assisted Readiness, Command, and Control System) to address these issues, using smartphones in conjunction with cloud computing to extend the benefits of collaborative command and control. In doing this, SPARCCS takes advantage of the capabilities cloud computing has to offer, especially distributed data storage, to meet readiness, command, control, and database needs.
The collation of forensic DNA case data into a multi-dimensional intelligence database.
Walsh, S J; Moss, D S; Kliem, C; Vintiner, G M
2002-01-01
The primary aim of any DNA Database is to link individuals to unsolved offenses and unsolved offenses to each other via DNA profiling. This aim has been successfully realised during the operation of the New Zealand (NZ) DNA Databank over the past five years. The DNA Intelligence Project (DIP), a collaborative project involving NZ forensic and law enforcement agencies, interrogated the forensic case data held on the NZ DNA databank and collated it into a functional intelligence database. This database has been used to identify significant trends which direct Police and forensic personnel towards the most appropriate use of DNA technology. Intelligence is being provided in areas such as the level of usage of DNA techniques in criminal investigation, the relative success of crime scene samples and the geographical distribution of crimes. The DIP has broadened the dimensions of the information offered through the NZ DNA Databank and has furthered the understanding and investigative capability of both Police and forensic scientists. The outcomes of this research fit soundly with the current policies of 'intelligence led policing', which are being adopted by Police jurisdictions locally and overseas.
Computer-Aided Systems Engineering for Flight Research Projects Using a Workgroup Database
NASA Technical Reports Server (NTRS)
Mizukami, Masahi
2004-01-01
An online systems engineering tool for flight research projects has been developed through the use of a workgroup database. Capabilities are implemented for typical flight research systems engineering needs in document library, configuration control, hazard analysis, hardware database, requirements management, action item tracking, project team information, and technical performance metrics. Repetitive tasks are automated to reduce workload and errors. Current data and documents are instantly available online and can be worked on collaboratively. Existing forms and conventional processes are used, rather than inventing or changing processes to fit the tool. An integrated tool set offers advantages by automatically cross-referencing data, minimizing redundant data entry, and reducing the number of programs that must be learned. With a simplified approach, significant improvements are attained over existing capabilities for minimal cost. By using a workgroup-level database platform, personnel most directly involved in the project can develop, modify, and maintain the system, thereby saving time and money. As a pilot project, the system has been used to support an in-house flight experiment. Options are proposed for developing and deploying this type of tool on a more extensive basis.
MycoDB, a global database of plant response to mycorrhizal fungi.
Chaudhary, V Bala; Rúa, Megan A; Antoninka, Anita; Bever, James D; Cannon, Jeffery; Craig, Ashley; Duchicela, Jessica; Frame, Alicia; Gardes, Monique; Gehring, Catherine; Ha, Michelle; Hart, Miranda; Hopkins, Jacob; Ji, Baoming; Johnson, Nancy Collins; Kaonongbua, Wittaya; Karst, Justine; Koide, Roger T; Lamit, Louis J; Meadow, James; Milligan, Brook G; Moore, John C; Pendergast, Thomas H; Piculell, Bridget; Ramsby, Blake; Simard, Suzanne; Shrestha, Shubha; Umbanhowar, James; Viechtbauer, Wolfgang; Walters, Lawrence; Wilson, Gail W T; Zee, Peter C; Hoeksema, Jason D
2016-05-10
Plants form belowground associations with mycorrhizal fungi in one of the most common symbioses on Earth. However, few large-scale generalizations exist for the structure and function of mycorrhizal symbioses, as the nature of this relationship varies from mutualistic to parasitic and is largely context-dependent. We announce the public release of MycoDB, a database of 4,010 studies (from 438 unique publications) to aid in multi-factor meta-analyses elucidating the ecological and evolutionary context in which mycorrhizal fungi alter plant productivity. Over 10 years with nearly 80 collaborators, we compiled data on the response of plant biomass to mycorrhizal fungal inoculation, including meta-analysis metrics and 24 additional explanatory variables that describe the biotic and abiotic context of each study. We also include phylogenetic trees for all plants and fungi in the database. To our knowledge, MycoDB is the largest ecological meta-analysis database. We aim to share these data to highlight significant gaps in mycorrhizal research and encourage synthesis to explore the ecological and evolutionary generalities that govern mycorrhizal functioning in ecosystems.
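The meta-analysis metrics a database like MycoDB supports typically include an effect size such as the log response ratio of plant biomass with versus without inoculation, its sampling variance, and inverse-variance weights. The sketch below computes these standard quantities for a few invented studies; the numbers are placeholders, not MycoDB records, and the exact variables in MycoDB may differ.

```python
# Sketch of standard meta-analysis quantities: the log response ratio of plant
# biomass with vs. without mycorrhizal inoculation, its sampling variance, and an
# inverse-variance weighted mean. Numbers are illustrative placeholders only.
import math

# (mean, sd, n) for inoculated and control plants in three hypothetical studies.
studies = [((12.0, 3.0, 10), (9.0, 2.5, 10)),
           ((8.5, 2.0, 15), (8.0, 2.2, 15)),
           ((20.0, 5.0, 8),  (14.0, 4.0, 8))]

def log_response_ratio(treat, ctrl):
    """Return (lnRR, sampling variance) for one study."""
    (mt, st, nt), (mc, sc, nc) = treat, ctrl
    lrr = math.log(mt / mc)
    var = st**2 / (nt * mt**2) + sc**2 / (nc * mc**2)
    return lrr, var

effects = [log_response_ratio(t, c) for t, c in studies]
weights = [1.0 / v for _, v in effects]
pooled = sum(w * e for (e, _), w in zip(effects, weights)) / sum(weights)
print(f"pooled log response ratio: {pooled:.3f}")
```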
Riffle, Michael; Jaschob, Daniel; Zelter, Alex; Davis, Trisha N
2016-08-05
ProXL is a Web application and accompanying database designed for sharing, visualizing, and analyzing bottom-up protein cross-linking mass spectrometry data with an emphasis on structural analysis and quality control. ProXL is designed to be independent of any particular software pipeline. The import process is simplified by the use of the ProXL XML data format, which shields developers of data importers from the relative complexity of the relational database schema. The database and Web interfaces function equally well for any software pipeline and allow data from disparate pipelines to be merged and contrasted. ProXL includes robust public and private data sharing capabilities, including a project-based interface designed to ensure security and facilitate collaboration among multiple researchers. ProXL provides multiple interactive and highly dynamic data visualizations that facilitate structural-based analysis of the observed cross-links as well as quality control. ProXL is open-source, well-documented, and freely available at https://github.com/yeastrc/proxl-web-app.
MycoDB, a global database of plant response to mycorrhizal fungi
Chaudhary, V. Bala; Rúa, Megan A.; Antoninka, Anita; Bever, James D.; Cannon, Jeffery; Craig, Ashley; Duchicela, Jessica; Frame, Alicia; Gardes, Monique; Gehring, Catherine; Ha, Michelle; Hart, Miranda; Hopkins, Jacob; Ji, Baoming; Johnson, Nancy Collins; Kaonongbua, Wittaya; Karst, Justine; Koide, Roger T.; Lamit, Louis J.; Meadow, James; Milligan, Brook G.; Moore, John C.; Pendergast IV, Thomas H.; Piculell, Bridget; Ramsby, Blake; Simard, Suzanne; Shrestha, Shubha; Umbanhowar, James; Viechtbauer, Wolfgang; Walters, Lawrence; Wilson, Gail W. T.; Zee, Peter C.; Hoeksema, Jason D.
2016-01-01
Plants form belowground associations with mycorrhizal fungi in one of the most common symbioses on Earth. However, few large-scale generalizations exist for the structure and function of mycorrhizal symbioses, as the nature of this relationship varies from mutualistic to parasitic and is largely context-dependent. We announce the public release of MycoDB, a database of 4,010 studies (from 438 unique publications) to aid in multi-factor meta-analyses elucidating the ecological and evolutionary context in which mycorrhizal fungi alter plant productivity. Over 10 years with nearly 80 collaborators, we compiled data on the response of plant biomass to mycorrhizal fungal inoculation, including meta-analysis metrics and 24 additional explanatory variables that describe the biotic and abiotic context of each study. We also include phylogenetic trees for all plants and fungi in the database. To our knowledge, MycoDB is the largest ecological meta-analysis database. We aim to share these data to highlight significant gaps in mycorrhizal research and encourage synthesis to explore the ecological and evolutionary generalities that govern mycorrhizal functioning in ecosystems. PMID:27163938
Long-term cycles in the history of life: periodic biodiversity in the paleobiology database.
Melott, Adrian L
2008-01-01
Time series analysis of fossil biodiversity of marine invertebrates in the Paleobiology Database (PBDB) shows a significant periodicity at approximately 63 My, in agreement with previous analyses based on the Sepkoski database. I discuss how this result did not appear in a previous analysis of the PBDB. The existence of the 63 My periodicity, despite very different treatment of systematic error in the PBDB and Sepkoski databases, strongly argues for consideration of its reality in the fossil record. Cross-spectral analysis of the two datasets finds that the 62 My periodicities coincide in phase to within 1.6 My, which is smaller than the errors in either measurement. Consequently, the two data sets not only contain the same strong periodicity, but its peaks and valleys closely correspond in time. Two other spectral peaks appear in the PBDB analysis, but appear to be artifacts associated with detrending and with the increased interval length. Sampling-standardization procedures implemented by the PBDB collaboration suggest that the signal is not an artifact of sampling bias. Further work should focus on finding the cause of the 62 My periodicity.
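The spectral approach described above (detrend a binned diversity curve, then look for a dominant period) can be illustrated with a short, self-contained sketch. The series below is synthetic, not PBDB data, and the code is only meant to show the method, not to reproduce the paper's analysis.

```python
import numpy as np
from scipy.signal import detrend, periodogram

# Synthetic stand-in for a binned diversity curve: 1 My bins over 500 My with
# a long-term trend, a weak ~62 My cycle and noise (illustration only).
t = np.arange(0, 500, 1.0)                       # time in My
diversity = 0.5 * t + 20 * np.sin(2 * np.pi * t / 62.0) + np.random.normal(0, 10, t.size)

# Remove the long-term trend, then estimate the power spectrum.
resid = detrend(diversity, type='linear')
freqs, power = periodogram(resid, fs=1.0)        # fs = 1 sample per My

peak = freqs[np.argmax(power[1:]) + 1]           # skip the zero-frequency bin
print(f"Dominant period ~ {1.0 / peak:.1f} My")
```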
Distributed data collection for a database of radiological image interpretations
NASA Astrophysics Data System (ADS)
Long, L. Rodney; Ostchega, Yechiam; Goh, Gin-Hua; Thoma, George R.
1997-01-01
The National Library of Medicine, in collaboration with the National Center for Health Statistics and the National Institute for Arthritis and Musculoskeletal and Skin Diseases, has built a system for collecting radiological interpretations for a large set of x-ray images acquired as part of the data gathered in the second National Health and Nutrition Examination Survey. This system is capable of delivering across the Internet 5- and 10-megabyte x-ray images to Sun workstations equipped with X Window based 2048 X 2560 image displays, for the purpose of having these images interpreted for the degree of presence of particular osteoarthritic conditions in the cervical and lumbar spines. The collected interpretations can then be stored in a database at the National Library of Medicine, under control of the Illustra DBMS. This system is a client/server database application which integrates (1) distributed server processing of client requests, (2) a customized image transmission method for faster Internet data delivery, (3) distributed client workstations with high resolution displays, image processing functions and an on-line digital atlas, and (4) relational database management of the collected data.
A tuberculosis biomarker database: the key to novel TB diagnostics.
Yerlikaya, Seda; Broger, Tobias; MacLean, Emily; Pai, Madhukar; Denkinger, Claudia M
2017-03-01
New diagnostic innovations for tuberculosis (TB), including point-of-care solutions, are critical to reach the goals of the End TB Strategy. However, despite decades of research, numerous reports on new biomarker candidates, and significant investment, no well-performing, simple and rapid TB diagnostic test is yet available on the market, and the search for accurate, non-DNA biomarkers remains a priority. To help overcome this 'biomarker pipeline problem', FIND and partners are working on the development of a well-curated and user-friendly TB biomarker database. The web-based database will enable the dynamic tracking of evidence surrounding biomarker candidates in relation to target product profiles (TPPs) for needed TB diagnostics. It will be able to accommodate raw datasets and facilitate the verification of promising biomarker candidates and the identification of novel biomarker combinations. As such, the database will simplify data and knowledge sharing, empower collaboration, help in the coordination of efforts and allocation of resources, streamline the verification and validation of biomarker candidates, and ultimately lead to an accelerated translation into clinically useful tools. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
MycoDB, a global database of plant response to mycorrhizal fungi
NASA Astrophysics Data System (ADS)
Chaudhary, V. Bala; Rúa, Megan A.; Antoninka, Anita; Bever, James D.; Cannon, Jeffery; Craig, Ashley; Duchicela, Jessica; Frame, Alicia; Gardes, Monique; Gehring, Catherine; Ha, Michelle; Hart, Miranda; Hopkins, Jacob; Ji, Baoming; Johnson, Nancy Collins; Kaonongbua, Wittaya; Karst, Justine; Koide, Roger T.; Lamit, Louis J.; Meadow, James; Milligan, Brook G.; Moore, John C.; Pendergast, Thomas H., IV; Piculell, Bridget; Ramsby, Blake; Simard, Suzanne; Shrestha, Shubha; Umbanhowar, James; Viechtbauer, Wolfgang; Walters, Lawrence; Wilson, Gail W. T.; Zee, Peter C.; Hoeksema, Jason D.
2016-05-01
Plants form belowground associations with mycorrhizal fungi in one of the most common symbioses on Earth. However, few large-scale generalizations exist for the structure and function of mycorrhizal symbioses, as the nature of this relationship varies from mutualistic to parasitic and is largely context-dependent. We announce the public release of MycoDB, a database of 4,010 studies (from 438 unique publications) to aid in multi-factor meta-analyses elucidating the ecological and evolutionary context in which mycorrhizal fungi alter plant productivity. Over 10 years with nearly 80 collaborators, we compiled data on the response of plant biomass to mycorrhizal fungal inoculation, including meta-analysis metrics and 24 additional explanatory variables that describe the biotic and abiotic context of each study. We also include phylogenetic trees for all plants and fungi in the database. To our knowledge, MycoDB is the largest ecological meta-analysis database. We aim to share these data to highlight significant gaps in mycorrhizal research and encourage synthesis to explore the ecological and evolutionary generalities that govern mycorrhizal functioning in ecosystems.
Cahn, Marjorie A.; Auston, Ione; Selden, Catherine R.; Cogdill, Keith; Baker, Stacy; Cavanaugh, Debra; Elliott, Sterling; Foster, Allison J.; Leep, Carolyn J.; Perez, Debra Joy; Pomietto, Blakely R.
2007-01-01
Objective: The paper provides a complete accounting of the Partners in Information Access for the Public Health Workforce (Partners) initiative since its inception in 1997, including antecedent activities since 1995. Methods: A descriptive overview is provided that is based on a review of meeting summaries, published reports, Websites, project reports, databases, usage statistics, and personal experiences from offices in the National Library of Medicine (NLM), six organizations that collaborate formally with NLM on the Partners initiative, and one outside funding partner. Results: With ten years of experience, the initiative is an effective and unique public-private collaboration that builds on the strengths and needs of the organizations that are involved and the constituencies that they serve. Partners-supported and sponsored projects include satellite broadcasts or Webcasts, training initiatives, Web resource development, a collection of historical literature, and strategies for workforce enumeration and expansion of public health systems research, which provide excellent examples of the benefits realized from collaboration between the public health community and health sciences libraries. Conclusions: With continued funding, existing and new Partners-sponsored projects will be able to fulfill many public health information needs. This collaboration provides excellent opportunities to strengthen the partnership between library science and public health in the use of health information and tools for purposes of improving and protecting the public's health. PMID:17641765
Bala, Adarsh; Gupta, B M
2010-01-01
This study analyses the research output in India in neurosciences during the period 1999-2008 and the analyses included research growth, rank, global publications' share, citation impact, share of international collaborative papers and major collaborative partner countries and patterns of research communication in most productive journals. It also analyses the characteristics of most productive institutions, authors and high-cited papers. The publication output and impact of India is also compared with China, Brazil and South Korea. Scopus Citation database was used for retrieving the publications' output of India and other countries in neurosciences during 1999-2008. India's global publications' share in neurosciences during the study period was 0.99% (with 4503 papers) and it ranked 21st among the top 26 countries in neurosciences. The average annual publication growth rate was 11.37%, shared 17.34% of international collaborative papers and the average citation per paper was 4.21. India was far behind China, Brazil and South Korea in terms of publication output, citation quality and share of international collaborative papers in neurosciences. India is far behind in terms of publication output, citation quality and share of international collaborative papers in neurosciences when compared to other countries with an emerging economy. There is an urgent need to substantially increase the research activities in the field of neurosciences in India.
Cahn, Marjorie A; Auston, Ione; Selden, Catherine R; Cogdill, Keith; Baker, Stacy; Cavanaugh, Debra; Elliott, Sterling; Foster, Allison J; Leep, Carolyn J; Perez, Debra Joy; Pomietto, Blakely R
2007-07-01
The paper provides a complete accounting of the Partners in Information Access for the Public Health Workforce (Partners) initiative since its inception in 1997, including antecedent activities since 1995. A descriptive overview is provided that is based on a review of meeting summaries, published reports, Websites, project reports, databases, usage statistics, and personal experiences from offices in the National Library of Medicine (NLM), six organizations that collaborate formally with NLM on the Partners initiative, and one outside funding partner. With ten years of experience, the initiative is an effective and unique public-private collaboration that builds on the strengths and needs of the organizations that are involved and the constituencies that they serve. Partners-supported and sponsored projects include satellite broadcasts or Webcasts, training initiatives, Web resource development, a collection of historical literature, and strategies for workforce enumeration and expansion of public health systems research, which provide excellent examples of the benefits realized from collaboration between the public health community and health sciences libraries. With continued funding, existing and new Partners-sponsored projects will be able to fulfill many public health information needs. This collaboration provides excellent opportunities to strengthen the partnership between library science and public health in the use of health information and tools for purposes of improving and protecting the public's health.
Development Of New Databases For Tsunami Hazard Analysis In California
NASA Astrophysics Data System (ADS)
Wilson, R. I.; Barberopoulou, A.; Borrero, J. C.; Bryant, W. A.; Dengler, L. A.; Goltz, J. D.; Legg, M.; McGuire, T.; Miller, K. M.; Real, C. R.; Synolakis, C.; Uslu, B.
2009-12-01
The California Geological Survey (CGS) has partnered with other tsunami specialists to produce two statewide databases to facilitate the evaluation of tsunami hazard products for both emergency response and land-use planning and development. A robust, State-run tsunami deposit database is being developed that complements and expands on existing databases from the National Geophysical Data Center (global) and the USGS (Cascadia). Whereas these existing databases focus on references or individual tsunami layers, the new State-maintained database concentrates on the location and contents of individual borings/trenches that sample tsunami deposits. These data provide an important observational benchmark for evaluating the results of tsunami inundation modeling. CGS is collaborating with and sharing the database entry form with other states to encourage its continued development beyond California’s coastline so that historic tsunami deposits can be evaluated on a regional basis. CGS is also developing an internet-based, tsunami source scenario database and forum where tsunami source experts and hydrodynamic modelers can discuss the validity of tsunami sources and their contribution to hazard assessments for California and other coastal areas bordering the Pacific Ocean. The database includes all distant and local tsunami sources relevant to California starting with the forty scenarios evaluated during the creation of the recently completed statewide series of tsunami inundation maps for emergency response planning. Factors germane to probabilistic tsunami hazard analyses (PTHA), such as event histories and recurrence intervals, are also addressed in the database and discussed in the forum. Discussions with other tsunami source experts will help CGS determine what additional scenarios should be considered in PTHA for assessing the feasibility of generating products of value to local land-use planning and development.
The Global Earthquake Model - Past, Present, Future
NASA Astrophysics Data System (ADS)
Smolka, Anselm; Schneider, John; Stein, Ross
2014-05-01
The Global Earthquake Model (GEM) is a unique collaborative effort that aims to provide organizations and individuals with tools and resources for transparent assessment of earthquake risk anywhere in the world. By pooling data, knowledge and people, GEM acts as an international forum for collaboration and exchange. Sharing of data and risk information, best practices, and approaches across the globe are key to assessing risk more effectively. Through consortium driven global projects, open-source IT development and collaborations with more than 10 regions, leading experts are developing unique global datasets, best practice, open tools and models for seismic hazard and risk assessment. The year 2013 has seen the completion of ten global data sets or components addressing various aspects of earthquake hazard and risk, as well as two GEM-related, but independently managed regional projects, SHARE and EMME. Notably, the International Seismological Centre (ISC) led the development of a new ISC-GEM global instrumental earthquake catalogue, which was made publicly available in early 2013. It has set a new standard for global earthquake catalogues and has found widespread acceptance and application in the global earthquake community. By the end of 2014, GEM's OpenQuake computational platform will provide the OpenQuake hazard/risk assessment software and integrate all GEM data and information products. The public release of OpenQuake is planned for the end of 2014, and will comprise the following datasets and models: • ISC-GEM Instrumental Earthquake Catalogue (released January 2013) • Global Earthquake History Catalogue [1000-1903] • Global Geodetic Strain Rate Database and Model • Global Active Fault Database • Tectonic Regionalisation Model • Global Exposure Database • Buildings and Population Database • Earthquake Consequences Database • Physical Vulnerabilities Database • Socio-Economic Vulnerability and Resilience Indicators • Seismic Source Models • Ground Motion (Attenuation) Models • Physical Exposure Models • Physical Vulnerability Models • Composite Index Models (social vulnerability, resilience, indirect loss) • Repository of national hazard models • Uniform global hazard model. Armed with these tools and databases, stakeholders worldwide will then be able to calculate, visualise and investigate earthquake risk, capture new data and share their findings for joint learning. Earthquake hazard information will be able to be combined with data on exposure (buildings, population) and data on their vulnerability, for risk assessment around the globe. Furthermore, for a truly integrated view of seismic risk, users will be able to add social vulnerability and resilience indices and estimate the costs and benefits of different risk management measures. Having finished its first five-year Work Program at the end of 2013, GEM has entered into its second five-year Work Program 2014-2018. Beyond maintaining and enhancing the products developed in Work Program 1, the second phase will have a stronger focus on regional hazard and risk activities, and on seeing GEM products used for risk assessment and risk management practice at regional, national and local scales. Furthermore, GEM intends to partner with similar initiatives underway for other natural perils, which together are needed to meet the need for advanced risk assessment methods, tools and data to underpin global disaster risk reduction efforts under the Hyogo Framework for Action #2 to be launched in Sendai/Japan in spring 2015.
Wittenberg, Yvette; Kwekkeboom, Rick; Staaks, Janneke; Verhoeff, Arnoud; de Boer, Alice
2017-12-18
This scoping review focuses on the views of informal caregivers regarding the division of care responsibilities between citizens, governments and professionals and the question of to what extent professionals take these views into account during collaboration with them. In Europe, the normative discourse on informal care has changed. Retreating governments and decreasing residential care increase the need to enhance the collaboration between informal caregivers and professionals. Professionals are assumed to adequately address the needs and wishes of informal caregivers, but little is known about informal caregivers' views on the division of care responsibilities. We performed a scoping review and searched for relevant studies published between 2000 and September 1, 2016 in seven databases. Thirteen papers were included, all published in Western countries. Most included papers described research with a qualitative research design. Based on the opinion of informal caregivers, we conclude that professionals do not seem to explicitly take into account the views of informal caregivers about the division of responsibilities during their collaboration with them. Roles of the informal caregivers and professionals are not always discussed and the division of responsibilities sometimes seems unclear. Acknowledging the role and expertise of informal caregivers seems to facilitate good collaboration, as well as attitudes such as professionals being open and honest, proactive and compassionate. Inflexible structures and services hinder good collaboration. Asking informal caregivers what their opinion is about the division of responsibilities could improve clarity about the care that is given by both informal caregivers and professionals and could improve their collaboration. Educational programs in social work, health and allied health professions should put more emphasis on this specific characteristic of collaboration. © 2017 The Authors. Health and Social Care in the Community Published by John Wiley & Sons Ltd.
Petti, C. A.; Arnold, C.; Miro, J. M.; Pericàs, J. M.; Garcia de la Maria, C.; Kanafani, Z.; Baddley, J.; Wray, D.; Klein, J. L.; Delahaye, F.; Fernandez-Hidalgo, N.; Hannan, M. M.; Murdoch, D.; Bayer, A.; Chu, V. H.
2016-01-01
The phenotypic expression of methicillin resistance among coagulase-negative staphylococci (CoNS) is heterogeneous regardless of the presence of the mecA gene. The potential discordance between phenotypic and genotypic results has led to the use of vancomycin for the treatment of CoNS infective endocarditis (IE) regardless of methicillin MIC values. In this study, we assessed the outcome of methicillin-susceptible CoNS IE among patients treated with antistaphylococcal β-lactams (ASB) versus vancomycin (VAN) in a multicenter cohort study based on data from the International Collaboration on Endocarditis (ICE) Prospective Cohort Study (PCS) and the ICE-Plus databases. The ICE-PCS database contains prospective data on 5,568 patients with IE collected between 2000 and 2006, while the ICE-Plus database contains prospective data on 2,019 patients with IE collected between 2008 and 2012. The primary endpoint was in-hospital mortality. Secondary endpoints were 6-month mortality and survival time. Of the 7,587 patients in the two databases, there were 280 patients with methicillin-susceptible CoNS IE. Detailed treatment and outcome data were available for 180 patients. Eighty-eight patients received ASB, while 36 were treated with VAN. In-hospital mortality (19.3% versus 11.1%; P = 0.27), 6-month mortality (31.6% versus 25.9%; P = 0.58), and survival time after discharge (P = 0.26) did not significantly differ between the two cohorts. Cox regression analysis did not show any significant association between ASB use and the survival time (hazard ratio, 1.7; P = 0.22); this result was not affected by adjustment for confounders. This study provides no evidence for a difference in outcome with the use of VAN versus ASB for methicillin-susceptible CoNS IE. PMID:27527083
Global Tsunami Database: Adding Geologic Deposits, Proxies, and Tools
NASA Astrophysics Data System (ADS)
Brocko, V. R.; Varner, J.
2007-12-01
A result of collaboration between NOAA's National Geophysical Data Center (NGDC) and the Cooperative Institute for Research in the Environmental Sciences (CIRES), the Global Tsunami Database includes instrumental records, human observations, and now, information inferred from the geologic record. Deep Ocean Assessment and Reporting of Tsunamis (DART) data, historical reports, and information gleaned from published tsunami deposit research build a multi-faceted view of tsunami hazards and their history around the world. Tsunami history provides clues to what might happen in the future, including frequency of occurrence and maximum wave heights. However, instrumental and written records commonly span too little time to reveal the full range of a region's tsunami hazard. The sedimentary deposits of tsunamis, identified with the aid of modern analogs, increasingly complement instrumental and human observations. By adding the component of tsunamis inferred from the geologic record, the Global Tsunami Database extends the record of tsunamis backward in time. Deposit locations, their estimated age and descriptions of the deposits themselves fill in the tsunami record. Tsunamis inferred from proxies, such as evidence for coseismic subsidence, are included to estimate recurrence intervals, but are flagged to highlight the absence of a physical deposit. Authors may submit their own descriptions and upload digital versions of publications. Users may sort by any populated field, including event, location, region, age of deposit, author, publication type (extract information from peer reviewed publications only, if you wish), grain size, composition, presence/absence of plant material. Users may find tsunami deposit references for a given location, event or author; search for particular properties of tsunami deposits; and even identify potential collaborators. Users may also download public-domain documents. Data and information may be viewed using tools designed to extract and display data from the Oracle database (selection forms, Web Map Services, and Web Feature Services). In addition, the historic tsunami archive (along with related earthquakes and volcanic eruptions) is available in KML (Keyhole Markup Language) format for use with Google Earth and similar geo-viewers.
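The abstract notes that the historic tsunami archive is distributed in KML for Google Earth; the sketch below writes a single placemark in that format for a hypothetical deposit record. The field names and values are illustrative only, not the actual NGDC/CIRES database schema.

```python
# Minimal KML writer for one tsunami-deposit record (fields are invented,
# not the NGDC/CIRES schema).
record = {"name": "Example deposit", "lat": 46.98, "lon": -124.15,
          "description": "Sandy tsunami deposit, estimated age ~300 yr BP"}

kml = f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>{record['name']}</name>
    <description>{record['description']}</description>
    <Point><coordinates>{record['lon']},{record['lat']},0</coordinates></Point>
  </Placemark>
</kml>"""

with open("deposit.kml", "w", encoding="utf-8") as fh:
    fh.write(kml)   # KML coordinates are ordered lon,lat,alt
```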
Semantically Enabling Knowledge Representation of Metamorphic Petrology Data
NASA Astrophysics Data System (ADS)
West, P.; Fox, P. A.; Spear, F. S.; Adali, S.; Nguyen, C.; Hallett, B. W.; Horkley, L. K.
2012-12-01
More and more metamorphic petrology data is being collected around the world, and is now being organized together into different virtual data portals by means of virtual organizations. For example, there is the virtual data portal Petrological Database (PetDB, http://www.petdb.org) of the Ocean Floor that is organizing scientific information about geochemical data of ocean floor igneous and metamorphic rocks; and also The Metamorphic Petrology Database (MetPetDB, http://metpetdb.rpi.edu) that is being created by a global community of metamorphic petrologists in collaboration with software engineers and data managers at Rensselaer Polytechnic Institute. The current focus is to provide the ability for scientists and researchers to register their data and search the databases for information regarding sample collections. What we present here is the next step in evolution of the MetPetDB portal, utilizing semantically enabled features such as discovery, data casting, faceted search, knowledge representation, and linked data as well as organizing information about the community and collaboration within the virtual community itself. We take the information that is currently represented in a relational database and make it available through web services, SPARQL endpoints, semantic and triple-stores where inferencing is enabled. We will be leveraging research that has taken place in virtual observatories, such as the Virtual Solar Terrestrial Observatory (VSTO) and the Biological and Chemical Oceanography Data Management Office (BCO-DMO); vocabulary work done in various communities such as Observations and Measurements (ISO 19156), FOAF (Friend of a Friend), Bibo (Bibliography Ontology), and domain specific ontologies; enabling provenance traces of samples and subsamples using the different provenance ontologies; and providing the much needed linking of data from the various research organizations into a common, collaborative virtual observatory. In addition to better representing and presenting the actual data, we also look to organize and represent the knowledge information and expertise behind the data. Domain experts hold a lot of knowledge in their minds, in their presentations and publications, and elsewhere. Not only is this a technical issue, this is also a social issue in that we need to be able to encourage the domain experts to share their knowledge in a way that can be searched and queried over. With this additional focus in MetPetDB the site can be used more efficiently by other domain experts, but can also be utilized by non-specialists as well in order to educate people of the importance of the work being done as well as enable future domain experts.
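As a sketch of the kind of semantically enabled access described above, the code below sends a SPARQL query to an endpoint and walks the standard SPARQL JSON results. The endpoint URL, class and predicates are placeholders, not MetPetDB's actual vocabulary or service address.

```python
import requests

# Hypothetical SPARQL endpoint and vocabulary; the real MetPetDB service
# names and predicates may differ.
ENDPOINT = "http://metpetdb.example.org/sparql"
QUERY = """
PREFIX dcterms: <http://purl.org/dc/terms/>
SELECT ?sample ?title WHERE {
  ?sample a <http://metpetdb.example.org/schema#Sample> ;
          dcterms:title ?title .
} LIMIT 10
"""

resp = requests.get(
    ENDPOINT,
    params={"query": QUERY},
    headers={"Accept": "application/sparql-results+json"},
    timeout=30,
)
resp.raise_for_status()
for row in resp.json()["results"]["bindings"]:
    print(row["sample"]["value"], "-", row["title"]["value"])
```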
Chien, Tsair-Wei; Chang, Yu; Wang, Hsien-Yi
2018-02-01
Many researchers have used the National Health Insurance database to publish medical papers, which are often retrospective, population-based cohort studies. However, the authors' research domains and academic characteristics are still unclear. By searching the PubMed database (Pubmed.com), we used the keywords [Taiwan] and [National Health Insurance Research Database], then downloaded 2913 articles published from 1995 to 2017. Social network analysis (SNA), the Gini coefficient, and Google Maps were applied to these data to visualize: the most productive author; the pattern of coauthor collaboration teams; and the author's research domain denoted by abstract keywords and Pubmed MESH (medical subject heading) terms. Utilizing the 2913 papers from Taiwan's National Health Insurance database, we chose the top 10 research teams shown on Google Maps and analyzed one author (Dr. Kao) who published 149 papers in the database in 2015. In the past 15 years, we found Dr. Kao had 2987 connections with other coauthors from 13 research teams. The co-occurring abstract keywords with the highest frequency are cohort study and National Health Insurance Research Database. The most frequently co-occurring MESH terms are tomography, X-ray computed, and positron-emission tomography. The concentration of the author's research in a distinct domain is very low (Gini < 0.40). SNA incorporated with Google Maps and the Gini coefficient provides insight into the relationships between entities. The results obtained in this study can be applied for a comprehensive understanding of other productive authors in the field of academics.
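The Gini coefficient used above to gauge how concentrated an author's output is across topics can be computed directly from keyword counts. The sketch below uses invented counts and the standard rank-based formula; it illustrates the idea only and is not the exact procedure of the study.

```python
def gini(values):
    """Gini coefficient of non-negative counts (0 = perfectly even spread)."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Standard formula based on the ranked cumulative distribution.
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2.0 * cum) / (n * total) - (n + 1.0) / n

# Hypothetical keyword counts across one author's papers.
keyword_counts = [42, 37, 30, 25, 20, 18, 15, 12, 10, 8]
print(f"Gini = {gini(keyword_counts):.2f}")  # < 0.40 suggests a diffuse research domain
```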
Longitudinal data for interdisciplinary ageing research. Design of the Linnaeus Database.
Malmberg, Gunnar; Nilsson, Lars-Göran; Weinehall, Lars
2010-11-01
To allow for interdisciplinary research on the relations between socioeconomic conditions and health in the ageing population, a new anonymized longitudinal database - the Linnaeus Database - has been developed at the Centre for Population Studies at Umeå University. This paper presents the database and its research potential. Using the Swedish personal numbers the researchers have, in collaboration with Statistics Sweden and the National Board for Health and Welfare, linked individual records from Swedish register data on death causes, hospitalization and various socioeconomic conditions with two databases - Betula and VIP (Västerbottens Intervention Programme) - previously developed by the researchers at Umeå University. Whereas Betula includes rich information about e.g. cognitive functions, VIP contains information about e.g. lifestyle and health indicators. The Linnaeus Database includes annually updated socioeconomic information from Statistics Sweden registers for all registered residents of Sweden for the period 1990 to 2006, in total 12,066,478. The information from the Betula includes 4,500 participants from the city of Umeå and VIP includes data for almost 90,000 participants. Both datasets include cross-sectional as well as longitudinal information. Due to the coverage and rich information, the Linnaeus Database allows for a variety of longitudinal studies on the relations between, for instance, socioeconomic conditions, health, lifestyle, cognition, family networks, migration and working conditions in ageing cohorts. By joining various datasets developed in different disciplinary traditions new possibilities for interdisciplinary research on ageing emerge.
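The register linkage the Linnaeus Database relies on amounts to joining datasets on a pseudonymized personal identifier. The minimal, purely illustrative sketch below uses invented column names and values, not the actual Linnaeus layout.

```python
import pandas as pd

# Illustrative record linkage on a pseudonymized person ID; columns and values
# are hypothetical, not the Linnaeus Database schema.
register = pd.DataFrame({"pid": [1, 2, 3],
                         "income_2006": [210, 185, 240],
                         "education": ["secondary", "tertiary", "primary"]})
betula = pd.DataFrame({"pid": [2, 3, 4],
                       "episodic_memory": [52.1, 47.8, 55.3]})

linked = register.merge(betula, on="pid", how="inner")  # persons present in both sources
print(linked)
```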
Kobayashi, Norio; Ishii, Manabu; Takahashi, Satoshi; Mochizuki, Yoshiki; Matsushima, Akihiro; Toyoda, Tetsuro
2011-01-01
Global cloud frameworks for bioinformatics research databases become huge and heterogeneous; solutions face various diametric challenges comprising cross-integration, retrieval, security and openness. To address this, as of March 2011 organizations including RIKEN published 192 mammalian, plant and protein life sciences databases having 8.2 million data records, integrated as Linked Open or Private Data (LOD/LPD) using SciNetS.org, the Scientists' Networking System. The huge quantity of linked data this database integration framework covers is based on the Semantic Web, where researchers collaborate by managing metadata across public and private databases in a secured data space. This outstripped the data query capacity of existing interface tools like SPARQL. Actual research also requires specialized tools for data analysis using raw original data. To solve these challenges, in December 2009 we developed the lightweight Semantic-JSON interface to access each fragment of linked and raw life sciences data securely under the control of programming languages popularly used by bioinformaticians such as Perl and Ruby. Researchers successfully used the interface across 28 million semantic relationships for biological applications including genome design, sequence processing, inference over phenotype databases, full-text search indexing and human-readable contents like ontology and LOD tree viewers. Semantic-JSON services of SciNetS.org are provided at http://semanticjson.org. PMID:21632604
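Because Semantic-JSON is described as a lightweight, language-agnostic interface, the same style of access shown for Perl and Ruby can be sketched in Python. The URL pattern, parameters and response keys below are assumptions for illustration only; the actual semanticjson.org service may differ or may no longer be reachable.

```python
import requests

# Hypothetical Semantic-JSON-style request; URL pattern, parameters and the
# 'links' response key are assumptions, not the documented SciNetS.org API.
url = "http://semanticjson.org/api/item"
params = {"id": "example:record123", "format": "json"}

resp = requests.get(url, params=params, timeout=30)
resp.raise_for_status()
record = resp.json()

# Walk linked records one level deep, if the response exposes them this way.
for link in record.get("links", []):
    print(link)
```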
WikiPathways: a multifaceted pathway database bridging metabolomics to other omics research.
Slenter, Denise N; Kutmon, Martina; Hanspers, Kristina; Riutta, Anders; Windsor, Jacob; Nunes, Nuno; Mélius, Jonathan; Cirillo, Elisa; Coort, Susan L; Digles, Daniela; Ehrhart, Friederike; Giesbertz, Pieter; Kalafati, Marianthi; Martens, Marvin; Miller, Ryan; Nishida, Kozo; Rieswijk, Linda; Waagmeester, Andra; Eijssen, Lars M T; Evelo, Chris T; Pico, Alexander R; Willighagen, Egon L
2018-01-04
WikiPathways (wikipathways.org) captures the collective knowledge represented in biological pathways. By providing a database in a curated, machine readable way, omics data analysis and visualization is enabled. WikiPathways and other pathway databases are used to analyze experimental data by research groups in many fields. Due to the open and collaborative nature of the WikiPathways platform, our content keeps growing and is getting more accurate, making WikiPathways a reliable and rich pathway database. Previously, however, the focus was primarily on genes and proteins, leaving many metabolites with only limited annotation. Recent curation efforts focused on improving the annotation of metabolism and metabolic pathways by associating unmapped metabolites with database identifiers and providing more detailed interaction knowledge. Here, we report the outcomes of the continued growth and curation efforts, such as a doubling of the number of annotated metabolite nodes in WikiPathways. Furthermore, we introduce an OpenAPI documentation of our web services and the FAIR (Findable, Accessible, Interoperable and Reusable) annotation of resources to increase the interoperability of the knowledge encoded in these pathways and experimental omics data. New search options, monthly downloads, more links to metabolite databases, and new portals make pathway knowledge more effortlessly accessible to individual researchers and research communities. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
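As an illustration of programmatic access to the pathway content described above, the sketch below calls the WikiPathways REST web service. The endpoint and response field names follow commonly documented usage but are assumptions here and may have changed with the OpenAPI update.

```python
import requests

# Query the WikiPathways web service for pathways matching a metabolite-related
# term. Endpoint and field names are assumptions and may have changed.
url = "https://webservice.wikipathways.org/findPathwaysByText"
resp = requests.get(url, params={"query": "cholesterol", "format": "json"}, timeout=30)
resp.raise_for_status()

for hit in resp.json().get("result", []):
    print(hit.get("id"), hit.get("name"), hit.get("species"))
```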
SInCRe—structural interactome computational resource for Mycobacterium tuberculosis
Metri, Rahul; Hariharaputran, Sridhar; Ramakrishnan, Gayatri; Anand, Praveen; Raghavender, Upadhyayula S.; Ochoa-Montaño, Bernardo; Higueruelo, Alicia P.; Sowdhamini, Ramanathan; Chandra, Nagasuma R.; Blundell, Tom L.; Srinivasan, Narayanaswamy
2015-01-01
We have developed an integrated database for Mycobacterium tuberculosis H37Rv (Mtb) that collates information on protein sequences, domain assignments, functional annotation and 3D structural information along with protein–protein and protein–small molecule interactions. SInCRe (Structural Interactome Computational Resource) was developed out of the CamBan (Cambridge and Bangalore) collaboration. The motivation for development of this database is to provide an integrated platform to allow easy access to and interpretation of data and results obtained by all the groups in CamBan in the field of Mtb informatics. In-house algorithms and databases developed independently by various academic groups in CamBan are used to generate Mtb-specific datasets and are integrated in this database to provide a structural dimension to studies on tuberculosis. The SInCRe database readily provides information on identification of functional domains, genome-scale modelling of structures of Mtb proteins and characterization of the small-molecule binding sites within Mtb. The resource also provides structure-based function annotation, information on small-molecule binders including FDA (Food and Drug Administration)-approved drugs, protein–protein interactions (PPIs) and natural compounds that potentially bind to pathogen proteins and result in the weakening or elimination of host–pathogen protein–protein interactions. Together they provide prerequisites for identification of off-target binding. Database URL: http://proline.biochem.iisc.ernet.in/sincre PMID:26130660
FDA toxicity databases and real-time data entry.
Arvidson, Kirk B
2008-11-15
Structure-searchable electronic databases are valuable new tools that are assisting the FDA in its mission to promptly and efficiently review incoming submissions for regulatory approval of new food additives and food contact substances. The Center for Food Safety and Applied Nutrition's Office of Food Additive Safety (CFSAN/OFAS), in collaboration with Leadscope, Inc., is consolidating genetic toxicity data submitted in food additive petitions from the 1960s to the present day. The Center for Drug Evaluation and Research, Office of Pharmaceutical Science's Informatics and Computational Safety Analysis Staff (CDER/OPS/ICSAS) is separately gathering similar information from their submissions. Presently, these data are distributed in various locations such as paper files, microfiche, and non-standardized toxicology memoranda. The organization of the data into a consistent, searchable format will reduce paperwork, expedite the toxicology review process, and provide valuable information to industry that is currently available only to the FDA. Furthermore, by combining chemical structures with genetic toxicity information, biologically active moieties can be identified and used to develop quantitative structure-activity relationship (QSAR) modeling and testing guidelines. Additionally, chemicals devoid of toxicity data can be compared to known structures, allowing for improved safety review through the identification and analysis of structural analogs. Four database frameworks have been created: bacterial mutagenesis, in vitro chromosome aberration, in vitro mammalian mutagenesis, and in vivo micronucleus. Controlled vocabularies for these databases have been established. The four separate genetic toxicity databases are compiled into a single, structurally-searchable database for easy accessibility of the toxicity information. Beyond the genetic toxicity databases described here, additional databases for subchronic, chronic, and teratogenicity studies have been prepared.
Should we search Chinese biomedical databases when performing systematic reviews?
Cohen, Jérémie F; Korevaar, Daniël A; Wang, Junfeng; Spijker, René; Bossuyt, Patrick M
2015-03-06
Chinese biomedical databases contain a large number of publications available to systematic reviewers, but it is unclear whether they are used for synthesizing the available evidence. We report a case of two systematic reviews on the accuracy of anti-cyclic citrullinated peptide for diagnosing rheumatoid arthritis. In one of these, the authors did not search Chinese databases; in the other, they did. We additionally assessed the extent to which Cochrane reviewers have searched Chinese databases in a systematic overview of the Cochrane Library (inception to 2014). The two diagnostic reviews included a total of 269 unique studies, but only 4 studies were included in both reviews. The first review included five studies published in the Chinese language (out of 151) while the second included 114 (out of 118). The summary accuracy estimates from the two reviews were comparable. Only 243 of the published 8,680 Cochrane reviews (less than 3%) searched one or more of the five major Chinese databases. These Chinese databases index about 2,500 journals, of which less than 6% are also indexed in MEDLINE. All 243 Cochrane reviews evaluated an intervention, 179 (74%) had at least one author with a Chinese affiliation; 118 (49%) addressed a topic in complementary or alternative medicine. Although searching Chinese databases may lead to the identification of a large amount of additional clinical evidence, Cochrane reviewers have rarely included them in their search strategy. We encourage future initiatives to evaluate more systematically the relevance of searching Chinese databases, as well as collaborative efforts to allow better incorporation of Chinese resources in systematic reviews.
ABrowse--a customizable next-generation genome browser framework.
Kong, Lei; Wang, Jun; Zhao, Shuqi; Gu, Xiaocheng; Luo, Jingchu; Gao, Ge
2012-01-05
With the rapid growth of genome sequencing projects, genome browsers are becoming indispensable, not only as visualization systems but also as interactive platforms to support open data access and collaborative work. Thus a customizable genome browser framework with rich functions and flexible configuration is needed to facilitate various genome research projects. Based on next-generation web technologies, we have developed a general-purpose genome browser framework, ABrowse, which provides an interactive browsing experience, open data access and collaborative work support. By supporting Google-map-like smooth navigation, ABrowse offers end users a highly interactive browsing experience. To facilitate further data analysis, multiple data access approaches are supported for external platforms to retrieve data from ABrowse. To promote collaborative work, an online user-space is provided for end users to create, store and share comments, annotations and landmarks. For data providers, ABrowse is highly customizable and configurable. The framework provides a set of utilities to import annotation data conveniently. To build ABrowse on existing annotation databases, data providers can specify SQL statements according to the database schema, and customized pages for detailed display of annotation entries can be easily plugged in. For developers, new drawing strategies can be integrated into ABrowse for new types of annotation data. In addition, a standard web service is provided for remote data retrieval, providing an underlying machine-oriented programming interface for open data access. The ABrowse framework is valuable for end users, data providers and developers by providing rich user functions and flexible customization approaches. The source code is published under GNU Lesser General Public License v3.0 and is accessible at http://www.abrowse.org/. To demonstrate all the features of ABrowse, a live demo for the Arabidopsis thaliana genome has been built at http://arabidopsis.cbi.edu.cn/.
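The abstract notes that data providers can map an existing annotation schema onto the browser by supplying SQL statements; the sketch below shows that idea in miniature. The table, column names and SQLite backend are hypothetical, not ABrowse's actual (Java-based) configuration format.

```python
import sqlite3

# A hypothetical provider-specified SQL statement that maps an existing
# annotation schema onto the fields a browser track needs.
FEATURE_SQL = """
SELECT name, chromosome, start_pos, end_pos, strand
FROM gene_annotation
WHERE chromosome = ? AND end_pos >= ? AND start_pos <= ?
"""

def fetch_features(db_path, chrom, start, end):
    """Return annotation features overlapping a region, for drawing a track."""
    with sqlite3.connect(db_path) as conn:
        return conn.execute(FEATURE_SQL, (chrom, start, end)).fetchall()

# Example call: features on chromosome 1 between 10 kb and 50 kb.
# print(fetch_features("annotations.db", "Chr1", 10_000, 50_000))
```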
Davis, Allan Peter; Wiegers, Thomas C.; Roberts, Phoebe M.; King, Benjamin L.; Lay, Jean M.; Lennon-Hopkins, Kelley; Sciaky, Daniela; Johnson, Robin; Keating, Heather; Greene, Nigel; Hernandez, Robert; McConnell, Kevin J.; Enayetallah, Ahmed E.; Mattingly, Carolyn J.
2013-01-01
Improving the prediction of chemical toxicity is a goal common to both environmental health research and pharmaceutical drug development. To improve safety detection assays, it is critical to have a reference set of molecules with well-defined toxicity annotations for training and validation purposes. Here, we describe a collaboration between safety researchers at Pfizer and the research team at the Comparative Toxicogenomics Database (CTD) to text mine and manually review a collection of 88,629 articles relating over 1,200 pharmaceutical drugs to their potential involvement in cardiovascular, neurological, renal and hepatic toxicity. In 1 year, CTD biocurators curated 254,173 toxicogenomic interactions (152,173 chemical–disease, 58,572 chemical–gene, 5,345 gene–disease and 38,083 phenotype interactions). All chemical–gene–disease interactions are fully integrated with public CTD, and phenotype interactions can be downloaded. We describe Pfizer’s text-mining process to collate the articles, and CTD’s curation strategy, performance metrics, enhanced data content and new module to curate phenotype information. As well, we show how data integration can connect phenotypes to diseases. This curation can be leveraged for information about toxic endpoints important to drug safety and help develop testable hypotheses for drug–disease events. The availability of these detailed, contextualized, high-quality annotations curated from seven decades’ worth of the scientific literature should help facilitate new mechanistic screening assays for pharmaceutical compound survival. This unique partnership demonstrates the importance of resource sharing and collaboration between public and private entities and underscores the complementary needs of the environmental health science and pharmaceutical communities. Database URL: http://ctdbase.org/ PMID:24288140
Davis, Allan Peter; Wiegers, Thomas C; Roberts, Phoebe M; King, Benjamin L; Lay, Jean M; Lennon-Hopkins, Kelley; Sciaky, Daniela; Johnson, Robin; Keating, Heather; Greene, Nigel; Hernandez, Robert; McConnell, Kevin J; Enayetallah, Ahmed E; Mattingly, Carolyn J
2013-01-01
Improving the prediction of chemical toxicity is a goal common to both environmental health research and pharmaceutical drug development. To improve safety detection assays, it is critical to have a reference set of molecules with well-defined toxicity annotations for training and validation purposes. Here, we describe a collaboration between safety researchers at Pfizer and the research team at the Comparative Toxicogenomics Database (CTD) to text mine and manually review a collection of 88,629 articles relating over 1,200 pharmaceutical drugs to their potential involvement in cardiovascular, neurological, renal and hepatic toxicity. In 1 year, CTD biocurators curated 254,173 toxicogenomic interactions (152,173 chemical-disease, 58,572 chemical-gene, 5,345 gene-disease and 38,083 phenotype interactions). All chemical-gene-disease interactions are fully integrated with public CTD, and phenotype interactions can be downloaded. We describe Pfizer's text-mining process to collate the articles, and CTD's curation strategy, performance metrics, enhanced data content and new module to curate phenotype information. As well, we show how data integration can connect phenotypes to diseases. This curation can be leveraged for information about toxic endpoints important to drug safety and help develop testable hypotheses for drug-disease events. The availability of these detailed, contextualized, high-quality annotations curated from seven decades' worth of the scientific literature should help facilitate new mechanistic screening assays for pharmaceutical compound survival. This unique partnership demonstrates the importance of resource sharing and collaboration between public and private entities and underscores the complementary needs of the environmental health science and pharmaceutical communities. Database URL: http://ctdbase.org/
An open source web interface for linking models to infrastructure system databases
NASA Astrophysics Data System (ADS)
Knox, S.; Mohamed, K.; Harou, J. J.; Rheinheimer, D. E.; Medellin-Azuara, J.; Meier, P.; Tilmant, A.; Rosenberg, D. E.
2016-12-01
Models of networked engineered resource systems such as water or energy systems are often built collaboratively with developers from different domains working at different locations. These models can be linked to large scale real world databases, and they are constantly being improved and extended. As the development and application of these models becomes more sophisticated, and the computing power required for simulations and/or optimisations increases, so too has the need for online services and tools that enable the efficient development and deployment of these models. Hydra Platform is an open source, web-based data management system, which allows modellers of network-based models to remotely store network topology and associated data in a generalised manner, allowing it to serve multiple disciplines. Hydra Platform exposes a JSON-based web API to allow external programs (referred to as 'Apps') to interact with its stored networks and perform actions such as importing data, running models, or exporting the networks to different formats. Hydra Platform supports multiple users accessing the same network and has a suite of functions for managing users and data. We present an ongoing development in Hydra Platform: the Hydra Web User Interface, through which users can collaboratively manage network data and models in a web browser. The web interface allows multiple users to graphically access, edit and share their networks, run apps and view results. Through apps, which are located on the server, the web interface can give users access to external data sources and models without the need to install or configure any software. This also ensures model results can be reproduced by removing platform or version dependence. Managing data and deploying models via the web interface provides a way for multiple modellers to collaboratively manage data, deploy and monitor model runs and analyse results.
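To make the JSON web API concrete, the following is a hedged sketch of how an external 'App' might post a small network to a Hydra-Platform-style endpoint. The URL, the omission of session handling and the payload shape are assumptions for illustration, not the documented Hydra Platform API.

```python
import json
import requests

# Assumed local endpoint and payload shape for a Hydra-style JSON web API;
# the real service may require authentication and a different call structure.
BASE = "http://localhost:8080/json"

network = {
    "name": "demo water network",
    "nodes": [{"name": "reservoir"}, {"name": "city"}],
    "links": [{"name": "main canal", "node_1": "reservoir", "node_2": "city"}],
}

resp = requests.post(BASE, json={"add_network": {"net": network}}, timeout=30)
resp.raise_for_status()
print(json.dumps(resp.json(), indent=2))
```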
Yellow fever vaccine-associated viscerotropic disease: current perspectives
Thomas, Roger E
2016-01-01
Purpose: To assess those published cases of yellow fever (YF) vaccine-associated viscerotropic disease that meet the Brighton Collaboration criteria and to assess the safety of YF vaccine with respect to viscerotropic disease. Literature search: Ten electronic databases were searched with no restriction of date or language, as well as the reference lists of retrieved articles. Methods: All abstracts and titles were independently read by two reviewers and data independently entered by two reviewers. Results: All serious adverse events that met the Brighton Classification criteria were associated with first YF vaccinations. Sixty-two published cases (35 died) met the Brighton Collaboration viscerotropic criteria, with 32 from the US, six from Brazil, five from Peru, three from Spain, two from the People’s Republic of China, one each from Argentina, Australia, Belgium, Ecuador, France, Germany, Ireland, New Zealand, Portugal, and the UK, and four with no country stated. Two cases met both the viscerotropic and YF vaccine-associated neurologic disease criteria. Seventy cases proposed by authors as viscerotropic disease did not meet any Brighton Collaboration viscerotropic level of diagnostic certainty or any YF vaccine-associated viscerotropic disease causality criteria (37 died). Conclusion: Viscerotropic disease is rare in the published literature and in pharmacovigilance databases. All published cases were from developed countries. Because the symptoms are usually very severe and life threatening, it is unlikely that cases would not come to medical attention (but might not be published). Because viscerotropic disease has a highly predictable pathologic course, it is likely that viscerotropic disease post-YF vaccine occurs in low-income countries with the same incidence as in developed countries. YF vaccine is a very safe vaccine that likely confers lifelong immunity. PMID:27784992
González-Alcaide, Gregorio; Castelló-Cogollos, Lourdes; Castellano-Gómez, Miguel; Agullo-Calatayud, Víctor; Aleixandre-Benavent, Rafael; Alvarez, Francisco Javier; Valderrama-Zurián, Juan Carlos
2013-01-01
The research of alcohol consumption-related problems is a multidisciplinary field. The aim of this study is to analyze the worldwide scientific production in the area of alcohol-drinking and alcohol-related problems from 2005 to 2009. A MEDLINE and Scopus search on alcohol (alcohol-drinking and alcohol-related problems) published from 2005 to 2009 was carried out. Using bibliometric indicators, the distribution of the publications was determined within the journals that publish said articles, specialty of the journal (broad subject terms), article type, language of the publication, and country where the journal is published. Also, authorship characteristics were assessed (collaboration index and number of authors who have published more than 9 documents). The existing research groups were also determined. About 24,100 documents on alcohol, published in 3,862 journals, and authored by 69,640 authors were retrieved from MEDLINE and Scopus between the years 2005 and 2009. The collaboration index of the articles was 4.83 ± 3.7. The number of consolidated research groups in the field was identified as 383, with 1,933 authors. Documents on alcohol were published mainly in journals covering the field of "Substance-Related Disorders," 23.18%, followed by "Medicine," 8.7%, "Psychiatry," 6.17%, and "Gastroenterology," 5.25%. Research on alcohol is a consolidated field, with an average of 4,820 documents published each year between 2005 and 2009 in MEDLINE and Scopus. Alcohol-related publications have a marked multidisciplinary nature. Collaboration was common among alcohol researchers. There is an underrepresentation of alcohol-related publications in languages other than English and from developing countries, in MEDLINE and Scopus databases. Copyright © 2012 by the Research Society on Alcoholism.
Rahman, Mahabubur; Watabe, Hiroshi
2018-05-01
Molecular imaging serves as an important tool for researchers and clinicians to visualize and investigate complex biochemical phenomena using specialized instruments; these instruments are either used individually or in combination with targeted imaging agents to obtain images related to specific diseases with high sensitivity, specificity, and signal-to-noise ratios. However, molecular imaging, which is a multidisciplinary research field, faces several challenges, including the integration of imaging informatics with bioinformatics and medical informatics, requirement of reliable and robust image analysis algorithms, effective quality control of imaging facilities, and those related to individualized disease mapping, data sharing, software architecture, and knowledge management. As a cost-effective and open-source approach to address these challenges related to molecular imaging, we develop a flexible, transparent, and secure infrastructure, named MIRA, which stands for Molecular Imaging Repository and Analysis, primarily using the Python programming language, and a MySQL relational database system deployed on a Linux server. MIRA is designed with a centralized image archiving infrastructure and information database so that a multicenter collaborative informatics platform can be built. The capability of dealing with metadata, image file format normalization, and storing and viewing different types of documents and multimedia files make MIRA considerably flexible. With features like logging, auditing, commenting, sharing, and searching, MIRA is useful as an Electronic Laboratory Notebook for effective knowledge management. In addition, the centralized approach for MIRA facilitates on-the-fly access to all its features remotely through any web browser. Furthermore, the open-source approach provides the opportunity for sustainable continued development. MIRA offers an infrastructure that can be used as cross-boundary collaborative MI research platform for the rapid achievement in cancer diagnosis and therapeutics. Copyright © 2018 Elsevier Ltd. All rights reserved.
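Since MIRA is described as built primarily in Python on a MySQL relational database, the following sketch shows what archiving one image's metadata might look like in that stack. The table, columns and credentials are hypothetical, not MIRA's actual schema.

```python
import mysql.connector  # mysql-connector-python

# Hedged sketch of storing one image's metadata, in the spirit of a Python +
# MySQL archive; table name, columns and credentials are invented.
conn = mysql.connector.connect(host="localhost", user="mira",
                               password="secret", database="mira")
cur = conn.cursor()
cur.execute(
    "INSERT INTO image_metadata (study_id, modality, acquired_at, file_path) "
    "VALUES (%s, %s, %s, %s)",
    ("STUDY-001", "PET", "2018-05-01 09:30:00", "/archive/STUDY-001/scan001.dcm"),
)
conn.commit()
cur.close()
conn.close()
```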
Developing a Benchmarking Process in Perfusion: A Report of the Perfusion Downunder Collaboration
Baker, Robert A.; Newland, Richard F.; Fenton, Carmel; McDonald, Michael; Willcox, Timothy W.; Merry, Alan F.
2012-01-01
Abstract: Improving and understanding clinical practice is an appropriate goal for the perfusion community. The Perfusion Downunder Collaboration has established a multi-center perfusion focused database aimed at achieving these goals through the development of quantitative quality indicators for clinical improvement through benchmarking. Data were collected using the Perfusion Downunder Collaboration database from procedures performed in eight Australian and New Zealand cardiac centers between March 2007 and February 2011. At the Perfusion Downunder Meeting in 2010, it was agreed by consensus, to report quality indicators (QI) for glucose level, arterial outlet temperature, and pCO2 management during cardiopulmonary bypass. The values chosen for each QI were: blood glucose ≥4 mmol/L and ≤10 mmol/L; arterial outlet temperature ≤37°C; and arterial blood gas pCO2 ≥ 35 and ≤45 mmHg. The QI data were used to derive benchmarks using the Achievable Benchmark of Care (ABC™) methodology to identify the incidence of QIs at the best performing centers. Five thousand four hundred and sixty-five procedures were evaluated to derive QI and benchmark data. The incidence of the blood glucose QI ranged from 37–96% of procedures, with a benchmark value of 90%. The arterial outlet temperature QI occurred in 16–98% of procedures with the benchmark of 94%; while the arterial pCO2 QI occurred in 21–91%, with the benchmark value of 80%. We have derived QIs and benchmark calculations for the management of several key aspects of cardiopulmonary bypass to provide a platform for improving the quality of perfusion practice. PMID:22730861
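The benchmarking logic described above (per-centre QI incidence plus an Achievable Benchmark of Care derived from the best-performing centres) can be sketched as follows. The centre counts are invented and the pared-mean calculation is simplified relative to the full ABC methodology, which includes further adjustments for small denominators.

```python
# Simplified illustration of deriving a quality-indicator benchmark across centres.
centres = {                    # centre: (procedures meeting the QI, total procedures)
    "A": (450, 500), "B": (300, 480), "C": (700, 760),
    "D": (200, 520), "E": (610, 650),
}

def abc_benchmark(data, top_fraction=0.1):
    """Pared mean: rank centres by QI incidence, then pool the best centres
    covering at least `top_fraction` of all procedures."""
    total = sum(n for _, n in data.values())
    ranked = sorted(data.values(), key=lambda mn: mn[0] / mn[1], reverse=True)
    met = denom = 0
    for m, n in ranked:
        met, denom = met + m, denom + n
        if denom >= top_fraction * total:
            break
    return met / denom

for name, (m, n) in centres.items():
    print(f"Centre {name}: QI incidence {m / n:.0%}")
print(f"Benchmark ~ {abc_benchmark(centres):.0%}")
```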
Synthesizing and databasing fossil calibrations: divergence dating and beyond
Ksepka, Daniel T.; Benton, Michael J.; Carrano, Matthew T.; Gandolfo, Maria A.; Head, Jason J.; Hermsen, Elizabeth J.; Joyce, Walter G.; Lamm, Kristin S.; Patané, José S. L.; Phillips, Matthew J.; Polly, P. David; Van Tuinen, Marcel; Ware, Jessica L.; Warnock, Rachel C. M.; Parham, James F.
2011-01-01
Divergence dating studies, which combine temporal data from the fossil record with branch length data from molecular phylogenetic trees, represent a rapidly expanding approach to understanding the history of life. The National Evolutionary Synthesis Center hosted the first Fossil Calibrations Working Group (3–6 March, 2011, Durham, NC, USA), bringing together palaeontologists, molecular evolutionists and bioinformatics experts to present perspectives from disciplines that generate, model and use fossil calibration data. Presentations and discussions focused on channels for interdisciplinary collaboration, best practices for justifying, reporting and using fossil calibrations, and roadblocks to the synthesis of palaeontological and molecular data. Bioinformatics solutions were proposed, with the primary objective being a new database for vetted fossil calibrations with linkages to existing resources, targeted for a 2012 launch. PMID:21525049
Reasons for 2011 Release of the Evaluated Nuclear Data Library (ENDL2011.0)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, D.; Escher, J.; Hoffman, R.
LLNL's Computational Nuclear Physics Group and Nuclear Theory and Modeling Group have collaborated to create the 2011 release of the Evaluated Nuclear Data Library (ENDL2011). ENDL2011 is designed to support LLNL's current and future nuclear data needs. This database is currently the most complete nuclear database for Monte Carlo and deterministic transport of neutrons and charged particles, surpassing ENDL2009.0 [1]. The ENDL2011 release [2] contains 918 transport-ready evaluations in the neutron sub-library alone. ENDL2011 was assembled with strong support from the ASC program, leveraged with support from NNSA science campaigns and the DOE/Office of Science US Nuclear Data Program.
2005-04-12
Hardware, Database, and Operating System independence using Java • Enterprise-class architecture using Java 2 Enterprise Edition 1.4 • Standards-based ... portal applications. Compliance with the Java Specification Request for Portlet APIs (JSR-168) (Portlet API) and Web Services for Remote Portals ... authentication and authorization • Portal standards using the Java Specification Request for Portlet APIs (JSR-168) (Portlet API) and Web Services for Remote ...
An Analysis of Transmission and Storage Gains from Sliding Checksum Methods
1998-11-01
In a previous report we described, modelled and analysed a protocol, "rsync", for synchronising related files at different ends of a communications channel with a minimum of transmitted data. This report ... collaborative writing of documentation and synchronisation of distributed databases in the situation where no one location is aware of the differences.
The Geant4 physics validation repository
NASA Astrophysics Data System (ADS)
Wenzel, H.; Yarba, J.; Dotti, A.
2015-12-01
The Geant4 collaboration regularly performs validation and regression tests. The results are stored in a central repository and can be easily accessed via a web application. In this article we describe the Geant4 physics validation repository, which consists of a relational database storing experimental data and Geant4 test results, a Java API and a web application. The functionality of these components and the technology choices we made are also described.
Using a Role-Play Simulation Game to Promote Systems Thinking.
Young, Judith
2018-01-01
Learning is a dynamic process in which the learner discovers new knowledge and constructs new insights. The "Friday Night in the ER"© role-play simulation game facilitates systems thinking, data-based decision making, and collaboration. Nurse educators in academe and health care settings can use this game to practice the essential skills of nurse professionals. J Contin Nurs Educ. 2018;49(1):10-11. Copyright 2018, SLACK Incorporated.
2013-10-29
... based on contextual information, 3) develop vision-based techniques for learning of contextual information, and detection and identification of ... that takes into account many possible contexts. The probability distributions of these contexts will be learned from existing databases on common sense
Collaborative Interactive Visualization Exploratory Concept
2015-06-01
the FIAC concepts. It consists of various DRDC-RDDC-2015-N004 intelligence analysis web services built on top of big data technologies exploited ... sits on the UDS where validated common knowledge is stored. Based on the Lumify software, this important component exploits big data technologies such ... interfaces. Above this database resides the Big Data Manager, responsible for transparent data transmission between the UDS and the rest of the S3
Report of the Interagency biological methods workshop
Gurtz, Martin E.; Muir, Thomas A.
1994-01-01
The U.S. Geological Survey hosted the Interagency Biological Methods Workshop in Reston, Virginia, during June 22-23, 1993. The purposes of the workshop were to (1) promote better communication among Federal agencies that are using or developing biological methods in water-quality assessment programs for streams and rivers, and (2) facilitate the sharing of data and interagency collaboration. The workshop was attended by 45 biologists representing numerous Federal agencies and programs, and a few regional and State programs that were selected to provide additional perspectives. The focus of the workshop was community assessment methods for fish, invertebrates, and algae; physical habitat characterization; and chemical analyses of biological tissues. Charts comparing program objectives, design features, and sampling methods were compiled from materials that were provided by participating agencies prior to the workshop and formed the basis for small workgroup discussions. Participants noted that differences in methods among programs were often necessitated by differences in program objectives. However, participants agreed that where programs have identified similar data needs, the use of common methods is beneficial. Opportunities discussed for improving data compatibility and information sharing included (1) modifying existing methods, (2) adding parameters, (3) improving access to data through shared databases (potentially with common database structures), and (4) future collaborative efforts that range from research on selected protocol questions to followup meetings and continued discussions.
Hall, Michael; Robertson, Jamie; Merkel, Matthias; Aziz, Michael; Hutchens, Michael
2017-08-01
Serious complications are common during the intensive care of postoperative cardiac surgery patients. Some of these complications may be influenced by communication during the process of handover of care from the operating room to the intensive care unit (ICU) team. A structured transfer of care process may reduce the rate of communication errors and perioperative complications. We hypothesized that a collaborative, comprehensive, structured handover of care from the intraoperative team to the ICU team would reduce a specific set of postoperative complications. We tested this hypothesis by developing and introducing a comprehensive multidisciplinary transfer of care process. We measured patient outcomes before and after the intervention using a linkage between 2 care databases: an Anesthesia Information Management System and a critical care complication registry database. There were 1127 total postoperative cardiac surgery admissions during the study period, 550 before and 577 after the intervention. There was no statistical difference between overall complications before and after the intervention (P = .154). However, there was a statistically significant reduction in preventable complications after the intervention (P = .023). The main finding of this investigation is that the introduction of a collaborative, comprehensive transfer of care process from the operating room to the ICU was associated with patients suffering fewer preventable complications.
[Tumor Data Interacted System Design Based on Grid Platform].
Liu, Ying; Cao, Jiaji; Zhang, Haowei; Zhang, Ke
2016-06-01
In order to satisfy the demands of massive, heterogeneous tumor clinical data processing and multi-center collaborative diagnosis and treatment of tumor diseases, a Tumor Data Interacted System (TDIS) was established on a grid platform, realizing a virtualized platform for tumor diagnosis services that shares tumor information in real time and carries out standardized management. The system adopts Globus Toolkit 4.0 tools to build the open grid service framework and encapsulates data resources based on the Web Services Resource Framework (WSRF). The system uses middleware technology to provide a unified access interface for heterogeneous data interaction, which optimizes the interaction process with virtualized services so that tumor information resources can be queried and called flexibly. For massive amounts of heterogeneous tumor data, federated storage and a multiple-authorization mode are adopted as the security service mechanism, with real-time monitoring and load balancing. The system can cooperatively manage multi-center heterogeneous tumor data to realize tumor patient data query, sharing and analysis, and can compare and match resources in typical clinical databases or clinical information databases at other service nodes, thus assisting doctors in consulting similar cases and drawing up multidisciplinary treatment plans for tumors. Consequently, the system can improve the efficiency of tumor diagnosis and treatment, and promote the development of collaborative tumor diagnosis models.
Molecular nutrition research: the modern way of performing nutritional science.
Norheim, Frode; Gjelstad, Ingrid Merethe Fange; Hjorth, Marit; Vinknes, Kathrine J; Langleite, Torgrim M; Holen, Torgeir; Jensen, Jørgen; Dalen, Knut Tomas; Karlsen, Anette S; Kielland, Anders; Rustan, Arild C; Drevon, Christian A
2012-12-03
In spite of amazing progress in food supply and nutritional science, and a striking increase in life expectancy of approximately 2.5 months per year in many countries during the previous 150 years, modern nutritional research still has great potential to contribute to improved health for future generations, granted that the revolutions in molecular and systems technologies are applied to nutritional questions. Descriptive and mechanistic studies using state-of-the-art epidemiology, food intake registration, genomics with single nucleotide polymorphisms (SNPs) and epigenomics, transcriptomics, proteomics, metabolomics, advanced biostatistics, imaging, calorimetry, cell biology, challenge tests (meals, exercise, etc.), and integration of all data by systems biology will provide insight on a much higher level than today in a field we may name molecular nutrition research. To take advantage of all the new technologies, scientists should develop international collaboration and gather data in large open-access databases like the suggested Nutritional Phenotype database (dbNP). This collaboration will promote standardization of procedures (SOP) and provide a possibility to use collected data in future research projects. The ultimate goals of future nutritional research are to understand the detailed mechanisms of action for how nutrients/foods interact with the body and thereby enhance health and treat diet-related diseases.
Molecular Nutrition Research—The Modern Way Of Performing Nutritional Science
Norheim, Frode; Gjelstad, Ingrid M. F.; Hjorth, Marit; Vinknes, Kathrine J.; Langleite, Torgrim M.; Holen, Torgeir; Jensen, Jørgen; Dalen, Knut Tomas; Karlsen, Anette S.; Kielland, Anders; Rustan, Arild C.; Drevon, Christian A.
2012-01-01
In spite of amazing progress in food supply and nutritional science, and a striking increase in life expectancy of approximately 2.5 months per year in many countries during the previous 150 years, modern nutritional research still has great potential to contribute to improved health for future generations, granted that the revolutions in molecular and systems technologies are applied to nutritional questions. Descriptive and mechanistic studies using state-of-the-art epidemiology, food intake registration, genomics with single nucleotide polymorphisms (SNPs) and epigenomics, transcriptomics, proteomics, metabolomics, advanced biostatistics, imaging, calorimetry, cell biology, challenge tests (meals, exercise, etc.), and integration of all data by systems biology will provide insight on a much higher level than today in a field we may name molecular nutrition research. To take advantage of all the new technologies, scientists should develop international collaboration and gather data in large open-access databases like the suggested Nutritional Phenotype database (dbNP). This collaboration will promote standardization of procedures (SOP) and provide a possibility to use collected data in future research projects. The ultimate goals of future nutritional research are to understand the detailed mechanisms of action for how nutrients/foods interact with the body and thereby enhance health and treat diet-related diseases. PMID:23208524
US Gateway to SIMBAD Astronomical Database
NASA Technical Reports Server (NTRS)
Eichhorn, G.
1998-01-01
During the last year the US SIMBAD Gateway Project continued to provide services like user registration to the US users of the SIMBAD database in France. User registration is required by the SIMBAD project in France. Currently, there are almost 3000 US users registered. We also provide user support by answering questions from users and handling requests for lost passwords. We have worked with the CDS SIMBAD project to provide access to the SIMBAD database to US users on an Internet address basis. This will allow most US users to access SIMBAD without having to enter passwords. This new system was installed in August, 1998. The SIMBAD mirror database at SAO is fully operational. We worked with the CDS to adapt it to our computer system. We implemented automatic updating procedures that update the database and password files daily. This mirror database provides much better access to the US astronomical community. We also supported a demonstration of the SIMBAD database at the meeting of the American Astronomical Society in January. We shipped computer equipment to the meeting and provided support for the demonstration activities at the SIMBAD booth. We continued to improve the cross-linking between the SIMBAD project and the Astrophysics Data System. This cross-linking between the two systems is very much appreciated by the users of both the SIMBAD database and the ADS Abstract Service. The mirror of the SIMBAD database at SAO makes this connection faster for US astronomers. The close cooperation between the CDS in Strasbourg and SAO, facilitated by this project, is an important part of the astronomy-wide digital library initiative called Urania. It has proven to be a model of how different data centers can collaborate and enhance the value of their products by linking with other data centers.
IMGT, the International ImMunoGeneTics database.
Lefranc, M P; Giudicelli, V; Busin, C; Bodmer, J; Müller, W; Bontrop, R; Lemaitre, M; Malik, A; Chaume, D
1998-01-01
IMGT, the international ImMunoGeneTics database, is an integrated database specialising in Immunoglobulins (Ig), T cell Receptors (TcR) and the Major Histocompatibility Complex (MHC) of all vertebrate species, created by Marie-Paule Lefranc, CNRS, Montpellier II University, Montpellier, France (lefranc@ligm.crbm.cnrs-mop.fr). IMGT includes three databases: LIGM-DB (for Ig and TcR), MHC/HLA-DB and PRIMER-DB (the last two in development). IMGT comprises expertly annotated sequences and alignment tables. LIGM-DB contains more than 23 000 Immunoglobulin and T cell Receptor sequences from 78 species. MHC/HLA-DB contains Class I and Class II Human Leucocyte Antigen alignment tables. An IMGT tool, DNAPLOT, developed for Ig, TcR and MHC sequence alignments, is also available. IMGT works in close collaboration with the EMBL database. IMGT goals are to establish a common data access to all immunogenetics data, including nucleotide and protein sequences, oligonucleotide primers, gene maps and other genetic data of Ig, TcR and MHC molecules, and to provide graphical, user-friendly data access. IMGT has important implications in medical research (repertoire in autoimmune diseases, AIDS, leukemias, lymphomas), therapeutical approaches (antibody engineering), genome diversity and genome evolution studies. IMGT is freely available at http://imgt.cnusc.fr:8104 PMID:9399859
Wardlaw, Bruce R.
2008-01-01
This report is a compilation of most of the known fossil locality data from Guadalupe Peak 1:100,000 quadrangle, West Texas. The data represent several major collection efforts over the past century by the Smithsonian Institution, the American Museum of Natural History, and the U.S. Geological Survey. This dataset is not meant to be all inclusive but instead is an attempt to pull together the vast amount of paleontologic data originally collected by Girty (1908) and King (1948), much of which is unpublished and (or) poorly located. The author visited most of the major fossil collection sites to collect for conodonts on a ten-year program funded by the Smithsonian Institution for collaborative research with Richard E. Grant. Guadalupe Mountains National Park occupies the northern part of the quadrangle, and the Park Service has been very helpful over the years in compiling the data and relocating the collection sites. This dataset serves as the prototype for the National Paleontologic Database, part of the National Geologic Map Database Project. The database is intended to be indexed to 1:100,000 quadrangles of the U.S. The minimum number of fields and information within those fields is shown in the report.
Expert system for web based collaborative CAE
NASA Astrophysics Data System (ADS)
Hou, Liang; Lin, Zusheng
2006-11-01
An expert system for web-based collaborative CAE was developed based on knowledge engineering, a relational database and commercial FEA (finite element analysis) software. The architecture of the system was illustrated. In this system, the experts' experiences, theories, typical examples and other related knowledge, which are used in the pre-processing stage of FEA, were categorized into analysis-process knowledge and object knowledge. The integrated knowledge model, based on object-oriented and rule-based methods, was then described, and the integrated reasoning process based on CBR (case-based reasoning) and rule-based reasoning was presented. Finally, the analysis process of this expert system in a web-based CAE application was illustrated, and an analysis example of a machine tool's column was presented to demonstrate the validity of the system.
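As a rough illustration of the hybrid reasoning described above, the sketch below retrieves the most similar past case and then refines it with simple rules. The cases, features and rules are hypothetical, not the paper's knowledge base.

```python
# Minimal sketch of combining case-based and rule-based reasoning for FEA
# pre-processing advice; cases, features and rules are hypothetical.

CASES = [  # past analyses: features -> recommended mesh/element settings
    ({"part": "column", "load": "static", "material": "cast_iron"},
     {"element": "hexahedral", "mesh_size_mm": 10}),
    ({"part": "spindle", "load": "dynamic", "material": "steel"},
     {"element": "tetrahedral", "mesh_size_mm": 5}),
]

RULES = [  # rule-based refinements applied after case retrieval
    (lambda q: q["load"] == "dynamic",
     lambda plan: plan.update({"analysis": "modal"})),
    (lambda q: q["load"] == "static",
     lambda plan: plan.update({"analysis": "linear_static"})),
]

def similarity(query, case_features):
    # fraction of matching feature values between query and stored case
    return sum(query.get(k) == v for k, v in case_features.items()) / len(case_features)

def advise(query):
    # case-based step: reuse the plan of the most similar past case
    features, plan = max(CASES, key=lambda c: similarity(query, c[0]))
    plan = dict(plan)
    # rule-based step: refine the retrieved plan
    for condition, action in RULES:
        if condition(query):
            action(plan)
    return plan

print(advise({"part": "column", "load": "static", "material": "cast_iron"}))
```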
Compound Passport Service: supporting corporate collection owners in open innovation.
Andrews, David M; Degorce, Sébastien L; Drake, David J; Gustafsson, Magnus; Higgins, Kevin M; Winter, Jon J
2015-10-01
A growing number of early discovery collaborative agreements are being put in place between large pharma companies and partners in which the rights for assets can reside with a partner, exclusively or jointly. Our corporate screening collection, like many others, was built on the premise that compounds generated in-house and not the subject of paper or patent disclosure were proprietary to the company. Collaborative screening arrangements and medicinal chemistry now make the origin, ownership rights and usage of compounds difficult to determine and manage. The Compound Passport Service is a dynamic database, managed and accessed through a set of reusable services that borrows from social media concepts to allow sample owners to take control of their samples in a much more active way. Copyright © 2015 Elsevier Ltd. All rights reserved.
A Collaborative Reasoning Maintenance System for a Reliable Application of Legislations
NASA Astrophysics Data System (ADS)
Tamisier, Thomas; Didry, Yoann; Parisot, Olivier; Feltz, Fernand
Decision support systems are nowadays used to disentangle all kinds of intricate situations and to perform sophisticated analyses. Moreover, they are applied in areas where the knowledge can be heterogeneous, partially un-formalized, implicit, or diffuse. The representation and management of this knowledge become the key point for ensuring the proper functioning of the system and keeping an intuitive view of its expected behavior. This paper presents a generic architecture for implementing knowledge-based systems used in collaborative business, where the knowledge is organized into different databases according to the usage, persistence and quality of the information. This approach is illustrated with Cadral, a customizable automated tool built on this architecture and used for processing family benefits applications at the National Family Benefits Fund of the Grand-Duchy of Luxembourg.
NASA's MERBoard: An Interactive Collaborative Workspace Platform. Chapter 4
NASA Technical Reports Server (NTRS)
Trimble, Jay; Wales, Roxana; Gossweiler, Rich
2003-01-01
This chapter describes the ongoing process by which a multidisciplinary group at NASA's Ames Research Center is designing and implementing a large interactive work surface called the MERBoard Collaborative Workspace. A MERBoard system involves several distributed, large, touch-enabled plasma display systems with custom MERBoard software. A centralized server and database back the system. We are continually tuning MERBoard to support over two hundred scientists and engineers during the surface operations of the Mars Exploration Rover Missions. These scientists and engineers come from various disciplines and are working both in small and large groups over a span of space and time. We describe the multidisciplinary, human-centered process by which this MERBoard system is being designed, the usage patterns and social interactions that we have observed, and issues we are currently facing.
Analysis, requirements and development of a collaborative social and medical services data model.
Bobroff, R B; Petermann, C A; Beck, J R; Buffone, G J
1994-01-01
In any medical and social service setting, patient data must be readily shared among multiple providers for delivery of expeditious, quality care. This paper describes the development and implementation of a generalized social and medical services data model for an ambulatory population. The model, part of the Collaborative Social and Medical Services System Project, is based on the data needs of the Baylor College of Medicine Teen Health Clinics and follows the guidelines of the ANSI HISPP/MSDS JWG for a Common Data Model. Design details were determined by informal staff interviews, operational observations, and examination of clinic guidelines and forms. The social and medical services data model is designed using object-oriented data modeling techniques and will be implemented in C++ using an Object-Oriented Database Management System.
Aspirin use and early age-related macular degeneration: a meta-analysis.
Kahawita, Shyalle K; Casson, Robert J
2014-02-01
The aim of this review was to evaluate the evidence for an association between aspirin use and early age-related macular degeneration (ARMD). A literature search was performed in 5 databases with no restrictions on language or date of publication. Four studies involving 10292 individuals examining the association between aspirin and ARMD met the inclusion criteria. Meta-analysis was carried out with Cochrane Collaboration Review Manager 5.2 software (Cochrane Collaboration, Copenhagen, Denmark). The pooled odds ratios showed that aspirin use was associated with early ARMD (pooled odds ratio 1.43, 95% CI 1.09-1.88). There is a small but statistically significant association between aspirin use and early ARMD, which may warrant further investigation. Crown Copyright © 2014. Published by Elsevier Inc. All rights reserved.
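For readers unfamiliar with the pooling step, the sketch below shows a fixed-effect (inverse-variance) pooled odds ratio of the kind meta-analysis software such as RevMan computes. The 2x2 tables are invented for illustration and are not the four included studies.

```python
# Illustrative fixed-effect (inverse-variance) pooling of odds ratios; the four
# 2x2 tables below are hypothetical, not the studies in this review.
import math

studies = [  # (exposed events, exposed total, control events, control total)
    (120, 1000, 90, 1000),
    (60, 500, 45, 500),
    (200, 2000, 150, 2000),
    (80, 900, 60, 900),
]

weights, weighted_log_or = 0.0, 0.0
for a, n1, c, n0 in studies:
    b, d = n1 - a, n0 - c
    log_or = math.log((a * d) / (b * c))
    var = 1 / a + 1 / b + 1 / c + 1 / d      # variance of the log odds ratio
    weights += 1 / var
    weighted_log_or += log_or / var

pooled = weighted_log_or / weights
se = math.sqrt(1 / weights)
print(f"pooled OR = {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(pooled - 1.96 * se):.2f}-{math.exp(pooled + 1.96 * se):.2f})")
```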
Reprint of: Aspirin use and early age-related macular degeneration: a meta-analysis.
Kahawita, Shyalle K; Casson, Robert J
2015-06-01
The aim of this review was to evaluate the evidence for an association between aspirin use and early age-related macular degeneration (ARMD). A literature search was performed in 5 databases with no restrictions on language or date of publication. Four studies involving 10292 individuals examining the association between aspirin and ARMD met the inclusion criteria. Meta-analysis was carried out with Cochrane Collaboration Review Manager 5.2 software (Cochrane Collaboration, Copenhagen, Denmark). The pooled odds ratios showed that aspirin use was associated with early ARMD (pooled odds ratio 1.43, 95% CI 1.09-1.88). There is a small but statistically significant association between aspirin use and early ARMD, which may warrant further investigation. Copyright © 2015. Published by Elsevier Inc.
The experiences of midwives and nurses collaborating to provide birthing care: a systematic review.
Macdonald, Danielle; Snelgrove-Clarke, Erna; Campbell-Yeo, Marsha; Aston, Megan; Helwig, Melissa; Baker, Kathy A
2015-11-01
Collaboration has been associated with improved health outcomes in maternity care. Collaborative relationships between midwives and physicians have been a focus of literature regarding collaboration in maternity care. However, despite the front-line role of nurses in the provision of maternity care, there has not yet been a systematic review conducted about the experiences of midwives and nurses collaborating to provide birthing care. The objective of this review was to identify, appraise and synthesize qualitative evidence on the experiences of midwives and nurses collaborating to provide birthing care. Specifically, the review question was: what are the experiences of midwives and nurses collaborating to provide birthing care? This review considered studies that included educated and licensed midwives and nurses with any length of practice. Nurses who work in labor and delivery, postpartum care, prenatal care, public health and community health were included in this systematic review. This review considered studies that investigated the experiences of midwives and nurses collaborating during the provision of birthing care. Experiences, of any duration, included any interactions between midwives and nurses working in collaboration to provide birthing care. Birthing care referred to: (a) supportive care throughout the pregnancy, labor, delivery and postpartum, (b) administrative tasks throughout the pregnancy, labor, delivery and postpartum, and (c) clinical skills throughout the pregnancy, labor, delivery and postpartum. The postpartum period included the six weeks after delivery. The review considered English language studies that focused on qualitative data including, but not limited to, designs such as phenomenology, grounded theory, ethnography, action research and feminist research. This review considered qualitative studies that explored the experiences of collaboration in areas where midwives and nurses work together. Examples of these areas included: hospitals, birth centers, client homes, health clinics and other public or community health settings. These settings were located in any country, cultural context, or geographical location. A three-step search strategy was used to identify relevant published and unpublished studies. English papers from 1981 onwards were considered. The following databases were searched: Anthrosource, CENTRAL (The Cochrane Library), CINAHL, EMBASE, PsycINFO, PubMed, Social Services Abstracts and Sociological Abstracts. In addition to the databases, several grey literature sources were searched. Papers that were selected for retrieval were independently assessed for inclusion in the review by two JBI-trained reviewers. The two reviewers used a standardized critical appraisal instrument from the Joanna Briggs Institute Qualitative Assessment and Review Instrument. Qualitative data were extracted from papers included in the review using the standardized data extraction tool from the Joanna Briggs Institute Qualitative Assessment and Review Instrument. Once qualitative studies were assessed using the Joanna Briggs Institute Qualitative Assessment and Review Instrument critical appraisal tool, findings of the included studies were extracted. These findings were aggregated into categories according to their similarity in meaning. These categories were then subjected to a meta-synthesis to produce a comprehensive set of synthesized findings. Five studies were included in the review.
Thirty-eight findings were extracted from the included studies and were aggregated into five categories. The five categories were synthesized into two synthesized findings. The two synthesized findings were: Synthesized finding 1: Negative experiences of collaboration between nurses and midwives may be influenced by distrust, lack of clear roles, or unprofessional or inconsiderate behavior. Synthesized finding 2: If midwives and nurses have positive experiences collaborating, then there is hope that the challenges of collaboration can be overcome. Qualitative evidence about the experiences of midwives and nurses collaborating to provide birthing care was identified, appraised and synthesized. Two synthesized findings were created from the findings of the five included studies. Midwives and nurses had negative experiences of collaboration, which may be influenced by distrust, unclear roles, or a lack of professionalism or consideration. Midwives and nurses had positive experiences of teamwork, which can be a source of hope for overcoming the challenges of sharing care. There is clearly a gap in the literature about the collaborative experiences of midwives and nurses, given that only five studies were located for inclusion in the systematic review. More qualitative research exploring collaboration as a process and the interactional dynamics of midwives and nurses in a variety of practice and professional contexts is required. Distrust, unclear roles, and lack of professionalism and consideration must all be addressed. Strategies that address and minimize the occurrences of these three elements need to be developed and implemented in an effort to reduce negative collaborative experiences for midwives and nurses. Positive experiences of teamwork must be acknowledged and celebrated, and the challenges that sharing care presents must be understood as a part of the collaborative process. More qualitative research is required to explore the collaborative process between midwives and nurses. Further exploration of their interactional dynamics, the relationship between power and collaboration, and the experiences of collaboration in a variety of professional and practice contexts is recommended.
Organizing Diverse, Distributed Project Information
NASA Technical Reports Server (NTRS)
Keller, Richard M.
2003-01-01
SemanticOrganizer is a software application designed to organize and integrate information generated within a distributed organization or as part of a project that involves multiple, geographically dispersed collaborators. SemanticOrganizer incorporates the capabilities of database storage, document sharing, hypermedia navigation, and semantic interlinking into a system that can be customized to satisfy the specific information-management needs of different user communities. The program provides a centralized repository of information that is both secure and accessible to project collaborators via the World Wide Web. SemanticOrganizer's repository can be used to collect diverse information (including forms, documents, notes, data, spreadsheets, images, and sounds) from computers at collaborators' work sites. The program organizes the information using a unique network-structured conceptual framework, wherein each node represents a data record that contains not only the original information but also metadata (in effect, standardized data that characterize the information). Links among nodes express semantic relationships among the data records. The program features a Web interface through which users enter, interlink, and/or search for information in the repository. By use of this repository, the collaborators have immediate access to the most recent project information, as well as to archived information. A key advantage of SemanticOrganizer is its ability to interlink information in a natural fashion using customized terminology and concepts that are familiar to a user community.
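A small sketch of the node-and-link idea described above is given below: each node is a data record plus metadata, and named links express semantic relationships. The class, field and relation names are illustrative, not SemanticOrganizer's actual schema.

```python
# Toy node/link repository in the spirit of the description above; names are invented.
from dataclasses import dataclass, field

@dataclass
class Node:
    title: str
    content: str                                 # original document, note, image path, ...
    metadata: dict = field(default_factory=dict)  # standardized descriptive data
    links: dict = field(default_factory=dict)     # relation name -> list of Nodes

    def link(self, relation, other):
        self.links.setdefault(relation, []).append(other)

sample = Node("Soil sample 42", "spectrometer_run_42.csv",
              metadata={"collected_by": "field team", "site": "B"})
report = Node("Weekly report", "Summary of week 12 field work")
report.link("describes", sample)
sample.link("described_in", report)

# navigate the repository by following semantic links
for relation, targets in report.links.items():
    print(relation, "->", [t.title for t in targets])
```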
Conservation of biodiversity through taxonomy, data publication, and collaborative infrastructures.
Costello, Mark J; Vanhoorne, Bart; Appeltans, Ward
2015-08-01
Taxonomy is the foundation of biodiversity science because it furthers discovery of new species. Globally, there have never been so many people involved in naming species new to science. The number of new marine species described per decade has never been greater. Nevertheless, it is estimated that tens of thousands of marine species, and hundreds of thousands of terrestrial species, are yet to be discovered; many of which may already be in specimen collections. However, naming species is only a first step in documenting knowledge about their biology, biogeography, and ecology. Considering the threats to biodiversity, new knowledge of existing species and discovery of undescribed species and their subsequent study are urgently required. To accelerate this research, we recommend, and cite examples of, more and better communication: use of collaborative online databases; easier access to knowledge and specimens; production of taxonomic revisions and species identification guides; engagement of nonspecialists; and international collaboration. "Data-sharing" should be abandoned in favor of mandated data publication by the conservation science community. Such a step requires support from peer reviewers, editors, journals, and conservation organizations. Online data publication infrastructures (e.g., Global Biodiversity Information Facility, Ocean Biogeographic Information System) illustrate gaps in biodiversity sampling and may provide common ground for long-term international collaboration between scientists and conservation organizations. © 2015 Society for Conservation Biology.
Wiki use in mental health practice: recognizing potential use of collaborative technology.
Bastida, Richard; McGrath, Ian; Maude, Phil
2010-04-01
Web 2.0, the second generation of the World Wide Web, differs from earlier versions of Web development and design in that it facilitates more user-friendly, interactive information sharing and mechanisms for greater collaboration between users. Examples of Web 2.0 include Web-based communities, hosted services, social networking sites, video sharing sites, blogs, mashups, and wikis. Users are able to interact with others across the world or to add to or change website content. This paper examines examples of wiki use in the Australian mental health sector. A wiki can be described as an online collaborative and interactive database that can be easily edited by users. Wikis are accessed via a standard Web browser with an interface similar to traditional Web pages, and thus do not require a special application or software for the user. Although there is a paucity of literature describing wiki use in mental health, other industries have developed uses, including a repository of knowledge, a platform for collaborative writing, a project management tool, and an alternative to traditional Web pages or Intranets. This paper discusses the application of wikis in other industries and offers suggestions, by way of examples, of how this technology could be used in the mental health sector.
[Analysis of Spanish research collaboration in emergency medicine: 2010-2014].
Burbano Santos, Pablo; Fernández-Guerrero, Inés María; Martín-Sánchez, Francisco Javier; Burillo, Guillermo; Miró, Òscar
2017-10-01
To describe the structure of the Spanish emergency medicine research network or networks, researchers' roles, and patterns of collaboration between hospitals. The search for publications was carried out in the SCOPUS database for the 5-year period 2010 to 2014. We used network analysis software to map ties between researchers and hospitals that had established at least 5 and 10 relationships, respectively, during the period under study. We calculated degree-centrality indicators for individual scientists and hospitals and tabulated data for the main authors and centers. We identified 2626 articles with 12 372 different authors at 1134 hospitals in 75 countries. The largest numbers of international relations were with France, the United States, and the United Kingdom. Authors had established 93 687 connections that allowed us to identify 23 collaborating groups, the largest of which comprised 30 individuals. We also found 12 855 connections between hospitals that identified a single subnetwork of collaboration comprising 19 hospitals, 1 of which was in Switzerland. Measures of centrality, betweenness (intermediation), and closeness (proximity) led to classification of the most important members of the author and hospital networks. This analysis of research networks in emergency medicine provides the first detailed description of the relationships maintained by Spanish scientists and hospitals.
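As a toy illustration of the kind of centrality analysis described, the sketch below builds a small co-authorship graph with networkx and computes degree, betweenness and closeness centrality. The author names and ties are invented, not data from this study.

```python
# Sketch of a co-authorship centrality analysis with networkx; ties are invented.
import networkx as nx

ties = [("Author A", "Author B"), ("Author A", "Author C"),
        ("Author B", "Author C"), ("Author C", "Author D"),
        ("Author D", "Author E")]

G = nx.Graph(ties)

degree = nx.degree_centrality(G)            # share of direct collaborators
betweenness = nx.betweenness_centrality(G)  # "intermediation": bridging role
closeness = nx.closeness_centrality(G)      # "proximity" to the rest of the network

for author in G.nodes:
    print(f"{author}: degree={degree[author]:.2f}, "
          f"betweenness={betweenness[author]:.2f}, closeness={closeness[author]:.2f}")
```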
You, Seng Chan; Lee, Seongwon; Cho, Soo-Yeon; Park, Hojun; Jung, Sungjae; Cho, Jaehyeong; Yoon, Dukyong; Park, Rae Woong
2017-01-01
It is increasingly necessary to generate medical evidence applicable to Asian people compared to those in Western countries. Observational Health Data Sciences and Informatics (OHDSI) is an international collaborative that aims to facilitate generating high-quality evidence by creating and applying open-source data analytic solutions to a large network of health databases across countries. We aimed to incorporate Korean nationwide cohort data into the OHDSI network by converting the national sample cohort into the Observational Medical Outcomes Partnership Common Data Model (OMOP-CDM). The data of 1.13 million subjects were converted to OMOP-CDM, with an average conversion rate of 99.1%. ACHILLES, an open-source OMOP-CDM-based data profiling tool, was run on the converted database to visualize data-driven characterization and assess the quality of the data. The OMOP-CDM version of the National Health Insurance Service-National Sample Cohort (NHIS-NSC) can be a valuable tool for multiple aspects of medical research through incorporation into the OHDSI research network.
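To make the conversion idea concrete, the sketch below maps a few invented source diagnosis rows into an OMOP-style condition_occurrence structure and reports the share of rows that could be mapped. The table and column names follow the public CDM convention, but the records and the toy vocabulary lookup are illustrative only, not the NHIS-NSC ETL.

```python
# Hedged sketch of ETL bookkeeping when converting source claims into OMOP-CDM
# tables and reporting a conversion rate; the source rows are invented.

source_diagnoses = [
    {"person_id": 1, "code": "E11",  "date": "2015-03-02"},   # maps to a concept
    {"person_id": 2, "code": "I10",  "date": "2015-04-11"},   # maps to a concept
    {"person_id": 3, "code": "XXX9", "date": "2015-05-20"},   # no standard concept
]

# toy vocabulary lookup: source code -> standard concept_id (real ETLs use the
# OMOP vocabulary tables for this step)
concept_map = {"E11": 201826, "I10": 320128}

condition_occurrence = []
for row in source_diagnoses:
    concept_id = concept_map.get(row["code"])
    if concept_id is None:
        continue                          # unmapped rows lower the conversion rate
    condition_occurrence.append({
        "person_id": row["person_id"],
        "condition_concept_id": concept_id,
        "condition_start_date": row["date"],
    })

rate = len(condition_occurrence) / len(source_diagnoses)
print(f"conversion rate: {rate:.1%}")     # 66.7% for this toy example
```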
Haytowitz, David B; Pehrsson, Pamela R
2018-01-01
For nearly 20 years, the National Food and Nutrient Analysis Program (NFNAP) has expanded and improved the quantity and quality of data in the US Department of Agriculture's (USDA) food composition databases (FCDB) through the collection and analysis of nationally representative food samples. NFNAP employs statistically valid sampling plans, the Key Foods approach to identify and prioritize foods and nutrients, comprehensive quality control protocols, and analytical oversight to generate new and updated analytical data for food components. NFNAP has allowed the Nutrient Data Laboratory to keep up with the dynamic US food supply and emerging scientific research. Recently generated results for nationally representative food samples show marked changes compared to previous database values for selected nutrients. Monitoring changes in the composition of foods is critical to keeping FCDB up to date, so that they remain a vital tool in assessing the nutrient intake of national populations, as well as for providing dietary advice. Published by Elsevier Ltd.
CHEMICAL STRUCTURE INDEXING OF TOXICITY DATA ON ...
Standardized chemical structure annotation of public toxicity databases and information resources is playing an increasingly important role in the 'flattening' and integration of diverse sets of biological activity data on the Internet. This review discusses public initiatives that are accelerating the pace of this transformation, with particular reference to toxicology-related chemical information. Chemical content annotators, structure locator services, large structure/data aggregator web sites, structure browsers, International Union of Pure and Applied Chemistry (IUPAC) International Chemical Identifier (InChI) codes, toxicity data models and public chemical/biological activity profiling initiatives are all playing a role in overcoming barriers to the integration of toxicity data, and are bringing researchers closer to the reality of a mineable chemical Semantic Web. An example of this integration of data is provided by the collaboration among researchers involved with the Distributed Structure-Searchable Toxicity (DSSTox) project, the Carcinogenic Potency Project, projects at the National Cancer Institute and the PubChem database. Standardizing chemical structure annotation of public toxicity databases
Hymenoptera Genome Database: integrating genome annotations in HymenopteraMine.
Elsik, Christine G; Tayal, Aditi; Diesh, Colin M; Unni, Deepak R; Emery, Marianne L; Nguyen, Hung N; Hagen, Darren E
2016-01-04
We report an update of the Hymenoptera Genome Database (HGD) (http://HymenopteraGenome.org), a model organism database for insect species of the order Hymenoptera (ants, bees and wasps). HGD maintains genomic data for 9 bee species, 10 ant species and 1 wasp, including the versions of genome and annotation data sets published by the genome sequencing consortiums and those provided by NCBI. A new data-mining warehouse, HymenopteraMine, based on the InterMine data warehousing system, integrates the genome data with data from external sources and facilitates cross-species analyses based on orthology. New genome browsers and annotation tools based on JBrowse/WebApollo provide easy genome navigation and viewing of high-throughput sequence data sets, and can be used for collaborative genome annotation. All of the genomes and annotation data sets are combined into a single BLAST server that allows users to select and combine sequence data sets to search. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
Vaccarino, Anthony L; Dharsee, Moyez; Strother, Stephen; Aldridge, Don; Arnott, Stephen R; Behan, Brendan; Dafnas, Costas; Dong, Fan; Edgecombe, Kenneth; El-Badrawi, Rachad; El-Emam, Khaled; Gee, Tom; Evans, Susan G; Javadi, Mojib; Jeanson, Francis; Lefaivre, Shannon; Lutz, Kristen; MacPhee, F Chris; Mikkelsen, Jordan; Mikkelsen, Tom; Mirotchnick, Nicholas; Schmah, Tanya; Studzinski, Christa M; Stuss, Donald T; Theriault, Elizabeth; Evans, Kenneth R
2018-01-01
Historically, research databases have existed in isolation with no practical avenue for sharing or pooling medical data into high dimensional datasets that can be efficiently compared across databases. To address this challenge, the Ontario Brain Institute's "Brain-CODE" is a large-scale neuroinformatics platform designed to support the collection, storage, federation, sharing and analysis of different data types across several brain disorders, as a means to understand common underlying causes of brain dysfunction and develop novel approaches to treatment. By providing researchers access to aggregated datasets that they otherwise could not obtain independently, Brain-CODE incentivizes data sharing and collaboration and facilitates analyses both within and across disorders and across a wide array of data types, including clinical, neuroimaging and molecular. The Brain-CODE system architecture provides the technical capabilities to support (1) consolidated data management to securely capture, monitor and curate data, (2) privacy and security best-practices, and (3) interoperable and extensible systems that support harmonization, integration, and query across diverse data modalities and linkages to external data sources. Brain-CODE currently supports collaborative research networks focused on various brain conditions, including neurodevelopmental disorders, cerebral palsy, neurodegenerative diseases, epilepsy and mood disorders. These programs are generating large volumes of data that are integrated within Brain-CODE to support scientific inquiry and analytics across multiple brain disorders and modalities. By providing access to very large datasets on patients with different brain disorders and enabling linkages to provincial, national and international databases, Brain-CODE will help to generate new hypotheses about the biological bases of brain disorders, and ultimately promote new discoveries to improve patient care.
Simonyan, Vahan; Chumakov, Konstantin; Dingerdissen, Hayley; Faison, William; Goldweber, Scott; Golikov, Anton; Gulzar, Naila; Karagiannis, Konstantinos; Vinh Nguyen Lam, Phuc; Maudru, Thomas; Muravitskaja, Olesja; Osipova, Ekaterina; Pan, Yang; Pschenichnov, Alexey; Rostovtsev, Alexandre; Santana-Quintero, Luis; Smith, Krista; Thompson, Elaine E.; Tkachenko, Valery; Torcivia-Rodriguez, John; Wan, Quan; Wang, Jing; Wu, Tsung-Jung; Wilson, Carolyn; Mazumder, Raja
2016-01-01
The High-performance Integrated Virtual Environment (HIVE) is a distributed storage and compute environment designed primarily to handle next-generation sequencing (NGS) data. This multicomponent cloud infrastructure provides secure web access for authorized users to deposit, retrieve, annotate and compute on NGS data, and to analyse the outcomes using web interface visual environments appropriately built in collaboration with research and regulatory scientists and other end users. Unlike many massively parallel computing environments, HIVE uses a cloud control server which virtualizes services, not processes. It is both very robust and flexible due to the abstraction layer introduced between computational requests and operating system processes. The novel paradigm of moving computations to the data, instead of moving data to computational nodes, has proven to be significantly less taxing for both hardware and network infrastructure. The honeycomb data model developed for HIVE integrates metadata into an object-oriented model. Its distinction from other object-oriented databases is in the additional implementation of a unified application program interface to search, view and manipulate data of all types. This model simplifies the introduction of new data types, thereby minimizing the need for database restructuring and streamlining the development of new integrated information systems. The honeycomb model employs a highly secure hierarchical access control and permission system, allowing determination of data access privileges in a finely granular manner without flooding the security subsystem with a multiplicity of rules. HIVE infrastructure will allow engineers and scientists to perform NGS analysis in a manner that is both efficient and secure. HIVE is actively supported in public and private domains, and project collaborations are welcomed. Database URL: https://hive.biochemistry.gwu.edu PMID:26989153
Simonyan, Vahan; Chumakov, Konstantin; Dingerdissen, Hayley; Faison, William; Goldweber, Scott; Golikov, Anton; Gulzar, Naila; Karagiannis, Konstantinos; Vinh Nguyen Lam, Phuc; Maudru, Thomas; Muravitskaja, Olesja; Osipova, Ekaterina; Pan, Yang; Pschenichnov, Alexey; Rostovtsev, Alexandre; Santana-Quintero, Luis; Smith, Krista; Thompson, Elaine E; Tkachenko, Valery; Torcivia-Rodriguez, John; Voskanian, Alin; Wan, Quan; Wang, Jing; Wu, Tsung-Jung; Wilson, Carolyn; Mazumder, Raja
2016-01-01
The High-performance Integrated Virtual Environment (HIVE) is a distributed storage and compute environment designed primarily to handle next-generation sequencing (NGS) data. This multicomponent cloud infrastructure provides secure web access for authorized users to deposit, retrieve, annotate and compute on NGS data, and to analyse the outcomes using web interface visual environments appropriately built in collaboration with research and regulatory scientists and other end users. Unlike many massively parallel computing environments, HIVE uses a cloud control server which virtualizes services, not processes. It is both very robust and flexible due to the abstraction layer introduced between computational requests and operating system processes. The novel paradigm of moving computations to the data, instead of moving data to computational nodes, has proven to be significantly less taxing for both hardware and network infrastructure. The honeycomb data model developed for HIVE integrates metadata into an object-oriented model. Its distinction from other object-oriented databases is in the additional implementation of a unified application program interface to search, view and manipulate data of all types. This model simplifies the introduction of new data types, thereby minimizing the need for database restructuring and streamlining the development of new integrated information systems. The honeycomb model employs a highly secure hierarchical access control and permission system, allowing determination of data access privileges in a finely granular manner without flooding the security subsystem with a multiplicity of rules. HIVE infrastructure will allow engineers and scientists to perform NGS analysis in a manner that is both efficient and secure. HIVE is actively supported in public and private domains, and project collaborations are welcomed. Database URL: https://hive.biochemistry.gwu.edu. © The Author(s) 2016. Published by Oxford University Press.
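As a very rough sketch of the two ideas highlighted above, a unified typed metadata object and hierarchical access control, the snippet below stores arbitrary object types behind one small interface and resolves read permission by walking a group hierarchy. It is not HIVE's actual API or schema; all names and the permission rule are assumptions for illustration.

```python
# Toy illustration of a unified, typed metadata object plus hierarchical access
# control; not HIVE's real data model.

class DataObject:
    def __init__(self, obj_type, owner_group, **metadata):
        self.obj_type = obj_type          # e.g. "sequence-run", "pipeline", "report"
        self.owner_group = owner_group    # node in the group hierarchy
        self.metadata = metadata          # arbitrary key/value metadata, any type

GROUP_PARENT = {"lab-A": "department", "department": "institute", "institute": None}

def can_read(user_group, obj):
    """Rule assumed here: a group may read objects owned by itself or by any group
    below it in the hierarchy (i.e. the reader is an ancestor of the owner)."""
    group = obj.owner_group
    while group is not None:
        if group == user_group:
            return True
        group = GROUP_PARENT.get(group)
    return False

run = DataObject("sequence-run", "lab-A", platform="Illumina", reads=1_200_000)
print(can_read("lab-A", run))                          # True: owning group
print(can_read("department", run))                     # True: parent group
print(can_read("lab-A", DataObject("report", "institute")))  # False: owned higher up
```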
Vaccarino, Anthony L.; Dharsee, Moyez; Strother, Stephen; Aldridge, Don; Arnott, Stephen R.; Behan, Brendan; Dafnas, Costas; Dong, Fan; Edgecombe, Kenneth; El-Badrawi, Rachad; El-Emam, Khaled; Gee, Tom; Evans, Susan G.; Javadi, Mojib; Jeanson, Francis; Lefaivre, Shannon; Lutz, Kristen; MacPhee, F. Chris; Mikkelsen, Jordan; Mikkelsen, Tom; Mirotchnick, Nicholas; Schmah, Tanya; Studzinski, Christa M.; Stuss, Donald T.; Theriault, Elizabeth; Evans, Kenneth R.
2018-01-01
Historically, research databases have existed in isolation with no practical avenue for sharing or pooling medical data into high dimensional datasets that can be efficiently compared across databases. To address this challenge, the Ontario Brain Institute’s “Brain-CODE” is a large-scale neuroinformatics platform designed to support the collection, storage, federation, sharing and analysis of different data types across several brain disorders, as a means to understand common underlying causes of brain dysfunction and develop novel approaches to treatment. By providing researchers access to aggregated datasets that they otherwise could not obtain independently, Brain-CODE incentivizes data sharing and collaboration and facilitates analyses both within and across disorders and across a wide array of data types, including clinical, neuroimaging and molecular. The Brain-CODE system architecture provides the technical capabilities to support (1) consolidated data management to securely capture, monitor and curate data, (2) privacy and security best-practices, and (3) interoperable and extensible systems that support harmonization, integration, and query across diverse data modalities and linkages to external data sources. Brain-CODE currently supports collaborative research networks focused on various brain conditions, including neurodevelopmental disorders, cerebral palsy, neurodegenerative diseases, epilepsy and mood disorders. These programs are generating large volumes of data that are integrated within Brain-CODE to support scientific inquiry and analytics across multiple brain disorders and modalities. By providing access to very large datasets on patients with different brain disorders and enabling linkages to provincial, national and international databases, Brain-CODE will help to generate new hypotheses about the biological bases of brain disorders, and ultimately promote new discoveries to improve patient care. PMID:29875648
Exemplar pediatric collaborative improvement networks: achieving results.
Billett, Amy L; Colletti, Richard B; Mandel, Keith E; Miller, Marlene; Muething, Stephen E; Sharek, Paul J; Lannon, Carole M
2013-06-01
A number of pediatric collaborative improvement networks have demonstrated improved care and outcomes for children. Regionally, Cincinnati Children's Hospital Medical Center Physician Hospital Organization has sustained key asthma processes, substantially increased the percentage of their asthma population receiving "perfect care," and implemented an innovative pay-for-performance program with a large commercial payor based on asthma performance measures. The California Perinatal Quality Care Collaborative uses its outcomes database to improve care for infants in California NICUs. It has achieved reductions in central line-associated blood stream infections (CLABSI), increased breast-milk feeding rates at hospital discharge, and is now working to improve delivery room management. Solutions for Patient Safety (SPS) has achieved significant improvements in adverse drug events and surgical site infections across all 8 Ohio children's hospitals, with 7700 fewer children harmed and >$11.8 million in avoided costs. SPS is now expanding nationally, aiming to eliminate all events of serious harm at children's hospitals. National collaborative networks include ImproveCareNow, which aims to improve care and outcomes for children with inflammatory bowel disease. Reliable adherence to Model Care Guidelines has produced improved remission rates without using new medications and a significant increase in the proportion of Crohn disease patients not taking prednisone. Data-driven collaboratives of the Children's Hospital Association Quality Transformation Network initially focused on CLABSI in PICUs. By September 2011, they had prevented an estimated 2964 CLABSI, saving 355 lives and $103,722,423. Subsequent improvement efforts include CLABSI reductions in additional settings and populations.
Obesity Researches Over the Past 24 years: A Scientometrics Study in Middle East Countries.
Djalalinia, Shirin; Peykari, Niloofar; Qorbani, Mostafa; Moghaddam, Sahar Saeedi; Larijani, Bagher; Farzadfar, Farshad
2015-01-01
Researchers, practitioners, and policy-makers call for updated, valid evidence to monitor, prevent, and control the alarming trends of obesity. We quantified trends in the obesity/overweight research outputs of Middle East countries. We systematically searched the Scopus database, as the only source of multidisciplinary citation reports with the broadest coverage of the health and biomedicine disciplines, for all related obesity/overweight publications from 1990 to 2013. These scientometric analyses assessed trends in scientific products, citations, and collaborative papers in Middle East countries. We also provide information on top institutions, journals, and collaborative research centers in the field of obesity/overweight. Over the 24-year period, the number of obesity/overweight publications and related citations in Middle East countries showed an increasing trend. Globally, during 1990-2013, 415,126 papers were published, of which 3.56% were affiliated with Middle East countries. Iran, with 26.27%, held the third position in the region, after Turkey (47.94%) and Israel (35.25%). Israel, Turkey, and Iran were the leading countries in the citation analysis. The country collaborating most with Middle East countries was the USA, and within the region the most collaborative country was Saudi Arabia. Despite the ascending trends in research outputs, more effort is required to promote collaborative partnerships. The results could be useful for better health policy and more planned studies in this field. These findings could also be used for future complementary analyses.
Britto, Jorge; Vargas, Marco Antônio; Gadelha, Carlos Augusto Grabois; Costa, Laís Silveira
2012-12-01
To examine recent developments in health-related scientific capabilities, the impact of lines of incentives on reducing regional scientific imbalances, and university-industry research collaboration in Brazil. Data were obtained from the Conselho Nacional de Desenvolvimento Científico e Tecnológico (Brazilian National Council for Scientific and Technological Development) databases for the years 2000 to 2010. Indicators of resource mobilization, research network structuring, and knowledge transfer between science and industry initiatives were assessed. Based on the regional distribution map of health-related scientific and technological capabilities, patterns of scientific capabilities and science-industry collaboration were identified. There was relative spatial deconcentration of health research groups, and more than 6% of them worked in six knowledge areas: medicine, collective health, dentistry, veterinary medicine, ecology and physical education. Lines of incentives adopted from 2000 to 2009 contributed to reducing regional scientific imbalances and improving preexisting capabilities or, alternatively, encouraging spatial decentralization of these capabilities. Health-related scientific and technological capabilities remain highly spatially concentrated in Brazil, and incentive policies have contributed to reducing these imbalances to some extent.
Wirtschafter, David D; Danielsen, Beate H; Main, Elliott K; Korst, Lisa M; Gregory, Kimberly D; Wertz, Andrew; Stevenson, David K; Gould, Jeffrey B
2006-05-01
The California Perinatal Quality Care Collaborative (CPQCC) was formed to seek perinatal care improvements by creating a confidential multi-institutional database to identify topics for quality improvement (QI). We aimed to evaluate this approach by assessing antenatal steroid administration before preterm (24 to 33 weeks of gestation) delivery. We hypothesized that mean performance would improve and the number of centers performing below the lowest quartile of the baseline year would decrease. In 1998, a statewide QI cycle targeting antenatal steroid use was announced, calling for the evaluation of the 1998 baseline data, dissemination of recommended interventions using member-developed educational materials, and presentations to California neonatologists in 1999-2000. Postintervention data were assessed for the year 2001 and publicly released in 2003. A total of 25 centers voluntarily participated in the intervention. Antenatal steroid administration rate increased from 76% of 1524 infants in 1998 to 86% of 1475 infants in 2001 (P < .001). In 2001, 23 of 25 hospitals exceeded the 1998 lower-quartile cutoff point of 69.3%. Regional collaborations represent an effective strategy for improving the quality of perinatal care.
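As a quick plausibility check, the reported change (76% of 1524 infants in 1998 versus 86% of 1475 infants in 2001) can be tested with a standard two-proportion z-test. The sketch below is a minimal illustration using counts reconstructed from the published percentages; it is not the CPQCC's own analysis.

# Minimal two-proportion z-test for the reported antenatal steroid rates.
# Counts are reconstructed from the published percentages, so they are approximate.
from math import sqrt, erfc

n1, n2 = 1524, 1475                            # infants delivered in 1998 and 2001
x1, x2 = round(0.76 * n1), round(0.86 * n2)    # infants whose mothers received steroids (approx.)

p1, p2 = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)                 # pooled proportion under H0: p1 == p2
se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p2 - p1) / se
p_value = erfc(abs(z) / sqrt(2))               # two-sided p-value from the normal approximation

print(f"1998 rate {p1:.2f}, 2001 rate {p2:.2f}, z = {z:.1f}, p = {p_value:.1e}")

With these counts the statistic comes out near z = 7, comfortably consistent with the reported P < .001.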
QoE collaborative evaluation method based on fuzzy clustering heuristic algorithm.
Bao, Ying; Lei, Weimin; Zhang, Wei; Zhan, Yuzhuo
2016-01-01
At present, realizing or improving quality of experience (QoE) is a major goal of network media transmission services, and QoE evaluation is the basis for adjusting the transmission control mechanism. This paper therefore proposes a QoE collaborative evaluation method based on a fuzzy clustering heuristic algorithm, concentrating on service score calculation at the server side. The server collects network transmission quality of service (QoS) parameters, node location data, and user expectation values from client feedback. It then manages the historical data in a database using a "big data" processing mode and predicts user scores according to heuristic rules. On this basis, it completes a fuzzy clustering analysis and generates a service QoE score and a management message, which are finally fed back to the clients. The paper mainly discusses the service evaluation generation rules, heuristic evaluation rules, and fuzzy clustering analysis methods, and presents the service-based QoE evaluation process. Simulation experiments verified the effectiveness of the proposed QoE collaborative evaluation method based on fuzzy clustering heuristic rules.
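The abstract describes the server-side pipeline only at a high level, so the following is a hedged sketch of one way fuzzy clustering of client QoS feedback could feed a heuristic QoE score. Fuzzy c-means stands in for the paper's unspecified clustering step, and the scoring rule, variable names, and example numbers are all illustrative assumptions rather than the authors' method.

# Illustrative fuzzy c-means clustering of client QoS feedback, followed by a
# heuristic per-cluster QoE score. A sketch only; not the paper's algorithm.
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, iters=100, seed=0):
    """Return cluster centers and the membership matrix U (n_samples x c)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)              # memberships sum to 1 per sample
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / d ** (2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# Hypothetical client feedback rows: [delay_ms, loss_pct, jitter_ms]
qos = np.array([[40, 0.1, 3], [45, 0.2, 4], [180, 2.5, 25],
                [200, 3.0, 30], [90, 0.8, 10], [95, 1.0, 12]], dtype=float)
centers, U = fuzzy_c_means(qos, c=3)

# Assumed heuristic rule: clusters with smaller (delay, loss, jitter) averages
# map to higher scores on a 1-5 scale.
badness = centers.mean(axis=1)
cluster_score = 5.0 - 4.0 * (badness - badness.min()) / (np.ptp(badness) + 1e-12)
qoe_per_client = U @ cluster_score                 # membership-weighted blend
print(np.round(qoe_per_client, 2))

Each client's score is a membership-weighted blend of per-cluster scores on a 1-5 (MOS-like) scale, which is one simple way to make the evaluation "collaborative" across clients with similar network conditions.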
Gene: a gene-centered information resource at NCBI.
Brown, Garth R; Hem, Vichet; Katz, Kenneth S; Ovetsky, Michael; Wallin, Craig; Ermolaeva, Olga; Tolstoy, Igor; Tatusova, Tatiana; Pruitt, Kim D; Maglott, Donna R; Murphy, Terence D
2015-01-01
The National Center for Biotechnology Information's (NCBI) Gene database (www.ncbi.nlm.nih.gov/gene) integrates gene-specific information from multiple data sources. NCBI Reference Sequence (RefSeq) genomes for viruses, prokaryotes and eukaryotes are the primary foundation for Gene records in that they form the critical association between sequence and a tracked gene upon which additional functional and descriptive content is anchored. Additional content is integrated based on the genomic location and RefSeq transcript and protein sequence data. The content of a Gene record represents the integration of curation and automated processing from RefSeq, collaborating model organism databases, consortia such as Gene Ontology, and other databases within NCBI. Records in Gene are assigned unique, tracked integers as identifiers. The content (citations, nomenclature, genomic location, gene products and their attributes, phenotypes, sequences, interactions, variation details, maps, expression, homologs, protein domains and external databases) is available via interactive browsing through NCBI's Entrez system, via NCBI's Entrez programming utilities (E-Utilities and Entrez Direct) and for bulk transfer by FTP. Published by Oxford University Press on behalf of Nucleic Acids Research 2014. This work is written by (a) US Government employee(s) and is in the public domain in the US.
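Gene content is programmatically accessible through NCBI's E-utilities, as noted above. A minimal sketch of fetching one record's summary over the esummary endpoint follows; GeneID 672 (human BRCA1) is used purely as an example, and the JSON fields printed are common ones rather than a guaranteed schema.

# Minimal E-utilities query for a Gene record summary (JSON).
# GeneID 672 (human BRCA1) is used here only as an example identifier.
import json
import urllib.request

BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esummary.fcgi"
url = f"{BASE}?db=gene&id=672&retmode=json"

with urllib.request.urlopen(url) as resp:
    reply = json.load(resp)

doc = reply["result"]["672"]
# The fields below are commonly present in gene esummary output;
# inspect doc.keys() for the authoritative list.
print(doc.get("name"), "-", doc.get("description"), "-", doc.get("chromosome"))

For bulk retrieval, the FTP transfer route mentioned in the record is more appropriate than repeated esummary calls.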
NASA Technical Reports Server (NTRS)
Bebout, Leslie; Keller, R.; Miller, S.; Jahnke, L.; DeVincenzi, D. (Technical Monitor)
2002-01-01
The Ames Exobiology Culture Collection Database (AECC-DB) has been developed as a collaboration between microbial ecologists and information technology specialists. It allows extensive web-based archiving of information about field samples to document microbial co-habitation of specific ecosystem micro-environments. Documentation and archiving continue as pure cultures are isolated, metabolic properties determined, and DNA extracted and sequenced. In this way, metabolic properties and molecular sequences are clearly linked back to specific isolates and to the location of those microbes in the ecosystem of origin. Use of this database system represents a significant advance over traditional record keeping, in which there is usually little or no information about the environments from which microorganisms were isolated beyond a broad ecosystem designation (e.g., hot spring). Within each such ecosystem, however, there are myriad microenvironments with very different properties, and determining exactly which microenvironment a given microbe comes from is critical for designing appropriate isolation media and for interpreting physiological properties. We are currently using the database to aid in the isolation of a large number of cyanobacterial species and will present results from PIs and students demonstrating the utility of this new approach.
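To make the provenance chain described above concrete, the sketch below models the microenvironment, field sample, isolate, and sequence links as simple Python dataclasses. It is a hypothetical illustration of the linkage, not the actual AECC-DB schema; all identifiers and values are invented.

# Hypothetical provenance chain: microenvironment -> field sample -> isolate -> sequence.
# This mirrors the linkage described in the abstract, not the real AECC-DB tables;
# all identifiers below are invented.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Microenvironment:
    ecosystem: str                 # broad designation, e.g. "hot spring"
    description: str               # the specific micro-habitat sampled

@dataclass
class FieldSample:
    sample_id: str
    site: Microenvironment
    collected_on: str

@dataclass
class Isolate:
    isolate_id: str
    source_sample: FieldSample
    metabolic_notes: List[str] = field(default_factory=list)
    sequence_accessions: List[str] = field(default_factory=list)

mat = Microenvironment("hot spring", "55 degC layer of a cyanobacterial mat")
sample = FieldSample("SAMPLE-001", mat, "2001-08-14")
isolate = Isolate("AECC-117", sample, ["oxygenic phototroph"], ["SEQ-0001"])
print(isolate.sequence_accessions, "<-", isolate.source_sample.site.description)

The point of the structure is that any sequence or metabolic note can be traced back through its isolate and sample to the specific microenvironment of origin.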
NASA Technical Reports Server (NTRS)
Rasmussen, Robert; Bennett, Matthew
2006-01-01
The State Analysis Database Tool software establishes a productive environment for collaboration among software and system engineers engaged in the development of complex interacting systems. The tool embodies State Analysis, a model-based system engineering methodology founded on a state-based control architecture. A state represents a momentary condition of an evolving system, and a model may describe how a state evolves and is affected by other states. The State Analysis methodology is a process for capturing system and software requirements in the form of explicit models and states, and defining goal-based operational plans consistent with the models. Requirements, models, and operational concerns have traditionally been documented in a variety of system engineering artifacts that address different aspects of a mission's lifecycle. In State Analysis, requirements, models, and operations information are State Analysis artifacts that are consistent and stored in a State Analysis Database. The tool includes a back-end database, a multi-platform front-end client, and Web-based administrative functions. The tool is structured to prompt an engineer to follow the State Analysis methodology, to encourage state discovery and model description, and to make software requirements and operations plans consistent with model descriptions.
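The description above can be made more tangible with a small sketch of the three kinds of artifact State Analysis keeps consistent: a state variable, a model of how it evolves, and a goal constraining it. The class names and the battery example are illustrative assumptions, not the tool's actual database schema.

# Illustrative (not actual) representation of State Analysis artifacts:
# a state variable, a model describing how it evolves, and a goal constraining it.
from dataclasses import dataclass
from typing import Callable

@dataclass
class StateVariable:
    name: str
    units: str
    value: float

@dataclass
class StateModel:
    affects: StateVariable
    evolve: Callable[[StateVariable, float], float]   # new value after dt seconds

@dataclass
class Goal:
    constrains: StateVariable
    predicate: Callable[[float], bool]                # must hold over the goal's interval

battery = StateVariable("battery_charge", "A*h", 60.0)
discharge = StateModel(battery, lambda s, dt: s.value - 0.5 * dt / 3600.0)
keep_charged = Goal(battery, lambda v: v > 20.0)

battery.value = discharge.evolve(battery, 3600.0)     # propagate one hour
print(battery.value, keep_charged.predicate(battery.value))

In the real tool these artifacts live in the shared database, so requirements, models, and operational plans all reference the same state definitions.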
Michigan/Air Force Research Laboratory (AFRL) Collaborative Center in Aeronautical Sciences (MACCAS)
2013-09-01
Interactions - PIV Database for the Second SBLI Workshop” “Design of a Glass Supersonic Wind Tunnel Experiment for Mixed Compression Inlet Investigations...or small-scale wind tunnel tests. Some of the discipline components have also been compared against well-established numerical solutions (e.g...difficult to test in a wind tunnel environment. The choice of construction, materials, and geometry were such that they allow accurate characterization of
Standardizing the nomenclature of Martian impact crater ejecta morphologies
Barlow, Nadine G.; Boyce, Joseph M.; Costard, Francois M.; Craddock, Robert A.; Garvin, James B.; Sakimoto, Susan E.H.; Kuzmin, Ruslan O.; Roddy, David J.; Soderblom, Laurence A.
2000-01-01
The Mars Crater Morphology Consortium recommends the use of a standardized nomenclature system when discussing Martian impact crater ejecta morphologies. The system utilizes nongenetic descriptors to identify the various ejecta morphologies seen on Mars. This system is designed to facilitate communication and collaboration between researchers. Crater morphology databases will be archived through the U.S. Geological Survey in Flagstaff, where a comprehensive catalog of Martian crater morphologic information will be maintained.
The National Center for Collaboration in Medical Modeling and Simulation
2005-05-01
universities) to determine the best development strategies. The Medical Modeling and Simulation Database (MMSD) has been created. The MMSD consists of two web... learner to obtain experience and skill prior to interacting with patients in vivo. The increasing focus on issues of patient safety, health care costs...additional option when considering how best to maximize their educational resources. While the results of this study suggest that VR simulators are useful
Energy star. (Latest citations from the Computer database). Published Search
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
The bibliography contains citations concerning a collaborative effort between the Environmental Protection Agency (EPA) and private industry to reduce electrical power consumed by personal computers and related peripherals. Manufacturers complying with EPA guidelines are officially recognized by award of a special Energy Star logo, and are referred to in official documents as vendors of green computers. (Contains a minimum of 81 citations and includes a subject term index and title list.)
Energy star. (Latest citations from the Computer database). Published Search
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
The bibliography contains citations concerning a collaborative effort between the Environmental Protection Agency (EPA) and private industry to reduce electrical power consumed by personal computers and related peripherals. Manufacturers complying with EPA guidelines are officially recognized by award of a special Energy Star logo, and are referred to in official documents as vendors of green computers. (Contains a minimum of 234 citations and includes a subject term index and title list.)
The Geant4 physics validation repository
Wenzel, H.; Yarba, J.; Dotti, A.
2015-12-23
The Geant4 collaboration regularly performs validation and regression tests. The results are stored in a central repository and can be easily accessed via a web application. In this article we describe the Geant4 physics validation repository, which consists of a relational database storing experimental data and Geant4 test results, a Java API, and a web application. The functionality of these components and the technology choices made are also described.
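The repository is described as a relational database holding both experimental reference data and Geant4 test results; the sqlite sketch below illustrates that idea with two tables joined on the observable being validated. Table and column names, and the sample values, are assumptions for illustration only, not the collaboration's actual schema.

# Minimal relational sketch: experimental reference points and Geant4 results for
# the same observable, so results can be compared across releases. Table and
# column names (and the sample numbers) are illustrative only.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE experiment (id INTEGER PRIMARY KEY, observable TEXT, beam TEXT,
                         target TEXT, energy_gev REAL, value REAL, error REAL);
CREATE TABLE g4result   (id INTEGER PRIMARY KEY, observable TEXT, beam TEXT,
                         target TEXT, energy_gev REAL, value REAL,
                         g4_version TEXT, physics_list TEXT);
""")
con.execute("INSERT INTO experiment VALUES (1, 'pi+ yield', 'proton', 'Ta', 3.0, 0.42, 0.03)")
con.execute("INSERT INTO g4result   VALUES (1, 'pi+ yield', 'proton', 'Ta', 3.0, 0.45, '10.2', 'FTFP_BERT')")

# Pair each simulated value with the matching measurement for comparison.
rows = con.execute("""
SELECT g.g4_version, g.physics_list, g.value AS simulated, e.value AS measured, e.error
FROM g4result g
JOIN experiment e ON g.observable = e.observable AND g.beam = e.beam
                 AND g.target = e.target AND g.energy_gev = e.energy_gev
""").fetchall()
print(rows)

A join of this kind is the basis for comparing releases and physics lists against the same measurements in validation and regression studies.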
US Gateway to SIMBAD Astronomical Database
NASA Technical Reports Server (NTRS)
Eichhorn, G.; Oliversen, R. (Technical Monitor)
1999-01-01
During the last year the US SIMBAD Gateway Project continued to provide services such as user registration to US users of the SIMBAD database in France. Currently there are over 3400 registered US users. We also provide user support by answering questions from users and handling requests for lost passwords where still necessary. In cooperation with the CDS SIMBAD project, we have implemented access to the SIMBAD database for US users on an Internet-address basis, which allows most US users to access SIMBAD without having to enter passwords. We have maintained the mirror copy of the SIMBAD database on a server at SAO, which has allowed much faster access for US users. We also supported a demonstration of the SIMBAD database at the meeting of the American Astronomical Society in January, shipping computer equipment to the meeting and providing support for the demonstration activities at the SIMBAD booth. We continued to improve the cross-linking between the SIMBAD project and the Astrophysics Data System (ADS); this cross-linking is very much appreciated by users of both the SIMBAD database and the ADS Abstract Service, and the mirror of the SIMBAD database at SAO makes the connection faster for US astronomers. We exchange information between the ADS and SIMBAD on a daily basis. The close cooperation between the CDS in Strasbourg and SAO, facilitated by this project, is an important part of the astronomy-wide digital library initiative called Urania. It has proven to be a model of how different data centers can collaborate and enhance the value of their products by linking with one another.
Development of the Global Earthquake Model’s neotectonic fault database
Christophersen, Annemarie; Litchfield, Nicola; Berryman, Kelvin; Thomas, Richard; Basili, Roberto; Wallace, Laura; Ries, William; Hayes, Gavin P.; Haller, Kathleen M.; Yoshioka, Toshikazu; Koehler, Richard D.; Clark, Dan; Wolfson-Schwehr, Monica; Boettcher, Margaret S.; Villamor, Pilar; Horspool, Nick; Ornthammarath, Teraphan; Zuñiga, Ramon; Langridge, Robert M.; Stirling, Mark W.; Goded, Tatiana; Costa, Carlos; Yeats, Robert
2015-01-01
The Global Earthquake Model (GEM) aims to develop uniform, openly available standards, datasets and tools for worldwide seismic risk assessment through global collaboration, transparent communication and the adaptation of state-of-the-art science. GEM Faulted Earth (GFE) is one of GEM's global hazard module projects. This paper describes GFE's development of a modern neotectonic fault database and a unique graphical interface for the compilation of new fault data. A key design principle is that of an electronic field notebook for capturing the observations a geologist would make about a fault. The database is designed to accommodate abundant as well as sparse fault observations. It features two layers, one for capturing neotectonic fault and fold observations, and the other for calculating potential earthquake fault sources from those observations. In order to test the flexibility of the database structure and to start a global compilation, five preexisting databases have been uploaded to the first layer and two to the second. In addition, the GFE project has characterised the world's approximately 55,000 km of subduction interfaces in a globally consistent manner as a basis for generating earthquake event sets for inclusion in earthquake hazard and risk modelling. Following the subduction interface fault schema and including the trace attributes of the GFE database schema, the 2500-km-long frontal thrust fault system of the Himalaya has also been characterised. We propose that the database structure be used widely, so that neotectonic fault data can make a more complete and beneficial contribution to seismic hazard and risk characterisation globally.
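To illustrate the two-layer design, the sketch below turns a layer-1 fault observation (trace length, width, slip rate) into a layer-2 earthquake source using the standard seismic-moment relation M0 = mu*A*D and the Hanks-Kanamori moment magnitude Mw = (2/3)(log10 M0 - 9.1), with M0 in N*m. The attribute names and example numbers are assumptions; GFE's actual source-calculation rules are richer than this.

# Illustrative layer-2 calculation for a fault source derived from layer-1 observations.
# Attribute names and the example numbers are hypothetical; only the moment and
# magnitude relations used in the comments above are standard.
from dataclasses import dataclass
from math import log10

MU = 3.0e10          # crustal rigidity, Pa (common assumption)

@dataclass
class FaultObservation:        # layer 1: what a geologist records
    name: str
    length_km: float
    width_km: float            # down-dip rupture width
    slip_rate_mm_yr: float

@dataclass
class FaultSource:             # layer 2: derived earthquake source
    name: str
    char_magnitude: float
    recurrence_yr: float

def to_source(obs: FaultObservation, coseismic_slip_m: float = 2.0) -> FaultSource:
    area_m2 = (obs.length_km * 1e3) * (obs.width_km * 1e3)
    m0 = MU * area_m2 * coseismic_slip_m                 # seismic moment, N*m
    mw = (2.0 / 3.0) * (log10(m0) - 9.1)                 # Hanks & Kanamori (1979)
    recurrence = coseismic_slip_m / (obs.slip_rate_mm_yr * 1e-3)
    return FaultSource(obs.name, round(mw, 2), round(recurrence))

print(to_source(FaultObservation("example fault", 80.0, 15.0, 5.0)))

For the example fault (80 km long, 15 km wide, 5 mm/yr slip rate, 2 m slip per event) this gives a characteristic magnitude of about Mw 7.2 and a recurrence interval of roughly 400 years.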