Sample records for database consortium developing

  1. Development of a personalized training system using the Lung Image Database Consortium and Image Database Resource Initiative Database.

    PubMed

    Lin, Hongli; Wang, Weisheng; Luo, Jiawei; Yang, Xuedong

    2014-12-01

    The aim of this study was to develop a personalized training system using the Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI) database. Collecting, annotating, and marking a large number of appropriate computed tomography (CT) scans, and providing the capability to dynamically select suitable training cases based on trainees' performance levels and case characteristics, are critical for developing an efficient training system. A novel approach is proposed to develop a personalized radiology training system for the interpretation of lung nodules in CT scans using the LIDC/IDRI database. The system provides a Content-Boosted Collaborative Filtering (CBCF) algorithm that predicts the difficulty level of each case for each trainee when selecting suitable cases to meet individual needs, and a diagnostic simulation tool that enables trainees to analyze and diagnose lung nodules with the help of an image processing tool and a nodule retrieval tool. Preliminary evaluation of the system shows that a personalized training system for the interpretation of lung nodules is both needed and useful for enhancing the professional skills of trainees. Developing personalized training systems using the LIDC/IDRI database is a feasible solution to the challenges of constructing specific training programs in terms of cost and training efficiency. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.
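    Content-boosted collaborative filtering generally works by densifying the sparse trainee-by-case rating matrix with content-based predictions before applying user-based collaborative filtering. The abstract does not give the authors' implementation; the sketch below illustrates only the general CBCF idea, with a hypothetical array layout and a simple nearest-neighbor weighting.

    ```python
    import numpy as np

    def content_boosted_cf(ratings, content_pred, k=2):
        """Sketch of Content-Boosted Collaborative Filtering.

        ratings: (n_trainees, n_cases) observed difficulty ratings,
                 with np.nan where a trainee has not rated a case.
        content_pred: same shape, difficulty predicted from case
                 features alone (the content-based model).
        Returns a dense matrix of predicted difficulty ratings.
        """
        # 1. Densify: fill unrated cells with content-based predictions.
        pseudo = np.where(np.isnan(ratings), content_pred, ratings)

        # 2. User-based CF on the pseudo-ratings matrix.
        means = pseudo.mean(axis=1, keepdims=True)
        centered = pseudo - means
        norms = np.linalg.norm(centered, axis=1, keepdims=True)
        norms[norms == 0] = 1.0  # guard constant rows
        unit = centered / norms
        sim = unit @ unit.T  # cosine similarity between trainees

        preds = np.empty_like(pseudo)
        for u in range(pseudo.shape[0]):
            # k most similar other trainees
            others = [v for v in np.argsort(sim[u])[::-1] if v != u][:k]
            w = sim[u, others]
            if np.abs(w).sum() == 0:
                preds[u] = means[u]  # no informative neighbors
            else:
                preds[u] = means[u] + (w @ centered[others]) / np.abs(w).sum()
        return preds
    ```

    In a training system, the predicted difficulty for unseen cases would then drive case selection for each trainee.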

  2. Atomic and Molecular Databases, VAMDC (Virtual Atomic and Molecular Data Centre)

    NASA Astrophysics Data System (ADS)

    Dubernet, Marie-Lise; Zwölf, Carlo Maria; Moreau, Nicolas; Awa Ba, Yaya; VAMDC Consortium

    2015-08-01

    The "Virtual Atomic and Molecular Data Centre Consortium" (VAMDC Consortium, http://www.vamdc.eu) is a consortium bound by a Memorandum of Understanding that aims to ensure the sustainability of the VAMDC e-infrastructure. The current VAMDC e-infrastructure interconnects about 30 atomic and molecular databases, with the number of connected databases increasing every year: some are well-known databases such as CDMS, JPL, HITRAN, and VALD, while others have been created since the start of VAMDC. About 90% of our databases are used for astrophysical applications. The data can be queried, retrieved, and visualized in a single format from a general portal (http://portal.vamdc.eu), and VAMDC is also developing standalone tools to retrieve and handle the data. VAMDC provides software and support for including databases within the VAMDC e-infrastructure. One current feature of VAMDC is the constrained environment for describing data, which ensures higher quality in data distribution; a future feature is the link of VAMDC with evaluation/validation groups. The talk will present the VAMDC Consortium and the VAMDC e-infrastructure with its underlying technology, its services, its science use cases, and its extension towards communities beyond the academic research community.

  3. SU-E-P-26: Oncospace: A Shared Radiation Oncology Database System Designed for Personalized Medicine, Decision Support, and Research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bowers, M; Robertson, S; Moore, J

    Purpose: Advancement in Radiation Oncology (RO) practice develops through evidence-based medicine and clinical trials. Knowledge usable for treatment planning, decision support, and research is contained in our clinical data, stored in an Oncospace database. This data store and the tools for populating and analyzing it are compatible with standard RO practice and are shared with collaborating institutions. The question is: what protocol should govern system development and data sharing within an Oncospace Consortium? We focus our example on the technology and data semantics necessary to share across the Consortium. Methods: Oncospace consists of a database schema, planning and outcome data import tools, and web-based analysis tools. 1) Database: The Consortium implements a federated data store; each member collects and maintains its own data within an Oncospace schema. For privacy, PHI is contained within a single table, accessible only to the database owner. 2) Import: Spatial dose data from treatment plans (Pinnacle or DICOM) is imported via Oncolink. Treatment outcomes are imported from an OIS (MOSAIQ). 3) Analysis: JHU has built a number of web pages to answer analysis questions. Oncospace data can also be analyzed via MATLAB or SAS queries. These materials are available to Consortium members, who contribute enhancements and improvements. Results: 1) The Oncospace Consortium now consists of RO centers at JHU, UVA, UW, and the University of Toronto. These members have successfully installed and populated Oncospace databases with over 1000 patients collectively. 2) Members contribute code and get updates via an SVN repository. Errors are reported and tracked via Redmine. Teleconferences include design strategy and code reviews. 3) Federated databases were successfully queried remotely to combine multiple institutions' DVH data for dose-toxicity analysis (data combined from JHU and UW Oncospace).
Conclusion: RO data sharing can be and has been effected according to the Oncospace Consortium model: http://oncospace.radonc.jhmi.edu/ . John Wong - SRA from Elekta; Todd McNutt - SRA from Elekta; Michael Bowers - funded by Elekta.
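    The federated model described in this record (each member hosting its own schema, with results pooled at query time rather than centralized) can be illustrated with a small sketch. The table layout, column names, and use of in-memory SQLite here are all hypothetical stand-ins for the members' actual Oncospace databases.

    ```python
    import sqlite3

    def make_member_db(rows):
        """Create a hypothetical member institution's DVH table."""
        con = sqlite3.connect(":memory:")
        con.execute(
            "CREATE TABLE dvh (patient_id INTEGER, organ TEXT,"
            " dose_gy REAL, volume_pct REAL)"
        )
        con.executemany("INSERT INTO dvh VALUES (?,?,?,?)", rows)
        return con

    def federated_dvh(members, organ):
        """Query each member database separately and pool the results.

        Mirrors the federated idea: data stays at each institution,
        and only query results are combined for analysis.
        """
        pooled = []
        for name, con in members.items():
            cur = con.execute(
                "SELECT dose_gy, volume_pct FROM dvh WHERE organ = ?",
                (organ,),
            )
            for dose, vol in cur:
                pooled.append((name, dose, vol))
        return pooled

    # Two toy member databases standing in for JHU and UW instances.
    members = {
        "JHU": make_member_db([(1, "parotid", 26.0, 50.0)]),
        "UW":  make_member_db([(7, "parotid", 31.5, 50.0)]),
    }
    rows = federated_dvh(members, "parotid")
    ```

    A real deployment would add authentication and keep the PHI table inaccessible to remote queries, as the abstract describes.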

  4. [Activity of NTDs Drug-discovery Research Consortium].

    PubMed

    Namatame, Ichiji

    2016-01-01

    Neglected tropical diseases (NTDs) are an extremely important issue facing global health care. To improve "access to health" where people are unable to access adequate medical care due to poverty and weak healthcare systems, we have established two consortia: the NTD drug discovery research consortium and the pediatric praziquantel consortium. The NTD drug discovery research consortium, which involves six institutions from industry, government, and academia, as well as an international non-profit organization, is committed to developing anti-protozoan active compounds for three NTDs (leishmaniasis, Chagas disease, and African sleeping sickness). Each participating institute will contribute its efforts to accomplish the following: selection of drug targets based on information technology, and drug discovery by three different approaches (in silico drug discovery, "fragment evolution", a unique drug-design method of Astellas Pharma, and phenotypic screening with Astellas' compound library). The consortium has established a brand-new database (Integrated Neglected Tropical Disease Database; iNTRODB), and has selected target proteins for the in silico and fragment evolution drug discovery approaches. Thus far, we have identified a number of promising compounds that inhibit the target protein, and we are currently trying to improve the anti-protozoan activity of these compounds. The pediatric praziquantel consortium was founded in July 2012 to develop and register a new praziquantel pediatric formulation for the treatment of schistosomiasis. Astellas Pharma has been a core member of this consortium since its establishment, and has provided expertise and technology in the area of pediatric formulation development and clinical development.

  5. Resources | Division of Cancer Prevention

    Cancer.gov

    Manual of Operations Version 3, 12/13/2012 (PDF, 162KB) | Database Sources: Consortium for Functional Glycomics databases | Design Studies Related to the Development of Distributed, Web-based European Carbohydrate Databases (EUROCarbDB)

  6. The Chicago Thoracic Oncology Database Consortium: A Multisite Database Initiative

    PubMed Central

    Carey, George B; Tan, Yi-Hung Carol; Bokhary, Ujala; Itkonen, Michelle; Szeto, Kyle; Wallace, James; Campbell, Nicholas; Hensing, Thomas; Salgia, Ravi

    2016-01-01

    Objective: An increasing amount of clinical data is available to biomedical researchers, but specifically designed database and informatics infrastructures are needed to handle this data effectively. Multiple research groups should be able to pool and share this data in an efficient manner. The Chicago Thoracic Oncology Database Consortium (CTODC) was created to standardize data collection and facilitate the pooling and sharing of data at institutions throughout Chicago and across the world. We assessed the CTODC by conducting a proof-of-principle investigation on lung cancer patients who took erlotinib. This study does not look into epidermal growth factor receptor (EGFR) mutations and tyrosine kinase inhibitors; rather, it discusses the development and utilization of the database involved. Methods: We have implemented the Thoracic Oncology Program Database Project (TOPDP) Microsoft Access, the Thoracic Oncology Research Program (TORP) Velos, and the TORP REDCap databases for translational research efforts. Standard operating procedures (SOPs) were created to document the construction and proper utilization of these databases. These SOPs have been made freely available to other institutions that have implemented their own databases patterned on these SOPs. Results: A cohort of 373 lung cancer patients who took erlotinib was identified. The EGFR mutation statuses of patients were analyzed. Of the 70 patients tested, 55 had mutations and 15 did not. In terms of overall survival and duration of treatment, the cohort demonstrated that EGFR-mutated patients had a longer duration of erlotinib treatment and longer overall survival compared to their EGFR wild-type counterparts who received erlotinib. Discussion: The investigation successfully yielded data from all institutions of the CTODC. 
While the investigation identified challenges, such as the difficulty of data transfer and potential duplication of patient data, these issues can be resolved with greater cross-communication between institutions of the consortium. Conclusion: The investigation described herein demonstrates the successful data collection from multiple institutions in the context of a collaborative effort. The data presented here can be utilized as the basis for further collaborative efforts and/or development of larger and more streamlined databases within the consortium. PMID:27092293

  7. The Chicago Thoracic Oncology Database Consortium: A Multisite Database Initiative.

    PubMed

    Won, Brian; Carey, George B; Tan, Yi-Hung Carol; Bokhary, Ujala; Itkonen, Michelle; Szeto, Kyle; Wallace, James; Campbell, Nicholas; Hensing, Thomas; Salgia, Ravi

    2016-03-16

    An increasing amount of clinical data is available to biomedical researchers, but specifically designed database and informatics infrastructures are needed to handle this data effectively. Multiple research groups should be able to pool and share this data in an efficient manner. The Chicago Thoracic Oncology Database Consortium (CTODC) was created to standardize data collection and facilitate the pooling and sharing of data at institutions throughout Chicago and across the world. We assessed the CTODC by conducting a proof-of-principle investigation on lung cancer patients who took erlotinib. This study does not look into epidermal growth factor receptor (EGFR) mutations and tyrosine kinase inhibitors; rather, it discusses the development and utilization of the database involved. We have implemented the Thoracic Oncology Program Database Project (TOPDP) Microsoft Access, the Thoracic Oncology Research Program (TORP) Velos, and the TORP REDCap databases for translational research efforts. Standard operating procedures (SOPs) were created to document the construction and proper utilization of these databases. These SOPs have been made freely available to other institutions that have implemented their own databases patterned on these SOPs. A cohort of 373 lung cancer patients who took erlotinib was identified. The EGFR mutation statuses of patients were analyzed. Of the 70 patients tested, 55 had mutations and 15 did not. In terms of overall survival and duration of treatment, the cohort demonstrated that EGFR-mutated patients had a longer duration of erlotinib treatment and longer overall survival compared to their EGFR wild-type counterparts who received erlotinib. The investigation successfully yielded data from all institutions of the CTODC. 
While the investigation identified challenges, such as the difficulty of data transfer and potential duplication of patient data, these issues can be resolved with greater cross-communication between institutions of the consortium. The investigation described herein demonstrates the successful data collection from multiple institutions in the context of a collaborative effort. The data presented here can be utilized as the basis for further collaborative efforts and/or development of larger and more streamlined databases within the consortium.

  8. Consortia for Engineering, Science and Technology Libraries in India: A Case Study of INDEST Consortium

    NASA Astrophysics Data System (ADS)

    Pathak, S. K.; Deshpande, N. J.

    2007-10-01

    The present scenario of the INDEST Consortium among engineering, science and technology (including astronomy and astrophysics) libraries in India is discussed. The Indian National Digital Library in Engineering Sciences & Technology (INDEST) Consortium is a major initiative of the Ministry of Human Resource Development, Government of India. The INDEST Consortium provides access to 16 full-text e-resources and 7 bibliographic databases for 166 member institutions, which benefit from cost-effective access to premier resources in engineering, science and technology, including astronomy and astrophysics. Member institutions can access over 6500 e-journals from 1092 publishers; of these, over 150 e-journals are exclusively for the astronomy and physics community. The current study also presents a comparative analysis of the key features of nine major services, viz. ACM Digital Library, ASCE Journals, ASME Journals, EBSCO Databases (Business Source Premier), Elsevier's Science Direct, Emerald Full Text, IEEE/IEE Electronic Library Online (IEL), ProQuest ABI/INFORM, and Springer Verlag's Link. The limitations of this consortium are also discussed.

  9. External validation and comparison with other models of the International Metastatic Renal-Cell Carcinoma Database Consortium prognostic model: a population-based study

    PubMed Central

    Heng, Daniel Y C; Xie, Wanling; Regan, Meredith M; Harshman, Lauren C; Bjarnason, Georg A; Vaishampayan, Ulka N; Mackenzie, Mary; Wood, Lori; Donskov, Frede; Tan, Min-Han; Rha, Sun-Young; Agarwal, Neeraj; Kollmannsberger, Christian; Rini, Brian I; Choueiri, Toni K

    2014-01-01

    Summary Background The International Metastatic Renal-Cell Carcinoma Database Consortium model offers prognostic information for patients with metastatic renal-cell carcinoma. We tested the accuracy of the model in an external population and compared it with other prognostic models. Methods We included patients with metastatic renal-cell carcinoma who were treated with first-line VEGF-targeted treatment at 13 international cancer centres and who were registered in the Consortium’s database but had not contributed to the initial development of the Consortium Database model. The primary endpoint was overall survival. We compared the Database Consortium model with the Cleveland Clinic Foundation (CCF) model, the International Kidney Cancer Working Group (IKCWG) model, the French model, and the Memorial Sloan-Kettering Cancer Center (MSKCC) model by concordance indices and other measures of model fit. Findings Overall, 1028 patients were included in this study, of whom 849 had complete data to assess the Database Consortium model. Median overall survival was 18·8 months (95% CI 17·6–21·4). The predefined Database Consortium risk factors (anaemia, thrombocytosis, neutrophilia, hypercalcaemia, Karnofsky performance status <80%, and <1 year from diagnosis to treatment) were independent predictors of poor overall survival in the external validation set (hazard ratios ranged between 1·27 and 2·08, concordance index 0·71, 95% CI 0·68–0·73). When patients were segregated into three risk categories, median overall survival was 43·2 months (95% CI 31·4–50·1) in the favourable risk group (no risk factors; 157 patients), 22·5 months (18·7–25·1) in the intermediate risk group (one to two risk factors; 440 patients), and 7·8 months (6·5–9·7) in the poor risk group (three or more risk factors; 252 patients; p<0·0001; concordance index 0·664, 95% CI 0·639–0·689). 672 patients had complete data to test all five models. 
The concordance index of the CCF model was 0·662 (95% CI 0·636–0·687), of the French model 0·640 (0·614–0·665), of the IKCWG model 0·668 (0·645–0·692), and of the MSKCC model 0·657 (0·632–0·682). The reported versus predicted number of deaths at 2 years was most similar in the Database Consortium model compared with the other models. Interpretation The Database Consortium model is now externally validated and can be applied to stratify patients by risk in clinical trials and to counsel patients about prognosis. PMID:23312463
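    The concordance index reported for each model above measures the fraction of comparable patient pairs in which the patient predicted to be at higher risk actually dies earlier. A minimal sketch of Harrell's c-index for right-censored survival data (not the authors' implementation; tied event times are simply skipped here):

    ```python
    from itertools import combinations

    def concordance_index(times, events, risk_scores):
        """Harrell's concordance index, simplified.

        times: observed follow-up times
        events: 1 if death observed, 0 if censored
        risk_scores: model-predicted risk (higher = worse prognosis)
        """
        concordant = 0.0
        comparable = 0
        for i, j in combinations(range(len(times)), 2):
            # order the pair so that i has the shorter time
            if times[j] < times[i]:
                i, j = j, i
            # comparable only if the shorter time ends in an observed event
            if times[i] == times[j] or not events[i]:
                continue
            comparable += 1
            if risk_scores[i] > risk_scores[j]:
                concordant += 1.0   # higher risk died earlier: concordant
            elif risk_scores[i] == risk_scores[j]:
                concordant += 0.5   # tied predictions count half
        return concordant / comparable
    ```

    A value of 0.5 indicates no discrimination and 1.0 perfect ranking, which is why the differences between the models' indices (0.64 to 0.71) are meaningful but modest.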

  10. Completion of the National Land Cover Database (NLCD) 1992-2001 Land Cover Change Retrofit Product

    EPA Science Inventory

    The Multi-Resolution Land Characteristics Consortium has supported the development of two national digital land cover products: the National Land Cover Dataset (NLCD) 1992 and National Land Cover Database (NLCD) 2001. Substantial differences in imagery, legends, and methods betwe...

  11. The laboratory-clinician team: a professional call to action to improve communication and collaboration for optimal patient care in chromosomal microarray testing.

    PubMed

    Wain, Karen E; Riggs, Erin; Hanson, Karen; Savage, Melissa; Riethmaier, Darlene; Muirhead, Andrea; Mitchell, Elyse; Packard, Bethanny Smith; Faucett, W Andrew

    2012-10-01

    The International Standards for Cytogenomic Arrays (ISCA) Consortium is a worldwide collaborative effort dedicated to optimizing patient care by improving the quality of chromosomal microarray testing. The primary effort of the ISCA Consortium has been the development of a database of copy number variants (CNVs) identified during the course of clinical microarray testing. This database is a powerful resource for clinicians, laboratories, and researchers, and can be utilized for a variety of applications, such as facilitating standardized interpretations of certain CNVs across laboratories or providing phenotypic information for counseling purposes when published data is sparse. A recognized limitation to the clinical utility of this database, however, is the quality of clinical information available for each patient. Clinical genetic counselors are uniquely suited to facilitate the communication of this information to the laboratory by virtue of their existing clinical responsibilities, case management skills, and appreciation of the evolving nature of scientific knowledge. We intend to highlight the critical role that genetic counselors play in ensuring optimal patient care through contributing to the clinical utility of the ISCA Consortium's database, as well as the quality of individual patient microarray reports provided by contributing laboratories. Current tools (paper and electronic forms) created to maximize this collaboration are shared. In addition to making a professional commitment to providing complete clinical information, genetic counselors are invited to become ISCA members and to become involved in the discussions and initiatives within the Consortium.

  12. Cognitive Challenges

    MedlinePlus


  13. Lungs in TSC

    MedlinePlus


  14. Eye Involvement in TSC

    MedlinePlus


  15. The Blue Cross Blue Shield of Michigan Cardiovascular Consortium (BMC2) collaborative quality improvement initiative in percutaneous coronary interventions.

    PubMed

    Moscucci, Mauro; Share, David; Kline-Rogers, Eva; O'Donnell, Michael; Maxwell-Eward, Ann; Meengs, William L; Clark, Vivian L; Kraft, Phillip; De Franco, Anthony C; Chambers, James L; Patel, Kirit; McGinnity, John G; Eagle, Kim A

    2002-10-01

    The past decade has been characterized by increased scrutiny of outcomes of surgical and percutaneous coronary interventions (PCIs). This increased scrutiny has led to the development of regional, state, and national databases for outcome assessment and for public reporting. This report describes the initial development of a regional, collaborative, cardiovascular consortium and the progress made so far by this collaborative group. In 1997, a group of hospitals in the state of Michigan agreed to create a regional collaborative consortium for the development of a quality improvement program in interventional cardiology. The project included the creation of a comprehensive database of PCIs to be used for risk assessment, feedback on absolute and risk-adjusted outcomes, and sharing of information. To date, information from nearly 20,000 PCIs has been collected. A risk prediction tool for death in the hospital and additional risk prediction tools for other outcomes have been developed from the data collected, and are currently used by the participating centers for risk assessment and for quality improvement. As the project enters year 5, the participating centers are deeply engaged in the quality improvement phase, and expansion to a total of 17 hospitals with active PCI programs is in process. In conclusion, the Blue Cross Blue Shield of Michigan Cardiovascular Consortium is an example of a regional collaborative effort to assess and improve quality of care and outcomes, one that overcomes the barriers of traditional market and academic competition.

  16. The Cardiac Safety Research Consortium ECG database.

    PubMed

    Kligfield, Paul; Green, Cynthia L

    2012-01-01

    The Cardiac Safety Research Consortium (CSRC) ECG database was initiated to foster research using anonymized, XML-formatted, digitized ECGs with corresponding descriptive variables from placebo- and positive-control arms of thorough QT studies submitted to the US Food and Drug Administration (FDA) by pharmaceutical sponsors. The database can be expanded to other data that are submitted directly to CSRC from other sources, and currently includes digitized ECGs from patients with genotyped varieties of congenital long-QT syndrome; this congenital long-QT database is also linked to ambulatory electrocardiograms stored in the Telemetric and Holter ECG Warehouse (THEW). Thorough QT data sets are available from CSRC for unblinded development of algorithms for analysis of repolarization and for blinded comparative testing of algorithms developed for the identification of moxifloxacin, as used as a positive control in thorough QT studies. Policies and procedures for access to these data sets are available from CSRC, which has developed tools for statistical analysis of blinded new algorithm performance. A recently approved CSRC project will create a data set for blinded analysis of automated ECG interval measurements, whose initial focus will include comparison of four of the major manufacturers of automated electrocardiographs in the United States. CSRC welcomes applications for use of the ECG database for clinical investigation. Copyright © 2012 Elsevier Inc. All rights reserved.

  17. Types of Seizures Affecting Individuals with TSC

    MedlinePlus


  18. The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI): A Completed Reference Database of Lung Nodules on CT Scans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    2011-02-15

    Purpose: The development of computer-aided diagnostic (CAD) methods for lung nodule detection, classification, and quantitative assessment can be facilitated through a well-characterized repository of computed tomography (CT) scans. The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI) completed such a database, establishing a publicly available reference for the medical imaging research community. Initiated by the National Cancer Institute (NCI), further advanced by the Foundation for the National Institutes of Health (FNIH), and accompanied by the Food and Drug Administration (FDA) through active participation, this public-private partnership demonstrates the success of a consortium founded on a consensus-based process. Methods: Seven academic centers and eight medical imaging companies collaborated to identify, address, and resolve challenging organizational, technical, and clinical issues to provide a solid foundation for a robust database. The LIDC/IDRI Database contains 1018 cases, each of which includes images from a clinical thoracic CT scan and an associated XML file that records the results of a two-phase image annotation process performed by four experienced thoracic radiologists. In the initial blinded-read phase, each radiologist independently reviewed each CT scan and marked lesions belonging to one of three categories ("nodule ≥3 mm," "nodule <3 mm," and "non-nodule ≥3 mm"). In the subsequent unblinded-read phase, each radiologist independently reviewed their own marks along with the anonymized marks of the three other radiologists to render a final opinion. The goal of this process was to identify as completely as possible all lung nodules in each CT scan without requiring forced consensus. Results: The Database contains 7371 lesions marked "nodule" by at least one radiologist. 
2669 of these lesions were marked "nodule ≥3 mm" by at least one radiologist, of which 928 (34.7%) received such marks from all four radiologists. These 2669 lesions include nodule outlines and subjective nodule characteristic ratings. Conclusions: The LIDC/IDRI Database is expected to provide an essential medical imaging research resource to spur CAD development, validation, and dissemination in clinical practice.
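    The agreement counts in this record (lesions marked by at least one radiologist versus by all four) are simple tallies over the per-reader marks. The sketch below assumes a hypothetical flattened representation of the annotations as (lesion, reader, category) tuples, not the actual LIDC XML schema.

    ```python
    from collections import defaultdict

    def tally_nodule_marks(marks):
        """Return, per lesion, the set of readers who called it a
        nodule >= 3 mm (ignoring the other two mark categories)."""
        readers_by_lesion = defaultdict(set)
        for lesion_id, reader_id, category in marks:
            if category == "nodule>=3mm":
                readers_by_lesion[lesion_id].add(reader_id)
        return readers_by_lesion

    # Hypothetical marks: (lesion_id, reader_id, category)
    marks = [
        ("L1", "R1", "nodule>=3mm"), ("L1", "R2", "nodule>=3mm"),
        ("L1", "R3", "nodule>=3mm"), ("L1", "R4", "nodule>=3mm"),
        ("L2", "R1", "nodule>=3mm"), ("L2", "R2", "nodule<3mm"),
        ("L3", "R3", "non-nodule>=3mm"),
    ]
    tally = tally_nodule_marks(marks)
    marked_by_any = sorted(l for l, r in tally.items() if len(r) >= 1)
    marked_by_all = sorted(l for l, r in tally.items() if len(r) == 4)
    ```

    Applied to the full database, `marked_by_any` would correspond to the 2669 lesions and `marked_by_all` to the 928-lesion four-reader subset.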

  19. The National NeuroAIDS Tissue Consortium (NNTC) Database: an integrated database for HIV-related studies

    PubMed Central

    Cserhati, Matyas F.; Pandey, Sanjit; Beaudoin, James J.; Baccaglini, Lorena; Guda, Chittibabu; Fox, Howard S.

    2015-01-01

    We herein present the National NeuroAIDS Tissue Consortium-Data Coordinating Center (NNTC-DCC) database, which is the only available database for neuroAIDS studies that contains data in an integrated, standardized form. This database has been created in conjunction with the NNTC, which provides human tissue and biofluid samples to individual researchers to conduct studies focused on neuroAIDS. The database contains experimental datasets from 1206 subjects for the following categories (which are further broken down into subcategories): gene expression, genotype, proteins, endo-exo-chemicals, morphometrics and other (miscellaneous) data. The database also contains a wide variety of downloadable data and metadata for 95 HIV-related studies covering 170 assays from 61 principal investigators. The data represent 76 tissue types, 25 measurement types, and 38 technology types, and reach a total of 33 017 407 data points. We used the ISA platform to create the database and develop a searchable web interface for querying the data. A gene search tool is also available, which searches for NCBI GEO datasets associated with selected genes. The database is manually curated with many user-friendly features, and is cross-linked to the NCBI, HUGO and PubMed databases. A free registration is required for qualified users to access the database. Database URL: http://nntc-dcc.unmc.edu PMID:26228431

  20. Developing consistent Landsat data sets for large area applications: the MRLC 2001 protocol

    USGS Publications Warehouse

    Chander, G.; Huang, Chengquan; Yang, Limin; Homer, Collin G.; Larson, C.

    2009-01-01

    One of the major efforts in large area land cover mapping over the last two decades was the completion of two U.S. National Land Cover Data sets (NLCD), developed with nominal 1992 and 2001 Landsat imagery under the auspices of the Multi-Resolution Land Characteristics (MRLC) Consortium. Following the successful generation of NLCD 1992, a second generation MRLC initiative was launched with two primary goals: (1) to develop a consistent Landsat imagery data set for the U.S. and (2) to develop a second generation National Land Cover Database (NLCD 2001). One of the key enhancements was the formulation of an image preprocessing protocol and implementation of a consistent image processing method. The core data set of the NLCD 2001 database consists of Landsat 7 Enhanced Thematic Mapper Plus (ETM+) images. This letter details the procedures for processing the original ETM+ images and more recent scenes added to the database. NLCD 2001 products include Anderson Level II land cover classes, percent tree canopy, and percent urban imperviousness at 30-m resolution derived from Landsat imagery. The products are freely available for download to the general public from the MRLC Consortium Web site at http://www.mrlc.gov.

  1. The National NeuroAIDS Tissue Consortium (NNTC) Database: an integrated database for HIV-related studies.

    PubMed

    Cserhati, Matyas F; Pandey, Sanjit; Beaudoin, James J; Baccaglini, Lorena; Guda, Chittibabu; Fox, Howard S

    2015-01-01

    We herein present the National NeuroAIDS Tissue Consortium-Data Coordinating Center (NNTC-DCC) database, which is the only available database for neuroAIDS studies that contains data in an integrated, standardized form. This database has been created in conjunction with the NNTC, which provides human tissue and biofluid samples to individual researchers to conduct studies focused on neuroAIDS. The database contains experimental datasets from 1206 subjects for the following categories (which are further broken down into subcategories): gene expression, genotype, proteins, endo-exo-chemicals, morphometrics and other (miscellaneous) data. The database also contains a wide variety of downloadable data and metadata for 95 HIV-related studies covering 170 assays from 61 principal investigators. The data represent 76 tissue types, 25 measurement types, and 38 technology types, and reach a total of 33,017,407 data points. We used the ISA platform to create the database and develop a searchable web interface for querying the data. A gene search tool is also available, which searches for NCBI GEO datasets associated with selected genes. The database is manually curated with many user-friendly features, and is cross-linked to the NCBI, HUGO and PubMed databases. A free registration is required for qualified users to access the database. © The Author(s) 2015. Published by Oxford University Press.

  2. Assessment methodologies and statistical issues for computer-aided diagnosis of lung nodules in computed tomography: contemporary research topics relevant to the lung image database consortium.

    PubMed

    Dodd, Lori E; Wagner, Robert F; Armato, Samuel G; McNitt-Gray, Michael F; Beiden, Sergey; Chan, Heang-Ping; Gur, David; McLennan, Geoffrey; Metz, Charles E; Petrick, Nicholas; Sahiner, Berkman; Sayre, Jim

    2004-04-01

    Cancer of the lung and bronchus is the leading fatal malignancy in the United States. Five-year survival is low, but treatment of early-stage disease considerably improves the chances of survival. Advances in multidetector-row computed tomography technology enable detection of smaller lung nodules and offer a potentially effective screening tool. The large number of images per exam, however, requires considerable radiologist time for interpretation and is an impediment to clinical throughput. Thus, computer-aided diagnosis (CAD) methods are needed to assist radiologists with their decision making. To promote the development of CAD methods, the National Cancer Institute formed the Lung Image Database Consortium (LIDC). The LIDC is charged with developing the consensus and standards necessary to create an image database of multidetector-row computed tomography lung images as a resource for CAD researchers. To develop such a prospective database, its potential uses must be anticipated. The ultimate applications will influence the information that must be included along with the images, the relevant measures of algorithm performance, and the number of required images. In this article we outline assessment methodologies and statistical issues as they relate to several potential uses of the LIDC database. We review methods for performance assessment and discuss issues of defining "truth" as well as the complications that arise when truth information is not available. We also discuss issues about sizing and populating a database.
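    The "methods for performance assessment" mentioned above commonly center on ROC analysis. A minimal sketch: the area under the ROC curve equals the Mann-Whitney probability that a randomly chosen positive case outscores a randomly chosen negative one. The scores below are invented for illustration, not LIDC data.

```python
def auc_mann_whitney(pos_scores, neg_scores):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability that a randomly chosen positive case outscores a randomly
    chosen negative case, counting ties as one half."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Invented CAD output scores (higher = more nodule-like)
pos = [0.9, 0.8, 0.75, 0.6]  # scores on true nodules
neg = [0.7, 0.5, 0.4, 0.3]   # scores on non-nodules
print(auc_mann_whitney(pos, neg))  # → 0.9375
```

    An AUC of 1.0 would mean perfect separation of nodules from non-nodules; 0.5 is chance performance. Real evaluations must also account for case-sampling and reader variability, which is part of what the article discusses.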

  3. Development of a model web-based system to support a statewide quality consortium in radiation oncology.

    PubMed

    Moran, Jean M; Feng, Mary; Benedetti, Lisa A; Marsh, Robin; Griffith, Kent A; Matuszak, Martha M; Hess, Michael; McMullen, Matthew; Fisher, Jennifer H; Nurushev, Teamour; Grubb, Margaret; Gardner, Stephen; Nielsen, Daniel; Jagsi, Reshma; Hayman, James A; Pierce, Lori J

    A database in which patient data are compiled creates analytic opportunities for continuous improvement in treatment quality and for comparative effectiveness research. We describe the development of a novel, web-based system that supports the collection of complex radiation treatment planning information from centers that use diverse techniques, software, and hardware for radiation oncology care in a statewide quality collaborative, the Michigan Radiation Oncology Quality Consortium (MROQC). The MROQC database seeks to enable assessment of physician- and patient-reported outcomes and quality improvement as a function of treatment planning and delivery techniques for breast and lung cancer patients. We created tools to collect anonymized data based on all plans. The MROQC system, representing 24 institutions, has been successfully deployed in the state of Michigan. Since 2012, dose-volume histogram and Digital Imaging and Communications in Medicine-radiation therapy plan data and information on simulation, planning, and delivery techniques have been collected. Audits indicated >90% accurate data submission and spurred refinements to data collection methodology. This model web-based system captures detailed, high-quality radiation therapy dosimetry data along with patient- and physician-reported outcomes and clinical data for a radiation therapy collaborative quality initiative. The collaborative nature of the project has been integral to its success. Our methodology can be applied to setting up analogous consortiums and databases. Copyright © 2016 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
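    Because the system collects dose-volume histogram (DVH) data, typical derived plan-quality metrics are of the form Vx, the fraction of an organ's volume receiving at least x Gy. The sketch below interpolates a cumulative DVH to compute lung V20; the DVH points are hypothetical, not MROQC data.

```python
def v_at_dose(dvh, dose):
    """Linearly interpolate a cumulative DVH -- (dose_Gy, volume_fraction)
    points sorted by increasing dose -- to the fraction of the organ
    volume receiving at least `dose`."""
    for (d0, v0), (d1, v1) in zip(dvh, dvh[1:]):
        if d0 <= dose <= d1:
            t = (dose - d0) / (d1 - d0)
            return v0 + t * (v1 - v0)
    return 0.0 if dose > dvh[-1][0] else 1.0

# Hypothetical lung cumulative DVH for illustration
lung_dvh = [(0, 1.00), (5, 0.80), (10, 0.55), (20, 0.30), (30, 0.15), (60, 0.00)]
v20 = v_at_dose(lung_dvh, 20.0)   # lung V20, a common plan constraint
print(f"V20 = {100 * v20:.0f}%")  # → V20 = 30%
```

    Storing DVHs in a common database lets a consortium compute such metrics uniformly across centers regardless of each center's planning software.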

  4. Call for participation in the neurogenetics consortium within the Human Variome Project.

    PubMed

    Haworth, Andrea; Bertram, Lars; Carrera, Paola; Elson, Joanna L; Braastad, Corey D; Cox, Diane W; Cruts, Marc; den Dunnen, Johann T; Farrer, Matthew J; Fink, John K; Hamed, Sherifa A; Houlden, Henry; Johnson, Dennis R; Nuytemans, Karen; Palau, Francesc; Rayan, Dipa L Raja; Robinson, Peter N; Salas, Antonio; Schüle, Birgitt; Sweeney, Mary G; Woods, Michael O; Amigo, Jorge; Cotton, Richard G H; Sobrido, Maria-Jesus

    2011-08-01

    The accelerating rate of DNA variation discovery has intensified the need to collate, store and interpret the data in a standardised, coherent way; doing so is becoming a critical step in maximising the impact of discovery on the understanding and treatment of human disease. This particularly applies to the field of neurology, as neurological function is impaired in many human disorders. Furthermore, the field of neurogenetics has proven to involve remarkably complex genotype-to-phenotype relationships. To facilitate the collection of DNA sequence variation pertaining to neurogenetic disorders, we have initiated the "Neurogenetics Consortium" under the umbrella of the Human Variome Project. The Consortium's founding group consisted of basic researchers, clinicians, informaticians and database creators. This report outlines the strategic aims established at the preliminary meetings of the Neurogenetics Consortium and calls for the involvement of the wider neurogenetic community in enabling the development of this important resource.

  5. Legal Agreements and the Governance of Research Commons: Lessons from Materials Sharing in Mouse Genomics

    PubMed Central

    Mishra, Amrita

    2014-01-01

    Abstract Omics research infrastructure such as databases and bio-repositories requires effective governance to support pre-competitive research. Governance includes the use of legal agreements, such as Material Transfer Agreements (MTAs). We analyze the use of such agreements in the mouse research commons, including by two large-scale resource development projects: the International Knockout Mouse Consortium (IKMC) and International Mouse Phenotyping Consortium (IMPC). We combine an analysis of legal agreements and semi-structured interviews with 87 members of the mouse model research community to examine legal agreements in four contexts: (1) between researchers; (2) deposit into repositories; (3) distribution by repositories; and (4) exchanges between repositories, especially those that are consortium members of the IKMC and IMPC. We conclude that legal agreements for the deposit and distribution of research reagents should be kept as simple and standard as possible, especially when minimal enforcement capacity and resources exist. Simple and standardized legal agreements reduce transactional bottlenecks and facilitate the creation of a vibrant and sustainable research commons, supported by repositories and databases. PMID:24552652

  6. Academic consortium for the evaluation of computer-aided diagnosis (CADx) in mammography

    NASA Astrophysics Data System (ADS)

    Mun, Seong K.; Freedman, Matthew T.; Wu, Chris Y.; Lo, Shih-Chung B.; Floyd, Carey E., Jr.; Lo, Joseph Y.; Chan, Heang-Ping; Helvie, Mark A.; Petrick, Nicholas; Sahiner, Berkman; Wei, Datong; Chakraborty, Dev P.; Clarke, Laurence P.; Kallergi, Maria; Clark, Bob; Kim, Yongmin

    1995-04-01

    Computer-aided diagnosis (CADx) is a promising technology for the detection of breast cancer in screening mammography. A number of different CADx approaches have been developed and have achieved significant levels of performance. Research teams now recognize the need for a careful and detailed evaluation study to accelerate the development of CADx, to make CADx more clinically relevant, and to optimize CADx algorithms on the basis of unbiased evaluations. The results of such a comparative study may provide each of the participating teams with new insights into the optimization of their individual CADx algorithms. This consortium of experienced CADx researchers is working as a group to compare the results of the algorithms and to optimize the performance of CADx algorithms by learning from each other. Each institution will contribute an equal number of cases collected under a standard protocol for case selection, truth determination, and data acquisition to establish a common and unbiased database for the evaluation study. An evaluation procedure for the comparison studies is being developed to analyze the results of individual algorithms for each of the test cases in the common database. Optimization of individual CADx algorithms can then be made based on the comparison studies. The consortium effort is expected to accelerate the eventual clinical implementation of CADx algorithms at participating institutions.

  7. Development and Feasibility Testing of a Critical Care EEG Monitoring Database for Standardized Clinical Reporting and Multicenter Collaborative Research.

    PubMed

    Lee, Jong Woo; LaRoche, Suzette; Choi, Hyunmi; Rodriguez Ruiz, Andres A; Fertig, Evan; Politsky, Jeffrey M; Herman, Susan T; Loddenkemper, Tobias; Sansevere, Arnold J; Korb, Pearce J; Abend, Nicholas S; Goldstein, Joshua L; Sinha, Saurabh R; Dombrowski, Keith E; Ritzl, Eva K; Westover, Michael B; Gavvala, Jay R; Gerard, Elizabeth E; Schmitt, Sarah E; Szaflarski, Jerzy P; Ding, Kan; Haas, Kevin F; Buchsbaum, Richard; Hirsch, Lawrence J; Wusthoff, Courtney J; Hopp, Jennifer L; Hahn, Cecil D

    2016-04-01

    The rapid expansion of the use of continuous critical care electroencephalogram (cEEG) monitoring and resulting multicenter research studies through the Critical Care EEG Monitoring Research Consortium has created the need for a collaborative data sharing mechanism and repository. The authors describe the development of a research database incorporating the American Clinical Neurophysiology Society standardized terminology for critical care EEG monitoring. The database includes flexible report generation tools that allow for daily clinical use. Key clinical and research variables were incorporated into a Microsoft Access database. To assess its utility for multicenter research data collection, the authors performed a 21-center feasibility study in which each center entered data from 12 consecutive intensive care unit monitoring patients. To assess its utility as a clinical report generating tool, three large volume centers used it to generate daily clinical critical care EEG reports. A total of 280 subjects were enrolled in the multicenter feasibility study. The duration of recording (median, 25.5 hours) varied significantly between the centers. The incidence of seizure (17.6%), periodic/rhythmic discharges (35.7%), and interictal epileptiform discharges (11.8%) was similar to previous studies. The database was used as a clinical reporting tool by 3 centers that entered a total of 3,144 unique patients covering 6,665 recording days. The Critical Care EEG Monitoring Research Consortium database has been successfully developed and implemented with a dual role as a collaborative research platform and a clinical reporting tool. It is now available for public download to be used as a clinical data repository and report generating tool.
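    The dual clinical/research role described above hinges on standardized terminology stored relationally, so that report generation and multicenter queries run off the same records. A minimal sketch of the idea follows, with SQLite standing in for the Microsoft Access back end; the table and column names are hypothetical, not the consortium's actual schema.

```python
import sqlite3

# Relational sketch of a standardized critical-care EEG repository.
# Schema and values are illustrative; the point is that ACNS-style
# standardized terms make clinical findings directly queryable.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE recording (
    patient_id        INTEGER,
    duration_hours    REAL,
    seizures          INTEGER,  -- electrographic seizures present (0/1)
    rhythmic_periodic TEXT      -- e.g. 'LPD', 'GPD', 'none'
)""")
con.executemany("INSERT INTO recording VALUES (?, ?, ?, ?)", [
    (1, 25.5, 1, "LPD"),
    (2, 12.0, 0, "none"),
    (3, 48.0, 0, "GPD"),
    (4, 24.0, 1, "none"),
    (5, 6.0, 0, "none"),
])
n_total = con.execute("SELECT COUNT(*) FROM recording").fetchone()[0]
n_sz = con.execute(
    "SELECT COUNT(*) FROM recording WHERE seizures = 1").fetchone()[0]
print(f"seizure incidence: {100 * n_sz / n_total:.1f}%")  # → 40.0%
```

    The same rows that feed a daily clinical report can be aggregated across centers for incidence figures like those reported in the feasibility study.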

  8. The Bridging Advanced Developments for Exceptional Rehabilitation (BADER) Consortium: Reaching in Partnership for Optimal Orthopaedic Rehabilitation Outcomes.

    PubMed

    Stanhope, Steven J; Wilken, Jason M; Pruziner, Alison L; Dearth, Christopher L; Wyatt, Marilynn; Ziemke, Gregg W; Strickland, Rachel; Milbourne, Suzanne A; Kaufman, Kenton R

    2016-11-01

    The Bridging Advanced Developments for Exceptional Rehabilitation (BADER) Consortium began in September 2011 as a cooperative agreement with the Department of Defense (DoD) Congressionally Directed Medical Research Programs Peer Reviewed Orthopaedic Research Program. A partnership was formed with DoD Military Treatment Facilities (MTFs), U.S. Department of Veterans Affairs (VA) Centers, the National Institutes of Health (NIH), academia, and industry to rapidly conduct innovative, high-impact, and sustainable clinically relevant research. The BADER Consortium has a unique research capacity-building focus that creates infrastructures and strategically connects and supports research teams to conduct multiteam research initiatives primarily led by MTF and VA investigators. BADER relies on strong partnerships with these agencies to strengthen and support orthopaedic rehabilitation research. Its focus is on the rapid formation and execution of projects aimed at obtaining optimal functional outcomes for patients with limb loss and limb injuries. The Consortium is based on an NIH research capacity-building model that comprises essential research support components that are anchored by a set of BADER-funded and initiative-launching studies. Through a partnership with the DoD/VA Extremity Trauma and Amputation Center of Excellence, the BADER Consortium's research initiative-launching program has directly supported the identification and establishment of eight BADER-funded clinical studies. BADER's Clinical Research Core (CRC) staff, who are embedded within each of the MTFs, have supported an additional 37 non-BADER Consortium-funded projects. Additional key research support infrastructures that expedite the process for conducting multisite clinical trials include an omnibus Cooperative Research and Development Agreement and the NIH Clinical Trials Database.
A 2015 Defense Health Board report highlighted the Consortium's vital role, stating the research capabilities of the DoD Advanced Rehabilitation Centers are significantly enhanced and facilitated by the BADER Consortium. Reprint & Copyright © 2016 Association of Military Surgeons of the U.S.

  9. HYDROGEOLOGIC FOUNDATION IN SUPPORT OF ECOSYSTEM RESTORATION: BASE-FLOW LOADINGS OF NITRATE IN MID-ATLANTIC AGRICULTURAL WATERSHEDS

    EPA Science Inventory

    The study is a consortium between the U.S. Environmental Protection Agency (National Risk Management Research Laboratory) and the U.S. Geological Survey (Baltimore and Dover). The objectives of this study are: (1) to develop a geohydrological database for paired agricultural wate...

  10. Consortial IT Services: Collaborating To Reduce the Pain.

    ERIC Educational Resources Information Center

    Klonoski, Ed

    The Connecticut Distance Learning Consortium (CTDLC) provides its 32 members with Information Technologies (IT) services including a portal Web site, course management software, course hosting and development, faculty training, a help desk, online assessment, and a student financial aid database. These services are supplied to two- and four-year…

  11. Rationale of the FIBROTARGETS study designed to identify novel biomarkers of myocardial fibrosis

    PubMed Central

    Ferreira, João Pedro; Machu, Jean‐Loup; Girerd, Nicolas; Jaisser, Frederic; Thum, Thomas; Butler, Javed; González, Arantxa; Diez, Javier; Heymans, Stephane; McDonald, Kenneth; Gyöngyösi, Mariann; Firat, Hueseyin; Rossignol, Patrick; Pizard, Anne

    2017-01-01

    Abstract Aims Myocardial fibrosis alters the cardiac architecture favouring the development of cardiac dysfunction, including arrhythmias and heart failure. Reducing myocardial fibrosis may improve outcomes through the targeted diagnosis and treatment of emerging fibrotic pathways. The European‐Commission‐funded ‘FIBROTARGETS’ is a multinational academic and industrial consortium with the main aims of (i) characterizing novel key mechanistic pathways involved in the metabolism of fibrillary collagen that may serve as biotargets, (ii) evaluating the potential anti‐fibrotic properties of novel or repurposed molecules interfering with the newly identified biotargets, and (iii) characterizing bioprofiles based on distinct mechanistic phenotypes involving the aforementioned biotargets. These pathways will be explored by performing a systematic and collaborative search for mechanisms and targets of myocardial fibrosis. These mechanisms will then be translated into individualized diagnostic tools and specific therapeutic pharmacological options for heart failure. Methods and results The FIBROTARGETS consortium has merged data from 12 patient cohorts in a common database available to individual consortium partners. The database consists of >12 000 patients with a large spectrum of cardiovascular clinical phenotypes. It integrates community‐based population cohorts, cardiovascular risk cohorts, and heart failure cohorts. Conclusions The FIBROTARGETS biomarker programme is aimed at exploring fibrotic pathways allowing the bioprofiling of patients into specific ‘fibrotic’ phenotypes and identifying new therapeutic targets that will potentially enable the development of novel and tailored anti‐fibrotic therapies for heart failure. PMID:28988439

  12. Dictionary as Database.

    ERIC Educational Resources Information Center

    Painter, Derrick

    1996-01-01

    Discussion of dictionaries as databases focuses on the digitizing of The Oxford English dictionary (OED) and the use of Standard Generalized Mark-Up Language (SGML). Topics include the creation of a consortium to digitize the OED, document structure, relational databases, text forms, sequence, and discourse. (LRW)

  13. Japan PGx Data Science Consortium Database: SNPs and HLA genotype data from 2994 Japanese healthy individuals for pharmacogenomics studies.

    PubMed

    Kamitsuji, Shigeo; Matsuda, Takashi; Nishimura, Koichi; Endo, Seiko; Wada, Chisa; Watanabe, Kenji; Hasegawa, Koichi; Hishigaki, Haretsugu; Masuda, Masatoshi; Kuwahara, Yusuke; Tsuritani, Katsuki; Sugiura, Kenkichi; Kubota, Tomoko; Miyoshi, Shinji; Okada, Kinya; Nakazono, Kazuyuki; Sugaya, Yuki; Yang, Woosung; Sawamoto, Taiji; Uchida, Wataru; Shinagawa, Akira; Fujiwara, Tsutomu; Yamada, Hisaharu; Suematsu, Koji; Tsutsui, Naohisa; Kamatani, Naoyuki; Liou, Shyh-Yuh

    2015-06-01

    The Japan Pharmacogenomics Data Science Consortium (JPDSC) has assembled a database for conducting pharmacogenomics (PGx) studies in Japanese subjects. The database contains the genotypes of 2.5 million single-nucleotide polymorphisms (SNPs) and 5 human leukocyte antigen loci from 2994 Japanese healthy volunteers, as well as 121 kinds of clinical information, including self-reports, physiological data, hematological data and biochemical data. In this article, the reliability of our data was evaluated by principal component analysis (PCA) and association analysis for hematological and biochemical traits by using genome-wide SNP data. PCA of the SNPs showed that all the samples were collected from the Japanese population and that the samples were separated into two major clusters by birthplace, Okinawa and other than Okinawa, as had been previously reported. Among 87 SNPs that have been reported to be associated with 18 hematological and biochemical traits in genome-wide association studies (GWAS), the associations of 56 SNPs were replicated using our database. Statistical power simulations showed that the sample size of the JPDSC control database is large enough to detect genetic markers having a relatively strong association even when the case sample size is small. The JPDSC database will be useful as control data for conducting PGx studies to explore genetic markers to improve the safety and efficacy of drugs either during clinical development or in post-marketing.
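    The power-simulation claim can be made concrete with a toy Monte-Carlo check: simulate case and control allele counts at assumed frequencies and count how often a two-proportion z-test rejects the null. This is a simplified stand-in for the abstract's sizing analysis; the frequencies and case sample size below are illustrative assumptions.

```python
import math
import random

def assoc_power(p_case, p_ctrl, n_case, n_ctrl, sims=200, z_crit=1.96, seed=1):
    """Monte-Carlo power of a two-proportion z-test on allele frequencies:
    the fraction of simulated case/control draws whose frequency
    difference is significant at the given z threshold."""
    rng = random.Random(seed)
    m_case, m_ctrl = 2 * n_case, 2 * n_ctrl  # allele counts (diploid)
    hits = 0
    for _ in range(sims):
        a = sum(rng.random() < p_case for _ in range(m_case))
        b = sum(rng.random() < p_ctrl for _ in range(m_ctrl))
        p1, p2 = a / m_case, b / m_ctrl
        pooled = (a + b) / (m_case + m_ctrl)
        se = math.sqrt(pooled * (1 - pooled) * (1 / m_case + 1 / m_ctrl))
        if se > 0 and abs(p1 - p2) / se > z_crit:
            hits += 1
    return hits / sims

# A small hypothetical case series (n=300) against the full 2994-subject
# control panel: risk-allele frequency 0.40 in cases vs 0.30 in controls.
print(assoc_power(0.40, 0.30, n_case=300, n_ctrl=2994))
```

    With a relatively strong effect like this, power is near 1 even though the case sample is small, which is the point the abstract makes about a large shared control database.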

  14. Inroads to predict in vivo toxicology-an introduction to the eTOX Project.

    PubMed

    Briggs, Katharine; Cases, Montserrat; Heard, David J; Pastor, Manuel; Pognan, François; Sanz, Ferran; Schwab, Christof H; Steger-Hartmann, Thomas; Sutter, Andreas; Watson, David K; Wichard, Jörg D

    2012-01-01

    There is a widespread awareness that the wealth of preclinical toxicity data that the pharmaceutical industry has generated in recent decades is not exploited as efficiently as it could be. Enhanced data availability for compound comparison ("read-across"), or for data mining to build predictive tools, should lead to a more efficient drug development process and contribute to the reduction of animal use (3Rs principle). In order to achieve these goals, a consortium approach, grouping a number of relevant partners, is required. The eTOX ("electronic toxicity") consortium represents such a project and is a public-private partnership within the framework of the European Innovative Medicines Initiative (IMI). The project aims at the development of in silico prediction systems for organ and in vivo toxicity. The backbone of the project will be a database consisting of preclinical toxicity data for drug compounds or candidates extracted from previously unpublished, legacy reports from thirteen European and European operation-based pharmaceutical companies. The database will be enhanced by incorporation of publicly available, high-quality toxicology data. Seven academic institutes and five small-to-medium size enterprises (SMEs) contribute with their expertise in data gathering, database curation, data mining, chemoinformatics and predictive systems development. The outcome of the project will be a predictive system contributing to early potential hazard identification and risk assessment during the drug development process. The concept and strategy of the eTOX project are described here, together with current achievements and future deliverables.
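    At its simplest, the "read-across" idea mentioned above means transferring toxicity knowledge from structurally similar compounds. The sketch below ranks a compound library by Tanimoto fingerprint similarity to a query; eTOX's actual methods are far richer, and the fingerprints here are invented bit sets, not real compound data.

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto similarity of two fingerprints given as sets of 'on' bits."""
    union = len(fp_a | fp_b)
    return len(fp_a & fp_b) / union if union else 0.0

def read_across(query_fp, library):
    """Rank library compounds by structural similarity to the query --
    the simplest, nearest-neighbour form of read-across."""
    return sorted(((tanimoto(query_fp, fp), name)
                   for name, fp in library.items()), reverse=True)

# Invented bit-set fingerprints for illustration
library = {
    "compound_A": {1, 4, 7, 9, 12},
    "compound_B": {2, 3, 5, 8},
    "compound_C": {1, 4, 7, 13},
}
ranked = read_across({1, 4, 7, 9}, library)
print(ranked[0])  # → (0.8, 'compound_A')
```

    The legacy-report data for the top-ranked neighbour would then inform the hazard assessment of the query compound.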

  15. The GED4GEM project: development of a Global Exposure Database for the Global Earthquake Model initiative

    USGS Publications Warehouse

    Gamba, P.; Cavalca, D.; Jaiswal, K.S.; Huyck, C.; Crowley, H.

    2012-01-01

    In order to quantify earthquake risk of any selected region or a country of the world within the Global Earthquake Model (GEM) framework (www.globalquakemodel.org/), a systematic compilation of building inventory and population exposure is indispensable. Through the consortium of leading institutions and by engaging the domain-experts from multiple countries, the GED4GEM project has been working towards the development of a first comprehensive publicly available Global Exposure Database (GED). This geospatial exposure database will eventually facilitate global earthquake risk and loss estimation through GEM’s OpenQuake platform. This paper provides an overview of the GED concepts, aims, datasets, and inference methodology, as well as the current implementation scheme, status and way forward.

  16. Massage Therapy for Health Purposes

    MedlinePlus

    ... Web site: www.nih.gov/health/clinicaltrials/ Cochrane Database of Systematic Reviews The Cochrane Database of Systematic ... Licensed Complementary and Alternative Healthcare Professions. Seattle, WA: Academic Consortium for Complementary and Alternative Health Care; 2009. ...

  17. The Génolevures database.

    PubMed

    Martin, Tiphaine; Sherman, David J; Durrens, Pascal

    2011-01-01

    The Génolevures online database (URL: http://www.genolevures.org) stores and provides the data and results obtained by the Génolevures Consortium through several campaigns of genome annotation of the yeasts in the Saccharomycotina subphylum (hemiascomycetes). This database is dedicated to large-scale comparison of these genomes, storing not only the different chromosomal elements detected in the sequences, but also the logical relations between them. The database is divided into a public part, accessible to anyone through the Internet, and a private part where the Consortium members make genome annotations with our Magus annotation system; this system is used to annotate several related genomes in parallel. The public database is widely consulted and offers structured data, organized using a REST web site architecture that allows for automated requests. The implementation of the database, as well as its associated tools and methods, is evolving to cope with the influx of genome sequences produced by Next Generation Sequencing (NGS). Copyright © 2011 Académie des sciences. Published by Elsevier SAS. All rights reserved.
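    A REST architecture "that allows for automated requests" means resources live at predictable URLs that scripts can compose without a bespoke API client. The sketch below illustrates the idea with the site's base URL from the abstract; the `/elt/...` path scheme and identifiers are assumptions for illustration, not documented API paths.

```python
from urllib.parse import quote

BASE = "http://www.genolevures.org"  # base URL given in the abstract

def element_url(species_tag, element_id=None):
    """Compose a predictable REST-style resource path for a genome or one
    of its chromosomal elements. The path scheme is a hypothetical
    illustration, not the site's documented layout."""
    path = f"/elt/{quote(species_tag)}"
    if element_id is not None:
        path += f"/{quote(str(element_id))}"
    return BASE + path

print(element_url("YALI", "YALI0A00110g"))
# → http://www.genolevures.org/elt/YALI/YALI0A00110g
```

    Because the URLs are systematic, a comparative-genomics pipeline can iterate over element identifiers and fetch each resource mechanically.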

  18. A Web-based Alternative Non-animal Method Database for Safety Cosmetic Evaluations

    PubMed Central

    Kim, Seung Won; Kim, Bae-Hwan

    2016-01-01

    Animal testing was used traditionally in the cosmetics industry to confirm product safety, but has begun to be banned; alternative methods to replace animal experiments are either in development, or are being validated, worldwide. Research data related to test substances are critical for developing novel alternative tests. Moreover, safety information on cosmetic materials has neither been collected in a database nor shared among researchers. Therefore, it is imperative to build and share a database of safety information on toxicological mechanisms and pathways collected through in vivo, in vitro, and in silico methods. We developed the CAMSEC database (named after the research team; the Consortium of Alternative Methods for Safety Evaluation of Cosmetics) to fulfill this purpose. On the same website, our aim is to provide updates on current alternative research methods in Korea. The database will not be used directly to conduct safety evaluations, but researchers or regulatory individuals can use it to facilitate their work in formulating safety evaluations for cosmetic materials. We hope this database will help establish new alternative research methods to conduct efficient safety evaluations of cosmetic materials. PMID:27437094

  19. A Web-based Alternative Non-animal Method Database for Safety Cosmetic Evaluations.

    PubMed

    Kim, Seung Won; Kim, Bae-Hwan

    2016-07-01

    Animal testing was used traditionally in the cosmetics industry to confirm product safety, but has begun to be banned; alternative methods to replace animal experiments are either in development, or are being validated, worldwide. Research data related to test substances are critical for developing novel alternative tests. Moreover, safety information on cosmetic materials has neither been collected in a database nor shared among researchers. Therefore, it is imperative to build and share a database of safety information on toxicological mechanisms and pathways collected through in vivo, in vitro, and in silico methods. We developed the CAMSEC database (named after the research team; the Consortium of Alternative Methods for Safety Evaluation of Cosmetics) to fulfill this purpose. On the same website, our aim is to provide updates on current alternative research methods in Korea. The database will not be used directly to conduct safety evaluations, but researchers or regulatory individuals can use it to facilitate their work in formulating safety evaluations for cosmetic materials. We hope this database will help establish new alternative research methods to conduct efficient safety evaluations of cosmetic materials.

  20. Using a centralised database system and server in the European Union Framework Programme 7 project SEPServer

    NASA Astrophysics Data System (ADS)

    Heynderickx, Daniel

    2012-07-01

    The main objective of the SEPServer project (EU FP7 project 262773) is to produce a new tool, which greatly facilitates the investigation of solar energetic particles (SEPs) and their origin: a server providing SEP data, related electromagnetic (EM) observations and analysis methods, a comprehensive catalogue of the observed SEP events, and educational/outreach material on solar eruptions. The project is coordinated by the University of Helsinki. The project will combine data and knowledge from 11 European partners and several collaborating parties from Europe and the US. The datasets provided by the consortium partners are collected in a MySQL database (using the ESA Open Data Interface under licence) on a server operated by DH Consultancy, which also hosts a web interface providing browsing, plotting and post-processing and analysis tools developed by the consortium, as well as a Solar Energetic Particle event catalogue. At this stage of the project, a prototype server has been established, which is presently undergoing testing by users inside the consortium. Using a centralized database has numerous advantages, including: homogeneous storage of the data, which eliminates the need for dataset-specific file access routines once the data are ingested in the database; a homogeneous set of metadata describing the datasets on both a global and detailed level, allowing for automated access to and presentation of the various data products; standardised access to the data in different programming environments (e.g., PHP, IDL); elimination of the need to download data for individual data requests. SEPServer will thus add value to several space missions and Earth-based observations by facilitating the coordinated exploitation of and open access to SEP data and related EM observations, and promoting correct use of these data for the entire space research community.
This will lead to new knowledge on the production and transport of SEPs during solar eruptions and facilitate the development of models for predicting solar radiation storms and calculation of expected fluxes/fluences of SEPs encountered by spacecraft in the interplanetary medium.
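    The "homogeneous set of metadata" argument can be made concrete: one table describes every dataset, so discovery becomes a query rather than per-dataset file parsing. In the sketch below, SQLite stands in for the project's MySQL/Open Data Interface back end, and the schema, dataset names, and date ranges are illustrative assumptions, not SEPServer's actual layout.

```python
import sqlite3

# One metadata table describing every dataset uniformly (illustrative
# schema and entries, not SEPServer's actual ODI layout).
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE dataset (
    name TEXT, observatory TEXT, quantity TEXT,
    start_date TEXT, end_date TEXT
)""")
con.executemany("INSERT INTO dataset VALUES (?, ?, ?, ?, ?)", [
    ("soho_erne_protons", "SOHO", "proton flux",   "1996-01-01", "2011-12-31"),
    ("goes_xray",         "GOES", "X-ray flux",    "1986-01-01", "2011-12-31"),
    ("wind_waves_radio",  "Wind", "radio spectra", "1994-11-01", "2011-12-31"),
])
# Find every dataset covering a given SEP event date with one query,
# instead of opening each archive's files in turn.
rows = con.execute("""SELECT name FROM dataset
    WHERE start_date <= '2000-07-14' AND end_date >= '2000-07-14'
    ORDER BY name""").fetchall()
print([r[0] for r in rows])
# → ['goes_xray', 'soho_erne_protons', 'wind_waves_radio']
```

    The same metadata then drives automated plotting and catalogue pages, since every data product is described in the same terms.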

  1. The Bridging Advanced Developments for Exceptional Rehabilitation (BADER) Consortium: Reaching in Partnership for Optimal Orthopaedic Rehabilitation Outcomes

    PubMed Central

    Stanhope, Steven J.; Wilken, Jason M.; Pruziner, Alison L.; Dearth, Christopher L.; Wyatt, Marilynn; Ziemke, CAPT Gregg W.; Strickland, Rachel; Milbourne, Suzanne A.; Kaufman, Kenton R.

    2017-01-01

    The Bridging Advanced Developments for Exceptional Rehabilitation (BADER) Consortium began in September 2011 as a cooperative agreement with the Department of Defense (DoD) Congressionally Directed Medical Research Programs Peer Reviewed Orthopaedic Research Program. A partnership was formed with DoD Military Treatment Facilities (MTFs), U.S. Department of Veterans Affairs (VA) Centers, the National Institutes of Health (NIH), academia, and industry to rapidly conduct innovative, high-impact, and sustainable clinically relevant research. The BADER Consortium has a unique research capacity-building focus that creates infrastructures and strategically connects and supports research teams to conduct multiteam research initiatives primarily led by MTF and VA investigators. BADER relies on strong partnerships with these agencies to strengthen and support orthopaedic rehabilitation research. Its focus is on the rapid formation and execution of projects aimed at obtaining optimal functional outcomes for patients with limb loss and limb injuries. The Consortium is based on an NIH research capacity-building model that comprises essential research support components that are anchored by a set of BADER-funded and initiative-launching studies. Through a partnership with the DoD/VA Extremity Trauma and Amputation Center of Excellence, the BADER Consortium’s research initiative-launching program has directly supported the identification and establishment of eight BADER-funded clinical studies. BADER’s Clinical Research Core (CRC) staff, who are embedded within each of the MTFs, have supported an additional 37 non-BADER Consortium-funded projects. Additional key research support infrastructures that expedite the process for conducting multisite clinical trials include an omnibus Cooperative Research and Development Agreement and the NIH Clinical Trials Database.
A 2015 Defense Health Board report highlighted the Consortium’s vital role, stating the research capabilities of the DoD Advanced Rehabilitation Centers are significantly enhanced and facilitated by the BADER Consortium. PMID:27849456

  2. The CTSA Consortium's Catalog of Assets for Translational and Clinical Health Research (CATCHR)

    PubMed Central

    Mapes, Brandy; Basford, Melissa; Zufelt, Anneliese; Wehbe, Firas; Harris, Paul; Alcorn, Michael; Allen, David; Arnim, Margaret; Autry, Susan; Briggs, Michael S.; Carnegie, Andrea; Chavis‐Keeling, Deborah; De La Pena, Carlos; Dworschak, Doris; Earnest, Julie; Grieb, Terri; Guess, Marilyn; Hafer, Nathaniel; Johnson, Tesheia; Kasper, Amanda; Kopp, Janice; Lockie, Timothy; Lombardo, Vincetta; McHale, Leslie; Minogue, Andrea; Nunnally, Beth; O'Quinn, Deanna; Peck, Kelly; Pemberton, Kieran; Perry, Cheryl; Petrie, Ginny; Pontello, Andria; Posner, Rachel; Rehman, Bushra; Roth, Deborah; Sacksteder, Paulette; Scahill, Samantha; Schieri, Lorri; Simpson, Rosemary; Skinner, Anne; Toussant, Kim; Turner, Alicia; Van der Put, Elaine; Wasser, June; Webb, Chris D.; Williams, Maija; Wiseman, Lori; Yasko, Laurel; Pulley, Jill

    2014-01-01

    Abstract The 61 CTSA Consortium sites are home to valuable programs and infrastructure supporting translational science and all are charged with ensuring that such investments translate quickly to improved clinical care. Catalog of Assets for Translational and Clinical Health Research (CATCHR) is the Consortium's effort to collect and make available information on programs and resources to maximize efficiency and facilitate collaborations. By capturing information on a broad range of assets supporting the entire clinical and translational research spectrum, CATCHR aims to provide the necessary infrastructure and processes to establish and maintain an open‐access, searchable database of consortium resources to support multisite clinical and translational research studies. Data are collected using rigorous, defined methods, with the resulting information made visible through an integrated, searchable Web‐based tool. Additional easy‐to‐use Web tools assist resource owners in validating and updating resource information over time. In this paper, we discuss the design and scope of the project, data collection methods, current results, and future plans for development and sustainability. With increasing pressure on research programs to avoid redundancy, CATCHR aims to make available information on programs and core facilities to maximize efficient use of resources. PMID:24456567

  3. Enhancing Transfer Effectiveness: A Model for the 1990's. First Year Report to the National Effective Transfer Consortium. Executive Summary.

    ERIC Educational Resources Information Center

    Berman, Paul; And Others

    This first-year report of the National Effective Transfer Consortium (NETC) summarizes the progress made by the member colleges in creating standardized measures of actual and expected transfer rates and of transfer effectiveness, and establishing a database that would enable valid comparisons among NETC colleges. Following background information…

  4. Photovoltaic Manufacturing Consortium (PVMC) – Enabling America’s Solar Revolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Metacarpa, David

    The U.S. Photovoltaic Manufacturing Consortium (US-PVMC) is an industry-led consortium created with the mission to accelerate the research, development, manufacturing, field testing, commercialization, and deployment of next-generation solar photovoltaic technologies. Formed as part of the U.S. Department of Energy's (DOE) SunShot initiative, and headquartered in New York State, PVMC is managed by the State University of New York Polytechnic Institute (SUNY Poly) at the Colleges of Nanoscale Science and Engineering. PVMC is a hybrid of an industry-led consortium and a manufacturing development facility, with capabilities for both collaborative and proprietary industry engagement. Through its technology development programs, advanced manufacturing development facilities, system demonstrations, and reliability and testing capabilities, PVMC has shown itself to be a recognized proving ground for innovative solar technologies and system designs. PVMC comprises multiple locations, with the core manufacturing and deployment support activities conducted at the Solar Energy Development Center (SEDC) and the core Si wafering and metrology technologies headed out of the University of Central Florida. The SEDC provides a pilot line for proof-of-concept prototyping, offering critical opportunities to demonstrate emerging concepts in PV manufacturing, such as evaluations of innovative materials, system components, and PV system designs. The facility, located in Halfmoon, NY, encompasses 40,000 square feet of dedicated PV development space.
The infrastructure and capabilities housed at PVMC include PV system-level testing at the Prototype Demonstration Facility (PDF); manufacturing-scale cell and module fabrication at the Manufacturing Development Facility (MDF); cell and module testing; and reliability equipment on its PV pilot line, all integrated with a PV performance database and analytical characterizations for PVMC and its partners' test and commercial arrays. Additional development and deployment support is also housed at the SEDC, such as cost modeling and cost-model-based development activities for PV and thin-film modules, components, and system-level designs, aimed at reducing LCOE through lower installation hardware costs, labor reductions, reduced soft costs, and reduced operations and maintenance costs. The consortium's activities began with an infrastructure and capabilities build-out focused on CIGS thin-film photovoltaics, with a particular emphasis on flexible cell and module production. As the marketplace changed and partners' objectives shifted, the consortium moved heavily toward deployment and market-pull activities, including balance-of-system work, cost modeling, and installation cost reduction efforts, along with their impacts on performance and DER operational costs. The consortium comprised a wide array of PV supply-chain companies, from equipment and component suppliers through national developers and installers, with a particular focus on commercial-scale deployments (typically 25 to 2 MW installations). With DOE funding ending after the fifth budget period, the advantages and disadvantages of such a consortium are detailed, along with a review of potential avenues for self-sustainability.

  5. Interdisciplinary Collaboration amongst Colleagues and between Initiatives with the Magnetics Information Consortium (MagIC) Database

    NASA Astrophysics Data System (ADS)

    Minnett, R.; Koppers, A. A. P.; Jarboe, N.; Tauxe, L.; Constable, C.; Jonestrask, L.; Shaar, R.

    2014-12-01

    Earth science grand challenges often require interdisciplinary and geographically distributed scientific collaboration to make significant progress. However, this organic collaboration between researchers, educators, and students only flourishes with the reduction or elimination of technological barriers. The Magnetics Information Consortium (http://earthref.org/MagIC/) is a grass-roots cyberinfrastructure effort envisioned by the geo-, paleo-, and rock magnetic scientific community to archive their wealth of peer-reviewed raw data and interpretations from studies on natural and synthetic samples. MagIC is dedicated to facilitating scientific progress towards several highly multidisciplinary grand challenges and the MagIC Database team is currently beta testing a new MagIC Search Interface and API designed to be flexible enough for the incorporation of large heterogeneous datasets and for horizontal scalability to tens of millions of records and hundreds of requests per second. In an effort to reduce the barriers to effective collaboration, the search interface includes a simplified data model and upload procedure, support for online editing of datasets amongst team members, commenting by reviewers and colleagues, and automated contribution workflows and data retrieval through the API. This web application has been designed to generalize to other databases in MagIC's umbrella website (EarthRef.org) so the Geochemical Earth Reference Model (http://earthref.org/GERM/) portal, Seamount Biogeosciences Network (http://earthref.org/SBN/), EarthRef Digital Archive (http://earthref.org/ERDA/) and EarthRef Reference Database (http://earthref.org/ERR/) will benefit from its development.

  6. CFD Aerothermodynamic Characterization Of The IXV Hypersonic Vehicle

    NASA Astrophysics Data System (ADS)

    Roncioni, P.; Ranuzzi, G.; Marini, M.; Battista, F.; Rufolo, G. C.

    2011-05-01

    In this paper, and in the framework of the ESA technical assistance activities for IXV project, the numerical activities carried out by ASI/CIRA to support the development of Aerodynamic and Aerothermodynamic databases, independent from the ones developed by the IXV Industrial consortium, are reported. A general characterization of the IXV aerothermodynamic environment has been also provided for cross checking and verification purposes. The work deals with the first year activities of Technical Assistance Contract agreed between the Italian Space Agency/CIRA and ESA.

  7. Creation of a digital slide and tissue microarray resource from a multi-institutional predictive toxicology study in the rat: an initial report from the PredTox group.

    PubMed

    Mulrane, Laoighse; Rexhepaj, Elton; Smart, Valerie; Callanan, John J; Orhan, Diclehan; Eldem, Türkan; Mally, Angela; Schroeder, Susanne; Meyer, Kirstin; Wendt, Maria; O'Shea, Donal; Gallagher, William M

    2008-08-01

    The widespread use of digital slides has only recently come to the fore with the development of high-throughput scanners and high performance viewing software. This development, along with the optimisation of compression standards and image transfer techniques, has allowed the technology to be used in wide reaching applications including integration of images into hospital information systems and histopathological training, as well as the development of automated image analysis algorithms for prediction of histological aberrations and quantification of immunohistochemical stains. Here, the use of this technology in the creation of a comprehensive library of images of preclinical toxicological relevance is demonstrated. The images, acquired using the Aperio ScanScope CS and XT slide acquisition systems, form part of the ongoing EU FP6 Integrated Project, Innovative Medicines for Europe (InnoMed). In more detail, PredTox (abbreviation for Predictive Toxicology) is a subproject of InnoMed and comprises a consortium of 15 industrial (13 large pharma, 1 technology provider and 1 SME) and three academic partners. The primary aim of this consortium is to assess the value of combining data generated from 'omics technologies (proteomics, transcriptomics, metabolomics) with the results from more conventional toxicology methods, to facilitate further informed decision making in preclinical safety evaluation. A library of 1709 scanned images was created of full-face sections of liver and kidney tissue specimens from male Wistar rats treated with 16 proprietary and reference compounds of known toxicity; additional biological materials from these treated animals were separately used to create 'omics data, that will ultimately be used to populate an integrated toxicological database. 
With respect to the assessment of the digital slides, a web-enabled digital slide management system, Digital SlideServer (DSS), was employed to enable integration of the digital slide content into the 'omics database and to facilitate remote viewing by pathologists connected with the project. DSS also facilitated manual annotation of digital slides by the pathologists, specifically in relation to marking particular lesions of interest. Tissue microarrays (TMAs) were constructed from the specimens for the purpose of creating a repository of tissue from animals used in the study, with a view to later-stage biomarker assessment. As the PredTox consortium itself aims to identify new biomarkers of toxicity, these TMAs will be a valuable means of validation. In summary, a large repository of histological images was created, enabling the subsequent pathological analysis of samples through remote viewing; along with the utilisation of TMA technology, it will allow the validation of biomarkers identified by the PredTox consortium. The population of the PredTox database with these digitised images represents the creation of the first toxicological database integrating 'omics and preclinical data with histological images.

  8. A Project to Computerize Performance Objectives and Criterion-Referenced Measures in Occupational Education for Research and Determination of Applicability to Handicapped Learners. Final Report.

    ERIC Educational Resources Information Center

    Lee, Connie W.; Hinson, Tony M.

    This publication is the final report of a 21-month project designed to (1) expand and refine the computer capabilities of the Vocational-Technical Education Consortium of States (V-TECS) to ensure rapid data access for generating routine and special occupational data-based reports; (2) develop and implement a computer storage and retrieval system…

  9. A National Study on the Effects of Concussion in Collegiate Athletes and US Military Service Academy Members: The NCAA-DoD Concussion Assessment, Research and Education (CARE) Consortium Structure and Methods.

    PubMed

    Broglio, Steven P; McCrea, Michael; McAllister, Thomas; Harezlak, Jaroslaw; Katz, Barry; Hack, Dallas; Hainline, Brian

    2017-07-01

    The natural history of mild traumatic brain injury (TBI) or concussion remains poorly defined and no objective biomarker of physiological recovery exists for clinical use. The National Collegiate Athletic Association (NCAA) and the US Department of Defense (DoD) established the Concussion Assessment, Research and Education (CARE) Consortium to study the natural history of clinical and neurobiological recovery after concussion in the service of improved injury prevention, safety and medical care for student-athletes and military personnel. The objectives of this paper were to (i) describe the background and driving rationale for the CARE Consortium; (ii) outline the infrastructure of the Consortium policies, procedures, and governance; (iii) describe the longitudinal 6-month clinical and neurobiological study methodology; and (iv) characterize special considerations in the design and implementation of a multicenter trial. Beginning Fall 2014, CARE Consortium institutions have recruited and enrolled 23,533 student-athletes and military service academy students (approximately 90% of eligible student-athletes and cadets; 64.6% male, 35.4% female). A total of 1174 concussions have been diagnosed in participating subjects, with both concussion and baseline cases deposited in the Federal Interagency Traumatic Brain Injury Research (FITBIR) database. Challenges have included coordinating regulatory issues across civilian and military institutions, operationalizing study procedures, neuroimaging protocol harmonization across sites and platforms, construction and maintenance of a relational database, and data quality and integrity monitoring. The NCAA-DoD CARE Consortium represents a comprehensive investigation of concussion in student-athletes and military service academy students. 
The richly characterized study sample and multidimensional approach provide an opportunity to advance the field of concussion science, not only among student-athletes but also in all populations at risk for mild TBI.

  10. The MIntAct project—IntAct as a common curation platform for 11 molecular interaction databases

    PubMed Central

    Orchard, Sandra; Ammari, Mais; Aranda, Bruno; Breuza, Lionel; Briganti, Leonardo; Broackes-Carter, Fiona; Campbell, Nancy H.; Chavali, Gayatri; Chen, Carol; del-Toro, Noemi; Duesbury, Margaret; Dumousseau, Marine; Galeota, Eugenia; Hinz, Ursula; Iannuccelli, Marta; Jagannathan, Sruthi; Jimenez, Rafael; Khadake, Jyoti; Lagreid, Astrid; Licata, Luana; Lovering, Ruth C.; Meldal, Birgit; Melidoni, Anna N.; Milagros, Mila; Peluso, Daniele; Perfetto, Livia; Porras, Pablo; Raghunath, Arathi; Ricard-Blum, Sylvie; Roechert, Bernd; Stutz, Andre; Tognolli, Michael; van Roey, Kim; Cesareni, Gianni; Hermjakob, Henning

    2014-01-01

    IntAct (freely available at http://www.ebi.ac.uk/intact) is an open-source, open data molecular interaction database populated by data either curated from the literature or from direct data depositions. IntAct has developed a sophisticated web-based curation tool, capable of supporting both IMEx- and MIMIx-level curation. This tool is now utilized by multiple additional curation teams, all of whom annotate data directly into the IntAct database. Members of the IntAct team supply appropriate levels of training, perform quality control on entries and take responsibility for long-term data maintenance. Recently, the MINT and IntAct databases decided to merge their separate efforts to make optimal use of limited developer resources and maximize the curation output. All data manually curated by the MINT curators have been moved into the IntAct database at EMBL-EBI and are merged with the existing IntAct dataset. Both IntAct and MINT are active contributors to the IMEx consortium (http://www.imexconsortium.org). PMID:24234451

  11. Completion of the 2006 National Land Cover Database Update for the Conterminous United States

    EPA Science Inventory

    Under the organization of the Multi-Resolution Land Characteristics (MRLC) Consortium, the National Land Cover Database (NLCD) has been updated to characterize both land cover and land cover change from 2001 to 2006. An updated version of NLCD 2001 (Version 2.0) is also provided....

  12. Remote sensing and GIS technology in the Global Land Ice Measurements from Space (GLIMS) Project

    USGS Publications Warehouse

    Raup, B.; Kääb, Andreas; Kargel, J.S.; Bishop, M.P.; Hamilton, G.; Lee, E.; Paul, F.; Rau, F.; Soltesz, D.; Khalsa, S.J.S.; Beedle, M.; Helm, C.

    2007-01-01

    Global Land Ice Measurements from Space (GLIMS) is an international consortium established to acquire satellite images of the world's glaciers, analyze them for glacier extent and changes, and assess these change data in terms of forcings. The consortium is organized into a system of Regional Centers, each of which is responsible for glaciers in its region of expertise. Specialized needs for mapping glaciers in a distributed analysis environment require considerable work developing software tools: terrain classification emphasizing snow, ice, water, and admixtures of ice with rock debris; change detection and analysis; visualization of images and derived data; interpretation and archival of derived data; and analysis to ensure consistency of results from different Regional Centers. A global glacier database has been designed and implemented at the National Snow and Ice Data Center (Boulder, CO); parameters have been expanded from those of the World Glacier Inventory (WGI), and the database has been structured to be compatible with (and to incorporate) WGI data. The project as a whole was originated, and has been coordinated by, the US Geological Survey (Flagstaff, AZ), which has also led the development of an interactive tool for automated analysis and manual editing of glacier images and derived data (GLIMSView). This article addresses remote sensing and Geographic Information Science techniques developed within the framework of GLIMS in order to fulfill the goals of this distributed project. Sample applications illustrating the developed techniques are also shown. © 2006 Elsevier Ltd. All rights reserved.

  13. The National Land Cover Database

    USGS Publications Warehouse

    Homer, Collin G.; Fry, Joyce A.; Barnes, Christopher A.

    2012-01-01

    The National Land Cover Database (NLCD) serves as the definitive Landsat-based, 30-meter resolution, land cover database for the Nation. NLCD provides spatial reference and descriptive data for characteristics of the land surface such as thematic class (for example, urban, agriculture, and forest), percent impervious surface, and percent tree canopy cover. NLCD supports a wide variety of Federal, State, local, and nongovernmental applications that seek to assess ecosystem status and health, understand the spatial patterns of biodiversity, predict effects of climate change, and develop land management policy. NLCD products are created by the Multi-Resolution Land Characteristics (MRLC) Consortium, a partnership of Federal agencies led by the U.S. Geological Survey. All NLCD data products are available for download at no charge to the public from the MRLC Web site: http://www.mrlc.gov.

  14. Genetic variants of the DNA repair genes from Exome Aggregation Consortium (EXAC) database: significance in cancer.

    PubMed

    Das, Raima; Ghosh, Sankar Kumar

    2017-04-01

    The DNA repair pathway is a primary defense system that eliminates a wide variety of DNA damage. Any deficiency in it is likely to cause the chromosomal instability that leads to cell malfunction and tumorigenesis. Genetic polymorphisms in DNA repair genes have demonstrated a significant association with cancer risk. Our study attempts to give a glimpse of the overall scenario of germline polymorphisms in the DNA repair genes by taking into account the Exome Aggregation Consortium (ExAC) database as well as the Human Gene Mutation Database (HGMD) for evaluating the disease link, particularly in cancer. It has been found that the ExAC DNA repair dataset (which consists of 228 DNA repair genes) comprises 30.4% missense, 12.5% dbSNP-reported, and 3.2% ClinVar-significant variants. 27% of all missense variants have the deleterious SIFT score of 0.00, and 6% of variants carry the most damaging PolyPhen-2 score of 1.00, thus affecting protein structure and function. However, as per HGMD, only a fraction (1.2%) of ExAC DNA repair variants was found to be cancer-related, indicating that the remaining variants reported in both databases merit further analysis. This, in turn, may broaden the reported spectrum of cancer-linked variants in the DNA repair genes present in the ExAC database. Moreover, further in silico functional assays of the identified vital cancer-associated variants, which are essential to establish their actual biological significance, may shed some light on targeted drug development in the near future. Copyright © 2017. Published by Elsevier B.V.
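The SIFT/PolyPhen-2 screen described above can be sketched as a simple score filter. This is a hypothetical illustration (field names, cutoffs, and example records are invented, not the authors' actual pipeline); it only assumes the standard conventions that SIFT scores near 0.00 are deleterious and PolyPhen-2 scores near 1.00 are probably damaging:

```python
# Hedged sketch: flag variants as likely deleterious when either in silico
# tool predicts damage. Thresholds and records are illustrative only.

def is_likely_deleterious(variant, sift_cutoff=0.05, polyphen_cutoff=0.85):
    """SIFT: lower is more deleterious; PolyPhen-2: higher is more damaging."""
    return variant["sift"] <= sift_cutoff or variant["polyphen2"] >= polyphen_cutoff

variants = [
    {"gene": "GENE_A", "sift": 0.00, "polyphen2": 1.00},  # damaging by both tools
    {"gene": "GENE_B", "sift": 0.42, "polyphen2": 0.10},  # tolerated by both tools
]

flagged = [v["gene"] for v in variants if is_likely_deleterious(v)]
print(flagged)  # ['GENE_A']
```

Running such a filter over the full ExAC missense set is what yields summary fractions like the 27%/6% figures quoted in the abstract.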

  15. LungMAP: The Molecular Atlas of Lung Development Program

    PubMed Central

    Ardini-Poleske, Maryanne E.; Ansong, Charles; Carson, James P.; Corley, Richard A.; Deutsch, Gail H.; Hagood, James S.; Kaminski, Naftali; Mariani, Thomas J.; Potter, Steven S.; Pryhuber, Gloria S.; Warburton, David; Whitsett, Jeffrey A.; Palmer, Scott M.; Ambalavanan, Namasivayam

    2017-01-01

    The National Heart, Lung, and Blood Institute is funding an effort to create a molecular atlas of the developing lung (LungMAP) to serve as a research resource and public education tool. The lung is a complex organ with lengthy development time driven by interactive gene networks and dynamic cross talk among multiple cell types to control and coordinate lineage specification, cell proliferation, differentiation, migration, morphogenesis, and injury repair. A better understanding of the processes that regulate lung development, particularly alveologenesis, will have a significant impact on survival rates for premature infants born with incomplete lung development and will facilitate lung injury repair and regeneration in adults. A consortium of four research centers, a data coordinating center, and a human tissue repository provides high-quality molecular data of developing human and mouse lungs. LungMAP includes mouse and human data for cross correlation of developmental processes across species. LungMAP is generating foundational data and analysis, creating a web portal for presentation of results and public sharing of data sets, establishing a repository of young human lung tissues obtained through organ donor organizations, and developing a comprehensive lung ontology that incorporates the latest findings of the consortium. The LungMAP website (www.lungmap.net) currently contains more than 6,000 high-resolution lung images and transcriptomic, proteomic, and lipidomic human and mouse data and provides scientific information to stimulate interest in research careers for young audiences. This paper presents a brief description of research conducted by the consortium, database, and portal development and upcoming features that will enhance the LungMAP experience for a community of users. PMID:28798251

  16. Making proteomics data accessible and reusable: Current state of proteomics databases and repositories

    PubMed Central

    Perez-Riverol, Yasset; Alpi, Emanuele; Wang, Rui; Hermjakob, Henning; Vizcaíno, Juan Antonio

    2015-01-01

    Compared to other data-intensive disciplines such as genomics, public deposition and storage of MS-based proteomics data are still less developed due to, among other reasons, the inherent complexity of the data and the variety of data types and experimental workflows. In order to address this need, several public repositories for MS proteomics experiments have been developed, each with different purposes in mind. The most established resources are the Global Proteome Machine Database (GPMDB), PeptideAtlas, and the PRIDE database. Additionally, there are other useful (in many cases recently developed) resources such as ProteomicsDB, Mass Spectrometry Interactive Virtual Environment (MassIVE), Chorus, MaxQB, PeptideAtlas SRM Experiment Library (PASSEL), Model Organism Protein Expression Database (MOPED), and the Human Proteinpedia. In addition, the ProteomeXchange consortium has recently been developed to enable better integration of public repositories and the coordinated sharing of proteomics information, maximizing its benefit to the scientific community. Here, we will review each of the major proteomics resources independently, along with some tools that enable the integration, mining, and reuse of the data. We will also discuss some of the major challenges and current pitfalls in the integration and sharing of the data. PMID:25158685

  17. Description and analysis of genetic variants in French hereditary breast and ovarian cancer families recorded in the UMD-BRCA1/BRCA2 databases.

    PubMed

    Caputo, Sandrine; Benboudjema, Louisa; Sinilnikova, Olga; Rouleau, Etienne; Béroud, Christophe; Lidereau, Rosette

    2012-01-01

    BRCA1 and BRCA2 are the two main genes responsible for predisposition to breast and ovarian cancers, as a result of protein-inactivating monoallelic mutations. It remains to be established whether many of the variants identified in these two genes, so-called unclassified/unknown variants (UVs), contribute to the disease phenotype or are simply neutral variants (or polymorphisms). Given the clinical importance of establishing their status, a nationwide effort to annotate these UVs was launched by laboratories belonging to the French GGC consortium (Groupe Génétique et Cancer), leading to the creation of the UMD-BRCA1/BRCA2 databases (http://www.umd.be/BRCA1/ and http://www.umd.be/BRCA2/). These databases have been endorsed by the French National Cancer Institute (INCa) and are designed to collect all variants detected in France, whether causal, neutral or UV. They differ from other BRCA databases in that they contain co-occurrence data for all variants. Using these data, the GGC French consortium has been able to classify certain UVs also contained in other databases. In this article, we report some novel UVs not contained in the BIC database and explore their impact in cancer predisposition based on a structural approach.
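The co-occurrence logic that distinguishes the UMD-BRCA1/BRCA2 databases can be sketched roughly as follows: because biallelic inactivation of BRCA1 or BRCA2 is generally not compatible with an ordinary breast/ovarian cancer presentation, an unclassified variant observed in trans with a known pathogenic variant of the same gene is evidence for neutrality. The function, variant identifiers, and single-criterion rule below are hypothetical illustrations, not the GGC consortium's actual classification procedure:

```python
# Hedged sketch of co-occurrence-based classification of unclassified
# variants (UVs). Identifiers like "BRCA1:UV_1" are invented placeholders.

def classify_by_cooccurrence(uv_id, cooccurrences, pathogenic):
    """A UV seen in trans with a known pathogenic variant of the same gene
    argues for neutrality; with no informative co-occurrence it remains
    unclassified (real schemes combine many more lines of evidence)."""
    for partner, phase in cooccurrences.get(uv_id, []):
        if partner in pathogenic and phase == "trans":
            return "likely neutral"
    return "unclassified"

pathogenic = {"BRCA1:PATH_1"}  # a placeholder for a known deleterious mutation
cooccurrences = {
    "BRCA1:UV_1": [("BRCA1:PATH_1", "trans")],  # informative co-occurrence
    "BRCA1:UV_2": [],                            # no co-occurrence data
}

print(classify_by_cooccurrence("BRCA1:UV_1", cooccurrences, pathogenic))
print(classify_by_cooccurrence("BRCA1:UV_2", cooccurrences, pathogenic))
```

Collecting co-occurrence records nationwide is precisely what lets the databases apply this kind of evidence to UVs that no single laboratory sees often enough to classify.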

  18. Illuminating the Depths of the MagIC (Magnetics Information Consortium) Database

    NASA Astrophysics Data System (ADS)

    Koppers, A. A. P.; Minnett, R.; Jarboe, N.; Jonestrask, L.; Tauxe, L.; Constable, C.

    2015-12-01

    The Magnetics Information Consortium (http://earthref.org/MagIC/) is a grass-roots cyberinfrastructure effort envisioned by the paleo-, geo-, and rock magnetic scientific community. Its mission is to archive their wealth of peer-reviewed raw data and interpretations from magnetics studies on natural and synthetic samples. Many of these valuable data are legacy datasets that were never published in their entirety, some resided in other databases that are no longer maintained, and others were never digitized from the field notebooks and lab work. Due to the volume of data collected, most studies, modern and legacy, only publish the interpreted results and, occasionally, a subset of the raw data. MagIC is making an extraordinary effort to archive these data in a single data model, including the raw instrument measurements if possible. This facilitates the reproducibility of the interpretations, the re-interpretation of the raw data as the community introduces new techniques, and the compilation of heterogeneous datasets that are otherwise distributed across multiple formats and physical locations. MagIC has developed tools to assist the scientific community in many stages of their workflow. Contributors easily share studies (in a private mode if so desired) in the MagIC Database with colleagues and reviewers prior to publication, publish the data online after the study is peer reviewed, and visualize their data in the context of the rest of the contributions to the MagIC Database. From organizing their data in the MagIC Data Model with an online editable spreadsheet, to validating the integrity of the dataset with automated plots and statistics, MagIC is continually lowering the barriers to transforming dark data into transparent and reproducible datasets. 
Additionally, this web application generalizes to other databases in MagIC's umbrella website (EarthRef.org) so that the Geochemical Earth Reference Model (http://earthref.org/GERM/) portal, Seamount Biogeosciences Network (http://earthref.org/SBN/), EarthRef Digital Archive (http://earthref.org/ERDA/) and EarthRef Reference Database (http://earthref.org/ERR/) benefit from its development.

  19. Clinical Sequencing Exploratory Research Consortium: Accelerating Evidence-Based Practice of Genomic Medicine.

    PubMed

    Green, Robert C; Goddard, Katrina A B; Jarvik, Gail P; Amendola, Laura M; Appelbaum, Paul S; Berg, Jonathan S; Bernhardt, Barbara A; Biesecker, Leslie G; Biswas, Sawona; Blout, Carrie L; Bowling, Kevin M; Brothers, Kyle B; Burke, Wylie; Caga-Anan, Charlisse F; Chinnaiyan, Arul M; Chung, Wendy K; Clayton, Ellen W; Cooper, Gregory M; East, Kelly; Evans, James P; Fullerton, Stephanie M; Garraway, Levi A; Garrett, Jeremy R; Gray, Stacy W; Henderson, Gail E; Hindorff, Lucia A; Holm, Ingrid A; Lewis, Michelle Huckaby; Hutter, Carolyn M; Janne, Pasi A; Joffe, Steven; Kaufman, David; Knoppers, Bartha M; Koenig, Barbara A; Krantz, Ian D; Manolio, Teri A; McCullough, Laurence; McEwen, Jean; McGuire, Amy; Muzny, Donna; Myers, Richard M; Nickerson, Deborah A; Ou, Jeffrey; Parsons, Donald W; Petersen, Gloria M; Plon, Sharon E; Rehm, Heidi L; Roberts, J Scott; Robinson, Dan; Salama, Joseph S; Scollon, Sarah; Sharp, Richard R; Shirts, Brian; Spinner, Nancy B; Tabor, Holly K; Tarczy-Hornoch, Peter; Veenstra, David L; Wagle, Nikhil; Weck, Karen; Wilfond, Benjamin S; Wilhelmsen, Kirk; Wolf, Susan M; Wynn, Julia; Yu, Joon-Ho

    2016-06-02

    Despite rapid technical progress and demonstrable effectiveness for some types of diagnosis and therapy, much remains to be learned about clinical genome and exome sequencing (CGES) and its role within the practice of medicine. The Clinical Sequencing Exploratory Research (CSER) consortium includes 18 extramural research projects, one National Human Genome Research Institute (NHGRI) intramural project, and a coordinating center funded by the NHGRI and National Cancer Institute. The consortium is exploring analytic and clinical validity and utility, as well as the ethical, legal, and social implications of sequencing via multidisciplinary approaches; it has thus far recruited 5,577 participants across a spectrum of symptomatic and healthy children and adults by utilizing both germline and cancer sequencing. The CSER consortium is analyzing data and creating publicly available procedures and tools related to participant preferences and consent, variant classification, disclosure and management of primary and secondary findings, health outcomes, and integration with electronic health records. Future research directions will refine measures of clinical utility of CGES in both germline and somatic testing, evaluate the use of CGES for screening in healthy individuals, explore the penetrance of pathogenic variants through extensive phenotyping, reduce discordances in public databases of genes and variants, examine social and ethnic disparities in the provision of genomics services, explore regulatory issues, and estimate the value and downstream costs of sequencing. The CSER consortium has established a shared community of research sites by using diverse approaches to pursue the evidence-based development of best practices in genomic medicine. Copyright © 2016 American Society of Human Genetics. All rights reserved.

  20. Desiderata for a Computer-Assisted Audit Tool for Clinical Data Source Verification Audits

    PubMed Central

    Duda, Stephany N.; Wehbe, Firas H.; Gadd, Cynthia S.

    2013-01-01

    Clinical data auditing often requires validating the contents of clinical research databases against source documents available in health care settings. Currently available data audit software, however, does not provide features necessary to compare the contents of such databases to source data in paper medical records. This work enumerates the primary weaknesses of using paper forms for clinical data audits and identifies the shortcomings of existing data audit software, as informed by the experiences of an audit team evaluating data quality for an international research consortium. The authors propose a set of attributes to guide the development of a computer-assisted clinical data audit tool to simplify and standardize the audit process. PMID:20841814
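
The gap the authors identify is the absence of software that compares a research database against source data. The core of such a computer-assisted audit can be sketched as a field-by-field discrepancy check; the field names and record values below are hypothetical illustrations, not taken from the paper:

```python
# Minimal sketch of automated source-data verification: compare each field
# of a research-database record against the value transcribed from the
# source document and report discrepancies. Field names are hypothetical.

def verify_record(db_record: dict, source_record: dict) -> list:
    """Return a list of (field, db_value, source_value) discrepancies."""
    discrepancies = []
    for field, db_value in db_record.items():
        source_value = source_record.get(field)
        if db_value != source_value:
            discrepancies.append((field, db_value, source_value))
    return discrepancies

db = {"patient_id": "P-017", "weight_kg": 64.2, "visit_date": "2009-03-14"}
src = {"patient_id": "P-017", "weight_kg": 62.4, "visit_date": "2009-03-14"}

for field, db_val, src_val in verify_record(db, src):
    print(f"MISMATCH {field}: database={db_val!r} source={src_val!r}")
```

A real tool would add the attributes the paper enumerates (audit trails, tolerance rules for dates and units, sampling plans), but the comparison loop above is the step that paper-based audits perform by hand.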

  1. Infrastructure resources for clinical research in amyotrophic lateral sclerosis.

    PubMed

    Sherman, Alexander V; Gubitz, Amelie K; Al-Chalabi, Ammar; Bedlack, Richard; Berry, James; Conwit, Robin; Harris, Brent T; Horton, D Kevin; Kaufmann, Petra; Leitner, Melanie L; Miller, Robert; Shefner, Jeremy; Vonsattel, Jean Paul; Mitsumoto, Hiroshi

    2013-05-01

    Clinical trial networks, shared clinical databases, and human biospecimen repositories are examples of infrastructure resources aimed at enhancing and expediting clinical and/or patient-oriented research to uncover the etiology and pathogenesis of amyotrophic lateral sclerosis (ALS), a rapidly progressive neurodegenerative disease that leads to the paralysis of voluntary muscles. The current status of such infrastructure resources, as well as opportunities and impediments, was discussed at the second Tarrytown ALS meeting held in September 2011. The discussion focused on resources developed and maintained by ALS clinics and centers in North America and Europe, various clinical trial networks, U.S. government federal agencies including the National Institutes of Health (NIH), the Agency for Toxic Substances and Disease Registry (ATSDR) and the Centers for Disease Control and Prevention (CDC), and several voluntary disease organizations that support ALS research activities. Key recommendations included 1) the establishment of shared databases among individual ALS clinics to enhance the coordination of resources and data analyses; 2) the expansion of quality-controlled human biospecimen banks; and 3) the adoption of uniform data standards, such as the recently developed Common Data Elements (CDEs) for ALS clinical research. The value of clinical trial networks such as the Northeast ALS (NEALS) Consortium and the Western ALS (WALS) Consortium was recognized, and strategies to further enhance and complement these networks and their research resources were discussed.

  2. Scientific Use Cases for the Virtual Atomic and Molecular Data Center

    NASA Astrophysics Data System (ADS)

    Dubernet, M. L.; Aboudarham, J.; Ba, Y. A.; Boiziot, M.; Bottinelli, S.; Caux, E.; Endres, C.; Glorian, J. M.; Henry, F.; Lamy, L.; Le Sidaner, P.; Møller, T.; Moreau, N.; Rénié, C.; Roueff, E.; Schilke, P.; Vastel, C.; Zwoelf, C. M.

    2014-12-01

    The VAMDC Consortium is a worldwide consortium that federates interoperable atomic and molecular databases through an e-science infrastructure. The data it contains are of the highest scientific quality and are crucial for many applications: astrophysics, atmospheric physics, fusion, plasma and lighting technologies, health, etc. In this paper we present astrophysical scientific use cases for the VAMDC e-infrastructure. These cover very different applications, such as: (i) modeling the spectra of interstellar objects using the myXCLASS software tool implemented in the Common Astronomy Software Applications package (CASA) or using the CASSIS software tool, in its stand-alone version or implemented in the Herschel Interactive Processing Environment (HIPE); (ii) the use of Virtual Observatory tools accessing VAMDC databases; (iii) access to VAMDC from the Paris solar BASS2000 portal; (iv) the combination of tools and databases from the APIS service (Auroral Planetary Imaging and Spectroscopy); (v) the combination of heterogeneous data for application to the interstellar medium from the SPECTCOL tool.

  3. NRA8-21 Cycle 2 RBCC Turbopump Risk Reduction

    NASA Technical Reports Server (NTRS)

    Ferguson, Thomas V.; Williams, Morgan; Marcu, Bogdan

    2004-01-01

    This project was composed of three subtasks. The objective of the first task was to use the CFD code INS3D to generate both on- and off-design predictions for the consortium-optimized impeller flowfield. The results of the flow simulations are given in the first section. The objective of the second task was to construct a turbomachinery testing database comprising measurements made on several different impellers, an inducer, and a diffuser. The data were in the form of static pressure measurements as well as laser velocimeter measurements of velocities and flow angles within the stated components. Several databases with this information were created for these components. The third subtask objective was twofold: first, to validate the Enigma CFD code for pump diffuser analysis, and second, to perform steady and unsteady analyses on some wide-flow-range diffuser concepts using Enigma. The code was validated using the consortium-optimized impeller database and then applied to two different concepts for wide-flow diffusers.

  4. Building An Integrated Neurodegenerative Disease Database At An Academic Health Center

    PubMed Central

    Xie, Sharon X.; Baek, Young; Grossman, Murray; Arnold, Steven E.; Karlawish, Jason; Siderowf, Andrew; Hurtig, Howard; Elman, Lauren; McCluskey, Leo; Van Deerlin, Vivianna; Lee, Virginia M.-Y.; Trojanowski, John Q.

    2010-01-01

    Background It is becoming increasingly important to study common and distinct etiologies, clinical and pathological features, and mechanisms related to neurodegenerative diseases such as Alzheimer’s disease (AD), Parkinson’s disease (PD), amyotrophic lateral sclerosis (ALS), and frontotemporal lobar degeneration (FTLD). These comparative studies rely on powerful database tools to quickly generate data sets which match diverse and complementary criteria set by the studies. Methods In this paper, we present a novel Integrated NeuroDegenerative Disease (INDD) database developed at the University of Pennsylvania (Penn) through a consortium of Penn investigators. Since these investigators work on AD, PD, ALS and FTLD, this allowed us to achieve the goal of developing an INDD database for these major neurodegenerative disorders. We used Microsoft SQL Server as the platform with built-in “backwards” functionality to provide Access as a front-end client to interface with the database. We used PHP Hypertext Preprocessor to create the “front end” web interface and then integrated individual neurodegenerative disease databases using a master lookup table. We also present methods of data entry, database security, database backups, and database audit trails for this INDD database. Results We compare the results of a biomarker study using the INDD database to those using an alternative approach by querying individual databases separately. Conclusions We have demonstrated that the Penn INDD database has the ability to query multiple database tables from a single console with high accuracy and reliability. The INDD database provides a powerful tool for generating data sets in comparative studies across several neurodegenerative diseases. PMID:21784346
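
The master-lookup-table integration pattern described above can be sketched in miniature. The sketch below uses SQLite in place of SQL Server, and the table and column names are hypothetical; the actual INDD schema is not published in this abstract:

```python
# Sketch of integrating disease-specific databases via a master lookup
# table, so one query spans several of them. Schema names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# One table stands in for each disease-specific database ...
cur.execute("CREATE TABLE ad (ad_id TEXT PRIMARY KEY, mmse INTEGER)")
cur.execute("CREATE TABLE pd (pd_id TEXT PRIMARY KEY, updrs INTEGER)")
# ... plus a master lookup table mapping each subject's IDs across them.
cur.execute("""CREATE TABLE master_lookup (
    global_id TEXT PRIMARY KEY, ad_id TEXT, pd_id TEXT)""")

cur.execute("INSERT INTO ad VALUES ('AD-1', 24)")
cur.execute("INSERT INTO pd VALUES ('PD-7', 31)")
cur.execute("INSERT INTO master_lookup VALUES ('S001', 'AD-1', 'PD-7')")

# A single query reaches both disease databases through the lookup table.
cur.execute("""
    SELECT m.global_id, ad.mmse, pd.updrs
    FROM master_lookup m
    LEFT JOIN ad ON ad.ad_id = m.ad_id
    LEFT JOIN pd ON pd.pd_id = m.pd_id
""")
print(cur.fetchall())  # [('S001', 24, 31)]
```

The LEFT JOINs matter: a subject enrolled in only one disease database still appears in the integrated result, with NULLs for the others.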

  5. Modernizing the MagIC Paleomagnetic and Rock Magnetic Database Technology Stack to Encourage Code Reuse and Reproducible Science

    NASA Astrophysics Data System (ADS)

    Minnett, R.; Koppers, A. A. P.; Jarboe, N.; Jonestrask, L.; Tauxe, L.; Constable, C.

    2016-12-01

    The Magnetics Information Consortium (https://earthref.org/MagIC/) develops and maintains a database and web application for supporting the paleo-, geo-, and rock magnetic scientific community. Historically, this objective has been met with an Oracle database and a Perl web application at the San Diego Supercomputer Center (SDSC). The Oracle Enterprise Cluster at SDSC, however, was decommissioned in July of 2016 and the cost for MagIC to continue using Oracle became prohibitive. This provided MagIC with a unique opportunity to reexamine the entire technology stack and data model. MagIC has developed an open-source web application using the Meteor (http://meteor.com) framework and a MongoDB database. The simplicity of the open-source full-stack framework that Meteor provides has improved MagIC's development pace and the increased flexibility of the data schema in MongoDB encouraged the reorganization of the MagIC Data Model. As a result of incorporating actively developed open-source projects into the technology stack, MagIC has benefited from their vibrant software development communities. This has translated into a more modern web application that has significantly improved the user experience for the paleo-, geo-, and rock magnetic scientific community.

  6. Genome databases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Courteau, J.

    1991-10-11

    Since the Genome Project began several years ago, a plethora of databases have been developed or are in the works. They range from the massive Genome Data Base at Johns Hopkins University, the central repository of all gene mapping information, to small databases focusing on single chromosomes or organisms. Some are publicly available, others are essentially private electronic lab notebooks. Still others limit access to a consortium of researchers working on, say, a single human chromosome. An increasing number incorporate sophisticated search and analytical software, while others operate as little more than data lists. In consultation with numerous experts in the field, a list has been compiled of some key genome-related databases. The list was not limited to map and sequence databases but also included the tools investigators use to interpret and elucidate genetic data, such as protein sequence and protein structure databases. Because a major goal of the Genome Project is to map and sequence the genomes of several experimental animals, including E. coli, yeast, fruit fly, nematode, and mouse, the available databases for those organisms are listed as well. The author also includes several databases that are still under development - including some ambitious efforts that go beyond data compilation to create what are being called electronic research communities, enabling many users, rather than just one or a few curators, to add or edit the data and tag it as raw or confirmed.

  7. LungMAP: The Molecular Atlas of Lung Development Program.

    PubMed

    Ardini-Poleske, Maryanne E; Clark, Robert F; Ansong, Charles; Carson, James P; Corley, Richard A; Deutsch, Gail H; Hagood, James S; Kaminski, Naftali; Mariani, Thomas J; Potter, Steven S; Pryhuber, Gloria S; Warburton, David; Whitsett, Jeffrey A; Palmer, Scott M; Ambalavanan, Namasivayam

    2017-11-01

    The National Heart, Lung, and Blood Institute is funding an effort to create a molecular atlas of the developing lung (LungMAP) to serve as a research resource and public education tool. The lung is a complex organ with lengthy development time driven by interactive gene networks and dynamic cross talk among multiple cell types to control and coordinate lineage specification, cell proliferation, differentiation, migration, morphogenesis, and injury repair. A better understanding of the processes that regulate lung development, particularly alveologenesis, will have a significant impact on survival rates for premature infants born with incomplete lung development and will facilitate lung injury repair and regeneration in adults. A consortium of four research centers, a data coordinating center, and a human tissue repository provides high-quality molecular data of developing human and mouse lungs. LungMAP includes mouse and human data for cross correlation of developmental processes across species. LungMAP is generating foundational data and analysis, creating a web portal for presentation of results and public sharing of data sets, establishing a repository of young human lung tissues obtained through organ donor organizations, and developing a comprehensive lung ontology that incorporates the latest findings of the consortium. The LungMAP website (www.lungmap.net) currently contains more than 6,000 high-resolution lung images and transcriptomic, proteomic, and lipidomic human and mouse data and provides scientific information to stimulate interest in research careers for young audiences. This paper presents a brief description of research conducted by the consortium, database, and portal development and upcoming features that will enhance the LungMAP experience for a community of users. Copyright © 2017 the American Physiological Society.

  8. The Generation Challenge Programme Platform: Semantic Standards and Workbench for Crop Science

    PubMed Central

    Bruskiewich, Richard; Senger, Martin; Davenport, Guy; Ruiz, Manuel; Rouard, Mathieu; Hazekamp, Tom; Takeya, Masaru; Doi, Koji; Satoh, Kouji; Costa, Marcos; Simon, Reinhard; Balaji, Jayashree; Akintunde, Akinnola; Mauleon, Ramil; Wanchana, Samart; Shah, Trushar; Anacleto, Mylah; Portugal, Arllet; Ulat, Victor Jun; Thongjuea, Supat; Braak, Kyle; Ritter, Sebastian; Dereeper, Alexis; Skofic, Milko; Rojas, Edwin; Martins, Natalia; Pappas, Georgios; Alamban, Ryan; Almodiel, Roque; Barboza, Lord Hendrix; Detras, Jeffrey; Manansala, Kevin; Mendoza, Michael Jonathan; Morales, Jeffrey; Peralta, Barry; Valerio, Rowena; Zhang, Yi; Gregorio, Sergio; Hermocilla, Joseph; Echavez, Michael; Yap, Jan Michael; Farmer, Andrew; Schiltz, Gary; Lee, Jennifer; Casstevens, Terry; Jaiswal, Pankaj; Meintjes, Ayton; Wilkinson, Mark; Good, Benjamin; Wagner, James; Morris, Jane; Marshall, David; Collins, Anthony; Kikuchi, Shoshi; Metz, Thomas; McLaren, Graham; van Hintum, Theo

    2008-01-01

    The Generation Challenge programme (GCP) is a global crop research consortium directed toward crop improvement through the application of comparative biology and genetic resources characterization to plant breeding. A key consortium research activity is the development of a GCP crop bioinformatics platform to support GCP research. This platform includes the following: (i) shared, public platform-independent domain models, ontology, and data formats to enable interoperability of data and analysis flows within the platform; (ii) web service and registry technologies to identify, share, and integrate information across diverse, globally dispersed data sources, as well as to access high-performance computational (HPC) facilities for computationally intensive, high-throughput analyses of project data; (iii) platform-specific middleware reference implementations of the domain model integrating a suite of public (largely open-access/-source) databases and software tools into a workbench to facilitate biodiversity analysis, comparative analysis of crop genomic data, and plant breeding decision making. PMID:18483570

  9. Understanding Differences in Administrative and Audited Patient Data in Cardiac Surgery: Comparison of the University HealthSystem Consortium and Society of Thoracic Surgeons Databases.

    PubMed

    Prasad, Anjali; Helder, Meghana R; Brown, Dwight A; Schaff, Hartzell V

    2016-10-01

    The University HealthSystem Consortium (UHC) administrative database has been used increasingly as a quality indicator for hospitals and even individual surgeons. We aimed to determine the accuracy of cardiac surgical data in the administrative UHC database vs data in the clinical Society of Thoracic Surgeons database. We reviewed demographic and outcomes information of patients with aortic valve replacement (AVR), mitral valve replacement (MVR), and coronary artery bypass grafting (CABG) surgery between January 1, 2012, and December 31, 2013. Data collected in aggregate and compared across the databases included case volume, physician specialty coding, patient age and sex, comorbidities, mortality rate, and postoperative complications. In these 2 years, the UHC database recorded 1,270 AVRs, 355 MVRs, and 1,473 CABGs. The Society of Thoracic Surgeons database case volumes were less by 2% to 12% (1,219 AVRs; 316 MVRs; and 1,442 CABGs). Errors in physician specialty coding occurred in UHC data (AVR, 0.6%; MVR, 0.8%; and CABG, 0.7%). In matched patients from each database, demographic age and sex information was identical. Although definitions differed in the databases, percentages of patients with at least one comorbidity were similar. Hospital mortality rates were similar as well, but postoperative recorded complications differed greatly. In comparing the 2 databases, we found similarity in patient demographic information and percentage of patients with comorbidities. The small difference in volumes of each operation type and the larger disparity in postoperative complications between the databases were related to differences in data definition, data collection, and coding errors. Copyright © 2016 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
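
The "less by 2% to 12%" figure can be checked directly from the case counts quoted in the abstract:

```python
# Case volumes quoted in the abstract: administrative (UHC) vs clinical
# (Society of Thoracic Surgeons) counts for the same two-year window.
volumes = {
    "AVR":  (1270, 1219),
    "MVR":  (355, 316),
    "CABG": (1473, 1442),
}

for op, (uhc, sts) in volumes.items():
    pct_less = 100 * (uhc - sts) / uhc
    print(f"{op}: STS volume is {pct_less:.1f}% less than UHC")
# AVR: 4.0% less, MVR: 11.0% less, CABG: 2.1% less
```

The spread (about 4%, 11%, and 2%) matches the 2% to 12% range the authors report.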

  10. Northeast Artificial Intelligence Consortium Annual Report - 1988. Volume 12. Computer Architectures for Very Large Knowledge Bases

    DTIC Science & Technology

    1989-10-01

    Vol. 18, No. 5, 1975, pp. 253-263. [CAR84] D.B. Carlin, J.P. Bednarz, CJ. Kaiser, J.C. Connolly, M.G. Harvey , "Multichannel optical recording using... Kellog [31] takes a similar approach as ILEX in the sense that it uses existing systems rather than developing specialized hardwares (the Xerox 1100...parallel complexity. In Proceedings of the International Conference on Database Theory, pages 1-30, September 1986. [31] C. Kellog . From data management to

  11. Cloud-Based NoSQL Open Database of Pulmonary Nodules for Computer-Aided Lung Cancer Diagnosis and Reproducible Research.

    PubMed

    Ferreira Junior, José Raniery; Oliveira, Marcelo Costa; de Azevedo-Marques, Paulo Mazzoncini

    2016-12-01

    Lung cancer is the leading cause of cancer-related deaths in the world, and its main manifestation is pulmonary nodules. Detection and classification of pulmonary nodules are challenging tasks that must be done by qualified specialists, but image interpretation errors make those tasks difficult. In order to aid radiologists in those hard tasks, it is important to integrate the computer-based tools with the lesion detection, pathology diagnosis, and image interpretation processes. However, computer-aided diagnosis research faces the problem of not having enough shared medical reference data for the development, testing, and evaluation of computational methods for diagnosis. In order to minimize this problem, this paper presents a public nonrelational document-oriented cloud-based database of pulmonary nodules characterized by 3D texture attributes, identified by experienced radiologists and classified in nine different subjective characteristics by the same specialists. Our goal with the development of this database is to improve computer-aided lung cancer diagnosis and pulmonary nodule detection and classification research through the deployment of this database in a cloud Database as a Service framework. Pulmonary nodule data was provided by the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI), image descriptors were acquired by a volumetric texture analysis, and the database schema was developed using a document-oriented Not only Structured Query Language (NoSQL) approach. The proposed database now contains 379 exams, 838 nodules, and 8,237 images, of which 4,029 are CT scans and 4,208 are manually segmented nodules, and it is allocated in a MongoDB instance on a cloud infrastructure.
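
A document-oriented nodule record of the kind described might look like the sketch below. The field names are illustrative assumptions (the abstract does not publish the schema), and plain Python dicts stand in for MongoDB documents:

```python
# Sketch of a document-oriented (NoSQL-style) pulmonary nodule record.
# Field names are hypothetical; dicts stand in for MongoDB documents.
nodule_doc = {
    "exam_id": "LIDC-0001",
    "nodule_id": 3,
    "texture_3d": {"energy": 0.82, "entropy": 4.1, "contrast": 12.7},
    "radiologist_ratings": {"malignancy": 4, "spiculation": 2},
    "segmented": True,
}

def find(collection, predicate):
    """MongoDB-style filter over an in-memory list of documents."""
    return [doc for doc in collection if predicate(doc)]

collection = [nodule_doc]
suspicious = find(collection, lambda d: d["radiologist_ratings"]["malignancy"] >= 4)
print(len(suspicious))  # 1
```

The appeal of the document model here is that nested texture attributes and the nine subjective ratings live in one record per nodule, with no fixed relational schema to migrate as new descriptors are added.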

  12. Investigating the Potential Impacts of Energy Production in the Marcellus Shale Region Using the Shale Network Database and CUAHSI-Supported Data Tools

    NASA Astrophysics Data System (ADS)

    Brazil, L.

    2017-12-01

    The Shale Network's extensive database of water quality observations enables educational experiences about the potential impacts of resource extraction with real data. Through open source tools that are developed and maintained by the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI), researchers, educators, and citizens can access and analyze the very same data that the Shale Network team has used in peer-reviewed publications about the potential impacts of hydraulic fracturing on water. The development of the Shale Network database has been made possible through collection efforts led by an academic team and involving numerous individuals from government agencies, citizen science organizations, and private industry. Thus far, CUAHSI-supported data tools have been used to engage high school students, university undergraduate and graduate students, as well as citizens so that all can discover how energy production impacts the Marcellus Shale region, which includes Pennsylvania and other nearby states. This presentation will describe these data tools, how the Shale Network has used them in developing educational material, and the resources available to learn more.

  13. FORWARD: A Registry and Longitudinal Clinical Database to Study Fragile X Syndrome

    PubMed Central

    Sherman, Stephanie L.; Kidd, Sharon A.; Riley, Catharine; Berry-Kravis, Elizabeth; Andrews, Howard F.; Miller, Robert M.; Lincoln, Sharyn; Swanson, Mark; Kaufmann, Walter E.; Brown, W. Ted

    2017-01-01

    BACKGROUND AND OBJECTIVE Advances in the care of patients with fragile X syndrome (FXS) have been hampered by lack of data. This deficiency has produced fragmentary knowledge regarding the natural history of this condition, healthcare needs, and the effects of the disease on caregivers. To remedy this deficiency, the Fragile X Clinic and Research Consortium was established to facilitate research. Through a collective effort, the Fragile X Clinic and Research Consortium developed the Fragile X Online Registry With Accessible Research Database (FORWARD) to facilitate multisite data collection. This report describes FORWARD and the way it can be used to improve health and quality of life of FXS patients and their relatives and caregivers. METHODS FORWARD collects demographic information on individuals with FXS and their family members (affected and unaffected) through a 1-time registry form. The longitudinal database collects clinician- and parent-reported data on individuals diagnosed with FXS, focused on those who are 0 to 24 years of age, although individuals of any age can participate. RESULTS The registry includes >2300 registrants (data collected September 7, 2009 to August 31, 2014). The longitudinal database includes data on 713 individuals diagnosed with FXS (data collected September 7, 2012 to August 31, 2014). Longitudinal data continue to be collected on enrolled patients along with baseline data on new patients. CONCLUSIONS FORWARD represents the largest resource of clinical and demographic data for the FXS population in the United States. These data can be used to advance our understanding of FXS: the impact of co-occurring conditions, the impact on the day-to-day lives of individuals living with FXS and their families, and short-term and long-term outcomes. PMID:28814539

  14. FORWARD: A Registry and Longitudinal Clinical Database to Study Fragile X Syndrome.

    PubMed

    Sherman, Stephanie L; Kidd, Sharon A; Riley, Catharine; Berry-Kravis, Elizabeth; Andrews, Howard F; Miller, Robert M; Lincoln, Sharyn; Swanson, Mark; Kaufmann, Walter E; Brown, W Ted

    2017-06-01

    Advances in the care of patients with fragile X syndrome (FXS) have been hampered by lack of data. This deficiency has produced fragmentary knowledge regarding the natural history of this condition, healthcare needs, and the effects of the disease on caregivers. To remedy this deficiency, the Fragile X Clinic and Research Consortium was established to facilitate research. Through a collective effort, the Fragile X Clinic and Research Consortium developed the Fragile X Online Registry With Accessible Research Database (FORWARD) to facilitate multisite data collection. This report describes FORWARD and the way it can be used to improve health and quality of life of FXS patients and their relatives and caregivers. FORWARD collects demographic information on individuals with FXS and their family members (affected and unaffected) through a 1-time registry form. The longitudinal database collects clinician- and parent-reported data on individuals diagnosed with FXS, focused on those who are 0 to 24 years of age, although individuals of any age can participate. The registry includes >2300 registrants (data collected September 7, 2009 to August 31, 2014). The longitudinal database includes data on 713 individuals diagnosed with FXS (data collected September 7, 2012 to August 31, 2014). Longitudinal data continue to be collected on enrolled patients along with baseline data on new patients. FORWARD represents the largest resource of clinical and demographic data for the FXS population in the United States. These data can be used to advance our understanding of FXS: the impact of co-occurring conditions, the impact on the day-to-day lives of individuals living with FXS and their families, and short-term and long-term outcomes. Copyright © 2017 by the American Academy of Pediatrics.

  15. Making proteomics data accessible and reusable: current state of proteomics databases and repositories.

    PubMed

    Perez-Riverol, Yasset; Alpi, Emanuele; Wang, Rui; Hermjakob, Henning; Vizcaíno, Juan Antonio

    2015-03-01

    Compared to other data-intensive disciplines such as genomics, public deposition and storage of MS-based proteomics data are still less developed due to, among other reasons, the inherent complexity of the data and the variety of data types and experimental workflows. In order to address this need, several public repositories for MS proteomics experiments have been developed, each with different purposes in mind. The most established resources are the Global Proteome Machine Database (GPMDB), PeptideAtlas, and the PRIDE database. Additionally, there are other useful (in many cases recently developed) resources such as ProteomicsDB, Mass Spectrometry Interactive Virtual Environment (MassIVE), Chorus, MaxQB, PeptideAtlas SRM Experiment Library (PASSEL), Model Organism Protein Expression Database (MOPED), and the Human Proteinpedia. In addition, the ProteomeXchange consortium has been recently developed to enable better integration of public repositories and the coordinated sharing of proteomics information, maximizing its benefit to the scientific community. Here, we will review each of the major proteomics resources independently and some tools that enable the integration, mining and reuse of the data. We will also discuss some of the major challenges and current pitfalls in the integration and sharing of the data. © 2014 The Authors. PROTEOMICS published by Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Completion of the 2011 National Land Cover Database for the conterminous United States – Representing a decade of land cover change information

    USGS Publications Warehouse

    Homer, Collin G.; Dewitz, Jon; Yang, Limin; Jin, Suming; Danielson, Patrick; Xian, George Z.; Coulston, John; Herold, Nathaniel; Wickham, James; Megown, Kevin

    2015-01-01

    The National Land Cover Database (NLCD) provides nationwide data on land cover and land cover change at the native 30-m spatial resolution of the Landsat Thematic Mapper (TM). The database is designed to provide five-year cyclical updating of United States land cover and associated changes. The recent release of NLCD 2011 products now represents a decade of consistently produced land cover and impervious surface for the Nation across three periods: 2001, 2006, and 2011 (Homer et al., 2007; Fry et al., 2011). Tree canopy cover has also been produced for 2011 (Coulston et al., 2012; Coulston et al., 2013). With the release of NLCD 2011, the database provides the ability to move beyond simple change detection to monitoring and trend assessments. NLCD 2011 represents the latest evolution of NLCD products, continuing its focus on consistency, production efficiency, and product accuracy. NLCD products are designed for widespread application in biology, climate, education, land management, hydrology, environmental planning, risk and disease analysis, telecommunications and visualization, and are available for no cost at http://www.mrlc.gov. NLCD is produced by a Federal agency consortium called the Multi-Resolution Land Characteristics Consortium (MRLC) (Wickham et al., 2014). In the consortium arrangement, the U.S. Geological Survey (USGS) leads NLCD land cover and imperviousness production for the bulk of the Nation; the National Oceanic and Atmospheric Administration (NOAA) completes NLCD land cover for the conterminous U.S. (CONUS) coastal zones; and the U.S. Forest Service (USFS) designs and produces the NLCD tree canopy cover product. Other MRLC partners collaborate through resource or data contribution to ensure NLCD products meet their respective program needs (Wickham et al., 2014).

  17. Microsoft Enterprise Consortium: A Resource for Teaching Data Warehouse, Business Intelligence and Database Management Systems

    ERIC Educational Resources Information Center

    Kreie, Jennifer; Hashemi, Shohreh

    2012-01-01

    Data is a vital resource for businesses; therefore, it is important for businesses to manage and use their data effectively. Because of this, businesses value college graduates with an understanding of and hands-on experience working with databases, data warehouses and data analysis theories and tools. Faculty in many business disciplines try to…

  18. A 30-meter spatial database for the nation's forests

    Treesearch

    Raymond L. Czaplewski

    2002-01-01

    The FIA vision for remote sensing originated in 1992 with the Blue Ribbon Panel on FIA, and it has since evolved into an ambitious performance target for 2003. FIA is joining a consortium of Federal agencies to map the Nation's land cover. FIA field data will help produce a seamless, standardized, national geospatial database for forests at the scale of 30-m...

  19. Building an integrated neurodegenerative disease database at an academic health center.

    PubMed

    Xie, Sharon X; Baek, Young; Grossman, Murray; Arnold, Steven E; Karlawish, Jason; Siderowf, Andrew; Hurtig, Howard; Elman, Lauren; McCluskey, Leo; Van Deerlin, Vivianna; Lee, Virginia M-Y; Trojanowski, John Q

    2011-07-01

    It is becoming increasingly important to study common and distinct etiologies, clinical and pathological features, and mechanisms related to neurodegenerative diseases such as Alzheimer's disease, Parkinson's disease, amyotrophic lateral sclerosis, and frontotemporal lobar degeneration. These comparative studies rely on powerful database tools to quickly generate data sets that match diverse and complementary criteria set by the studies. In this article, we present a novel integrated neurodegenerative disease (INDD) database, which was developed at the University of Pennsylvania (Penn) with the help of a consortium of Penn investigators. Because the work of these investigators is based on Alzheimer's disease, Parkinson's disease, amyotrophic lateral sclerosis, and frontotemporal lobar degeneration, it allowed us to achieve the goal of developing an INDD database for these major neurodegenerative disorders. We used the Microsoft SQL server as a platform, with built-in "backwards" functionality to provide Access as a frontend client to interface with the database. We used PHP Hypertext Preprocessor to create the "frontend" web interface and then used a master lookup table to integrate individual neurodegenerative disease databases. We also present methods of data entry, database security, database backups, and database audit trails for this INDD database. Using the INDD database, we compared the results of a biomarker study with those using an alternative approach by querying individual databases separately. We have demonstrated that the Penn INDD database has the ability to query multiple database tables from a single console with high accuracy and reliability. The INDD database provides a powerful tool for generating data sets in comparative studies on several neurodegenerative diseases. Copyright © 2011 The Alzheimer's Association. Published by Elsevier Inc. All rights reserved.

  20. National Maternal and Child Oral Health Resource Center

    MedlinePlus

    ... the Organizations Database Center for Oral Health Systems Integration and Improvement (COHSII) COHSII is a consortium promoting ... to e-mail lists Featured Resources Consensus Statement Integration Framework Bright Futures Pocket Guide Consumer Materials Special ...

  1. Construction of 3-D Earth Models for Station Specific Path Corrections by Dynamic Ray Tracing

    DTIC Science & Technology

    2001-10-01

the numerical eikonal solution method of Vidale (1988) being used by the MIT-led consortium. The model construction described in this report relies...assembled. REFERENCES Barazangi, M., Fielding, E., Isacks, B. & Seber, D., (1996), Geophysical And Geological Databases And CTBT...preprint download6). Fielding, E., Isacks, B.L., and Barazangi, M. (1992), A Network Accessible Geological and Geophysical Database for

  2. Developing knowledge resources to support precision medicine: principles from the Clinical Pharmacogenetics Implementation Consortium (CPIC).

    PubMed

    Hoffman, James M; Dunnenberger, Henry M; Kevin Hicks, J; Caudle, Kelly E; Whirl Carrillo, Michelle; Freimuth, Robert R; Williams, Marc S; Klein, Teri E; Peterson, Josh F

    2016-07-01

    To move beyond a select few genes/drugs, the successful adoption of pharmacogenomics into routine clinical care requires a curated and machine-readable database of pharmacogenomic knowledge suitable for use in an electronic health record (EHR) with clinical decision support (CDS). Recognizing that EHR vendors do not yet provide a standard set of CDS functions for pharmacogenetics, the Clinical Pharmacogenetics Implementation Consortium (CPIC) Informatics Working Group is developing and systematically incorporating a set of EHR-agnostic implementation resources into all CPIC guidelines. These resources illustrate how to integrate pharmacogenomic test results in clinical information systems with CDS to facilitate the use of patient genomic data at the point of care. Based on our collective experience creating existing CPIC resources and implementing pharmacogenomics at our practice sites, we outline principles to define the key features of future knowledge bases and discuss the importance of these knowledge resources for pharmacogenomics and ultimately precision medicine. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
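The "curated and machine-readable database ... suitable for use in an EHR with CDS" described above boils down to lookups from genotype to phenotype to recommendation. The sketch below is illustrative only: CYP2C19/clopidogrel is a real CPIC gene-drug pair, but the phenotype mappings and recommendation wording here are simplified placeholders, not actual CPIC guideline content.

```python
# Hypothetical machine-readable pharmacogenomic knowledge tables:
# diplotype -> phenotype, then (drug, phenotype) -> recommendation.
PHENOTYPE = {
    ("CYP2C19", "*1/*1"): "Normal metabolizer",
    ("CYP2C19", "*2/*2"): "Poor metabolizer",
}
RECOMMENDATION = {
    ("clopidogrel", "Poor metabolizer"): "Consider alternative antiplatelet therapy",
    ("clopidogrel", "Normal metabolizer"): "Standard dosing",
}

def cds_alert(gene, diplotype, drug):
    """Return the CDS recommendation for a stored genotype result."""
    phenotype = PHENOTYPE.get((gene, diplotype), "Indeterminate")
    return RECOMMENDATION.get((drug, phenotype), "No guideline on file")

alert = cds_alert("CYP2C19", "*2/*2", "clopidogrel")
```

Keeping the two mappings separate mirrors the EHR-agnostic design goal: the genotype-to-phenotype table can be updated independently of the drug-specific decision logic.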

  3. GlycomeDB – integration of open-access carbohydrate structure databases

    PubMed Central

    Ranzinger, René; Herget, Stephan; Wetter, Thomas; von der Lieth, Claus-Wilhelm

    2008-01-01

    Background Although carbohydrates are the third major class of biological macromolecules, after proteins and DNA, there is neither a comprehensive database for carbohydrate structures nor an established universal structure encoding scheme for computational purposes. Funding for further development of the Complex Carbohydrate Structure Database (CCSD or CarbBank) ceased in 1997, and since then several initiatives have developed independent databases with partially overlapping foci. For each database, different encoding schemes for residues and sequence topology were designed. Therefore, it is virtually impossible to obtain an overview of all deposited structures or to compare the contents of the various databases. Results We have implemented procedures which download the structures contained in the seven major databases, e.g. GLYCOSCIENCES.de, the Consortium for Functional Glycomics (CFG), the Kyoto Encyclopedia of Genes and Genomes (KEGG) and the Bacterial Carbohydrate Structure Database (BCSDB). We have created a new database called GlycomeDB, containing all structures, their taxonomic annotations and references (IDs) for the original databases. More than 100000 datasets were imported, resulting in more than 33000 unique sequences now encoded in GlycomeDB using the universal format GlycoCT. Inconsistencies were found in all public databases, which were discussed and corrected in multiple feedback rounds with the responsible curators. Conclusion GlycomeDB is a new, publicly available database for carbohydrate sequences with a unified, all-encompassing structure encoding format and NCBI taxonomic referencing. The database is updated weekly and can be downloaded free of charge. The JAVA application GlycoUpdateDB is also available for establishing and updating a local installation of GlycomeDB. With the advent of GlycomeDB, the distributed islands of knowledge in glycomics are now bridged to form a single resource. PMID:18803830
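The core integration step the abstract describes, collapsing records from several source databases into unique sequences while keeping cross-references back to the originals, can be sketched as follows. The canonical sequence strings here stand in for GlycoCT encodings; the record tuples and IDs are invented for illustration.

```python
def integrate(records):
    """records: iterable of (source_db, source_id, canonical_sequence).

    Returns one merged entry per unique canonical sequence, each
    carrying cross-references (IDs) to every originating database.
    """
    merged = {}
    for source_db, source_id, seq in records:
        entry = merged.setdefault(seq, {"sequence": seq, "xrefs": []})
        entry["xrefs"].append((source_db, source_id))
    return merged

records = [
    ("CFG", "cfg_001", "SEQ-A"),
    ("KEGG", "G00001", "SEQ-A"),   # same structure held by two databases
    ("BCSDB", "b_77", "SEQ-B"),
]
merged = integrate(records)
```

Keying on a single canonical encoding is what makes the deduplication possible at all; it is also where inconsistencies between source databases surface, matching the feedback rounds with curators that the abstract mentions.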

  4. Improvements to the Magnetics Information Consortium (MagIC) Paleo and Rock Magnetic Database

    NASA Astrophysics Data System (ADS)

    Jarboe, N.; Minnett, R.; Tauxe, L.; Koppers, A. A. P.; Constable, C.; Jonestrask, L.

    2015-12-01

The Magnetic Information Consortium (MagIC) database (http://earthref.org/MagIC/) continues to improve the ease of data uploading and editing, the creation of complex searches, data visualization, and data downloads for the paleomagnetic, geomagnetic, and rock magnetic communities. Online data editing is now available, so proprietary spreadsheet software is no longer needed. The data owner can change values in the database or delete entries through an HTML 5 web interface that resembles typical spreadsheets in behavior and use. Additive uploading now allows additions to data sets to be uploaded with a simple drag-and-drop interface. Searching the database has improved with the addition of more sophisticated search parameters and with the facility to use them in complex combinations. A comprehensive summary view of each search result has been added for quick data comprehension, while a raw data view is available if one desires to see all data columns as stored in the database. Data visualization plots (ARAI, equal area, demagnetization, Zijderveld, etc.) are presented with the data when appropriate to aid the user in understanding the dataset. MagIC data associated with individual contributions or from online searches may be downloaded in the tab-delimited MagIC text file format for subsequent offline use and analysis. With input from the paleomagnetic, geomagnetic, and rock magnetic communities, the MagIC database will continue to improve as a data warehouse and resource.
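A downloaded tab-delimited MagIC text file can be read with a few lines of code. The layout assumed below (a `tab delimited<TAB>table_name` header line, a column-header line, then data rows) follows the general MagIC text-file convention, but the table and column names in the sample are illustrative, not taken from a real contribution.

```python
def parse_magic_table(text):
    """Parse one table from a MagIC-style tab-delimited download."""
    lines = text.strip().splitlines()
    # First line identifies the format and the table name.
    _, table_name = lines[0].split("\t", 1)
    # Second line holds the column headers; the rest are data rows.
    columns = lines[1].split("\t")
    rows = [dict(zip(columns, line.split("\t"))) for line in lines[2:]]
    return table_name, rows

sample = (
    "tab delimited\tsites\n"
    "site\tlat\tlon\n"
    "S1\t44.6\t-123.6\n"
    "S2\t-77.8\t166.7\n"
)
table, rows = parse_magic_table(sample)
```

Values are left as strings here; a real reader would convert numeric columns according to the MagIC data model before offline analysis.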

  5. A New Interface for the Magnetics Information Consortium (MagIC) Paleo and Rock Magnetic Database

    NASA Astrophysics Data System (ADS)

    Jarboe, N.; Minnett, R.; Koppers, A. A. P.; Tauxe, L.; Constable, C.; Shaar, R.; Jonestrask, L.

    2014-12-01

    The Magnetic Information Consortium (MagIC) database (http://earthref.org/MagIC/) continues to improve the ease of uploading data, the creation of complex searches, data visualization, and data downloads for the paleomagnetic, geomagnetic, and rock magnetic communities. Data uploading has been simplified and no longer requires the use of the Excel SmartBook interface. Instead, properly formatted MagIC text files can be dragged-and-dropped onto an HTML 5 web interface. Data can be uploaded one table at a time to facilitate ease of uploading and data error checking is done online on the whole dataset at once instead of incrementally in an Excel Console. Searching the database has improved with the addition of more sophisticated search parameters and with the ability to use them in complex combinations. Searches may also be saved as permanent URLs for easy reference or for use as a citation in a publication. Data visualization plots (ARAI, equal area, demagnetization, Zijderveld, etc.) are presented with the data when appropriate to aid the user in understanding the dataset. Data from the MagIC database may be downloaded from individual contributions or from online searches for offline use and analysis in the tab delimited MagIC text file format. With input from the paleomagnetic, geomagnetic, and rock magnetic communities, the MagIC database will continue to improve as a data warehouse and resource.

  6. 24 CFR 943.124 - What elements must a consortium agreement contain?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false What elements must a consortium agreement contain? 943.124 Section 943.124 Housing and Urban Development Regulations Relating to Housing and... elements must a consortium agreement contain? (a) The consortium agreement among the participating PHAs...

  7. 24 CFR 943.124 - What elements must a consortium agreement contain?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 24 Housing and Urban Development 4 2011-04-01 2011-04-01 false What elements must a consortium agreement contain? 943.124 Section 943.124 Housing and Urban Development REGULATIONS RELATING TO HOUSING AND... elements must a consortium agreement contain? (a) The consortium agreement among the participating PHAs...

  8. 24 CFR 943.124 - What elements must a consortium agreement contain?

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 4 2012-04-01 2012-04-01 false What elements must a consortium agreement contain? 943.124 Section 943.124 Housing and Urban Development REGULATIONS RELATING TO HOUSING AND... elements must a consortium agreement contain? (a) The consortium agreement among the participating PHAs...

  9. 24 CFR 943.124 - What elements must a consortium agreement contain?

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 24 Housing and Urban Development 4 2013-04-01 2013-04-01 false What elements must a consortium agreement contain? 943.124 Section 943.124 Housing and Urban Development REGULATIONS RELATING TO HOUSING AND... elements must a consortium agreement contain? (a) The consortium agreement among the participating PHAs...

  10. 24 CFR 943.124 - What elements must a consortium agreement contain?

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 24 Housing and Urban Development 4 2014-04-01 2014-04-01 false What elements must a consortium agreement contain? 943.124 Section 943.124 Housing and Urban Development REGULATIONS RELATING TO HOUSING AND... elements must a consortium agreement contain? (a) The consortium agreement among the participating PHAs...

  11. Establishment of a Multi-State Experiential Pharmacy Program Consortium

    PubMed Central

    Unterwagner, Whitney L.; Byrd, Debbie C.

    2008-01-01

    In 2002, a regional consortium was created for schools and colleges of pharmacy in Georgia and Alabama to assist experiential education faculty and staff members in streamlining administrative processes, providing required preceptor development, establishing a professional network, and conducting scholarly endeavors. Five schools and colleges of pharmacy with many shared experiential practice sites formed a consortium to help experiential faculty and staff members identify, discuss, and solve common experience program issues and challenges. During its 5 years in existence, the Southeastern Pharmacy Experiential Education Consortium has coordinated experiential schedules, developed and implemented uniform evaluation tools, coordinated site and preceptor development activities, established a work group for educational research and scholarship, and provided opportunities for networking and professional development. Several consortium members have received national recognition for their individual experiential education accomplishments. Through the activities of a regional consortium, members have successfully developed programs and initiatives that have streamlined administrative processes and have the potential to improve overall quality of experiential education programs. Professionally, consortium activities have resulted in 5 national presentations. PMID:18698386

  12. Forest service contributions to the national land cover database (NLCD): Tree Canopy Cover Production

    Treesearch

    Bonnie Ruefenacht; Robert Benton; Vicky Johnson; Tanushree Biswas; Craig Baker; Mark Finco; Kevin Megown; John Coulston; Ken Winterberger; Mark Riley

    2015-01-01

    A tree canopy cover (TCC) layer is one of three elements in the National Land Cover Database (NLCD) 2011 suite of nationwide geospatial data layers. In 2010, the USDA Forest Service (USFS) committed to creating the TCC layer as a member of the Multi-Resolution Land Cover (MRLC) consortium. A general methodology for creating the TCC layer was reported at the 2012 FIA...

  13. RNAcentral: an international database of ncRNA sequences

    DOE PAGES

    Williams, Kelly Porter

    2014-10-28

    The field of non-coding RNA biology has been hampered by the lack of availability of a comprehensive, up-to-date collection of accessioned RNA sequences. Here we present the first release of RNAcentral, a database that collates and integrates information from an international consortium of established RNA sequence databases. The initial release contains over 8.1 million sequences, including representatives of all major functional classes. A web portal (http://rnacentral.org) provides free access to data, search functionality, cross-references, source code and an integrated genome browser for selected species.

  14. The DBCLS BioHackathon: standardization and interoperability for bioinformatics web services and workflows. The DBCLS BioHackathon Consortium*.

    PubMed

    Katayama, Toshiaki; Arakawa, Kazuharu; Nakao, Mitsuteru; Ono, Keiichiro; Aoki-Kinoshita, Kiyoko F; Yamamoto, Yasunori; Yamaguchi, Atsuko; Kawashima, Shuichi; Chun, Hong-Woo; Aerts, Jan; Aranda, Bruno; Barboza, Lord Hendrix; Bonnal, Raoul Jp; Bruskiewich, Richard; Bryne, Jan C; Fernández, José M; Funahashi, Akira; Gordon, Paul Mk; Goto, Naohisa; Groscurth, Andreas; Gutteridge, Alex; Holland, Richard; Kano, Yoshinobu; Kawas, Edward A; Kerhornou, Arnaud; Kibukawa, Eri; Kinjo, Akira R; Kuhn, Michael; Lapp, Hilmar; Lehvaslaiho, Heikki; Nakamura, Hiroyuki; Nakamura, Yasukazu; Nishizawa, Tatsuya; Nobata, Chikashi; Noguchi, Tamotsu; Oinn, Thomas M; Okamoto, Shinobu; Owen, Stuart; Pafilis, Evangelos; Pocock, Matthew; Prins, Pjotr; Ranzinger, René; Reisinger, Florian; Salwinski, Lukasz; Schreiber, Mark; Senger, Martin; Shigemoto, Yasumasa; Standley, Daron M; Sugawara, Hideaki; Tashiro, Toshiyuki; Trelles, Oswaldo; Vos, Rutger A; Wilkinson, Mark D; York, William; Zmasek, Christian M; Asai, Kiyoshi; Takagi, Toshihisa

    2010-08-21

Web services have become a key technology for bioinformatics, since life science databases are globally decentralized and the exponential increase in the amount of available data demands efficient systems that avoid transferring entire databases for every step of an analysis. However, various incompatibilities among database resources and analysis services make it difficult to connect and integrate these into interoperable workflows. To resolve this situation, we invited domain specialists from web service providers, client software developers, Open Bio* projects, the BioMoby project and researchers of emerging areas where a standard exchange data format is not well established, for an intensive collaboration entitled the BioHackathon 2008. The meeting was hosted by the Database Center for Life Science (DBCLS) and Computational Biology Research Center (CBRC) and was held in Tokyo from February 11th to 15th, 2008. In this report we highlight the work accomplished and the common issues arising from this event, including the standardization of data exchange formats and services in the emerging fields of glycoinformatics, biological interaction networks, text mining, and phyloinformatics. In addition, common shared object development based on BioSQL, as well as technical challenges in large data management, asynchronous services, and security are discussed. Consequently, we improved interoperability of web services in several fields; however, further cooperation among major database centers and continued collaborative efforts between service providers and software developers are still necessary for an effective advance in bioinformatics web service technologies.

  15. The DBCLS BioHackathon: standardization and interoperability for bioinformatics web services and workflows. The DBCLS BioHackathon Consortium*

    PubMed Central

    2010-01-01

Web services have become a key technology for bioinformatics, since life science databases are globally decentralized and the exponential increase in the amount of available data demands efficient systems that avoid transferring entire databases for every step of an analysis. However, various incompatibilities among database resources and analysis services make it difficult to connect and integrate these into interoperable workflows. To resolve this situation, we invited domain specialists from web service providers, client software developers, Open Bio* projects, the BioMoby project and researchers of emerging areas where a standard exchange data format is not well established, for an intensive collaboration entitled the BioHackathon 2008. The meeting was hosted by the Database Center for Life Science (DBCLS) and Computational Biology Research Center (CBRC) and was held in Tokyo from February 11th to 15th, 2008. In this report we highlight the work accomplished and the common issues arising from this event, including the standardization of data exchange formats and services in the emerging fields of glycoinformatics, biological interaction networks, text mining, and phyloinformatics. In addition, common shared object development based on BioSQL, as well as technical challenges in large data management, asynchronous services, and security are discussed. Consequently, we improved interoperability of web services in several fields; however, further cooperation among major database centers and continued collaborative efforts between service providers and software developers are still necessary for an effective advance in bioinformatics web service technologies. PMID:20727200

  16. Development of a 2001 National Land Cover Database for the United States

    USGS Publications Warehouse

    Homer, Collin G.; Huang, Chengquan; Yang, Limin; Wylie, Bruce K.; Coan, Michael

    2004-01-01

Multi-Resolution Land Characterization 2001 (MRLC 2001) is a second-generation Federal consortium designed to create an updated pool of nation-wide Landsat 5 and 7 imagery and derive a second-generation National Land Cover Database (NLCD 2001). The objectives of this multi-layer, multi-source database are twofold: first, to provide consistent land cover for all 50 States, and second, to provide a data framework which allows flexibility in developing and applying each independent data component to a wide variety of other applications. Components in the database include the following: (1) normalized imagery for three time periods per path/row, (2) ancillary data, including a 30 m Digital Elevation Model (DEM) from which slope, aspect, and slope position are derived, (3) per-pixel estimates of percent imperviousness and percent tree canopy, (4) 29 classes of land cover data derived from the imagery, ancillary data, and derivatives, (5) classification rules, confidence estimates, and metadata from the land cover classification. This database is now being developed using a Mapping Zone approach, with 66 Zones in the continental United States and 23 Zones in Alaska. Results from three initial mapping Zones show single-pixel land cover accuracies ranging from 73 to 77 percent, imperviousness accuracies ranging from 83 to 91 percent, tree canopy accuracies ranging from 78 to 93 percent, and an estimated 50 percent increase in mapping efficiency over previous methods. The database has now entered the production phase and is being created using extensive partnering in the Federal government with planned completion by 2006.

  17. Making the MagIC (Magnetics Information Consortium) Web Application Accessible to New Users and Useful to Experts

    NASA Astrophysics Data System (ADS)

    Minnett, R.; Koppers, A.; Jarboe, N.; Tauxe, L.; Constable, C.; Jonestrask, L.

    2017-12-01

Both new and experienced users face challenges when contributing their data to community repositories, discovering data, or engaging in potentially transformative science. The Magnetics Information Consortium (https://earthref.org/MagIC) has recently simplified its data model and developed a new containerized web application to reduce the friction in contributing, exploring, and combining valuable and complex datasets for the paleo-, geo-, and rock magnetic scientific community. The new data model more closely reflects the hierarchical workflow in paleomagnetic experiments to enable adequate annotation of scientific results and ensure reproducibility. The new open-source (https://github.com/earthref/MagIC) application includes an upload tool that is integrated with the data model to provide early data validation feedback and ease the friction of contributing and updating datasets. The search interface provides a powerful full-text search of contributions indexed by ElasticSearch and a wide array of filters, including specific geographic and geological timescale filtering, to support both novice users exploring the database and experts interested in compiling new datasets with specific criteria across thousands of studies and millions of measurements. The datasets are not large, but they are complex, with many results from evolving experimental and analytical approaches. These data are also extremely valuable due to the cost of collecting or creating physical samples and the often destructive nature of the experiments. MagIC is heavily invested in encouraging young scientists as well as established labs to cultivate workflows that facilitate contributing their data in a consistent format. This eLightning presentation includes a live demonstration of the MagIC web application, developed as a configurable container hosting an isomorphic Meteor JavaScript application, MongoDB database, and ElasticSearch search engine. Visitors can explore the MagIC Database through maps and image or plot galleries or search and filter the raw measurements and their derived hierarchy of analytical interpretations.
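The combination of full-text search with geographic filtering described above maps naturally onto an Elasticsearch bool query. The sketch below only builds the request body as a plain dictionary; the index structure and field names (`summary.all`, `location.lat`, `location.lon`) are assumptions for illustration and no connection to the real MagIC backend is made.

```python
def build_search(query_text, lat_range, lon_range):
    """Construct an Elasticsearch-style bool query: full-text match
    scored under `must`, geographic bounds as non-scoring `filter`s."""
    return {
        "query": {
            "bool": {
                "must": [{"match": {"summary.all": query_text}}],
                "filter": [
                    {"range": {"location.lat": {"gte": lat_range[0], "lte": lat_range[1]}}},
                    {"range": {"location.lon": {"gte": lon_range[0], "lte": lon_range[1]}}},
                ],
            }
        }
    }

body = build_search("basalt paleointensity", (30, 50), (-130, -110))
```

Putting the geographic constraints in `filter` rather than `must` is the standard design choice here: filters are cacheable and do not affect relevance scoring, so the text match alone ranks the results.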

  18. Drug development and nonclinical to clinical translational databases: past and current efforts.

    PubMed

    Monticello, Thomas M

    2015-01-01

    The International Consortium for Innovation and Quality (IQ) in Pharmaceutical Development is a science-focused organization of pharmaceutical and biotechnology companies. The mission of the Preclinical Safety Leadership Group (DruSafe) of the IQ is to advance science-based standards for nonclinical development of pharmaceutical products and to promote high-quality and effective nonclinical safety testing that can enable human risk assessment. DruSafe is creating an industry-wide database to determine the accuracy with which the interpretation of nonclinical safety assessments in animal models correctly predicts human risk in the early clinical development of biopharmaceuticals. This initiative aligns with the 2011 Food and Drug Administration strategic plan to advance regulatory science and modernize toxicology to enhance product safety. Although similar in concept to the initial industry-wide concordance data set conducted by International Life Sciences Institute's Health and Environmental Sciences Institute (HESI/ILSI), the DruSafe database will proactively track concordance, include exposure data and large and small molecules, and will continue to expand with longer duration nonclinical and clinical study comparisons. The output from this work will help identify actual human and animal adverse event data to define both the reliability and the potential limitations of nonclinical data and testing paradigms in predicting human safety in phase 1 clinical trials. © 2014 by The Author(s).
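The concordance tracking such a database enables reduces, for any one adverse-event category, to a two-by-two tabulation of nonclinical (animal) findings against clinical (human) outcomes. The sketch below shows that bookkeeping; the compound outcomes are invented for illustration and are not DruSafe data.

```python
def concordance(pairs):
    """pairs: iterable of (animal_positive, human_positive) booleans.

    Returns (sensitivity, specificity) of the animal finding as a
    predictor of the human adverse event.
    """
    tp = sum(1 for a, h in pairs if a and h)          # predicted and occurred
    fn = sum(1 for a, h in pairs if not a and h)      # missed by animal studies
    fp = sum(1 for a, h in pairs if a and not h)      # animal-only finding
    tn = sum(1 for a, h in pairs if not a and not h)  # clean in both
    return tp / (tp + fn), tn / (tn + fp)

# Six hypothetical compounds: animal finding vs. phase 1 human outcome.
pairs = [(True, True), (True, True), (True, False),
         (False, True), (False, False), (False, False)]
sens, spec = concordance(pairs)
```

Extending this with exposure data and study duration, as the abstract describes, amounts to stratifying the same tabulation by those covariates.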

  19. Targets of Opportunity: Strategies for Managing a Staff Development Consortium.

    ERIC Educational Resources Information Center

    Parsons, Michael H.

    The Appalachian Staff Development Consortium, comprised of three community colleges and the state college located in Appalachian Maryland, attempts to integrate staff development activities into the operational framework of the sponsoring agencies. The consortium, which is managed by a steering committee composed of one teaching faculty member and…

  20. "Watching the Detectives" report of the general assembly of the EU project DETECTIVE Brussels, 24-25 November 2015.

    PubMed

    Fernando, Ruani N; Chaudhari, Umesh; Escher, Sylvia E; Hengstler, Jan G; Hescheler, Jürgen; Jennings, Paul; Keun, Hector C; Kleinjans, Jos C S; Kolde, Raivo; Kollipara, Laxmikanth; Kopp-Schneider, Annette; Limonciel, Alice; Nemade, Harshal; Nguemo, Filomain; Peterson, Hedi; Prieto, Pilar; Rodrigues, Robim M; Sachinidis, Agapios; Schäfer, Christoph; Sickmann, Albert; Spitkovsky, Dimitry; Stöber, Regina; van Breda, Simone G J; van de Water, Bob; Vivier, Manon; Zahedi, René P; Vinken, Mathieu; Rogiers, Vera

    2016-06-01

    SEURAT-1 is a joint research initiative between the European Commission and Cosmetics Europe aiming to develop in vitro- and in silico-based methods to replace the in vivo repeated dose systemic toxicity test used for the assessment of human safety. As one of the building blocks of SEURAT-1, the DETECTIVE project focused on a key element on which in vitro toxicity testing relies: the development of robust and reliable, sensitive and specific in vitro biomarkers and surrogate endpoints that can be used for safety assessments of chronically acting toxicants, relevant for humans. The work conducted by the DETECTIVE consortium partners has established a screening pipeline of functional and "-omics" technologies, including high-content and high-throughput screening platforms, to develop and investigate human biomarkers for repeated dose toxicity in cellular in vitro models. Identification and statistical selection of highly predictive biomarkers in a pathway- and evidence-based approach constitute a major step in an integrated approach towards the replacement of animal testing in human safety assessment. To discuss the final outcomes and achievements of the consortium, a meeting was organized in Brussels. This meeting brought together data-producing and supporting consortium partners. The presentations focused on the current state of ongoing and concluding projects and the strategies employed to identify new relevant biomarkers of toxicity. The outcomes and deliverables, including the dissemination of results in data-rich "-omics" databases, were discussed as were the future perspectives of the work completed under the DETECTIVE project. Although some projects were still in progress and required continued data analysis, this report summarizes the presentations, discussions and the outcomes of the project.

  1. “Watching the Detectives” Report of the general assembly of the EU project DETECTIVE Brussels, 24-25 November, 2015

    PubMed Central

    Fernando, Ruani N.; Chaudhari, Umesh; Escher, Sylvia E.; Hengstler, Jan G.; Hescheler, Jürgen; Jennings, Paul; Keun, Hector C.; Kleinjans, Jos C. S.; Kolde, Raivo; Kollipara, Laxmikanth; Kopp-Schneider, Annette; Limonciel, Alice; Nemade, Harshal; Nguemo, Filomain; Peterson, Hedi; Prieto, Pilar; Rodrigues, Robim M.; Sachinidis, Agapios; Schäfer, Christoph; Sickmann, Albert; Spitkovsky, Dimitry; Stöber, Regina; van Breda, Simone G.J.; van de Water, Bob; Vivier, Manon; Zahedi, René P.

    2017-01-01

    SEURAT-1 is a joint research initiative between the European Commission and Cosmetics Europe aiming to develop in vitro and in silico based methods to replace the in vivo repeated dose systemic toxicity test used for the assessment of human safety. As one of the building blocks of SEURAT-1, the DETECTIVE project focused on a key element on which in vitro toxicity testing relies: the development of robust and reliable, sensitive and specific in vitro biomarkers and surrogate endpoints that can be used for safety assessments of chronically acting toxicants, relevant for humans. The work conducted by the DETECTIVE consortium partners has established a screening pipeline of functional and “-omics” technologies, including high-content and high-throughput screening platforms, to develop and investigate human biomarkers for repeated dose toxicity in cellular in vitro models. Identification and statistical selection of highly predictive biomarkers in a pathway- and evidence-based approach constitutes a major step in an integrated approach towards the replacement of animal testing in human safety assessment. To discuss the final outcomes and achievements of the consortium, a meeting was organized in Brussels. This meeting brought together data-producing and supporting consortium partners. The presentations focused on the current state of ongoing and concluding projects and the strategies employed to identify new relevant biomarkers of toxicity. The outcomes and deliverables, including the dissemination of results in data-rich “-omics” databases, were discussed as were the future perspectives of the work completed under the DETECTIVE project. Although some projects were still in progress and required continued data analysis, this report summarizes the presentations, discussions and the outcomes of the project. PMID:27129694

  2. OPAC Missing Record Retrieval.

    ERIC Educational Resources Information Center

    Johnson, Karl E.

    1996-01-01

    When the Higher Education Library Information Network of Rhode Island transferred members' bibliographic data into a shared online public access catalog (OPAC), 10% of the University of Rhode Island's monograph records were missing. This article describes the consortium's attempts to retrieve records from the database and the effectiveness of…

  3. The Multi-Resolution Land Characteristics (MRLC) Consortium: 20 years of development and integration of USA national land cover data

    USGS Publications Warehouse

Wickham, James D.; Homer, Collin G.; Vogelmann, James E.; McKerrow, Alexa; Mueller, Rick; Herold, Nate; Coulston, John

    2014-01-01

    The Multi-Resolution Land Characteristics (MRLC) Consortium demonstrates the national benefits of USA Federal collaboration. Starting in the mid-1990s as a small group with the straightforward goal of compiling a comprehensive national Landsat dataset that could be used to meet agencies’ needs, MRLC has grown into a group of 10 USA Federal Agencies that coordinate the production of five different products, including the National Land Cover Database (NLCD), the Coastal Change Analysis Program (C-CAP), the Cropland Data Layer (CDL), the Gap Analysis Program (GAP), and the Landscape Fire and Resource Management Planning Tools (LANDFIRE). As a set, the products include almost every aspect of land cover from impervious surface to detailed crop and vegetation types to fire fuel classes. Some products can be used for land cover change assessments because they cover multiple time periods. The MRLC Consortium has become a collaborative forum, where members share research, methodological approaches, and data to produce products using established protocols, and we believe it is a model for the production of integrated land cover products at national to continental scales. We provide a brief overview of each of the main products produced by MRLC and examples of how each product has been used. We follow that with a discussion of the impact of the MRLC program and a brief overview of future plans.

  4. The minimum information about a genome sequence (MIGS) specification

    PubMed Central

    Field, Dawn; Garrity, George; Gray, Tanya; Morrison, Norman; Selengut, Jeremy; Sterk, Peter; Tatusova, Tatiana; Thomson, Nicholas; Allen, Michael J; Angiuoli, Samuel V; Ashburner, Michael; Axelrod, Nelson; Baldauf, Sandra; Ballard, Stuart; Boore, Jeffrey; Cochrane, Guy; Cole, James; Dawyndt, Peter; De Vos, Paul; dePamphilis, Claude; Edwards, Robert; Faruque, Nadeem; Feldman, Robert; Gilbert, Jack; Gilna, Paul; Glöckner, Frank Oliver; Goldstein, Philip; Guralnick, Robert; Haft, Dan; Hancock, David; Hermjakob, Henning; Hertz-Fowler, Christiane; Hugenholtz, Phil; Joint, Ian; Kagan, Leonid; Kane, Matthew; Kennedy, Jessie; Kowalchuk, George; Kottmann, Renzo; Kolker, Eugene; Kravitz, Saul; Kyrpides, Nikos; Leebens-Mack, Jim; Lewis, Suzanna E; Li, Kelvin; Lister, Allyson L; Lord, Phillip; Maltsev, Natalia; Markowitz, Victor; Martiny, Jennifer; Methe, Barbara; Mizrachi, Ilene; Moxon, Richard; Nelson, Karen; Parkhill, Julian; Proctor, Lita; White, Owen; Sansone, Susanna-Assunta; Spiers, Andrew; Stevens, Robert; Swift, Paul; Taylor, Chris; Tateno, Yoshio; Tett, Adrian; Turner, Sarah; Ussery, David; Vaughan, Bob; Ward, Naomi; Whetzel, Trish; Gil, Ingio San; Wilson, Gareth; Wipat, Anil

    2008-01-01

    With the quantity of genomic data increasing at an exponential rate, it is imperative that these data be captured electronically, in a standard format. Standardization activities must proceed within the auspices of open-access and international working bodies. To tackle the issues surrounding the development of better descriptions of genomic investigations, we have formed the Genomic Standards Consortium (GSC). Here, we introduce the minimum information about a genome sequence (MIGS) specification with the intent of promoting participation in its development and discussing the resources that will be required to develop improved mechanisms of metadata capture and exchange. As part of its wider goals, the GSC also supports improving the ‘transparency’ of the information contained in existing genomic databases. PMID:18464787

  5. Metabolomics as a Hypothesis-Generating Functional Genomics Tool for the Annotation of Arabidopsis thaliana Genes of “Unknown Function”

    PubMed Central

    Quanbeck, Stephanie M.; Brachova, Libuse; Campbell, Alexis A.; Guan, Xin; Perera, Ann; He, Kun; Rhee, Seung Y.; Bais, Preeti; Dickerson, Julie A.; Dixon, Philip; Wohlgemuth, Gert; Fiehn, Oliver; Barkan, Lenore; Lange, Iris; Lange, B. Markus; Lee, Insuk; Cortes, Diego; Salazar, Carolina; Shuman, Joel; Shulaev, Vladimir; Huhman, David V.; Sumner, Lloyd W.; Roth, Mary R.; Welti, Ruth; Ilarslan, Hilal; Wurtele, Eve S.; Nikolau, Basil J.

    2012-01-01

    Metabolomics is the methodology that identifies and measures global pools of small molecules (of less than about 1,000 Da) of a biological sample, which are collectively called the metabolome. Metabolomics can therefore reveal the metabolic outcome of a genetic or environmental perturbation of a metabolic regulatory network, and thus provide insights into the structure and regulation of that network. Because of the chemical complexity of the metabolome and limitations associated with individual analytical platforms for determining the metabolome, it is currently difficult to capture the complete metabolome of an organism or tissue, which is in contrast to genomics and transcriptomics. This paper describes the analysis of Arabidopsis metabolomics data sets acquired by a consortium that includes five analytical laboratories, bioinformaticists, and biostatisticians, which aims to develop and validate metabolomics as a hypothesis-generating functional genomics tool. The consortium is determining the metabolomes of Arabidopsis T-DNA mutant stocks, grown in a standardized controlled environment optimized to minimize environmental impacts on the metabolomes. Metabolomics data were generated with seven analytical platforms, and the combined data are being provided to the research community to formulate initial hypotheses about genes of unknown function (GUFs). A public database (www.PlantMetabolomics.org) has been developed to provide the scientific community with access to the data along with tools to allow for its interactive analysis. Exemplary datasets are discussed to validate the approach, which illustrate how initial hypotheses can be generated from the consortium-produced metabolomics data, integrated with prior knowledge to provide a testable hypothesis concerning the functionality of GUFs. PMID:22645570

  6. The West Virginia Consortium for Faculty and Course Development in International Studies.

    ERIC Educational Resources Information Center

    Peterson, Sophia; Maxwell, John

    The West Virginia Consortium for Faculty and Course Development in International Studies (FACDIS) is described in this report. FACDIS, a consortium of 21 West Virginia institutions of higher education, assists in international studies course development, revision, and enrichment. It also helps faculty remain current in their fields and in new…

  7. A novel cross-disciplinary multi-institute approach to translational cancer research: lessons learned from Pennsylvania Cancer Alliance Bioinformatics Consortium (PCABC).

    PubMed

    Patel, Ashokkumar A; Gilbertson, John R; Showe, Louise C; London, Jack W; Ross, Eric; Ochs, Michael F; Carver, Joseph; Lazarus, Andrea; Parwani, Anil V; Dhir, Rajiv; Beck, J Robert; Liebman, Michael; Garcia, Fernando U; Prichard, Jeff; Wilkerson, Myra; Herberman, Ronald B; Becich, Michael J

    2007-06-08

    The Pennsylvania Cancer Alliance Bioinformatics Consortium (PCABC, http://www.pcabc.upmc.edu) is one of the first major project-based initiatives stemming from the Pennsylvania Cancer Alliance that was funded for four years by the Department of Health of the Commonwealth of Pennsylvania. The objective of this project was to initiate a prototype biorepository and bioinformatics infrastructure with a robust data warehouse by developing (1) a statewide data model for bioinformatics and a repository of serum and tissue samples; (2) a data model for biomarker data storage; and (3) a public access website for disseminating research results and bioinformatics tools. The members of the Consortium cooperate closely, exploring the opportunity for sharing clinical, genomic and other bioinformatics data on patient samples in oncology, for the purpose of developing collaborative research programs across cancer research institutions in Pennsylvania. The Consortium's intention was to establish a virtual repository of many clinical specimens residing in various centers across the state, in order to make them available for research. One of our primary goals was to facilitate the identification of cancer-specific biomarkers and encourage collaborative research efforts among the participating centers. The PCABC has developed unique partnerships so that every region of the state can effectively contribute and participate. It includes over 80 individuals from 14 organizations, and plans to expand to partners outside the State. This has created a network of researchers, clinicians, bioinformaticians, cancer registrars, program directors, and executives from academic and community health systems, as well as external corporate partners - all working together to accomplish a common mission.
The various sub-committees have developed a common IRB protocol template, common data elements for standardizing data collections for three organ sites, intellectual property/tech transfer agreements, and material transfer agreements that have been approved by each of the member institutions. This was the foundational work that has led to the development of a centralized data warehouse that has met each of the institutions' IRB/HIPAA standards. Currently, this "virtual biorepository" has over 58,000 annotated samples from 11,467 cancer patients available for research purposes. The clinical annotation of tissue samples is done either manually over the internet or in semi-automated batch modes through mapping of local data elements with PCABC common data elements. The database currently holds information on 7188 cases (associated with 9278 specimens and 46,666 annotated blocks and blood samples) of prostate cancer, 2736 cases (associated with 3796 specimens and 9336 annotated blocks and blood samples) of breast cancer and 1543 cases (including 1334 specimens and 2671 annotated blocks and blood samples) of melanoma. These numbers continue to grow, and plans to integrate new tumor sites are in progress. Furthermore, the group has also developed a central web-based tool that allows investigators to share their translational (genomics/proteomics) experiment data on research evaluating potential biomarkers via a central location on the Consortium's web site. The technological achievements and the statewide informatics infrastructure that have been established by the Consortium will enable robust and efficient studies of biomarkers and their relevance to the clinical course of cancer.
Studies resulting from the creation of the Consortium may allow for better classification of cancer types, more accurate assessment of disease prognosis, a better ability to identify the most appropriate individuals for clinical trial participation, and better surrogate markers of disease progression and/or response to therapy.

  8. Fuzzy Clustering Applied to ROI Detection in Helical Thoracic CT Scans with a New Proposal and Variants

    PubMed Central

    Castro, Alfonso; Boveda, Carmen; Arcay, Bernardino; Sanjurjo, Pedro

    2016-01-01

    The detection of pulmonary nodules is one of the most studied problems in the field of medical image analysis due to the great difficulty in the early detection of such nodules and their social impact. The traditional approach involves the development of a multistage CAD system capable of informing the radiologist of the presence or absence of nodules. One stage in such systems is the detection of ROI (regions of interest) that may be nodules in order to reduce the space of the problem. This paper evaluates fuzzy clustering algorithms that employ different classification strategies to achieve this goal. After characterising these algorithms, the authors propose a new algorithm and different variations to improve the results obtained initially. Finally, it is shown that the most recent developments in fuzzy clustering are able to detect regions that may be nodules in CT studies. The algorithms were evaluated using helical thoracic CT scans obtained from the database of the LIDC (Lung Image Database Consortium). PMID:27517049
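    The ROI stage described above rests on fuzzy clustering, in which each data point receives a graded membership in every cluster rather than a hard label. As a rough illustration only (not the paper's algorithm or its proposed variants), a minimal fuzzy c-means loop looks like the sketch below; the synthetic 1-D data, fuzzifier m, and cluster count are invented for the example:

```python
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means. X: (n, d) data; c: clusters; m: fuzzifier (>1)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((c, n))
    U /= U.sum(axis=0)                      # memberships sum to 1 per point
    for _ in range(n_iter):
        Um = U ** m
        # Cluster centers: membership-weighted means of the data.
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        # Distances of every point to every center, shape (c, n).
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2)
        d = np.fmax(d, 1e-12)               # avoid division by zero
        # Standard membership update: u_ik ∝ d_ik^(-2/(m-1)).
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=0)
    return centers, U

# Two well-separated 1-D blobs; the recovered centers should sit near 0 and 10.
X = np.concatenate([np.random.default_rng(1).normal(0, 0.5, (50, 1)),
                    np.random.default_rng(2).normal(10, 0.5, (50, 1))])
centers, U = fuzzy_cmeans(X, c=2)
```

    In a CAD pipeline, the feature vectors would be voxel intensities (and possibly texture features) from the CT slice, and the membership maps would then be thresholded to propose candidate ROIs.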

  9. The Lung Image Database Consortium (LIDC): ensuring the integrity of expert-defined "truth".

    PubMed

    Armato, Samuel G; Roberts, Rachael Y; McNitt-Gray, Michael F; Meyer, Charles R; Reeves, Anthony P; McLennan, Geoffrey; Engelmann, Roger M; Bland, Peyton H; Aberle, Denise R; Kazerooni, Ella A; MacMahon, Heber; van Beek, Edwin J R; Yankelevitz, David; Croft, Barbara Y; Clarke, Laurence P

    2007-12-01

    Computer-aided diagnostic (CAD) systems fundamentally require the opinions of expert human observers to establish "truth" for algorithm development, training, and testing. The integrity of this "truth," however, must be established before investigators commit to this "gold standard" as the basis for their research. The purpose of this study was to develop a quality assurance (QA) model as an integral component of the "truth" collection process concerning the location and spatial extent of lung nodules observed on computed tomography (CT) scans to be included in the Lung Image Database Consortium (LIDC) public database. One hundred CT scans were interpreted by four radiologists through a two-phase process. For the first of these reads (the "blinded read phase"), radiologists independently identified and annotated lesions, assigning each to one of three categories: "nodule >or=3 mm," "nodule <3 mm," or "non-nodule >or=3 mm." For the second read (the "unblinded read phase"), the same radiologists independently evaluated the same CT scans, but with all of the annotations from the previously performed blinded reads presented; each radiologist could add to, edit, or delete their own marks; change the lesion category of their own marks; or leave their marks unchanged. The post-unblinded read set of marks was grouped into discrete nodules and subjected to the QA process, which consisted of identification of potential errors introduced during the complete image annotation process and correction of those errors. Seven categories of potential error were defined; any nodule with a mark that satisfied the criterion for one of these categories was referred to the radiologist who assigned that mark for either correction or confirmation that the mark was intentional. A total of 105 QA issues were identified across 45 (45.0%) of the 100 CT scans. Radiologist review resulted in modifications to 101 (96.2%) of these potential errors. 
Twenty-one lesions erroneously marked as lung nodules after the unblinded reads had this designation removed through the QA process. The establishment of "truth" must incorporate a QA process to guarantee the integrity of the datasets that will provide the basis for the development, training, and testing of CAD systems.

  10. The Magnetics Information Consortium (MagIC)

    NASA Astrophysics Data System (ADS)

    Johnson, C.; Constable, C.; Tauxe, L.; Koppers, A.; Banerjee, S.; Jackson, M.; Solheid, P.

    2003-12-01

    The Magnetics Information Consortium (MagIC) is a multi-user facility to establish and maintain a state-of-the-art relational database and digital archive for rock and paleomagnetic data. The goal of MagIC is to make such data generally available and to provide an information technology infrastructure for these and other research-oriented databases run by the international community. As its name implies, MagIC will not be restricted to paleomagnetic or rock magnetic data only, although MagIC will focus on these kinds of information during its setup phase. MagIC will be hosted under EarthRef.org at http://earthref.org/MAGIC/ where two "integrated" web portals will be developed, one for paleomagnetism (currently functional as a prototype that can be explored via the http://earthref.org/databases/PMAG/ link) and one for rock magnetism. The MagIC database will store all measurements and their derived properties for studies of paleomagnetic directions (inclination, declination) and their intensities, and for rock magnetic experiments (hysteresis, remanence, susceptibility, anisotropy). Ultimately, this database will allow researchers to study "on the internet" and to download important data sets that display paleo-secular variations in the intensity of the Earth's magnetic field over geological time, or that display magnetic data in typical Zijderveld, hysteresis/FORC and various magnetization/remanence diagrams. The MagIC database is completely integrated in the EarthRef.org relational database structure and thus benefits significantly from already-existing common database components, such as the EarthRef Reference Database (ERR) and Address Book (ERAB). The ERR allows researchers to find complete sets of literature resources as used in GERM (Geochemical Earth Reference Model), REM (Reference Earth Model) and MagIC. 
The ERAB contains addresses for all contributors to the EarthRef.org databases, and also for those who participated in data collection, archiving and analysis in the magnetic studies. Integration with these existing components will guarantee direct traceability to the original sources of the MagIC data and metadata. The MagIC database design focuses around the general workflow that results in the determination of typical paleomagnetic and rock magnetic analyses. This ensures that individual data points can be traced between the actual measurements and their associated specimen, sample, site, rock formation and locality. This permits a distinction between original and derived data, where the actual measurements are performed at the specimen level, and data at the sample level and higher are then derived products in the database. These relations will also allow recalculation of derived properties, such as site means, when new data becomes available for a specific locality. Data contribution to the MagIC database is critical in achieving a useful research tool. We have developed a standard data and metadata template that can be used to provide all data at the same time as publication. Software tools are provided to facilitate easy population of these templates. The tools allow for the import/export of data files in a delimited text format, and they provide some advanced functionality to validate data and to check internal coherence of the data in the template. During and after publication these standardized MagIC templates will be stored in the ERR database of EarthRef.org from where they can be downloaded at all times. Finally, the contents of these template files will be automatically parsed into the online relational database.
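    The measurement-to-locality traceability described above can be pictured as a small relational hierarchy. The sketch below uses an in-memory SQLite store with illustrative table and column names (not MagIC's actual schema): original data live at the measurement/specimen level, and a site mean is a derived product recomputed by query, which is the recalculation behavior the abstract describes. A real paleomagnetic pipeline would use Fisher statistics for directional means rather than the plain arithmetic averages used here for brevity.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE site     (site_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE sample   (sample_id INTEGER PRIMARY KEY, site_id INTEGER REFERENCES site);
CREATE TABLE specimen (specimen_id INTEGER PRIMARY KEY, sample_id INTEGER REFERENCES sample);
-- Original data live at the specimen level; everything above it is derived.
CREATE TABLE measurement (specimen_id INTEGER REFERENCES specimen,
                          declination REAL, inclination REAL);
""")
con.execute("INSERT INTO site VALUES (1, 'Locality A')")
con.executemany("INSERT INTO sample VALUES (?, 1)", [(1,), (2,)])
con.executemany("INSERT INTO specimen VALUES (?, ?)", [(1, 1), (2, 1), (3, 2)])
con.executemany("INSERT INTO measurement VALUES (?, ?, ?)",
                [(1, 10.0, 55.0), (2, 12.0, 57.0), (3, 14.0, 59.0)])

# The "site mean" is derived, never stored: rerunning this query after new
# measurements arrive is exactly the recalculation of derived properties.
site_mean = con.execute("""
    SELECT AVG(m.declination), AVG(m.inclination)
    FROM measurement m
    JOIN specimen sp USING (specimen_id)
    JOIN sample  sa USING (sample_id)
    WHERE sa.site_id = 1
""").fetchone()
```

    Keeping derived values out of the base tables is what guarantees that every site-level number can be traced back to, and regenerated from, the specimen-level measurements.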

  11. Comprehensive analysis of the N-glycan biosynthetic pathway using bioinformatics to generate UniCorn: A theoretical N-glycan structure database.

    PubMed

    Akune, Yukie; Lin, Chi-Hung; Abrahams, Jodie L; Zhang, Jingyu; Packer, Nicolle H; Aoki-Kinoshita, Kiyoko F; Campbell, Matthew P

    2016-08-05

    Glycan structures attached to proteins are comprised of diverse monosaccharide sequences and linkages that are produced from precursor nucleotide-sugars by a series of glycosyltransferases. Databases of these structures are an essential resource for the interpretation of analytical data and the development of bioinformatics tools. However, with no template to predict what structures are possible, the human glycan structure databases are incomplete and rely heavily on the curation of published, experimentally determined, glycan structure data. In this work, a library of 45 human glycosyltransferases was used to generate a theoretical database of N-glycan structures comprising 15 or fewer monosaccharide residues. Enzyme specificities were sourced from major online databases including Kyoto Encyclopedia of Genes and Genomes (KEGG) Glycan, Consortium for Functional Glycomics (CFG), Carbohydrate-Active enZymes (CAZy), GlycoGene DataBase (GGDB) and BRENDA. Based on the known activities, more than 1.1 million theoretical structures and 4.7 million synthetic reactions were generated and stored in our database called UniCorn. Furthermore, we analyzed the differences between the predicted glycan structures in UniCorn and those contained in UniCarbKB (www.unicarbkb.org), a database which stores experimentally described glycan structures reported in the literature, and demonstrate that UniCorn can be used to aid in the assignment of ambiguous structures whilst also serving as a discovery database. Copyright © 2016 Elsevier Ltd. All rights reserved.
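    The generation step described above amounts to a closure computation: starting from a precursor, apply every enzyme rule whose substrate requirement is met, and stop at the residue cap. A toy breadth-first version is sketched below; the residue names and rules are invented stand-ins for the curated glycosyltransferase specificities (real glycans are branched trees with linkage information, not the linear tuples used here):

```python
from collections import deque

def enumerate_structures(precursor, rules, max_residues):
    """Enumerate all structures reachable from `precursor`.

    rules: {terminal_residue: [residues an enzyme may append to it]}.
    Structures are modeled as tuples of residue names.
    """
    seen = {precursor}
    queue = deque([precursor])
    while queue:
        s = queue.popleft()
        if len(s) >= max_residues:          # residue cap, like UniCorn's 15
            continue
        for added in rules.get(s[-1], []):  # enzymes acting on the terminus
            t = s + (added,)
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return seen

# Invented rules: GlcNAc can be galactosylated; Gal can be sialylated
# (a chain terminator here) or extended with another GlcNAc.
rules = {"GlcNAc": ["Gal"], "Gal": ["Sia", "GlcNAc"], "Sia": []}
structures = enumerate_structures(("GlcNAc",), rules, max_residues=4)
```

    With curated enzyme specificities and a branched-structure representation, the same closure over reactions is what yields the millions of theoretical structures and synthetic reactions reported for UniCorn.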

  12. Improved Infrastructure for CDMS and JPL Molecular Spectroscopy Catalogues

    NASA Astrophysics Data System (ADS)

    Endres, Christian; Schlemmer, Stephan; Drouin, Brian; Pearson, John; Müller, Holger S. P.; Schilke, P.; Stutzki, Jürgen

    2014-06-01

    Over the past years a new infrastructure for atomic and molecular databases has been developed within the framework of the Virtual Atomic and Molecular Data Centre (VAMDC). Standards for the representation of atomic and molecular data, as well as a set of protocols, have been established, which now allow data to be retrieved from various databases through one portal and combined easily. Apart from spectroscopic databases such as the Cologne Database for Molecular Spectroscopy (CDMS), the Jet Propulsion Laboratory microwave, millimeter and submillimeter spectral line catalogue (JPL) and the HITRAN database, various databases on molecular collisions (BASECOL, KIDA) and reactions (UMIST) are connected. Together with other groups within the VAMDC consortium we are working on common user tools to simplify access for new users and to tailor data requests for users with specific needs. This comprises in particular tools to support the analysis of complex observational data obtained with the ALMA telescope. In this presentation, requests to CDMS and JPL will be used to explain the basic concepts and the tools which are provided by VAMDC. In addition, a new portal to CDMS will be presented which has a number of new features, in particular meaningful quantum numbers, references linked to data points, access to state energies and improved documentation. Fit files are accessible for download and queries to other databases are possible.
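    A request of the kind described here is an HTTP call carrying a selection in VAMDC's VSS2 query language against a node's sync endpoint. The sketch below only constructs the request URL; the base URL is a placeholder, not a real node endpoint (actual node addresses come from the VAMDC registry), and the exact restrictable names and units should be checked against the VAMDC standards documents:

```python
from urllib.parse import urlencode

def vamdc_tap_url(base, query):
    """Build a VAMDC-TAP-style sync request URL for a VSS2 query."""
    return base + "sync?" + urlencode(
        {"LANG": "VSS2", "FORMAT": "XSAMS", "QUERY": query})

# Select CO radiative transitions in a frequency window (VSS2-style
# restrictables; the numeric range is illustrative).
query = ("SELECT ALL WHERE MoleculeStoichiometricFormula = 'CO' "
         "AND RadTransFrequency > 100000.0 AND RadTransFrequency < 120000.0")
url = vamdc_tap_url("https://cdms.example.org/tap/", query)
```

    Because every connected node answers the same query language with the same XSAMS output format, a single query string like this can be fanned out across CDMS, JPL, HITRAN and the collisional databases and the results merged, which is the "one portal" behavior the abstract describes.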

  13. Increasing Sales by Developing Production Consortiums.

    ERIC Educational Resources Information Center

    Smith, Christopher A.; Russo, Robert

    Intended to help rehabilitation facility administrators increase organizational income from manufacturing and/or contracted service sources, this document provides a decision-making model for the development of a production consortium. The document consists of five chapters and two appendices. Chapter 1 defines the consortium concept, explains…

  14. Distributed Access View Integrated Database (DAVID) system

    NASA Technical Reports Server (NTRS)

    Jacobs, Barry E.

    1991-01-01

    The Distributed Access View Integrated Database (DAVID) System, which was adopted by the Astrophysics Division for their Astrophysics Data System, is a solution to the system heterogeneity problem. The heterogeneous components of the Astrophysics problem are outlined. The Library and Library Consortium levels of the DAVID approach are described. The 'books' and 'kits' level is discussed. The Universal Object Typer Management System level is described. The relation of the DAVID project with the Small Business Innovative Research (SBIR) program is explained.

  15. The Child Development Training Consortium. A Status Report on the San Juan College AACJC-Kellogg Beacon College Project.

    ERIC Educational Resources Information Center

    Beers, C. David; Ott, Richard W.

    The Child Development Training Consortium, a Beacon College Project directed by San Juan College (SJC) is a collaborative effort of colleges and universities in New Mexico and Arizona. The consortium's major objective is to create child development training materials for community college faculty who teach "at-risk" Native American and…

  16. Enhancing Transfer Effectiveness: A Model for the 1990s.

    ERIC Educational Resources Information Center

    Berman, Paul; And Others

    In an effort to identify effective transfer practices appropriate to different community college circumstances, and to establish a quantitative database that would enable valid comparisons of transfer between their 28 member institutions, the National Effective Transfer Consortium (NETC) sponsored a survey of more than 30,000 students attending…

  17. The ICPSR and Social Science Research

    ERIC Educational Resources Information Center

    Johnson, Wendell G.

    2008-01-01

    The Inter-university Consortium for Political and Social Research (ICPSR), a unit within the Institute for Social Research at the University of Michigan, is the world's largest social science data archive. The data sets in the ICPSR database give the social sciences librarian/subject specialist an opportunity to provide value-added bibliographic…

  18. The CNES Gaia Data Processing Center: A Challenge and its Solutions

    NASA Astrophysics Data System (ADS)

    Chaoul, Laurence; Valette, Veronique

    2011-08-01

    After a brief reminder of the ESA Gaia project, this paper presents the data processing consortium (DPAC) and then the CNES data processing centre (DPCC). We focus on the challenges in terms of organisational aspects, processing capabilities, and database volumes, and on how we deal with these topics.

  19. Enriching public descriptions of marine phages using the Genomic Standards Consortium MIGS standard

    PubMed Central

    Duhaime, Melissa Beth; Kottmann, Renzo; Field, Dawn; Glöckner, Frank Oliver

    2011-01-01

    In any sequencing project, the possible depth of comparative analysis is determined largely by the amount and quality of the accompanying contextual data. The structure, content, and storage of this contextual data should be standardized to ensure consistent coverage of all sequenced entities and facilitate comparisons. The Genomic Standards Consortium (GSC) has developed the “Minimum Information about Genome/Metagenome Sequences (MIGS/MIMS)” checklist for the description of genomes, and here we annotate all 30 publicly available marine bacteriophage sequences to the MIGS standard. These annotations build on existing International Nucleotide Sequence Database Collaboration (INSDC) records, and confirm, as expected, that current submissions lack most MIGS fields. MIGS fields were manually curated from the literature and placed in XML format as specified by the Genomic Contextual Data Markup Language (GCDML). These “machine-readable” reports were then analyzed to highlight patterns describing this collection of genomes. Completed reports are provided in GCDML. This work represents one step towards the annotation of our complete collection of genome sequences and shows the utility of capturing richer metadata along with raw sequences. PMID:21677864

  20. Mining Connected Data

    NASA Astrophysics Data System (ADS)

    Michel, L.; Motch, C.; Pineau, F. X.

    2009-05-01

    As members of the Survey Science Consortium of the XMM-Newton mission, the Strasbourg Observatory is in charge of the real-time cross-correlations of X-ray data with archival catalogs. We are also committed to providing specific tools to handle these cross-correlations and propose identifications at other wavelengths. In order to do so, we developed a database generator (Saada) managing persistent links and supporting heterogeneous input datasets. This system makes it easy to build an archive containing numerous and complex links between individual items [1]. It also offers a powerful query engine able to select sources on the basis of the properties (existence, distance, colours) of the X-ray-archival associations. We present such a database in operation for the 2XMMi catalogue. This system is flexible enough to provide both a public data interface and a servicing interface which could be used in the framework of the Simbol-X ground segment.

  1. Distributed databases for materials study of thermo-kinetic properties

    NASA Astrophysics Data System (ADS)

    Toher, Cormac

    2015-03-01

    High-throughput computational materials science provides researchers with the opportunity to rapidly generate large databases of materials properties. To rapidly add thermal properties to the AFLOWLIB consortium and Materials Project repositories, we have implemented an automated quasi-harmonic Debye model, the Automatic GIBBS Library (AGL). This enables us to screen thousands of materials for thermal conductivity, bulk modulus, thermal expansion and related properties. The search and sort functions of the online database can then be used to identify suitable materials for more in-depth study using more precise computational or experimental techniques. AFLOW-AGL source code is public domain and will soon be released under the GNU-GPL license.
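    The quasi-harmonic Debye model at the heart of GIBBS-type codes estimates a Debye temperature from the cell volume, atom count, cell mass, and bulk modulus: theta_D = (hbar/kB) * (6*pi^2*sqrt(V)*n)^(1/3) * f(nu) * sqrt(B_S/M), with f(nu) a function of the Poisson ratio. The sketch below implements that formula; the silicon input numbers are rough illustrative values, not AGL output, and this is only the temperature estimate, not the full AGL screening workflow:

```python
import math

HBAR = 1.054571817e-34   # J*s
KB = 1.380649e-23        # J/K

def poisson_factor(nu):
    """f(nu) from the quasi-harmonic Debye model (Blanco et al.'s GIBBS)."""
    a = (2.0 / 3.0) * (1.0 + nu) / (1.0 - 2.0 * nu)
    b = (1.0 / 3.0) * (1.0 + nu) / (1.0 - nu)
    return (3.0 / (2.0 * a ** 1.5 + b ** 1.5)) ** (1.0 / 3.0)

def debye_temperature(cell_volume_m3, n_atoms, cell_mass_kg,
                      bulk_modulus_pa, nu=0.25):
    """theta_D = (hbar/kB) * (6 pi^2 sqrt(V) n)^(1/3) * f(nu) * sqrt(B/M)."""
    prefactor = (6.0 * math.pi ** 2 * math.sqrt(cell_volume_m3)
                 * n_atoms) ** (1.0 / 3.0)
    return (HBAR / KB) * prefactor * poisson_factor(nu) \
        * math.sqrt(bulk_modulus_pa / cell_mass_kg)

# Rough numbers for silicon: 8-atom cubic cell of ~1.6e-28 m^3,
# B ~ 98 GPa. The result should land near Si's Debye temperature (~645 K).
theta = debye_temperature(1.6e-28, 8, 8 * 28.0855 * 1.66054e-27, 98e9)
```

    Evaluating this at the equilibrium volume of each strained structure, and minimizing the resulting free energy over volume at each temperature, is what lets an automated library derive thermal expansion and related properties from static DFT energies alone.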

  2. Growth behind the Mirror: The Family Therapy Consortium's Group Process.

    ERIC Educational Resources Information Center

    Wendorf, Donald J.; And Others

    1985-01-01

    Charts the development of the Family Therapy Consortium, a group that provides supervision and continuing education in family therapy and explores the peer supervision process at work in the consortium. The focus is on individual and group development, which are seen as complementary aspects of the same growth process. (Author/NRB)

  3. The Historically Black Colleges and Universities/Minority Institutions Environmental Technology Consortium annual report 1994--1995

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1998-07-01

    The HBCU/MI ET Consortium was established in January 1990, through a Memorandum of Understanding (MOU) among its member institutions. This group of research-oriented Historically Black Colleges and Universities and Minority Institutions (HBCU/MIs) agreed to work together to initiate or revise education programs, develop research partnerships with public and private sector organizations, and promote technology development to address the nation's critical environmental contamination problems. The Consortium's Research, Education and Technology Transfer (RETT) Plan became the working agenda. The Consortium is a resource for collaboration among the member institutions and with federal and state agencies, national and federal laboratories, industries (including small businesses), majority universities, and two- and four-year technical colleges. As a group of 17 institutions geographically located in the southern US, the Consortium is well positioned to reach a diverse group of women and minority populations of African Americans, Hispanics and American Indians. This Report provides a status update on activities and achievements in environmental curriculum development, outreach at the K--12 level, undergraduate and graduate education, research and development, and technology transfer.

  4. Establishment of Kawasaki disease database based on metadata standard.

    PubMed

    Park, Yu Rang; Kim, Jae-Jung; Yoon, Young Jo; Yoon, Young-Kwang; Koo, Ha Yeong; Hong, Young Mi; Jang, Gi Young; Shin, Soo-Yong; Lee, Jong-Keuk

    2016-07-01

    Kawasaki disease (KD) is a rare disease that occurs predominantly in infants and young children. To identify KD susceptibility genes and to develop a diagnostic test, a specific therapy, or a prevention method, collecting KD patients' clinical and genomic data is one of the major challenges. For this purpose, the Kawasaki Disease Database (KDD) was developed based on the efforts of the Korean Kawasaki Disease Genetics Consortium (KKDGC). KDD is a collection of 1292 clinical data and genomic samples of 1283 patients from 13 KKDGC-participating hospitals. Each sample contains the relevant clinical data, genomic DNA and plasma samples isolated from patients' blood, omics data and KD-associated genotype data. Clinical data were collected and saved using the common data elements based on the ISO/IEC 11179 metadata standard. Genome-wide association study data for a total of 482 samples and whole exome sequencing data for 12 samples were also collected. In addition, KDD includes the rare cases of KD (16 cases with family history, 46 cases with recurrence, 119 cases with intravenous immunoglobulin non-responsiveness, and 52 cases with coronary artery aneurysm). As the first public database for KD, KDD can significantly facilitate KD studies. All data in KDD are searchable and downloadable. KDD was implemented in PHP, MySQL and Apache, with all major browsers supported. Database URL: http://www.kawasakidisease.kr. © The Author(s) 2016. Published by Oxford University Press.

  5. 2016 update of the PRIDE database and its related tools

    PubMed Central

    Vizcaíno, Juan Antonio; Csordas, Attila; del-Toro, Noemi; Dianes, José A.; Griss, Johannes; Lavidas, Ilias; Mayer, Gerhard; Perez-Riverol, Yasset; Reisinger, Florian; Ternent, Tobias; Xu, Qing-Wei; Wang, Rui; Hermjakob, Henning

    2016-01-01

    The PRoteomics IDEntifications (PRIDE) database is one of the world-leading data repositories of mass spectrometry (MS)-based proteomics data. Since the beginning of 2014, PRIDE Archive (http://www.ebi.ac.uk/pride/archive/) has been the new PRIDE archival system, replacing the original PRIDE database. Here we summarize the developments in PRIDE resources and related tools since the previous update manuscript in the Database Issue in 2013. PRIDE Archive constitutes a complete redevelopment of the original PRIDE, comprising a new storage backend, data submission system and web interface, among other components. PRIDE Archive supports the most widely used PSI (Proteomics Standards Initiative) data standard formats (mzML and mzIdentML) and implements the data requirements and guidelines of the ProteomeXchange Consortium. The wide adoption of ProteomeXchange within the community has triggered an unprecedented increase in the number of submitted data sets (around 150 data sets per month). We outline some statistics on the current PRIDE Archive data contents. We also report on the status of the PRIDE-related stand-alone tools: PRIDE Inspector, PRIDE Converter 2 and the ProteomeXchange submission tool. Finally, we will give a brief update on the resources under development ‘PRIDE Cluster’ and ‘PRIDE Proteomes’, which provide a complementary view and quality-scored information of the peptide and protein identification data available in PRIDE Archive. PMID:26527722

  6. Biodegradation of phenanthrene in bioaugmented microcosm by consortium ASP developed from coastal sediment of Alang-Sosiya ship breaking yard.

    PubMed

    Patel, Vilas; Patel, Janki; Madamwar, Datta

    2013-09-15

    A phenanthrene-degrading bacterial consortium (ASP) was developed using sediment from the Alang-Sosiya shipbreaking yard in Gujarat, India. 16S rRNA gene-based molecular analyses revealed that the bacterial consortium consisted of six bacterial strains: Bacillus sp. ASP1, Pseudomonas sp. ASP2, Stenotrophomonas maltophilia strain ASP3, Staphylococcus sp. ASP4, Geobacillus sp. ASP5 and Alcaligenes sp. ASP6. The consortium was able to degrade 300 ppm of phenanthrene and 1000 ppm of naphthalene within 120 h and 48 h, respectively. Tween 80 showed a positive effect on phenanthrene degradation. The consortium degraded phenanthrene at a maximum rate of 46 mg/h/l and could degrade phenanthrene in the presence of other petroleum hydrocarbons. A microcosm study was conducted to test the consortium's bioremediation potential. Phenanthrene degradation increased from 61% to 94% in sediment bioaugmented with the consortium. Simultaneously, bacterial counts and dehydrogenase activities also increased in the bioaugmented sediment. These results suggest that microbial consortium bioaugmentation may be a promising technology for bioremediation. Copyright © 2013 Elsevier Ltd. All rights reserved.

  7. Clinical utilization of genomics data produced by the international Pseudomonas aeruginosa consortium

    PubMed Central

    Freschi, Luca; Jeukens, Julie; Kukavica-Ibrulj, Irena; Boyle, Brian; Dupont, Marie-Josée; Laroche, Jérôme; Larose, Stéphane; Maaroufi, Halim; Fothergill, Joanne L.; Moore, Matthew; Winsor, Geoffrey L.; Aaron, Shawn D.; Barbeau, Jean; Bell, Scott C.; Burns, Jane L.; Camara, Miguel; Cantin, André; Charette, Steve J.; Dewar, Ken; Déziel, Éric; Grimwood, Keith; Hancock, Robert E. W.; Harrison, Joe J.; Heeb, Stephan; Jelsbak, Lars; Jia, Baofeng; Kenna, Dervla T.; Kidd, Timothy J.; Klockgether, Jens; Lam, Joseph S.; Lamont, Iain L.; Lewenza, Shawn; Loman, Nick; Malouin, François; Manos, Jim; McArthur, Andrew G.; McKeown, Josie; Milot, Julie; Naghra, Hardeep; Nguyen, Dao; Pereira, Sheldon K.; Perron, Gabriel G.; Pirnay, Jean-Paul; Rainey, Paul B.; Rousseau, Simon; Santos, Pedro M.; Stephenson, Anne; Taylor, Véronique; Turton, Jane F.; Waglechner, Nicholas; Williams, Paul; Thrane, Sandra W.; Wright, Gerard D.; Brinkman, Fiona S. L.; Tucker, Nicholas P.; Tümmler, Burkhard; Winstanley, Craig; Levesque, Roger C.

    2015-01-01

    The International Pseudomonas aeruginosa Consortium is sequencing over 1000 genomes and building an analysis pipeline for the study of Pseudomonas genome evolution, antibiotic resistance and virulence genes. Metadata, including genomic and phenotypic data for each isolate of the collection, are available through the International Pseudomonas Consortium Database (http://ipcd.ibis.ulaval.ca/). Here, we present our strategy and the results that emerged from the analysis of the first 389 genomes. With as yet unmatched resolution, our results confirm that P. aeruginosa strains can be divided into three major groups that are further divided into subgroups, some not previously reported in the literature. We also provide the first snapshot of P. aeruginosa strain diversity with respect to antibiotic resistance. Our approach will allow us to draw potential links between environmental strains and those implicated in human and animal infections, understand how patients become infected and how the infection evolves over time as well as identify prognostic markers for better evidence-based decisions on patient care. PMID:26483767

  8. Object classification and outliers analysis in the forthcoming Gaia mission

    NASA Astrophysics Data System (ADS)

    Ordóñez-Blanco, D.; Arcay, B.; Dafonte, C.; Manteiga, M.; Ulla, A.

    2010-12-01

    Astrophysics is evolving towards the rational optimization of costly observational material through the intelligent exploitation of large astronomical databases from both ground-based telescopes and space mission archives. However, there has been relatively little progress in developing the highly scalable data exploitation and analysis tools needed to generate scientific returns from these large and expensively obtained datasets. Among the upcoming projects of astronomical instrumentation, Gaia is the next cornerstone ESA mission. The Gaia survey foresees the creation of a data archive and its future exploitation with automated or semi-automated analysis tools. This work reviews some of the work being carried out by the Gaia Data Processing and Analysis Consortium on object classification and the analysis of outliers in the forthcoming mission.

  9. Legal Medicine Information System using CDISC ODM.

    PubMed

    Kiuchi, Takahiro; Yoshida, Ken-ichi; Kotani, Hirokazu; Tamaki, Keiji; Nagai, Hisashi; Harada, Kazuki; Ishikawa, Hirono

    2013-11-01

    We have developed a new database system for forensic autopsies, called the Legal Medicine Information System, using the Clinical Data Interchange Standards Consortium (CDISC) Operational Data Model (ODM). This system comprises two subsystems, namely the Institutional Database System (IDS) located in each institute and containing personal information, and the Central Anonymous Database System (CADS) located in the University Hospital Medical Information Network Center containing only anonymous information. CDISC ODM is used as the data transfer protocol between the two subsystems. Using the IDS, forensic pathologists and other staff can register and search for institutional autopsy information, print death certificates, and extract data for statistical analysis. They can also submit anonymous autopsy information to the CADS semi-automatically. This reduces the burden of double data entry, the time-lag of central data collection, and anxiety regarding legal and ethical issues. Using the CADS, various studies on the causes of death can be conducted quickly and easily, and the results can be used to prevent similar accidents, diseases, and abuse. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
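
    The IDS-to-CADS hand-off described above can be pictured in a few lines of Python. This is a simplified, hypothetical sketch: real CDISC ODM documents carry an XML namespace and a StudyEventData/FormData/ItemGroupData hierarchy that is omitted here, and the OIDs are invented.

```python
import xml.etree.ElementTree as ET

def build_anonymous_odm(study_oid, subject_key, items):
    """Assemble a minimal ODM-style ClinicalData payload.

    Personal identifiers are assumed to be stripped upstream in the
    IDS, so only a pseudonymous SubjectKey travels to the CADS."""
    odm = ET.Element("ODM", ODMVersion="1.3")
    clinical = ET.SubElement(odm, "ClinicalData", StudyOID=study_oid)
    subject = ET.SubElement(clinical, "SubjectData", SubjectKey=subject_key)
    for oid, value in items.items():
        ET.SubElement(subject, "ItemData", ItemOID=oid, Value=str(value))
    return ET.tostring(odm, encoding="unicode")

# Hypothetical study and item OIDs, for illustration only.
payload = build_anonymous_odm("ST.AUTOPSY", "ANON-0001",
                              {"IT.AGE_RANGE": "40-49"})
```

    Because both subsystems speak the same ODM vocabulary, the central side can validate and load such payloads without any site-specific parsing code.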

  10. MetaboLights: towards a new COSMOS of metabolomics data management.

    PubMed

    Steinbeck, Christoph; Conesa, Pablo; Haug, Kenneth; Mahendraker, Tejasvi; Williams, Mark; Maguire, Eamonn; Rocca-Serra, Philippe; Sansone, Susanna-Assunta; Salek, Reza M; Griffin, Julian L

    2012-10-01

    Exciting funding initiatives are emerging in Europe and the US for metabolomics data production, storage, dissemination and analysis. This is based on a rich ecosystem of resources around the world, built over the past ten years, including but not limited to MassBank in Japan and the Human Metabolome Database in Canada. Now, the European Bioinformatics Institute has launched MetaboLights, a database for metabolomics experiments and the associated metadata (http://www.ebi.ac.uk/metabolights). It is the first comprehensive, cross-species, cross-platform metabolomics database maintained by one of the major open-access data providers in molecular biology. In October, the European COSMOS consortium will start its work on metabolomics data standardization, publication and dissemination workflows. The NIH in the US is establishing 6-8 metabolomics service cores as well as a national metabolomics repository. This communication reports on MetaboLights as a new resource for metabolomics research, summarises the related developments and outlines how they may consolidate knowledge management in this third major omics field alongside proteomics and genomics.

  11. The Magnetics Information Consortium (MagIC) Online Database: Uploading, Searching and Visualizing Paleomagnetic and Rock Magnetic Data

    NASA Astrophysics Data System (ADS)

    Minnett, R.; Koppers, A.; Tauxe, L.; Constable, C.; Pisarevsky, S. A.; Jackson, M.; Solheid, P.; Banerjee, S.; Johnson, C.

    2006-12-01

    The Magnetics Information Consortium (MagIC) is commissioned to implement and maintain an online portal to a relational database populated by both rock magnetic and paleomagnetic data. The goal of MagIC is to archive all measurements and the derived properties for studies of paleomagnetic directions (inclination, declination) and intensities, and for rock magnetic experiments (hysteresis, remanence, susceptibility, anisotropy). MagIC is hosted under EarthRef.org at http://earthref.org/MAGIC/ and has two search nodes, one for paleomagnetism and one for rock magnetism. Both nodes provide query building based on location, reference, methods applied, material type and geological age, as well as a visual map interface to browse and select locations. The query result set is displayed in a digestible tabular format allowing the user to descend through hierarchical levels such as from locations to sites, samples, specimens, and measurements. At each stage, the result set can be saved and, if supported by the data, can be visualized by plotting global location maps, equal area plots, or typical Zijderveld, hysteresis, and various magnetization and remanence diagrams. User contributions to the MagIC database are critical to achieving a useful research tool. We have developed a standard data and metadata template (Version 2.1) that can be used to format and upload all data at the time of publication in Earth Science journals. Software tools are provided to facilitate population of these templates within Microsoft Excel. These tools allow for the import/export of text files and provide advanced functionality to manage and edit the data, and to perform various internal checks to maintain data integrity and prepare for uploading. The MagIC Contribution Wizard at http://earthref.org/MAGIC/upload.htm executes the upload and takes only a few minutes to process several thousand data records.
The standardized MagIC template files are stored in the digital archives of EarthRef.org where they remain available for download by the public (in both text and Excel format). Finally, the contents of these template files are automatically parsed into the online relational database, making the data available for online searches in the paleomagnetic and rock magnetic search nodes. The MagIC database contains all data transferred from the IAGA paleomagnetic poles database (GPMDB), the lava flow paleosecular variation database (PSVRL), lake sediment database (SECVR) and the PINT database. Additionally, a substantial number of data compiled under the Time Averaged Field Investigations project is now included plus a significant fraction of the data collected at SIO and the IRM. Ongoing additions of legacy data include over 40 papers from studies on the Hawaiian Islands and Mexico, data compilations from archeomagnetic studies and updates to the lake sediment dataset.
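
    The internal integrity checks mentioned above can be imagined as simple range validations run before upload. A hypothetical sketch (the real MagIC template tools define their own rules; the field names `dec` and `inc` are invented shorthand for declination and inclination in degrees):

```python
def check_directions(records):
    """Flag rows whose direction values fall outside physically
    meaningful ranges: declination in [0, 360), inclination in
    [-90, 90]. Returns (row index, field name) pairs."""
    problems = []
    for i, row in enumerate(records):
        if not 0.0 <= row["dec"] < 360.0:
            problems.append((i, "dec"))
        if not -90.0 <= row["inc"] <= 90.0:
            problems.append((i, "inc"))
    return problems

rows = [{"dec": 355.0, "inc": 12.5},   # plausible measurement
        {"dec": 380.0, "inc": -95.0}]  # both fields out of range
issues = check_directions(rows)
```

    Catching such rows client-side, before the Contribution Wizard runs, keeps bad records out of the shared relational database.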

  12. Wisconsin Area Planning and Development. Consortium Project, Title I, Higher Education Act 1965.

    ERIC Educational Resources Information Center

    Wisconsin Univ., Madison. Univ. Extension.

    The Consortium for Area Planning and Development was established in 1967 to implement the basic purposes of Title I of the Higher Education Act of 1965. The Consortium's first seminar was held in May 1968 and was attended by 25 project leaders, local and state government officials, technical consultants, and representatives of various institutions…

  13. Acute pancreatitis patient registry to examine novel therapies in clinical experience (APPRENTICE): an international, multicenter consortium for the study of acute pancreatitis.

    PubMed

    Papachristou, Georgios I; Machicado, Jorge D; Stevens, Tyler; Goenka, Mahesh Kumar; Ferreira, Miguel; Gutierrez, Silvia C; Singh, Vikesh K; Kamal, Ayesha; Gonzalez-Gonzalez, Jose A; Pelaez-Luna, Mario; Gulla, Aiste; Zarnescu, Narcis O; Triantafyllou, Konstantinos; Barbu, Sorin T; Easler, Jeffrey; Ocampo, Carlos; Capurso, Gabriele; Archibugi, Livia; Cote, Gregory A; Lambiase, Louis; Kochhar, Rakesh; Chua, Tiffany; Tiwari, Subhash Ch; Nawaz, Haq; Park, Walter G; de-Madaria, Enrique; Lee, Peter J; Wu, Bechien U; Greer, Phil J; Dugum, Mohannad; Koutroumpakis, Efstratios; Akshintala, Venkata; Gougol, Amir

    2017-01-01

    We have established a multicenter international consortium to better understand the natural history of acute pancreatitis (AP) worldwide and to develop a platform for future randomized clinical trials. The AP patient registry to examine novel therapies in clinical experience (APPRENTICE) was formed in July 2014. Detailed web-based questionnaires were then developed to prospectively capture information on demographics, etiology, pancreatitis history, comorbidities, risk factors, severity biomarkers, severity indices, health-care utilization, management strategies, and outcomes of AP patients. Between November 2015 and September 2016, a total of 20 sites (8 in the United States, 5 in Europe, 3 in South America, 2 in Mexico and 2 in India) prospectively enrolled 509 AP patients. All data were entered into the REDCap (Research Electronic Data Capture) database by participating centers and systematically reviewed by the coordinating site (University of Pittsburgh). The approaches and methodology are described in detail, along with an interim report on the demographic results. APPRENTICE, an international collaboration of tertiary AP centers throughout the world, has demonstrated the feasibility of building a large, prospective, multicenter patient registry to study AP. Analysis of the collected data may provide a greater understanding of AP and APPRENTICE will serve as a future platform for randomized clinical trials.

  14. Genome sequence determination and metagenomic characterization of a Dehalococcoides mixed culture grown on cis-1,2-dichloroethene.

    PubMed

    Yohda, Masafumi; Yagi, Osami; Takechi, Ayane; Kitajima, Mizuki; Matsuda, Hisashi; Miyamura, Naoaki; Aizawa, Tomoko; Nakajima, Mutsuyasu; Sunairi, Michio; Daiba, Akito; Miyajima, Takashi; Teruya, Morimi; Teruya, Kuniko; Shiroma, Akino; Shimoji, Makiko; Tamotsu, Hinako; Juan, Ayaka; Nakano, Kazuma; Aoyama, Misako; Terabayashi, Yasunobu; Satou, Kazuhito; Hirano, Takashi

    2015-07-01

    A Dehalococcoides-containing bacterial consortium that performed dechlorination of 0.20 mM cis-1,2-dichloroethene to ethene in 14 days was obtained from the sediment mud of a lotus field. To obtain detailed information about the consortium, the metagenome was analyzed using the short-read next-generation sequencer SOLiD 3. Matching the obtained sequence tags with the reference genome sequences indicated that the Dehalococcoides sp. in the consortium was highly homologous to Dehalococcoides mccartyi CBDB1 and BAV1. Sequence comparison with the reference sequence constructed from 16S rRNA gene sequences in a public database showed the presence of Sedimentibacter, Sulfurospirillum, Clostridium, Desulfovibrio, Parabacteroides, Alistipes, Eubacterium, Peptostreptococcus and Proteocatella in addition to Dehalococcoides sp. After further enrichment, the members of the consortium were narrowed down to almost three species. Finally, the full-length circular genome sequence of the Dehalococcoides sp. in the consortium, D. mccartyi IBARAKI, was determined by analyzing the metagenome with the single-molecule DNA sequencer PacBio RS. The accuracy of the sequence was confirmed by matching it to the tag sequences obtained by SOLiD 3. The genome is 1,451,062 nt and the number of CDS is 1566, which includes 3 rRNA genes and 47 tRNA genes. There are twenty-eight reductive dehalogenase (RDase) genes, each accompanied by a gene for an anchor protein. The genome exhibits significant sequence identity with other Dehalococcoides spp. throughout its length, but differs significantly in the distribution of RDase genes. The combination of a short-read next-generation DNA sequencer and a long-read single-molecule DNA sequencer gives detailed information about a bacterial consortium. Copyright © 2014 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.

  15. Community-Supported Data Repositories in Paleobiology: A 'Middle Tail' Between the Geoscientific and Informatics Communities

    NASA Astrophysics Data System (ADS)

    Williams, J. W.; Ashworth, A. C.; Betancourt, J. L.; Bills, B.; Blois, J.; Booth, R.; Buckland, P.; Charles, D.; Curry, B. B.; Goring, S. J.; Davis, E.; Grimm, E. C.; Graham, R. W.; Smith, A. J.

    2015-12-01

    Community-supported data repositories (CSDRs) in paleoecology and paleoclimatology have a decades-long tradition and serve multiple critical scientific needs. CSDRs facilitate synthetic large-scale scientific research by providing open-access and curated data that employ community-supported metadata and data standards. CSDRs serve as a 'middle tail' or boundary organization between information scientists and the long-tail community of individual geoscientists collecting and analyzing paleoecological data. Over the past decades, a distributed network of CSDRs has emerged, each serving a particular suite of data and research communities, e.g. Neotoma Paleoecology Database, Paleobiology Database, International Tree Ring Database, NOAA NCEI for Paleoclimatology, Morphobank, iDigPaleo, and Integrated Earth Data Alliance. Recently, these groups have organized into a common Paleobiology Data Consortium dedicated to improving interoperability and sharing best practices and protocols. The Neotoma Paleoecology Database offers one example of an active and growing CSDR, designed to facilitate research into ecological and evolutionary dynamics during recent past global change. Neotoma combines a centralized database structure with distributed scientific governance via multiple virtual constituent data working groups. The Neotoma data model is flexible and can accommodate a variety of paleoecological proxies from many depositional contexts. Data input into Neotoma is performed by trained Data Stewards drawn from their research communities. Neotoma data can be searched, viewed, and returned to users through multiple interfaces, including the interactive Neotoma Explorer map interface, REST-ful Application Programming Interfaces (APIs), the neotoma R package, and the Tilia stratigraphic software. Neotoma is governed by geoscientists and provides community engagement through training workshops for data contributors, stewards, and users.
Neotoma is engaged in the Paleobiology Data Consortium and other efforts to improve interoperability among cyberinfrastructure in the paleogeosciences.
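
    The REST-ful access route mentioned above usually amounts to composing query URLs. A sketch of how a client might build one, assuming a Neotoma-style endpoint (the base path and parameter names here are assumptions for illustration; consult the live API documentation before relying on them):

```python
from urllib.parse import urlencode

# Assumed base path, modeled on the public Neotoma web API.
BASE = "https://api.neotomadb.org/v2.0/data"

def sites_query(**params):
    """Compose the kind of query string that interfaces such as
    Neotoma Explorer or the neotoma R package issue under the hood.
    Parameters are sorted so the URL is deterministic."""
    return BASE + "/sites?" + urlencode(sorted(params.items()))

url = sites_query(sitename="Devils Lake", limit=1)
```

    A real client would then fetch `url` with any HTTP library and page through the JSON response; only the URL construction is shown so the sketch stays self-contained.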

  16. Duchenne Regulatory Science Consortium Meeting on Disease Progression Modeling for Duchenne Muscular Dystrophy.

    PubMed

    Larkindale, Jane; Abresch, Richard; Aviles, Enrique; Bronson, Abby; Chin, Janice; Furlong, Pat; Gordish-Dressman, Heather; Habeeb-Louks, Elizabeth; Henricson, Erik; Kroger, Hans; Lynn, Charles; Lynn, Stephen; Martin, Dana; Nuckolls, Glen; Rooney, William; Romero, Klaus; Sweeney, Lee; Vandenborne, Krista; Walter, Glenn; Wolff, Jodi; Wong, Brenda; McDonald, Craig M; Members of the Duchenne Regulatory Science Consortium, the Imaging-DMD Consortium and the CINRG Investigators

    2017-01-12

    The Duchenne Regulatory Science Consortium (D-RSC) was established to develop tools to accelerate drug development for Duchenne muscular dystrophy (DMD). The resulting tools are anticipated to meet validity requirements outlined by qualification/endorsement pathways at both the U.S. Food and Drug Administration (FDA) and the European Medicines Agency (EMA), and will be made available to the drug development community. The initial goals of the consortium include the development of a disease progression model that would forecast changes in clinically meaningful endpoints and thereby inform clinical trial protocol development and data analysis. Methods: In April 2016 the consortium and other experts met to formulate plans for the development of the model. Conclusions: Here we report the results of the meeting and discuss the form of the model that we plan to develop, after input from the regulatory authorities.

  17. 24 CFR 943.128 - How does a consortium carry out planning and reporting functions?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... HOUSING, DEPARTMENT OF HOUSING AND URBAN DEVELOPMENT PUBLIC HOUSING AGENCY CONSORTIA AND JOINT VENTURES... the consortium agreement, the consortium must submit joint five-year Plans and joint Annual Plans for... the joint PHA Plan. ...

  18. THE FEDERAL INTEGRATED BIOTREATMENT RESEARCH CONSORTIUM (FLASK TO FIELD)

    EPA Science Inventory

    The Federal Integrated Biotreatment Research Consortium (Flask to Field) represented a 7-year concerted effort by several research laboratories to develop bioremediation technologies for contaminated DoD sites. The consortium structure consisted of a director and four thrust are...

  19. The Activities of the European Consortium on Nuclear Data Development and Analysis for Fusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fischer, U., E-mail: ulrich.fischer@kit.edu; Avrigeanu, M.; Avrigeanu, V.

    This paper presents an overview of the activities of the European Consortium on Nuclear Data Development and Analysis for Fusion. The Consortium combines available European expertise to provide services for the generation, maintenance, and validation of nuclear data evaluations and data files relevant for ITER, IFMIF and DEMO, as well as codes and software tools required for related nuclear calculations.

  20. Federated or cached searches: Providing expected performance from multiple invasive species databases

    NASA Astrophysics Data System (ADS)

    Graham, Jim; Jarnevich, Catherine S.; Simpson, Annie; Newman, Gregory J.; Stohlgren, Thomas J.

    2011-06-01

    Invasive species are a universal global problem, but the information to identify them, manage them, and prevent invasions is stored around the globe in a variety of formats. The Global Invasive Species Information Network is a consortium of organizations working toward providing seamless access to these disparate databases via the Internet. A distributed network of databases can be created using the Internet and a standard web service protocol. There are two options to provide this integration. First, federated searches have been proposed to allow users to search "deep" web documents such as databases for invasive species. The second method is to create a cache of data from the databases for searching. We compare these two methods and show that federated searches will not provide the performance and flexibility users require, and that a central cache of the data is needed to improve performance.
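
    The trade-off the authors measure can be seen in a toy model. A minimal sketch with invented source contents and latencies (nothing below comes from GISIN itself): a federated query pays every remote round trip at query time, while a cache pays the merge cost once, ahead of time, and answers from a single local, pre-sorted store.

```python
import time

# Invented sources: per-database simulated latency plus a few records.
SOURCES = {
    "db-a": {"latency_s": 0.05, "records": ["kudzu", "zebra mussel"]},
    "db-b": {"latency_s": 0.20, "records": ["cane toad"]},
}
# The cache is built once, ahead of queries: records merged and sorted.
CACHE = sorted(r for s in SOURCES.values() for r in s["records"])

def federated_search(term):
    """Fan the query out to every source; each call pays its latency."""
    hits = []
    for src in SOURCES.values():
        time.sleep(src["latency_s"])   # stand-in for a network round trip
        hits.extend(r for r in src["records"] if term in r)
    return hits

def cached_search(term):
    """One scan over the local, pre-merged cache; no network wait."""
    return [r for r in CACHE if term in r]
```

    The cache also makes cross-source operations (global sorting, paging, ranking) trivial, which is hard to do when partial result sets trickle in from remote databases at different speeds.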

  1. Federated or cached searches: providing expected performance from multiple invasive species databases

    USGS Publications Warehouse

    Graham, Jim; Jarnevich, Catherine S.; Simpson, Annie; Newman, Gregory J.; Stohlgren, Thomas J.

    2011-01-01

    Invasive species are a universal global problem, but the information to identify them, manage them, and prevent invasions is stored around the globe in a variety of formats. The Global Invasive Species Information Network is a consortium of organizations working toward providing seamless access to these disparate databases via the Internet. A distributed network of databases can be created using the Internet and a standard web service protocol. There are two options to provide this integration. First, federated searches have been proposed to allow users to search “deep” web documents such as databases for invasive species. The second method is to create a cache of data from the databases for searching. We compare these two methods and show that federated searches will not provide the performance and flexibility users require, and that a central cache of the data is needed to improve performance.

  2. PRESENTED AT TRIANGLE CONSORTIUM FOR REPRODUCTIVE BIOLOGY MEETING IN CHAPEL HILL, NC ON 2/11/2006: SPERM COUNT DISTRIBUTIONS IN FERTILE MEN

    EPA Science Inventory

    Sperm concentration and count are often used as indicators of environmental impacts on male reproductive health. Existing clinical databases may be biased towards sub-fertile men with low sperm counts and less is known about expected sperm count distributions in cohorts of ferti...

  3. The Ontological Perspectives of the Semantic Web and the Metadata Harvesting Protocol: Applications of Metadata for Improving Web Search.

    ERIC Educational Resources Information Center

    Fast, Karl V.; Campbell, D. Grant

    2001-01-01

    Compares the implied ontological frameworks of the Open Archives Initiative Protocol for Metadata Harvesting and the World Wide Web Consortium's Semantic Web. Discusses current search engine technology, semantic markup, indexing principles of special libraries and online databases, and componentization and the distinction between data and…

  4. A Four-Dimensional Probabilistic Atlas of the Human Brain

    PubMed Central

    Mazziotta, John; Toga, Arthur; Evans, Alan; Fox, Peter; Lancaster, Jack; Zilles, Karl; Woods, Roger; Paus, Tomas; Simpson, Gregory; Pike, Bruce; Holmes, Colin; Collins, Louis; Thompson, Paul; MacDonald, David; Iacoboni, Marco; Schormann, Thorsten; Amunts, Katrin; Palomero-Gallagher, Nicola; Geyer, Stefan; Parsons, Larry; Narr, Katherine; Kabani, Noor; Le Goualher, Georges; Feidler, Jordan; Smith, Kenneth; Boomsma, Dorret; Pol, Hilleke Hulshoff; Cannon, Tyrone; Kawashima, Ryuta; Mazoyer, Bernard

    2001-01-01

    The authors describe the development of a four-dimensional atlas and reference system that includes both macroscopic and microscopic information on structure and function of the human brain in persons between the ages of 18 and 90 years. Given the presumed large but previously unquantified degree of structural and functional variance among normal persons in the human population, the basis for this atlas and reference system is probabilistic. Through the efforts of the International Consortium for Brain Mapping (ICBM), 7,000 subjects will be included in the initial phase of database and atlas development. For each subject, detailed demographic, clinical, behavioral, and imaging information is being collected. In addition, 5,800 subjects will contribute DNA for the purpose of determining genotype–phenotype–behavioral correlations. The process of developing the strategies, algorithms, data collection methods, validation approaches, database structures, and distribution of results is described in this report. Examples of applications of the approach are described for the normal brain in both adults and children as well as in patients with schizophrenia. This project should provide new insights into the relationship between microscopic and macroscopic structure and function in the human brain and should have important implications in basic neuroscience, clinical diagnostics, and cerebral disorders. PMID:11522763

  5. Patient-Reported Outcome (PRO) Consortium translation process: consensus development of updated best practices.

    PubMed

    Eremenco, Sonya; Pease, Sheryl; Mann, Sarah; Berry, Pamela

    2017-01-01

    This paper describes the rationale and goals of the Patient-Reported Outcome (PRO) Consortium's instrument translation process. The PRO Consortium has developed a number of novel PRO measures which are in the process of qualification by the U.S. Food and Drug Administration (FDA) for use in clinical trials where endpoints based on these measures would support product labeling claims. Given the importance of FDA qualification of these measures, the PRO Consortium's Process Subcommittee determined that a detailed linguistic validation (LV) process was necessary to ensure that all translations of Consortium-developed PRO measures are performed using a standardized approach with the rigor required to meet regulatory and pharmaceutical industry expectations, and that the translation process is defined clearly enough for the translation industry to support it. The consensus process involved gathering information about current best practices from 13 translation companies with expertise in LV, consolidating the findings to generate a proposed process, and obtaining iterative feedback from the translation companies and PRO Consortium member firms in two rounds of review. The aim was to update existing principles of good practice in LV and to provide sufficient detail to ensure consistency across PRO Consortium measures, sponsors, and translation companies. The consensus development resulted in a 12-step process that outlines universal and country-specific new translation approaches, as well as country-specific adaptations of existing translations. The PRO Consortium translation process will play an important role in maintaining the validity of the data generated through these measures by ensuring that they are translated by qualified linguists following a standardized and rigorous process that reflects best practice.

  6. Northern New Jersey Nursing Education Consortium: a partnership for graduate nursing education.

    PubMed

    Quinless, F W; Levin, R F

    1998-01-01

    The purpose of this article is to describe the evolution and implementation of the Northern New Jersey Nursing Education Consortium, a consortium of seven member institutions established in 1992. Details are provided regarding the specific functions of the consortium relative to cross-registration of students in graduate courses, financial disbursement of revenue, faculty development activities, student services, library privileges, and institutional research review board mechanisms. The authors also review the administrative organizational structure through which the work of the consortium occurs. Both the advantages and disadvantages of such a graduate consortium are explored, and specific examples of recent potential and real conflicts are fully discussed. The authors detail the governance and structure of the consortium as a potential model for replication in other environments.

  7. Predicting Novel Bulk Metallic Glasses via High- Throughput Calculations

    NASA Astrophysics Data System (ADS)

    Perim, E.; Lee, D.; Liu, Y.; Toher, C.; Gong, P.; Li, Y.; Simmons, W. N.; Levy, O.; Vlassak, J.; Schroers, J.; Curtarolo, S.

    Bulk metallic glasses (BMGs) are materials which may combine key properties of crystalline metals, such as high hardness, with others typical of plastics, such as easy processability. However, the cost of known BMGs poses a significant obstacle to the development of applications, which has led to a long search for novel, economically viable BMGs. The emergence of high-throughput DFT calculations, such as the library provided by the AFLOWLIB consortium, has provided new tools for materials discovery. We have used this data to develop a new glass-forming descriptor combining structural factors with thermodynamics in order to quickly screen through a large number of alloy systems in the AFLOWLIB database, identifying the most promising systems and the optimal compositions for glass formation. National Science Foundation (DMR-1436151, DMR-1435820, DMR-1436268).

  8. Breaking barriers through collaboration: the example of the Cell Migration Consortium.

    PubMed

    Horwitz, Alan Rick; Watson, Nikki; Parsons, J Thomas

    2002-10-15

    Understanding complex integrated biological processes, such as cell migration, requires interdisciplinary approaches. The Cell Migration Consortium, funded by a Large-Scale Collaborative Project Award from the National Institute of General Medical Science, develops and disseminates new technologies, data, reagents, and shared information to a wide audience. The development and operation of this Consortium may provide useful insights for those who plan similarly large-scale, interdisciplinary approaches.

  9. Aerospace Workforce Development: The Nebraska Proposal; and Native Connections: A Multi-Consortium Workforce Development Proposal

    NASA Technical Reports Server (NTRS)

    Bowen, Brent; Vlasek, Karisa; Russell, Valerie; Teasdale, Jean; Downing, David R.; deSilva, Shan; Higginbotham, Jack; Duke, Edward; Westenkow, Dwayne; Johnson, Paul

    2004-01-01

    This report contains two sections, each of which describes a proposal for a program at the University of Nebraska. The sections are entitled: 1) Aerospace Workforce Development Augmentation Competition; 2) Native Connections: A Multi-Consortium Workforce Development Proposal.

  10. AphidBase: A centralized bioinformatic resource for annotation of the pea aphid genome

    PubMed Central

    Legeai, Fabrice; Shigenobu, Shuji; Gauthier, Jean-Pierre; Colbourne, John; Rispe, Claude; Collin, Olivier; Richards, Stephen; Wilson, Alex C. C.; Tagu, Denis

    2015-01-01

    AphidBase is a centralized bioinformatic resource that was developed to facilitate community annotation of the pea aphid genome by the International Aphid Genomics Consortium (IAGC). The AphidBase Information System, designed to organize and distribute genomic data and annotations for a large international community, was constructed using open-source software tools from the Generic Model Organism Database (GMOD). The system includes Apollo and GBrowse utilities as well as a wiki, BLAST search capabilities, and a full-text search engine. AphidBase strongly supported community cooperation and coordination in the curation of gene models during community annotation of the pea aphid genome. AphidBase can be accessed at http://www.aphidbase.com. PMID:20482635

  11. Establishing a Consortium for the Study of Rare Diseases: The Urea Cycle Disorders Consortium

    PubMed Central

    Seminara, Jennifer; Tuchman, Mendel; Krivitzky, Lauren; Krischer, Jeffrey; Lee, Hye-Seung; LeMons, Cynthia; Baumgartner, Matthias; Cederbaum, Stephen; Diaz, George A.; Feigenbaum, Annette; Gallagher, Renata C.; Harding, Cary O.; Kerr, Douglas S.; Lanpher, Brendan; Lee, Brendan; Lichter-Konecki, Uta; McCandless, Shawn E.; Merritt, J. Lawrence; Oster-Granite, Mary Lou; Seashore, Margretta R.; Stricker, Tamar; Summar, Marshall; Waisbren, Susan; Yudkoff, Marc; Batshaw, Mark L.

    2010-01-01

    The Urea Cycle Disorders Consortium (UCDC) was created as part of a larger network established by the National Institutes of Health to study rare diseases. This paper reviews the UCDC’s accomplishments over the first six years, including how the Consortium was developed and organized, clinical research studies initiated, and the importance of creating partnerships with patient advocacy groups, philanthropic foundations and biotech and pharmaceutical companies. PMID:20188616

  12. Northeast Artificial Intelligence Consortium Annual Report. 1988 Interference Techniques for Knowledge Base Maintenance Using Logic Programming Methodologies. Volume 11

    DTIC Science & Technology

    1989-10-01

    Northeast Artificial Intelligence Consortium (NAIC). ... RADC-TR-89-259, Vol XI (of twelve). Interim Report, October 1989: Northeast Artificial Intelligence Consortium Annual Report. ... Monitoring organization: Northeast Artificial Intelligence Consortium (NAIC); Rome Air Development

  13. ICONE: An International Consortium of Neuro Endovascular Centres.

    PubMed

    Raymond, J; White, P; Kallmes, D F; Spears, J; Marotta, T; Roy, D; Guilbert, F; Weill, A; Nguyen, T; Molyneux, A J; Cloft, H; Cekirge, S; Saatci, I; Bracard, S; Meder, J F; Moret, J; Cognard, C; Qureshi, A I; Turk, A S; Berenstein, A

    2008-06-30

    The proliferation of new endovascular devices and therapeutic strategies calls for a prudent and rational evaluation of their clinical benefit. This evaluation must be done in an effective manner and in collaboration with industry. Such a research initiative requires organisational and methodological support to survive and thrive in a competitive environment. We propose the formation of an international consortium, an academic alliance committed to the pursuit of effective neurovascular therapies. Such a consortium would be dedicated to the design and execution of basic science, device development and clinical trials. The Consortium is owned and operated by its members. Members are international leaders in neurointerventional research and clinical practice. The Consortium brings competency, knowledge, and expertise to industry as well as to its membership across a spectrum of research initiatives such as: expedited review of clinical trials, protocol development, surveys and systematic reviews; laboratory expertise and support for research design and grant applications to public agencies. Once objectives and protocols are approved, the Consortium provides a stable network of centers capable of timely realization of clinical trials or preclinical investigations in an optimal environment. The Consortium is a non-profit organization. The potential revenue generated from client-sponsored financial agreements will be redirected to the academic and research objectives of the organization. The Consortium wishes to work in concert with industry to support emerging trends in neurovascular therapeutic development. The Consortium is a realistic endeavour optimally structured to promote excellence through scientific appraisal of our treatments, and to accelerate technical progress while maximizing patients' safety and welfare.

  14. Duchenne Regulatory Science Consortium Meeting on Disease Progression Modeling for Duchenne Muscular Dystrophy

    PubMed Central

    Larkindale, Jane; Abresch, Richard; Aviles, Enrique; Bronson, Abby; Chin, Janice; Furlong, Pat; Gordish-Dressman, Heather; Habeeb-Louks, Elizabeth; Henricson, Erik; Kroger, Hans; Lynn, Charles; Lynn, Stephen; Martin, Dana; Nuckolls, Glen; Rooney, William; Romero, Klaus; Sweeney, Lee; Vandenborne, Krista; Walter, Glenn; Wolff, Jodi; Wong, Brenda; McDonald, Craig M.; Duchenne Regulatory Science Consortium, Imaging-DMD Consortium and the CINRG Investigators, members of the

    2017-01-01

    Introduction: The Duchenne Regulatory Science Consortium (D-RSC) was established to develop tools to accelerate drug development for Duchenne muscular dystrophy (DMD). The resulting tools are anticipated to meet the validity requirements outlined by the qualification/endorsement pathways at both the U.S. Food and Drug Administration (FDA) and the European Medicines Agency (EMA), and will be made available to the drug development community. The initial goals of the consortium include the development of a disease progression model that can forecast changes in clinically meaningful endpoints and thereby inform clinical trial protocol development and data analysis. Methods: In April 2016 the consortium and other experts met to formulate plans for the development of the model. Conclusions: Here we report the results of the meeting and discuss the form of the model that we plan to develop, after input from the regulatory authorities. PMID:28228973

  15. Regional Development and the European Consortium of Innovative Universities.

    ERIC Educational Resources Information Center

    Hansen, Saskia Loer; Kokkeler, Ben; van der Sijde, P. C.

    2002-01-01

    The European Consortium of Innovative Universities is a network that shares information not just among universities but with affiliated incubators, research parks, and other regional entities. The learning network contributes to regional development.(JOW)

  16. 15 CFR 918.5 - Eligibility, qualifications, and responsibilities-Sea Grant Regional Consortia.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... qualifying areas which are pertinent to the Consortium's program: (1) Leadership. The Sea Grant Regional... Consortium candidate must have created the management organization to carry on a viable and productive... assistance as the consortium may offer, and (iii) to assist others in developing research and management...

  17. International Arid Lands Consortium: A synopsis of accomplishments

    Treesearch

    Peter F. Ffolliott; Jeffrey O. Dawson; James T. Fisher; Itshack Moshe; Timothy E. Fulbright; W. Carter Johnson; Paul Verburg; Muhammad Shatanawi; Jim P. M. Chamie

    2003-01-01

    The International Arid Lands Consortium (IALC) was established in 1990 to promote research, education, and training activities related to the development, management, and reclamation of arid and semiarid lands in the Southwestern United States, the Middle East, and elsewhere in the world. The Consortium supports the ecological sustainability and environmentally sound...

  18. 77 FR 38770 - Notice of Consortium on “nSoft Consortium”

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-29

    ... DEPARTMENT OF COMMERCE National Institute of Standards and Technology Notice of Consortium on ``n...: NIST will form the ``nSoft Consortium'' to advance and transfer neutron based measurement methods for soft materials manufacturing. The goals of nSoft are to develop neutron- based measurements that...

  19. Review of the cultivation program within the National Alliance for Advanced Biofuels and Bioproducts

    DOE PAGES

    Lammers, Peter J.; Huesemann, Michael; Boeing, Wiebke; ...

    2016-12-12

    The cultivation efforts within the National Alliance for Advanced Biofuels and Bioproducts (NAABB) addressed four major goals for the consortium: biomass production for downstream experimentation, development of new assessment tools for cultivation, development of new cultivation reactor technologies, and development of methods for robust cultivation. The NAABB consortium testbeds produced over 1500 kg of biomass for downstream processing. The biomass production included a number of model production strains, but also took into production some of the more promising strains found through the prospecting efforts of the consortium. Cultivation efforts at large scale are intensive and costly; therefore, the consortium developed tools and models to assess the productivity of strains under various environmental conditions at lab scale, and validated these against scaled outdoor production systems. Two new pond-based bioreactor designs were tested for their ability to minimize energy consumption while maintaining, and even exceeding, the productivity of algae cultivation compared to traditional systems. Also, molecular markers were developed for quality control and to facilitate detection of bacterial communities associated with cultivated algal species, including the Chlorella spp. pathogen, Vampirovibrio chlorellavorus, which was identified in at least two test site locations in Arizona and New Mexico. Finally, the consortium worked on understanding methods to utilize compromised municipal wastewater streams for cultivation. In conclusion, this review provides an overview of the cultivation methods and tools developed by the NAABB consortium to produce algae biomass, in robust low-energy systems, for biofuel production.

  20. The Magnetics Information Consortium (MagIC) Online Database: Uploading, Searching and Visualizing Paleomagnetic and Rock Magnetic Data

    NASA Astrophysics Data System (ADS)

    Koppers, A.; Tauxe, L.; Constable, C.; Pisarevsky, S.; Jackson, M.; Solheid, P.; Banerjee, S.; Johnson, C.; Genevey, A.; Delaney, R.; Baker, P.; Sbarbori, E.

    2005-12-01

    The Magnetics Information Consortium (MagIC) operates an online relational database including both rock and paleomagnetic data. The goal of MagIC is to store all measurements and their derived properties for studies of paleomagnetic directions (inclination, declination) and their intensities, and for rock magnetic experiments (hysteresis, remanence, susceptibility, anisotropy). MagIC is hosted under EarthRef.org at http://earthref.org/MAGIC/ and has two search nodes, one for paleomagnetism and one for rock magnetism. These nodes provide basic search capabilities based on location, reference, methods applied, material type, and geological age, while allowing the user to drill down from sites all the way to the measurements. At each stage, the data can be saved and, if the available data support it, visualized by plotting equal-area plots, VGP location maps, or typical Zijderveld, hysteresis, FORC, and various magnetization and remanence diagrams. All plots are made in SVG (scalable vector graphics) and thus can be saved and easily read into the user's favorite graphics programs without loss of resolution. User contributions to the MagIC database are critical to achieving a useful research tool. We have developed a standard data and metadata template (version 1.6) that can be used to format and upload all data at the time of publication in Earth Science journals. Software tools are provided to facilitate easy population of these templates within Microsoft Excel. These tools allow for the import/export of text files, and they provide advanced functionality to manage and edit the data and to perform various internal checks to high-grade the data and make them ready for uploading. The uploading is done online using the MagIC Contribution Wizard at http://earthref.org/MAGIC/upload.htm, which takes only a few minutes to process a contribution of approximately 5,000 data records. After uploading, these standardized MagIC template files will be stored in the digital archives of EarthRef.org, from where they can be downloaded at any time. Finally, the contents of these template files will be automatically parsed into the online relational database, making the data available for online searches in the paleomagnetic and rock magnetic search nodes. The MagIC database contains all data transferred from the IAGA paleomagnetic poles database (GPMDB), the lava flow paleosecular variation database (PSVRL), the lake sediment database (SECVR), and the PINT database. In addition, a substantial number of data compiled under the Time Averaged Field Investigations project is now included, plus a significant fraction of the data collected at SIO and the IRM. Ongoing additions of legacy data include ~40 papers from studies on the Hawaiian Islands, data compilations from archeomagnetic studies, and updates to the lake sediment dataset.
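The tab-delimited template files described above can be handled with very little code. The sketch below is a minimal, hedged parser for one table of a MagIC-style text file; the assumed layout (a `tab` marker plus table name on line 1, column names on line 2, records after that) and the `er_sites` sample data are illustrative, not taken from the MagIC specification.

```python
def read_magic_table(text: str):
    """Parse one table from a MagIC-style tab-delimited text file.

    Assumed layout (illustrative): line 1 is 'tab<TAB><table name>',
    line 2 holds the column names, remaining lines hold the records.
    """
    lines = text.strip().splitlines()
    marker, table_name = lines[0].split("\t")
    assert marker == "tab", "not a MagIC-style table header"
    columns = lines[1].split("\t")
    # Each data row becomes a dict keyed by column name.
    records = [dict(zip(columns, row.split("\t"))) for row in lines[2:]]
    return table_name, records

# Hypothetical sample record for a sites table.
sample = "tab\ter_sites\nsite_name\tsite_lat\tsite_lon\nHAW-01\t19.4\t-155.3\n"
name, rows = read_magic_table(sample)
print(name, rows[0]["site_lat"])  # er_sites 19.4
```

A real importer would add the internal consistency checks the article mentions (e.g. latitude/longitude ranges) before upload.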

  1. Military Suicide Research Consortium

    DTIC Science & Technology

    2014-10-01

    increasing and decreasing (or even ceasing entirely) across different periods of time but still building on itself with each progressive episode ...community from suicide. One study found that social norms, high levels of support, identification with role models, and high self-esteem help protect ...in follow-up. Conducted quality-control checks of clinical data. Monitored safety and adverse events for DSMB reporting. Initiated Database

  2. The future application of GML database in GIS

    NASA Astrophysics Data System (ADS)

    Deng, Yuejin; Cheng, Yushu; Jing, Lianwen

    2006-10-01

    In 2004, the Geography Markup Language (GML) Implementation Specification (version 3.1.1) was published by the Open Geospatial Consortium, Inc. More and more applications in geospatial data sharing and interoperability now depend on GML. The primary purpose of GML is the exchange and transport of geo-information through standard modeling and encoding of geographic phenomena. However, applications raise the problem of how to organize and access large volumes of GML data effectively, and research on GML databases focuses on this problem. The effective storage of GML data is a hot topic in the GIS community today. A GML Database Management System (GDBMS) mainly deals with the storage and management of GML data. Two types of XML database are distinguished: Native XML Databases and XML-Enabled Databases. Since GML is an application of the XML standard to geographic data, XML database systems can also be used for the management of GML. In this paper, we review the state of the art of XML databases, including storage, indexing, query languages, and management systems, and then move on to GML databases. Finally, the future prospects of GML databases in GIS applications are presented.
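Because GML is an XML application, any XML toolchain can read it. As a minimal sketch, the snippet below parses a GML 3.1.1 point geometry (a `gml:Point` carrying a `gml:pos` coordinate pair) with Python's standard library; the sample coordinates are invented for illustration.

```python
import xml.etree.ElementTree as ET

# GML 3.1.1 namespace used by geometry elements such as gml:Point.
GML_NS = "http://www.opengis.net/gml"

sample = f"""<gml:Point xmlns:gml="{GML_NS}" srsName="EPSG:4326">
  <gml:pos>45.256 -110.45</gml:pos>
</gml:Point>"""

def parse_gml_point(xml_text: str) -> tuple:
    """Extract the coordinate pair from a gml:Point element."""
    root = ET.fromstring(xml_text)
    # gml:pos holds coordinates as a space-separated list.
    pos = root.find(f"{{{GML_NS}}}pos")
    x, y = (float(v) for v in pos.text.split())
    return x, y

print(parse_gml_point(sample))  # (45.256, -110.45)
```

A Native XML Database would store the document as-is and index such elements directly, whereas an XML-Enabled Database would shred them into relational columns like the two floats extracted here.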

  3. Acute pancreatitis patient registry to examine novel therapies in clinical experience (APPRENTICE): an international, multicenter consortium for the study of acute pancreatitis

    PubMed Central

    Papachristou, Georgios I.; Machicado, Jorge D.; Stevens, Tyler; Goenka, Mahesh Kumar; Ferreira, Miguel; Gutierrez, Silvia C.; Singh, Vikesh K.; Kamal, Ayesha; Gonzalez-Gonzalez, Jose A.; Pelaez-Luna, Mario; Gulla, Aiste; Zarnescu, Narcis O.; Triantafyllou, Konstantinos; Barbu, Sorin T.; Easler, Jeffrey; Ocampo, Carlos; Capurso, Gabriele; Archibugi, Livia; Cote, Gregory A.; Lambiase, Louis; Kochhar, Rakesh; Chua, Tiffany; Tiwari, Subhash Ch.; Nawaz, Haq; Park, Walter G.; de-Madaria, Enrique; Lee, Peter J.; Wu, Bechien U.; Greer, Phil J.; Dugum, Mohannad; Koutroumpakis, Efstratios; Akshintala, Venkata; Gougol, Amir

    2017-01-01

    Background We have established a multicenter international consortium to better understand the natural history of acute pancreatitis (AP) worldwide and to develop a platform for future randomized clinical trials. Methods The AP patient registry to examine novel therapies in clinical experience (APPRENTICE) was formed in July 2014. Detailed web-based questionnaires were then developed to prospectively capture information on demographics, etiology, pancreatitis history, comorbidities, risk factors, severity biomarkers, severity indices, health-care utilization, management strategies, and outcomes of AP patients. Results Between November 2015 and September 2016, a total of 20 sites (8 in the United States, 5 in Europe, 3 in South America, 2 in Mexico and 2 in India) prospectively enrolled 509 AP patients. All data were entered into the REDCap (Research Electronic Data Capture) database by participating centers and systematically reviewed by the coordinating site (University of Pittsburgh). The approaches and methodology are described in detail, along with an interim report on the demographic results. Conclusion APPRENTICE, an international collaboration of tertiary AP centers throughout the world, has demonstrated the feasibility of building a large, prospective, multicenter patient registry to study AP. Analysis of the collected data may provide a greater understanding of AP and APPRENTICE will serve as a future platform for randomized clinical trials. PMID:28042246

  4. Promoting Academic Development: A History of the International Consortium for Educational Development (ICED)

    ERIC Educational Resources Information Center

    Mason O'Connor, Kristine

    2016-01-01

    This essay traces the history of the International Consortium for Educational Development (ICED) through document analysis and email interviews with founding and prominent ICED members. It also provides a summary of the themes and locations of all the ICED conferences.

  5. From Start-up to Sustainability: A Decade of Collaboration to Shape the Future of Nursing.

    PubMed

    Gubrud, Paula; Spencer, Angela G; Wagner, Linda

    This article describes progress the Oregon Consortium for Nursing Education has made toward addressing the academic progression goals provided by the 2011 Institute of Medicine's Future of Nursing: Leading Change, Advancing Health report. The history of the consortium's development is described, emphasizing the creation of an efficient and sustainable organizational infrastructure that supports a shared curriculum provided through a community college/university partnership. Data and analysis describing progress and challenges related to supporting a shared curriculum and increasing access and affordability for nursing education across the state are presented. We identified four crucial attributes of maintaining a collaborative community that have been cultivated to ensure the consortium continues to make progress toward the Institute of Medicine's Future of Nursing goals. The Oregon Consortium for Nursing Education provides important lessons for other statewide consortia to consider when developing plans for sustainability.

  6. Segmentation of pulmonary nodules in computed tomography using a regression neural network approach and its application to the Lung Image Database Consortium and Image Database Resource Initiative dataset.

    PubMed

    Messay, Temesguen; Hardie, Russell C; Tuinstra, Timothy R

    2015-05-01

    We present new pulmonary nodule segmentation algorithms for computed tomography (CT). These include a fully-automated (FA) system, a semi-automated (SA) system, and a hybrid system. Like most traditional systems, the new FA system requires only a single user-supplied cue point. On the other hand, the SA system represents a new algorithm class requiring 8 user-supplied control points. This does increase the burden on the user, but we show that the resulting system is highly robust and can handle a variety of challenging cases. The proposed hybrid system starts with the FA system. If improved segmentation results are needed, the SA system is then deployed. The FA segmentation engine has 2 free parameters, and the SA system has 3. These parameters are adaptively determined for each nodule in a search process guided by a regression neural network (RNN). The RNN uses a number of features computed for each candidate segmentation. We train and test our systems using the new Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) data. To the best of our knowledge, this is one of the first nodule-specific performance benchmarks using the new LIDC-IDRI dataset. We also compare the performance of the proposed methods with several previously reported results on the same data used by those other methods. Our results suggest that the proposed FA system improves upon the state-of-the-art, and the SA system offers a considerable boost over the FA system. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
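The adaptive step described above, where a regression model scores candidate parameter settings and the best-scoring setting is kept, can be sketched compactly. The snippet below is illustrative only: it stands in a fixed linear scorer for the paper's regression neural network, and the two "free parameters", the toy feature functions, and the weights are all invented for the example.

```python
import itertools

# Hypothetical stand-in for the paper's regression neural network:
# a fixed linear model over two per-candidate features.
WEIGHTS = (0.7, 0.3)

def features(threshold: float, radius: int) -> tuple:
    # Toy "segmentation quality" features as a function of the two
    # free parameters; a real system would run the segmentation
    # engine on the CT volume and measure the resulting region.
    compactness = 1.0 - abs(threshold - 0.5)
    size_score = 1.0 / (1 + abs(radius - 4))
    return compactness, size_score

def predicted_quality(threshold: float, radius: int) -> float:
    # The regression model maps candidate features to a quality score.
    return sum(w * x for w, x in zip(WEIGHTS, features(threshold, radius)))

# Search the parameter grid, keeping the setting the model scores
# highest -- the per-nodule adaptive selection step.
grid = itertools.product([0.3, 0.4, 0.5, 0.6], [2, 4, 6])
best = max(grid, key=lambda p: predicted_quality(*p))
print(best)  # (0.5, 4)
```

The hybrid strategy in the abstract then amounts to running this loop for the FA engine first and falling back to the SA engine's larger parameter set when the best score is unsatisfactory.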

  7. Predictors for Perioperative Outcomes following Total Laryngectomy: A University HealthSystem Consortium Discharge Database Study.

    PubMed

    Rutledge, Jonathan W; Spencer, Horace; Moreno, Mauricio A

    2014-07-01

    The University HealthSystem Consortium (UHC) database collects discharge information on patients treated at academic health centers throughout the United States. We sought to use this database to identify outcome predictors for patients undergoing total laryngectomy. A secondary end point was to assess the validity of the UHC's predictive risk mortality model in this cohort of patients. Retrospective review. Academic medical centers (tertiary referral centers) and their affiliate hospitals in the United States. Using the UHC discharge database, we retrieved and analyzed data for 4648 patients undergoing total laryngectomy who were discharged between October 2007 and January 2011 from all of the member institutions. Demographics, comorbidities, institutional data, and outcomes were retrieved. The length of stay and overall costs were significantly higher among female patients (P < .0001), while age was a predictor of intensive care unit stay (P = .014). The overall complication rate was higher among Asians (P = .019) and in patients with anemia and diabetes compared with other comorbidities. The average institutional case load was 1.92 cases/mo; we found an inverse correlation (R = -0.47) between the institutional case load and length of stay (P < .0001). The UHC admit mortality risk estimator was found to be an accurate predictor not only of mortality (P < .0002) but also of intensive care unit admission and complication rate (P < .0001). This study provides an overview of laryngectomy outcomes in a contemporary cohort of patients treated at academic health centers. UHC admit mortality risk is an excellent outcome predictor and a valuable tool for risk stratification in these patients. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2014.

  8. The Columbia-Willamette Skill Builders Consortium. Final Performance Report.

    ERIC Educational Resources Information Center

    Portland Community Coll., OR.

    The Columbia-Willamette Skill Builders Consortium was formed in early 1988 in response to a growing awareness of the need for improved workplace literacy training and coordinated service delivery in Northwest Oregon. In June 1990, the consortium received a National Workplace Literacy Program grant to develop and demonstrate such training. The…

  9. 24 CFR 943.118 - What is a consortium?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... DEVELOPMENT PUBLIC HOUSING AGENCY CONSORTIA AND JOINT VENTURES Consortia § 943.118 What is a consortium? A... consortium also submits a joint PHA Plan. The lead agency collects the assistance funds from HUD that would... same fiscal year so that the applicable periods for submission and review of the joint PHA Plan are the...

  10. ENT COBRA (Consortium for Brachytherapy Data Analysis): interdisciplinary standardized data collection system for head and neck patients treated with interventional radiotherapy (brachytherapy).

    PubMed

    Tagliaferri, Luca; Kovács, György; Autorino, Rosa; Budrukkar, Ashwini; Guinot, Jose Luis; Hildebrand, Guido; Johansson, Bengt; Monge, Rafael Martìnez; Meyer, Jens E; Niehoff, Peter; Rovirosa, Angeles; Takàcsi-Nagy, Zoltàn; Dinapoli, Nicola; Lanzotti, Vito; Damiani, Andrea; Soror, Tamer; Valentini, Vincenzo

    2016-08-01

    The aim of the COBRA (Consortium for Brachytherapy Data Analysis) project is to create a multicenter group (consortium) and a web-based system for standardized data collection. The GEC-ESTRO (Groupe Européen de Curiethérapie - European Society for Radiotherapy & Oncology) Head and Neck (H&N) Working Group participated in the project and in the implementation of the consortium agreement, the ontology (data set), and the necessary COBRA software services, as well as the peer review of the general anatomic site-specific COBRA protocol. The ontology was defined by a multicenter task group. Eleven centers from 6 countries signed the agreement and the consortium approved the ontology. We identified 3 tiers for the data set: Registry (epidemiology analysis), Procedures (prediction models and decision support systems, DSS), and Research (radiomics). The COBRA Storage System (C-SS) is not time-consuming: thanks to the use of "brokers", data can be extracted directly from each center's own storage system through a connection to a structured query language database (SQL-DB), Microsoft Access®, FileMaker Pro®, or Microsoft Excel®. The system is also structured to perform automatic archiving directly from the treatment planning system or afterloading machine. The architecture is based on the concept of "on-purpose data projection". The C-SS architecture is privacy-protecting because it never exposes data that could identify an individual patient. The C-SS can also benefit from so-called "distributed learning" approaches, in which data never leave the collecting institution while learning algorithms and proposed predictive models are shared. Setting up a consortium is a feasible and practicable way to create an international, multi-system data-sharing system. COBRA C-SS appears to be well accepted by all involved parties, primarily because it does not interfere with each center's own data storage technologies, procedures, and habits. Furthermore, the method preserves the privacy of all patients.
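The broker idea, querying a center's own SQL store but projecting only the fields agreed in the shared ontology, can be sketched in a few lines. This is a hedged illustration, not COBRA's implementation: the table name, column names, and sample record below are invented, and SQLite stands in for whatever SQL-DB a center actually runs.

```python
import sqlite3

# Fields assumed to be part of the shared ontology (illustrative).
# Identifying columns are deliberately absent from this list, so the
# projection never exposes data that could identify a patient.
ONTOLOGY_FIELDS = ("tumour_site", "dose_gy", "fractions")

def project_records(conn: sqlite3.Connection):
    """Broker step: extract only the ontology-approved columns."""
    cols = ", ".join(ONTOLOGY_FIELDS)
    return conn.execute(f"SELECT {cols} FROM treatments").fetchall()

# A center's local store, simulated in-memory.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE treatments (
    patient_name TEXT,      -- identifying: never projected
    tumour_site TEXT, dose_gy REAL, fractions INTEGER)""")
conn.execute("INSERT INTO treatments VALUES ('J. Doe', 'oropharynx', 30.0, 10)")

print(project_records(conn))  # [('oropharynx', 30.0, 10)]
```

The same projection principle underlies the distributed-learning variant mentioned above: a model update computed over `project_records` output could be shared while the `treatments` table itself never leaves the institution.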

  11. Infrastructure for the life sciences: design and implementation of the UniProt website.

    PubMed

    Jain, Eric; Bairoch, Amos; Duvaud, Severine; Phan, Isabelle; Redaschi, Nicole; Suzek, Baris E; Martin, Maria J; McGarvey, Peter; Gasteiger, Elisabeth

    2009-05-08

    The UniProt consortium was formed in 2002 by groups from the Swiss Institute of Bioinformatics (SIB), the European Bioinformatics Institute (EBI), and the Protein Information Resource (PIR) at Georgetown University, and soon afterwards the website http://www.uniprot.org was set up as a central entry point to UniProt resources. Requests to this address were redirected to one of the three organisations' websites. While these sites shared a set of static pages with general information about UniProt, their pages for searching and viewing data differed. To provide users with a consistent view and to cut the cost of maintaining three separate sites, the consortium decided to develop a common website for UniProt. Following several years of intense development and a year of public beta testing, the http://www.uniprot.org domain was switched to the newly developed site described in this paper in July 2008. The UniProt consortium is the main provider of protein sequence and annotation data for much of the life sciences community, and the http://www.uniprot.org website is the primary access point to these data and to documentation and basic tools for them. These tools include full-text and field-based text search, similarity search, multiple sequence alignment, batch retrieval, and database identifier mapping. This paper discusses the design and implementation of the new website, released in July 2008, and shows how it improves data access for users with different levels of experience, as well as for machines through programmatic access. http://www.uniprot.org/ is open for both academic and commercial use. The site was built with open-source tools and libraries. Feedback is very welcome and should be sent to help@uniprot.org. The new UniProt website makes accessing and understanding UniProt easier than ever. The two main lessons learned are that getting the basics right for such a data-provider website has huge benefits but is not trivial and easy to underestimate, and that there is no substitute for using empirical data throughout the development process to decide what is and is not working for your users.
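Programmatic access of the kind mentioned above typically works through predictable, bookmarkable URLs, where a dataset, an accession, and a format extension identify a resource. The helper below only constructs such URLs (no network access) and the exact path scheme shown is an assumption for illustration, not a guaranteed description of the site's API.

```python
# Assumed URL pattern: <base>/<dataset>/<accession>.<format>
# e.g. a FASTA representation of one protein entry.
BASE = "http://www.uniprot.org"

def entry_url(accession: str, fmt: str = "txt", dataset: str = "uniprot") -> str:
    """Build a REST-style URL for one database entry.

    `fmt` selects the representation (e.g. 'txt', 'fasta', 'xml');
    the pattern itself is an illustrative assumption.
    """
    return f"{BASE}/{dataset}/{accession}.{fmt}"

print(entry_url("P12345", "fasta"))
# http://www.uniprot.org/uniprot/P12345.fasta
```

A script would then fetch such a URL with any HTTP client; because the representation is chosen by the extension rather than by content negotiation alone, the same links work identically in a browser and in code.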

  12. Economic Development and Consortia.

    ERIC Educational Resources Information Center

    Watson, Allan; Jordan, Linda

    1999-01-01

    The Dallas (Texas)-based Alliance for Higher Education is a consortium of colleges, universities, corporations, hospitals, and other nonprofit organizations that strategically links business and higher education through distance education initiatives. The consortium has created an infrastructure that supports economic development in the…

  13. Synthesis of rainfall and runoff data used for Texas Department of Transportation Research Projects 0-4193 and 0-4194

    USGS Publications Warehouse

    Asquith, William H.; Thompson, David B.; Cleveland, Theodore G.; Fang, Xing

    2004-01-01

    In the early 2000s, the Texas Department of Transportation funded several research projects to examine the unit hydrograph and rainfall hyetograph techniques used in Texas hydrologic design to estimate design flows for stormwater drainage systems. A research consortium comprising Lamar University, Texas Tech University, the University of Houston, and the U.S. Geological Survey (USGS) was chosen to examine these techniques. Rainfall and runoff data collected by the USGS at 91 streamflow-gaging stations in Texas formed a basis for the research. These data were collected as part of USGS small-watershed projects and urban watershed studies that began in the late 1950s and continued through most of the 1970s; a few gages were in operation in the mid-1980s. Selected hydrologic events from these studies were available in the form of over 220 printed reports, which offered the best aggregation of hydrologic data for the research objectives. Digital versions of the data did not exist; therefore, significant effort was undertaken by the consortium to enter the data manually into a digital database from the printed record. The rainfall and runoff data for over 1,650 storms were entered. Considerable quality-control and quality-assurance efforts were conducted during and after assembly of the database to enhance data integrity. This report documents the database and informs interested parties of its usage.

  14. Radiogenomics Consortium (RGC)

    Cancer.gov

    The Radiogenomics Consortium's hypothesis is that a cancer patient's likelihood of developing toxicity to radiation therapy is influenced by common genetic variations, such as single nucleotide polymorphisms (SNPs).

  15. The Genomes OnLine Database (GOLD) v.5: a metadata management system based on a four level (meta)genome project classification

    PubMed Central

    Reddy, T.B.K.; Thomas, Alex D.; Stamatis, Dimitri; Bertsch, Jon; Isbandi, Michelle; Jansson, Jakob; Mallajosyula, Jyothi; Pagani, Ioanna; Lobos, Elizabeth A.; Kyrpides, Nikos C.

    2015-01-01

    The Genomes OnLine Database (GOLD; http://www.genomesonline.org) is a comprehensive online resource for cataloguing and monitoring genetic studies worldwide. GOLD provides up-to-date status on complete and ongoing sequencing projects, along with a broad array of curated metadata. Here we report version 5 (v.5) of the database. The newly designed database schema and web user interface support several new features, including the implementation of a four-level (meta)genome project classification system and a simplified, intuitive web interface for accessing reports and launching search tools. The database currently hosts information for about 19 200 studies, 56 000 Biosamples, 56 000 sequencing projects and 39 400 analysis projects. More than just a catalog of worldwide genome projects, GOLD is a manually curated, quality-controlled metadata warehouse. The problems encountered in integrating disparate data of varying quality into GOLD are briefly highlighted. GOLD fully supports and follows the Genomic Standards Consortium (GSC) Minimum Information standards. PMID:25348402

  16. 24 CFR 943.126 - What is the relationship between HUD and a consortium?

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 24 Housing and Urban Development 4 2013-04-01 2013-04-01 false What is the relationship between... § 943.126 What is the relationship between HUD and a consortium? HUD has a direct relationship with the consortium through the PHA Plan process and through one or more payment agreements, executed in a form...

  17. 24 CFR 943.126 - What is the relationship between HUD and a consortium?

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 4 2012-04-01 2012-04-01 false What is the relationship between... § 943.126 What is the relationship between HUD and a consortium? HUD has a direct relationship with the consortium through the PHA Plan process and through one or more payment agreements, executed in a form...

  18. 24 CFR 943.126 - What is the relationship between HUD and a consortium?

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 24 Housing and Urban Development 4 2014-04-01 2014-04-01 false What is the relationship between... § 943.126 What is the relationship between HUD and a consortium? HUD has a direct relationship with the consortium through the PHA Plan process and through one or more payment agreements, executed in a form...

  19. 24 CFR 943.126 - What is the relationship between HUD and a consortium?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 24 Housing and Urban Development 4 2011-04-01 2011-04-01 false What is the relationship between... § 943.126 What is the relationship between HUD and a consortium? HUD has a direct relationship with the consortium through the PHA Plan process and through one or more payment agreements, executed in a form...

  20. 24 CFR 943.126 - What is the relationship between HUD and a consortium?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false What is the relationship between... § 943.126 What is the relationship between HUD and a consortium? HUD has a direct relationship with the consortium through the PHA Plan process and through one or more payment agreements, executed in a form...

  1. A Long Island Consortium Takes Shape. Occasional Paper No. 76-1.

    ERIC Educational Resources Information Center

    Taylor, William R.

    This occasional paper, the first in a "new" series, describes the background, activities, and experiences of the Long Island Consortium, a cooperative effort of two-year and four-year colleges committed to organizing a model program of faculty development. The consortium was organized under an initial grant from the Lilly Endowment. In May and…

  2. Baltimore Education Research Consortium: A Consideration of Past, Present, and Future

    ERIC Educational Resources Information Center

    Connolly, Faith; Plank, Stephen; Rone, Tracy

    2012-01-01

    In this paper, we offer an overview of the history and development of the Baltimore Education Research Consortium (BERC). As a part of this overview, we describe challenges and dilemmas encountered during the founding years of this consortium. We also highlight particular benefits or sources of satisfaction we have realized in the course of…

  3. Pacific Eisenhower Mathematics and Science Regional Consortium Final Performance Report, October 1, 1995-February 28, 2001.

    ERIC Educational Resources Information Center

    Pacific Resources for Education and Learning, Honolulu, HI.

    The Pacific Eisenhower Mathematics and Science Regional Consortium was established at Pacific Resources for Education and Learning (PREL) in October, 1992 and completed its second funding cycle in February 2001. The Consortium is a collaboration among PREL, the Curriculum Research and Development Group (CRDG) at the University of Hawaii, and the…

  4. The Relationship between SAT® Scores and Retention to the Fourth Year: 2006 SAT Validity Sample. Statistical Report 2011-6

    ERIC Educational Resources Information Center

    Mattern, Krista D.; Patterson, Brian F.

    2011-01-01

    The College Board formed a research consortium with four-year colleges and universities to build a national higher education database with the primary goal of validating the SAT® for use in college admission. The first sample included first-time, first-year students entering college in fall 2006, with 110 institutions providing students'…

  5. Validity of the SAT® for Predicting Fourth-Year Grades: 2006 SAT Validity Sample. Statistical Report 2011-7

    ERIC Educational Resources Information Center

    Mattern, Krista D.; Patterson, Brian F.

    2006-01-01

    The College Board formed a research consortium with four-year colleges and universities to build a national higher education database with the primary goal of validating the SAT®, which is used in college admission and consists of three sections: critical reading (SAT-CR), mathematics (SAT-M) and writing (SAT-W). This report builds on a body of…

  6. The Relationship between SAT® Scores and Retention to the Second Year: 2008 SAT Validity Sample. Statistical Report 2012-1

    ERIC Educational Resources Information Center

    Mattern, Krista D.; Patterson, Brian F.

    2012-01-01

    The College Board formed a research consortium with four-year colleges and universities to build a national higher education database with the primary goal of validating the revised SAT®, which consists of three sections: critical reading (SAT-CR), mathematics (SAT-M), and writing (SAT-W), for use in college admission. A study by Mattern and…

  7. CIDR

    Science.gov Websites

    Consortium Developed Arrays: Infinium Human Drug Core Array. The Illumina Infinium DrugDev Consortium array addresses drug target discovery, validation and treatment response. Detailed Information on Array Infinium Human…

  8. PIPEMicroDB: microsatellite database and primer generation tool for pigeonpea genome

    PubMed Central

    Sarika; Arora, Vasu; Iquebal, M. A.; Rai, Anil; Kumar, Dinesh

    2013-01-01

    Molecular markers play a significant role in crop improvement for desirable characteristics, such as high yield and disease resistance, that benefit the crop in the long term. Pigeonpea (Cajanus cajan L.) is a legume recently sequenced by a global consortium led by ICRISAT (Hyderabad, India) and has been analysed for gene prediction, synteny maps, markers, etc. We present the PIgeonPEa Microsatellite DataBase (PIPEMicroDB) with an automated primer designing tool for the pigeonpea genome, based on chromosome-wise as well as location-wise searches for primers. A total of 123 387 Short Tandem Repeats (STRs) were extracted from the publicly available pigeonpea genome using the MIcroSAtellite tool (MISA). The database is an online relational database based on a ‘three-tier architecture’ that catalogues information on microsatellites in MySQL, with a user-friendly interface developed in PHP. Searches for STRs may be customized by limiting their location on a chromosome as well as the number of markers in that range. This is a novel approach that has not been implemented in any existing marker database. The database has been further appended with Primer3 for primer designing of selected markers with left and right flankings of up to 500 bp. This enables researchers to select markers of choice at desired intervals over the chromosome. Furthermore, one can use individual STRs of a targeted region of a chromosome to narrow down the location of a gene of interest or linked Quantitative Trait Loci (QTLs). Although this is an in silico approach, searching for markers based on the characteristics and location of STRs is expected to be beneficial for researchers. Database URL: http://cabindb.iasri.res.in/pigeonpea/ PMID:23396298
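
    The chromosome- and location-wise STR search that PIPEMicroDB describes can be sketched as a simple range query. The table name, columns, and sample rows below are hypothetical stand-ins for the actual PIPEMicroDB schema (which is not given in the record), and SQLite replaces MySQL so the example is self-contained.

```python
import sqlite3

# Hypothetical schema standing in for PIPEMicroDB's MySQL tables.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE str_markers (chromosome TEXT, position INTEGER, motif TEXT)")
conn.executemany(
    "INSERT INTO str_markers VALUES (?, ?, ?)",
    [("CcLG01", 12000, "AT"), ("CcLG01", 48000, "GAA"), ("CcLG02", 5000, "CT")],
)

def find_strs(chrom, start, end, limit=10):
    """Location-wise search: STRs on one chromosome within a coordinate range,
    capped at a user-chosen number of markers, as the database interface allows."""
    return conn.execute(
        "SELECT position, motif FROM str_markers "
        "WHERE chromosome = ? AND position BETWEEN ? AND ? "
        "ORDER BY position LIMIT ?",
        (chrom, start, end, limit),
    ).fetchall()

print(find_strs("CcLG01", 10000, 50000))
```

    The selected markers would then be handed to Primer3 with flanking sequence (up to 500 bp on each side) for primer design.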

  9. PIPEMicroDB: microsatellite database and primer generation tool for pigeonpea genome.

    PubMed

    Sarika; Arora, Vasu; Iquebal, M A; Rai, Anil; Kumar, Dinesh

    2013-01-01

    Molecular markers play a significant role in crop improvement for desirable characteristics, such as high yield and disease resistance, that benefit the crop in the long term. Pigeonpea (Cajanus cajan L.) is a legume recently sequenced by a global consortium led by ICRISAT (Hyderabad, India) and has been analysed for gene prediction, synteny maps, markers, etc. We present the PIgeonPEa Microsatellite DataBase (PIPEMicroDB) with an automated primer designing tool for the pigeonpea genome, based on chromosome-wise as well as location-wise searches for primers. A total of 123 387 Short Tandem Repeats (STRs) were extracted from the publicly available pigeonpea genome using the MIcroSAtellite tool (MISA). The database is an online relational database based on a 'three-tier architecture' that catalogues information on microsatellites in MySQL, with a user-friendly interface developed in PHP. Searches for STRs may be customized by limiting their location on a chromosome as well as the number of markers in that range. This is a novel approach that has not been implemented in any existing marker database. The database has been further appended with Primer3 for primer designing of selected markers with left and right flankings of up to 500 bp. This enables researchers to select markers of choice at desired intervals over the chromosome. Furthermore, one can use individual STRs of a targeted region of a chromosome to narrow down the location of a gene of interest or linked Quantitative Trait Loci (QTLs). Although this is an in silico approach, searching for markers based on the characteristics and location of STRs is expected to be beneficial for researchers. Database URL: http://cabindb.iasri.res.in/pigeonpea/

  10. Biohydrogen production from space crew's waste simulants using thermophilic consolidated bioprocessing.

    PubMed

    Wang, Jia; Bibra, Mohit; Venkateswaran, Kasthuri; Salem, David R; Rathinam, Navanietha Krishnaraj; Gadhamshetty, Venkataraman; Sani, Rajesh K

    2018-05-01

    Human waste simulants were for the first time converted into biohydrogen by a newly developed anaerobic microbial consortium via thermophilic consolidated bioprocessing. Four different BioH2-producing consortia (denoted C1, C2, C3 and C4) were isolated and developed using human waste simulants as substrate. The thermophilic consortium C3, which contained Thermoanaerobacterium, Caloribacterium, and Caldanaerobius species as its main constituents, showed the highest BioH2 production (3.999 mmol/g) from human waste simulants under optimized conditions (pH 7.0 and 60 °C). Consortium C3 also produced significant amounts of BioH2 (5.732 mmol/g and 2.186 mmol/g) using wastewater and activated sludge, respectively. The consortium developed in this study is a promising candidate for H2 production in space applications through in situ resource utilization. Copyright © 2018 Elsevier Ltd. All rights reserved.

  11. The Implementation of a Staff Development Support System Under Decentralized Management.

    ERIC Educational Resources Information Center

    Chalk, Thomas C.; And Others

    The formation of a consortium of three elementary schools was proposed and initiated to offer inservice teacher education experiences to 45 staff members. The consortium schools shared resources to increase the scope and quality of staff development activities. A staff development program was designed to meet both group (institutional) and…

  12. EarthRef.org: Exploring aspects of a Cyber Infrastructure in Earth Science and Education

    NASA Astrophysics Data System (ADS)

    Staudigel, H.; Koppers, A.; Tauxe, L.; Constable, C.; Helly, J.

    2004-12-01

    EarthRef.org is the common host and (co-)developer of a range of earth science databases and IT resources, providing a test bed for a Cyberinfrastructure in Earth Science and Education (CIESE). EarthRef.org database efforts include the Geochemical Earth Reference Model (GERM), the Magnetics Information Consortium (MagIC), the Educational Resources for Earth Science Education (ERESE) project, the Seamount Catalog, the Mid-Ocean Ridge Catalog, the Radio-Isotope Geochronology (RiG) initiative for CHRONOS, and the Microbial Observatory for Fe-oxidizing microbes on Loihi Seamount (FeMO; the most recent development). These diverse databases are developed under a single database umbrella and webserver at the San Diego Supercomputing Center. All the databases have similar structures, with consistent metadata concepts, a common database layout, and automated upload wizards. Shared resources, including supporting databases such as an address book, a reference/publication catalog, and a common digital archive, make database development and maintenance cost-effective while guaranteeing interoperability. The EarthRef.org CIESE provides a common umbrella for synthesis information as well as sample-based data, and it bridges the gap between science and science education in middle and high schools, validating the potential for a system-wide data infrastructure in a CIESE. EarthRef.org experience has shown that effective communication with the respective communities is a key part of a successful CIESE, facilitating both utility and community buy-in. GERM has been particularly successful at developing a metadata scheme for geochemistry and in launching a new electronic journal (G-cubed) that has made much progress in data publication and in linkages between journals and community databases.
GERM has also worked, through editors and publishers, toward interfacing databases with the publication process, to accomplish a more scholarly and database-friendly data publication environment and to engage the respective science communities. MagIC has held several workshops that have resulted in an integrated data archival environment using metadata that are interchangeable with the geochemical metadata. MagIC archives a wide array of paleomagnetic and rock magnetic directional, intensity, and magnetic property data, as well as integrating computational tools. ERESE brought together librarians, teachers, and scientists to create an educational environment that supports inquiry-driven education and the use of science data. Experience with EarthRef.org demonstrates the feasibility of an effective, community-wide CIESE for data publication, archival, and modeling, as well as for outreach to the educational community.

  13. UCMP and the Internet help hospital libraries share resources.

    PubMed

    Dempsey, R; Weinstein, L

    1999-07-01

    The Medical Library Center of New York (MLCNY), a medical library consortium founded in 1959, has specialized in supporting resource sharing and fostering technological advances. In 1961, MLCNY developed, and continues to maintain, the Union Catalog of Medical Periodicals (UCMP), a resource tool including detailed data about the collections of more than 720 medical library participants. UCMP was one of the first library tools to capitalize on the benefits of computer technology and, from the beginning, invited hospital libraries to play a substantial role in its development. UCMP, beginning with products in print and later in microfiche, helped to create a new resource-sharing environment. Today, UCMP continues to capitalize on new technology by providing access via the Internet and an Oracle-based search system, offering subscribers the benefits of: a database that contains serial holdings information at an issue-specific level; a database that can be updated in real time; a system that provides multi-type searching and allows users to define how results will be sorted; and an ordering function that can more precisely target libraries that hold a specific issue of a medical journal. Current development of a Web-based system will ensure that UCMP continues to provide cost-effective and efficient resource sharing in future years.

  14. Completion of the National Land Cover Database (NLCD) 1992–2001 Land Cover Change Retrofit product

    USGS Publications Warehouse

    Fry, J.A.; Coan, Michael; Homer, Collin G.; Meyer, Debra K.; Wickham, J.D.

    2009-01-01

    The Multi-Resolution Land Characteristics Consortium has supported the development of two national digital land cover products: the National Land Cover Dataset (NLCD) 1992 and the National Land Cover Database (NLCD) 2001. Substantial differences in imagery, legends, and methods between these two land cover products must be overcome in order to support direct comparison. The NLCD 1992-2001 Land Cover Change Retrofit product was developed to provide more accurate and useful land cover change data than would be possible by direct comparison of NLCD 1992 and NLCD 2001. For the change analysis method to be both national in scale and timely, implementation required production across many Landsat Thematic Mapper (TM) and Enhanced Thematic Mapper Plus (ETM+) path/rows simultaneously. To meet these requirements, a hybrid change analysis process was developed that incorporates both post-classification comparison and specialized ratio differencing change analysis techniques. At a resolution of 30 meters, the completed NLCD 1992-2001 Land Cover Change Retrofit product contains unchanged pixels from the NLCD 2001 land cover dataset that have been cross-walked to a modified Anderson Level I class code, and changed pixels labeled with a 'from-to' class code. Analysis of the results for the conterminous United States indicated that about 3 percent of the land cover dataset changed between 1992 and 2001.
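
    The hybrid change-analysis rule described above (a pixel is labeled changed only where post-classification comparison and ratio differencing agree) can be sketched in a few lines. The class codes, ratio values, and 0.5 threshold below are invented for illustration; they are not the MRLC production parameters.

```python
import numpy as np

# Toy 1992 and 2001 class maps (modified Anderson Level I-style codes) and a
# per-pixel band ratio between the two image dates; all values are illustrative.
lc_1992 = np.array([[4, 4], [2, 4]])                 # e.g. 4 = forest, 2 = developed
lc_2001 = np.array([[4, 2], [2, 4]])
band_ratio = np.array([[1.02, 1.9], [0.98, 1.01]])   # date2 / date1 reflectance

# Ratio differencing: flag pixels whose spectral ratio departs from unity.
spectral_change = np.abs(band_ratio - 1.0) > 0.5

# Hybrid rule: label a pixel "from-to" only where the post-classification
# comparison AND the ratio test both indicate change; otherwise keep the
# (cross-walked) 2001 class code.
changed = (lc_1992 != lc_2001) & spectral_change
from_to = np.where(changed, lc_1992 * 10 + lc_2001, lc_2001)
print(from_to)
```

    Requiring agreement between the two tests suppresses spurious change from classification error alone, which is the motivation given for the hybrid approach.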

  15. UCMP and the Internet help hospital libraries share resources.

    PubMed Central

    Dempsey, R; Weinstein, L

    1999-01-01

    The Medical Library Center of New York (MLCNY), a medical library consortium founded in 1959, has specialized in supporting resource sharing and fostering technological advances. In 1961, MLCNY developed, and continues to maintain, the Union Catalog of Medical Periodicals (UCMP), a resource tool including detailed data about the collections of more than 720 medical library participants. UCMP was one of the first library tools to capitalize on the benefits of computer technology and, from the beginning, invited hospital libraries to play a substantial role in its development. UCMP, beginning with products in print and later in microfiche, helped to create a new resource-sharing environment. Today, UCMP continues to capitalize on new technology by providing access via the Internet and an Oracle-based search system, offering subscribers the benefits of: a database that contains serial holdings information at an issue-specific level; a database that can be updated in real time; a system that provides multi-type searching and allows users to define how results will be sorted; and an ordering function that can more precisely target libraries that hold a specific issue of a medical journal. Current development of a Web-based system will ensure that UCMP continues to provide cost-effective and efficient resource sharing in future years. PMID:10427426

  16. From Franchise Network to Consortium: The Evolution and Operation of a New Kind of Further and Higher Education Partnership

    ERIC Educational Resources Information Center

    Bridge, Freda; Fisher, Roy; Webb, Keith

    2003-01-01

    The Consortium for Post-Compulsory Education and Training (CPCET) is a single subject consortium of further education and higher education providers of professional development relating to in-service teacher training for the whole of the post-compulsory sector. Involving more than 30 partners spread across the North of England, CPCET evolved from…

  17. Naphthalene degradation by bacterial consortium (DV-AL) developed from Alang-Sosiya ship breaking yard, Gujarat, India.

    PubMed

    Patel, Vilas; Jain, Siddharth; Madamwar, Datta

    2012-03-01

    A naphthalene-degrading bacterial consortium (DV-AL) was developed by the enrichment culture technique from sediment collected at the Alang-Sosiya ship breaking yard, Gujarat, India. 16S rRNA gene-based molecular analyses revealed that the consortium consisted of four strains: Achromobacter sp. BAB239, Pseudomonas sp. DV-AL2, Enterobacter sp. BAB240 and Pseudomonas sp. BAB241. Consortium DV-AL was able to degrade 1000 ppm of naphthalene in Bushnell Haas medium (BHM) containing peptone (0.1%) as co-substrate, with an initial pH of 8.0, at 37°C under shaking conditions (150 rpm) within 24 h. The maximum growth rate and naphthalene degradation rate were 0.0389 h(-1) and 80 mg h(-1), respectively. Consortium DV-AL was able to utilize other aromatic and aliphatic hydrocarbons, such as benzene, phenol, carbazole, petroleum oil, diesel fuel, phenanthrene, and 2-methyl naphthalene, as sole carbon sources, and was also efficient at degrading naphthalene in the presence of other pollutants such as petroleum hydrocarbons and heavy metals. Copyright © 2011 Elsevier Ltd. All rights reserved.

  18. Searching for religion and mental health studies required health, social science, and grey literature databases.

    PubMed

    Wright, Judy M; Cottrell, David J; Mir, Ghazala

    2014-07-01

    To determine the optimal databases to search for studies of faith-sensitive interventions for treating depression, we examined the 23 health, social science, religious, and grey literature databases searched for an evidence synthesis. Databases were prioritized by yield of (1) search results, (2) potentially relevant references identified during screening, (3) included references contained in the synthesis, and (4) included references that were available in the database. We assessed the impact of databases beyond MEDLINE, EMBASE, and PsycINFO by their ability to supply studies identifying new themes and issues, and we identified pragmatic workload factors that influence database selection. PsycINFO was the best-performing database on all priority lists. ArabPsyNet, CINAHL, Dissertations and Theses, EMBASE, Global Health, Health Management Information Consortium, MEDLINE, PsycINFO, and Sociological Abstracts were essential for our searches to retrieve the included references. Citation-tracking activities and the personal library of one of the research team made significant contributions of unique, relevant references. Religion studies databases (Am Theo Lib Assoc, FRANCIS) did not provide unique, relevant references. Literature searches for reviews and evidence syntheses of religion and health studies should include social science, grey literature, and non-Western databases, personal libraries, and citation-tracking activities. Copyright © 2014 Elsevier Inc. All rights reserved.

  19. Retrospective access to data: the ENGAGE consent experience

    PubMed Central

    Tassé, Anne Marie; Budin-Ljøsne, Isabelle; Knoppers, Bartha Maria; Harris, Jennifer R

    2010-01-01

    The rapid emergence of large-scale genetic databases raises issues at the nexus of medical law and ethics, as well as the need, at both national and international levels, for an appropriate and effective framework for their governance. This is even more so for retrospective access to data for secondary uses, wherein the original consent did not foresee such use. The first part of this paper provides a brief historical overview of the ethical and legal frameworks governing consent issues in biobanking generally, before turning to the secondary use of retrospective data in epidemiological biobanks. Such use raises particularly complex issues when (1) the original consent provided is restricted; (2) the minor research subject reaches legal age; (3) the research subject dies; or (4) samples and data were obtained during medical care. Our analysis demonstrates the inconclusive, and even contradictory, nature of guidelines and confirms the current lack of compatible regulations. The second part of this paper uses the European Network for Genetic and Genomic Epidemiology (ENGAGE Consortium) as a case study to illustrate the challenges of research using previously collected data sets in Europe. Our study of 52 ENGAGE consent forms and information documents shows that a broad range of mechanisms were developed to enable secondary use of the data that are part of the ENGAGE Consortium. PMID:20332813

  20. Retrospective access to data: the ENGAGE consent experience.

    PubMed

    Tassé, Anne Marie; Budin-Ljøsne, Isabelle; Knoppers, Bartha Maria; Harris, Jennifer R

    2010-07-01

    The rapid emergence of large-scale genetic databases raises issues at the nexus of medical law and ethics, as well as the need, at both national and international levels, for an appropriate and effective framework for their governance. This is even more so for retrospective access to data for secondary uses, wherein the original consent did not foresee such use. The first part of this paper provides a brief historical overview of the ethical and legal frameworks governing consent issues in biobanking generally, before turning to the secondary use of retrospective data in epidemiological biobanks. Such use raises particularly complex issues when (1) the original consent provided is restricted; (2) the minor research subject reaches legal age; (3) the research subject dies; or (4) samples and data were obtained during medical care. Our analysis demonstrates the inconclusive, and even contradictory, nature of guidelines and confirms the current lack of compatible regulations. The second part of this paper uses the European Network for Genetic and Genomic Epidemiology (ENGAGE Consortium) as a case study to illustrate the challenges of research using previously collected data sets in Europe. Our study of 52 ENGAGE consent forms and information documents shows that a broad range of mechanisms were developed to enable secondary use of the data that are part of the ENGAGE Consortium.

  1. Identifying Professional Development Needs of High School Teachers Tasked with Online Course Design

    ERIC Educational Resources Information Center

    Lugar, Debbie J.

    2017-01-01

    To satisfy demand for online learning opportunities at the high school level, 3 school districts in the northeast United States established a consortium to share resources to develop and deliver online courses. High school teachers who volunteered to develop courses for the consortium attempted the task without previous training in online course…

  2. The Lung Image Database Consortium (LIDC): Ensuring the integrity of expert-defined “truth”

    PubMed Central

    Armato, Samuel G.; Roberts, Rachael Y.; McNitt-Gray, Michael F.; Meyer, Charles R.; Reeves, Anthony P.; McLennan, Geoffrey; Engelmann, Roger M.; Bland, Peyton H.; Aberle, Denise R.; Kazerooni, Ella A.; MacMahon, Heber; van Beek, Edwin J.R.; Yankelevitz, David; Croft, Barbara Y.; Clarke, Laurence P.

    2007-01-01

    Rationale and Objectives Computer-aided diagnostic (CAD) systems fundamentally require the opinions of expert human observers to establish “truth” for algorithm development, training, and testing. The integrity of this “truth,” however, must be established before investigators commit to this “gold standard” as the basis for their research. The purpose of this study was to develop a quality assurance (QA) model as an integral component of the “truth” collection process concerning the location and spatial extent of lung nodules observed on computed tomography (CT) scans to be included in the Lung Image Database Consortium (LIDC) public database. Materials and Methods One hundred CT scans were interpreted by four radiologists through a two-phase process. For the first of these reads (the “blinded read phase”), radiologists independently identified and annotated lesions, assigning each to one of three categories: “nodule ≥ 3mm,” “nodule < 3mm,” or “non-nodule ≥ 3mm.” For the second read (the “unblinded read phase”), the same radiologists independently evaluated the same CT scans but with all of the annotations from the previously performed blinded reads presented; each radiologist could add marks, edit or delete their own marks, change the lesion category of their own marks, or leave their marks unchanged. The post-unblinded-read set of marks was grouped into discrete nodules and subjected to the QA process, which consisted of (1) identification of potential errors introduced during the complete image annotation process (such as two marks on what appears to be a single lesion or an incomplete nodule contour) and (2) correction of those errors. Seven categories of potential error were defined; any nodule with a mark that satisfied the criterion for one of these categories was referred to the radiologist who assigned that mark for either correction or confirmation that the mark was intentional. 
Results A total of 105 QA issues were identified across 45 (45.0%) of the 100 CT scans. Radiologist review resulted in modifications to 101 (96.2%) of these potential errors. Twenty-one lesions erroneously marked as lung nodules after the unblinded reads had this designation removed through the QA process. Conclusion The establishment of “truth” must incorporate a QA process to guarantee the integrity of the datasets that will provide the basis for the development, training, and testing of CAD systems. PMID:18035275
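
    The QA step of grouping marks into discrete nodules and flagging suspect annotations can be illustrated with a toy sketch. The record layout and the single QA rule shown (one reader with two marks on the same nodule) are simplified, hypothetical stand-ins for the seven LIDC error categories, which are not enumerated in the record.

```python
from collections import defaultdict

# Hypothetical post-unblinded-read mark records: (reader_id, nodule_id, category).
# The real LIDC process first groups free-standing marks into discrete nodules
# spatially; here that grouping is assumed to have produced the nodule_id.
marks = [
    ("R1", "n1", "nodule>=3mm"),
    ("R2", "n1", "nodule>=3mm"),
    ("R1", "n1", "nodule<3mm"),      # same reader, two marks on one lesion
    ("R3", "n2", "non-nodule>=3mm"),
]

def qa_issues(marks):
    """Flag one example QA category: a reader with multiple marks on a nodule.
    Flagged (nodule, reader) pairs would be referred back to that radiologist
    for correction or confirmation that the marks were intentional."""
    per_nodule = defaultdict(list)
    for reader, nodule, category in marks:
        per_nodule[(nodule, reader)].append(category)
    return [key for key, cats in per_nodule.items() if len(cats) > 1]

print(qa_issues(marks))
```

    The essential point mirrored here is that QA does not silently correct the data: each flagged issue goes back to the radiologist who made the mark.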

  3. Significant oral cancer risk associated with low socioeconomic status.

    PubMed

    Warnakulasuriya, Saman

    2009-01-01

    Searches were made for studies in Medline, Medline In-Process and other Non-indexed Citations Embase, CINAHL, PsychINFO, CAB Abstracts 1973-date, EBM Reviews, ACP Journal Club, Cochrane Register of Controlled Trials, Cochrane Database of Systematic Reviews, Database of Abstracts of Reviews of Effects, Health Management Information Consortium database and Pubmed. Un-published data were also received from the International Head and Neck Cancer Epidemiology Consortium. Studies were identified independently by two reviewers and were included if their subject was oral and/ or oropharyngeal cancer; they used case-control methodology; gave data regarding socioeconomic status (SES; eg, educational attainment, occupational social classification or income) for both cases and controls; and the odds ratio (OR) for any SES measure was presented or could be calculated. Corresponding authors were contacted where there was an indication that data on oral and/ or oropharyngeal cancers could potentially be obtained from the wider cancer definition or grouping presented in the article, or if SES data were collected but had not been presented in the article. Methodological assessment of selected studies was undertaken. Countries where the study was undertaken were classified according to level of development and income as defined by the World Bank. Where available the adjusted OR (or crude OR) with corresponding 95% confidence intervals (CI) were extracted, or were calculated for low compared with high SES categories. Meta-analyses were performed on the following subgroups: SES measure, age, sex, global region, development level, time-period and lifestyle factor adjustments. Sensitivity analyses were conducted based on study methodological issues. Publication bias was assessed using a funnel plot. Forty-one studies met the inclusion criteria and yielded 15,344 cases and 33,852 controls. 
Compared with individuals in high SES strata, the pooled ORs for the risk of developing oral cancer were 1.85 (95% CI, 1.60-2.15; N=37 studies) for individuals with low educational attainment, 1.84 (95% CI, 1.47-2.31; N=14) for those with low occupational social class, and 2.41 (95% CI, 1.59-3.65; N=5) for people with low incomes. Subgroup analyses showed that low SES was significantly associated with increased oral cancer risk in high- and lower-income countries across the world, and the association remained when adjusting for potential behavioural confounders. Oral cancer risk associated with low SES is significant and related to lifestyle risk factors. These results provide evidence to steer…
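
    The pooled odds ratios above are standard meta-analytic summaries. As a minimal sketch (not the review's actual code; it assumes a fixed-effect inverse-variance model, whereas the paper may have used random-effects pooling, and the function name is illustrative), study-level ORs and 95% CIs can be combined on the log scale like this:

```python
import math

def pooled_or(ors, cis, z=1.96):
    """Fixed-effect (inverse-variance) pooling of study odds ratios.
    ors: per-study odds ratios; cis: per-study (lower, upper) 95% CIs.
    Returns (pooled OR, pooled CI lower bound, pooled CI upper bound)."""
    weights, log_ors = [], []
    for or_i, (lo, hi) in zip(ors, cis):
        # Recover the standard error of log(OR) from the reported CI width.
        se = (math.log(hi) - math.log(lo)) / (2 * z)
        weights.append(1.0 / se ** 2)
        log_ors.append(math.log(or_i))
    w_sum = sum(weights)
    pooled_log = sum(w * l for w, l in zip(weights, log_ors)) / w_sum
    pooled_se = math.sqrt(1.0 / w_sum)
    return (math.exp(pooled_log),
            math.exp(pooled_log - z * pooled_se),
            math.exp(pooled_log + z * pooled_se))
```

    Feeding the per-study ORs and CIs from each SES subgroup (education, occupation, income) into such a routine yields summary estimates of the form reported above.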

  4. Northeast Artificial Intelligence Consortium Annual Report for 1987. Volume 4. Research in Automated Photointerpretation

    DTIC Science & Technology

    1989-03-01

    Automated Photointerpretation Testbed… An Initial Segmentation of an Image… Markov random field (MRF) theory provides a powerful alternative texture model and has resulted in intensive research activity in MRF model-based texture analysis… Additional, and perhaps more powerful, features have to be incorporated into the image segmentation procedure… Object detection…

  5. NLCD tree canopy cover (TCC) maps of the contiguous United States and coastal Alaska

    Treesearch

    Robert Benton; Bonnie Ruefenacht; Vicky Johnson; Tanushree Biswas; Craig Baker; Mark Finco; Kevin Megown; John Coulston; Ken Winterberger; Mark Riley

    2015-01-01

    A tree canopy cover (TCC) map is one of three elements in the National Land Cover Database (NLCD) 2011 suite of nationwide geospatial data layers. In 2010, the USDA Forest Service (USFS) committed to creating the TCC layer as a member of the Multi-Resolution Land Cover (MRLC) consortium. A general methodology for creating the TCC layer was reported at the 2012 FIA...

  6. The Relationship between SAT® Scores and Retention to the Second Year: Replication with the 2010 SAT Validity Sample. Statistical Report 2013-1

    ERIC Educational Resources Information Center

    Mattern, Krista D.; Patterson, Brian F.

    2013-01-01

    The College Board formed a research consortium with four-year colleges and universities to build a national higher education database with the primary goal of validating the revised SAT for use in college admission. A study by Mattern and Patterson (2009) examined the relationship between SAT scores and retention to the second year. The sample…

  7. The Relationship between SAT Scores and Retention to the Second Year: Replication with 2009 SAT Validity Sample. Statistical Report 2011-3

    ERIC Educational Resources Information Center

    Mattern, Krista D.; Patterson, Brian F.

    2012-01-01

    The College Board formed a research consortium with four-year colleges and universities to build a national higher education database with the primary goal of validating the revised SAT for use in college admission. A study by Mattern and Patterson (2009) examined the relationship between SAT scores and retention to the second year of college. The…

  8. UniProtKB/Swiss-Prot, the Manually Annotated Section of the UniProt KnowledgeBase: How to Use the Entry View.

    PubMed

    Boutet, Emmanuel; Lieberherr, Damien; Tognolli, Michael; Schneider, Michel; Bansal, Parit; Bridge, Alan J; Poux, Sylvain; Bougueleret, Lydie; Xenarios, Ioannis

    2016-01-01

    The Universal Protein Resource (UniProt, http://www.uniprot.org) consortium is an initiative of the SIB Swiss Institute of Bioinformatics (SIB), the European Bioinformatics Institute (EBI) and the Protein Information Resource (PIR) to provide the scientific community with a central resource for protein sequences and functional information. The UniProt consortium maintains the UniProt KnowledgeBase (UniProtKB), updated every 4 weeks, and several supplementary databases including the UniProt Reference Clusters (UniRef) and the UniProt Archive (UniParc). The Swiss-Prot section of the UniProt KnowledgeBase (UniProtKB/Swiss-Prot) contains publicly available protein sequences with expert manual annotation, obtained from a broad spectrum of organisms. Plant protein entries are produced in the frame of the Plant Proteome Annotation Program (PPAP), with an emphasis on characterized proteins of Arabidopsis thaliana and Oryza sativa. High-level annotations provided by UniProtKB/Swiss-Prot are widely used to predict annotation of newly available proteins through automatic pipelines. The purpose of this chapter is to present a guided tour of a UniProtKB/Swiss-Prot entry. We will also present some of the tools and databases that are linked to each entry.

  9. Reuse at the Software Productivity Consortium

    NASA Technical Reports Server (NTRS)

    Weiss, David M.

    1989-01-01

    The Software Productivity Consortium is sponsored by 14 aerospace companies as a developer of software engineering methods and tools. Software reuse and prototyping are currently the major emphasis areas. The Methodology and Measurement Project in the Software Technology Exploration Division has developed some concepts for reuse which they intend to develop into a synthesis process. They have identified two approaches to software reuse: opportunistic and systematic. The assumptions underlying the systematic approach, phrased as hypotheses, are the following: the redevelopment hypothesis, i.e., software developers solve the same problems repeatedly; the oracle hypothesis, i.e., developers are able to predict variations from one redevelopment to others; and the organizational hypothesis, i.e., software must be organized according to behavior and structure to take advantage of the predictions that the developers make. The conceptual basis for reuse includes: program families, information hiding, abstract interfaces, uses and information hiding hierarchies, and process structure. The primary reusable software characteristics are black-box descriptions, structural descriptions, and composition and decomposition based on program families. Automated support can be provided for systematic reuse, and the Consortium is developing a prototype reuse library and guidebook. The software synthesis process that the Consortium is aiming toward includes modeling, refinement, prototyping, reuse, assessment, and new construction.

  10. A Consortium for Research Development (A Consortium for Educational Research Comprised of Seven Private Liberal Arts Colleges). Final Report.

    ERIC Educational Resources Information Center

    Johnson, Clifton H.

    The initial program of the consortium, which comprised Fisk University, Houston-Tillotson College, LeMoyne College, Dillard University, Tougaloo College, Talladega College, and Clark College, and which extended from July 1967 to July 1970 with a total budget of $85,000, was to be basic institutional research that would help the seven predominately…

  11. Putting It All Together: Developing College Television Consortia. The State Consortia Model.

    ERIC Educational Resources Information Center

    McAuliffe, Daniel G.

    A model for a television consortium for Connecticut regional community colleges is presented. A successful consortium involves commitment to the purpose of providing access to higher education for nontraditional students, willingness of program developers and administrators to listen to and work with people having divergent viewpoints, adequate…

  12. Pilot Research Summaries, 1967-1970.

    ERIC Educational Resources Information Center

    Casey, James L.; Hayes, Larry K.

    This report contains one-page summaries of a majority of the 134 research studies funded through the Oklahoma Consortium on Research Development. The research covers the whole spectrum of academic topics, from nursing to ecology to art to politics. Brief summaries of a majority of the 37 development seminars funded through the Consortium are…

  13. Novel LOVD databases for hereditary breast cancer and colorectal cancer genes in the Chinese population.

    PubMed

    Pan, Min; Cong, Peikuan; Wang, Yue; Lin, Changsong; Yuan, Ying; Dong, Jian; Banerjee, Santasree; Zhang, Tao; Chen, Yanling; Zhang, Ting; Chen, Mingqing; Hu, Peter; Zheng, Shu; Zhang, Jin; Qi, Ming

    2011-12-01

    The Human Variome Project (HVP) is an international consortium of clinicians, geneticists, and researchers from over 30 countries, aiming to facilitate the establishment and maintenance of standards, systems, and infrastructure for the worldwide collection and sharing of all genetic variations affecting human disease. The HVP-China Node will build new and supplement existing databases of genetic diseases. As the first effort, we have created a novel variant database of BRCA1 and BRCA2, mismatch repair genes (MMR), and APC genes for breast cancer, Lynch syndrome, and familial adenomatous polyposis (FAP), respectively, in the Chinese population using the Leiden Open Variation Database (LOVD) format. We searched PubMed and some Chinese search engines to collect all the variants of these genes in the Chinese population that have already been detected and reported. There are some differences in the gene variants between the Chinese population and that of other ethnicities. The database is available online at http://www.genomed.org/LOVD/. Our database is visible to users searching other LOVD databases (e.g., via Google search or NCBI GeneTests search). Remote submissions are accepted, and the information is updated monthly. © 2011 Wiley Periodicals, Inc.

  14. The IntAct molecular interaction database in 2012

    PubMed Central

    Kerrien, Samuel; Aranda, Bruno; Breuza, Lionel; Bridge, Alan; Broackes-Carter, Fiona; Chen, Carol; Duesbury, Margaret; Dumousseau, Marine; Feuermann, Marc; Hinz, Ursula; Jandrasits, Christine; Jimenez, Rafael C.; Khadake, Jyoti; Mahadevan, Usha; Masson, Patrick; Pedruzzi, Ivo; Pfeiffenberger, Eric; Porras, Pablo; Raghunath, Arathi; Roechert, Bernd; Orchard, Sandra; Hermjakob, Henning

    2012-01-01

    IntAct is an open-source, open data molecular interaction database populated by data either curated from the literature or from direct data depositions. Two levels of curation are now available within the database, with both IMEx-level annotation and less detailed MIMIx-compatible entries currently supported. As of September 2011, IntAct contains approximately 275 000 curated binary interaction evidences from over 5000 publications. The IntAct website has been improved to enhance the search process and in particular the graphical display of the results. New data download formats are also available, which will facilitate the inclusion of IntAct's data in the Semantic Web. IntAct is an active contributor to the IMEx consortium (http://www.imexconsortium.org). IntAct source code and data are freely available at http://www.ebi.ac.uk/intact. PMID:22121220

  15. A Novel Cross-Disciplinary Multi-Institute Approach to Translational Cancer Research: Lessons Learned from Pennsylvania Cancer Alliance Bioinformatics Consortium (PCABC)

    PubMed Central

    Patel, Ashokkumar A.; Gilbertson, John R.; Showe, Louise C.; London, Jack W.; Ross, Eric; Ochs, Michael F.; Carver, Joseph; Lazarus, Andrea; Parwani, Anil V.; Dhir, Rajiv; Beck, J. Robert; Liebman, Michael; Garcia, Fernando U.; Prichard, Jeff; Wilkerson, Myra; Herberman, Ronald B.; Becich, Michael J.

    2007-01-01

    Background: The Pennsylvania Cancer Alliance Bioinformatics Consortium (PCABC, http://www.pcabc.upmc.edu) is one of the first major project-based initiatives stemming from the Pennsylvania Cancer Alliance that was funded for four years by the Department of Health of the Commonwealth of Pennsylvania. The objective of this initiative was to create a prototype biorepository and bioinformatics infrastructure with a robust data warehouse by developing (1) a statewide data model for bioinformatics and a repository of serum and tissue samples; (2) a data model for biomarker data storage; and (3) a public-access website for disseminating research results and bioinformatics tools. The members of the Consortium cooperate closely, exploring the opportunity for sharing clinical, genomic and other bioinformatics data on patient samples in oncology, for the purpose of developing collaborative research programs across cancer research institutions in Pennsylvania. The Consortium’s intention was to establish a virtual repository of many clinical specimens residing in various centers across the state, in order to make them available for research. One of our primary goals was to facilitate the identification of cancer-specific biomarkers and encourage collaborative research efforts among the participating centers. Methods: The PCABC has developed unique partnerships so that every region of the state can effectively contribute and participate. It includes over 80 individuals from 14 organizations, and plans to expand to partners outside the State. This has created a network of researchers, clinicians, bioinformaticians, cancer registrars, program directors, and executives from academic and community health systems, as well as external corporate partners - all working together to accomplish a common mission.
The various sub-committees have developed a common IRB protocol template, common data elements for standardizing data collections for three organ sites, intellectual property/tech transfer agreements, and material transfer agreements that have been approved by each of the member institutions. This was the foundational work that has led to the development of a centralized data warehouse that has met each of the institutions’ IRB/HIPAA standards. Results: Currently, this “virtual biorepository” has over 58,000 annotated samples from 11,467 cancer patients available for research purposes. The clinical annotation of tissue samples is done either manually over the internet or via semi-automated batch modes that map local data elements to PCABC common data elements. The database currently holds information on 7188 cases (associated with 9278 specimens and 46,666 annotated blocks and blood samples) of prostate cancer, 2736 cases (associated with 3796 specimens and 9336 annotated blocks and blood samples) of breast cancer and 1543 cases (including 1334 specimens and 2671 annotated blocks and blood samples) of melanoma. These numbers continue to grow, and plans to integrate new tumor sites are in progress. Furthermore, the group has also developed a central web-based tool that allows investigators to share their translational (genomics/proteomics) experiment data on research evaluating potential biomarkers via a central location on the Consortium’s web site. Conclusions: The technological achievements and the statewide informatics infrastructure that have been established by the Consortium will enable robust and efficient studies of biomarkers and their relevance to the clinical course of cancer.
Studies resulting from the creation of the Consortium may allow for better classification of cancer types, more accurate assessment of disease prognosis, a better ability to identify the most appropriate individuals for clinical trial participation, and better surrogate markers of disease progression and/or response to therapy. PMID:19455246

  16. BigMouth: a multi-institutional dental data repository

    PubMed Central

    Walji, Muhammad F; Kalenderian, Elsbeth; Stark, Paul C; White, Joel M; Kookal, Krishna K; Phan, Dat; Tran, Duong; Bernstam, Elmer V; Ramoni, Rachel

    2014-01-01

    Few oral health databases are available for research and the advancement of evidence-based dentistry. In this work we developed a centralized data repository derived from electronic health records (EHRs) at four dental schools participating in the Consortium of Oral Health Research and Informatics. A multi-stakeholder committee developed a data governance framework that encouraged data sharing while allowing control of contributed data. We adopted the i2b2 data warehousing platform and mapped data from each institution to a common reference terminology. We realized that dental EHRs urgently need to adopt common terminologies. While all used the same treatment code set, only three of the four sites used a common diagnostic terminology, and there were wide discrepancies in how medical and dental histories were documented. BigMouth was successfully launched in August 2012 with data on 1.1 million patients, and made available to users at the contributing institutions. PMID:24993547

  17. The Neuroscience Information Framework: A Data and Knowledge Environment for Neuroscience

    PubMed Central

    Akil, Huda; Ascoli, Giorgio A.; Bowden, Douglas M.; Bug, William; Donohue, Duncan E.; Goldberg, David H.; Grafstein, Bernice; Grethe, Jeffrey S.; Gupta, Amarnath; Halavi, Maryam; Kennedy, David N.; Marenco, Luis; Martone, Maryann E.; Miller, Perry L.; Müller, Hans-Michael; Robert, Adrian; Shepherd, Gordon M.; Sternberg, Paul W.; Van Essen, David C.; Williams, Robert W.

    2009-01-01

    With support from the Institutes and Centers forming the NIH Blueprint for Neuroscience Research, we have designed and implemented a new initiative for integrating access to and use of Web-based neuroscience resources: the Neuroscience Information Framework. The Framework arises from the expressed need of the neuroscience community for neuroinformatic tools and resources to aid scientific inquiry, builds upon prior development of neuroinformatics by the Human Brain Project and others, and directly derives from the Society for Neuroscience’s Neuroscience Database Gateway. Partnered with the Society, its Neuroinformatics Committee, and volunteer consultant-collaborators, our multi-site consortium has developed: (1) a comprehensive, dynamic, inventory of Web-accessible neuroscience resources, (2) an extended and integrated terminology describing resources and contents, and (3) a framework accepting and aiding concept-based queries. Evolving instantiations of the Framework may be viewed at http://nif.nih.gov, http://neurogateway.org, and other sites as they come on line. PMID:18946742

  18. The neuroscience information framework: a data and knowledge environment for neuroscience.

    PubMed

    Gardner, Daniel; Akil, Huda; Ascoli, Giorgio A; Bowden, Douglas M; Bug, William; Donohue, Duncan E; Goldberg, David H; Grafstein, Bernice; Grethe, Jeffrey S; Gupta, Amarnath; Halavi, Maryam; Kennedy, David N; Marenco, Luis; Martone, Maryann E; Miller, Perry L; Müller, Hans-Michael; Robert, Adrian; Shepherd, Gordon M; Sternberg, Paul W; Van Essen, David C; Williams, Robert W

    2008-09-01

    With support from the Institutes and Centers forming the NIH Blueprint for Neuroscience Research, we have designed and implemented a new initiative for integrating access to and use of Web-based neuroscience resources: the Neuroscience Information Framework. The Framework arises from the expressed need of the neuroscience community for neuroinformatic tools and resources to aid scientific inquiry, builds upon prior development of neuroinformatics by the Human Brain Project and others, and directly derives from the Society for Neuroscience's Neuroscience Database Gateway. Partnered with the Society, its Neuroinformatics Committee, and volunteer consultant-collaborators, our multi-site consortium has developed: (1) a comprehensive, dynamic, inventory of Web-accessible neuroscience resources, (2) an extended and integrated terminology describing resources and contents, and (3) a framework accepting and aiding concept-based queries. Evolving instantiations of the Framework may be viewed at http://nif.nih.gov, http://neurogateway.org, and other sites as they come on line.

  19. Towards Effective International Work-Integrated Learning Practica in Development Studies: Reflections on the Australian Consortium for "In-Country" Indonesian Studies' Development Studies Professional Practicum

    ERIC Educational Resources Information Center

    Rosser, Andrew

    2012-01-01

    In recent years, overseas work-integrated learning practica have become an increasingly important part of development studies curricula in "Northern" universities. This paper examines the factors that shape pedagogical effectiveness in the provision of such programmes, focusing on the case of the Australian Consortium for…

  20. Traditional Chinese medicine research in the post-genomic era: good practice, priorities, challenges and opportunities.

    PubMed

    Uzuner, Halil; Bauer, Rudolf; Fan, Tai-Ping; Guo, De-An; Dias, Alberto; El-Nezami, Hani; Efferth, Thomas; Williamson, Elizabeth M; Heinrich, Michael; Robinson, Nicola; Hylands, Peter J; Hendry, Bruce M; Cheng, Yung-Chi; Xu, Qihe

    2012-04-10

    GP-TCM is the first EU-funded Coordination Action consortium dedicated to traditional Chinese medicine (TCM) research. This paper aims to summarise the objectives, structure and activities of the consortium and introduces the position of the consortium regarding good practice, priorities, challenges and opportunities in TCM research. Serving as the introductory paper for the GP-TCM Journal of Ethnopharmacology special issue, this paper describes the roadmap of this special issue and reports how the main outputs of the ten GP-TCM work packages are integrated, and have led to consortium-wide conclusions. Literature studies, opinion polls and discussions among consortium members and stakeholders. By January 2012, through 3 years of team building, the GP-TCM consortium had grown into a large collaborative network involving ∼200 scientists from 24 countries and 107 institutions. Consortium members had worked closely to address good practice issues related to various aspects of Chinese herbal medicine (CHM) and acupuncture research, the focus of this Journal of Ethnopharmacology special issue, leading to state-of-the-art reports, guidelines and consensus on the application of omics technologies in TCM research. In addition, through an online survey open to GP-TCM members and non-members, we polled opinions on grand priorities, challenges and opportunities in TCM research. Based on the poll, although consortium members and non-members had diverse opinions on the major challenges in the field, both groups agreed that high-quality efficacy/effectiveness and mechanistic studies are grand priorities and that the TCM legacy in general and its management of chronic diseases in particular represent grand opportunities. Consortium members cast their votes of confidence in omics and systems biology approaches to TCM research and believed that quality and pharmacovigilance of TCM products are not only grand priorities, but also grand challenges.
Non-members, however, gave priority to integrative medicine, expressed concern about the impact of regulation of TCM practitioners, and emphasised intersectoral collaborations in funding TCM research, especially clinical trials. The GP-TCM consortium made great efforts to address some fundamental issues in TCM research, including developing guidelines, as well as identifying priorities, challenges and opportunities. These consortium guidelines and consensus will need dissemination, validation and further development through continued interregional, interdisciplinary and intersectoral collaborations. To promote this, a new consortium, known as the GP-TCM Research Association, is being established to succeed the 3-year fixed-term FP7 GP-TCM consortium and will be officially launched at the Final GP-TCM Congress in Leiden, the Netherlands, in April 2012. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  1. Biostimulation of metal-resistant microbial consortium to remove zinc from contaminated environments.

    PubMed

    Mejias Carpio, Isis E; Franco, Diego Castillo; Zanoli Sato, Maria Inês; Sakata, Solange; Pellizari, Vivian H; Seckler Ferreira Filho, Sidney; Frigi Rodrigues, Debora

    2016-04-15

    Understanding the diversity and metal removal ability of microorganisms associated with contaminated aquatic environments is essential to develop metal remediation technologies in engineered environments. This study investigates through 16S rRNA deep sequencing the composition of a biostimulated microbial consortium obtained from the polluted Tietê River in São Paulo, Brazil. The bacterial diversity of the biostimulated consortium obtained from the contaminated water and sediment was compared to the original sample. The results of the comparative sequencing analyses showed that the biostimulated consortium and the natural environment had γ-Proteobacteria, Firmicutes, and uncultured bacteria as the major classes of microorganisms. The consortium's optimum zinc removal capacity, evaluated in batch experiments, was achieved at pH=5 with an equilibrium contact time of 120 min, and a higher Zn-biomass affinity (KF=1.81) than most pure cultures previously investigated. Analysis of the functional groups found in the consortium demonstrated that amine, carboxyl, hydroxyl, and phosphate groups present in the consortium cells were responsible for zinc uptake. Copyright © 2016 Elsevier B.V. All rights reserved.
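
    The Zn-biomass affinity reported above (KF=1.81) is the Freundlich isotherm constant. As a minimal sketch (the function name and data are illustrative, and the paper does not specify its fitting procedure), the Freundlich parameters can be estimated from batch equilibrium data by least squares on the log-linearized model q = KF·C^(1/n):

```python
import math

def fit_freundlich(c_eq, q_eq):
    """Fit the Freundlich isotherm q = KF * C**(1/n) by ordinary least
    squares on the linearized form: log q = log KF + (1/n) * log C.
    c_eq: equilibrium concentrations; q_eq: equilibrium uptakes.
    Returns (KF, n)."""
    xs = [math.log(c) for c in c_eq]
    ys = [math.log(q) for q in q_eq]
    m = len(xs)
    mean_x = sum(xs) / m
    mean_y = sum(ys) / m
    # Slope of the regression line is 1/n; intercept is log KF.
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return math.exp(intercept), 1.0 / slope
```

    With equilibrium data following q = 1.81·C^0.5, the fit recovers KF ≈ 1.81 and n ≈ 2, matching the form of the affinity constant quoted in the abstract.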

  2. IDOMAL: an ontology for malaria.

    PubMed

    Topalis, Pantelis; Mitraka, Elvira; Bujila, Ioana; Deligianni, Elena; Dialynas, Emmanuel; Siden-Kiamos, Inga; Troye-Blomberg, Marita; Louis, Christos

    2010-08-10

    Ontologies are rapidly becoming a necessity for the design of efficient information technology tools, especially databases, because they permit the organization of stored data using logical rules and defined terms that are understood by both humans and machines. This has as a consequence both enhanced usage and enhanced interoperability of databases and related resources. It is hoped that IDOMAL, the malaria ontology, will prove a valuable instrument when implemented in both malaria research and control measures. The OBO-Edit2 software was used for the construction of the ontology. IDOMAL is based on the Basic Formal Ontology (BFO) and follows the rules set by the OBO Foundry consortium. The first version of the malaria ontology covers both clinical and epidemiological aspects of the disease, as well as disease and vector biology. IDOMAL is meant to later become the nucleation site for a much larger ontology of vector-borne diseases, which will itself be an extension of a large ontology of infectious diseases (IDO). The latter is currently being developed in the framework of a large international collaborative effort. IDOMAL, already freely available in its first version, will form part of a suite of ontologies that will be used to drive IT tools and databases specifically constructed to help control malaria and, later, other vector-borne diseases. This suite already consists of the ontology described here as well as the one on insecticide resistance that has been available for some time. Additional components are being developed and introduced into IDOMAL.

  3. ASEAN Mineral Database and Information System (AMDIS)

    NASA Astrophysics Data System (ADS)

    Okubo, Y.; Ohno, T.; Bandibas, J. C.; Wakita, K.; Oki, Y.; Takahashi, Y.

    2014-12-01

    AMDIS was launched officially at the Fourth ASEAN Ministerial Meeting on Minerals on 28 November 2013. In cooperation with the Geological Survey of Japan, the web-based GIS was developed using Free and Open Source Software (FOSS) and the Open Geospatial Consortium (OGC) standards. The system is composed of local databases and a centralized GIS. The local databases created and updated using the centralized GIS are accessible from the portal site. The system introduces distinct advantages over traditional GIS: global reach, a large number of users, better cross-platform capability, no charge for users or providers, ease of use, and unified updates. By raising the transparency of mineral information to mining companies and to the public, AMDIS shows that mineral resources are abundant throughout the ASEAN region; however, many data gaps remain. We understand that such problems occur because of insufficient governance of mineral resources. Mineral governance, as we use the term, is a concept that enforces and maximizes the capacity and systems of the government institutions that manage the minerals sector. The elements of mineral governance include a) strengthening of information infrastructure facilities, b) technological and legal capacities of state-owned mining companies to fully engage with mining sponsors, c) government-led management of mining projects by supporting the project implementation units, d) government capacity in mineral management such as the control and monitoring of mining operations, and e) facilitation of regional and local development plans and their implementation with the private sector.
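
    Web GIS portals built on OGC standards, such as the one described above, typically expose map layers through interfaces like the Web Map Service (WMS). As a minimal sketch (the endpoint URL and layer name are hypothetical; the query parameters are those defined by the OGC WMS 1.3.0 specification), a GetMap request can be constructed like this:

```python
from urllib.parse import urlencode

def wms_getmap_url(base_url, layer, bbox, width=800, height=600):
    """Build an OGC WMS 1.3.0 GetMap request URL.
    bbox is (min_lat, min_lon, max_lat, max_lon): WMS 1.3.0 uses
    latitude-first axis order for EPSG:4326."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "CRS": "EPSG:4326",
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": "image/png",
    }
    return base_url + "?" + urlencode(params)

# Hypothetical example: a mineral-deposit layer over the ASEAN region.
url = wms_getmap_url("https://example.org/wms", "mineral_deposits",
                     (-10.0, 95.0, 25.0, 140.0))
```

    Because the request is a plain URL, any modern browser can fetch the rendered map image, which is what gives such portals their cross-platform reach.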

  4. Northeast Artificial Intelligence Consortium Annual Report. Volume 2. 1988 Discussing, Using, and Recognizing Plans (NLP)

    DTIC Science & Technology

    1989-10-01

    Encontro Português de Inteligência Artificial (EPIA), Oporto, Portugal, September 1985. [15] N. J. Nilsson, Principles of Artificial Intelligence, Tioga… RADC-TR-89-259, Vol. II (of twelve), Interim Report, October 1989, AD-A218 154, Northeast Artificial Intelligence Consortium (NAIC), Rome Air Development Center…

  5. Proceedings -- US Russian workshop on fuel cell technologies (in English;Russian)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, B.; Sylwester, A.

    1996-04-01

    On September 26-28, 1995, Sandia National Laboratories sponsored the first Joint US/Russian Workshop on Fuel Cell Technology at the Marriott Hotel in Albuquerque, New Mexico. This workshop brought together the US and Russian fuel cell communities as represented by users, producers, R and D establishments and government agencies. Customer needs and potential markets in both countries were discussed to establish a customer focus for the workshop. Parallel technical sessions defined research needs and opportunities for collaboration to advance fuel cell technology. A desired outcome of the workshop was the formation of a Russian/American Fuel Cell Consortium to advance fuel cell technology for application in emerging markets in both countries. This consortium is envisioned to involve industry and national labs in both countries. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

  6. Building psychosocial programming in geriatrics fellowships: a consortium model.

    PubMed

    Adelman, Ronald D; Ansell, Pamela; Breckman, Risa; Snow, Caitlin E; Ehrlich, Amy R; Greene, Michele G; Greenberg, Debra F; Raik, Barrie L; Raymond, Joshua J; Clabby, John F; Fields, Suzanne D; Breznay, Jennifer B

    2011-01-01

    Geriatric psychosocial problems are prevalent and significantly affect the physical health and overall well-being of older adults. Geriatrics fellows require psychosocial education, and yet to date, geriatrics fellowship programs have not developed a comprehensive geriatric psychosocial curriculum. Fellowship programs in the New York tristate area collaboratively created the New York Metropolitan Area Consortium to Strengthen Psychosocial Programming in Geriatrics Fellowships in 2007 to address this shortfall. The goal of the Consortium is to develop model educational programs for geriatrics fellows that highlight psychosocial issues affecting elder care, share interinstitutional resources, and energize fellowship program directors and faculty. In 2008, 2009, and 2010, Consortium faculty collaboratively designed and implemented a psychosocial educational conference for geriatrics fellows. Cumulative participation at the conferences included 146 geriatrics fellows from 20 academic institutions taught by interdisciplinary Consortium faculty. Formal evaluations from the participants indicated that the conference: a) positively affected fellows' knowledge of, interest in, and comfort with psychosocial issues; b) would have a positive impact on the quality of care provided to older patients; and c) encouraged valuable interactions with fellows and faculty from other institutions. The Consortium, as an educational model for psychosocial learning, has a positive impact on geriatrics fellowship training and may be replicable in other localities.

  7. LIFEdb: a database for functional genomics experiments integrating information from external sources, and serving as a sample tracking system

    PubMed Central

    Bannasch, Detlev; Mehrle, Alexander; Glatting, Karl-Heinz; Pepperkok, Rainer; Poustka, Annemarie; Wiemann, Stefan

    2004-01-01

    We have implemented LIFEdb (http://www.dkfz.de/LIFEdb) to link information regarding novel human full-length cDNAs generated and sequenced by the German cDNA Consortium with functional information on the encoded proteins produced in functional genomics and proteomics approaches. The database also serves as a sample-tracking system to manage the process from cDNA to experimental read-out and data interpretation. A web interface enables the scientific community to explore and visualize features of the annotated cDNAs and ORFs combined with experimental results, and thus helps to unravel new features of proteins with as yet unknown functions. PMID:14681468

  8. Introducing a New Interface for the Online MagIC Database by Integrating Data Uploading, Searching, and Visualization

    NASA Astrophysics Data System (ADS)

    Jarboe, N.; Minnett, R.; Constable, C.; Koppers, A. A.; Tauxe, L.

    2013-12-01

    The Magnetics Information Consortium (MagIC) is dedicated to supporting the paleomagnetic, geomagnetic, and rock magnetic communities through the development and maintenance of an online database (http://earthref.org/MAGIC/), data upload and quality control, searches, data downloads, and visualization tools. While MagIC has completed importing some of the IAGA paleomagnetic databases (TRANS, PINT, PSVRL, GPMDB) and continues to import others (ARCHEO, MAGST and SECVR), further individual data uploading from the community contributes a wealth of easily accessible, rich datasets. Previously, uploading data to the MagIC database required an Excel spreadsheet on either a Mac or PC. The new upload method uses an HTML5 web interface whose only requirement is a modern browser. This web interface highlights all errors discovered in a dataset at once, instead of the iterative error-checking process of the previous Excel spreadsheet data checker. As a web service, the community will always have easy access to the most up-to-date and bug-free version of the data upload software. The filtering search mechanism of the MagIC database has been changed to a more intuitive system in which the data from each contribution are displayed in tables similar to how the data are uploaded (http://earthref.org/MAGIC/search/). Searches themselves can be saved as a permanent URL, if desired; the saved search URL can then be used as a citation in a publication. When appropriate, plots (equal area, Zijderveld, Arai, demagnetization, etc.) are associated with the data to give the user a quicker understanding of the underlying dataset. The MagIC database will continue to evolve to meet the needs of the paleomagnetic, geomagnetic, and rock magnetic communities.
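
    The citable saved-search idea above amounts to serializing a set of filters into a stable query string. A minimal sketch, assuming hypothetical filter keys (`location`, `method`) that are NOT the actual MagIC query schema; only the base search URL comes from the abstract:

```python
from urllib.parse import urlencode

# Base search endpoint taken from the abstract; the filter keys used
# with it below are illustrative assumptions, not MagIC's real schema.
BASE = "http://earthref.org/MAGIC/search/"

def saved_search_url(filters: dict) -> str:
    """Encode a set of search filters as a permanent, citable URL.

    Keys are sorted so the same search always yields the same URL
    string, which matters if the URL is to be cited in a publication.
    """
    return BASE + "?" + urlencode(sorted(filters.items()))

url = saved_search_url({"location": "Hawaii", "method": "thermal"})
# url -> "http://earthref.org/MAGIC/search/?location=Hawaii&method=thermal"
```

    Sorting the keys is the one design choice worth noting: a citable URL should be canonical, so two users building the same search get byte-identical links.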

  9. HEROD: a human ethnic and regional specific omics database.

    PubMed

    Zeng, Xian; Tao, Lin; Zhang, Peng; Qin, Chu; Chen, Shangying; He, Weidong; Tan, Ying; Xia Liu, Hong; Yang, Sheng Yong; Chen, Zhe; Jiang, Yu Yang; Chen, Yu Zong

    2017-10-15

    Genetic and gene expression variations within and between populations and across geographical regions have substantial effects on biological phenotypes, diseases, and therapeutic responses. The development of precision medicines can be facilitated by OMICS studies of patients of specific ethnicities and geographic regions. However, facilities for broadly and conveniently accessing such ethnic- and region-specific OMICS data have been inadequate. Here, we introduce a new free database, HEROD, a human ethnic and regional specific OMICS database. Its first version contains the gene expression data of 53 070 patients of 169 diseases in seven ethnic populations from 193 cities/regions in 49 nations, curated from the Gene Expression Omnibus (GEO), the ArrayExpress Archive of Functional Genomics Data (ArrayExpress), the Cancer Genome Atlas (TCGA) and the International Cancer Genome Consortium (ICGC). Geographic region information for curated patients was mainly extracted manually from the referenced publications of each original study. These data can be accessed and downloaded via keyword search, world-map search, and menu-bar search of disease name, international classification of disease code, geographical region, location of sample collection, ethnic population, gender, age, sample source organ, patient type (patient or healthy), sample type (disease or normal tissue) and assay type on the web interface. The HEROD database is freely accessible at http://bidd2.nus.edu.sg/herod/index.php. The database and web interface are implemented in MySQL, PHP and HTML with all major browsers supported. phacyz@nus.edu.sg. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  10. Database Organisation in a Web-Enabled Free and Open-Source Software (foss) Environment for Spatio-Temporal Landslide Modelling

    NASA Astrophysics Data System (ADS)

    Das, I.; Oberai, K.; Sarathi Roy, P.

    2012-07-01

    Landslides manifest as different mass-movement processes and are considered among the most complex natural hazards occurring on the earth's surface. Making landslide databases available online via the WWW (World Wide Web) promotes the spread of landslide information to all stakeholders. The aim of this research is to present a comprehensive database for generating landslide hazard scenarios with the help of available historic records of landslides and geo-environmental factors, and to make them available over the Web using geospatial Free & Open Source Software (FOSS). FOSS reduces the cost of the project drastically, as proprietary software is very costly. Landslide data generated for the period 1982 to 2009 were compiled along the national highway road corridor in the Indian Himalayas. All the geo-environmental datasets, along with the landslide susceptibility map, were served through a WebGIS client interface. The open-source University of Minnesota (UMN) MapServer was used as the GIS server software for developing the web-enabled landslide geospatial database. A PHP/MapScript server-side application serves as the front end, and PostgreSQL with the PostGIS extension serves as the back end for the web-enabled landslide spatio-temporal databases. This dynamic virtual visualization process through a web platform brings an understanding of the landslides and the resulting damage closer to the affected people and the user community. The landslide susceptibility dataset is also made available as an Open Geospatial Consortium (OGC) Web Feature Service (WFS), which can be accessed through any OGC-compliant open-source or proprietary GIS software.
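
    The WFS publication described above is consumed through the standard OGC key-value-pair protocol: a client issues a `GetFeature` request against the server endpoint. A minimal sketch of building such a request URL; the endpoint and layer name (`landslide_susceptibility`) are hypothetical, only the WFS parameter names themselves are standard:

```python
from urllib.parse import urlencode

def wfs_getfeature_url(base_url, type_name, bbox=None, version="1.1.0"):
    """Build an OGC WFS GetFeature request URL (key-value-pair encoding).

    base_url and type_name here are placeholders; a real deployment
    (e.g. the UMN MapServer instance in the abstract) supplies its own.
    """
    params = {
        "service": "WFS",
        "version": version,
        "request": "GetFeature",
        "typeName": type_name,
    }
    if bbox is not None:
        # Bounding box as (minx, miny, maxx, maxy) in layer coordinates.
        params["bbox"] = ",".join(str(v) for v in bbox)
    return base_url + "?" + urlencode(params)

url = wfs_getfeature_url("http://example.org/cgi-bin/mapserv",
                         "landslide_susceptibility",
                         bbox=(78.0, 30.0, 78.5, 30.5))
```

    Any OGC-compliant client, open-source or proprietary, can issue the same request, which is exactly the interoperability the abstract claims for the susceptibility dataset.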

  11. African civil society initiatives to drive a biobanking, biosecurity and infrastructure development agenda in the wake of the West African Ebola outbreak

    PubMed Central

    Abayomi, Akin; Gevao, Sahr; Conton, Brian; Deblasio, Pasquale; Katz, Rebecca

    2016-01-01

    This paper describes the formation of a civil society consortium, spurred to action by frustration over the Ebola crisis, to facilitate the development of infrastructure and frameworks, including policy development, to support a harmonized African approach to health crises on the continent. The Global Emerging Pathogens Treatment Consortium, or GET, is an important example of how African academics, scientists, clinicians and civil society have come together to initiate policy research, multilevel advocacy and implementation of initiatives aimed at building African capacity for timely and effective mitigation strategies against emerging infectious and neglected pathogens, with a focus on biobanking and biosecurity. The consortium has been able to establish itself as a leading voice, drawing attention to scientific infrastructure gaps, the importance of cultural sensitivities, and the power of community engagement. The GET consortium demonstrates how civil society can work together, encourage government engagement and strengthen national and regional efforts to build capacity. PMID:28154625

  12. African civil society initiatives to drive a biobanking, biosecurity and infrastructure development agenda in the wake of the West African Ebola outbreak.

    PubMed

    Abayomi, Akin; Gevao, Sahr; Conton, Brian; Deblasio, Pasquale; Katz, Rebecca

    2016-01-01

    This paper describes the formation of a civil society consortium, spurred to action by frustration over the Ebola crisis, to facilitate the development of infrastructure and frameworks, including policy development, to support a harmonized African approach to health crises on the continent. The Global Emerging Pathogens Treatment Consortium, or GET, is an important example of how African academics, scientists, clinicians and civil society have come together to initiate policy research, multilevel advocacy and implementation of initiatives aimed at building African capacity for timely and effective mitigation strategies against emerging infectious and neglected pathogens, with a focus on biobanking and biosecurity. The consortium has been able to establish itself as a leading voice, drawing attention to scientific infrastructure gaps, the importance of cultural sensitivities, and the power of community engagement. The GET consortium demonstrates how civil society can work together, encourage government engagement and strengthen national and regional efforts to build capacity.

  13. Learning how to "teach one": A needs assessment of the state of faculty development within the Consortium of the American College of Surgeons Accredited Education Institutes.

    PubMed

    Paige, John T; Khamis, Nehal N; Cooper, Jeffrey B

    2017-11-01

    Developing faculty competencies in curriculum development, teaching, and assessment using simulation is critical for the success of the Consortium of the American College of Surgeons Accredited Education Institutes program. The state of and needs for faculty development in the Accredited Education Institute community are unknown currently. The Faculty Development Committee of the Consortium of the Accredited Education Institutes conducted a survey of Accredited Education Institutes to ascertain what types of practices are used currently, with what frequency, and what needs are perceived for further programs and courses to guide the plan of action for the Faculty Development Committee. The Faculty Development Committee created a 20-question survey with quantitative and qualitative items aimed at gathering data about practices of faculty development and needs within the Consortium of Accredited Education Institutes. The survey was sent to all 83 Accredited Education Institutes program leaders via Survey Monkey in January 2015 with 2 follow-up reminders. Quantitative data were compiled and analyzed using descriptive statistics, and qualitative data were interpreted for common themes. Fifty-four out of the 83 programs (65%) responded to the survey. Two-thirds of the programs had from 1 to 30 faculty teaching at their Accredited Education Institutes. More than three-quarters of the programs taught general surgery, emergency medicine, or obstetrics/gynecology. More than 60% of programs had some form of faculty development, but 91% reported a need to expand their offerings for faculty development with "extreme value" for debriefing skills (70%), assessment (47%), feedback (40%), and curriculum development (40%). 
Accredited Education Institutes felt that the Consortium could assist with faculty development through such activities as the provision of online resources, sharing of best practices, provision of a blueprint for development of a faculty curriculum and information related to available, credible master programs of faculty development and health professions education. Many Accredited Education Institutes programs are engaged in faculty development activities, but almost all see great needs in faculty development related to debriefing, assessment, and curricular development. These results should help to guide the action and decision-making of the Consortium Faculty Development Committee to improve teaching within the American College of Surgeons Accredited Education Institutes. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. International Arid Lands Consortium's contributions to Madrean Archipelago stewardship

    Treesearch

    Peter F. Ffolliott; Jeffery O. Dawson; Itshack Moshe; Timothy E. Fulbright; E. Carter Johnson; Paul Verburg; Muhannad Shatanawi; Donald F. Caccamise; Jim P. M. Chamie

    2005-01-01

    The International Arid Lands Consortium (IALC) was established in 1990 to promote research, education, and training activities related to the development, management, and reclamation of arid and semiarid lands worldwide. Building on a decade of experience, the IALC continues to increase the knowledge base for managers by funding research, development, and demonstration...

  15. National ATE Commission for Consortium Study and Development. Final Report.

    ERIC Educational Resources Information Center

    Maddox, Kathryn; Mahan, James

    This report by the Commission for Consortium Study and Development is divided into three sections. In section one, consortia are defined; the parameters and benefits to be derived from such cooperation are discussed; and the organization and implementation of consortia are outlined. Section two describes student teaching exchange programs. The…

  16. Area Consortium on Training. "Training for Technology" Project, 1982-1983. Final Report.

    ERIC Educational Resources Information Center

    Moock, Lynn D.

    The Area Consortium on Training initiated the Training for Technology Project to fill industry needs for skilled personnel and job needs for economically disadvantaged persons. Major accomplishments included establishment of a training team for economic development and for development of training programs; contacting of more than 100 employers;…

  17. Report to OECD/CERI Policy Group from Pacific Circle Consortium on Phase 1 Activities: 1977-1980.

    ERIC Educational Resources Information Center

    Connell, Helen; Wells, Marguerite

    Established in 1977, the Pacific Circle Consortium is a group of national-level educational research and development agencies from the Organization for Economic Cooperation and Development (OECD) Pacific region countries engaged in cooperative projects intended to improve international understanding and relations. From 1977 to 1980 the Consortium…

  18. (Re)Building a Kidney

    PubMed Central

    Carroll, Thomas J.; Cleaver, Ondine; Gossett, Daniel R.; Hoshizaki, Deborah K.; Hubbell, Jeffrey A.; Humphreys, Benjamin D.; Jain, Sanjay; Jensen, Jan; Kaplan, David L.; Kesselman, Carl; Ketchum, Christian J.; Little, Melissa H.; McMahon, Andrew P.; Shankland, Stuart J.; Spence, Jason R.; Valerius, M. Todd; Wertheim, Jason A.; Wessely, Oliver; Zheng, Ying; Drummond, Iain A.

    2017-01-01

    (Re)Building a Kidney is a National Institute of Diabetes and Digestive and Kidney Diseases-led consortium to optimize approaches for the isolation, expansion, and differentiation of appropriate kidney cell types and the integration of these cells into complex structures that replicate human kidney function. The ultimate goals of the consortium are two-fold: to develop and implement strategies for in vitro engineering of replacement kidney tissue, and to devise strategies to stimulate regeneration of nephrons in situ to restore failing kidney function. Projects within the consortium will answer fundamental questions regarding human gene expression in the developing kidney, essential signaling crosstalk between distinct cell types of the developing kidney, how to derive the many cell types of the kidney through directed differentiation of human pluripotent stem cells, which bioengineering or scaffolding strategies have the most potential for kidney tissue formation, and basic parameters of the regenerative response to injury. As these projects progress, the consortium will incorporate systematic investigations in physiologic function of in vitro and in vivo differentiated kidney tissue, strategies for engraftment in experimental animals, and development of therapeutic approaches to activate innate reparative responses. PMID:28096308

  19. Consortium--A New Direction for Staff Development

    ERIC Educational Resources Information Center

    Cope, Adrienne B.

    1976-01-01

    The shared services and joint planning of the area-wide continuing education program of the Northwest Allegheny Hospitals Corporation (a Consortium of seven acute care and two rehabilitation centers in Allegheny County, Pennsylvania) are described. (LH)

  20. WILLIAMSBURG BROOKLYN ASTHMA AND ENVIRONMENT CONSORTIUM

    EPA Science Inventory

    The Consortium expects to develop a family health promotion model in which organized residents have access to easily understood, scientifically accurate, community-specific information about their health, their environment, and the relationship between the two,...

  1. The UNC-CH MCH Leadership Training Consortium: building the capacity to develop interdisciplinary MCH leaders.

    PubMed

    Dodds, Janice; Vann, William; Lee, Jessica; Rosenberg, Angela; Rounds, Kathleen; Roth, Marcia; Wells, Marlyn; Evens, Emily; Margolis, Lewis H

    2010-07-01

    This article describes the UNC-CH MCH Leadership Consortium, a collaboration among five MCHB-funded training programs, and delineates the evolution of the leadership curriculum developed by the Consortium to cultivate interdisciplinary MCH leaders. In response to a suggestion by the MCHB, five MCHB-funded training programs--nutrition, pediatric dentistry, social work, LEND, and public health--created a consortium with four goals shared by these diverse MCH disciplines: (1) train MCH professionals for field leadership; (2) address the special health and social needs of women, infants, children and adolescents, with emphasis on a public health population-based approach; (3) foster interdisciplinary practice; and (4) assure competencies, such as family-centered and culturally competent practice, needed to serve effectively the MCH population. The consortium meets monthly. Its primary task to date has been to create a leadership curriculum for 20-30 master's, doctoral, and post-doctoral trainees to understand how to leverage personal leadership styles to make groups more effective, develop conflict/facilitation skills, and identify and enhance family-centered and culturally competent organizations. What began as an effort merely to understand shared interests around leadership development has evolved into an elaborate curriculum to address many MCH leadership competencies. The collaboration has also stimulated creative interdisciplinary research and practice opportunities for MCH trainees and faculty. MCHB-funded training programs should make a commitment to collaborate around developing leadership competencies that are shared across disciplines in order to enhance interdisciplinary leadership.

  2. A Syst-OMICS Approach to Ensuring Food Safety and Reducing the Economic Burden of Salmonellosis.

    PubMed

    Emond-Rheault, Jean-Guillaume; Jeukens, Julie; Freschi, Luca; Kukavica-Ibrulj, Irena; Boyle, Brian; Dupont, Marie-Josée; Colavecchio, Anna; Barrere, Virginie; Cadieux, Brigitte; Arya, Gitanjali; Bekal, Sadjia; Berry, Chrystal; Burnett, Elton; Cavestri, Camille; Chapin, Travis K; Crouse, Alanna; Daigle, France; Danyluk, Michelle D; Delaquis, Pascal; Dewar, Ken; Doualla-Bell, Florence; Fliss, Ismail; Fong, Karen; Fournier, Eric; Franz, Eelco; Garduno, Rafael; Gill, Alexander; Gruenheid, Samantha; Harris, Linda; Huang, Carol B; Huang, Hongsheng; Johnson, Roger; Joly, Yann; Kerhoas, Maud; Kong, Nguyet; Lapointe, Gisèle; Larivière, Line; Loignon, Stéphanie; Malo, Danielle; Moineau, Sylvain; Mottawea, Walid; Mukhopadhyay, Kakali; Nadon, Céline; Nash, John; Ngueng Feze, Ida; Ogunremi, Dele; Perets, Ann; Pilar, Ana V; Reimer, Aleisha R; Robertson, James; Rohde, John; Sanderson, Kenneth E; Song, Lingqiao; Stephan, Roger; Tamber, Sandeep; Thomassin, Paul; Tremblay, Denise; Usongo, Valentine; Vincent, Caroline; Wang, Siyun; Weadge, Joel T; Wiedmann, Martin; Wijnands, Lucas; Wilson, Emily D; Wittum, Thomas; Yoshida, Catherine; Youfsi, Khadija; Zhu, Lei; Weimer, Bart C; Goodridge, Lawrence; Levesque, Roger C

    2017-01-01

    The Salmonella Syst-OMICS consortium is sequencing 4,500 Salmonella genomes and building an analysis pipeline for the study of Salmonella genome evolution, antibiotic resistance and virulence genes. Metadata, including phenotypic as well as genomic data, for isolates of the collection are provided through the Salmonella Foodborne Syst-OMICS database (SalFoS), at https://salfos.ibis.ulaval.ca/. Here, we present our strategy and the analysis of the first 3,377 genomes. Our data will be used to draw potential links between strains found in fresh produce, humans, animals and the environment. The ultimate goals are to understand how Salmonella evolves over time, improve the accuracy of diagnostic methods, develop control methods in the field, and identify prognostic markers for evidence-based decisions in epidemiology and surveillance.

  3. Standardizing the nomenclature of Martian impact crater ejecta morphologies

    USGS Publications Warehouse

    Barlow, Nadine G.; Boyce, Joseph M.; Costard, Francois M.; Craddock, Robert A.; Garvin, James B.; Sakimoto, Susan E.H.; Kuzmin, Ruslan O.; Roddy, David J.; Soderblom, Laurence A.

    2000-01-01

    The Mars Crater Morphology Consortium recommends the use of a standardized nomenclature system when discussing Martian impact crater ejecta morphologies. The system utilizes nongenetic descriptors to identify the various ejecta morphologies seen on Mars. This system is designed to facilitate communication and collaboration between researchers. Crater morphology databases will be archived through the U.S. Geological Survey in Flagstaff, where a comprehensive catalog of Martian crater morphologic information will be maintained.

  4. The Genomes OnLine Database (GOLD) v.5: a metadata management system based on a four level (meta)genome project classification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reddy, Tatiparthi B. K.; Thomas, Alex D.; Stamatis, Dimitri

    The Genomes OnLine Database (GOLD; http://www.genomesonline.org) is a comprehensive online resource to catalog and monitor genetic studies worldwide. GOLD provides up-to-date status on complete and ongoing sequencing projects along with a broad array of curated metadata. Within this paper, we report version 5 (v.5) of the database. The newly designed database schema and web user interface supports several new features including the implementation of a four level (meta)genome project classification system and a simplified intuitive web interface to access reports and launch search tools. The database currently hosts information for about 19 200 studies, 56 000 Biosamples, 56 000 sequencing projects and 39 400 analysis projects. More than just a catalog of worldwide genome projects, GOLD is a manually curated, quality-controlled metadata warehouse. The problems encountered in integrating disparate and varying quality data into GOLD are briefly highlighted. Lastly, GOLD fully supports and follows the Genomic Standards Consortium (GSC) Minimum Information standards.

  5. LMSD: LIPID MAPS structure database

    PubMed Central

    Sud, Manish; Fahy, Eoin; Cotter, Dawn; Brown, Alex; Dennis, Edward A.; Glass, Christopher K.; Merrill, Alfred H.; Murphy, Robert C.; Raetz, Christian R. H.; Russell, David W.; Subramaniam, Shankar

    2007-01-01

    The LIPID MAPS Structure Database (LMSD) is a relational database encompassing structures and annotations of biologically relevant lipids. Structures of lipids in the database come from four sources: (i) LIPID MAPS Consortium's core laboratories and partners; (ii) lipids identified by LIPID MAPS experiments; (iii) computationally generated structures for appropriate lipid classes; (iv) biologically relevant lipids manually curated from LIPID BANK, LIPIDAT and other public sources. All the lipid structures in LMSD are drawn in a consistent fashion. In addition to a classification-based retrieval of lipids, users can search LMSD using either text-based or structure-based search options. The text-based search implementation supports data retrieval by any combination of these fields: LIPID MAPS ID, systematic or common name, mass, formula, category, main class, and subclass. The structure-based search, in conjunction with optional data fields, provides the capability to perform a substructure search or exact match for the structure drawn by the user. Search results, in addition to structure and annotations, also include relevant links to external databases. The LMSD is publicly available online. PMID:17098933
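
    The multi-field text search described above combines exact matches on categorical fields with a tolerance window on mass. A minimal sketch over a toy in-memory record set, assuming illustrative field names and example entries that are NOT actual LMSD records:

```python
# Toy stand-in for LMSD records; IDs, names, and values are
# illustrative only, not taken from the real database.
RECORDS = [
    {"lm_id": "LMFA01010001", "name": "palmitic acid",
     "formula": "C16H32O2", "mass": 256.24, "category": "Fatty Acyls"},
    {"lm_id": "LMGP01010005", "name": "PC(16:0/18:1)",
     "formula": "C42H82NO8P", "mass": 759.58,
     "category": "Glycerophospholipids"},
]

def search(records, mass=None, tol=0.01, **fields):
    """AND-combine exact field matches with an optional mass window."""
    hits = []
    for rec in records:
        if mass is not None and abs(rec["mass"] - mass) > tol:
            continue  # outside the mass tolerance window
        if all(rec.get(k) == v for k, v in fields.items()):
            hits.append(rec)
    return hits

fatty = search(RECORDS, category="Fatty Acyls")
by_mass = search(RECORDS, mass=759.58)
```

    In a relational implementation like LMSD's, each keyword argument would map to a `WHERE` clause and the mass window to a `BETWEEN` predicate; the AND-combination semantics are the same.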

  6. Increasing Nursing Faculty Research: The Iowa Gerontological Nursing Research and Regional Research Consortium Strategies

    PubMed Central

    Maas, Meridean L.; Conn, Vicki; Buckwalter, Kathleen C.; Herr, Keela; Tripp-Reimer, Toni

    2012-01-01

    Purpose Research development and regional consortium strategies are described to assist schools in all countries in extending their gerontological nursing research productivity. The strategies, collaboration and mentoring experiences, and outcomes are also shared to illustrate a highly successful approach to increasing faculty programs of nursing research in a focused area of inquiry. Design A case description of gerontological nursing research development and regional consortium strategies in schools of nursing is used. The regional consortium included 17 schools of nursing that are working to increase faculty programs of gerontological nursing research. Survey responses describing publications, presentations, and research funding awards from 65 of 114 total faculty participants in consortium opportunities (pilot and mentoring grant participants, participants in summer scholars' grantsmanship seminars) were collected annually from 1995 through 2008 to describe outcomes. Findings From 1994 through 2008, faculty participants from the consortium schools who responded to the annual surveys reported a total of 597 gerontological nursing publications, 527 presentations at research conferences, funding of 221 small and internal grants, and 130 external grant awards, including 47 R-series grants and 4 K awards. Conclusions There is an urgent need for more nurse faculty with programs of research to inform the health care of persons and support the preparation of nurse clinicians and faculty. The shortage of nurse scientists with active programs of gerontological research is especially serious and limits the number of faculty who are needed to prepare future gerontological nurses, particularly those with doctoral degrees who will assume faculty positions. Further, junior faculty with a gerontological nursing research focus often lack the colleagues, mentors, and environments needed to develop successful research careers. 
The outcomes of the development and regional consortium strategies suggest that the principles of extending collaboration, mentoring, and resource sharing are useful to augment faculty research opportunities, networking and support, and to increase productivity in individual schools. Clinical Relevance Clinical relevance includes: (a) implications for preparing nurse scientists and academicians who are and will be needed to train nurses for clinical practice, and (b) development of more faculty programs of research to provide systematic evidence to inform nursing practice. PMID:19941587

  7. The "U" in UTEP: Development of the Urban Curriculum and Its Delivery. Second Year Report to the Indiana Department of Education, Teacher Training and Licensing Advisory Committee.

    ERIC Educational Resources Information Center

    Sandoval, Pamela A.

    This report provides an outline of the Urban Teacher Education Program (UTEP), describes curriculum development and delivery, and discusses the progress that has been made toward program goals. UTEP is a school district/university consortium for school-based professional preparation and development. Members of the consortium include: Indiana…

  8. Prostate Cancer Clinical Trials Group: The University of Michigan Site

    DTIC Science & Technology

    2012-04-01

    and fusion-negative strata. UM will be the lead site for this trial with the Univ. of Chicago N01 Phase II consortium as the coordinating center. Ten...sensitive prostate cancer: a University of Chicago Phase II Consortium/Department of Defense Prostate Cancer Clinical Trials Consortium study. JE Ward, T...N01 contract with CTEP (University of Chicago – Early Therapeutics Development with Phase II emphasis group). The Program is committed to creating

  9. E-MSD: improving data deposition and structure quality.

    PubMed

    Tagari, M; Tate, J; Swaminathan, G J; Newman, R; Naim, A; Vranken, W; Kapopoulou, A; Hussain, A; Fillon, J; Henrick, K; Velankar, S

    2006-01-01

    The Macromolecular Structure Database (MSD) (http://www.ebi.ac.uk/msd/) [H. Boutselakis, D. Dimitropoulos, J. Fillon, A. Golovin, K. Henrick, A. Hussain, J. Ionides, M. John, P. A. Keller, E. Krissinel et al. (2003) E-MSD: the European Bioinformatics Institute Macromolecular Structure Database. Nucleic Acids Res., 31, 458-462.] group is one of the three partners in the worldwide Protein Data Bank (wwPDB), the consortium entrusted with the collation, maintenance and distribution of the global repository of macromolecular structure data [H. Berman, K. Henrick and H. Nakamura (2003) Announcing the worldwide Protein Data Bank. Nature Struct. Biol., 10, 980.]. Since its inception, the MSD group has worked with partners around the world to improve the quality of PDB data, through a clean-up programme that addresses inconsistencies and inaccuracies in the legacy archive. The improvements in data quality in the legacy archive have been achieved largely through the creation of a unified data archive, in the form of a relational database that stores all of the data in the wwPDB. The three partners are working towards improving the tools and methods for the deposition of new data by the community at large. The implementation of the MSD database, together with the parallel development of improved tools and methodologies for data harvesting, validation and archival, has led to significant improvements in the quality of data that enters the archive. Through this and related projects in the NMR and EM realms the MSD continues to improve the quality of publicly available structural data.

  10. Brain Tumor Epidemiology Consortium (BTEC)

    Cancer.gov

    The Brain Tumor Epidemiology Consortium is an open scientific forum organized to foster the development of multi-center, international and inter-disciplinary collaborations that will lead to a better understanding of the etiology, outcomes, and prevention of brain tumors.

  11. TogoTable: cross-database annotation system using the Resource Description Framework (RDF) data model.

    PubMed

    Kawano, Shin; Watanabe, Tsutomu; Mizuguchi, Sohei; Araki, Norie; Katayama, Toshiaki; Yamaguchi, Atsuko

    2014-07-01

    TogoTable (http://togotable.dbcls.jp/) is a web tool that adds user-specified annotations to a table that a user uploads. Annotations are drawn from several biological databases that use the Resource Description Framework (RDF) data model. TogoTable uses database identifiers (IDs) in the table as a query key for searching. RDF data, which form a network called Linked Open Data (LOD), can be searched from SPARQL endpoints using a SPARQL query language. Because TogoTable uses RDF, it can integrate annotations from not only the reference database to which the IDs originally belong, but also externally linked databases via the LOD network. For example, annotations in the Protein Data Bank can be retrieved using GeneID through links provided by the UniProt RDF. Because RDF has been standardized by the World Wide Web Consortium, any database with annotations based on the RDF data model can be easily incorporated into this tool. We believe that TogoTable is a valuable Web tool, particularly for experimental biologists who need to process huge amounts of data such as high-throughput experimental output. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
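
    At its core, the annotation step described above is an ID-keyed join: each row of the uploaded table is matched against an annotation map built from the RDF sources. A minimal sketch in which the annotation map is mocked as a plain dict; in TogoTable itself it would be populated by SPARQL queries against LOD endpoints, and the GeneID/UniProt/PDB values below are illustrative examples, not output from the tool:

```python
# Mock of annotations that TogoTable would retrieve via SPARQL from
# RDF endpoints (e.g. UniProt RDF linking GeneIDs to PDB entries).
ANNOTATIONS = {
    "7157": {"uniprot": "P04637", "pdb": ["1TUP", "2OCJ"]},  # TP53
    "672":  {"uniprot": "P38398", "pdb": ["1JM7"]},          # BRCA1
}

def annotate_table(rows, id_column, annotation_map):
    """Left-join annotation columns onto each row, keyed on id_column.

    Rows whose ID has no annotation pass through unchanged, mirroring
    a table tool that must not drop user data it cannot annotate.
    """
    out = []
    for row in rows:
        extra = annotation_map.get(row[id_column], {})
        out.append({**row, **extra})
    return out

table = [{"gene_id": "7157", "symbol": "TP53"},
         {"gene_id": "9999", "symbol": "unknown"}]
annotated = annotate_table(table, "gene_id", ANNOTATIONS)
```

    The left-join semantics are the design point: annotation enriches the user's table without filtering it, which is what makes the tool safe for processing high-throughput experimental output.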

  12. Academic Decision Making: The Consortium of Knox, Franklin and Monmouth Colleges; Volumes I and II. Final Report.

    ERIC Educational Resources Information Center

    Melville, George L.

    This consortium of liberal arts colleges was instrumental in developing and coordinating their research capability through data processing. Forty research and academic development projects were undertaken. Of special importance: The Pass-Fail System, Study Habits in the Three-three calendar, Changing Trends in Attrition, The Weighing of High…

  13. Approaches to Forming a Learning Consortium. Issues to Address. Business Assistance Note #3.

    ERIC Educational Resources Information Center

    Bergman, Terri

    A learning consortium is a group of companies that come together to learn from each other to develop new capabilities, build the skills of their employees, and increase the productive capacities of their enterprises. Most undertake both work force and workplace development efforts. Although the key feature is cooperative learning, most learning…

  14. International Arid Lands Consortium: Better land stewardship in water and watershed management

    Treesearch

    Peter F. Ffolliott; James T. Fisher; Menachem Sachs; Darrell W. DeBoer; Jeffrey O. Dawson; Timothy E. Fulbright; John Tracy

    2000-01-01

    The International Arid Lands Consortium (IALC) was established in 1990 to promote research, education, and training for the development, management, and restoration of arid and semi-arid lands throughout the world. One activity of IALC members and cooperators is to support research and development and demonstration projects that enhance management of these fragile...

  15. Healthy Brain Development: Precursor to Learning. National Health/Education Consortium (1st, Baltimore, Maryland, December 6, 1990).

    ERIC Educational Resources Information Center

    Institute for Educational Leadership, Washington, DC.

    This report presents the proceedings of a consortium at which leading developmental neuroscientists from across the United States and Canada met at Johns Hopkins University to explore the relationship between children's health and learning and to propose policy changes. Early brain development and its relationship to intelligence, learning, and…

  16. High-speed digital wireless battlefield network

    NASA Astrophysics Data System (ADS)

    Dao, Son K.; Zhang, Yongguang; Shek, Eddie C.; van Buer, Darrel

    1999-07-01

    In the past two years, the Digital Wireless Battlefield Network consortium, which consists of HRL Laboratories, Hughes Network Systems, Raytheon, and Stanford University, has participated in the DARPA TRP program to leverage efforts in the development of commercial digital wireless products for use in the 21st century battlefield. The consortium has developed an infrastructure and application testbed to support the digitized battlefield, and has implemented and demonstrated this network system. Each member is currently using many of the technologies developed in this program in commercial products and offerings. This new communication hardware and software, together with the demonstrated networking features, will benefit military systems and will be applicable to the commercial communication marketplace for high-speed voice/data multimedia distribution services.

  17. The role of APACPH (Asia-Pacific Academic Consortium for Public Health) in addressing public health issues in the Asia-Pacific region.

    PubMed

    Liveris, M

    2000-01-01

    The paper covers the establishment of APACPH in 1984 and its subsequent development and achievements. The paper outlines the mission and objectives of the Consortium and brief comparisons are drawn with similar organizations in the European and North American regions. Significant achievements of the Consortium and its contribution to the public health debate are presented. The paper then explores strategies for the future in meeting the challenges of emerging public health issues through collaborative efforts in education, training, research and leadership development in public health in the first century of a new millennium.

  18. Overview of the NASA/Marshall Space Flight Center (MSFC) CFD Consortium for Applications in Propulsion Technology

    NASA Astrophysics Data System (ADS)

    McConnaughey, P. K.; Schutzenhofer, L. A.

    1992-07-01

    This paper presents an overview of the NASA/Marshall Space Flight Center (MSFC) Computational Fluid Dynamics (CFD) Consortium for Applications in Propulsion Technology (CAPT). The objectives of this consortium are discussed, as is the approach of managing resources and technology to achieve these objectives. Significant results by the three CFD CAPT teams (Turbine, Pump, and Combustion) are briefly highlighted with respect to the advancement of CFD applications, the development and evaluation of advanced hardware concepts, and the integration of these results and CFD as a design tool to support Space Transportation Main Engine and National Launch System development.

  19. Automatic lung nodule graph cuts segmentation with deep learning false positive reduction

    NASA Astrophysics Data System (ADS)

    Sun, Wenqing; Huang, Xia; Tseng, Tzu-Liang Bill; Qian, Wei

    2017-03-01

    To automatically detect lung nodules in CT images, we designed a two-stage computer-aided detection (CAD) system. The first stage uses graph cuts segmentation to identify and segment nodule candidates, and the second stage uses a convolutional neural network for false positive reduction. The dataset contains 595 CT cases randomly selected from the Lung Image Database Consortium and Image Database Resource Initiative (LIDC/IDRI), and the 305 pulmonary nodules for which all four experienced radiologists reached a diagnostic consensus were our detection targets. Considering each slice as an individual sample, 2844 nodule samples were included in our database. The graph cuts segmentation was conducted in two dimensions, and 2733 lung nodule ROIs were successfully identified and segmented. After false positive reduction by a seven-layer convolutional neural network, 2535 nodules remained detected while the false positive rate dropped to 31.6%. The average F-measure of segmented lung nodule tissue is 0.8501.
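
    The detection metrics quoted in the abstract can be sketched as follows. The pixel-level F-measure of 0.8501 cannot be recomputed from the nodule counts alone, so this only shows the formulas; the precision/recall pair passed to `f_measure` at the end is illustrative, not study data.

    ```python
    # Minimal sketch of CAD detection metrics: stage-1 sensitivity (recall)
    # from the reported nodule counts, plus the generic F-measure formula.

    def f_measure(precision, recall):
        """Harmonic mean of precision and recall (F1 score)."""
        return 2 * precision * recall / (precision + recall)

    # Stage-1 sensitivity: 2733 of 2844 nodule samples were segmented.
    stage1_recall = 2733 / 2844
    print(round(stage1_recall, 3))       # ~0.961

    # Illustrative precision/recall values, not taken from the study:
    print(round(f_measure(0.8, 0.9), 4))
    ```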

  20. Dictionary learning-based CT detection of pulmonary nodules

    NASA Astrophysics Data System (ADS)

    Wu, Panpan; Xia, Kewen; Zhang, Yanbo; Qian, Xiaohua; Wang, Ge; Yu, Hengyong

    2016-10-01

    Segmentation of lung features is one of the most important steps for computer-aided detection (CAD) of pulmonary nodules with computed tomography (CT). However, irregular shapes, complicated anatomical background and poor pulmonary nodule contrast make CAD a very challenging problem. Here, we propose a novel scheme for feature extraction and classification of pulmonary nodules through dictionary learning from training CT images, which does not require accurately segmented pulmonary nodules. Specifically, two classification-oriented dictionaries and one background dictionary are learnt to solve a two-category problem. In terms of the classification-oriented dictionaries, we calculate sparse coefficient matrices to extract intrinsic features for pulmonary nodule classification. The support vector machine (SVM) classifier is then designed to optimize the performance. Our proposed methodology is evaluated with the lung image database consortium and image database resource initiative (LIDC-IDRI) database, and the results demonstrate that the proposed strategy is promising.
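
    The two-category decision rule behind such per-class dictionaries can be sketched in a toy form: assign a signal to the class whose best atom reconstructs it with the smallest residual (1-sparse coding). This is not the authors' method; real dictionary learning (e.g. K-SVD) learns atoms from training patches and pairs the features with an SVM, whereas here the 2-D unit-norm atoms are hand-picked for illustration.

    ```python
    # Toy sketch of residual-based classification with per-class dictionaries.
    # Atoms are assumed unit-norm, so the dot product is a valid projection.
    import math

    def _residual(signal, atom):
        # Project the signal onto the atom, then measure what is left over.
        coef = sum(s * a for s, a in zip(signal, atom))
        return math.sqrt(sum((s - coef * a) ** 2 for s, a in zip(signal, atom)))

    def classify(signal, dictionaries):
        """dictionaries: {label: [unit-norm atoms]} -> label with min residual."""
        return min(
            dictionaries,
            key=lambda lbl: min(_residual(signal, atom) for atom in dictionaries[lbl]),
        )

    dicts = {
        "nodule": [(1.0, 0.0), (0.7071, 0.7071)],
        "background": [(0.0, 1.0)],
    }
    print(classify((0.9, 0.1), dicts))   # closer to a "nodule" atom
    print(classify((0.05, 2.0), dicts))  # dominated by the "background" atom
    ```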

  1. Sleep atlas and multimedia database.

    PubMed

    Penzel, T; Kesper, K; Mayer, G; Zulley, J; Peter, J H

    2000-01-01

    The ENN sleep atlas and database was set up on a dedicated server connected to the internet, providing services such as WWW, FTP and telnet access. The database serves as a platform to promote the goals of the European Neurological Network, to exchange patient cases for second opinion between experts, and to create a case-oriented multimedia sleep atlas with descriptive text, images and video clips of all known sleep disorders. The sleep atlas consists of a small public part and a large private part for members of the consortium. Twenty patient cases were collected and presented with educational information similar to published case reports. Case reports are complemented with images, video clips and biosignal recordings. A Java-based viewer for biosignals provided in EDF format was installed, allowing users to move freely within the sleep recordings without downloading the full recording to the client.

  2. Hymenoptera Genome Database: integrating genome annotations in HymenopteraMine

    PubMed Central

    Elsik, Christine G.; Tayal, Aditi; Diesh, Colin M.; Unni, Deepak R.; Emery, Marianne L.; Nguyen, Hung N.; Hagen, Darren E.

    2016-01-01

    We report an update of the Hymenoptera Genome Database (HGD) (http://HymenopteraGenome.org), a model organism database for insect species of the order Hymenoptera (ants, bees and wasps). HGD maintains genomic data for 9 bee species, 10 ant species and 1 wasp, including the versions of genome and annotation data sets published by the genome sequencing consortiums and those provided by NCBI. A new data-mining warehouse, HymenopteraMine, based on the InterMine data warehousing system, integrates the genome data with data from external sources and facilitates cross-species analyses based on orthology. New genome browsers and annotation tools based on JBrowse/WebApollo provide easy genome navigation, and viewing of high throughput sequence data sets and can be used for collaborative genome annotation. All of the genomes and annotation data sets are combined into a single BLAST server that allows users to select and combine sequence data sets to search. PMID:26578564

  3. Midwest Transportation Consortium annual progress report : October 2002.

    DOT National Transportation Integrated Search

    2002-10-01

    From the Director: For the past three years, the Midwest Transportation Consortium (MTC) has focused its efforts on supporting the development and use of asset management systems in transportation. The MTC's main focus is on human capital devel...

  4. CORAL DISEASE & HEALTH CONSORTIUM: FINDING SOLUTIONS

    EPA Science Inventory

    The National Oceanic Atmospheric Administration (NOAA), the Environmental Protection Agency (EPA), and the Department of Interior (DOI) developed the framework for a Coral Disease and Health Consortium (CDHC) for the United States Coral Reef Task Force (USCRTF) through an interag...

  5. Medical informatics in medical research - the Severe Malaria in African Children (SMAC) Network's experience.

    PubMed

    Olola, C H O; Missinou, M A; Issifou, S; Anane-Sarpong, E; Abubakar, I; Gandi, J N; Chagomerana, M; Pinder, M; Agbenyega, T; Kremsner, P G; Newton, C R J C; Wypij, D; Taylor, T E

    2006-01-01

    Computers are widely used for data management in clinical trials in developed countries, but much less so in developing countries. Dependable systems are vital for data management and medical decision making in clinical research, and monitoring and evaluation of data management is critical. In this paper we describe the database structures and procedures of the systems used to implement, coordinate, and sustain data management in Africa. We outline major lessons, challenges and successes, and recommendations to improve the application of medical informatics in biomedical research in sub-Saharan Africa. A consortium of research units at five African sites, each experienced in studying children with disease, formed a new clinical trials network, Severe Malaria in African Children. In December 2000, the network introduced an observational study involving these hospital-based sites. After prototyping, relational database management systems were implemented for data entry and verification, data submission and quality-assurance monitoring. Between 2000 and 2005, 25,858 patients were enrolled. Failure to meet data submission deadlines and data entry errors correlated positively (correlation coefficient, r = 0.82), with more errors occurring when data were submitted late. Data submission lateness correlated inversely with hospital admissions (r = -0.62). Developing and sustaining dependable database management systems, with ongoing modifications to optimize data management, is crucial for clinical studies. Monitoring and communication systems are vital for good data management in multi-center networks. Data timeliness is associated with data quality and hospital admissions.
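
    The correlations quoted above (r = 0.82, r = -0.62) are Pearson correlation coefficients. A minimal sketch of the computation on made-up monthly figures (not SMAC data):

    ```python
    # Pearson correlation coefficient: covariance of the two series divided
    # by the product of their standard deviations (here via sums of squares).
    import math

    def pearson_r(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    # Illustrative figures only: days a submission was late vs. entry errors.
    days_late = [0, 2, 5, 9, 14]
    entry_errors = [3, 4, 9, 12, 20]
    print(round(pearson_r(days_late, entry_errors), 2))  # strongly positive
    ```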

  6. Development of a consortium for water security and safety: Planning for an early warning system

    USGS Publications Warehouse

    Clark, R.M.; Adam, N.R.; Atluri, V.; Halem, M.; Vowinkel, E.F.; ,

    2004-01-01

    The events of September 11, 2001 have raised concerns over the safety and security of the Nation's critical infrastructure, including water and wastewater systems. In June 2002, in response to concerns over water security, the U.S. EPA's Region II Office (New York City), in collaboration with Rutgers University, agreed to establish a Regional Drinking Water Security and Safety Consortium (RDWSSC). Members of the consortium include: Rutgers University's Center for Information Management, Integration and Connectivity (CIMIC), American Water (AW), the Passaic Valley Water Commission (PVWC), the North Jersey District Water Supply Commission (NJDWSC), the N.J. Department of Environmental Protection, the U.S. Geological Survey (USGS), and the U.S. Environmental Protection Agency's Region II Office. In December of 2002 the consortium members signed a memorandum of understanding (MOU) to pursue activities to enhance regional water security. Development of an early warning system for source and distributed water was identified as being of primary importance by the consortium. In this context, an early warning system (EWS) is an integrated system of monitoring stations located at strategic points in a water utility's source waters or in its distribution system, designed to warn against contaminants that might threaten the health and welfare of drinking water consumers. This paper will discuss the consortium's progress in achieving these important objectives.

  7. Decolorization of adsorbed textile dyes by developed consortium of Pseudomonas sp. SUK1 and Aspergillus ochraceus NCIM-1146 under solid state fermentation.

    PubMed

    Kadam, Avinash A; Telke, Amar A; Jagtap, Sujit S; Govindwar, Sanjay P

    2011-05-15

    The objective of this study was to develop a consortium of Pseudomonas sp. SUK1 and Aspergillus ochraceus NCIM-1146 to decolorize adsorbed dyes from textile effluent wastewater under solid state fermentation. Among various agricultural wastes, rice bran showed dye adsorption up to 90, 62 and 80% from textile dye reactive navy blue HE2R (RNB HE2R) solution, a mixture of textile dyes, and textile industry wastewater, respectively. Pseudomonas sp. SUK1 and A. ochraceus NCIM-1146 showed 62 and 38% decolorization of RNB HE2R adsorbed on rice bran in 24 h under solid state fermentation. However, the consortium of Pseudomonas sp. SUK1 and A. ochraceus NCIM-1146 (consortium-PA) showed 80% decolorization in 24 h. The consortium-PA showed effective ADMI removal ratios for adsorbed dyes from textile industry wastewater (77%), the mixture of textile dyes (82%) and the chemical precipitate of textile dye effluent (CPTDE) (86%). Secretion of extracellular enzymes such as laccase, azoreductase, tyrosinase and NADH-DCIP reductase, and their significant induction in the presence of adsorbed dye, suggests their role in the decolorization of RNB HE2R. GC-MS and HPLC analyses of the products suggest different biodegradation fates for RNB HE2R when Pseudomonas sp. SUK1, A. ochraceus NCIM-1146 and consortium-PA were used. Copyright © 2011 Elsevier B.V. All rights reserved.

  8. A Description of the Clinical Proteomic Tumor Analysis Consortium (CPTAC) Common Data Analysis Pipeline

    PubMed Central

    Rudnick, Paul A.; Markey, Sanford P.; Roth, Jeri; Mirokhin, Yuri; Yan, Xinjian; Tchekhovskoi, Dmitrii V.; Edwards, Nathan J.; Thangudu, Ratna R.; Ketchum, Karen A.; Kinsinger, Christopher R.; Mesri, Mehdi; Rodriguez, Henry; Stein, Stephen E.

    2016-01-01

    The Clinical Proteomic Tumor Analysis Consortium (CPTAC) has produced large proteomics datasets from the mass spectrometric interrogation of tumor samples previously analyzed by The Cancer Genome Atlas (TCGA) program. The availability of the genomic and proteomic data is enabling proteogenomic study for both reference (i.e., contained in major sequence databases) and non-reference markers of cancer. The CPTAC labs have focused on colon, breast, and ovarian tissues in the first round of analyses; spectra from these datasets were produced from 2D LC-MS/MS analyses and represent deep coverage. To reduce the variability introduced by disparate data analysis platforms (e.g., software packages, versions, parameters, sequence databases, etc.), the CPTAC Common Data Analysis Platform (CDAP) was created. The CDAP produces both peptide-spectrum-match (PSM) reports and gene-level reports. The pipeline processes raw mass spectrometry data according to the following: (1) Peak-picking and quantitative data extraction, (2) database searching, (3) gene-based protein parsimony, and (4) false discovery rate (FDR)-based filtering. The pipeline also produces localization scores for the phosphopeptide enrichment studies using the PhosphoRS program. Quantitative information for each of the datasets is specific to the sample processing, with PSM and protein reports containing the spectrum-level or gene-level (“rolled-up”) precursor peak areas and spectral counts for label-free or reporter ion log-ratios for 4plex iTRAQ™. The reports are available in simple tab-delimited formats and, for the PSM-reports, in mzIdentML. The goal of the CDAP is to provide standard, uniform reports for all of the CPTAC data, enabling comparisons between different samples and cancer types as well as across the major ‘omics fields. PMID:26860878
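
    Step (4) of the pipeline, FDR-based filtering, can be sketched with the common target-decoy heuristic. The CDAP's exact procedure is not specified in the abstract, and the scores, PSM tuples and the loose 50% FDR in the demo below are illustrative only.

    ```python
    # Target-decoy FDR filtering sketch: walk PSMs from best score to worst
    # and accept everything up to the deepest cutoff where decoys/targets
    # stays at or below the requested FDR.

    def fdr_filter(psms, fdr=0.01):
        """psms: list of (score, is_decoy); returns PSMs accepted at <= fdr."""
        ranked = sorted(psms, key=lambda p: p[0], reverse=True)
        decoys = targets = best_cut = 0
        for i, (score, is_decoy) in enumerate(ranked, 1):
            decoys += is_decoy
            targets += not is_decoy
            if targets and decoys / targets <= fdr:
                best_cut = i
        return ranked[:best_cut]

    psms = [(9.1, False), (8.7, False), (7.2, True), (6.5, False), (5.0, True)]
    print(len(fdr_filter(psms, fdr=0.5)))
    ```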

  9. A Description of the Clinical Proteomic Tumor Analysis Consortium (CPTAC) Common Data Analysis Pipeline.

    PubMed

    Rudnick, Paul A; Markey, Sanford P; Roth, Jeri; Mirokhin, Yuri; Yan, Xinjian; Tchekhovskoi, Dmitrii V; Edwards, Nathan J; Thangudu, Ratna R; Ketchum, Karen A; Kinsinger, Christopher R; Mesri, Mehdi; Rodriguez, Henry; Stein, Stephen E

    2016-03-04

    The Clinical Proteomic Tumor Analysis Consortium (CPTAC) has produced large proteomics data sets from the mass spectrometric interrogation of tumor samples previously analyzed by The Cancer Genome Atlas (TCGA) program. The availability of the genomic and proteomic data is enabling proteogenomic study for both reference (i.e., contained in major sequence databases) and nonreference markers of cancer. The CPTAC laboratories have focused on colon, breast, and ovarian tissues in the first round of analyses; spectra from these data sets were produced from 2D liquid chromatography-tandem mass spectrometry analyses and represent deep coverage. To reduce the variability introduced by disparate data analysis platforms (e.g., software packages, versions, parameters, sequence databases, etc.), the CPTAC Common Data Analysis Platform (CDAP) was created. The CDAP produces both peptide-spectrum-match (PSM) reports and gene-level reports. The pipeline processes raw mass spectrometry data according to the following: (1) peak-picking and quantitative data extraction, (2) database searching, (3) gene-based protein parsimony, and (4) false-discovery rate-based filtering. The pipeline also produces localization scores for the phosphopeptide enrichment studies using the PhosphoRS program. Quantitative information for each of the data sets is specific to the sample processing, with PSM and protein reports containing the spectrum-level or gene-level ("rolled-up") precursor peak areas and spectral counts for label-free or reporter ion log-ratios for 4plex iTRAQ. The reports are available in simple tab-delimited formats and, for the PSM-reports, in mzIdentML. The goal of the CDAP is to provide standard, uniform reports for all of the CPTAC data to enable comparisons between different samples and cancer types as well as across the major omics fields.

  10. Arid and semiarid land stewardship: A 10-year review of accomplishments and contributions of the lnternational Arid Lands Consortium

    Treesearch

    Peter F. Ffolliott; Jeffrey O. Dawson; James T. Fisher; Itshack Moshe; Darrell W. DeBoers; Timothy. E. Fulbright; John Tracy; Abdullah Al Musa; Carter Johnson; Jim P. M. Chamie

    2001-01-01

    The International Arid Lands Consortium (IALC) was established in 1990 to promote research, education, and training activities related to the development, management, and restoration or reclamation of arid and semiarid lands worldwide. The IALC, a leading international organization, supports ecological sustainability and development of arid and semiarid lands. Building...

  11. Filling the Void: The Roles of a Local Applied Research Center and a Statewide Workforce Training Consortium

    ERIC Educational Resources Information Center

    Perniciaro, Richard C.; Nespoli, Lawrence A.; Anbarasan, Sivaraman

    2015-01-01

    This chapter describes the development of an applied research center at Atlantic Cape Community College and a statewide workforce training consortium run by the community college sector in New Jersey. Their contributions to the economic development mission of the colleges as well as their impact on the perception of community colleges by…

  12. Searching for Sustainability in Teacher Education and Educational Research: Experiences from the Baltic and Black Sea Circle Consortium for Educational Research

    ERIC Educational Resources Information Center

    Salite, Ilga

    2015-01-01

    The Baltic and Black Sea Circle Consortium for educational research (BBCC) was established at the beginning of the Decade of Education for Sustainable Development (2005). BBCC has obtained its name in the "Third International Conference Sustainable Development, Culture, Education, in the University of Vechta" (Germany, 2005). The paper…

  13. Faculty Challenges across Rank in Liberal Arts Colleges: A Human Resources Perspective

    ERIC Educational Resources Information Center

    Baker, Vicki L.; Pifer, Meghan J.; Lunsford, Laura G.

    2016-01-01

    This article focuses on the challenges faced by faculty members in a consortium of 13 Liberal Arts Colleges (LACs). We present findings, by academic rank, from a mixed-methods study of faculty development needs and experiences within the consortium. Relying on human resource principles, we advocate a greater focus on the development of the person,…

  14. Making geospatial data in ASF archive readily accessible

    NASA Astrophysics Data System (ADS)

    Gens, R.; Hogenson, K.; Wolf, V. G.; Drew, L.; Stern, T.; Stoner, M.; Shapran, M.

    2015-12-01

    The way geospatial data is searched, managed, processed and used has changed significantly in recent years. A data archive such as the one at the Alaska Satellite Facility (ASF), one of NASA's twelve interlinked Distributed Active Archive Centers (DAACs), used to be searched solely via user interfaces that were specifically developed for its particular archive and data sets. ASF then moved to an application programming interface (API) that defined a set of routines, protocols, and tools for distributing the geospatial information stored in the database in real time. This provided more flexible access to the geospatial data, yet it was up to the user to develop the tools needed for more tailored access to the data. We present two new approaches for serving data to users. In response to the recent Nepal earthquake we developed a data feed for distributing ESA's Sentinel data. Users can subscribe to the data feed and are provided with the relevant metadata the moment a new data set is available for download. The second approach was an Open Geospatial Consortium (OGC) web feature service (WFS). The WFS hosts the metadata along with a direct link from which the data can be downloaded. It uses the open-source GeoServer software (Youngblood and Iacovella, 2013) and provides an interface to include the geospatial information in the archive directly in the user's geographic information system (GIS) as an additional data layer. Both services run on top of a geospatial PostGIS database, an open-source geographic extension for the PostgreSQL object-relational database (Marquez, 2015). Marquez, A., 2015. PostGIS essentials. Packt Publishing, 198 p. Youngblood, B. and Iacovella, S., 2013. GeoServer Beginner's Guide, Packt Publishing, 350 p.
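
    A WFS like the one described is queried with standardized key-value-pair HTTP requests. A sketch of building such a request follows; the base URL and feature type name are hypothetical, not ASF's actual endpoint, and only the parameter names follow the WFS 2.0 convention.

    ```python
    # Sketch of an OGC WFS 2.0 GetFeature request URL built with the stdlib.
    # Endpoint and typeNames are made up for illustration.
    from urllib.parse import urlencode

    def wfs_getfeature_url(base, type_name, max_features=10):
        params = {
            "service": "WFS",
            "version": "2.0.0",
            "request": "GetFeature",
            "typeNames": type_name,
            "count": max_features,
            "outputFormat": "application/json",
        }
        return base + "?" + urlencode(params)

    url = wfs_getfeature_url("https://example.org/geoserver/wfs", "asf:granules")
    print(url)
    ```

    Because the response is plain GeoJSON (or GML), a GIS client can consume the layer directly, which is the integration path the abstract describes.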

  15. John Glenn Biomedical Engineering Consortium

    NASA Technical Reports Server (NTRS)

    Nall, Marsha

    2004-01-01

    The John Glenn Biomedical Engineering Consortium is an inter-institutional research and technology development effort, beginning with ten projects in FY02, aimed at applying GRC expertise in fluid physics and sensor development, together with local biomedical expertise, to mitigate the risks of space flight to the health, safety, and performance of astronauts. It is anticipated that several new technologies will be developed that are applicable to both medical needs in space and on earth.

  16. Detection of QT prolongation using a novel electrocardiographic analysis algorithm applying intelligent automation: prospective blinded evaluation using the Cardiac Safety Research Consortium electrocardiographic database.

    PubMed

    Green, Cynthia L; Kligfield, Paul; George, Samuel; Gussak, Ihor; Vajdic, Branislav; Sager, Philip; Krucoff, Mitchell W

    2012-03-01

    The Cardiac Safety Research Consortium (CSRC) provides both "learning" and blinded "testing" digital electrocardiographic (ECG) data sets from thorough QT (TQT) studies annotated for submission to the US Food and Drug Administration (FDA) to developers of ECG analysis technologies. This article reports the first results from a blinded testing data set that examines developer reanalysis of original sponsor-reported core laboratory data. A total of 11,925 anonymized ECGs including both moxifloxacin and placebo arms of a parallel-group TQT in 181 subjects were blindly analyzed using a novel ECG analysis algorithm applying intelligent automation. Developer-measured ECG intervals were submitted to CSRC for unblinding, temporal reconstruction of the TQT exposures, and statistical comparison to core laboratory findings previously submitted to FDA by the pharmaceutical sponsor. Primary comparisons included baseline-adjusted interval measurements, baseline- and placebo-adjusted moxifloxacin QTcF changes (ddQTcF), and associated variability measures. Developer and sponsor-reported baseline-adjusted data were similar with average differences <1 ms for all intervals. Both developer- and sponsor-reported data demonstrated assay sensitivity with similar ddQTcF changes. Average within-subject SD for triplicate QTcF measurements was significantly lower for developer- than sponsor-reported data (5.4 and 7.2 ms, respectively; P < .001). The virtually automated ECG algorithm used for this analysis produced similar yet less variable TQT results compared with the sponsor-reported study, without the use of a manual core laboratory. These findings indicate that CSRC ECG data sets can be useful for evaluating novel methods and algorithms for determining drug-induced QT/QTc prolongation. 
Although the results should not constitute endorsement of specific algorithms by either CSRC or FDA, the value of a public domain digital ECG warehouse to provide prospective, blinded comparisons of ECG technologies applied for QT/QTc measurement is illustrated. Copyright © 2012 Mosby, Inc. All rights reserved.
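
    QTcF in the report denotes the Fridericia-corrected QT interval, a standard heart-rate correction: QTcF = QT / RR^(1/3), with the RR interval in seconds. A minimal sketch (the input values are illustrative, not study data):

    ```python
    # Fridericia correction for the QT interval: QTcF = QT / RR^(1/3).
    # At 60 bpm the RR interval is exactly 1 s, so QTcF equals QT.

    def qtcf(qt_ms, heart_rate_bpm):
        """Return the Fridericia-corrected QT interval in milliseconds."""
        rr_seconds = 60.0 / heart_rate_bpm
        return qt_ms / rr_seconds ** (1.0 / 3.0)

    print(qtcf(400.0, 60.0))  # RR = 1 s, so QTcF equals QT: 400.0
    ```

    Triplicate QTcF measurements, as compared in the study, would each be corrected this way before computing the within-subject standard deviation.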

  17. Detection of QT prolongation using a novel ECG analysis algorithm applying intelligent automation: Prospective blinded evaluation using the Cardiac Safety Research Consortium ECG database

    PubMed Central

    Green, Cynthia L.; Kligfield, Paul; George, Samuel; Gussak, Ihor; Vajdic, Branislav; Sager, Philip; Krucoff, Mitchell W.

    2013-01-01

    Background The Cardiac Safety Research Consortium (CSRC) provides both “learning” and blinded “testing” digital ECG datasets from thorough QT (TQT) studies annotated for submission to the US Food and Drug Administration (FDA) to developers of ECG analysis technologies. This manuscript reports the first results from a blinded “testing” dataset that examines Developer re-analysis of original Sponsor-reported core laboratory data. Methods 11,925 anonymized ECGs including both moxifloxacin and placebo arms of a parallel-group TQT in 191 subjects were blindly analyzed using a novel ECG analysis algorithm applying intelligent automation. Developer measured ECG intervals were submitted to CSRC for unblinding, temporal reconstruction of the TQT exposures, and statistical comparison to core laboratory findings previously submitted to FDA by the pharmaceutical sponsor. Primary comparisons included baseline-adjusted interval measurements, baseline- and placebo-adjusted moxifloxacin QTcF changes (ddQTcF), and associated variability measures. Results Developer and Sponsor-reported baseline-adjusted data were similar with average differences less than 1 millisecond (ms) for all intervals. Both Developer and Sponsor-reported data demonstrated assay sensitivity with similar ddQTcF changes. Average within-subject standard deviation for triplicate QTcF measurements was significantly lower for Developer than Sponsor-reported data (5.4 ms and 7.2 ms, respectively; p<0.001). Conclusion The virtually automated ECG algorithm used for this analysis produced similar yet less variable TQT results compared to the Sponsor-reported study, without the use of a manual core laboratory. These findings indicate CSRC ECG datasets can be useful for evaluating novel methods and algorithms for determining QT/QTc prolongation by drugs. 
While the results should not constitute endorsement of specific algorithms by either CSRC or FDA, the value of a public domain digital ECG warehouse to provide prospective, blinded comparisons of ECG technologies applied for QT/QTc measurement is illustrated. PMID:22424006

  18. Verification and Validation Process for Progressive Damage and Failure Analysis Methods in the NASA Advanced Composites Consortium

    NASA Technical Reports Server (NTRS)

    Wanthal, Steven; Schaefer, Joseph; Justusson, Brian; Hyder, Imran; Engelstad, Stephen; Rose, Cheryl

    2017-01-01

    The Advanced Composites Consortium is a US Government/Industry partnership supporting technologies to enable timeline and cost reduction in the development of certified composite aerospace structures. A key component of the consortium's approach is the development and validation of improved progressive damage and failure analysis methods for composite structures. These methods will enable increased use of simulations in design trade studies and detailed design development, and thereby enable more targeted physical test programs to validate designs. To accomplish this goal with confidence, a rigorous verification and validation process was developed. The process was used to evaluate analysis methods and associated implementation requirements to ensure calculation accuracy and to gauge predictability for composite failure modes of interest. This paper introduces the verification and validation process developed by the consortium during the Phase I effort of the Advanced Composites Project. Specific structural failure modes of interest are first identified, and a subset of standard composite test articles are proposed to interrogate a progressive damage analysis method's ability to predict each failure mode of interest. Test articles are designed to capture the underlying composite material constitutive response as well as the interaction of failure modes representing typical failure patterns observed in aerospace structures.

  19. A survey of Existing V&V, UQ and M&S Data and Knowledge Bases in Support of the Nuclear Energy - Knowledge base for Advanced Modeling and Simulation (NE-KAMS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hyung Lee; Rich Johnson, Ph.D.; Kimberlyn C. Moussesau

    2011-12-01

    The Nuclear Energy - Knowledge base for Advanced Modeling and Simulation (NE-KAMS) is being developed at the Idaho National Laboratory in conjunction with Bettis Laboratory, Sandia National Laboratories, Argonne National Laboratory, Oak Ridge National Laboratory, Utah State University and others. The objective of this consortium is to establish a comprehensive knowledge base to provide Verification and Validation (V&V) and Uncertainty Quantification (UQ) and other resources for advanced modeling and simulation (M&S) in nuclear reactor design and analysis. NE-KAMS will become a valuable resource for the nuclear industry, the national laboratories, the U.S. NRC and the public to help ensure the safe operation of existing and future nuclear reactors. A survey and evaluation of the state-of-the-art of existing V&V and M&S databases, including the Department of Energy and commercial databases, has been performed to ensure that the NE-KAMS effort will not be duplicating existing resources and capabilities and to assess the scope of the effort required to develop and implement NE-KAMS. The survey and evaluation have indeed highlighted the unique set of value-added functionality and services that NE-KAMS will provide to its users. Additionally, the survey has helped develop a better understanding of the architecture and functionality of these data and knowledge bases that can be used to leverage the development of NE-KAMS.

  20. Systematic review of systemic sclerosis-specific instruments for the EULAR Outcome Measures Library: An evolutional database model of validated patient-reported outcomes.

    PubMed

    Ingegnoli, Francesca; Carmona, Loreto; Castrejon, Isabel

    2017-04-01

    The EULAR Outcome Measures Library (OML) is a freely available database of validated patient-reported outcomes (PROs). The aim of this study was to provide a comprehensive review of validated PROs specifically developed for systemic sclerosis (SSc) to feed the EULAR OML. A sensitive search was developed in Medline and Embase to identify all validation studies, cohort studies, reviews, or meta-analyses in which the objective was the development or validation of specific PROs evaluating organ involvement, disease activity or damage in SSc. A reviewer screened titles and abstracts, selected the studies, and collected data concerning validation using ad hoc forms based on the COSMIN checklist. From 13,140 articles captured, 74 met the predefined criteria. After excluding two instruments that were unavailable in English, the selected 23 studies provided information on seven SSc-specific PROs covering different SSc domains: burden of illness (symptom burden index), functional status (Scleroderma Assessment Questionnaire), functional ability (scleroderma Functional Score), Raynaud's phenomenon (Raynaud's condition score), mouth involvement (Mouth Handicap in SSc), gastro-intestinal involvement (University of California Los Angeles-Scleroderma Clinical Trial Consortium Gastro-Intestinal tract 2.0), and skin involvement (skin self-assessment). Each of them is partially validated and has different psychometric requirements. Seven SSc-specific PROs have a minimum validation and were included in the EULAR OML. Further development in the area of disease-specific PROs in SSc is warranted. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. The eNanoMapper database for nanomaterial safety information

    PubMed Central

    Chomenidis, Charalampos; Doganis, Philip; Fadeel, Bengt; Grafström, Roland; Hardy, Barry; Hastings, Janna; Hegi, Markus; Jeliazkov, Vedrin; Kochev, Nikolay; Kohonen, Pekka; Munteanu, Cristian R; Sarimveis, Haralambos; Smeets, Bart; Sopasakis, Pantelis; Tsiliki, Georgia; Vorgrimmler, David; Willighagen, Egon

    2015-01-01

    Summary Background: The NanoSafety Cluster, a cluster of projects funded by the European Commission, identified the need for a computational infrastructure for toxicological data management of engineered nanomaterials (ENMs). Ontologies, open standards, and interoperable designs were envisioned to empower a harmonized approach to European research in nanotechnology. This setting provides a number of opportunities and challenges in the representation of nanomaterials data and the integration of ENM information originating from diverse systems. Within this cluster, eNanoMapper works towards supporting the collaborative safety assessment for ENMs by creating a modular and extensible infrastructure for data sharing, data analysis, and building computational toxicology models for ENMs. Results: The eNanoMapper database solution builds on the previous experience of the consortium partners in supporting diverse data through flexible data storage, open source components and web services. We have recently described the design of the eNanoMapper prototype database along with a summary of challenges in the representation of ENM data and an extensive review of existing nano-related data models, databases, and nanomaterials-related entries in chemical and toxicogenomic databases. This paper continues with a focus on the database functionality exposed through its application programming interface (API), and its use in visualisation and modelling. Considering the preferred community practice of using spreadsheet templates, we developed a configurable spreadsheet parser facilitating user friendly data preparation and data upload. We further present a web application able to retrieve the experimental data via the API and analyze it with multiple data preprocessing and machine learning algorithms.
Conclusion: We demonstrate how the eNanoMapper database is used to import and publish online ENM and assay data from several data sources, how the “representational state transfer” (REST) API enables building user friendly interfaces and graphical summaries of the data, and how these resources facilitate the modelling of reproducible quantitative structure–activity relationships for nanomaterials (NanoQSAR). PMID:26425413
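    The configurable spreadsheet parser mentioned above is, at its core, a column-mapping layer between user templates and database fields. The Python sketch below illustrates the idea only; the column names, field names, and `CONFIG` mapping are invented for illustration and are not the actual eNanoMapper template schema.

```python
import csv
import io

# Hypothetical column mapping: spreadsheet header -> internal field name.
# The real eNanoMapper parser is configurable per template; this mapping
# is an illustrative stand-in, not the project's actual schema.
CONFIG = {
    "Material name": "name",
    "Core chemistry": "core",
    "Size (nm)": "size_nm",
}

def parse_template(text, config):
    """Parse a CSV export of a spreadsheet template into records using a column map."""
    records = []
    for row in csv.DictReader(io.StringIO(text)):
        rec = {field: row[col] for col, field in config.items() if col in row}
        if "size_nm" in rec:
            rec["size_nm"] = float(rec["size_nm"])  # coerce numeric fields
        records.append(rec)
    return records

sample = "Material name,Core chemistry,Size (nm)\nNM-101,TiO2,7.0\n"
print(parse_template(sample, CONFIG))
```

Keeping the column mapping in data rather than code is what makes such a parser configurable per template: supporting a new spreadsheet layout only requires a new mapping, not new parsing logic.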

  2. MetaBar - a tool for consistent contextual data acquisition and standards compliant submission.

    PubMed

    Hankeln, Wolfgang; Buttigieg, Pier Luigi; Fink, Dennis; Kottmann, Renzo; Yilmaz, Pelin; Glöckner, Frank Oliver

    2010-06-30

    Environmental sequence datasets are increasing at an exponential rate; however, the vast majority of them lack appropriate descriptors like sampling location, time and depth/altitude: generally referred to as metadata or contextual data. The consistent capture and structured submission of these data is crucial for integrated data analysis and ecosystems modeling. The application MetaBar has been developed to support consistent contextual data acquisition. MetaBar is a spreadsheet and web-based software tool designed to assist users in the consistent acquisition, electronic storage, and submission of contextual data associated with their samples. A preconfigured Microsoft Excel spreadsheet is used to initiate structured contextual data storage in the field or laboratory. Each sample is given a unique identifier and at any stage the sheets can be uploaded to the MetaBar database server. To label samples, identifiers can be printed as barcodes. An intuitive web interface provides quick access to the contextual data in the MetaBar database as well as user and project management capabilities. Export functions facilitate contextual and sequence data submission to the International Nucleotide Sequence Database Collaboration (INSDC), comprising the DNA Data Bank of Japan (DDBJ), the European Molecular Biology Laboratory database (EMBL) and GenBank. MetaBar requests and stores contextual data in compliance with the Genomic Standards Consortium specifications. The MetaBar open source code base for local installation is available under the GNU General Public License version 3 (GNU GPL3). The MetaBar software supports the typical workflow from data acquisition and field-sampling to contextual data enriched sequence submission to an INSDC database. The integration with the megx.net marine Ecological Genomics database and portal facilitates georeferenced data integration and metadata-based comparisons of sampling sites as well as interactive data visualization. 
The ample export functionalities and the INSDC submission support enable exchange of data across disciplines and safeguarding contextual data.
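    The workflow described above (unique identifiers printed as barcode labels, plus validation of required contextual fields before submission) can be sketched in a few lines. This is a minimal illustration, assuming a hypothetical `MB-` identifier scheme and an invented subset of required fields; it is not MetaBar's actual implementation, nor the Genomic Standards Consortium's checklist.

```python
import itertools

# Illustrative subset of required contextual fields (an assumption, not
# the Genomic Standards Consortium's actual specification).
REQUIRED = ("latitude", "longitude", "collection_date")

def make_labeler(prefix="MB"):
    """Return a callable yielding unique sample identifiers suitable for barcodes."""
    counter = itertools.count(1)
    return lambda: f"{prefix}-{next(counter):06d}"

def validate(sample):
    """Return the required contextual fields still missing from a sample record."""
    return [f for f in REQUIRED if not sample.get(f)]

new_id = make_labeler()
s = {"latitude": 54.09, "longitude": 8.31, "collection_date": "2009-07-14"}
s["id"] = new_id()
print(s["id"], validate(s))  # MB-000001 []
```

Validating contextual completeness at acquisition time, rather than at submission time, is the point of such a check: a missing sampling location cannot be reconstructed later.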

  3. The eNanoMapper database for nanomaterial safety information.

    PubMed

    Jeliazkova, Nina; Chomenidis, Charalampos; Doganis, Philip; Fadeel, Bengt; Grafström, Roland; Hardy, Barry; Hastings, Janna; Hegi, Markus; Jeliazkov, Vedrin; Kochev, Nikolay; Kohonen, Pekka; Munteanu, Cristian R; Sarimveis, Haralambos; Smeets, Bart; Sopasakis, Pantelis; Tsiliki, Georgia; Vorgrimmler, David; Willighagen, Egon

    2015-01-01

    The NanoSafety Cluster, a cluster of projects funded by the European Commission, identified the need for a computational infrastructure for toxicological data management of engineered nanomaterials (ENMs). Ontologies, open standards, and interoperable designs were envisioned to empower a harmonized approach to European research in nanotechnology. This setting provides a number of opportunities and challenges in the representation of nanomaterials data and the integration of ENM information originating from diverse systems. Within this cluster, eNanoMapper works towards supporting the collaborative safety assessment for ENMs by creating a modular and extensible infrastructure for data sharing, data analysis, and building computational toxicology models for ENMs. The eNanoMapper database solution builds on the previous experience of the consortium partners in supporting diverse data through flexible data storage, open source components and web services. We have recently described the design of the eNanoMapper prototype database along with a summary of challenges in the representation of ENM data and an extensive review of existing nano-related data models, databases, and nanomaterials-related entries in chemical and toxicogenomic databases. This paper continues with a focus on the database functionality exposed through its application programming interface (API), and its use in visualisation and modelling. Considering the preferred community practice of using spreadsheet templates, we developed a configurable spreadsheet parser facilitating user friendly data preparation and data upload. We further present a web application able to retrieve the experimental data via the API and analyze it with multiple data preprocessing and machine learning algorithms. 
We demonstrate how the eNanoMapper database is used to import and publish online ENM and assay data from several data sources, how the "representational state transfer" (REST) API enables building user friendly interfaces and graphical summaries of the data, and how these resources facilitate the modelling of reproducible quantitative structure-activity relationships for nanomaterials (NanoQSAR).

  4. MetaBar - a tool for consistent contextual data acquisition and standards compliant submission

    PubMed Central

    2010-01-01

    Background Environmental sequence datasets are increasing at an exponential rate; however, the vast majority of them lack appropriate descriptors like sampling location, time and depth/altitude: generally referred to as metadata or contextual data. The consistent capture and structured submission of these data is crucial for integrated data analysis and ecosystems modeling. The application MetaBar has been developed to support consistent contextual data acquisition. Results MetaBar is a spreadsheet and web-based software tool designed to assist users in the consistent acquisition, electronic storage, and submission of contextual data associated with their samples. A preconfigured Microsoft® Excel® spreadsheet is used to initiate structured contextual data storage in the field or laboratory. Each sample is given a unique identifier and at any stage the sheets can be uploaded to the MetaBar database server. To label samples, identifiers can be printed as barcodes. An intuitive web interface provides quick access to the contextual data in the MetaBar database as well as user and project management capabilities. Export functions facilitate contextual and sequence data submission to the International Nucleotide Sequence Database Collaboration (INSDC), comprising the DNA Data Bank of Japan (DDBJ), the European Molecular Biology Laboratory database (EMBL) and GenBank. MetaBar requests and stores contextual data in compliance with the Genomic Standards Consortium specifications. The MetaBar open source code base for local installation is available under the GNU General Public License version 3 (GNU GPL3). Conclusion The MetaBar software supports the typical workflow from data acquisition and field-sampling to contextual data enriched sequence submission to an INSDC database. 
The integration with the megx.net marine Ecological Genomics database and portal facilitates georeferenced data integration and metadata-based comparisons of sampling sites as well as interactive data visualization. The ample export functionalities and the INSDC submission support enable exchange of data across disciplines and safeguarding contextual data. PMID:20591175

  5. An expression database for roots of the model legume Medicago truncatula under salt stress

    PubMed Central

    2009-01-01

    Background Medicago truncatula is a model legume whose genome is currently being sequenced by an international consortium. Abiotic stresses such as salt stress limit plant growth and crop productivity, including those of legumes. We anticipate that studies on M. truncatula will shed light on other economically important legumes across the world. Here, we report the development of a database called MtED that contains gene expression profiles of the roots of M. truncatula based on time-course salt stress experiments using the Affymetrix Medicago GeneChip. Our hope is that MtED will provide information to assist in improving abiotic stress resistance in legumes. Description The results of our microarray experiment with roots of M. truncatula under 180 mM sodium chloride were deposited in the MtED database. Additionally, sequence and annotation information regarding microarray probe sets were included. MtED provides functional category analysis based on Gene and GeneBins Ontology, and other Web-based tools for querying and retrieving query results, browsing pathways and transcription factor families, showing metabolic maps, and comparing and visualizing expression profiles. Utilities such as mapping probe sets to the genome of M. truncatula and in-silico PCR were implemented with the BLAT software suite and are also available through the MtED database. Conclusion MtED was built with the PHP scripting language and a MySQL relational database system on a Linux server. It has an integrated Web interface, which facilitates ready examination and interpretation of the results of microarray experiments. It is intended to help in selecting gene markers to improve abiotic stress resistance in legumes. MtED is available at http://bioinformatics.cau.edu.cn/MtED/. PMID:19906315

  6. An expression database for roots of the model legume Medicago truncatula under salt stress.

    PubMed

    Li, Daofeng; Su, Zhen; Dong, Jiangli; Wang, Tao

    2009-11-11

    Medicago truncatula is a model legume whose genome is currently being sequenced by an international consortium. Abiotic stresses such as salt stress limit plant growth and crop productivity, including those of legumes. We anticipate that studies on M. truncatula will shed light on other economically important legumes across the world. Here, we report the development of a database called MtED that contains gene expression profiles of the roots of M. truncatula based on time-course salt stress experiments using the Affymetrix Medicago GeneChip. Our hope is that MtED will provide information to assist in improving abiotic stress resistance in legumes. The results of our microarray experiment with roots of M. truncatula under 180 mM sodium chloride were deposited in the MtED database. Additionally, sequence and annotation information regarding microarray probe sets were included. MtED provides functional category analysis based on Gene and GeneBins Ontology, and other Web-based tools for querying and retrieving query results, browsing pathways and transcription factor families, showing metabolic maps, and comparing and visualizing expression profiles. Utilities such as mapping probe sets to the genome of M. truncatula and in-silico PCR were implemented with the BLAT software suite and are also available through the MtED database. MtED was built with the PHP scripting language and a MySQL relational database system on a Linux server. It has an integrated Web interface, which facilitates ready examination and interpretation of the results of microarray experiments. It is intended to help in selecting gene markers to improve abiotic stress resistance in legumes. MtED is available at http://bioinformatics.cau.edu.cn/MtED/.
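    A typical query against a time-course expression database like MtED is a fold-change comparison against the untreated baseline. The sketch below runs such a query over an in-memory stand-in table; the probe-set IDs and expression values are invented for illustration and are not taken from MtED.

```python
# Illustrative in-memory stand-in for a time-course expression table;
# probe-set IDs and values are invented, not real MtED data.
PROFILES = {
    "Mtr.1234.1.S1_at": {"0h": 100.0, "6h": 180.0, "24h": 320.0},
    "Mtr.5678.1.S1_at": {"0h": 200.0, "6h": 190.0, "24h": 60.0},
}

def fold_change(probe, baseline="0h"):
    """Expression at each time point relative to the baseline time point."""
    prof = PROFILES[probe]
    base = prof[baseline]
    return {t: v / base for t, v in prof.items()}

def induced(threshold=2.0, timepoint="24h"):
    """Probe sets up-regulated beyond `threshold`-fold at `timepoint`."""
    return [p for p in PROFILES if fold_change(p)[timepoint] >= threshold]

print(induced())  # ['Mtr.1234.1.S1_at']
```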

  7. An environmental database for Venice and tidal zones

    NASA Astrophysics Data System (ADS)

    Macaluso, L.; Fant, S.; Marani, A.; Scalvini, G.; Zane, O.

    2003-04-01

    The natural environment is a complex, highly variable system that cannot be physically reproduced, neither in the laboratory nor in a confined territory. Environmental experimental studies are thus necessarily based on field measurements distributed in time and space. Only extensive data collections can provide the representative samples of system behavior that are essential for scientific advancement. The assimilation of large data collections into accessible archives must necessarily be implemented in electronic databases. For tidal environments in general, and for the Venice lagoon in particular, it is useful to establish a database, freely accessible to the scientific community, documenting the dynamics of such systems and their response to anthropic pressures and climatic variability. At the Istituto Veneto di Scienze, Lettere ed Arti in Venice (Italy), two Internet environmental databases have been developed: one collects detailed information on the Venice lagoon; the other coordinates the research consortium of the "TIDE" EU RTD project, which covers three tidal areas: the Venice Lagoon (Italy), Morecambe Bay (England), and the Forth Estuary (Scotland). The archives may be accessed through the URL www.istitutoveneto.it. The first is freely available to anyone interested. It is continuously updated and has been structured to document the Venetian environment and to disseminate this information for educational purposes (see the "Dissemination" section). The second is supplied by scientists and engineers working on these tidal systems for various purposes (scientific, management, conservation, etc.); it is aimed at interested researchers and grows with their contributions. 
    Both are intended to promote scientific communication, to contribute to a distributed information system collecting homogeneous themes, and to initiate interconnection among databases covering different kinds of environments.

  8. A hybrid CNN feature model for pulmonary nodule malignancy risk differentiation.

    PubMed

    Wang, Huafeng; Zhao, Tingting; Li, Lihong Connie; Pan, Haixia; Liu, Wanquan; Gao, Haoqi; Han, Fangfang; Wang, Yuehai; Qi, Yifan; Liang, Zhengrong

    2018-01-01

    The malignancy risk differentiation of pulmonary nodules is one of the most challenging tasks in computer-aided diagnosis (CADx). Most recently reported CADx methods or schemes based on texture and shape estimation have shown relatively satisfactory performance in differentiating the malignancy risk level among nodules detected in lung cancer screening. However, existing CADx schemes tend to detect and analyze characteristics of pulmonary nodules from a statistical perspective according to local features only. Motivated by the currently prevailing learning ability of convolutional neural networks (CNNs), which simulate human neural networks for target recognition, and by our previous research on texture features, we present a hybrid model that takes both global and local features into consideration for pulmonary nodule differentiation, using the largest public database, founded by the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI). By comparing three types of CNN models, two of which we newly propose, we observed that the multi-channel CNN model yielded the best capacity for differentiating the malignancy risk of nodules based on the projection of distributions of extracted features. Moreover, the CADx scheme using the new multi-channel CNN model outperformed our previously developed CADx scheme using 3D texture feature analysis, increasing the computed area under the receiver operating characteristic curve (AUC) from 0.9441 to 0.9702.
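    The AUC values reported above (0.9441 vs. 0.9702) summarize ranking quality and can be computed directly from classifier scores via the Mann-Whitney rank statistic, without fitting an explicit ROC curve. A minimal sketch, with illustrative scores and labels:

```python
def roc_auc(scores, labels):
    """AUC as the probability that a random positive outranks a random
    negative (Mann-Whitney U), counting ties as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Perfect separation of malignant (1) from benign (0) scores gives AUC 1.0.
print(roc_auc([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0]))  # 1.0
```

This pairwise formulation makes the interpretation of the reported improvement concrete: moving from 0.9441 to 0.9702 means a larger fraction of malignant/benign score pairs are ranked correctly.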

  9. Sharing and reusing cardiovascular anatomical models over the Web: a step towards the implementation of the virtual physiological human project.

    PubMed

    Gianni, Daniele; McKeever, Steve; Yu, Tommy; Britten, Randall; Delingette, Hervé; Frangi, Alejandro; Hunter, Peter; Smith, Nicolas

    2010-06-28

    Sharing and reusing anatomical models over the Web offers a significant opportunity to progress the investigation of cardiovascular diseases. However, the current sharing methodology suffers from the limitations of static model delivery (i.e. embedding static links to the models within Web pages) and of a disaggregated view of the model metadata produced by publications and cardiac simulations in isolation. In the context of euHeart--a research project targeting the description and representation of cardiovascular models for disease diagnosis and treatment purposes--we aim to overcome the above limitations with the introduction of euHeartDB, a Web-enabled database for anatomical models of the heart. The database implements a dynamic sharing methodology by managing data access and by tracing all applications. In addition to this, euHeartDB establishes a knowledge link with the physiome model repository by linking geometries to CellML models embedded in the simulation of cardiac behaviour. Furthermore, euHeartDB uses the exFormat--a preliminary version of the interoperable FieldML data format--to effectively promote reuse of anatomical models, and currently incorporates Continuum Mechanics, Image Analysis, Signal Processing and System Identification Graphical User Interface (CMGUI), a rendering engine, to provide three-dimensional graphical views of the models populating the database. Currently, euHeartDB stores 11 cardiac geometries developed within the euHeart project consortium.

  10. Computational Identification Of CDR3 Sequence Archetypes Among Immunoglobulin Sequences in Chronic Lymphocytic Leukemia

    PubMed Central

    Messmer, Bradley T; Raphael, Benjamin J; Aerni, Sarah J; Widhopf, George F; Rassenti, Laura Z; Gribben, John G; Kay, Neil E; Kipps, Thomas J

    2009-01-01

    The leukemia cells of unrelated patients with chronic lymphocytic leukemia (CLL) display a restricted repertoire of immunoglobulin (Ig) gene rearrangements with preferential usage of certain Ig gene segments. We developed a computational method to rigorously quantify biases in Ig sequence similarity in large patient databases and to identify groups of patients with unusual levels of sequence similarity. We applied our method to sequences from 1577 CLL patients through the CLL Research Consortium (CRC), and identified 67 similarity groups into which roughly 20% of all patients could be assigned. Immunoglobulin light chain class was highly correlated within all groups and light chain gene usage was similar within sets. Surprisingly, over 40% of the identified groups were composed of somatically mutated genes. This study significantly expands the evidence that antigen selection shapes the Ig repertoire in CLL. PMID:18640719

  11. Computational identification of CDR3 sequence archetypes among immunoglobulin sequences in chronic lymphocytic leukemia.

    PubMed

    Messmer, Bradley T; Raphael, Benjamin J; Aerni, Sarah J; Widhopf, George F; Rassenti, Laura Z; Gribben, John G; Kay, Neil E; Kipps, Thomas J

    2009-03-01

    The leukemia cells of unrelated patients with chronic lymphocytic leukemia (CLL) display a restricted repertoire of immunoglobulin (Ig) gene rearrangements with preferential usage of certain Ig gene segments. We developed a computational method to rigorously quantify biases in Ig sequence similarity in large patient databases and to identify groups of patients with unusual levels of sequence similarity. We applied our method to sequences from 1577 CLL patients through the CLL Research Consortium (CRC), and identified 67 similarity groups into which roughly 20% of all patients could be assigned. Immunoglobulin light chain class was highly correlated within all groups and light chain gene usage was similar within sets. Surprisingly, over 40% of the identified groups were composed of somatically mutated genes. This study significantly expands the evidence that antigen selection shapes the Ig repertoire in CLL.
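    As a rough illustration of assigning sequences to similarity groups, the sketch below performs greedy single-linkage grouping of CDR3 strings using difflib's similarity ratio. This is only a toy stand-in: the published method rigorously quantifies similarity biases against a statistical background model rather than applying a fixed string-similarity cutoff, and the sequences shown are invented.

```python
from difflib import SequenceMatcher

def similar(a, b, threshold=0.8):
    """Pairwise string similarity above a stringency cutoff (illustrative)."""
    return SequenceMatcher(None, a, b).ratio() >= threshold

def group(seqs, threshold=0.8):
    """Greedy single-linkage grouping: a sequence joins the first group
    containing any member it resembles, else it starts a new group."""
    groups = []
    for s in seqs:
        for g in groups:
            if any(similar(s, m, threshold) for m in g):
                g.append(s)
                break
        else:
            groups.append([s])
    return groups

cdr3 = ["ARDANGMDV", "ARDSNGMDV", "AKGYSSGWYFD"]
print(group(cdr3))  # the first two cluster together, the third stands alone
```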

  12. Toppar: an interactive browser for viewing association study results.

    PubMed

    Juliusdottir, Thorhildur; Banasik, Karina; Robertson, Neil R; Mott, Richard; McCarthy, Mark I

    2018-06-01

    Data integration and visualization help geneticists make sense of large amounts of data. To help facilitate interpretation of genetic association data we developed Toppar, a customizable visualization tool that stores results from association studies and enables browsing over multiple results, by combining features from existing tools and linking to appropriate external databases. Detailed information on Toppar's features and functionality is available on our website http://mccarthy.well.ox.ac.uk/toppar/docs along with instructions on how to download, install and run Toppar. Our online version of Toppar is accessible from the website and can be test-driven using Firefox, Safari or Chrome on subsets of publicly available genome-wide association study data on anthropometric waist and body mass index traits (Locke et al., 2015; Shungin et al., 2015) from the Genetic Investigation of ANthropometric Traits consortium. totajuliusd@gmail.com.

  13. A User's Guide to the Encyclopedia of DNA Elements (ENCODE)

    PubMed Central

    2011-01-01

    The mission of the Encyclopedia of DNA Elements (ENCODE) Project is to enable the scientific and medical communities to interpret the human genome sequence and apply it to understand human biology and improve health. The ENCODE Consortium is integrating multiple technologies and approaches in a collective effort to discover and define the functional elements encoded in the human genome, including genes, transcripts, and transcriptional regulatory regions, together with their attendant chromatin states and DNA methylation patterns. In the process, standards to ensure high-quality data have been implemented, and novel algorithms have been developed to facilitate analysis. Data and derived results are made available through a freely accessible database. Here we provide an overview of the project and the resources it is generating and illustrate the application of ENCODE data to interpret the human genome. PMID:21526222

  14. Ventilator-Related Adverse Events: A Taxonomy and Findings From 3 Incident Reporting Systems.

    PubMed

    Pham, Julius Cuong; Williams, Tamara L; Sparnon, Erin M; Cillie, Tam K; Scharen, Hilda F; Marella, William M

    2016-05-01

    In 2009, researchers from Johns Hopkins University's Armstrong Institute for Patient Safety and Quality; public agencies, including the FDA; and private partners, including the Emergency Care Research Institute and the University HealthSystem Consortium (UHC) Safety Intelligence Patient Safety Organization, sought to form a public-private partnership for the promotion of patient safety (P5S) to advance patient safety through voluntary partnerships. The study objective was to test the concept of the P5S to advance our understanding of safety issues related to ventilator events, to develop a common classification system for categorizing adverse events related to mechanical ventilators, and to perform a comparison of adverse events across different adverse event reporting systems. We performed a cross-sectional analysis of ventilator-related adverse events reported in 2012 from the following incident reporting systems: the Pennsylvania Patient Safety Authority's Patient Safety Reporting System, UHC's Safety Intelligence Patient Safety Organization database, and the FDA's Manufacturer and User Facility Device Experience database. Once each organization had its dataset of ventilator-related adverse events, reviewers read the narrative descriptions of each event and classified it according to the developed common taxonomy. A Pennsylvania Patient Safety Authority, FDA, and UHC search provided 252, 274, and 700 relevant reports, respectively. The 3 event types most commonly reported to the UHC and the Pennsylvania Patient Safety Authority's Patient Safety Reporting System databases were airway/breathing circuit issue, human factor issues, and ventilator malfunction events. The top 3 event types reported to the FDA were ventilator malfunction, power source issue, and alarm failure. 
Overall, we found that (1) through the development of a common taxonomy, adverse events from 3 reporting systems can be evaluated, (2) the types of events reported in each database were related to the purpose of the database and the source of the reports, resulting in significant differences in reported event categories across the 3 systems, and (3) a public-private collaboration for investigating ventilator-related adverse events under the P5S model is feasible. Copyright © 2016 by Daedalus Enterprises.
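    Classifying free-text narratives into a common taxonomy, as the reviewers above did manually, can be approximated with keyword rules. In the sketch below the category names follow the abstract, but the keyword lists are assumptions for illustration; the study's actual classification was performed by human review of each narrative.

```python
# Illustrative keyword rules for the common taxonomy's event types;
# category names follow the abstract, keywords are invented.
TAXONOMY = {
    "airway/breathing circuit issue": ["circuit", "tubing", "disconnect"],
    "human factor issue": ["setting", "staff", "training"],
    "ventilator malfunction": ["malfunction", "failed", "shut down"],
    "power source issue": ["battery", "power"],
    "alarm failure": ["alarm"],
}

def classify(narrative):
    """Return every taxonomy category whose keywords appear in the narrative."""
    text = narrative.lower()
    return [cat for cat, kws in TAXONOMY.items() if any(k in text for k in kws)]

print(classify("Ventilator alarm did not sound after circuit disconnect"))
```

Returning all matching categories rather than a single label mirrors the observation above that one reported event can involve several issue types at once.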

  15. Overcoming Species Boundaries in Peptide Identification with Bayesian Information Criterion-driven Error-tolerant Peptide Search (BICEPS)*

    PubMed Central

    Renard, Bernhard Y.; Xu, Buote; Kirchner, Marc; Zickmann, Franziska; Winter, Dominic; Korten, Simone; Brattig, Norbert W.; Tzur, Amit; Hamprecht, Fred A.; Steen, Hanno

    2012-01-01

    Currently, the reliable identification of peptides and proteins is only feasible when thoroughly annotated sequence databases are available. Although sequencing capacities continue to grow, many organisms remain without reliable, fully annotated reference genomes required for proteomic analyses. Standard database search algorithms fail to identify peptides that are not exactly contained in a protein database. De novo searches are generally hindered by their restricted reliability, and current error-tolerant search strategies are limited by global, heuristic tradeoffs between database and spectral information. We propose a Bayesian information criterion-driven error-tolerant peptide search (BICEPS) and offer an open source implementation based on this statistical criterion to automatically balance the information of each single spectrum and the database, while limiting the run time. We show that BICEPS performs as well as current database search algorithms when such algorithms are applied to sequenced organisms, whereas BICEPS only uses a remotely related organism database. For instance, we use a chicken instead of a human database corresponding to an evolutionary distance of more than 300 million years (International Chicken Genome Sequencing Consortium (2004) Sequence and comparative analysis of the chicken genome provide unique perspectives on vertebrate evolution. Nature 432, 695–716). We demonstrate the successful application to cross-species proteomics with a 33% increase in the number of identified proteins for a filarial nematode sample of Litomosoides sigmodontis. PMID:22493179
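    The Bayesian information criterion underlying BICEPS trades goodness of fit against model complexity: BIC = k ln n - 2 ln L, where k counts free parameters, n the observations, and L the likelihood; the candidate with the lower BIC is preferred. A minimal numeric sketch, with invented likelihoods and parameter counts (not values from the paper):

```python
import math

def bic(log_likelihood, k, n):
    """Bayesian information criterion: k*ln(n) - 2*ln(L); lower is better."""
    return k * math.log(n) - 2.0 * log_likelihood

# Hypothetical comparison: an exact database peptide (few free parameters)
# vs. an error-tolerant variant that fits the spectrum slightly better but
# spends extra parameters on the modification.
exact = bic(log_likelihood=-120.0, k=1, n=50)
tolerant = bic(log_likelihood=-118.0, k=4, n=50)
print(exact < tolerant)  # True: the simpler explanation wins here
```

The penalty term k ln n is what balances spectral evidence against database evidence per spectrum: an error-tolerant match must improve the fit enough to pay for its additional parameters.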

  16. The Pittsburgh-Based Project To Train Educational R & D Personnel. Research Training Through a Multiple System Consortium, Paper Number 1.

    ERIC Educational Resources Information Center

    Heathers, Glen

    The Learning Research and Development Center at the University of Pittsburgh, as part of a consortium of 15 educational agencies, is the prime contractor for a project to design, conduct, and diffuse training programs for educational R & D personnel. Four training programs in the areas of curriculum development and the design and conduct of local…

  17. The TV Development Concept Papers. Preliminary Working Draft. Occasional Papers, Vol. 1, No. 4.

    ERIC Educational Resources Information Center

    To Educate the People Consortium, Detroit, MI.

    This working paper is an intermediary stage in the development of a new telecurriculum for the To Educate the People Consortium. Section 1 is an introduction to the process. Section 2 contains the outlines of 18 courses in the telecurriculum based on concept papers submitted by teams from the consortium. Each course is in the format of six 1-hour…

  18. Challenges for developing RHIOs in rural America: a study in Appalachian Ohio.

    PubMed

    Phillips, Brian O; Welch, Elissa E

    2007-01-01

    A healthy population is essential for the socioeconomic success of the Appalachian region and other rural, underserved areas in the United States. However, rural communities are only beginning to deploy the advanced health information technologies being used by larger urban institutions. Regional health information organizations have the potential to be the building blocks that will harmonize HIT exchange on a national scale. But there are many challenges to developing RHIOs in rural communities. In 2004, the Ohio University College of Osteopathic Medicine convened the Appalachian Regional Informatics Consortium, a community-based cross-section of healthcare providers in southeastern Ohio. The consortium was awarded an Integrated Advanced Information Management Systems planning grant from the National Institutes of Health to investigate rural RHIO development, the first such rural project. This article examines the consortium and the challenges facing rural RHIO development in Appalachian Ohio.

  19. The project office of the Gaia Data Processing and Analysis Consortium

    NASA Astrophysics Data System (ADS)

    Mercier, E.; Els, S.; Gracia, G.; O'Mullane, W.; Lock, T.; Comoretto, G.

    2010-07-01

    Gaia is Europe's future astrometry satellite, currently under development. The data collected by Gaia will be treated and analyzed by the Data Processing and Analysis Consortium (DPAC). DPAC consists of over 400 scientists in more than 22 countries who are currently developing the required data reduction, analysis, and handling algorithms and routines. DPAC is organized into Coordination Units (CUs) and Data Processing Centres (DPCs), each of which is individually responsible for developing software to process the different data. In 2008, the DPAC Project Office (PO) was set up to manage the day-to-day activities of the consortium, including implementation, development, and operations. This paper describes the tasks DPAC faces, the role of the DPAC PO in the Gaia framework, and how the PO supports the DPAC entities in their effort to fulfill the Gaia promise.

  20. Managing Rock and Paleomagnetic Data Flow with the MagIC Database: from Measurement and Analysis to Comprehensive Archive and Visualization

    NASA Astrophysics Data System (ADS)

    Koppers, A. A.; Minnett, R. C.; Tauxe, L.; Constable, C.; Donadini, F.

    2008-12-01

    The Magnetics Information Consortium (MagIC) is commissioned to implement and maintain an online portal to a relational database populated by rock and paleomagnetic data. The goal of MagIC is to archive all measurements and derived properties for studies of paleomagnetic directions (inclination, declination) and intensities, and for rock magnetic experiments (hysteresis, remanence, susceptibility, anisotropy). Organizing data for presentation in peer-reviewed publications or for ingestion into databases is a time-consuming task, and to facilitate these activities, three tightly integrated tools have been developed: MagIC-PY, the MagIC Console Software, and the MagIC Online Database. A suite of Python scripts is available to help users port their data into the MagIC data format. They allow the user to add important metadata, perform basic interpretations, and average results at the specimen, sample and site levels. These scripts have been validated for use as Open Source software under the UNIX, Linux, PC and Macintosh© operating systems. We have also developed the MagIC Console Software program to assist in collating rock and paleomagnetic data for upload to the MagIC database. The program runs in Microsoft Excel© on both Macintosh© computers and PCs. It performs routine consistency checks on data entries, and assists users in preparing data for uploading into the online MagIC database. The MagIC website is hosted under EarthRef.org at http://earthref.org/MAGIC/ and has two search nodes, one for paleomagnetism and one for rock magnetism. Both nodes provide query building based on location, reference, methods applied, material type and geological age, as well as a visual FlashMap interface to browse and select locations. Users can also browse the database by data type (inclination, intensity, VGP, hysteresis, susceptibility) or by data compilation to view all contributions associated with previous databases, such as PINT, GMPDB, or TAFI, or other user-defined compilations. Query results are displayed in a digestible tabular format allowing the user to descend from locations to sites, samples, specimens and measurements. At each stage, the result set can be saved and, when supported by the data, can be visualized by plotting global location maps, equal area, XY, age, and depth plots, or typical Zijderveld, hysteresis, magnetization and remanence diagrams.
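    The specimen-to-site averaging described above is conventionally done with Fisher (1953) statistics on unit vectors. A minimal sketch of that averaging, using hypothetical declination/inclination pairs (this is an illustration of the standard method, not the MagIC-PY code itself):

```python
import math

def dir_to_xyz(dec, inc):
    """Convert a declination/inclination pair (degrees) to a unit vector."""
    d, i = math.radians(dec), math.radians(inc)
    return (math.cos(i) * math.cos(d), math.cos(i) * math.sin(d), math.sin(i))

def fisher_mean(directions):
    """Fisher mean direction and precision parameter kappa for (dec, inc) pairs."""
    xs = [dir_to_xyz(d, i) for d, i in directions]
    sx, sy, sz = (sum(c) for c in zip(*xs))
    r = math.sqrt(sx * sx + sy * sy + sz * sz)  # resultant vector length
    dec = math.degrees(math.atan2(sy, sx)) % 360.0
    inc = math.degrees(math.asin(sz / r))
    n = len(directions)
    kappa = (n - 1) / (n - r) if n > r else float("inf")
    return dec, inc, kappa

# Three hypothetical specimen directions from one site
site = [(352.0, 42.0), (5.0, 45.0), (358.0, 39.0)]
mean_dec, mean_inc, kappa = fisher_mean(site)
```

    Tightly clustered directions give a resultant length close to n and hence a large kappa, which is how site-level quality is typically screened.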

  1. The simcyp population based simulator: architecture, implementation, and quality assurance.

    PubMed

    Jamei, Masoud; Marciniak, Steve; Edwards, Duncan; Wragg, Kris; Feng, Kairui; Barnett, Adrian; Rostami-Hodjegan, Amin

    2013-01-01

    Developing a user-friendly platform that can handle a vast number of complex physiologically based pharmacokinetic and pharmacodynamic (PBPK/PD) models, both for conventional small molecules and for larger biologic drugs, is a substantial challenge. Over the last decade the Simcyp Population Based Simulator has gained popularity in major pharmaceutical companies (70% of the top 40 in terms of R&D spending). Under the guidance of the Simcyp Consortium, it has evolved from a simple drug-drug interaction tool into a sophisticated and comprehensive Model Based Drug Development (MBDD) platform covering a broad range of applications, from early drug discovery to late drug development. This article provides an update on the latest architectural and implementation developments within the Simulator. Interconnection between peripheral modules, the dynamic model building process, and compound and population data handling are all described. The Simcyp Data Management (SDM) system, which contains the system and drug databases, helps enforce quality standards through seamless integration and tracking of any changes. This also supports internal approval procedures and the validation and auto-testing of newly implemented models and algorithms, an area of high interest to regulatory bodies.
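    To give a flavor of the kind of model such a platform manages, here is a generic one-compartment oral-dosing sketch with hypothetical parameters (first-order absorption and elimination, Euler integration). This is a textbook toy model, not Simcyp's PBPK implementation:

```python
def one_compartment_oral(dose_mg, ka, ke, vd_l, t_hours, dt=0.01):
    """Simulate plasma concentration (mg/L) after an oral dose.

    ka: first-order absorption rate (1/h) from the gut depot
    ke: first-order elimination rate (1/h) from the central compartment
    vd_l: volume of distribution (L)
    """
    gut = dose_mg      # drug remaining in the gut depot
    amount = 0.0       # drug in the central compartment
    conc = []
    for _ in range(int(t_hours / dt)):
        absorbed = ka * gut * dt
        eliminated = ke * amount * dt
        gut -= absorbed
        amount += absorbed - eliminated
        conc.append(amount / vd_l)
    return conc

# Hypothetical drug: 100 mg dose, ka = 1/h, ke = 0.2/h, Vd = 50 L, 24 h horizon
profile = one_compartment_oral(dose_mg=100, ka=1.0, ke=0.2, vd_l=50, t_hours=24)
cmax = max(profile)
```

    A population simulator of the kind described here would layer inter-individual variability on parameters like ka, ke, and Vd; the mechanics above are only the single-subject core.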

  2. Genegis: Computational Tools for Spatial Analyses of DNA Profiles with Associated Photo-Identification and Telemetry Records of Marine Mammals

    DTIC Science & Technology

    2013-09-30

    profiles of right whales Eubalaena glacialis from the North Atlantic Right Whale Consortium; 2) DNA profiles of sperm whales Physeter macrocephalus...of other cetacean databases in Wildbook format (e.g., North Atlantic right whales, sperm whales and Hector’s dolphins); 8) Supported continuing...of sperm whales, using samples collected during the 5-year Voyage of the Odyssey; and 3) DNA profiles of Hector’s dolphins from Cloudy Bay, New

  3. Clinical assessment of acute lateral ankle sprain injuries (ROAST): 2019 consensus statement and recommendations of the International Ankle Consortium.

    PubMed

    Delahunt, Eamonn; Bleakley, Chris M; Bossard, Daniela S; Caulfield, Brian M; Docherty, Carrie L; Doherty, Cailbhe; Fourchet, François; Fong, Daniel T; Hertel, Jay; Hiller, Claire E; Kaminski, Thomas W; McKeon, Patrick O; Refshauge, Kathryn M; Remus, Alexandria; Verhagen, Evert; Vicenzino, Bill T; Wikstrom, Erik A; Gribble, Phillip A

    2018-06-09

    Lateral ankle sprain injury is the most common musculoskeletal injury incurred by individuals who participate in sports and recreational physical activities. Following initial injury, a high proportion of individuals develop long-term injury-associated symptoms and chronic ankle instability. The development of chronic ankle instability is consequent on the interaction of mechanical and sensorimotor insufficiencies/impairments that manifest following acute lateral ankle sprain injury. To reduce the propensity for developing chronic ankle instability, clinical assessments should evaluate whether patients in the acute phase following lateral ankle sprain injury exhibit any mechanical and/or sensorimotor impairments. This modified Delphi study was undertaken under the auspices of the executive committee of the International Ankle Consortium. The primary aim was to develop recommendations, based on expert (n=14) consensus, for structured clinical assessment of acute lateral ankle sprain injuries. After two modified Delphi rounds, consensus was achieved on the clinical assessment of acute lateral ankle sprain injuries. Consensus was reached on a minimum standard clinical diagnostic assessment. Key components of this clinical diagnostic assessment include: establishing the mechanism of injury, as well as the assessment of ankle joint bones and ligaments. Through consensus, the expert panel also developed the International Ankle Consortium Rehabilitation-Oriented ASsessmenT (ROAST). The International Ankle Consortium ROAST will help clinicians identify mechanical and/or sensorimotor impairments that are associated with chronic ankle instability. This consensus statement from the International Ankle Consortium aims to be a key resource for clinicians who regularly assess individuals with acute lateral ankle sprain injuries. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  4. Monitoring, Analyzing and Assessing Radiation Belt Loss and Energization

    NASA Astrophysics Data System (ADS)

    Daglis, I. A.; Bourdarie, S.; Khotyaintsev, Y.; Santolik, O.; Horne, R.; Mann, I.; Turner, D.; Anastasiadis, A.; Angelopoulos, V.; Balasis, G.; Chatzichristou, E.; Cully, C.; Georgiou, M.; Glauert, S.; Grison, B.; Kolmasova, I.; Lazaro, D.; Macusova, E.; Maget, V.; Papadimitriou, C.; Ropokis, G.; Sandberg, I.; Usanova, M.

    2012-09-01

    We present the concept, objectives and expected impact of the MAARBLE (Monitoring, Analyzing and Assessing Radiation Belt Loss and Energization) project, which is being implemented by a consortium of seven institutions (five European, one Canadian and one US) with support from the European Community's Seventh Framework Programme. The MAARBLE project employs multi-spacecraft monitoring of the geospace environment, complemented by ground-based monitoring, in order to analyze and assess the physical mechanisms leading to radiation belt particle energization and loss. Particular attention is paid to the role of ULF/VLF waves. A database containing properties of the waves is being created and will be made available to the scientific community. Based on the wave database, a statistical model of the wave activity dependent on the level of geomagnetic activity, solar wind forcing, and magnetospheric region will be developed. Furthermore, we will incorporate multi-spacecraft particle measurements into data assimilation tools, aiming at a new understanding of the causal relationships between ULF/VLF waves and radiation belt dynamics. Data assimilation techniques have been proven to be a valuable tool in the field of radiation belts, able to guide 'the best' estimate of the state of a complex system.

  5. An Analysis of the Processes of Developing a Consortium.

    ERIC Educational Resources Information Center

    Sagan, Edgar L.

    This report provides some basic guidelines for planning and establishing a consortium. Systems analysis was used to study 5 consortia, determine their objectives, identify applicable system variables, and ascertain the contribution each variable must make to achieve organizational objectives. The consortia were the Central States College…

  6. Global Education for the 21st Century: The GU Consortium.

    ERIC Educational Resources Information Center

    Utsumi, Takeshi; And Others

    1989-01-01

    Proposes a worldwide educational network with a partnership of universities, businesses, community organizations, students, and workers. Describes the four goals of the Global Electronic University Consortium (GU): the globalization of educational opportunities, support of research and development, use of global-scale tools, and the globalization…

  7. FANTOM5 CAGE profiles of human and mouse reprocessed for GRCh38 and GRCm38 genome assemblies.

    PubMed

    Abugessaisa, Imad; Noguchi, Shuhei; Hasegawa, Akira; Harshbarger, Jayson; Kondo, Atsushi; Lizio, Marina; Severin, Jessica; Carninci, Piero; Kawaji, Hideya; Kasukawa, Takeya

    2017-08-29

    The FANTOM5 consortium described the promoter-level expression atlas of human and mouse using CAGE (Cap Analysis of Gene Expression) with single-molecule sequencing. In the original publications, the GRCh37/hg19 and NCBI37/mm9 assemblies were used as the human and mouse reference genomes, respectively; the Genome Reference Consortium has since released the newer genome assemblies GRCh38/hg38 and GRCm38/mm10. To increase the utility of the atlas in future research, we reprocessed the data to make them available on the recent genome assemblies. The data include observed frequencies of transcription start sites (TSSs) based on realignment of the CAGE reads, and TSS peaks converted from those based on the previous reference. Annotations of the peak names were also updated based on the latest public databases. The reprocessed results make it possible to examine transcription-initiation frequencies on the recent genome assemblies and to refer to promoters, with updated information, consistently across assemblies.

  8. Deeper insight into the structure of the anaerobic digestion microbial community; the biogas microbiome database is expanded with 157 new genomes.

    PubMed

    Treu, Laura; Kougias, Panagiotis G; Campanaro, Stefano; Bassani, Ilaria; Angelidaki, Irini

    2016-09-01

    This research aimed to better characterize the biogas microbiome by means of high-throughput metagenomic sequencing and to elucidate the core microbial consortium existing in biogas reactors independently of the operational conditions. Assembly of shotgun reads followed by an established binning strategy resulted in the largest extraction to date of microbial genomes involved in biogas-producing systems. Remarkably, the vast majority of the 236 extracted genome bins could be characterized only at high taxonomic levels. This result confirms that the biogas microbiome comprises a consortium of unknown species. A comparative analysis between the genome bins of the current study and those extracted from a previous metagenomic assembly demonstrated a similar phylogenetic distribution of the main taxa. Finally, this analysis led to the identification of a subset of common microbes that could be considered the core essential group in biogas production. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Medical Physics Residency Consortium: collaborative endeavors to meet the ABR 2014 certification requirements.

    PubMed

    Parker, Brent C; Duhon, John; Yang, Claus C; Wu, H Terry; Hogstrom, Kenneth R; Gibbons, John P

    2014-03-06

    In 2009, Mary Bird Perkins Cancer Center (MBPCC) established a Radiation Oncology Physics Residency Program to provide opportunities for medical physics residency training to MS and PhD graduates of the CAMPEP-accredited Louisiana State University (LSU)-MBPCC Medical Physics Graduate Program. The LSU-MBPCC Program graduates approximately six students yearly, which equates to a need for up to twelve residency positions in a two-year program. To address this need for residency positions, MBPCC has expanded its Program by developing a Consortium consisting of partnerships with medical physics groups located at other nearby clinical institutions. The consortium model offers the residents exposure to a broader range of procedures, technology, and faculty than available at the individual institutions. The Consortium institutions have shown a great deal of support from their medical physics groups and administrations in developing these partnerships. Details of these partnerships are specified within affiliation agreements between MBPCC and each participating institution. All partner sites began resident training in 2011. The Consortium is a network of for-profit, nonprofit, academic, community, and private entities. We feel that these types of collaborative endeavors will be required nationally to reach the number of residency positions needed to meet the 2014 ABR certification requirements and to maintain graduate medical physics training programs.

  10. Assembly: a resource for assembled genomes at NCBI

    PubMed Central

    Kitts, Paul A.; Church, Deanna M.; Thibaud-Nissen, Françoise; Choi, Jinna; Hem, Vichet; Sapojnikov, Victor; Smith, Robert G.; Tatusova, Tatiana; Xiang, Charlie; Zherikov, Andrey; DiCuccio, Michael; Murphy, Terence D.; Pruitt, Kim D.; Kimchi, Avi

    2016-01-01

    The NCBI Assembly database (www.ncbi.nlm.nih.gov/assembly/) provides stable accessioning and data tracking for genome assembly data. The model underlying the database can accommodate a range of assembly structures, including sets of unordered contig or scaffold sequences, bacterial genomes consisting of a single complete chromosome, or complex structures such as a human genome with modeled allelic variation. The database provides an assembly accession and version to unambiguously identify the set of sequences that make up a particular version of an assembly, and tracks changes to updated genome assemblies. The Assembly database reports metadata such as assembly names, simple statistical reports of the assembly (number of contigs and scaffolds, contiguity metrics such as contig N50, total sequence length and total gap length) as well as the assembly update history. The Assembly database also tracks the relationship between an assembly submitted to the International Nucleotide Sequence Database Consortium (INSDC) and the assembly represented in the NCBI RefSeq project. Users can find assemblies of interest by querying the Assembly Resource directly or by browsing available assemblies for a particular organism. Links in the Assembly Resource allow users to easily download sequence and annotations for current versions of genome assemblies from the NCBI genomes FTP site. PMID:26578580
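    Programmatic queries against the Assembly resource typically go through NCBI's E-utilities (an `esearch` call with `db=assembly`). A minimal sketch that builds such a request URL is below; the RefSeq filter term is an assumption about the Assembly search syntax, and issuing the HTTP request is left out:

```python
from urllib.parse import urlencode

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def assembly_search_url(organism: str, refseq_only: bool = True) -> str:
    """Build an E-utilities esearch URL against the NCBI 'assembly' database.

    The '"latest refseq"[filter]' term is an assumed filter name for
    illustration; consult the Assembly search-field docs before relying on it.
    """
    term = f'"{organism}"[Organism]'
    if refseq_only:
        term += ' AND "latest refseq"[filter]'
    query = urlencode({"db": "assembly", "term": term, "retmode": "json"})
    return f"{EUTILS}/esearch.fcgi?{query}"

url = assembly_search_url("Homo sapiens")
```

    The returned JSON from such a query lists Assembly UIDs, which can then be passed to `esummary` to retrieve the accession, version, and statistics metadata the abstract describes.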

  11. Oak woodlands and forests fire consortium: A regional view of fire science sharing

    USGS Publications Warehouse

    Grabner, Keith W.; Stambaugh, Michael C.; Marschall, Joseph M.; Abadir, Erin R.

    2013-01-01

    The Joint Fire Science Program established 14 regional fire science knowledge exchange consortia to improve the delivery of fire science information and communication among fire managers and researchers. Consortia were developed regionally to ensure that fire science information is tailored to meet regional needs. In this paper, emphasis was placed on the Oak Woodlands and Forests Fire Consortium to provide an inside view of how one regional consortium is organized and its experiences in sharing fire science through various social media, conference, and workshop-based fire science events.

  12. Grid Modernization Laboratory Consortium - Testing and Verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kroposki, Benjamin; Skare, Paul; Pratt, Rob

    This paper highlights some of the unique testing capabilities and projects being performed at several national laboratories as part of the U.S. Department of Energy Grid Modernization Laboratory Consortium. As part of this effort, the Grid Modernization Laboratory Consortium Testing Network is being developed to accelerate grid modernization by enabling access to a comprehensive testing infrastructure and creating a repository of validated models and simulation tools that will be publicly available. This work is key to accelerating the development, validation, standardization, adoption, and deployment of new grid technologies to help meet U.S. energy goals.

  13. The role of expert searching in the Family Physicians' Inquiries Network (FPIN)*

    PubMed Central

    Ward, Deborah; Meadows, Susan E.; Nashelsky, Joan E.

    2005-01-01

    Objective: This article describes the contributions of medical librarians, as members of the Family Physicians' Inquiries Network (FPIN), to the creation of a database of clinical questions and answers that allows family physicians to practice evidence-based medicine using high-quality information at the point of care. The medical librarians have contributed their evidence-based search expertise and knowledge of information systems that support the processes and output of the consortium. Methods: Since its inception, librarians have been included as valued members of the FPIN community. FPIN recognizes the search expertise of librarians, and each FPIN librarian must meet qualifications demonstrating appropriate experience and training in evidence-based medicine. The consortium works collaboratively to produce the Clinical Inquiries series published in family medicine publications. Results: Over 170 Clinical Inquiries have appeared in Journal of Family Practice (JFP) and American Family Physician (AFP). Surveys have shown that this series has become the most widely read part of the JFP Website. As a result, FPIN has formalized specific librarian roles that have helped build the organizational infrastructure. Conclusions: All of the activities of the consortium are highly collaborative, and the librarian community reflects that. The FPIN librarians are valuable and equal contributors to the process of creating, updating, and maintaining high-quality clinical information for practicing primary care physicians. Of particular value is the skill of expert searching that the librarians bring to FPIN's products. PMID:15685280

  14. Development of Pain Endpoint Models for Use in Prostate Cancer Clinical Trials and Drug Approval

    DTIC Science & Technology

    2016-10-01

    consensus meeting, with input from investigators in the Prostate Cancer Clinical Trials Consortium, FDA Office of Oncology Drug Products, FDA Study Endpoint and Label Development Team, and FDA Division of...Abstract. American Society of Clinical Oncology. Chicago IL, June 1-5, 2013. INVENTIONS, PATENTS AND LICENSES: None. REPORTABLE OUTCOMES

  15. Optoelectronic Technology Consortium: Precompetitive Consortium for Optoelectronic Interconnect Technology

    DTIC Science & Technology

    1992-09-01

    demonstrating the producibility of optoelectronic components for high-density/high-data-rate processors and accelerating the insertion of this technology...technology development stage, OETC will advance the development of optical components, produce links for a multiboard processor testbed demonstration, and...components that are affordable, initially at <$100 per line, and reliable, with a line BER <10^-15 and MTTF >10^6 hours. Under the OETC program, Honeywell will

  16. Structuring intuition with theory: The high-throughput way

    NASA Astrophysics Data System (ADS)

    Fornari, Marco

    2015-03-01

    First-principles methodologies have grown in accuracy and applicability to the point where large databases can be built, shared, and analyzed with the goal of predicting novel compositions, optimizing functional properties, and discovering unexpected relationships between the data. In order to be useful to a large community of users, data should be standardized, validated, and distributed. In addition, tools to easily manage large datasets should be made available to effectively lead to materials development. Within the AFLOW consortium we have developed a simple framework to expand, validate, and mine data repositories: the MTFrame. Our minimalistic approach complements AFLOW and other existing high-throughput infrastructures and aims to integrate data generation with data analysis. We present a few examples from our work on materials for energy conversion. Our intent is to pinpoint the usefulness of high-throughput methodologies in guiding the discovery process by quantitatively structuring scientific intuition. This work was supported by ONR-MURI under Contract N00014-13-1-0635 and the Duke University Center for Materials Genomics.

  17. Variability in Standard Outcomes of Posterior Lumbar Fusion Determined by National Databases.

    PubMed

    Joseph, Jacob R; Smith, Brandon W; Park, Paul

    2017-01-01

    National databases are used with increasing frequency in spine surgery literature to evaluate patient outcomes. The differences between individual databases in relationship to outcomes of lumbar fusion are not known. We evaluated the variability in standard outcomes of posterior lumbar fusion between the University HealthSystem Consortium (UHC) database and the Healthcare Cost and Utilization Project National Inpatient Sample (NIS). NIS and UHC databases were queried for all posterior lumbar fusions (International Classification of Diseases, Ninth Revision code 81.07) performed in 2012. Patient demographics, comorbidities (including obesity), length of stay (LOS), in-hospital mortality, and complications such as urinary tract infection, deep venous thrombosis, pulmonary embolism, myocardial infarction, durotomy, and surgical site infection were collected using specific International Classification of Diseases, Ninth Revision codes. Analysis included 21,470 patients from the NIS database and 14,898 patients from the UHC database. Demographic data were not significantly different between databases. Obesity was more prevalent in UHC (P = 0.001). Mean LOS was 3.8 days in NIS and 4.55 days in UHC (P < 0.0001). Complications were significantly higher in UHC, including urinary tract infection, deep venous thrombosis, pulmonary embolism, myocardial infarction, surgical site infection, and durotomy. In-hospital mortality was similar between databases. NIS and UHC databases had similar demographic patient populations undergoing posterior lumbar fusion. However, the UHC database reported a significantly higher complication rate and longer LOS. This difference may reflect academic institutions treating higher-risk patients; however, a definitive reason for the variability between databases is unknown. The inability to precisely determine the basis of the variability between databases highlights the limitations of using administrative databases for spinal outcome analysis. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. Maryland Family Support Services Consortium. Final Report.

    ERIC Educational Resources Information Center

    Gardner, James F.; Markowitz, Ricka Keeney

    The Maryland Family Support Services Consortium is a 3-year demonstration project which developed unique family support models at five sites serving the needs of families with a developmentally disabled child (ages birth to 21). Caseworkers provided direct intensive services to 224 families over the 3-year period, including counseling, liaison and…

  19. 76 FR 20690 - International Consortium of Orthopedic Registries; Public Workshop

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-13

    ... orthopedic implant information and create a research network to advance the methodology and conduct of research related to orthopedic device performance. Date and Time: The public workshop will be held on May 9... discussion among FDA and international orthopedic registries and develop a research consortium (ICOR) that...

  20. Numerate Intends to Join ATOM Consortium to Rapidly Accelerate Preclinical Drug Development | Frederick National Laboratory for Cancer Research

    Cancer.gov

    SAN FRANCISCO – Computational drug design company Numerate has signed a letter of intent to join an open consortium of scientists staffed from two U.S. national laboratories, industry, and academia working to transform drug discovery and development.

  1. Training a New Generation of Biostatisticians: A Successful Consortium Model

    ERIC Educational Resources Information Center

    Simpson, Judy M.; Ryan, Philip; Carlin, John B.; Gurrin, Lyle; Marschner, Ian

    2009-01-01

    In response to the worldwide shortage of biostatisticians, Australia has established a national consortium of eight universities to develop and deliver a Masters program in biostatistics. This article describes our successful innovative multi-institutional training model, which may be of value to other countries. We first present the issues…

  2. Assessing Community College Student Learning Outcomes: Where Are We? What's Next?

    ERIC Educational Resources Information Center

    Syed, Syraj; Mojock, Charles R.

    2008-01-01

    The Community College Leadership Consortium is building the Bellwether Coalition for Instructional Leadership toward the development of a voluntary system of accountability for community colleges. In response to the Spellings Commission's national call for accountability in higher education, the consortium and its partners convened an Independent…

  3. 75 FR 57900 - FY 2010 Gulf Oil Spill Supplemental Federal Funding Opportunity

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-23

    ..., or a consortium of political subdivisions; (4) institution of higher education or a consortium of institutions of higher education; or (5) public or private non-profit organization or association acting in... DEPARTMENT OF COMMERCE Economic Development Administration [Docket No. 100908439-0439-01] FY 2010...

  4. The Comprehensive Career Education System: System Administrators Component K-12.

    ERIC Educational Resources Information Center

    Educational Properties Inc., Irvine, CA.

    Using the example of a Career Education Model developed by the Orange County, California Consortium, the document provides guidelines for setting up career education programs in local educational agencies. Component levels, a definition of career education, and Consortium program background are discussed. Subsequent chapters include: Program…

  5. Demonstration of Next-Generation PEM CHP Systems for Global Markets Using PBI Membrane Technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vogel, John; Fritz Intwala, Katrina

    Plug Power and BASF conducted eight years of development work prior to this project, demonstrating the potential of PBI membranes to exceed many DOE technical targets. This project consisted of: 1) the development of a worldwide system architecture; 2) stack and balance-of-plant module development; 3) development of an improved, lower-cost MEA electrode; 4) receipt of an improved MEA from the EU consortium; 5) integration of modules into a system; and 6) delivery of the system to the EU consortium for additional integration of technologies and testing.

  6. Leveraging community support for Education and Outreach: The IRIS E&O Program

    NASA Astrophysics Data System (ADS)

    Taber, J.; Hubenthal, M.; Wysession, M. E.

    2009-12-01

    The IRIS E&O Program was initiated 10 years ago, some 15 years after the creation of the IRIS Consortium, as IRIS members increasingly recognized the fundamental need to communicate the results of scientific research more effectively and to attract more students to study Earth science. Since then, IRIS E&O has received core funding through successive 5-year cooperative agreements with NSF, based on proposals submitted by IRIS. While a small fraction of the overall Consortium budget, this consistent funding has allowed the development of strong, long-term elements within the E&O Program, including summer internships, IRIS/USGS museum displays, seismographs in schools, IRIS/SSA Distinguished Lecture series, and professional development for middle school and high school teachers. Reliable funding has allowed us to develop expertise in these areas due to the longevity of the programs and the continuous improvement resulting from ongoing evaluations. Support from Consortium members, including volunteering time and expertise, has been critical for the program, as the Consortium has to continually balance the value of E&O products versus equipment and data services for seismology research. The E&O program also provides service to the Consortium, such as PIs being able to count on and leverage IRIS resources when defining the broader impacts of their own research. The reliable base has made it possible to build on the core elements with focused and innovative proposals, allowing, for example, the expansion of our internship program into a full REU site. Developing collaborative proposals with other groups has been a key strategy where IRIS E&O's long-term viability can be combined with expertise from other organizations to develop new products and services. IRIS can offer to continue to reliably deliver and maintain products after the end of a 2-3 year funding cycle, which can greatly increase the reach of the project. 
Consortium backing has also allowed us to establish an educational fund in honor of the late John Lahr. This fund, composed of individual donations, is being used to provide seismographs to schools, along with professional development and ongoing support from the E&O program. We are also developing a plan for attracting larger private and/or foundation funds for new E&O activities, leveraging the reputation of a long-term program.

  7. Results From the John Glenn Biomedical Engineering Consortium. A Success Story for NASA and Northeast Ohio

    NASA Technical Reports Server (NTRS)

    Nall, Marsha M.; Barna, Gerald J.

    2009-01-01

    The John Glenn Biomedical Engineering Consortium was established by NASA in 2002 to formulate and implement an integrated, interdisciplinary research program to address risks faced by astronauts during long-duration space missions. The consortium is comprised of a preeminent team of Northeast Ohio institutions that include Case Western Reserve University, the Cleveland Clinic, University Hospitals Case Medical Center, The National Center for Space Exploration Research, and the NASA Glenn Research Center. The John Glenn Biomedical Engineering Consortium research is focused on fluid physics and sensor technology that addresses the critical risks to crew health, safety, and performance. Effectively utilizing the unique skills, capabilities and facilities of the consortium members is also of prime importance. Research efforts were initiated with a general call for proposals to the consortium members. The top proposals were selected for funding through a rigorous, peer review process. The review included participation from NASA's Johnson Space Center, which has programmatic responsibility for NASA's Human Research Program. The projects range in scope from delivery of prototype hardware to applied research that enables future development of advanced technology devices. All of the projects selected for funding have been completed and the results are summarized. Because of the success of the consortium, the member institutions have extended the original agreement to continue this highly effective research collaboration through 2011.

  8. The VO-Dance web application at the IA2 data center

    NASA Astrophysics Data System (ADS)

    Molinaro, Marco; Knapic, Cristina; Smareglia, Riccardo

    2012-09-01

    The Italian center for Astronomical Archives (IA2, http://ia2.oats.inaf.it) is a national infrastructure project of the Italian National Institute for Astrophysics (Istituto Nazionale di AstroFisica, INAF) that provides services for the astronomical community. Besides data hosting for the Large Binocular Telescope (LBT) Corporation, the Galileo National Telescope (Telescopio Nazionale Galileo, TNG) Consortium and other telescopes and instruments, IA2 offers proprietary and public data access through user portals (both developed and mirrored) and deploys resources that comply with the Virtual Observatory (VO) standards. Archiving systems and web interfaces are developed to be extremely flexible about adding new instruments from other telescopes. VO resource publishing, along with the data access portals, implements the International Virtual Observatory Alliance (IVOA) protocols, providing astronomers with new ways of analyzing data. Given the large variety of data flavours and IVOA standards, tools are needed to easily accomplish data ingestion and data publishing. This paper describes the VO-Dance tool, which IA2 started developing to publish VO resources dynamically from already existing database tables or views. The tool consists of a Java web application, potentially DBMS- and platform-independent, that internally stores the services' metadata and information, exposes RESTful endpoints to accept VO queries for these services, and dynamically translates calls to these endpoints into SQL queries coherent with the published table or view. In response to each call, VO-Dance translates the database answer back into a VO-compliant form.
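    The core idea of translating a VO request against a published table into SQL can be illustrated with a minimal sketch. This is not VO-Dance's actual (Java) implementation; the table and column names are invented, and a real cone search would use spherical geometry rather than the crude bounding box shown here.

```python
# Hypothetical sketch of mapping an IVOA cone-search style request
# (RA, DEC, SR in degrees) onto an SQL query over a published view.
# Table and column names are illustrative, not from IA2's archive.
def cone_search_sql(table, ra, dec, sr, ra_col="ra", dec_col="dec"):
    """Translate cone-search parameters into a bounding-box SQL query."""
    return (
        f"SELECT * FROM {table} "
        f"WHERE {ra_col} BETWEEN {ra - sr} AND {ra + sr} "
        f"AND {dec_col} BETWEEN {dec - sr} AND {dec + sr}"
    )

print(cone_search_sql("lbt_images", 150.0, 2.5, 0.1))
```

    A dynamic publisher like VO-Dance generalizes this pattern: the mapping from service parameters to table columns is stored as metadata, so new services can be exposed without writing per-table code.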

  9. Introducing the PRIDE Archive RESTful web services.

    PubMed

    Reisinger, Florian; del-Toro, Noemi; Ternent, Tobias; Hermjakob, Henning; Vizcaíno, Juan Antonio

    2015-07-01

    The PRIDE (PRoteomics IDEntifications) database is one of the world-leading public repositories of mass spectrometry (MS)-based proteomics data and it is a founding member of the ProteomeXchange Consortium of proteomics resources. In the original PRIDE database system, users could access data programmatically by accessing the web services provided by the PRIDE BioMart interface. New REST (REpresentational State Transfer) web services have been developed to serve the most popular functionality provided by BioMart (now discontinued due to data scalability issues) and address the data access requirements of the newly developed PRIDE Archive. Using the API (Application Programming Interface) it is now possible to programmatically query for and retrieve peptide and protein identifications, project and assay metadata and the originally submitted files. Searching and filtering is also possible by metadata information, such as sample details (e.g. species and tissues), instrumentation (mass spectrometer), keywords and other provided annotations. The PRIDE Archive web services were first made available in April 2014. The API has already been adopted by a few applications and standalone tools such as PeptideShaker, PRIDE Inspector, the Unipept web application and the Python-based BioServices package. This application is free and open to all users with no login requirement and can be accessed at http://www.ebi.ac.uk/pride/ws/archive/. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
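    Programmatic access of the kind described amounts to building parameterized URLs against the service base. The sketch below shows this with Python's standard library only; the endpoint path and parameter names are hypothetical placeholders, not taken from the published PRIDE Archive API documentation, and only the base URL comes from the record above.

```python
# Sketch of building a query URL for a REST web service.
# BASE is from the paper; the endpoint path ("project/list") and the
# parameter names below are assumed for illustration only.
from urllib.parse import urlencode

BASE = "http://www.ebi.ac.uk/pride/ws/archive"

def project_search_url(keyword, species=None, page=0, size=10):
    """Assemble a search URL with properly percent-encoded parameters."""
    params = {"query": keyword, "page": page, "show": size}
    if species:
        params["speciesFilter"] = species  # hypothetical filter name
    return f"{BASE}/project/list?{urlencode(params)}"

print(project_search_url("proteome", species="Homo sapiens"))
```

    In practice the URL would be fetched with any HTTP client and the JSON response parsed; no login is required, per the record above.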

  10. Predictive capabilities of statistical learning methods for lung nodule malignancy classification using diagnostic image features: an investigation using the Lung Image Database Consortium dataset

    NASA Astrophysics Data System (ADS)

    Hancock, Matthew C.; Magnan, Jerry F.

    2017-03-01

    To determine the potential usefulness of quantified diagnostic image features as inputs to a CAD system, we investigate the predictive capabilities of statistical learning methods for classifying nodule malignancy, utilizing the Lung Image Database Consortium (LIDC) dataset and employing only the radiologist-assigned diagnostic feature values for the lung nodules therein, together with our derived estimates of nodule diameter and volume from the radiologists' annotations. We calculate theoretical upper bounds on the classification accuracy achievable by an ideal classifier that uses only the radiologist-assigned feature values, and we obtain an accuracy of 85.74 (+/-1.14)%, which is, on average, 4.43% below the theoretical maximum of 90.17%. The corresponding area-under-the-curve (AUC) score is 0.932 (+/-0.012); when diameter and volume features are included, the AUC increases to 0.949 (+/-0.007) and the accuracy to 88.08 (+/-1.11)%. Our results are comparable to those in the literature that use algorithmically derived image-based features, which supports our hypothesis that lung nodules can be classified as malignant or benign using only quantified diagnostic image features, and indicates the competitiveness of this approach. We also analyze how the classification accuracy depends on specific features and feature subsets, and we rank the features according to their predictive power, statistically demonstrating the top four to be spiculation, lobulation, subtlety, and calcification.
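    The setup above - predicting malignancy from a handful of radiologist-assigned ratings - can be sketched with a toy nearest-neighbour classifier. The feature vectors below are invented for illustration and are not LIDC data; the study itself evaluated several statistical learning methods, not specifically 1-NN.

```python
# Toy sketch: classify a nodule from radiologist-style ratings.
# Each record: (spiculation, lobulation, subtlety, calcification) on an
# ordinal scale, plus a label. All values are invented, not LIDC data.
import math

train = [
    ((1, 1, 2, 5), "benign"),
    ((2, 1, 3, 4), "benign"),
    ((5, 4, 4, 1), "malignant"),
    ((4, 5, 5, 2), "malignant"),
]

def classify(features):
    """1-nearest-neighbour by Euclidean distance in feature space."""
    return min(train, key=lambda rec: math.dist(rec[0], features))[1]

print(classify((5, 5, 4, 1)))  # closest training example is malignant
```

    Real experiments on the LIDC data would additionally need cross-validation and per-feature analysis to reproduce the accuracy bounds and feature rankings reported above.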

  11. Developing a statewide nursing consortium, island style.

    PubMed

    Magnussen, Lois; Niederhauser, Victoria; Ono, Charlene K; Johnson, Nancy Katherine; Vogler, Joyce; Ceria-Ulep, Clementina D

    2013-02-01

    This article describes the transformational changes in the scope and pedagogy of nursing education within a state university system through the development of the Hawaii Statewide Nursing Consortium (HSNC) curriculum. Modeled after the Oregon Consortium for Nursing Education, the HSNC used a community-based participatory approach to develop the curriculum to support all students within the state who are eligible to earn a baccalaureate degree. The curriculum was designed as a long-term solution to the anticipated shortage of nurses to care for Hawaii's diverse population. It is also an effort to increase capacity in schools of nursing by making the best use of resources in the delivery of a baccalaureate curriculum that offers exit opportunities after the completion of an associate degree. Finally, it provides new ways of educating students who will be better prepared to meet Hawaii's health needs. Copyright 2013, SLACK Incorporated.

  12. NLCD 2011 database

    EPA Pesticide Factsheets

    National Land Cover Database 2011 (NLCD 2011) is the most recent national land cover product created by the Multi-Resolution Land Characteristics (MRLC) Consortium. NLCD 2011 provides, for the first time, the capability to assess wall-to-wall, spatially explicit, national land cover changes and trends across the United States from 2001 to 2011. As with the two previous NLCD land cover products, NLCD 2011 keeps the same 16-class land cover classification scheme, applied consistently across the United States at a spatial resolution of 30 meters. NLCD 2011 is based primarily on a decision-tree classification of circa-2011 Landsat satellite data. This dataset is associated with the following publication: Homer, C., J. Dewitz, L. Yang, S. Jin, P. Danielson, G. Xian, J. Coulston, N. Herold, J. Wickham, and K. Megown. Completion of the 2011 National Land Cover Database for the Conterminous United States - Representing a Decade of Land Cover Change Information. Photogrammetric Engineering and Remote Sensing, American Society for Photogrammetry and Remote Sensing, Bethesda, MD, USA, 81(5): 345-354, (2015).

  13. Significance of genome-wide association studies in molecular anthropology.

    PubMed

    Gupta, Vipin; Khadgawat, Rajesh; Sachdeva, Mohinder Pal

    2009-12-01

    The successful advent of the genome-wide approach in association studies raises human geneticists' hopes of solving the genetic maze of complex traits, especially disorders. This approach, built on cutting-edge technology and supported by big-science projects (the Human Genome Project and, even more importantly, the International HapMap Project) and various important databases (the SNP database, CNV database, etc.), has had unprecedented success in rapidly uncovering many of the genetic determinants of complex disorders. Its impact on the genetics of classical anthropological variables such as height, skin color, and eye color, and on other genome diversity projects, has certainly expanded the horizons of molecular anthropology. Therefore, in this article we propose a genome-wide association approach for molecular anthropological studies, drawing lessons from the exemplary study of the Wellcome Trust Case Control Consortium. We also highlight the importance and uniqueness of Indian population groups in facilitating study design and in finding optimal solutions to other genome-wide association challenges.

  14. Hymenoptera Genome Database: integrating genome annotations in HymenopteraMine.

    PubMed

    Elsik, Christine G; Tayal, Aditi; Diesh, Colin M; Unni, Deepak R; Emery, Marianne L; Nguyen, Hung N; Hagen, Darren E

    2016-01-04

    We report an update of the Hymenoptera Genome Database (HGD) (http://HymenopteraGenome.org), a model organism database for insect species of the order Hymenoptera (ants, bees and wasps). HGD maintains genomic data for 9 bee species, 10 ant species and 1 wasp, including the versions of genome and annotation data sets published by the genome sequencing consortiums and those provided by NCBI. A new data-mining warehouse, HymenopteraMine, based on the InterMine data warehousing system, integrates the genome data with data from external sources and facilitates cross-species analyses based on orthology. New genome browsers and annotation tools based on JBrowse/WebApollo provide easy genome navigation, and viewing of high throughput sequence data sets and can be used for collaborative genome annotation. All of the genomes and annotation data sets are combined into a single BLAST server that allows users to select and combine sequence data sets to search. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  15. Caspian games: A dynamic bargaining game

    NASA Astrophysics Data System (ADS)

    Michaud, Dennis Wright

    This dissertation was written under the direction of Professor P. Terrence Hopmann. In this work, the author seeks to identify the independent variables affecting the outcome of three key decisions required of the international consortia constructing Caspian oil export pipelines. The first involves whether the enterprises developing the pipelines to export Kazakh oil, the Caspian Pipeline Consortium ("CPC"), and Azeri oil, the Azerbaijan International Operating Consortium ("AIOC"), cooperate by utilizing the same route or instead use separate energy export corridors. Second, I analyzed how the actual Main Export Pipeline ("MEP") route for Azeri oil was selected by the AIOC. Finally, I tried to understand the factors driving the residual equity positions in each consortium; I was particularly interested in the equity position of Russian state and commercial interests in each consortium. I approached the puzzle as a multilevel bargaining problem; hence, the preferences of each relevant actor (at the state and corporate levels) were assessed. The covering theory utilized was rational choice. An application of game-theoretic modeling, particularly Bayesian analysis (used as a metaphor), accounted for the learning process resulting from the strategic interaction between actors, through which each actor refines its perception of counterpart preferences. Additionally, the Gordon Constant Growth Model ("CGM") and Sharpe's Capital Asset Pricing Model ("CAPM") were utilized to relate multinational actors' preferences, via a cost-of-capital-based hurdle rate, to political risk. My findings demonstrate this interrelationship and provide a clear argument for great-power states to persuade newly developing Caspian states to adopt a more transparent and credible approach to corporate governance.
This revised state strategy will reduce multinationals' perception of political risk, lower firms' cost of capital (hurdle rate), and increase the funding of major energy development projects, which will stimulate economic and political development.
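    The financial link asserted above - political risk raising the hurdle rate and thus depressing project valuations - follows directly from the two standard models named in the abstract. The sketch below uses invented numbers purely for illustration; the additive political-risk premium is one common way to incorporate country risk into CAPM, not necessarily the dissertation's exact specification.

```python
# Illustrative numbers only, not figures from the study: how a CAPM
# hurdle rate with a political-risk premium feeds a Gordon Constant
# Growth valuation.
def capm_hurdle(risk_free, beta, market_return, political_risk_premium):
    """CAPM cost of equity plus an additive political-risk premium."""
    return risk_free + beta * (market_return - risk_free) + political_risk_premium

def gordon_value(next_dividend, hurdle, growth):
    """Gordon CGM: present value = D1 / (r - g); requires r > g."""
    return next_dividend / (hurdle - growth)

r = capm_hurdle(0.05, 1.2, 0.11, 0.04)  # 0.05 + 1.2*0.06 + 0.04 = 0.162
print(round(r, 3), round(gordon_value(10.0, r, 0.03), 2))
```

    Lowering the political-risk premium lowers r and raises the Gordon valuation, which is exactly the mechanism by which greater transparency would "increase the funding of major energy development projects."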

  16. Boiler materials for ultra supercritical coal power plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Purgert, Robert; Shingledecker, John; Pschirer, James

    2015-12-29

    The U.S. Department of Energy (DOE) and the Ohio Coal Development Office (OCDO) have undertaken a project aimed at identifying, evaluating, and qualifying the materials needed for the construction of the critical components of coal-fired boilers capable of operating at much higher efficiencies than the current generation of supercritical plants. This increased efficiency is expected to be achieved principally through the use of advanced ultrasupercritical (A-USC) steam conditions up to 760°C (1400°F) and 35 MPa (5000 psi). A limiting factor in achieving these higher temperatures and pressures for future A-USC plants is the materials of construction. The goal of this project is to assess and develop materials technology to build and operate an A-USC boiler capable of delivering steam with conditions up to 760°C (1400°F)/35 MPa (5000 psi). The project has successfully met this goal through a focused long-term public-private consortium partnership. The project was based on an R&D plan developed by the Electric Power Research Institute (EPRI) and an industry consortium that supplemented the recommendations of several DOE workshops on the subject of advanced materials. In view of the variety of skills and expertise required for the successful completion of the proposed work, a consortium was formed, led by the Energy Industries of Ohio (EIO), with cost-sharing participation from all the major domestic boiler manufacturers - ALSTOM Power (Alstom), Babcock and Wilcox Power Generation Group, Inc. (B&W), Foster Wheeler (FW), and Riley Power, Inc. (Riley) - technical management by EPRI, and research conducted by Oak Ridge National Laboratory (ORNL). The project has clearly identified and tested materials that can withstand 760°C (1400°F) steam conditions and can also make a 700°C (1300°F) plant more economically attractive.
In this project, the maximum temperature capabilities of these and other available high-temperature alloys have been assessed to provide a basis for materials selection and application under a range of conditions prevailing in the boiler. A major effort involving eight tasks was completed in Phase 1. In a subsequent Phase 2 extension, the earlier defined tasks were extended to finish and enhance the Phase 1 activities. This extension included efforts in improved weld/weldment performance, development of longer-term material property databases, additional field (in-plant) corrosion testing, improved understanding of long-term oxidation kinetics and exfoliation, cyclic operation, and fabrication methods for waterwalls. In addition, preliminary work was undertaken to model an oxyfuel boiler to define the local environments expected to occur and to study corrosion behavior of alloys under these conditions. This final technical report provides a comprehensive summary of all the work undertaken by the consortium and the research findings from all eight (8) technical tasks, including A-USC boiler design and economics (Task 1), long-term materials properties (Task 2), steam-side oxidation (Task 3), fireside corrosion (Task 4), welding (Task 5), fabricability (Task 6), coatings (Task 7), and design data and rules (Task 8).

  17. Assessment User Guide for Colleges and Universities

    ERIC Educational Resources Information Center

    American Association of Collegiate Registrars and Admissions Officers (AACRAO), 2015

    2015-01-01

    The Smarter Balanced Assessment Consortium is one of two multi-state consortia that have built new assessment systems aligned to the Common Core State Standards. The Smarter Balanced Assessment Consortium is composed of 18 states and the U.S. Virgin Islands that have worked together to develop a comprehensive assessment system aligned to the…

  18. School-to-Work Apprenticeship. Project Manual 1993-1995.

    ERIC Educational Resources Information Center

    Lee Coll., Baytown, TX.

    With 1993-94 and 1994-95 Perkins tech prep funds, Lee College, in cooperation with a consortium and local schools, planned, developed, and validated a school-to-work apprenticeship model for tech prep programs. The other educational partners were the Gulf Coast Tech Prep Consortium and nine high schools in eight area school districts. The…

  19. 75 FR 27992 - Solicitation of Applications for the Research and Evaluation Program: FY 2010 Mapping Regional...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-19

    ... goods, labor and knowledge. These transformations warrant dramatic shifts in the role of economic... development activities, or a consortium of political subdivisions; an institution of higher education or a consortium of institutions of higher education; and a public or private non-profit organization or...

  20. Cancer Core Europe: a consortium to address the cancer care-cancer research continuum challenge.

    PubMed

    Eggermont, Alexander M M; Caldas, Carlos; Ringborg, Ulrik; Medema, René; Tabernero, Josep; Wiestler, Otmar

    2014-11-01

    Cancer Core Europe is a transformative initiative in European cancer research, created as a consortium of six leading comprehensive cancer centres that will work together to address the cancer care-cancer research continuum. Prerequisites for joint translational and clinical research programs are very demanding. These require the creation of a virtual single 'e-hospital' and a powerful translational platform, inter-compatible clinical molecular profiling laboratories with a robust underlying computational biology pipeline, standardised functional and molecular imaging, and commonly agreed Standard Operating Procedures (SOPs) for liquid and tissue biopsy procurement, storage and processing, for molecular diagnostics, 'omics', functional genetics, immune-monitoring and other assessments. Importantly, it also requires a culture of data collection and data storage that provides complete longitudinal data sets, allowing effective data sharing and common database building, and achieving the level of data completeness required for conducting outcome research, taking into account our current understanding of cancers as communities of evolving clones. Cutting-edge basic research and technology development serve as an important driving force for innovative translational and clinical studies. Given the excellent track records of the six participants in these areas, Cancer Core Europe will be able to support the full spectrum of research required to address the cancer research-cancer care continuum. Cancer Core Europe also constitutes a unique environment to train the next generation of talents in innovative translational and clinical oncology. Copyright © 2014. Published by Elsevier Ltd.

  1. The MATISSE analysis of large spectral datasets from the ESO Archive

    NASA Astrophysics Data System (ADS)

    Worley, C.; de Laverny, P.; Recio-Blanco, A.; Hill, V.; Vernisse, Y.; Ordenovic, C.; Bijaoui, A.

    2010-12-01

    The automated stellar classification algorithm MATISSE has been developed at the Observatoire de la Côte d'Azur (OCA) in order to determine stellar temperatures, gravities and chemical abundances for large datasets of stellar spectra. The Gaia Data Processing and Analysis Consortium (DPAC) has selected MATISSE as one of the key programmes to be used in the analysis of the Gaia Radial Velocity Spectrometer (RVS) spectra. MATISSE is currently being used to analyse large datasets of spectra from the ESO archive, with the primary goal of producing advanced data products to be made available in the ESO database via the Virtual Observatory. This is also an invaluable opportunity to identify and address issues that can be encountered in the analysis of large samples of real spectra prior to the launch of Gaia in 2012. The analysis of the archived spectra of the FEROS spectrograph is currently underway and preliminary results are presented.

  2. The MRI-Linear Accelerator Consortium: Evidence-Based Clinical Introduction of an Innovation in Radiation Oncology Connecting Researchers, Methodology, Data Collection, Quality Assurance, and Technical Development.

    PubMed

    Kerkmeijer, Linda G W; Fuller, Clifton D; Verkooijen, Helena M; Verheij, Marcel; Choudhury, Ananya; Harrington, Kevin J; Schultz, Chris; Sahgal, Arjun; Frank, Steven J; Goldwein, Joel; Brown, Kevin J; Minsky, Bruce D; van Vulpen, Marco

    2016-01-01

    An international research consortium has been formed to facilitate the evidence-based introduction of MR-guided radiotherapy (MR-linac) and to address how the MR-linac could be used to achieve an optimized radiation treatment approach that improves patients' survival, local and regional tumor control, and quality of life. The present paper describes the organizational structure of the clinical part of the MR-linac consortium. Furthermore, it explains why collaboration on this large project is necessary and how a central data registry program will be implemented.

  3. Northeast Artificial Intelligence Consortium Annual Report 1986. Volume 4. Part A. Hierarchical Region-Based Approach to Automatic Photointerpretation. Part B. Application of AI Techniques to Image Segmentation and Region Identification

    DTIC Science & Technology

    1988-01-01

    RADC-TR-88-11, Vol. IV (of eight). Interim Technical Report, June 1988: Northeast Artificial Intelligence Consortium (NAIC) Annual Report 1986. Monitoring organization: Rome Air Development Center, Griffiss AFB, NY 13441-5700.

  4. Prader-Willi syndrome and early-onset morbid obesity NIH rare disease consortium: A review of natural history study.

    PubMed

    Butler, Merlin G; Kimonis, Virginia; Dykens, Elisabeth; Gold, June A; Miller, Jennifer; Tamura, Roy; Driscoll, Daniel J

    2018-02-01

    We describe the National Institutes of Health rare disease consortium for Prader-Willi syndrome (PWS) developed to address concerns regarding medical care, diagnosis, growth and development, awareness, and natural history. PWS results from errors in genomic imprinting leading to loss of paternally expressed genes due to 15q11-q13 deletion, maternal disomy 15 or imprinting defects. The 8 year study was conducted at four national sites on individuals with genetically confirmed PWS and early-onset morbid obesity (EMO) with data accumulated to gain a better understanding of the natural history, cause and treatment of PWS. Enrollment of 355 subjects with PWS and 36 subjects with EMO began in September 2006 with study completion in July 2014. Clinical, genetic, cognitive, behavior, and natural history data were systematically collected along with PWS genetic subtypes, pregnancy and birth history, mortality, obesity, and cognitive status with study details as important endpoints in both subject groups. Of the 355 individuals with PWS, 217 (61%) had the 15q11-q13 deletion, 127 (36%) had maternal disomy 15, and 11 (3%) had imprinting defects. Six deaths were reported in our PWS cohort with 598 cumulative years of study exposure and one death in the EMO group with 42 years of exposure. To our knowledge, this description of a longitudinal study in PWS represents the largest and most comprehensive cohort useful for investigators in planning comparable studies in other rare disorders. Ongoing studies utilizing this database should have a direct impact on care and services, diagnosis, treatment, genotype-phenotype correlations, and clinical outcomes in PWS. © 2017 Wiley Periodicals, Inc.

  5. The Children's Hepatic tumors International Collaboration (CHIC): Novel global rare tumor database yields new prognostic factors in hepatoblastoma and becomes a research model

    PubMed Central

    Czauderna, Piotr; Haeberle, Beate; Hiyama, Eiso; Rangaswami, Arun; Krailo, Mark; Maibach, Rudolf; Rinaldi, Eugenia; Feng, Yurong; Aronson, Daniel; Malogolowkin, Marcio; Yoshimura, Kenichi; Leuschner, Ivo; Lopez-Terrada, Dolores; Hishiki, Tomoro; Perilongo, Giorgio; von Schweinitz, Dietrich; Schmid, Irene; Watanabe, Kenichiro; Derosa, Marisa; Meyers, Rebecka

    2016-01-01

    Introduction Contemporary state-of-the-art management of cancer is increasingly defined by individualized treatment strategies. For very rare tumors, like hepatoblastoma, the development of biologic markers, and the identification of reliable prognostic risk factors for tailoring treatment, remains very challenging. The Children's Hepatic tumors International Collaboration (CHIC) is a novel international response to this challenge. Methods Four multicenter trial groups in the world, who have performed prospective controlled studies of hepatoblastoma over the past two decades (COG; SIOPEL; GPOH; and JPLT), joined forces to form the CHIC consortium. With the support of the data management group CINECA, CHIC developed a centralized online platform where data from eight completed hepatoblastoma trials were merged to form a database of 1605 hepatoblastoma cases treated between 1988 and 2008. The resulting dataset is described and the relationships between selected patient and tumor characteristics, and risk for adverse disease outcome (event-free survival; EFS) are examined. Results Significantly increased risk for EFS-event was noted for advanced PRETEXT group, macrovascular venous or portal involvement, contiguous extrahepatic disease, primary tumor multifocality and tumor rupture at enrollment. Higher age (≥8 years), low AFP (<100 ng/ml) and metastatic disease were associated with the worst outcome. Conclusion We have identified novel prognostic factors for hepatoblastoma, as well as confirmed established factors, that will be used to develop a future common global risk stratification system. The mechanics of developing the globally accessible web-based portal, building and refining the database, and performing this first statistical analysis has laid the foundation for future collaborative efforts. 
This is an important step toward refining the risk-based grouping and the approach to future treatment stratification; thus, we believe our collaboration offers a template for others to follow in the study of rare tumors and diseases. PMID:26655560

  6. The Children's Hepatic tumors International Collaboration (CHIC): Novel global rare tumor database yields new prognostic factors in hepatoblastoma and becomes a research model.

    PubMed

    Czauderna, Piotr; Haeberle, Beate; Hiyama, Eiso; Rangaswami, Arun; Krailo, Mark; Maibach, Rudolf; Rinaldi, Eugenia; Feng, Yurong; Aronson, Daniel; Malogolowkin, Marcio; Yoshimura, Kenichi; Leuschner, Ivo; Lopez-Terrada, Dolores; Hishiki, Tomoro; Perilongo, Giorgio; von Schweinitz, Dietrich; Schmid, Irene; Watanabe, Kenichiro; Derosa, Marisa; Meyers, Rebecka

    2016-01-01

    Contemporary state-of-the-art management of cancer is increasingly defined by individualized treatment strategies. For very rare tumors, like hepatoblastoma, the development of biologic markers and the identification of reliable prognostic risk factors for tailoring treatment remain very challenging. The Children's Hepatic tumors International Collaboration (CHIC) is a novel international response to this challenge. Four multicenter trial groups, which have performed prospective controlled studies of hepatoblastoma over the past two decades (COG, SIOPEL, GPOH, and JPLT), joined forces to form the CHIC consortium. With the support of the data management group CINECA, CHIC developed a centralized online platform where data from eight completed hepatoblastoma trials were merged to form a database of 1605 hepatoblastoma cases treated between 1988 and 2008. The resulting dataset is described, and the relationships between selected patient and tumor characteristics and the risk of an adverse disease outcome (event-free survival, EFS) are examined. Significantly increased risk of an EFS event was noted for advanced PRETEXT group, macrovascular venous or portal involvement, contiguous extrahepatic disease, primary tumor multifocality, and tumor rupture at enrollment. Higher age (≥8 years), low AFP (<100 ng/ml), and metastatic disease were associated with the worst outcomes. We have identified novel prognostic factors for hepatoblastoma, as well as confirmed established ones, which will be used to develop a future common global risk stratification system. The mechanics of developing the globally accessible web-based portal, building and refining the database, and performing this first statistical analysis have laid the foundation for future collaborative efforts. 
This is an important step toward refining the risk-based grouping and the approach to future treatment stratification; we believe our collaboration offers a template for others to follow in the study of rare tumors and diseases. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Medical Physics Residency Consortium: collaborative endeavors to meet the ABR 2014 certification requirements

    PubMed Central

    Parker, Brent C.; Duhon, John; Yang, Claus C.; Wu, H. Terry; Hogstrom, Kenneth R.

    2014-01-01

    In 2009, Mary Bird Perkins Cancer Center (MBPCC) established a Radiation Oncology Physics Residency Program to provide opportunities for medical physics residency training to MS and PhD graduates of the CAMPEP‐accredited Louisiana State University (LSU)‐MBPCC Medical Physics Graduate Program. The LSU‐MBPCC Program graduates approximately six students yearly, which equates to a need for up to twelve residency positions in a two‐year program. To address this need for residency positions, MBPCC has expanded its Program by developing a Consortium consisting of partnerships with medical physics groups located at other nearby clinical institutions. The consortium model offers the residents exposure to a broader range of procedures, technology, and faculty than available at the individual institutions. The Consortium institutions have shown a great deal of support from their medical physics groups and administrations in developing these partnerships. Details of these partnerships are specified within affiliation agreements between MBPCC and each participating institution. All partner sites began resident training in 2011. The Consortium is a network of for‐profit, nonprofit, academic, community, and private entities. We feel that these types of collaborative endeavors will be required nationally to reach the number of residency positions needed to meet the 2014 ABR certification requirements and to maintain graduate medical physics training programs. PACS numbers: 01.40.Fk, 01.40.gb PMID:24710434

  8. Consortium for materials development in space

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The status of the Consortium for Materials Development in Space (CMDS) is reviewed. Individual CMDS materials projects and flight opportunities on suborbital and orbital carriers are outlined. Projects include: surface coatings and catalyst production; non-linear optical organic materials; physical properties of immiscible polymers; nuclear track detectors; powdered metal sintering; iron-carbon solidification; high-temperature superconductors; physical vapor transport crystal growth; materials preparation and longevity in hyperthermal oxygen; foam formation; measurement of the microgravity environment; and commercial management of space fluids.

  9. Experiences with the Application of Services Oriented Approaches to the Federation of Heterogeneous Geologic Data Resources

    NASA Astrophysics Data System (ADS)

    Cervato, C.; Fils, D.; Bohling, G.; Diver, P.; Greer, D.; Reed, J.; Tang, X.

    2006-12-01

    The federation of databases is not a new endeavor. Great strides have been made, for example, in the health and astrophysics communities; reviews of those successes indicate that they were able to leverage key cross-community core concepts. In its simplest implementation, a federation of databases with identical base schemas that can be extended to address individual efforts is relatively easy to accomplish. Efforts of groups like the Open Geospatial Consortium have demonstrated methods for geospatially relating data from different sources. We present here a summary of CHRONOS's (http://www.chronos.org) experience with highly heterogeneous data. Our experience with the federation of very diverse databases shows that the wide variety of encoding options for items like locality, time scale, taxon ID, and other key parameters makes it difficult to join data effectively across them. However, the response to this is not to develop one large, monolithic database, which would suffer growing pains due to social, national, and operational issues, but rather to systematically develop an architecture that enables cross-resource (database, repository, tool, interface) interaction. CHRONOS has cleared the major hurdle of federating small IT database efforts with service-oriented and XML-based approaches. The application of easy-to-use procedures that allow groups of all sizes to implement and experiment with searches across various databases and to use externally created tools is vital. We are sharing with the geoinformatics community the difficulties with application frameworks, user authentication, standards compliance, and data storage encountered in setting up web sites and portals for various science initiatives (e.g., ANDRILL, EARTHTIME). 
The ability to incorporate CHRONOS data, services, and tools into a group's existing framework is crucial to the development of a model that supports and extends the vitality of the small- to medium-sized research effort that is essential for a vibrant scientific community. This presentation will directly address issues of portal development related to JSR-168 and other portal APIs, as well as issues related to both federated and local directory-based authentication. The application of service-oriented architecture in connection with ReST-based approaches is vital to facilitating service use by both experienced and less experienced information technology groups. Applying these services with XML-based schemas allows for connection to third-party tools, such as GIS-based tools and software designed to perform a specific scientific analysis. The connection of all these capabilities into a combined framework based on the standard XHTML Document Object Model and CSS 2.0 standards used in traditional web development will be demonstrated. CHRONOS also utilizes newer client techniques such as AJAX and cross-domain scripting along with traditional server-side database, application, and web servers. The combination of the various components of this architecture creates an environment, based on open and free standards, that allows for the discovery, retrieval, and integration of tools and data.
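As a minimal illustration of the XML-over-ReST pattern described in this record, the sketch below parses a service response with Python's standard library. The payload and element names are invented for illustration and do not reflect an actual CHRONOS service schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical XML payload, standing in for the body of a ReST response.
response = """<taxa>
  <taxon id="t1"><name>Globigerina bulloides</name></taxon>
  <taxon id="t2"><name>Orbulina universa</name></taxon>
</taxa>"""

# Parse the document and pull out one field per record.
root = ET.fromstring(response)
names = [t.findtext("name") for t in root.findall("taxon")]
print(names)  # ['Globigerina bulloides', 'Orbulina universa']
```

Because the response is plain XML over HTTP, any client stack, from a portal to a GIS tool, can consume it the same way.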

  10. Global Land Ice Measurements from Space (GLIMS) and the GLIMS Information Management System at NSIDC

    NASA Astrophysics Data System (ADS)

    Machado, A. E.; Scharfen, G. R.; Barry, R. G.; Khalsa, S. S.; Raup, B.; Swick, R.; Troisi, V. J.; Wang, I.

    2001-12-01

    GLIMS (Global Land Ice Measurements from Space) is an international project to survey a majority of the world's glaciers with the accuracy and precision needed to assess recent changes and determine trends in glacial environments. This will be accomplished by: comprehensive periodic satellite measurements, coordinated distribution of screened image data, analysis of images at worldwide Regional Centers, validation of analyses, and a publicly accessible database. The primary data sources will be the ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer) instrument aboard the EOS Terra spacecraft and Landsat ETM+ (Enhanced Thematic Mapper Plus), both currently in operation. Approximately 700 ASTER images had been acquired with GLIMS gain settings as of mid-2001. GLIMS is a collaborative effort of the United States Geological Survey (USGS), the National Aeronautics and Space Administration (NASA), other U.S. federal agencies, and a group of internationally distributed glaciologists at Regional Centers of expertise. The National Snow and Ice Data Center (NSIDC) is developing the information management system for GLIMS. We will ingest and maintain GLIMS-analyzed glacier data from Regional Centers and provide access to the data via the World Wide Web. The GLIMS database will include measurements (over time) of glacier length, area, boundaries, topography, surface velocity vectors, and snowline elevation, derived primarily from remote sensing data. The GLIMS information management system at NSIDC will provide an easy-to-use and widely accessible service for the glaciological community and other users needing information about the world's glaciers. The structure of the international GLIMS consortium, the status of database development, sample imagery and derived analyses, and user search and order interfaces will be demonstrated. More information on GLIMS is available at: http://www.glims.org/.

  11. Using a relational database to improve mortality and length of stay for a department of surgery: a comparative review of 5200 patients.

    PubMed

    Ang, Darwin N; Behrns, Kevin E

    2013-07-01

    The emphasis on high-quality care has spawned the development of quality programs, most of which focus on broad outcome measures across a diverse group of providers. Our aim was to investigate the clinical outcomes for a department of surgery with multiple service lines of patient care using a relational database. Mortality, length of stay (LOS), patient safety indicators (PSIs), and hospital-acquired conditions were examined for each service line. Expected values for mortality and LOS were derived from University HealthSystem Consortium regression models, whereas expected values for PSIs were derived from Agency for Healthcare Research and Quality regression models. Overall, 5200 patients were evaluated from the months of January through May of both 2011 (n = 2550) and 2012 (n = 2650). The overall observed-to-expected (O/E) ratio of mortality improved from 1.03 to 0.92. The overall O/E ratio for LOS improved from 0.92 to 0.89. PSIs that predicted mortality included postoperative sepsis (O/E:1.89), postoperative respiratory failure (O/E:1.83), postoperative metabolic derangement (O/E:1.81), and postoperative deep vein thrombosis or pulmonary embolus (O/E:1.8). Mortality and LOS can be improved by using a relational database with outcomes reported to specific service lines. Service line quality can be influenced by distribution of frequent reports, group meetings, and service line-directed interventions.
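The observed-to-expected (O/E) ratios reported in this record can be illustrated with a short sketch. The patient records and expected probabilities below are invented; in the study, expected deaths per patient come from University HealthSystem Consortium regression models.

```python
from collections import defaultdict

def oe_ratio(patients):
    """O/E mortality ratio per service line.

    patients: iterable of (service_line, died: bool, expected_prob) tuples,
    where expected_prob is the risk-adjusted death probability for that patient.
    """
    obs = defaultdict(float)  # observed deaths per service line
    exp = defaultdict(float)  # sum of expected death probabilities
    for line, died, p in patients:
        obs[line] += 1.0 if died else 0.0
        exp[line] += p
    return {line: obs[line] / exp[line] for line in exp if exp[line] > 0}

# Invented illustrative data: two service lines, two patients each.
patients = [
    ("vascular", True, 0.30), ("vascular", False, 0.20),
    ("colorectal", False, 0.05), ("colorectal", True, 0.45),
]
print(oe_ratio(patients))  # both lines: 1 observed / 0.5 expected = 2.0
```

An O/E ratio below 1.0, as the department achieved overall, means fewer events occurred than the risk model predicted.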

  12. Surmounting the unique challenges in health disparities education: a multi-institution qualitative study.

    PubMed

    Carter-Pokras, Olivia; Bereknyei, Sylvia; Lie, Desiree; Braddock, Clarence H

    2010-05-01

    The National Consortium for Multicultural Education for Health Professionals (Consortium) comprises educators representing 18 US medical schools, funded by the National Institutes of Health. Collective lessons learned from curriculum implementation by principal investigators (PIs) have the potential to guide similar educational endeavors. Objective: to describe Consortium PIs' self-reported challenges with curricular development, their solutions, and their new curricular products. Information was collected from PIs over 2 months using a 53-question structured three-part questionnaire addressing PI demographics, curriculum implementation challenges and solutions, and newly created curricular products. Study participants were 18 Consortium PIs. Descriptive analysis was used for quantitative data; narrative responses were analyzed and interpreted using qualitative thematic coding. The response rate was 100%. Common barriers and challenges identified by PIs were: finding administrative and leadership support, sustaining momentum, continued funding, finding curricular space, accessing and engaging communities, and lack of education research methodology skills. Solutions identified included engaging stakeholders, project-sharing across schools, advocacy and active participation in committees and community, and seeking sustainable funding. All Consortium PIs reported new curricular products and extensive dissemination efforts outside their own institutions. The Consortium model has added benefits for curricular innovation and dissemination in cultural competence education to address health disparities. Lessons learned may be applicable to other educational innovation efforts.

  13. The RECONS 25 Parsec Database: Who Are the Stars? Where Are the Planets?

    NASA Astrophysics Data System (ADS)

    Henry, Todd J.; Dieterich, S.; Hosey, A. D.; Ianna, P. A.; Jao, W.; Koerner, D. W.; Riedel, A. R.; Slatten, K. J.; Subasavage, J.; Winters, J. G.; RECONS

    2013-01-01

    Since 1994, RECONS (www.recons.org, REsearch Consortium On Nearby Stars) has been discovering and characterizing the Sun's neighbors. Nearby stars provide increased fluxes, larger astrometric perturbations, and higher probabilities for eventual resolution and detailed study of planets than similar stars at larger distances. Examination of the nearby stellar sample will reveal the prevalence and structure of solar systems, as well as the balance of Jovian and terrestrial worlds. These are the stars and planets that will ultimately be key in our search for life elsewhere. Here we outline what we know ... and what we don't know ... about the population of the nearest stars. We have expanded the original RECONS 10 pc horizon to 25 pc and are constructing a database that currently includes 2124 systems. By using the CTIO 0.9m telescope --- now operated by RECONS as part of the SMARTS Consortium --- we have published the first accurate parallaxes for 149 systems within 25 pc and currently have an additional 213 unpublished systems to add. Still, we predict that roughly two-thirds of the systems within 25 pc do not yet have accurate distance measurements. In addition to revealing the Sun's stellar neighbors, we have been using astrometric techniques to search for massive planets orbiting roughly 200 of the nearest red dwarfs. Unlike radial velocity searches, our astrometric effort is most sensitive to Jovian planets in Jovian orbits, i.e. those that span decades. We have now been monitoring stars for up to 13 years with positional accuracies of a few milliarcseconds per night. We have detected stellar and brown dwarf companions, as well as enigmatic, unseen secondaries, but have yet to reveal a single super-Jupiter ... a somewhat surprising result. In total, only 3% of stars within 25 pc are known to possess planets. It seems clear that we have a great deal of work to do to map out the stars, planets, and perhaps life in the solar neighborhood. 
This effort is supported by the NSF through grant AST-0908402 and via observations made possible by the SMARTS Consortium.
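The distances that define the RECONS 25 pc sample rest on trigonometric parallaxes, and the conversion from parallax to distance is a one-line calculation: d [pc] = 1 / p [arcsec]. A minimal sketch (the 40 mas example is illustrative):

```python
def parallax_to_distance_pc(parallax_mas):
    """Distance in parsecs from a trigonometric parallax in milliarcseconds."""
    if parallax_mas <= 0:
        raise ValueError("parallax must be positive")
    # d [pc] = 1 / p [arcsec]; 1 arcsec = 1000 mas.
    return 1000.0 / parallax_mas

# A star sits exactly on the RECONS 25 pc horizon when p = 40 mas:
print(parallax_to_distance_pc(40.0))  # 25.0
```

Stars with parallaxes larger than 40 mas fall inside the survey horizon, which is why the sample grows rapidly as parallax precision improves.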

  14. National Institute on Alcohol Abuse and Alcoholism and the study of fetal alcohol spectrum disorders. The International Consortium.

    PubMed

    Calhoun, Faye; Attilia, Maria Luisa; Spagnolo, Primavera Alessandra; Rotondo, Claudia; Mancinelli, Rosanna; Ceccanti, Mauro

    2006-01-01

    Fetal alcohol syndrome (FAS) is a large and rapidly increasing public health problem worldwide. Aside from full-blown FAS, multiple terms are used to describe the continuum of effects that result from prenatal exposure to alcohol, collectively known as fetal alcohol spectrum disorders (FASD). The revised Institute of Medicine (IOM) Diagnostic Classification System and the diagnostic criteria for FAS and FASD are reported, as well as the formation of the four-state FAS International Consortium and its aims, such as the development of an information base that systematizes data collection, helps to identify high-risk populations, and supports the implementation and testing of a science-based prevention/intervention model for at-risk women. The Consortium was later enlarged to include additional members (including Italy), leading to the formation of the International Consortium for the Investigation of FASD. The objectives of the Consortium are reported, as well as its previous activities, the South Africa and Italy Projects (active case ascertainment initiatives), and its future activities.

  15. Accrediting osteopathic postdoctoral training institutions.

    PubMed

    Duffy, Thomas

    2011-04-01

    All postdoctoral training programs approved by the American Osteopathic Association are required to be part of an Osteopathic Postdoctoral Training Institution (OPTI) consortium. The author reviews recent activities related to OPTI operations, including the transfer of the OPTI Annual Report to an electronic database, revisions to the OPTI Accreditation Handbook, training at the 2010 OPTI Workshop, and new requirements of the American Osteopathic Association Commission on Osteopathic College Accreditation. The author also reviews the OPTI accreditation process, cites common commendations and deficiencies for reviews completed from 2008 to 2010, and provides an overview of plans for future improvements.

  16. The Multi-Resolution Land Characteristics (MRLC) Consortium - 20 Years of Development and Integration of U.S. National Land Cover Data

    EPA Science Inventory

    The Multi-Resolution Land Characteristics (MRLC) Consortium is a good example of the national benefits of federal collaboration. It started in the mid-1990s as a small group of federal agencies with the straightforward goal of compiling a comprehensive national Landsat dataset t...

  17. A Critical Analysis of a New Model for Occupational Therapy Education: Its Applicability for Other Occupations and Systems.

    ERIC Educational Resources Information Center

    National Committee on Employment of Youth, New York, NY.

    The symposium report focuses on an upgrading program (designed by the Consortium for Occupational Therapy Education) to develop alternate routes to credentialled education and training, resulting in opening up occupational therapy career opportunities to young people. The consortium was composed of four New York State hospitals, two academic…

  18. A NASA/University/Industry Consortium for Research on Aircraft Ice Protection

    NASA Technical Reports Server (NTRS)

    Zumwalt, Glen W.

    1989-01-01

    From 1982 through 1987, a unique consortium operated involving government (NASA), academia (Wichita State University), and twelve industrial partners. Its purpose was the development of better ice protection systems for aircraft. The circumstances that brought about this activity are described, the formation and operation of the consortium recounted, and the effectiveness of the venture evaluated.

  19. Analysis of the Gulf Coast Consortium Student Perceptions of College Services Spring 2001 Survey.

    ERIC Educational Resources Information Center

    Austin Community Coll., TX. Office of Institutional Effectiveness.

    This is a report from Austin Community College (Texas) on a student satisfaction survey developed and administered by the Gulf Coast Consortium of Institutional Research (GCAIR). The survey includes student response data from four community colleges: Austin, Houston, North Harris Montgomery, and San Jacinto. A total of 3,267 students responded to…

  20. Eisenhower Pre-Service Teacher Education Project, Higher Education Consortium Region III. Final Report.

    ERIC Educational Resources Information Center

    Wozniak, Jacci

    The Eisenhower Pre-Service Teacher Education Project was developed by the University of Central Florida, the five community colleges in Region III of the Higher Education Consortium, and the private college and universities in the same region to design curriculum changes to improve the preparation of elementary and secondary math and science…

  1. The Power of Leadership, Collaboration, and Professional Development: The Story of the SMART Consortium

    ERIC Educational Resources Information Center

    Williams, Paul R.; Tabernik, Anna Maria; Krivak, Terry

    2009-01-01

    Few researchers support the belief that a school superintendent can drive improvements in student achievement. The Science and Mathematics Achievement Required for Tomorrow (SMART) Consortium was formed in northeast Ohio in 1998 with the belief that superintendents can have a measurable effect on student learning. The goal of this collaboration…

  2. Approaches and Activities for Engaging Children with Key Ideas in Science

    ERIC Educational Resources Information Center

    Patterson, Pauline

    2015-01-01

    The Cams Hill Science Consortium (CHSC) is a group of teachers based in Hampshire who have been meeting regularly over a number of years to share outcomes from their classroom-based research into engaging children more productively in science. Led by Matthew Newberry, formerly of Cams Hill School in Fareham, the consortium has developed and…

  3. The southern high-resolution modeling consortium - a source for research and operational collaboration

    Treesearch

    Gary L. Achtemeier; Scott L. Goodrick; Yongqiang Liu

    2003-01-01

    The Southern High-Resolution Modeling Consortium (SHRMC) is one of five regional Fire Consortia for Advanced Modeling of Meteorology and Smoke (FCAMMS) consortia established as part of the National Fire Plan. FCAMMS involves research and development activities collaborating across all land management agencies, NOAA, NASA, and Universities. These activities will support...

  4. Job Training for the Homeless Demonstration Program: U.S. Department of Labor--Employment and Training Administration. Final Evaluation Report.

    ERIC Educational Resources Information Center

    Elgin Community Coll., IL.

    This report evaluates the Fox Valley Consortium for Job Training and Placement of the Homeless which involves five educational, social service, and community organizations in activities to facilitate the educational development and financial independence of homeless participants. The consortium consists of: the Community Crisis Center (area…

  5. Project COMPAS [Consortium for Operating and Managing Programs for the Advancment of Skills]: A Design for Change.

    ERIC Educational Resources Information Center

    Schermerhorn, Leora L., Ed.; And Others

    Descriptive and evaluative information is provided in this report on Project COMPAS (Consortium for Operating and Managing Programs for the Advancement of Skills), a cooperative effort between seven community colleges which developed cognitive skills programs for entry-level freshmen. Chapter I reviews the unique features of Project COMPAS,…

  6. A Consortium for Teacher Preparation: Model Guidelines for Small Colleges.

    ERIC Educational Resources Information Center

    Fouts, Jeffrey T.

    Guidelines have been developed for a consortium approach to teacher preparation in a small college that may need some expertise and access to teaching materials which can be provided by local school districts and other agencies. This approach to the design, management, and evaluation of the program offers the opportunity to involve the schools…

  7. The Unwalled Garden: Growth of the OpenCourseWare Consortium, 2001-2008

    ERIC Educational Resources Information Center

    Carson, Steve

    2009-01-01

    This article traces the development of the OpenCourseWare movement, including the origin of the concept at the Massachusetts Institute of Technology (MIT), the implementation of the MIT OpenCourseWare project, and the idea's spread into the global educational community, ultimately resulting in the formation of the OpenCourseWare Consortium. The…

  8. An integrated photogrammetric and spatial database management system for producing fully structured data using aerial and remote sensing images.

    PubMed

    Ahmadi, Farshid Farnood; Ebadi, Hamid

    2009-01-01

    3D spatial data acquired from aerial and remote sensing images by photogrammetric techniques are among the most accurate and economical data sources for GIS, map production, and spatial data updating. However, many problems remain concerning the storage, structuring, and appropriate management of spatial data obtained using these techniques. Given the capabilities of spatial database management systems (SDBMSs), direct integration of photogrammetric systems and SDBMSs can save the time and cost of producing and updating digital maps. This integration is accomplished by replacing digital maps with a single spatial database. Applying spatial databases overcomes the problem of managing spatial and attribute data in a coupled approach, which is one of the main obstacles in GISs that use the map products of photogrammetric workstations. By means of these integrated systems, it is also possible to provide structured spatial data, based on OGC (Open GIS Consortium) standards and topological relations between different feature classes, at the time of feature digitizing. In this paper, the integration of photogrammetric systems and SDBMSs is evaluated, and different levels of integration are described. Finally, the design, implementation, and testing of a software package called Integrated Photogrammetric and Oracle Spatial Systems (IPOSS) are presented.
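As a small, hedged illustration of the OGC-standard structured data this record describes (not the IPOSS implementation), the sketch below serializes a digitized feature as Simple Features well-known text (WKT), the form commonly stored in a spatial database column.

```python
def polygon_wkt(ring):
    """Serialize a list of (x, y) vertices as an OGC Simple Features POLYGON.

    The ring is closed automatically (first vertex repeated at the end),
    as the WKT polygon convention requires.
    """
    if ring[0] != ring[-1]:
        ring = ring + [ring[0]]
    coords = ", ".join(f"{x:g} {y:g}" for x, y in ring)
    return f"POLYGON(({coords}))"

# Invented example: a rectangular building footprint digitized from imagery.
print(polygon_wkt([(0, 0), (10, 0), (10, 5), (0, 5)]))
# POLYGON((0 0, 10 0, 10 5, 0 5, 0 0))
```

A geometry in this form can be inserted directly into a spatial column (e.g., via a constructor such as Oracle's or PostGIS's WKT loaders), which is what makes the single-database approach practical.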

  9. MODBASE, a database of annotated comparative protein structure models

    PubMed Central

    Pieper, Ursula; Eswar, Narayanan; Stuart, Ashley C.; Ilyin, Valentin A.; Sali, Andrej

    2002-01-01

    MODBASE (http://guitar.rockefeller.edu/modbase) is a relational database of annotated comparative protein structure models for all available protein sequences matched to at least one known protein structure. The models are calculated by MODPIPE, an automated modeling pipeline that relies on PSI-BLAST, IMPALA and MODELLER. MODBASE uses the MySQL relational database management system for flexible and efficient querying, and the MODVIEW Netscape plugin for viewing and manipulating multiple sequences and structures. It is updated regularly to reflect the growth of the protein sequence and structure databases, as well as improvements in the software for calculating the models. For ease of access, MODBASE is organized into different datasets. The largest dataset contains models for domains in 304 517 out of 539 171 unique protein sequences in the complete TrEMBL database (23 March 2001); only models based on significant alignments (PSI-BLAST E-value < 10⁻⁴) and models assessed to have the correct fold are included. Other datasets include models for target selection and structure-based annotation by the New York Structural Genomics Research Consortium, models for prediction of genes in the Drosophila melanogaster genome, models for structure determination of several ribosomal particles and models calculated by the MODWEB comparative modeling web server. PMID:11752309
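The filtering criteria described above (significant alignment plus assessed-correct fold) are a relational query. The sketch below illustrates the idea with Python's stdlib sqlite3; the schema, column names, and rows are invented for illustration and do not reproduce MODBASE's actual MySQL schema.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE models
               (seq_id TEXT, template TEXT, evalue REAL, fold_ok INTEGER)""")
con.executemany("INSERT INTO models VALUES (?, ?, ?, ?)", [
    ("P12345", "1abc", 1e-6, 1),
    ("P67890", "2xyz", 1e-2, 1),   # alignment not significant
    ("Q11111", "3def", 1e-8, 0),   # fold assessed incorrect
])

# Keep only models built on significant alignments (E-value < 1e-4)
# whose fold was assessed as correct.
rows = con.execute("""SELECT seq_id FROM models
                      WHERE evalue < 1e-4 AND fold_ok = 1""").fetchall()
print(rows)  # [('P12345',)]
```

The same two-clause WHERE filter, scaled up to hundreds of thousands of sequences, is what defines membership in the largest MODBASE dataset.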

  10. Pulmonary Nodule Classification with Deep Convolutional Neural Networks on Computed Tomography Images.

    PubMed

    Li, Wei; Cao, Peng; Zhao, Dazhe; Wang, Junbo

    2016-01-01

    Computer aided detection (CAD) systems can assist radiologists by offering a second opinion on early diagnosis of lung cancer. Classification and feature representation play critical roles in false-positive reduction (FPR) in lung nodule CAD. We design a deep convolutional neural network method for nodule classification, which has the advantages of automatically learned representations and strong generalization ability. A network structure specialized for nodule images is proposed to recognize three types of nodules: solid, semisolid, and ground-glass opacity (GGO). The deep convolutional neural networks are trained on 62,492 region-of-interest (ROI) samples, including 40,772 nodules and 21,720 non-nodules, from the Lung Image Database Consortium (LIDC) database. Experimental results demonstrate the effectiveness of the proposed method in terms of sensitivity and overall accuracy, and that it consistently outperforms the competing methods.
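The evaluation metrics named in this record, sensitivity and overall accuracy, are simple to state precisely. A minimal sketch with invented labels (1 = nodule, 0 = non-nodule):

```python
def sensitivity_and_accuracy(y_true, y_pred):
    """Sensitivity (recall on positives) and overall accuracy.

    y_true, y_pred: sequences of 0/1 labels, 1 = nodule, 0 = non-nodule.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return tp / (tp + fn), correct / len(y_true)

# Invented illustrative predictions: 3 nodules, 5 non-nodules.
y_true = [1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 0, 0, 0, 1, 0, 0]
sens, acc = sensitivity_and_accuracy(y_true, y_pred)
print(round(sens, 3), acc)  # 0.667 0.75
```

In an FPR setting, sensitivity measures how many true nodules survive the reduction step, while accuracy also credits the non-nodules that were correctly discarded.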

  11. Classification of pulmonary nodules in lung CT images using shape and texture features

    NASA Astrophysics Data System (ADS)

    Dhara, Ashis Kumar; Mukhopadhyay, Sudipta; Dutta, Anirvan; Garg, Mandeep; Khandelwal, Niranjan; Kumar, Prafulla

    2016-03-01

    Differentiation of malignant and benign pulmonary nodules is important for the prognosis of lung cancer. In this paper, benign and malignant nodules are classified using a support vector machine. Several shape-based and texture-based features are used to represent the pulmonary nodules in the feature space. A semi-automated technique is used for nodule segmentation, and relevant features are selected for efficient representation of nodules in the feature space. The proposed scheme and a competing technique are evaluated on a data set of 542 nodules from the Lung Image Database Consortium and Image Database Resource Initiative. Nodules with a composite malignancy rank of "1" or "2" are considered benign and those with "4" or "5" malignant. The area under the receiver operating characteristic curve is 0.9465 for the proposed method, which outperforms the competing technique.
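The area under the ROC curve (AUC) reported here can be computed directly from classifier scores via the rank-sum (Mann-Whitney) formulation: the probability that a randomly chosen malignant nodule scores higher than a randomly chosen benign one. The scores below are invented for illustration.

```python
def auc(scores_pos, scores_neg):
    """AUC via the rank-sum formulation: fraction of (pos, neg) pairs
    where the positive outranks the negative (ties count half)."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Invented classifier scores for a handful of nodules.
malignant = [0.9, 0.8, 0.6]
benign = [0.7, 0.3, 0.2]
print(round(auc(malignant, benign), 3))  # 8 of 9 pairs ordered correctly: 0.889
```

An AUC of 0.9465, as in this record, means the classifier orders roughly 95% of malignant/benign pairs correctly, independent of any particular decision threshold.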

  12. Enhancing AFLOW Visualization using Jmol

    NASA Astrophysics Data System (ADS)

    Lanasa, Jacob; New, Elizabeth; Stefek, Patrik; Honaker, Brigette; Hanson, Robert; Aflow Collaboration

    The AFLOW library is a database of theoretical solid-state structures and calculated properties created using high-throughput ab initio calculations. Jmol is a Java-based program capable of visualizing and analyzing complex molecular structures and energy landscapes. In collaboration with the AFLOW consortium, our goal is the enhancement of the AFLOWLIB database through the extension of Jmol's capabilities in the area of materials science. Modifications made to Jmol include the ability to read and visualize AFLOW binary alloy data files; the ability to extract information from these files using Jmol scripting macros, for use in creating interactive web-based convex hull graphs; the capability to identify and classify local atomic environments by symmetry; and the ability to search one or more related crystal structures for atomic environments using a novel extension of inorganic polyhedron-based SMILES strings.

  13. Identifying complications of interventional procedures from UK routine healthcare databases: a systematic search for methods using clinical codes.

    PubMed

    Keltie, Kim; Cole, Helen; Arber, Mick; Patrick, Hannah; Powell, John; Campbell, Bruce; Sims, Andrew

    2014-11-28

    Several authors have developed and applied methods to routine data sets to identify the nature and rate of complications following interventional procedures. But, to date, there has been no systematic search for such methods. The objective of this article was to find, classify and appraise published methods, based on analysis of clinical codes, which used routine healthcare databases in a United Kingdom setting to identify complications resulting from interventional procedures. A literature search strategy was developed to identify published studies that referred, in the title or abstract, to the name or acronym of a known routine healthcare database and to complications from procedures or devices. The following data sources were searched in February and March 2013: Cochrane Methods Register, Conference Proceedings Citation Index - Science, Econlit, EMBASE, Health Management Information Consortium, Health Technology Assessment database, MathSciNet, MEDLINE, MEDLINE in-process, OAIster, OpenGrey, Science Citation Index Expanded and ScienceDirect. Of the eligible papers, those which reported methods using clinical coding were classified and summarised in tabular form using the following headings: routine healthcare database; medical speciality; method for identifying complications; length of follow-up; method of recording comorbidity. The benefits and limitations of each approach were assessed. 
From 3688 papers identified by the literature search, 44 reported the use of clinical codes to identify complications, from which four distinct methods were identified: 1) searching the index admission for specified clinical codes; 2) searching a sequence of admissions for specified clinical codes; 3) searching for specified clinical codes for complications from procedures and devices within the International Classification of Diseases 10th revision (ICD-10) coding scheme, which is the methodology recommended by the NHS Classification Service; and 4) conducting manual clinical review of diagnostic and procedure codes. The four distinct methods for identifying complications from codified data offer great potential for generating new evidence on the quality and safety of new procedures using routine data. However, the most robust method, the methodology recommended by the NHS Classification Service, was the least frequently used, highlighting that much valuable observational data are being ignored.
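The difference between the first two methods above can be sketched in a few lines of pandas. The admissions table, patient identifiers, and the ICD-10 code choices here are invented toy data, not drawn from any of the reviewed studies:

```python
# Toy sketch of method 1 (index admission only) vs method 2 (admission
# sequence) for flagging complications from specified clinical codes.
import pandas as pd

admissions = pd.DataFrame({
    "patient": ["A", "A", "B", "B"],
    "admission_no": [1, 2, 1, 2],  # 1 = index (procedure) admission
    "codes": [["K40.9", "T81.4"], ["Z09"], ["K40.9"], ["T81.4"]],
})
# Hypothetical complication code set (T81.4: infection following a procedure)
complication_codes = {"T81.4"}

def has_complication(codes):
    return bool(complication_codes & set(codes))

# Method 1: search only the index admission for the specified codes
m1 = admissions[admissions.admission_no == 1].groupby("patient")["codes"].apply(
    lambda s: any(has_complication(c) for c in s))
# Method 2: search the whole sequence of admissions
m2 = admissions.groupby("patient")["codes"].apply(
    lambda s: any(has_complication(c) for c in s))
print(m1.to_dict())
print(m2.to_dict())
```

Patient B's complication is coded only on a readmission, so method 1 misses it while method 2 captures it, which is the kind of follow-up-length sensitivity the review classifies.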

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carl Irwin; Rakesh Gupta; Richard Turton

    The Mid-Atlantic Recycling Center for End-of-Life Electronics (MARCEE) was set up in 1999 in response to a call from Congressman Alan Mollohan, who had a strong interest in this subject. A consortium was put together which included the Polymer Alliance Zone (PAZ) of West Virginia, West Virginia University (WVU), DN American and Ecolibrium. The consortium developed a set of objectives and task plans, which included the research issues of setting up facilities to demanufacture End-of-Life Electronics (EoLE), the economics of the demanufacturing process, and the infrastructure development necessary for a sustainable recycling industry to be established in West Virginia. This report discusses the work of the MARCEE Project Consortium from November 1999 through March 2005. While the body of the report is distributed in hard-copy form, the Appendices are being distributed on CDs.

  15. Q14 - Standards Development Plan, Ada Interfaces to X Window System, Analysis and Recommendations

    DTIC Science & Technology

    1989-03-20

    portability and reusability. Two major thrusts of the STARS program, and industry as a whole, are application... and IEEE, and in industry consortiums to show the directions X is taking and the opportunities for Ada to utilize this work. X is not the only window... and actually prohibit portability, but to avoid this the X developers formed the X Consortium, consisting of industry and academic members, who define

  16. SU-F-P-35: A Multi-Institutional Plan Quality Checking Tool Built On Oncospace: A Shared Radiation Oncology Database System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bowers, M; Robertson, S; Moore, J

    Purpose: Late toxicity from radiation to critical structures limits the dose deliverable in radiation therapy. Perfectly conformal treatment of a target is not realizable, so the clinician must accept a certain level of collateral radiation to nearby OARs. But how much? General guidelines exist for healthy-tissue sparing which guide RT treatment planning, but are these guidelines good enough to create the optimal plan for an individual patient's anatomy? We propose a means to evaluate the planned dose level to an OAR using a multi-institutional data-store of previously treated patients, so a clinician might reconsider planning objectives. Methods: The tool is built on Oncospace, a federated data-store system, which consists of planning data import, web-based analysis tools, and a database containing: 1) DVHs: dose by percent volume delivered to each ROI for each patient previously treated and included in the database; 2) Overlap Volume Histograms (OVHs): an anatomical measure defined as the percent volume of an ROI within a given distance of target structures. Clinicians know which OARs are important to spare. For any ROI, Oncospace knows for which patients' anatomy that ROI was harder to plan in the past (the OVH is lower). The planned dose should be close to the lowest dose among previous patients. The tool displays the dose those OARs were subjected to, and the clinician can make a determination about the planning objectives used. Multiple institutions contribute to the Oncospace Consortium, and their DVH and OVH data are combined and color-coded in the output. Results: The Oncospace website provides a plan quality display tool which identifies harder-to-treat patients and graphically displays the dose delivered to them for comparison with the proposed plan. Conclusion: The Oncospace Consortium manages a data-store of previously treated patients which can be used for quality checking new plans. Grant funding by Elekta.
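The OVH defined in the abstract (percent of an OAR's volume within a given distance of the target) can be sketched directly. The distances below are synthetic; a real implementation would derive per-voxel distances from a distance transform of the target surface (e.g. scipy.ndimage.distance_transform_edt):

```python
# Toy sketch of an Overlap Volume Histogram (OVH): for each distance d,
# the percent of OAR voxels lying within d of the target structure.
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-in: distance (mm) from each OAR voxel to the target
oar_to_target_mm = rng.uniform(0, 30, size=5000)

def ovh(distances, bins):
    return [100.0 * np.mean(distances <= d) for d in bins]

curve = ovh(oar_to_target_mm, bins=[5, 10, 20, 30])
print([round(v, 1) for v in curve])
```

A lower OVH curve means less of the OAR sits close to the target, i.e. the case is geometrically easier to plan, which is how the tool ranks prior patients for comparison.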

  17. The extent of intestinal failure-associated liver disease in patients referred for intestinal rehabilitation is associated with increased mortality: an analysis of the pediatric intestinal failure consortium database.

    PubMed

    Javid, Patrick J; Oron, Assaf P; Duggan, Christopher; Squires, Robert H; Horslen, Simon P

    2017-09-05

    The advent of regional multidisciplinary intestinal rehabilitation programs has been associated with improved survival in pediatric intestinal failure. Yet, the optimal timing of referral for intestinal rehabilitation remains unknown. We hypothesized that the degree of intestinal failure-associated liver disease (IFALD) at initiation of intestinal rehabilitation would be associated with overall outcome. The multicenter, retrospective Pediatric Intestinal Failure Consortium (PIFCon) database was used to identify all subjects with baseline bilirubin data. Conjugated bilirubin (CBili) was used as a marker for IFALD, and we stratified baseline bilirubin values as CBili<2 mg/dL, CBili 2-4 mg/dL, and CBili>4 mg/dL. The association between baseline CBili and mortality was examined using Cox proportional hazards regression. Of 272 subjects in the database, 191 (70%) children had baseline bilirubin data collected. 38% and 28% of patients had CBili >4 mg/dL and CBili <2 mg/dL, respectively, at baseline. All-cause mortality was 23%. On univariate analysis, mortality was associated with CBili 2-4 mg/dL, CBili >4 mg/dL, prematurity, race, and small bowel atresia. On regression analysis controlling for age, prematurity, and diagnosis, the risk of mortality was increased by 3-fold for baseline CBili 2-4 mg/dL (HR 3.25 [1.07-9.92], p=0.04) and 4-fold for baseline CBili >4 mg/dL (HR 4.24 [1.51-11.92], p=0.006). On secondary analysis, CBili >4 mg/dL at baseline was associated with a lower chance of attaining enteral autonomy. In children with intestinal failure treated at intestinal rehabilitation programs, more advanced IFALD at referral is associated with increased mortality and decreased prospect of attaining enteral autonomy. Early referral of children with intestinal failure to intestinal rehabilitation programs should be strongly encouraged. Treatment Study, Level III. Copyright © 2017 Elsevier Inc. All rights reserved.
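The baseline stratification used in the study (CBili <2, 2-4, and >4 mg/dL) is a simple binning step; a minimal pandas sketch with fabricated bilirubin values (boundary handling here is an assumption, since the abstract does not specify which stratum the exact values 2 and 4 fall into):

```python
# Hypothetical sketch: stratifying baseline conjugated bilirubin (mg/dL)
# into the three IFALD severity bands used in the analysis.
import pandas as pd

cbili = pd.Series([0.5, 1.9, 2.0, 3.7, 4.1, 9.0])  # fabricated values
strata = pd.cut(cbili,
                bins=[-float("inf"), 2, 4, float("inf")],
                right=False,  # assumption: 2 and 4 join the higher band
                labels=["<2", "2-4", ">4"])
print(strata.value_counts().sort_index().to_dict())
```

Each stratum would then enter the Cox proportional hazards model as a categorical covariate, with <2 mg/dL as the reference level.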

  18. Round table discussion " Development of qualification framework in meteorology (TEMPUS QUALIMET)"

    NASA Astrophysics Data System (ADS)

    Bashmakova, I.; Belotserkovsky, A.; Karlin, L.; Petrosyan, A.; Serditova, N.; Zilitinkevich, S.

    2010-09-01

    The international consortium has started implementing a project aimed at the development of a unified framework of qualifications in meteorology (QualiMet), setting up a system of recognition and award of qualifications up to Doctoral level based on standards of knowledge, skill and competence acquired by learners. The QualiMet project has the following specific objectives: 1) to develop standards of knowledge, skills and competence for all qualifications up to Doctoral level, covering all occupations a meteorology learner can undertake, by July 2011; 2) to develop reciprocally recognized rubrics, criteria, methods and tools for assessing compliance with the developed standards (quality assurance), by July 2012; 3) to set up a network of Centers of Excellence as the primary designer of sample education programs and learning experiences, in both brick-and-mortar and distance settings of delivery, leading to achievement of the developed standards, by December 2012; and 4) to set up a system of mutual international recognition and award of qualifications in meteorology, based on the developed procedures and the establishment of a self-regulatory public organization, by December 2012. The main beneficiaries of the project are: 1) meteorology learners from the consortium countries, who will be able to make informed decisions about available qualification choices and progression options and will be given an opportunity to participate in a system of international continuous education; 2) meteorology employers from the consortium countries, who will be able to specify the level of knowledge, skill and competence required for occupational roles, evaluate qualifications presented, and connect training and development with business needs; and 3) students and academic staff of all the consortium members, who will gain increased mobility and exchange flows of culturally and institutionally diversified lecturers and qualified specialists.

  19. Brain delivery research in public-private partnerships: The IMI-JU COMPACT consortium as an example.

    PubMed

    Meyer, Axel H; Untucht, Christopher; Terstappen, Georg C

    2017-07-01

    The Blood-Brain Barrier (BBB) represents a major hurdle in the development of treatments for CNS disorders because it very effectively keeps drugs, especially biological macromolecules, out of the brain. Concomitantly with the increasing importance of biologics, research on the BBB and, more specifically, on brain delivery technologies has intensified in recent years. Public-Private Partnerships (PPPs) represent an innovative way to address such complex challenges, as they bring together the best expertise from both industry and academia. Here we present as an example the IMI-JU COMPACT (Collaboration on the Optimisation of Macromolecular Pharmaceutical Access to Cellular Targets) consortium, which works on nanocarriers for targeted delivery of macromolecules. The scope of the consortium, its goals and the expertise within the consortium are outlined. This article is part of the Special Issue entitled "Beyond small molecules for neurological disorders". Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Assessing the response of the Gulf Coast to global change

    NASA Astrophysics Data System (ADS)

    Anderson, John B.; Törnqvist, Torbjörn E.; Day, John

    2012-11-01

    Gulf Coastal Science Consortium Workshop; Houston, Texas, 28-29 June 2012. The newly formed Gulf Coastal Science Consortium held its first workshop at Rice University. The creation of the consortium was prompted by two recent incidents. One incident involved censorship of a book chapter on Galveston Bay by the Texas Commission on Environmental Quality that omitted all references to climate change and accelerated sea-level rise. The other incident was the adoption of legislation in North Carolina that requires planners and developers to assume a linear sea-level rise projection, despite compelling scientific evidence for a multifold increase in sea-level rise in historical time.

  1. NASA Nebraska Space Grant Consortium 1995-1999 Self Evaluation

    NASA Technical Reports Server (NTRS)

    Schaaf, Michaela M.; Bowen, Brent D.; Schaffart, Mary M.

    1999-01-01

    The NASA Nebraska Space Grant Consortium receives funds from NASA to allow Nebraska colleges and universities to implement balanced programs of research, education and public service related to aeronautics, space science and technology. Nebraska is a capability enhancement state which directs efforts and resources toward developing research infrastructure and enhancing the quality of aerospace research and education for all Nebraskans. Furthermore, the Nebraska Space Grant strives to provide national leadership in applied aspects of aeronautics. Nebraska has met, meets and will continue to meet all requirements set forth by NASA. Nebraska is a top-tier consortium and will continue to be a model program.

  2. A multi-institutional approach to delivering shared curricula for developing a next-generation energy workforce

    DOE PAGES

    Holloway, Lawrence E.; Qu, Zhihua; Mohr-Schroeder, Margaret J.; ...

    2017-02-06

    In this study, we consider collaborative power systems education through the FEEDER consortium. To increase students' access to power engineering educational content, a consortium of seven universities was formed. A framework is presented to characterize different collaborative education activities among the universities, and three of these approaches are presented and discussed: 1) cross-institutional blended courses ("MS-MD"); 2) cross-institutional distance courses ("SS-MD"); and 3) single-site special experiential courses and concentrated on-site programs available to students across consortium institutions ("MS-SD"). Finally, this paper presents the advantages and disadvantages of each approach.

  3. Decontamination systems information and research program. Quarterly report, April--June 1996

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    This report contains separate reports on the following subtasks: analysis of the Vortec cyclone melting system for remediation of PCB-contaminated soils using CFD; drain-enhanced soil flushing using prefabricated vertical drains; performance and characteristics evaluation of acrylates as grout barriers; development of standard test protocol barrier design models for desiccation barriers, and for in-situ formed barriers; in-situ bioremediation of chlorinated solvents at Portsmouth Gaseous Diffusion Plant; development of a decision support system and a prototype database for management of the EM50 technology development program; GIS-based infrastructure for site characterization and remediation; treatment of mixed wastes via fluidized bed steam reforming; use of centrifugal membrane technology to treat hazardous/radioactive waste; environmental pollution control devices based on novel forms of carbon; development of instrumental methods for analysis of nuclear wastes and environmental materials; production and testing of biosorbents and cleaning solutions for D and D; use of SpinTek centrifugal membrane and sorbents/cleaning solutions for D and D; West Virginia High Tech Consortium Foundation--Environmental support program; small business interaction opportunities; and approach for assessing potential voluntary environmental protection.

  4. Drew/Meharry/Morehouse Consortium Cancer Center: an approach to targeted research in minority institutions.

    PubMed Central

    Haynes, M. A.; Bernard, L. J.

    1992-01-01

    This article describes the process by which three private minority medical schools planned and developed a consortium cancer research center focusing on the prevention of cancer in the African-American population. Several lessons were learned that may have relevance as minority schools search for ways to improve the health status of blacks. PMID:1608062

  5. Trailblazing in East Texas: A Progress Report on the Forest Trail Library Consortium's Networking Project.

    ERIC Educational Resources Information Center

    Claer, Joycelyn H.

    This report describes the development of the Forest Trail Library Consortium (FTLC), a network of academic, public, and school libraries in Texas. The growth of FTLC from 4 charter members to its current 16 members is traced, including details about goals and funding. The role of special interest groups (SIGs) is examined, and the goals and…

  6. Deconstructing Hub Drag. Part 2. Computational Development and Analysis

    DTIC Science & Technology

    2013-09-30

    leveraged a Vertical Lift Consortium (VLC)-funded hub drag scaling research effort. To confirm this objective, correlations are performed with the... Technology™ Demonstrator aircraft using an unstructured computational solver. These simpler faired elliptical geometries can prove to be challenging... possible. However, additional funding was obtained from the Vertical Lift Consortium (VLC) to perform this study. This analysis is documented in

  7. Common variants in Mendelian kidney disease genes and their association with renal function.

    PubMed

    Parsa, Afshin; Fuchsberger, Christian; Köttgen, Anna; O'Seaghdha, Conall M; Pattaro, Cristian; de Andrade, Mariza; Chasman, Daniel I; Teumer, Alexander; Endlich, Karlhans; Olden, Matthias; Chen, Ming-Huei; Tin, Adrienne; Kim, Young J; Taliun, Daniel; Li, Man; Feitosa, Mary; Gorski, Mathias; Yang, Qiong; Hundertmark, Claudia; Foster, Meredith C; Glazer, Nicole; Isaacs, Aaron; Rao, Madhumathi; Smith, Albert V; O'Connell, Jeffrey R; Struchalin, Maksim; Tanaka, Toshiko; Li, Guo; Hwang, Shih-Jen; Atkinson, Elizabeth J; Lohman, Kurt; Cornelis, Marilyn C; Johansson, Asa; Tönjes, Anke; Dehghan, Abbas; Couraki, Vincent; Holliday, Elizabeth G; Sorice, Rossella; Kutalik, Zoltan; Lehtimäki, Terho; Esko, Tõnu; Deshmukh, Harshal; Ulivi, Sheila; Chu, Audrey Y; Murgia, Federico; Trompet, Stella; Imboden, Medea; Kollerits, Barbara; Pistis, Giorgio; Harris, Tamara B; Launer, Lenore J; Aspelund, Thor; Eiriksdottir, Gudny; Mitchell, Braxton D; Boerwinkle, Eric; Schmidt, Helena; Hofer, Edith; Hu, Frank; Demirkan, Ayse; Oostra, Ben A; Turner, Stephen T; Ding, Jingzhong; Andrews, Jeanette S; Freedman, Barry I; Giulianini, Franco; Koenig, Wolfgang; Illig, Thomas; Döring, Angela; Wichmann, H-Erich; Zgaga, Lina; Zemunik, Tatijana; Boban, Mladen; Minelli, Cosetta; Wheeler, Heather E; Igl, Wilmar; Zaboli, Ghazal; Wild, Sarah H; Wright, Alan F; Campbell, Harry; Ellinghaus, David; Nöthlings, Ute; Jacobs, Gunnar; Biffar, Reiner; Ernst, Florian; Homuth, Georg; Kroemer, Heyo K; Nauck, Matthias; Stracke, Sylvia; Völker, Uwe; Völzke, Henry; Kovacs, Peter; Stumvoll, Michael; Mägi, Reedik; Hofman, Albert; Uitterlinden, Andre G; Rivadeneira, Fernando; Aulchenko, Yurii S; Polasek, Ozren; Hastie, Nick; Vitart, Veronique; Helmer, Catherine; Wang, Jie Jin; Stengel, Bénédicte; Ruggiero, Daniela; Bergmann, Sven; Kähönen, Mika; Viikari, Jorma; Nikopensius, Tiit; Province, Michael; Colhoun, Helen; Doney, Alex; Robino, Antonietta; Krämer, Bernhard K; Portas, Laura; Ford, Ian; Buckley, Brendan M; Adam, 
Martin; Thun, Gian-Andri; Paulweber, Bernhard; Haun, Margot; Sala, Cinzia; Mitchell, Paul; Ciullo, Marina; Vollenweider, Peter; Raitakari, Olli; Metspalu, Andres; Palmer, Colin; Gasparini, Paolo; Pirastu, Mario; Jukema, J Wouter; Probst-Hensch, Nicole M; Kronenberg, Florian; Toniolo, Daniela; Gudnason, Vilmundur; Shuldiner, Alan R; Coresh, Josef; Schmidt, Reinhold; Ferrucci, Luigi; van Duijn, Cornelia M; Borecki, Ingrid; Kardia, Sharon L R; Liu, Yongmei; Curhan, Gary C; Rudan, Igor; Gyllensten, Ulf; Wilson, James F; Franke, Andre; Pramstaller, Peter P; Rettig, Rainer; Prokopenko, Inga; Witteman, Jacqueline; Hayward, Caroline; Ridker, Paul M; Bochud, Murielle; Heid, Iris M; Siscovick, David S; Fox, Caroline S; Kao, W Linda; Böger, Carsten A

    2013-12-01

    Many common genetic variants identified by genome-wide association studies for complex traits map to genes previously linked to rare inherited Mendelian disorders. A systematic analysis of common single-nucleotide polymorphisms (SNPs) in genes responsible for Mendelian diseases with kidney phenotypes has not been performed. We thus developed a comprehensive database of genes for Mendelian kidney conditions and evaluated the association between common genetic variants within these genes and kidney function in the general population. Using the Online Mendelian Inheritance in Man database, we identified 731 unique disease entries related to specific renal search terms and confirmed a kidney phenotype in 218 of these entries, corresponding to mutations in 258 genes. We interrogated common SNPs (minor allele frequency >5%) within these genes for association with the estimated GFR in 74,354 European-ancestry participants from the CKDGen Consortium. However, the top four candidate SNPs (rs6433115 at LRP2, rs1050700 at TSC1, rs249942 at PALB2, and rs9827843 at ROBO2) did not achieve significance in a stage 2 meta-analysis performed in 56,246 additional independent individuals, indicating that these common SNPs are not associated with estimated GFR. The effect of less common or rare variants in these genes on kidney function in the general population and disease-specific cohorts requires further research.
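The per-SNP test underlying such association analyses is a regression of the quantitative trait (here, estimated GFR) on allele dosage. A toy sketch with simulated genotypes and a normal approximation to the p-value (all effect sizes, sample sizes and thresholds below are illustrative, not the CKDGen values):

```python
# Toy single-SNP association test: regress a trait on allele dosage (0/1/2)
# and approximate a two-sided p-value from the z statistic.
import math
import numpy as np

rng = np.random.default_rng(2)
n, maf = 5000, 0.2
g = rng.binomial(2, maf, size=n).astype(float)  # simulated allele dosages
trait = 0.1 * g + rng.normal(size=n)            # SNP explains a little variance

g_c, t_c = g - g.mean(), trait - trait.mean()
beta = g_c.dot(t_c) / g_c.dot(g_c)              # OLS slope
resid = t_c - beta * g_c
se = math.sqrt(resid.dot(resid) / (n - 2) / g_c.dot(g_c))
z = beta / se
p = math.erfc(abs(z) / math.sqrt(2))            # two-sided normal approximation
print(f"beta = {beta:.3f}, p = {p:.2e}")
```

In a genome-wide setting this test is repeated per SNP and judged against a multiple-testing threshold (conventionally 5e-8), which is why candidates that pass stage 1 can still fail a stage 2 replication as described above.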

  8. An electronic infrastructure for research and treatment of the thalassemias and other hemoglobinopathies: the Euro-mediterranean ITHANET project.

    PubMed

    Lederer, Carsten W; Basak, A Nazli; Aydinok, Yesim; Christou, Soteroula; El-Beshlawy, Amal; Eleftheriou, Androulla; Fattoum, Slaheddine; Felice, Alex E; Fibach, Eitan; Galanello, Renzo; Gambari, Roberto; Gavrila, Lucian; Giordano, Piero C; Grosveld, Frank; Hassapopoulou, Helen; Hladka, Eva; Kanavakis, Emmanuel; Locatelli, Franco; Old, John; Patrinos, George P; Romeo, Giovanni; Taher, Ali; Traeger-Synodinos, Joanne; Vassiliou, Panayiotis; Villegas, Ana; Voskaridou, Ersi; Wajcman, Henri; Zafeiropoulos, Anastasios; Kleanthous, Marina

    2009-01-01

    Hemoglobin (Hb) disorders are common, potentially lethal monogenic diseases, posing a global health challenge. With worldwide migration and intermixing of carriers demanding flexible health planning and patient care, hemoglobinopathies may serve as a paradigm for the use of electronic infrastructure tools in the collection of data, the dissemination of knowledge, the harmonization of treatment, and the coordination of research and preventive programs. ITHANET, a network covering thalassemias and other hemoglobinopathies, comprises 26 organizations from 16 countries, including non-European countries of origin for these diseases (Egypt, Israel, Lebanon, Tunisia and Turkey). Using electronic infrastructure tools, ITHANET aims to strengthen cross-border communication and data transfer, cooperative research and treatment of thalassemia, and to improve support and information for those affected by hemoglobinopathies. Moreover, the consortium has established the ITHANET Portal, a novel web-based instrument for the dissemination of information on hemoglobinopathies to researchers, clinicians and patients. The ITHANET Portal is a growing public resource, providing forums for discussion and research coordination, and giving access to courses and databases organized by ITHANET partners. Already a popular repository for diagnostic protocols and news related to hemoglobinopathies, the ITHANET Portal also provides a searchable, extendable database of thalassemia mutations and associated background information. The experience of ITHANET is exemplary for a consortium bringing together disparate organizations from heterogeneous partner countries to face a common health challenge. The ITHANET Portal, as a web-based tool born out of this experience, addresses some of the problems encountered and facilitates education and international exchange of data and expertise for hemoglobinopathies.

  9. Surmounting the Unique Challenges in Health Disparities Education: A Multi-Institution Qualitative Study

    PubMed Central

    Bereknyei, Sylvia; Lie, Desiree; Braddock, Clarence H.

    2010-01-01

    Background The National Consortium for Multicultural Education for Health Professionals (Consortium) comprises educators representing 18 US medical schools, funded by the National Institutes of Health. Collective lessons learned from curriculum implementation by principal investigators (PIs) have the potential to guide similar educational endeavors. Objective To describe Consortium PIs' self-reported challenges with curricular development, their solutions, and their new curricular products. Methods Information was collected from PIs over 2 months using a 53-question structured three-part questionnaire. The questionnaire addressed PI demographics, curriculum implementation challenges and solutions, and newly created curricular products. Study participants were 18 Consortium PIs. Descriptive analysis was used for quantitative data. Narrative responses were analyzed and interpreted using qualitative thematic coding. Results Response rate was 100%. Common barriers and challenges identified by PIs were: finding administrative and leadership support, sustaining the momentum, continued funding, finding curricular space, accessing and engaging communities, and lack of education research methodology skills. Solutions identified included engaging stakeholders, project-sharing across schools, advocacy and active participation in committees and community, and seeking sustainable funding. All Consortium PIs reported new curricular products and extensive dissemination efforts outside their own institutions. Conclusion The Consortium model has added benefits for curricular innovation and dissemination for cultural competence education to address health disparities. Lessons learned may be applicable to other educational innovation efforts. PMID:20352503

  10. Mineralization and Detoxification of the Carcinogenic Azo Dye Congo Red and Real Textile Effluent by a Polyurethane Foam Immobilized Microbial Consortium in an Upflow Column Bioreactor.

    PubMed

    Lade, Harshad; Govindwar, Sanjay; Paul, Diby

    2015-06-16

    A microbial consortium that is able to grow in wheat bran (WB) medium and decolorize the carcinogenic azo dye Congo red (CR) was developed. The microbial consortium was immobilized on polyurethane foam (PUF). Batch studies with the PUF-immobilized microbial consortium showed complete removal of CR dye (100 mg·L−1) within 12 h at pH 7.5 and a temperature of 30 ± 0.2 °C under microaerophilic conditions. Additionally, 92% American Dye Manufacturing Institute (ADMI) color removal from real textile effluent (RTE, 50%) was also observed within 20 h under the same conditions. An upflow column reactor containing the PUF-immobilized microbial consortium achieved 99% removal of CR dye (100 mg·L−1) and 92% ADMI removal from RTE (50%) at flow rates of 35 and 20 mL·h−1, respectively. Consequent reductions in TOC (83 and 79%), COD (85 and 83%) and BOD (79 and 78%) for CR dye and RTE, respectively, were also observed, suggesting mineralization. The decolorization process was found to be enzymatic, as treated samples showed significant induction of oxidoreductive enzymes. The proposed biodegradation pathway of the dye revealed the formation of lower-molecular-weight compounds. Toxicity studies with a plant bioassay and acute toxicity tests indicated that the PUF-immobilized microbial consortium favors detoxification of the dye and textile effluents.
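The removal percentages quoted above all follow the same formula, removal (%) = 100 × (inlet − outlet) / inlet. A back-of-the-envelope helper (the outlet concentration below is a hypothetical value chosen to match the reported 99% CR removal, not a measurement from the paper):

```python
# Percent-removal helper: removal (%) = 100 * (inlet - outlet) / inlet
def percent_removal(inlet, outlet):
    return 100.0 * (inlet - outlet) / inlet

# e.g. an inlet dye concentration of 100 mg/L reduced to 1 mg/L at the
# column outlet corresponds to the 99% CR removal reported above
print(percent_removal(100.0, 1.0))  # → 99.0
```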

  11. Mineralization and Detoxification of the Carcinogenic Azo Dye Congo Red and Real Textile Effluent by a Polyurethane Foam Immobilized Microbial Consortium in an Upflow Column Bioreactor

    PubMed Central

    Lade, Harshad; Govindwar, Sanjay; Paul, Diby

    2015-01-01

    A microbial consortium that is able to grow in wheat bran (WB) medium and decolorize the carcinogenic azo dye Congo red (CR) was developed. The microbial consortium was immobilized on polyurethane foam (PUF). Batch studies with the PUF-immobilized microbial consortium showed complete removal of CR dye (100 mg·L−1) within 12 h at pH 7.5 and a temperature of 30 ± 0.2 °C under microaerophilic conditions. Additionally, 92% American Dye Manufacturing Institute (ADMI) color removal from real textile effluent (RTE, 50%) was also observed within 20 h under the same conditions. An upflow column reactor containing the PUF-immobilized microbial consortium achieved 99% removal of CR dye (100 mg·L−1) and 92% ADMI removal from RTE (50%) at flow rates of 35 and 20 mL·h−1, respectively. Consequent reductions in TOC (83 and 79%), COD (85 and 83%) and BOD (79 and 78%) for CR dye and RTE, respectively, were also observed, suggesting mineralization. The decolorization process was found to be enzymatic, as treated samples showed significant induction of oxidoreductive enzymes. The proposed biodegradation pathway of the dye revealed the formation of lower-molecular-weight compounds. Toxicity studies with a plant bioassay and acute toxicity tests indicated that the PUF-immobilized microbial consortium favors detoxification of the dye and textile effluents. PMID:26086710

  12. WE-E-BRB-11: RIVIEW, a Web-Based Viewer for Radiotherapy.

    PubMed

    Apte, A; Wang, Y; Deasy, J

    2012-06-01

    Collaborations involving radiotherapy data collection, such as the recently proposed international radiogenomics consortium, require robust, web-based tools to facilitate reviewing treatment planning information. We present the architecture and prototype characteristics for a web-based radiotherapy viewer. The web-based environment developed in this work consists of the following components: 1) Import of DICOM/RTOG data: CERR was leveraged to import DICOM/RTOG data and to convert it to database-friendly RT objects. 2) Extraction and storage of RT objects: The scan and dose distributions were stored as .png files per slice and view plane. The file locations were written to the MySQL database. Structure contours and DVH curves were written to the database as numeric data. 3) Web interfaces to query, retrieve and visualize the RT objects: The web application was developed using HTML 5 and Ruby on Rails (RoR) technology following the MVC philosophy. The open source ImageMagick library was utilized to overlay scan, dose and structures. The application allows users to (i) QA the treatment plans associated with a study, (ii) query and retrieve patients matching anonymized ID and study, (iii) review up to 4 plans simultaneously in 4 window panes, and (iv) plot DVH curves for the selected structures and dose distributions. A subset of data for lung cancer patients was used to prototype the system. Five user accounts were created to have access to this study. The scans, doses, structures and DVHs for 10 patients were made available via the web application. A web-based system to facilitate QA and support query, retrieve and visualization of RT data was prototyped. The RIVIEW system was developed using open source and free technology like MySQL and RoR. We plan to extend the RIVIEW system further to be useful in clinical trial data collection, outcomes research, cohort plan review and evaluation. © 2012 American Association of Physicists in Medicine.
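The storage scheme described in component 2 (per-slice image files on disk, with their locations recorded in the database) can be sketched as follows. This uses sqlite3 as a stand-in for the MySQL database, and the table name, column names and file paths are all invented for illustration:

```python
# Stand-in sketch of the RIVIEW-style slice store: PNG paths per patient,
# view plane and slice index, recorded in a SQL table for the web viewer.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE dose_slice (
    patient_id TEXT, view_plane TEXT, slice_no INTEGER, png_path TEXT)""")
for k in range(3):
    con.execute("INSERT INTO dose_slice VALUES (?, ?, ?, ?)",
                ("pt001", "axial", k, f"/data/pt001/dose/axial_{k:03d}.png"))

# A viewer page queries by patient and plane to assemble its image stack
rows = con.execute("""SELECT png_path FROM dose_slice
                      WHERE patient_id = ? AND view_plane = ?
                      ORDER BY slice_no""", ("pt001", "axial")).fetchall()
print([r[0] for r in rows])
```

Keeping only file locations in the database keeps the DVH and contour tables small while letting the web tier serve pre-rendered slices directly.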

  13. HPIDB 2.0: a curated database for host–pathogen interactions

    PubMed Central

    Ammari, Mais G.; Gresham, Cathy R.; McCarthy, Fiona M.; Nanduri, Bindu

    2016-01-01

    Identification and analysis of host–pathogen interactions (HPI) is essential to study infectious diseases. However, HPI data are sparse in existing molecular interaction databases, especially for agricultural host–pathogen systems. Therefore, resources that annotate, predict and display the HPI that underpin infectious diseases are critical for developing novel intervention strategies. HPIDB 2.0 (http://www.agbase.msstate.edu/hpi/main.html) is a resource for HPI data, and contains 45,238 manually curated entries in the current release. Since the first description of the database in 2010, multiple enhancements to HPIDB data and interface services were made that are described here. Notably, HPIDB 2.0 now provides targeted biocuration of molecular interaction data. As a member of the International Molecular Exchange consortium, annotations provided by HPIDB 2.0 curators meet community standards to provide detailed contextual experimental information and facilitate data sharing. Moreover, HPIDB 2.0 provides access to rapidly available community annotations that capture minimum molecular interaction information to address immediate researcher needs for HPI network analysis. In addition to curation, HPIDB 2.0 integrates HPI from existing external sources and contains tools to infer additional HPI where annotated data are scarce. Compared to other interaction databases, our data collection approach ensures HPIDB 2.0 users access the most comprehensive HPI data from a wide range of pathogens and their hosts (594 pathogen and 70 host species, as of February 2016). Improvements also include enhanced search capacity, addition of Gene Ontology functional information, and implementation of network visualization. The changes made to HPIDB 2.0 content and interface ensure that users, especially agricultural researchers, are able to easily access and analyse high quality, comprehensive HPI data. All HPIDB 2.0 data are updated regularly, are publicly available for direct download, and are disseminated to other molecular interaction resources. Database URL: http://www.agbase.msstate.edu/hpi/main.html PMID:27374121
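
    The HPI network analysis mentioned above typically starts from a bipartite host-pathogen graph built from curated interaction pairs. A minimal sketch with hypothetical protein identifiers (invented for illustration, not actual HPIDB records):

```python
from collections import defaultdict

# Hypothetical HPI records: (host_protein, pathogen_protein) pairs,
# in the spirit of the curated interaction entries HPIDB provides.
interactions = [
    ("host:TLR4", "path:OmpA"),
    ("host:TLR4", "path:LptD"),
    ("host:ACE2", "path:Spike"),
]

# Bipartite adjacency: each protein maps to its interaction partners.
adjacency = defaultdict(set)
for host, pathogen in interactions:
    adjacency[host].add(pathogen)
    adjacency[pathogen].add(host)

# Degree of each host protein = number of distinct pathogen partners,
# a first-pass way to spot heavily targeted host proteins.
host_degree = {n: len(p) for n, p in adjacency.items() if n.startswith("host:")}
```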

  14. Report on the CEPA activities [Consorcio Educativo para la Proteccion Ambiental/Educational Consortium for Environmental Preservation] [Final report of activities from 1998 to 2002]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cruz, Miriam

    This report compiles the instances of scientific, educational, and institutional cooperation on environmental issues and other activities in which CEPA was engaged during the past five years, and includes several annual reports and meeting summaries. CEPA is a collaborative international consortium that brings together higher education institutions with governmental agencies, research laboratories, and private sector entities. CEPA's mission is to strengthen the technical, professional, and educational environmental infrastructure in the United States and Latin America. The CEPA program includes curriculum development, student exchange, faculty development, creation of educational materials, joint research, and other cooperative activities. CEPA's goals are accomplished by actively working with Hispanic-serving institutions of higher education in the United States, in collaboration with institutions of higher education in Latin America and other Consortium members to deliver competitive environmental programs.

  15. Biodegradation of bispyribac sodium by a novel bacterial consortium BDAM: Optimization of degradation conditions using response surface methodology.

    PubMed

    Ahmad, Fiaz; Anwar, Samina; Firdous, Sadiqa; Da-Chuan, Yin; Iqbal, Samina

    2018-05-05

    Bispyribac sodium (BS) is a selective, systemic and post-emergent herbicide used to eradicate grasses and broad-leaf weeds. Extensive use of this herbicide has engendered serious environmental concerns. Hence, it is important to develop strategies for bioremediation of BS in a cost-effective and environmentally friendly way. In this study, a bacterial consortium named BDAM, comprising three novel isolates Achromobacter xylosoxidans (BD1), Achromobacter pulmonis (BA2), and Ochrobactrum intermedium (BM2), was developed by virtue of its potential for degradation of BS. Different culture conditions (temperature, pH and inoculum size) were optimized for degradation of BS by the consortium BDAM, and the mutual interactions of these parameters were analysed using a 2³ full factorial central composite design (CCD) based on Response Surface Methodology (RSM). The optimal values for temperature, pH and inoculum size were found to be 40 °C, 8 and 0.4 g/L, respectively, to achieve maximum degradation of BS (85.6%). Moreover, the interactive effects of these parameters were investigated using three-dimensional surface plots in terms of maximum fitness function. Importantly, it was concluded that the newly developed consortium is a potential candidate for biodegradation of BS in a safe, cost-effective and environmentally friendly manner. Copyright © 2017. Published by Elsevier B.V.
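
    A 2³ full factorial central composite design of the kind used above combines 2³ = 8 factorial corner points, 2·3 = 6 axial points, and replicated center runs, all in coded units. A sketch of generating such a design matrix (generic coded points, not the authors' actual run table; the rotatable axial distance alpha ≈ 1.682 for three factors is a standard choice, not stated in the abstract):

```python
from itertools import product

def central_composite_design(k=3, alpha=1.682, n_center=6):
    """Coded points of a CCD for k factors: 2^k factorial corners,
    2k axial points at +/-alpha, plus replicated center runs."""
    factorial = [list(p) for p in product([-1.0, 1.0], repeat=k)]
    axial = []
    for i in range(k):
        for a in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = a
            axial.append(pt)
    center = [[0.0] * k for _ in range(n_center)]
    return factorial + axial + center

# For three factors (temperature, pH, inoculum size in coded units):
# 8 corner + 6 axial + 6 center = 20 runs.
design = central_composite_design()
```

    Each coded row is then mapped back to physical ranges (e.g. temperature, pH, inoculum size) before running the experiment, and a second-order response surface is fitted to the measured degradation.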

  16. Data shopping in an open marketplace: Introducing the Ontogrator web application for marking up data using ontologies and browsing using facets.

    PubMed

    Morrison, Norman; Hancock, David; Hirschman, Lynette; Dawyndt, Peter; Verslyppe, Bert; Kyrpides, Nikos; Kottmann, Renzo; Yilmaz, Pelin; Glöckner, Frank Oliver; Grethe, Jeff; Booth, Tim; Sterk, Peter; Nenadic, Goran; Field, Dawn

    2011-04-29

    In the future, we hope to see an open and thriving data market in which users can find and select data from a wide range of data providers. In such an open access market, data are products that must be packaged accordingly. Increasingly, eCommerce sellers present heterogeneous product lines to buyers using faceted browsing. Using this approach we have developed the Ontogrator platform, which allows for rapid retrieval of data in a way that would be familiar to any online shopper. Using Knowledge Organization Systems (KOS), especially ontologies, Ontogrator uses text mining to mark up data and faceted browsing to help users navigate, query and retrieve data. Ontogrator offers the potential to impact scientific research in two major ways: 1) by significantly improving the retrieval of relevant information; and 2) by significantly reducing the time required to compose standard database queries and assemble information for further research. Here we present a pilot implementation developed in collaboration with the Genomic Standards Consortium (GSC) that includes content from the StrainInfo, GOLD, CAMERA, Silva and Pubmed databases. This implementation demonstrates the power of ontogration and highlights that the usefulness of this approach is fully dependent on both the quality of data and the KOS (ontologies) used. Ideally, the use and further expansion of this collaborative system will help to surface issues associated with the underlying quality of annotation and could lead to a systematic means for accessing integrated data resources.
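
    Faceted browsing of the kind Ontogrator offers narrows a record set by conjunctive facet selections: each facet the user picks further filters the remaining records. A minimal illustration with hypothetical records (the field names and values are invented, not Ontogrator's data model):

```python
def facet_filter(records, selections):
    """Keep records matching every selected facet value
    (conjunction across facets, as in typical faceted browsing)."""
    return [r for r in records
            if all(r.get(facet) == value for facet, value in selections.items())]

# Hypothetical marked-up records from several source databases.
records = [
    {"source": "GOLD", "habitat": "marine"},
    {"source": "Silva", "habitat": "marine"},
    {"source": "GOLD", "habitat": "soil"},
]
hits = facet_filter(records, {"source": "GOLD", "habitat": "marine"})
```

    In a real implementation the facet values would come from ontology terms assigned by text mining, and the interface would also show counts per remaining facet value.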

  17. Data shopping in an open marketplace: Introducing the Ontogrator web application for marking up data using ontologies and browsing using facets

    PubMed Central

    Morrison, Norman; Hancock, David; Hirschman, Lynette; Dawyndt, Peter; Verslyppe, Bert; Kyrpides, Nikos; Kottmann, Renzo; Yilmaz, Pelin; Glöckner, Frank Oliver; Grethe, Jeff; Booth, Tim; Sterk, Peter; Nenadic, Goran; Field, Dawn

    2011-01-01

    In the future, we hope to see an open and thriving data market in which users can find and select data from a wide range of data providers. In such an open access market, data are products that must be packaged accordingly. Increasingly, eCommerce sellers present heterogeneous product lines to buyers using faceted browsing. Using this approach we have developed the Ontogrator platform, which allows for rapid retrieval of data in a way that would be familiar to any online shopper. Using Knowledge Organization Systems (KOS), especially ontologies, Ontogrator uses text mining to mark up data and faceted browsing to help users navigate, query and retrieve data. Ontogrator offers the potential to impact scientific research in two major ways: 1) by significantly improving the retrieval of relevant information; and 2) by significantly reducing the time required to compose standard database queries and assemble information for further research. Here we present a pilot implementation developed in collaboration with the Genomic Standards Consortium (GSC) that includes content from the StrainInfo, GOLD, CAMERA, Silva and Pubmed databases. This implementation demonstrates the power of ontogration and highlights that the usefulness of this approach is fully dependent on both the quality of data and the KOS (ontologies) used. Ideally, the use and further expansion of this collaborative system will help to surface issues associated with the underlying quality of annotation and could lead to a systematic means for accessing integrated data resources. PMID:21677865

  18. Understanding Gulf War Illness: An Integrative Modeling Approach

    DTIC Science & Technology

    2016-10-01

    Southeastern University, 3301 College Avenue, Fort Lauderdale, FL 33314; Centers for Disease Control, NIOSH, 1095 Willowdale Road, Morgantown, WV 26505 ... The goal of the GWI consortium is to develop a better understanding of GWI and identify specific disease targets to ... find treatments that will address the cause of the disease. The consortium will integrate our clinical understanding of the disease process with ...

  19. Northeast Artificial Intelligence Consortium (NAIC). Volume 2. Discussing, Using, and Recognizing Plans

    DTIC Science & Technology

    1990-12-01

    ... knowledge and meta-reasoning. In Proceedings of EPIA-85 ("Encontro Portugues de Inteligencia Artificial"), pages 138-154, Oporto, Portugal, 1985. ... The Northeast Artificial Intelligence Consortium (NAIC) was created by the Air Force Systems Command, Rome Air Development Center, and ...

  20. The Role of Drosophila Merlin in the Control of Mitosis Exit and Development

    DTIC Science & Technology

    2005-07-01

    Abstract presented to the 2005 CTF International Consortium for the Molecular Biology of NF1, NF2, and Schwannomatosis. Experiments are in progress ... Drosophila Spermatogenesis (abstract presented to the 2005 CTF International Consortium for the Molecular Biology of NF1, NF2, and Schwannomatosis). We ... By combining bioinformatics and phylogenetic approaches, we demonstrated a monophyletic origin of the merlin proteins with the ...

  1. Critical Pedagogy--The Practice with Veteran Teachers: The Work of the Eastern Pennsylvania Lead Teacher Consortium. [and] Abandon Ship, Change Course, or Ride It Out: A Reaction to Walker.

    ERIC Educational Resources Information Center

    Walker, Thomas J.; Johnson, Scott D.

    1993-01-01

    The Eastern Pennsylvania Lead Teacher Consortium, a regional network for professional development of vocational teachers, demonstrates that lead teachers' work must be tied to student learning outcomes, ideas and practices must be communicated to building-level staff, and regional consortia need a dedicated funding source. (SK)

  2. Biomass power for rural development. Technical progress report, July 1--September 30, 1997

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neuhauser, E.

    The focus of the DOE/USDA sponsored biomass power for rural development project is to develop commercial energy crops for power generation by the year 2000. The New York based Salix Consortium project is a multi-partner endeavor, implemented in three stages. Phase-1, Final Design and Project Development, will conclude with the preparation of construction and/or operating permits, feedstock production plans, and contracts ready for signature. Field trials of willow (Salix) have been initiated at several locations in New York (Tully, Lockport, King Ferry, La Fayette, Massena, and Himrod) and co-firing tests are underway at Greenidge Station (NYSEG) and Dunkirk Station (NMPC). Phase-2 of the project will focus on scale-up of willow crop acreage, construction of co-firing facilities at Dunkirk Station (NMPC), and final modifications for Greenidge Station. Cofiring willow is also under consideration for GPU's Seward Station where testing is underway. There will be an evaluation of the energy crop as part of the gasification trials occurring at BED's McNeill power station. Phase-3 will represent full-scale commercialization of the energy crop and power generation on a sustainable basis. During the third quarter of 1997, much of the Consortium's effort has focused on outreach activities, continued feedstock development, fuel supply planning and fuel contract development, and preparation for 1998 scale-up activities. The Consortium also submitted a Phase-1 extension proposal during this period. A few of the more important milestones are outlined below. The fourth quarter of 1997 is expected to be dominated by Phase-2 proposal efforts and planning for 1998 activities.

  3. Developing a university-workforce partnership to address rural and frontier MCH training needs: the Rocky Mountain Public Health Education Consortium (RMPHEC).

    PubMed

    Taren, Douglas L; Varela, Frances; Dotson, Jo Ann W; Eden, Joan; Egger, Marlene; Harper, John; Johnson, Rhonda; Kennedy, Kathy; Kent, Helene; Muramoto, Myra; Peacock, Jane C; Roberts, Richard; Sjolander, Sheila; Streeter, Nan; Velarde, Lily; Hill, Anne

    2011-10-01

    The objectives of this article are to describe the socio-cultural, political, economic, and geographic conditions that justified a regional effort for training maternal and child health (MCH) professionals in the Rocky Mountain region, to give a historical account of the factors that led to the development of the Rocky Mountain Public Health Education Consortium (RMPHEC), and to present RMPHEC as a replicable model developed to enhance practice/academic partnerships among state, tribal, and public health agencies and universities and thereby strengthen public health capacity and MCH outcomes. This article provides a description of the development of the RMPHEC, the impetus that drove the Consortium's development, the process used to create it, and its management and programs. Beginning in 1997, local, regional, and federal efforts encouraged stronger MCH training and continuing education in the Rocky Mountain region. By 1998, the RMPHEC was established to respond to the growing needs of MCH professionals in the region by enhancing workforce development through various programs, including the MCH Certificate Program, MCH Institutes, and distance learning products, as well as establishing a place for professionals and MCH agencies to discuss new ideas and opportunities for the region. Finally, over the last decade, local, state, regional, and federal efforts have encouraged a synergy of MCH resources, opportunities, and training within the region because of the health disparities among MCH populations in the region. The RMPHEC was founded to provide training and continuing education to MCH professionals in the region and as a venue to bring regional MCH organizations together to discuss current opportunities and challenges. RMPHEC is a consortium model that can be replicated in other underserved regions looking to strengthen MCH training and continuing education.

  4. BigMouth: a multi-institutional dental data repository.

    PubMed

    Walji, Muhammad F; Kalenderian, Elsbeth; Stark, Paul C; White, Joel M; Kookal, Krishna K; Phan, Dat; Tran, Duong; Bernstam, Elmer V; Ramoni, Rachel

    2014-01-01

    Few oral health databases are available for research and the advancement of evidence-based dentistry. In this work we developed a centralized data repository derived from electronic health records (EHRs) at four dental schools participating in the Consortium of Oral Health Research and Informatics. A multi-stakeholder committee developed a data governance framework that encouraged data sharing while allowing control of contributed data. We adopted the i2b2 data warehousing platform and mapped data from each institution to a common reference terminology. We realized that dental EHRs urgently need to adopt common terminologies. While all used the same treatment code set, only three of the four sites used a common diagnostic terminology, and there were wide discrepancies in how medical and dental histories were documented. BigMouth was successfully launched in August 2012 with data on 1.1 million patients, and made available to users at the contributing institutions. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
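
    Mapping each site's local codes to a common reference terminology, as the BigMouth work required, can be sketched as a per-site translation table with explicit handling of unmapped codes. The site names and codes below are invented for illustration (they are not the consortium's actual terminologies):

```python
# Hypothetical site-local -> reference-terminology mappings, illustrating
# the kind of harmonization a multi-institutional repository needs.
SITE_MAPS = {
    "site_a": {"D001": "CARIES", "D002": "GINGIVITIS"},
    "site_b": {"10.1": "CARIES", "22.4": "PERIODONTITIS"},
}

def to_reference(site, local_code):
    """Translate a site-local code to the common terminology.
    Returns None to flag an unmapped code for curator review."""
    return SITE_MAPS.get(site, {}).get(local_code)

# Harmonize a batch of (site, code) records, separating the unmapped ones.
records = [("site_a", "D001"), ("site_b", "22.4"), ("site_b", "99.9")]
mapped = [to_reference(s, c) for s, c in records]
unmapped = [(s, c) for (s, c), m in zip(records, mapped) if m is None]
```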

  5. Soft computing approach to 3D lung nodule segmentation in CT.

    PubMed

    Badura, P; Pietka, E

    2014-10-01

    This paper presents a novel, multilevel approach to the segmentation of various types of pulmonary nodules in computed tomography studies. It is based on two branches of computational intelligence: fuzzy connectedness (FC) and evolutionary computation. First, the image and auxiliary data are prepared for the 3D FC analysis during the first stage of the algorithm: mask generation. Its main goal is to handle some specific types of nodules connected to the pleura or vessels. It consists of basic image processing operations as well as dedicated routines for the specific cases of nodules. The evolutionary computation is performed on the image and seed points in order to shorten the FC analysis and improve its accuracy. After the FC application, the remaining vessels are removed during the postprocessing stage. The method has been validated using the first dataset of studies acquired and described by the Lung Image Database Consortium (LIDC) and by its latest release, the LIDC-IDRI (Image Database Resource Initiative) database. Copyright © 2014 Elsevier Ltd. All rights reserved.
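
    Fuzzy connectedness, the first branch the method builds on, assigns each voxel the strength of its best path to a seed, where a path is only as strong as its weakest inter-voxel affinity. A toy 2D sketch with a simple intensity-difference affinity (the paper's actual affinity function, seed handling, and 3D machinery are more elaborate; this only shows the max-min propagation idea):

```python
import heapq

def fuzzy_connectedness(image, seed):
    """Max-min fuzzy connectedness to a seed on a 2D grid.
    Affinity of adjacent pixels: 1 - |intensity difference|, intensities in [0, 1].
    A path's strength is its weakest affinity; each pixel receives the
    strongest such path from the seed (Dijkstra-style propagation)."""
    rows, cols = len(image), len(image[0])
    strength = [[0.0] * cols for _ in range(rows)]
    sr, sc = seed
    strength[sr][sc] = 1.0
    heap = [(-1.0, sr, sc)]
    while heap:
        neg, r, c = heapq.heappop(heap)
        cur = -neg
        if cur < strength[r][c]:
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                affinity = 1.0 - abs(image[r][c] - image[nr][nc])
                cand = min(cur, affinity)
                if cand > strength[nr][nc]:
                    strength[nr][nc] = cand
                    heapq.heappush(heap, (-cand, nr, nc))
    return strength

# Bright "nodule" in the top-left corner, dark background elsewhere.
img = [[0.9, 0.9, 0.1],
       [0.9, 0.8, 0.1],
       [0.1, 0.1, 0.1]]
conn = fuzzy_connectedness(img, (0, 0))
```

    Thresholding the resulting connectedness map then separates the object (high strength) from the background (low strength).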

  6. Challenges and disparities in the application of personalized genomic medicine to populations with African ancestry

    PubMed Central

    Kessler, Michael D.; Yerges-Armstrong, Laura; Taub, Margaret A.; Shetty, Amol C.; Maloney, Kristin; Jeng, Linda Jo Bone; Ruczinski, Ingo; Levin, Albert M.; Williams, L. Keoki; Beaty, Terri H.; Mathias, Rasika A.; Barnes, Kathleen C.; Boorgula, Meher Preethi; Campbell, Monica; Chavan, Sameer; Ford, Jean G.; Foster, Cassandra; Gao, Li; Hansel, Nadia N.; Horowitz, Edward; Huang, Lili; Ortiz, Romina; Potee, Joseph; Rafaels, Nicholas; Scott, Alan F.; Vergara, Candelaria; Gao, Jingjing; Hu, Yijuan; Johnston, Henry Richard; Qin, Zhaohui S.; Padhukasahasram, Badri; Dunston, Georgia M.; Faruque, Mezbah U.; Kenny, Eimear E.; Gietzen, Kimberly; Hansen, Mark; Genuario, Rob; Bullis, Dave; Lawley, Cindy; Deshpande, Aniket; Grus, Wendy E.; Locke, Devin P.; Foreman, Marilyn G.; Avila, Pedro C.; Grammer, Leslie; Kim, Kwang-Youn A.; Kumar, Rajesh; Schleimer, Robert; Bustamante, Carlos; De La Vega, Francisco M.; Gignoux, Chris R.; Shringarpure, Suyash S.; Musharoff, Shaila; Wojcik, Genevieve; Burchard, Esteban G.; Eng, Celeste; Gourraud, Pierre-Antoine; Hernandez, Ryan D.; Lizee, Antoine; Pino-Yanes, Maria; Torgerson, Dara G.; Szpiech, Zachary A.; Torres, Raul; Nicolae, Dan L.; Ober, Carole; Olopade, Christopher O.; Olopade, Olufunmilayo; Oluwole, Oluwafemi; Arinola, Ganiyu; Song, Wei; Abecasis, Goncalo; Correa, Adolfo; Musani, Solomon; Wilson, James G.; Lange, Leslie A.; Akey, Joshua; Bamshad, Michael; Chong, Jessica; Fu, Wenqing; Nickerson, Deborah; Reiner, Alexander; Hartert, Tina; Ware, Lorraine B.; Bleecker, Eugene; Meyers, Deborah; Ortega, Victor E.; Pissamai, Maul R. N.; Trevor, Maul R. N.; Watson, Harold; Araujo, Maria Ilma; Oliveira, Ricardo Riccio; Caraballo, Luis; Marrugo, Javier; Martinez, Beatriz; Meza, Catherine; Ayestas, Gerardo; Herrera-Paz, Edwin Francisco; Landaverde-Torres, Pamela; Erazo, Said Omar Leiva; Martinez, Rosella; Mayorga, Alvaro; Mayorga, Luis F.; Mejia-Mejia, Delmy-Aracely; Ramos, Hector; Saenz, Allan; Varela, Gloria; Vasquez, Olga Marina; Ferguson, Trevor; Knight-Madden, Jennifer; Samms-Vaughan, Maureen; Wilks, Rainford J.; Adegnika, Akim; Ateba-Ngoa, Ulysse; Yazdanbakhsh, Maria; O'Connor, Timothy D.

    2016-01-01

    To characterize the extent and impact of ancestry-related biases in precision genomic medicine, we use 642 whole-genome sequences from the Consortium on Asthma among African-ancestry Populations in the Americas (CAAPA) project to evaluate typical filters and databases. We find significant correlations between estimated African ancestry proportions and the number of variants per individual in all variant classification sets but one. The source of these correlations is highlighted in more detail by looking at the interaction between filtering criteria and the ClinVar and Human Gene Mutation databases. ClinVar's correlation, representing African ancestry-related bias, has changed over time amidst monthly updates, with the most extreme switch happening between March and April of 2014 (r=0.733 to r=−0.683). We identify 68 SNPs as the major drivers of this change in correlation. As long as ancestry-related bias in the use of these clinical databases goes largely unrecognized, the genetics community will face challenges with implementation, interpretation and cost-effectiveness when treating minority populations. PMID:27725664
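
    The reported r values are ordinary Pearson correlation coefficients between per-individual ancestry proportions and variant counts. A self-contained sketch of that computation on toy data (the numbers below are invented, not CAAPA data):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy illustration: estimated African ancestry proportion per individual
# vs. the count of variants flagged by some database-derived filter.
ancestry = [0.1, 0.3, 0.5, 0.7, 0.9]
flagged_counts = [12, 15, 21, 24, 30]
r = pearson_r(ancestry, flagged_counts)
```

    A strong positive (or negative) r across individuals, as the study observed against ClinVar versions, signals that the filter behaves differently depending on ancestry.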

  7. The Solar Energy Consortium of New York Photovoltaic Research and Development Center

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klein, Petra M.

    2012-10-15

    Project Objective: To lead New York State in increasing its usage of solar electric systems. The expected outcome is that appropriate technologies will be made available, which in turn will help to eliminate barriers to solar energy usage in New York State. Background: The Solar Energy Consortium has been created to lead New York State research on solar energy systems, specifically directed at doubling efficiency, halving cost, reducing the cost of installation, and developing unique form factors for the New York City urban environment.

  8. The NIH Roadmap Epigenomics Program data resource

    PubMed Central

    Chadwick, Lisa Helbling

    2012-01-01

    The NIH Roadmap Reference Epigenome Mapping Consortium is developing a community resource of genome-wide epigenetic maps in a broad range of human primary cells and tissues. There are large amounts of data already available, and a number of different options for viewing and analyzing the data. This report will describe key features of the websites where users will find data, protocols and analysis tools developed by the consortium, and provide a perspective on how this unique resource will facilitate and inform human disease research, both immediately and in the future. PMID:22690667

  9. The NIH Roadmap Epigenomics Program data resource.

    PubMed

    Chadwick, Lisa Helbling

    2012-06-01

    The NIH Roadmap Reference Epigenome Mapping Consortium is developing a community resource of genome-wide epigenetic maps in a broad range of human primary cells and tissues. There are large amounts of data already available, and a number of different options for viewing and analyzing the data. This report will describe key features of the websites where users will find data, protocols and analysis tools developed by the consortium, and provide a perspective on how this unique resource will facilitate and inform human disease research, both immediately and in the future.

  10. Promotores As Advocates for Community Improvement: Experiences of the Western States REACH Su Comunidad Consortium.

    PubMed

    Kutcher, Rachel; Moore-Monroy, Martha; Bello, Elizur; Doyle, Seth; Ibarra, Jorge; Kunz, Susan; Munoz, Rocio; Patton-Lopez, Megan; Sharkey, Joseph R; Wilger, Susan; Alfero, Charlie

    2015-01-01

    The REACH Su Comunidad Consortium worked with 10 communities to address disparities in access to healthy food and physical activity opportunities among Hispanic populations through policy, systems, and environmental (PSE) strategies. Community health workers took leadership roles in the implementation of PSE strategies in partnership with local multisector coalitions. This article describes the role of community health workers in PSE change, the technical and professional development support provided to the REACH Su Comunidad Communities, and highlights professional development needs of community health workers engaging in PSE strategies.

  11. Biomedical science journals in the Arab world.

    PubMed

    Tadmouri, Ghazi O

    2004-10-01

    Medieval Arab scientists established the basis of medical practice and gave important attention to the publication of scientific results. At present, modern scientific publishing in the Arab world is in its developmental stage. Arab biomedical journals number fewer than 300, most of which are published in Egypt, Lebanon, and the Kingdom of Saudi Arabia. Yet many of these journals do not have on-line access and are not indexed in major bibliographic databases. The majority of indexed journals, however, do not have a stable presence in the popular PubMed database, and their indexing has been discontinued since 2001. The exposure of Arab biomedical journals in international indices undoubtedly plays an important role in improving the scientific quality of these journals. The successful examples discussed in this review encourage us to call for the formation of a consortium of Arab biomedical journal publishers to assist in redressing the balance of the region from biomedical data consumption to data production.

  12. Proteomics data repositories: Providing a safe haven for your data and acting as a springboard for further research

    PubMed Central

    Vizcaíno, Juan Antonio; Foster, Joseph M.; Martens, Lennart

    2010-01-01

    Although data deposition is not yet standard practice in the field of proteomics, several mass spectrometry (MS) based proteomics repositories are publicly available to the scientific community. The main existing resources are the Global Proteome Machine Database (GPMDB), PeptideAtlas, the PRoteomics IDEntifications database (PRIDE), Tranche, and NCBI Peptidome. In this review the capabilities of each of these will be described, paying special attention to four key properties: data types stored, applicable data submission strategies, supported formats, and available data mining and visualization tools. Additionally, the data contents from model organisms will be enumerated for each resource. There are other valuable smaller and/or more specialized repositories, but they will not be covered in this review. Finally, the concept behind the ProteomeXchange consortium, a collaborative effort among the main resources in the field, will be introduced. PMID:20615486

  13. Urban Climate Change Resilience as a Teaching Tool for a STEM Summer Bridge Program

    NASA Astrophysics Data System (ADS)

    Rosenzweig, B.; Vorosmarty, C. J.; Socha, A.; Corsi, F.

    2015-12-01

    Community colleges have been identified as important gateways for the United States' scientific workforce development. However, students who begin their higher education at community colleges often face barriers to developing the skills needed for higher-level STEM careers, including basic training in mathematics, programming, analytical problem solving, and cross-disciplinary communication. As part of the Business Higher Education Forum's Undergraduate STEM Interventions in Industry (USI2) Consortium, we are developing a summer bridge program for students in STEM fields transferring from community college to senior (4-year) colleges at the City University of New York. Our scientific research on New York City climate change resilience will serve as the foundation for the bridge program curriculum. Students will be introduced to systems thinking and improve their analytical skills through guided problem-solving exercises using the New York City Climate Change Resilience Indicators Database currently being developed by the CUNY Environmental Crossroads Initiative. Students will also be supported in conducting an introductory, independent research project using the database. The interdisciplinary nature of climate change resilience assessment will allow students to explore topics related to their STEM field of interest (e.g., engineering, chemistry, and health science), while working collaboratively across disciplines with their peers. We hope that students who participate in the bridge program will continue with their research projects through their tenure at senior colleges, further enhancing their academic training while actively contributing to the study of urban climate change resilience. The effectiveness of this approach will be independently evaluated by NORC at the University of Chicago, as well as through internal surveying and long-term tracking of participating student cohorts.

  14. Current nonclinical testing paradigm enables safe entry to First-In-Human clinical trials: The IQ consortium nonclinical to clinical translational database.

    PubMed

    Monticello, Thomas M; Jones, Thomas W; Dambach, Donna M; Potter, David M; Bolt, Michael W; Liu, Maggie; Keller, Douglas A; Hart, Timothy K; Kadambi, Vivek J

    2017-11-01

    The contribution of animal testing in drug development has been widely debated and challenged. An industry-wide nonclinical to clinical translational database was created to determine how safety assessments in animal models translate to First-In-Human clinical risk. The blinded database was composed of 182 molecules and contained animal toxicology data coupled with clinical observations from phase I human studies. Animal and clinical data were categorized by organ system and correlations determined. The 2×2 contingency table (true positive, false positive, true negative, false negative) was used for statistical analysis. Sensitivity was 48% with a 43% positive predictive value (PPV). The nonhuman primate had the strongest performance in predicting adverse effects, especially for gastrointestinal and nervous system categories. When the same target organ was identified in both the rodent and nonrodent, the PPV increased. Specificity was 84% with an 86% negative predictive value (NPV). The beagle dog had the strongest performance in predicting an absence of clinical adverse effects. If no target organ toxicity was observed in either test species, the NPV increased. While nonclinical studies can demonstrate great value in the PPV for certain species and organ categories, the NPV was the stronger predictive performance measure across test species and target organs indicating that an absence of toxicity in animal studies strongly predicts a similar outcome in the clinic. These results support the current regulatory paradigm of animal testing in supporting safe entry to clinical trials and provide context for emerging alternate models. Copyright © 2017 Elsevier Inc. All rights reserved.
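
    The predictive measures quoted above follow directly from the 2×2 contingency table of animal findings versus clinical outcomes. A sketch with hypothetical counts chosen only to exercise the formulas (they roughly reproduce the reported percentages but are not the study's actual data):

```python
def translation_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 contingency
    table (true/false positives and negatives)."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical counts for one organ category, invented for illustration.
m = translation_metrics(tp=48, fp=64, tn=336, fn=52)
```

    With these counts, sensitivity is 48/(48+52) = 0.48 and specificity is 336/(336+64) = 0.84, matching the pattern the study describes: a modest PPV but a high NPV, so an absence of animal findings is the stronger predictor.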

  15. Phylogenetic characterization of a corrosive consortium isolated from a sour gas pipeline.

    PubMed

    Jan-Roblero, J; Romero, J M; Amaya, M; Le Borgne, S

    2004-06-01

    Biocorrosion is a common problem in oil and gas industry facilities. Characterization of the microbial populations responsible for biocorrosion and the interactions between different microorganisms with metallic surfaces is required in order to implement efficient monitoring and control strategies. Denaturing gradient gel electrophoresis (DGGE) analysis was used to separate PCR products and sequence analysis revealed the bacterial composition of a consortium obtained from a sour gas pipeline in the Gulf of Mexico. Only one species of sulfate-reducing bacteria (SRB) was detected in this consortium. The rest of the population consisted of enteric bacteria with different characteristics and metabolic capabilities potentially related to biocorrosion. Therefore, several types of bacteria may be involved in biocorrosion arising from natural biofilms that develop in industrial facilities. The low abundance of the detected SRB was evidenced by environmental scanning electron microscopy (ESEM). In addition, the localized corrosion of pipeline steel in the presence of the consortium was clearly observed by ESEM after removing the adhered bacteria.

  16. The Development of the Milwaukee Consortium for Hmong Health: Capacity Building Through Direct Community Engagement.

    PubMed

    Sparks, Shannon M; Vang, Pang C

    2015-01-01

    Hmong women experience increased incidence and mortality rates for cervical cancer, yet their cancer risk is often masked by their inclusion within the comparatively low-risk Asian American and Pacific Islander (AAPI) category. Key to this disparity is late stage at diagnosis, a consequence of low rates of screening. This article describes the establishment and community engagement efforts of the Milwaukee Consortium for Hmong Health, established in 2008 to build capacity to investigate and address barriers to screening and cancer care. The Consortium facilitated a series of three community dialogues to explore with community members effective ways to overcome barriers to accessing screening and cancer care. The community dialogues produced a series of six recommendations for action, detailed herein, supported and prioritized by the community. We posit that the integral involvement of the Hmong community from the outset promoted buy-in of ensuing Consortium education and outreach efforts, and helped to ensure fit with community perspectives, needs, and priorities.

  17. Characterization of a microbial consortium capable of rapid and simultaneous dechlorination of 1,1,2,2-tetrachloroethane and chlorinated ethane and ethene intermediates

    USGS Publications Warehouse

    Jones, E.J.P.; Voytek, M.A.; Lorah, M.M.; Kirshtein, J.D.

    2006-01-01

    A study was carried out to develop a culture of microorganisms for bioaugmentation treatment of chlorinated-ethane-contaminated groundwater at sites where dechlorination is incomplete or rates are too slow for effective remediation. Mixed cultures capable of dechlorinating chlorinated ethanes and ethenes were enriched from contaminated wetland sediment at Aberdeen Proving Ground (APG), Maryland. The West Branch Consortium (WBC-2) was capable of degrading 1,1,2,2-tetrachloroethane (TeCA), trichloroethylene (TCE), cis- and trans-1,2-dichloroethylene (DCE), 1,1,2-trichloroethane (TCA), 1,2-dichloroethane, and vinyl chloride to the nonchlorinated end products ethylene and ethane. WBC-2 dechlorinated TeCA, TCA, and cis-DCE rapidly and simultaneously. Methanogens in the consortium were members of the class Methanomicrobia, which includes acetoclastic methanogens. The WBC-2 consortium provides opportunities for the in situ bioremediation of sites contaminated with mixtures of chlorinated ethylenes and ethanes.

  18. Nuclear and Particle Physics Simulations: The Consortium of Upper-Level Physics Software

    NASA Astrophysics Data System (ADS)

    Bigelow, Roberta; Moloney, Michael J.; Philpott, John; Rothberg, Joseph

    1995-06-01

    The Consortium for Upper Level Physics Software (CUPS) has developed a comprehensive series of nine book/software packages that Wiley will publish in FY '95 and '96. CUPS is an international group of 27 physicists, all with extensive backgrounds in the research, teaching, and development of instructional software. The project is being supported by the National Science Foundation (PHY-9014548), and it has received other support from the IBM Corp., Apple Computer Corp., and George Mason University. The simulations being developed are: Astrophysics, Classical Mechanics, Electricity & Magnetism, Modern Physics, Nuclear and Particle Physics, Quantum Mechanics, Solid State, Thermal and Statistical, and Waves and Optics.

  19. Biogeography of a human oral microbiome at the micron scale

    PubMed Central

    Mark Welch, Jessica L.; Rossetti, Blair J.; Rieken, Christopher W.; Dewhirst, Floyd E.; Borisy, Gary G.

    2016-01-01

    The spatial organization of complex natural microbiomes is critical to understanding the interactions of the individual taxa that comprise a community. Although the revolution in DNA sequencing has provided an abundance of genomic-level information, the biogeography of microbiomes is almost entirely uncharted at the micron scale. Using spectral imaging fluorescence in situ hybridization as guided by metagenomic sequence analysis, we have discovered a distinctive, multigenus consortium in the microbiome of supragingival dental plaque. The consortium consists of a radially arranged, nine-taxon structure organized around cells of filamentous corynebacteria. The consortium ranges in size from a few tens to a few hundreds of microns in radius and is spatially differentiated. Within the structure, individual taxa are localized at the micron scale in ways suggestive of their functional niche in the consortium. For example, anaerobic taxa tend to be in the interior, whereas facultative or obligate aerobes tend to be at the periphery of the consortium. Consumers and producers of certain metabolites, such as lactate, tend to be near each other. Based on our observations and the literature, we propose a model for plaque microbiome development and maintenance consistent with known metabolic, adherence, and environmental considerations. The consortium illustrates how complex structural organization can emerge from the micron-scale interactions of its constituent organisms. The understanding that plaque community organization is an emergent phenomenon offers a perspective that is general in nature and applicable to other microbiomes. PMID:26811460

  20. Residency training in physiatry during a time of change: funding of graduate medical education and other issues.

    PubMed

    DeLisa, J A; Jain, S S; Kirshblum, S

    1998-01-01

    Decision makers at the federal and state levels are considering, and some states have enacted, a reduction in total United States residency positions, a shift in emphasis from specialist to generalist training, a need for programs to join together in training consortia to determine local residency position allocation strategy, a reduction in funding of international medical graduates, and a reduction in funding beyond the first certificate or a total of five years. A 5-page, 24-item questionnaire was sent to all physiatry residency training directors. The objective was to develop a descriptive database of physiatry training programs and of how their institutions might respond to cuts in graduate medical education funding. Fifty-eight (73%) of the questionnaires were returned. Most training directors believe that their primary mission is to train general physiatrists and, to a much lesser extent, to train subspecialty or research fellows. Directors were asked how they might handle reductions in house staff, such as by using physician extenders, shifting clinical workload to faculty, hiring additional faculty, and funding physiatry residents from practice plans and endowments. Physiatry has had little experience (29%; 17/58) with voluntary graduate medical education consortia, but most respondents (67%; 34/58) seem to feel that if a consortium system is mandated, they would favor a local or regional body over a national one, because they do not believe the specialty has a strong enough national stature. The major barriers to a consortium for graduate medical education allocation were governance, academic, fiscal, bureaucratic, and competitive issues.

  1. Microwave monolithic integrated circuit-related metrology at the National Institute of Standards and Technology

    NASA Astrophysics Data System (ADS)

    Reeve, Gerome; Marks, Roger; Blackburn, David

    1990-12-01

    How the National Institute of Standards and Technology (NIST) interacts with the GaAs community and the Defense Advanced Research Projects Agency microwave monolithic integrated circuit (MMIC) initiative is described. The organization of a joint industry and government laboratory consortium for MMIC-related metrology research is described along with some of the initial technical developments at NIST done in support of the consortium.

  2. Yes, I Can: Action Projects To Resolve Equity Issues in Educational Computing. A Project of ECCO, the Educational Computer Consortium of Ohio.

    ERIC Educational Resources Information Center

    Fredman, Alice, Ed.

    This book presents reports on selected "local action" projects that were developed as part of the Equity in Technology Project, which was inaugurated in 1985 by the Educational Computer Consortium of Ohio (ECCO). The book is organized into three sections, one for each of the populations targeted by the project. An introduction by Alice Fredman…

  3. Effect of thermal cycling ramp rate on CSP assembly reliability

    NASA Technical Reports Server (NTRS)

    Ghaffarian, R.

    2001-01-01

    A JPL-led chip scale package consortium of enterprises recently joined together to pool in-kind resources for evaluating the quality and reliability of chip scale packages for a variety of projects. The consortium's experience in building more than 150 test vehicle assemblies and single- and double-sided multilayer PWBs, together with the environmental test results, has now been published as a chip scale package guidelines document.

  4. The pediatric diabetes consortium: improving care of children with type 1 diabetes through collaborative research.

    PubMed

    2010-09-01

    Although there are some interactions between the major pediatric diabetes programs in the United States, there has been no formal, independent structure for collaboration, the sharing of information, and the development of joint research projects that utilize common outcome measures. To fill this unmet clinical and research need, a consortium of seven pediatric diabetes centers in the United States has formed the Pediatric Diabetes Consortium (PDC) through an unrestricted grant from Novo Nordisk, Inc. (Princeton, NJ). This article describes the organizational structure of the PDC and the design of a study of important clinical outcomes in children and adolescents with new-onset, type 1 diabetes mellitus (T1DM). The outcomes study will describe the changes in A1c levels, the frequency of adverse events (diabetic ketoacidosis/severe hypoglycemia), and the frequency and timing of the "honeymoon" phase in newly diagnosed patients with T1DM over the first 12-24 months of the disease and examine the relationship between these clinical outcomes and demographic, socioeconomic, and treatment factors. This project will also allow the Consortium to develop a cohort of youth with T1DM whose clinical course has been well characterized and who wish to participate in future clinical trials and/or contribute to a repository of biological samples.

  5. The FaceBase Consortium: A comprehensive program to facilitate craniofacial research

    PubMed Central

    Hochheiser, Harry; Aronow, Bruce J.; Artinger, Kristin; Beaty, Terri H.; Brinkley, James F.; Chai, Yang; Clouthier, David; Cunningham, Michael L.; Dixon, Michael; Donahue, Leah Rae; Fraser, Scott E.; Hallgrimsson, Benedikt; Iwata, Junichi; Klein, Ophir; Marazita, Mary L.; Murray, Jeffrey C.; Murray, Stephen; de Villena, Fernando Pardo-Manuel; Postlethwait, John; Potter, Steven; Shapiro, Linda; Spritz, Richard; Visel, Axel; Weinberg, Seth M.; Trainor, Paul A.

    2012-01-01

    The FaceBase Consortium consists of ten interlinked research and technology projects whose goal is to generate craniofacial research data and technology for use by the research community through a central data management and integrated bioinformatics hub. Funded by the National Institute of Dental and Craniofacial Research (NIDCR) and currently focused on studying the development of the middle region of the face, the Consortium will produce comprehensive datasets of global gene expression patterns, regulatory elements and sequencing; will generate anatomical and molecular atlases; will provide human normative facial data and other phenotypes; will conduct follow-up studies of a completed genome-wide association study; will generate independent data on the genetics of craniofacial development; will build repositories of animal models and of human samples and data for community access and analysis; and will develop software tools and animal models for analyzing, functionally testing, and integrating these data. The FaceBase website (http://www.facebase.org) will serve as a web home for these efforts, providing interactive tools for exploring these datasets, together with discussion forums and other services to support and foster collaboration within the craniofacial research community. PMID:21458441

  6. Enhanced solvent production by metabolic engineering of a twin-clostridial consortium.

    PubMed

    Wen, Zhiqiang; Minton, Nigel P; Zhang, Ying; Li, Qi; Liu, Jinle; Jiang, Yu; Yang, Sheng

    2017-01-01

    The efficient fermentative production of solvents (acetone, n-butanol, and ethanol) from a lignocellulosic feedstock using a single process microorganism has yet to be demonstrated. Herein, we developed a consolidated bioprocessing (CBP) scheme based on a twin-clostridial consortium composed of Clostridium cellulovorans and Clostridium beijerinckii capable of producing cellulosic butanol from alkali-extracted, deshelled corn cobs (AECC). To accomplish this, a genetic system was developed for C. cellulovorans and used to knock out the genes encoding acetate kinase (Clocel_1892) and lactate dehydrogenase (Clocel_1533), and to overexpress the gene encoding butyrate kinase (Clocel_3674), thereby pulling carbon flux towards butyrate production. In parallel, to enhance ethanol production, the expression of a putative hydrogenase gene (Clocel_2243) was down-regulated using CRISPR interference (CRISPRi). Simultaneously, genes involved in organic acid reassimilation (ctfAB, cbei_3833/3834) and pentose utilization (xylR, cbei_2385 and xylT, cbei_0109) were engineered in C. beijerinckii to enhance solvent production. The engineered twin-clostridial consortium was shown to decompose 83.2 g/L of AECC and produce 22.1 g/L of solvents (4.25 g/L acetone, 11.5 g/L butanol, and 6.37 g/L ethanol). This titer of acetone-butanol-ethanol (ABE) approximates that achieved from a starchy feedstock. The developed twin-clostridial consortium serves as a promising platform for ABE fermentation from lignocellulose by CBP. Copyright © 2016 International Metabolic Engineering Society. Published by Elsevier Inc. All rights reserved.

  7. From data repositories to submission portals: rethinking the role of domain-specific databases in CollecTF.

    PubMed

    Kılıç, Sefa; Sagitova, Dinara M; Wolfish, Shoshannah; Bely, Benoit; Courtot, Mélanie; Ciufo, Stacy; Tatusova, Tatiana; O'Donovan, Claire; Chibucos, Marcus C; Martin, Maria J; Erill, Ivan

    2016-01-01

    Domain-specific databases are essential resources for the biomedical community, leveraging expert knowledge to curate published literature and provide access to referenced data and knowledge. The limited scope of these databases, however, poses important challenges to their infrastructure, visibility, funding, and usefulness to the broader scientific community. CollecTF is a community-oriented database documenting experimentally validated transcription factor (TF)-binding sites in the Bacteria domain. In its quest to become a community resource for the annotation of transcriptional regulatory elements in bacterial genomes, CollecTF aims to move away from the conventional data-repository paradigm of domain-specific databases. Through the adoption of well-established ontologies, identifiers, and collaborations, CollecTF has progressively also become a portal for the annotation and submission of information on transcriptional regulatory elements to major biological sequence resources (RefSeq, UniProtKB, and the Gene Ontology Consortium). This fundamental change in database conception capitalizes on the domain-specific knowledge of contributing communities to provide high-quality annotations, while leveraging the availability of stable information hubs to promote long-term access and provide high visibility for the data. As a submission portal, CollecTF generates TF-binding site information through direct annotation of RefSeq genome records, definition of TF-based regulatory networks in UniProtKB entries, and submission of functional annotations to the Gene Ontology. As a database, CollecTF provides enhanced search and browsing, targeted data exports, binding motif analysis tools, and integration with motif discovery and search platforms. This innovative approach will allow CollecTF to focus its limited resources on the generation of high-quality information and the provision of specialized access to the data. Database URL: http://www.collectf.org/. © The Author(s) 2016. Published by Oxford University Press.

  8. Activities involving aeronautical, space science, and technology support for minority institutions

    NASA Technical Reports Server (NTRS)

    1993-01-01

    This final report addresses the activities in which the Interracial Council for Business Opportunity (ICBO) was involved over the past 12 months. ICBO was involved in the design and development of the CARES Student Tracking System software. CARES is intended to provide an effective means of maintaining relevant current and historical information on NASA-funded students through a range of educational program initiatives. ICBO was extensively involved in the formation of a minority university consortium and the implementation of collaborative research activities by the consortium as part of NASA's Mission to Planet Earth/Earth Observing System. ICBO was also involved in the formation of an HBCU/MI Consortium to facilitate technology transfer efforts to the small and minority business community in their respective regions.

  9. Enhancement of Health Research Capacity in Nigeria through North-South and In-Country Partnerships

    PubMed Central

    Olaleye, David O.; Odaibo, Georgina N.; Carney, Paula; Agbaji, Oche; Sagay, Atiene S.; Muktar, Haruna; Akinyinka, Olusegun O.; Omigbodun, Akinyinka O.; Ogunniyi, Adesola; Gashau, Wadzani; Akanmu, Sulaimon; Ogunsola, Folasade; Chukwuka, Chinwe; Okonkwo, Prosper I.; Meloni, Seema T.; Adewole, Isaac; Kanki, Phyllis J.; Murphy, Robert L.

    2014-01-01

    Research productivity in Sub-Saharan Africa has the potential to affect teaching, student quality, faculty career development, and translational country-relevant research as it has in developed countries. Nigeria is the most populous country in Africa, with an academic infrastructure that includes 129 universities and 45 medical schools; however, despite the size, the country has unacceptably poor health status indicators. To further develop the research infrastructure in Nigeria, faculty and research career development topics were identified within the six Nigerian universities of the nine institutions of the Medical Education Partnership Initiative in Nigeria (MEPIN) consortium. The consortium identified a training model that incorporated multi-institutional “train the trainers” programs at the University of Ibadan, followed by replication at the other MEPIN universities. More than 140 in-country trainers subsequently presented nine courses to more than 1,600 faculty, graduate students, and resident doctors throughout the consortium during the program’s first three years (2011–2013). This model has fostered a new era of collaboration among the major Nigerian research universities, which now have increased capacity for collaborative research initiatives and improved research output. These changes, in turn, have the potential to improve the nation’s health outcomes. PMID:25072590

  10. A priori collaboration in population imaging: The Uniform Neuro-Imaging of Virchow-Robin Spaces Enlargement consortium.

    PubMed

    Adams, Hieab H H; Hilal, Saima; Schwingenschuh, Petra; Wittfeld, Katharina; van der Lee, Sven J; DeCarli, Charles; Vernooij, Meike W; Katschnig-Winter, Petra; Habes, Mohamad; Chen, Christopher; Seshadri, Sudha; van Duijn, Cornelia M; Ikram, M Kamran; Grabe, Hans J; Schmidt, Reinhold; Ikram, M Arfan

    2015-12-01

    Virchow-Robin spaces (VRS), or perivascular spaces, are compartments of interstitial fluid enclosing cerebral blood vessels and are potential imaging markers of various underlying brain pathologies. Despite a growing interest in the study of enlarged VRS, the heterogeneity in rating and quantification methods combined with small sample sizes have so far hampered advancement in the field. The Uniform Neuro-Imaging of Virchow-Robin Spaces Enlargement (UNIVRSE) consortium was established with primary aims to harmonize rating and analysis (www.uconsortium.org). The UNIVRSE consortium brings together 13 (sub)cohorts from five countries, totaling 16,000 subjects and over 25,000 scans. Eight different magnetic resonance imaging protocols were used in the consortium. VRS rating was harmonized using a validated protocol that was developed by the two founding members, with high reliability independent of scanner type, rater experience, or concomitant brain pathology. Initial analyses revealed risk factors for enlarged VRS including increased age, sex, high blood pressure, brain infarcts, and white matter lesions, but this varied by brain region. Early collaborative efforts between cohort studies with respect to data harmonization and joint analyses can advance the field of population (neuro)imaging. The UNIVRSE consortium will focus efforts on other potential correlates of enlarged VRS, including genetics, cognition, stroke, and dementia.

  11. The Afya Bora Fellowship: An Innovative Program Focused on Creating an Interprofessional Network of Leaders in Global Health.

    PubMed

    Green, Wendy M; Farquhar, Carey; Mashalla, Yohana

    2017-09-01

    Most current health professions education programs are focused on the development of clinical skills. As a result, they may not address the complex and interconnected nature of global health. Trainees require relevant clinical, programmatic, and leadership skills to meet the challenges of practicing in an increasingly globalized environment. To develop health care leaders within sub-Saharan Africa, the Afya Bora Consortium developed a one-year fellowship for medical doctors and nurses. Fellows from nine institutions in the United States and sub-Saharan Africa participate in 12 learning modules focused on leadership development and program management. Classroom-based training is augmented with an experiential apprenticeship component. Since 2011, 100 fellows have graduated from the program. During their apprenticeships, fellows developed projects beneficial to their development and to host organizations. The program has developed fellows' skills in leadership, lent expertise to local organizations, and built knowledge in local contexts. Most fellows have returned to their countries of origin, thus building local capacity. U.S.-based fellows examine global health challenges from regional perspectives and learn from sub-Saharan African experts and peers. The Consortium provides ongoing support to alumni through career development awards and alumni network engagement with current and past fellow cohorts. The Consortium expanded from its initial network of five countries to six and continues to seek opportunities for geographical and institutional expansion.

  12. Operation of molten carbonate fuel cells with different biogas sources: A challenging approach for field trials

    NASA Astrophysics Data System (ADS)

    Trogisch, S.; Hoffmann, J.; Daza Bertrand, L.

    In recent years, research in the molten carbonate fuel cell (MCFC) area has focused on the utilisation of natural gas as fuel (S. Geitmann, Wasserstoff- & Brennstoffzellen-Projekte, 2002, ISBN 3-8311-3280-1). In order to increase the advantages of this technology, an international consortium has worked on the utilisation of biogas as fuel in MCFCs. During the four-year RTD project EFFECTIVE, two different gas upgrading systems were developed and constructed, together with two mobile MCFC test beds, which were operated at different locations for approximately 2,000-5,000 h per run with biogas of different origins and qualities. The large variety of test locations made it possible to gather a large database for assessing the effect of the different biogas qualities on the complete system, consisting of the upgrading and fuel cell subsystems. The findings are challenging. This article also aims to give an overview of the advantages of using biogas as a fuel for fuel cells.

  13. Planning the Human Variome Project: The Spain Report†

    PubMed Central

    Kaput, Jim; Cotton, Richard G. H.; Hardman, Lauren; Al Aqeel, Aida I.; Al-Aama, Jumana Y.; Al-Mulla, Fahd; Aretz, Stefan; Auerbach, Arleen D.; Axton, Myles; Bapat, Bharati; Bernstein, Inge T.; Bhak, Jong; Bleoo, Stacey L.; Blöcker, Helmut; Brenner, Steven E.; Burn, John; Bustamante, Mariona; Calzone, Rita; Cambon-Thomsen, Anne; Cargill, Michele; Carrera, Paola; Cavedon, Lawrence; Cho, Yoon Shin; Chung, Yeun-Jun; Claustres, Mireille; Cutting, Garry; Dalgleish, Raymond; den Dunnen, Johan T.; Díaz, Carlos; Dobrowolski, Steven; dos Santos, M. Rosário N.; Ekong, Rosemary; Flanagan, Simon B.; Flicek, Paul; Furukawa, Yoichi; Genuardi, Maurizio; Ghang, Ho; Golubenko, Maria V.; Greenblatt, Marc S.; Hamosh, Ada; Hancock, John M.; Hardison, Ross; Harrison, Terence M.; Hoffmann, Robert; Horaitis, Rania; Howard, Heather J.; Barash, Carol Isaacson; Izagirre, Neskuts; Jung, Jongsun; Kojima, Toshio; Laradi, Sandrine; Lee, Yeon-Su; Lee, Jong-Young; Gil-da-Silva-Lopes, Vera L.; Macrae, Finlay A.; Maglott, Donna; Marafie, Makia J.; Marsh, Steven G.E.; Matsubara, Yoichi; Messiaen, Ludwine M.; Möslein, Gabriela; Netea, Mihai G.; Norton, Melissa L.; Oefner, Peter J.; Oetting, William S.; O’Leary, James C.; de Ramirez, Ana Maria Oller; Paalman, Mark H.; Parboosingh, Jillian; Patrinos, George P.; Perozzi, Giuditta; Phillips, Ian R.; Povey, Sue; Prasad, Suyash; Qi, Ming; Quin, David J.; Ramesar, Rajkumar S.; Richards, C. Sue; Savige, Judith; Scheible, Dagmar G.; Scott, Rodney J.; Seminara, Daniela; Shephard, Elizabeth A.; Sijmons, Rolf H.; Smith, Timothy D.; Sobrido, María-Jesús; Tanaka, Toshihiro; Tavtigian, Sean V.; Taylor, Graham R.; Teague, Jon; Töpel, Thoralf; Ullman-Cullere, Mollie; Utsunomiya, Joji; van Kranen, Henk J.; Vihinen, Mauno; Watson, Michael; Webb, Elizabeth; Weber, Thomas K.; Yeager, Meredith; Yeom, Young I.; Yim, Seon-Hee; Yoo, Hyang-Sook

    2018-01-01

    The remarkable progress in characterizing the human genome sequence, exemplified by the Human Genome Project and the HapMap Consortium, has led to the perception that knowledge and the tools (e.g., microarrays) are sufficient for many if not most biomedical research efforts. A large amount of data from diverse studies proves this perception inaccurate at best, and at worst, an impediment for further efforts to characterize the variation in the human genome. Since variation in genotype and environment are the fundamental basis to understand phenotypic variability and heritability at the population level, identifying the range of human genetic variation is crucial to the development of personalized nutrition and medicine. The Human Variome Project (HVP; http://www.humanvariomeproject.org/) was proposed initially to systematically collect mutations that cause human disease and create a cyber infrastructure to link locus specific databases (LSDB). We report here the discussions and recommendations from the 2008 HVP planning meeting held in San Feliu de Guixols, Spain, in May 2008. PMID:19306394

  14. Deciphering the mechanisms of developmental disorders: phenotype analysis of embryos from mutant mouse lines

    PubMed Central

    Wilson, Robert; McGuire, Christina; Mohun, Timothy

    2016-01-01

    The Deciphering the Mechanisms of Developmental Disorders (DMDD) consortium is a research programme set up to identify genes in the mouse, which if mutated (or knocked-out) result in embryonic lethality when homozygous, and initiate the study of why disruption of their function has such profound effects on embryo development and survival. The project uses a combination of comprehensive high resolution 3D imaging and tissue histology to identify abnormalities in embryo and placental structures of embryonic lethal lines. The image data we have collected and the phenotypes scored are freely available through the project website (http://dmdd.org.uk). In this article we describe the web interface to the images that allows the embryo data to be viewed at full resolution in different planes, discuss how to search the database for a phenotype, and our approach to organising the data for an embryo and a mutant line so it is easy to comprehend and intuitive to navigate. PMID:26519470

  15. Planning the human variome project: the Spain report.

    PubMed

    Kaput, Jim; Cotton, Richard G H; Hardman, Lauren; Watson, Michael; Al Aqeel, Aida I; Al-Aama, Jumana Y; Al-Mulla, Fahd; Alonso, Santos; Aretz, Stefan; Auerbach, Arleen D; Bapat, Bharati; Bernstein, Inge T; Bhak, Jong; Bleoo, Stacey L; Blöcker, Helmut; Brenner, Steven E; Burn, John; Bustamante, Mariona; Calzone, Rita; Cambon-Thomsen, Anne; Cargill, Michele; Carrera, Paola; Cavedon, Lawrence; Cho, Yoon Shin; Chung, Yeun-Jun; Claustres, Mireille; Cutting, Garry; Dalgleish, Raymond; den Dunnen, Johan T; Díaz, Carlos; Dobrowolski, Steven; dos Santos, M Rosário N; Ekong, Rosemary; Flanagan, Simon B; Flicek, Paul; Furukawa, Yoichi; Genuardi, Maurizio; Ghang, Ho; Golubenko, Maria V; Greenblatt, Marc S; Hamosh, Ada; Hancock, John M; Hardison, Ross; Harrison, Terence M; Hoffmann, Robert; Horaitis, Rania; Howard, Heather J; Barash, Carol Isaacson; Izagirre, Neskuts; Jung, Jongsun; Kojima, Toshio; Laradi, Sandrine; Lee, Yeon-Su; Lee, Jong-Young; Gil-da-Silva-Lopes, Vera L; Macrae, Finlay A; Maglott, Donna; Marafie, Makia J; Marsh, Steven G E; Matsubara, Yoichi; Messiaen, Ludwine M; Möslein, Gabriela; Netea, Mihai G; Norton, Melissa L; Oefner, Peter J; Oetting, William S; O'Leary, James C; de Ramirez, Ana Maria Oller; Paalman, Mark H; Parboosingh, Jillian; Patrinos, George P; Perozzi, Giuditta; Phillips, Ian R; Povey, Sue; Prasad, Suyash; Qi, Ming; Quin, David J; Ramesar, Rajkumar S; Richards, C Sue; Savige, Judith; Scheible, Dagmar G; Scott, Rodney J; Seminara, Daniela; Shephard, Elizabeth A; Sijmons, Rolf H; Smith, Timothy D; Sobrido, María-Jesús; Tanaka, Toshihiro; Tavtigian, Sean V; Taylor, Graham R; Teague, Jon; Töpel, Thoralf; Ullman-Cullere, Mollie; Utsunomiya, Joji; van Kranen, Henk J; Vihinen, Mauno; Webb, Elizabeth; Weber, Thomas K; Yeager, Meredith; Yeom, Young I; Yim, Seon-Hee; Yoo, Hyang-Sook

    2009-04-01

    The remarkable progress in characterizing the human genome sequence, exemplified by the Human Genome Project and the HapMap Consortium, has led to the perception that knowledge and the tools (e.g., microarrays) are sufficient for many if not most biomedical research efforts. A large amount of data from diverse studies proves this perception inaccurate at best, and at worst, an impediment for further efforts to characterize the variation in the human genome. Because variation in genotype and environment are the fundamental basis to understand phenotypic variability and heritability at the population level, identifying the range of human genetic variation is crucial to the development of personalized nutrition and medicine. The Human Variome Project (HVP; http://www.humanvariomeproject.org/) was proposed initially to systematically collect mutations that cause human disease and create a cyber infrastructure to link locus specific databases (LSDB). We report here the discussions and recommendations from the 2008 HVP planning meeting held in San Feliu de Guixols, Spain, in May 2008. (c) 2009 Wiley-Liss, Inc.

  16. The GENCODE exome: sequencing the complete human exome

    PubMed Central

    Coffey, Alison J; Kokocinski, Felix; Calafato, Maria S; Scott, Carol E; Palta, Priit; Drury, Eleanor; Joyce, Christopher J; LeProust, Emily M; Harrow, Jen; Hunt, Sarah; Lehesjoki, Anna-Elina; Turner, Daniel J; Hubbard, Tim J; Palotie, Aarno

    2011-01-01

    Sequencing the coding regions, the exome, of the human genome is one of the major current strategies to identify low frequency and rare variants associated with human disease traits. So far, the most widely used commercial exome capture reagents have mainly targeted the consensus coding sequence (CCDS) database. We report the design of an extended set of targets for capturing the complete human exome, based on annotation from the GENCODE consortium. The extended set covers an additional 5594 genes and 10.3 Mb compared with the current CCDS-based sets. The additional regions include potential disease genes previously inaccessible to exome resequencing studies, such as 43 genes linked to ion channel activity and 70 genes linked to protein kinase activity. In total, the new GENCODE exome set developed here covers 47.9 Mb and performed well in sequence capture experiments. In the sample set used in this study, the GENCODE exome target identified over 5000 more SNP variants (24% more) than CCDS-based exome sequencing. PMID:21364695
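The coverage figures quoted above can be cross-checked with a few lines of arithmetic; the derived ~37.6 Mb CCDS-based target size is an inference from the stated 47.9 Mb and 10.3 Mb figures, not a number given in the record:

```python
# Figures stated in the abstract: the GENCODE exome target covers
# 47.9 Mb, which is 10.3 Mb more than the CCDS-based capture sets.
GENCODE_MB = 47.9
EXTRA_MB = 10.3

def ccds_target_mb() -> float:
    """Approximate CCDS-based target size implied by the abstract."""
    return GENCODE_MB - EXTRA_MB

def extra_fraction() -> float:
    """Fraction of the GENCODE target that is new relative to CCDS."""
    return EXTRA_MB / GENCODE_MB
```

The implied CCDS target is about 37.6 Mb, and roughly a fifth of the GENCODE target is territory the CCDS-based sets did not cover, which is consistent in scale with the reported 24% gain in SNP variants.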

  17. 25 CFR 1000.73 - Once a Tribe/Consortium has been awarded a grant, may the Tribe/Consortium obtain information...

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 25 Indians 2 2013-04-01 2013-04-01 false Once a Tribe/Consortium has been awarded a grant, may the Tribe/Consortium obtain information from a non-BIA bureau? 1000.73 Section 1000.73 Indians OFFICE OF THE... § 1000.73 Once a Tribe/Consortium has been awarded a grant, may the Tribe/Consortium obtain information...

  18. Engineering Ligninolytic Consortium for Bioconversion of Lignocelluloses to Ethanol and Chemicals.

    PubMed

    Bilal, Muhammad; Nawaz, Muhammad Zohaib; Iqbal, Hafiz M N; Hou, Jialin; Mahboob, Shahid; Al-Ghanim, Khalid A; Cheng, Hairong

    2018-01-01

    Rising environmental concerns and the recent global push for cleaner production and consumption are driving the design of green industrial processes to produce alternative fuels and chemicals. Although bioethanol is one of the most promising and eco-friendly alternatives to fossil fuels, its production from food and feed crops has drawn considerable criticism. The main objective of this study was to present the noteworthy potential of lignocellulosic biomass as an abundant and renewable biological resource. Particular focus was given to engineering ligninolytic consortia for the bioconversion of lignocelluloses to ethanol and chemicals on a sustainable and environmentally sound basis. Herein, an effort has been made to extensively review, analyze, and compile salient information related to the topic of interest. Several authentic bibliographic databases, including PubMed, Scopus, Elsevier, Springer, Bentham Science, and other scientific databases, were searched with utmost care, and inclusion/exclusion criteria were applied to appraise the quality of the retrieved peer-reviewed literature. Bioethanol production from lignocellulosic biomass can largely resolve the principal shortcoming of first-generation ethanol, since it utilizes inedible lignocellulosic feedstocks sourced primarily from agricultural and forestry wastes. The two major polysaccharides in lignocellulosic biomass, cellulose and hemicellulose, form a complex lignocellulosic network by connecting with lignin, which is highly recalcitrant to depolymerization. Several attempts have been made to reduce process costs by improving the pretreatment step, while the ligninolytic enzymes of white rot fungi (WRF), including laccase, lignin peroxidase (LiP), and manganese peroxidase (MnP), have emerged as versatile biocatalysts for the delignification of lignocellulosic residues. The first part of the review focuses on engineering the ligninolytic consortium. The second part discusses WRF and its ligninolytic enzyme-based bio-delignification of lignocellulosic biomass, enzymatic hydrolysis, and fermentation of the hydrolyzed feedstock. The third part comprehensively reviews metabolic engineering, enzyme engineering, and synthetic biology approaches to the production of ethanol and platform chemicals, and the review closes with future perspectives. In conclusion, given the present unpredictable scenario of energy and fuel crises accompanied by global warming, lignocellulosic bioethanol holds great promise as an alternative to petroleum. Apart from bioethanol, the simultaneous production of other value-added products may improve the economics of the lignocellulosic bioethanol conversion process. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  19. Consortium for Bone and Tissue Repair and Regeneration

    DTIC Science & Technology

    2009-10-01

    ...visualization of angiogenic responses to the test scaffolds (silicate 13-93, borosilicate 13-93B1, and borate 13-93B3; H&E, toluidine blue, and Goldner's stains)...bringing together life scientists and materials engineers in the development and testing of new biomaterials. The Consortium focuses on the...chambers in rats to assess possible angiogenic responses; • Initiated in vitro tests of covalently bonding a bioadhesive peptide to 13-93 glass scaffolds as

  20. Incorporating Neutrophil-to-lymphocyte Ratio and Platelet-to-lymphocyte Ratio in Place of Neutrophil Count and Platelet Count Improves Prognostic Accuracy of the International Metastatic Renal Cell Carcinoma Database Consortium Model

    PubMed Central

    Chrom, Pawel; Stec, Rafal; Bodnar, Lubomir; Szczylik, Cezary

    2018-01-01

    Purpose: The study investigated whether a replacement of neutrophil count and platelet count by neutrophil-to-lymphocyte ratio (NLR) and platelet-to-lymphocyte ratio (PLR) within the International Metastatic Renal Cell Carcinoma Database Consortium (IMDC) model would improve its prognostic accuracy. Materials and Methods: This retrospective analysis included consecutive patients with metastatic renal cell carcinoma treated with first-line tyrosine kinase inhibitors. The IMDC and modified-IMDC models were compared using: concordance index (CI), bias-corrected concordance index (BCCI), calibration plots, the Grønnesby and Borgan test, Bayesian Information Criterion (BIC), generalized R2, Integrated Discrimination Improvement (IDI), and continuous Net Reclassification Index (cNRI) for individual risk factors and the three risk groups. Results: Three hundred and twenty-one patients were eligible for analyses. The modified-IMDC model with an NLR value of 3.6 and a PLR value of 157 was selected for comparison with the IMDC model. Both models were well calibrated. All other measures favoured the modified-IMDC model over the IMDC model (CI, 0.706 vs. 0.677; BCCI, 0.699 vs. 0.671; BIC, 2,176.2 vs. 2,190.7; generalized R2, 0.238 vs. 0.202; IDI, 0.044; cNRI, 0.279 for individual risk factors; and CI, 0.669 vs. 0.641; BCCI, 0.669 vs. 0.641; BIC, 2,183.2 vs. 2,198.1; generalized R2, 0.163 vs. 0.123; IDI, 0.045; cNRI, 0.165 for the three risk groups). Conclusion: Incorporation of NLR and PLR in place of neutrophil count and platelet count improved the prognostic accuracy of the IMDC model. These findings require external validation before introduction into clinical practice. PMID:28253564

  1. Incorporating Neutrophil-to-lymphocyte Ratio and Platelet-to-lymphocyte Ratio in Place of Neutrophil Count and Platelet Count Improves Prognostic Accuracy of the International Metastatic Renal Cell Carcinoma Database Consortium Model.

    PubMed

    Chrom, Pawel; Stec, Rafal; Bodnar, Lubomir; Szczylik, Cezary

    2018-01-01

    The study investigated whether a replacement of neutrophil count and platelet count by neutrophil-to-lymphocyte ratio (NLR) and platelet-to-lymphocyte ratio (PLR) within the International Metastatic Renal Cell Carcinoma Database Consortium (IMDC) model would improve its prognostic accuracy. This retrospective analysis included consecutive patients with metastatic renal cell carcinoma treated with first-line tyrosine kinase inhibitors. The IMDC and modified-IMDC models were compared using: concordance index (CI), bias-corrected concordance index (BCCI), calibration plots, the Grønnesby and Borgan test, Bayesian Information Criterion (BIC), generalized R2, Integrated Discrimination Improvement (IDI), and continuous Net Reclassification Index (cNRI) for individual risk factors and the three risk groups. Three hundred and twenty-one patients were eligible for analyses. The modified-IMDC model with NLR value of 3.6 and PLR value of 157 was selected for comparison with the IMDC model. Both models were well calibrated. All other measures favoured the modified-IMDC model over the IMDC model (CI, 0.706 vs. 0.677; BCCI, 0.699 vs. 0.671; BIC, 2,176.2 vs. 2,190.7; generalized R2, 0.238 vs. 0.202; IDI, 0.044; cNRI, 0.279 for individual risk factors; and CI, 0.669 vs. 0.641; BCCI, 0.669 vs. 0.641; BIC, 2,183.2 vs. 2,198.1; generalized R2, 0.163 vs. 0.123; IDI, 0.045; cNRI, 0.165 for the three risk groups). Incorporation of NLR and PLR in place of neutrophil count and platelet count improved prognostic accuracy of the IMDC model. These findings require external validation before introducing into clinical practice.
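The ratio substitution at the heart of the modified model is plain arithmetic. A minimal sketch, assuming the 3.6 (NLR) and 157 (PLR) cut-offs reported above; the function names and the dictionary layout are illustrative, not part of the published model:

```python
def nlr(neutrophils: float, lymphocytes: float) -> float:
    """Neutrophil-to-lymphocyte ratio."""
    return neutrophils / lymphocytes

def plr(platelets: float, lymphocytes: float) -> float:
    """Platelet-to-lymphocyte ratio."""
    return platelets / lymphocytes

# Cut-off values selected for the modified-IMDC model in the abstract.
NLR_CUTOFF = 3.6
PLR_CUTOFF = 157.0

def modified_imdc_ratio_factors(neutrophils: float,
                                lymphocytes: float,
                                platelets: float) -> dict:
    """Return the two ratio-based risk factors (True = adverse) that
    replace the raw neutrophil and platelet counts in the modified model."""
    return {
        "high_nlr": nlr(neutrophils, lymphocytes) > NLR_CUTOFF,
        "high_plr": plr(platelets, lymphocytes) > PLR_CUTOFF,
    }
```

For example, a patient with neutrophils 8.0, lymphocytes 2.0, and platelets 320 (all ×10⁹/L) exceeds both cut-offs and carries both adverse ratio factors.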

  2. The learning curve of robot-assisted radical cystectomy: results from the International Robotic Cystectomy Consortium.

    PubMed

    Hayn, Matthew H; Hussain, Abid; Mansour, Ahmed M; Andrews, Paul E; Carpentier, Paul; Castle, Erik; Dasgupta, Prokar; Rimington, Peter; Thomas, Raju; Khan, Shamim; Kibel, Adam; Kim, Hyung; Manoharan, Murugesan; Menon, Mani; Mottrie, Alex; Ornstein, David; Peabody, James; Pruthi, Raj; Palou Redorta, Joan; Richstone, Lee; Schanne, Francis; Stricker, Hans; Wiklund, Peter; Chandrasekhar, Rameela; Wilding, Greg E; Guru, Khurshid A

    2010-08-01

    Robot-assisted radical cystectomy (RARC) has evolved as a minimally invasive alternative to open radical cystectomy for patients with invasive bladder cancer. We sought to define the learning curve for RARC by evaluating results from a multicenter, contemporary, consecutive series of patients who underwent this procedure. Using the International Robotic Cystectomy Consortium database, a prospectively maintained, institutional review board-approved registry, we identified 496 patients who underwent RARC by 21 surgeons at 14 institutions from 2003 to 2009. Cut-off points for operative time, lymph node yield (LNY), estimated blood loss (EBL), and margin positivity were identified. Using specifically designed statistical mixed models, we were able to inversely predict the number of patients required for an institution to reach the predetermined cut-off points. Mean operative time was 386 min, mean EBL was 408 ml, and mean LNY was 18. Overall, 34 of 482 patients (7%) had a positive surgical margin (PSM). Using the statistical models, it was estimated that 21 patients were required for operative time to reach 6.5 h, and 8, 20, and 30 patients were required to reach an LNY of 12, 16, and 20, respectively. For all patients, PSM rates of <5% were achieved after 30 patients. For patients with pathologic stage higher than T2, PSM rates of <15% were achieved after 24 patients. RARC is a challenging procedure but is a technique that is reproducible across multiple centers. This report helps to define the learning curve for RARC and demonstrates an acceptable level of proficiency by the 30th case for proxy measures of RARC quality. Copyright (c) 2010 European Association of Urology. Published by Elsevier B.V. All rights reserved.
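The cut-off analysis asks how many cases an institution needs before a predicted proxy measure crosses a threshold. A toy sketch of that idea, assuming a hypothetical exponential learning curve; the functional form and its parameters are invented for illustration and are not the mixed models used in the study, and only the 6.5 h (390 min) operative-time threshold comes from the abstract:

```python
import math

def predicted_operative_time(case_number: int, start_min: float = 500.0,
                             plateau_min: float = 340.0,
                             rate: float = 0.08) -> float:
    """Hypothetical learning curve: operative time (minutes) decays
    exponentially from a starting value toward a plateau."""
    return plateau_min + (start_min - plateau_min) * math.exp(-rate * case_number)

def cases_to_reach(threshold_min: float, max_cases: int = 500) -> int:
    """First case number at which the predicted time drops to the threshold,
    i.e. the learning-curve cut-off for this proxy measure."""
    for n in range(1, max_cases + 1):
        if predicted_operative_time(n) <= threshold_min:
            return n
    raise ValueError("threshold not reached within max_cases")
```

With these placeholder parameters the 390 min threshold is crossed at case 15; the study's fitted models put the corresponding figure at 21 cases.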

  3. National Land Cover Database 2001 (NLCD01)

    USGS Publications Warehouse

    LaMotte, Andrew E.

    2016-01-01

    This 30-meter data set represents land use and land cover for the conterminous United States for the 2001 time period. The data have been arranged into four tiles to facilitate timely display and manipulation within a Geographic Information System (see http://water.usgs.gov/GIS/browse/nlcd01-partition.jpg). The National Land Cover Data Set for 2001 was produced through a cooperative project conducted by the Multi-Resolution Land Characteristics (MRLC) Consortium. The MRLC Consortium is a partnership of Federal agencies (http://www.mrlc.gov), consisting of the U.S. Geological Survey (USGS), the National Oceanic and Atmospheric Administration (NOAA), the U.S. Environmental Protection Agency (USEPA), the U.S. Department of Agriculture (USDA), the U.S. Forest Service (USFS), the National Park Service (NPS), the U.S. Fish and Wildlife Service (USFWS), the Bureau of Land Management (BLM), and the USDA Natural Resources Conservation Service (NRCS). One of the primary goals of the project is to generate a current, consistent, seamless, and accurate National Land Cover Database (NLCD) circa 2001 for the United States at medium spatial resolution. For a detailed definition and discussion on MRLC and the NLCD 2001 products, refer to Homer and others (2004) (see: http://www.mrlc.gov/mrlc2k.asp). The NLCD 2001 was created by partitioning the United States into mapping zones. A total of 68 mapping zones (see http://water.usgs.gov/GIS/browse/nlcd01-mappingzones.jpg) were delineated within the conterminous United States based on ecoregion and geographical characteristics, edge-matching features, and the size requirement of Landsat mosaics. Mapping zones encompass the whole or parts of several states. Questions about the NLCD mapping zones can be directed to the NLCD 2001 Land Cover Mapping Team at the USGS/EROS, Sioux Falls, SD (605) 594-6151 or mrlc@usgs.gov.

  4. National Land Cover Database 2001 (NLCD01) Imperviousness Layer Tile 1, Northwest United States: IMPV01_1

    USGS Publications Warehouse

    LaMotte, Andrew E.; Wieczorek, Michael

    2010-01-01

    This 30-meter resolution data set represents the imperviousness layer for the conterminous United States for the 2001 time period. The data have been arranged into four tiles to facilitate timely display and manipulation within a Geographic Information System, browse graphic: nlcd01-partition. The National Land Cover Data Set for 2001 was produced through a cooperative project conducted by the Multi-Resolution Land Characteristics (MRLC) Consortium. The MRLC Consortium is a partnership of Federal agencies (www.mrlc.gov), consisting of the U.S. Geological Survey (USGS), the National Oceanic and Atmospheric Administration (NOAA), the U.S. Environmental Protection Agency (USEPA), the U.S. Department of Agriculture (USDA), the U.S. Forest Service (USFS), the National Park Service (NPS), the U.S. Fish and Wildlife Service (USFWS), the Bureau of Land Management (BLM), and the USDA Natural Resources Conservation Service (NRCS). One of the primary goals of the project is to generate a current, consistent, seamless, and accurate National Land Cover Database (NLCD) circa 2001 for the United States at medium spatial resolution. For a detailed definition and discussion on MRLC and the NLCD 2001 products, refer to Homer and others (2004) and http://www.mrlc.gov/mrlc2k.asp. The NLCD 2001 was created by partitioning the United States into mapping-zones. A total of 68 mapping-zones browse graphic: nlcd01-mappingzones.jpg were delineated within the conterminous United States based on ecoregion and geographical characteristics, edge-matching features, and the size requirement of Landsat mosaics. Mapping-zones encompass the whole or parts of several states. Questions about the NLCD mapping zones can be directed to the NLCD 2001 Land Cover Mapping Team at the USGS/EROS, Sioux Falls, SD (605) 594-6151 or mrlc@usgs.gov.

  5. National Land Cover Database 2001 (NLCD01) Tree Canopy Layer Tile 2, Northeast United States: CNPY01_2

    USGS Publications Warehouse

    LaMotte, Andrew E.; Wieczorek, Michael

    2010-01-01

    This 30-meter resolution data set represents the tree canopy layer for the conterminous United States for the 2001 time period. The data have been arranged into four tiles to facilitate timely display and manipulation within a Geographic Information System, browse graphic: nlcd01-partition.jpg. The National Land Cover Data Set for 2001 was produced through a cooperative project conducted by the Multi-Resolution Land Characteristics (MRLC) Consortium. The MRLC Consortium is a partnership of Federal agencies (www.mrlc.gov), consisting of the U.S. Geological Survey (USGS), the National Oceanic and Atmospheric Administration (NOAA), the U.S. Environmental Protection Agency (USEPA), the U.S. Department of Agriculture (USDA), the U.S. Forest Service (USFS), the National Park Service (NPS), the U.S. Fish and Wildlife Service (USFWS), the Bureau of Land Management (BLM), and the USDA Natural Resources Conservation Service (NRCS). One of the primary goals of the project is to generate a current, consistent, seamless, and accurate National Land Cover Database (NLCD) circa 2001 for the United States at medium spatial resolution. For a detailed definition and discussion on MRLC and the NLCD 2001 products, refer to Homer and others (2004) and http://www.mrlc.gov/mrlc2k.asp. The NLCD 2001 was created by partitioning the United States into mapping-zones. A total of 68 mapping-zones browse graphic: nlcd01-mappingzones.jpg were delineated within the conterminous United States based on ecoregion and geographical characteristics, edge-matching features, and the size requirement of Landsat mosaics. Mapping-zones encompass the whole or parts of several states. Questions about the NLCD mapping zones can be directed to the NLCD 2001 Land Cover Mapping Team at the USGS/EROS, Sioux Falls, SD (605) 594-6151 or mrlc@usgs.gov.

  6. National Land Cover Database 2001 (NLCD01) Imperviousness Layer Tile 4, Southeast United States: IMPV01_4

    USGS Publications Warehouse

    Wieczorek, Michael; LaMotte, Andrew E.

    2010-01-01

    This 30-meter resolution data set represents the imperviousness layer for the conterminous United States for the 2001 time period. The data have been arranged into four tiles to facilitate timely display and manipulation within a Geographic Information System, browse graphic: nlcd01-partition. The National Land Cover Data Set for 2001 was produced through a cooperative project conducted by the Multi-Resolution Land Characteristics (MRLC) Consortium. The MRLC Consortium is a partnership of Federal agencies (www.mrlc.gov), consisting of the U.S. Geological Survey (USGS), the National Oceanic and Atmospheric Administration (NOAA), the U.S. Environmental Protection Agency (USEPA), the U.S. Department of Agriculture (USDA), the U.S. Forest Service (USFS), the National Park Service (NPS), the U.S. Fish and Wildlife Service (USFWS), the Bureau of Land Management (BLM), and the USDA Natural Resources Conservation Service (NRCS). One of the primary goals of the project is to generate a current, consistent, seamless, and accurate National Land Cover Database (NLCD) circa 2001 for the United States at medium spatial resolution. For a detailed definition and discussion on MRLC and the NLCD 2001 products, refer to Homer and others (2004) and http://www.mrlc.gov/mrlc2k.asp. The NLCD 2001 was created by partitioning the United States into mapping-zones. A total of 68 mapping-zones browse graphic: nlcd01-mappingzones.jpg were delineated within the conterminous United States based on ecoregion and geographical characteristics, edge-matching features, and the size requirement of Landsat mosaics. Mapping-zones encompass the whole or parts of several states. Questions about the NLCD mapping zones can be directed to the NLCD 2001 Land Cover Mapping Team at the USGS/EROS, Sioux Falls, SD (605) 594-6151 or mrlc@usgs.gov.

  7. National Land Cover Database 2001 (NLCD01) Tree Canopy Layer Tile 1, Northwest United States: CNPY01_1

    USGS Publications Warehouse

    LaMotte, Andrew E.; Wieczorek, Michael

    2010-01-01

    This 30-meter resolution data set represents the tree canopy layer for the conterminous United States for the 2001 time period. The data have been arranged into four tiles to facilitate timely display and manipulation within a Geographic Information System, browse graphic: nlcd01-partition.jpg. The National Land Cover Data Set for 2001 was produced through a cooperative project conducted by the Multi-Resolution Land Characteristics (MRLC) Consortium. The MRLC Consortium is a partnership of Federal agencies (www.mrlc.gov), consisting of the U.S. Geological Survey (USGS), the National Oceanic and Atmospheric Administration (NOAA), the U.S. Environmental Protection Agency (USEPA), the U.S. Department of Agriculture (USDA), the U.S. Forest Service (USFS), the National Park Service (NPS), the U.S. Fish and Wildlife Service (USFWS), the Bureau of Land Management (BLM), and the USDA Natural Resources Conservation Service (NRCS). One of the primary goals of the project is to generate a current, consistent, seamless, and accurate National Land Cover Database (NLCD) circa 2001 for the United States at medium spatial resolution. For a detailed definition and discussion on MRLC and the NLCD 2001 products, refer to Homer and others (2004) and http://www.mrlc.gov/mrlc2k.asp. The NLCD 2001 was created by partitioning the United States into mapping-zones. A total of 68 mapping-zones browse graphic: nlcd01-mappingzones.jpg were delineated within the conterminous United States based on ecoregion and geographical characteristics, edge-matching features, and the size requirement of Landsat mosaics. Mapping-zones encompass the whole or parts of several states. Questions about the NLCD mapping zones can be directed to the NLCD 2001 Land Cover Mapping Team at the USGS/EROS, Sioux Falls, SD (605) 594-6151 or mrlc@usgs.gov.

  8. National Land Cover Database 2001 (NLCD01) Imperviousness Layer Tile 2, Northeast United States: IMPV01_2

    USGS Publications Warehouse

    LaMotte, Andrew E.; Wieczorek, Michael

    2010-01-01

    This 30-meter resolution data set represents the imperviousness layer for the conterminous United States for the 2001 time period. The data have been arranged into four tiles to facilitate timely display and manipulation within a Geographic Information System, browse graphic: nlcd01-partition. The National Land Cover Data Set for 2001 was produced through a cooperative project conducted by the Multi-Resolution Land Characteristics (MRLC) Consortium. The MRLC Consortium is a partnership of Federal agencies (www.mrlc.gov), consisting of the U.S. Geological Survey (USGS), the National Oceanic and Atmospheric Administration (NOAA), the U.S. Environmental Protection Agency (USEPA), the U.S. Department of Agriculture (USDA), the U.S. Forest Service (USFS), the National Park Service (NPS), the U.S. Fish and Wildlife Service (USFWS), the Bureau of Land Management (BLM), and the USDA Natural Resources Conservation Service (NRCS). One of the primary goals of the project is to generate a current, consistent, seamless, and accurate National Land Cover Database (NLCD) circa 2001 for the United States at medium spatial resolution. For a detailed definition and discussion on MRLC and the NLCD 2001 products, refer to Homer and others (2004) and http://www.mrlc.gov/mrlc2k.asp. The NLCD 2001 was created by partitioning the United States into mapping-zones. A total of 68 mapping-zones browse graphic: nlcd01-mappingzones.jpg were delineated within the conterminous United States based on ecoregion and geographical characteristics, edge-matching features, and the size requirement of Landsat mosaics. Mapping-zones encompass the whole or parts of several states. Questions about the NLCD mapping zones can be directed to the NLCD 2001 Land Cover Mapping Team at the USGS/EROS, Sioux Falls, SD (605) 594-6151 or mrlc@usgs.gov.

  9. National Land Cover Database 2001 (NLCD01) Tree Canopy Layer Tile 4, Southeast United States: CNPY01_4

    USGS Publications Warehouse

    LaMotte, Andrew E.; Wieczorek, Michael

    2010-01-01

    This 30-meter resolution data set represents the tree canopy layer for the conterminous United States for the 2001 time period. The data have been arranged into four tiles to facilitate timely display and manipulation within a Geographic Information System, browse graphic: nlcd01-partition.jpg. The National Land Cover Data Set for 2001 was produced through a cooperative project conducted by the Multi-Resolution Land Characteristics (MRLC) Consortium. The MRLC Consortium is a partnership of Federal agencies (www.mrlc.gov), consisting of the U.S. Geological Survey (USGS), the National Oceanic and Atmospheric Administration (NOAA), the U.S. Environmental Protection Agency (USEPA), the U.S. Department of Agriculture (USDA), the U.S. Forest Service (USFS), the National Park Service (NPS), the U.S. Fish and Wildlife Service (USFWS), the Bureau of Land Management (BLM), and the USDA Natural Resources Conservation Service (NRCS). One of the primary goals of the project is to generate a current, consistent, seamless, and accurate National Land Cover Database (NLCD) circa 2001 for the United States at medium spatial resolution. For a detailed definition and discussion on MRLC and the NLCD 2001 products, refer to Homer and others (2004) and http://www.mrlc.gov/mrlc2k.asp. The NLCD 2001 was created by partitioning the United States into mapping-zones. A total of 68 mapping-zones browse graphic: nlcd01-mappingzones.jpg were delineated within the conterminous United States based on ecoregion and geographical characteristics, edge-matching features, and the size requirement of Landsat mosaics. Mapping-zones encompass the whole or parts of several states. Questions about the NLCD mapping zones can be directed to the NLCD 2001 Land Cover Mapping Team at the USGS/EROS, Sioux Falls, SD (605) 594-6151 or mrlc@usgs.gov.

  10. National Land Cover Database 2001 (NLCD01) Imperviousness Layer Tile 3, Southwest United States: IMPV01_3

    USGS Publications Warehouse

    LaMotte, Andrew E.; Wieczorek, Michael

    2010-01-01

    This 30-meter resolution data set represents the imperviousness layer for the conterminous United States for the 2001 time period. The data have been arranged into four tiles to facilitate timely display and manipulation within a Geographic Information System, browse graphic: nlcd01-partition. The National Land Cover Data Set for 2001 was produced through a cooperative project conducted by the Multi-Resolution Land Characteristics (MRLC) Consortium. The MRLC Consortium is a partnership of Federal agencies (www.mrlc.gov), consisting of the U.S. Geological Survey (USGS), the National Oceanic and Atmospheric Administration (NOAA), the U.S. Environmental Protection Agency (USEPA), the U.S. Department of Agriculture (USDA), the U.S. Forest Service (USFS), the National Park Service (NPS), the U.S. Fish and Wildlife Service (USFWS), the Bureau of Land Management (BLM), and the USDA Natural Resources Conservation Service (NRCS). One of the primary goals of the project is to generate a current, consistent, seamless, and accurate National Land Cover Database (NLCD) circa 2001 for the United States at medium spatial resolution. For a detailed definition and discussion on MRLC and the NLCD 2001 products, refer to Homer and others (2004) and http://www.mrlc.gov/mrlc2k.asp. The NLCD 2001 was created by partitioning the United States into mapping-zones. A total of 68 mapping-zones browse graphic: nlcd01-mappingzones.jpg were delineated within the conterminous United States based on ecoregion and geographical characteristics, edge-matching features, and the size requirement of Landsat mosaics. Mapping-zones encompass the whole or parts of several states. Questions about the NLCD mapping zones can be directed to the NLCD 2001 Land Cover Mapping Team at the USGS/EROS, Sioux Falls, SD (605) 594-6151 or mrlc@usgs.gov.

  11. National Land Cover Database 2001 (NLCD01) Tree Canopy Layer Tile 3, Southwest United States: CNPY01_3

    USGS Publications Warehouse

    LaMotte, Andrew E.; Wieczorek, Michael

    2010-01-01

    This 30-meter resolution data set represents the tree canopy layer for the conterminous United States for the 2001 time period. The data have been arranged into four tiles to facilitate timely display and manipulation within a Geographic Information System, browse graphic: nlcd01-partition.jpg. The National Land Cover Data Set for 2001 was produced through a cooperative project conducted by the Multi-Resolution Land Characteristics (MRLC) Consortium. The MRLC Consortium is a partnership of Federal agencies (www.mrlc.gov), consisting of the U.S. Geological Survey (USGS), the National Oceanic and Atmospheric Administration (NOAA), the U.S. Environmental Protection Agency (USEPA), the U.S. Department of Agriculture (USDA), the U.S. Forest Service (USFS), the National Park Service (NPS), the U.S. Fish and Wildlife Service (USFWS), the Bureau of Land Management (BLM), and the USDA Natural Resources Conservation Service (NRCS). One of the primary goals of the project is to generate a current, consistent, seamless, and accurate National Land Cover Database (NLCD) circa 2001 for the United States at medium spatial resolution. For a detailed definition and discussion on MRLC and the NLCD 2001 products, refer to Homer and others (2004) and http://www.mrlc.gov/mrlc2k.asp. The NLCD 2001 was created by partitioning the United States into mapping-zones. A total of 68 mapping-zones browse graphic: nlcd01-mappingzones.jpg were delineated within the conterminous United States based on ecoregion and geographical characteristics, edge-matching features, and the size requirement of Landsat mosaics. Mapping-zones encompass the whole or parts of several states. Questions about the NLCD mapping zones can be directed to the NLCD 2001 Land Cover Mapping Team at the USGS/EROS, Sioux Falls, SD (605) 594-6151 or mrlc@usgs.gov.
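The NLCD 2001 record titles above imply a fixed tile-to-quadrant correspondence (tile 1 = Northwest, 2 = Northeast, 3 = Southwest, 4 = Southeast). A small sketch of picking the right layer file for a point of interest; the dividing meridian and parallel used here are placeholder values, not the actual tile boundaries shown in nlcd01-partition.jpg:

```python
# Placeholder split coordinates; the real four-tile partition is defined
# by the nlcd01-partition.jpg browse graphic, not by these values.
SPLIT_LON = -98.0   # hypothetical dividing meridian
SPLIT_LAT = 38.0    # hypothetical dividing parallel

def tile_number(lon: float, lat: float) -> int:
    """Quadrant numbering from the record titles: 1=NW, 2=NE, 3=SW, 4=SE."""
    north = lat >= SPLIT_LAT
    west = lon < SPLIT_LON
    if north:
        return 1 if west else 2
    return 3 if west else 4

def layer_name(prefix: str, lon: float, lat: float) -> str:
    """Compose a tile layer name, e.g. 'IMPV01_4' or 'CNPY01_2'."""
    return f"{prefix}_{tile_number(lon, lat)}"
```

For example, a point in the southeastern United States would map to the IMPV01_4 imperviousness tile and the CNPY01_4 tree canopy tile.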

  12. Global standardization measurement of cerebral spinal fluid for Alzheimer's disease: an update from the Alzheimer's Association Global Biomarkers Consortium.

    PubMed

    Carrillo, Maria C; Blennow, Kaj; Soares, Holly; Lewczuk, Piotr; Mattsson, Niklas; Oberoi, Pankaj; Umek, Robert; Vandijck, Manu; Salamone, Salvatore; Bittner, Tobias; Shaw, Leslie M; Stephenson, Diane; Bain, Lisa; Zetterberg, Henrik

    2013-03-01

    Recognizing that international collaboration is critical for the acceleration of biomarker standardization efforts and the efficient development of improved diagnosis and therapy, the Alzheimer's Association created the Global Biomarkers Standardization Consortium (GBSC) in 2010. The consortium brings together representatives of academic centers, industry, and the regulatory community with the common goal of developing internationally accepted common reference standards and reference methods for the assessment of cerebrospinal fluid (CSF) amyloid β42 (Aβ42) and tau biomarkers. Such standards are essential to ensure that analytical measurements are reproducible and consistent across multiple laboratories and across multiple kit manufacturers. Analytical harmonization for CSF Aβ42 and tau will help reduce confusion in the AD community regarding the absolute values associated with the clinical interpretation of CSF biomarker results and enable worldwide comparison of CSF biomarker results across AD clinical studies. Copyright © 2013 The Alzheimer's Association. Published by Elsevier Inc. All rights reserved.

  13. Electron donor preference of a reductive dechlorinating consortium

    USGS Publications Warehouse

    Lorah, M.M.; Majcher, E.; Jones, E.; Driedger, G.; Dworatzek, S.; Graves, D.

    2005-01-01

    A wetland sediment-derived microbial consortium was developed by the USGS and propagated in vitro to large quantities by SiREM Laboratory for use in bioaugmentation applications. The consortium had the capacity to completely dechlorinate 1,1,2,2-tetrachloroethane, tetrachloroethylene, trichloroethylene, 1,1,2-trichloroethane, cis- and trans-1,2-dichloroethylene, 1,1-dichloroethylene, 1,2-dichloroethane, vinyl chloride, carbon tetrachloride, and chloroform. A suite of electron donors with characteristics useful for bioaugmentation applications was tested. The electron donors included lactate (the donor used during WBC-2 development), ethanol, chitin (Chitorem), hydrogen releasing compound (HRC), emulsified vegetable oil (Newman Zone), and hydrogen gas. Ethanol, lactate, and chitin were particularly effective in stimulating, supporting, and sustaining reductive dechlorination of the broad suite of chemicals that WBC-2 biodegraded. Chitorem was the most effective "slow release" electron donor tested. This is an abstract of a paper presented at the Proceedings of the 8th International In Situ and On-Site Bioremediation Symposium (Baltimore, MD, June 6-9, 2005).

  14. Geosciences Information Network (GIN): A modular, distributed, interoperable data network for the geosciences

    NASA Astrophysics Data System (ADS)

    Allison, M.; Gundersen, L. C.; Richard, S. M.; Dickinson, T. L.

    2008-12-01

    A coalition of the state geological surveys (AASG), the U.S. Geological Survey (USGS), and partners will receive NSF funding over 3 years under the INTEROP solicitation to start building the Geoscience Information Network (www.geoinformatics.info/gin), a distributed, interoperable data network. The GIN project will develop standardized services to link existing and in-progress components using a small set of standards and protocols, and will work with data providers to implement these services. The key components of this network are 1) catalog system(s) for data discovery; 2) service definitions for interfaces for searching catalogs and accessing resources; 3) shared interchange formats to encode information for transmission (e.g., various XML markup languages); 4) data providers that publish information using standardized services defined by the network; and 5) client applications adapted to use information resources provided by the network. The GIN will integrate and use catalog resources that currently exist or are in development. We are working with the USGS National Geologic Map Database's existing map catalog, with the USGS National Geological and Geophysical Data Preservation Program, which is developing a metadata catalog (National Digital Catalog) for geoscience information resource discovery, and with the GEON catalog. Existing interchange formats will be used, such as GeoSciML, ChemML, and the Open Geospatial Consortium sensor, observation, and measurement markup languages. Client application development will be fostered by collaboration with industry and academic partners. The GIN project will focus on the remaining aspects of the system (service definitions, and assistance to data providers to implement the services and bring content online) and on system integration of the modules.
Initial formal collaborators include the OneGeology-Europe consortium of 27 nations that is building a comparable network under the EU INSPIRE initiative, GEON, EarthChem, and GIS software company ESRI. OneGeology-Europe and GIN have agreed to integrate their networks, effectively establishing global standards among geological surveys. ESRI is creating a Geology Data Model for ArcGIS software to be compatible with GIN, and other companies have expressed interest in adapting their services, applications, and clients to take advantage of the large data resources planned to become available through GIN.

  15. The Asia Pacific Academic Consortium for Global Public Health and medicine: stabilizing south-south academic collaboration.

    PubMed

    Patrick, Walter K

    2011-09-01

    Developmental strategies over the last 4 decades have generally tended to transfer knowledge and technology along north-south axes as trickle-down theories in development, especially in health knowledge transfers, prevailed. Limited efforts in development assistance for health (DAH) were made to promote south-south cooperation for basic health needs. Globalization, with increased educational networks and development health assistance, has enhanced the potential for more effective south-south partnerships for health. The stages of development in a consortium and key catalysts in the metamorphosis to a south-south partnership are identified: leadership, resources, expertise, visibility, participation, and the dynamism of a critical mass of young professionals. Copyright © 2011 Elsevier Inc. All rights reserved.

  16. Biomass power for rural development: Phase 2. Technical progress report, April 1--June 30, 1998

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neuhauser, E.

    1998-11-01

    The project undertaken by the Salix Consortium is a multi-phased, multi-partner endeavor. Phase-1 focused on initial development and testing of the technology and agreements necessary to demonstrate commercial willow production in Phase-2. The Phase-1 objectives have been successfully completed: preparing final design plans for two utility pulverized coal boilers, developing fuel supply plans for the project, obtaining power production commitments from the power companies for Phase-2, obtaining construction and environmental permits, and developing an experimental strategy for crop production and power generation improvements needed to assure commercial success. The R and D effort also addresses environmental issues pertaining to introduction of the willow energy system. Beyond those Phase-1 requirements the Consortium has already successfully demonstrated cofiring at Greenidge Station and developed the required nursery capacity for acreage scale-up. This past summer 105 acres were prepared in advance for the spring planting in 1998. Having completed the above tasks, the Consortium is well positioned to begin Phase-2. In Phase-2 every aspect of willow production and power generation from willow will be demonstrated. The ultimate objective of Phase-2 is to transition the work performed under the Rural Energy for the Future project into a thriving, self-supported energy crop enterprise.

  17. Development of an Efficient Bacterial Consortium for the Potential Remediation of Hydrocarbons from Contaminated Sites

    PubMed Central

    Patowary, Kaustuvmani; Patowary, Rupshikha; Kalita, Mohan C.; Deka, Suresh

    2016-01-01

    The intrinsic biodegradability of hydrocarbons and the distribution of proficient degrading microorganisms in the environment are very crucial for the implementation of bioremediation practices. Among others, one of the most favorable methods that can enhance the effectiveness of bioremediation of hydrocarbon-contaminated environments is the application of biosurfactant-producing microbes. In the present study, the biodegradation capacities of native bacterial consortia toward total petroleum hydrocarbons (TPH), with special emphasis on polyaromatic hydrocarbons, were determined. The purpose of the study was to isolate TPH-degrading bacterial strains from various petroleum-contaminated soils of Assam, India and to develop a robust bacterial consortium for bioremediation of crude oil of this native land. From a total of 23 bacterial isolates obtained from three different hydrocarbon-contaminated samples, five isolates, namely KS2, PG1, PG5, R1, and R2, were selected as efficient crude oil degraders with respect to their growth on crude oil enriched samples. Isolates KS2, PG1, and R2 are biosurfactant producers, and PG5 and R1 are non-producers. Fourteen different consortia were designed involving both biosurfactant-producing and non-producing isolates. Consortium 10, which comprises two Bacillus strains, namely Bacillus pumilus KS2 and B. cereus R2 (identified by 16S rRNA sequencing), showed the best result in the desired degradation of crude oil. The consortium degraded up to 84.15% of TPH after 5 weeks of incubation, as revealed by gravimetric analysis. FTIR (Fourier transform infrared) and GC-MS (gas chromatography-mass spectrometry) analyses corroborated the gravimetric data, revealing that the consortium removed a wide range of petroleum hydrocarbons, both aliphatic and aromatic, in comparison with an abiotic control. PMID:27471499
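The 84.15% figure above comes from gravimetric analysis, which compares residual oil mass against the initial charge. A minimal sketch of that calculation (the masses below are hypothetical, chosen only to reproduce the reported percentage):

```python
def tph_degradation_percent(initial_tph_g: float, residual_tph_g: float) -> float:
    """Percent TPH removed, as estimated from a gravimetric residual-oil analysis."""
    if initial_tph_g <= 0:
        raise ValueError("initial TPH mass must be positive")
    return (initial_tph_g - residual_tph_g) / initial_tph_g * 100.0

# Hypothetical masses: 2.0 g crude oil charged, 0.317 g residual after incubation
print(round(tph_degradation_percent(2.0, 0.317), 2))  # 84.15
```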

  18. International Cancer Genome Consortium Data Portal--a one-stop shop for cancer genomics data.

    PubMed

    Zhang, Junjun; Baran, Joachim; Cros, A; Guberman, Jonathan M; Haider, Syed; Hsu, Jack; Liang, Yong; Rivkin, Elena; Wang, Jianxin; Whitty, Brett; Wong-Erasmus, Marie; Yao, Long; Kasprzyk, Arek

    2011-01-01

    The International Cancer Genome Consortium (ICGC) is a collaborative effort to characterize genomic abnormalities in 50 different cancer types. To make these data available, the ICGC has created the ICGC Data Portal. Powered by the BioMart software, the Data Portal allows each ICGC member institution to manage and maintain its own databases locally, while seamlessly presenting all the data in a single access point for users. The Data Portal currently contains data from 24 cancer projects, including ICGC, The Cancer Genome Atlas (TCGA), Johns Hopkins University, and the Tumor Sequencing Project, covering 3478 genomes and 13 cancer types and subtypes. Available open-access data types include simple somatic mutations, copy number alterations, structural rearrangements, gene expression, microRNAs, DNA methylation, and exon junctions. Additionally, simple germline variations are available as controlled-access data. The Data Portal uses a web-based graphical user interface (GUI) to offer researchers multiple ways to quickly and easily search and analyze the available data. The web interface can assist in constructing complicated queries across multiple data sets. Several application programming interfaces are also available for programmatic access. Here we describe the organization, functionality, and capabilities of the ICGC Data Portal.
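BioMart backends accept queries expressed as a small XML document listing a dataset, filters, and attributes. A sketch of building such a query string (the dataset, filter, and attribute names here are hypothetical illustrations, not documented ICGC identifiers):

```python
import xml.etree.ElementTree as ET

def biomart_query(dataset: str, filters: dict, attributes: list) -> str:
    """Build a BioMart-style XML query string for submission to a mart service."""
    query = ET.Element("Query", virtualSchemaName="default",
                       formatter="TSV", header="0", uniqueRows="0")
    ds = ET.SubElement(query, "Dataset", name=dataset, interface="default")
    for name, value in filters.items():
        ET.SubElement(ds, "Filter", name=name, value=value)
    for attr in attributes:
        ET.SubElement(ds, "Attribute", name=attr)
    return ET.tostring(query, encoding="unicode")

# Hypothetical dataset/filter/attribute names
xml_q = biomart_query("icgc_ssm", {"gene_affected": "KRAS"},
                      ["chromosome", "mutation_type"])
print("icgc_ssm" in xml_q and "KRAS" in xml_q)  # True
```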

  19. Improving safety of aircraft engines: a consortium approach

    NASA Astrophysics Data System (ADS)

    Brasche, Lisa J. H.

    1996-11-01

    With over seven million departures per year, air transportation has become not a luxury but a standard mode of transportation for the United States. A critical aspect of modern air transport is the jet engine, a complex engineered component that has enabled the rapid travel to which we have all become accustomed. One of the enabling technologies for safe air travel is nondestructive evaluation (NDE), which includes various inspection techniques used to assess the health or integrity of a structure, component, or material. The Engine Titanium Consortium (ETC) was established in 1993 to respond to recommendations made by the Federal Aviation Administration (FAA) Titanium Rotating Components Review Team (TRCRT) for improvements in inspection of engine titanium. Several recent accomplishments of the ETC are detailed in this paper. The objective of the Engine Titanium Consortium is to provide the FAA and the manufacturers with reliable and cost-effective new methods and/or improvements in mature methods for detecting cracks, inclusions, and imperfections in titanium. The consortium consists of a team of researchers from academia and industry (Iowa State University, Allied Signal Propulsion Engines, General Electric Aircraft Engines, and Pratt & Whitney Engines) who work together to develop program priorities, organize a program plan, conduct the research, and implement the solutions. The true advantage of the consortium approach is that it brings together the research talents of academia and the engineering talents of industry to tackle a technology-base problem. In bringing industrial competitors together, the consortium ensures that the research results, which have safety implications and result from FAA funds, are shared and become part of the public domain.

  20. Evaluating efforts to diversify the biomedical workforce: the role and function of the Coordination and Evaluation Center of the Diversity Program Consortium.

    PubMed

    McCreath, Heather E; Norris, Keith C; Calderón, Nancy E; Purnell, Dawn L; Maccalla, Nicole M G; Seeman, Teresa E

    2017-01-01

    The National Institutes of Health (NIH)-funded Diversity Program Consortium (DPC) includes a Coordination and Evaluation Center (CEC) to conduct a longitudinal evaluation of the two signature national NIH initiatives, the Building Infrastructure Leading to Diversity (BUILD) and the National Research Mentoring Network (NRMN) programs, designed to promote diversity in the NIH-funded biomedical, behavioral, clinical, and social sciences research workforce. Evaluation is central to understanding the impact of the consortium activities. This article reviews the role and function of the CEC and the collaborative processes and achievements critical to establishing empirical evidence regarding the efficacy of federally funded, quasi-experimental interventions across multiple sites. The integrated DPC evaluation is particularly significant because it is a collaboratively developed Consortium-Wide Evaluation Plan and the first hypothesis-driven, large-scale, systemic, national longitudinal evaluation of training programs in the history of NIH/National Institute of General Medical Sciences. To guide the longitudinal evaluation, a CEC-led literature review defined key indicators at critical training and career transition points, or Hallmarks of Success. The multidimensional, comprehensive evaluation of the impact of the DPC framed by these Hallmarks is described. This evaluation uses both established and newly developed common measures across sites, and rigorous quasi-experimental designs within novel multi-method (qualitative and quantitative) approaches. The CEC also promotes shared learning among Consortium partners through working groups and provides technical assistance to support high-quality process and outcome evaluation internal to each program. Finally, the CEC is responsible for developing high-impact dissemination channels for best practices to inform peer institutions, NIH, and other key national and international stakeholders.
A strong longitudinal evaluation across programs allows the summative assessment of outcomes, an understanding of the factors common to interventions that do and do not lead to success, and elucidation of the processes developed for data collection and management. This will provide a framework for the assessment of other training programs and have national implications for transforming biomedical research training.

  1. LinkedOmics: analyzing multi-omics data within and across 32 cancer types.

    PubMed

    Vasaikar, Suhas V; Straub, Peter; Wang, Jing; Zhang, Bing

    2018-01-04

    The LinkedOmics database contains multi-omics data and clinical data for 32 cancer types and a total of 11 158 patients from The Cancer Genome Atlas (TCGA) project. It is also the first multi-omics database that integrates mass spectrometry (MS)-based global proteomics data generated by the Clinical Proteomic Tumor Analysis Consortium (CPTAC) on selected TCGA tumor samples. In total, LinkedOmics has more than a billion data points. To allow comprehensive analysis of these data, we developed three analysis modules in the LinkedOmics web application. The LinkFinder module allows flexible exploration of associations between a molecular or clinical attribute of interest and all other attributes, providing the opportunity to analyze and visualize associations between billions of attribute pairs for each cancer cohort. The LinkCompare module enables easy comparison of the associations identified by LinkFinder, which is particularly useful in multi-omics and pan-cancer analyses. The LinkInterpreter module transforms identified associations into biological understanding through pathway and network analysis. Using five case studies, we demonstrate that LinkedOmics provides a unique platform for biologists and clinicians to access, analyze and compare cancer multi-omics data within and across tumor types. LinkedOmics is freely available at http://www.linkedomics.org. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
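The LinkFinder module described above computes associations between one attribute and all others; for two continuous attributes, a standard choice of association measure is the Pearson correlation. A minimal, self-contained sketch of that pairwise computation (the sample values below are hypothetical, not LinkedOmics data):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences,
    the kind of pairwise association a LinkFinder-style module computes."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

# Hypothetical gene-expression and protein-abundance values across five samples
gene_expr = [2.1, 3.4, 1.8, 4.0, 2.9]
protein   = [1.9, 3.1, 2.0, 3.8, 3.0]
print(round(pearson(gene_expr, protein), 3))  # 0.975
```

In a real pan-cancer analysis this computation is repeated for billions of attribute pairs, so the portal precomputes and indexes the results rather than evaluating them on demand.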

  2. Monitoring, Analyzing and Assessing Radiation Belt Loss and Energization

    NASA Astrophysics Data System (ADS)

    Daglis, I.; Balasis, G.; Bourdarie, S.; Horne, R.; Khotyaintsev, Y.; Mann, I.; Santolik, O.; Turner, D.; Anastasiadis, A.; Georgiou, M.; Giannakis, O.; Papadimitriou, C.; Ropokis, G.; Sandberg, I.; Angelopoulos, V.; Glauert, S.; Grison, B.; Kersten, T.; Kolmasova, I.; Lazaro, D.; Mella, M.; Ozeke, L.; Usanova, M.

    2013-09-01

    We present the concept, objectives, and expected impact of the MAARBLE (Monitoring, Analyzing and Assessing Radiation Belt Loss and Energization) project, which is being implemented by a consortium of seven institutions (five European, one Canadian, and one US) with support from the European Community's Seventh Framework Programme. The MAARBLE project employs multi-spacecraft monitoring of the geospace environment, complemented by ground-based monitoring, in order to analyze and assess the physical mechanisms leading to radiation belt particle energization and loss. Particular attention is paid to the role of ULF/VLF waves. A database containing properties of the waves is being created and will be made available to the scientific community. Based on the wave database, a statistical model of the wave activity, dependent on the level of geomagnetic activity, solar wind forcing, and magnetospheric region, will be developed. Multi-spacecraft particle measurements will be incorporated into data assimilation tools, leading to new understanding of the causal relationships between ULF/VLF waves and radiation belt dynamics. Data assimilation techniques have proven to be a valuable tool in the field of radiation belts, able to guide the 'best' estimate of the state of a complex system. The MAARBLE collaborative research project has received funding from the European Union's Seventh Framework Programme (FP7-SPACE-2011-1) under grant agreement no. 284520.
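A statistical wave model of the kind described, dependent on geomagnetic activity, can be reduced to binning observed wave power by an activity index and averaging within each bin. A toy sketch of that step (the index values and power units are hypothetical):

```python
from collections import defaultdict

def bin_wave_power_by_kp(records):
    """Average wave power per geomagnetic-activity (Kp) bin.
    `records` is an iterable of (kp_index, wave_power) pairs."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for kp, power in records:
        sums[kp] += power
        counts[kp] += 1
    return {kp: sums[kp] / counts[kp] for kp in sums}

# Hypothetical observations: low activity (Kp=1) vs. elevated activity (Kp=4)
model = bin_wave_power_by_kp([(1, 0.2), (1, 0.4), (4, 1.5), (4, 2.5)])
print(round(model[1], 2), model[4])  # 0.3 2.0
```

The full model would bin additionally by solar wind conditions and magnetospheric region, but the aggregation pattern is the same.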

  3. Common Variants in Mendelian Kidney Disease Genes and Their Association with Renal Function

    PubMed Central

    Fuchsberger, Christian; Köttgen, Anna; O’Seaghdha, Conall M.; Pattaro, Cristian; de Andrade, Mariza; Chasman, Daniel I.; Teumer, Alexander; Endlich, Karlhans; Olden, Matthias; Chen, Ming-Huei; Tin, Adrienne; Kim, Young J.; Taliun, Daniel; Li, Man; Feitosa, Mary; Gorski, Mathias; Yang, Qiong; Hundertmark, Claudia; Foster, Meredith C.; Glazer, Nicole; Isaacs, Aaron; Rao, Madhumathi; Smith, Albert V.; O’Connell, Jeffrey R.; Struchalin, Maksim; Tanaka, Toshiko; Li, Guo; Hwang, Shih-Jen; Atkinson, Elizabeth J.; Lohman, Kurt; Cornelis, Marilyn C.; Johansson, Åsa; Tönjes, Anke; Dehghan, Abbas; Couraki, Vincent; Holliday, Elizabeth G.; Sorice, Rossella; Kutalik, Zoltan; Lehtimäki, Terho; Esko, Tõnu; Deshmukh, Harshal; Ulivi, Sheila; Chu, Audrey Y.; Murgia, Federico; Trompet, Stella; Imboden, Medea; Kollerits, Barbara; Pistis, Giorgio; Harris, Tamara B.; Launer, Lenore J.; Aspelund, Thor; Eiriksdottir, Gudny; Mitchell, Braxton D.; Boerwinkle, Eric; Schmidt, Helena; Hofer, Edith; Hu, Frank; Demirkan, Ayse; Oostra, Ben A.; Turner, Stephen T.; Ding, Jingzhong; Andrews, Jeanette S.; Freedman, Barry I.; Giulianini, Franco; Koenig, Wolfgang; Illig, Thomas; Döring, Angela; Wichmann, H.-Erich; Zgaga, Lina; Zemunik, Tatijana; Boban, Mladen; Minelli, Cosetta; Wheeler, Heather E.; Igl, Wilmar; Zaboli, Ghazal; Wild, Sarah H.; Wright, Alan F.; Campbell, Harry; Ellinghaus, David; Nöthlings, Ute; Jacobs, Gunnar; Biffar, Reiner; Ernst, Florian; Homuth, Georg; Kroemer, Heyo K.; Nauck, Matthias; Stracke, Sylvia; Völker, Uwe; Völzke, Henry; Kovacs, Peter; Stumvoll, Michael; Mägi, Reedik; Hofman, Albert; Uitterlinden, Andre G.; Rivadeneira, Fernando; Aulchenko, Yurii S.; Polasek, Ozren; Hastie, Nick; Vitart, Veronique; Helmer, Catherine; Wang, Jie Jin; Stengel, Bénédicte; Ruggiero, Daniela; Bergmann, Sven; Kähönen, Mika; Viikari, Jorma; Nikopensius, Tiit; Province, Michael; Colhoun, Helen; Doney, Alex; Robino, Antonietta; Krämer, Bernhard K.; Portas, Laura; Ford, Ian; Buckley, Brendan M.; 
Adam, Martin; Thun, Gian-Andri; Paulweber, Bernhard; Haun, Margot; Sala, Cinzia; Mitchell, Paul; Ciullo, Marina; Vollenweider, Peter; Raitakari, Olli; Metspalu, Andres; Palmer, Colin; Gasparini, Paolo; Pirastu, Mario; Jukema, J. Wouter; Probst-Hensch, Nicole M.; Kronenberg, Florian; Toniolo, Daniela; Gudnason, Vilmundur; Shuldiner, Alan R.; Coresh, Josef; Schmidt, Reinhold; Ferrucci, Luigi; van Duijn, Cornelia M.; Borecki, Ingrid; Kardia, Sharon L.R.; Liu, Yongmei; Curhan, Gary C.; Rudan, Igor; Gyllensten, Ulf; Wilson, James F.; Franke, Andre; Pramstaller, Peter P.; Rettig, Rainer; Prokopenko, Inga; Witteman, Jacqueline; Hayward, Caroline; Ridker, Paul M.; Bochud, Murielle; Heid, Iris M.; Siscovick, David S.; Fox, Caroline S.; Kao, W. Linda; Böger, Carsten A.

    2013-01-01

    Many common genetic variants identified by genome-wide association studies for complex traits map to genes previously linked to rare inherited Mendelian disorders. A systematic analysis of common single-nucleotide polymorphisms (SNPs) in genes responsible for Mendelian diseases with kidney phenotypes has not been performed. We thus developed a comprehensive database of genes for Mendelian kidney conditions and evaluated the association between common genetic variants within these genes and kidney function in the general population. Using the Online Mendelian Inheritance in Man database, we identified 731 unique disease entries related to specific renal search terms and confirmed a kidney phenotype in 218 of these entries, corresponding to mutations in 258 genes. We interrogated common SNPs (minor allele frequency >5%) within these genes for association with the estimated GFR in 74,354 European-ancestry participants from the CKDGen Consortium. However, the top four candidate SNPs (rs6433115 at LRP2, rs1050700 at TSC1, rs249942 at PALB2, and rs9827843 at ROBO2) did not achieve significance in a stage 2 meta-analysis performed in 56,246 additional independent individuals, indicating that these common SNPs are not associated with estimated GFR. The effect of less common or rare variants in these genes on kidney function in the general population and disease-specific cohorts requires further research. PMID:24029420
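The analysis above restricts attention to common SNPs with minor allele frequency (MAF) above 5%. From diploid genotype data, the MAF is the allele frequency folded to the smaller of the two alleles; a minimal sketch (the genotype vector is hypothetical):

```python
def minor_allele_frequency(genotypes):
    """MAF from diploid genotype counts (0, 1, or 2 copies of the alternate
    allele per individual), folded so the minor allele is reported."""
    alt = sum(genotypes)
    total = 2 * len(genotypes)          # two alleles per diploid individual
    freq = alt / total
    return min(freq, 1 - freq)

# Hypothetical genotypes for 10 individuals at one SNP
genos = [0, 0, 1, 0, 0, 0, 1, 0, 0, 0]
maf = minor_allele_frequency(genos)
print(maf, maf > 0.05)  # 0.1 True
```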

  4. The U.S. Geological Survey Monthly Water Balance Model Futures Portal

    USGS Publications Warehouse

    Bock, Andrew R.; Hay, Lauren E.; Markstrom, Steven L.; Emmerich, Christopher; Talbert, Marian

    2017-05-03

    The U.S. Geological Survey Monthly Water Balance Model Futures Portal (https://my.usgs.gov/mows/) is a user-friendly interface that summarizes monthly historical and simulated future conditions for seven hydrologic and meteorological variables (actual evapotranspiration, potential evapotranspiration, precipitation, runoff, snow water equivalent, atmospheric temperature, and streamflow) at locations across the conterminous United States (CONUS). The estimates of these hydrologic and meteorological variables were derived using a Monthly Water Balance Model (MWBM), a modular system that simulates monthly estimates of components of the hydrologic cycle using monthly precipitation and atmospheric temperature inputs. Precipitation and atmospheric temperature from 222 climate datasets spanning historical conditions (1952 through 2005) and simulated future conditions (2020 through 2099) were summarized for hydrographic features and used to drive the MWBM for the CONUS. The MWBM input and output variables were organized into an open-access database. An Open Geospatial Consortium, Inc., Web Feature Service allows the querying and identification of hydrographic features across the CONUS. To connect the Web Feature Service to the open-access database, a user interface, the Monthly Water Balance Model Futures Portal, was developed to allow the dynamic generation of summary files and plots based on plot type, geographic location, specific climate datasets, period of record, MWBM variable, and other options. Both the plots and the data files are made available to the user for download.
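An OGC Web Feature Service like the one described is queried over HTTP with key-value-pair GetFeature requests. A sketch of constructing such a request URL (the base URL and feature type name below are hypothetical, not the portal's actual endpoint):

```python
from urllib.parse import urlencode

def wfs_getfeature_url(base_url: str, type_name: str, bbox=None,
                       version: str = "2.0.0") -> str:
    """Build an OGC WFS GetFeature request URL (KVP encoding, WFS 2.0)."""
    params = {
        "service": "WFS",
        "version": version,
        "request": "GetFeature",
        "typeNames": type_name,   # WFS 2.0 parameter name (1.1 used typeName)
    }
    if bbox:
        params["bbox"] = ",".join(str(v) for v in bbox)
    return base_url + "?" + urlencode(params)

# Hypothetical endpoint and feature type; bbox selects a lon/lat window
url = wfs_getfeature_url("https://example.usgs.gov/wfs",
                         "mwbm:hydrographic_features",
                         bbox=(-105.0, 39.0, -104.0, 40.0))
print("request=GetFeature" in url)  # True
```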

  5. Developing a mesophilic co-culture for direct conversion of cellulose to butanol in consolidated bioprocess.

    PubMed

    Wang, Zhenyu; Cao, Guangli; Zheng, Ju; Fu, Defeng; Song, Jinzhu; Zhang, Junzheng; Zhao, Lei; Yang, Qian

    2015-01-01

    Consolidated bioprocessing (CBP) of butanol production from cellulosic biomass is a promising strategy for cost saving compared to other processes featuring dedicated cellulase production. CBP requires microbial strains capable of hydrolyzing biomass with enzymes produced on their own, with high rate and high conversion, while simultaneously producing a desired product at high yield. However, currently reported butanol-producing candidates are unable to utilize cellulose as a sole carbon and energy source. Consequently, developing a co-culture system using different microorganisms, taking advantage of their specific metabolic capacities to produce butanol directly from cellulose in a consolidated bioprocess, is of great interest. This study was mainly undertaken to find organisms complementary to the butanol producer that allow simultaneous saccharification and fermentation of cellulose to butanol in co-culture under mesophilic conditions. Accordingly, a highly efficient and stable cellulose-degrading consortium, N3, was first developed by multiple subcultures. Subsequently, functional microorganisms with 16S rRNA sequences identical to the denaturing gradient gel electrophoresis (DGGE) profile were isolated from consortium N3. The isolate Clostridium celerecrescens N3-2, which exhibited higher cellulose-degrading capability, was thus chosen as the partner strain for butanol production with Clostridium acetobutylicum ATCC 824. Meanwhile, the established stable consortium N3 was also investigated for butanol production by co-culturing with C. acetobutylicum ATCC 824. Butanol was produced from cellulose when C. acetobutylicum ATCC 824 was co-cultured with either consortium N3 or C. celerecrescens N3-2. Co-culturing C. acetobutylicum ATCC 824 with the stable consortium N3 resulted in a relatively higher butanol concentration, 3.73 g/L, and a higher production yield, 0.145 g/g of glucose equivalent. The newly isolated microbial consortium N3 and strain C. celerecrescens N3-2 displayed effective degradation of cellulose and produced considerable amounts of butanol when co-cultured with C. acetobutylicum ATCC 824. This is the first report of the application of co-culture to produce butanol directly from cellulose under mesophilic conditions. Our results indicate that co-culture of a mesophilic cellulolytic microbe and butanol-producing clostridia provides a technically feasible and more simplified way of producing butanol directly from cellulose.
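As a quick consistency check on the reported numbers, the 3.73 g/L butanol titer and the 0.145 g/g yield together imply the glucose-equivalent substrate consumed (a back-of-the-envelope sketch, not part of the original analysis):

```python
# Reported results: 3.73 g/L butanol at a yield of 0.145 g butanol per g
# glucose equivalent. Substrate consumed = titer / yield.
butanol_g_per_l = 3.73
yield_g_per_g = 0.145
substrate_consumed_g_per_l = butanol_g_per_l / yield_g_per_g
print(round(substrate_consumed_g_per_l, 1))  # 25.7
```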

  6. GAS STORAGE TECHNOLOGY CONSORTIUM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robert W. Watson

    2004-04-23

    Gas storage is a critical element in the natural gas industry. Producers, transmission and distribution companies, marketers, and end users all benefit directly from the load balancing function of storage. The unbundling process has fundamentally changed the way storage is used and valued. As an unbundled service, the value of storage is being recovered at rates that reflect its value. Moreover, the marketplace has differentiated between various types of storage services, and has increasingly rewarded flexibility, safety, and reliability. The size of the natural gas market has increased and is projected to continue to increase towards 30 trillion cubic feet (TCF) over the next 10 to 15 years. Much of this increase is projected to come from electric generation, particularly peaking units. Gas storage, particularly the flexible services that are most suited to electric loads, is critical in meeting the needs of these new markets. In order to address the gas storage needs of the natural gas industry, an industry-driven consortium was created: the Gas Storage Technology Consortium (GSTC). The objective of the GSTC is to provide a means to accomplish industry-driven research and development designed to enhance operational flexibility and deliverability of the Nation's gas storage system, and provide a cost-effective, safe, and reliable supply of natural gas to meet domestic demand. To accomplish this objective, the project is divided into three phases that are managed and directed by the GSTC Coordinator. Base funding for the consortium is provided by the U.S. Department of Energy (DOE). In addition, funding is anticipated from the Gas Technology Institute (GTI). The first phase, Phase 1A, was initiated on September 30, 2003, and is scheduled for completion on March 31, 2004.
Phase 1A of the project includes the creation of the GSTC structure, development of a constitution (by-laws) for the consortium, and development and refinement of a technical approach (work plan) for deliverability enhancement and reservoir management. This report covers the first 3 months of the project, September 30, 2003, through December 31, 2003. During this 3-month period, the first meeting of individuals representing the storage industry, universities, and the Department of Energy was held. The purpose of this meeting was to initiate the dialogue necessary for the creation and adoption of a constitution to govern the activities of the consortium.

  7. GAS STORAGE TECHNOLOGY CONSORTIUM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robert W. Watson

    2004-04-17

    Gas storage is a critical element in the natural gas industry. Producers, transmission and distribution companies, marketers, and end users all benefit directly from the load balancing function of storage. The unbundling process has fundamentally changed the way storage is used and valued. As an unbundled service, the value of storage is being recovered at rates that reflect its value. Moreover, the marketplace has differentiated between various types of storage services, and has increasingly rewarded flexibility, safety, and reliability. The size of the natural gas market has increased and is projected to continue to increase towards 30 trillion cubic feet (TCF) over the next 10 to 15 years. Much of this increase is projected to come from electric generation, particularly peaking units. Gas storage, particularly the flexible services that are most suited to electric loads, is critical in meeting the needs of these new markets. In order to address the gas storage needs of the natural gas industry, an industry-driven consortium was created: the Gas Storage Technology Consortium (GSTC). The objective of the GSTC is to provide a means to accomplish industry-driven research and development designed to enhance operational flexibility and deliverability of the Nation's gas storage system, and provide a cost-effective, safe, and reliable supply of natural gas to meet domestic demand. To accomplish this objective, the project is divided into three phases that are managed and directed by the GSTC Coordinator. Base funding for the consortium is provided by the U.S. Department of Energy (DOE). In addition, funding is anticipated from the Gas Technology Institute (GTI). The first phase, Phase 1A, was initiated on September 30, 2003, and is scheduled for completion on March 31, 2004.
Phase 1A of the project includes the creation of the GSTC structure, development of a constitution (by-laws) for the consortium, and development and refinement of a technical approach (work plan) for deliverability enhancement and reservoir management. This report covers the second 3 months of the project, December 31, 2003, through March 31, 2004. During this 3-month period, the dialogue among individuals representing the storage industry, universities, and the Department of Energy continued, resulting in a constitution for the operation of the consortium and a draft of the initial Request for Proposals (RFP).

  8. Consortium for materials development in space interaction with Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Lundquist, Charles A.; Seaquist, Valerie

    1992-01-01

    The Consortium for Materials Development in Space (CMDS) is one of seventeen Centers for the Commercial Development of Space (CCDS) sponsored by the Office of Commercial Programs of NASA. The CMDS formed at the University of Alabama in Huntsville in the fall of 1985. The Consortium activities therefore will have progressed for over a decade by the time Space Station Freedom (SSF) begins operation. The topic to be addressed here is: what are the natural, mutually productive relationships between the CMDS and SSF? For management and planning purposes, the Consortium organizes its activities into a number of individual projects. Normally, each project has a team of personnel from industry, university, and often government organizations. This is true for both product-oriented materials projects and for infrastructure projects. For various projects, Space Station offers specific mutually productive relationships. First, SSF can provide a site for commercial operations that have evolved as a natural stage in the life cycle of individual projects. Efficiency and associated cost control lead to another important option. With SSF in place, there is the possibility to leave major parts of processing equipment in SSF, and only bring materials to SSF for processing and return the treated materials to Earth. This saves the transportation costs of repeatedly carrying heavy equipment to orbit and back to the ground. Another generic feature of commercial viability can be the general need to accomplish large-throughput or large-scale operations. The size of SSF lends itself to such needs. Also, in addition to processing equipment, some of the other infrastructure capabilities developed in CCDS projects may be applied on SSF to support product activities. The larger SSF program may derive mutual benefits from these infrastructure abilities.

  9. 76 FR 66932 - The National Cancer Institute (NCI) Announces the Initiation of a Public Private Industry...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-28

    ...The Alliance for Nanotechnology in Cancer of the National Cancer Institute (NCI) is initiating a public private industry partnership called TONIC (Translation Of Nanotechnology In Cancer) to promote translational research and development opportunities of nanotechnology-based cancer solutions. An immediate consequence of this effort will be the formation of a consortium involving government, pharmaceutical, and biotechnology companies. This consortium will evaluate promising nanotechnology platforms and facilitate their successful translation from academic research to the clinical environment, resulting in safe, timely, effective and novel diagnosis and treatment options for cancer patients. The purpose of this notice is to inform the community about the Alliance for Nanotechnology in Cancer of NCI's intention to form the consortium and to invite eligible companies (as defined in last paragraph) to participate.

  10. Institutionalization of Reduction of Total Ownership Costs (R-TOC) Principles. Part 1: Lessons Learned from Special Interest Programs

    DTIC Science & Technology

    2010-12-01

    Life Cycle Cost Process Model (Austin, TX: The Consortium for Advanced Management International) 6 November 2009. 8 The framework begins with...Hendricks, James R. Involving the Extended Value Chain in a Target Costing/ Life Cycle Cost Process Model. Austin, TX: The Consortium for Advanced ...can have on reducing ownership costs in hundreds of other DOD programs. The early life -cycle phases (requirements/concept development) are often the

  11. Rapid Generation and Testing of a Lassa Fever Vaccine Using VaxCelerate Platform

    DTIC Science & Technology

    2014-08-28

    essentially the same way each time but is capable of producing effective vaccine responses to a range of pathogens, and to do this without the use of...this distributed vaccine development consortium to rapidly produce and test a novel vaccine of relevance to public health responses. In parallel...with this effort, the consortium produced and tested a modified version of its self-assembling vaccine protein that used a subunit of the full

  12. Acceleration, Transport, Forecasting and Impact of solar energetic particles in the framework of the 'HESPERIA' HORIZON 2020 project

    NASA Astrophysics Data System (ADS)

    Malandraki, Olga; Klein, Karl-Ludwig; Vainio, Rami; Agueda, Neus; Nunez, Marlon; Heber, Bernd; Buetikofer, Rolf; Sarlanis, Christos; Crosby, Norma

    2017-04-01

    High-energy solar energetic particles (SEPs) emitted from the Sun are a major space weather hazard motivating the development of predictive capabilities. In this work, the current state of knowledge on the origin and forecasting of SEP events will be reviewed. Subsequently, we will present the EU HORIZON2020 HESPERIA (High Energy Solar Particle Events foRecastIng and Analysis) project, its structure, its main scientific objectives and forecasting operational tools, as well as the added value to SEP research both from the observational as well as the SEP modelling perspective. The project addresses through multi-frequency observations and simulations the chain of processes from particle acceleration in the corona, particle transport in the magnetically complex corona and interplanetary space to the detection near 1 AU. Furthermore, publicly available software to invert neutron monitor observations of relativistic SEPs to physical parameters that can be compared with space-borne measurements at lower energies is provided for the first time by HESPERIA. In order to achieve these goals, HESPERIA is exploiting already available large datasets stored in databases such as the neutron monitor database (NMDB) and SEPServer that were developed under EU FP7 projects from 2008 to 2013. Forecasting results of the two novel SEP operational forecasting tools published via the consortium server of 'HESPERIA' will be presented, as well as some scientific key results on the acceleration, transport and impact on Earth of high-energy particles. Acknowledgement: This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 637324.

  13. Recommendations From the International Consortium on Professional Nursing Practice in Long-Term Care Homes.

    PubMed

    McGilton, Katherine S; Bowers, Barbara J; Heath, Hazel; Shannon, Kay; Dellefield, Mary Ellen; Prentice, Dawn; Siegel, Elena O; Meyer, Julienne; Chu, Charlene H; Ploeg, Jenny; Boscart, Veronique M; Corazzini, Kirsten N; Anderson, Ruth A; Mueller, Christine A

    2016-02-01

    In response to the International Association of Gerontology and Geriatrics' global agenda for clinical research and quality of care in long-term care homes (LTCHs), the International Consortium on Professional Nursing Practice in Long Term Care Homes (the Consortium) was formed to develop nursing leadership capacity and address concerns regarding the current state of professional nursing practice in LTCHs. At its invitational, 2-day inaugural meeting, the Consortium brought together international nurse experts to explore the potential of registered nurses (RNs) who work as supervisors or charge nurses within LTCHs and the value of their contribution in nursing homes; to consider what RN competencies might be needed; to discuss effective educational (curriculum and practice) experiences, health care policy, and human resources planning requirements; and to identify what sustainable nurse leadership strategies and models might enhance the effectiveness of RNs in improving resident, family, and staff outcomes. The Consortium made recommendations about the following priority issues for action: (1) define the competencies of RNs required to care for older adults in LTCHs; (2) create an LTCH environment in which the RN role is differentiated from other team members and RNs can practice to their full scope; and (3) prepare RN leaders to operate effectively in person-centered care LTCH environments. In addition to clear recommendations for practice, the Consortium identified several areas in which further research is needed. The Consortium advocated for a research agenda that emphasizes an international coordination of research efforts to explore similar issues, the pursuit of examining the impact of nursing and organizational models, and the showcasing of excellence in nursing practice in care homes, so that others might learn from what works. Several studies already under way are also described.
Copyright © 2016 AMDA – The Society for Post-Acute and Long-Term Care Medicine. Published by Elsevier Inc. All rights reserved.

  14. PRIDE Inspector Toolsuite: Moving Toward a Universal Visualization Tool for Proteomics Data Standard Formats and Quality Assessment of ProteomeXchange Datasets.

    PubMed

    Perez-Riverol, Yasset; Xu, Qing-Wei; Wang, Rui; Uszkoreit, Julian; Griss, Johannes; Sanchez, Aniel; Reisinger, Florian; Csordas, Attila; Ternent, Tobias; Del-Toro, Noemi; Dianes, Jose A; Eisenacher, Martin; Hermjakob, Henning; Vizcaíno, Juan Antonio

    2016-01-01

    The original PRIDE Inspector tool was developed as an open source standalone tool to enable the visualization and validation of mass-spectrometry (MS)-based proteomics data before data submission or already publicly available in the Proteomics Identifications (PRIDE) database. The initial implementation of the tool focused on visualizing PRIDE data by supporting the PRIDE XML format and direct access to private (password protected) and public experiments in PRIDE. The ProteomeXchange (PX) Consortium has been set up to enable a better integration of existing public proteomics repositories, maximizing its benefit to the scientific community through the implementation of standard submission and dissemination pipelines. Within the Consortium, PRIDE is focused on supporting submissions of tandem MS data. The increasing use and popularity of the new Proteomics Standards Initiative (PSI) data standards such as mzIdentML and mzTab, and the diversity of workflows supported by the PX resources, prompted us to design and implement a new suite of algorithms and libraries that would build upon the success of the original PRIDE Inspector and would enable users to visualize and validate PX "complete" submissions. The PRIDE Inspector Toolsuite supports the handling and visualization of different experimental output files, ranging from spectra (mzML, mzXML, and the most popular peak lists formats) and peptide and protein identification results (mzIdentML, PRIDE XML, mzTab) to quantification data (mzTab, PRIDE XML), using a modular and extensible set of open-source, cross-platform libraries. We believe that the PRIDE Inspector Toolsuite represents a milestone in the visualization and quality assessment of proteomics data. It is freely available at http://github.com/PRIDE-Toolsuite/. © 2016 by The American Society for Biochemistry and Molecular Biology, Inc.
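    As a minimal illustration of one of the "most popular peak lists formats" the Toolsuite reads, the sketch below parses a tiny MGF (Mascot Generic Format) peak list. The spectrum is invented, and real parsers (such as the Toolsuite's own libraries) handle far more of the format:

```python
# Minimal MGF peak-list parser sketch; hypothetical input spectrum.
MGF_TEXT = """BEGIN IONS
TITLE=spectrum_1
PEPMASS=567.89
CHARGE=2+
175.119 1200.0
276.155 800.5
END IONS
"""

def parse_mgf(text):
    """Return a list of spectra, each a dict of params and (m/z, intensity) peaks."""
    spectra, current = [], None
    for line in text.splitlines():
        line = line.strip()
        if line == "BEGIN IONS":
            current = {"params": {}, "peaks": []}
        elif line == "END IONS":
            spectra.append(current)
            current = None
        elif current is not None and "=" in line:
            key, _, value = line.partition("=")
            current["params"][key] = value
        elif current is not None and line:
            mz, intensity = line.split()
            current["peaks"].append((float(mz), float(intensity)))
    return spectra

spectra = parse_mgf(MGF_TEXT)
```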

  15. PRIDE Inspector Toolsuite: Moving Toward a Universal Visualization Tool for Proteomics Data Standard Formats and Quality Assessment of ProteomeXchange Datasets*

    PubMed Central

    Perez-Riverol, Yasset; Xu, Qing-Wei; Wang, Rui; Uszkoreit, Julian; Griss, Johannes; Sanchez, Aniel; Reisinger, Florian; Csordas, Attila; Ternent, Tobias; del-Toro, Noemi; Dianes, Jose A.; Eisenacher, Martin; Hermjakob, Henning; Vizcaíno, Juan Antonio

    2016-01-01

    The original PRIDE Inspector tool was developed as an open source standalone tool to enable the visualization and validation of mass-spectrometry (MS)-based proteomics data before data submission or already publicly available in the Proteomics Identifications (PRIDE) database. The initial implementation of the tool focused on visualizing PRIDE data by supporting the PRIDE XML format and a direct access to private (password protected) and public experiments in PRIDE. The ProteomeXchange (PX) Consortium has been set up to enable a better integration of existing public proteomics repositories, maximizing its benefit to the scientific community through the implementation of standard submission and dissemination pipelines. Within the Consortium, PRIDE is focused on supporting submissions of tandem MS data. The increasing use and popularity of the new Proteomics Standards Initiative (PSI) data standards such as mzIdentML and mzTab, and the diversity of workflows supported by the PX resources, prompted us to design and implement a new suite of algorithms and libraries that would build upon the success of the original PRIDE Inspector and would enable users to visualize and validate PX “complete” submissions. The PRIDE Inspector Toolsuite supports the handling and visualization of different experimental output files, ranging from spectra (mzML, mzXML, and the most popular peak lists formats) and peptide and protein identification results (mzIdentML, PRIDE XML, mzTab) to quantification data (mzTab, PRIDE XML), using a modular and extensible set of open-source, cross-platform libraries. We believe that the PRIDE Inspector Toolsuite represents a milestone in the visualization and quality assessment of proteomics data. It is freely available at http://github.com/PRIDE-Toolsuite/. PMID:26545397

  16. Multicenter Approach to Recurrent Acute and Chronic Pancreatitis in the United States: The North American Pancreatitis Study 2 (NAPS2)

    PubMed Central

    Whitcomb, David C.; Yadav, Dhiraj; Adam, Slivka; Hawes, Robert H.; Brand, Randall E.; Anderson, Michelle A.; Money, Mary E.; Banks, Peter A.; Bishop, Michele D.; Baillie, John; Sherman, Stuart; DiSario, James; Burton, Frank R.; Gardner, Timothy B.; Amann, Stephen T.; Gelrud, Andres; Lo, Simon K.; DeMeo, Mark T.; Steinberg, William M.; Kochman, Michael L.; Etemad, Babak; Forsmark, Christopher E.; Elinoff, Beth; Greer, Julia B.; O’Connell, Michael; Lamb, Janette; Barmada, M. Michael

    2008-01-01

    Background Recurrent acute pancreatitis (RAP) and chronic pancreatitis (CP) are complex syndromes associated with numerous etiologies, clinical variables and complications. We developed the North American Pancreatitis Study 2 (NAPS2) to be sufficiently powered to understand the complex environmental, metabolic and genetic mechanisms underlying RAP and CP. Methods Between August 2000 and September 2006, a consortium of 20 expert academic and private sites prospectively ascertained 1,000 human subjects with RAP or CP, plus 695 controls (spouse, family, friend or unrelated). Standardized questionnaires were completed by both the physicians and study subjects and blood was drawn for genomic DNA and biomarker studies. All data were double-entered into a database and systematically reviewed to minimize errors and include missing data. Results A total of 1,000 subjects (460 RAP, 540 CP) and 695 controls who completed consent forms and questionnaires and donated blood samples comprised the final dataset. Data were organized according to diagnosis, supporting documentation, etiological classification, clinical signs and symptoms (including pain patterns and duration, and quality of life), past medical history, family history, environmental exposures (including alcohol and tobacco use), medication use and therapeutic interventions. Upon achieving the target enrollment, data were organized and classified to facilitate future analysis. The approaches, rationale and datasets are described, along with final demographic results. Conclusion The NAPS2 consortium has successfully completed a prospective ascertainment of 1,000 subjects with RAP and CP from the USA. These data will be useful in elucidating the environmental, metabolic and genetic conditions, and to investigate the complex interactions that underlie RAP and CP. PMID:18765957

  17. Korean Variant Archive (KOVA): a reference database of genetic variations in the Korean population.

    PubMed

    Lee, Sangmoon; Seo, Jihae; Park, Jinman; Nam, Jae-Yong; Choi, Ahyoung; Ignatius, Jason S; Bjornson, Robert D; Chae, Jong-Hee; Jang, In-Jin; Lee, Sanghyuk; Park, Woong-Yang; Baek, Daehyun; Choi, Murim

    2017-06-27

    Despite efforts to interrogate human genome variation through large-scale databases, systematic preference toward populations of Caucasian descendants has resulted in unintended reduction of power in studying non-Caucasians. Here we report a compilation of coding variants from 1,055 healthy Korean individuals (KOVA; Korean Variant Archive). The samples were sequenced to a mean depth of 75x, yielding 101 singleton variants per individual. Population genetics analysis demonstrates that the Korean population is a distinct ethnic group comparable to other discrete ethnic groups in Africa and Europe, providing a rationale for such independent genomic datasets. Indeed, KOVA conferred 22.8% increased variant filtering power in addition to the Exome Aggregation Consortium (ExAC) when used on Korean exomes. Functional assessment of nonsynonymous variants supported the presence of purifying selection in Koreans. Analysis of copy number variants detected 5.2 deletions and 10.3 amplifications per individual with an increased fraction of novel variants among smaller and rarer copy number variable segments. We also report a list of germline variants that are associated with increased tumor susceptibility. This catalog can function as a critical addition to the pre-existing variant databases in pursuing genetic studies of Korean individuals.
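    The notion of "variant filtering power" can be illustrated with a toy sketch (not the authors' pipeline): a population database adds filtering power when it lets you exclude additional candidate variants as common in a reference population. All variant identifiers below are invented:

```python
# Toy illustration of variant filtering with reference databases.
# Variant IDs and database contents are made up.

def filtering_power(candidates, reference_dbs):
    """Fraction of candidate variants removable because they occur in a reference set."""
    seen = set().union(*reference_dbs)
    removable = [v for v in candidates if v in seen]
    return len(removable) / len(candidates)

exac = {"chr1:100A>G", "chr2:200C>T"}
kova = {"chr3:300G>A"}  # population-specific variants absent from the other database
patient = ["chr1:100A>G", "chr3:300G>A", "chr5:500T>C"]

p1 = filtering_power(patient, [exac])        # one reference database alone
p2 = filtering_power(patient, [exac, kova])  # adding a population-matched database removes more
```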

  18. Earth science big data at users' fingertips: the EarthServer Science Gateway Mobile

    NASA Astrophysics Data System (ADS)

    Barbera, Roberto; Bruno, Riccardo; Calanducci, Antonio; Fargetta, Marco; Pappalardo, Marco; Rundo, Francesco

    2014-05-01

    The EarthServer project (www.earthserver.eu), funded by the European Commission under its Seventh Framework Program, aims at establishing open access and ad-hoc analytics on extreme-size Earth Science data, based on and extending leading-edge Array Database technology. The core idea is to use database query languages as the client/server interface to achieve barrier-free "mix & match" access to multi-source, any-size, multi-dimensional space-time data -- in short: "Big Earth Data Analytics" - based on the open standards of the Open Geospatial Consortium Web Coverage Processing Service (OGC WCPS) and the W3C XQuery. EarthServer combines both, thereby achieving a tight data/metadata integration. Further, the rasdaman Array Database System (www.rasdaman.com) is extended with further space-time coverage data types. On the server side, highly effective optimizations - such as parallel and distributed query processing - ensure scalability to Exabyte volumes. In this contribution we will report on the EarthServer Science Gateway Mobile, an app for both iOS and Android-based devices that allows users to seamlessly access some of the EarthServer applications using SAML-based federated authentication and fine-grained authorisation mechanisms.

  19. Large scale validation of the M5L lung CAD on heterogeneous CT datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lopez Torres, E.; Fiorina, E.; Pennazio, F.

    Purpose: M5L, a fully automated computer-aided detection (CAD) system for the detection and segmentation of lung nodules in thoracic computed tomography (CT), is presented and validated on several image datasets. Methods: M5L is the combination of two independent subsystems, based on the Channeler Ant Model as a segmentation tool [lung channeler ant model (lungCAM)] and on the voxel-based neural approach. The lungCAM was upgraded with a scan equalization module and a new procedure to recover the nodules connected to other lung structures; its classification module, which makes use of a feed-forward neural network, is based on a small number of features (13), so as to minimize the risk of poor generalization, which could be possible given the large difference between the size of the training and testing datasets, which contain 94 and 1019 CTs, respectively. The lungCAM (standalone) and M5L (combined) performance was extensively tested on 1043 CT scans from three independent datasets, including a detailed analysis of the full Lung Image Database Consortium/Image Database Resource Initiative database, which is not yet found in the literature. Results: The lungCAM and M5L performance is consistent across the databases, with a sensitivity of about 70% and 80%, respectively, at eight false positive findings per scan, despite the variable annotation criteria and acquisition and reconstruction conditions. A reduced sensitivity is found for subtle nodules and ground glass opacity (GGO) structures. A comparison with other CAD systems is also presented. Conclusions: The M5L performance on a large and heterogeneous dataset is stable and satisfactory, although the development of a dedicated module for GGO detection could further improve it, as could an iterative optimization of the training procedure. 
The main aim of the present study was accomplished: M5L results do not deteriorate when increasing the dataset size, making it a candidate for supporting radiologists on large scale screenings and clinical programs.

  20. Large scale validation of the M5L lung CAD on heterogeneous CT datasets.

    PubMed

    Torres, E Lopez; Fiorina, E; Pennazio, F; Peroni, C; Saletta, M; Camarlinghi, N; Fantacci, M E; Cerello, P

    2015-04-01

    M5L, a fully automated computer-aided detection (CAD) system for the detection and segmentation of lung nodules in thoracic computed tomography (CT), is presented and validated on several image datasets. M5L is the combination of two independent subsystems, based on the Channeler Ant Model as a segmentation tool [lung channeler ant model (lungCAM)] and on the voxel-based neural approach. The lungCAM was upgraded with a scan equalization module and a new procedure to recover the nodules connected to other lung structures; its classification module, which makes use of a feed-forward neural network, is based on a small number of features (13), so as to minimize the risk of poor generalization, which could be possible given the large difference between the size of the training and testing datasets, which contain 94 and 1019 CTs, respectively. The lungCAM (standalone) and M5L (combined) performance was extensively tested on 1043 CT scans from three independent datasets, including a detailed analysis of the full Lung Image Database Consortium/Image Database Resource Initiative database, which is not yet found in the literature. The lungCAM and M5L performance is consistent across the databases, with a sensitivity of about 70% and 80%, respectively, at eight false positive findings per scan, despite the variable annotation criteria and acquisition and reconstruction conditions. A reduced sensitivity is found for subtle nodules and ground glass opacities (GGO) structures. A comparison with other CAD systems is also presented. The M5L performance on a large and heterogeneous dataset is stable and satisfactory, although the development of a dedicated module for GGOs detection could further improve it, as well as an iterative optimization of the training procedure. 
The main aim of the present study was accomplished: M5L results do not deteriorate when increasing the dataset size, making it a candidate for supporting radiologists on large scale screenings and clinical programs.
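    The operating point quoted above (sensitivity at a fixed number of false positives per scan) can be illustrated with a small sketch: rank all CAD candidates by score, then find the sensitivity at the loosest threshold whose false-positive budget is not exceeded. The candidate scores below are synthetic, not M5L output:

```python
# Illustrative sketch of a CAD operating point; synthetic candidate data.

def sensitivity_at_fp_rate(candidates, n_scans, n_true_nodules, fp_per_scan):
    """candidates: list of (score, is_true_positive) over all scans.
    Sweep the score threshold from strictest to loosest and report the
    sensitivity reached before the false-positive budget is exceeded."""
    allowed_fp = fp_per_scan * n_scans
    ranked = sorted(candidates, key=lambda c: c[0], reverse=True)
    tp = fp = 0
    best_sensitivity = 0.0
    for score, is_tp in ranked:
        if is_tp:
            tp += 1
        else:
            fp += 1
            if fp > allowed_fp:
                break
        best_sensitivity = tp / n_true_nodules
    return best_sensitivity

cands = [(0.9, True), (0.8, False), (0.7, True), (0.6, False), (0.5, True), (0.4, False)]
sens = sensitivity_at_fp_rate(cands, n_scans=1, n_true_nodules=3, fp_per_scan=2)
```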

  1. Progress connecting multi-disciplinary geoscience communities through the VIVO semantic web application

    NASA Astrophysics Data System (ADS)

    Gross, M. B.; Mayernik, M. S.; Rowan, L. R.; Khan, H.; Boler, F. M.; Maull, K. E.; Stott, D.; Williams, S.; Corson-Rikert, J.; Johns, E. M.; Daniels, M. D.; Krafft, D. B.

    2015-12-01

    UNAVCO, UCAR, and Cornell University are working together to leverage semantic web technologies to enable discovery of people, datasets, publications and other research products, as well as the connections between them. The EarthCollab project, an EarthCube Building Block, is enhancing an existing open-source semantic web application, VIVO, to address connectivity gaps across distributed networks of researchers and resources related to the following two geoscience-based communities: (1) the Bering Sea Project, an interdisciplinary field program whose data archive is hosted by NCAR's Earth Observing Laboratory (EOL), and (2) UNAVCO, a geodetic facility and consortium that supports diverse research projects informed by geodesy. People, publications, datasets and grant information have been mapped to an extended version of the VIVO-ISF ontology and ingested into VIVO's database. Data is ingested using a custom set of scripts that include the ability to perform basic automated and curated disambiguation. VIVO can display a page for every object ingested, including connections to other objects in the VIVO database. A dataset page, for example, includes the dataset type, time interval, DOI, related publications, and authors. The dataset type field provides a connection to all other datasets of the same type. The author's page will show, among other information, related datasets and co-authors. Information previously spread across several unconnected databases is now stored in a single location. In addition to VIVO's default display, the new database can also be queried using SPARQL, a query language for semantic data. EarthCollab will also extend the VIVO web application. One such extension is the ability to cross-link separate VIVO instances across institutions, allowing local display of externally curated information. 
For example, Cornell's VIVO faculty pages will display UNAVCO's dataset information and UNAVCO's VIVO will display Cornell faculty member contact and position information. Additional extensions, including enhanced geospatial capabilities, will be developed following task-centered usability testing.
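    The SPARQL access mentioned above might look like the following query for datasets and their authors posed to a VIVO endpoint; the property IRIs follow the VIVO-ISF ontology in spirit, but the exact shapes used by EarthCollab's instances are an assumption here:

```python
# Illustrative SPARQL query string for a VIVO instance; property paths are
# simplified (VIVO-ISF models authorship via intermediate Relationship nodes).
DATASET_AUTHORS = """
PREFIX vivo: <http://vivoweb.org/ontology/core#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?dataset ?label ?author WHERE {
  ?dataset a vivo:Dataset ;
           rdfs:label ?label ;
           vivo:relatedBy ?authorship .
  ?authorship vivo:relates ?author .
}
LIMIT 25
"""
```

    Because every object in the VIVO database is addressable this way, the same query works whether the data arrived from UNAVCO's holdings or the Bering Sea Project archive.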

  2. Mobile satellite service in the United States

    NASA Technical Reports Server (NTRS)

    Agnew, Carson E.; Bhagat, Jai; Hopper, Edwin A.; Kiesling, John D.; Exner, Michael L.; Melillo, Lawrence; Noreen, Gary K.; Parrott, Billy J.

    1988-01-01

    Mobile satellite service (MSS) has been under development in the United States for more than two decades. The service will soon be provided on a commercial basis by a consortium of eight U.S. companies called the American Mobile Satellite Consortium (AMSC). AMSC will build a three-satellite MSS system that will offer superior performance, reliability and cost effectiveness for organizations requiring mobile communications across the U.S. The development and operation of MSS in North America is being coordinated with Telesat Canada and Mexico. AMSC expects NASA to provide launch services in exchange for capacity on the first AMSC satellite for MSAT-X activities and for government demonstrations.

  3. Comparative biodegradation of HDPE and LDPE using an indigenously developed microbial consortium.

    PubMed

    Satlewal, Alok; Soni, Ravindra; Zaidi, Mgh; Shouche, Yogesh; Goel, Reeta

    2008-03-01

    A variety of bacterial strains were isolated from waste disposal sites of Uttaranchal, India, and some from artificially developed soil beds containing maleic anhydride, glucose, and small pieces of polyethylene. Primary screening of isolates was done based on their ability to utilize high- and low-density polyethylenes (HDPE/LDPE) as a primary carbon source. Thereafter, a consortium was developed using potential strains. Furthermore, a biodegradation assay was carried out in 500-ml flasks containing minimal broth (250 ml) and HDPE/LDPE at 5 mg/ml concentration. After incubation for two weeks, degraded samples were recovered through filtration and subsequent evaporation. Fourier transform infrared spectroscopy (FTIR) and simultaneous thermogravimetric-differential thermogravimetry-differential thermal analysis (TG-DTG-DTA) were used to analyze these samples. Results showed that consortium-treated HDPE (considered to be more inert relative to LDPE) was degraded to a greater extent (22.41% weight loss) in comparison with LDPE (21.70% weight loss), whereas, in the case of untreated samples, weight loss was greater for LDPE than HDPE (4.5% and 2.5%, respectively) at 400 degrees. Therefore, this study suggests that polyethylene could be degraded by utilizing microbial consortia in an eco-friendly manner.

  4. A research-based inter-institutional collaboration to diversify the biomedical workforce: ReBUILDetroit.

    PubMed

    Andreoli, Jeanne M; Feig, Andrew; Chang, Steven; Welch, Sally; Mathur, Ambika; Kuleck, Gary

    2017-01-01

    Faced with decades of severe economic decline, the city of Detroit, Michigan (USA) is on the cusp of reinventing itself. A Consortium was formed by three higher education institutions that have an established mission to serve an urban population and a vested interest in the revitalization of the health, welfare, and economic opportunity in the Detroit metro region that is synergistic with national goals to diversify the biomedical workforce. The purpose of this article is to describe the rationale, approach, and model of the Research Enhancement for BUILDing Detroit (ReBUILDetroit) Consortium, as a cross-campus collaborative for students, faculty, and institutional development. The ReBUILDetroit program is designed to transform the culture of higher education in Detroit, Michigan by educating and training students from diverse and socio-economically disadvantaged backgrounds to become the next generation of biomedical researchers. Marygrove College, University of Detroit Mercy, and Wayne State University established a Consortium to create and implement innovative, evidence-based and cutting-edge programming. Specific elements include: (1) a pre-college summer enrichment experience; (2) an inter-institutional curricular re-design of target foundational courses in biology, chemistry and social science using the Research Coordination Network (RCN) model; and (3) cross-institutional summer faculty-mentored research projects for ReBUILDetroit Scholars starting as rising sophomores. Student success support includes intentional and intrusive mentoring, financial support, close faculty engagement, ongoing workshops to overcome academic and non-academic barriers, and cohort building activities across the Consortium. 
Institutional supports, integral to program creation and sustainability, include creating faculty learning communities grounded in professional development opportunities in pedagogy, research and mentorship, and developing novel partnerships and accelerated pipeline programming across the Consortium. This article highlights the development, implementation and evolution of high-impact practices critical for student learning, research-based course development, and the creation of inter-institutional learning communities as a direct result of ReBUILDetroit. Our cross-institutional collaboration and leveraging of resources in a difficult economic environment, drawing students from high schools with a myriad of strengths and challenges, serves as a model for higher education institutions in large, urban centers who are seeking to diversify their workforces and provide additional opportunities for upward mobility among diverse populations.

  5. The Metadata Coverage Index (MCI): A standardized metric for quantifying database metadata richness.

    PubMed

    Liolios, Konstantinos; Schriml, Lynn; Hirschman, Lynette; Pagani, Ioanna; Nosrat, Bahador; Sterk, Peter; White, Owen; Rocca-Serra, Philippe; Sansone, Susanna-Assunta; Taylor, Chris; Kyrpides, Nikos C; Field, Dawn

    2012-07-30

    Variability in the extent of the descriptions of data ('metadata') held in public repositories forces users to assess the quality of records individually, which rapidly becomes impractical. The scoring of records on the richness of their description provides a simple, objective proxy measure for quality that enables filtering that supports downstream analysis. Pivotally, such descriptions should spur on improvements. Here, we introduce such a measure - the 'Metadata Coverage Index' (MCI): the percentage of available fields actually filled in a record or description. MCI scores can be calculated across a database, for individual records or for their component parts (e.g., fields of interest). There are many potential uses for this simple metric: for example, to filter, rank or search for records; to assess the metadata availability of an ad hoc collection; to determine the frequency with which fields in a particular record type are filled, especially with respect to standards compliance; to assess the utility of specific tools and resources, and of data capture practice more generally; to prioritize records for further curation; to serve as performance metrics of funded projects; or to quantify the value added by curation. Here we demonstrate the utility of MCI scores using metadata from the Genomes Online Database (GOLD), including records compliant with the 'Minimum Information about a Genome Sequence' (MIGS) standard developed by the Genomic Standards Consortium. We discuss challenges and address the further application of MCI scores: to show improvements in annotation quality over time, to inform the work of standards bodies and repository providers on the usability and popularity of their products, and to assess and credit the work of curators. Such an index provides a step towards putting metadata capture practices and, in the future, standards compliance into a quantitative and objective framework.
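    A direct reading of the MCI definition above ("the percentage of available fields actually filled in a record") can be sketched in a few lines; the field names below are hypothetical examples, not the GOLD or MIGS schema:

```python
# Metadata Coverage Index sketch; field names are illustrative only.

def mci(record, available_fields):
    """MCI: percentage of available fields holding a non-empty value."""
    filled = sum(1 for f in available_fields
                 if record.get(f) not in (None, "", [], {}))
    return 100.0 * filled / len(available_fields)

fields = ["organism", "isolation_source", "geo_loc_name", "collection_date"]
record = {"organism": "E. coli", "collection_date": "2011-06-01"}
score = mci(record, fields)  # 2 of 4 fields filled
```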

  6. Mass spectrometry based lipid(ome) analyzer and molecular platform: a new software to interpret and analyze electrospray and/or matrix-assisted laser desorption/ionization mass spectrometric data of lipids: a case study from Mycobacterium tuberculosis.

    PubMed

    Sabareesh, Varatharajan; Singh, Gurpreet

    2013-04-01

    Mass Spectrometry based Lipid(ome) Analyzer and Molecular Platform (MS-LAMP) is a new software tool that aids in interpreting electrospray ionization (ESI) and/or matrix-assisted laser desorption/ionization (MALDI) mass spectrometric data of lipids. The graphical user interface (GUI) of this standalone programme is built using Perl::Tk. Two databases have been developed and incorporated within MS-LAMP, based on the Mycobacterium tuberculosis (M. tb) lipid database (www.mrl.colostate.edu) and that of the Lipid Metabolites and Pathways Strategy Consortium (LIPID MAPS; www.lipidmaps.org). Different types of queries entered through the GUI interrogate a chosen database. Queries can be molecular mass(es), mass-to-charge (m/z) value(s), or a molecular formula; a LIPID MAPS identifier can also be used to search, although not for M. tb lipids. Multiple choices are provided for selecting diverse ion types and lipids. For queries satisfying the input parameters, the output gives an overview of the various lipid categories and their population distribution. Additionally, molecular structures of lipids in the output can be viewed using ChemSketch (www.acdlabs.com), which has been linked to the programme. Furthermore, a version of MS-LAMP for the Linux operating system is separately available, in which PyMOL can be used to view molecular structures output by General Lipidome MS-LAMP. The utility of this software is demonstrated using ESI mass spectrometric data of lipid extracts of M. tb grown under two different pH conditions (5.5 and 7.0). Copyright © 2013 John Wiley & Sons, Ltd.
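    The core lookup such a tool performs — matching observed m/z values against a lipid database for a chosen ion type within a tolerance — can be sketched roughly as follows. The adduct handling, tolerance, and the tiny in-memory database (names and masses alike) are illustrative assumptions, not MS-LAMP's actual implementation or data.

```python
# Minimal sketch of query-by-m/z against a lipid mass database,
# in the spirit of MS-LAMP's search modes (not its actual code).
PROTON = 1.007276  # mass of a proton (Da)

# Hypothetical database: lipid name -> monoisotopic neutral mass (Da);
# values are placeholders for illustration only.
LIPID_DB = {
    "lipid A": 1350.3,
    "lipid B": 1235.9,
    "PI (16:0/18:1)": 836.5,
}

def match_mz(mz, mode="[M+H]+", tol=0.5):
    """Return lipids whose expected m/z lies within tol of the observed value."""
    candidates = []
    for name, neutral_mass in LIPID_DB.items():
        if mode == "[M+H]+":
            expected = neutral_mass + PROTON   # protonated ion
        elif mode == "[M-H]-":
            expected = neutral_mass - PROTON   # deprotonated ion
        else:
            raise ValueError("unsupported ion type")
        if abs(expected - mz) <= tol:
            candidates.append(name)
    return candidates

print(match_mz(837.5))  # ['PI (16:0/18:1)']
```

    Supporting further adducts (e.g. sodiated or ammoniated ions) amounts to adding branches with the corresponding adduct masses.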

  7. Program Evaluation - Automotive Lightweighting Materials Program Research and Development Projects Assessment of Benefits - Case Studies No. 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Das, S.

    This report is the second of a series of studies to evaluate research and development (R&D) projects funded by the Automotive Lightweighting Materials (ALM) Program of the Office of Advanced Automotive Technologies (OAAT) of the U.S. Department of Energy (DOE). The objectives of the program evaluation are to assess short-run outputs and long-run outcomes that may be attributable to the ALM R&D projects. The ALM program focuses on the development and validation of advanced technologies that significantly reduce automotive vehicle body and chassis weight without compromising other attributes such as safety, performance, recyclability, and cost. Funded projects range from fundamental materials science research to applied research in production environments. Collaborators on these projects include national laboratories, universities, and private sector firms, such as leading automobile manufacturers and their suppliers. Three ALM R&D projects were chosen for this evaluation: Design and Product Optimization for Cast Light Metals, Durability of Lightweight Composite Structures, and Rapid Tooling for Functional Prototyping of Metal Mold Processes. These projects were chosen because they have already been completed. The first project resulted in development of a comprehensive cast light metal property database, an automotive application design guide, computerized predictive models, process monitoring sensors, and quality assurance methods. The second project, the durability of lightweight composite structures, produced durability-based design criteria documents, predictive models for creep deformation, and minimum test requirements and suggested test methods for establishing durability properties and characteristics of random glass-fiber composites for automotive structural composites. 
The durability project supported Focal Project II, a validation activity that demonstrates ALM program goals and reduces the lead time for bringing new technology into the marketplace. Focal projects concentrate on specific classes of materials and nonproprietary components and are done jointly by DOE and the Automotive Composites Consortium of the U.S. Council for Automotive Research (USCAR). The third project developed a rapid tooling process that reduces tooling time, originally some 48-52 weeks, to less than 12 weeks by means of rapid generation of die-casting die inserts and development of generic holding blocks, suitable for use with large casting applications. This project was conducted by the United States Automotive Materials Partnership, another USCAR consortium.

  8. External RNA Controls Consortium Beta Version Update.

    PubMed

    Lee, Hangnoh; Pine, P Scott; McDaniel, Jennifer; Salit, Marc; Oliver, Brian

    2016-01-01

    Spike-in RNAs are valuable controls for a variety of gene expression measurements. The External RNA Controls Consortium developed test sets that were used in a number of published reports. Here we provide an authoritative table that summarizes, updates, and corrects errors in the test version that ultimately resulted in the certified Standard Reference Material 2374. We have noted the existence of anti-sense RNA controls in the material, corrected sub-pool memberships, and commented on control RNAs that displayed inconsistent behavior.

  9. Machine Tool Advanced Skills Technology (MAST). Common Ground: Toward a Standards-Based Training System for the U.S. Machine Tool and Metal Related Industries. Volume 1: Executive Summary, of a 15-Volume Set of Skills Standards and Curriculum Training Materials for the Precision Manufacturing Industry.

    ERIC Educational Resources Information Center

    Texas State Technical Coll., Waco.

    The Machine Tool Advanced Skills Technology (MAST) consortium was formed to address the shortage of skilled workers for the machine tools and metals-related industries. Featuring six of the nation's leading advanced technology centers, the MAST consortium developed, tested, and disseminated industry-specific skill standards and model curricula for…

  10. University Research Consortium annual review meeting program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1996-07-01

    This brochure presents the program for the first annual review meeting of the University Research Consortium (URC) of the Idaho National Engineering Laboratory (INEL). INEL is a multiprogram laboratory with a distinctive role in applied engineering. It also conducts basic science research and development, and complex facility operations. The URC program consists of a portfolio of research projects funded by INEL and conducted at universities in the United States. In this program, summaries and participant lists for each project are presented as received from the principal investigators.

  11. FIN-EPOS - Finnish national initiative of the European Plate Observing System: Bringing Finnish solid Earth infrastructures into EPOS

    NASA Astrophysics Data System (ADS)

    Vuorinen, Tommi; Korja, Annakaisa

    2017-04-01

    The FIN-EPOS consortium is a joint community of Finnish national research institutes tasked with operating and maintaining solid-earth geophysical and geological observatories and laboratories in Finland. These national research infrastructures (NRIs) seek to join the EPOS research infrastructure (EPOS RI) and further pursue Finland's participation as a founding member in EPOS ERIC (European Research Infrastructure Consortium). Current partners of FIN-EPOS are the University of Helsinki (UH), the University of Oulu (UO), the Finnish Geospatial Research Institute (FGI) of the National Land Survey (NLS), the Finnish Meteorological Institute (FMI), the Geological Survey of Finland (GTK), CSC - IT Center for Science, and MIKES Metrology at VTT Technical Research Centre of Finland Ltd. The consortium is hosted by the Institute of Seismology, UH (ISUH). The primary purpose of the consortium is to act as a coordinating body between the various NRIs and the EPOS RI. FIN-EPOS engages in planning and development of the national EPOS RI and will provide support in the EPOS implementation phase (IP) for the partner NRIs. FIN-EPOS also promotes the awareness of EPOS in Finland and is open to new partner NRIs that would benefit from participating in EPOS. The consortium additionally seeks to advance solid Earth science education, technologies and innovations in Finland, and is actively engaging in Nordic co-operation and collaboration among solid Earth RIs. The main short-term objective of FIN-EPOS is to make Finnish geoscientific data provided by NRIs interoperable with the Thematic Core Services (TCS) in the EPOS IP. Consortium partners commit to applying and following metadata and data format standards provided by EPOS. FIN-EPOS will also provide a national Finnish-language web portal where users are identified and their user rights for EPOS resources are defined.

  12. Removal of a mixture of pesticides by a Streptomyces consortium: Influence of different soil systems.

    PubMed

    Fuentes, María S; Raimondo, Enzo E; Amoroso, María J; Benimeli, Claudia S

    2017-04-01

    Although the use of organochlorine pesticides (OPs) is restricted or banned in most countries, they continue to pose environmental and health concerns, so it is imperative to develop methods for removing them from the environment. This work aimed to investigate the simultaneous removal of three OPs (lindane, chlordane and methoxychlor) from diverse types of systems by employing a native Streptomyces consortium. In liquid systems, satisfactory microbial growth was observed, accompanied by removal of lindane (40.4%), methoxychlor (99.5%) and chlordane (99.8%). In sterile soil microcosms, the consortium was able to grow without significant differences across the different textured soils (clay silty loam, sandy and loam), whether or not contaminated with the OPs mixture. The Streptomyces consortium was able to remove all the OPs in sterile soil microcosms (removal order: clay silty loam > loam > sandy), so clay silty loam soil (CSLS) was selected for subsequent assays. In non-sterile CSLS microcosms, chlordane removal was only about 5%; nonetheless, higher rates were observed for lindane (11%) and methoxychlor (20%). In CSLS slurries, the consortium exhibited similar growth levels in the presence and in the absence of the OPs mixture. Not all pesticides were removed in the same way; the order of pesticide dissipation was: methoxychlor (26%) > lindane (12.5%) > chlordane (10%). The profiles of microbial growth and pesticide removal provide information about using an actinobacteria consortium as a strategy for bioremediation of OP mixtures in diverse soil systems. Soil texture and assay conditions (sterility, slurry formulation) were determining factors influencing the removal of each pesticide in the mixture. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. SU-E-T-544: A Radiation Oncology-Specific Multi-Institutional Federated Database: Initial Implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hendrickson, K; Phillips, M; Fishburn, M

    Purpose: To implement a common database structure and user-friendly web-browser based data collection tools across several medical institutions to better support evidence-based clinical decision making and comparative effectiveness research through shared outcomes data. Methods: A consortium of four academic medical centers agreed to implement a federated database, known as Oncospace. Initial implementation has addressed issues of differences between institutions in workflow and in the types and breadth of structured information captured. This requires coordination of data collection from departmental oncology information systems (OIS), treatment planning systems, and hospital electronic medical records in order to include as much as possible of the multi-disciplinary clinical data associated with a patient's care. Results: The original database schema was well-designed and required only minor changes to meet institution-specific data requirements. Mobile browser interfaces for data entry and review for both the OIS and the Oncospace database were tailored to the workflow of individual institutions. Federation of database queries - the ultimate goal of the project - was tested using artificial patient data. The tests serve as proof-of-principle that the system as a whole - from data collection and entry to providing responses to research queries of the federated database - is viable. Issues surrounding inter-institutional use of patient data for research have not yet been fully resolved. Conclusions: The migration from unstructured data, mainly in the form of notes and documents, to searchable, structured data is difficult. Making the transition requires cooperation of many groups within the department and can be greatly facilitated by using the structured data to improve clinical processes and workflow. 
The original database schema design is critical to providing enough flexibility for multi-institutional use to improve each institution's ability to study outcomes, determine best practices, and support research. The project has demonstrated the feasibility of deploying a federated database environment for research purposes to multiple institutions.
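    The federation step described in this record - one research query fanned out to each institution's local store, with only de-identified matching rows leaving each site and results merged with provenance - might be sketched as below. The `Site` class, record layout, and field names are hypothetical illustrations, not Oncospace's actual schema or API.

```python
# Hedged sketch of a federated query across institutional databases.
# Each site evaluates the query locally; only matching, de-identified
# rows leave the institution, tagged with their site of origin.

class Site:
    def __init__(self, name, records):
        self.name = name
        self.records = records  # list of de-identified record dicts

    def query(self, predicate):
        # Local evaluation: raw data never leaves the site wholesale.
        return [r for r in self.records if predicate(r)]

def federated_query(sites, predicate):
    results = []
    for site in sites:
        for row in site.query(predicate):
            results.append({**row, "site": site.name})  # add provenance
    return results

# Hypothetical de-identified outcome records at two institutions
sites = [
    Site("A", [{"dose_gy": 70, "toxicity_grade": 2}]),
    Site("B", [{"dose_gy": 54, "toxicity_grade": 0},
               {"dose_gy": 66, "toxicity_grade": 3}]),
]
high_dose = federated_query(sites, lambda r: r["dose_gy"] >= 60)
print(len(high_dose))  # 2
```

    The common schema across sites is what makes a single predicate meaningful everywhere, which is why the record stresses the original schema design.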

  14. [TOPICS-MDS: a versatile resource for generating scientific and social knowledge for elderly care].

    PubMed

    van den Brink, Danielle; Lutomski, Jennifer E; Qin, Li; den Elzen, Wendy P J; Kempen, Gertrudis I J M; Krabbe, Paul F M; Steyerberg, Ewout W; Muntinga, Maaike; Moll van Charante, Eric P; Bleijenberg, Nienke; Olde Rikkert, Marcel G M; Melis, René J F

    2015-04-01

    Developed as part of the National Care for the Elderly Programme (NPO), TOPICS-MDS is a uniform, national database on the health and wellbeing of older persons and caregivers who participated in NPO-funded projects. The TOPICS-MDS Consortium has gained extensive experience in constructing a standardized questionnaire to collect relevant health care data on quality of life, health services utilization, and informal care use. A proactive approach has been undertaken to ensure not only the standardization and validation of instruments but also the infrastructure for external data requests. Efforts have been made to promote scientifically and socially responsible use of TOPICS-MDS; data have been available for secondary use since early 2014. Through this data sharing initiative, researchers can explore health issues in a broader framework than may have been possible within individual NPO projects; this broader framework is highly relevant for influencing health policy. In this article, we provide an overview of the development and ongoing progress of TOPICS-MDS. We further describe how information derived from TOPICS-MDS can be applied to facilitate future scientific innovations and public health initiatives to improve care for frail older persons and their caregivers.

  15. Classification of malignant and benign lung nodules using taxonomic diversity index and phylogenetic distance.

    PubMed

    de Sousa Costa, Robherson Wector; da Silva, Giovanni Lucca França; de Carvalho Filho, Antonio Oseas; Silva, Aristófanes Corrêa; de Paiva, Anselmo Cardoso; Gattass, Marcelo

    2018-05-23

    Lung cancer is the leading cause of cancer death among patients around the world, and also has one of the lowest survival rates after diagnosis. Therefore, this study proposes a methodology for diagnosis of lung nodules as benign or malignant based on image processing and pattern recognition techniques. Mean phylogenetic distance (MPD) and the taxonomic diversity index (Δ) were used as texture descriptors. Finally, a genetic algorithm in conjunction with a support vector machine was applied to select the best training model. The proposed methodology was tested on computed tomography (CT) images from the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI), with a best sensitivity of 93.42%, specificity of 91.21%, accuracy of 91.81%, and area under the ROC curve of 0.94. The results demonstrate the promising performance of texture extraction techniques using mean phylogenetic distance and the taxonomic diversity index combined with phylogenetic trees. Graphical Abstract Stages of the proposed methodology.
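    The taxonomic diversity index Δ used above as a texture descriptor comes from ecology (Warwick and Clarke): Δ = Σ_{i<j} ω_ij x_i x_j / [N(N-1)/2], where x_i are species abundances, ω_ij pairwise taxonomic distances, and N the total abundance. A minimal sketch of the formula follows; how the study maps image intensities to "species" and builds the distance matrix is not reproduced here, and the abundances and distances below are illustrative only.

```python
# Hedged sketch of the taxonomic diversity index (Delta) of Warwick & Clarke,
# adapted in the study as a texture descriptor for CT nodule images.

def taxonomic_diversity(abundances, distance):
    """Delta = sum_{i<j} w_ij * x_i * x_j / (N * (N - 1) / 2),
    with x_i the abundance of species i, w_ij the pairwise taxonomic
    distance, and N the total abundance."""
    n = len(abundances)
    N = sum(abundances)
    num = sum(
        distance[i][j] * abundances[i] * abundances[j]
        for i in range(n) for j in range(i + 1, n)
    )
    return num / (N * (N - 1) / 2.0)

# Three "species" (e.g. intensity classes) with illustrative abundances
# and pairwise distances from a hypothetical taxonomic tree
x = [4, 2, 2]
w = [[0, 1, 2],
     [1, 0, 2],
     [2, 2, 0]]
print(round(taxonomic_diversity(x, w), 3))  # 1.143
```

    In a texture-analysis setting, each nodule region yields one such Δ value (alongside MPD), and the resulting feature vectors feed the SVM classifier described in the abstract.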

  16. EarthChem: International Collaboration for Solid Earth Geochemistry in Geoinformatics

    NASA Astrophysics Data System (ADS)

    Walker, J. D.; Lehnert, K. A.; Hofmann, A. W.; Sarbas, B.; Carlson, R. W.

    2005-12-01

    The current on-line information systems for igneous rock geochemistry - PetDB, GEOROC, and NAVDAT - convincingly demonstrate the value of rigorous scientific data management of geochemical data for research and education. The next generation of hypothesis formulation and testing can be vastly facilitated by enhancing these electronic resources through integration of available datasets, expansion of data coverage in location, time, and tectonic setting, timely updates with new data, and through intuitive and efficient access and data analysis tools for the broader geosciences community. PetDB, GEOROC, and NAVDAT have therefore formed the EarthChem consortium (www.earthchem.org) as an international collaborative effort to address these needs and serve the larger earth science community by facilitating the compilation, communication, serving, and visualization of geochemical data, and their integration with other geological, geochronological, geophysical, and geodetic information to maximize their scientific application. We report on the status of and future plans for EarthChem activities. EarthChem's development plan includes: (1) expanding the functionality of the web portal to become a 'one-stop shop for geochemical data' with search capability across databases, standardized and integrated data output, generally applicable tools for data quality assessment, and data analysis/visualization including plotting methods and an information-rich map interface; and (2) expanding data holdings by generating new datasets as identified and prioritized through community outreach, and facilitating data contributions from the community by offering web-based data submission capability and technical assistance for design, implementation, and population of new databases and their integration with all EarthChem data holdings. Such federated databases and datasets will retain their identity within the EarthChem system. 
We also plan on working with publishers to ease the assimilation of geochemical data into the EarthChem database. As a community resource, EarthChem will address user concerns and respond to broad scientific and educational needs. EarthChem will hold yearly workshops, town hall meetings, and/or exhibits at major meetings. The group has established a two-tier committee structure to help ease the communication and coordination of database and IT issues between existing data management projects, and to receive feedback and support from individuals and groups from the larger geosciences community.

  17. The contribution of nurses to incident disclosure: a narrative review.

    PubMed

    Harrison, Reema; Birks, Yvonne; Hall, Jill; Bosanquet, Kate; Harden, Melissa; Iedema, Rick

    2014-02-01

    To explore (a) how nurses feel about disclosing patient safety incidents to patients, (b) the current contribution that nurses make to the process of disclosing patient safety incidents to patients and (c) the barriers that nurses report as inhibiting their involvement in disclosure. A systematic search process was used to identify and select all relevant material. Heterogeneity in study design of the included articles prohibited a meta-analysis and findings were therefore synthesised in a narrative review. A range of text words, synonyms and subject headings were developed in conjunction with the York Centre for Reviews and Dissemination and used to undertake a systematic search of electronic databases (MEDLINE; EMBASE; CENTRAL; PsycINFO; Health Management and Information Consortium; CINAHL; ASSIA; Science Citation Index; Social Science Citation Index; Cochrane Database of Systematic Reviews; Database of Abstracts of Reviews of Effects; Health Technology Assessment Database; Health Systems Evidence; PASCAL; LILACS). Retrieval of studies was restricted to those published after 1980. Further data sources were: websites, grey literature, research in progress databases, hand-searching of relevant journals and author contact. The title and abstract of each citation was independently screened by two reviewers and disagreements resolved by consensus or consultation with a third person. Full text articles retrieved were further screened against the inclusion and exclusion criteria then checked by a second reviewer (YB). Relevant data were extracted and findings were synthesised in a narrative empirical synthesis. The systematic search and selection process identified 15 publications which included 11 unique studies that emerged from a range of locations. Findings suggest that nurses currently support both physicians and patients through incident disclosure, but may be ill-prepared to disclose incidents independently. 
Barriers to nurse involvement included a lack of opportunities for education and training, and the multiple and sometimes conflicting roles within nursing. Numerous potential benefits were identified that may result from nurses having a greater contribution to the disclosure process, but the provision of support and training is essential to overcome the reported barriers faced by nurses internationally. Copyright © 2013 Elsevier Ltd. All rights reserved.

  18. A decade of experience in the development and implementation of tissue banking informatics tools for intra and inter-institutional translational research

    PubMed Central

    Amin, Waqas; Singh, Harpreet; Pople, Andre K.; Winters, Sharon; Dhir, Rajiv; Parwani, Anil V.; Becich, Michael J.

    2010-01-01

    Context: Tissue banking informatics deals with standardized annotation, collection and storage of biospecimens that can further be shared by researchers. Over the last decade, the Department of Biomedical Informatics (DBMI) at the University of Pittsburgh has developed various tissue banking informatics tools to expedite translational medicine research. In this review, we describe the technical approach and capabilities of these models. Design: Clinical annotation of biospecimens requires data retrieval from various clinical information systems and the de-identification of the data by an honest broker. Based upon these requirements, DBMI, with its collaborators, has developed both Oracle-based organ-specific data marts and a more generic, model-driven architecture for biorepositories. The organ-specific models are developed utilizing Oracle 9.2.0.1 server tools and software applications and the model-driven architecture is implemented in a J2EE framework. Result: The organ-specific biorepositories implemented by DBMI include the Cooperative Prostate Cancer Tissue Resource (http://www.cpctr.info/), Pennsylvania Cancer Alliance Bioinformatics Consortium (http://pcabc.upmc.edu/main.cfm), EDRN Colorectal and Pancreatic Neoplasm Database (http://edrn.nci.nih.gov/) and Specialized Programs of Research Excellence (SPORE) Head and Neck Neoplasm Database (http://spores.nci.nih.gov/current/hn/index.htm). The model-based architecture is represented by the National Mesothelioma Virtual Bank (http://mesotissue.org/). These biorepositories provide researchers with thousands of well-annotated biospecimens that are searchable through query interfaces available via the Internet. Conclusion: These systems, developed and supported by our institute, serve to form a common platform for cancer research to accelerate progress in clinical and translational research. 
In addition, they provide a tangible infrastructure and resource for exposing research resources and biospecimen services in collaboration with the clinical anatomic pathology laboratory information system (APLIS) and the cancer registry information systems. PMID:20922029

  19. A decade of experience in the development and implementation of tissue banking informatics tools for intra and inter-institutional translational research.

    PubMed

    Amin, Waqas; Singh, Harpreet; Pople, Andre K; Winters, Sharon; Dhir, Rajiv; Parwani, Anil V; Becich, Michael J

    2010-08-10

    Tissue banking informatics deals with standardized annotation, collection and storage of biospecimens that can further be shared by researchers. Over the last decade, the Department of Biomedical Informatics (DBMI) at the University of Pittsburgh has developed various tissue banking informatics tools to expedite translational medicine research. In this review, we describe the technical approach and capabilities of these models. Clinical annotation of biospecimens requires data retrieval from various clinical information systems and the de-identification of the data by an honest broker. Based upon these requirements, DBMI, with its collaborators, has developed both Oracle-based organ-specific data marts and a more generic, model-driven architecture for biorepositories. The organ-specific models are developed utilizing Oracle 9.2.0.1 server tools and software applications and the model-driven architecture is implemented in a J2EE framework. The organ-specific biorepositories implemented by DBMI include the Cooperative Prostate Cancer Tissue Resource (http://www.cpctr.info/), Pennsylvania Cancer Alliance Bioinformatics Consortium (http://pcabc.upmc.edu/main.cfm), EDRN Colorectal and Pancreatic Neoplasm Database (http://edrn.nci.nih.gov/) and Specialized Programs of Research Excellence (SPORE) Head and Neck Neoplasm Database (http://spores.nci.nih.gov/current/hn/index.htm). The model-based architecture is represented by the National Mesothelioma Virtual Bank (http://mesotissue.org/). These biorepositories provide researchers with thousands of well-annotated biospecimens that are searchable through query interfaces available via the Internet. These systems, developed and supported by our institute, serve to form a common platform for cancer research to accelerate progress in clinical and translational research. 
In addition, they provide a tangible infrastructure and resource for exposing research resources and biospecimen services in collaboration with the clinical anatomic pathology laboratory information system (APLIS) and the cancer registry information systems.

  20. Meeting Report from the Genomic Standards Consortium (GSC) Workshop 8

    PubMed Central

    Kyrpides, Nikos; Field, Dawn; Sterk, Peter; Kottmann, Renzo; Glöckner, Frank Oliver; Hirschman, Lynette; Garrity, George M.; Cochrane, Guy; Wooley, John

    2010-01-01

    This report summarizes the proceedings of the 8th meeting of the Genomic Standards Consortium, held at the Department of Energy Joint Genome Institute in Walnut Creek, CA, USA on September 9-11, 2009. This three-day workshop marked the maturing of the Genomic Standards Consortium from an informal gathering of researchers interested in developing standards in the fields of genomics and metagenomics into an established community with a defined governance mechanism, its own open access journal, and a family of established standards for describing genomes, metagenomes and marker studies (i.e. ribosomal RNA gene surveys). There will be increased efforts within the GSC to reach out to the wider scientific community via a range of new projects. Further information about the GSC and its activities can be found at http://gensc.org/. PMID:21304696
