Lin, Hongli; Wang, Weisheng; Luo, Jiawei; Yang, Xuedong
2014-12-01
The aim of this study was to develop a personalized training system using the Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI) database, because collecting, annotating, and marking a large number of appropriate computed tomography (CT) scans, and providing the capability of dynamically selecting suitable training cases based on the performance levels of trainees and the characteristics of cases, are critical for developing an efficient training system. A novel approach is proposed to develop a personalized radiology training system for the interpretation of lung nodules in CT scans using the LIDC/IDRI database. The system provides a Content-Boosted Collaborative Filtering (CBCF) algorithm for predicting the difficulty level of each case for each trainee when selecting suitable cases to meet individual needs, and a diagnostic simulation tool that enables trainees to analyze and diagnose lung nodules with the help of an image processing tool and a nodule retrieval tool. Preliminary evaluation of the system shows that developing a personalized training system for the interpretation of lung nodules is needed and useful for enhancing the professional skills of trainees. The approach of developing personalized training systems using the LIDC/IDRI database is a feasible solution to the challenges of constructing specific training programs in terms of cost and training efficiency. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.
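As a rough illustration of the content-boosted collaborative filtering (CBCF) idea described above, the sketch below first fills a sparse trainee-by-case difficulty matrix with a simple content-based predictor and then applies user-based collaborative filtering; the data, helper names, and the specific filling strategy are hypothetical and not taken from the paper.

```python
import numpy as np

# Hypothetical sketch of Content-Boosted Collaborative Filtering (CBCF) for
# predicting per-trainee case difficulty; values and helpers are illustrative.
rng = np.random.default_rng(0)

# Sparse trainee-by-case difficulty matrix (NaN = case not yet attempted).
R = np.array([
    [3.0, np.nan, 4.0, np.nan],
    [np.nan, 2.0, np.nan, 5.0],
    [4.0, 3.0, np.nan, np.nan],
])

# Case content features (e.g., nodule size, subtlety) used by a simple
# content-based predictor to fill the missing entries (the "boosting" step).
case_features = rng.normal(size=(R.shape[1], 3))

def content_based_fill(R, feats):
    """Fill NaNs by regressing each trainee's known ratings on case features."""
    filled = R.copy()
    for u in range(R.shape[0]):
        known = ~np.isnan(R[u])
        if known.sum() < 2:                 # too little data: fall back to global mean
            filled[u, ~known] = np.nanmean(R)
            continue
        w, *_ = np.linalg.lstsq(feats[known], R[u, known], rcond=None)
        filled[u, ~known] = feats[~known] @ w
    return filled

def predict(filled, target_user, case):
    """User-based CF on the densified matrix: similarity-weighted average."""
    sims = np.array([np.corrcoef(filled[target_user], filled[v])[0, 1]
                     for v in range(filled.shape[0])])
    sims[target_user] = 0.0
    return float(np.average(filled[:, case], weights=np.abs(sims) + 1e-9))

dense = content_based_fill(R, case_features)
print("Predicted difficulty of case 2 for trainee 1:", predict(dense, 1, 2))
```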
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
2011-02-15
Purpose: The development of computer-aided diagnostic (CAD) methods for lung nodule detection, classification, and quantitative assessment can be facilitated through a well-characterized repository of computed tomography (CT) scans. The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI) completed such a database, establishing a publicly available reference for the medical imaging research community. Initiated by the National Cancer Institute (NCI), further advanced by the Foundation for the National Institutes of Health (FNIH), and accompanied by the Food and Drug Administration (FDA) through active participation, this public-private partnership demonstrates the success of a consortium founded on a consensus-based process. Methods: Seven academic centers and eight medical imaging companies collaborated to identify, address, and resolve challenging organizational, technical, and clinical issues to provide a solid foundation for a robust database. The LIDC/IDRI Database contains 1018 cases, each of which includes images from a clinical thoracic CT scan and an associated XML file that records the results of a two-phase image annotation process performed by four experienced thoracic radiologists. In the initial blinded-read phase, each radiologist independently reviewed each CT scan and marked lesions belonging to one of three categories ("nodule ≥ 3 mm," "nodule < 3 mm," and "non-nodule ≥ 3 mm"). In the subsequent unblinded-read phase, each radiologist independently reviewed their own marks along with the anonymized marks of the three other radiologists to render a final opinion. The goal of this process was to identify as completely as possible all lung nodules in each CT scan without requiring forced consensus. Results: The Database contains 7371 lesions marked "nodule" by at least one radiologist. 2669 of these lesions were marked "nodule ≥ 3 mm" by at least one radiologist, of which 928 (34.7%) received such marks from all four radiologists. These 2669 lesions include nodule outlines and subjective nodule characteristic ratings. Conclusions: The LIDC/IDRI Database is expected to provide an essential medical imaging research resource to spur CAD development, validation, and dissemination in clinical practice.
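The agreement statistics quoted above (lesions marked by at least one radiologist versus by all four) can be tallied from the per-reader marks; the following minimal sketch assumes a simplified, hypothetical mark-record format rather than the actual LIDC XML schema.

```python
from collections import defaultdict

# Hypothetical mark records: (lesion_id, reader_id, category). The field names
# are illustrative; the real LIDC annotation files are far richer than this.
marks = [
    ("lesion-001", "R1", "nodule>=3mm"),
    ("lesion-001", "R2", "nodule>=3mm"),
    ("lesion-001", "R3", "nodule>=3mm"),
    ("lesion-001", "R4", "nodule>=3mm"),
    ("lesion-002", "R2", "nodule>=3mm"),
    ("lesion-003", "R1", "nodule<3mm"),
]

readers_per_lesion = defaultdict(set)
for lesion, reader, category in marks:
    if category == "nodule>=3mm":
        readers_per_lesion[lesion].add(reader)

at_least_one = sum(1 for r in readers_per_lesion.values() if len(r) >= 1)
all_four = sum(1 for r in readers_per_lesion.values() if len(r) == 4)
print(f"nodule>=3mm by >=1 reader: {at_least_one}, by all 4 readers: {all_four}")
```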
Dodd, Lori E; Wagner, Robert F; Armato, Samuel G; McNitt-Gray, Michael F; Beiden, Sergey; Chan, Heang-Ping; Gur, David; McLennan, Geoffrey; Metz, Charles E; Petrick, Nicholas; Sahiner, Berkman; Sayre, Jim
2004-04-01
Cancer of the lung and bronchus is the leading fatal malignancy in the United States. Five-year survival is low, but treatment of early stage disease considerably improves chances of survival. Advances in multidetector-row computed tomography technology provide detection of smaller lung nodules and offer a potentially effective screening tool. The large number of images per exam, however, requires considerable radiologist time for interpretation and is an impediment to clinical throughput. Thus, computer-aided diagnosis (CAD) methods are needed to assist radiologists with their decision making. To promote the development of CAD methods, the National Cancer Institute formed the Lung Image Database Consortium (LIDC). The LIDC is charged with developing the consensus and standards necessary to create an image database of multidetector-row computed tomography lung images as a resource for CAD researchers. To develop such a prospective database, its potential uses must be anticipated. The ultimate applications will influence the information that must be included along with the images, the relevant measures of algorithm performance, and the number of required images. In this article we outline assessment methodologies and statistical issues as they relate to several potential uses of the LIDC database. We review methods for performance assessment and discuss issues of defining "truth" as well as the complications that arise when truth information is not available. We also discuss issues about sizing and populating a database.
Soft computing approach to 3D lung nodule segmentation in CT.
Badura, P; Pietka, E
2014-10-01
This paper presents a novel, multilevel approach to the segmentation of various types of pulmonary nodules in computed tomography studies. It is based on two branches of computational intelligence: fuzzy connectedness (FC) and evolutionary computation. First, the image and auxiliary data are prepared for the 3D FC analysis during the first stage of the algorithm - mask generation. Its main goal is to process specific types of nodules attached to the pleura or vessels. It consists of basic image processing operations as well as dedicated routines for specific nodule cases. The evolutionary computation is performed on the image and seed points in order to shorten the FC analysis and improve its accuracy. After the FC application, the remaining vessels are removed during the postprocessing stage. The method has been validated using the first dataset of studies acquired and described by the Lung Image Database Consortium (LIDC) and by its latest release - the LIDC-IDRI (Image Database Resource Initiative) database. Copyright © 2014 Elsevier Ltd. All rights reserved.
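For readers unfamiliar with fuzzy connectedness, the sketch below shows a generic 2D variant in which neighboring pixels are linked by an intensity-based affinity and each pixel's connectedness to the seed is the strength of its best path; this only illustrates the FC principle, not the multilevel pipeline of the paper.

```python
import heapq
import numpy as np

# Minimal 2D fuzzy connectedness (FC) sketch: the affinity between neighboring
# pixels is high when their intensities are close to the seed intensity, and a
# pixel's FC value is the max-min ("strongest path") connectivity to the seed.
def fuzzy_connectedness(image, seed, sigma=30.0):
    h, w = image.shape
    mu = float(image[seed])
    conn = np.zeros_like(image, dtype=float)
    conn[seed] = 1.0
    heap = [(-1.0, seed)]
    while heap:
        neg_c, (y, x) = heapq.heappop(heap)
        c = -neg_c
        if c < conn[y, x]:                    # stale queue entry
            continue
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                affinity = np.exp(-((image[ny, nx] - mu) ** 2) / (2 * sigma ** 2))
                cand = min(c, affinity)       # strength of the best path so far
                if cand > conn[ny, nx]:
                    conn[ny, nx] = cand
                    heapq.heappush(heap, (-cand, (ny, nx)))
    return conn

# Toy example: a bright blob ("nodule") on a dark background.
img = np.zeros((64, 64)); img[20:40, 20:40] = 200.0
fc_map = fuzzy_connectedness(img, seed=(30, 30))
print("segmented pixels:", int((fc_map > 0.5).sum()))
```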
Comparison of computer versus manual determination of pulmonary nodule volumes in CT scans
NASA Astrophysics Data System (ADS)
Biancardi, Alberto M.; Reeves, Anthony P.; Jirapatnakul, Artit C.; Apanasovitch, Tatiyana; Yankelevitz, David; Henschke, Claudia I.
2008-03-01
Accurate nodule volume estimation is necessary in order to estimate the clinically relevant growth rate or change in size over time. An automated nodule volume-measuring algorithm was applied to a set of pulmonary nodules that were documented by the Lung Image Database Consortium (LIDC). The LIDC process model specifies that each scan is assessed by four experienced thoracic radiologists and that boundaries are marked around the visible extent of nodules 3 mm and larger. Nodules were selected from the LIDC database with the following inclusion criteria: (a) they must have a solid component on a minimum of three CT image slices and (b) they must be marked by all four LIDC radiologists. A total of 113 nodules met the selection criteria, with diameters ranging from 3.59 mm to 32.68 mm (mean 9.37 mm, median 7.67 mm). The centroid of each marked nodule was used as the seed point for the automated algorithm. 95 nodules (84.1%) were correctly segmented, although one of them was judged by the automated method not to meet the first selection criterion; of the remaining nodules, eight (7.1%) were structurally too complex or extensively attached and 10 (8.8%) were considered not properly segmented after a simple visual inspection by a radiologist. Because the LIDC specifications, as noted above, instruct radiologists to include both solid and sub-solid parts, the automated method's core capability of segmenting solid tissue was augmented to also account for the sub-solid parts of nodules. We ranked the distances of the automated estimates and the radiologist-based estimates from the median of the radiologist-based values. The automated method was closer to the median than at least one of the values derived from the manual markings in 76.6% of the cases, which indicates very good agreement with the radiologists' markings.
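The agreement criterion used above (whether the automated volume estimate lies closer to the median of the radiologist-based values than at least one manual estimate) can be expressed in a few lines; the volumes below are made-up examples.

```python
import numpy as np

# Sketch of the agreement check: for each nodule, is the automated volume
# estimate closer to the median of the four radiologist-based volumes than at
# least one of those manual estimates? All values are illustrative.
def closer_than_one_reader(auto_vol, reader_vols):
    med = np.median(reader_vols)
    return abs(auto_vol - med) < max(abs(v - med) for v in reader_vols)

nodules = [
    (520.0, [480.0, 505.0, 540.0, 610.0]),   # (automated, four manual volumes) in mm^3
    (300.0, [150.0, 160.0, 170.0, 165.0]),
]
hits = sum(closer_than_one_reader(a, r) for a, r in nodules)
print(f"automated estimate closer to median than >=1 reader: {hits}/{len(nodules)}")
```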
Messay, Temesguen; Hardie, Russell C; Tuinstra, Timothy R
2015-05-01
We present new pulmonary nodule segmentation algorithms for computed tomography (CT). These include a fully-automated (FA) system, a semi-automated (SA) system, and a hybrid system. Like most traditional systems, the new FA system requires only a single user-supplied cue point. On the other hand, the SA system represents a new algorithm class requiring 8 user-supplied control points. This does increase the burden on the user, but we show that the resulting system is highly robust and can handle a variety of challenging cases. The proposed hybrid system starts with the FA system. If improved segmentation results are needed, the SA system is then deployed. The FA segmentation engine has 2 free parameters, and the SA system has 3. These parameters are adaptively determined for each nodule in a search process guided by a regression neural network (RNN). The RNN uses a number of features computed for each candidate segmentation. We train and test our systems using the new Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) data. To the best of our knowledge, this is one of the first nodule-specific performance benchmarks using the new LIDC-IDRI dataset. We also compare the performance of the proposed methods with several previously reported results on the same data used by those other methods. Our results suggest that the proposed FA system improves upon the state-of-the-art, and the SA system offers a considerable boost over the FA system. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
Automatic lung nodule graph cuts segmentation with deep learning false positive reduction
NASA Astrophysics Data System (ADS)
Sun, Wenqing; Huang, Xia; Tseng, Tzu-Liang Bill; Qian, Wei
2017-03-01
To automatically detect lung nodules from CT images, we designed a two-stage computer-aided detection (CAD) system. The first stage is graph cuts segmentation to identify and segment the nodule candidates, and the second stage is a convolutional neural network for false positive reduction. The dataset contains 595 CT cases randomly selected from the Lung Image Database Consortium and Image Database Resource Initiative (LIDC/IDRI), and the 305 pulmonary nodules that achieved diagnostic consensus among all four experienced radiologists were our detection targets. Considering each slice as an individual sample, 2844 nodules were included in our database. The graph cuts segmentation was conducted in a two-dimensional manner, and 2733 lung nodule ROIs were successfully identified and segmented. After false positive reduction by a seven-layer convolutional neural network, 2535 nodules remained detected while the false positive rate dropped to 31.6%. The average F-measure of segmented lung nodule tissue is 0.8501.
Dictionary learning-based CT detection of pulmonary nodules
NASA Astrophysics Data System (ADS)
Wu, Panpan; Xia, Kewen; Zhang, Yanbo; Qian, Xiaohua; Wang, Ge; Yu, Hengyong
2016-10-01
Segmentation of lung features is one of the most important steps for computer-aided detection (CAD) of pulmonary nodules with computed tomography (CT). However, irregular shapes, complicated anatomical background and poor pulmonary nodule contrast make CAD a very challenging problem. Here, we propose a novel scheme for feature extraction and classification of pulmonary nodules through dictionary learning from training CT images, which does not require accurately segmented pulmonary nodules. Specifically, two classification-oriented dictionaries and one background dictionary are learnt to solve a two-category problem. In terms of the classification-oriented dictionaries, we calculate sparse coefficient matrices to extract intrinsic features for pulmonary nodule classification. The support vector machine (SVM) classifier is then designed to optimize the performance. Our proposed methodology is evaluated with the lung image database consortium and image database resource initiative (LIDC-IDRI) database, and the results demonstrate that the proposed strategy is promising.
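A minimal sketch of the dictionary-learning feature pipeline is given below using scikit-learn; it learns a single shared dictionary and feeds the sparse codes to an SVM, whereas the paper learns separate classification-oriented and background dictionaries, and the synthetic patches stand in for LIDC-IDRI data.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode
from sklearn.svm import SVC

# Illustrative sketch: learn a dictionary from image patches, use the sparse
# coefficient matrix as intrinsic features, and classify with an SVM.
rng = np.random.default_rng(0)
X_nodule = rng.normal(1.0, 1.0, size=(100, 64))      # flattened 8x8 "nodule" patches
X_background = rng.normal(0.0, 1.0, size=(100, 64))  # flattened background patches
X = np.vstack([X_nodule, X_background])
y = np.array([1] * 100 + [0] * 100)

dico = MiniBatchDictionaryLearning(n_components=32, alpha=1.0, random_state=0)
dico.fit(X)

# Sparse codes of each patch over the learned atoms = features for the SVM.
codes = sparse_encode(X, dico.components_, alpha=1.0)

clf = SVC(kernel="linear").fit(codes, y)
print("training accuracy:", clf.score(codes, y))
```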
Li, Wei; Cao, Peng; Zhao, Dazhe; Wang, Junbo
2016-01-01
Computer-aided detection (CAD) systems can assist radiologists by offering a second opinion on early diagnosis of lung cancer. Classification and feature representation play critical roles in false-positive reduction (FPR) in lung nodule CAD. We design a deep convolutional neural network method for nodule classification, which has the advantage of automatically learning representations and strong generalization ability. A specific network structure for nodule images is proposed to address the recognition of three types of nodules, that is, solid, semisolid, and ground glass opacity (GGO). Deep convolutional neural networks are trained on 62,492 regions of interest (ROIs), including 40,772 nodules and 21,720 non-nodules from the Lung Image Database Consortium (LIDC) database. Experimental results demonstrate the effectiveness of the proposed method in terms of sensitivity and overall accuracy and show that it consistently outperforms the competing methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang Jiahui; Engelmann, Roger; Li Qiang
2007-12-15
Accurate segmentation of pulmonary nodules in computed tomography (CT) is an important and difficult task for computer-aided diagnosis of lung cancer. Therefore, the authors developed a novel automated method for accurate segmentation of nodules in three-dimensional (3D) CT. First, a volume of interest (VOI) was determined at the location of a nodule. To simplify nodule segmentation, the 3D VOI was transformed into a two-dimensional (2D) image by use of a key 'spiral-scanning' technique, in which a number of radial lines originating from the center of the VOI spirally scanned the VOI from the 'north pole' to the 'south pole'. The voxels scanned by the radial lines provided a transformed 2D image. Because the surface of a nodule in the 3D image became a curve in the transformed 2D image, the spiral-scanning technique considerably simplified the segmentation method and enabled reliable segmentation results to be obtained. A dynamic programming technique was employed to delineate the 'optimal' outline of a nodule in the 2D image, which corresponded to the surface of the nodule in the 3D image. The optimal outline was then transformed back into 3D image space to provide the surface of the nodule. An overlap between nodule regions provided by computer and by the radiologists was employed as a performance metric for evaluating the segmentation method. The database included two Lung Image Database Consortium (LIDC) data sets that contained 23 and 86 CT scans, respectively, with 23 and 73 nodules that were 3 mm or larger in diameter. For the two data sets, six and four radiologists manually delineated the outlines of the nodules as reference standards in a performance evaluation for nodule segmentation. The segmentation method was trained on the first and was tested on the second LIDC data set. The mean overlap values were 66% and 64% for the nodules in the first and second LIDC data sets, respectively, which represented a higher performance level than those of two existing segmentation methods that were also evaluated by use of the LIDC data sets. The segmentation method provided relatively reliable results for pulmonary nodule segmentation and would be useful for lung cancer quantification, detection, and diagnosis.
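The following sketch illustrates the spiral-scanning transform in isolation: radial lines from the VOI center follow a spherical spiral from the north to the south pole, and the sampled voxels form a 2D image. The number of lines, winding rate, and toy volume are assumptions for illustration, and the subsequent dynamic-programming outline search is not shown.

```python
import numpy as np
from scipy.ndimage import map_coordinates

# Minimal sketch of the "spiral-scanning" transform: each radial line leaving
# the VOI center becomes one column of a 2D image.
def spiral_scan(voi, n_lines=180, n_samples=32):
    cz, cy, cx = (np.asarray(voi.shape) - 1) / 2.0
    t = np.linspace(0.0, 1.0, n_lines)
    theta = np.pi * t                      # polar angle: north pole -> south pole
    phi = 2.0 * np.pi * 12.0 * t           # azimuth winds around as theta advances
    r = np.linspace(0.0, min(voi.shape) / 2.0 - 1.0, n_samples)
    dirs = np.stack([np.cos(theta),                    # z component
                     np.sin(theta) * np.sin(phi),      # y component
                     np.sin(theta) * np.cos(phi)], axis=1)
    # coords has shape (3, n_samples, n_lines): one (z, y, x) triple per sample.
    coords = (np.array([cz, cy, cx])[:, None, None]
              + dirs.T[:, None, :] * r[None, :, None])
    return map_coordinates(voi, coords, order=1)

# Toy VOI: a bright sphere ("nodule") centered in a 32^3 volume.
zz, yy, xx = np.mgrid[:32, :32, :32]
voi = ((zz - 15.5) ** 2 + (yy - 15.5) ** 2 + (xx - 15.5) ** 2 < 8 ** 2).astype(float)
img2d = spiral_scan(voi)
print("transformed 2D image shape:", img2d.shape)   # (n_samples, n_lines)
```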
Sihong Chen; Jing Qin; Xing Ji; Baiying Lei; Tianfu Wang; Dong Ni; Jie-Zhi Cheng
2017-03-01
The gap between computational and semantic features is one of the major factors limiting the clinical usefulness of computer-aided diagnosis (CAD). To bridge this gap, we exploit three multi-task learning (MTL) schemes to leverage heterogeneous computational features derived from deep learning models, a stacked denoising autoencoder (SDAE) and a convolutional neural network (CNN), as well as hand-crafted Haar-like and HoG features, for the description of 9 semantic features of lung nodules in CT images. We consider that relations may exist among the semantic features of "spiculation," "texture," "margin," etc., that can be explored with MTL. The Lung Image Database Consortium (LIDC) data is adopted in this study for its rich annotation resources. The LIDC nodules were quantitatively scored with respect to 9 semantic features by 12 radiologists from several institutions in the U.S.A. By treating each semantic feature as an individual task, the MTL schemes select and map the heterogeneous computational features toward the radiologists' ratings, with cross-validation evaluation schemes on 2400 randomly selected nodules from the LIDC dataset. The experimental results suggest that the predicted semantic scores from the three MTL schemes are closer to the radiologists' ratings than the scores from single-task LASSO and elastic net regression methods. The proposed semantic attribute scoring scheme may provide richer quantitative assessments of nodules for better support of diagnostic decision and management. Meanwhile, the capability of automatically associating medical image content with clinical semantic terms may also assist the development of medical search engines.
de Sousa Costa, Robherson Wector; da Silva, Giovanni Lucca França; de Carvalho Filho, Antonio Oseas; Silva, Aristófanes Corrêa; de Paiva, Anselmo Cardoso; Gattass, Marcelo
2018-05-23
Lung cancer is the leading cause of cancer-related death among patients around the world, and it has one of the lowest survival rates after diagnosis. Therefore, this study proposes a methodology for the diagnosis of lung nodules as benign or malignant based on image processing and pattern recognition techniques. Mean phylogenetic distance (MPD) and the taxonomic diversity index (Δ) were used as texture descriptors. Finally, a genetic algorithm in conjunction with a support vector machine was applied to select the best training model. The proposed methodology was tested on computed tomography (CT) images from the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI), with a best sensitivity of 93.42%, specificity of 91.21%, accuracy of 91.81%, and area under the ROC curve of 0.94. The results demonstrate the promising performance of texture extraction techniques using mean phylogenetic distance and the taxonomic diversity index combined with phylogenetic trees. Graphical Abstract: Stages of the proposed methodology.
Castro, Alfonso; Boveda, Carmen; Arcay, Bernardino; Sanjurjo, Pedro
2016-01-01
The detection of pulmonary nodules is one of the most studied problems in the field of medical image analysis, owing to the great difficulty of detecting such nodules early and to their social impact. The traditional approach involves the development of a multistage CAD system capable of informing the radiologist of the presence or absence of nodules. One stage in such systems is the detection of regions of interest (ROIs) that may be nodules, in order to reduce the search space of the problem. This paper evaluates fuzzy clustering algorithms that employ different classification strategies to achieve this goal. After characterising these algorithms, the authors propose a new algorithm and different variations to improve the results obtained initially. Finally, it is shown that the most recent developments in fuzzy clustering are able to detect regions that may be nodules in CT studies. The algorithms were evaluated using helical thoracic CT scans obtained from the database of the LIDC (Lung Image Database Consortium). PMID:27517049
The Lung Image Database Consortium (LIDC): Ensuring the integrity of expert-defined “truth”
Armato, Samuel G.; Roberts, Rachael Y.; McNitt-Gray, Michael F.; Meyer, Charles R.; Reeves, Anthony P.; McLennan, Geoffrey; Engelmann, Roger M.; Bland, Peyton H.; Aberle, Denise R.; Kazerooni, Ella A.; MacMahon, Heber; van Beek, Edwin J.R.; Yankelevitz, David; Croft, Barbara Y.; Clarke, Laurence P.
2007-01-01
Rationale and Objectives Computer-aided diagnostic (CAD) systems fundamentally require the opinions of expert human observers to establish “truth” for algorithm development, training, and testing. The integrity of this “truth,” however, must be established before investigators commit to this “gold standard” as the basis for their research. The purpose of this study was to develop a quality assurance (QA) model as an integral component of the “truth” collection process concerning the location and spatial extent of lung nodules observed on computed tomography (CT) scans to be included in the Lung Image Database Consortium (LIDC) public database. Materials and Methods One hundred CT scans were interpreted by four radiologists through a two-phase process. For the first of these reads (the “blinded read phase”), radiologists independently identified and annotated lesions, assigning each to one of three categories: “nodule ≥ 3mm,” “nodule < 3mm,” or “non-nodule ≥ 3mm.” For the second read (the “unblinded read phase”), the same radiologists independently evaluated the same CT scans but with all of the annotations from the previously performed blinded reads presented; each radiologist could add marks, edit or delete their own marks, change the lesion category of their own marks, or leave their marks unchanged. The post-unblinded-read set of marks was grouped into discrete nodules and subjected to the QA process, which consisted of (1) identification of potential errors introduced during the complete image annotation process (such as two marks on what appears to be a single lesion or an incomplete nodule contour) and (2) correction of those errors. Seven categories of potential error were defined; any nodule with a mark that satisfied the criterion for one of these categories was referred to the radiologist who assigned that mark for either correction or confirmation that the mark was intentional. Results A total of 105 QA issues were identified across 45 (45.0%) of the 100 CT scans. Radiologist review resulted in modifications to 101 (96.2%) of these potential errors. Twenty-one lesions erroneously marked as lung nodules after the unblinded reads had this designation removed through the QA process. Conclusion The establishment of “truth” must incorporate a QA process to guarantee the integrity of the datasets that will provide the basis for the development, training, and testing of CAD systems. PMID:18035275
NASA Astrophysics Data System (ADS)
Hancock, Matthew C.; Magnan, Jerry F.
2017-03-01
To determine the potential usefulness of quantified diagnostic image features as inputs to a CAD system, we investigate the predictive capabilities of statistical learning methods for classifying nodule malignancy, utilizing the Lung Image Database Consortium (LIDC) dataset and employing only the radiologist-assigned diagnostic feature values for the lung nodules therein, as well as our derived estimates of nodule diameter and volume from the radiologists' annotations. We calculate theoretical upper bounds on the classification accuracy achievable by an ideal classifier that uses only the radiologist-assigned feature values, and we obtain an accuracy of 85.74 (±1.14)%, which is, on average, 4.43% below the theoretical maximum of 90.17%. The corresponding area-under-the-curve (AUC) score is 0.932 (±0.012), which increases to 0.949 (±0.007) when diameter and volume features are included, with the accuracy increasing to 88.08 (±1.11)%. Our results are comparable to those in the literature that use algorithmically derived image-based features, which supports our hypothesis that lung nodules can be classified as malignant or benign using only quantified diagnostic image features, and indicates the competitiveness of this approach. We also analyze how the classification accuracy depends on specific features and feature subsets, and we rank the features according to their predictive power, statistically demonstrating the top four to be spiculation, lobulation, subtlety, and calcification.
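One way to compute such an upper bound: nodules sharing an identical discrete feature vector but carrying different labels cannot be separated by any classifier restricted to those features, so the best achievable accuracy predicts the majority label within each identical-feature group. The sketch below uses synthetic stand-in ratings, not the LIDC values.

```python
import numpy as np
from collections import Counter, defaultdict

# Sketch of an "ideal classifier" accuracy bound over discrete feature vectors.
rng = np.random.default_rng(1)
X = rng.integers(1, 6, size=(500, 4))     # e.g. spiculation, lobulation, subtlety, calcification (1..5)
y = (X.sum(axis=1) + rng.integers(0, 3, size=500) > 13).astype(int)   # noisy malignancy label

groups = defaultdict(list)
for feats, label in zip(map(tuple, X), y):
    groups[feats].append(label)

# Within each identical-feature group, the best any classifier can do is
# predict the majority label; summing those counts gives the upper bound.
correct = sum(Counter(labels).most_common(1)[0][1] for labels in groups.values())
print(f"theoretical maximum accuracy: {correct / len(y):.3f}")
```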
Hybrid detection of lung nodules on CT scan images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Lin; Tan, Yongqiang; Schwartz, Lawrence H.
Purpose: The diversity of lung nodules poses difficulty for the current computer-aided diagnostic (CAD) schemes for lung nodule detection on computed tomography (CT) scan images, especially in large-scale CT screening studies. We proposed a novel CAD scheme based on a hybrid method to address the challenges of detecting diverse lung nodules. Methods: The hybrid method proposed in this paper integrates several existing and widely used algorithms in the field of nodule detection, including morphological operation, dot enhancement based on the Hessian matrix, fuzzy connectedness segmentation, a local density maximum algorithm, a geodesic distance map, and regression tree classification. All of the adopted algorithms were organized into tree structures with multiple nodes. Each node in the tree structure aimed to deal with one type of lung nodule. Results: The method has been evaluated on 294 CT scans from the Lung Image Database Consortium (LIDC) dataset. The CT scans were randomly divided into two independent subsets: a training set (196 scans) and a test set (98 scans). In total, the 294 CT scans contained 631 lung nodules, which were annotated by at least two radiologists participating in the LIDC project. The sensitivity and false positives per scan for the training set were 87% and 2.61%. The sensitivity and false positives per scan for the test set were 85.2% and 3.13%. Conclusions: The proposed hybrid method yielded high performance on the evaluation dataset and exhibits advantages over existing CAD schemes. We believe that the present method would be useful for a wide variety of CT imaging protocols used in both routine diagnosis and screening studies.
Computerized lung cancer malignancy level analysis using 3D texture features
NASA Astrophysics Data System (ADS)
Sun, Wenqing; Huang, Xia; Tseng, Tzu-Liang; Zhang, Jianying; Qian, Wei
2016-03-01
Based on the likelihood of malignancy, the nodules in the Lung Image Database Consortium (LIDC) database are classified into five different levels. In this study, we tested the possibility of using three-dimensional (3D) texture features to identify the malignancy level of each nodule. Five groups of features were implemented and tested on 172 nodules with confident malignancy levels from four radiologists. These five feature groups are: grey level co-occurrence matrix (GLCM) features, local binary pattern (LBP) features, scale-invariant feature transform (SIFT) features, steerable features, and wavelet features. Because of the high dimensionality of our proposed features, multidimensional scaling (MDS) was used for dimension reduction. RUSBoost was applied to the extracted features for classification, owing to its advantages in handling imbalanced datasets. Each group of features and the final combined features were used to classify nodules highly suspicious for cancer (level 5) and moderately suspicious (level 4). The results showed that the area under the curve (AUC) and accuracy are 0.7659 and 0.8365 when using the finalized features. These features were also tested on differentiating benign and malignant cases, and the reported AUC and accuracy were 0.8901 and 0.9353.
Ferreira Junior, José Raniery; Oliveira, Marcelo Costa; de Azevedo-Marques, Paulo Mazzoncini
2016-12-01
Lung cancer is the leading cause of cancer-related deaths in the world, and its main manifestation is pulmonary nodules. Detection and classification of pulmonary nodules are challenging tasks that must be done by qualified specialists, but image interpretation errors make those tasks difficult. In order to aid radiologists in those difficult tasks, it is important to integrate computer-based tools with the lesion detection, pathology diagnosis, and image interpretation processes. However, computer-aided diagnosis research faces the problem of not having enough shared medical reference data for the development, testing, and evaluation of computational methods for diagnosis. In order to minimize this problem, this paper presents a public nonrelational, document-oriented, cloud-based database of pulmonary nodules characterized by 3D texture attributes, identified by experienced radiologists and classified according to nine different subjective characteristics by the same specialists. Our goal with the development of this database is to improve computer-aided lung cancer diagnosis and pulmonary nodule detection and classification research through the deployment of this database in a cloud Database as a Service framework. Pulmonary nodule data were provided by the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI), image descriptors were acquired by volumetric texture analysis, and the database schema was developed using a document-oriented Not only Structured Query Language (NoSQL) approach. The proposed database currently contains 379 exams, 838 nodules, and 8237 images, of which 4029 are CT scans and 4208 are manually segmented nodules, and it is hosted in a MongoDB instance on a cloud infrastructure.
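A document-oriented nodule record of the kind described could look like the sketch below, using pymongo; the connection string, database name, and field names are hypothetical and not the schema actually deployed by the authors.

```python
from pymongo import MongoClient

# Illustrative document-oriented schema for one pulmonary-nodule record.
# Everything here (fields, values, connection string) is a placeholder.
client = MongoClient("mongodb://localhost:27017/")
collection = client["nodule_db"]["nodules"]

nodule_doc = {
    "exam_id": "LIDC-IDRI-0001",
    "nodule_id": 3,
    "radiologist_ratings": {"malignancy": 4, "spiculation": 2, "texture": 5},
    "texture_attributes_3d": [0.42, 1.87, 0.03],     # volumetric texture descriptors
    "segmentation_slices": ["slice_081.png", "slice_082.png"],
}
collection.insert_one(nodule_doc)

# Example query: all nodules rated as likely malignant (>= 4) by the readers.
for doc in collection.find({"radiologist_ratings.malignancy": {"$gte": 4}}):
    print(doc["exam_id"], doc["nodule_id"])
```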
An evaluation of consensus techniques for diagnostic interpretation
NASA Astrophysics Data System (ADS)
Sauter, Jake N.; LaBarre, Victoria M.; Furst, Jacob D.; Raicu, Daniela S.
2018-02-01
Learning diagnostic labels from image content has been the standard in computer-aided diagnosis. Most computer-aided diagnosis systems use low-level image features extracted directly from image content to train and test machine learning classifiers for diagnostic label prediction. When the ground truth for the diagnostic labels is not available, a reference truth is generated from the experts' diagnostic interpretations of the image/region of interest. More specifically, when the label is uncertain, e.g., when multiple experts label an image and their interpretations are different, techniques to handle the label variability are necessary. In this paper, we compare three consensus techniques that are typically used to encode the variability in the experts' labeling of the medical data: mean, median, and mode, and their effects on simple classifiers that can handle deterministic labels (decision trees) and probabilistic vectors of labels (belief decision trees). Given that the NIH/NCI Lung Image Database Consortium (LIDC) data provides interpretations of lung nodules by up to four radiologists, we leverage the LIDC data to evaluate and compare these consensus approaches when creating computer-aided diagnosis systems for lung nodules. First, low-level image features of nodules are extracted and paired with their radiologists' semantic ratings (1 = most likely benign, ..., 5 = most likely malignant); second, machine learning multi-class classifiers that handle deterministic labels (decision trees) and probabilistic vectors of labels (belief decision trees) are built to predict the lung nodules' semantic ratings. We show that the mean-based consensus generates the most robust classifier overall when compared to the median- and mode-based consensus. Lastly, the results of this study show that, when building CAD systems with uncertain diagnostic interpretation, it is important to evaluate different strategies for encoding and predicting the diagnostic label.
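For a single nodule, the three consensus rules being compared reduce to simple statistics over the radiologists' ratings, as in this small sketch with made-up ratings:

```python
import statistics

# Sketch of the three consensus rules applied to one nodule's malignancy
# ratings (1 = most likely benign ... 5 = most likely malignant); the ratings
# below are illustrative, not taken from the LIDC.
ratings = [2, 3, 3, 5]

consensus = {
    "mean":   statistics.mean(ratings),     # may be non-integer; often rounded for a label
    "median": statistics.median(ratings),
    "mode":   statistics.mode(ratings),
}
print(consensus)   # {'mean': 3.25, 'median': 3.0, 'mode': 3}
```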
Assessing operating characteristics of CAD algorithms in the absence of a gold standard
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roy Choudhury, Kingshuk; Paik, David S.; Yi, Chin A.
2010-04-15
Purpose: The authors examine potential bias when using a reference reader panel as a "gold standard" for estimating the operating characteristics of CAD algorithms for detecting lesions. As an alternative, the authors propose latent class analysis (LCA), which does not require an external gold standard to evaluate diagnostic accuracy. Methods: A binomial model for multiple reader detections using different diagnostic protocols was constructed, assuming conditional independence of readings given true lesion status. Operating characteristics of all protocols were estimated by maximum likelihood LCA. Reader panel and LCA based estimates were compared using data simulated from the binomial model for a range of operating characteristics. LCA was applied to 36 thin-section thoracic computed tomography data sets from the Lung Image Database Consortium (LIDC): free-search markings of four radiologists were compared to markings from four different CAD-assisted radiologists. For real data, bootstrap-based resampling methods, which accommodate dependence in reader detections, are proposed to test hypotheses of differences between detection protocols. Results: In simulation studies, reader panel based sensitivity estimates had an average relative bias (ARB) of -23% to -27%, significantly higher (p-value <0.0001) than LCA (ARB -2% to -6%). Specificity was well estimated by both the reader panel (ARB -0.6% to -0.5%) and LCA (ARB 1.4%-0.5%). Among the 1145 lesion candidates considered by the LIDC, the LCA-estimated sensitivity of reference readers (55%) was significantly lower (p-value 0.006) than that of CAD-assisted readers (68%). The average number of false positives per patient for reference readers (0.95) was not significantly lower (p-value 0.28) than that of CAD-assisted readers (1.27). Conclusions: Whereas a gold standard based on a consensus of readers may substantially bias sensitivity estimates, LCA may be a significantly more accurate and consistent means for evaluating diagnostic accuracy.
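A minimal sketch of latent class analysis under the conditional-independence assumption is shown below: an EM loop alternates between computing the posterior probability that each candidate is a true lesion and re-estimating prevalence, per-protocol sensitivity, and specificity. The simulated detections are placeholders, not the LIDC readings, and the model is deliberately simpler than the one in the paper.

```python
import numpy as np

# Latent class analysis (LCA) via EM: each candidate has an unobserved true
# status; each reading protocol has its own sensitivity and specificity, and
# detections are conditionally independent given the true status.
rng = np.random.default_rng(0)
n, prevalence = 1000, 0.3
true_se = np.array([0.55, 0.60, 0.70, 0.68])      # per-protocol sensitivities
true_sp = np.array([0.95, 0.93, 0.90, 0.92])      # per-protocol specificities
status = rng.random(n) < prevalence
Y = np.where(status[:, None],
             rng.random((n, 4)) < true_se,
             rng.random((n, 4)) > true_sp).astype(float)

pi, se, sp = 0.5, np.full(4, 0.7), np.full(4, 0.8)   # initial guesses
for _ in range(200):
    # E-step: posterior probability that each candidate is a true lesion.
    like_pos = pi * np.prod(se ** Y * (1 - se) ** (1 - Y), axis=1)
    like_neg = (1 - pi) * np.prod((1 - sp) ** Y * sp ** (1 - Y), axis=1)
    post = like_pos / (like_pos + like_neg)
    # M-step: maximum-likelihood updates of prevalence, sensitivity, specificity.
    pi = post.mean()
    se = (post[:, None] * Y).sum(axis=0) / post.sum()
    sp = ((1 - post)[:, None] * (1 - Y)).sum(axis=0) / (1 - post).sum()

print("estimated sensitivities:", np.round(se, 2))
print("estimated specificities:", np.round(sp, 2))
```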
A hybrid CNN feature model for pulmonary nodule malignancy risk differentiation.
Wang, Huafeng; Zhao, Tingting; Li, Lihong Connie; Pan, Haixia; Liu, Wanquan; Gao, Haoqi; Han, Fangfang; Wang, Yuehai; Qi, Yifan; Liang, Zhengrong
2018-01-01
The malignancy risk differentiation of pulmonary nodules is one of the most challenging tasks of computer-aided diagnosis (CADx). Most recently reported CADx methods or schemes based on texture and shape estimation have shown relatively satisfactory performance in differentiating the risk level of malignancy among nodules detected in lung cancer screening. However, the existing CADx schemes tend to detect and analyze characteristics of pulmonary nodules from a statistical perspective according to local features only. Inspired by the currently prevailing learning ability of convolutional neural networks (CNN), which simulate the human neural network for target recognition, and by our previous research on texture features, we present a hybrid model that takes into consideration both global and local features for pulmonary nodule differentiation, using the largest public database, founded by the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI). By comparing three types of CNN models, two of which were newly proposed by us, we observed that the multi-channel CNN model yielded the best capacity for differentiating the malignancy risk of nodules based on the projection of distributions of extracted features. Moreover, the CADx scheme using the new multi-channel CNN model outperformed our previously developed CADx scheme using 3D texture feature analysis, increasing the computed area under the receiver operating characteristic curve (AUC) from 0.9441 to 0.9702.
Computer aided lung cancer diagnosis with deep learning algorithms
NASA Astrophysics Data System (ADS)
Sun, Wenqing; Zheng, Bin; Qian, Wei
2016-03-01
Deep learning is considered a popular and powerful method in pattern recognition and classification. However, there are not many deep structured applications in the medical imaging diagnosis area, because large datasets are not always available for medical images. In this study we tested the feasibility of using deep learning algorithms for lung cancer diagnosis with cases from the Lung Image Database Consortium (LIDC) database. The nodules on each computed tomography (CT) slice were segmented according to marks provided by the radiologists. After downsampling and rotating, we acquired 174,412 samples of 52 by 52 pixels each and the corresponding truth files. Three deep learning algorithms were designed and implemented: a Convolutional Neural Network (CNN), Deep Belief Networks (DBNs), and a Stacked Denoising Autoencoder (SDAE). To compare the performance of the deep learning algorithms with a traditional computer-aided diagnosis (CADx) system, we designed a scheme with 28 image features and a support vector machine. The accuracies of CNN, DBNs, and SDAE are 0.7976, 0.8119, and 0.7929, respectively; the accuracy of our designed traditional CADx is 0.7940, which is slightly lower than CNN and DBNs. We also noticed that the nodules mislabeled by the DBNs are 4% larger than those mislabeled by the traditional CADx, which might result from the downsampling process losing some size information about the nodules.
3D multi-view convolutional neural networks for lung nodule classification
Kang, Guixia; Hou, Beibei; Zhang, Ningbo
2017-01-01
The 3D convolutional neural network (CNN) is able to make full use of the spatial 3D context information of lung nodules, and the multi-view strategy has been shown to be useful for improving the performance of 2D CNN in classifying lung nodules. In this paper, we explore the classification of lung nodules using the 3D multi-view convolutional neural networks (MV-CNN) with both chain architecture and directed acyclic graph architecture, including 3D Inception and 3D Inception-ResNet. All networks employ the multi-view-one-network strategy. We conduct a binary classification (benign and malignant) and a ternary classification (benign, primary malignant and metastatic malignant) on Computed Tomography (CT) images from Lung Image Database Consortium and Image Database Resource Initiative database (LIDC-IDRI). All results are obtained via 10-fold cross validation. As regards the MV-CNN with chain architecture, results show that the performance of 3D MV-CNN surpasses that of 2D MV-CNN by a significant margin. Finally, a 3D Inception network achieved an error rate of 4.59% for the binary classification and 7.70% for the ternary classification, both of which represent superior results for the corresponding task. We compare the multi-view-one-network strategy with the one-view-one-network strategy. The results reveal that the multi-view-one-network strategy can achieve a lower error rate than the one-view-one-network strategy. PMID:29145492
NASA Astrophysics Data System (ADS)
Ramachandran S., Sindhu; George, Jose; Skaria, Shibon; V. V., Varun
2018-02-01
Lung cancer is the leading cause of cancer-related deaths in the world. The survival rate can be improved if the presence of lung nodules is detected early. This has also led to more focus being given to computer-aided detection (CAD) and diagnosis of lung nodules. The arbitrariness of the shape, size, and texture of lung nodules is a challenge to be faced when developing these detection systems. In the proposed work we use convolutional neural networks to learn the features for nodule detection, replacing the traditional method of handcrafting features such as geometric shape or texture. Our network uses the DetectNet architecture based on YOLO (You Only Look Once) to detect nodules in CT scans of the lung. In this architecture, object detection is treated as a regression problem, with a single convolutional network simultaneously predicting multiple bounding boxes and class probabilities for those boxes. By training on chest CT scans from the Lung Image Database Consortium (LIDC) using NVIDIA DIGITS and the Caffe deep learning framework, we show that nodule detection using this single neural network can result in reasonably low false positive rates with high sensitivity and precision.
A novel computer-aided detection system for pulmonary nodule identification in CT images
NASA Astrophysics Data System (ADS)
Han, Hao; Li, Lihong; Wang, Huafeng; Zhang, Hao; Moore, William; Liang, Zhengrong
2014-03-01
Computer-aided detection (CADe) of pulmonary nodules from computed tomography (CT) scans is critical for assisting radiologists to identify lung lesions at an early stage. In this paper, we propose a novel approach for CADe of lung nodules using a two-stage vector quantization (VQ) scheme. The first-stage VQ aims to extract the lung from the chest volume, while the second-stage VQ is designed to extract initial nodule candidates (INCs) within the lung volume. Rule-based expert filtering is then employed to prune obvious false positives (FPs) from the INCs, and the commonly used support vector machine (SVM) classifier is adopted to further reduce the FPs. The proposed system was validated on 100 CT scans randomly selected from the 262 scans that have at least one juxta-pleural nodule annotation in the publicly available Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) database. The two-stage VQ missed only 2 of the 207 nodules at agreement level 1, and INC detection for each scan took about 30 seconds on average. Expert filtering reduced FPs by more than a factor of 18 while maintaining a sensitivity of 93.24%. Because it is trivial to distinguish INCs attached to the pleural wall from those that are not, we investigated the feasibility of training different SVM classifiers to further reduce FPs from these two kinds of INCs. Experimental results indicated that SVM classification over the entire set of INCs was preferable; at its optimal operating point, our CADe system achieved a sensitivity of 89.4% at a specificity of 86.8%.
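The two-stage VQ idea can be illustrated with plain k-means quantization of voxel intensities, as in the sketch below; the toy volume, intensity values, and cluster counts are assumptions chosen for clarity and do not reproduce the authors' VQ design.

```python
import numpy as np
from sklearn.cluster import KMeans

# Two-stage vector quantization sketch: stage 1 separates the low-density
# lung/air class from the body; stage 2 re-quantizes the lung voxels only so
# denser structures emerge as initial nodule candidates (INCs).
rng = np.random.default_rng(0)
volume = rng.normal(-900, 15, size=(32, 64, 64))                   # air-like lung field (HU)
volume[:, 20:44, 20:44] = rng.normal(40, 20, size=(32, 24, 24))    # soft-tissue "body"
volume[12:20, 5:13, 5:13] = rng.normal(-600, 10, size=(8, 8, 8))   # a sub-solid candidate

# Stage 1: two intensity classes; the cluster with the lower mean ~ lung/air.
stage1 = KMeans(n_clusters=2, n_init=10, random_state=0).fit(volume.reshape(-1, 1))
lung_label = int(np.argmin(stage1.cluster_centers_))
lung_mask = (stage1.labels_ == lung_label).reshape(volume.shape)

# Stage 2: quantize lung voxels only; the higher-mean cluster gives the INCs.
stage2 = KMeans(n_clusters=2, n_init=10, random_state=0).fit(volume[lung_mask].reshape(-1, 1))
inc_label = int(np.argmax(stage2.cluster_centers_))
print("initial nodule-candidate voxels:", int((stage2.labels_ == inc_label).sum()))
```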
[Development of a digital chest phantom for studies on energy subtraction techniques].
Hayashi, Norio; Taniguchi, Anna; Noto, Kimiya; Shimosegawa, Masayuki; Ogura, Toshihiro; Doi, Kunio
2014-03-01
Digital chest phantoms continue to play a significant role in optimizing imaging parameters for chest X-ray examinations. The purpose of this study was to develop a digital chest phantom for studies on energy subtraction techniques under ideal conditions without image noise. Computed tomography (CT) images from the LIDC (Lung Image Database Consortium) were employed to develop a digital chest phantom. The method consisted of the following four steps: 1) segmentation of the lung and bone regions on CT images; 2) creation of simulated nodules; 3) transformation to attenuation coefficient maps from the segmented images; and 4) projection from attenuation coefficient maps. To evaluate the usefulness of digital chest phantoms, we determined the contrast of the simulated nodules in projection images of the digital chest phantom using high and low X-ray energies, soft tissue images obtained by energy subtraction, and "gold standard" images of the soft tissues. Using our method, the lung and bone regions were segmented on the original CT images. The contrast of simulated nodules in soft tissue images obtained by energy subtraction closely matched that obtained using the gold standard images. We thus conclude that it is possible to carry out simulation studies based on energy subtraction techniques using the created digital chest phantoms. Our method is potentially useful for performing simulation studies for optimizing the imaging parameters in chest X-ray examinations.
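Steps 3 and 4 of the phantom pipeline (attenuation-coefficient maps and their projection), together with a weighted log subtraction that cancels the bone signal, are sketched below with hypothetical attenuation values; the real phantom uses segmented LIDC anatomy rather than simple geometric masks.

```python
import numpy as np

# Noise-free dual-energy sketch: build attenuation maps for soft tissue and
# bone at two energies, project along the beam axis (Beer-Lambert), then use
# a weighted log subtraction chosen to cancel the bone contribution.
soft = np.zeros((64, 64, 64)); soft[16:48, 16:48, 16:48] = 1.0   # soft-tissue mask
bone = np.zeros((64, 64, 64)); bone[28:36, 20:44, 30:34] = 1.0   # a "rib"

mu = {  # hypothetical linear attenuation coefficients per voxel
    "low":  {"soft": 0.025, "bone": 0.090},
    "high": {"soft": 0.018, "bone": 0.045},
}

def project(energy):
    """Line integral along the beam (axis 0) and Beer-Lambert transmission."""
    path = mu[energy]["soft"] * soft + mu[energy]["bone"] * bone
    return np.exp(-path.sum(axis=0))

low, high = project("low"), project("high")

# Weight chosen so the bone term cancels: mu_high_bone - w * mu_low_bone = 0.
w = mu["high"]["bone"] / mu["low"]["bone"]
soft_tissue_image = -np.log(high) + w * np.log(low)
print("bone residual (behind bone vs soft-only pixel):",
      round(float(soft_tissue_image[32, 31] - soft_tissue_image[32, 20]), 6))
```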
Building confidence and credibility into CAD with belief decision trees
NASA Astrophysics Data System (ADS)
Affenit, Rachael N.; Barns, Erik R.; Furst, Jacob D.; Rasin, Alexander; Raicu, Daniela S.
2017-03-01
Creating classifiers for computer-aided diagnosis in the absence of ground truth is a challenging problem. Using experts' opinions as reference truth is difficult because the variability in the experts' interpretations introduces uncertainty in the labeled diagnostic data. This uncertainty translates into noise, which can significantly affect the performance of any classifier on test data. To address this problem, we propose a new label set weighting approach to combine the experts' interpretations and their variability, as well as a selective iterative classification (SIC) approach based on conformal prediction. Using the NIH/NCI Lung Image Database Consortium (LIDC) dataset, in which four radiologists interpreted the lung nodule characteristics, including the degree of malignancy, we illustrate the benefits of the proposed approach. Our results show that the proposed 2-label-weighted approach significantly outperforms the accuracy of the original 5-label and 2-label-unweighted classification approaches by 39.9% and 7.6%, respectively. We also found that the weighted 2-label models increase the skewness of the root mean square error (RMSE) distributions by 1.05 and 0.61 for non-SIC and SIC, respectively. When each approach was combined with selective iterative classification, the accuracy of the 2-weighted-label classification further improved by 7.5% over the original, and the skewness of the 5-label and 2-unweighted-label approaches improved by 0.22 and 0.44, respectively.
Sun, Wenqing; Zheng, Bin; Qian, Wei
2017-10-01
This study aimed to analyze the ability of automatically generated features extracted by deep structured algorithms to support lung nodule CT image diagnosis, and to compare their performance with traditional computer-aided diagnosis (CADx) systems using hand-crafted features. All 1018 cases were acquired from the Lung Image Database Consortium (LIDC) public lung cancer database. The nodules were segmented according to four radiologists' markings, and 13,668 samples were generated by rotating every slice of the nodule images. Three multichannel ROI-based deep structured algorithms were designed and implemented in this study: a convolutional neural network (CNN), a deep belief network (DBN), and a stacked denoising autoencoder (SDAE). For comparison purposes, we also implemented a CADx system using hand-crafted features including density features, texture features, and morphological features. The performance of every scheme was evaluated using a 10-fold cross-validation method and an assessment index of the area under the receiver operating characteristic curve (AUC). The highest observed AUC was 0.899 ± 0.018, achieved by the CNN, which was significantly higher than the traditional CADx with AUC = 0.848 ± 0.026. The result from the DBN was also slightly higher than the CADx, while the SDAE was slightly lower. By visualizing the automatically generated features, we found some meaningful detectors, such as curvy stroke detectors, from the deep structured schemes. The study results showed that deep structured algorithms with automatically generated features can achieve desirable performance in lung nodule diagnosis. With well-tuned parameters and a large enough dataset, deep learning algorithms can outperform the current popular CADx. We believe that deep learning algorithms with a similar data preprocessing procedure can be used in other medical image analysis areas as well. Copyright © 2017. Published by Elsevier Ltd.
Automatic segmentation of tumor-laden lung volumes from the LIDC database
NASA Astrophysics Data System (ADS)
O'Dell, Walter G.
2012-03-01
The segmentation of the lung parenchyma is often a critical pre-processing step prior to application of computer-aided detection of lung nodules. Segmentation of the lung volume can dramatically decrease computation time and reduce the number of false positive detections by excluding extra-pulmonary tissue from consideration. However, while many algorithms are capable of adequately segmenting the healthy lung, none have been demonstrated to work reliably well on tumor-laden lungs. A particular challenge is to preserve tumorous masses attached to the chest wall, mediastinum, or major vessels. In this role, lung volume segmentation comprises an important computational step that can adversely affect the performance of the overall CAD algorithm. An automated lung volume segmentation algorithm has been developed with the goal of maximally excluding extra-pulmonary tissue while retaining all true nodules. The algorithm comprises a series of tasks including intensity thresholding, 2-D and 3-D morphological operations, 2-D and 3-D floodfilling, and snake-based clipping of nodules attached to the chest wall. It features the ability to (1) exclude the trachea and bowels, (2) snip large attached nodules using snakes, (3) snip small attached nodules using dilation, (4) preserve large masses fully internal to the lung volume, (5) account for basal aspects of the lung, where in a 2-D slice the lower sections appear to be disconnected from the main lung, and (6) achieve separation of the right and left hemi-lungs. The algorithm was developed and trained on the first 100 datasets of the LIDC image database.
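A greatly simplified version of the early steps (intensity thresholding, connected-component selection, and hole filling to re-include dense nodules) is sketched below; the snake-based clipping and hemi-lung separation described above are not reproduced, and the synthetic volume and threshold are illustrative.

```python
import numpy as np
from scipy import ndimage

# Toy lung-volume segmentation: threshold air-like voxels, keep the two
# largest connected components, and fill internal holes so that dense nodules
# fully inside the lung are retained in the mask.
volume = np.full((32, 64, 64), 40.0)                 # soft tissue everywhere
volume[:, 8:28, 8:56] = -850.0                       # "left" lung field
volume[:, 36:56, 8:56] = -850.0                      # "right" lung field
volume[10:16, 14:20, 20:26] = 30.0                   # a nodule inside the left lung

air = volume < -400.0                                # 1) intensity thresholding
labels, n = ndimage.label(air)                       # 2) connected components
sizes = ndimage.sum(air, labels, index=range(1, n + 1))
keep = 1 + np.argsort(sizes)[-2:]                    # two largest = the two lungs
lung = np.isin(labels, keep)
lung = ndimage.binary_fill_holes(lung)               # 3) re-include dense nodules
print("lung voxels:", int(lung.sum()), "| nodule preserved:", bool(lung[12, 16, 22]))
```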
New reversing design method for LED uniform illumination.
Wang, Kai; Wu, Dan; Qin, Zong; Chen, Fei; Luo, Xiaobing; Liu, Sheng
2011-07-04
In light-emitting diode (LED) applications, how to optimize the light intensity distribution curve (LIDC) and design the corresponding optical component to achieve uniform illumination for a given distance-height ratio (DHR) is becoming a significant issue. A new reversing design method is proposed to solve this problem, including design and optimization of the LIDC to achieve highly uniform illumination and a new freeform lens algorithm to generate the required LIDC from an LED light source. Using this method, two new LED modules integrated with freeform lenses were successfully designed for slim direct-lit LED backlighting with a thickness of 10 mm, and the uniformities of illuminance increase from 0.446 to 0.915 and from 0.155 to 0.887 when the DHRs are 2 and 3, respectively. Moreover, the number of new LED modules dramatically decreases to 1/9 of that of traditional LED modules while achieving similar illumination uniformity in backlighting. Therefore, this new method provides a practical and simple way to perform optical design for LED uniform illumination when the DHR is much larger than 1.
Osman, Onur; Ucan, Osman N.
2008-01-01
Objective The purpose of this study was to develop a new method for automated lung nodule detection in serial section CT images, using the characteristics of the 3D appearance of nodules that distinguish them from vessels. Materials and Methods Lung nodules were detected in four steps. First, to reduce the number of regions of interest (ROIs) and the computation time, the lung regions of the CTs were segmented using Genetic Cellular Neural Networks (G-CNN). Then, for each lung region, ROIs were specified using an 8-directional search; +1 or -1 values were assigned to each voxel. The 3D ROI image was obtained by combining all the two-dimensional (2D) ROI images. A 3D template was created to find nodule-like structures in the 3D ROI image. Convolution of the 3D ROI image with the proposed template strengthens shapes that are similar to the template and weakens the others. Finally, fuzzy rule-based thresholding was applied and the ROIs were found. To test the system's efficiency, we used 16 cases with a total of 425 slices, which were taken from the Lung Image Database Consortium (LIDC) dataset. Results The computer-aided diagnosis (CAD) system achieved 100% sensitivity with 13.375 FPs per case when the nodule thickness was greater than or equal to 5.625 mm. Conclusion Our results indicate that the detection performance of our algorithm is satisfactory, and this may well improve the performance of computer-aided detection of lung nodules. PMID:18253070
Atomic and Molecular Databases, VAMDC (Virtual Atomic and Molecular Data Centre)
NASA Astrophysics Data System (ADS)
Dubernet, Marie-Lise; Zwölf, Carlo Maria; Moreau, Nicolas; Awa Ba, Yaya; VAMDC Consortium
2015-08-01
The "Virtual Atomic and Molecular Data Centre Consortium",(VAMDC Consortium, http://www.vamdc.eu) is a Consortium bound by an Memorandum of Understanding aiming at ensuring the sustainability of the VAMDC e-infrastructure. The current VAMDC e-infrastructure inter-connects about 30 atomic and molecular databases with the number of connected databases increasing every year: some databases are well-known databases such as CDMS, JPL, HITRAN, VALD,.., other databases have been created since the start of VAMDC. About 90% of our databases are used for astrophysical applications. The data can be queried, retrieved, visualized in a single format from a general portal (http://portal.vamdc.eu) and VAMDC is also developing standalone tools in order to retrieve and handle the data. VAMDC provides software and support in order to include databases within the VAMDC e-infrastructure. One current feature of VAMDC is the constrained environnement of description of data that ensures a higher quality for distribution of data; a future feature is the link of VAMDC with evaluation/validation groups. The talk will present the VAMDC Consortium and the VAMDC e infrastructure with its underlying technology, its services, its science use cases and its etension towards other communities than the academic research community.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowers, M; Robertson, S; Moore, J
Purpose: Advancement in Radiation Oncology (RO) practice develops through evidence-based medicine and clinical trials. Knowledge usable for treatment planning, decision support, and research is contained in our clinical data, stored in an Oncospace database. This data store and the tools for populating and analyzing it are compatible with standard RO practice and are shared with collaborating institutions. The question is: what protocol should govern system development and data sharing within an Oncospace Consortium? We focus our example on the technology and data semantics necessary to share across the Consortium. Methods: Oncospace consists of a database schema, planning and outcome data import, and web-based analysis tools. 1) Database: The Consortium implements a federated data store; each member collects and maintains its own data within an Oncospace schema. For privacy, PHI is contained within a single table, accessible only to the database owner. 2) Import: Spatial dose data from treatment plans (Pinnacle or DICOM) is imported via Oncolink. Treatment outcomes are imported from an OIS (MOSAIQ). 3) Analysis: JHU has built a number of web pages to answer analysis questions. Oncospace data can also be analyzed via MATLAB or SAS queries. These materials are available to Consortium members, who contribute enhancements and improvements. Results: 1) The Oncospace Consortium now consists of RO centers at JHU, UVA, UW, and the University of Toronto. These members have successfully installed and populated Oncospace databases with over 1000 patients collectively. 2) Members contribute code and get updates via an SVN repository. Errors are reported and tracked via Redmine. Teleconferences include design strategy and code reviews. 3) Federated databases were successfully queried remotely to combine multiple institutions' DVH data for dose-toxicity analysis (data combined from JHU and UW Oncospace). Conclusion: RO data sharing can be, and has been, effected according to the Oncospace Consortium model: http://oncospace.radonc.jhmi.edu/. John Wong - SRA from Elekta; Todd McNutt - SRA from Elekta; Michael Bowers - funded by Elekta.
Shen, S C; Li, J S; Huang, M C
2014-06-02
Fourier series and an energy mapping method were used in this study to design a lens that produces a light pattern of multiple concentric circles (LPMCC) for a light-emitting diode (LED) fishing lamp. Fourier series were used to represent the light intensity distribution curve (LIDC) of the LPMCC light pattern. Energy mapping involves performing angular energy mapping based on the LIDCs of an LED light source and the LPMCC to design a freeform lens. Type I and Type II LPMCC lenses were designed according to the phototactic behavior of fish to create an LPMCC light pattern of interleaving light and dark zones that attracts fish shoals to stay in an area for a long period. The experimental results indicated that, when the measured LIDCs of the Type I and II lenses were compared with the respective simulated values, the normalized cross-correlation (NCC) value reached 96%. According to a 24-hour observation of the phototaxis of Poecilia reticulata, carried out to evaluate how effectively the proposed light pattern attracts fish, when a fish shoal was habituated to a light source emitting constant illumination it gradually moved away from the intense light zone and hovered around the junction of the light and dark zones. In the future, the design used in this study can be applied to LED fishing lamps to replace traditional fishing lamps.
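The reported 96% agreement is a normalized cross-correlation between measured and simulated LIDCs; a small sketch of that comparison is shown below, assuming both curves are sampled on the same angular grid. The example curves are invented for illustration and are not the Type I/II lens LIDCs from the paper.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two sampled LIDCs
    (intensity versus angle, sampled on the same angular grid)."""
    a = np.asarray(a, float) - np.mean(a)
    b = np.asarray(b, float) - np.mean(b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

theta = np.deg2rad(np.arange(-90, 91))
simulated = np.cos(theta) ** 2 * (1 + 0.3 * np.cos(6 * theta))   # made-up multi-ring LIDC
measured = simulated + np.random.normal(0, 0.02, theta.size)     # add measurement noise
print(f"NCC = {ncc(simulated, measured):.3f}")                   # close to 1 for a good match
```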
Heng, Daniel Y C; Xie, Wanling; Regan, Meredith M; Harshman, Lauren C; Bjarnason, Georg A; Vaishampayan, Ulka N; Mackenzie, Mary; Wood, Lori; Donskov, Frede; Tan, Min-Han; Rha, Sun-Young; Agarwal, Neeraj; Kollmannsberger, Christian; Rini, Brian I; Choueiri, Toni K
2014-01-01
Summary Background The International Metastatic Renal-Cell Carcinoma Database Consortium model offers prognostic information for patients with metastatic renal-cell carcinoma. We tested the accuracy of the model in an external population and compared it with other prognostic models. Methods We included patients with metastatic renal-cell carcinoma who were treated with first-line VEGF-targeted treatment at 13 international cancer centres and who were registered in the Consortium's database but had not contributed to the initial development of the Consortium Database model. The primary endpoint was overall survival. We compared the Database Consortium model with the Cleveland Clinic Foundation (CCF) model, the International Kidney Cancer Working Group (IKCWG) model, the French model, and the Memorial Sloan-Kettering Cancer Center (MSKCC) model by concordance indices and other measures of model fit. Findings Overall, 1028 patients were included in this study, of whom 849 had complete data to assess the Database Consortium model. Median overall survival was 18·8 months (95% CI 17·6–21·4). The predefined Database Consortium risk factors (anaemia, thrombocytosis, neutrophilia, hypercalcaemia, Karnofsky performance status <80%, and <1 year from diagnosis to treatment) were independent predictors of poor overall survival in the external validation set (hazard ratios ranged between 1·27 and 2·08; concordance index 0·71, 95% CI 0·68–0·73). When patients were segregated into three risk categories, median overall survival was 43·2 months (95% CI 31·4–50·1) in the favourable risk group (no risk factors; 157 patients), 22·5 months (18·7–25·1) in the intermediate risk group (one to two risk factors; 440 patients), and 7·8 months (6·5–9·7) in the poor risk group (three or more risk factors; 252 patients; p<0·0001; concordance index 0·664, 95% CI 0·639–0·689). 672 patients had complete data to test all five models. The concordance index of the CCF model was 0·662 (95% CI 0·636–0·687), of the French model 0·640 (0·614–0·665), of the IKCWG model 0·668 (0·645–0·692), and of the MSKCC model 0·657 (0·632–0·682). The reported versus predicted number of deaths at 2 years was most similar for the Database Consortium model compared with the other models. Interpretation The Database Consortium model is now externally validated and can be applied to stratify patients by risk in clinical trials and to counsel patients about prognosis. PMID:23312463
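The concordance index used throughout this validation is Harrell's c-index; a minimal sketch of its computation for right-censored survival data follows. The toy times, events, and risk scores are made up for illustration and are not taken from the Consortium database.

```python
import numpy as np

def concordance_index(time, event, risk):
    """Harrell's c-index for right-censored survival data.
    time  : observed follow-up time
    event : 1 if death observed, 0 if censored
    risk  : model risk score (higher = worse predicted prognosis)"""
    num, den = 0.0, 0.0
    n = len(time)
    for i in range(n):
        if not event[i]:
            continue                     # only subjects with an observed event anchor a pair
        for j in range(n):
            if time[j] > time[i]:        # j outlived i -> comparable pair
                den += 1
                if risk[i] > risk[j]:
                    num += 1             # model correctly ranks the pair
                elif risk[i] == risk[j]:
                    num += 0.5
    return num / den

# toy data: risk score = number of risk factors (0-5), higher risk -> shorter survival
time  = np.array([43.2, 50.1, 22.5, 18.7, 25.1, 7.8, 6.5, 9.7])
event = np.array([1, 0, 1, 1, 1, 1, 1, 1])
risk  = np.array([0, 0, 1, 2, 2, 3, 4, 5])
print(f"c-index = {concordance_index(time, event, risk):.3f}")
```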
NASA Astrophysics Data System (ADS)
Wiemker, Rafael; Bülow, Thomas; Blaffert, Thomas; Dharaiya, Ekta
2009-02-01
Presence of emphysema is recognized as one of the most significant risk factors in models for the prediction of lung cancer. Therefore, an automatically computed emphysema score would be a prime candidate as an additional numerical feature for computer-aided diagnosis (CADx) of indeterminate pulmonary nodules. We applied several histogram-based emphysema scores to 460 thoracic CT scans from the IDRI CT lung image database and analyzed the emphysema scores in conjunction with 3000 nodule malignancy ratings of 1232 pulmonary nodules made by expert observers. Although emphysema is a known risk factor, we did not find any impact of a higher emphysema score on the readers' malignancy ratings of nodules found in that patient. We also found no correlation between the number of expert-detected nodules in a patient and that patient's emphysema score, or between the relative craniocaudal location of the nodules and their malignancy rating. The inter-observer agreement of the expert ratings was excellent for nodule diameter (as derived from manual delineations), good for calcification, and only modest for malignancy and for shape descriptors such as spiculation, lobulation, margin, etc.
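One common histogram-based emphysema score is the percentage of lung voxels below a low-attenuation threshold (LAA%). The sketch below assumes a -950 HU cutoff, which is a conventional choice rather than necessarily one of the scores used in this study, and uses synthetic data in place of a real CT volume and lung mask.

```python
import numpy as np

def emphysema_score(ct_hu, lung_mask, threshold_hu=-950):
    """Percentage of lung voxels below an HU threshold (LAA%); the -950 HU
    cutoff here is an assumption, not taken from the paper."""
    lung_voxels = ct_hu[lung_mask]
    return 100.0 * np.mean(lung_voxels < threshold_hu)

# toy example: synthetic lung with roughly 15% emphysema-like voxels
rng = np.random.default_rng(0)
lung = rng.normal(-860, 40, size=100_000)            # normal-ish lung parenchyma
lung[:15_000] = rng.normal(-975, 15, size=15_000)    # low-attenuation (emphysematous) voxels
mask = np.ones(lung.size, dtype=bool)
print(f"LAA%-950 = {emphysema_score(lung, mask):.1f}%")
```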
NASA Astrophysics Data System (ADS)
Pathak, S. K.; Deshpande, N. J.
2007-10-01
The present status of the INDEST Consortium among engineering, science and technology (including astronomy and astrophysics) libraries in India is discussed. The Indian National Digital Library in Engineering Sciences & Technology (INDEST) Consortium is a major initiative of the Ministry of Human Resource Development, Government of India. The INDEST Consortium provides access to 16 full-text e-resources and 7 bibliographic databases for 166 member institutions, which benefit from cost-effective access to premier resources in engineering, science and technology, including astronomy and astrophysics. Member institutions can access over 6500 e-journals from 1092 publishers. Of these, over 150 e-journals are exclusively for the astronomy and physics community. The current study also presents a comparative analysis of the key features of nine major services, viz. ACM Digital Library, ASCE Journals, ASME Journals, EBSCO Databases (Business Source Premier), Elsevier's ScienceDirect, Emerald Full Text, IEEE/IEE Electronic Library Online (IEL), ProQuest ABI/INFORM and Springer Verlag's Link. The limitations of this consortium are also discussed.
[Activity of NTDs Drug-discovery Research Consortium].
Namatame, Ichiji
2016-01-01
Neglected tropical diseases (NTDs) are an extremely important issue facing global health care. To improve "access to health" where people are unable to obtain adequate medical care owing to poverty and weak healthcare systems, we have established two consortia: the NTD drug discovery research consortium and the pediatric praziquantel consortium. The NTD drug discovery research consortium, which involves six institutions from industry, government, and academia, as well as an international non-profit organization, is committed to developing anti-protozoan active compounds for three NTDs (leishmaniasis, Chagas disease, and African sleeping sickness). Each participating institute contributes to the following efforts: selection of drug targets based on information technology, and drug discovery by three different approaches (in silico drug discovery, "fragment evolution", a unique drug design method of Astellas Pharma, and phenotypic screening with Astellas' compound library). The consortium has established a new database (Integrated Neglected Tropical Disease Database; iNTRODB) and has selected target proteins for the in silico and fragment-evolution drug discovery approaches. Thus far, we have identified a number of promising compounds that inhibit the target protein, and we are currently trying to improve the anti-protozoan activity of these compounds. The pediatric praziquantel consortium was founded in July 2012 to develop and register a new pediatric praziquantel formulation for the treatment of schistosomiasis. Astellas Pharma has been a core member of this consortium since its establishment and has provided expertise and technology in pediatric formulation development and clinical development.
The Chicago Thoracic Oncology Database Consortium: A Multisite Database Initiative
Carey, George B; Tan, Yi-Hung Carol; Bokhary, Ujala; Itkonen, Michelle; Szeto, Kyle; Wallace, James; Campbell, Nicholas; Hensing, Thomas; Salgia, Ravi
2016-01-01
Objective: An increasing amount of clinical data is available to biomedical researchers, but specifically designed database and informatics infrastructures are needed to handle this data effectively. Multiple research groups should be able to pool and share this data in an efficient manner. The Chicago Thoracic Oncology Database Consortium (CTODC) was created to standardize data collection and facilitate the pooling and sharing of data at institutions throughout Chicago and across the world. We assessed the CTODC by conducting a proof of principle investigation on lung cancer patients who took erlotinib. This study does not look into epidermal growth factor receptor (EGFR) mutations and tyrosine kinase inhibitors, but rather it discusses the development and utilization of the database involved. Methods: We have implemented the Thoracic Oncology Program Database Project (TOPDP) Microsoft Access, the Thoracic Oncology Research Program (TORP) Velos, and the TORP REDCap databases for translational research efforts. Standard operating procedures (SOPs) were created to document the construction and proper utilization of these databases. These SOPs have been made available freely to other institutions that have implemented their own databases patterned on these SOPs. Results: A cohort of 373 lung cancer patients who took erlotinib was identified. The EGFR mutation statuses of patients were analyzed. Out of the 70 patients that were tested, 55 had mutations while 15 did not. In terms of overall survival and duration of treatment, the cohort demonstrated that EGFR-mutated patients had a longer duration of erlotinib treatment and longer overall survival compared to their EGFR wild-type counterparts who received erlotinib. Discussion: The investigation successfully yielded data from all institutions of the CTODC. While the investigation identified challenges, such as the difficulty of data transfer and potential duplication of patient data, these issues can be resolved with greater cross-communication between institutions of the consortium. Conclusion: The investigation described herein demonstrates the successful data collection from multiple institutions in the context of a collaborative effort. The data presented here can be utilized as the basis for further collaborative efforts and/or development of larger and more streamlined databases within the consortium. PMID:27092293
The Chicago Thoracic Oncology Database Consortium: A Multisite Database Initiative.
Won, Brian; Carey, George B; Tan, Yi-Hung Carol; Bokhary, Ujala; Itkonen, Michelle; Szeto, Kyle; Wallace, James; Campbell, Nicholas; Hensing, Thomas; Salgia, Ravi
2016-03-16
An increasing amount of clinical data is available to biomedical researchers, but specifically designed database and informatics infrastructures are needed to handle this data effectively. Multiple research groups should be able to pool and share this data in an efficient manner. The Chicago Thoracic Oncology Database Consortium (CTODC) was created to standardize data collection and facilitate the pooling and sharing of data at institutions throughout Chicago and across the world. We assessed the CTODC by conducting a proof of principle investigation on lung cancer patients who took erlotinib. This study does not look into epidermal growth factor receptor (EGFR) mutations and tyrosine kinase inhibitors, but rather it discusses the development and utilization of the database involved. We have implemented the Thoracic Oncology Program Database Project (TOPDP) Microsoft Access, the Thoracic Oncology Research Program (TORP) Velos, and the TORP REDCap databases for translational research efforts. Standard operating procedures (SOPs) were created to document the construction and proper utilization of these databases. These SOPs have been made available freely to other institutions that have implemented their own databases patterned on these SOPs. A cohort of 373 lung cancer patients who took erlotinib was identified. The EGFR mutation statuses of patients were analyzed. Out of the 70 patients that were tested, 55 had mutations while 15 did not. In terms of overall survival and duration of treatment, the cohort demonstrated that EGFR-mutated patients had a longer duration of erlotinib treatment and longer overall survival compared to their EGFR wild-type counterparts who received erlotinib. The investigation successfully yielded data from all institutions of the CTODC. While the investigation identified challenges, such as the difficulty of data transfer and potential duplication of patient data, these issues can be resolved with greater cross-communication between institutions of the consortium. The investigation described herein demonstrates the successful data collection from multiple institutions in the context of a collaborative effort. The data presented here can be utilized as the basis for further collaborative efforts and/or development of larger and more streamlined databases within the consortium.
Wain, Karen E; Riggs, Erin; Hanson, Karen; Savage, Melissa; Riethmaier, Darlene; Muirhead, Andrea; Mitchell, Elyse; Packard, Bethanny Smith; Faucett, W Andrew
2012-10-01
The International Standards for Cytogenomic Arrays (ISCA) Consortium is a worldwide collaborative effort dedicated to optimizing patient care by improving the quality of chromosomal microarray testing. The primary effort of the ISCA Consortium has been the development of a database of copy number variants (CNVs) identified during the course of clinical microarray testing. This database is a powerful resource for clinicians, laboratories, and researchers, and can be utilized for a variety of applications, such as facilitating standardized interpretations of certain CNVs across laboratories or providing phenotypic information for counseling purposes when published data is sparse. A recognized limitation to the clinical utility of this database, however, is the quality of clinical information available for each patient. Clinical genetic counselors are uniquely suited to facilitate the communication of this information to the laboratory by virtue of their existing clinical responsibilities, case management skills, and appreciation of the evolving nature of scientific knowledge. We intend to highlight the critical role that genetic counselors play in ensuring optimal patient care through contributing to the clinical utility of the ISCA Consortium's database, as well as the quality of individual patient microarray reports provided by contributing laboratories. Current tools, paper and electronic forms, created to maximize this collaboration are shared. In addition to making a professional commitment to providing complete clinical information, genetic counselors are invited to become ISCA members and to become involved in the discussions and initiatives within the Consortium.
ERIC Educational Resources Information Center
Painter, Derrick
1996-01-01
Discussion of dictionaries as databases focuses on the digitizing of The Oxford English dictionary (OED) and the use of Standard Generalized Mark-Up Language (SGML). Topics include the creation of a consortium to digitize the OED, document structure, relational databases, text forms, sequence, and discourse. (LRW)
Martin, Tiphaine; Sherman, David J; Durrens, Pascal
2011-01-01
The Génolevures online database (URL: http://www.genolevures.org) stores and provides the data and results obtained by the Génolevures Consortium through several campaigns of genome annotation of yeasts in the Saccharomycotina subphylum (hemiascomycetes). This database is dedicated to large-scale comparison of these genomes, storing not only the different chromosomal elements detected in the sequences but also the logical relations between them. The database is divided into a public part, accessible to anyone through the Internet, and a private part where Consortium members make genome annotations with our Magus annotation system; this system is used to annotate several related genomes in parallel. The public database is widely consulted and offers structured data, organized using a REST web-site architecture that allows automated requests. The implementation of the database, as well as its associated tools and methods, is evolving to cope with the influx of genome sequences produced by Next Generation Sequencing (NGS). Copyright © 2011 Académie des sciences. Published by Elsevier SAS. All rights reserved.
Cserhati, Matyas F.; Pandey, Sanjit; Beaudoin, James J.; Baccaglini, Lorena; Guda, Chittibabu; Fox, Howard S.
2015-01-01
We herein present the National NeuroAIDS Tissue Consortium-Data Coordinating Center (NNTC-DCC) database, which is the only available database for neuroAIDS studies that contains data in an integrated, standardized form. This database has been created in conjunction with the NNTC, which provides human tissue and biofluid samples to individual researchers to conduct studies focused on neuroAIDS. The database contains experimental datasets from 1206 subjects for the following categories (which are further broken down into subcategories): gene expression, genotype, proteins, endo-exo-chemicals, morphometrics and other (miscellaneous) data. The database also contains a wide variety of downloadable data and metadata for 95 HIV-related studies covering 170 assays from 61 principal investigators. The data represent 76 tissue types, 25 measurement types, and 38 technology types, and reaches a total of 33 017 407 data points. We used the ISA platform to create the database and develop a searchable web interface for querying the data. A gene search tool is also available, which searches for NCBI GEO datasets associated with selected genes. The database is manually curated with many user-friendly features, and is cross-linked to the NCBI, HUGO and PubMed databases. A free registration is required for qualified users to access the database. Database URL: http://nntc-dcc.unmc.edu PMID:26228431
ERIC Educational Resources Information Center
Berman, Paul; And Others
This first-year report of the National Effective Transfer Consortium (NETC) summarizes the progress made by the member colleges in creating standardized measures of actual and expected transfer rates and of transfer effectiveness, and establishing a database that would enable valid comparisons among NETC colleges. Following background information…
Cserhati, Matyas F; Pandey, Sanjit; Beaudoin, James J; Baccaglini, Lorena; Guda, Chittibabu; Fox, Howard S
2015-01-01
We herein present the National NeuroAIDS Tissue Consortium-Data Coordinating Center (NNTC-DCC) database, which is the only available database for neuroAIDS studies that contains data in an integrated, standardized form. This database has been created in conjunction with the NNTC, which provides human tissue and biofluid samples to individual researchers to conduct studies focused on neuroAIDS. The database contains experimental datasets from 1206 subjects for the following categories (which are further broken down into subcategories): gene expression, genotype, proteins, endo-exo-chemicals, morphometrics and other (miscellaneous) data. The database also contains a wide variety of downloadable data and metadata for 95 HIV-related studies covering 170 assays from 61 principal investigators. The data represent 76 tissue types, 25 measurement types, and 38 technology types, and reaches a total of 33,017,407 data points. We used the ISA platform to create the database and develop a searchable web interface for querying the data. A gene search tool is also available, which searches for NCBI GEO datasets associated with selected genes. The database is manually curated with many user-friendly features, and is cross-linked to the NCBI, HUGO and PubMed databases. A free registration is required for qualified users to access the database. © The Author(s) 2015. Published by Oxford University Press.
Pulmonary nodule detection using a cascaded SVM classifier
NASA Astrophysics Data System (ADS)
Bergtholdt, Martin; Wiemker, Rafael; Klinder, Tobias
2016-03-01
Automatic detection of lung nodules from chest CT has been researched intensively over the last decades, also resulting in several commercial products. However, solutions are adopted only slowly into daily clinical routine, as many current CAD systems still potentially miss true nodules while at the same time generating too many false positives (FP). While many earlier approaches had to rely on rather few cases for development, larger databases have now become available and can be used for algorithm development. In this paper, we address the problem of lung nodule detection via a cascaded SVM classifier. The idea is to perform two classification tasks sequentially in order to select, from an extremely large pool of potential candidates, the few most likely ones. As the initial pool is allowed to contain thousands of candidates, very loose criteria can be applied during this pre-selection. In this way, the chance that a true nodule is falsely rejected as a candidate is reduced significantly. The final algorithm is trained and tested on the full LIDC/IDRI database. Comparison is made against two previously published CAD systems. Overall, the algorithm achieved a sensitivity of 0.859 at 2.5 FP/volume, whereas the other two achieved sensitivity values of 0.321 and 0.625, respectively. On low-dose data sets, only a slight increase in the number of FP/volume was observed, while the sensitivity was not affected.
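A minimal sketch of a two-stage cascade in the spirit described above, assuming scikit-learn: a cheap linear SVM with a recall-biased threshold prunes the candidate pool, and an RBF SVM classifies the survivors. The features, kernels, and thresholds are placeholders rather than those of the paper, and `CascadedSVM` is an illustrative class, not the authors' implementation.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC, LinearSVC

class CascadedSVM:
    """Two-stage cascade: a fast, high-sensitivity linear SVM rejects most
    candidates; an RBF SVM then classifies the surviving ones."""

    def __init__(self, stage1_recall_bias=-1.0):
        self.stage1 = make_pipeline(StandardScaler(), LinearSVC(C=1.0))
        self.stage2 = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
        self.bias = stage1_recall_bias      # shift stage-1 boundary toward high recall

    def fit(self, X, y):
        self.stage1.fit(X, y)
        keep = self.stage1.decision_function(X) > self.bias
        self.stage2.fit(X[keep], y[keep])   # stage 2 trained only on surviving candidates
        return self

    def predict(self, X):
        pred = np.zeros(len(X), dtype=int)
        keep = self.stage1.decision_function(X) > self.bias
        if keep.any():
            pred[keep] = self.stage2.predict(X[keep])
        return pred
```

Lowering `stage1_recall_bias` keeps more candidates alive into stage 2, trading false positives for sensitivity, which mirrors the loose pre-selection criteria the abstract describes.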
Design of LED fish lighting attractors using horizontal/vertical LIDC mapping method.
Shen, S C; Huang, H J
2012-11-19
This study employs a sub-module concept to develop high-brightness light-emitting diode (HB-LED) fishing light arrays to replace traditional fishing light attractors. The horizontal/vertical (H/V) plane light intensity distribution curves (LIDCs) of an LED light source are mapped to assist in the design of a non-axisymmetric lens with a fish-attracting light pattern that illuminates sufficiently large areas and alternates between bright and dark. These LED fishing light attractors are capable of attracting schools of fish toward the perimeter of the luminous zone surrounding fishing boats. Three CT2 boats (10 to 20 ton capacity) were recruited to conduct a field test for one year on the sea off the southwestern coast of Taiwan. Field tests show that an HB-LED fishing light array installed 5 m above the boat deck illuminated a sea surface of 5 × 12 m and achieved an illuminance of 2000 lx. The test results show that the HB-LED fishing light arrays increased the mean catch of the three boats by 5% to 27%. In addition, the experimental boats consumed 15% to 17% less fuel than their counterparts.
Broglio, Steven P; McCrea, Michael; McAllister, Thomas; Harezlak, Jaroslaw; Katz, Barry; Hack, Dallas; Hainline, Brian
2017-07-01
The natural history of mild traumatic brain injury (TBI) or concussion remains poorly defined and no objective biomarker of physiological recovery exists for clinical use. The National Collegiate Athletic Association (NCAA) and the US Department of Defense (DoD) established the Concussion Assessment, Research and Education (CARE) Consortium to study the natural history of clinical and neurobiological recovery after concussion in the service of improved injury prevention, safety and medical care for student-athletes and military personnel. The objectives of this paper were to (i) describe the background and driving rationale for the CARE Consortium; (ii) outline the infrastructure of the Consortium policies, procedures, and governance; (iii) describe the longitudinal 6-month clinical and neurobiological study methodology; and (iv) characterize special considerations in the design and implementation of a multicenter trial. Beginning Fall 2014, CARE Consortium institutions have recruited and enrolled 23,533 student-athletes and military service academy students (approximately 90% of eligible student-athletes and cadets; 64.6% male, 35.4% female). A total of 1174 concussions have been diagnosed in participating subjects, with both concussion and baseline cases deposited in the Federal Interagency Traumatic Brain Injury Research (FITBIR) database. Challenges have included coordinating regulatory issues across civilian and military institutions, operationalizing study procedures, neuroimaging protocol harmonization across sites and platforms, construction and maintenance of a relational database, and data quality and integrity monitoring. The NCAA-DoD CARE Consortium represents a comprehensive investigation of concussion in student-athletes and military service academy students. The richly characterized study sample and multidimensional approach provide an opportunity to advance the field of concussion science, not only among student athletes but in all populations at risk for mild TBI.
Completion of the National Land Cover Database (NLCD) 1992-2001 Land Cover Change Retrofit Product
The Multi-Resolution Land Characteristics Consortium has supported the development of two national digital land cover products: the National Land Cover Dataset (NLCD) 1992 and National Land Cover Database (NLCD) 2001. Substantial differences in imagery, legends, and methods betwe...
Completion of the 2006 National Land Cover Database Update for the Conterminous United States
Under the organization of the Multi-Resolution Land Characteristics (MRLC) Consortium, the National Land Cover Database (NLCD) has been updated to characterize both land cover and land cover change from 2001 to 2006. An updated version of NLCD 2001 (Version 2.0) is also provided....
Caputo, Sandrine; Benboudjema, Louisa; Sinilnikova, Olga; Rouleau, Etienne; Béroud, Christophe; Lidereau, Rosette
2012-01-01
BRCA1 and BRCA2 are the two main genes responsible for predisposition to breast and ovarian cancers, as a result of protein-inactivating monoallelic mutations. It remains to be established whether many of the variants identified in these two genes, so-called unclassified/unknown variants (UVs), contribute to the disease phenotype or are simply neutral variants (or polymorphisms). Given the clinical importance of establishing their status, a nationwide effort to annotate these UVs was launched by laboratories belonging to the French GGC consortium (Groupe Génétique et Cancer), leading to the creation of the UMD-BRCA1/BRCA2 databases (http://www.umd.be/BRCA1/ and http://www.umd.be/BRCA2/). These databases have been endorsed by the French National Cancer Institute (INCa) and are designed to collect all variants detected in France, whether causal, neutral or UV. They differ from other BRCA databases in that they contain co-occurrence data for all variants. Using these data, the GGC French consortium has been able to classify certain UVs also contained in other databases. In this article, we report some novel UVs not contained in the BIC database and explore their impact in cancer predisposition based on a structural approach.
Mishra, Amrita
2014-01-01
Abstract Omics research infrastructure such as databases and bio-repositories requires effective governance to support pre-competitive research. Governance includes the use of legal agreements, such as Material Transfer Agreements (MTAs). We analyze the use of such agreements in the mouse research commons, including by two large-scale resource development projects: the International Knockout Mouse Consortium (IKMC) and International Mouse Phenotyping Consortium (IMPC). We combine an analysis of legal agreements and semi-structured interviews with 87 members of the mouse model research community to examine legal agreements in four contexts: (1) between researchers; (2) deposit into repositories; (3) distribution by repositories; and (4) exchanges between repositories, especially those that are consortium members of the IKMC and IMPC. We conclude that legal agreements for the deposit and distribution of research reagents should be kept as simple and standard as possible, especially when minimal enforcement capacity and resources exist. Simple and standardized legal agreements reduce transactional bottlenecks and facilitate the creation of a vibrant and sustainable research commons, supported by repositories and databases. PMID:24552652
... Find Local Resources Publications Webinars and Videos Biosample Repository Patient-Focused Drug Development Learn Engage Donate Healthcare ... and Funding Preclinical Research Natural History Database Biosample ... Research Consortium Research Conferences Research Resources International ...
... Find Local Resources Publications Webinars and Videos Biosample Repository Patient-Focused Drug Development Learn Engage Donate Healthcare ... and Funding Preclinical Research Natural History Database Biosample ... Research Consortium Research Conferences Research Resources International ...
NASA Astrophysics Data System (ADS)
Chen, Bin; Kitasaka, Takayuki; Honma, Hirotoshi; Takabatake, Hirotsugu; Mori, Masaki; Natori, Hiroshi; Mori, Kensaku
2012-03-01
This paper presents a solitary pulmonary nodule (SPN) segmentation method based on local intensity structure analysis and neighborhood feature analysis in chest CT images. Automated segmentation of SPNs is desirable for a chest computer-aided detection/diagnosis (CAD) system, since an SPN may indicate an early stage of lung cancer. Because SPNs and other chest structures such as blood vessels have similar intensities, many false positives (FPs) are generated by nodule detection methods. To reduce such FPs, we introduce two features that analyze the relation between each segmented nodule candidate and its neighborhood region. The proposed method utilizes a blob-like structure enhancement (BSE) filter based on Hessian analysis to augment blob-like structures as initial nodule candidates. A fine segmentation is then performed to segment a much more accurate region for each nodule candidate. FP reduction is mainly addressed by investigating two neighborhood features, based on the volume ratio and the eigenvectors of the Hessian, that are calculated from the neighborhood region of each nodule candidate. We evaluated the proposed method using 40 chest CT images, including 20 standard-dose CT images randomly chosen from a local database and 20 low-dose CT images randomly chosen from a public database (LIDC). The experimental results revealed that the average TP rate of the proposed method was 93.6% with 12.3 FPs/case.
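A sketch of a Hessian-based blob-likeness response in the spirit of a BSE filter: at each voxel the Hessian of the Gaussian-smoothed volume is assembled and its eigenvalues are combined so that bright, roughly spherical structures score high while tubular (vessel-like) ones do not. The exact filter formulation, scale handling, and the neighborhood features of the paper are not reproduced; this follows the common dot-enhancement form |l3|^2/|l1| as an assumed stand-in.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_blob_response(volume, sigma=2.0):
    """Blob-likeness from Hessian eigenvalues. For a bright blob all three
    eigenvalues are strongly negative and of similar magnitude; for a bright
    tube one eigenvalue stays near zero, so |l3|^2 / |l1| is large only for blobs."""
    smoothed = gaussian_filter(volume.astype(float), sigma)
    grads = np.gradient(smoothed)
    H = np.empty(volume.shape + (3, 3))
    for i in range(3):
        gi = np.gradient(grads[i])
        for j in range(3):
            H[..., i, j] = gi[j]                 # second derivative d^2/(dx_i dx_j)
    eigs = np.linalg.eigvalsh(H)                 # eigenvalues per voxel, ascending
    order = np.argsort(-np.abs(eigs), axis=-1)   # sort so |l1| >= |l2| >= |l3|
    l1, l2, l3 = np.take_along_axis(eigs, order, axis=-1).transpose(3, 0, 1, 2)
    return np.where((l1 < 0) & (l2 < 0) & (l3 < 0),
                    np.abs(l3) ** 2 / (np.abs(l1) + 1e-12), 0.0)
```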
Call for participation in the neurogenetics consortium within the Human Variome Project.
Haworth, Andrea; Bertram, Lars; Carrera, Paola; Elson, Joanna L; Braastad, Corey D; Cox, Diane W; Cruts, Marc; den Dunnen, Johann T; Farrer, Matthew J; Fink, John K; Hamed, Sherifa A; Houlden, Henry; Johnson, Dennis R; Nuytemans, Karen; Palau, Francesc; Rayan, Dipa L Raja; Robinson, Peter N; Salas, Antonio; Schüle, Birgitt; Sweeney, Mary G; Woods, Michael O; Amigo, Jorge; Cotton, Richard G H; Sobrido, Maria-Jesus
2011-08-01
The rate of DNA variation discovery has accelerated the need to collate, store and interpret the data in a standardised coherent way and is becoming a critical step in maximising the impact of discovery on the understanding and treatment of human disease. This particularly applies to the field of neurology as neurological function is impaired in many human disorders. Furthermore, the field of neurogenetics has been proven to show remarkably complex genotype-to-phenotype relationships. To facilitate the collection of DNA sequence variation pertaining to neurogenetic disorders, we have initiated the "Neurogenetics Consortium" under the umbrella of the Human Variome Project. The Consortium's founding group consisted of basic researchers, clinicians, informaticians and database creators. This report outlines the strategic aims established at the preliminary meetings of the Neurogenetics Consortium and calls for the involvement of the wider neurogenetic community in enabling the development of this important resource.
... Find Local Resources Publications Webinars and Videos Biosample Repository Patient-Focused Drug Development Learn Engage Donate Healthcare ... and Funding Preclinical Research Natural History Database Biosample ... Research Consortium Research Conferences Research Resources International ...
Scientific Use Cases for the Virtual Atomic and Molecular Data Center
NASA Astrophysics Data System (ADS)
Dubernet, M. L.; Aboudarham, J.; Ba, Y. A.; Boiziot, M.; Bottinelli, S.; Caux, E.; Endres, C.; Glorian, J. M.; Henry, F.; Lamy, L.; Le Sidaner, P.; Møller, T.; Moreau, N.; Rénié, C.; Roueff, E.; Schilke, P.; Vastel, C.; Zwoelf, C. M.
2014-12-01
The VAMDC Consortium is a worldwide consortium that federates interoperable atomic and molecular databases through an e-science infrastructure. The contained data are of the highest scientific quality and are crucial for many applications: astrophysics, atmospheric physics, fusion, plasma and lighting technologies, health, etc. In this paper we present astrophysical scientific use cases involving the VAMDC e-infrastructure. These cover very different applications, such as: (i) modeling the spectra of interstellar objects using the myXCLASS software tool implemented in the Common Astronomy Software Applications package (CASA), or using the CASSIS software tool, in its stand-alone version or implemented in the Herschel Interactive Processing Environment (HIPE); (ii) the use of Virtual Observatory tools accessing VAMDC databases; (iii) access to VAMDC from the Paris solar BASS2000 portal; (iv) the combination of tools and databases from the APIS service (Auroral Planetary Imaging and Spectroscopy); (v) the combination of heterogeneous data for application to the interstellar medium using the SPECTCOL tool.
NRA8-21 Cycle 2 RBCC Turbopump Risk Reduction
NASA Technical Reports Server (NTRS)
Ferguson, Thomas V.; Williams, Morgan; Marcu, Bogdan
2004-01-01
This project was composed of three sub-tasks. The objective of the first task was to use the CFD code INS3D to generate both on- and off-design predictions for the consortium optimized impeller flowfield. The results of the flow simulations are given in the first section. The objective of the second task was to construct a turbomachinery testing database comprised of measurements made on several different impellers, an inducer and a diffuser. The data was in the form of static pressure measurements as well as laser velocimeter measurements of velocities and flow angles within the stated components. Several databases with this information were created for these components. The third subtask objective was two-fold: first, to validate the Enigma CFD code for pump diffuser analysis, and secondly, to perform steady and unsteady analyses on some wide flow range diffuser concepts using Enigma. The code was validated using the consortium optimized impeller database and then applied to two different concepts for wide flow diffusers.
NASA Astrophysics Data System (ADS)
Blaffert, Thomas; Wiemker, Rafael; Barschdorf, Hans; Kabus, Sven; Klinder, Tobias; Lorenz, Cristian; Schadewaldt, Nicole; Dharaiya, Ekta
2010-03-01
Automated segmentation of lung lobes in thoracic CT images is relevant for various diagnostic purposes, such as localization of tumors within the lung or quantification of emphysema. Since emphysema is a known risk factor for lung cancer, the two purposes are even related to each other. The main steps of the segmentation pipeline described in this paper are the lung detector, the lung segmentation based on a watershed algorithm, and the lung lobe segmentation based on mesh model adaptation. The segmentation procedure was applied to data sets from the database of the Image Database Resource Initiative (IDRI), which currently contains over 500 thoracic CT scans with delineated lung nodule annotations. We visually assessed the reliability of the individual segmentation steps, finding a success rate of 98% for lung detection and 90% for lung delineation. For about 20% of the cases we found the lobe segmentation not to be anatomically plausible. A modeling confidence measure is introduced that gives a quantitative indication of the segmentation quality. As a demonstration of the segmentation method, we studied the correlation between emphysema score and malignancy on a per-lobe basis.
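A minimal, assumption-laden sketch of a marker-based watershed lung segmentation like the second step of the pipeline: air and soft-tissue seeds are placed by generic HU thresholds and a watershed on the gradient magnitude separates them. The lung detector, the lobe-level mesh adaptation, and the confidence measure are not attempted here, and the thresholds are illustrative rather than the paper's.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed, clear_border

def lung_mask_watershed(ct_hu):
    """Marker-based watershed separating air-filled lung from the chest wall.
    The HU seed thresholds are generic illustrative values."""
    markers = np.zeros(ct_hu.shape, dtype=np.uint8)
    markers[ct_hu < -900] = 1                     # air (lung interior and outside the body)
    markers[ct_hu > -300] = 2                     # soft tissue / chest wall
    landscape = ndi.gaussian_gradient_magnitude(ct_hu.astype(float), sigma=1.0)
    labels = watershed(landscape, markers)        # flood the gradient landscape from the seeds
    air = labels == 1
    lungs = clear_border(air)                     # drop air connected to the image border
    return ndi.binary_fill_holes(lungs)           # close vessel/airway holes inside the lung

# usage on a synthetic volume standing in for a CT scan in HU
ct = np.full((64, 128, 128), 40.0)                # soft-tissue background
ct[10:54, 30:98, 20:60] = -920.0                  # crude "lung" region
mask = lung_mask_watershed(ct)
print("lung fraction of volume:", round(mask.mean(), 3))
```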
Prasad, Anjali; Helder, Meghana R; Brown, Dwight A; Schaff, Hartzell V
2016-10-01
The University HealthSystem Consortium (UHC) administrative database has been used increasingly as a quality indicator for hospitals and even individual surgeons. We aimed to determine the accuracy of cardiac surgical data in the administrative UHC database vs data in the clinical Society of Thoracic Surgeons database. We reviewed demographic and outcomes information of patients with aortic valve replacement (AVR), mitral valve replacement (MVR), and coronary artery bypass grafting (CABG) surgery between January 1, 2012, and December 31, 2013. Data collected in aggregate and compared across the databases included case volume, physician specialty coding, patient age and sex, comorbidities, mortality rate, and postoperative complications. In these 2 years, the UHC database recorded 1,270 AVRs, 355 MVRs, and 1,473 CABGs. The Society of Thoracic Surgeons database case volumes were less by 2% to 12% (1,219 AVRs; 316 MVRs; and 1,442 CABGs). Errors in physician specialty coding occurred in UHC data (AVR, 0.6%; MVR, 0.8%; and CABG, 0.7%). In matched patients from each database, demographic age and sex information was identical. Although definitions differed in the databases, percentages of patients with at least one comorbidity were similar. Hospital mortality rates were similar as well, but postoperative recorded complications differed greatly. In comparing the 2 databases, we found similarity in patient demographic information and percentage of patients with comorbidities. The small difference in volumes of each operation type and the larger disparity in postoperative complications between the databases were related to differences in data definition, data collection, and coding errors. Copyright © 2016 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
The Cardiac Safety Research Consortium ECG database.
Kligfield, Paul; Green, Cynthia L
2012-01-01
The Cardiac Safety Research Consortium (CSRC) ECG database was initiated to foster research using anonymized, XML-formatted, digitized ECGs with corresponding descriptive variables from placebo- and positive-control arms of thorough QT studies submitted to the US Food and Drug Administration (FDA) by pharmaceutical sponsors. The database can be expanded to other data that are submitted directly to CSRC from other sources, and currently includes digitized ECGs from patients with genotyped varieties of congenital long-QT syndrome; this congenital long-QT database is also linked to ambulatory electrocardiograms stored in the Telemetric and Holter ECG Warehouse (THEW). Thorough QT data sets are available from CSRC for unblinded development of algorithms for analysis of repolarization and for blinded comparative testing of algorithms developed for the identification of moxifloxacin, as used as a positive control in thorough QT studies. Policies and procedures for access to these data sets are available from CSRC, which has developed tools for statistical analysis of blinded new algorithm performance. A recently approved CSRC project will create a data set for blinded analysis of automated ECG interval measurements, whose initial focus will include comparison of four of the major manufacturers of automated electrocardiographs in the United States. CSRC welcomes application for use of the ECG database for clinical investigation. Copyright © 2012 Elsevier Inc. All rights reserved.
Types of Seizures Affecting Individuals with TSC
... Find Local Resources Publications Webinars and Videos Biosample Repository Patient-Focused Drug Development Learn Engage Donate Healthcare ... and Funding Preclinical Research Natural History Database Biosample ... Research Consortium Research Conferences Research Resources International ...
Homer, Collin G.; Dewitz, Jon; Yang, Limin; Jin, Suming; Danielson, Patrick; Xian, George Z.; Coulston, John; Herold, Nathaniel; Wickham, James; Megown, Kevin
2015-01-01
The National Land Cover Database (NLCD) provides nationwide data on land cover and land cover change at the native 30-m spatial resolution of the Landsat Thematic Mapper (TM). The database is designed to provide five-year cyclical updating of United States land cover and associated changes. The recent release of NLCD 2011 products now represents a decade of consistently produced land cover and impervious surface for the Nation across three periods: 2001, 2006, and 2011 (Homer et al., 2007; Fry et al., 2011). Tree canopy cover has also been produced for 2011 (Coulston et al., 2012; Coulston et al., 2013). With the release of NLCD 2011, the database provides the ability to move beyond simple change detection to monitoring and trend assessments. NLCD 2011 represents the latest evolution of NLCD products, continuing its focus on consistency, production efficiency, and product accuracy. NLCD products are designed for widespread application in biology, climate, education, land management, hydrology, environmental planning, risk and disease analysis, telecommunications and visualization, and are available at no cost at http://www.mrlc.gov. NLCD is produced by a Federal agency consortium called the Multi-Resolution Land Characteristics Consortium (MRLC) (Wickham et al., 2014). In the consortium arrangement, the U.S. Geological Survey (USGS) leads NLCD land cover and imperviousness production for the bulk of the Nation; the National Oceanic and Atmospheric Administration (NOAA) completes NLCD land cover for the conterminous U.S. (CONUS) coastal zones; and the U.S. Forest Service (USFS) designs and produces the NLCD tree canopy cover product. Other MRLC partners collaborate through resource or data contribution to ensure NLCD products meet their respective program needs (Wickham et al., 2014).
ERIC Educational Resources Information Center
Kreie, Jennifer; Hashemi, Shohreh
2012-01-01
Data is a vital resource for businesses; therefore, it is important for businesses to manage and use their data effectively. Because of this, businesses value college graduates with an understanding of and hands-on experience working with databases, data warehouses and data analysis theories and tools. Faculty in many business disciplines try to…
A 30-meter spatial database for the nation's forests
Raymond L. Czaplewski
2002-01-01
The FIA vision for remote sensing originated in 1992 with the Blue Ribbon Panel on FIA, and it has since evolved into an ambitious performance target for 2003. FIA is joining a consortium of Federal agencies to map the Nation's land cover. FIA field data will help produce a seamless, standardized, national geospatial database for forests at the scale of 30-m...
Moscucci, Mauro; Share, David; Kline-Rogers, Eva; O'Donnell, Michael; Maxwell-Eward, Ann; Meengs, William L; Clark, Vivian L; Kraft, Phillip; De Franco, Anthony C; Chambers, James L; Patel, Kirit; McGinnity, John G; Eagle, Kim A
2002-10-01
The past decade has been characterized by increased scrutiny of the outcomes of surgical and percutaneous coronary interventions (PCIs). This increased scrutiny has led to the development of regional, state, and national databases for outcome assessment and public reporting. This report describes the initial development of a regional, collaborative cardiovascular consortium and the progress made so far by this collaborative group. In 1997, a group of hospitals in the state of Michigan agreed to create a regional collaborative consortium for the development of a quality improvement program in interventional cardiology. The project included the creation of a comprehensive database of PCIs to be used for risk assessment, feedback on absolute and risk-adjusted outcomes, and sharing of information. To date, information from nearly 20,000 PCIs has been collected. A risk prediction tool for in-hospital death and additional risk prediction tools for other outcomes have been developed from the data collected, and they are currently used by the participating centers for risk assessment and quality improvement. As the project enters its fifth year, the participating centers are deeply engaged in the quality improvement phase, and expansion to a total of 17 hospitals with active PCI programs is in process. In conclusion, the Blue Cross Blue Shield of Michigan Cardiovascular Consortium is an example of a regional collaborative effort to assess and improve quality of care and outcomes, overcoming the barriers of traditional market and academic competition.
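Risk prediction tools of this kind are commonly logistic regression models fit to pre-procedure variables; the sketch below simulates such a model for in-hospital death on made-up data. The variables, coefficients, and event rate are invented for illustration and are not those of the consortium's actual risk tool.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical pre-procedure risk factors; simulated, not consortium data.
rng = np.random.default_rng(42)
n = 5000
X = np.column_stack([
    rng.normal(65, 12, n),           # age (years)
    rng.binomial(1, 0.30, n),        # diabetes
    rng.binomial(1, 0.05, n),        # cardiogenic shock
    rng.normal(55, 12, n),           # ejection fraction (%)
])
# assumed "true" model used only to generate outcomes for the simulation
logit = -6.0 + 0.05 * X[:, 0] + 0.4 * X[:, 1] + 2.5 * X[:, 2] - 0.03 * X[:, 3]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))    # simulated in-hospital death (~1-2%)

model = LogisticRegression(max_iter=1000)
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
print(f"cross-validated AUC: {auc:.2f}")
```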
Moran, Jean M; Feng, Mary; Benedetti, Lisa A; Marsh, Robin; Griffith, Kent A; Matuszak, Martha M; Hess, Michael; McMullen, Matthew; Fisher, Jennifer H; Nurushev, Teamour; Grubb, Margaret; Gardner, Stephen; Nielsen, Daniel; Jagsi, Reshma; Hayman, James A; Pierce, Lori J
A database in which patient data are compiled allows analytic opportunities for continuous improvements in treatment quality and comparative effectiveness research. We describe the development of a novel, web-based system that supports the collection of complex radiation treatment planning information from centers that use diverse techniques, software, and hardware for radiation oncology care in a statewide quality collaborative, the Michigan Radiation Oncology Quality Consortium (MROQC). The MROQC database seeks to enable assessment of physician- and patient-reported outcomes and quality improvement as a function of treatment planning and delivery techniques for breast and lung cancer patients. We created tools to collect anonymized data based on all plans. The MROQC system representing 24 institutions has been successfully deployed in the state of Michigan. Since 2012, dose-volume histogram and Digital Imaging and Communications in Medicine-radiation therapy plan data and information on simulation, planning, and delivery techniques have been collected. Audits indicated >90% accurate data submission and spurred refinements to data collection methodology. This model web-based system captures detailed, high-quality radiation therapy dosimetry data along with patient- and physician-reported outcomes and clinical data for a radiation therapy collaborative quality initiative. The collaborative nature of the project has been integral to its success. Our methodology can be applied to setting up analogous consortiums and databases. Copyright © 2016 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
Samberg, Meghan E.; Cohen, Paul H.; Wysk, Richard A.; Monteiro-Riviere, Nancy A.
2012-01-01
Nanomaterials play a significant role in biomedical research and applications due to their unique biological, mechanical, and electrical properties. In recent years, they have been utilised to improve the functionality and reliability of a wide range of implantable medical devices, ranging from well-established orthopaedic residual hardware devices (e.g. hip implants) that can repair defects in skeletal systems to emerging tissue engineering scaffolds that can repair or replace organ functions. This review summarizes the applications and efficacies of these nanomaterials, which include synthetic or naturally occurring metals, polymers, ceramics, and composites, in orthopaedic implants, the largest market segment of implantable medical devices. The importance of synergistic engineering techniques that can augment or enhance the performance of nanomaterial applications in orthopaedic implants is also discussed, the focus being on a low intensity direct electric current (LIDC) stimulation technology that promotes the long-term antibacterial efficacy of oligodynamic metal-based surfaces by ionization while potentially accelerating tissue growth and osseointegration. While many nanomaterials have clearly demonstrated their ability to provide more effective implantable medical surfaces, further decisive investigations are necessary before they can translate into medically safe and commercially viable clinical applications. The paper concludes with a discussion of some of the critical impending issues in the application of nanomaterial-based technologies in implantable medical devices, and potential directions to address these. PMID:23335493
Construction of 3-D Earth Models for Station Specific Path Corrections by Dynamic Ray Tracing
2001-10-01
the numerical eikonal solution method of Vidale (1988) being used by the MIT-led consortium. The model construction described in this report relies... assembled. REFERENCES: Barazangi, M., Fielding, E., Isacks, B. & Seber, D. (1996), Geophysical and Geological Databases and CTBT... (preprint); Fielding, E., Isacks, B.L., and Barazangi, M. (1992), A Network-Accessible Geological and Geophysical Database for...
Improvements to the Magnetics Information Consortium (MagIC) Paleo and Rock Magnetic Database
NASA Astrophysics Data System (ADS)
Jarboe, N.; Minnett, R.; Tauxe, L.; Koppers, A. A. P.; Constable, C.; Jonestrask, L.
2015-12-01
The Magnetic Information Consortium (MagIC) database (http://earthref.org/MagIC/) continues to improve the ease of data uploading and editing, the creation of complex searches, data visualization, and data downloads for the paleomagnetic, geomagnetic, and rock magnetic communities. Online data editing is now available, and the need for proprietary spreadsheet software is therefore entirely removed. The data owner can change values in the database or delete entries through an HTML5 web interface that resembles typical spreadsheets in behavior and use. Additive uploading now allows additions to data sets to be uploaded with a simple drag-and-drop interface. Searching the database has improved with the addition of more sophisticated search parameters and with the facility to use them in complex combinations. A comprehensive summary view of a search result has been added for quicker data comprehension, while a raw data view is available if one desires to see all data columns as stored in the database. Data visualization plots (ARAI, equal area, demagnetization, Zijderveld, etc.) are presented with the data when appropriate to aid the user in understanding the dataset. MagIC data associated with individual contributions or from online searches may be downloaded in the tab-delimited MagIC text file format for subsequent offline use and analysis. With input from the paleomagnetic, geomagnetic, and rock magnetic communities, the MagIC database will continue to improve as a data warehouse and resource.
A New Interface for the Magnetics Information Consortium (MagIC) Paleo and Rock Magnetic Database
NASA Astrophysics Data System (ADS)
Jarboe, N.; Minnett, R.; Koppers, A. A. P.; Tauxe, L.; Constable, C.; Shaar, R.; Jonestrask, L.
2014-12-01
The Magnetic Information Consortium (MagIC) database (http://earthref.org/MagIC/) continues to improve the ease of uploading data, the creation of complex searches, data visualization, and data downloads for the paleomagnetic, geomagnetic, and rock magnetic communities. Data uploading has been simplified and no longer requires the use of the Excel SmartBook interface. Instead, properly formatted MagIC text files can be dragged-and-dropped onto an HTML 5 web interface. Data can be uploaded one table at a time to facilitate ease of uploading and data error checking is done online on the whole dataset at once instead of incrementally in an Excel Console. Searching the database has improved with the addition of more sophisticated search parameters and with the ability to use them in complex combinations. Searches may also be saved as permanent URLs for easy reference or for use as a citation in a publication. Data visualization plots (ARAI, equal area, demagnetization, Zijderveld, etc.) are presented with the data when appropriate to aid the user in understanding the dataset. Data from the MagIC database may be downloaded from individual contributions or from online searches for offline use and analysis in the tab delimited MagIC text file format. With input from the paleomagnetic, geomagnetic, and rock magnetic communities, the MagIC database will continue to improve as a data warehouse and resource.
Kamitsuji, Shigeo; Matsuda, Takashi; Nishimura, Koichi; Endo, Seiko; Wada, Chisa; Watanabe, Kenji; Hasegawa, Koichi; Hishigaki, Haretsugu; Masuda, Masatoshi; Kuwahara, Yusuke; Tsuritani, Katsuki; Sugiura, Kenkichi; Kubota, Tomoko; Miyoshi, Shinji; Okada, Kinya; Nakazono, Kazuyuki; Sugaya, Yuki; Yang, Woosung; Sawamoto, Taiji; Uchida, Wataru; Shinagawa, Akira; Fujiwara, Tsutomu; Yamada, Hisaharu; Suematsu, Koji; Tsutsui, Naohisa; Kamatani, Naoyuki; Liou, Shyh-Yuh
2015-06-01
The Japan Pharmacogenomics Data Science Consortium (JPDSC) has assembled a database for conducting pharmacogenomics (PGx) studies in Japanese subjects. The database contains the genotypes of 2.5 million single-nucleotide polymorphisms (SNPs) and 5 human leukocyte antigen loci from 2994 Japanese healthy volunteers, as well as 121 kinds of clinical information, including self-reports, physiological data, hematological data, and biochemical data. In this article, the reliability of our data was evaluated by principal component analysis (PCA) and association analysis for hematological and biochemical traits using genome-wide SNP data. PCA of the SNPs showed that all the samples were collected from the Japanese population and that the samples were separated into two major clusters by birthplace, Okinawa and other than Okinawa, as had been previously reported. Among 87 SNPs that have been reported to be associated with 18 hematological and biochemical traits in genome-wide association studies (GWAS), the associations of 56 SNPs were replicated using our database. Statistical power simulations showed that the sample size of the JPDSC control database is large enough to detect genetic markers having a relatively strong association even when the case sample size is small. The JPDSC database will be useful as control data for conducting PGx studies to explore genetic markers to improve the safety and efficacy of drugs either during clinical development or in post-marketing.
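The population-structure check described above (PCA separating samples by birthplace) can be illustrated with a toy genotype matrix; the sketch below uses synthetic 0/1/2 genotype codes and scikit-learn, and is not the JPDSC analysis pipeline.

```python
# Sketch: PCA of a samples-by-SNPs genotype matrix (coded 0/1/2) to look for
# population structure. The genotypes here are random toy data.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
genotypes = rng.integers(0, 3, size=(200, 5000)).astype(float)

genotypes -= genotypes.mean(axis=0)        # center each SNP before PCA
pcs = PCA(n_components=2).fit_transform(genotypes)
print(pcs[:5])                             # first two PCs for the first samples
```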
Bonnie Ruefenacht; Robert Benton; Vicky Johnson; Tanushree Biswas; Craig Baker; Mark Finco; Kevin Megown; John Coulston; Ken Winterberger; Mark Riley
2015-01-01
A tree canopy cover (TCC) layer is one of three elements in the National Land Cover Database (NLCD) 2011 suite of nationwide geospatial data layers. In 2010, the USDA Forest Service (USFS) committed to creating the TCC layer as a member of the Multi-Resolution Land Characteristics (MRLC) consortium. A general methodology for creating the TCC layer was reported at the 2012 FIA...
RNAcentral: an international database of ncRNA sequences
Williams, Kelly Porter
2014-10-28
The field of non-coding RNA biology has been hampered by the lack of availability of a comprehensive, up-to-date collection of accessioned RNA sequences. Here we present the first release of RNAcentral, a database that collates and integrates information from an international consortium of established RNA sequence databases. The initial release contains over 8.1 million sequences, including representatives of all major functional classes. A web portal (http://rnacentral.org) provides free access to data, search functionality, cross-references, source code and an integrated genome browser for selected species.
NASA Astrophysics Data System (ADS)
Jenuwine, Natalia M.; Mahesh, Sunny N.; Furst, Jacob D.; Raicu, Daniela S.
2018-02-01
Early detection of lung nodules from CT scans is key to improving lung cancer treatment, but poses a significant challenge for radiologists due to the high throughput required of them. Computer-Aided Detection (CADe) systems aim to automatically detect these nodules with computer algorithms, thus improving diagnosis. These systems typically use a candidate selection step, which identifies all objects that resemble nodules, followed by a machine learning classifier which separates true nodules from false positives. We create a CADe system that uses a 3D convolutional neural network (CNN) to detect nodules in CT scans without a candidate selection step. Using data from the LIDC database, we train a 3D CNN to analyze subvolumes from anywhere within a CT scan and output the probability that each subvolume contains a nodule. Once trained, we apply our CNN to detect nodules from entire scans, by systematically dividing the scan into overlapping subvolumes which we input into the CNN to obtain the corresponding probabilities. By enabling our network to process an entire scan, we expect to streamline the detection process while maintaining its effectiveness. Our results imply that with continued training using an iterative training scheme, the one-step approach has the potential to be highly effective.
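The scan-wide detection step can be pictured as a sliding window over the volume: every overlapping subvolume is scored by the trained network. The sketch below uses a synthetic volume and a placeholder scoring function in place of the trained 3D CNN.

```python
# Sketch: split a CT volume into overlapping subvolumes and score each one.
# nodule_probability() is a stand-in for a trained 3D CNN.
import numpy as np

def iter_subvolumes(volume, size=32, stride=16):
    zmax, ymax, xmax = volume.shape
    for z in range(0, zmax - size + 1, stride):
        for y in range(0, ymax - size + 1, stride):
            for x in range(0, xmax - size + 1, stride):
                yield (z, y, x), volume[z:z + size, y:y + size, x:x + size]

def nodule_probability(subvolume):
    return float(subvolume.mean() > 0.5)   # dummy heuristic, not a real model

scan = np.random.rand(64, 128, 128)        # synthetic scan
flagged = [pos for pos, sv in iter_subvolumes(scan) if nodule_probability(sv) > 0.5]
print(len(flagged), "subvolumes flagged")
```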
Wang, Shuo; Zhou, Mu; Liu, Zaiyi; Liu, Zhenyu; Gu, Dongsheng; Zang, Yali; Dong, Di; Gevaert, Olivier; Tian, Jie
2017-08-01
Accurate lung nodule segmentation from computed tomography (CT) images is of great importance for image-driven lung cancer analysis. However, the heterogeneity of lung nodules and the presence of similar visual characteristics between nodules and their surroundings make robust nodule segmentation difficult. In this study, we propose a data-driven model, termed the Central Focused Convolutional Neural Networks (CF-CNN), to segment lung nodules from heterogeneous CT images. Our approach combines two key insights: 1) the proposed model captures a diverse set of nodule-sensitive features from both 3-D and 2-D CT images simultaneously; 2) when classifying an image voxel, the effects of its neighbor voxels can vary according to their spatial locations. We describe this phenomenon with a novel central pooling layer that retains much of the information at the center of the voxel patch, followed by a multi-scale patch learning strategy. Moreover, we design a weighted sampling scheme to facilitate model training, in which training samples are selected according to their degree of segmentation difficulty. The proposed method has been extensively evaluated on the public LIDC dataset, including 893 nodules, and an independent dataset with 74 nodules from Guangdong General Hospital (GDGH). We showed that CF-CNN achieved superior segmentation performance, with average Dice scores of 82.15% and 80.02% for the two datasets, respectively. Moreover, we compared our results with the inter-radiologist consistency on the LIDC dataset, showing a difference in average Dice score of only 1.98%. Copyright © 2017. Published by Elsevier B.V.
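The Dice scores quoted above compare a predicted mask with a reference segmentation; a minimal version of that metric, applied to toy masks, is sketched below.

```python
# Sketch: Dice similarity coefficient between two binary masks (toy data).
import numpy as np

def dice(pred, ref):
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    return 2.0 * np.logical_and(pred, ref).sum() / denom if denom else 1.0

pred = np.zeros((64, 64), dtype=bool); pred[20:40, 20:40] = True
ref = np.zeros((64, 64), dtype=bool); ref[22:42, 22:42] = True
print(round(dice(pred, ref), 4))
```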
Developing consistent Landsat data sets for large area applications: the MRLC 2001 protocol
Chander, G.; Huang, Chengquan; Yang, Limin; Homer, Collin G.; Larson, C.
2009-01-01
One of the major efforts in large area land cover mapping over the last two decades was the completion of two U.S. National Land Cover Data sets (NLCD), developed with nominal 1992 and 2001 Landsat imagery under the auspices of the MultiResolution Land Characteristics (MRLC) Consortium. Following the successful generation of NLCD 1992, a second generation MRLC initiative was launched with two primary goals: (1) to develop a consistent Landsat imagery data set for the U.S. and (2) to develop a second generation National Land Cover Database (NLCD 2001). One of the key enhancements was the formulation of an image preprocessing protocol and implementation of a consistent image processing method. The core data set of the NLCD 2001 database consists of Landsat 7 Enhanced Thematic Mapper Plus (ETM+) images. This letter details the procedures for processing the original ETM+ images and more recent scenes added to the database. NLCD 2001 products include Anderson Level II land cover classes, percent tree canopy, and percent urban imperviousness at 30-m resolution derived from Landsat imagery. The products are freely available for download to the general public from the MRLC Consortium Web site at http://www.mrlc.gov.
OPAC Missing Record Retrieval.
ERIC Educational Resources Information Center
Johnson, Karl E.
1996-01-01
When the Higher Education Library Information Network of Rhode Island transferred members' bibliographic data into a shared online public access catalog (OPAC), 10% of the University of Rhode Island's monograph records were missing. This article describes the consortium's attempts to retrieve records from the database and the effectiveness of…
Academic consortium for the evaluation of computer-aided diagnosis (CADx) in mammography
NASA Astrophysics Data System (ADS)
Mun, Seong K.; Freedman, Matthew T.; Wu, Chris Y.; Lo, Shih-Chung B.; Floyd, Carey E., Jr.; Lo, Joseph Y.; Chan, Heang-Ping; Helvie, Mark A.; Petrick, Nicholas; Sahiner, Berkman; Wei, Datong; Chakraborty, Dev P.; Clarke, Laurence P.; Kallergi, Maria; Clark, Bob; Kim, Yongmin
1995-04-01
Computer aided diagnosis (CADx) is a promising technology for the detection of breast cancer in screening mammography. A number of different approaches have been developed for CADx research that have achieved significant levels of performance. Research teams now recognize the need for a careful and detailed evaluation study of these approaches in order to accelerate the development of CADx, make CADx more clinically relevant, and optimize the CADx algorithms based on unbiased evaluations. The results of such a comparative study may provide each of the participating teams with new insights into the optimization of their individual CADx algorithms. This consortium of experienced CADx researchers is working as a group to compare results of the algorithms and to optimize the performance of CADx algorithms by learning from each other. Each institution will contribute an equal number of cases collected under a standard protocol for case selection, truth determination, and data acquisition to establish a common and unbiased database for the evaluation study. An evaluation procedure for the comparison studies is being developed to analyze the results of individual algorithms for each of the test cases in the common database. Optimization of individual CADx algorithms can then be made based on the comparison studies. The consortium effort is expected to accelerate the eventual clinical implementation of CADx algorithms at participating institutions.
Rationale of the FIBROTARGETS study designed to identify novel biomarkers of myocardial fibrosis
Ferreira, João Pedro; Machu, Jean‐Loup; Girerd, Nicolas; Jaisser, Frederic; Thum, Thomas; Butler, Javed; González, Arantxa; Diez, Javier; Heymans, Stephane; McDonald, Kenneth; Gyöngyösi, Mariann; Firat, Hueseyin; Rossignol, Patrick; Pizard, Anne
2017-01-01
Aims Myocardial fibrosis alters the cardiac architecture favouring the development of cardiac dysfunction, including arrhythmias and heart failure. Reducing myocardial fibrosis may improve outcomes through the targeted diagnosis and treatment of emerging fibrotic pathways. The European‐Commission‐funded ‘FIBROTARGETS’ is a multinational academic and industrial consortium with the main aims of (i) characterizing novel key mechanistic pathways involved in the metabolism of fibrillary collagen that may serve as biotargets, (ii) evaluating the potential anti‐fibrotic properties of novel or repurposed molecules interfering with the newly identified biotargets, and (iii) characterizing bioprofiles based on distinct mechanistic phenotypes involving the aforementioned biotargets. These pathways will be explored by performing a systematic and collaborative search for mechanisms and targets of myocardial fibrosis. These mechanisms will then be translated into individualized diagnostic tools and specific therapeutic pharmacological options for heart failure. Methods and results The FIBROTARGETS consortium has merged data from 12 patient cohorts in a common database available to individual consortium partners. The database consists of >12 000 patients with a large spectrum of cardiovascular clinical phenotypes. It integrates community‐based population cohorts, cardiovascular risk cohorts, and heart failure cohorts. Conclusions The FIBROTARGETS biomarker programme is aimed at exploring fibrotic pathways allowing the bioprofiling of patients into specific ‘fibrotic’ phenotypes and identifying new therapeutic targets that will potentially enable the development of novel and tailored anti‐fibrotic therapies for heart failure. PMID:28988439
Lee, Jong Woo; LaRoche, Suzette; Choi, Hyunmi; Rodriguez Ruiz, Andres A; Fertig, Evan; Politsky, Jeffrey M; Herman, Susan T; Loddenkemper, Tobias; Sansevere, Arnold J; Korb, Pearce J; Abend, Nicholas S; Goldstein, Joshua L; Sinha, Saurabh R; Dombrowski, Keith E; Ritzl, Eva K; Westover, Michael B; Gavvala, Jay R; Gerard, Elizabeth E; Schmitt, Sarah E; Szaflarski, Jerzy P; Ding, Kan; Haas, Kevin F; Buchsbaum, Richard; Hirsch, Lawrence J; Wusthoff, Courtney J; Hopp, Jennifer L; Hahn, Cecil D
2016-04-01
The rapid expansion of the use of continuous critical care electroencephalogram (cEEG) monitoring and resulting multicenter research studies through the Critical Care EEG Monitoring Research Consortium has created the need for a collaborative data sharing mechanism and repository. The authors describe the development of a research database incorporating the American Clinical Neurophysiology Society standardized terminology for critical care EEG monitoring. The database includes flexible report generation tools that allow for daily clinical use. Key clinical and research variables were incorporated into a Microsoft Access database. To assess its utility for multicenter research data collection, the authors performed a 21-center feasibility study in which each center entered data from 12 consecutive intensive care unit monitoring patients. To assess its utility as a clinical report generating tool, three large volume centers used it to generate daily clinical critical care EEG reports. A total of 280 subjects were enrolled in the multicenter feasibility study. The duration of recording (median, 25.5 hours) varied significantly between the centers. The incidence of seizure (17.6%), periodic/rhythmic discharges (35.7%), and interictal epileptiform discharges (11.8%) was similar to previous studies. The database was used as a clinical reporting tool by 3 centers that entered a total of 3,144 unique patients covering 6,665 recording days. The Critical Care EEG Monitoring Research Consortium database has been successfully developed and implemented with a dual role as a collaborative research platform and a clinical reporting tool. It is now available for public download to be used as a clinical data repository and report generating tool.
Stanhope, Steven J; Wilken, Jason M; Pruziner, Alison L; Dearth, Christopher L; Wyatt, Marilynn; Ziemke, Gregg W; Strickland, Rachel; Milbourne, Suzanne A; Kaufman, Kenton R
2016-11-01
The Bridging Advanced Developments for Exceptional Rehabilitation (BADER) Consortium began in September 2011 as a cooperative agreement with the Department of Defense (DoD) Congressionally Directed Medical Research Programs Peer Reviewed Orthopaedic Research Program. A partnership was formed with DoD Military Treatment Facilities (MTFs), U.S. Department of Veterans Affairs (VA) Centers, the National Institutes of Health (NIH), academia, and industry to rapidly conduct innovative, high-impact, and sustainable clinically relevant research. The BADER Consortium has a unique research capacity-building focus that creates infrastructures and strategically connects and supports research teams to conduct multiteam research initiatives primarily led by MTF and VA investigators. BADER relies on strong partnerships with these agencies to strengthen and support orthopaedic rehabilitation research. Its focus is on the rapid formation and execution of projects aimed at obtaining optimal functional outcomes for patients with limb loss and limb injuries. The Consortium is based on an NIH research capacity-building model that comprises essential research support components that are anchored by a set of BADER-funded and initiative-launching studies. Through a partnership with the DoD/VA Extremity Trauma and Amputation Center of Excellence, the BADER Consortium's research initiative-launching program has directly supported the identification and establishment of eight BADER-funded clinical studies. BADER's Clinical Research Core (CRC) staff, who are embedded within each of the MTFs, have supported an additional 37 non-BADER Consortium-funded projects. Additional key research support infrastructures that expedite the process for conducting multisite clinical trials include an omnibus Cooperative Research and Development Agreement and the NIH Clinical Trials Database. A 2015 Defense Health Board report highlighted the Consortium's vital role, stating the research capabilities of the DoD Advanced Rehabilitation Centers are significantly enhanced and facilitated by the BADER Consortium. Reprint & Copyright © 2016 Association of Military Surgeons of the U.S.
Distributed Access View Integrated Database (DAVID) system
NASA Technical Reports Server (NTRS)
Jacobs, Barry E.
1991-01-01
The Distributed Access View Integrated Database (DAVID) System, which was adopted by the Astrophysics Division for their Astrophysics Data System, is a solution to the system heterogeneity problem. The heterogeneous components of the Astrophysics problem are outlined. The Library and Library Consortium levels of the DAVID approach are described. The 'books' and 'kits' level is discussed. The Universal Object Typer Management System level is described. The relation of the DAVID project to the Small Business Innovative Research (SBIR) program is explained.
The CTSA Consortium's Catalog of Assets for Translational and Clinical Health Research (CATCHR)
Mapes, Brandy; Basford, Melissa; Zufelt, Anneliese; Wehbe, Firas; Harris, Paul; Alcorn, Michael; Allen, David; Arnim, Margaret; Autry, Susan; Briggs, Michael S.; Carnegie, Andrea; Chavis‐Keeling, Deborah; De La Pena, Carlos; Dworschak, Doris; Earnest, Julie; Grieb, Terri; Guess, Marilyn; Hafer, Nathaniel; Johnson, Tesheia; Kasper, Amanda; Kopp, Janice; Lockie, Timothy; Lombardo, Vincetta; McHale, Leslie; Minogue, Andrea; Nunnally, Beth; O'Quinn, Deanna; Peck, Kelly; Pemberton, Kieran; Perry, Cheryl; Petrie, Ginny; Pontello, Andria; Posner, Rachel; Rehman, Bushra; Roth, Deborah; Sacksteder, Paulette; Scahill, Samantha; Schieri, Lorri; Simpson, Rosemary; Skinner, Anne; Toussant, Kim; Turner, Alicia; Van der Put, Elaine; Wasser, June; Webb, Chris D.; Williams, Maija; Wiseman, Lori; Yasko, Laurel; Pulley, Jill
2014-01-01
Abstract The 61 CTSA Consortium sites are home to valuable programs and infrastructure supporting translational science and all are charged with ensuring that such investments translate quickly to improved clinical care. Catalog of Assets for Translational and Clinical Health Research (CATCHR) is the Consortium's effort to collect and make available information on programs and resources to maximize efficiency and facilitate collaborations. By capturing information on a broad range of assets supporting the entire clinical and translational research spectrum, CATCHR aims to provide the necessary infrastructure and processes to establish and maintain an open‐access, searchable database of consortium resources to support multisite clinical and translational research studies. Data are collected using rigorous, defined methods, with the resulting information made visible through an integrated, searchable Web‐based tool. Additional easy‐to‐use Web tools assist resource owners in validating and updating resource information over time. In this paper, we discuss the design and scope of the project, data collection methods, current results, and future plans for development and sustainability. With increasing pressure on research programs to avoid redundancy, CATCHR aims to make available information on programs and core facilities to maximize efficient use of resources. PMID:24456567
Enhancing Transfer Effectiveness: A Model for the 1990s.
ERIC Educational Resources Information Center
Berman, Paul; And Others
In an effort to identify effective transfer practices appropriate to different community college circumstances, and to establish a quantitative database that would enable valid comparisons of transfer between their 28 member institutions, the National Effective Transfer Consortium (NETC) sponsored a survey of more than 30,000 students attending…
The ICPSR and Social Science Research
ERIC Educational Resources Information Center
Johnson, Wendell G.
2008-01-01
The Inter-university Consortium for Political and Social Research (ICPSR), a unit within the Institute for Social Research at the University of Michigan, is the world's largest social science data archive. The data sets in the ICPSR database give the social sciences librarian/subject specialist an opportunity to provide value-added bibliographic…
The study is a consortium between the U.S. Environmental Protection Agency (National Risk Management Research Laboratory) and the U.S. Geological Survey (Baltimore and Dover). The objectives of this study are: (1) to develop a geohydrological database for paired agricultural wate...
Consortial IT Services: Collaborating To Reduce the Pain.
ERIC Educational Resources Information Center
Klonoski, Ed
The Connecticut Distance Learning Consortium (CTDLC) provides its 32 members with Information Technologies (IT) services including a portal Web site, course management software, course hosting and development, faculty training, a help desk, online assessment, and a student financial aid database. These services are supplied to two- and four-year…
The CNES Gaia Data Processing Center: A Challenge and its Solutions
NASA Astrophysics Data System (ADS)
Chaoul, Laurence; Valette, Veronique
2011-08-01
After a brief overview of the ESA Gaia project, this paper presents the data processing consortium (DPAC) and then the CNES data processing centre (DPCC). We focus on the challenges in terms of organisational aspects, processing capabilities, and database volumes, and on how we deal with these topics.
Distributed databases for materials study of thermo-kinetic properties
NASA Astrophysics Data System (ADS)
Toher, Cormac
2015-03-01
High-throughput computational materials science provides researchers with the opportunity to rapidly generate large databases of materials properties. To add thermal properties to the AFLOWLIB consortium and Materials Project repositories, we have implemented an automated quasi-harmonic Debye model, the Automatic GIBBS Library (AGL). This enables us to screen thousands of materials for thermal conductivity, bulk modulus, thermal expansion, and related properties. The search and sort functions of the online database can then be used to identify suitable materials for more in-depth study using more precise computational or experimental techniques. The AFLOW-AGL source code is public domain and will soon be released under the GNU-GPL license.
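The screening workflow described above amounts to filtering a large property table; the sketch below shows that idea on a tiny made-up table, with column names and thresholds chosen purely for illustration rather than taken from AFLOW or the Materials Project.

```python
# Sketch: screen a table of computed thermal/elastic properties for candidates.
# Compounds, columns, and thresholds are illustrative only.
import pandas as pd

data = pd.DataFrame({
    "compound": ["A2B", "CD", "E3F"],
    "debye_temperature_K": [650.0, 210.0, 480.0],
    "bulk_modulus_GPa": [180.0, 45.0, 120.0],
})

candidates = data[(data.debye_temperature_K > 400) & (data.bulk_modulus_GPa > 100)]
print(candidates.compound.tolist())
```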
NASA Astrophysics Data System (ADS)
Minnett, R.; Koppers, A. A. P.; Jarboe, N.; Tauxe, L.; Constable, C.; Jonestrask, L.; Shaar, R.
2014-12-01
Earth science grand challenges often require interdisciplinary and geographically distributed scientific collaboration to make significant progress. However, this organic collaboration between researchers, educators, and students only flourishes with the reduction or elimination of technological barriers. The Magnetics Information Consortium (http://earthref.org/MagIC/) is a grass-roots cyberinfrastructure effort envisioned by the geo-, paleo-, and rock magnetic scientific community to archive their wealth of peer-reviewed raw data and interpretations from studies on natural and synthetic samples. MagIC is dedicated to facilitating scientific progress towards several highly multidisciplinary grand challenges and the MagIC Database team is currently beta testing a new MagIC Search Interface and API designed to be flexible enough for the incorporation of large heterogeneous datasets and for horizontal scalability to tens of millions of records and hundreds of requests per second. In an effort to reduce the barriers to effective collaboration, the search interface includes a simplified data model and upload procedure, support for online editing of datasets amongst team members, commenting by reviewers and colleagues, and automated contribution workflows and data retrieval through the API. This web application has been designed to generalize to other databases in MagIC's umbrella website (EarthRef.org) so the Geochemical Earth Reference Model (http://earthref.org/GERM/) portal, Seamount Biogeosciences Network (http://earthref.org/SBN/), EarthRef Digital Archive (http://earthref.org/ERDA/) and EarthRef Reference Database (http://earthref.org/ERR/) will benefit from its development.
NASA Astrophysics Data System (ADS)
Heynderickx, Daniel
2012-07-01
The main objective of the SEPServer project (EU FP7 project 262773) is to produce a new tool that greatly facilitates the investigation of solar energetic particles (SEPs) and their origin: a server providing SEP data, related electromagnetic (EM) observations and analysis methods, a comprehensive catalogue of observed SEP events, and educational/outreach material on solar eruptions. The project is coordinated by the University of Helsinki. The project will combine data and knowledge from 11 European partners and several collaborating parties from Europe and the US. The datasets provided by the consortium partners are collected in a MySQL database (using the ESA Open Data Interface under licence) on a server operated by DH Consultancy, which also hosts a web interface providing browsing, plotting, post-processing, and analysis tools developed by the consortium, as well as a Solar Energetic Particle event catalogue. At this stage of the project, a prototype server has been established and is presently undergoing testing by users inside the consortium. Using a centralized database has numerous advantages, including: homogeneous storage of the data, which eliminates the need for dataset-specific file access routines once the data are ingested in the database; a homogeneous set of metadata describing the datasets at both a global and a detailed level, allowing automated access to and presentation of the various data products; standardised access to the data from different programming environments (e.g. php, IDL); and elimination of the need to download data for individual data requests. SEPServer will thus add value to several space missions and Earth-based observations by facilitating the coordinated exploitation of and open access to SEP data and related EM observations, and by promoting correct use of these data for the entire space research community. This will lead to new knowledge on the production and transport of SEPs during solar eruptions and facilitate the development of models for predicting solar radiation storms and the calculation of expected fluxes/fluences of SEPs encountered by spacecraft in the interplanetary medium.
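One benefit of the centralized database is that event selections become single queries rather than per-dataset file handling. The sketch below imitates that with an in-memory SQLite table; the table layout, column names, and rows are invented for illustration and do not reflect the actual SEPServer schema.

```python
# Sketch: the kind of standardized query a centralized SEP database enables.
# SQLite stands in for MySQL; schema and rows are hypothetical.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sep_events (onset TEXT, instrument TEXT, peak_flux REAL)")
con.executemany("INSERT INTO sep_events VALUES (?, ?, ?)", [
    ("2003-10-28T11:10", "instrument_a", 1.2e3),
    ("2005-01-20T06:45", "instrument_b", 8.4e2),
])
rows = con.execute(
    "SELECT onset, instrument FROM sep_events WHERE peak_flux > ? ORDER BY onset",
    (1.0e3,),
).fetchall()
print(rows)
```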
Freschi, Luca; Jeukens, Julie; Kukavica-Ibrulj, Irena; Boyle, Brian; Dupont, Marie-Josée; Laroche, Jérôme; Larose, Stéphane; Maaroufi, Halim; Fothergill, Joanne L.; Moore, Matthew; Winsor, Geoffrey L.; Aaron, Shawn D.; Barbeau, Jean; Bell, Scott C.; Burns, Jane L.; Camara, Miguel; Cantin, André; Charette, Steve J.; Dewar, Ken; Déziel, Éric; Grimwood, Keith; Hancock, Robert E. W.; Harrison, Joe J.; Heeb, Stephan; Jelsbak, Lars; Jia, Baofeng; Kenna, Dervla T.; Kidd, Timothy J.; Klockgether, Jens; Lam, Joseph S.; Lamont, Iain L.; Lewenza, Shawn; Loman, Nick; Malouin, François; Manos, Jim; McArthur, Andrew G.; McKeown, Josie; Milot, Julie; Naghra, Hardeep; Nguyen, Dao; Pereira, Sheldon K.; Perron, Gabriel G.; Pirnay, Jean-Paul; Rainey, Paul B.; Rousseau, Simon; Santos, Pedro M.; Stephenson, Anne; Taylor, Véronique; Turton, Jane F.; Waglechner, Nicholas; Williams, Paul; Thrane, Sandra W.; Wright, Gerard D.; Brinkman, Fiona S. L.; Tucker, Nicholas P.; Tümmler, Burkhard; Winstanley, Craig; Levesque, Roger C.
2015-01-01
The International Pseudomonas aeruginosa Consortium is sequencing over 1000 genomes and building an analysis pipeline for the study of Pseudomonas genome evolution, antibiotic resistance and virulence genes. Metadata, including genomic and phenotypic data for each isolate of the collection, are available through the International Pseudomonas Consortium Database (http://ipcd.ibis.ulaval.ca/). Here, we present our strategy and the results that emerged from the analysis of the first 389 genomes. With as yet unmatched resolution, our results confirm that P. aeruginosa strains can be divided into three major groups that are further divided into subgroups, some not previously reported in the literature. We also provide the first snapshot of P. aeruginosa strain diversity with respect to antibiotic resistance. Our approach will allow us to draw potential links between environmental strains and those implicated in human and animal infections, understand how patients become infected and how the infection evolves over time as well as identify prognostic markers for better evidence-based decisions on patient care. PMID:26483767
Yohda, Masafumi; Yagi, Osami; Takechi, Ayane; Kitajima, Mizuki; Matsuda, Hisashi; Miyamura, Naoaki; Aizawa, Tomoko; Nakajima, Mutsuyasu; Sunairi, Michio; Daiba, Akito; Miyajima, Takashi; Teruya, Morimi; Teruya, Kuniko; Shiroma, Akino; Shimoji, Makiko; Tamotsu, Hinako; Juan, Ayaka; Nakano, Kazuma; Aoyama, Misako; Terabayashi, Yasunobu; Satou, Kazuhito; Hirano, Takashi
2015-07-01
A Dehalococcoides-containing bacterial consortium that performed dechlorination of 0.20 mM cis-1,2-dichloroethene to ethene in 14 days was obtained from the sediment mud of a lotus field. To obtain detailed information on the consortium, the metagenome was analyzed using the short-read next-generation sequencer SOLiD 3. Matching the obtained sequence tags with the reference genome sequences indicated that the Dehalococcoides sp. in the consortium was highly homologous to Dehalococcoides mccartyi CBDB1 and BAV1. Sequence comparison with the reference sequence constructed from 16S rRNA gene sequences in a public database showed the presence of Sedimentibacter, Sulfurospirillum, Clostridium, Desulfovibrio, Parabacteroides, Alistipes, Eubacterium, Peptostreptococcus and Proteocatella in addition to Dehalococcoides sp. After further enrichment, the members of the consortium were narrowed down to almost three species. Finally, the full-length circular genome sequence of the Dehalococcoides sp. in the consortium, D. mccartyi IBARAKI, was determined by analyzing the metagenome with the single-molecule DNA sequencer PacBio RS. The accuracy of the sequence was confirmed by matching it to the tag sequences obtained by SOLiD 3. The genome is 1,451,062 nt and the number of CDS is 1566, which includes 3 rRNA genes and 47 tRNA genes. There are twenty-eight RDase genes that are accompanied by genes for anchor proteins. The genome exhibits significant sequence identity with other Dehalococcoides spp. throughout the genome, but there are significant differences in the distribution of RDase genes. The combination of a short-read next-generation DNA sequencer and a long-read single-molecule DNA sequencer gives detailed information on a bacterial consortium. Copyright © 2014 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Williams, J. W.; Ashworth, A. C.; Betancourt, J. L.; Bills, B.; Blois, J.; Booth, R.; Buckland, P.; Charles, D.; Curry, B. B.; Goring, S. J.; Davis, E.; Grimm, E. C.; Graham, R. W.; Smith, A. J.
2015-12-01
Community-supported data repositories (CSDRs) in paleoecology and paleoclimatology have a decades-long tradition and serve multiple critical scientific needs. CSDRs facilitate synthetic large-scale scientific research by providing open-access and curated data that employ community-supported metadata and data standards. CSDRs serve as a 'middle tail' or boundary organization between information scientists and the long-tail community of individual geoscientists collecting and analyzing paleoecological data. Over the past decades, a distributed network of CSDRs has emerged, each serving a particular suite of data and research communities, e.g. the Neotoma Paleoecology Database, Paleobiology Database, International Tree Ring Database, NOAA NCEI for Paleoclimatology, Morphobank, iDigPaleo, and Integrated Earth Data Alliance. Recently, these groups have organized into a common Paleobiology Data Consortium dedicated to improving interoperability and sharing best practices and protocols. The Neotoma Paleoecology Database offers one example of an active and growing CSDR, designed to facilitate research into ecological and evolutionary dynamics during recent past global change. Neotoma combines a centralized database structure with distributed scientific governance via multiple virtual constituent data working groups. The Neotoma data model is flexible and can accommodate a variety of paleoecological proxies from many depositional contexts. Data input into Neotoma is done by trained Data Stewards, drawn from their communities. Neotoma data can be searched, viewed, and returned to users through multiple interfaces, including the interactive Neotoma Explorer map interface, REST-ful Application Programming Interfaces (APIs), the neotoma R package, and the Tilia stratigraphic software. Neotoma is governed by geoscientists and provides community engagement through training workshops for data contributors, stewards, and users. Neotoma is engaged in the Paleobiology Data Consortium and other efforts to improve interoperability among cyberinfrastructure in the paleogeosciences.
Das, Raima; Ghosh, Sankar Kumar
2017-04-01
The DNA repair pathway is a primary defense system that eliminates a wide variety of DNA damage. Any deficiency in it is likely to cause the chromosomal instability that leads to cell malfunctioning and tumorigenesis. Genetic polymorphisms in DNA repair genes have demonstrated a significant association with cancer risk. Our study attempts to give a glimpse of the overall scenario of germline polymorphisms in the DNA repair genes by taking into account the Exome Aggregation Consortium (ExAC) database as well as the Human Gene Mutation Database (HGMD) for evaluating the disease link, particularly in cancer. It was found that the ExAC DNA repair dataset (which consists of 228 DNA repair genes) comprises 30.4% missense, 12.5% dbSNP-reported, and 3.2% ClinVar-significant variants. 27% of all missense variants have the deleterious SIFT score of 0.00, and 6% of variants carry the most damaging PolyPhen-2 score of 1.00, thus affecting protein structure and function. However, as per HGMD, only a fraction (1.2%) of ExAC DNA repair variants was found to be cancer-related, indicating that the remaining variants reported in both databases need to be further analyzed. This, in turn, may provide an increased spectrum of reported cancer-linked variants in the DNA repair genes present in the ExAC database. Moreover, further in silico functional assays of the identified vital cancer-associated variants, which are essential to establish their actual biological significance, may shed some light on the field of targeted drug development in the near future. Copyright © 2017. Published by Elsevier B.V.
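A filter like the one implied by the SIFT and PolyPhen-2 figures above can be expressed in a few lines; the records in the sketch below are toy examples, not rows from ExAC or HGMD.

```python
# Sketch: flag DNA-repair-gene variants with a deleterious SIFT score (0.00)
# and the most damaging PolyPhen-2 score (1.00). Toy records only.
import pandas as pd

variants = pd.DataFrame({
    "gene": ["GENE_A", "GENE_B", "GENE_C"],
    "sift": [0.00, 0.12, 0.00],
    "polyphen2": [1.00, 0.30, 0.98],
})

damaging = variants[(variants.sift == 0.00) & (variants.polyphen2 == 1.00)]
print(damaging.gene.tolist())
```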
NASA Astrophysics Data System (ADS)
Graham, Jim; Jarnevich, Catherine S.; Simpson, Annie; Newman, Gregory J.; Stohlgren, Thomas J.
2011-06-01
Invasive species are a universal global problem, but the information to identify them, manage them, and prevent invasions is stored around the globe in a variety of formats. The Global Invasive Species Information Network is a consortium of organizations working toward providing seamless access to these disparate databases via the Internet. A distributed network of databases can be created using the Internet and a standard web service protocol. There are two options to provide this integration. First, federated searches have been proposed to allow users to search "deep" web documents such as databases for invasive species. A second method is to create a cache of data from the databases for searching. We compare these two methods and show that federated searches will not provide the performance and flexibility required by users, and that a central cache of the data is required to improve performance.
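The trade-off between the two options can be sketched with in-memory stand-ins for the remote databases: a federated search queries every provider for each request, while a cached search hits a periodically harvested local copy. Provider names and records below are invented.

```python
# Sketch: federated search (query every provider per request) versus searching
# a pre-built local cache. Providers are in-memory stand-ins for remote sources.
providers = {
    "provider_a": [{"species": "species_x", "country": "US"}],
    "provider_b": [{"species": "species_y", "country": "AU"}],
}

def federated_search(term):
    # One lookup per provider for every query: always current, but slower.
    return [r for recs in providers.values() for r in recs if term in r["species"]]

cache = [r for recs in providers.values() for r in recs]  # harvested periodically

def cached_search(term):
    # Single local lookup: fast, but only as fresh as the last harvest.
    return [r for r in cache if term in r["species"]]

print(federated_search("species_y"), cached_search("species_y"))
```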
Sperm concentration and count are often used as indicators of environmental impacts on male reproductive health. Existing clinical databases may be biased towards sub-fertile men with low sperm counts and less is known about expected sperm count distributions in cohorts of ferti...
ERIC Educational Resources Information Center
Fast, Karl V.; Campbell, D. Grant
2001-01-01
Compares the implied ontological frameworks of the Open Archives Initiative Protocol for Metadata Harvesting and the World Wide Web Consortium's Semantic Web. Discusses current search engine technology, semantic markup, indexing principles of special libraries and online databases, and componentization and the distinction between data and…
Green, Robert C; Goddard, Katrina A B; Jarvik, Gail P; Amendola, Laura M; Appelbaum, Paul S; Berg, Jonathan S; Bernhardt, Barbara A; Biesecker, Leslie G; Biswas, Sawona; Blout, Carrie L; Bowling, Kevin M; Brothers, Kyle B; Burke, Wylie; Caga-Anan, Charlisse F; Chinnaiyan, Arul M; Chung, Wendy K; Clayton, Ellen W; Cooper, Gregory M; East, Kelly; Evans, James P; Fullerton, Stephanie M; Garraway, Levi A; Garrett, Jeremy R; Gray, Stacy W; Henderson, Gail E; Hindorff, Lucia A; Holm, Ingrid A; Lewis, Michelle Huckaby; Hutter, Carolyn M; Janne, Pasi A; Joffe, Steven; Kaufman, David; Knoppers, Bartha M; Koenig, Barbara A; Krantz, Ian D; Manolio, Teri A; McCullough, Laurence; McEwen, Jean; McGuire, Amy; Muzny, Donna; Myers, Richard M; Nickerson, Deborah A; Ou, Jeffrey; Parsons, Donald W; Petersen, Gloria M; Plon, Sharon E; Rehm, Heidi L; Roberts, J Scott; Robinson, Dan; Salama, Joseph S; Scollon, Sarah; Sharp, Richard R; Shirts, Brian; Spinner, Nancy B; Tabor, Holly K; Tarczy-Hornoch, Peter; Veenstra, David L; Wagle, Nikhil; Weck, Karen; Wilfond, Benjamin S; Wilhelmsen, Kirk; Wolf, Susan M; Wynn, Julia; Yu, Joon-Ho
2016-06-02
Despite rapid technical progress and demonstrable effectiveness for some types of diagnosis and therapy, much remains to be learned about clinical genome and exome sequencing (CGES) and its role within the practice of medicine. The Clinical Sequencing Exploratory Research (CSER) consortium includes 18 extramural research projects, one National Human Genome Research Institute (NHGRI) intramural project, and a coordinating center funded by the NHGRI and National Cancer Institute. The consortium is exploring analytic and clinical validity and utility, as well as the ethical, legal, and social implications of sequencing via multidisciplinary approaches; it has thus far recruited 5,577 participants across a spectrum of symptomatic and healthy children and adults by utilizing both germline and cancer sequencing. The CSER consortium is analyzing data and creating publicly available procedures and tools related to participant preferences and consent, variant classification, disclosure and management of primary and secondary findings, health outcomes, and integration with electronic health records. Future research directions will refine measures of clinical utility of CGES in both germline and somatic testing, evaluate the use of CGES for screening in healthy individuals, explore the penetrance of pathogenic variants through extensive phenotyping, reduce discordances in public databases of genes and variants, examine social and ethnic disparities in the provision of genomics services, explore regulatory issues, and estimate the value and downstream costs of sequencing. The CSER consortium has established a shared community of research sites by using diverse approaches to pursue the evidence-based development of best practices in genomic medicine. Copyright © 2016 American Society of Human Genetics. All rights reserved.
Gamba, P.; Cavalca, D.; Jaiswal, K.S.; Huyck, C.; Crowley, H.
2012-01-01
In order to quantify the earthquake risk of any selected region or country of the world within the Global Earthquake Model (GEM) framework (www.globalquakemodel.org/), a systematic compilation of building inventory and population exposure is indispensable. Through a consortium of leading institutions and by engaging domain experts from multiple countries, the GED4GEM project has been working towards the development of the first comprehensive, publicly available Global Exposure Database (GED). This geospatial exposure database will eventually facilitate global earthquake risk and loss estimation through GEM’s OpenQuake platform. This paper provides an overview of the GED concepts, aims, datasets, and inference methodology, as well as the current implementation scheme, status, and way forward.
Military Suicide Research Consortium
2014-10-01
increasing and decreasing (or even ceasing entirely) across different periods of time but still building on itself with each progressive episode...community from suicide. One study found that social norms, high levels of support, identification with role models, and high self-esteem help protect...in follow-up. Conducted quality control checks of clinical data. Monitored safety and adverse events for DSMB reporting. Initiated Database
The future application of GML database in GIS
NASA Astrophysics Data System (ADS)
Deng, Yuejin; Cheng, Yushu; Jing, Lianwen
2006-10-01
In 2004, the Geography Markup Language (GML) Implementation Specification (version 3.1.1) was published by the Open Geospatial Consortium, Inc. More and more applications in geospatial data sharing and interoperability now depend on GML. The primary purpose of GML is the exchange and transport of geo-information through standard modeling and encoding of geographic phenomena. However, the problem of how to organize and access large volumes of GML data effectively arises in applications. Research on GML databases focuses on this problem. The effective storage of GML data is a hot topic in the GIS community today. A GML Database Management System (GDBMS) mainly deals with the storage and management of GML data. Two types of XML database are distinguished: native XML databases and XML-enabled databases. Since GML is an application of the XML standard to geographic data, XML database systems can also be used for the management of GML. In this paper, we review the state of the art of XML databases, including storage, indexing, query languages, management systems, and so on, and then move on to GML databases. Finally, the future prospects of GML databases in GIS applications are presented.
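Because GML is an XML encoding, even the standard library can read simple features; the sketch below extracts coordinates from a small, illustrative GML point fragment (the fragment is not drawn from any real GML database).

```python
# Sketch: parse point coordinates from an illustrative GML fragment.
import xml.etree.ElementTree as ET

GML_NS = "http://www.opengis.net/gml"
doc = f"""<gml:Point xmlns:gml="{GML_NS}" srsName="EPSG:4326">
  <gml:pos>30.54 114.34</gml:pos>
</gml:Point>"""

point = ET.fromstring(doc)
lat, lon = map(float, point.find(f"{{{GML_NS}}}pos").text.split())
print(lat, lon)
```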
NASA Astrophysics Data System (ADS)
Sharma, Manu; Bhatt, Jignesh S.; Joshi, Manjunath V.
2018-04-01
Lung cancer is one of the most common causes of cancer death worldwide. It has a low survival rate, mainly due to late diagnosis. With the hardware advancements in computed tomography (CT) technology, it is now possible to capture high-resolution images of the lung region. However, this needs to be augmented by efficient algorithms that detect lung cancer at earlier stages using the acquired CT images. To this end, we propose a two-step algorithm for early detection of lung cancer. Given the CT image, we first extract the patch from the center location of the nodule and segment the lung nodule region. We propose to use the Otsu method followed by morphological operations for the segmentation. This step enables accurate segmentation due to the use of a data-driven threshold. Unlike other methods, we perform the segmentation without using the complete contour information of the nodule. In the second step, a deep convolutional neural network (CNN) is used for better classification (malignant or benign) of the nodule present in the segmented patch. Accurate segmentation of even a tiny nodule followed by better classification using a deep CNN enables the early detection of lung cancer. Experiments have been conducted using 6306 CT images from the LIDC-IDRI database. We achieved a test accuracy of 84.13%, with a sensitivity and specificity of 91.69% and 73.16%, respectively, clearly outperforming state-of-the-art algorithms.
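The first step of the pipeline, thresholding a nodule-centered patch with a data-driven Otsu threshold and cleaning the result with morphology, can be sketched as follows; the patch is synthetic rather than an LIDC-IDRI slice, and the details of the published method may differ.

```python
# Sketch: Otsu thresholding of a nodule-centered patch plus morphological
# clean-up. The patch here is synthetic toy data.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import binary_opening, disk

patch = np.random.rand(64, 64)
patch[24:40, 24:40] += 1.0                 # bright blob standing in for a nodule

t = threshold_otsu(patch)                  # data-driven threshold
mask = binary_opening(patch > t, disk(2))  # remove small spurious responses
print(mask.sum(), "pixels retained")
```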
FORWARD: A Registry and Longitudinal Clinical Database to Study Fragile X Syndrome
Sherman, Stephanie L.; Kidd, Sharon A.; Riley, Catharine; Berry-Kravis, Elizabeth; Andrews, Howard F.; Miller, Robert M.; Lincoln, Sharyn; Swanson, Mark; Kaufmann, Walter E.; Brown, W. Ted
2017-01-01
BACKGROUND AND OBJECTIVE Advances in the care of patients with fragile X syndrome (FXS) have been hampered by lack of data. This deficiency has produced fragmentary knowledge regarding the natural history of this condition, healthcare needs, and the effects of the disease on caregivers. To remedy this deficiency, the Fragile X Clinic and Research Consortium was established to facilitate research. Through a collective effort, the Fragile X Clinic and Research Consortium developed the Fragile X Online Registry With Accessible Research Database (FORWARD) to facilitate multisite data collection. This report describes FORWARD and the way it can be used to improve health and quality of life of FXS patients and their relatives and caregivers. METHODS FORWARD collects demographic information on individuals with FXS and their family members (affected and unaffected) through a 1-time registry form. The longitudinal database collects clinician- and parent-reported data on individuals diagnosed with FXS, focused on those who are 0 to 24 years of age, although individuals of any age can participate. RESULTS The registry includes >2300 registrants (data collected September 7, 2009 to August 31, 2014). The longitudinal database includes data on 713 individuals diagnosed with FXS (data collected September 7, 2012 to August 31, 2014). Longitudinal data continue to be collected on enrolled patients along with baseline data on new patients. CONCLUSIONS FORWARD represents the largest resource of clinical and demographic data for the FXS population in the United States. These data can be used to advance our understanding of FXS: the impact of co-occurring conditions, the impact on the day-to-day lives of individuals living with FXS and their families, and short-term and long-term outcomes. PMID:28814539
Rutledge, Jonathan W; Spencer, Horace; Moreno, Mauricio A
2014-07-01
The University HealthSystem Consortium (UHC) database collects discharge information on patients treated at academic health centers throughout the United States. We sought to use this database to identify outcome predictors for patients undergoing total laryngectomy. A secondary end point was to assess the validity of the UHC's predictive risk mortality model in this cohort of patients. Retrospective review. Academic medical centers (tertiary referral centers) and their affiliate hospitals in the United States. Using the UHC discharge database, we retrieved and analyzed data for 4648 patients undergoing total laryngectomy who were discharged between October 2007 and January 2011 from all of the member institutions. Demographics, comorbidities, institutional data, and outcomes were retrieved. The length of stay and overall costs were significantly higher among female patients (P < .0001), while age was a predictor of intensive care unit stay (P = .014). The overall complication rate was higher among Asians (P = .019) and in patients with anemia and diabetes compared with other comorbidities. The average institutional case load was 1.92 cases/mo; we found an inverse correlation (R = -0.47) between the institutional case load and length of stay (P < .0001). The UHC admit mortality risk estimator was found to be an accurate predictor not only of mortality (P < .0002) but also of intensive care unit admission and complication rate (P < .0001). This study provides an overview of laryngectomy outcomes in a contemporary cohort of patients treated at academic health centers. UHC admit mortality risk is an excellent outcome predictor and a valuable tool for risk stratification in these patients. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2014.
Tagliaferri, Luca; Kovács, György; Autorino, Rosa; Budrukkar, Ashwini; Guinot, Jose Luis; Hildebrand, Guido; Johansson, Bengt; Monge, Rafael Martìnez; Meyer, Jens E; Niehoff, Peter; Rovirosa, Angeles; Takàcsi-Nagy, Zoltàn; Dinapoli, Nicola; Lanzotti, Vito; Damiani, Andrea; Soror, Tamer; Valentini, Vincenzo
2016-08-01
The aim of the COBRA (Consortium for Brachytherapy Data Analysis) project is to create a multicenter group (consortium) and a web-based system for standardized data collection. The GEC-ESTRO (Groupe Européen de Curiethérapie - European Society for Radiotherapy & Oncology) Head and Neck (H&N) Working Group participated in the project and in the implementation of the consortium agreement, the ontology (data set), and the necessary COBRA software services, as well as in the peer review of the general anatomic site-specific COBRA protocol. The ontology was defined by a multicenter task group. Eleven centers from 6 countries signed an agreement and the consortium approved the ontology. We identified 3 tiers for the data set: Registry (epidemiology analysis), Procedures (prediction models and DSS), and Research (radiomics). The COBRA-Storage System (C-SS) is not time-consuming because, thanks to the use of "brokers", data can be extracted directly from each center's storage systems through a connection with a structured query language database (SQL-DB), Microsoft Access(®), FileMaker Pro(®), or Microsoft Excel(®). The system is also structured to perform automatic archiving directly from the treatment planning system or afterloading machine. The architecture is based on the concept of "on-purpose data projection". The C-SS architecture is privacy protecting because it will never expose data that could identify an individual patient. The C-SS can also benefit from so-called "distributed learning" approaches, in which data never leave the collecting institution, while learning algorithms and proposed predictive models are commonly shared. Setting up a consortium is a feasible and practical way to create an international, multi-system data sharing system. The COBRA C-SS seems to be well accepted by all involved parties, primarily because it does not affect each center's own data storage technologies, procedures, and habits. Furthermore, the method preserves the privacy of all patients.
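The "on-purpose data projection" idea, where a broker reads a center's local store but exposes only protocol-defined, non-identifying columns, can be illustrated with a toy table; the schema, column names, and record below are invented, and SQLite stands in for whichever local system a center actually runs.

```python
# Sketch: a broker that projects only non-identifying columns for the consortium.
# Schema and data are invented; SQLite stands in for the local storage system.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE local_records (
    patient_name TEXT, birth_date TEXT,   -- identifying fields, never exported
    tumour_site TEXT, dose_gy REAL, stage TEXT)""")
con.execute("INSERT INTO local_records VALUES ('J. Doe', '1960-01-01', 'oropharynx', 45.0, 'II')")

PROJECTED_COLUMNS = ("tumour_site", "dose_gy", "stage")

def project_for_consortium(connection):
    query = "SELECT {} FROM local_records".format(", ".join(PROJECTED_COLUMNS))
    return [dict(zip(PROJECTED_COLUMNS, row)) for row in connection.execute(query)]

print(project_for_consortium(con))
```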
Mulrane, Laoighse; Rexhepaj, Elton; Smart, Valerie; Callanan, John J; Orhan, Diclehan; Eldem, Türkan; Mally, Angela; Schroeder, Susanne; Meyer, Kirstin; Wendt, Maria; O'Shea, Donal; Gallagher, William M
2008-08-01
The widespread use of digital slides has only recently come to the fore with the development of high-throughput scanners and high performance viewing software. This development, along with the optimisation of compression standards and image transfer techniques, has allowed the technology to be used in wide reaching applications including integration of images into hospital information systems and histopathological training, as well as the development of automated image analysis algorithms for prediction of histological aberrations and quantification of immunohistochemical stains. Here, the use of this technology in the creation of a comprehensive library of images of preclinical toxicological relevance is demonstrated. The images, acquired using the Aperio ScanScope CS and XT slide acquisition systems, form part of the ongoing EU FP6 Integrated Project, Innovative Medicines for Europe (InnoMed). In more detail, PredTox (abbreviation for Predictive Toxicology) is a subproject of InnoMed and comprises a consortium of 15 industrial (13 large pharma, 1 technology provider and 1 SME) and three academic partners. The primary aim of this consortium is to assess the value of combining data generated from 'omics technologies (proteomics, transcriptomics, metabolomics) with the results from more conventional toxicology methods, to facilitate further informed decision making in preclinical safety evaluation. A library of 1709 scanned images was created of full-face sections of liver and kidney tissue specimens from male Wistar rats treated with 16 proprietary and reference compounds of known toxicity; additional biological materials from these treated animals were separately used to create 'omics data, that will ultimately be used to populate an integrated toxicological database. In respect to assessment of the digital slides, a web-enabled digital slide management system, Digital SlideServer (DSS), was employed to enable integration of the digital slide content into the 'omics database and to facilitate remote viewing by pathologists connected with the project. DSS also facilitated manual annotation of digital slides by the pathologists, specifically in relation to marking particular lesions of interest. Tissue microarrays (TMAs) were constructed from the specimens for the purpose of creating a repository of tissue from animals used in the study with a view to later-stage biomarker assessment. As the PredTox consortium itself aims to identify new biomarkers of toxicity, these TMAs will be a valuable means of validation. In summary, a large repository of histological images was created enabling the subsequent pathological analysis of samples through remote viewing and, along with the utilisation of TMA technology, will allow the validation of biomarkers identified by the PredTox consortium. The population of the PredTox database with these digitised images represents the creation of the first toxicological database integrating 'omics and preclinical data with histological images.
Asquith, William H.; Thompson, David B.; Cleveland, Theodore G.; Fang, Xing
2004-01-01
In the early 2000s, the Texas Department of Transportation funded several research projects to examine the unit hydrograph and rainfall hyetograph techniques for hydrologic design in Texas for the estimation of design flows for stormwater drainage systems. A research consortium composed of Lamar University, Texas Tech University, the University of Houston, and the U.S. Geological Survey (USGS) was chosen to examine the unit hydrograph and rainfall hyetograph techniques. Rainfall and runoff data collected by the USGS at 91 streamflow-gaging stations in Texas formed a basis for the research. These data were collected as part of USGS small-watershed projects and urban watershed studies that began in the late 1950s and continued through most of the 1970s; a few gages were in operation in the mid-1980s. Selected hydrologic events from these studies were available in the form of over 220 printed reports, which offered the best aggregation of hydrologic data for the research objectives. Digital versions of the data did not exist. Therefore, significant effort was undertaken by the consortium to manually enter the data into a digital database from the printed record. The rainfall and runoff data for over 1,650 storms were entered. Considerable quality-control and quality-assurance efforts were conducted during and after database assembly to enhance data integrity. This report documents the database and informs interested parties of its usage.
Inroads to predict in vivo toxicology-an introduction to the eTOX Project.
Briggs, Katharine; Cases, Montserrat; Heard, David J; Pastor, Manuel; Pognan, François; Sanz, Ferran; Schwab, Christof H; Steger-Hartmann, Thomas; Sutter, Andreas; Watson, David K; Wichard, Jörg D
2012-01-01
There is a widespread awareness that the wealth of preclinical toxicity data that the pharmaceutical industry has generated in recent decades is not exploited as efficiently as it could be. Enhanced data availability for compound comparison ("read-across"), or for data mining to build predictive tools, should lead to a more efficient drug development process and contribute to the reduction of animal use (3Rs principle). In order to achieve these goals, a consortium approach, bringing together a number of relevant partners, is required. The eTOX ("electronic toxicity") consortium represents such a project and is a public-private partnership within the framework of the European Innovative Medicines Initiative (IMI). The project aims at the development of in silico prediction systems for organ and in vivo toxicity. The backbone of the project will be a database consisting of preclinical toxicity data for drug compounds or candidates extracted from previously unpublished, legacy reports from thirteen European and European operations-based pharmaceutical companies. The database will be enhanced by incorporation of publicly available, high-quality toxicology data. Seven academic institutes and five small-to-medium size enterprises (SMEs) contribute their expertise in data gathering, database curation, data mining, chemoinformatics and predictive systems development. The outcome of the project will be a predictive system contributing to early potential hazard identification and risk assessment during the drug development process. The concept and strategy of the eTOX project are described here, together with current achievements and future deliverables.
Reddy, T.B.K.; Thomas, Alex D.; Stamatis, Dimitri; Bertsch, Jon; Isbandi, Michelle; Jansson, Jakob; Mallajosyula, Jyothi; Pagani, Ioanna; Lobos, Elizabeth A.; Kyrpides, Nikos C.
2015-01-01
The Genomes OnLine Database (GOLD; http://www.genomesonline.org) is a comprehensive online resource to catalog and monitor genetic studies worldwide. GOLD provides up-to-date status on complete and ongoing sequencing projects along with a broad array of curated metadata. Here we report version 5 (v.5) of the database. The newly designed database schema and web user interface supports several new features including the implementation of a four level (meta)genome project classification system and a simplified intuitive web interface to access reports and launch search tools. The database currently hosts information for about 19 200 studies, 56 000 Biosamples, 56 000 sequencing projects and 39 400 analysis projects. More than just a catalog of worldwide genome projects, GOLD is a manually curated, quality-controlled metadata warehouse. The problems encountered in integrating disparate and varying quality data into GOLD are briefly highlighted. GOLD fully supports and follows the Genomic Standards Consortium (GSC) Minimum Information standards. PMID:25348402
The National Land Cover Database
Homer, Collin G.; Fry, Joyce A.; Barnes, Christopher A.
2012-01-01
The National Land Cover Database (NLCD) serves as the definitive Landsat-based, 30-meter resolution, land cover database for the Nation. NLCD provides spatial reference and descriptive data for characteristics of the land surface such as thematic class (for example, urban, agriculture, and forest), percent impervious surface, and percent tree canopy cover. NLCD supports a wide variety of Federal, State, local, and nongovernmental applications that seek to assess ecosystem status and health, understand the spatial patterns of biodiversity, predict effects of climate change, and develop land management policy. NLCD products are created by the Multi-Resolution Land Characteristics (MRLC) Consortium, a partnership of Federal agencies led by the U.S. Geological Survey. All NLCD data products are available for download at no charge to the public from the MRLC Web site: http://www.mrlc.gov.
ERIC Educational Resources Information Center
Mattern, Krista D.; Patterson, Brian F.
2011-01-01
The College Board formed a research consortium with four-year colleges and universities to build a national higher education database with the primary goal of validating the SAT® for use in college admission. The first sample included first-time, first-year students entering college in fall 2006, with 110 institutions providing students'…
ERIC Educational Resources Information Center
Mattern, Krista D.; Patterson, Brian F.
2006-01-01
The College Board formed a research consortium with four-year colleges and universities to build a national higher education database with the primary goal of validating the SAT®, which is used in college admission and consists of three sections: critical reading (SAT-CR), mathematics (SAT-M) and writing (SAT-W). This report builds on a body of…
ERIC Educational Resources Information Center
Mattern, Krista D.; Patterson, Brian F.
2012-01-01
The College Board formed a research consortium with four-year colleges and universities to build a national higher education database with the primary goal of validating the revised SAT®, which consists of three sections: critical reading (SAT-CR), mathematics (SAT-M), and writing (SAT-W), for use in college admission. A study by Mattern and…
A Web-based Alternative Non-animal Method Database for Safety Cosmetic Evaluations
Kim, Seung Won; Kim, Bae-Hwan
2016-01-01
Animal testing was used traditionally in the cosmetics industry to confirm product safety, but has begun to be banned; alternative methods to replace animal experiments are either in development, or are being validated, worldwide. Research data related to test substances are critical for developing novel alternative tests. Moreover, safety information on cosmetic materials has neither been collected in a database nor shared among researchers. Therefore, it is imperative to build and share a database of safety information on toxicological mechanisms and pathways collected through in vivo, in vitro, and in silico methods. We developed the CAMSEC database (named after the research team; the Consortium of Alternative Methods for Safety Evaluation of Cosmetics) to fulfill this purpose. On the same website, our aim is to provide updates on current alternative research methods in Korea. The database will not be used directly to conduct safety evaluations, but researchers or regulatory individuals can use it to facilitate their work in formulating safety evaluations for cosmetic materials. We hope this database will help establish new alternative research methods to conduct efficient safety evaluations of cosmetic materials. PMID:27437094
Wright, Judy M; Cottrell, David J; Mir, Ghazala
2014-07-01
To determine the optimal databases to search for studies of faith-sensitive interventions for treating depression. We examined 23 health, social science, religious, and grey literature databases searched for an evidence synthesis. Databases were prioritized by yield of (1) search results, (2) potentially relevant references identified during screening, (3) included references contained in the synthesis, and (4) included references that were available in the database. We assessed the impact of databases beyond MEDLINE, EMBASE, and PsycINFO by their ability to supply studies identifying new themes and issues. We identified pragmatic workload factors that influence database selection. PsycINFO was the best performing database within all priority lists. ArabPsyNet, CINAHL, Dissertations and Theses, EMBASE, Global Health, Health Management Information Consortium, MEDLINE, PsycINFO, and Sociological Abstracts were essential for our searches to retrieve the included references. Citation tracking activities and the personal library of one of the research teams made significant contributions of unique, relevant references. Religion studies databases (Am Theo Lib Assoc, FRANCIS) did not provide unique, relevant references. Literature searches for reviews and evidence syntheses of religion and health studies should include social science, grey literature, non-Western databases, personal libraries, and citation tracking activities. Copyright © 2014 Elsevier Inc. All rights reserved.
Illuminating the Depths of the MagIC (Magnetics Information Consortium) Database
NASA Astrophysics Data System (ADS)
Koppers, A. A. P.; Minnett, R.; Jarboe, N.; Jonestrask, L.; Tauxe, L.; Constable, C.
2015-12-01
The Magnetics Information Consortium (http://earthref.org/MagIC/) is a grass-roots cyberinfrastructure effort envisioned by the paleo-, geo-, and rock magnetic scientific community. Its mission is to archive their wealth of peer-reviewed raw data and interpretations from magnetics studies on natural and synthetic samples. Many of these valuable data are legacy datasets that were never published in their entirety, some resided in other databases that are no longer maintained, and others were never digitized from the field notebooks and lab work. Due to the volume of data collected, most studies, modern and legacy, only publish the interpreted results and, occasionally, a subset of the raw data. MagIC is making an extraordinary effort to archive these data in a single data model, including the raw instrument measurements if possible. This facilitates the reproducibility of the interpretations, the re-interpretation of the raw data as the community introduces new techniques, and the compilation of heterogeneous datasets that are otherwise distributed across multiple formats and physical locations. MagIC has developed tools to assist the scientific community in many stages of their workflow. Contributors easily share studies (in a private mode if so desired) in the MagIC Database with colleagues and reviewers prior to publication, publish the data online after the study is peer reviewed, and visualize their data in the context of the rest of the contributions to the MagIC Database. From organizing their data in the MagIC Data Model with an online editable spreadsheet, to validating the integrity of the dataset with automated plots and statistics, MagIC is continually lowering the barriers to transforming dark data into transparent and reproducible datasets. Additionally, this web application generalizes to other databases in MagIC's umbrella website (EarthRef.org) so that the Geochemical Earth Reference Model (http://earthref.org/GERM/) portal, Seamount Biogeosciences Network (http://earthref.org/SBN/), EarthRef Digital Archive (http://earthref.org/ERDA/) and EarthRef Reference Database (http://earthref.org/ERR/) benefit from its development.
ERIC Educational Resources Information Center
Lee, Connie W.; Hinson, Tony M.
This publication is the final report of a 21-month project designed to (1) expand and refine the computer capabilities of the Vocational-Technical Education Consortium of States (V-TECS) to ensure rapid data access for generating routine and special occupational data-based reports; (2) develop and implement a computer storage and retrieval system…
1989-03-01
[Fragmented record from an automated photointerpretation testbed report: the surviving text includes a figure caption for an initial segmentation of an image, references to a knowledge/inference/image/database engine, a note that Markov random field (MRF) theory provides a powerful alternative texture model and has driven intensive research on MRF model-based texture analysis, and recommendations that additional, more powerful features be incorporated into the image segmentation and object detection procedures.]
NLCD tree canopy cover (TCC) maps of the contiguous United States and coastal Alaska
Robert Benton; Bonnie Ruefenacht; Vicky Johnson; Tanushree Biswas; Craig Baker; Mark Finco; Kevin Megown; John Coulston; Ken Winterberger; Mark Riley
2015-01-01
A tree canopy cover (TCC) map is one of three elements in the National Land Cover Database (NLCD) 2011 suite of nationwide geospatial data layers. In 2010, the USDA Forest Service (USFS) committed to creating the TCC layer as a member of the Multi-Resolution Land Cover (MRLC) consortium. A general methodology for creating the TCC layer was reported at the 2012 FIA...
ERIC Educational Resources Information Center
Mattern, Krista D.; Patterson, Brian F.
2013-01-01
The College Board formed a research consortium with four-year colleges and universities to build a national higher education database with the primary goal of validating the revised SAT for use in college admission. A study by Mattern and Patterson (2009) examined the relationship between SAT scores and retention to the second year. The sample…
ERIC Educational Resources Information Center
Mattern, Krista D.; Patterson, Brian F.
2012-01-01
The College Board formed a research consortium with four-year colleges and universities to build a national higher education database with the primary goal of validating the revised SAT for use in college admission. A study by Mattern and Patterson (2009) examined the relationship between SAT scores and retention to the second year of college. The…
Boutet, Emmanuel; Lieberherr, Damien; Tognolli, Michael; Schneider, Michel; Bansal, Parit; Bridge, Alan J; Poux, Sylvain; Bougueleret, Lydie; Xenarios, Ioannis
2016-01-01
The Universal Protein Resource (UniProt, http://www.uniprot.org) consortium is an initiative of the SIB Swiss Institute of Bioinformatics (SIB), the European Bioinformatics Institute (EBI) and the Protein Information Resource (PIR) to provide the scientific community with a central resource for protein sequences and functional information. The UniProt consortium maintains the UniProt KnowledgeBase (UniProtKB), updated every 4 weeks, and several supplementary databases including the UniProt Reference Clusters (UniRef) and the UniProt Archive (UniParc). The Swiss-Prot section of the UniProt KnowledgeBase (UniProtKB/Swiss-Prot) contains publicly available, expertly and manually annotated protein sequences obtained from a broad spectrum of organisms. Plant protein entries are produced within the framework of the Plant Proteome Annotation Program (PPAP), with an emphasis on characterized proteins of Arabidopsis thaliana and Oryza sativa. High-level annotations provided by UniProtKB/Swiss-Prot are widely used to predict annotation of newly available proteins through automatic pipelines. The purpose of this chapter is to present a guided tour of a UniProtKB/Swiss-Prot entry. We will also present some of the tools and databases that are linked to each entry.
Pan, Min; Cong, Peikuan; Wang, Yue; Lin, Changsong; Yuan, Ying; Dong, Jian; Banerjee, Santasree; Zhang, Tao; Chen, Yanling; Zhang, Ting; Chen, Mingqing; Hu, Peter; Zheng, Shu; Zhang, Jin; Qi, Ming
2011-12-01
The Human Variome Project (HVP) is an international consortium of clinicians, geneticists, and researchers from over 30 countries, aiming to facilitate the establishment and maintenance of standards, systems, and infrastructure for the worldwide collection and sharing of all genetic variations affecting human disease. The HVP-China Node will build new and supplement existing databases of genetic diseases. As a first effort, we have created a novel variant database of BRCA1 and BRCA2, mismatch repair genes (MMR), and APC genes for breast cancer, Lynch syndrome, and familial adenomatous polyposis (FAP), respectively, in the Chinese population using the Leiden Open Variation Database (LOVD) format. We searched PubMed and some Chinese search engines to collect all the variants of these genes in the Chinese population that have already been detected and reported. There are some differences in gene variants between the Chinese population and other ethnic populations. The database is available online at http://www.genomed.org/LOVD/. Our database will be visible to users who survey other LOVD databases (e.g., via Google search or NCBI GeneTests search). Remote submissions are accepted, and the information is updated monthly. © 2011 Wiley Periodicals, Inc.
The IntAct molecular interaction database in 2012
Kerrien, Samuel; Aranda, Bruno; Breuza, Lionel; Bridge, Alan; Broackes-Carter, Fiona; Chen, Carol; Duesbury, Margaret; Dumousseau, Marine; Feuermann, Marc; Hinz, Ursula; Jandrasits, Christine; Jimenez, Rafael C.; Khadake, Jyoti; Mahadevan, Usha; Masson, Patrick; Pedruzzi, Ivo; Pfeiffenberger, Eric; Porras, Pablo; Raghunath, Arathi; Roechert, Bernd; Orchard, Sandra; Hermjakob, Henning
2012-01-01
IntAct is an open-source, open data molecular interaction database populated by data either curated from the literature or from direct data depositions. Two levels of curation are now available within the database, with both IMEx-level annotation and less detailed MIMIx-compatible entries currently supported. As from September 2011, IntAct contains approximately 275 000 curated binary interaction evidences from over 5000 publications. The IntAct website has been improved to enhance the search process and in particular the graphical display of the results. New data download formats are also available, which will facilitate the inclusion of IntAct's data in the Semantic Web. IntAct is an active contributor to the IMEx consortium (http://www.imexconsortium.org). IntAct source code and data are freely available at http://www.ebi.ac.uk/intact. PMID:22121220
The Generation Challenge Programme Platform: Semantic Standards and Workbench for Crop Science
Bruskiewich, Richard; Senger, Martin; Davenport, Guy; Ruiz, Manuel; Rouard, Mathieu; Hazekamp, Tom; Takeya, Masaru; Doi, Koji; Satoh, Kouji; Costa, Marcos; Simon, Reinhard; Balaji, Jayashree; Akintunde, Akinnola; Mauleon, Ramil; Wanchana, Samart; Shah, Trushar; Anacleto, Mylah; Portugal, Arllet; Ulat, Victor Jun; Thongjuea, Supat; Braak, Kyle; Ritter, Sebastian; Dereeper, Alexis; Skofic, Milko; Rojas, Edwin; Martins, Natalia; Pappas, Georgios; Alamban, Ryan; Almodiel, Roque; Barboza, Lord Hendrix; Detras, Jeffrey; Manansala, Kevin; Mendoza, Michael Jonathan; Morales, Jeffrey; Peralta, Barry; Valerio, Rowena; Zhang, Yi; Gregorio, Sergio; Hermocilla, Joseph; Echavez, Michael; Yap, Jan Michael; Farmer, Andrew; Schiltz, Gary; Lee, Jennifer; Casstevens, Terry; Jaiswal, Pankaj; Meintjes, Ayton; Wilkinson, Mark; Good, Benjamin; Wagner, James; Morris, Jane; Marshall, David; Collins, Anthony; Kikuchi, Shoshi; Metz, Thomas; McLaren, Graham; van Hintum, Theo
2008-01-01
The Generation Challenge programme (GCP) is a global crop research consortium directed toward crop improvement through the application of comparative biology and genetic resources characterization to plant breeding. A key consortium research activity is the development of a GCP crop bioinformatics platform to support GCP research. This platform includes the following: (i) shared, public platform-independent domain models, ontology, and data formats to enable interoperability of data and analysis flows within the platform; (ii) web service and registry technologies to identify, share, and integrate information across diverse, globally dispersed data sources, as well as to access high-performance computational (HPC) facilities for computationally intensive, high-throughput analyses of project data; (iii) platform-specific middleware reference implementations of the domain model integrating a suite of public (largely open-access/-source) databases and software tools into a workbench to facilitate biodiversity analysis, comparative analysis of crop genomic data, and plant breeding decision making. PMID:18483570
Proceedings -- US Russian workshop on fuel cell technologies (in English;Russian)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, B.; Sylwester, A.
1996-04-01
On September 26-28, 1995, Sandia National Laboratories sponsored the first Joint US/Russian Workshop on Fuel Cell Technology at the Marriott Hotel in Albuquerque, New Mexico. This workshop brought together the US and Russian fuel cell communities as represented by users, producers, R and D establishments and government agencies. Customer needs and potential markets in both countries were discussed to establish a customer focus for the workshop. Parallel technical sessions defined research needs and opportunities for collaboration to advance fuel cell technology. A desired outcome of the workshop was the formation of a Russian/American Fuel Cell Consortium to advance fuel cell technology for application in emerging markets in both countries. This consortium is envisioned to involve industry and national labs in both countries. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.
Desiderata for a Computer-Assisted Audit Tool for Clinical Data Source Verification Audits
Duda, Stephany N.; Wehbe, Firas H.; Gadd, Cynthia S.
2013-01-01
Clinical data auditing often requires validating the contents of clinical research databases against source documents available in health care settings. Currently available data audit software, however, does not provide features necessary to compare the contents of such databases to source data in paper medical records. This work enumerates the primary weaknesses of using paper forms for clinical data audits and identifies the shortcomings of existing data audit software, as informed by the experiences of an audit team evaluating data quality for an international research consortium. The authors propose a set of attributes to guide the development of a computer-assisted clinical data audit tool to simplify and standardize the audit process. PMID:20841814
Bannasch, Detlev; Mehrle, Alexander; Glatting, Karl-Heinz; Pepperkok, Rainer; Poustka, Annemarie; Wiemann, Stefan
2004-01-01
We have implemented LIFEdb (http://www.dkfz.de/LIFEdb) to link information regarding novel human full-length cDNAs generated and sequenced by the German cDNA Consortium with functional information on the encoded proteins produced in functional genomics and proteomics approaches. The database also serves as a sample-tracking system to manage the process from cDNA to experimental read-out and data interpretation. A web interface enables the scientific community to explore and visualize features of the annotated cDNAs and ORFs combined with experimental results, and thus helps to unravel new features of proteins with as yet unknown functions. PMID:14681468
NASA Astrophysics Data System (ADS)
Sun, Wenqing; Zheng, Bin; Huang, Xia; Qian, Wei
2017-03-01
Deep learning is a promising and rapidly developing method in medical image analysis, but how to efficiently prepare the input images for deep learning algorithms remains a challenge. In this paper, we introduced a novel artificial multichannel region of interest (ROI) generation procedure for convolutional neural networks (CNN). From the LIDC database, we collected 54880 benign nodule samples and 59848 malignant nodule samples based on the radiologists' annotations. The proposed CNN consists of three pairs of convolutional layers and two fully connected layers. For each original ROI, two new ROIs were generated: one contains the segmented nodule, which highlights the nodule shape, and the other contains the gradient of the original ROI, which highlights the textures. By combining the three channel images into a pseudo-color ROI, the CNN was trained and tested on the new multichannel ROIs (multichannel ROI II). For comparison, we generated another type of multichannel image by replacing the gradient image channel with an ROI containing a whitened background region (multichannel ROI I). With the 5-fold cross-validation evaluation method, the CNN using multichannel ROI II achieved an ROI-based area under the curve (AUC) of 0.8823±0.0177, compared with an AUC of 0.8484±0.0204 for the original ROI. By averaging the ROI scores from one nodule, the lesion-based AUC using the multichannel ROI was 0.8793±0.0210. By comparing the convolved feature maps from the CNN using different types of ROIs, it can be noted that multichannel ROI II contains more accurate nodule shapes and surrounding textures.
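A minimal sketch of the multichannel ROI construction described above, assuming a 2-D grayscale ROI and a precomputed binary nodule mask as hypothetical inputs (the function name and normalization choices are illustrative, not taken from the paper); the original ROI, the segmented nodule, and the gradient magnitude are stacked into one pseudo-color array in the spirit of multichannel ROI II:

```python
import numpy as np
from scipy import ndimage

def make_multichannel_roi(roi, nodule_mask):
    """Stack original ROI, segmented nodule, and gradient magnitude into a
    pseudo-color (H, W, 3) array, loosely following the multichannel ROI II idea.

    roi         : 2-D grayscale CT region of interest (hypothetical input)
    nodule_mask : 2-D boolean mask of the segmented nodule (hypothetical input)
    """
    roi = roi.astype(np.float32)

    # Channel 1: the original ROI, rescaled to [0, 1]
    original = (roi - roi.min()) / (np.ptp(roi) + 1e-8)

    # Channel 2: the segmented nodule only, highlighting its shape
    segmented = original * nodule_mask.astype(np.float32)

    # Channel 3: gradient magnitude of the ROI, highlighting surrounding textures
    gx = ndimage.sobel(roi, axis=0)
    gy = ndimage.sobel(roi, axis=1)
    gradient = np.hypot(gx, gy)
    gradient = (gradient - gradient.min()) / (np.ptp(gradient) + 1e-8)

    return np.stack([original, segmented, gradient], axis=-1)
```

The resulting (H, W, 3) array can then be fed to a standard image CNN exactly as an RGB image would be.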
Standardizing the nomenclature of Martian impact crater ejecta morphologies
Barlow, Nadine G.; Boyce, Joseph M.; Costard, Francois M.; Craddock, Robert A.; Garvin, James B.; Sakimoto, Susan E.H.; Kuzmin, Ruslan O.; Roddy, David J.; Soderblom, Laurence A.
2000-01-01
The Mars Crater Morphology Consortium recommends the use of a standardized nomenclature system when discussing Martian impact crater ejecta morphologies. The system utilizes nongenetic descriptors to identify the various ejecta morphologies seen on Mars. This system is designed to facilitate communication and collaboration between researchers. Crater morphology databases will be archived through the U.S. Geological Survey in Flagstaff, where a comprehensive catalog of Martian crater morphologic information will be maintained.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reddy, Tatiparthi B. K.; Thomas, Alex D.; Stamatis, Dimitri
The Genomes OnLine Database (GOLD; http://www.genomesonline.org) is a comprehensive online resource to catalog and monitor genetic studies worldwide. GOLD provides up-to-date status on complete and ongoing sequencing projects along with a broad array of curated metadata. Within this paper, we report version 5 (v.5) of the database. The newly designed database schema and web user interface supports several new features including the implementation of a four level (meta)genome project classification system and a simplified intuitive web interface to access reports and launch search tools. The database currently hosts information for about 19 200 studies, 56 000 Biosamples, 56 000 sequencing projects and 39 400 analysis projects. More than just a catalog of worldwide genome projects, GOLD is a manually curated, quality-controlled metadata warehouse. The problems encountered in integrating disparate and varying quality data into GOLD are briefly highlighted. Lastly, GOLD fully supports and follows the Genomic Standards Consortium (GSC) Minimum Information standards.
The MIntAct project—IntAct as a common curation platform for 11 molecular interaction databases
Orchard, Sandra; Ammari, Mais; Aranda, Bruno; Breuza, Lionel; Briganti, Leonardo; Broackes-Carter, Fiona; Campbell, Nancy H.; Chavali, Gayatri; Chen, Carol; del-Toro, Noemi; Duesbury, Margaret; Dumousseau, Marine; Galeota, Eugenia; Hinz, Ursula; Iannuccelli, Marta; Jagannathan, Sruthi; Jimenez, Rafael; Khadake, Jyoti; Lagreid, Astrid; Licata, Luana; Lovering, Ruth C.; Meldal, Birgit; Melidoni, Anna N.; Milagros, Mila; Peluso, Daniele; Perfetto, Livia; Porras, Pablo; Raghunath, Arathi; Ricard-Blum, Sylvie; Roechert, Bernd; Stutz, Andre; Tognolli, Michael; van Roey, Kim; Cesareni, Gianni; Hermjakob, Henning
2014-01-01
IntAct (freely available at http://www.ebi.ac.uk/intact) is an open-source, open data molecular interaction database populated by data either curated from the literature or from direct data depositions. IntAct has developed a sophisticated web-based curation tool, capable of supporting both IMEx- and MIMIx-level curation. This tool is now utilized by multiple additional curation teams, all of whom annotate data directly into the IntAct database. Members of the IntAct team supply appropriate levels of training, perform quality control on entries and take responsibility for long-term data maintenance. Recently, the MINT and IntAct databases decided to merge their separate efforts to make optimal use of limited developer resources and maximize the curation output. All data manually curated by the MINT curators have been moved into the IntAct database at EMBL-EBI and are merged with the existing IntAct dataset. Both IntAct and MINT are active contributors to the IMEx consortium (http://www.imexconsortium.org). PMID:24234451
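For illustration only, interaction records curated into IntAct can also be retrieved programmatically; the sketch below assumes a PSICQUIC-style REST endpoint for IntAct and MIQL query syntax (the URL and parameter names are assumptions, not taken from the abstract) and returns tab-delimited MITAB lines:

```python
import requests

# Assumed PSICQUIC REST endpoint for IntAct; MIQL query syntax ("identifier:...").
BASE = "https://www.ebi.ac.uk/Tools/webservices/psicquic/intact/webservices/current/search/query/"

def fetch_interactions(query, max_rows=10):
    """Return MITAB-formatted interaction lines for a MIQL query (illustrative only)."""
    resp = requests.get(BASE + query, params={"firstResult": 0, "maxResults": max_rows})
    resp.raise_for_status()
    return [line for line in resp.text.splitlines() if line.strip()]

# Example: interactions involving a single protein identifier.
for row in fetch_interactions("identifier:P04637"):
    print(row.split("\t")[:2])  # interactor A and B identifiers
```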
LMSD: LIPID MAPS structure database
Sud, Manish; Fahy, Eoin; Cotter, Dawn; Brown, Alex; Dennis, Edward A.; Glass, Christopher K.; Merrill, Alfred H.; Murphy, Robert C.; Raetz, Christian R. H.; Russell, David W.; Subramaniam, Shankar
2007-01-01
The LIPID MAPS Structure Database (LMSD) is a relational database encompassing structures and annotations of biologically relevant lipids. Structures of lipids in the database come from four sources: (i) LIPID MAPS Consortium's core laboratories and partners; (ii) lipids identified by LIPID MAPS experiments; (iii) computationally generated structures for appropriate lipid classes; (iv) biologically relevant lipids manually curated from LIPID BANK, LIPIDAT and other public sources. All the lipid structures in LMSD are drawn in a consistent fashion. In addition to a classification-based retrieval of lipids, users can search LMSD using either text-based or structure-based search options. The text-based search implementation supports data retrieval by any combination of these data fields: LIPID MAPS ID, systematic or common name, mass, formula, category, main class, and subclass data fields. The structure-based search, in conjunction with optional data fields, provides the capability to perform a substructure search or exact match for the structure drawn by the user. Search results, in addition to structure and annotations, also include relevant links to external databases. The LMSD is publicly available at PMID:17098933
Kawano, Shin; Watanabe, Tsutomu; Mizuguchi, Sohei; Araki, Norie; Katayama, Toshiaki; Yamaguchi, Atsuko
2014-07-01
TogoTable (http://togotable.dbcls.jp/) is a web tool that adds user-specified annotations to a table that a user uploads. Annotations are drawn from several biological databases that use the Resource Description Framework (RDF) data model. TogoTable uses database identifiers (IDs) in the table as a query key for searching. RDF data, which form a network called Linked Open Data (LOD), can be searched from SPARQL endpoints using a SPARQL query language. Because TogoTable uses RDF, it can integrate annotations from not only the reference database to which the IDs originally belong, but also externally linked databases via the LOD network. For example, annotations in the Protein Data Bank can be retrieved using GeneID through links provided by the UniProt RDF. Because RDF has been standardized by the World Wide Web Consortium, any database with annotations based on the RDF data model can be easily incorporated into this tool. We believe that TogoTable is a valuable Web tool, particularly for experimental biologists who need to process huge amounts of data such as high-throughput experimental output. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
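To illustrate the kind of ID-keyed lookup against an RDF endpoint that TogoTable automates, here is a hedged Python sketch that asks a UniProt SPARQL endpoint for the protein name behind one accession; the endpoint URL and the property names are assumptions for illustration and are not TogoTable's internal queries:

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Assumed public SPARQL endpoint; TogoTable itself queries several LOD endpoints.
ENDPOINT = "https://sparql.uniprot.org/sparql"

def fetch_protein_name(accession):
    """Look up annotation for one UniProt accession via SPARQL (illustrative only)."""
    sparql = SPARQLWrapper(ENDPOINT)
    sparql.setReturnFormat(JSON)
    sparql.setQuery(f"""
        PREFIX up: <http://purl.uniprot.org/core/>
        SELECT ?name WHERE {{
            <http://purl.uniprot.org/uniprot/{accession}> up:recommendedName ?rn .
            ?rn up:fullName ?name .
        }}
    """)
    results = sparql.query().convert()
    return [b["name"]["value"] for b in results["results"]["bindings"]]

print(fetch_protein_name("P04637"))  # e.g. a human p53 accession
```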
Sleep atlas and multimedia database.
Penzel, T; Kesper, K; Mayer, G; Zulley, J; Peter, J H
2000-01-01
The ENN sleep atlas and database was set up on a dedicated server connected to the internet thus providing all services such as WWW, ftp and telnet access. The database serves as a platform to promote the goals of the European Neurological Network, to exchange patient cases for second opinion between experts and to create a case-oriented multimedia sleep atlas with descriptive text, images and video-clips of all known sleep disorders. The sleep atlas consists of a small public and a large private part for members of the consortium. 20 patient cases were collected and presented with educational information similar to published case reports. Case reports are complemented with images, video-clips and biosignal recordings. A Java based viewer for biosignals provided in EDF format was installed in order to move free within the sleep recordings without the need to download the full recording on the client.
Hymenoptera Genome Database: integrating genome annotations in HymenopteraMine
Elsik, Christine G.; Tayal, Aditi; Diesh, Colin M.; Unni, Deepak R.; Emery, Marianne L.; Nguyen, Hung N.; Hagen, Darren E.
2016-01-01
We report an update of the Hymenoptera Genome Database (HGD) (http://HymenopteraGenome.org), a model organism database for insect species of the order Hymenoptera (ants, bees and wasps). HGD maintains genomic data for 9 bee species, 10 ant species and 1 wasp, including the versions of genome and annotation data sets published by the genome sequencing consortiums and those provided by NCBI. A new data-mining warehouse, HymenopteraMine, based on the InterMine data warehousing system, integrates the genome data with data from external sources and facilitates cross-species analyses based on orthology. New genome browsers and annotation tools based on JBrowse/WebApollo provide easy genome navigation, and viewing of high throughput sequence data sets and can be used for collaborative genome annotation. All of the genomes and annotation data sets are combined into a single BLAST server that allows users to select and combine sequence data sets to search. PMID:26578564
Building An Integrated Neurodegenerative Disease Database At An Academic Health Center
Xie, Sharon X.; Baek, Young; Grossman, Murray; Arnold, Steven E.; Karlawish, Jason; Siderowf, Andrew; Hurtig, Howard; Elman, Lauren; McCluskey, Leo; Van Deerlin, Vivianna; Lee, Virginia M.-Y.; Trojanowski, John Q.
2010-01-01
Background: It is becoming increasingly important to study common and distinct etiologies, clinical and pathological features, and mechanisms related to neurodegenerative diseases such as Alzheimer’s disease (AD), Parkinson’s disease (PD), amyotrophic lateral sclerosis (ALS), and frontotemporal lobar degeneration (FTLD). These comparative studies rely on powerful database tools to quickly generate data sets which match diverse and complementary criteria set by the studies. Methods: In this paper, we present a novel Integrated NeuroDegenerative Disease (INDD) database developed at the University of Pennsylvania (Penn) through a consortium of Penn investigators. Since these investigators work on AD, PD, ALS and FTLD, this allowed us to achieve the goal of developing an INDD database for these major neurodegenerative disorders. We used Microsoft SQL Server as the platform with built-in “backwards” functionality to provide Access as a front-end client to interface with the database. We used PHP Hypertext Preprocessor to create the “front end” web interface and then integrated individual neurodegenerative disease databases using a master lookup table. We also present methods of data entry, database security, database backups, and database audit trails for this INDD database. Results: We compare the results of a biomarker study using the INDD database to those using an alternative approach of querying individual databases separately. Conclusions: We have demonstrated that the Penn INDD database has the ability to query multiple database tables from a single console with high accuracy and reliability. The INDD database provides a powerful tool for generating data sets in comparative studies across several neurodegenerative diseases. PMID:21784346
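As an illustration of the master-lookup-table integration described above, the following sketch uses SQLite in place of the actual Microsoft SQL Server back end, with invented table and column names, to show how a single patient key can tie per-disease tables together so that one query spans all of them:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Hypothetical per-disease tables plus a master lookup table keyed on a global patient ID.
cur.executescript("""
CREATE TABLE master_lookup (patient_id INTEGER PRIMARY KEY, diagnosis TEXT);
CREATE TABLE ad_cases (patient_id INTEGER, csf_abeta REAL);
CREATE TABLE pd_cases (patient_id INTEGER, updrs_score REAL);
""")
cur.executemany("INSERT INTO master_lookup VALUES (?, ?)", [(1, "AD"), (2, "PD")])
cur.execute("INSERT INTO ad_cases VALUES (1, 412.0)")
cur.execute("INSERT INTO pd_cases VALUES (2, 33.0)")

# One console-style query across diseases via the master lookup table.
cur.execute("""
SELECT m.patient_id, m.diagnosis, a.csf_abeta, p.updrs_score
FROM master_lookup m
LEFT JOIN ad_cases a ON a.patient_id = m.patient_id
LEFT JOIN pd_cases p ON p.patient_id = m.patient_id
""")
print(cur.fetchall())
```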
Rudnick, Paul A.; Markey, Sanford P.; Roth, Jeri; Mirokhin, Yuri; Yan, Xinjian; Tchekhovskoi, Dmitrii V.; Edwards, Nathan J.; Thangudu, Ratna R.; Ketchum, Karen A.; Kinsinger, Christopher R.; Mesri, Mehdi; Rodriguez, Henry; Stein, Stephen E.
2016-01-01
The Clinical Proteomic Tumor Analysis Consortium (CPTAC) has produced large proteomics datasets from the mass spectrometric interrogation of tumor samples previously analyzed by The Cancer Genome Atlas (TCGA) program. The availability of the genomic and proteomic data is enabling proteogenomic study for both reference (i.e., contained in major sequence databases) and non-reference markers of cancer. The CPTAC labs have focused on colon, breast, and ovarian tissues in the first round of analyses; spectra from these datasets were produced from 2D LC-MS/MS analyses and represent deep coverage. To reduce the variability introduced by disparate data analysis platforms (e.g., software packages, versions, parameters, sequence databases, etc.), the CPTAC Common Data Analysis Platform (CDAP) was created. The CDAP produces both peptide-spectrum-match (PSM) reports and gene-level reports. The pipeline processes raw mass spectrometry data according to the following: (1) Peak-picking and quantitative data extraction, (2) database searching, (3) gene-based protein parsimony, and (4) false discovery rate (FDR)-based filtering. The pipeline also produces localization scores for the phosphopeptide enrichment studies using the PhosphoRS program. Quantitative information for each of the datasets is specific to the sample processing, with PSM and protein reports containing the spectrum-level or gene-level (“rolled-up”) precursor peak areas and spectral counts for label-free or reporter ion log-ratios for 4plex iTRAQ™. The reports are available in simple tab-delimited formats and, for the PSM-reports, in mzIdentML. The goal of the CDAP is to provide standard, uniform reports for all of the CPTAC data, enabling comparisons between different samples and cancer types as well as across the major ‘omics fields. PMID:26860878
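Step (4) of the CDAP pipeline is FDR-based filtering of PSMs. A common way to do this, shown here as a generic target-decoy sketch rather than the CDAP code itself, is to rank PSMs by score and estimate the FDR as the running ratio of decoy to target hits:

```python
def fdr_filter(psms, threshold=0.01):
    """Keep target PSMs passing a target-decoy FDR threshold.

    psms: list of (score, is_decoy) tuples, higher score = better match.
    A fuller implementation would convert the running FDR values to q-values.
    """
    ranked = sorted(psms, key=lambda p: p[0], reverse=True)
    kept, targets, decoys = [], 0, 0
    for score, is_decoy in ranked:
        if is_decoy:
            decoys += 1
        else:
            targets += 1
        fdr = decoys / max(targets, 1)  # simple decoy/target estimate
        if fdr <= threshold and not is_decoy:
            kept.append((score, is_decoy))
    return kept

example = [(95.0, False), (90.0, False), (88.0, True), (80.0, False), (60.0, True)]
print(len(fdr_filter(example)))  # -> 2 target PSMs survive at 1% FDR
```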
Rudnick, Paul A; Markey, Sanford P; Roth, Jeri; Mirokhin, Yuri; Yan, Xinjian; Tchekhovskoi, Dmitrii V; Edwards, Nathan J; Thangudu, Ratna R; Ketchum, Karen A; Kinsinger, Christopher R; Mesri, Mehdi; Rodriguez, Henry; Stein, Stephen E
2016-03-04
The Clinical Proteomic Tumor Analysis Consortium (CPTAC) has produced large proteomics data sets from the mass spectrometric interrogation of tumor samples previously analyzed by The Cancer Genome Atlas (TCGA) program. The availability of the genomic and proteomic data is enabling proteogenomic study for both reference (i.e., contained in major sequence databases) and nonreference markers of cancer. The CPTAC laboratories have focused on colon, breast, and ovarian tissues in the first round of analyses; spectra from these data sets were produced from 2D liquid chromatography-tandem mass spectrometry analyses and represent deep coverage. To reduce the variability introduced by disparate data analysis platforms (e.g., software packages, versions, parameters, sequence databases, etc.), the CPTAC Common Data Analysis Platform (CDAP) was created. The CDAP produces both peptide-spectrum-match (PSM) reports and gene-level reports. The pipeline processes raw mass spectrometry data according to the following: (1) peak-picking and quantitative data extraction, (2) database searching, (3) gene-based protein parsimony, and (4) false-discovery rate-based filtering. The pipeline also produces localization scores for the phosphopeptide enrichment studies using the PhosphoRS program. Quantitative information for each of the data sets is specific to the sample processing, with PSM and protein reports containing the spectrum-level or gene-level ("rolled-up") precursor peak areas and spectral counts for label-free or reporter ion log-ratios for 4plex iTRAQ. The reports are available in simple tab-delimited formats and, for the PSM-reports, in mzIdentML. The goal of the CDAP is to provide standard, uniform reports for all of the CPTAC data to enable comparisons between different samples and cancer types as well as across the major omics fields.
Infrastructure resources for clinical research in amyotrophic lateral sclerosis.
Sherman, Alexander V; Gubitz, Amelie K; Al-Chalabi, Ammar; Bedlack, Richard; Berry, James; Conwit, Robin; Harris, Brent T; Horton, D Kevin; Kaufmann, Petra; Leitner, Melanie L; Miller, Robert; Shefner, Jeremy; Vonsattel, Jean Paul; Mitsumoto, Hiroshi
2013-05-01
Clinical trial networks, shared clinical databases, and human biospecimen repositories are examples of infrastructure resources aimed at enhancing and expediting clinical and/or patient-oriented research to uncover the etiology and pathogenesis of amyotrophic lateral sclerosis (ALS), a rapidly progressive neurodegenerative disease that leads to the paralysis of voluntary muscles. The current status of such infrastructure resources, as well as opportunities and impediments, was discussed at the second Tarrytown ALS meeting held in September 2011. The discussion focused on resources developed and maintained by ALS clinics and centers in North America and Europe, various clinical trial networks, U.S. government federal agencies including the National Institutes of Health (NIH), the Agency for Toxic Substances and Disease Registry (ATSDR) and the Centers for Disease Control and Prevention (CDC), and several voluntary disease organizations that support ALS research activities. Key recommendations included 1) the establishment of shared databases among individual ALS clinics to enhance the coordination of resources and data analyses; 2) the expansion of quality-controlled human biospecimen banks; and 3) the adoption of uniform data standards, such as the recently developed Common Data Elements (CDEs) for ALS clinical research. The value of clinical trial networks such as the Northeast ALS (NEALS) Consortium and the Western ALS (WALS) Consortium was recognized, and strategies to further enhance and complement these networks and their research resources were discussed.
NASA Astrophysics Data System (ADS)
Minnett, R.; Koppers, A.; Tauxe, L.; Constable, C.; Pisarevsky, S. A.; Jackson, M.; Solheid, P.; Banerjee, S.; Johnson, C.
2006-12-01
The Magnetics Information Consortium (MagIC) is commissioned to implement and maintain an online portal to a relational database populated by both rock and paleomagnetic data. The goal of MagIC is to archive all measurements and the derived properties for studies of paleomagnetic directions (inclination, declination) and intensities, and for rock magnetic experiments (hysteresis, remanence, susceptibility, anisotropy). MagIC is hosted under EarthRef.org at http://earthref.org/MAGIC/ and has two search nodes, one for paleomagnetism and one for rock magnetism. Both nodes provide query building based on location, reference, methods applied, material type and geological age, as well as a visual map interface to browse and select locations. The query result set is displayed in a digestible tabular format allowing the user to descend through hierarchical levels such as from locations to sites, samples, specimens, and measurements. At each stage, the result set can be saved and, if supported by the data, can be visualized by plotting global location maps, equal area plots, or typical Zijderveld, hysteresis, and various magnetization and remanence diagrams. User contributions to the MagIC database are critical to achieving a useful research tool. We have developed a standard data and metadata template (Version 2.1) that can be used to format and upload all data at the time of publication in Earth Science journals. Software tools are provided to facilitate population of these templates within Microsoft Excel. These tools allow for the import/export of text files and provide advanced functionality to manage and edit the data, and to perform various internal checks to maintain data integrity and prepare for uploading. The MagIC Contribution Wizard at http://earthref.org/MAGIC/upload.htm executes the upload and takes only a few minutes to process several thousand data records. The standardized MagIC template files are stored in the digital archives of EarthRef.org where they remain available for download by the public (in both text and Excel format). Finally, the contents of these template files are automatically parsed into the online relational database, making the data available for online searches in the paleomagnetic and rock magnetic search nodes. The MagIC database contains all data transferred from the IAGA paleomagnetic poles database (GPMDB), the lava flow paleosecular variation database (PSVRL), lake sediment database (SECVR) and the PINT database. Additionally, a substantial number of data compiled under the Time Averaged Field Investigations project is now included plus a significant fraction of the data collected at SIO and the IRM. Ongoing additions of legacy data include over 40 papers from studies on the Hawaiian Islands and Mexico, data compilations from archeomagnetic studies and updates to the lake sediment dataset.
Renard, Bernhard Y.; Xu, Buote; Kirchner, Marc; Zickmann, Franziska; Winter, Dominic; Korten, Simone; Brattig, Norbert W.; Tzur, Amit; Hamprecht, Fred A.; Steen, Hanno
2012-01-01
Currently, the reliable identification of peptides and proteins is only feasible when thoroughly annotated sequence databases are available. Although sequencing capacities continue to grow, many organisms remain without reliable, fully annotated reference genomes required for proteomic analyses. Standard database search algorithms fail to identify peptides that are not exactly contained in a protein database. De novo searches are generally hindered by their restricted reliability, and current error-tolerant search strategies are limited by global, heuristic tradeoffs between database and spectral information. We propose a Bayesian information criterion-driven error-tolerant peptide search (BICEPS) and offer an open source implementation based on this statistical criterion to automatically balance the information of each single spectrum and the database, while limiting the run time. We show that BICEPS performs as well as current database search algorithms when such algorithms are applied to sequenced organisms, whereas BICEPS only uses a remotely related organism database. For instance, we use a chicken instead of a human database corresponding to an evolutionary distance of more than 300 million years (International Chicken Genome Sequencing Consortium (2004) Sequence and comparative analysis of the chicken genome provide unique perspectives on vertebrate evolution. Nature 432, 695–716). We demonstrate the successful application to cross-species proteomics with a 33% increase in the number of identified proteins for a filarial nematode sample of Litomosoides sigmodontis. PMID:22493179
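For reference, the Bayesian information criterion that gives BICEPS its name trades goodness of fit against model complexity in the standard form below; how the paper maps spectrum and database information onto the number of parameters, the sample size, and the likelihood is specific to BICEPS and is not reproduced here:

```latex
\mathrm{BIC} = k \ln n - 2 \ln \hat{L}
```

where k is the number of free parameters, n the number of observations, and \hat{L} the maximized likelihood; lower BIC values indicate a better balance between fit and complexity.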
DOE Office of Scientific and Technical Information (OSTI.GOV)
Courteau, J.
1991-10-11
Since the Genome Project began several years ago, a plethora of databases have been developed or are in the works. They range from the massive Genome Data Base at Johns Hopkins University, the central repository of all gene mapping information, to small databases focusing on single chromosomes or organisms. Some are publicly available, others are essentially private electronic lab notebooks. Still others limit access to a consortium of researchers working on, say, a single human chromosome. An increasing number incorporate sophisticated search and analytical software, while others operate as little more than data lists. In consultation with numerous experts in the field, a list has been compiled of some key genome-related databases. The list was not limited to map and sequence databases but also included the tools investigators use to interpret and elucidate genetic data, such as protein sequence and protein structure databases. Because a major goal of the Genome Project is to map and sequence the genomes of several experimental animals, including E. coli, yeast, fruit fly, nematode, and mouse, the available databases for those organisms are listed as well. The author also includes several databases that are still under development - including some ambitious efforts that go beyond data compilation to create what are being called electronic research communities, enabling many users, rather than just one or a few curators, to add or edit the data and tag it as raw or confirmed.
2013-09-30
[Fragmented record: the surviving text describes databases of cetacean DNA profiles, including profiles of right whales (Eubalaena glacialis) from the North Atlantic Right Whale Consortium, sperm whales (Physeter macrocephalus) sampled during the 5-year Voyage of the Odyssey, and Hector's dolphins from Cloudy Bay, with other cetacean databases maintained in Wildbook format.]
CFD Aerothermodynamic Characterization Of The IXV Hypersonic Vehicle
NASA Astrophysics Data System (ADS)
Roncioni, P.; Ranuzzi, G.; Marini, M.; Battista, F.; Rufolo, G. C.
2011-05-01
In this paper, within the framework of the ESA technical assistance activities for the IXV project, we report the numerical activities carried out by ASI/CIRA to support the development of aerodynamic and aerothermodynamic databases independent of those developed by the IXV industrial consortium. A general characterization of the IXV aerothermodynamic environment is also provided for cross-checking and verification purposes. The work covers the first-year activities of the Technical Assistance Contract agreed between the Italian Space Agency/CIRA and ESA.
Building an integrated neurodegenerative disease database at an academic health center.
Xie, Sharon X; Baek, Young; Grossman, Murray; Arnold, Steven E; Karlawish, Jason; Siderowf, Andrew; Hurtig, Howard; Elman, Lauren; McCluskey, Leo; Van Deerlin, Vivianna; Lee, Virginia M-Y; Trojanowski, John Q
2011-07-01
It is becoming increasingly important to study common and distinct etiologies, clinical and pathological features, and mechanisms related to neurodegenerative diseases such as Alzheimer's disease, Parkinson's disease, amyotrophic lateral sclerosis, and frontotemporal lobar degeneration. These comparative studies rely on powerful database tools to quickly generate data sets that match diverse and complementary criteria set by the studies. In this article, we present a novel integrated neurodegenerative disease (INDD) database, which was developed at the University of Pennsylvania (Penn) with the help of a consortium of Penn investigators. Because these investigators work on Alzheimer's disease, Parkinson's disease, amyotrophic lateral sclerosis, and frontotemporal lobar degeneration, we were able to achieve the goal of developing an INDD database for these major neurodegenerative disorders. We used the Microsoft SQL server as a platform, with built-in "backwards" functionality to provide Access as a frontend client to interface with the database. We used PHP Hypertext Preprocessor to create the "frontend" web interface and then used a master lookup table to integrate individual neurodegenerative disease databases. We also present methods of data entry, database security, database backups, and database audit trails for this INDD database. Using the INDD database, we compared the results of a biomarker study with those using an alternative approach of querying individual databases separately. We have demonstrated that the Penn INDD database has the ability to query multiple database tables from a single console with high accuracy and reliability. The INDD database provides a powerful tool for generating data sets in comparative studies on several neurodegenerative diseases. Copyright © 2011 The Alzheimer's Association. Published by Elsevier Inc. All rights reserved.
FANTOM5 CAGE profiles of human and mouse reprocessed for GRCh38 and GRCm38 genome assemblies.
Abugessaisa, Imad; Noguchi, Shuhei; Hasegawa, Akira; Harshbarger, Jayson; Kondo, Atsushi; Lizio, Marina; Severin, Jessica; Carninci, Piero; Kawaji, Hideya; Kasukawa, Takeya
2017-08-29
The FANTOM5 consortium described the promoter-level expression atlas of human and mouse by using CAGE (Cap Analysis of Gene Expression) with single molecule sequencing. In the original publications, the GRCh37/hg19 and NCBI37/mm9 assemblies were used as the reference genomes of human and mouse, respectively; later, the Genome Reference Consortium released the newer genome assemblies GRCh38/hg38 and GRCm38/mm10. To increase the utility of the atlas in future research, we reprocessed the data to make them available on the newer genome assemblies. The data include observed frequencies of transcription start sites (TSSs) based on the realignment of CAGE reads, and TSS peaks that are converted from those based on the previous reference. Annotations of the peak names were also updated based on the latest public databases. The reprocessed results enable us to examine frequencies of transcription initiation on the newer genome assemblies and to refer to promoters with updated information consistently across genome assemblies.
Treu, Laura; Kougias, Panagiotis G; Campanaro, Stefano; Bassani, Ilaria; Angelidaki, Irini
2016-09-01
This research aimed to better characterize the biogas microbiome by means of high-throughput metagenomic sequencing and to elucidate the core microbial consortium existing in biogas reactors independently of the operational conditions. Assembly of shotgun reads followed by an established binning strategy resulted in the largest extraction to date of microbial genomes involved in biogas-producing systems. Remarkably, the vast majority of the 236 extracted genome bins could only be characterized at high taxonomic levels. This result confirms that the biogas microbiome is composed of a consortium of unknown species. A comparative analysis between the genome bins of the current study and those extracted from a previous metagenomic assembly demonstrated a similar phylogenetic distribution of the main taxa. Finally, this analysis led to the identification of a subset of common microbes that could be considered the core essential group in biogas production. Copyright © 2016 Elsevier Ltd. All rights reserved.
Assembly: a resource for assembled genomes at NCBI
Kitts, Paul A.; Church, Deanna M.; Thibaud-Nissen, Françoise; Choi, Jinna; Hem, Vichet; Sapojnikov, Victor; Smith, Robert G.; Tatusova, Tatiana; Xiang, Charlie; Zherikov, Andrey; DiCuccio, Michael; Murphy, Terence D.; Pruitt, Kim D.; Kimchi, Avi
2016-01-01
The NCBI Assembly database (www.ncbi.nlm.nih.gov/assembly/) provides stable accessioning and data tracking for genome assembly data. The model underlying the database can accommodate a range of assembly structures, including sets of unordered contig or scaffold sequences, bacterial genomes consisting of a single complete chromosome, or complex structures such as a human genome with modeled allelic variation. The database provides an assembly accession and version to unambiguously identify the set of sequences that make up a particular version of an assembly, and tracks changes to updated genome assemblies. The Assembly database reports metadata such as assembly names, simple statistical reports of the assembly (number of contigs and scaffolds, contiguity metrics such as contig N50, total sequence length and total gap length) as well as the assembly update history. The Assembly database also tracks the relationship between an assembly submitted to the International Nucleotide Sequence Database Consortium (INSDC) and the assembly represented in the NCBI RefSeq project. Users can find assemblies of interest by querying the Assembly Resource directly or by browsing available assemblies for a particular organism. Links in the Assembly Resource allow users to easily download sequence and annotations for current versions of genome assemblies from the NCBI genomes FTP site. PMID:26578580
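Contig N50, one of the contiguity metrics reported by the Assembly database, is the length L such that contigs of length at least L account for at least half of the total assembly length. A small worked sketch with made-up contig lengths:

```python
# Contig N50: sort contigs by length (descending) and walk down until the
# running total reaches half of the assembly length.
def n50(lengths):
    total = sum(lengths)
    running = 0
    for length in sorted(lengths, reverse=True):
        running += length
        if running * 2 >= total:
            return length

contig_lengths = [5000, 4000, 3000, 2000, 1000]   # hypothetical contigs
print(n50(contig_lengths))  # 4000, since 5000 + 4000 = 9000 >= 15000 / 2
```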
Wickham, James D.; Homer, Collin G.; Vogelmann, James E.; McKerrow, Alexa; Mueller, Rick; Herold, Nate; Coulston, John
2014-01-01
The Multi-Resolution Land Characteristics (MRLC) Consortium demonstrates the national benefits of USA Federal collaboration. Starting in the mid-1990s as a small group with the straightforward goal of compiling a comprehensive national Landsat dataset that could be used to meet agencies’ needs, MRLC has grown into a group of 10 USA Federal Agencies that coordinate the production of five different products, including the National Land Cover Database (NLCD), the Coastal Change Analysis Program (C-CAP), the Cropland Data Layer (CDL), the Gap Analysis Program (GAP), and the Landscape Fire and Resource Management Planning Tools (LANDFIRE). As a set, the products include almost every aspect of land cover from impervious surface to detailed crop and vegetation types to fire fuel classes. Some products can be used for land cover change assessments because they cover multiple time periods. The MRLC Consortium has become a collaborative forum, where members share research, methodological approaches, and data to produce products using established protocols, and we believe it is a model for the production of integrated land cover products at national to continental scales. We provide a brief overview of each of the main products produced by MRLC and examples of how each product has been used. We follow that with a discussion of the impact of the MRLC program and a brief overview of future plans.
The role of expert searching in the Family Physicians' Inquiries Network (FPIN)*
Ward, Deborah; Meadows, Susan E.; Nashelsky, Joan E.
2005-01-01
Objective: This article describes the contributions of medical librarians, as members of the Family Physicians' Inquiries Network (FPIN), to the creation of a database of clinical questions and answers that allows family physicians to practice evidence-based medicine using high-quality information at the point of care. The medical librarians have contributed their evidence-based search expertise and knowledge of information systems that support the processes and output of the consortium. Methods: Since its inception, librarians have been included as valued members of the FPIN community. FPIN recognizes the search expertise of librarians, and each FPIN librarian must meet qualifications demonstrating appropriate experience and training in evidence-based medicine. The consortium works collaboratively to produce the Clinical Inquiries series published in family medicine publications. Results: Over 170 Clinical Inquiries have appeared in Journal of Family Practice (JFP) and American Family Physician (AFP). Surveys have shown that this series has become the most widely read part of the JFP Website. As a result, FPIN has formalized specific librarian roles that have helped build the organizational infrastructure. Conclusions: All of the activities of the consortium are highly collaborative, and the librarian community reflects that. The FPIN librarians are valuable and equal contributors to the process of creating, updating, and maintaining high-quality clinical information for practicing primary care physicians. Of particular value is the skill of expert searching that the librarians bring to FPIN's products. PMID:15685280
Agile convolutional neural network for pulmonary nodule classification using CT images.
Zhao, Xinzhuo; Liu, Liyao; Qi, Shouliang; Teng, Yueyang; Li, Jianhua; Qian, Wei
2018-04-01
To distinguish benign from malignant pulmonary nodules using CT images is critical for their precise diagnosis and treatment. A new Agile convolutional neural network (CNN) framework is proposed to overcome the challenges of a small-scale medical image database and the small size of the nodules, thereby improving the performance of pulmonary nodule classification using CT images. A hybrid CNN of LeNet and AlexNet is constructed by combining the layer settings of LeNet with the parameter settings of AlexNet. A dataset of 743 CT image nodule samples is built from the 1018 CT scans of the LIDC to train and evaluate the Agile CNN model. By adjusting the kernel size, learning rate, and other factors, the effect of these parameters on the performance of the CNN model is investigated, and an optimized CNN configuration is obtained. After fine optimization of the settings, the accuracy and the area under the curve reach 0.822 and 0.877, respectively. The accuracy of the CNN depends significantly on the kernel size, learning rate, training batch size, dropout, and weight initialization. The best performance is achieved when the kernel size is set to [Formula: see text], the learning rate is 0.005, the batch size is 32, and dropout and Gaussian initialization are used. This competitive performance demonstrates that the proposed CNN framework and the optimization strategy of the CNN parameters are suitable for pulmonary nodule classification characterized by small medical datasets and small targets. The classification model might help diagnose and treat pulmonary nodules effectively.
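The abstract reports the training hyperparameters but elides the kernel size and does not spell out the exact layer configuration. The sketch below is therefore only an illustration of a LeNet-style network trained with the stated learning rate, batch size, dropout, and Gaussian weight initialization; the kernel size, channel counts, patch size, and layer arrangement are placeholders, not the authors' Agile CNN.

```python
import torch
import torch.nn as nn

KERNEL = 5  # placeholder: the paper's kernel size is elided in the abstract

class AgileLikeCNN(nn.Module):
    """Small LeNet-style CNN for two-class nodule patches (illustrative only)."""
    def __init__(self, patch=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, KERNEL, padding=KERNEL // 2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(6, 16, KERNEL, padding=KERNEL // 2), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * (patch // 4) ** 2, 120), nn.ReLU(),
            nn.Dropout(0.5),                        # dropout, as in the abstract
            nn.Linear(120, 2),                      # benign vs. malignant
        )
        for m in self.modules():                    # Gaussian weight initialization
            if isinstance(m, (nn.Conv2d, nn.Linear)):
                nn.init.normal_(m.weight, mean=0.0, std=0.01)
                nn.init.zeros_(m.bias)

    def forward(self, x):
        return self.classifier(self.features(x))

model = AgileLikeCNN()
optimizer = torch.optim.SGD(model.parameters(), lr=0.005)   # learning rate from the abstract
batch = torch.randn(32, 1, 32, 32)                           # batch size 32
print(model(batch).shape)                                    # torch.Size([32, 2])
```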
Akune, Yukie; Lin, Chi-Hung; Abrahams, Jodie L; Zhang, Jingyu; Packer, Nicolle H; Aoki-Kinoshita, Kiyoko F; Campbell, Matthew P
2016-08-05
Glycan structures attached to proteins are composed of diverse monosaccharide sequences and linkages that are produced from precursor nucleotide-sugars by a series of glycosyltransferases. Databases of these structures are an essential resource for the interpretation of analytical data and the development of bioinformatics tools. However, with no template to predict which structures are possible, the human glycan structure databases are incomplete and rely heavily on the curation of published, experimentally determined glycan structure data. In this work, a library of 45 human glycosyltransferases was used to generate a theoretical database of N-glycan structures comprising 15 or fewer monosaccharide residues. Enzyme specificities were sourced from major online databases including Kyoto Encyclopedia of Genes and Genomes (KEGG) Glycan, Consortium for Functional Glycomics (CFG), Carbohydrate-Active enZymes (CAZy), GlycoGene DataBase (GGDB) and BRENDA. Based on the known activities, more than 1.1 million theoretical structures and 4.7 million synthetic reactions were generated and stored in our database, called UniCorn. Furthermore, we analyzed the differences between the predicted glycan structures in UniCorn and those contained in UniCarbKB (www.unicarbkb.org), a database which stores experimentally described glycan structures reported in the literature, and demonstrate that UniCorn can be used to aid in the assignment of ambiguous structures whilst also serving as a discovery database. Copyright © 2016 Elsevier Ltd. All rights reserved.
Variability in Standard Outcomes of Posterior Lumbar Fusion Determined by National Databases.
Joseph, Jacob R; Smith, Brandon W; Park, Paul
2017-01-01
National databases are used with increasing frequency in the spine surgery literature to evaluate patient outcomes. The differences between individual databases with respect to outcomes of lumbar fusion are not known. We evaluated the variability in standard outcomes of posterior lumbar fusion between the University HealthSystem Consortium (UHC) database and the Healthcare Cost and Utilization Project National Inpatient Sample (NIS). The NIS and UHC databases were queried for all posterior lumbar fusions (International Classification of Diseases, Ninth Revision code 81.07) performed in 2012. Patient demographics, comorbidities (including obesity), length of stay (LOS), in-hospital mortality, and complications such as urinary tract infection, deep venous thrombosis, pulmonary embolism, myocardial infarction, durotomy, and surgical site infection were collected using specific International Classification of Diseases, Ninth Revision codes. Analysis included 21,470 patients from the NIS database and 14,898 patients from the UHC database. Demographic data were not significantly different between databases. Obesity was more prevalent in UHC (P = 0.001). Mean LOS was 3.8 days in NIS and 4.55 days in UHC (P < 0.0001). Complications were significantly higher in UHC, including urinary tract infection, deep venous thrombosis, pulmonary embolism, myocardial infarction, surgical site infection, and durotomy. In-hospital mortality was similar between databases. The NIS and UHC databases had similar demographic patient populations undergoing posterior lumbar fusion. However, the UHC database reported a significantly higher complication rate and longer LOS. This difference may reflect academic institutions treating higher-risk patients; however, a definitive reason for the variability between databases is unknown. The inability to precisely determine the basis of the variability between databases highlights the limitations of using administrative databases for spinal outcome analysis. Copyright © 2016 Elsevier Inc. All rights reserved.
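A study like this reduces to filtering each administrative database by an ICD-9 procedure code and comparing complication rates and length of stay between cohorts. The sketch below illustrates that comparison pattern with pandas and a chi-square test; the column names and the toy rows are hypothetical and are not drawn from NIS or UHC.

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical extracts from two administrative databases; the columns
# (uti flag, length of stay) are illustrative, not the actual NIS/UHC schemas.
nis = pd.DataFrame({"uti": [0, 0, 1, 0], "los_days": [3, 4, 5, 3]})
uhc = pd.DataFrame({"uti": [1, 0, 1, 0], "los_days": [4, 5, 6, 4]})

for name, df in [("NIS", nis), ("UHC", uhc)]:
    print(name, "UTI rate:", df["uti"].mean(), "mean LOS:", df["los_days"].mean())

# 2x2 contingency table: database vs. complication status
table = [[nis["uti"].sum(), len(nis) - nis["uti"].sum()],
         [uhc["uti"].sum(), len(uhc) - uhc["uti"].sum()]]
chi2, p, _, _ = chi2_contingency(table)
print("chi-square p-value:", p)
```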
The Magnetics Information Consortium (MagIC)
NASA Astrophysics Data System (ADS)
Johnson, C.; Constable, C.; Tauxe, L.; Koppers, A.; Banerjee, S.; Jackson, M.; Solheid, P.
2003-12-01
The Magnetics Information Consortium (MagIC) is a multi-user facility to establish and maintain a state-of-the-art relational database and digital archive for rock and paleomagnetic data. The goal of MagIC is to make such data generally available and to provide an information technology infrastructure for these and other research-oriented databases run by the international community. As its name implies, MagIC will not be restricted to paleomagnetic or rock magnetic data only, although MagIC will focus on these kinds of information during its setup phase. MagIC will be hosted under EarthRef.org at http://earthref.org/MAGIC/ where two "integrated" web portals will be developed, one for paleomagnetism (currently functional as a prototype that can be explored via the http://earthref.org/databases/PMAG/ link) and one for rock magnetism. The MagIC database will store all measurements and their derived properties for studies of paleomagnetic directions (inclination, declination) and their intensities, and for rock magnetic experiments (hysteresis, remanence, susceptibility, anisotropy). Ultimately, this database will allow researchers to study "on the internet" and to download important data sets that display paleo-secular variations in the intensity of the Earth's magnetic field over geological time, or that display magnetic data in typical Zijderveld, hysteresis/FORC and various magnetization/remanence diagrams. The MagIC database is completely integrated in the EarthRef.org relational database structure and thus benefits significantly from already-existing common database components, such as the EarthRef Reference Database (ERR) and Address Book (ERAB). The ERR allows researchers to find complete sets of literature resources as used in GERM (Geochemical Earth Reference Model), REM (Reference Earth Model) and MagIC. The ERAB contains addresses for all contributors to the EarthRef.org databases, and also for those who participated in data collection, archiving and analysis in the magnetic studies. Integration with these existing components will guarantee direct traceability to the original sources of the MagIC data and metadata. The MagIC database design focuses around the general workflow that results in the determination of typical paleomagnetic and rock magnetic analyses. This ensures that individual data points can be traced between the actual measurements and their associated specimen, sample, site, rock formation and locality. This permits a distinction between original and derived data, where the actual measurements are performed at the specimen level, and data at the sample level and higher are then derived products in the database. These relations will also allow recalculation of derived properties, such as site means, when new data becomes available for a specific locality. Data contribution to the MagIC database is critical in achieving a useful research tool. We have developed a standard data and metadata template that can be used to provide all data at the same time as publication. Software tools are provided to facilitate easy population of these templates. The tools allow for the import/export of data files in a delimited text format, and they provide some advanced functionality to validate data and to check internal coherence of the data in the template. During and after publication these standardized MagIC templates will be stored in the ERR database of EarthRef.org from where they can be downloaded at all times. 
Finally, the contents of these template files will be automatically parsed into the online relational database.
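The recalculation of derived properties mentioned above, such as a site mean recomputed when new specimen data arrive, is conventionally done with Fisher (1953) statistics on unit vectors. The sketch below computes a Fisher mean declination, inclination, and precision parameter from hypothetical specimen directions; it illustrates the standard calculation, not MagIC's internal code.

```python
import numpy as np

def fisher_mean(decs_deg, incs_deg):
    """Fisher mean direction from declination/inclination pairs (degrees)."""
    d = np.radians(decs_deg)
    i = np.radians(incs_deg)
    x = np.cos(i) * np.cos(d)          # north component
    y = np.cos(i) * np.sin(d)          # east component
    z = np.sin(i)                      # down component
    rx, ry, rz = x.sum(), y.sum(), z.sum()
    r = np.sqrt(rx**2 + ry**2 + rz**2)                 # resultant vector length
    mean_dec = np.degrees(np.arctan2(ry, rx)) % 360
    mean_inc = np.degrees(np.arcsin(rz / r))
    k = (len(d) - 1) / (len(d) - r)                    # precision parameter estimate
    return mean_dec, mean_inc, k

# Hypothetical specimen directions from one site
decs = np.array([352.0, 1.5, 358.0, 5.0])
incs = np.array([48.0, 52.0, 50.0, 47.0])
print(fisher_mean(decs, incs))
```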
Improved Infrastructure for CDMS and JPL Molecular Spectroscopy Catalogues
NASA Astrophysics Data System (ADS)
Endres, Christian; Schlemmer, Stephan; Drouin, Brian; Pearson, John; Müller, Holger S. P.; Schilke, P.; Stutzki, Jürgen
2014-06-01
Over the past years, a new infrastructure for atomic and molecular databases has been developed within the framework of the Virtual Atomic and Molecular Data Centre (VAMDC). Standards for the representation of atomic and molecular data, as well as a set of protocols, have been established which now allow data to be retrieved from various databases through one portal and combined easily. Apart from spectroscopic databases such as the Cologne Database for Molecular Spectroscopy (CDMS), the Jet Propulsion Laboratory microwave, millimeter and submillimeter spectral line catalogue (JPL), and the HITRAN database, various databases on molecular collisions (BASECOL, KIDA) and reactions (UMIST) are connected. Together with other groups within the VAMDC consortium, we are working on common user tools to simplify access for new users and to tailor data requests to specific needs. In particular, this includes tools to support the analysis of complex observational data obtained with the ALMA telescope. In this presentation, requests to CDMS and JPL will be used to explain the basic concepts and the tools provided by VAMDC. In addition, a new portal to CDMS will be presented which has a number of new features, in particular meaningful quantum numbers, references linked to data points, access to state energies, and improved documentation. Fit files are accessible for download and queries to other databases are possible.
GlycomeDB – integration of open-access carbohydrate structure databases
Ranzinger, René; Herget, Stephan; Wetter, Thomas; von der Lieth, Claus-Wilhelm
2008-01-01
Background Although carbohydrates are the third major class of biological macromolecules, after proteins and DNA, there is neither a comprehensive database for carbohydrate structures nor an established universal structure encoding scheme for computational purposes. Funding for further development of the Complex Carbohydrate Structure Database (CCSD or CarbBank) ceased in 1997, and since then several initiatives have developed independent databases with partially overlapping foci. For each database, different encoding schemes for residues and sequence topology were designed. Therefore, it is virtually impossible to obtain an overview of all deposited structures or to compare the contents of the various databases. Results We have implemented procedures which download the structures contained in the seven major databases, e.g. GLYCOSCIENCES.de, the Consortium for Functional Glycomics (CFG), the Kyoto Encyclopedia of Genes and Genomes (KEGG) and the Bacterial Carbohydrate Structure Database (BCSDB). We have created a new database called GlycomeDB, containing all structures, their taxonomic annotations and references (IDs) for the original databases. More than 100000 datasets were imported, resulting in more than 33000 unique sequences now encoded in GlycomeDB using the universal format GlycoCT. Inconsistencies were found in all public databases, which were discussed and corrected in multiple feedback rounds with the responsible curators. Conclusion GlycomeDB is a new, publicly available database for carbohydrate sequences with a unified, all-encompassing structure encoding format and NCBI taxonomic referencing. The database is updated weekly and can be downloaded free of charge. The JAVA application GlycoUpdateDB is also available for establishing and updating a local installation of GlycomeDB. With the advent of GlycomeDB, the distributed islands of knowledge in glycomics are now bridged to form a single resource. PMID:18803830
Remote sensing and GIS technology in the Global Land Ice Measurements from Space (GLIMS) Project
Raup, B.; Kääb, Andreas; Kargel, J.S.; Bishop, M.P.; Hamilton, G.; Lee, E.; Paul, F.; Rau, F.; Soltesz, D.; Khalsa, S.J.S.; Beedle, M.; Helm, C.
2007-01-01
Global Land Ice Measurements from Space (GLIMS) is an international consortium established to acquire satellite images of the world's glaciers, analyze them for glacier extent and changes, and to assess these change data in terms of forcings. The consortium is organized into a system of Regional Centers, each of which is responsible for glaciers in their region of expertise. Specialized needs for mapping glaciers in a distributed analysis environment require considerable work developing software tools: terrain classification emphasizing snow, ice, water, and admixtures of ice with rock debris; change detection and analysis; visualization of images and derived data; interpretation and archival of derived data; and analysis to ensure consistency of results from different Regional Centers. A global glacier database has been designed and implemented at the National Snow and Ice Data Center (Boulder, CO); parameters have been expanded from those of the World Glacier Inventory (WGI), and the database has been structured to be compatible with (and to incorporate) WGI data. The project as a whole was originated, and has been coordinated by, the US Geological Survey (Flagstaff, AZ), which has also led the development of an interactive tool for automated analysis and manual editing of glacier images and derived data (GLIMSView). This article addresses remote sensing and Geographic Information Science techniques developed within the framework of GLIMS in order to fulfill the goals of this distributed project. Sample applications illustrating the developed techniques are also shown. © 2006 Elsevier Ltd. All rights reserved.
Photovoltaic Manufacturing Consortium (PVMC) – Enabling America’s Solar Revolution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Metacarpa, David
The U.S. Photovoltaic Manufacturing Consortium (US-PVMC) is an industry-led consortium created with the mission to accelerate the research, development, manufacturing, field testing, commercialization, and deployment of next-generation solar photovoltaic technologies. Formed as part of the U.S. Department of Energy's (DOE) SunShot initiative, and headquartered in New York State, PVMC is managed by the State University of New York Polytechnic Institute (SUNY Poly) at the Colleges of Nanoscale Science and Engineering. PVMC is a hybrid of an industry-led consortium and a manufacturing development facility, with capabilities for collaborative and proprietary industry engagement. Through its technology development programs, advanced manufacturing development facilities, system demonstrations, and reliability and testing capabilities, PVMC has demonstrated itself to be a recognized proving ground for innovative solar technologies and system designs. PVMC comprises multiple locations, with the core manufacturing and deployment support activities conducted at the Solar Energy Development Center (SEDC) and the core Si wafering and metrology technologies headed out of the University of Central Florida. The SEDC provides a pilot line for proof-of-concept prototyping, offering critical opportunities to demonstrate emerging concepts in PV manufacturing, such as evaluations of innovative materials, system components, and PV system designs. The facility, located in Halfmoon, NY, encompasses 40,000 square feet of dedicated PV development space. The infrastructure and capabilities housed at PVMC include PV system-level testing at the Prototype Demonstration Facility (PDF), manufacturing-scale cell and module fabrication at the Manufacturing Development Facility (MDF), cell and module testing, and reliability equipment on its PV pilot line, all integrated with a PV performance database and analytical characterizations for PVMC and its partners' test and commercial arrays. Additional development and deployment support is also housed at the SEDC, such as cost modeling and cost-model-based development activities for PV and thin-film modules, components, and system-level designs for reduced LCOE through lower installation hardware costs, labor reductions, soft costs, and reduced operations and maintenance costs. The consortium's activities started with an infrastructure and capabilities build-out focused on CIGS thin-film photovoltaics, with a particular focus on flexible cell and module production. As the marketplace changed and partners' objectives shifted, the consortium shifted heavily toward deployment and market-pull activities including balance of system, cost modeling, and installation cost reduction efforts, along with impacts on performance and DER operational costs. The consortium consisted of a wide array of PV supply chain companies, from equipment and component suppliers through national developers and installers, with a particular focus on commercial-scale deployments (typically 25 to 2MW installations). With DOE funding ending after the fifth budget period, the advantages and disadvantages of such a consortium are detailed, along with potential avenues for self-sustainability.
National Land Cover Database 2011 (NLCD 2011) is the most recent national land cover product created by the Multi-Resolution Land Characteristics (MRLC) Consortium. NLCD 2011 provides, for the first time, the capability to assess wall-to-wall, spatially explicit, national land cover changes and trends across the United States from 2001 to 2011. As with the two previous NLCD land cover products, NLCD 2011 keeps the same 16-class land cover classification scheme that has been applied consistently across the United States at a spatial resolution of 30 meters. NLCD 2011 is based primarily on a decision-tree classification of circa 2011 Landsat satellite data. This dataset is associated with the following publication: Homer, C., J. Dewitz, L. Yang, S. Jin, P. Danielson, G. Xian, J. Coulston, N. Herold, J. Wickham, and K. Megown. Completion of the 2011 National Land Cover Database for the Conterminous United States – Representing a Decade of Land Cover Change Information. PHOTOGRAMMETRIC ENGINEERING AND REMOTE SENSING. American Society for Photogrammetry and Remote Sensing, Bethesda, MD, USA, 81(0): 345-354, (2015).
Significance of genome-wide association studies in molecular anthropology.
Gupta, Vipin; Khadgawat, Rajesh; Sachdeva, Mohinder Pal
2009-12-01
The successful advent of the genome-wide approach in association studies raises the hopes of human geneticists for solving the genetic maze of complex traits, especially disorders. This approach, which draws on cutting-edge technology and is supported by big science projects (such as the Human Genome Project and, even more importantly, the International HapMap Project) and various important databases (the SNP database, CNV database, etc.), has had unprecedented success in rapidly uncovering many of the genetic determinants of complex disorders. The reach of this approach into the genetics of classical anthropological variables like height, skin color, and eye color, and into other genome diversity projects, has certainly expanded the horizons of molecular anthropology. Therefore, in this article we propose a genome-wide association approach for molecular anthropological studies, drawing lessons from the exemplary study of the Wellcome Trust Case Control Consortium. We also highlight the importance and uniqueness of Indian population groups in facilitating study design and finding optimal solutions for other challenges related to genome-wide association.
Hymenoptera Genome Database: integrating genome annotations in HymenopteraMine.
Elsik, Christine G; Tayal, Aditi; Diesh, Colin M; Unni, Deepak R; Emery, Marianne L; Nguyen, Hung N; Hagen, Darren E
2016-01-04
We report an update of the Hymenoptera Genome Database (HGD) (http://HymenopteraGenome.org), a model organism database for insect species of the order Hymenoptera (ants, bees and wasps). HGD maintains genomic data for 9 bee species, 10 ant species and 1 wasp, including the versions of genome and annotation data sets published by the genome sequencing consortia and those provided by NCBI. A new data-mining warehouse, HymenopteraMine, based on the InterMine data warehousing system, integrates the genome data with data from external sources and facilitates cross-species analyses based on orthology. New genome browsers and annotation tools based on JBrowse/WebApollo provide easy genome navigation and viewing of high-throughput sequence data sets, and can be used for collaborative genome annotation. All of the genomes and annotation data sets are combined into a single BLAST server that allows users to select and combine sequence data sets to search. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
NASA Astrophysics Data System (ADS)
Koppers, A.; Tauxe, L.; Constable, C.; Pisarevsky, S.; Jackson, M.; Solheid, P.; Banerjee, S.; Johnson, C.; Genevey, A.; Delaney, R.; Baker, P.; Sbarbori, E.
2005-12-01
The Magnetics Information Consortium (MagIC) operates an online relational database including both rock and paleomagnetic data. The goal of MagIC is to store all measurements and their derived properties for studies of paleomagnetic directions (inclination, declination) and their intensities, and for rock magnetic experiments (hysteresis, remanence, susceptibility, anisotropy). MagIC is hosted under EarthRef.org at http://earthref.org/MAGIC/ and has two search nodes, one for paleomagnetism and one for rock magnetism. These nodes provide basic search capabilities based on location, reference, methods applied, material type and geological age, while allowing the user to drill down from sites all the way to the measurements. At each stage, the data can be saved and, if the available data supports it, the data can be visualized by plotting equal area plots, VGP location maps or typical Zijderveld, hysteresis, FORC, and various magnetization and remanence diagrams. All plots are made in SVG (scalable vector graphics) and thus can be saved and easily read into the user's favorite graphics programs without loss of resolution. User contributions to the MagIC database are critical to achieve a useful research tool. We have developed a standard data and metadata template (version 1.6) that can be used to format and upload all data at the time of publication in Earth Science journals. Software tools are provided to facilitate easy population of these templates within Microsoft Excel. These tools allow for the import/export of text files and they provide advanced functionality to manage/edit the data, and to perform various internal checks to high grade the data and to make them ready for uploading. The uploading is all done online by using the MagIC Contribution Wizard at http://earthref.org/MAGIC/upload.htm that takes only a few minutes to process a contribution of approximately 5,000 data records. After uploading, these standardized MagIC template files will be stored in the digital archives of EarthRef.org from where they can be downloaded at all times. Finally, the contents of these template files will be automatically parsed into the online relational database, making the data available for online searches in the paleomagnetic and rock magnetic search nodes. The MagIC database contains all data transferred from the IAGA paleomagnetic poles database (GPMDB), the lava flow paleosecular variation database (PSVRL), lake sediment database (SECVR) and the PINT database. In addition, a substantial amount of data compiled under the Time Averaged Field Investigations project is now included, plus a significant fraction of the data collected at SIO and the IRM. Ongoing additions of legacy data include ~40 papers from studies on the Hawaiian Islands, data compilations from archeomagnetic studies and updates to the lake sediment dataset.
The RECONS 25 Parsec Database: Who Are the Stars? Where Are the Planets?
NASA Astrophysics Data System (ADS)
Henry, Todd J.; Dieterich, S.; Hosey, A. D.; Ianna, P. A.; Jao, W.; Koerner, D. W.; Riedel, A. R.; Slatten, K. J.; Subasavage, J.; Winters, J. G.; RECONS
2013-01-01
Since 1994, RECONS (www.recons.org, REsearch Consortium On Nearby Stars) has been discovering and characterizing the Sun's neighbors. Nearby stars provide increased fluxes, larger astrometric perturbations, and higher probabilities for eventual resolution and detailed study of planets than similar stars at larger distances. Examination of the nearby stellar sample will reveal the prevalence and structure of solar systems, as well as the balance of Jovian and terrestrial worlds. These are the stars and planets that will ultimately be key in our search for life elsewhere. Here we outline what we know ... and what we don't know ... about the population of the nearest stars. We have expanded the original RECONS 10 pc horizon to 25 pc and are constructing a database that currently includes 2124 systems. By using the CTIO 0.9m telescope --- now operated by RECONS as part of the SMARTS Consortium --- we have published the first accurate parallaxes for 149 systems within 25 pc and currently have an additional 213 unpublished systems to add. Still, we predict that roughly two-thirds of the systems within 25 pc do not yet have accurate distance measurements. In addition to revealing the Sun's stellar neighbors, we have been using astrometric techniques to search for massive planets orbiting roughly 200 of the nearest red dwarfs. Unlike radial velocity searches, our astrometric effort is most sensitive to Jovian planets in Jovian orbits, i.e. those that span decades. We have now been monitoring stars for up to 13 years with positional accuracies of a few milliarcseconds per night. We have detected stellar and brown dwarf companions, as well as enigmatic, unseen secondaries, but have yet to reveal a single super-Jupiter ... a somewhat surprising result. In total, only 3% of stars within 25 pc are known to possess planets. It seems clear that we have a great deal of work to do to map out the stars, planets, and perhaps life in the solar neighborhood. This effort is supported by the NSF through grant AST-0908402 and via observations made possible by the SMARTS Consortium.
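The 25 pc horizon is defined directly by trigonometric parallax: the distance in parsecs is the reciprocal of the parallax in arcseconds, so the sample boundary corresponds to parallaxes of at least 40 milliarcseconds. A tiny worked sketch with hypothetical parallax values:

```python
# Distance from trigonometric parallax: d [pc] = 1 / p [arcsec].
# Parallax values below are hypothetical; a 25 pc horizon corresponds
# to p >= 0.040 arcsec (40 mas).
def distance_pc(parallax_mas):
    return 1000.0 / parallax_mas          # parallax given in milliarcseconds

for p_mas in (768.5, 120.0, 40.0, 25.0):
    d = distance_pc(p_mas)
    print(f"{p_mas:6.1f} mas -> {d:6.2f} pc, inside 25 pc sample: {d <= 25.0}")
```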
Accrediting osteopathic postdoctoral training institutions.
Duffy, Thomas
2011-04-01
All postdoctoral training programs approved by the American Osteopathic Association are required to be part of an Osteopathic Postdoctoral Training Institution (OPTI) consortium. The author reviews recent activities related to OPTI operations, including the transfer of the OPTI Annual Report to an electronic database, revisions to the OPTI Accreditation Handbook, training at the 2010 OPTI Workshop, and new requirements of the American Osteopathic Association Commission on Osteopathic College Accreditation. The author also reviews the OPTI accreditation process, cites common commendations and deficiencies for reviews completed from 2008 to 2010, and provides an overview of plans for future improvements.
Ahmadi, Farshid Farnood; Ebadi, Hamid
2009-01-01
3D spatial data acquired from aerial and remote sensing images by photogrammetric techniques is one of the most accurate and economical data sources for GIS, map production, and spatial data updating. However, there are still many problems concerning the storage, structuring, and appropriate management of spatial data obtained using these techniques. Given the capabilities of spatial database management systems (SDBMSs), direct integration of photogrammetric systems and spatial database management systems can save time and cost in producing and updating digital maps. This integration is accomplished by replacing digital maps with a single spatial database. Applying spatial databases overcomes the problem of managing spatial and attribute data in a coupled approach. This management approach is one of the main problems in GISs for using map products of photogrammetric workstations. By means of these integrated systems, it is also possible to provide structured spatial data, based on OGC (Open GIS Consortium) standards and topological relations between different feature classes, at the time of the feature digitizing process. In this paper, the integration of photogrammetric systems and SDBMSs is evaluated. Then, different levels of integration are described. Finally, the design, implementation, and testing of a software package called Integrated Photogrammetric and Oracle Spatial Systems (IPOSS) are presented.
MODBASE, a database of annotated comparative protein structure models
Pieper, Ursula; Eswar, Narayanan; Stuart, Ashley C.; Ilyin, Valentin A.; Sali, Andrej
2002-01-01
MODBASE (http://guitar.rockefeller.edu/modbase) is a relational database of annotated comparative protein structure models for all available protein sequences matched to at least one known protein structure. The models are calculated by MODPIPE, an automated modeling pipeline that relies on PSI-BLAST, IMPALA and MODELLER. MODBASE uses the MySQL relational database management system for flexible and efficient querying, and the MODVIEW Netscape plugin for viewing and manipulating multiple sequences and structures. It is updated regularly to reflect the growth of the protein sequence and structure databases, as well as improvements in the software for calculating the models. For ease of access, MODBASE is organized into different datasets. The largest dataset contains models for domains in 304 517 out of 539 171 unique protein sequences in the complete TrEMBL database (23 March 2001); only models based on significant alignments (PSI-BLAST E-value < 10^-4) and models assessed to have the correct fold are included. Other datasets include models for target selection and structure-based annotation by the New York Structural Genomics Research Consortium, models for prediction of genes in the Drosophila melanogaster genome, models for structure determination of several ribosomal particles and models calculated by the MODWEB comparative modeling web server. PMID:11752309
Classification of pulmonary nodules in lung CT images using shape and texture features
NASA Astrophysics Data System (ADS)
Dhara, Ashis Kumar; Mukhopadhyay, Sudipta; Dutta, Anirvan; Garg, Mandeep; Khandelwal, Niranjan; Kumar, Prafulla
2016-03-01
Differentiation of malignant and benign pulmonary nodules is important for prognosis of lung cancer. In this paper, benign and malignant nodules are classified using a support vector machine. Several shape-based and texture-based features are used to represent the pulmonary nodules in the feature space. A semi-automated technique is used for nodule segmentation. Relevant features are selected for efficient representation of nodules in the feature space. The proposed scheme and the competing technique are evaluated on a data set of 542 nodules from the Lung Image Database Consortium and Image Database Resource Initiative. Nodules with a composite malignancy rank of "1" or "2" are considered benign, and those ranked "4" or "5" are considered malignant. The area under the receiver operating characteristic curve is 0.9465 for the proposed method. The proposed method outperforms the competing technique.
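The classification step described above is a standard supervised pipeline: build a feature vector per segmented nodule, then train and evaluate a support vector machine. The sketch below shows that pipeline shape with scikit-learn on synthetic feature vectors; the real shape and texture features, the feature selection step, and the LIDC/IDRI labels are not reproduced here.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for shape/texture feature vectors of segmented nodules;
# the actual features and LIDC/IDRI labels come from the paper, not this sketch.
rng = np.random.default_rng(0)
X = rng.normal(size=(542, 20))                      # 542 nodules, 20 features
y = (X[:, :5].sum(axis=1) + rng.normal(size=542) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", probability=True).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"AUC on synthetic data: {auc:.3f}")
```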
Enhancing AFLOW Visualization using Jmol
NASA Astrophysics Data System (ADS)
Lanasa, Jacob; New, Elizabeth; Stefek, Patrik; Honaker, Brigette; Hanson, Robert; Aflow Collaboration
The AFLOW library is a database of theoretical solid-state structures and calculated properties created using high-throughput ab initio calculations. Jmol is a Java-based program capable of visualizing and analyzing complex molecular structures and energy landscapes. In collaboration with the AFLOW consortium, our goal is the enhancement of the AFLOWLIB database through the extension of Jmol's capabilities in the area of materials science. Modifications made to Jmol include the ability to read and visualize AFLOW binary alloy data files, the ability to extract information from these files using Jmol scripting macros that can be utilized in the creation of interactive web-based convex hull graphs, the capability to identify and classify local atomic environments by symmetry, and the ability to search one or more related crystal structures for atomic environments using a novel extension of inorganic polyhedron-based SMILES strings.
NASA Astrophysics Data System (ADS)
Minnett, R.; Koppers, A.; Jarboe, N.; Tauxe, L.; Constable, C.; Jonestrask, L.
2017-12-01
Challenges are faced by both new and experienced users interested in contributing their data to community repositories, in data discovery, or engaged in potentially transformative science. The Magnetics Information Consortium (https://earthref.org/MagIC) has recently simplified its data model and developed a new containerized web application to reduce the friction in contributing, exploring, and combining valuable and complex datasets for the paleo-, geo-, and rock magnetic scientific community. The new data model more closely reflects the hierarchical workflow in paleomagnetic experiments to enable adequate annotation of scientific results and ensure reproducibility. The new open-source (https://github.com/earthref/MagIC) application includes an upload tool that is integrated with the data model to provide early data validation feedback and ease the friction of contributing and updating datasets. The search interface provides a powerful full text search of contributions indexed by ElasticSearch and a wide array of filters, including specific geographic and geological timescale filtering, to support both novice users exploring the database and experts interested in compiling new datasets with specific criteria across thousands of studies and millions of measurements. The datasets are not large, but they are complex, with many results from evolving experimental and analytical approaches. These data are also extremely valuable due to the cost in collecting or creating physical samples and the, often, destructive nature of the experiments. MagIC is heavily invested in encouraging young scientists as well as established labs to cultivate workflows that facilitate contributing their data in a consistent format. This eLightning presentation includes a live demonstration of the MagIC web application, developed as a configurable container hosting an isomorphic Meteor JavaScript application, MongoDB database, and ElasticSearch search engine. Visitors can explore the MagIC Database through maps and image or plot galleries or search and filter the raw measurements and their derived hierarchy of analytical interpretations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowers, M; Robertson, S; Moore, J
Purpose: Late toxicity from radiation to critical structures limits the possible dose in radiation therapy. Perfectly conformal treatment of a target is not realizable, so the clinician must accept a certain level of collateral radiation to nearby OARs. But how much? General guidelines exist for healthy tissue sparing which guide RT treatment planning, but are these guidelines good enough to create the optimal plan given the individualized patient anatomy? We propose a means to evaluate the planned dose level to an OAR using a multi-institutional data-store of previously treated patients, so a clinician might reconsider planning objectives. Methods: The tool is built on Oncospace, a federated data-store system, which consists of planning data import, web-based analysis tools, and a database containing: 1) DVHs: dose by percent volume delivered to each ROI for each patient previously treated and included in the database; 2) Overlap Volume Histograms (OVHs): an anatomical measure defined as the percent volume of an ROI within a given distance of target structures. Clinicians know which OARs are important to spare. For any ROI, Oncospace knows for which patients' anatomy that ROI was harder to plan in the past (the OVH is less). The planned dose should be close to the least dose of previous patients. The tool displays the dose those OARs were subjected to, and the clinician can make a determination about the planning objectives used. Multiple institutions contribute to the Oncospace Consortium, and their DVH and OVH data are combined and color coded in the output. Results: The Oncospace website provides a plan quality display tool which identifies harder-to-treat patients and graphically displays the dose delivered to them for comparison with the proposed plan. Conclusion: The Oncospace Consortium manages a data-store of previously treated patients which can be used for quality checking new plans. Grant funding by Elekta.
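The two histograms the database stores are both simple per-structure summaries over voxels: a cumulative DVH gives the percent of an OAR's volume receiving at least each dose level, and an OVH gives the percent of its volume lying within each distance of the target. The sketch below computes both from hypothetical voxel-level dose and distance arrays; it mirrors the definitions in the abstract, not Oncospace's implementation.

```python
import numpy as np

def cumulative_dvh(oar_doses_gy, dose_bins_gy):
    """Percent of OAR volume receiving at least each dose level."""
    return [100.0 * np.mean(oar_doses_gy >= d) for d in dose_bins_gy]

def ovh(oar_distances_mm, distance_bins_mm):
    """Percent of OAR volume within each distance of the target surface."""
    return [100.0 * np.mean(oar_distances_mm <= r) for r in distance_bins_mm]

rng = np.random.default_rng(1)
doses = rng.gamma(shape=2.0, scale=10.0, size=5000)   # hypothetical voxel doses (Gy)
dists = rng.uniform(0.0, 40.0, size=5000)             # hypothetical voxel-to-target distances (mm)

print(cumulative_dvh(doses, [10, 20, 40]))   # percent volume at >= 10, 20, 40 Gy
print(ovh(dists, [5, 10, 20]))               # percent volume within 5, 10, 20 mm
```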
Javid, Patrick J; Oron, Assaf P; Duggan, Christopher; Squires, Robert H; Horslen, Simon P
2017-09-05
The advent of regional multidisciplinary intestinal rehabilitation programs has been associated with improved survival in pediatric intestinal failure. Yet, the optimal timing of referral for intestinal rehabilitation remains unknown. We hypothesized that the degree of intestinal failure-associated liver disease (IFALD) at initiation of intestinal rehabilitation would be associated with overall outcome. The multicenter, retrospective Pediatric Intestinal Failure Consortium (PIFCon) database was used to identify all subjects with baseline bilirubin data. Conjugated bilirubin (CBili) was used as a marker for IFALD, and we stratified baseline bilirubin values as CBili<2 mg/dL, CBili 2-4 mg/dL, and CBili>4 mg/dL. The association between baseline CBili and mortality was examined using Cox proportional hazards regression. Of 272 subjects in the database, 191 (70%) children had baseline bilirubin data collected. 38% and 28% of patients had CBili >4 mg/dL and CBili <2 mg/dL, respectively, at baseline. All-cause mortality was 23%. On univariate analysis, mortality was associated with CBili 2-4 mg/dL, CBili >4 mg/dL, prematurity, race, and small bowel atresia. On regression analysis controlling for age, prematurity, and diagnosis, the risk of mortality was increased by 3-fold for baseline CBili 2-4 mg/dL (HR 3.25 [1.07-9.92], p=0.04) and 4-fold for baseline CBili >4 mg/dL (HR 4.24 [1.51-11.92], p=0.006). On secondary analysis, CBili >4 mg/dL at baseline was associated with a lower chance of attaining enteral autonomy. In children with intestinal failure treated at intestinal rehabilitation programs, more advanced IFALD at referral is associated with increased mortality and decreased prospect of attaining enteral autonomy. Early referral of children with intestinal failure to intestinal rehabilitation programs should be strongly encouraged. Treatment Study, Level III. Copyright © 2017 Elsevier Inc. All rights reserved.
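The reported hazard ratios come from a Cox proportional hazards model with baseline CBili strata as covariates, adjusted for age, prematurity, and diagnosis. The sketch below shows the shape of such a model using the lifelines package on a small, entirely hypothetical follow-up table; it is not the PIFCon analysis and omits several of the adjustment variables.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical follow-up data; columns are illustrative, not the PIFCon schema.
# cbili_2_4 / cbili_gt4 are indicators for the baseline conjugated bilirubin strata.
df = pd.DataFrame({
    "months":    [12, 30, 8, 24, 6, 36, 18, 9, 15, 28],
    "died":      [0, 1, 1, 0, 1, 0, 1, 1, 0, 0],
    "cbili_2_4": [0, 0, 1, 0, 0, 1, 1, 0, 1, 0],
    "cbili_gt4": [0, 0, 0, 1, 1, 0, 0, 1, 0, 0],
    "premature": [1, 0, 1, 0, 1, 0, 0, 1, 1, 0],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="died")
print(cph.summary)   # the exp(coef) column gives hazard ratios, analogous to the reported HRs
```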
Perez-Riverol, Yasset; Alpi, Emanuele; Wang, Rui; Hermjakob, Henning; Vizcaíno, Juan Antonio
2015-01-01
Compared to other data-intensive disciplines such as genomics, public deposition and storage of MS-based proteomics data are still less developed due to, among other reasons, the inherent complexity of the data and the variety of data types and experimental workflows. In order to address this need, several public repositories for MS proteomics experiments have been developed, each with different purposes in mind. The most established resources are the Global Proteome Machine Database (GPMDB), PeptideAtlas, and the PRIDE database. Additionally, there are other useful (in many cases recently developed) resources such as ProteomicsDB, Mass Spectrometry Interactive Virtual Environment (MassIVE), Chorus, MaxQB, PeptideAtlas SRM Experiment Library (PASSEL), Model Organism Protein Expression Database (MOPED), and the Human Proteinpedia. In addition, the ProteomeXchange consortium has recently been developed to enable better integration of public repositories and the coordinated sharing of proteomics information, maximizing its benefit to the scientific community. Here, we will review each of the major proteomics resources independently and some tools that enable the integration, mining and reuse of the data. We will also discuss some of the major challenges and current pitfalls in the integration and sharing of the data. PMID:25158685
Enriching public descriptions of marine phages using the Genomic Standards Consortium MIGS standard
Duhaime, Melissa Beth; Kottmann, Renzo; Field, Dawn; Glöckner, Frank Oliver
2011-01-01
In any sequencing project, the possible depth of comparative analysis is determined largely by the amount and quality of the accompanying contextual data. The structure, content, and storage of this contextual data should be standardized to ensure consistent coverage of all sequenced entities and facilitate comparisons. The Genomic Standards Consortium (GSC) has developed the "Minimum Information about Genome/Metagenome Sequences (MIGS/MIMS)" checklist for the description of genomes, and here we annotate all 30 publicly available marine bacteriophage sequences to the MIGS standard. These annotations build on existing International Nucleotide Sequence Database Collaboration (INSDC) records, and confirm, as expected, that current submissions lack most MIGS fields. MIGS fields were manually curated from the literature and placed in XML format as specified by the Genomic Contextual Data Markup Language (GCDML). These "machine-readable" reports were then analyzed to highlight patterns describing this collection of genomes. Completed reports are provided in GCDML. This work represents one step towards the annotation of our complete collection of genome sequences and shows the utility of capturing richer metadata along with raw sequences. PMID:21677864
Hoffman, James M; Dunnenberger, Henry M; Kevin Hicks, J; Caudle, Kelly E; Whirl Carrillo, Michelle; Freimuth, Robert R; Williams, Marc S; Klein, Teri E; Peterson, Josh F
2016-07-01
To move beyond a select few genes/drugs, the successful adoption of pharmacogenomics into routine clinical care requires a curated and machine-readable database of pharmacogenomic knowledge suitable for use in an electronic health record (EHR) with clinical decision support (CDS). Recognizing that EHR vendors do not yet provide a standard set of CDS functions for pharmacogenetics, the Clinical Pharmacogenetics Implementation Consortium (CPIC) Informatics Working Group is developing and systematically incorporating a set of EHR-agnostic implementation resources into all CPIC guidelines. These resources illustrate how to integrate pharmacogenomic test results in clinical information systems with CDS to facilitate the use of patient genomic data at the point of care. Based on our collective experience creating existing CPIC resources and implementing pharmacogenomics at our practice sites, we outline principles to define the key features of future knowledge bases and discuss the importance of these knowledge resources for pharmacogenomics and ultimately precision medicine. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
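At its core, the machine-readable knowledge the working group describes maps a genotype (diplotype) to a predicted phenotype and then maps a drug-phenotype pair to a CDS recommendation. The sketch below shows that two-step lookup pattern; the gene, alleles, and recommendation texts are illustrative placeholders, not the content of any actual CPIC guideline.

```python
# Minimal sketch of genotype -> phenotype -> recommendation lookup for CDS.
# Star alleles and advice strings are placeholders, not CPIC guideline text.
DIPLOTYPE_TO_PHENOTYPE = {
    ("CYP2C19", "*1/*1"):  "Normal metabolizer",
    ("CYP2C19", "*2/*2"):  "Poor metabolizer",
    ("CYP2C19", "*1/*17"): "Rapid metabolizer",
}

RECOMMENDATIONS = {
    ("clopidogrel", "Poor metabolizer"):
        "Consider an alternative antiplatelet agent (illustrative text).",
    ("clopidogrel", "Normal metabolizer"):
        "Use label-recommended dosing (illustrative text).",
}

def cds_alert(gene, diplotype, drug):
    phenotype = DIPLOTYPE_TO_PHENOTYPE.get((gene, diplotype), "Indeterminate")
    advice = RECOMMENDATIONS.get((drug, phenotype), "No guideline recommendation on file.")
    return f"{gene} {diplotype} -> {phenotype}: {advice}"

print(cds_alert("CYP2C19", "*2/*2", "clopidogrel"))
```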
Lederer, Carsten W; Basak, A Nazli; Aydinok, Yesim; Christou, Soteroula; El-Beshlawy, Amal; Eleftheriou, Androulla; Fattoum, Slaheddine; Felice, Alex E; Fibach, Eitan; Galanello, Renzo; Gambari, Roberto; Gavrila, Lucian; Giordano, Piero C; Grosveld, Frank; Hassapopoulou, Helen; Hladka, Eva; Kanavakis, Emmanuel; Locatelli, Franco; Old, John; Patrinos, George P; Romeo, Giovanni; Taher, Ali; Traeger-Synodinos, Joanne; Vassiliou, Panayiotis; Villegas, Ana; Voskaridou, Ersi; Wajcman, Henri; Zafeiropoulos, Anastasios; Kleanthous, Marina
2009-01-01
Hemoglobin (Hb) disorders are common, potentially lethal monogenic diseases, posing a global health challenge. With worldwide migration and intermixing of carriers, demanding flexible health planning and patient care, hemoglobinopathies may serve as a paradigm for the use of electronic infrastructure tools in the collection of data, the dissemination of knowledge, the harmonization of treatment, and the coordination of research and preventive programs. ITHANET, a network covering thalassemias and other hemoglobinopathies, comprises 26 organizations from 16 countries, including non-European countries of origin for these diseases (Egypt, Israel, Lebanon, Tunisia and Turkey). Using electronic infrastructure tools, ITHANET aims to strengthen cross-border communication and data transfer, cooperative research and treatment of thalassemia, and to improve support and information of those affected by hemoglobinopathies. Moreover, the consortium has established the ITHANET Portal, a novel web-based instrument for the dissemination of information on hemoglobinopathies to researchers, clinicians and patients. The ITHANET Portal is a growing public resource, providing forums for discussion and research coordination, and giving access to courses and databases organized by ITHANET partners. Already a popular repository for diagnostic protocols and news related to hemoglobinopathies, the ITHANET Portal also provides a searchable, extendable database of thalassemia mutations and associated background information. The experience of ITHANET is exemplary for a consortium bringing together disparate organizations from heterogeneous partner countries to face a common health challenge. The ITHANET Portal as a web-based tool born out of this experience amends some of the problems encountered and facilitates education and international exchange of data and expertise for hemoglobinopathies.
Legal Medicine Information System using CDISC ODM.
Kiuchi, Takahiro; Yoshida, Ken-ichi; Kotani, Hirokazu; Tamaki, Keiji; Nagai, Hisashi; Harada, Kazuki; Ishikawa, Hirono
2013-11-01
We have developed a new database system for forensic autopsies, called the Legal Medicine Information System, using the Clinical Data Interchange Standards Consortium (CDISC) Operational Data Model (ODM). This system comprises two subsystems, namely the Institutional Database System (IDS) located in each institute and containing personal information, and the Central Anonymous Database System (CADS) located in the University Hospital Medical Information Network Center containing only anonymous information. CDISC ODM is used as the data transfer protocol between the two subsystems. Using the IDS, forensic pathologists and other staff can register and search for institutional autopsy information, print death certificates, and extract data for statistical analysis. They can also submit anonymous autopsy information to the CADS semi-automatically. This reduces the burden of double data entry, the time-lag of central data collection, and anxiety regarding legal and ethical issues. Using the CADS, various studies on the causes of death can be conducted quickly and easily, and the results can be used to prevent similar accidents, diseases, and abuse. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Brazil, L.
2017-12-01
The Shale Network's extensive database of water quality observations enables educational experiences about the potential impacts of resource extraction with real data. Through open source tools that are developed and maintained by the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI), researchers, educators, and citizens can access and analyze the very same data that the Shale Network team has used in peer-reviewed publications about the potential impacts of hydraulic fracturing on water. The development of the Shale Network database has been made possible through collection efforts led by an academic team and involving numerous individuals from government agencies, citizen science organizations, and private industry. Thus far, CUAHSI-supported data tools have been used to engage high school students, university undergraduate and graduate students, as well as citizens so that all can discover how energy production impacts the Marcellus Shale region, which includes Pennsylvania and other nearby states. This presentation will describe these data tools, how the Shale Network has used them in developing educational material, and the resources available to learn more.
Kessler, Michael D.; Yerges-Armstrong, Laura; Taub, Margaret A.; Shetty, Amol C.; Maloney, Kristin; Jeng, Linda Jo Bone; Ruczinski, Ingo; Levin, Albert M.; Williams, L. Keoki; Beaty, Terri H.; Mathias, Rasika A.; Barnes, Kathleen C.; Boorgula, Meher Preethi; Campbell, Monica; Chavan, Sameer; Ford, Jean G.; Foster, Cassandra; Gao, Li; Hansel, Nadia N.; Horowitz, Edward; Huang, Lili; Ortiz, Romina; Potee, Joseph; Rafaels, Nicholas; Scott, Alan F.; Vergara, Candelaria; Gao, Jingjing; Hu, Yijuan; Johnston, Henry Richard; Qin, Zhaohui S.; Padhukasahasram, Badri; Dunston, Georgia M.; Faruque, Mezbah U.; Kenny, Eimear E.; Gietzen, Kimberly; Hansen, Mark; Genuario, Rob; Bullis, Dave; Lawley, Cindy; Deshpande, Aniket; Grus, Wendy E.; Locke, Devin P.; Foreman, Marilyn G.; Avila, Pedro C.; Grammer, Leslie; Kim, Kwang-YounA; Kumar, Rajesh; Schleimer, Robert; Bustamante, Carlos; De La Vega, Francisco M.; Gignoux, Chris R.; Shringarpure, Suyash S.; Musharoff, Shaila; Wojcik, Genevieve; Burchard, Esteban G.; Eng, Celeste; Gourraud, Pierre-Antoine; Hernandez, Ryan D.; Lizee, Antoine; Pino-Yanes, Maria; Torgerson, Dara G.; Szpiech, Zachary A.; Torres, Raul; Nicolae, Dan L.; Ober, Carole; Olopade, Christopher O.; Olopade, Olufunmilayo; Oluwole, Oluwafemi; Arinola, Ganiyu; Song, Wei; Abecasis, Goncalo; Correa, Adolfo; Musani, Solomon; Wilson, James G.; Lange, Leslie A.; Akey, Joshua; Bamshad, Michael; Chong, Jessica; Fu, Wenqing; Nickerson, Deborah; Reiner, Alexander; Hartert, Tina; Ware, Lorraine B.; Bleecker, Eugene; Meyers, Deborah; Ortega, Victor E.; Pissamai, Maul R. N.; Trevor, Maul R. N.; Watson, Harold; Araujo, Maria Ilma; Oliveira, Ricardo Riccio; Caraballo, Luis; Marrugo, Javier; Martinez, Beatriz; Meza, Catherine; Ayestas, Gerardo; Herrera-Paz, Edwin Francisco; Landaverde-Torres, Pamela; Erazo, Said Omar Leiva; Martinez, Rosella; Mayorga, Alvaro; Mayorga, Luis F.; Mejia-Mejia, Delmy-Aracely; Ramos, Hector; Saenz, Allan; Varela, Gloria; Vasquez, Olga Marina; Ferguson, Trevor; Knight-Madden, Jennifer; Samms-Vaughan, Maureen; Wilks, Rainford J.; Adegnika, Akim; Ateba-Ngoa, Ulysse; Yazdanbakhsh, Maria; O'Connor, Timothy D.
2016-01-01
To characterize the extent and impact of ancestry-related biases in precision genomic medicine, we use 642 whole-genome sequences from the Consortium on Asthma among African-ancestry Populations in the Americas (CAAPA) project to evaluate typical filters and databases. We find significant correlations between estimated African ancestry proportions and the number of variants per individual in all variant classification sets but one. The source of these correlations is highlighted in more detail by looking at the interaction between filtering criteria and the ClinVar and Human Gene Mutation databases. ClinVar's correlation, representing African ancestry-related bias, has changed over time amidst monthly updates, with the most extreme switch happening between March and April of 2014 (r=0.733 to r=−0.683). We identify 68 SNPs as the major drivers of this change in correlation. As long as ancestry-related bias when using these clinical databases is minimally recognized, the genetics community will face challenges with implementation, interpretation and cost-effectiveness when treating minority populations. PMID:27725664
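The core quantity in this analysis is a correlation between per-individual ancestry proportion and per-individual variant counts. The snippet below is a minimal sketch of that calculation on made-up numbers (not CAAPA data), assuming the ancestry estimates and variant counts are already available.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical toy data: African ancestry proportion and the number of
# variants per individual surviving a given filter or database lookup.
ancestry = np.array([0.91, 0.78, 0.85, 0.52, 0.97, 0.66, 0.73, 0.88])
variant_counts = np.array([412, 371, 398, 290, 431, 334, 355, 402])

r, p = pearsonr(ancestry, variant_counts)
print(f"ancestry-count correlation r={r:.3f} (p={p:.3g})")
# In the paper, tracking this r across monthly ClinVar releases is what
# exposed the sign flip between the March and April 2014 versions.
```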
MetaboLights: towards a new COSMOS of metabolomics data management.
Steinbeck, Christoph; Conesa, Pablo; Haug, Kenneth; Mahendraker, Tejasvi; Williams, Mark; Maguire, Eamonn; Rocca-Serra, Philippe; Sansone, Susanna-Assunta; Salek, Reza M; Griffin, Julian L
2012-10-01
Exciting funding initiatives are emerging in Europe and the US for metabolomics data production, storage, dissemination and analysis. This is based on a rich ecosystem of resources around the world, which has been built during the past ten years, including but not limited to resources such as MassBank in Japan and the Human Metabolome Database in Canada. Now, the European Bioinformatics Institute has launched MetaboLights, a database for metabolomics experiments and the associated metadata (http://www.ebi.ac.uk/metabolights). It is the first comprehensive, cross-species, cross-platform metabolomics database maintained by one of the major open access data providers in molecular biology. In October, the European COSMOS consortium will start its work on metabolomics data standardization, publication and dissemination workflows. The NIH in the US is establishing 6-8 metabolomics service cores as well as a national metabolomics repository. This communication reports on MetaboLights as a new resource for metabolomics research, summarises the related developments and outlines how they may consolidate knowledge management in this third large omics field, alongside proteomics and genomics.
Biomedical science journals in the Arab world.
Tadmouri, Ghazi O
2004-10-01
Medieval Arab scientists established the basis of medical practice and gave careful attention to the publication of scientific results. At present, modern scientific publishing in the Arab world is still in a developmental stage. There are fewer than 300 Arab biomedical journals, most of which are published in Egypt, Lebanon, and the Kingdom of Saudi Arabia. Yet many of these journals do not offer online access or are not indexed in major bibliographic databases. Most indexed journals, moreover, do not have a stable presence in the popular PubMed database, and their indexing has been discontinued since 2001. Exposure of Arab biomedical journals in international indices undoubtedly plays an important role in improving their scientific quality. The successful examples discussed in this review encourage us to call for the formation of a consortium of Arab biomedical journal publishers to help shift the region's balance from biomedical data consumption to data production.
Vizcaíno, Juan Antonio; Foster, Joseph M.; Martens, Lennart
2010-01-01
Although data deposition is not yet common practice in the field of proteomics, several mass spectrometry (MS) based proteomics repositories are publicly available to the scientific community. The main existing resources are: the Global Proteome Machine Database (GPMDB), PeptideAtlas, the PRoteomics IDEntifications database (PRIDE), Tranche, and NCBI Peptidome. In this review the capabilities of each of these will be described, paying special attention to four key properties: data types stored, applicable data submission strategies, supported formats, and available data mining and visualization tools. Additionally, the data contents from model organisms will be enumerated for each resource. There are other valuable smaller and/or more specialized repositories, but they will not be covered in this review. Finally, the concept behind the ProteomeXchange consortium, a collaborative effort among the main resources in the field, will be introduced. PMID:20615486
NASA Astrophysics Data System (ADS)
Michel, L.; Motch, C.; Pineau, F. X.
2009-05-01
As members of the Survey Science Consortium of the XMM-Newton mission, the Strasbourg Observatory is in charge of the real-time cross-correlation of X-ray data with archival catalogs. We are also committed to providing specific tools to handle these cross-correlations and to propose identifications at other wavelengths. To do so, we developed a database generator (Saada) that manages persistent links and supports heterogeneous input datasets. This system makes it easy to build an archive containing numerous and complex links between individual items [1]. It also offers a powerful query engine able to select sources on the basis of the properties (existence, distance, colours) of the X-ray-archival associations. We present such a database in operation for the 2XMMi catalogue. This system is flexible enough to provide both a public data interface and a servicing interface that could be used in the framework of the Simbol-X ground segment.
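Saada's query engine is not described in detail in the abstract, but the underlying operation, matching X-ray detections against an archival catalog by angular distance, can be sketched as below. This is a generic positional cross-match on invented coordinates, not the Saada implementation.

```python
import numpy as np

def angular_separation_deg(ra1, dec1, ra2, dec2):
    """Great-circle separation in degrees (all inputs in degrees)."""
    ra1, dec1, ra2, dec2 = map(np.radians, (ra1, dec1, ra2, dec2))
    cos_sep = (np.sin(dec1) * np.sin(dec2) +
               np.cos(dec1) * np.cos(dec2) * np.cos(ra1 - ra2))
    return np.degrees(np.arccos(np.clip(cos_sep, -1.0, 1.0)))

# Hypothetical X-ray sources and archival catalog entries (RA, Dec in degrees).
xray = [(150.001, 2.201), (150.230, 2.450)]
archive = [(150.0005, 2.2012), (150.500, 2.100), (150.2302, 2.4498)]

radius = 5.0 / 3600.0  # 5 arcsec association radius
for i, (ra_x, dec_x) in enumerate(xray):
    seps = [angular_separation_deg(ra_x, dec_x, ra_a, dec_a) for ra_a, dec_a in archive]
    j = int(np.argmin(seps))
    if seps[j] <= radius:
        print(f"X-ray source {i} <-> archive source {j} ({seps[j] * 3600:.2f} arcsec)")
```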
LungMAP: The Molecular Atlas of Lung Development Program
Ardini-Poleske, Maryanne E.; Ansong, Charles; Carson, James P.; Corley, Richard A.; Deutsch, Gail H.; Hagood, James S.; Kaminski, Naftali; Mariani, Thomas J.; Potter, Steven S.; Pryhuber, Gloria S.; Warburton, David; Whitsett, Jeffrey A.; Palmer, Scott M.; Ambalavanan, Namasivayam
2017-01-01
The National Heart, Lung, and Blood Institute is funding an effort to create a molecular atlas of the developing lung (LungMAP) to serve as a research resource and public education tool. The lung is a complex organ with lengthy development time driven by interactive gene networks and dynamic cross talk among multiple cell types to control and coordinate lineage specification, cell proliferation, differentiation, migration, morphogenesis, and injury repair. A better understanding of the processes that regulate lung development, particularly alveologenesis, will have a significant impact on survival rates for premature infants born with incomplete lung development and will facilitate lung injury repair and regeneration in adults. A consortium of four research centers, a data coordinating center, and a human tissue repository provides high-quality molecular data of developing human and mouse lungs. LungMAP includes mouse and human data for cross correlation of developmental processes across species. LungMAP is generating foundational data and analysis, creating a web portal for presentation of results and public sharing of data sets, establishing a repository of young human lung tissues obtained through organ donor organizations, and developing a comprehensive lung ontology that incorporates the latest findings of the consortium. The LungMAP website (www.lungmap.net) currently contains more than 6,000 high-resolution lung images and transcriptomic, proteomic, and lipidomic human and mouse data and provides scientific information to stimulate interest in research careers for young audiences. This paper presents a brief description of research conducted by the consortium, database, and portal development and upcoming features that will enhance the LungMAP experience for a community of users. PMID:28798251
Quanbeck, Stephanie M.; Brachova, Libuse; Campbell, Alexis A.; Guan, Xin; Perera, Ann; He, Kun; Rhee, Seung Y.; Bais, Preeti; Dickerson, Julie A.; Dixon, Philip; Wohlgemuth, Gert; Fiehn, Oliver; Barkan, Lenore; Lange, Iris; Lange, B. Markus; Lee, Insuk; Cortes, Diego; Salazar, Carolina; Shuman, Joel; Shulaev, Vladimir; Huhman, David V.; Sumner, Lloyd W.; Roth, Mary R.; Welti, Ruth; Ilarslan, Hilal; Wurtele, Eve S.; Nikolau, Basil J.
2012-01-01
Metabolomics is the methodology that identifies and measures global pools of small molecules (of less than about 1,000 Da) of a biological sample, which are collectively called the metabolome. Metabolomics can therefore reveal the metabolic outcome of a genetic or environmental perturbation of a metabolic regulatory network, and thus provide insights into the structure and regulation of that network. Because of the chemical complexity of the metabolome and limitations associated with individual analytical platforms for determining the metabolome, it is currently difficult to capture the complete metabolome of an organism or tissue, which is in contrast to genomics and transcriptomics. This paper describes the analysis of Arabidopsis metabolomics data sets acquired by a consortium that includes five analytical laboratories, bioinformaticists, and biostatisticians, which aims to develop and validate metabolomics as a hypothesis-generating functional genomics tool. The consortium is determining the metabolomes of Arabidopsis T-DNA mutant stocks, grown in a standardized, controlled environment optimized to minimize environmental impacts on the metabolomes. Metabolomics data were generated with seven analytical platforms, and the combined data are being provided to the research community to formulate initial hypotheses about genes of unknown function (GUFs). A public database (www.PlantMetabolomics.org) has been developed to provide the scientific community with access to the data along with tools to allow for its interactive analysis. Exemplary datasets are discussed to validate the approach, illustrating how initial hypotheses can be generated from the consortium-produced metabolomics data and integrated with prior knowledge to provide testable hypotheses concerning the functionality of GUFs. PMID:22645570
DeLisa, J A; Jain, S S; Kirshblum, S
1998-01-01
Decision makers at the federal and state level are considering, and some states have enacted, a reduction in total United States residency positions, a shift in emphasis from specialist to generalist training, a need for programs to join together in training consortia to determine local residency position allocation strategy, a reduction in funding of international medical graduates, and a reduction in funding beyond the first certificate or a total of five years. A 5-page, 24-item questionnaire was sent to all physiatry residency training directors. The objective was to compile a descriptive database of physiatry training programs and of how their institutions might respond to cuts in graduate medical education funding. Fifty-eight (73%) of the questionnaires were returned. Most training directors believe that their primary mission is to train general physiatrists and, to a much lesser extent, to train subspecialty or research fellows. Directors were asked how they might handle reductions in house staff, such as using physician extenders, shifting clinical workload to faculty, hiring additional faculty, and funding physiatry residents from practice plans and endowments. Physiatry has had little experience (29%; 17/58) with voluntary graduate medical education consortia, but most (67%; 34/58) seem to feel that if a consortium system is mandated, they would favor a local or regional over a national body because they do not believe the specialty has a strong enough national stature. The major barriers to a consortium for graduate medical education allocation were governance, academics, finances, bureaucracy, and competition.
Kılıç, Sefa; Sagitova, Dinara M; Wolfish, Shoshannah; Bely, Benoit; Courtot, Mélanie; Ciufo, Stacy; Tatusova, Tatiana; O'Donovan, Claire; Chibucos, Marcus C; Martin, Maria J; Erill, Ivan
2016-01-01
Domain-specific databases are essential resources for the biomedical community, leveraging expert knowledge to curate published literature and provide access to referenced data and knowledge. The limited scope of these databases, however, poses important challenges on their infrastructure, visibility, funding and usefulness to the broader scientific community. CollecTF is a community-oriented database documenting experimentally validated transcription factor (TF)-binding sites in the Bacteria domain. In its quest to become a community resource for the annotation of transcriptional regulatory elements in bacterial genomes, CollecTF aims to move away from the conventional data-repository paradigm of domain-specific databases. Through the adoption of well-established ontologies, identifiers and collaborations, CollecTF has progressively also become a portal for the annotation and submission of information on transcriptional regulatory elements to major biological sequence resources (RefSeq, UniProtKB and the Gene Ontology Consortium). This fundamental change in database conception capitalizes on the domain-specific knowledge of contributing communities to provide high-quality annotations, while leveraging the availability of stable information hubs to promote long-term access and provide high visibility to the data. As a submission portal, CollecTF generates TF-binding site information through direct annotation of RefSeq genome records, definition of TF-based regulatory networks in UniProtKB entries and submission of functional annotations to the Gene Ontology. As a database, CollecTF provides enhanced search and browsing, targeted data exports, binding motif analysis tools and integration with motif discovery and search platforms. This innovative approach will allow CollecTF to focus its limited resources on the generation of high-quality information and the provision of specialized access to the data. Database URL: http://www.collectf.org/. © The Author(s) 2016. Published by Oxford University Press.
Perez-Riverol, Yasset; Alpi, Emanuele; Wang, Rui; Hermjakob, Henning; Vizcaíno, Juan Antonio
2015-03-01
Compared to other data-intensive disciplines such as genomics, public deposition and storage of MS-based proteomics data are still less developed due to, among other reasons, the inherent complexity of the data and the variety of data types and experimental workflows. In order to address this need, several public repositories for MS proteomics experiments have been developed, each with different purposes in mind. The most established resources are the Global Proteome Machine Database (GPMDB), PeptideAtlas, and the PRIDE database. Additionally, there are other useful (in many cases recently developed) resources such as ProteomicsDB, Mass Spectrometry Interactive Virtual Environment (MassIVE), Chorus, MaxQB, PeptideAtlas SRM Experiment Library (PASSEL), Model Organism Protein Expression Database (MOPED), and the Human Proteinpedia. In addition, the ProteomeXchange consortium has recently been developed to enable better integration of public repositories and the coordinated sharing of proteomics information, maximizing its benefit to the scientific community. Here, we will review each of the major proteomics resources independently and some tools that enable the integration, mining and reuse of the data. We will also discuss some of the major challenges and current pitfalls in the integration and sharing of the data. © 2014 The Authors. PROTEOMICS published by Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Minnett, R.; Koppers, A. A. P.; Jarboe, N.; Jonestrask, L.; Tauxe, L.; Constable, C.
2016-12-01
The Magnetics Information Consortium (https://earthref.org/MagIC/) develops and maintains a database and web application for supporting the paleo-, geo-, and rock magnetic scientific community. Historically, this objective has been met with an Oracle database and a Perl web application at the San Diego Supercomputer Center (SDSC). The Oracle Enterprise Cluster at SDSC, however, was decommissioned in July of 2016 and the cost for MagIC to continue using Oracle became prohibitive. This provided MagIC with a unique opportunity to reexamine the entire technology stack and data model. MagIC has developed an open-source web application using the Meteor (http://meteor.com) framework and a MongoDB database. The simplicity of the open-source full-stack framework that Meteor provides has improved MagIC's development pace and the increased flexibility of the data schema in MongoDB encouraged the reorganization of the MagIC Data Model. As a result of incorporating actively developed open-source projects into the technology stack, MagIC has benefited from their vibrant software development communities. This has translated into a more modern web application that has significantly improved the user experience for the paleo-, geo-, and rock magnetic scientific community.
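One practical consequence of the MongoDB move described above is that measurement records with different sets of columns can coexist in one collection. The sketch below, using pymongo against a local MongoDB instance, illustrates that flexibility with invented field and collection names; it is not the actual MagIC data model.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/")  # assumes a local MongoDB server
db = client["magic_demo"]                            # hypothetical database name
contributions = db["contributions"]

# Documents need not share an identical schema, which eases data-model evolution.
contributions.insert_many([
    {"contribution_id": 1, "site": "Hawaii-01", "int_abs": 3.2e-5},
    {"contribution_id": 2, "site": "Iceland-07", "dir_dec": 354.2, "dir_inc": 71.5},
])

# Query only the records that carry an absolute paleointensity value.
for doc in contributions.find({"int_abs": {"$exists": True}}):
    print(doc["site"], doc["int_abs"])
```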
Code of Federal Regulations, 2013 CFR
2013-04-01
Title 25 (Indians), § 1000.73 — Once a Tribe/Consortium has been awarded a grant, may the Tribe/Consortium obtain information from a non-BIA bureau?
Engineering Ligninolytic Consortium for Bioconversion of Lignocelluloses to Ethanol and Chemicals.
Bilal, Muhammad; Nawaz, Muhammad Zohaib; Iqbal, Hafiz M N; Hou, Jialin; Mahboob, Shahid; Al-Ghanim, Khalid A; Cheng, Hairong
2018-01-01
Rising environmental concerns and the recent global push toward cleaner production and consumption are driving the design of green industrial processes to produce alternative fuels and chemicals. Although bioethanol is one of the most promising and eco-friendly alternatives to fossil fuels, its production from food and feed crops has drawn considerable criticism. The main objective of this study was to present the noteworthy potential of lignocellulosic biomass as an abundant and renewable biological resource. Particular focus was given to engineering ligninolytic consortia for the bioconversion of lignocelluloses to ethanol and chemicals on a sustainable and environmentally sound basis. Herein, an effort has been made to extensively review, analyze, and compile salient information related to the topic. Several bibliographic databases, including PubMed, Scopus, Elsevier, Springer, and Bentham Science, were searched, and inclusion/exclusion criteria were applied to appraise the quality of the retrieved peer-reviewed literature. Bioethanol production from lignocellulosic biomass can largely overcome the shortcomings of first-generation ethanol because it utilizes inedible lignocellulosic feedstocks, primarily sourced from agricultural and forestry wastes. The two major polysaccharides in lignocellulosic biomass, cellulose and hemicellulose, form a complex lignocellulosic network by associating with lignin, which is highly recalcitrant to depolymerization. Several attempts have been made to reduce process costs by improving pretreatment. Meanwhile, the ligninolytic enzymes of white rot fungi (WRF), including laccase, lignin peroxidase (LiP), and manganese peroxidase (MnP), have emerged as versatile biocatalysts for the delignification of various lignocellulosic residues. The first part of the review focuses on engineering the ligninolytic consortium. The second part discusses WRF and its ligninolytic enzyme-based bio-delignification of lignocellulosic biomass, enzymatic hydrolysis, and fermentation of the hydrolyzed feedstock. The third part comprehensively reviews metabolic engineering, enzyme engineering, and synthetic biology approaches for ethanol and platform chemical production. Towards the end, information is also given on future perspectives. In conclusion, given the present energy and fuel crisis accompanied by global warming, lignocellulosic bioethanol holds great promise as an alternative to petroleum. Apart from bioethanol, the simultaneous production of other value-added products may improve the economics of the lignocellulosic bioethanol bioconversion process. Copyright © Bentham Science Publishers.
Chrom, Pawel; Stec, Rafal; Bodnar, Lubomir; Szczylik, Cezary
2018-01-01
Purpose: The study investigated whether a replacement of neutrophil count and platelet count by neutrophil-to-lymphocyte ratio (NLR) and platelet-to-lymphocyte ratio (PLR) within the International Metastatic Renal Cell Carcinoma Database Consortium (IMDC) model would improve its prognostic accuracy. Materials and Methods: This retrospective analysis included consecutive patients with metastatic renal cell carcinoma treated with first-line tyrosine kinase inhibitors. The IMDC and modified-IMDC models were compared using: concordance index (CI), bias-corrected concordance index (BCCI), calibration plots, the Grønnesby and Borgan test, Bayesian Information Criterion (BIC), generalized R2, Integrated Discrimination Improvement (IDI), and continuous Net Reclassification Index (cNRI) for individual risk factors and the three risk groups. Results: Three hundred and twenty-one patients were eligible for analyses. The modified-IMDC model with NLR value of 3.6 and PLR value of 157 was selected for comparison with the IMDC model. Both models were well calibrated. All other measures favoured the modified-IMDC model over the IMDC model (CI, 0.706 vs. 0.677; BCCI, 0.699 vs. 0.671; BIC, 2,176.2 vs. 2,190.7; generalized R2, 0.238 vs. 0.202; IDI, 0.044; cNRI, 0.279 for individual risk factors; and CI, 0.669 vs. 0.641; BCCI, 0.669 vs. 0.641; BIC, 2,183.2 vs. 2,198.1; generalized R2, 0.163 vs. 0.123; IDI, 0.045; cNRI, 0.165 for the three risk groups). Conclusion: Incorporation of NLR and PLR in place of neutrophil count and platelet count improved prognostic accuracy of the IMDC model. These findings require external validation before introducing into clinical practice. PMID:28253564
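To make the modified risk factors concrete, the sketch below derives NLR and PLR from blood counts and flags them against the cut-offs reported in the study (3.6 and 157). The surrounding risk-factor tallying is simplified and hypothetical; it is not the full IMDC scoring.

```python
def nlr_plr(neutrophils, lymphocytes, platelets):
    """Neutrophil-to-lymphocyte and platelet-to-lymphocyte ratios."""
    return neutrophils / lymphocytes, platelets / lymphocytes

# Cut-offs selected in the study for the modified-IMDC model.
NLR_CUTOFF, PLR_CUTOFF = 3.6, 157

# Hypothetical patient: counts in 10^9 cells/L.
nlr, plr = nlr_plr(neutrophils=5.4, lymphocytes=1.2, platelets=310)
elevated_ratio_factors = int(nlr > NLR_CUTOFF) + int(plr > PLR_CUTOFF)
print(f"NLR={nlr:.2f}, PLR={plr:.1f}, elevated-ratio risk factors={elevated_ratio_factors}")
```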
Chrom, Pawel; Stec, Rafal; Bodnar, Lubomir; Szczylik, Cezary
2018-01-01
The study investigated whether a replacement of neutrophil count and platelet count by neutrophil-to-lymphocyte ratio (NLR) and platelet-to-lymphocyte ratio (PLR) within the International Metastatic Renal Cell Carcinoma Database Consortium (IMDC) model would improve its prognostic accuracy. This retrospective analysis included consecutive patients with metastatic renal cell carcinoma treated with first-line tyrosine kinase inhibitors. The IMDC and modified-IMDC models were compared using: concordance index (CI), bias-corrected concordance index (BCCI), calibration plots, the Grønnesby and Borgan test, Bayesian Information Criterion (BIC), generalized R2, Integrated Discrimination Improvement (IDI), and continuous Net Reclassification Index (cNRI) for individual risk factors and the three risk groups. Three hundred and twenty-one patients were eligible for analyses. The modified-IMDC model with NLR value of 3.6 and PLR value of 157 was selected for comparison with the IMDC model. Both models were well calibrated. All other measures favoured the modified-IMDC model over the IMDC model (CI, 0.706 vs. 0.677; BCCI, 0.699 vs. 0.671; BIC, 2,176.2 vs. 2,190.7; generalized R2, 0.238 vs. 0.202; IDI, 0.044; cNRI, 0.279 for individual risk factors; and CI, 0.669 vs. 0.641; BCCI, 0.669 vs. 0.641; BIC, 2,183.2 vs. 2,198.1; generalized R2, 0.163 vs. 0.123; IDI, 0.045; cNRI, 0.165 for the three risk groups). Incorporation of NLR and PLR in place of neutrophil count and platelet count improved prognostic accuracy of the IMDC model. These findings require external validation before introducing into clinical practice.
Hayn, Matthew H; Hussain, Abid; Mansour, Ahmed M; Andrews, Paul E; Carpentier, Paul; Castle, Erik; Dasgupta, Prokar; Rimington, Peter; Thomas, Raju; Khan, Shamim; Kibel, Adam; Kim, Hyung; Manoharan, Murugesan; Menon, Mani; Mottrie, Alex; Ornstein, David; Peabody, James; Pruthi, Raj; Palou Redorta, Joan; Richstone, Lee; Schanne, Francis; Stricker, Hans; Wiklund, Peter; Chandrasekhar, Rameela; Wilding, Greg E; Guru, Khurshid A
2010-08-01
Robot-assisted radical cystectomy (RARC) has evolved as a minimally invasive alternative to open radical cystectomy for patients with invasive bladder cancer. We sought to define the learning curve for RARC by evaluating results from a multicenter, contemporary, consecutive series of patients who underwent this procedure. Utilizing the International Robotic Cystectomy Consortium database, a prospectively maintained and institutional review board-approved database, we identified 496 patients who underwent RARC by 21 surgeons at 14 institutions from 2003 to 2009. Cut-off points for operative time, lymph node yield (LNY), estimated blood loss (EBL), and margin positivity were identified. Using specifically designed statistical mixed models, we were able to inversely predict the number of patients required for an institution to reach the predetermined cut-off points. Mean operative time was 386 min, mean EBL was 408 ml, and mean LNY was 18. Overall, 34 of 482 patients (7%) had a positive surgical margin (PSM). Using statistical models, it was estimated that 21 patients were required for operative time to reach 6.5h and 8, 20, and 30 patients were required to reach an LNY of 12, 16, and 20, respectively. For all patients, PSM rates of <5% were achieved after 30 patients. For patients with pathologic stage higher than T2, PSM rates of <15% were achieved after 24 patients. RARC is a challenging procedure but is a technique that is reproducible throughout multiple centers. This report helps to define the learning curve for RARC and demonstrates an acceptable level of proficiency by the 30th case for proxy measures of RARC quality. Copyright (c) 2010 European Association of Urology. Published by Elsevier B.V. All rights reserved.
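The mixed models used in the study are not reproduced here, but the basic idea of inversely predicting how many cases are needed to reach a proficiency cut-off can be sketched with a simple decaying trend fitted to per-case operative times. The data and functional form below are invented for illustration only.

```python
import numpy as np

# Hypothetical operative times (hours) for a surgeon's first 40 RARC cases.
cases = np.arange(1, 41)
rng = np.random.default_rng(0)
op_time = 9.0 * cases ** -0.15 + rng.normal(0, 0.2, cases.size)

# Fit a power-law learning curve: time ~ a * case^b (linear in log-log space).
b, log_a = np.polyfit(np.log(cases), np.log(np.clip(op_time, 1e-6, None)), 1)
a = np.exp(log_a)

# Invert the fitted curve to estimate the case number at which time drops to 6.5 h.
target = 6.5
n_needed = (target / a) ** (1.0 / b)
print(f"fitted curve: time = {a:.2f} * case^{b:.3f}; ~{n_needed:.0f} cases to reach {target} h")
```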
National Land Cover Database 2001 (NLCD01)
LaMotte, Andrew E.
2016-01-01
This 30-meter data set represents land use and land cover for the conterminous United States for the 2001 time period. The data have been arranged into four tiles to facilitate timely display and manipulation within a Geographic Information System (see http://water.usgs.gov/GIS/browse/nlcd01-partition.jpg). The National Land Cover Data Set for 2001 was produced through a cooperative project conducted by the Multi-Resolution Land Characteristics (MRLC) Consortium. The MRLC Consortium is a partnership of Federal agencies (http://www.mrlc.gov), consisting of the U.S. Geological Survey (USGS), the National Oceanic and Atmospheric Administration (NOAA), the U.S. Environmental Protection Agency (USEPA), the U.S. Department of Agriculture (USDA), the U.S. Forest Service (USFS), the National Park Service (NPS), the U.S. Fish and Wildlife Service (USFWS), the Bureau of Land Management (BLM), and the USDA Natural Resources Conservation Service (NRCS). One of the primary goals of the project is to generate a current, consistent, seamless, and accurate National Land Cover Database (NLCD) circa 2001 for the United States at medium spatial resolution. For a detailed definition and discussion on MRLC and the NLCD 2001 products, refer to Homer and others (2004), (see: http://www.mrlc.gov/mrlc2k.asp). The NLCD 2001 was created by partitioning the United States into mapping zones. A total of 68 mapping zones (see http://water.usgs.gov/GIS/browse/nlcd01-mappingzones.jpg), were delineated within the conterminous United States based on ecoregion and geographical characteristics, edge-matching features, and the size requirement of Landsat mosaics. Mapping zones encompass the whole or parts of several states. Questions about the NLCD mapping zones can be directed to the NLCD 2001 Land Cover Mapping Team at the USGS/EROS, Sioux Falls, SD (605) 594-6151 or mrlc@usgs.gov.
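For users who download an NLCD 2001 tile, a typical first step is to tabulate how many 30-meter cells fall into each land-cover class. The sketch below assumes the tile has been obtained as a single-band GeoTIFF (the filename is hypothetical) and that the rasterio package is installed.

```python
import numpy as np
import rasterio

# Hypothetical local copy of one NLCD 2001 tile exported as a GeoTIFF.
with rasterio.open("nlcd2001_tile1.tif") as src:
    classes = src.read(1)          # single-band raster of land-cover class codes
    nodata = src.nodata

values, counts = np.unique(classes, return_counts=True)
cell_area_km2 = (30 * 30) / 1e6    # each cell is 30 m x 30 m

for value, count in zip(values, counts):
    if nodata is not None and value == nodata:
        continue
    print(f"class {value}: {count} cells, ~{count * cell_area_km2:.1f} km^2")
```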
LaMotte, Andrew E.; Wieczorek, Michael
2010-01-01
This 30-meter resolution data set represents the imperviousness layer for the conterminous United States for the 2001 time period. The data have been arranged into four tiles to facilitate timely display and manipulation within a Geographic Information System, browse graphic: nlcd01-partition. The National Land Cover Data Set for 2001 was produced through a cooperative project conducted by the Multi-Resolution Land Characteristics (MRLC) Consortium. The MRLC Consortium is a partnership of Federal agencies (www.mrlc.gov), consisting of the U.S. Geological Survey (USGS), the National Oceanic and Atmospheric Administration (NOAA), the U.S. Environmental Protection Agency (USEPA), the U.S. Department of Agriculture (USDA), the U.S. Forest Service (USFS), the National Park Service (NPS), the U.S. Fish and Wildlife Service (USFWS), the Bureau of Land Management (BLM), and the USDA Natural Resources Conservation Service (NRCS). One of the primary goals of the project is to generate a current, consistent, seamless, and accurate National Land Cover Database (NLCD) circa 2001 for the United States at medium spatial resolution. For a detailed definition and discussion on MRLC and the NLCD 2001 products, refer to Homer and others (2004) and http://www.mrlc.gov/mrlc2k.asp. The NLCD 2001 was created by partitioning the United States into mapping-zones. A total of 68 mapping-zones browse graphic: nlcd01-mappingzones.jpg were delineated within the conterminous United States based on ecoregion and geographical characteristics, edge-matching features, and the size requirement of Landsat mosaics. Mapping-zones encompass the whole or parts of several states. Questions about the NLCD mapping zones can be directed to the NLCD 2001 Land Cover Mapping Team at the USGS/EROS, Sioux Falls, SD (605) 594-6151 or mrlc@usgs.gov.
LaMotte, Andrew E.; Wieczorek, Michael
2010-01-01
This 30-meter resolution data set represents the tree canopy layer for the conterminous United States for the 2001 time period. The data have been arranged into four tiles to facilitate timely display and manipulation within a Geographic Information System, browse graphic: nlcd01-partition.jpg The National Land Cover Data Set for 2001 was produced through a cooperative project conducted by the Multi-Resolution Land Characteristics (MRLC) Consortium. The MRLC Consortium is a partnership of Federal agencies (www.mrlc.gov), consisting of the U.S. Geological Survey (USGS), the National Oceanic and Atmospheric Administration (NOAA), the U.S. Environmental Protection Agency (USEPA), the U.S. Department of Agriculture (USDA), the U.S. Forest Service (USFS), the National Park Service (NPS), the U.S. Fish and Wildlife Service (USFWS), the Bureau of Land Management (BLM), and the USDA Natural Resources Conservation Service (NRCS). One of the primary goals of the project is to generate a current, consistent, seamless, and accurate National Land Cover Database (NLCD) circa 2001 for the United States at medium spatial resolution. For a detailed definition and discussion on MRLC and the NLCD 2001 products, refer to Homer and others (2004) and http://www.mrlc.gov/mrlc2k.asp. The NLCD 2001 was created by partitioning the United States into mapping-zones. A total of 68 mapping-zones browse graphic: nlcd01-mappingzones.jpg were delineated within the conterminous United States based on ecoregion and geographical characteristics, edge-matching features, and the size requirement of Landsat mosaics. Mapping-zones encompass the whole or parts of several states. Questions about the NLCD mapping zones can be directed to the NLCD 2001 Land Cover Mapping Team at the USGS/EROS, Sioux Falls, SD (605) 594-6151 or mrlc@usgs.gov.
Wieczorek, Michael; LaMotte, Andrew E.
2010-01-01
This 30-meter resolution data set represents the imperviousness layer for the conterminous United States for the 2001 time period. The data have been arranged into four tiles to facilitate timely display and manipulation within a Geographic Information System, browse graphic: nlcd01-partition. The National Land Cover Data Set for 2001 was produced through a cooperative project conducted by the Multi-Resolution Land Characteristics (MRLC) Consortium. The MRLC Consortium is a partnership of Federal agencies (www.mrlc.gov), consisting of the U.S. Geological Survey (USGS), the National Oceanic and Atmospheric Administration (NOAA), the U.S. Environmental Protection Agency (USEPA), the U.S. Department of Agriculture (USDA), the U.S. Forest Service (USFS), the National Park Service (NPS), the U.S. Fish and Wildlife Service (USFWS), the Bureau of Land Management (BLM), and the USDA Natural Resources Conservation Service (NRCS). One of the primary goals of the project is to generate a current, consistent, seamless, and accurate National Land Cover Database (NLCD) circa 2001 for the United States at medium spatial resolution. For a detailed definition and discussion on MRLC and the NLCD 2001 products, refer to Homer and others (2004) and http://www.mrlc.gov/mrlc2k.asp. The NLCD 2001 was created by partitioning the United States into mapping-zones. A total of 68 mapping-zones browse graphic: nlcd01-mappingzones.jpg were delineated within the conterminous United States based on ecoregion and geographical characteristics, edge-matching features, and the size requirement of Landsat mosaics. Mapping-zones encompass the whole or parts of several states. Questions about the NLCD mapping zones can be directed to the NLCD 2001 Land Cover Mapping Team at the USGS/EROS, Sioux Falls, SD (605) 594-6151 or mrlc@usgs.gov.
LaMotte, Andrew E.; Wieczorek, Michael
2010-01-01
This 30-meter resolution data set represents the tree canopy layer for the conterminous United States for the 2001 time period. The data have been arranged into four tiles to facilitate timely display and manipulation within a Geographic Information System, browse graphic: nlcd01-partition.jpg. The National Land Cover Data Set for 2001 was produced through a cooperative project conducted by the Multi-Resolution Land Characteristics (MRLC) Consortium. The MRLC Consortium is a partnership of Federal agencies (www.mrlc.gov), consisting of the U.S. Geological Survey (USGS), the National Oceanic and Atmospheric Administration (NOAA), the U.S. Environmental Protection Agency (USEPA), the U.S. Department of Agriculture (USDA), the U.S. Forest Service (USFS), the National Park Service (NPS), the U.S. Fish and Wildlife Service (USFWS), the Bureau of Land Management (BLM), and the USDA Natural Resources Conservation Service (NRCS). One of the primary goals of the project is to generate a current, consistent, seamless, and accurate National Land Cover Database (NLCD) circa 2001 for the United States at medium spatial resolution. For a detailed definition and discussion on MRLC and the NLCD 2001 products, refer to Homer and others (2004) and http://www.mrlc.gov/mrlc2k.asp. The NLCD 2001 was created by partitioning the United States into mapping-zones. A total of 68 mapping-zones browse graphic: nlcd01-mappingzones.jpg were delineated within the conterminous United States based on ecoregion and geographical characteristics, edge-matching features, and the size requirement of Landsat mosaics. Mapping-zones encompass the whole or parts of several states. Questions about the NLCD mapping zones can be directed to the NLCD 2001 Land Cover Mapping Team at the USGS/EROS, Sioux Falls, SD (605) 594-6151 or mrlc@usgs.gov
International Cancer Genome Consortium Data Portal--a one-stop shop for cancer genomics data.
Zhang, Junjun; Baran, Joachim; Cros, A; Guberman, Jonathan M; Haider, Syed; Hsu, Jack; Liang, Yong; Rivkin, Elena; Wang, Jianxin; Whitty, Brett; Wong-Erasmus, Marie; Yao, Long; Kasprzyk, Arek
2011-01-01
The International Cancer Genome Consortium (ICGC) is a collaborative effort to characterize genomic abnormalities in 50 different cancer types. To make this data available, the ICGC has created the ICGC Data Portal. Powered by the BioMart software, the Data Portal allows each ICGC member institution to manage and maintain its own databases locally, while seamlessly presenting all the data in a single access point for users. The Data Portal currently contains data from 24 cancer projects, including ICGC, The Cancer Genome Atlas (TCGA), Johns Hopkins University, and the Tumor Sequencing Project. It consists of 3478 genomes and 13 cancer types and subtypes. Available open access data types include simple somatic mutations, copy number alterations, structural rearrangements, gene expression, microRNAs, DNA methylation and exon junctions. Additionally, simple germline variations are available as controlled access data. The Data Portal uses a web-based graphical user interface (GUI) to offer researchers multiple ways to quickly and easily search and analyze the available data. The web interface can assist in constructing complicated queries across multiple data sets. Several application programming interfaces are also available for programmatic access. Here we describe the organization, functionality, and capabilities of the ICGC Data Portal.
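Because the portal is built on BioMart, programmatic access generally follows BioMart's pattern of posting an XML query document to a martservice endpoint. The sketch below shows that pattern in Python with the requests library; the endpoint URL, dataset name, and filter/attribute names are placeholders rather than the portal's actual identifiers.

```python
import requests

# Placeholder endpoint and query: BioMart servers accept an XML query document
# via the 'query' parameter and stream back tab-separated results.
MART_URL = "https://example.org/biomart/martservice"   # hypothetical URL

query_xml = """<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE Query>
<Query virtualSchemaName="default" formatter="TSV" header="1" uniqueRows="1">
  <Dataset name="example_simple_somatic_mutation" interface="default">
    <Filter name="gene_symbol" value="TP53"/>
    <Attribute name="chromosome"/>
    <Attribute name="chromosome_start"/>
    <Attribute name="mutation_type"/>
  </Dataset>
</Query>"""

response = requests.get(MART_URL, params={"query": query_xml}, timeout=60)
response.raise_for_status()
print(response.text[:500])  # first rows of the TSV result
```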
Lung Nodule Detection via Deep Reinforcement Learning.
Ali, Issa; Hart, Gregory R; Gunabushanam, Gowthaman; Liang, Ying; Muhammad, Wazir; Nartowt, Bradley; Kane, Michael; Ma, Xiaomei; Deng, Jun
2018-01-01
Lung cancer is the most common cause of cancer-related death globally. As a preventive measure, the United States Preventive Services Task Force (USPSTF) recommends annual screening of high-risk individuals with low-dose computed tomography (CT). The resulting volume of CT scans from millions of people will pose a significant challenge for radiologists to interpret. To fill this gap, computer-aided detection (CAD) algorithms may prove to be the most promising solution. A crucial first step in the analysis of lung cancer screening results using CAD is the detection of pulmonary nodules, which may represent early-stage lung cancer. The objective of this work is to develop and validate a reinforcement learning model based on deep artificial neural networks for early detection of lung nodules in thoracic CT images. Inspired by the AlphaGo system, our deep learning algorithm takes a raw CT image as input, views it as a collection of states, and outputs a classification of whether a nodule is present or not. The dataset used to train our model is the LIDC/IDRI database hosted by the lung nodule analysis (LUNA) challenge. In total, there are 888 CT scans with annotations based on agreement from at least three out of four radiologists. As a result, there are 590 individuals having one or more nodules, and 298 having none. Our training results yielded an overall accuracy of 99.1% [sensitivity 99.2%, specificity 99.1%, positive predictive value (PPV) 99.1%, negative predictive value (NPV) 99.2%]. In our test, the results yielded an overall accuracy of 64.4% (sensitivity 58.9%, specificity 55.3%, PPV 54.2%, and NPV 60.0%). These early results show promise in solving the major issue of false positives in CT screening of lung nodules, and may help to avoid unnecessary follow-up tests and expenditures.
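The full AlphaGo-inspired reinforcement learning pipeline is beyond the scope of an abstract, but its supervised building block, a small convolutional network that maps a CT patch to a nodule / no-nodule decision, can be sketched as follows. This is a toy PyTorch model run on random tensors, assuming PyTorch is available; it is not the authors' architecture.

```python
import torch
import torch.nn as nn

class PatchClassifier(nn.Module):
    """Tiny CNN that scores a 64x64 CT patch as nodule vs. non-nodule."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = PatchClassifier()
patches = torch.randn(8, 1, 64, 64)          # stand-in for preprocessed CT patches
labels = torch.randint(0, 2, (8,))           # stand-in nodule annotations
loss = nn.CrossEntropyLoss()(model(patches), labels)
loss.backward()
print(f"toy batch loss: {loss.item():.3f}")
```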
Lung Nodule Detection via Deep Reinforcement Learning
Ali, Issa; Hart, Gregory R.; Gunabushanam, Gowthaman; Liang, Ying; Muhammad, Wazir; Nartowt, Bradley; Kane, Michael; Ma, Xiaomei; Deng, Jun
2018-01-01
Lung cancer is the most common cause of cancer-related death globally. As a preventive measure, the United States Preventive Services Task Force (USPSTF) recommends annual screening of high-risk individuals with low-dose computed tomography (CT). The resulting volume of CT scans from millions of people will pose a significant challenge for radiologists to interpret. To fill this gap, computer-aided detection (CAD) algorithms may prove to be the most promising solution. A crucial first step in the analysis of lung cancer screening results using CAD is the detection of pulmonary nodules, which may represent early-stage lung cancer. The objective of this work is to develop and validate a reinforcement learning model based on deep artificial neural networks for early detection of lung nodules in thoracic CT images. Inspired by the AlphaGo system, our deep learning algorithm takes a raw CT image as input, views it as a collection of states, and outputs a classification of whether a nodule is present or not. The dataset used to train our model is the LIDC/IDRI database hosted by the lung nodule analysis (LUNA) challenge. In total, there are 888 CT scans with annotations based on agreement from at least three out of four radiologists. As a result, there are 590 individuals having one or more nodules, and 298 having none. Our training results yielded an overall accuracy of 99.1% [sensitivity 99.2%, specificity 99.1%, positive predictive value (PPV) 99.1%, negative predictive value (NPV) 99.2%]. In our test, the results yielded an overall accuracy of 64.4% (sensitivity 58.9%, specificity 55.3%, PPV 54.2%, and NPV 60.0%). These early results show promise in solving the major issue of false positives in CT screening of lung nodules, and may help to avoid unnecessary follow-up tests and expenditures. PMID:29713615
Development of a 2001 National Land Cover Database for the United States
Homer, Collin G.; Huang, Chengquan; Yang, Limin; Wylie, Bruce K.; Coan, Michael
2004-01-01
Multi-Resolution Land Characterization 2001 (MRLC 2001) is a second-generation Federal consortium designed to create an updated pool of nationwide Landsat 5 and 7 imagery and derive a second-generation National Land Cover Database (NLCD 2001). The objectives of this multi-layer, multi-source database are twofold: first, to provide consistent land cover for all 50 States, and second, to provide a data framework that allows flexibility in developing and applying each independent data component to a wide variety of other applications. Components in the database include the following: (1) normalized imagery for three time periods per path/row, (2) ancillary data, including a 30 m Digital Elevation Model (DEM) from which slope, aspect, and slope position are derived, (3) per-pixel estimates of percent imperviousness and percent tree canopy, (4) 29 classes of land cover data derived from the imagery, ancillary data, and derivatives, and (5) classification rules, confidence estimates, and metadata from the land cover classification. This database is now being developed using a Mapping Zone approach, with 66 Zones in the continental United States and 23 Zones in Alaska. Results from three initial mapping Zones show single-pixel land cover accuracies ranging from 73 to 77 percent, imperviousness accuracies ranging from 83 to 91 percent, tree canopy accuracies ranging from 78 to 93 percent, and an estimated 50 percent increase in mapping efficiency over previous methods. The database has now entered the production phase and is being created using extensive partnering in the Federal government, with planned completion by 2006.
PIPEMicroDB: microsatellite database and primer generation tool for pigeonpea genome
Sarika; Arora, Vasu; Iquebal, M. A.; Rai, Anil; Kumar, Dinesh
2013-01-01
Molecular markers play a significant role in crop improvement for desirable characteristics, such as high yield and resistance to disease, that will benefit the crop in the long term. Pigeonpea (Cajanus cajan L.) is a legume recently sequenced by a global consortium led by ICRISAT (Hyderabad, India) and has been analysed for gene prediction, synteny maps, markers, etc. We present the PIgeonPEa Microsatellite DataBase (PIPEMicroDB) with an automated primer designing tool for the pigeonpea genome, supporting chromosome-wise as well as location-wise searches for primers. A total of 123 387 Short Tandem Repeats (STRs) were extracted from the pigeonpea genome, available in the public domain, using the MIcroSAtellite tool (MISA). The database is an online relational database based on a ‘three-tier architecture’ that catalogues information on microsatellites in MySQL, and a user-friendly interface has been developed using PHP. Searches for STRs may be customized by limiting their location on a chromosome as well as the number of markers in that range. This is a novel approach that has not been implemented in any existing marker database. The database has been further appended with Primer3 for primer design of selected markers with left and right flanking regions of up to 500 bp. This will enable researchers to select markers of choice at desired intervals over the chromosome. Furthermore, one can use individual STRs of a targeted region of the chromosome to narrow down the location of a gene of interest or linked Quantitative Trait Loci (QTLs). Although it is an in silico approach, marker searches based on the characteristics and locations of STRs are expected to be beneficial for researchers. Database URL: http://cabindb.iasri.res.in/pigeonpea/ PMID:23396298
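The repeat-mining step that feeds such a database can be approximated with a simple regular expression that scans a sequence for perfect short tandem repeats. The sketch below is a generic illustration on a made-up sequence, not the MISA tool or PIPEMicroDB's pipeline, and the minimum repeat counts are arbitrary.

```python
import re

def find_strs(sequence, min_repeats=4, motif_lengths=(1, 2, 3)):
    """Yield (start, motif, copies) for perfect tandem repeats in a DNA string."""
    sequence = sequence.upper()
    for k in motif_lengths:
        pattern = re.compile(r"(([ACGT]{%d})\2{%d,})" % (k, min_repeats - 1))
        for match in pattern.finditer(sequence):
            motif = match.group(2)
            copies = len(match.group(1)) // k
            yield match.start(), motif, copies

demo = "ttgacATATATATATggcAAAAAAcgtAGCAGCAGCAGCttag"  # hypothetical sequence
for start, motif, copies in find_strs(demo):
    print(f"pos {start}: ({motif})x{copies}")
```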
PIPEMicroDB: microsatellite database and primer generation tool for pigeonpea genome.
Sarika; Arora, Vasu; Iquebal, M A; Rai, Anil; Kumar, Dinesh
2013-01-01
Molecular markers play a significant role in crop improvement for desirable characteristics, such as high yield and resistance to disease, that will benefit the crop in the long term. Pigeonpea (Cajanus cajan L.) is a legume recently sequenced by a global consortium led by ICRISAT (Hyderabad, India) and has been analysed for gene prediction, synteny maps, markers, etc. We present the PIgeonPEa Microsatellite DataBase (PIPEMicroDB) with an automated primer designing tool for the pigeonpea genome, supporting chromosome-wise as well as location-wise searches for primers. A total of 123 387 Short Tandem Repeats (STRs) were extracted from the pigeonpea genome, available in the public domain, using the MIcroSAtellite tool (MISA). The database is an online relational database based on a 'three-tier architecture' that catalogues information on microsatellites in MySQL, and a user-friendly interface has been developed using PHP. Searches for STRs may be customized by limiting their location on a chromosome as well as the number of markers in that range. This is a novel approach that has not been implemented in any existing marker database. The database has been further appended with Primer3 for primer design of selected markers with left and right flanking regions of up to 500 bp. This will enable researchers to select markers of choice at desired intervals over the chromosome. Furthermore, one can use individual STRs of a targeted region of the chromosome to narrow down the location of a gene of interest or linked Quantitative Trait Loci (QTLs). Although it is an in silico approach, marker searches based on the characteristics and locations of STRs are expected to be beneficial for researchers. Database URL: http://cabindb.iasri.res.in/pigeonpea/
Katayama, Toshiaki; Arakawa, Kazuharu; Nakao, Mitsuteru; Ono, Keiichiro; Aoki-Kinoshita, Kiyoko F; Yamamoto, Yasunori; Yamaguchi, Atsuko; Kawashima, Shuichi; Chun, Hong-Woo; Aerts, Jan; Aranda, Bruno; Barboza, Lord Hendrix; Bonnal, Raoul Jp; Bruskiewich, Richard; Bryne, Jan C; Fernández, José M; Funahashi, Akira; Gordon, Paul Mk; Goto, Naohisa; Groscurth, Andreas; Gutteridge, Alex; Holland, Richard; Kano, Yoshinobu; Kawas, Edward A; Kerhornou, Arnaud; Kibukawa, Eri; Kinjo, Akira R; Kuhn, Michael; Lapp, Hilmar; Lehvaslaiho, Heikki; Nakamura, Hiroyuki; Nakamura, Yasukazu; Nishizawa, Tatsuya; Nobata, Chikashi; Noguchi, Tamotsu; Oinn, Thomas M; Okamoto, Shinobu; Owen, Stuart; Pafilis, Evangelos; Pocock, Matthew; Prins, Pjotr; Ranzinger, René; Reisinger, Florian; Salwinski, Lukasz; Schreiber, Mark; Senger, Martin; Shigemoto, Yasumasa; Standley, Daron M; Sugawara, Hideaki; Tashiro, Toshiyuki; Trelles, Oswaldo; Vos, Rutger A; Wilkinson, Mark D; York, William; Zmasek, Christian M; Asai, Kiyoshi; Takagi, Toshihisa
2010-08-21
Web services have become a key technology for bioinformatics, since life science databases are globally decentralized and the exponential increase in the amount of available data demands efficient systems that do not require transferring entire databases for every step of an analysis. However, various incompatibilities among database resources and analysis services make it difficult to connect and integrate these into interoperable workflows. To resolve this situation, we invited domain specialists from web service providers, client software developers, Open Bio* projects, the BioMoby project and researchers in emerging areas where a standard exchange data format is not well established, for an intensive collaboration entitled the BioHackathon 2008. The meeting was hosted by the Database Center for Life Science (DBCLS) and Computational Biology Research Center (CBRC) and was held in Tokyo from February 11th to 15th, 2008. In this report we highlight the work accomplished and the common issues that arose from this event, including the standardization of data exchange formats and services in the emerging fields of glycoinformatics, biological interaction networks, text mining, and phyloinformatics. In addition, common shared object development based on BioSQL, as well as technical challenges in large data management, asynchronous services, and security, are discussed. Consequently, we improved the interoperability of web services in several fields; however, further cooperation among major database centers and continued collaborative efforts between service providers and software developers are still necessary for an effective advance in bioinformatics web service technologies.
Korean Variant Archive (KOVA): a reference database of genetic variations in the Korean population.
Lee, Sangmoon; Seo, Jihae; Park, Jinman; Nam, Jae-Yong; Choi, Ahyoung; Ignatius, Jason S; Bjornson, Robert D; Chae, Jong-Hee; Jang, In-Jin; Lee, Sanghyuk; Park, Woong-Yang; Baek, Daehyun; Choi, Murim
2017-06-27
Despite efforts to interrogate human genome variation through large-scale databases, a systematic preference toward populations of Caucasian descent has resulted in an unintended reduction of power in studying non-Caucasians. Here we report a compilation of coding variants from 1,055 healthy Korean individuals (KOVA; Korean Variant Archive). The samples were sequenced to a mean depth of 75x, yielding 101 singleton variants per individual. Population genetics analysis demonstrates that the Korean population is a distinct ethnic group comparable to other discrete ethnic groups in Africa and Europe, providing a rationale for such independent genomic datasets. Indeed, KOVA conferred 22.8% increased variant filtering power in addition to the Exome Aggregation Consortium (ExAC) when used on Korean exomes. Functional assessment of nonsynonymous variants supported the presence of purifying selection in Koreans. Analysis of copy number variants detected 5.2 deletions and 10.3 amplifications per individual, with an increased fraction of novel variants among smaller and rarer copy number variable segments. We also report a list of germline variants that are associated with increased tumor susceptibility. This catalog can function as a critical addition to the pre-existing variant databases in pursuing genetic studies of Korean individuals.
Earth science big data at users' fingertips: the EarthServer Science Gateway Mobile
NASA Astrophysics Data System (ADS)
Barbera, Roberto; Bruno, Riccardo; Calanducci, Antonio; Fargetta, Marco; Pappalardo, Marco; Rundo, Francesco
2014-05-01
The EarthServer project (www.earthserver.eu), funded by the European Commission under its Seventh Framework Programme, aims at establishing open access and ad-hoc analytics on extreme-size Earth Science data, based on and extending leading-edge Array Database technology. The core idea is to use database query languages as the client/server interface to achieve barrier-free "mix & match" access to multi-source, any-size, multi-dimensional space-time data -- in short: "Big Earth Data Analytics" -- based on the open standards of the Open Geospatial Consortium Web Coverage Processing Service (OGC WCPS) and the W3C XQuery. EarthServer combines both standards, thereby achieving tight data/metadata integration. In addition, the rasdaman Array Database System (www.rasdaman.com) is extended with additional space-time coverage data types. On the server side, highly effective optimizations -- such as parallel and distributed query processing -- ensure scalability to Exabyte volumes. In this contribution we report on the EarthServer Science Gateway Mobile, an app for both iOS and Android-based devices that allows users to seamlessly access some of the EarthServer applications using SAML-based federated authentication and fine-grained authorisation mechanisms.
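The query-language-as-interface idea above means a client ships a short WCPS expression and receives only the processed result. The sketch below shows what such a request could look like from Python; the endpoint URL, coverage name, axis labels, and request parameters are placeholders for illustration (the exact key-value parameters can differ between server versions), not actual EarthServer endpoints or datasets.

```python
# Sketch of sending an OGC WCPS expression over HTTP, the pattern used by
# EarthServer/rasdaman services. All names and URLs below are hypothetical.
from urllib.parse import urlencode
from urllib.request import urlopen

WCPS_ENDPOINT = "https://example.org/rasdaman/ows"  # hypothetical endpoint

# Subset a space-time coverage in Lat/Long/time and encode the slice as PNG.
wcps_query = ('for c in (SampleCoverage) '
              'return encode(c[Lat(40:45), Long(10:15), ansi("2014-01-01")], "image/png")')

params = urlencode({"service": "WCS",
                    "version": "2.0.1",
                    "request": "ProcessCoverages",
                    "query": wcps_query})

def fetch_slice():
    """Issue the WCPS request; the response body would be the encoded image."""
    with urlopen(f"{WCPS_ENDPOINT}?{params}") as resp:
        return resp.read()

# The full request URL a mobile client would send:
print(f"{WCPS_ENDPOINT}?{params}")
```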
LungMAP: The Molecular Atlas of Lung Development Program.
Ardini-Poleske, Maryanne E; Clark, Robert F; Ansong, Charles; Carson, James P; Corley, Richard A; Deutsch, Gail H; Hagood, James S; Kaminski, Naftali; Mariani, Thomas J; Potter, Steven S; Pryhuber, Gloria S; Warburton, David; Whitsett, Jeffrey A; Palmer, Scott M; Ambalavanan, Namasivayam
2017-11-01
The National Heart, Lung, and Blood Institute is funding an effort to create a molecular atlas of the developing lung (LungMAP) to serve as a research resource and public education tool. The lung is a complex organ with lengthy development time driven by interactive gene networks and dynamic cross talk among multiple cell types to control and coordinate lineage specification, cell proliferation, differentiation, migration, morphogenesis, and injury repair. A better understanding of the processes that regulate lung development, particularly alveologenesis, will have a significant impact on survival rates for premature infants born with incomplete lung development and will facilitate lung injury repair and regeneration in adults. A consortium of four research centers, a data coordinating center, and a human tissue repository provides high-quality molecular data of developing human and mouse lungs. LungMAP includes mouse and human data for cross correlation of developmental processes across species. LungMAP is generating foundational data and analysis, creating a web portal for presentation of results and public sharing of data sets, establishing a repository of young human lung tissues obtained through organ donor organizations, and developing a comprehensive lung ontology that incorporates the latest findings of the consortium. The LungMAP website (www.lungmap.net) currently contains more than 6,000 high-resolution lung images and transcriptomic, proteomic, and lipidomic human and mouse data and provides scientific information to stimulate interest in research careers for young audiences. This paper presents a brief description of research conducted by the consortium, database, and portal development and upcoming features that will enhance the LungMAP experience for a community of users. Copyright © 2017 the American Physiological Society.
Fernando, Ruani N; Chaudhari, Umesh; Escher, Sylvia E; Hengstler, Jan G; Hescheler, Jürgen; Jennings, Paul; Keun, Hector C; Kleinjans, Jos C S; Kolde, Raivo; Kollipara, Laxmikanth; Kopp-Schneider, Annette; Limonciel, Alice; Nemade, Harshal; Nguemo, Filomain; Peterson, Hedi; Prieto, Pilar; Rodrigues, Robim M; Sachinidis, Agapios; Schäfer, Christoph; Sickmann, Albert; Spitkovsky, Dimitry; Stöber, Regina; van Breda, Simone G J; van de Water, Bob; Vivier, Manon; Zahedi, René P; Vinken, Mathieu; Rogiers, Vera
2016-06-01
SEURAT-1 is a joint research initiative between the European Commission and Cosmetics Europe aiming to develop in vitro- and in silico-based methods to replace the in vivo repeated dose systemic toxicity test used for the assessment of human safety. As one of the building blocks of SEURAT-1, the DETECTIVE project focused on a key element on which in vitro toxicity testing relies: the development of robust and reliable, sensitive and specific in vitro biomarkers and surrogate endpoints that can be used for safety assessments of chronically acting toxicants, relevant for humans. The work conducted by the DETECTIVE consortium partners has established a screening pipeline of functional and "-omics" technologies, including high-content and high-throughput screening platforms, to develop and investigate human biomarkers for repeated dose toxicity in cellular in vitro models. Identification and statistical selection of highly predictive biomarkers in a pathway- and evidence-based approach constitute a major step in an integrated approach towards the replacement of animal testing in human safety assessment. To discuss the final outcomes and achievements of the consortium, a meeting was organized in Brussels. This meeting brought together data-producing and supporting consortium partners. The presentations focused on the current state of ongoing and concluding projects and the strategies employed to identify new relevant biomarkers of toxicity. The outcomes and deliverables, including the dissemination of results in data-rich "-omics" databases, were discussed as were the future perspectives of the work completed under the DETECTIVE project. Although some projects were still in progress and required continued data analysis, this report summarizes the presentations, discussions and the outcomes of the project.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hendrickson, K; Phillips, M; Fishburn, M
Purpose: To implement a common database structure and user-friendly web-browser based data collection tools across several medical institutions to better support evidence-based clinical decision making and comparative effectiveness research through shared outcomes data. Methods: A consortium of four academic medical centers agreed to implement a federated database, known as Oncospace. Initial implementation has addressed issues of differences between institutions in workflow and types and breadth of structured information captured. This requires coordination of data collection from departmental oncology information systems (OIS), treatment planning systems, and hospital electronic medical records in order to include as much as possible the multi-disciplinary clinical data associated with a patient's care. Results: The original database schema was well-designed and required only minor changes to meet institution-specific data requirements. Mobile browser interfaces for data entry and review for both the OIS and the Oncospace database were tailored for the workflow of individual institutions. Federation of database queries--the ultimate goal of the project--was tested using artificial patient data. The tests serve as proof-of-principle that the system as a whole--from data collection and entry to providing responses to research queries of the federated database--was viable. The resolution of inter-institutional use of patient data for research is still not completed. Conclusions: The migration from unstructured data mainly in the form of notes and documents to searchable, structured data is difficult. Making the transition requires cooperation of many groups within the department and can be greatly facilitated by using the structured data to improve clinical processes and workflow. The original database schema design is critical to providing enough flexibility for multi-institutional use to improve each institution's ability to study outcomes, determine best practices, and support research. The project has demonstrated the feasibility of deploying a federated database environment for research purposes to multiple institutions.
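The federated queries tested above follow a simple pattern: the coordinating site fans the same request out to each institution and combines only aggregate answers. The sketch below illustrates that pattern under stated assumptions; the endpoint URLs, query string, and JSON response shape are hypothetical and are not the actual Oncospace interfaces.

```python
# Sketch of a federated-query fan-out: one de-identified query is sent to
# every institution and the per-site aggregates are summed. All URLs and the
# response format are hypothetical.
import json
from urllib.request import urlopen

SITE_ENDPOINTS = [
    "https://site-a.example.org/oncospace/query",   # hypothetical institution endpoints
    "https://site-b.example.org/oncospace/query",
]

def federated_count(query_string):
    """Fan a count query out to every site and sum the per-site aggregates."""
    total = 0
    for url in SITE_ENDPOINTS:
        with urlopen(f"{url}?{query_string}") as resp:   # each site answers {"count": N}
            total += json.load(resp)["count"]
    return total

# Example call (requires the hypothetical endpoints to exist):
# federated_count("diagnosis=head_and_neck&min_prescribed_dose=60")
```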
Establishment of Kawasaki disease database based on metadata standard.
Park, Yu Rang; Kim, Jae-Jung; Yoon, Young Jo; Yoon, Young-Kwang; Koo, Ha Yeong; Hong, Young Mi; Jang, Gi Young; Shin, Soo-Yong; Lee, Jong-Keuk
2016-07-01
Kawasaki disease (KD) is a rare disease that occurs predominantly in infants and young children. To identify KD susceptibility genes and to develop a diagnostic test, a specific therapy, or a prevention method, collecting KD patients' clinical and genomic data is one of the major challenges. For this purpose, the Kawasaki Disease Database (KDD) was developed through the efforts of the Korean Kawasaki Disease Genetics Consortium (KKDGC). KDD is a collection of 1292 clinical data records and genomic samples of 1283 patients from 13 KKDGC-participating hospitals. Each sample contains the relevant clinical data, genomic DNA and plasma samples isolated from patients' blood, omics data and KD-associated genotype data. Clinical data were collected and saved using common data elements based on the ISO/IEC 11179 metadata standard. Two genome-wide association study datasets of 482 samples in total and whole exome sequencing data of 12 samples were also collected. In addition, KDD includes rare cases of KD (16 cases with family history, 46 cases with recurrence, 119 cases with intravenous immunoglobulin non-responsiveness, and 52 cases with coronary artery aneurysm). As the first public database for KD, KDD can significantly facilitate KD studies. All data in KDD are searchable and downloadable. KDD was implemented in PHP, MySQL and Apache, with all major browsers supported. Database URL: http://www.kawasakidisease.kr. © The Author(s) 2016. Published by Oxford University Press.
2016 update of the PRIDE database and its related tools
Vizcaíno, Juan Antonio; Csordas, Attila; del-Toro, Noemi; Dianes, José A.; Griss, Johannes; Lavidas, Ilias; Mayer, Gerhard; Perez-Riverol, Yasset; Reisinger, Florian; Ternent, Tobias; Xu, Qing-Wei; Wang, Rui; Hermjakob, Henning
2016-01-01
The PRoteomics IDEntifications (PRIDE) database is one of the world-leading data repositories of mass spectrometry (MS)-based proteomics data. Since the beginning of 2014, PRIDE Archive (http://www.ebi.ac.uk/pride/archive/) is the new PRIDE archival system, replacing the original PRIDE database. Here we summarize the developments in PRIDE resources and related tools since the previous update manuscript in the Database Issue in 2013. PRIDE Archive constitutes a complete redevelopment of the original PRIDE, comprising a new storage backend, data submission system and web interface, among other components. PRIDE Archive supports the most-widely used PSI (Proteomics Standards Initiative) data standard formats (mzML and mzIdentML) and implements the data requirements and guidelines of the ProteomeXchange Consortium. The wide adoption of ProteomeXchange within the community has triggered an unprecedented increase in the number of submitted data sets (around 150 data sets per month). We outline some statistics on the current PRIDE Archive data contents. We also report on the status of the PRIDE related stand-alone tools: PRIDE Inspector, PRIDE Converter 2 and the ProteomeXchange submission tool. Finally, we will give a brief update on the resources under development ‘PRIDE Cluster’ and ‘PRIDE Proteomes’, which provide a complementary view and quality-scored information of the peptide and protein identification data available in PRIDE Archive. PMID:26527722
PanScan, the Pancreatic Cancer Cohort Consortium, and the Pancreatic Cancer Case-Control Consortium
The Pancreatic Cancer Cohort Consortium consists of more than a dozen prospective epidemiologic cohort studies within the NCI Cohort Consortium, whose leaders work together to investigate the etiology and natural history of pancreatic cancer.
Automated detection of lung nodules with three-dimensional convolutional neural networks
NASA Astrophysics Data System (ADS)
Pérez, Gustavo; Arbeláez, Pablo
2017-11-01
Lung cancer is the cancer type with the highest mortality rate worldwide. It has been shown that early detection with computed tomography (CT) scans can reduce deaths caused by this disease. Manual detection of cancer nodules is costly and time-consuming. We present a general framework for the detection of nodules in lung CT images. Our method consists of the pre-processing of a patient's CT with filtering and lung extraction from the entire volume using a previously calculated mask for each patient. From the extracted lungs, we perform a candidate generation stage using morphological operations, followed by the training of a three-dimensional convolutional neural network for feature representation and classification of the extracted candidates for false positive reduction. We perform experiments on the publicly available LIDC-IDRI dataset. Our candidate extraction approach is effective, producing precise candidates with a recall of 99.6%. In addition, the false positive reduction stage successfully classifies candidates and increases precision by a factor of 7.000.
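The false-positive-reduction step described above takes small three-dimensional sub-volumes around each candidate and classifies them as nodule or non-nodule. The following is a minimal sketch of such a 3-D CNN in PyTorch; the patch size, layer widths, and depth are assumptions for illustration and are not the architecture used by the authors.

```python
# Minimal sketch (not the authors' architecture) of a 3-D CNN that classifies
# candidate sub-volumes (assumed 32x32x32 voxel cubes) as nodule vs. non-nodule.
import torch
import torch.nn as nn

class Candidate3DCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),                      # 32 -> 16
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),                      # 16 -> 8
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),                      # 8 -> 4
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 4 * 4 * 4, 128), nn.ReLU(),
            nn.Linear(128, 2),                    # nodule vs. false positive
        )

    def forward(self, x):                         # x: (batch, 1, 32, 32, 32)
        return self.classifier(self.features(x))

model = Candidate3DCNN()
dummy = torch.randn(4, 1, 32, 32, 32)             # four candidate cubes
print(model(dummy).shape)                          # torch.Size([4, 2])
```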
Northern New Jersey Nursing Education Consortium: a partnership for graduate nursing education.
Quinless, F W; Levin, R F
1998-01-01
The purpose of this article is to describe the evolution and implementation of the Northern New Jersey Nursing Education Consortium, a consortium of seven member institutions established in 1992. Details regarding the specific functions of the consortium relative to cross-registration of students in graduate courses, financial disbursement of revenue, faculty development activities, student services, library privileges, and institutional research review board mechanisms are described. The authors also review the administrative organizational structure through which the work conducted by the consortium occurs. Both the advantages and disadvantages of such a graduate consortium are explored, and specific examples of recent potential and real conflicts are fully discussed. The authors detail the governance and structure of the consortium as a potential model for replication in other environments.
10 CFR 603.515 - Qualification of a consortium.
Code of Federal Regulations, 2013 CFR
2013-01-01
... Energy DEPARTMENT OF ENERGY (CONTINUED) ASSISTANCE REGULATIONS TECHNOLOGY INVESTMENT AGREEMENTS Pre-Award Business Evaluation Recipient Qualification § 603.515 Qualification of a consortium. (a) A consortium that... under the agreement. (b) If the prospective recipient of a TIA is a consortium that is not formally...
10 CFR 603.515 - Qualification of a consortium.
Code of Federal Regulations, 2012 CFR
2012-01-01
... Energy DEPARTMENT OF ENERGY (CONTINUED) ASSISTANCE REGULATIONS TECHNOLOGY INVESTMENT AGREEMENTS Pre-Award Business Evaluation Recipient Qualification § 603.515 Qualification of a consortium. (a) A consortium that... under the agreement. (b) If the prospective recipient of a TIA is a consortium that is not formally...
10 CFR 603.515 - Qualification of a consortium.
Code of Federal Regulations, 2014 CFR
2014-01-01
... Energy DEPARTMENT OF ENERGY (CONTINUED) ASSISTANCE REGULATIONS TECHNOLOGY INVESTMENT AGREEMENTS Pre-Award Business Evaluation Recipient Qualification § 603.515 Qualification of a consortium. (a) A consortium that... under the agreement. (b) If the prospective recipient of a TIA is a consortium that is not formally...
Code of Federal Regulations, 2010 CFR
2010-04-01
... Participation in Tribal Self-Governance Withdrawal from A Consortium Annual Funding Agreement § 1000.33 What... the Consortium agreement is reduced if: (1) The Consortium, Tribe, OSG, and bureau agree it is...
Patel, Ashokkumar A; Gilbertson, John R; Showe, Louise C; London, Jack W; Ross, Eric; Ochs, Michael F; Carver, Joseph; Lazarus, Andrea; Parwani, Anil V; Dhir, Rajiv; Beck, J Robert; Liebman, Michael; Garcia, Fernando U; Prichard, Jeff; Wilkerson, Myra; Herberman, Ronald B; Becich, Michael J
2007-06-08
The Pennsylvania Cancer Alliance Bioinformatics Consortium (PCABC, http://www.pcabc.upmc.edu) is one of the first major project-based initiatives stemming from the Pennsylvania Cancer Alliance that was funded for four years by the Department of Health of the Commonwealth of Pennsylvania. The objective of this was to initiate a prototype biorepository and bioinformatics infrastructure with a robust data warehouse by developing a statewide data model (1) for bioinformatics and a repository of serum and tissue samples; (2) a data model for biomarker data storage; and (3) a public access website for disseminating research results and bioinformatics tools. The members of the Consortium cooperate closely, exploring the opportunity for sharing clinical, genomic and other bioinformatics data on patient samples in oncology, for the purpose of developing collaborative research programs across cancer research institutions in Pennsylvania. The Consortium's intention was to establish a virtual repository of many clinical specimens residing in various centers across the state, in order to make them available for research. One of our primary goals was to facilitate the identification of cancer-specific biomarkers and encourage collaborative research efforts among the participating centers. The PCABC has developed unique partnerships so that every region of the state can effectively contribute and participate. It includes over 80 individuals from 14 organizations, and plans to expand to partners outside the State. This has created a network of researchers, clinicians, bioinformaticians, cancer registrars, program directors, and executives from academic and community health systems, as well as external corporate partners - all working together to accomplish a common mission. The various sub-committees have developed a common IRB protocol template, common data elements for standardizing data collections for three organ sites, intellectual property/tech transfer agreements, and material transfer agreements that have been approved by each of the member institutions. This was the foundational work that has led to the development of a centralized data warehouse that has met each of the institutions' IRB/HIPAA standards. Currently, this "virtual biorepository" has over 58,000 annotated samples from 11,467 cancer patients available for research purposes. The clinical annotation of tissue samples is either done manually over the internet or semi-automated batch modes through mapping of local data elements with PCABC common data elements. The database currently holds information on 7188 cases (associated with 9278 specimens and 46,666 annotated blocks and blood samples) of prostate cancer, 2736 cases (associated with 3796 specimens and 9336 annotated blocks and blood samples) of breast cancer and 1543 cases (including 1334 specimens and 2671 annotated blocks and blood samples) of melanoma. These numbers continue to grow, and plans to integrate new tumor sites are in progress. Furthermore, the group has also developed a central web-based tool that allows investigators to share their translational (genomics/proteomics) experiment data on research evaluating potential biomarkers via a central location on the Consortium's web site. The technological achievements and the statewide informatics infrastructure that have been established by the Consortium will enable robust and efficient studies of biomarkers and their relevance to the clinical course of cancer. 
Studies resulting from the creation of the Consortium may allow for better classification of cancer types, more accurate assessment of disease prognosis, a better ability to identify the most appropriate individuals for clinical trial participation, and better surrogate markers of disease progression and/or response to therapy.
25 CFR 1000.310 - What information must the Tribe's/Consortium's response contain?
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 2 2014-04-01 2014-04-01 false What information must the Tribe's/Consortium's response... INDIAN SELF-DETERMINATION AND EDUCATION ACT Reassumption § 1000.310 What information must the Tribe's/Consortium's response contain? (a) The Tribe's/Consortium's response must indicate the specific measures that...
25 CFR 1000.255 - May a Tribe/Consortium reallocate funds among construction programs?
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 2 2013-04-01 2013-04-01 false May a Tribe/Consortium reallocate funds among... INDIAN SELF-DETERMINATION AND EDUCATION ACT Construction § 1000.255 May a Tribe/Consortium reallocate funds among construction programs? Yes, a Tribe/Consortium may reallocate funds among construction...
25 CFR 1000.310 - What information must the Tribe's/Consortium's response contain?
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 2 2013-04-01 2013-04-01 false What information must the Tribe's/Consortium's response... INDIAN SELF-DETERMINATION AND EDUCATION ACT Reassumption § 1000.310 What information must the Tribe's/Consortium's response contain? (a) The Tribe's/Consortium's response must indicate the specific measures that...
25 CFR 1000.255 - May a Tribe/Consortium reallocate funds among construction programs?
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 2 2014-04-01 2014-04-01 false May a Tribe/Consortium reallocate funds among... INDIAN SELF-DETERMINATION AND EDUCATION ACT Construction § 1000.255 May a Tribe/Consortium reallocate funds among construction programs? Yes, a Tribe/Consortium may reallocate funds among construction...
24 CFR 943.124 - What elements must a consortium agreement contain?
Code of Federal Regulations, 2010 CFR
2010-04-01
... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false What elements must a consortium agreement contain? 943.124 Section 943.124 Housing and Urban Development Regulations Relating to Housing and... elements must a consortium agreement contain? (a) The consortium agreement among the participating PHAs...
24 CFR 943.124 - What elements must a consortium agreement contain?
Code of Federal Regulations, 2011 CFR
2011-04-01
... 24 Housing and Urban Development 4 2011-04-01 2011-04-01 false What elements must a consortium agreement contain? 943.124 Section 943.124 Housing and Urban Development REGULATIONS RELATING TO HOUSING AND... elements must a consortium agreement contain? (a) The consortium agreement among the participating PHAs...
24 CFR 943.124 - What elements must a consortium agreement contain?
Code of Federal Regulations, 2012 CFR
2012-04-01
... 24 Housing and Urban Development 4 2012-04-01 2012-04-01 false What elements must a consortium agreement contain? 943.124 Section 943.124 Housing and Urban Development REGULATIONS RELATING TO HOUSING AND... elements must a consortium agreement contain? (a) The consortium agreement among the participating PHAs...
24 CFR 943.124 - What elements must a consortium agreement contain?
Code of Federal Regulations, 2013 CFR
2013-04-01
... 24 Housing and Urban Development 4 2013-04-01 2013-04-01 false What elements must a consortium agreement contain? 943.124 Section 943.124 Housing and Urban Development REGULATIONS RELATING TO HOUSING AND... elements must a consortium agreement contain? (a) The consortium agreement among the participating PHAs...
24 CFR 943.124 - What elements must a consortium agreement contain?
Code of Federal Regulations, 2014 CFR
2014-04-01
... 24 Housing and Urban Development 4 2014-04-01 2014-04-01 false What elements must a consortium agreement contain? 943.124 Section 943.124 Housing and Urban Development REGULATIONS RELATING TO HOUSING AND... elements must a consortium agreement contain? (a) The consortium agreement among the participating PHAs...
NASA Astrophysics Data System (ADS)
Jarboe, N.; Minnett, R.; Constable, C.; Koppers, A. A.; Tauxe, L.
2013-12-01
The Magnetics Information Consortium (MagIC) is dedicated to supporting the paleomagnetic, geomagnetic, and rock magnetic communities through the development and maintenance of an online database (http://earthref.org/MAGIC/), data upload and quality control, searches, data downloads, and visualization tools. While MagIC has completed importing some of the IAGA paleomagnetic databases (TRANS, PINT, PSVRL, GPMDB) and continues to import others (ARCHEO, MAGST and SECVR), further individual data uploading from the community contributes a wealth of easily accessible, rich datasets. Previously, uploading data to the MagIC database required the use of an Excel spreadsheet on either a Mac or PC. The new method of uploading data utilizes an HTML5 web interface whose only requirement is a modern browser. This web interface highlights all errors discovered in the dataset at once, instead of the iterative error-checking process of the previous Excel spreadsheet data checker. As a web service, the community will always have easy access to the most up-to-date and bug-free version of the data upload software. The filtering search mechanism of the MagIC database has been changed to a more intuitive system in which the data from each contribution are displayed in tables similar to how the data are uploaded (http://earthref.org/MAGIC/search/). Searches themselves can be saved as a permanent URL, if desired; the saved search URL could then be used as a citation in a publication. When appropriate, plots (equal area, Zijderveld, ARAI, demagnetization, etc.) are associated with the data to give the user a quicker understanding of the underlying dataset. The MagIC database will continue to evolve to meet the needs of the paleomagnetic, geomagnetic, and rock magnetic communities.
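The key usability change described above is that validation reports every problem in a contribution at once rather than stopping at the first error. The sketch below illustrates that pattern; the required column names and checks are illustrative stand-ins, not the actual MagIC data model or upload code.

```python
# Sketch of "report every problem at once" validation, in the spirit of the
# MagIC HTML5 uploader. Column names and checks below are assumptions.
REQUIRED_COLUMNS = {"er_location_name", "magic_method_codes", "measurement_magn_moment"}

def validate_rows(rows):
    """Return a list of every error found, rather than stopping at the first."""
    errors = []
    for i, row in enumerate(rows, start=1):
        missing = REQUIRED_COLUMNS - row.keys()
        if missing:
            errors.append(f"row {i}: missing columns {sorted(missing)}")
        moment = row.get("measurement_magn_moment")
        if moment is not None:
            try:
                float(moment)
            except ValueError:
                errors.append(f"row {i}: measurement_magn_moment is not numeric ({moment!r})")
    return errors

rows = [
    {"er_location_name": "Site A", "magic_method_codes": "LT-AF-Z",
     "measurement_magn_moment": "1.2e-5"},
    {"er_location_name": "Site B", "measurement_magn_moment": "n/a"},
]
for err in validate_rows(rows):
    print(err)
```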
Baig, Sheharyar S; Strong, Mark; Rosser, Elisabeth; Taverner, Nicola V; Glew, Ruth; Miedzybrodzka, Zosia; Clarke, Angus; Craufurd, David; Quarrell, Oliver W
2016-10-01
Huntington's disease (HD) is a progressive neurodegenerative condition. At-risk individuals have accessed predictive testing via direct mutation testing since 1993. The UK Huntington's Prediction Consortium has collected anonymised data on UK predictive tests, annually, from 1993 to 2014: 9407 predictive tests were performed across 23 UK centres. Where gender was recorded, 4077 participants were male (44.3%) and 5122 were female (55.7%). The median age of participants was 37 years. The most common reason for predictive testing was to reduce uncertainty (70.5%). Of the 8441 predictive tests on individuals at 50% prior risk, 4629 (54.8%) were reported as mutation negative and 3790 (44.9%) were mutation positive, with 22 (0.3%) in the database being uninterpretable. Using a prevalence figure of 12.3 × 10⁻⁵, the cumulative uptake of predictive testing in the 50% at-risk UK population from 1994 to 2014 was estimated at 17.4% (95% CI: 16.9-18.0%). We present the largest study conducted on predictive testing in HD. Our findings indicate that the vast majority of individuals at risk of HD (>80%) have not undergone predictive testing. Future therapies in HD will likely target presymptomatic individuals; therefore, identifying the at-risk population whose gene status is unknown is of significant public health value.
Papachristou, Georgios I; Machicado, Jorge D; Stevens, Tyler; Goenka, Mahesh Kumar; Ferreira, Miguel; Gutierrez, Silvia C; Singh, Vikesh K; Kamal, Ayesha; Gonzalez-Gonzalez, Jose A; Pelaez-Luna, Mario; Gulla, Aiste; Zarnescu, Narcis O; Triantafyllou, Konstantinos; Barbu, Sorin T; Easler, Jeffrey; Ocampo, Carlos; Capurso, Gabriele; Archibugi, Livia; Cote, Gregory A; Lambiase, Louis; Kochhar, Rakesh; Chua, Tiffany; Tiwari, Subhash Ch; Nawaz, Haq; Park, Walter G; de-Madaria, Enrique; Lee, Peter J; Wu, Bechien U; Greer, Phil J; Dugum, Mohannad; Koutroumpakis, Efstratios; Akshintala, Venkata; Gougol, Amir
2017-01-01
We have established a multicenter international consortium to better understand the natural history of acute pancreatitis (AP) worldwide and to develop a platform for future randomized clinical trials. The AP patient registry to examine novel therapies in clinical experience (APPRENTICE) was formed in July 2014. Detailed web-based questionnaires were then developed to prospectively capture information on demographics, etiology, pancreatitis history, comorbidities, risk factors, severity biomarkers, severity indices, health-care utilization, management strategies, and outcomes of AP patients. Between November 2015 and September 2016, a total of 20 sites (8 in the United States, 5 in Europe, 3 in South America, 2 in Mexico and 2 in India) prospectively enrolled 509 AP patients. All data were entered into the REDCap (Research Electronic Data Capture) database by participating centers and systematically reviewed by the coordinating site (University of Pittsburgh). The approaches and methodology are described in detail, along with an interim report on the demographic results. APPRENTICE, an international collaboration of tertiary AP centers throughout the world, has demonstrated the feasibility of building a large, prospective, multicenter patient registry to study AP. Analysis of the collected data may provide a greater understanding of AP and APPRENTICE will serve as a future platform for randomized clinical trials.
Retrospective access to data: the ENGAGE consent experience
Tassé, Anne Marie; Budin-Ljøsne, Isabelle; Knoppers, Bartha Maria; Harris, Jennifer R
2010-01-01
The rapid emergence of large-scale genetic databases raises issues at the nexus of medical law and ethics, as well as the need, at both national and international levels, for an appropriate and effective framework for their governance. This is even more so for retrospective access to data for secondary uses, wherein the original consent did not foresee such use. The first part of this paper provides a brief historical overview of the ethical and legal frameworks governing consent issues in biobanking generally, before turning to the secondary use of retrospective data in epidemiological biobanks. Such use raises particularly complex issues when (1) the original consent provided is restricted; (2) the minor research subject reaches legal age; (3) the research subject dies; or (4) samples and data were obtained during medical care. Our analysis demonstrates the inconclusive, and even contradictory, nature of guidelines and confirms the current lack of compatible regulations. The second part of this paper uses the European Network for Genetic and Genomic Epidemiology (ENGAGE Consortium) as a case study to illustrate the challenges of research using previously collected data sets in Europe. Our study of 52 ENGAGE consent forms and information documents shows that a broad range of mechanisms were developed to enable secondary use of the data that are part of the ENGAGE Consortium. PMID:20332813
HEROD: a human ethnic and regional specific omics database.
Zeng, Xian; Tao, Lin; Zhang, Peng; Qin, Chu; Chen, Shangying; He, Weidong; Tan, Ying; Xia Liu, Hong; Yang, Sheng Yong; Chen, Zhe; Jiang, Yu Yang; Chen, Yu Zong
2017-10-15
Genetic and gene expression variations within and between populations and across geographical regions have substantial effects on biological phenotypes, diseases, and therapeutic response. The development of precision medicines can be facilitated by OMICS studies of patients of specific ethnicity and geographic region. However, facilities for broadly and conveniently accessing ethnic- and region-specific OMICS data are inadequate. Here, we introduce a new free database, HEROD, a human ethnic and regional specific omics database. Its first version contains the gene expression data of 53,070 patients with 169 diseases in seven ethnic populations from 193 cities/regions in 49 nations, curated from the Gene Expression Omnibus (GEO), the ArrayExpress Archive of Functional Genomics Data (ArrayExpress), The Cancer Genome Atlas (TCGA) and the International Cancer Genome Consortium (ICGC). Geographic region information for curated patients was mainly manually extracted from the referenced publications of each original study. These data can be accessed and downloaded via keyword search, world-map search, and menu-bar search of disease name, the international classification of disease code, geographical region, location of sample collection, ethnic population, gender, age, sample source organ, patient type (patient or healthy), sample type (disease or normal tissue) and assay type on the web interface. The HEROD database is freely accessible at http://bidd2.nus.edu.sg/herod/index.php. The database and web interface are implemented in MySQL, PHP and HTML, with all major browsers supported. Contact: phacyz@nus.edu.sg. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
NASA Astrophysics Data System (ADS)
Das, I.; Oberai, K.; Sarathi Roy, P.
2012-07-01
Landslides manifest themselves in different mass movement processes and are considered among the most complex natural hazards occurring on the earth's surface. Making landslide databases available online via the World Wide Web (WWW) helps spread landslide information and reach out to all the stakeholders. The aim of this research is to present a comprehensive database for generating landslide hazard scenarios with the help of available historic records of landslides and geo-environmental factors, and to make them available over the Web using geospatial Free & Open Source Software (FOSS). FOSS reduces the cost of the project drastically, as proprietary software is very costly. Landslide data generated for the period 1982 to 2009 were compiled along the national highway road corridor in the Indian Himalayas. All the geo-environmental datasets, along with the landslide susceptibility map, were served through a WebGIS client interface. The open-source University of Minnesota (UMN) MapServer was used as the GIS server software for developing the web-enabled landslide geospatial database. A PHP/MapScript server-side application serves as the front end, and PostgreSQL with the PostGIS extension serves as the backend for the web-enabled landslide spatio-temporal databases. This dynamic virtual visualization process through a web platform brings the understanding of landslides and the resulting damage closer to the affected people and the user community. The landslide susceptibility dataset is also made available as an Open Geospatial Consortium (OGC) Web Feature Service (WFS), which can be accessed through any OGC-compliant open source or proprietary GIS software.
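With PostGIS as the backend, the kind of spatio-temporal question this database answers (for example, landslides recorded near the highway corridor within a date range) maps onto a single spatial SQL query. The sketch below shows one such query issued from Python with psycopg2; the connection string, table names, and column names are assumptions for illustration, not the actual schema of this system.

```python
# Sketch of a PostGIS query a landslide WebGIS backend could run: find
# recorded landslides within 500 m of the highway corridor for 1982-2009.
# DSN, tables, and columns below are hypothetical.
import psycopg2

conn = psycopg2.connect("dbname=landslide_db user=webgis")  # hypothetical DSN
with conn, conn.cursor() as cur:
    cur.execute(
        """
        SELECT l.id, l.event_date, ST_AsGeoJSON(l.geom)
        FROM landslides AS l
        JOIN highway AS h
          ON ST_DWithin(l.geom::geography, h.geom::geography, 500)  -- 500 m buffer
        WHERE l.event_date BETWEEN %s AND %s
        """,
        ("1982-01-01", "2009-12-31"),
    )
    for row in cur.fetchall():
        print(row)
```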
AphidBase: A centralized bioinformatic resource for annotation of the pea aphid genome
Legeai, Fabrice; Shigenobu, Shuji; Gauthier, Jean-Pierre; Colbourne, John; Rispe, Claude; Collin, Olivier; Richards, Stephen; Wilson, Alex C. C.; Tagu, Denis
2015-01-01
AphidBase is a centralized bioinformatic resource that was developed to facilitate community annotation of the pea aphid genome by the International Aphid Genomics Consortium (IAGC). The AphidBase Information System, designed to organize and distribute genomic data and annotations for a large international community, was constructed using open-source software tools from the Generic Model Organism Database (GMOD) project. The system includes Apollo and GBrowse utilities as well as a wiki, BLAST search capabilities and a full-text search engine. AphidBase strongly supported community cooperation and coordination in the curation of gene models during community annotation of the pea aphid genome. AphidBase can be accessed at http://www.aphidbase.com. PMID:20482635
Code of Federal Regulations, 2010 CFR
2010-04-01
... requirements to maintain minimum standards for Tribe/Consortium management systems? 1000.396 Section 1000.396... AGREEMENTS UNDER THE TRIBAL SELF-GOVERNMENT ACT AMENDMENTS TO THE INDIAN SELF-DETERMINATION AND EDUCATION ACT... minimum standards for Tribe/Consortium management systems? Yes, the Tribe/Consortium must maintain...
25 CFR 1000.169 - How does a Tribe/Consortium initiate the information phase?
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 2 2014-04-01 2014-04-01 false How does a Tribe/Consortium initiate the information... of Initial Annual Funding Agreements § 1000.169 How does a Tribe/Consortium initiate the information phase? A Tribe/Consortium initiates the information phase by submitting a letter of interest to the...
25 CFR 1000.169 - How does a Tribe/Consortium initiate the information phase?
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 2 2013-04-01 2013-04-01 false How does a Tribe/Consortium initiate the information... of Initial Annual Funding Agreements § 1000.169 How does a Tribe/Consortium initiate the information phase? A Tribe/Consortium initiates the information phase by submitting a letter of interest to the...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-14
... International Consortium of Energy Managers; Notice of Preliminary Permit Application Accepted for Filing and... Consortium of Energy Managers filed an application, pursuant to section 4(f) of the Federal Power Act (FPA...: Rexford Wait, International Consortium of Energy Managers, 2416 Cades Way, Vista, CA 92083; (760) 599-0086...
Patel, Vilas; Patel, Janki; Madamwar, Datta
2013-09-15
A phenanthrene-degrading bacterial consortium (ASP) was developed using sediment from the Alang-Sosiya shipbreaking yard at Gujarat, India. 16S rRNA gene-based molecular analyses revealed that the bacterial consortium consisted of six bacterial strains: Bacillus sp. ASP1, Pseudomonas sp. ASP2, Stenotrophomonas maltophilia strain ASP3, Staphylococcus sp. ASP4, Geobacillus sp. ASP5 and Alcaligenes sp. ASP6. The consortium was able to degrade 300 ppm of phenanthrene and 1000 ppm of naphthalene within 120 h and 48 h, respectively. Tween 80 showed a positive effect on phenanthrene degradation. The consortium was able to consume maximum phenanthrene at the rate of 46 mg/h/l and degrade phenanthrene in the presence of other petroleum hydrocarbons. A microcosm study was conducted to test the consortium's bioremediation potential. Phenanthrene degradation increased from 61% to 94% in sediment bioaugmented with the consortium. Simultaneously, bacterial counts and dehydrogenase activities also increased in the bioaugmented sediment. These results suggest that microbial consortium bioaugmentation may be a promising technology for bioremediation. Copyright © 2013 Elsevier Ltd. All rights reserved.
25 CFR 1000.425 - How does a Tribe/Consortium request an informal conference?
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 2 2014-04-01 2014-04-01 false How does a Tribe/Consortium request an informal... INDIAN SELF-DETERMINATION AND EDUCATION ACT Appeals § 1000.425 How does a Tribe/Consortium request an informal conference? The Tribe/Consortium shall file its request for an informal conference with the office...
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 2 2013-04-01 2013-04-01 false Does a Tribe/Consortium have additional ongoing requirements to maintain minimum standards for Tribe/Consortium management systems? 1000.396 Section 1000.396... Miscellaneous Provisions § 1000.396 Does a Tribe/Consortium have additional ongoing requirements to maintain...
25 CFR 1000.222 - How does a Tribe/Consortium obtain a waiver?
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 2 2013-04-01 2013-04-01 false How does a Tribe/Consortium obtain a waiver? 1000.222...-DETERMINATION AND EDUCATION ACT Waiver of Regulations § 1000.222 How does a Tribe/Consortium obtain a waiver? To obtain a waiver, the Tribe/Consortium must: (a) Submit a written request from the designated Tribal...
25 CFR 1000.333 - How does a Tribe/Consortium retrocede a program?
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 2 2013-04-01 2013-04-01 false How does a Tribe/Consortium retrocede a program? 1000.333...-DETERMINATION AND EDUCATION ACT Retrocession § 1000.333 How does a Tribe/Consortium retrocede a program? The Tribe/Consortium must submit: (a) A written notice to: (1) The Office of Self-Governance for BIA...
25 CFR 1000.425 - How does a Tribe/Consortium request an informal conference?
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 2 2013-04-01 2013-04-01 false How does a Tribe/Consortium request an informal... INDIAN SELF-DETERMINATION AND EDUCATION ACT Appeals § 1000.425 How does a Tribe/Consortium request an informal conference? The Tribe/Consortium shall file its request for an informal conference with the office...
25 CFR 1000.400 - Can a Tribe/Consortium retain savings from programs?
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 2 2013-04-01 2013-04-01 false Can a Tribe/Consortium retain savings from programs? 1000...-DETERMINATION AND EDUCATION ACT Miscellaneous Provisions § 1000.400 Can a Tribe/Consortium retain savings from programs? Yes, for BIA programs, the Tribe/Consortium may retain savings for each fiscal year during which...
25 CFR 1000.315 - When must the Tribe/Consortium return funds to the Department?
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 2 2014-04-01 2014-04-01 false When must the Tribe/Consortium return funds to the... INDIAN SELF-DETERMINATION AND EDUCATION ACT Reassumption § 1000.315 When must the Tribe/Consortium return funds to the Department? The Tribe/Consortium must repay funds to the Department as soon as practical...
25 CFR 1000.333 - How does a Tribe/Consortium retrocede a program?
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 2 2014-04-01 2014-04-01 false How does a Tribe/Consortium retrocede a program? 1000.333...-DETERMINATION AND EDUCATION ACT Retrocession § 1000.333 How does a Tribe/Consortium retrocede a program? The Tribe/Consortium must submit: (a) A written notice to: (1) The Office of Self-Governance for BIA...
25 CFR 1000.315 - When must the Tribe/Consortium return funds to the Department?
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 2 2013-04-01 2013-04-01 false When must the Tribe/Consortium return funds to the... INDIAN SELF-DETERMINATION AND EDUCATION ACT Reassumption § 1000.315 When must the Tribe/Consortium return funds to the Department? The Tribe/Consortium must repay funds to the Department as soon as practical...
Metro-Minnesota Community Clinical Oncology Program (MM-CCOP) | Division of Cancer Prevention
The Metro-Minnesota Community Clinical Oncology Program (MMCCOP) has a long-standing history that clearly demonstrates the success of the consortium, as shown by both the ongoing commitment of the original consortium members and the growth of the consortium from 1979 through 2014. The MMCCOP consortium represents an established community program base which began in
Drug development and nonclinical to clinical translational databases: past and current efforts.
Monticello, Thomas M
2015-01-01
The International Consortium for Innovation and Quality (IQ) in Pharmaceutical Development is a science-focused organization of pharmaceutical and biotechnology companies. The mission of the Preclinical Safety Leadership Group (DruSafe) of the IQ is to advance science-based standards for nonclinical development of pharmaceutical products and to promote high-quality and effective nonclinical safety testing that can enable human risk assessment. DruSafe is creating an industry-wide database to determine the accuracy with which the interpretation of nonclinical safety assessments in animal models correctly predicts human risk in the early clinical development of biopharmaceuticals. This initiative aligns with the 2011 Food and Drug Administration strategic plan to advance regulatory science and modernize toxicology to enhance product safety. Although similar in concept to the initial industry-wide concordance data set conducted by International Life Sciences Institute's Health and Environmental Sciences Institute (HESI/ILSI), the DruSafe database will proactively track concordance, include exposure data and large and small molecules, and will continue to expand with longer duration nonclinical and clinical study comparisons. The output from this work will help identify actual human and animal adverse event data to define both the reliability and the potential limitations of nonclinical data and testing paradigms in predicting human safety in phase 1 clinical trials. © 2014 by The Author(s).
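Concordance tracking of the kind described above reduces, at its core, to two-by-two-table arithmetic per finding or target organ: did an animal signal precede a corresponding human adverse event in phase 1? A minimal sketch of that calculation follows; the counts are invented purely for illustration and do not come from the DruSafe database.

```python
# Sketch of the basic concordance arithmetic a translational safety database
# supports: the animal finding is the "test", the phase 1 human adverse event
# is the "outcome". The example counts are hypothetical.
def concordance(animal_pos_human_pos, animal_pos_human_neg,
                animal_neg_human_pos, animal_neg_human_neg):
    tp, fp = animal_pos_human_pos, animal_pos_human_neg
    fn, tn = animal_neg_human_pos, animal_neg_human_neg
    return {
        "sensitivity": tp / (tp + fn),              # animal finding flags the human AE
        "specificity": tn / (tn + fp),
        "positive_predictive_value": tp / (tp + fp),
        "negative_predictive_value": tn / (tn + fn),
    }

print(concordance(18, 22, 9, 151))  # hypothetical counts for one target organ
```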
International Lymphoma Epidemiology Consortium
The InterLymph Consortium, or formally the International Consortium of Investigators Working on Non-Hodgkin's Lymphoma Epidemiologic Studies, is an open scientific forum for epidemiologic research in non-Hodgkin's lymphoma.
Establishment of a Multi-State Experiential Pharmacy Program Consortium
Unterwagner, Whitney L.; Byrd, Debbie C.
2008-01-01
In 2002, a regional consortium was created for schools and colleges of pharmacy in Georgia and Alabama to assist experiential education faculty and staff members in streamlining administrative processes, providing required preceptor development, establishing a professional network, and conducting scholarly endeavors. Five schools and colleges of pharmacy with many shared experiential practice sites formed a consortium to help experiential faculty and staff members identify, discuss, and solve common experience program issues and challenges. During its 5 years in existence, the Southeastern Pharmacy Experiential Education Consortium has coordinated experiential schedules, developed and implemented uniform evaluation tools, coordinated site and preceptor development activities, established a work group for educational research and scholarship, and provided opportunities for networking and professional development. Several consortium members have received national recognition for their individual experiential education accomplishments. Through the activities of a regional consortium, members have successfully developed programs and initiatives that have streamlined administrative processes and have the potential to improve overall quality of experiential education programs. Professionally, consortium activities have resulted in 5 national presentations. PMID:18698386
E-MSD: improving data deposition and structure quality.
Tagari, M; Tate, J; Swaminathan, G J; Newman, R; Naim, A; Vranken, W; Kapopoulou, A; Hussain, A; Fillon, J; Henrick, K; Velankar, S
2006-01-01
The Macromolecular Structure Database (MSD) (http://www.ebi.ac.uk/msd/) [H. Boutselakis, D. Dimitropoulos, J. Fillon, A. Golovin, K. Henrick, A. Hussain, J. Ionides, M. John, P. A. Keller, E. Krissinel et al. (2003) E-MSD: the European Bioinformatics Institute Macromolecular Structure Database. Nucleic Acids Res., 31, 458-462.] group is one of the three partners in the worldwide Protein Data Bank (wwPDB), the consortium entrusted with the collation, maintenance and distribution of the global repository of macromolecular structure data [H. Berman, K. Henrick and H. Nakamura (2003) Announcing the worldwide Protein Data Bank. Nature Struct. Biol., 10, 980.]. Since its inception, the MSD group has worked with partners around the world to improve the quality of PDB data, through a clean-up programme that addresses inconsistencies and inaccuracies in the legacy archive. The improvements in data quality in the legacy archive have been achieved largely through the creation of a unified data archive, in the form of a relational database that stores all of the data in the wwPDB. The three partners are working towards improving the tools and methods for the deposition of new data by the community at large. The implementation of the MSD database, together with the parallel development of improved tools and methodologies for data harvesting, validation and archival, has led to significant improvements in the quality of data that enters the archive. Through this and related projects in the NMR and EM realms, the MSD continues to improve the quality of publicly available structural data.
Hancock, Matthew C.; Magnan, Jerry F.
2016-01-01
Abstract. In the assessment of nodules in CT scans of the lungs, a number of image-derived features are diagnostically relevant. Currently, many of these features are defined only qualitatively, so they are difficult to quantify from first principles. Nevertheless, these features (through their qualitative definitions and interpretations thereof) are often quantified via a variety of mathematical methods for the purpose of computer-aided diagnosis (CAD). To determine the potential usefulness of quantified diagnostic image features as inputs to a CAD system, we investigate the predictive capability of statistical learning methods for classifying nodule malignancy. We utilize the Lung Image Database Consortium dataset and only employ the radiologist-assigned diagnostic feature values for the lung nodules therein, as well as our derived estimates of the diameter and volume of the nodules from the radiologists’ annotations. We calculate theoretical upper bounds on the classification accuracy that are achievable by an ideal classifier that only uses the radiologist-assigned feature values, and we obtain an accuracy of 85.74 (±1.14)%, which is, on average, 4.43% below the theoretical maximum of 90.17%. The corresponding area-under-the-curve (AUC) score is 0.932 (±0.012), which increases to 0.949 (±0.007) when diameter and volume features are included and has an accuracy of 88.08 (±1.11)%. Our results are comparable to those in the literature that use algorithmically derived image-based features, which supports our hypothesis that lung nodules can be classified as malignant or benign using only quantified, diagnostic image features, and indicates the competitiveness of this approach. We also analyze how the classification accuracy depends on specific features and feature subsets, and we rank the features according to their predictive power, statistically demonstrating the top four to be spiculation, lobulation, subtlety, and calcification. PMID:27990453
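A sketch of the experimental setup described above (classification from radiologist-assigned semantic features only, evaluated by cross-validated AUC) is given below using scikit-learn. Random numbers stand in for the LIDC-IDRI annotation values, and the feature list, classifier choice, and encoding are assumptions rather than the authors' exact pipeline.

```python
# Sketch: train a classifier on radiologist-assigned semantic features only
# and report cross-validated AUC. Random data stands in for LIDC-IDRI
# annotations; this is not the authors' exact pipeline or feature encoding.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

FEATURES = ["subtlety", "calcification", "sphericity", "margin", "lobulation",
            "spiculation", "texture", "diameter", "volume"]

rng = np.random.default_rng(0)
X = rng.uniform(1, 5, size=(500, len(FEATURES)))   # stand-in feature ratings
y = rng.integers(0, 2, size=500)                   # stand-in malignant / benign labels

clf = RandomForestClassifier(n_estimators=200, random_state=0)
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"mean cross-validated AUC: {auc.mean():.3f}")
```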
Hancock, Matthew C; Magnan, Jerry F
2016-10-01
In the assessment of nodules in CT scans of the lungs, a number of image-derived features are diagnostically relevant. Currently, many of these features are defined only qualitatively, so they are difficult to quantify from first principles. Nevertheless, these features (through their qualitative definitions and interpretations thereof) are often quantified via a variety of mathematical methods for the purpose of computer-aided diagnosis (CAD). To determine the potential usefulness of quantified diagnostic image features as inputs to a CAD system, we investigate the predictive capability of statistical learning methods for classifying nodule malignancy. We utilize the Lung Image Database Consortium dataset and only employ the radiologist-assigned diagnostic feature values for the lung nodules therein, as well as our derived estimates of the diameter and volume of the nodules from the radiologists' annotations. We calculate theoretical upper bounds on the classification accuracy that are achievable by an ideal classifier that only uses the radiologist-assigned feature values, and we obtain an accuracy of 85.74 (±1.14)%, which is, on average, 4.43% below the theoretical maximum of 90.17%. The corresponding area-under-the-curve (AUC) score is 0.932 (±0.012), which increases to 0.949 (±0.007) when diameter and volume features are included and has an accuracy of 88.08 (±1.11)%. Our results are comparable to those in the literature that use algorithmically derived image-based features, which supports our hypothesis that lung nodules can be classified as malignant or benign using only quantified, diagnostic image features, and indicates the competitiveness of this approach. We also analyze how the classification accuracy depends on specific features and feature subsets, and we rank the features according to their predictive power, statistically demonstrating the top four to be spiculation, lobulation, subtlety, and calcification.
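A minimal sketch of the kind of experiment described above (not the authors' code): a standard classifier trained on radiologist-assigned LIDC semantic features. The CSV file name, column names, and the malignancy-rating threshold are illustrative assumptions.

# Minimal sketch, assuming a CSV of LIDC nodules with radiologist-assigned semantic
# ratings and a malignancy rating; column names are illustrative only.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

FEATURES = ["spiculation", "lobulation", "subtlety", "calcification",
            "margin", "sphericity", "texture", "diameter_mm", "volume_mm3"]

df = pd.read_csv("lidc_nodule_features.csv")                 # hypothetical file
X = df[FEATURES].values
y = (df["malignancy"] >= 4).values                           # e.g. ratings 4-5 treated as malignant

clf = RandomForestClassifier(n_estimators=500, random_state=0)
print("accuracy:", cross_val_score(clf, X, y, cv=10).mean())
print("AUC:", cross_val_score(clf, X, y, cv=10, scoring="roc_auc").mean())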
Sinaiko, Alan R; Jacobs, David R; Woo, Jessica G; Bazzano, Lydia; Burns, Trudy; Hu, Tian; Juonala, Markus; Prineas, Ronald; Raitakari, Olli; Steinberger, Julia; Urbina, Elaine; Venn, Alison; Jaquish, Cashell; Dwyer, Terry
2018-04-22
Although it is widely thought that childhood levels of cardiovascular (CV) risk factors are related to adult CV disease, longitudinal data directly linking the two are lacking. This paper describes the design and organization of the International Childhood Cardiovascular Cohort Consortium Outcomes Study (i3C Outcomes), the first longitudinal cohort study designed to locate adults with detailed, repeated childhood biological, physical, and socioeconomic measurements and a harmonized database. i3C Outcomes uses a Heart Health Survey (HHS) to obtain information on adult CV endpoints via mail, email, telephone, and clinic visits in the United States (U.S.) and Australia, and via a national health database in Finland. Microsoft Access, Research Electronic Data Capture (REDCap) (U.S.), LimeSurvey (Australia), and Medidata™ Rave data systems are used to collect, transfer, and organize data. Self-reported CV events are adjudicated via hospital and doctor-released medical records. After the first two study years, participants (N = 10,968) were more likely to be female (56% vs. 48%), non-Hispanic white (90% vs. 80%), and older (10.4 ± 3.8 years vs. 9.4 ± 3.3 years) at their initial childhood study visit than the currently non-recruited cohort members. Over 48% of cohort members seen during both adulthood and childhood have been found and recruited to date, vs. 5% of those not seen since childhood. Self-reported prevalences were 0.7% for type 1 diabetes, 7.5% for type 2 diabetes, 33% for hypertension, and 12.8% for CV events; 32% of reported CV events were judged to be true. i3C Outcomes is uniquely positioned to establish evidence-based guidelines for child health care and to clarify the relation of childhood risk factors to adult CV disease. Copyright © 2018 Elsevier Inc. All rights reserved.
National Land Cover Database 2001 (NLCD01) Tile 2, Northeast United States: NLCD01_2
LaMotte, Andrew
2008-01-01
This 30-meter data set represents land use and land cover for the conterminous United States for the 2001 time period. The data have been arranged into four tiles to facilitate timely display and manipulation within a Geographic Information System (see http://water.usgs.gov/GIS/browse/nlcd01-partition.jpg). The National Land Cover Data Set for 2001 was produced through a cooperative project conducted by the Multi-Resolution Land Characteristics (MRLC) Consortium. The MRLC Consortium is a partnership of Federal agencies (http://www.mrlc.gov), consisting of the U.S. Geological Survey (USGS), the National Oceanic and Atmospheric Administration (NOAA), the U.S. Environmental Protection Agency (USEPA), the U.S. Department of Agriculture (USDA), the U.S. Forest Service (USFS), the National Park Service (NPS), the U.S. Fish and Wildlife Service (USFWS), the Bureau of Land Management (BLM), and the USDA Natural Resources Conservation Service (NRCS). One of the primary goals of the project is to generate a current, consistent, seamless, and accurate National Land Cover Database (NLCD) circa 2001 for the United States at medium spatial resolution. For a detailed definition and discussion on MRLC and the NLCD 2001 products, refer to Homer and others (2004), (see: http://www.mrlc.gov/mrlc2k.asp). The NLCD 2001 was created by partitioning the United States into mapping zones. A total of 68 mapping zones (see http://water.usgs.gov/GIS/browse/nlcd01-mappingzones.jpg), were delineated within the conterminous United States based on ecoregion and geographical characteristics, edge-matching features, and the size requirement of Landsat mosaics. Mapping zones encompass the whole or parts of several states. Questions about the NLCD mapping zones can be directed to the NLCD 2001 Land Cover Mapping Team at the USGS/EROS, Sioux Falls, SD (605) 594-6151 or mrlc@usgs.gov.
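As an illustration of how such a 30-meter land-cover tile might be summarized once downloaded, the following is a minimal sketch (not part of the USGS distribution); the GeoTIFF file name and the use of rasterio are assumptions.

# Minimal sketch: tabulate NLCD 2001 land-cover class areas for one tile.
# The file name is hypothetical; NLCD tiles are commonly handled as GeoTIFF/IMG rasters.
import numpy as np
import rasterio

with rasterio.open("nlcd01_2_northeast.tif") as src:         # hypothetical path
    data = src.read(1)                                       # single land-cover band
    nodata = src.nodata if src.nodata is not None else 0     # background value assumed 0

classes, counts = np.unique(data[data != nodata], return_counts=True)
cell_area_km2 = (30 * 30) / 1e6                              # 30-m cells
for cls, n in zip(classes, counts):
    print(f"class {cls}: {n * cell_area_km2:.1f} km^2")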
National Land Cover Database 2001 (NLCD01) Tile 3, Southwest United States: NLCD01_3
LaMotte, Andrew
2008-01-01
This 30-meter data set represents land use and land cover for the conterminous United States for the 2001 time period. The data have been arranged into four tiles to facilitate timely display and manipulation within a Geographic Information System (see http://water.usgs.gov/GIS/browse/nlcd01-partition.jpg). The National Land Cover Data Set for 2001 was produced through a cooperative project conducted by the Multi-Resolution Land Characteristics (MRLC) Consortium. The MRLC Consortium is a partnership of Federal agencies (http://www.mrlc.gov), consisting of the U.S. Geological Survey (USGS), the National Oceanic and Atmospheric Administration (NOAA), the U.S. Environmental Protection Agency (USEPA), the U.S. Department of Agriculture (USDA), the U.S. Forest Service (USFS), the National Park Service (NPS), the U.S. Fish and Wildlife Service (USFWS), the Bureau of Land Management (BLM), and the USDA Natural Resources Conservation Service (NRCS). One of the primary goals of the project is to generate a current, consistent, seamless, and accurate National Land Cover Database (NLCD) circa 2001 for the United States at medium spatial resolution. For a detailed definition and discussion on MRLC and the NLCD 2001 products, refer to Homer and others (2004), (see: http://www.mrlc.gov/mrlc2k.asp). The NLCD 2001 was created by partitioning the United States into mapping zones. A total of 68 mapping zones (see http://water.usgs.gov/GIS/browse/nlcd01-mappingzones.jpg), were delineated within the conterminous United States based on ecoregion and geographical characteristics, edge-matching features, and the size requirement of Landsat mosaics. Mapping zones encompass the whole or parts of several states. Questions about the NLCD mapping zones can be directed to the NLCD 2001 Land Cover Mapping Team at the USGS/EROS, Sioux Falls, SD (605) 594-6151 or mrlc@usgs.gov.
National Land Cover Database 2001 (NLCD01) Tile 1, Northwest United States: NLCD01_1
LaMotte, Andrew
2008-01-01
This 30-meter data set represents land use and land cover for the conterminous United States for the 2001 time period. The data have been arranged into four tiles to facilitate timely display and manipulation within a Geographic Information System (see http://water.usgs.gov/GIS/browse/nlcd01-partition.jpg). The National Land Cover Data Set for 2001 was produced through a cooperative project conducted by the Multi-Resolution Land Characteristics (MRLC) Consortium. The MRLC Consortium is a partnership of Federal agencies (http://www.mrlc.gov), consisting of the U.S. Geological Survey (USGS), the National Oceanic and Atmospheric Administration (NOAA), the U.S. Environmental Protection Agency (USEPA), the U.S. Department of Agriculture (USDA), the U.S. Forest Service (USFS), the National Park Service (NPS), the U.S. Fish and Wildlife Service (USFWS), the Bureau of Land Management (BLM), and the USDA Natural Resources Conservation Service (NRCS). One of the primary goals of the project is to generate a current, consistent, seamless, and accurate National Land Cover Database (NLCD) circa 2001 for the United States at medium spatial resolution. For a detailed definition and discussion on MRLC and the NLCD 2001 products, refer to Homer and others (2004), (see: http://www.mrlc.gov/mrlc2k.asp). The NLCD 2001 was created by partitioning the United States into mapping zones. A total of 68 mapping zones (see http://water.usgs.gov/GIS/browse/nlcd01-mappingzones.jpg), were delineated within the conterminous United States based on ecoregion and geographical characteristics, edge-matching features, and the size requirement of Landsat mosaics. Mapping zones encompass the whole or parts of several states. Questions about the NLCD mapping zones can be directed to the NLCD 2001 Land Cover Mapping Team at the USGS/EROS, Sioux Falls, SD (605) 594-6151 or mrlc@usgs.gov.
National Land Cover Database 2001 (NLCD01) Tile 4, Southeast United States: NLCD01_4
LaMotte, Andrew
2008-01-01
This 30-meter data set represents land use and land cover for the conterminous United States for the 2001 time period. The data have been arranged into four tiles to facilitate timely display and manipulation within a Geographic Information System (see http://water.usgs.gov/GIS/browse/nlcd01-partition.jpg). The National Land Cover Data Set for 2001 was produced through a cooperative project conducted by the Multi-Resolution Land Characteristics (MRLC) Consortium. The MRLC Consortium is a partnership of Federal agencies (http://www.mrlc.gov), consisting of the U.S. Geological Survey (USGS), the National Oceanic and Atmospheric Administration (NOAA), the U.S. Environmental Protection Agency (USEPA), the U.S. Department of Agriculture (USDA), the U.S. Forest Service (USFS), the National Park Service (NPS), the U.S. Fish and Wildlife Service (USFWS), the Bureau of Land Management (BLM), and the USDA Natural Resources Conservation Service (NRCS). One of the primary goals of the project is to generate a current, consistent, seamless, and accurate National Land Cover Database (NLCD) circa 2001 for the United States at medium spatial resolution. For a detailed definition and discussion on MRLC and the NLCD 2001 products, refer to Homer and others (2004), (see: http://www.mrlc.gov/mrlc2k.asp). The NLCD 2001 was created by partitioning the United States into mapping zones. A total of 68 mapping zones (see http://water.usgs.gov/GIS/browse/nlcd01-mappingzones.jpg), were delineated within the conterminous United States based on ecoregion and geographical characteristics, edge-matching features, and the size requirement of Landsat mosaics. Mapping zones encompass the whole or parts of several states. Questions about the NLCD mapping zones can be directed to the NLCD 2001 Land Cover Mapping Team at the USGS/EROS, Sioux Falls, SD (605) 594-6151 or mrlc@usgs.gov.
The eNanoMapper database for nanomaterial safety information
Chomenidis, Charalampos; Doganis, Philip; Fadeel, Bengt; Grafström, Roland; Hardy, Barry; Hastings, Janna; Hegi, Markus; Jeliazkov, Vedrin; Kochev, Nikolay; Kohonen, Pekka; Munteanu, Cristian R; Sarimveis, Haralambos; Smeets, Bart; Sopasakis, Pantelis; Tsiliki, Georgia; Vorgrimmler, David; Willighagen, Egon
2015-01-01
Background: The NanoSafety Cluster, a cluster of projects funded by the European Commission, identified the need for a computational infrastructure for toxicological data management of engineered nanomaterials (ENMs). Ontologies, open standards, and interoperable designs were envisioned to empower a harmonized approach to European research in nanotechnology. This setting provides a number of opportunities and challenges in the representation of nanomaterials data and the integration of ENM information originating from diverse systems. Within this cluster, eNanoMapper works towards supporting the collaborative safety assessment for ENMs by creating a modular and extensible infrastructure for data sharing, data analysis, and building computational toxicology models for ENMs. Results: The eNanoMapper database solution builds on the previous experience of the consortium partners in supporting diverse data through flexible data storage, open source components and web services. We have recently described the design of the eNanoMapper prototype database along with a summary of challenges in the representation of ENM data and an extensive review of existing nano-related data models, databases, and nanomaterials-related entries in chemical and toxicogenomic databases. This paper continues with a focus on the database functionality exposed through its application programming interface (API), and its use in visualisation and modelling. Considering the preferred community practice of using spreadsheet templates, we developed a configurable spreadsheet parser facilitating user-friendly data preparation and data upload. We further present a web application able to retrieve the experimental data via the API and analyze it with multiple data preprocessing and machine learning algorithms. Conclusion: We demonstrate how the eNanoMapper database is used to import and publish online ENM and assay data from several data sources, how the “representational state transfer” (REST) API enables building user-friendly interfaces and graphical summaries of the data, and how these resources facilitate the modelling of reproducible quantitative structure–activity relationships for nanomaterials (NanoQSAR). PMID:26425413
MetaBar - a tool for consistent contextual data acquisition and standards compliant submission.
Hankeln, Wolfgang; Buttigieg, Pier Luigi; Fink, Dennis; Kottmann, Renzo; Yilmaz, Pelin; Glöckner, Frank Oliver
2010-06-30
Environmental sequence datasets are increasing at an exponential rate; however, the vast majority of them lack appropriate descriptors such as sampling location, time, and depth/altitude, generally referred to as metadata or contextual data. The consistent capture and structured submission of these data is crucial for integrated data analysis and ecosystems modeling. The application MetaBar has been developed to support consistent contextual data acquisition. MetaBar is a spreadsheet and web-based software tool designed to assist users in the consistent acquisition, electronic storage, and submission of contextual data associated with their samples. A preconfigured Microsoft Excel spreadsheet is used to initiate structured contextual data storage in the field or laboratory. Each sample is given a unique identifier, and at any stage the sheets can be uploaded to the MetaBar database server. To label samples, identifiers can be printed as barcodes. An intuitive web interface provides quick access to the contextual data in the MetaBar database as well as user and project management capabilities. Export functions facilitate contextual and sequence data submission to the International Nucleotide Sequence Database Collaboration (INSDC), comprising the DNA Data Bank of Japan (DDBJ), the European Molecular Biology Laboratory database (EMBL) and GenBank. MetaBar requests and stores contextual data in compliance with the Genomic Standards Consortium specifications. The MetaBar open source code base for local installation is available under the GNU General Public License version 3 (GNU GPL3). The MetaBar software supports the typical workflow from data acquisition and field-sampling to contextual-data-enriched sequence submission to an INSDC database. The integration with the megx.net marine Ecological Genomics database and portal facilitates georeferenced data integration and metadata-based comparisons of sampling sites as well as interactive data visualization. The ample export functionalities and the INSDC submission support enable the exchange of data across disciplines and the safeguarding of contextual data.
The eNanoMapper database for nanomaterial safety information.
Jeliazkova, Nina; Chomenidis, Charalampos; Doganis, Philip; Fadeel, Bengt; Grafström, Roland; Hardy, Barry; Hastings, Janna; Hegi, Markus; Jeliazkov, Vedrin; Kochev, Nikolay; Kohonen, Pekka; Munteanu, Cristian R; Sarimveis, Haralambos; Smeets, Bart; Sopasakis, Pantelis; Tsiliki, Georgia; Vorgrimmler, David; Willighagen, Egon
2015-01-01
The NanoSafety Cluster, a cluster of projects funded by the European Commission, identified the need for a computational infrastructure for toxicological data management of engineered nanomaterials (ENMs). Ontologies, open standards, and interoperable designs were envisioned to empower a harmonized approach to European research in nanotechnology. This setting provides a number of opportunities and challenges in the representation of nanomaterials data and the integration of ENM information originating from diverse systems. Within this cluster, eNanoMapper works towards supporting the collaborative safety assessment for ENMs by creating a modular and extensible infrastructure for data sharing, data analysis, and building computational toxicology models for ENMs. The eNanoMapper database solution builds on the previous experience of the consortium partners in supporting diverse data through flexible data storage, open source components and web services. We have recently described the design of the eNanoMapper prototype database along with a summary of challenges in the representation of ENM data and an extensive review of existing nano-related data models, databases, and nanomaterials-related entries in chemical and toxicogenomic databases. This paper continues with a focus on the database functionality exposed through its application programming interface (API), and its use in visualisation and modelling. Considering the preferred community practice of using spreadsheet templates, we developed a configurable spreadsheet parser facilitating user-friendly data preparation and data upload. We further present a web application able to retrieve the experimental data via the API and analyze it with multiple data preprocessing and machine learning algorithms. We demonstrate how the eNanoMapper database is used to import and publish online ENM and assay data from several data sources, how the "representational state transfer" (REST) API enables building user-friendly interfaces and graphical summaries of the data, and how these resources facilitate the modelling of reproducible quantitative structure-activity relationships for nanomaterials (NanoQSAR).
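A minimal sketch of the kind of client interaction such a REST API makes possible; the host, endpoint path, query parameters, and response fields below are placeholders for illustration, not the documented eNanoMapper API.

# Minimal sketch of a REST client for a nanomaterials database; the URL, parameters,
# and JSON fields are illustrative placeholders, not the actual API.
import requests

BASE_URL = "https://example-enm-database.org/api"            # hypothetical host

resp = requests.get(f"{BASE_URL}/substances",
                    params={"search": "TiO2", "page": 0, "pagesize": 10},
                    headers={"Accept": "application/json"},
                    timeout=30)
resp.raise_for_status()
for substance in resp.json().get("substances", []):          # hypothetical field names
    print(substance.get("name"), substance.get("ownerName"))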
MetaBar - a tool for consistent contextual data acquisition and standards compliant submission
2010-01-01
Background Environmental sequence datasets are increasing at an exponential rate; however, the vast majority of them lack appropriate descriptors such as sampling location, time, and depth/altitude, generally referred to as metadata or contextual data. The consistent capture and structured submission of these data is crucial for integrated data analysis and ecosystems modeling. The application MetaBar has been developed to support consistent contextual data acquisition. Results MetaBar is a spreadsheet and web-based software tool designed to assist users in the consistent acquisition, electronic storage, and submission of contextual data associated with their samples. A preconfigured Microsoft® Excel® spreadsheet is used to initiate structured contextual data storage in the field or laboratory. Each sample is given a unique identifier, and at any stage the sheets can be uploaded to the MetaBar database server. To label samples, identifiers can be printed as barcodes. An intuitive web interface provides quick access to the contextual data in the MetaBar database as well as user and project management capabilities. Export functions facilitate contextual and sequence data submission to the International Nucleotide Sequence Database Collaboration (INSDC), comprising the DNA Data Bank of Japan (DDBJ), the European Molecular Biology Laboratory database (EMBL) and GenBank. MetaBar requests and stores contextual data in compliance with the Genomic Standards Consortium specifications. The MetaBar open source code base for local installation is available under the GNU General Public License version 3 (GNU GPL3). Conclusion The MetaBar software supports the typical workflow from data acquisition and field-sampling to contextual-data-enriched sequence submission to an INSDC database. The integration with the megx.net marine Ecological Genomics database and portal facilitates georeferenced data integration and metadata-based comparisons of sampling sites as well as interactive data visualization. The ample export functionalities and the INSDC submission support enable the exchange of data across disciplines and the safeguarding of contextual data. PMID:20591175
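To illustrate the spreadsheet-based workflow described above, here is a minimal sketch of reading such a sheet and checking a few contextual fields before upload; the file name, sheet layout, and column names are assumptions, not MetaBar's actual template.

# Minimal sketch: load a field spreadsheet of sample contextual data and flag rows
# missing core fields before submission; column names are illustrative only.
import pandas as pd

REQUIRED = ["sample_id", "latitude", "longitude", "collection_date", "depth_m"]

sheet = pd.read_excel("field_samples.xlsx")                  # hypothetical file
missing = sheet[sheet[REQUIRED].isna().any(axis=1)]
if not missing.empty:
    print("Samples with incomplete contextual data:")
    print(missing["sample_id"].tolist())
else:
    print(f"{len(sheet)} samples ready for submission.")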
Southeast Clinical Oncology Research Consortium, Inc. (SCOR) | Division of Cancer Prevention
The SCCC-Upstate is a merger of two successful legacy CCOPs known as Southeast Cancer Control Consortium, Inc. (SCCC) and Upstate Carolina (hereafter the Consortium) comprised of 23 components and 63 sub-components, located in a five-state area of the Southeast US (GA, NC, SC, TN, and VA) with a nonclinical Administrative Office (AO) in Winston-Salem, NC. The Consortium
25 CFR 1000.390 - How can a Tribe/Consortium hire a Federal employee to help implement an AFA?
Code of Federal Regulations, 2011 CFR
2011-04-01
... 25 Indians 2 2011-04-01 2011-04-01 false How can a Tribe/Consortium hire a Federal employee to help implement an AFA? 1000.390 Section 1000.390 Indians OFFICE OF THE ASSISTANT SECRETARY, INDIAN... Tribe/Consortium hire a Federal employee to help implement an AFA? If a Tribe/Consortium chooses to hire...
25 CFR 1000.390 - How can a Tribe/Consortium hire a Federal employee to help implement an AFA?
Code of Federal Regulations, 2010 CFR
2010-04-01
... 25 Indians 2 2010-04-01 2010-04-01 false How can a Tribe/Consortium hire a Federal employee to help implement an AFA? 1000.390 Section 1000.390 Indians OFFICE OF THE ASSISTANT SECRETARY, INDIAN... Tribe/Consortium hire a Federal employee to help implement an AFA? If a Tribe/Consortium chooses to hire...
45 CFR 287.25 - May Tribes form a consortium to operate a NEW Program?
Code of Federal Regulations, 2014 CFR
2014-10-01
... 45 Public Welfare 2 2014-10-01 2012-10-01 true May Tribes form a consortium to operate a NEW... SERVICES THE NATIVE EMPLOYMENT WORKS (NEW) PROGRAM Eligible Tribes § 287.25 May Tribes form a consortium to operate a NEW Program? (a) Yes, as long as each Tribe forming the consortium is an “eligible Indian tribe...
45 CFR 287.25 - May Tribes form a consortium to operate a NEW Program?
Code of Federal Regulations, 2013 CFR
2013-10-01
... 45 Public Welfare 2 2013-10-01 2012-10-01 true May Tribes form a consortium to operate a NEW... SERVICES THE NATIVE EMPLOYMENT WORKS (NEW) PROGRAM Eligible Tribes § 287.25 May Tribes form a consortium to operate a NEW Program? (a) Yes, as long as each Tribe forming the consortium is an “eligible Indian tribe...
45 CFR 287.25 - May Tribes form a consortium to operate a NEW Program?
Code of Federal Regulations, 2012 CFR
2012-10-01
... 45 Public Welfare 2 2012-10-01 2012-10-01 false May Tribes form a consortium to operate a NEW... SERVICES THE NATIVE EMPLOYMENT WORKS (NEW) PROGRAM Eligible Tribes § 287.25 May Tribes form a consortium to operate a NEW Program? (a) Yes, as long as each Tribe forming the consortium is an “eligible Indian tribe...
AFT-QuEST Consortium Yearbook. Proceedings of the QuEST Consortium (April 2-6, 1972).
ERIC Educational Resources Information Center
American Federation of Teachers, Washington, DC.
This book contains the proceedings from the QuEST Consortium held on April 2-6, 1972, which focused on problems of method and technique in teaching as well as on resource organization. The program schedule for the Consortium is presented with the following goals: (a) investigation of educational policy issues, action programs, and projects and (b)…
Computational Astrophysics Consortium 3 - Supernovae, Gamma-Ray Bursts and Nucleosynthesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woosley, Stan
Final project report for UCSC's participation in the Computational Astrophysics Consortium - Supernovae, Gamma-Ray Bursts and Nucleosynthesis. As an appendix, the report of the entire Consortium is also appended.
77 FR 43237 - Genome in a Bottle Consortium-Work Plan Review Workshop
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-24
... in human whole genome variant calls. A principal motivation for this consortium is to enable... principal motivation for this consortium is to enable science-based regulatory oversight of clinical...
Midwest Transportation Consortium : 2003-2004 annual report.
DOT National Transportation Integrated Search
2004-01-01
Introduction: The Midwest Transportation Consortium (MTC) recently completed its fifth year : of operation. In doing so, the consortium has established itself as an effective : network that promotes the education of future transportation professional...
Consortium for military LCD display procurement
NASA Astrophysics Data System (ADS)
Echols, Gregg
2002-08-01
The International Display Consortium (IDC) is the joining together of display companies to combine their buying power and obtain favorable terms with a major LCD manufacturer. Consolidating the buying power and grouping the demand enables the rugged display industry of avionics, ground vehicle, and ship-based display manufacturers to have unencumbered access to high-performance AMLCDs while greatly reducing risk and lowering cost. With an unrestricted supply of AMLCD displays, the consortium members have total control of their risk, cost, deliveries, and added-value partners. Every display manufacturer desires a very close relationship with a display vendor. With IDC, each consortium member achieves a close relationship. Consortium members enjoy cost-effective access to high-performance, industry-standard-sized LCD panels, and modified commercial displays with 100 degree C clearing points and portrait configurations. Consortium members also enjoy proposal support, technical support, and long-term support.
Velázquez, Yolanda Flores; Nacheva, Petia Mijaylova
2017-03-01
The biodegradation of fluoxetine, mefenamic acid, and metoprolol using an ammonium-nitrite-oxidizing consortium, a nitrite-oxidizing consortium, and heterotrophic biomass was evaluated in batch tests applying different retention times. The ammonium-nitrite-oxidizing consortium presented the highest biodegradation percentages for mefenamic acid and metoprolol, of 85 and 64%, respectively. This consortium was also capable of biodegrading 79% of the fluoxetine. The heterotrophic consortium showed the highest ability to biodegrade fluoxetine, reaching 85%, and it also had a high potential for biodegrading mefenamic acid and metoprolol, of 66 and 58%, respectively. The nitrite-oxidizing consortium presented the lowest biodegradation of the three pharmaceuticals, of less than 48%. The determination of the selected pharmaceuticals in the dissolved phase and in the biomass indicated that biodegradation was the major removal mechanism for the three compounds. Based on the obtained results, the biodegradation kinetics were fitted to a pseudo-first-order model for the three pharmaceuticals. The values of k_biol for fluoxetine, mefenamic acid, and metoprolol determined with the three consortia indicated that the ammonium-nitrite-oxidizing and heterotrophic biomass allow a partial biodegradation of the compounds, while no substantial biodegradation can be expected using the nitrite-oxidizing consortium. Metoprolol was the least biodegradable compound. The sorption of fluoxetine and mefenamic acid onto biomass contributed significantly to their removal (6-14%). The lowest sorption coefficients were obtained for metoprolol, indicating that its sorption onto biomass is poor (3-4%) and that the contribution of this process to the overall removal can be neglected.
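For reference, the pseudo-first-order formulation typically fitted in such batch biodegradation tests (stated here as the standard model, not a detail reported in the abstract) is

\frac{dC}{dt} = -k_{\mathrm{biol}}\, X_{\mathrm{SS}}\, C \quad\Rightarrow\quad C(t) = C_0\, e^{-k_{\mathrm{biol}} X_{\mathrm{SS}} t},

where C is the dissolved pharmaceutical concentration, X_SS the biomass (suspended solids) concentration, and k_biol the biomass-normalized biodegradation rate constant, commonly reported in L g^-1 d^-1.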
ICONE: An International Consortium of Neuro Endovascular Centres.
Raymond, J; White, P; Kallmes, D F; Spears, J; Marotta, T; Roy, D; Guilbert, F; Weill, A; Nguyen, T; Molyneux, A J; Cloft, H; Cekirge, S; Saatci, I; Bracard, S; Meder, J F; Moret, J; Cognard, C; Qureshi, A I; Turk, A S; Berenstein, A
2008-06-30
The proliferation of new endovascular devices and therapeutic strategies calls for a prudent and rational evaluation of their clinical benefit. This evaluation must be done in an effective manner and in collaboration with industry. Such a research initiative requires organisational and methodological support to survive and thrive in a competitive environment. We propose the formation of an international consortium, an academic alliance committed to the pursuit of effective neurovascular therapies. Such a consortium would be dedicated to the design and execution of basic science, device development and clinical trials. The Consortium is owned and operated by its members. Members are international leaders in neurointerventional research and clinical practice. The Consortium brings competency, knowledge, and expertise to industry as well as to its membership across a spectrum of research initiatives such as: expedited review of clinical trials, protocol development, surveys and systematic reviews; laboratory expertise and support for research design and grant applications to public agencies. Once objectives and protocols are approved, the Consortium provides a stable network of centers capable of timely realization of clinical trials or preclinical investigations in an optimal environment. The Consortium is a non-profit organization. The potential revenue generated from client-sponsored financial agreements will be redirected to the academic and research objectives of the organization. The Consortium wishes to work in concert with industry, to support emerging trends in neurovascular therapeutic development. The Consortium is a realistic endeavour optimally structured to promote excellence through scientific appraisal of our treatments, and to accelerate technical progress while maximizing patients' safety and welfare.
Code of Federal Regulations, 2010 CFR
2010-04-01
...) Planning and Negotiation Grants Advance Planning Grant Funding § 1000.54 How will a Tribe/Consortium know... Director will notify the Tribe/Consortium by letter whether it has been selected to receive an advance... 25 Indians 2 2010-04-01 2010-04-01 false How will a Tribe/Consortium know whether or not it has...
Genomes OnLine Database (GOLD) v.6: data updates and feature enhancements
Mukherjee, Supratim; Stamatis, Dimitri; Bertsch, Jon; Ovchinnikova, Galina; Verezemska, Olena; Isbandi, Michelle; Thomas, Alex D.; Ali, Rida; Sharma, Kaushal; Kyrpides, Nikos C.; Reddy, T. B. K.
2017-01-01
The Genomes Online Database (GOLD) (https://gold.jgi.doe.gov) is a manually curated data management system that catalogs sequencing projects with associated metadata from around the world. In the current version of GOLD (v.6), all projects are organized based on a four level classification system in the form of a Study, Organism (for isolates) or Biosample (for environmental samples), Sequencing Project and Analysis Project. Currently, GOLD provides information for 26 117 Studies, 239 100 Organisms, 15 887 Biosamples, 97 212 Sequencing Projects and 78 579 Analysis Projects. These are integrated with over 312 metadata fields from which 58 are controlled vocabularies with 2067 terms. The web interface facilitates submission of a diverse range of Sequencing Projects (such as isolate genome, single-cell genome, metagenome, metatranscriptome) and complex Analysis Projects (such as genome from metagenome, or combined assembly from multiple Sequencing Projects). GOLD provides a seamless interface with the Integrated Microbial Genomes (IMG) system and supports and promotes the Genomic Standards Consortium (GSC) Minimum Information standards. This paper describes the data updates and additional features added during the last two years. PMID:27794040
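As a rough illustration of the four-level organization described above (a sketch, not GOLD's actual schema or API), the hierarchy could be modeled as follows; class and field names are illustrative.

# Minimal sketch of GOLD's four-level organization (Study -> Organism/Biosample ->
# Sequencing Project -> Analysis Project); names and fields are illustrative only.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class AnalysisProject:
    analysis_type: str                     # e.g. "genome from metagenome", "combined assembly"

@dataclass
class SequencingProject:
    project_type: str                      # e.g. "isolate genome", "metagenome"
    analyses: List[AnalysisProject] = field(default_factory=list)

@dataclass
class Study:
    title: str
    organism: Optional[str] = None         # for isolates
    biosample: Optional[str] = None        # for environmental samples
    sequencing_projects: List[SequencingProject] = field(default_factory=list)

study = Study(title="Example soil metagenome study",
              biosample="soil sample A",
              sequencing_projects=[SequencingProject("metagenome",
                                   [AnalysisProject("genome from metagenome")])])
print(study.title, len(study.sequencing_projects))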
A rat RNA-Seq transcriptomic BodyMap across 11 organs and 4 developmental stages
Yu, Ying; Fuscoe, James C.; Zhao, Chen; Guo, Chao; Jia, Meiwen; Qing, Tao; Bannon, Desmond I.; Lancashire, Lee; Bao, Wenjun; Du, Tingting; Luo, Heng; Su, Zhenqiang; Jones, Wendell D.; Moland, Carrie L.; Branham, William S.; Qian, Feng; Ning, Baitang; Li, Yan; Hong, Huixiao; Guo, Lei; Mei, Nan; Shi, Tieliu; Wang, Kevin Y.; Wolfinger, Russell D.; Nikolsky, Yuri; Walker, Stephen J.; Duerksen-Hughes, Penelope; Mason, Christopher E.; Tong, Weida; Thierry-Mieg, Jean; Thierry-Mieg, Danielle; Shi, Leming; Wang, Charles
2014-01-01
The rat has been used extensively as a model for evaluating chemical toxicities and for understanding drug mechanisms. However, its transcriptome across multiple organs, or developmental stages, has not yet been reported. Here we show, as part of the SEQC consortium efforts, a comprehensive rat transcriptomic BodyMap created by performing RNA-Seq on 320 samples from 11 organs of both sexes of juvenile, adolescent, adult and aged Fischer 344 rats. We catalogue the expression profiles of 40,064 genes, 65,167 transcripts, 31,909 alternatively spliced transcript variants and 2,367 non-coding genes/non-coding RNAs (ncRNAs) annotated in AceView. We find that organ-enriched, differentially expressed genes reflect the known organ-specific biological activities. A large number of transcripts show organ-specific, age-dependent or sex-specific differential expression patterns. We create a web-based, open-access rat BodyMap database of expression profiles with crosslinks to other widely used databases, anticipating that it will serve as a primary resource for biomedical research using the rat model. PMID:24510058
Oil Production by a Consortium of Oleaginous Microorganisms grown on primary effluent wastewater
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hall, Jacqueline; Hetrick, Mary; French, Todd
Municipal wastewater could be a potential growth medium that has not been considered for cultivating oleaginous microorganisms. This study is designed to determine whether a consortium of oleaginous microorganisms can successfully compete for carbon and other nutrients with the indigenous microorganisms contained in primary effluent wastewater. RESULTS: The oleaginous consortium inoculated with indigenous microorganisms reached stationary phase within 24 h, reaching a maximum cell concentration of 0.58 g L-1. Water quality post-oleaginous consortium growth reached a maximum chemical oxygen demand (COD) reduction of approximately 81%, supporting the consumption of the glucose within 8 h. The oleaginous consortium increased the amount of oil produced per gram by 13% compared with indigenous microorganisms in raw wastewater. Quantitative polymerase chain reaction (qPCR) results show a substantial population increase in bacteria within the first 24 h when the consortium is inoculated into raw wastewater. This result, along with the fatty acid methyl esters (FAMEs) results, suggests that the conditions tested were not sufficient for the oleaginous consortium to compete with the indigenous microorganisms.
Mejias Carpio, Isis E; Franco, Diego Castillo; Zanoli Sato, Maria Inês; Sakata, Solange; Pellizari, Vivian H; Seckler Ferreira Filho, Sidney; Frigi Rodrigues, Debora
2016-04-15
Understanding the diversity and metal removal ability of microorganisms associated with contaminated aquatic environments is essential for developing metal remediation technologies in engineered environments. This study investigates, through 16S rRNA deep sequencing, the composition of a biostimulated microbial consortium obtained from the polluted Tietê River in São Paulo, Brazil. The bacterial diversity of the biostimulated consortium obtained from the contaminated water and sediment was compared to the original sample. The results of the comparative sequencing analyses showed that the biostimulated consortium and the natural environment had γ-Proteobacteria, Firmicutes, and uncultured bacteria as the major classes of microorganisms. The consortium's optimum zinc removal capacity, evaluated in batch experiments, was achieved at pH = 5 with an equilibrium contact time of 120 min, and with a higher Zn-biomass affinity (KF = 1.81) than most pure cultures previously investigated. Analysis of the functional groups found in the consortium demonstrated that amine, carboxyl, hydroxyl, and phosphate groups present in the consortium cells were responsible for zinc uptake. Copyright © 2016 Elsevier B.V. All rights reserved.
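The Zn-biomass affinity constant KF quoted above is a Freundlich coefficient; for reference, the standard Freundlich isotherm commonly used for such biosorption equilibria (a conventional model assumed here, not a detail taken from the abstract) is

q_e = K_F\, C_e^{1/n},

where q_e is the zinc sorbed per unit biomass at equilibrium, C_e the equilibrium dissolved zinc concentration, K_F the Freundlich affinity coefficient, and 1/n the heterogeneity exponent.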
An expression database for roots of the model legume Medicago truncatula under salt stress
2009-01-01
Background Medicago truncatula is a model legume whose genome is currently being sequenced by an international consortium. Abiotic stresses such as salt stress limit plant growth and crop productivity, including those of legumes. We anticipate that studies on M. truncatula will shed light on other economically important legumes across the world. Here, we report the development of a database called MtED that contains gene expression profiles of the roots of M. truncatula based on time-course salt stress experiments using the Affymetrix Medicago GeneChip. Our hope is that MtED will provide information to assist in improving abiotic stress resistance in legumes. Description The results of our microarray experiment with roots of M. truncatula under 180 mM sodium chloride were deposited in the MtED database. Additionally, sequence and annotation information regarding microarray probe sets were included. MtED provides functional category analysis based on Gene and GeneBins Ontology, and other Web-based tools for querying and retrieving query results, browsing pathways and transcription factor families, showing metabolic maps, and comparing and visualizing expression profiles. Utilities such as mapping probe sets to the genome of M. truncatula and in silico PCR were implemented with the BLAT software suite and are also available through the MtED database. Conclusion MtED was built with the PHP scripting language and a MySQL relational database system on a Linux server. It has an integrated Web interface, which facilitates ready examination and interpretation of the results of microarray experiments. It is intended to help in selecting gene markers to improve abiotic stress resistance in legumes. MtED is available at http://bioinformatics.cau.edu.cn/MtED/. PMID:19906315
An expression database for roots of the model legume Medicago truncatula under salt stress.
Li, Daofeng; Su, Zhen; Dong, Jiangli; Wang, Tao
2009-11-11
Medicago truncatula is a model legume whose genome is currently being sequenced by an international consortium. Abiotic stresses such as salt stress limit plant growth and crop productivity, including those of legumes. We anticipate that studies on M. truncatula will shed light on other economically important legumes across the world. Here, we report the development of a database called MtED that contains gene expression profiles of the roots of M. truncatula based on time-course salt stress experiments using the Affymetrix Medicago GeneChip. Our hope is that MtED will provide information to assist in improving abiotic stress resistance in legumes. The results of our microarray experiment with roots of M. truncatula under 180 mM sodium chloride were deposited in the MtED database. Additionally, sequence and annotation information regarding microarray probe sets were included. MtED provides functional category analysis based on Gene and GeneBins Ontology, and other Web-based tools for querying and retrieving query results, browsing pathways and transcription factor families, showing metabolic maps, and comparing and visualizing expression profiles. Utilities such as mapping probe sets to the genome of M. truncatula and in silico PCR were implemented with the BLAT software suite and are also available through the MtED database. MtED was built with the PHP scripting language and a MySQL relational database system on a Linux server. It has an integrated Web interface, which facilitates ready examination and interpretation of the results of microarray experiments. It is intended to help in selecting gene markers to improve abiotic stress resistance in legumes. MtED is available at http://bioinformatics.cau.edu.cn/MtED/.
Design and implementation of a database for Brucella melitensis genome annotation.
De Hertogh, Benoît; Lahlimi, Leïla; Lambert, Christophe; Letesson, Jean-Jacques; Depiereux, Eric
2008-03-18
The genome sequences of three Brucella biovars and of some species close to Brucella sp. have become available, leading to new relationship analyses. Moreover, the automatic genome annotation of the pathogenic bacterium Brucella melitensis has been manually corrected by a consortium of experts, leading to 899 modifications of start site predictions among the 3198 open reading frames (ORFs) examined. This new annotation, coupled with the results of automatic annotation tools applied to the complete genome sequence of B. melitensis (including BLASTs against 9 genomes close to Brucella), provides numerous data sets related to predicted functions, biochemical properties and phylogenetic comparisons. To make these results available, alphaPAGe, a functional, auto-updatable database of the corrected genome sequence of B. melitensis, has been built, using the entity-relationship (ER) approach and a multi-purpose database structure. A friendly graphical user interface has been designed, and users can retrieve different kinds of information through three levels of queries: (1) the basic search uses classical keywords or sequence identifiers; (2) the original advanced search engine allows users to combine (using logical operators) numerous criteria: (a) keywords (textual comparison) related to the pCDS's function, family domains and cellular localization; (b) physico-chemical characteristics (numerical comparison) such as isoelectric point or molecular weight, and structural criteria such as the nucleic acid length or the number of transmembrane helices (TMH); (c) similarity scores with Escherichia coli and 10 species phylogenetically close to B. melitensis; (3) complex queries can be performed through an SQL field, which allows any query respecting the database's structure. The database is publicly available through a Web server at the following URL: http://www.fundp.ac.be/urbm/bioinfo/aPAGe.
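A minimal sketch of the kind of combined query the advanced search level supports; the table and column names are invented for illustration and do not reflect alphaPAGe's actual schema.

# Minimal sketch: combine keyword, physico-chemical, structural and similarity criteria
# against a hypothetical annotation table; table and column names are illustrative only.
import sqlite3

conn = sqlite3.connect("brucella_annotation.db")             # hypothetical local copy
query = """
    SELECT locus_tag, product, isoelectric_point, num_tmh
    FROM pcds
    WHERE product LIKE ?                       -- keyword criterion (textual)
      AND isoelectric_point BETWEEN ? AND ?    -- physico-chemical criterion (numerical)
      AND num_tmh >= ?                         -- structural criterion
      AND blast_identity_ecoli >= ?            -- similarity criterion
"""
for row in conn.execute(query, ("%transporter%", 5.0, 7.0, 2, 30.0)):
    print(row)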
An environmental database for Venice and tidal zones
NASA Astrophysics Data System (ADS)
Macaluso, L.; Fant, S.; Marani, A.; Scalvini, G.; Zane, O.
2003-04-01
The natural environment is a complex, highly variable and physically non-reproducible system (neither in the laboratory nor in a confined territory). Environmental experimental studies are thus necessarily based on field measurements distributed in time and space. Only extensive data collections can provide the representative samples of system behavior which are essential for scientific advancement. The assimilation of large data collections into accessible archives must necessarily be implemented in electronic databases. In the case of tidal environments in general, and of the Venice lagoon in particular, it is useful to establish a database, freely accessible to the scientific community, documenting the dynamics of such systems and their response to anthropic pressures and climatic variability. At the Istituto Veneto di Scienze, Lettere ed Arti in Venice (Italy), two internet environmental databases have been developed: one collects detailed information regarding the Venice lagoon; the other coordinates the research consortium of the "TIDE" EU RTD project, which covers three different tidal areas: Venice Lagoon (Italy), Morecambe Bay (England), and Forth Estuary (Scotland). The archives may be accessed through the URL: www.istitutoveneto.it. The first is freely available and open to anyone who is interested. It is continuously updated and has been structured to promote documentation concerning the Venetian environment and to disseminate this information for educational purposes (see "Dissemination" section). The second is supplied by scientists and engineers working on these tidal systems for various purposes (scientific, management, conservation, etc.); it is aimed at interested researchers and grows with their own contributions. Both intend to promote scientific communication, to contribute to the realization of a distributed information system collecting homogeneous themes, and to initiate the interconnection among databases regarding different kinds of environment.
NASA Astrophysics Data System (ADS)
Auer, M.; Agugiaro, G.; Billen, N.; Loos, L.; Zipf, A.
2014-05-01
Many important Cultural Heritage sites have been studied over long periods of time with different technical equipment, methods and intentions by different researchers. This has led to huge amounts of heterogeneous "traditional" datasets and formats. The rising popularity of 3D models in the field of Cultural Heritage in recent years has brought additional data formats and makes it even more necessary to find solutions to manage, publish and study these data in an integrated way. The MayaArch3D project aims to realize such an integrative approach by establishing a web-based research platform that brings spatial and non-spatial databases together and provides visualization and analysis tools. The 3D components of the platform, in particular, use hierarchical segmentation concepts to structure the data and to perform queries on semantic entities. This paper presents a database schema to organize not only segmented models but also different Levels-of-Detail and other representations of the same entity. It is further implemented in a spatial database, which allows the storing of georeferenced 3D data. This enables organization and queries by semantic, geometric and spatial properties. As the service for the delivery of the segmented models, the Web 3D Service (W3DS), a standardization candidate of the Open Geospatial Consortium (OGC), has been extended to cope with the new database schema and to deliver a web-friendly format for WebGL rendering. Finally, a generic user interface is presented which uses the segments as a navigation metaphor to browse and query the semantic segmentation levels and retrieve information from an external database of the German Archaeological Institute (DAI).
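A minimal sketch of how such a segment hierarchy with multiple Levels-of-Detail might be laid out in a spatial database; the table and column names are illustrative assumptions, not the MayaArch3D schema.

# Minimal sketch of a segment/LoD layout for georeferenced 3D models, expressed as
# PostGIS-style DDL held in a Python string; names and types are illustrative only.
SCHEMA_DDL = """
CREATE TABLE segment (
    segment_id   SERIAL PRIMARY KEY,
    parent_id    INTEGER REFERENCES segment(segment_id),  -- hierarchical segmentation
    name         TEXT NOT NULL,                           -- semantic entity, e.g. 'stairway'
    footprint    geometry(PolygonZ, 4326)                 -- georeferenced extent
);

CREATE TABLE representation (
    representation_id SERIAL PRIMARY KEY,
    segment_id        INTEGER NOT NULL REFERENCES segment(segment_id),
    level_of_detail   INTEGER NOT NULL,                   -- 0 = coarsest
    format            TEXT NOT NULL,                      -- e.g. 'gltf', 'obj'
    uri               TEXT NOT NULL                       -- where the mesh is stored
);
"""

# A combined semantic/spatial query could then join both tables, for example selecting
# the coarsest representation of every segment whose footprint intersects an area of interest.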
The minimum information about a genome sequence (MIGS) specification
Field, Dawn; Garrity, George; Gray, Tanya; Morrison, Norman; Selengut, Jeremy; Sterk, Peter; Tatusova, Tatiana; Thomson, Nicholas; Allen, Michael J; Angiuoli, Samuel V; Ashburner, Michael; Axelrod, Nelson; Baldauf, Sandra; Ballard, Stuart; Boore, Jeffrey; Cochrane, Guy; Cole, James; Dawyndt, Peter; De Vos, Paul; dePamphilis, Claude; Edwards, Robert; Faruque, Nadeem; Feldman, Robert; Gilbert, Jack; Gilna, Paul; Glöckner, Frank Oliver; Goldstein, Philip; Guralnick, Robert; Haft, Dan; Hancock, David; Hermjakob, Henning; Hertz-Fowler, Christiane; Hugenholtz, Phil; Joint, Ian; Kagan, Leonid; Kane, Matthew; Kennedy, Jessie; Kowalchuk, George; Kottmann, Renzo; Kolker, Eugene; Kravitz, Saul; Kyrpides, Nikos; Leebens-Mack, Jim; Lewis, Suzanna E; Li, Kelvin; Lister, Allyson L; Lord, Phillip; Maltsev, Natalia; Markowitz, Victor; Martiny, Jennifer; Methe, Barbara; Mizrachi, Ilene; Moxon, Richard; Nelson, Karen; Parkhill, Julian; Proctor, Lita; White, Owen; Sansone, Susanna-Assunta; Spiers, Andrew; Stevens, Robert; Swift, Paul; Taylor, Chris; Tateno, Yoshio; Tett, Adrian; Turner, Sarah; Ussery, David; Vaughan, Bob; Ward, Naomi; Whetzel, Trish; Gil, Ingio San; Wilson, Gareth; Wipat, Anil
2008-01-01
With the quantity of genomic data increasing at an exponential rate, it is imperative that these data be captured electronically, in a standard format. Standardization activities must proceed within the auspices of open-access and international working bodies. To tackle the issues surrounding the development of better descriptions of genomic investigations, we have formed the Genomic Standards Consortium (GSC). Here, we introduce the minimum information about a genome sequence (MIGS) specification with the intent of promoting participation in its development and discussing the resources that will be required to develop improved mechanisms of metadata capture and exchange. As part of its wider goals, the GSC also supports improving the ‘transparency’ of the information contained in existing genomic databases. PMID:18464787
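To make the idea of a minimum-information checklist concrete, here is a sketch of a MIGS-style record as a simple key-value structure; the field names follow the general spirit of the specification but should be treated as illustrative rather than the normative checklist.

# Illustrative MIGS-style minimum metadata for a genome sequence, as a plain dict;
# field names are loosely modeled on the specification and are not normative.
migs_record = {
    "investigation_type": "bacteria_archaea",
    "project_name": "Example marine isolate genome",
    "lat_lon": "54.5 N 3.2 W",
    "geo_loc_name": "North Sea",
    "collection_date": "2007-06-15",
    "env_biome": "marine biome",
    "seq_meth": "Sanger dideoxy sequencing",
    "assembly": "phrap v1.090518",
}

missing = [k for k, v in migs_record.items() if not v]
print("record complete" if not missing else f"missing fields: {missing}")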
Object classification and outliers analysis in the forthcoming Gaia mission
NASA Astrophysics Data System (ADS)
Ordóñez-Blanco, D.; Arcay, B.; Dafonte, C.; Manteiga, M.; Ulla, A.
2010-12-01
Astrophysics is evolving towards the rational optimization of costly observational material through the intelligent exploitation of large astronomical databases from both ground-based telescopes and space mission archives. However, there has been relatively little advance in the development of the highly scalable data exploitation and analysis tools needed to generate the scientific returns from these large and expensively obtained datasets. Among the upcoming projects of astronomical instrumentation, Gaia is the next cornerstone ESA mission. The Gaia survey foresees the creation of a data archive and its future exploitation with automated or semi-automated analysis tools. This work reviews some of the work being developed by the Gaia Data Processing and Analysis Consortium for object classification and the analysis of outliers in the forthcoming mission.
ANNUAL REPORT For Calendar Year 2007 : NEW ENGLAND TRANSPORTATION CONSORTIUM
DOT National Transportation Integrated Search
2008-02-02
The New England Transportation Consortium (NETC) is a cooperative effort of the transportation agencies of the six New England States. Through the Consortium, the states pool professional, academic and financial resources for transportation research ...
ANNUAL REPORT For Calendar Year 2006 : NEW ENGLAND TRANSPORTATION CONSORTIUM
DOT National Transportation Integrated Search
2007-04-01
The New England Transportation Consortium (NETC) is a cooperative effort of the transportation agencies of the six New England States. Through the Consortium, the states pool professional, academic and financial resources for transportation research ...
ANNUAL REPORT For Calendar Year 2009 : NEW ENGLAND TRANSPORTATION CONSORTIUM
DOT National Transportation Integrated Search
2010-03-01
The New England Transportation Consortium (NETC) is a cooperative effort of the transportation agencies of the six New England States. Through the Consortium, the states pool professional, academic and financial resources for transportation research ...
ANNUAL REPORT For Calendar Year 1996 : NEW ENGLAND TRANSPORTATION CONSORTIUM
DOT National Transportation Integrated Search
1997-01-01
The New England Transportation Consortium (NETC) is a cooperative effort of the transportation agencies of the six New England States. Through the Consortium, the states pool professional, academic and financial resources for transportation research ...
Midwest Transportation Consortium : 2006-2007 annual report.
DOT National Transportation Integrated Search
2007-01-01
Introduction: The Midwest Transportation Consortium (MTC) began year 8 by having the funding it receives from the Research and Innovative Technology Administration doubled, and by losing its regional grant to a consortium led by the University of Neb...
ANNUAL REPORT For Calendar Year 2008 : NEW ENGLAND TRANSPORTATION CONSORTIUM
DOT National Transportation Integrated Search
2009-04-01
The New England Transportation Consortium (NETC) is a cooperative effort of the transportation agencies of the six New England States. Through the Consortium, the states pool professional, academic and financial resources for transportation research ...
ANNUAL REPORT For Calendar Year 2002 : NEW ENGLAND TRANSPORTATION CONSORTIUM
DOT National Transportation Integrated Search
2003-11-01
The New England Transportation Consortium (NETC) is a cooperative effort of the transportation agencies of the six New England States. Through the Consortium, the states pool professional, academic and financial resources for transportation research ...
ANNUAL REPORT For Calendar Year 2003 : NEW ENGLAND TRANSPORTATION CONSORTIUM
DOT National Transportation Integrated Search
2005-09-22
The New England Transportation Consortium (NETC) is a cooperative effort of the transportation agencies of the six New England States. Through the Consortium, the states pool professional, academic and financial resources for transportation research ...
ANNUAL REPORT For Calendar Year 2004 : NEW ENGLAND TRANSPORTATION CONSORTIUM
DOT National Transportation Integrated Search
2005-12-01
The New England Transportation Consortium (NETC) is a cooperative effort of the transportation agencies of the six New England States. Through the Consortium, the states pool professional, academic and financial resources for transportation research ...
ANNUAL REPORT For Calendar Year 2001 : NEW ENGLAND TRANSPORTATION CONSORTIUM
DOT National Transportation Integrated Search
2002-12-01
The New England Transportation Consortium (NETC) is a cooperative effort of the transportation agencies of the six New England States. Through the Consortium, the states pool professional, academic and financial resources for transportation research ...
ANNUAL REPORT For Calendar Year 2005 : NEW ENGLAND TRANSPORTATION CONSORTIUM
DOT National Transportation Integrated Search
2006-08-01
The New England Transportation Consortium (NETC) is a cooperative effort of the transportation agencies of the six New England States. Through the Consortium, the states pool professional, academic and financial resources for transportation research ...
Establishing a Consortium for the Study of Rare Diseases: The Urea Cycle Disorders Consortium
Seminara, Jennifer; Tuchman, Mendel; Krivitzky, Lauren; Krischer, Jeffrey; Lee, Hye-Seung; LeMons, Cynthia; Baumgartner, Matthias; Cederbaum, Stephen; Diaz, George A.; Feigenbaum, Annette; Gallagher, Renata C.; Harding, Cary O.; Kerr, Douglas S.; Lanpher, Brendan; Lee, Brendan; Lichter-Konecki, Uta; McCandless, Shawn E.; Merritt, J. Lawrence; Oster-Granite, Mary Lou; Seashore, Margretta R.; Stricker, Tamar; Summar, Marshall; Waisbren, Susan; Yudkoff, Marc; Batshaw, Mark L.
2010-01-01
The Urea Cycle Disorders Consortium (UCDC) was created as part of a larger network established by the National Institutes of Health to study rare diseases. This paper reviews the UCDC’s accomplishments over the first six years, including how the Consortium was developed and organized, clinical research studies initiated, and the importance of creating partnerships with patient advocacy groups, philanthropic foundations and biotech and pharmaceutical companies. PMID:20188616
Northeast Artificial Intelligence Consortium (NAIC) Review of Technical Tasks. Volume 2, Part 2.
1987-07-01
Northeast Artificial Intelligence Consortium (NAIC), Syracuse, NY: Review of Technical Tasks, Volume 2, Part 2. Personal authors include J. F. Allen and P. B. Berra. [The remainder of the scanned report documentation page is illegible; the abstract begins "The Northeast Artificial Intelligence Consortium".]
1989-10-01
Northeast Artificial Intelligence Consortium (NAIC) Annual Report, RADC-TR-89-259, Vol. XI (of twelve), Interim Report, October 1989. [The remainder of the scanned title and documentation pages is illegible.]
Uzuner, Halil; Bauer, Rudolf; Fan, Tai-Ping; Guo, De-An; Dias, Alberto; El-Nezami, Hani; Efferth, Thomas; Williamson, Elizabeth M; Heinrich, Michael; Robinson, Nicola; Hylands, Peter J; Hendry, Bruce M; Cheng, Yung-Chi; Xu, Qihe
2012-04-10
GP-TCM is the first EU-funded Coordination Action consortium dedicated to traditional Chinese medicine (TCM) research. This paper aims to summarise the objectives, structure and activities of the consortium and introduces the position of the consortium regarding good practice, priorities, challenges and opportunities in TCM research. Serving as the introductory paper for the GP-TCM Journal of Ethnopharmacology special issue, this paper describes the roadmap of this special issue and reports how the main outputs of the ten GP-TCM work packages are integrated, and have led to consortium-wide conclusions. The work drew on literature studies, opinion polls and discussions among consortium members and stakeholders. By January 2012, through 3 years of team building, the GP-TCM consortium had grown into a large collaborative network involving ∼200 scientists from 24 countries and 107 institutions. Consortium members had worked closely to address good practice issues related to various aspects of Chinese herbal medicine (CHM) and acupuncture research, the focus of this Journal of Ethnopharmacology special issue, leading to state-of-the-art reports, guidelines and consensus on the application of omics technologies in TCM research. In addition, through an online survey open to GP-TCM members and non-members, we polled opinions on grand priorities, challenges and opportunities in TCM research. Based on the poll, although consortium members and non-members had diverse opinions on the major challenges in the field, both groups agreed that high-quality efficacy/effectiveness and mechanistic studies are grand priorities and that the TCM legacy in general and its management of chronic diseases in particular represent grand opportunities. Consortium members cast their votes of confidence in omics and systems biology approaches to TCM research and believed that quality and pharmacovigilance of TCM products are not only grand priorities, but also grand challenges. Non-members, however, gave priority to integrative medicine, expressed concern about the impact of regulating TCM practitioners and emphasised intersectoral collaborations in funding TCM research, especially clinical trials. The GP-TCM consortium made great efforts to address some fundamental issues in TCM research, including developing guidelines, as well as identifying priorities, challenges and opportunities. These consortium guidelines and consensus will need dissemination, validation and further development through continued interregional, interdisciplinary and intersectoral collaborations. To promote this, a new consortium, known as the GP-TCM Research Association, is being established to succeed the 3-year fixed-term FP7 GP-TCM consortium and will be officially launched at the Final GP-TCM Congress in Leiden, the Netherlands, in April 2012. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Model based rib-cage unfolding for trauma CT
NASA Astrophysics Data System (ADS)
von Berg, Jens; Klinder, Tobias; Lorenz, Cristian
2018-03-01
A CT rib-cage unfolding method is proposed that does not require rib centerlines to be determined; instead, it determines the visceral cavity surface by model-based segmentation. Image intensities are sampled across this surface, which is flattened using a model-based 3D thin-plate-spline registration. An average rib centerline model projected onto this surface serves as the reference system for the registration. The flattening registration is designed so that ribs similar to the centerline model are mapped onto parallel lines, preserving their relative length. Ribs deviating from this model accordingly appear as deviations from straight parallel ribs in the unfolded view. Because the mapping is continuous, details in the intercostal space and adjacent to the ribs are also rendered well. The most beneficial application area is trauma CT, where fast detection of rib fractures is a crucial task. Specifically in trauma, automatic rib centerline detection may not be guaranteed due to fractures and dislocations. Application with visual assessment on the large public LIDC database of lung CT demonstrated the general feasibility of this early work.
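The central step described above is a thin-plate-spline mapping from a flat reference plane onto the segmented visceral cavity surface, with CT intensities resampled along that surface. The sketch below is only an illustrative, minimal reconstruction of that idea, not the authors' implementation: the volume, the landmark correspondences between the flat plane and the 3D surface, and the grid size are all assumptions, and SciPy's RBFInterpolator (thin-plate-spline kernel) stands in for the model-based registration.

    # Hypothetical sketch of thin-plate-spline unfolding; synthetic data throughout.
    import numpy as np
    from scipy.interpolate import RBFInterpolator
    from scipy.ndimage import map_coordinates

    # Synthetic CT volume (z, y, x) and paired landmarks:
    # flat 2D reference coordinates on the unfolded plane <-> 3D surface points.
    volume = np.random.rand(64, 128, 128)
    flat_landmarks = np.array([[u, v] for u in range(0, 128, 16)
                                       for v in range(0, 128, 16)], float)
    surface_landmarks = np.column_stack([
        32 + 5 * np.sin(flat_landmarks[:, 0] / 20.0),   # z varies smoothly
        flat_landmarks[:, 1],                            # y
        flat_landmarks[:, 0],                            # x
    ])

    # Thin-plate-spline mapping from the flat reference plane into the volume.
    tps = RBFInterpolator(flat_landmarks, surface_landmarks,
                          kernel='thin_plate_spline')

    # Sample intensities across the surface on a regular unfolded grid.
    uu, vv = np.meshgrid(np.arange(128), np.arange(128), indexing='ij')
    grid2d = np.column_stack([uu.ravel(), vv.ravel()]).astype(float)
    points3d = tps(grid2d)                               # (N, 3) in z, y, x order
    unfolded = map_coordinates(volume, points3d.T, order=1).reshape(128, 128)
    print(unfolded.shape)  # (128, 128) unfolded intensity image

In the paper the correspondences come from the average rib centerline model rather than hand-picked points, but the resampling principle is the same.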
Baig, Sheharyar S; Strong, Mark; Rosser, Elisabeth; Taverner, Nicola V; Glew, Ruth; Miedzybrodzka, Zosia; Clarke, Angus; Craufurd, David; Quarrell, Oliver W
2016-01-01
Huntington's disease (HD) is a progressive neurodegenerative condition. At-risk individuals have accessed predictive testing via direct mutation testing since 1993. The UK Huntington's Prediction Consortium has collected anonymised data on UK predictive tests, annually, from 1993 to 2014: 9407 predictive tests were performed across 23 UK centres. Where gender was recorded, 4077 participants were male (44.3%) and 5122 were female (55.7%). The median age of participants was 37 years. The most common reason for predictive testing was to reduce uncertainty (70.5%). Of the 8441 predictive tests on individuals at 50% prior risk, 4629 (54.8%) were reported as mutation negative and 3790 (44.9%) were mutation positive, with 22 (0.3%) in the database being uninterpretable. Using a prevalence figure of 12.3 × 10⁻⁵, the cumulative uptake of predictive testing in the 50% at-risk UK population from 1994 to 2014 was estimated at 17.4% (95% CI: 16.9–18.0%). We present the largest study conducted on predictive testing in HD. Our findings indicate that the vast majority of individuals at risk of HD (>80%) have not undergone predictive testing. Future therapies in HD will likely target presymptomatic individuals; therefore, identifying the at-risk population whose gene status is unknown is of significant public health value. PMID:27165004
Papachristou, Georgios I.; Machicado, Jorge D.; Stevens, Tyler; Goenka, Mahesh Kumar; Ferreira, Miguel; Gutierrez, Silvia C.; Singh, Vikesh K.; Kamal, Ayesha; Gonzalez-Gonzalez, Jose A.; Pelaez-Luna, Mario; Gulla, Aiste; Zarnescu, Narcis O.; Triantafyllou, Konstantinos; Barbu, Sorin T.; Easler, Jeffrey; Ocampo, Carlos; Capurso, Gabriele; Archibugi, Livia; Cote, Gregory A.; Lambiase, Louis; Kochhar, Rakesh; Chua, Tiffany; Tiwari, Subhash Ch.; Nawaz, Haq; Park, Walter G.; de-Madaria, Enrique; Lee, Peter J.; Wu, Bechien U.; Greer, Phil J.; Dugum, Mohannad; Koutroumpakis, Efstratios; Akshintala, Venkata; Gougol, Amir
2017-01-01
Background: We have established a multicenter international consortium to better understand the natural history of acute pancreatitis (AP) worldwide and to develop a platform for future randomized clinical trials. Methods: The AP patient registry to examine novel therapies in clinical experience (APPRENTICE) was formed in July 2014. Detailed web-based questionnaires were then developed to prospectively capture information on demographics, etiology, pancreatitis history, comorbidities, risk factors, severity biomarkers, severity indices, health-care utilization, management strategies, and outcomes of AP patients. Results: Between November 2015 and September 2016, a total of 20 sites (8 in the United States, 5 in Europe, 3 in South America, 2 in Mexico and 2 in India) prospectively enrolled 509 AP patients. All data were entered into the REDCap (Research Electronic Data Capture) database by participating centers and systematically reviewed by the coordinating site (University of Pittsburgh). The approaches and methodology are described in detail, along with an interim report on the demographic results. Conclusion: APPRENTICE, an international collaboration of tertiary AP centers throughout the world, has demonstrated the feasibility of building a large, prospective, multicenter patient registry to study AP. Analysis of the collected data may provide a greater understanding of AP and APPRENTICE will serve as a future platform for randomized clinical trials. PMID:28042246
Jacewicz, Renata; Ossowski, Andrzej; Ławrynowicz, Olgierd; Jędrzejczyk, Maciej; Prośniak, Adam; Bąbol-Pokora, Katarzyna; Diepenbroek, Marta; Szargut, Maria; Zielińska, Grażyna; Berent, Jarosław
2017-01-01
It can be reasonably assumed that remains exhumed in 2012 and 2013 during archaeological explorations conducted in the Lućmierz Forest, an important area on the map of the German Nazi terror in the region of Lodz (Poland), are in fact the remains of a hundred Poles murdered by the Nazis in Zgierz on March 20, 1942. By virtue of a decision of the Polish Institute of National Remembrance's Commission for the Prosecution of Crimes Against the Polish Nation, the verification of this research hypothesis was entrusted to SIGO (Network for Genetic Identification of Victims) Consortium appointed by virtue of an agreement of December 11, 2015. The Consortium is an extension of the PBGOT (Polish Genetic Database of Totalitarianisms Victims). So far, the researchers have retrieved 14 DNA profiles from among the examined remains, including 12 male and 2 female profiles. Furthermore, 12 DNA profiles of the victims' family members have been collected. Due to the fact that next-of-kin relatives of the victims of the Zgierz massacre are of advanced age, it is of key importance to collect genetic material as soon as possible from the other surviving family members, identified on the basis of a list of victims that has been nearly completely compiled by the Polish Institute of National Remembrance (IPN) and is presented in this paper.
NASA Systems Engineering Research Consortium: Defining the Path to Elegance in Systems
NASA Technical Reports Server (NTRS)
Watson, Michael D.; Farrington, Phillip A.
2016-01-01
The NASA Systems Engineering Research Consortium was formed at the end of 2010 to study the approaches to producing elegant systems on a consistent basis. This has been a transformative study looking at the engineering and organizational basis of systems engineering. The consortium has engaged in a variety of research topics to determine the path to elegant systems. In the second year of the consortium, a systems engineering framework emerged which structured the approach to systems engineering and guided our research. This led in the third year to a set of systems engineering postulates that the consortium is continuing to refine. The consortium has conducted several research projects that have contributed significantly to the understanding of systems engineering. The consortium has surveyed the application of the 17 NASA systems engineering processes, explored the physics and statistics of systems integration, and considered organizational aspects of systems engineering discipline integration. The systems integration methods have included system exergy analysis, Akaike Information Criteria (AIC), State Variable Analysis, Multidisciplinary Coupling Analysis (MCA), Multidisciplinary Design Optimization (MDO), System Cost Modelling, System Robustness, and Value Modelling. Organizational studies have included the variability of processes in change evaluations, margin management within the organization, information theory of board structures, social categorization of unintended consequences, and initial looks at applying cognitive science to systems engineering. Consortium members have also studied the bidirectional influence of policy and law with systems engineering.
Xiong, Jiu-Qiang; Kurade, Mayur B; Jeon, Byong-Hun
2017-07-01
Enrofloxacin (ENR), a fluoroquinolone antibiotic, has attracted considerable scientific concern because of its ecotoxicity toward aquatic microbiota. The ecotoxicity and removal of ENR by five individual microalgae species and their consortium were studied to better understand the behavior and interactions of ENR in natural systems. The individual microalgal species (Scenedesmus obliquus, Chlamydomonas mexicana, Chlorella vulgaris, Ourococcus multisporus, Micractinium resseri) and their consortium could withstand high doses of ENR (≤1 mg L⁻¹). Growth inhibition (68-81%) of the individual microalgae species and their consortium was observed at 100 mg ENR L⁻¹ compared to the control after 11 days of cultivation. The calculated 96-h EC50 of ENR for the individual microalgae species and the microalgae consortium was 9.6-15.0 mg ENR L⁻¹. All the microalgae could recover from the toxicity of high concentrations of ENR during cultivation. The biochemical characteristics (total chlorophyll, carotenoid, and malondialdehyde) were significantly influenced by ENR (1-100 mg L⁻¹) stress. The individual microalgae species and the microalgae consortium removed 18-26% of ENR by day 11. Although the microalgae consortium showed a higher sensitivity (lower EC50) toward ENR than the individual microalgae species, the removal efficiency of ENR by the constructed microalgae consortium was comparable to that of the most effective microalgal species. Copyright © 2017 Elsevier Ltd. All rights reserved.
Code of Federal Regulations, 2010 CFR
2010-04-01
.../Consortium should immediately: (a) Inform the Assistant Solicitor, Procurement and Patents, Office of the...? 1000.283 Section 1000.283 Indians OFFICE OF THE ASSISTANT SECRETARY, INDIAN AFFAIRS, DEPARTMENT OF THE...
24 CFR 943.128 - How does a consortium carry out planning and reporting functions?
Code of Federal Regulations, 2010 CFR
2010-04-01
... HOUSING, DEPARTMENT OF HOUSING AND URBAN DEVELOPMENT PUBLIC HOUSING AGENCY CONSORTIA AND JOINT VENTURES... the consortium agreement, the consortium must submit joint five-year Plans and joint Annual Plans for... the joint PHA Plan. ...
THE FEDERAL INTEGRATED BIOTREATMENT RESEARCH CONSORTIUM (FLASK TO FIELD)
The Federal Integrated Biotreatment Research Consortium (Flask to Field) represented a 7-year concerted effort by several research laboratories to develop bioremediation technologies for contaminated DoD sites. The consortium structure consisted of a director and four thrust are...
Code of Federal Regulations, 2013 CFR
2013-04-01
.../Consortium should immediately: (a) Inform the Assistant Solicitor, Procurement and Patents, Office of the...? 1000.283 Section 1000.283 Indians OFFICE OF THE ASSISTANT SECRETARY, INDIAN AFFAIRS, DEPARTMENT OF THE...
Code of Federal Regulations, 2012 CFR
2012-04-01
.../Consortium should immediately: (a) Inform the Assistant Solicitor, Procurement and Patents, Office of the...? 1000.283 Section 1000.283 Indians OFFICE OF THE ASSISTANT SECRETARY, INDIAN AFFAIRS, DEPARTMENT OF THE...
Code of Federal Regulations, 2011 CFR
2011-04-01
.../Consortium should immediately: (a) Inform the Assistant Solicitor, Procurement and Patents, Office of the...? 1000.283 Section 1000.283 Indians OFFICE OF THE ASSISTANT SECRETARY, INDIAN AFFAIRS, DEPARTMENT OF THE...
Code of Federal Regulations, 2014 CFR
2014-04-01
.../Consortium should immediately: (a) Inform the Assistant Solicitor, Procurement and Patents, Office of the...? 1000.283 Section 1000.283 Indians OFFICE OF THE ASSISTANT SECRETARY, INDIAN AFFAIRS, DEPARTMENT OF THE...
Evaluating robustness of a diesel-degrading bacterial consortium isolated from contaminated soil.
Sydow, Mateusz; Owsianiak, Mikołaj; Szczepaniak, Zuzanna; Framski, Grzegorz; Smets, Barth F; Ławniczak, Łukasz; Lisiecki, Piotr; Szulc, Alicja; Cyplik, Paweł; Chrzanowski, Łukasz
2016-12-25
It is not known whether diesel-degrading bacterial communities are structurally and functionally robust when exposed to different hydrocarbon types. Here, we exposed a diesel-degrading consortium to either model alkanes, cycloalkanes or aromatic hydrocarbons as carbon sources to study its structural resistance. The structural resistance was low, with changes in relative abundances of up to four orders of magnitude, depending on hydrocarbon type and bacterial taxon. This low resistance is explained by the presence of hydrocarbon-degrading specialists in the consortium and differences in growth kinetics on individual hydrocarbons. However, despite this low resistance, structural and functional resilience were high, as verified by re-exposing the hydrocarbon-perturbed consortium to diesel fuel. The high resilience is due either to the short exposure time, insufficient for permanent changes in consortium structure and function, or to the ability of some consortium members to be maintained during exposure on degradation intermediates produced by other members. Thus, the consortium is expected to cope with short-term exposures to narrow carbon feeds while maintaining its structural and functional integrity, which remains an advantage over biodegradation approaches using single-species cultures. Copyright © 2016 Elsevier B.V. All rights reserved.
Prebiotics Mediate Microbial Interactions in a Consortium of the Infant Gut Microbiome.
Medina, Daniel A; Pinto, Francisco; Ovalle, Aline; Thomson, Pamela; Garrido, Daniel
2017-10-04
Composition of the gut microbiome is influenced by diet. Milk or formula oligosaccharides act as prebiotics, bioactives that promote the growth of beneficial gut microbes. The influence of prebiotics on microbial interactions is not well understood. Here we investigated the transformation of prebiotics by a consortium of four representative species of the infant gut microbiome, and how their interactions changed with dietary substrates. First, we optimized a culture medium resembling certain infant gut parameters. A consortium containing Bifidobacterium longum subsp. infantis, Bacteroides vulgatus, Escherichia coli and Lactobacillus acidophilus was grown on fructooligosaccharides (FOS) or 2'-fucosyllactose (2FL) in mono- or co-culture. While Bi. infantis and Ba. vulgatus dominated growth on 2FL, their combined growth was reduced. In addition, interaction coefficients indicated strong competition, especially on FOS. While FOS was rapidly consumed by the consortium, Bi. infantis was the only microbe displaying significant consumption of 2FL. Acid production by the consortium resembled the metabolism of the microorganisms dominating growth on each substrate. Finally, the consortium was tested in a bioreactor, showing similar predominance but more pronounced acid production and substrate consumption. This study indicates that the chemical nature of prebiotics modulates microbial interactions in a consortium of infant gut species.
Manasa, Justen; Lessells, Richard; Rossouw, Theresa; Naidu, Kevindra; Van Vuuren, Cloete; Goedhals, Dominique; van Zyl, Gert; Bester, Armand; Skingsley, Andrew; Stott, Katharine; Danaviah, Siva; Chetty, Terusha; Singh, Lavanya; Moodley, Pravi; Iwuji, Collins; McGrath, Nuala; Seebregts, Christopher J.; de Oliveira, Tulio
2014-01-01
Substantial amounts of data have been generated from patient management and academic exercises designed to better understand the human immunodeficiency virus (HIV) epidemic and design interventions to control it. A number of specialized databases have been designed to manage huge data sets from HIV cohort, vaccine, host genomic and drug resistance studies. Apart from databases from cohort studies, most of the online databases contain limited curated data and are thus essentially sequence repositories. HIV drug resistance has been shown to have great potential to derail the progress made thus far through antiretroviral therapy. Consequently, substantial resources have been invested in generating drug resistance data for patient management and surveillance purposes. Unfortunately, most of the data currently available relate to subtype B even though >60% of the epidemic is caused by HIV-1 subtype C. A consortium of clinicians, scientists, public health experts and policy makers working in southern Africa came together and formed a network, the Southern African Treatment and Resistance Network (SATuRN), with the aim of increasing curated HIV-1 subtype C and tuberculosis drug resistance data. This article describes the HIV-1 data curation process using the SATuRN Rega database. The data curation is a manual and time-consuming process done by clinical, laboratory and data curation specialists. Access to the highly curated data sets is through applications that are reviewed by the SATuRN executive committee. Examples of research outputs from the analysis of the curated data include trends in the level of transmitted drug resistance in South Africa, analysis of the levels of acquired resistance among patients failing therapy and factors associated with the absence of genotypic evidence of drug resistance among patients failing therapy. All these studies have been important for informing first- and second-line therapy. This database is a free, password-protected, open-source database available on www.bioafrica.net. Database URL: http://www.bioafrica.net/regadb/ PMID:24504151
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1992-12-31
The member institutions of the Consortium continue to play a significant role in increasing the number of African Americans who enter the environmental professions through the implementation of the Consortium's RETT Plan for Research, Education, and Technology Transfer. The four major program areas identified in the RETT Plan are as follows: (1) minority outreach and precollege education; (2) undergraduate education and postsecondary training; (3) graduate and postgraduate education and research; and (4) technology transfer.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 25 Indians 2 2010-04-01 2010-04-01 false Does a Tribe/Consortium have the right to include.../Consortium have the right to include provisions of Title I of Pub. L. 93-638 in an AFA? Yes, under Pub. L. 104-109, a Tribe/Consortium has the right to include any provision of Title I of Pub. L. 93-638 in an...
Migrating from Informal to Formal Consortium — COSTLI Issues
NASA Astrophysics Data System (ADS)
Birdie, C.; Patil, Y. M.
2010-10-01
There are many models of library consortia, which have come into existence for various reasons and under various compulsions. FORSA (Forum for Resource Sharing in Astronomy) is an informal consortium born from the links between academic institutions specializing in astronomy in India. FORSA is a cooperative venture initiated by library professionals. Though this consortium was formed mainly for inter-lending activities and bibliographic access, it has matured over the years to adopt a consortium approach to cooperative acquisitions as requirements have increased.
Robust semi-automatic segmentation of pulmonary subsolid nodules in chest computed tomography scans
NASA Astrophysics Data System (ADS)
Lassen, B. C.; Jacobs, C.; Kuhnigk, J.-M.; van Ginneken, B.; van Rikxoort, E. M.
2015-02-01
The malignancy of lung nodules is most often assessed by analyzing changes in the nodule diameter in follow-up scans. A recent study showed that comparing the volume or the mass of a nodule over time is much more significant than comparing the diameter. Since the survival rate is higher when the disease is still in an early stage, it is important to detect the growth rate as soon as possible. However, manual segmentation of a volume is time-consuming. Whereas there are several well-evaluated methods for the segmentation of solid nodules, less work has been done on subsolid nodules, which actually show a higher malignancy rate than solid nodules. In this work we present a fast, semi-automatic method for segmentation of subsolid nodules. As its only user interaction, the method expects a user-drawn stroke along the largest diameter of the nodule. First, threshold-based region growing is performed based on an intensity analysis of the nodule region and the surrounding parenchyma. In the next step the chest wall is removed by a combination of connected-component analysis and convex-hull calculation. Finally, attached vessels are detached by morphological operations. The method was evaluated on all nodules of the publicly available LIDC/IDRI database that were manually segmented and rated as non-solid or part-solid by four radiologists (Dataset 1) or three radiologists (Dataset 2). For these 59 nodules, the Jaccard index for the agreement of the proposed method with the manual reference segmentations was 0.52/0.50 (Dataset 1/Dataset 2), compared to an inter-observer agreement of the manual segmentations of 0.54/0.58 (Dataset 1/Dataset 2). Furthermore, the inter-observer agreement of the proposed method (i.e. with different input strokes) was analyzed and gave a Jaccard index of 0.74/0.74 (Dataset 1/Dataset 2). The presented method provides satisfactory segmentation results with minimal observer effort in minimal time and can reduce the inter-observer variability for segmentation of subsolid nodules in clinical routine.
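As a rough illustration of the pipeline sketched above (threshold-based region growing from the user stroke followed by morphological cleanup), the following minimal Python sketch uses SciPy. It is not the authors' code: it omits the convex-hull chest-wall removal step, and the thresholds, array names and toy volume are all assumptions.

    # Minimal sketch: stroke-seeded, threshold-based region growing with
    # morphological vessel detachment. Synthetic data and thresholds.
    import numpy as np
    from scipy import ndimage as ndi

    def segment_from_stroke(volume, stroke_voxels, low=-750, high=-300):
        """volume: CT in HU; stroke_voxels: (N, 3) voxel indices on the nodule."""
        # 1. Threshold the volume around subsolid (ground-glass) attenuation.
        mask = (volume >= low) & (volume <= high)
        # 2. Keep only the connected components touching the user stroke
        #    (a simple stand-in for seeded region growing).
        labels, _ = ndi.label(mask)
        seeds = set(labels[tuple(stroke_voxels.T)]) - {0}
        grown = np.isin(labels, list(seeds))
        # 3. Morphological opening to detach thin attached vessels.
        opened = ndi.binary_opening(grown, structure=np.ones((3, 3, 3)))
        # 4. Re-select the component containing the stroke after opening.
        labels2, _ = ndi.label(opened)
        keep = set(labels2[tuple(stroke_voxels.T)]) - {0}
        return np.isin(labels2, list(keep))

    # Toy usage on a synthetic volume containing a fake subsolid blob.
    vol = np.full((64, 64, 64), -900.0)
    vol[28:36, 28:36, 28:36] = -500.0
    stroke = np.array([[30, 30, 28], [32, 32, 35]])
    seg = segment_from_stroke(vol, stroke)
    print(int(seg.sum()), "voxels segmented")

The intensity window would in practice be derived from the stroke statistics rather than fixed, which is the adaptive element the abstract alludes to.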
Parmar, Chintan; Blezek, Daniel; Estepar, Raul San Jose; Pieper, Steve; Kim, John; Aerts, Hugo J. W. L.
2017-01-01
Purpose: Accurate segmentation of lung nodules is crucial in the development of imaging biomarkers for predicting malignancy of the nodules. Manual segmentation is time consuming and affected by inter-observer variability. We evaluated the robustness and accuracy of a publicly available semiautomatic segmentation algorithm that is implemented in the 3D Slicer Chest Imaging Platform (CIP) and compared it with the performance of manual segmentation. Methods: CT images of 354 manually segmented nodules were downloaded from the LIDC database. Four radiologists performed the manual segmentation and assessed various nodule characteristics. The semiautomatic CIP segmentation was initialized using the centroid of the manual segmentations, thereby generating four contours for each nodule. The robustness of both segmentation methods was assessed using the region of uncertainty (δ) and the Dice similarity index (DSI). The robustness of the segmentation methods was compared using the Wilcoxon signed-rank test (p_Wilcoxon < 0.05). The Dice similarity index (DSI_Agree) between the manual and CIP segmentations was computed to estimate the accuracy of the semiautomatic contours. Results: The median computational time of the CIP segmentation was 10 s. The median CIP and manually segmented volumes were 477 ml and 309 ml, respectively. CIP segmentations were significantly more robust than manual segmentations (median δ_CIP = 14 ml, median DSI_CIP = 99% vs. median δ_manual = 222 ml, median DSI_manual = 82%) with p_Wilcoxon ≈ 10⁻¹⁶. The agreement between CIP and manual segmentations had a median DSI_Agree of 60%. While 13% (47/354) of the nodules did not require any manual adjustment, minor to substantial manual adjustments were needed for 87% (305/354) of the nodules. CIP segmentations were observed to perform poorly (median DSI_Agree ≈ 50%) for non-/sub-solid nodules with subtle appearances and poorly defined boundaries. Conclusion: Semi-automatic CIP segmentation can potentially reduce the physician workload for 13% of nodules owing to its computational efficiency and superior stability compared to manual segmentation. Although manual adjustment is needed for many cases, CIP segmentation provides a preliminary contour for physicians as a starting point. PMID:28594880
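For reference, the Dice similarity index used above to quantify agreement between two segmentations can be computed as in the generic illustration below; the masks are made up and this is not code from the evaluated platform.

    # Dice similarity index (DSI) between two binary segmentation masks.
    import numpy as np

    def dice_similarity(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
        """DSI = 2|A ∩ B| / (|A| + |B|), in [0, 1]."""
        a, b = mask_a.astype(bool), mask_b.astype(bool)
        denom = a.sum() + b.sum()
        return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

    # Toy example: two overlapping cubes in a small volume.
    a = np.zeros((32, 32, 32), bool); a[8:20, 8:20, 8:20] = True
    b = np.zeros((32, 32, 32), bool); b[12:24, 12:24, 12:24] = True
    print(round(dice_similarity(a, b), 3))  # ~0.296 for this overlap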
Gianni, Daniele; McKeever, Steve; Yu, Tommy; Britten, Randall; Delingette, Hervé; Frangi, Alejandro; Hunter, Peter; Smith, Nicolas
2010-06-28
Sharing and reusing anatomical models over the Web offers a significant opportunity to progress the investigation of cardiovascular diseases. However, the current sharing methodology suffers from the limitations of static model delivery (i.e. embedding static links to the models within Web pages) and of a disaggregated view of the model metadata produced by publications and cardiac simulations in isolation. In the context of euHeart--a research project targeting the description and representation of cardiovascular models for disease diagnosis and treatment purposes--we aim to overcome the above limitations with the introduction of euHeartDB, a Web-enabled database for anatomical models of the heart. The database implements a dynamic sharing methodology by managing data access and by tracing all applications. In addition to this, euHeartDB establishes a knowledge link with the physiome model repository by linking geometries to CellML models embedded in the simulation of cardiac behaviour. Furthermore, euHeartDB uses the exFormat--a preliminary version of the interoperable FieldML data format--to effectively promote reuse of anatomical models, and currently incorporates Continuum Mechanics, Image Analysis, Signal Processing and System Identification Graphical User Interface (CMGUI), a rendering engine, to provide three-dimensional graphical views of the models populating the database. Currently, euHeartDB stores 11 cardiac geometries developed within the euHeart project consortium.
VCGDB: a dynamic genome database of the Chinese population
2014-01-01
Background: The data released by the 1000 Genomes Project contain an increasing number of genome sequences from different nations and populations with a large number of genetic variations. As a result, the focus of human genome studies is changing from single and static to complex and dynamic. The currently available human reference genome (GRCh37) is based on sequencing data from 13 anonymous Caucasian volunteers, which might limit the scope of genomics, transcriptomics, epigenetics, and genome wide association studies. Description: We used the massive amount of sequencing data published by the 1000 Genomes Project Consortium to construct the Virtual Chinese Genome Database (VCGDB), a dynamic genome database of the Chinese population based on the whole genome sequencing data of 194 individuals. VCGDB provides dynamic genomic information, which contains 35 million single nucleotide variations (SNVs), 0.5 million insertions/deletions (indels), and 29 million rare variations, together with genomic annotation information. VCGDB also provides a highly interactive user-friendly virtual Chinese genome browser (VCGBrowser) with functions like seamless zooming and real-time searching. In addition, we have established three population-specific consensus Chinese reference genomes that are compatible with mainstream alignment software. Conclusions: VCGDB offers a feasible strategy for processing big data to keep pace with the biological data explosion by providing a robust resource for genomics studies; in particular, studies aimed at finding regions of the genome associated with diseases. PMID:24708222
Increasing Sales by Developing Production Consortiums.
ERIC Educational Resources Information Center
Smith, Christopher A.; Russo, Robert
Intended to help rehabilitation facility administrators increase organizational income from manufacturing and/or contracted service sources, this document provides a decision-making model for the development of a production consortium. The document consists of five chapters and two appendices. Chapter 1 defines the consortium concept, explains…
14 CFR 1274.205 - Consortia as recipients.
Code of Federal Regulations, 2010 CFR
2010-01-01
... agreement with a consortium is the consortium's Articles of Collaboration, which is a definitive description of the roles and responsibilities of the consortium's members. The Articles of Collaboration must... the Articles of Collaboration only if they are inclusive of all of the required information. (e) An...
10 CFR 603.515 - Qualification of a consortium.
Code of Federal Regulations, 2010 CFR
2010-01-01
... is not formally incorporated must provide a collaboration agreement, commonly referred to as the articles of collaboration, which sets out the rights and responsibilities of each consortium member. This... the consortium's collaboration agreement to ensure that the management plan is sound and that it...
10 CFR 603.515 - Qualification of a consortium.
Code of Federal Regulations, 2011 CFR
2011-01-01
... is not formally incorporated must provide a collaboration agreement, commonly referred to as the articles of collaboration, which sets out the rights and responsibilities of each consortium member. This... the consortium's collaboration agreement to ensure that the management plan is sound and that it...
14 CFR 1274.205 - Consortia as recipients.
Code of Federal Regulations, 2011 CFR
2011-01-01
... agreement with a consortium is the consortium's Articles of Collaboration, which is a definitive description of the roles and responsibilities of the consortium's members. The Articles of Collaboration must... the Articles of Collaboration only if they are inclusive of all of the required information. (e) An...
25 CFR 1000.21 - When does a Tribe/Consortium have a “material audit exception”?
Code of Federal Regulations, 2010 CFR
2010-04-01
...-Governance Eligibility § 1000.21 When does a Tribe/Consortium have a “material audit exception”? A Tribe/Consortium has a material audit exception if any of the audits that it submitted under § 1000.17(c...
Consortium List of African-American Materials.
ERIC Educational Resources Information Center
Jordan, Casper L., Ed.
A bibliography is provided of the materials identified by the consortium participating in the African-American Materials Project. Members of the consortium include: Atlanta University, Fisk University, Hampton Institute, North Carolina Central University, South Carolina State College, and Tuskegee Institute. The materials listed were located in…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-25
... Explore Feasibility of Establishing a NIST/Industry Consortium on "Concrete Rheology: Enabling Metrology... meeting. SUMMARY: The National Institute of Standards and Technology (NIST) invites interested parties to attend a pre-consortium meeting on November 8, ...
Recovery of valuable metals from polymetallic mine tailings by natural microbial consortium.
Vardanyan, Narine; Sevoyan, Garegin; Navasardyan, Taron; Vardanyan, Arevik
2018-05-28
Possibilities for the recovery of non-ferrous and precious metals from the Kapan polymetallic mine tailings (Armenia) were studied. The aim of this paper was to study the possibilities of bioleaching samples of concentrated tailings with the natural microbial consortium of drainage water. The extent of extraction of metals from the samples of concentrated tailings by the natural microbial consortium reached 41-55% and 53-73% for copper and zinc, respectively. Metal leaching efficiencies of the pure culture Leptospirillum ferrooxidans Teg were higher, namely 47-93% and 73-81% for copper and zinc, respectively. The content of gold in the solid phase of the tailings increased by about 7-16% and 2-9% after the bio-oxidation process by L. ferrooxidans Teg and the natural microbial consortium, respectively. It was shown that bioleaching of the tailings samples could be performed using the natural consortium of drainage water. However, to increase the intensity of recovery of valuable metals, a natural consortium of drainage water combined with iron-oxidizing L. ferrooxidans Teg has been proposed.
Cai, Jian; Mo, Xiwei; Cheng, Guojun; Du, Dongyun
2015-01-01
A stable aerobic microbial consortium, established by successive subcultivation, was employed to solubilize the solid organic fraction in swine wastewater. Over 30 days of successive biological pretreatments, 30-38% of volatile solids and 19-28% of total solids in raw slurry were solubilized after 10 hours at 37 °C. Meanwhile, soluble chemical oxygen demand (COD) and volatile fatty acids increased by 48%-56% and 600%-750%, respectively. Furthermore, the molecular microbial profile of the consortium during successive pretreatment was characterized by denaturing gradient gel electrophoresis (DGGE). The results indicated that bacterial species of the consortium rapidly overgrew the indigenous microbial community of the raw water and showed a stable predominance throughout the long-term treatment. As a consequence of biological pretreatment, digestion time was shortened by 50% and biogas production increased by 45% compared to raw water in the anaerobic process. The microbial consortium constructed herein is a potential candidate for biological pretreatment of swine wastewater to enhance biogas production.
NASA Astrophysics Data System (ADS)
Steigies, Christian
2012-07-01
The Neutron Monitor Database project, www.nmdb.eu, was funded in 2008 and 2009 by the European Commission's 7th Framework Programme (FP7). Neutron monitors (NMs) have been in use worldwide since the International Geophysical Year (IGY) in 1957, and cosmic ray data from the IGY and the improved NM64 NMs have been distributed since that time, but a common data format existed only for data with one-hour resolution. These data were first distributed in printed books, later via the World Data Center ftp server. In the 1990s the first NM stations started to record data at higher resolutions (typically 1 minute) and to publish them on their webpages. However, every NM station chose its own format, making it cumbersome to work with these distributed data. In NMDB, all European and some neighboring NM stations came together to agree on a common format for high-resolution data and made it available via a centralized database. The goal of NMDB is to make all data from all NM stations available in real time. The original NMDB network has recently been joined by the Bartol Research Institute (Newark DE, USA), the National Autonomous University of Mexico and the North-West University (Potchefstroom, South Africa). The data are accessible to everyone via an easy-to-use web interface, but expert users can also access the database directly to build applications such as real-time space weather alerts. Even though SQL databases are used today by most web services (blogs, wikis, social media, e-commerce), the power of an SQL database has not yet been fully realized by the scientific community. In training courses, we teach how to make use of NMDB, how to join NMDB, and how to ensure data quality. The present status of the extended NMDB will be presented. The consortium welcomes further data providers to help increase the scientific contributions of the worldwide neutron monitor network to heliospheric physics and space weather.
eMelanoBase: an online locus-specific variant database for familial melanoma.
Fung, David C Y; Holland, Elizabeth A; Becker, Therese M; Hayward, Nicholas K; Bressac-de Paillerets, Brigitte; Mann, Graham J
2003-01-01
A proportion of melanoma-prone individuals in both familial and non-familial contexts has been shown to carry inactivating mutations in either CDKN2A or, rarely, CDK4. CDKN2A is a complex locus that encodes two unrelated proteins from alternately spliced transcripts that are read in different frames. The alpha transcript (exons 1alpha, 2, and 3) produces the p16INK4A cyclin-dependent kinase inhibitor, while the beta transcript (exons 1beta and 2) is translated as p14ARF, a stabilizing factor of p53 levels through binding to MDM2. Mutations in exon 2 can impair both polypeptides, and insertions and deletions in exons 1alpha, 1beta, and 2 can theoretically generate p16INK4A-p14ARF fusion proteins. No online database currently takes into account all the consequences of these genotypes, a situation compounded by some problematic previous annotations of CDKN2A-related sequences and descriptions of their mutations. As an initiative of the international Melanoma Genetics Consortium, we have therefore established a database of germline variants observed in all loci implicated in familial melanoma susceptibility. Such a comprehensive, publicly accessible database is an essential foundation for research on melanoma susceptibility and its clinical application. Our database serves two types of data as defined by HUGO. The core dataset includes the nucleotide variants on the genomic and transcript levels, amino acid variants, and citations. The ancillary dataset includes keyword descriptions of events at the transcription and translation levels and epidemiological data. The application that handles users' queries was designed in the model-view-controller architecture and was implemented in Java. The object-relational database schema was deduced using functional dependency analysis. We hereby present our first functional prototype of eMelanoBase. The service is accessible via the URL www.wmi.usyd.edu.au:8080/melanoma.html. Copyright 2002 Wiley-Liss, Inc.
40 CFR 35.6010 - Indian Tribe and intertribal consortium eligibility.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 1 2011-07-01 2011-07-01 false Indian Tribe and intertribal consortium... for Superfund Response Actions General § 35.6010 Indian Tribe and intertribal consortium eligibility. (a) Indian Tribes are eligible to receive Superfund Cooperative Agreements only when they are...
40 CFR 35.6010 - Indian Tribe and intertribal consortium eligibility.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 1 2013-07-01 2013-07-01 false Indian Tribe and intertribal consortium... for Superfund Response Actions General § 35.6010 Indian Tribe and intertribal consortium eligibility. (a) Indian Tribes are eligible to receive Superfund Cooperative Agreements only when they are...
40 CFR 35.6010 - Indian Tribe and intertribal consortium eligibility.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 1 2012-07-01 2012-07-01 false Indian Tribe and intertribal consortium... for Superfund Response Actions General § 35.6010 Indian Tribe and intertribal consortium eligibility. (a) Indian Tribes are eligible to receive Superfund Cooperative Agreements only when they are...
40 CFR 35.6010 - Indian Tribe and intertribal consortium eligibility.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 1 2014-07-01 2014-07-01 false Indian Tribe and intertribal consortium... for Superfund Response Actions General § 35.6010 Indian Tribe and intertribal consortium eligibility. (a) Indian Tribes are eligible to receive Superfund Cooperative Agreements only when they are...
15 CFR 918.5 - Eligibility, qualifications, and responsibilities-Sea Grant Regional Consortia.
Code of Federal Regulations, 2010 CFR
2010-01-01
... qualifying areas which are pertinent to the Consortium's program: (1) Leadership. The Sea Grant Regional... Consortium candidate must have created the management organization to carry on a viable and productive... assistance as the consortium may offer, and (iii) to assist others in developing research and management...
International Arid Lands Consortium: A synopsis of accomplishments
Peter F. Ffolliott; Jeffrey O. Dawson; James T. Fisher; Itshack Moshe; Timothy E. Fulbright; W. Carter Johnson; Paul Verburg; Muhammad Shatanawi; Jim P. M. Chamie
2003-01-01
The International Arid Lands Consortium (IALC) was established in 1990 to promote research, education, and training activities related to the development, management, and reclamation of arid and semiarid lands in the Southwestern United States, the Middle East, and elsewhere in the world. The Consortium supports the ecological sustainability and environmentally sound...
32 CFR 37.515 - Must I do anything additional to determine the qualification of a consortium?
Code of Federal Regulations, 2010 CFR
2010-07-01
... SECRETARY OF DEFENSE DoD GRANT AND AGREEMENT REGULATIONS TECHNOLOGY INVESTMENT AGREEMENTS Pre-Award Business... relationship is essential to increase the research project's chances of success. (b) The collaboration... things, the consortium's: (1) Management structure. (2) Method of making payments to consortium members...
Consortia: A Challenge to Institutional Autonomy.
ERIC Educational Resources Information Center
Wood, Herbert H.
The kinds of impact a consortium might have on the operations of its member institutions are presented following an overview of the consortium's challenge to institutional autonomy. Ten attitudinal forms of consortium impact challenging autonomy include: (1) the rest are going ahead so do it anyway, (2) infiltration and multiple loyalties of…
The Neuroscience Peer Review Consortium
Saper, Clifford B; Maunsell, John HR
2009-01-01
As the Neuroscience Peer Review Consortium (NPRC) ends its first year, it is worth looking back to see how the experiment has worked. In order to encourage dissemination of the details outlined in this Editorial, it will also be published in other journals in the Neuroscience Peer Review Consortium. PMID:19284614
Policy Report of the Physician Consortium on Substance Abuse Education.
ERIC Educational Resources Information Center
Lewis, David C.; Faggett, Walter L.
This report contains the recommendations of the Physician Consortium for significantly improving medical education and training to enhance the physician's role in early identification, treatment, and prevention of substance abuse. In addition, the consortium subcommittees report on their examination of substance abuse treatment needs of ethnic and…
77 FR 38770 - Notice of Consortium on “nSoft Consortium”
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-29
... DEPARTMENT OF COMMERCE National Institute of Standards and Technology Notice of Consortium on ``n...: NIST will form the ``nSoft Consortium'' to advance and transfer neutron based measurement methods for soft materials manufacturing. The goals of nSoft are to develop neutron- based measurements that...
78 FR 47674 - Genome in a Bottle Consortium-Progress and Planning Workshop
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-06
... quantitative performance metrics for confidence in variant calling. These standards and quantitative..., reproducible research and regulated applications in the clinic. On April 13, 2012, NIST convened the workshop... consortium. No proprietary information will be shared as part of the consortium, and all research results...
Targets of Opportunity: Strategies for Managing a Staff Development Consortium.
ERIC Educational Resources Information Center
Parsons, Michael H.
The Appalachian Staff Development Consortium, comprised of three community colleges and the state college located in Appalachian Maryland, attempts to integrate staff development activities into the operational framework of the sponsoring agencies. The consortium, which is managed by a steering committee composed of one teaching faculty member and…
Cohort Profile: The International Childhood Cardiovascular Cohort (i3C) Consortium
Dwyer, Terence; Sun, Cong; Magnussen, Costan G; Raitakari, Olli T; Schork, Nicholas J; Venn, Alison; Burns, Trudy L; Juonala, Markus; Steinberger, Julia; Sinaiko, Alan R; Prineas, Ronald J; Davis, Patricia H; Woo, Jessica G; Morrison, John A; Daniels, Stephen R; Chen, Wei; Srinivasan, Sathanur R; Viikari, Jorma SA; Berenson, Gerald S
2013-01-01
This is a consortium of large children's cohorts that contain measurements of major cardiovascular disease (CVD) risk factors in childhood and have the ability to follow those cohorts into adulthood. The purpose of this consortium is to enable the pooling of data to increase power, most importantly for the follow-up of CVD events in adulthood. Within the consortium, we hope to be able to obtain data on the independent effects of childhood and early adult levels of CVD risk factors on subsequent CVD occurrence. PMID:22434861
Hyam, Roger; Hagedorn, Gregor; Chagnoux, Simon; Röpert, Dominik; Casino, Ana; Droege, Gabi; Glöckler, Falko; Gödderz, Karsten; Groom, Quentin; Hoffmann, Jana; Holleman, Ayco; Kempa, Matúš; Koivula, Hanna; Marhold, Karol; Nicolson, Nicky; Smith, Vincent S.; Triebel, Dagmar
2017-01-01
With biodiversity research activities being increasingly shifted to the web, the need for a system of persistent and stable identifiers for physical collection objects becomes increasingly pressing. The Consortium of European Taxonomic Facilities agreed on a common system of HTTP-URI-based stable identifiers, which is now being rolled out to its member organizations. The system follows Linked Open Data principles and implements redirection mechanisms to human-readable and machine-readable representations of specimens, facilitating seamless integration into the growing semantic web. The implementation of stable identifiers across collection organizations is supported with open-source provider software scripts, best-practice documentation and recommendations for RDF metadata elements, facilitating harmonized access to collection information in web portals. Database URL: http://cetaf.org/cetaf-stable-identifiers PMID:28365724
A Syst-OMICS Approach to Ensuring Food Safety and Reducing the Economic Burden of Salmonellosis.
Emond-Rheault, Jean-Guillaume; Jeukens, Julie; Freschi, Luca; Kukavica-Ibrulj, Irena; Boyle, Brian; Dupont, Marie-Josée; Colavecchio, Anna; Barrere, Virginie; Cadieux, Brigitte; Arya, Gitanjali; Bekal, Sadjia; Berry, Chrystal; Burnett, Elton; Cavestri, Camille; Chapin, Travis K; Crouse, Alanna; Daigle, France; Danyluk, Michelle D; Delaquis, Pascal; Dewar, Ken; Doualla-Bell, Florence; Fliss, Ismail; Fong, Karen; Fournier, Eric; Franz, Eelco; Garduno, Rafael; Gill, Alexander; Gruenheid, Samantha; Harris, Linda; Huang, Carol B; Huang, Hongsheng; Johnson, Roger; Joly, Yann; Kerhoas, Maud; Kong, Nguyet; Lapointe, Gisèle; Larivière, Line; Loignon, Stéphanie; Malo, Danielle; Moineau, Sylvain; Mottawea, Walid; Mukhopadhyay, Kakali; Nadon, Céline; Nash, John; Ngueng Feze, Ida; Ogunremi, Dele; Perets, Ann; Pilar, Ana V; Reimer, Aleisha R; Robertson, James; Rohde, John; Sanderson, Kenneth E; Song, Lingqiao; Stephan, Roger; Tamber, Sandeep; Thomassin, Paul; Tremblay, Denise; Usongo, Valentine; Vincent, Caroline; Wang, Siyun; Weadge, Joel T; Wiedmann, Martin; Wijnands, Lucas; Wilson, Emily D; Wittum, Thomas; Yoshida, Catherine; Youfsi, Khadija; Zhu, Lei; Weimer, Bart C; Goodridge, Lawrence; Levesque, Roger C
2017-01-01
The Salmonella Syst-OMICS consortium is sequencing 4,500 Salmonella genomes and building an analysis pipeline for the study of Salmonella genome evolution, antibiotic resistance and virulence genes. Metadata, including phenotypic as well as genomic data, for isolates of the collection are provided through the Salmonella Foodborne Syst-OMICS database (SalFoS), at https://salfos.ibis.ulaval.ca/. Here, we present our strategy and the analysis of the first 3,377 genomes. Our data will be used to draw potential links between strains found in fresh produce, humans, animals and the environment. The ultimate goals are to understand how Salmonella evolves over time, improve the accuracy of diagnostic methods, develop control methods in the field, and identify prognostic markers for evidence-based decisions in epidemiology and surveillance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
The Library Services Alliance is a unique multi-type library consortium committed to resource sharing. As a voluntary association of university and governmental laboratory libraries supporting scientific research, the Alliance has become a leader in New Mexico in using cooperative ventures to cost-effectively expand resources supporting their scientific and technical communities. During 1994, the Alliance continued to build on its strategic planning foundation to enhance access to research information for the scientific and technical communities. Significant progress was made in facilitating easy access to the on-line catalogs of member libraries via connections through the Internet. Access to Alliance resources is now available via the World Wide Web and Gopher, as well as links to other databases and electronic information. This report highlights the accomplishments of the Alliance during calendar year 1994.
Predicting Novel Bulk Metallic Glasses via High-Throughput Calculations
NASA Astrophysics Data System (ADS)
Perim, E.; Lee, D.; Liu, Y.; Toher, C.; Gong, P.; Li, Y.; Simmons, W. N.; Levy, O.; Vlassak, J.; Schroers, J.; Curtarolo, S.
Bulk metallic glasses (BMGs) are materials which may combine key properties of crystalline metals, such as high hardness, with others typically exhibited by plastics, such as easy processability. However, the cost of the known BMGs poses a significant obstacle for the development of applications, which has led to a long search for novel, economically viable BMGs. The emergence of high-throughput DFT calculations, such as the library provided by the AFLOWLIB consortium, has provided new tools for materials discovery. We have used this data to develop a new glass-forming descriptor combining structural factors with thermodynamics in order to quickly screen through a large number of alloy systems in the AFLOWLIB database, identifying the most promising systems and the optimal compositions for glass formation. National Science Foundation (DMR-1436151, DMR-1435820, DMR-1436268).
Kozo, Justine; Zapata-Garibay, Rogelio; Rangel-Gomez, María Gudelia; Fernandez, April; Hirata-Okamoto, Ricardo; Wooten, Wilma; Vargas-Ojeda, Adriana; Jiménez, Barbara; Zepeda-Cisneros, Hector; Matthews, Charles Edwards
2018-01-01
The California–Baja California border region is one of the most frequently traversed areas in the world with a shared population, environment, and health concerns. The Border Health Consortium of the Californias (the “Consortium”) was formed in 2013 to bring together leadership working in the areas of public health, health care, academia, government, and the non-profit sector, with the goal of aligning efforts to improve health outcomes in the region. The Consortium utilizes a Collective Impact framework which supports a shared vision for a healthy border region, mutually reinforcing activities among member organizations and work groups, and a binational executive committee that ensures continuous communication and progress toward meeting its goals. The Consortium comprises four binational work groups which address human immunodeficiency virus, tuberculosis, obesity, and mental health, all mutual priorities in the border region. The Consortium holds two general binational meetings each year, alternating between California and Baja California. The work groups meet regularly to share information and resources and to provide binational training opportunities. Since inception, the Consortium has been successful in strengthening binational communication, coordination, and collaboration by providing an opportunity for individuals to meet one another, learn about each other's systems, and foster meaningful relationships. With binational leadership support and commitment, the Consortium could certainly be replicated in other border jurisdictions both nationally and internationally. The present article describes the background, methodology, accomplishments, challenges, and lessons learned in forming the Consortium. PMID:29404318
De Rycke, M; Goossens, V; Kokkali, G; Meijer-Hoogeveen, M; Coonen, E; Moutou, C
2017-10-01
How does the data collection XIV-XV of the European Society of Human Reproduction and Embryology (ESHRE) PGD Consortium compare with the cumulative data for data collections I-XIII? The 14th and 15th retrospective collection represents valuable data on PGD/PGS cycles, pregnancies and children: the main trend observed is the increased application of array technology at the cost of FISH testing in PGS cycles and in PGD cycles for chromosomal abnormalities. Since 1999, the PGD Consortium has collected, analysed and published 13 previous data sets and an overview of the first 10 years of data collections. Data were collected from each participating centre using a FileMaker Pro database (versions 5-12). Separate predesigned FileMaker Pro files were used for the cycles, pregnancies and baby records. The study documented cycles performed during the calendar years 2011 and 2012 and follow-up of the pregnancies and babies born which resulted from these cycles (until October 2013). Data were submitted by 71 centres (full PGD Consortium members). Records with incomplete or inconsistent data were excluded from the calculations. Corrections, calculations and tables were made by expert co-authors. For data collection XIV-XV, 71 centres reported data for 11 637 cycles with oocyte retrieval (OR), along with details of the follow-up on 2147 pregnancies and 1755 babies born. A total of 1953 cycles to OR were reported for chromosomal abnormalities, 144 cycles to OR for sexing for X-linked diseases, 3445 cycles to OR for monogenic diseases, 6095 cycles to OR for PGS and 38 cycles to OR for social sexing. From 2010 until 2012, the use of arrays for genetic testing increased from 4% to 20% in PGS and from 6% to 13% in PGD cycles for chromosomal abnormalities; the uptake of biopsy at the blastocyst stage (from <1% up to 7%) was only observed in cycles for structural chromosomal abnormalities, alongside the application of array comparative genomic hybridization. The findings apply to the 71 participating centres and may not represent worldwide trends in PGD. The annual data collections provide an important resource for data mining and for following trends in PGD/PGS practice. None. © The Author 2017. Published by Oxford University Press on behalf of the European Society of Human Reproduction and Embryology. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
A Report on the Oregon Consortium for Student Success.
ERIC Educational Resources Information Center
Keyser, John S.; And Others
This report provides an overview of the activities and outcomes of the Oregon Consortium for Student Success during 1980-81. As introductory material notes, the 13 community colleges involved in the consortium were charged with organizing a task force to improve advising and retention strategies for high risk students. The report emphasizes…
The West Virginia Consortium for Faculty and Course Development in International Studies.
ERIC Educational Resources Information Center
Peterson, Sophia; Maxwell, John
The West Virginia Consortium for Faculty and Course Development in International Studies (FACDIS) is described in this report. FACDIS, a consortium of 21 West Virginia institutions of higher education, assists in international studies course development, revision, and enrichment. It also helps faculty remain current in their fields and in new…
The Columbia-Willamette Skill Builders Consortium. Final Performance Report.
ERIC Educational Resources Information Center
Portland Community Coll., OR.
The Columbia-Willamette Skill Builders Consortium was formed in early 1988 in response to a growing awareness of the need for improved workplace literacy training and coordinated service delivery in Northwest Oregon. In June 1990, the consortium received a National Workplace Literacy Program grant to develop and demonstrate such training. The…
24 CFR 943.118 - What is a consortium?
Code of Federal Regulations, 2010 CFR
2010-04-01
... DEVELOPMENT PUBLIC HOUSING AGENCY CONSORTIA AND JOINT VENTURES Consortia § 943.118 What is a consortium? A... consortium also submits a joint PHA Plan. The lead agency collects the assistance funds from HUD that would... same fiscal year so that the applicable periods for submission and review of the joint PHA Plan are the...
15 CFR 918.6 - Duration of Sea Grant Regional Consortium designation.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 15 Commerce and Foreign Trade 3 2012-01-01 2012-01-01 false Duration of Sea Grant Regional... REGULATIONS SEA GRANTS § 918.6 Duration of Sea Grant Regional Consortium designation. Designation will be made... consistent with the goals of the Act. Continuation of the Sea Grant Regional Consortium designation is...
15 CFR 918.6 - Duration of Sea Grant Regional Consortium designation.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 15 Commerce and Foreign Trade 3 2010-01-01 2010-01-01 false Duration of Sea Grant Regional... REGULATIONS SEA GRANTS § 918.6 Duration of Sea Grant Regional Consortium designation. Designation will be made... consistent with the goals of the Act. Continuation of the Sea Grant Regional Consortium designation is...
15 CFR 918.6 - Duration of Sea Grant Regional Consortium designation.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 15 Commerce and Foreign Trade 3 2011-01-01 2011-01-01 false Duration of Sea Grant Regional... REGULATIONS SEA GRANTS § 918.6 Duration of Sea Grant Regional Consortium designation. Designation will be made... consistent with the goals of the Act. Continuation of the Sea Grant Regional Consortium designation is...
15 CFR 918.6 - Duration of Sea Grant Regional Consortium designation.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 15 Commerce and Foreign Trade 3 2013-01-01 2013-01-01 false Duration of Sea Grant Regional... REGULATIONS SEA GRANTS § 918.6 Duration of Sea Grant Regional Consortium designation. Designation will be made... consistent with the goals of the Act. Continuation of the Sea Grant Regional Consortium designation is...
15 CFR 918.6 - Duration of Sea Grant Regional Consortium designation.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 15 Commerce and Foreign Trade 3 2014-01-01 2014-01-01 false Duration of Sea Grant Regional... REGULATIONS SEA GRANTS § 918.6 Duration of Sea Grant Regional Consortium designation. Designation will be made... consistent with the goals of the Act. Continuation of the Sea Grant Regional Consortium designation is...
The Consortium for Higher Education Tax Reform Report
ERIC Educational Resources Information Center
Center for Postsecondary and Economic Success, 2014
2014-01-01
This White Paper presents the work of the Consortium for Higher Education Tax Reform, a partnership funded by the Bill & Melinda Gates Foundation as part of the second phase of its Reimagining Aid Design and Delivery (RADD) initiative. Consortium partners are the Center for Postsecondary and Economic Success at CLASP, the Education Trust, New…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-13
... Production Act of 1993; IMS Global Learning Consortium, Inc. Notice is hereby given that, on September 6....C. 4301 et seq. (``the Act''), INS Global Learning Consortium, Inc. has filed written notifications... Learning Consortium, Inc. intends to file additional written notifications disclosing all changes in...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-13
... Production Act of 1993; IMS Global Learning Consortium, Inc. Notice is hereby given that, on May 9, 2011... seq. (``the Act''), IMS Global Learning Consortium, Inc. has filed written notifications... in this group research project remains open, and IMS Global Learning Consortium, Inc. intends to file...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-04
... Production Act of 1993--INS Global Learning Consortium, Inc. Notice is hereby given that, on April 26, 2010... seq. (``the Act''), INS Global Learning Consortium, Inc. has filed written notifications... Global Learning Consortium, Inc. intends to file additional written notifications disclosing all changes...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-21
... Production Act of 1993--IMS Global Learning Consortium, Inc. Notice is hereby given that, on November 28....C. 4301 et seq. (``the Act''), IMS Global Learning Consortium, Inc. has filed written notifications... Global Learning Consortium, Inc. intends to file additional written notifications disclosing all changes...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-02
... Production Act of 1993--IMS Global Learning Consortium, Inc. Notice is hereby given that, on February 6, 2012... seq. (``the Act''), IMS Global Learning Consortium, Inc. has filed written notifications... Global Learning Consortium, Inc. intends to file additional written notifications disclosing all changes...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-08
... Production Act of 1993--IMS Global Learning Consortium, Inc. Notice is hereby given that, on May 2, 2012... seq. (``the Act''), IMS Global Learning Consortium, Inc. has filed written notifications... remains open, and IMS Global Learning Consortium, Inc. intends to file additional written notifications...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-05
... Production Act of 1993--IMS Global Learning Consortium, Inc. Notice is hereby given that, on July 16, 2012... seq. (``the Act''), IMS Global Learning Consortium, Inc. has filed written notifications.... Membership in this group research project remains open, and IMS Global Learning Consortium, Inc. intends to...
United States Participation in the Pacific Circle Consortium. Final Report.
ERIC Educational Resources Information Center
Northwest Regional Educational Lab., Portland, OR.
The goal of the Pacific Circle Project is to improve international and intercultural understanding among the people and nations of the Pacific. Consortium member countries are Australia, Canada, New Zealand, and the United States. Within each country, selected institutions serve as members. Two major types of activities of the consortium are the exchange…
Growth behind the Mirror: The Family Therapy Consortium's Group Process.
ERIC Educational Resources Information Center
Wendorf, Donald J.; And Others
1985-01-01
Charts the development of the Family Therapy Consortium, a group that provides supervision and continuing education in family therapy and explores the peer supervision process at work in the consortium. The focus is on individual and group development, which are seen as complementary aspects of the same growth process. (Author/NRB)
Thavamani, Palanisami; Megharaj, Mallavarapu; Naidu, Ravi
2012-11-01
Bioremediation of polyaromatic hydrocarbon (PAH)-contaminated soils in the presence of heavy metals has proved to be difficult and often challenging due to the ability of toxic metals to inhibit PAH degradation by bacteria. In this study, a mixed bacterial culture designated as consortium-5 was isolated from a former manufactured gas plant (MGP) site. The ability of this consortium to utilise HMW PAHs such as pyrene and BaP as a sole carbon source in the presence of the toxic metal Cd was demonstrated. Furthermore, this consortium proved effective in degrading HMW PAHs even in real long-term contaminated MGP soil. Thus, the results of this study demonstrate the great potential of this consortium for field-scale bioremediation of PAHs in long-term mixed-contaminated soils such as MGP sites. To our knowledge, this is the first study to isolate and characterize a metal-tolerant, HMW PAH-degrading bacterial consortium that shows great potential for bioremediation of mixed-contaminated soils such as MGP sites.
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1998-07-01
The HBCU/MI ET Consortium was established in January 1990, through a Memorandum of Understanding (MOU) among its member institutions. This group of research-oriented Historically Black Colleges and Universities and Minority Institutions (HBCU/MIs) agreed to work together to initiate or revise education programs, develop research partnerships with public and private sector organizations, and promote technology development to address the nation's critical environmental contamination problems. The Consortium's Research, Education and Technology Transfer (RETT) Plan became the working agenda. The Consortium is a resource for collaboration among the member institutions and with federal and state agencies, national and federal laboratories, industries (including small businesses), majority universities, and two- and four-year technical colleges. As a group of 17 institutions geographically located in the southern US, the Consortium is well positioned to reach a diverse group of women and minority populations of African Americans, Hispanics and American Indians. This report provides a status update on activities and achievements in environmental curriculum development, outreach at the K-12 level, undergraduate and graduate education, research and development, and technology transfer.
CDEP Consortium on Ocean Data Assimilation for Seasonal-to-Interannual Prediction (ODASI)
NASA Technical Reports Server (NTRS)
Rienecker, Michele; Zebiak, Stephen; Kinter, James; Behringer, David; Rosati, Antonio; Kaplan, Alexey
2005-01-01
The ODASI consortium is a focused activity of the NOAA/OGP/Climate Diagnostics and Experimental Prediction Program with the goal of improving ocean data assimilation methods and their implementations in support of seasonal forecasts with coupled general circulation models. The consortium is undertaking coordinated assimilation experiments, with common forcing data sets and common input data streams. With different assimilation systems and different models, we aim to understand which approach works best in improving forecast skill in the equatorial Pacific. The presentation will provide an overview of the consortium goals and plans and recent results focused on evaluating data impacts.
Significant oral cancer risk associated with low socioeconomic status.
Warnakulasuriya, Saman
2009-01-01
Searches were made for studies in Medline, Medline In-Process and Other Non-Indexed Citations, Embase, CINAHL, PsycINFO, CAB Abstracts 1973-date, EBM Reviews, ACP Journal Club, Cochrane Register of Controlled Trials, Cochrane Database of Systematic Reviews, Database of Abstracts of Reviews of Effects, the Health Management Information Consortium database and PubMed. Unpublished data were also received from the International Head and Neck Cancer Epidemiology Consortium. Studies were identified independently by two reviewers and were included if their subject was oral and/or oropharyngeal cancer; they used case-control methodology; gave data regarding socioeconomic status (SES; e.g., educational attainment, occupational social classification or income) for both cases and controls; and the odds ratio (OR) for any SES measure was presented or could be calculated. Corresponding authors were contacted where there was an indication that data on oral and/or oropharyngeal cancers could potentially be obtained from the wider cancer definition or grouping presented in the article, or if SES data were collected but had not been presented in the article. Methodological assessment of selected studies was undertaken. Countries where the studies were undertaken were classified according to level of development and income as defined by the World Bank. Where available, the adjusted OR (or crude OR) with corresponding 95% confidence intervals (CI) were extracted, or were calculated, for low compared with high SES categories. Meta-analyses were performed on the following subgroups: SES measure, age, sex, global region, development level, time-period and lifestyle factor adjustments. Sensitivity analyses were conducted based on study methodological issues. Publication bias was assessed using a funnel plot. Forty-one studies met the inclusion criteria and yielded 15,344 cases and 33,852 controls. Compared with individuals in high SES strata, the pooled ORs for the risk of developing oral cancer were 1.85 (95% CI, 1.60-2.15; N=37 studies) for individuals with low educational attainment, 1.84 (95% CI, 1.47-2.31; N=14) for those with low occupational social class, and 2.41 (95% CI, 1.59-3.65; N=5) for people with low incomes. Subgroup analyses showed that low SES was significantly associated with increased oral cancer risk in high- and lower-income countries across the world, and the association remained when adjusting for potential behavioural confounders. Oral cancer risk associated with low SES is significant and related to lifestyle risk factors. These results provide evidence to steer
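The pooled odds ratios reported above come from inverse-variance meta-analysis of study-level estimates on the log-OR scale. The following is a minimal sketch of that calculation in Python; the per-study ORs and confidence intervals are illustrative placeholders, not the review's data.

```python
import math

# Hypothetical study-level odds ratios with 95% CIs (illustrative only).
studies = [
    {"or": 2.1, "ci_low": 1.4, "ci_high": 3.2},
    {"or": 1.6, "ci_low": 1.1, "ci_high": 2.3},
    {"or": 2.5, "ci_low": 1.5, "ci_high": 4.1},
]

# Fixed-effect (inverse-variance) pooling on the log-OR scale.
weights, weighted_logs = [], []
for s in studies:
    log_or = math.log(s["or"])
    # Standard error recovered from the width of the 95% CI.
    se = (math.log(s["ci_high"]) - math.log(s["ci_low"])) / (2 * 1.96)
    w = 1.0 / se**2
    weights.append(w)
    weighted_logs.append(w * log_or)

pooled_log = sum(weighted_logs) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))
pooled_or = math.exp(pooled_log)
ci = (math.exp(pooled_log - 1.96 * pooled_se),
      math.exp(pooled_log + 1.96 * pooled_se))
print(f"Pooled OR = {pooled_or:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")
```

A random-effects model would add a between-study variance term to each weight; the fixed-effect version above is the simplest form of the pooling step.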
Buxbaum, Joseph D; Bolshakova, Nadia; Brownfeld, Jessica M; Anney, Richard Jl; Bender, Patrick; Bernier, Raphael; Cook, Edwin H; Coon, Hilary; Cuccaro, Michael; Freitag, Christine M; Hallmayer, Joachim; Geschwind, Daniel; Klauck, Sabine M; Nurnberger, John I; Oliveira, Guiomar; Pinto, Dalila; Poustka, Fritz; Scherer, Stephen W; Shih, Andy; Sutcliffe, James S; Szatmari, Peter; Vicente, Astrid M; Vieland, Veronica; Gallagher, Louise
2014-01-01
There is an urgent need for expanding and enhancing autism spectrum disorder (ASD) samples, in order to better understand causes of ASD. In a unique public-private partnership, 13 sites with extensive experience in both the assessment and diagnosis of ASD embarked on an ambitious, 2-year program to collect samples for genetic and phenotypic research and begin analyses on these samples. The program was called The Autism Simplex Collection (TASC). TASC sample collection began in 2008 and was completed in 2010, and included nine sites from North America and four sites from Western Europe, as well as a centralized Data Coordinating Center. Over 1,700 trios are part of this collection, with DNA from transformed cells now available through the National Institute of Mental Health (NIMH). Autism Diagnostic Interview-Revised (ADI-R) and Autism Diagnostic Observation Schedule-Generic (ADOS-G) measures are available for all probands, as are standardized IQ measures, Vineland Adaptive Behavioral Scales (VABS), the Social Responsiveness Scale (SRS), Peabody Picture Vocabulary Test (PPVT), and physical measures (height, weight, and head circumference). At almost every site, additional phenotypic measures were collected, including the Broad Autism Phenotype Questionnaire (BAPQ) and Repetitive Behavior Scale-Revised (RBS-R), as well as the non-word repetition scale, Communication Checklist (Children's or Adult), and Aberrant Behavior Checklist (ABC). Moreover, for nearly 1,000 trios, the Autism Genome Project Consortium (AGP) has carried out Illumina 1M SNP genotyping and called copy number variation (CNV) in the samples, with data being made available through the National Institutes of Health (NIH). Whole exome sequencing (WES) has been carried out in over 500 probands, together with ancestry-matched controls, and these data are also available through the NIH. Additional WES is being carried out by the Autism Sequencing Consortium (ASC), where the focus is on sequencing complete trios. ASC sequencing for the first 1,000 samples (all from whole-blood DNA) is complete and data will be released in 2014. Data are being made available through NIH databases (database of Genotypes and Phenotypes (dbGaP) and National Database for Autism Research (NDAR)) with DNA released in Dist 11.0. Primary funding for the collection, genotyping, sequencing and distribution of TASC samples was provided by Autism Speaks and the NIH, including the National Institute of Mental Health (NIMH) and the National Human Genome Research Institute (NHGRI). TASC represents an important sample set that leverages expert sites. Similar approaches, leveraging expert sites and ongoing studies, represent an important path towards further enhancing available ASD samples.
NASA Technical Reports Server (NTRS)
Nall, Marsha M.; Barna, Gerald J.
2009-01-01
The John Glenn Biomedical Engineering Consortium was established by NASA in 2002 to formulate and implement an integrated, interdisciplinary research program to address risks faced by astronauts during long-duration space missions. The consortium is comprised of a preeminent team of Northeast Ohio institutions that include Case Western Reserve University, the Cleveland Clinic, University Hospitals Case Medical Center, The National Center for Space Exploration Research, and the NASA Glenn Research Center. The John Glenn Biomedical Engineering Consortium research is focused on fluid physics and sensor technology that addresses the critical risks to crew health, safety, and performance. Effectively utilizing the unique skills, capabilities and facilities of the consortium members is also of prime importance. Research efforts were initiated with a general call for proposals to the consortium members. The top proposals were selected for funding through a rigorous, peer review process. The review included participation from NASA's Johnson Space Center, which has programmatic responsibility for NASA's Human Research Program. The projects range in scope from delivery of prototype hardware to applied research that enables future development of advanced technology devices. All of the projects selected for funding have been completed and the results are summarized. Because of the success of the consortium, the member institutions have extended the original agreement to continue this highly effective research collaboration through 2011.
Ozone therapy in postgraduate theses in Egypt: systematic review.
AlBedah, Abdullah M N; Khalil, Mohamed K M; Elolemy, Ahmed T; Alrasheid, Mohamed H S; Al Mudaiheem, Abdullah; Elolemy, Tawfik M B
2013-08-01
Systematic reviews of the studies published in the major medical databases have not shown solid support for the use of ozone therapy. Unpublished or grey literature, including postgraduate theses, may help resolve this controversy. To review the postgraduate theses published in Egypt in order to assess the clinical safety and effectiveness of ozone therapy in specific medical conditions. The databases of the Egyptian Universities' Library Consortium and the databases of each university were searched for postgraduate theses that evaluated ozone therapy as an intervention for any disease or condition in any age group, compared with any or no other intervention, and published before September 2010. A total of 28 quasi-trials were included. The theses did not report any safety issues with ozone therapy. With respect to its effectiveness, the studies suggested some benefits of ozone in the treatment of dental infection and recovery, musculoskeletal disorders, diabetes mellitus, chronic diseases, and obstetrics and gynaecology. However, the number of studies included was small and they were of limited quality. There is insufficient evidence to recommend the use of ozone in the treatment of dental infections, in facilitating faster dental recovery after extraction or implantation, in diabetes mellitus, musculoskeletal disorders, or obstetrics and gynaecology.
Monitoring, Analyzing and Assessing Radiation Belt Loss and Energization
NASA Astrophysics Data System (ADS)
Daglis, I. A.; Bourdarie, S.; Khotyaintsev, Y.; Santolik, O.; Horne, R.; Mann, I.; Turner, D.; Anastasiadis, A.; Angelopoulos, V.; Balasis, G.; Chatzichristou, E.; Cully, C.; Georgiou, M.; Glauert, S.; Grison, B.; Kolmasova, I.; Lazaro, D.; Macusova, E.; Maget, V.; Papadimitriou, C.; Ropokis, G.; Sandberg, I.; Usanova, M.
2012-09-01
We present the concept, objectives and expected impact of the MAARBLE (Monitoring, Analyzing and Assessing Radiation Belt Loss and Energization) project, which is being implemented by a consortium of seven institutions (five European, one Canadian and one US) with support from the European Community's Seventh Framework Programme. The MAARBLE project employs multi-spacecraft monitoring of the geospace environment, complemented by ground-based monitoring, in order to analyze and assess the physical mechanisms leading to radiation belt particle energization and loss. Particular attention is paid to the role of ULF/VLF waves. A database containing properties of the waves is being created and will be made available to the scientific community. Based on the wave database, a statistical model of the wave activity dependent on the level of geomagnetic activity, solar wind forcing, and magnetospheric region will be developed. Furthermore, we will incorporate multi-spacecraft particle measurements into data assimilation tools, aiming at a new understanding of the causal relationships between ULF/VLF waves and radiation belt dynamics. Data assimilation techniques have been proven to be a valuable tool in the field of radiation belts, able to guide 'the best' estimate of the state of a complex system.
A Four-Dimensional Probabilistic Atlas of the Human Brain
Mazziotta, John; Toga, Arthur; Evans, Alan; Fox, Peter; Lancaster, Jack; Zilles, Karl; Woods, Roger; Paus, Tomas; Simpson, Gregory; Pike, Bruce; Holmes, Colin; Collins, Louis; Thompson, Paul; MacDonald, David; Iacoboni, Marco; Schormann, Thorsten; Amunts, Katrin; Palomero-Gallagher, Nicola; Geyer, Stefan; Parsons, Larry; Narr, Katherine; Kabani, Noor; Le Goualher, Georges; Feidler, Jordan; Smith, Kenneth; Boomsma, Dorret; Pol, Hilleke Hulshoff; Cannon, Tyrone; Kawashima, Ryuta; Mazoyer, Bernard
2001-01-01
The authors describe the development of a four-dimensional atlas and reference system that includes both macroscopic and microscopic information on structure and function of the human brain in persons between the ages of 18 and 90 years. Given the presumed large but previously unquantified degree of structural and functional variance among normal persons in the human population, the basis for this atlas and reference system is probabilistic. Through the efforts of the International Consortium for Brain Mapping (ICBM), 7,000 subjects will be included in the initial phase of database and atlas development. For each subject, detailed demographic, clinical, behavioral, and imaging information is being collected. In addition, 5,800 subjects will contribute DNA for the purpose of determining genotype–phenotype–behavioral correlations. The process of developing the strategies, algorithms, data collection methods, validation approaches, database structures, and distribution of results is described in this report. Examples of applications of the approach are described for the normal brain in both adults and children as well as in patients with schizophrenia. This project should provide new insights into the relationship between microscopic and macroscopic structure and function in the human brain and should have important implications in basic neuroscience, clinical diagnostics, and cerebral disorders. PMID:11522763
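A probabilistic atlas of this kind is, at its simplest, a per-voxel frequency map: each subject's spatially normalized binary label map is averaged across subjects, so every voxel stores the proportion of subjects in which a structure occurs. A minimal sketch follows, using small synthetic arrays rather than any ICBM data; the real pipeline involves far more (registration, validation, multiple modalities).

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins for spatially normalized binary label maps
# (1 = structure present at this voxel), one 8x8x8 volume per subject.
n_subjects = 20
label_maps = rng.integers(0, 2, size=(n_subjects, 8, 8, 8)).astype(float)

# Probabilistic atlas: per-voxel proportion of subjects labelled 1.
prob_atlas = label_maps.mean(axis=0)

# Example query: probability that the structure occupies a given voxel.
x, y, z = 3, 4, 2
print(f"P(structure at voxel ({x},{y},{z})) = {prob_atlas[x, y, z]:.2f}")
```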
EMMA—mouse mutant resources for the international scientific community
Wilkinson, Phil; Sengerova, Jitka; Matteoni, Raffaele; Chen, Chao-Kung; Soulat, Gaetan; Ureta-Vidal, Abel; Fessele, Sabine; Hagn, Michael; Massimi, Marzia; Pickford, Karen; Butler, Richard H.; Marschall, Susan; Mallon, Ann-Marie; Pickard, Amanda; Raspa, Marcello; Scavizzi, Ferdinando; Fray, Martin; Larrigaldie, Vanessa; Leyritz, Johan; Birney, Ewan; Tocchini-Valentini, Glauco P.; Brown, Steve; Herault, Yann; Montoliu, Lluis; de Angelis, Martin Hrabé; Smedley, Damian
2010-01-01
The laboratory mouse is the premier animal model for studying human disease, and thousands of mutants have been identified or produced, most recently through gene-specific mutagenesis approaches. High-throughput strategies by the International Knockout Mouse Consortium (IKMC) are producing mutants for all protein-coding genes. Generating a knockout line involves huge monetary and time costs, so capturing the data describing each mutant, alongside archiving of the line for distribution to future researchers, is critical. The European Mouse Mutant Archive (EMMA) is a leading international network infrastructure for archiving and worldwide provision of mouse mutant strains. It operates in collaboration with the other members of the Federation of International Mouse Resources (FIMRe), EMMA being the European component. Additionally, EMMA is one of four repositories involved in the IKMC, and therefore the current figure of 1700 archived lines will rise markedly. The EMMA database gathers and curates extensive data on each line and presents it through a user-friendly website. A BioMart interface allows advanced searching, including integrated querying with other resources, e.g. Ensembl. Other resources are able to display EMMA data by accessing our Distributed Annotation System server. EMMA database access is publicly available at http://www.emmanet.org. PMID:19783817
Lim, Regine M; Silver, Ari J; Silver, Maxwell J; Borroto, Carlos; Spurrier, Brett; Petrossian, Tanya C; Larson, Jessica L; Silver, Lee M
2016-02-01
Carrier screening for mutations contributing to cystic fibrosis (CF) is typically accomplished with panels composed of variants that are clinically validated primarily in patients of European descent. This approach has created a static genetic and phenotypic profile for CF. An opportunity now exists to reevaluate the disease profile of CFTR at a global population level. CFTR allele and genotype frequencies were obtained from a nonpatient cohort with more than 60,000 unrelated personal genomes collected by the Exome Aggregation Consortium. Likely disease-contributing mutations were identified with the use of public database annotations and computational tools. We identified 131 previously described and likely pathogenic variants and another 210 untested variants with a high probability of causing protein damage. None of the current genetic screening panels or existing CFTR mutation databases covered a majority of deleterious variants in any geographical population outside of Europe. Both clinical annotation and mutation coverage by commercially available targeted screening panels for CF are strongly biased toward detection of reproductive risk in persons of European descent. South and East Asian populations are severely underrepresented, in part because of a definition of disease that preferences the phenotype associated with European-typical CFTR alleles.
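The variant-classification step described above amounts to joining population allele counts with clinical annotations and computational damage predictions, then filtering. A minimal sketch in Python follows; the column names (`clinvar_significance`, `cadd_phred`, etc.), variant identifiers, and threshold are hypothetical placeholders, not the ExAC schema or the study's criteria.

```python
import pandas as pd

# Hypothetical variant table: one row per CFTR variant per population.
variants = pd.DataFrame({
    "variant_id": ["var1", "var2", "var3", "var4"],
    "population": ["European", "South Asian", "East Asian", "European"],
    "allele_count": [1200, 45, 30, 8],
    "allele_number": [66000, 16000, 8600, 66000],
    "clinvar_significance": ["Pathogenic", "Pathogenic", "Uncertain", "Uncertain"],
    "cadd_phred": [35.0, 28.0, 24.5, 12.0],
})

# Flag variants that are either clinically annotated as pathogenic or
# predicted damaging by an in-silico score above an assumed threshold.
likely_deleterious = variants[
    (variants["clinvar_significance"] == "Pathogenic")
    | (variants["cadd_phred"] >= 25.0)
].copy()

# Per-population allele frequency of the flagged variants.
likely_deleterious["allele_freq"] = (
    likely_deleterious["allele_count"] / likely_deleterious["allele_number"]
)
print(likely_deleterious[["variant_id", "population", "allele_freq"]])
```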
ERIC Educational Resources Information Center
Mueller, Mildred K.
Presenting information regarding second-year activities of the Consortium of States to Upgrade Indian Education through State Departments of Education, this report includes the following: acknowledgments; data re: funding; background; second-year consortium activities; participation (a map depicting the 13-state membership and a list of Consortium…
25 CFR 1000.171 - When should a Tribe/Consortium submit a letter of interest?
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 2 2013-04-01 2013-04-01 false When should a Tribe/Consortium submit a letter of... of Initial Annual Funding Agreements § 1000.171 When should a Tribe/Consortium submit a letter of... BIA by April 1 for fiscal year Tribes/Consortia or May 1 for calendar year Tribes/Consortia. ...
25 CFR 1000.21 - When does a Tribe/Consortium have a “material audit exception”?
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 2 2014-04-01 2014-04-01 false When does a Tribe/Consortium have a âmaterial audit... SELF-DETERMINATION AND EDUCATION ACT Selection of Additional Tribes for Participation in Tribal Self-Governance Eligibility § 1000.21 When does a Tribe/Consortium have a “material audit exception”? A Tribe...
25 CFR 1000.171 - When should a Tribe/Consortium submit a letter of interest?
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 2 2014-04-01 2014-04-01 false When should a Tribe/Consortium submit a letter of... of Initial Annual Funding Agreements § 1000.171 When should a Tribe/Consortium submit a letter of... BIA by April 1 for fiscal year Tribes/Consortia or May 1 for calendar year Tribes/Consortia. ...
25 CFR 1000.21 - When does a Tribe/Consortium have a “material audit exception”?
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 2 2013-04-01 2013-04-01 false When does a Tribe/Consortium have a âmaterial audit... SELF-DETERMINATION AND EDUCATION ACT Selection of Additional Tribes for Participation in Tribal Self-Governance Eligibility § 1000.21 When does a Tribe/Consortium have a “material audit exception”? A Tribe...
Wisconsin Area Planning and Development. Consortium Project, Title I, Higher Education Act 1965.
ERIC Educational Resources Information Center
Wisconsin Univ., Madison. Univ. Extension.
The Consortium for Area Planning and Development was established in 1967 to implement the basic purposes of Title I of the Higher Education Act of 1965. The Consortium's first seminar was held in May 1968 and was attended by 25 project leaders, local and state government officials, technical consultants, and representatives of various institutions…
Cyber Intelligence Research Consortium (Poster)
2014-10-24
[Report documentation page and poster layout; recoverable content only.] Report date: October 2014; title: Cyber Intelligence Research Consortium (Poster). The poster outlines the analytical workflow (environmental context, data gathering, microanalysis, macroanalysis, reporting and feedback) for technical and nontechnical audiences, notes that environmental context provides scope for the analytical effort and highlights the importance of technical and nontechnical context, and describes a steering committee that guides Consortium activities and plans for the future.
ERIC Educational Resources Information Center
Enterprises for New Directions, Inc., Washington, DC.
In accordance with federally mandated evaluation requirements, Enterprises for New Directions (END), Inc., was requested to conduct a summative external evaluation of the 1977-78 Association of Community College Trustees (ACCTion) Consortium. The Consortium consists of 114 two-year colleges nationwide which receive technical assistance through…
24 CFR 943.126 - What is the relationship between HUD and a consortium?
Code of Federal Regulations, 2013 CFR
2013-04-01
... 24 Housing and Urban Development 4 2013-04-01 2013-04-01 false What is the relationship between... § 943.126 What is the relationship between HUD and a consortium? HUD has a direct relationship with the consortium through the PHA Plan process and through one or more payment agreements, executed in a form...
24 CFR 943.126 - What is the relationship between HUD and a consortium?
Code of Federal Regulations, 2012 CFR
2012-04-01
... 24 Housing and Urban Development 4 2012-04-01 2012-04-01 false What is the relationship between... § 943.126 What is the relationship between HUD and a consortium? HUD has a direct relationship with the consortium through the PHA Plan process and through one or more payment agreements, executed in a form...
24 CFR 943.126 - What is the relationship between HUD and a consortium?
Code of Federal Regulations, 2014 CFR
2014-04-01
... 24 Housing and Urban Development 4 2014-04-01 2014-04-01 false What is the relationship between... § 943.126 What is the relationship between HUD and a consortium? HUD has a direct relationship with the consortium through the PHA Plan process and through one or more payment agreements, executed in a form...
24 CFR 943.126 - What is the relationship between HUD and a consortium?
Code of Federal Regulations, 2011 CFR
2011-04-01
... 24 Housing and Urban Development 4 2011-04-01 2011-04-01 false What is the relationship between... § 943.126 What is the relationship between HUD and a consortium? HUD has a direct relationship with the consortium through the PHA Plan process and through one or more payment agreements, executed in a form...
24 CFR 943.126 - What is the relationship between HUD and a consortium?
Code of Federal Regulations, 2010 CFR
2010-04-01
... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false What is the relationship between... § 943.126 What is the relationship between HUD and a consortium? HUD has a direct relationship with the consortium through the PHA Plan process and through one or more payment agreements, executed in a form...
34 CFR 636.5 - What are the matching contribution and planning consortium requirements?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 34 Education 3 2010-07-01 2010-07-01 false What are the matching contribution and planning... PROGRAM General § 636.5 What are the matching contribution and planning consortium requirements? (a) The... agreed to by the members of a planning consortium. (Authority: 20 U.S.C. 1136b, 1136e) ...
Measuring Consortium Impact on User Perceptions: OhioLINK and LibQUAL+[TM
ERIC Educational Resources Information Center
Gatten, Jeffrey N.
2004-01-01
What is the impact of an academic library consortium on the perceptions of library services experienced by users of the member institutions' libraries? In 2002 and 2003, OhioLINK (Ohio's consortium…
ERIC Educational Resources Information Center
Kruemmling, Brooke; Hayes, Heather; Smith, Derrick W.
2017-01-01
The National Leadership Consortium in Sensory Disabilities (NLCSD) trained doctoral scholars at universities across the United States to increase the number and quality of professionals specializing in educating children with sensory disabilities. NLCSD produced 40 new doctorates and created a community of learners comprised of scholars, faculty,…
Code of Federal Regulations, 2011 CFR
2011-10-01
... inter-Tribal consortium or Tribal organization? 137.235 Section 137.235 Public Health PUBLIC HEALTH... SERVICES TRIBAL SELF-GOVERNANCE Withdrawal § 137.235 May an Indian Tribe withdraw from a participating inter-Tribal consortium or Tribal organization? Yes, an Indian Tribe may fully or partially withdraw...
Code of Federal Regulations, 2010 CFR
2010-10-01
... inter-Tribal consortium or Tribal organization? 137.235 Section 137.235 Public Health PUBLIC HEALTH... SERVICES TRIBAL SELF-GOVERNANCE Withdrawal § 137.235 May an Indian Tribe withdraw from a participating inter-Tribal consortium or Tribal organization? Yes, an Indian Tribe may fully or partially withdraw...
Code of Federal Regulations, 2014 CFR
2014-10-01
... inter-Tribal consortium or Tribal organization? 137.235 Section 137.235 Public Health PUBLIC HEALTH... SERVICES TRIBAL SELF-GOVERNANCE Withdrawal § 137.235 May an Indian Tribe withdraw from a participating inter-Tribal consortium or Tribal organization? Yes, an Indian Tribe may fully or partially withdraw...
32 CFR 37.515 - Must I do anything additional to determine the qualification of a consortium?
Code of Federal Regulations, 2014 CFR
2014-07-01
... SECRETARY OF DEFENSE DoD GRANT AND AGREEMENT REGULATIONS TECHNOLOGY INVESTMENT AGREEMENTS Pre-Award Business Evaluation Recipient Qualification § 37.515 Must I do anything additional to determine the qualification of a consortium? (a) When the prospective recipient of a TIA is a consortium that is not formally incorporated...
32 CFR 37.515 - Must I do anything additional to determine the qualification of a consortium?
Code of Federal Regulations, 2012 CFR
2012-07-01
... SECRETARY OF DEFENSE DoD GRANT AND AGREEMENT REGULATIONS TECHNOLOGY INVESTMENT AGREEMENTS Pre-Award Business Evaluation Recipient Qualification § 37.515 Must I do anything additional to determine the qualification of a consortium? (a) When the prospective recipient of a TIA is a consortium that is not formally incorporated...
32 CFR 37.515 - Must I do anything additional to determine the qualification of a consortium?
Code of Federal Regulations, 2013 CFR
2013-07-01
... SECRETARY OF DEFENSE DoD GRANT AND AGREEMENT REGULATIONS TECHNOLOGY INVESTMENT AGREEMENTS Pre-Award Business Evaluation Recipient Qualification § 37.515 Must I do anything additional to determine the qualification of a consortium? (a) When the prospective recipient of a TIA is a consortium that is not formally incorporated...
A Long Island Consortium Takes Shape. Occasional Paper No. 76-1.
ERIC Educational Resources Information Center
Taylor, William R.
This occasional paper, the first in a "new" series, describes the background, activities, and experiences of the Long Island Consortium, a cooperative effort of two-year and four-year colleges committed to organizing a model program of faculty development. The consortium was organized under an initial grant from the Lilly Endowment. In May and…
Baltimore Education Research Consortium: A Consideration of Past, Present, and Future
ERIC Educational Resources Information Center
Connolly, Faith; Plank, Stephen; Rone, Tracy
2012-01-01
In this paper, we offer an overview of the history and development of the Baltimore Education Research Consortium (BERC). As a part of this overview, we describe challenges and dilemmas encountered during the founding years of this consortium. We also highlight particular benefits or sources of satisfaction we have realized in the course of…
Code of Federal Regulations, 2010 CFR
2010-04-01
... Financial Assistance for Planning and Negotiation Grants for Non-BIA Programs Eligibility and Application... Tribe/Consortium in drafting its planning grant application? 1000.68 Section 1000.68 Indians OFFICE OF... planning grant application? Yes, upon request from the Tribe/Consortium, a non-BIA bureau may provide...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-26
... Production Act of 1993--IMS Global Learning Consortium, Inc. Correction In notice document 2011-78 appearing... sixth lines, ``INS Global Learning Consortium, Inc.'' should read ``IMS Global Learning Consortium, Inc.''. 3. On the same page, in the third column, in the 15th and 16th lines, ``INS Global Learning...
Third Progress and Information Report of the Vocational-Technical Education Consortium of States.
ERIC Educational Resources Information Center
Lee, Connie W.; And Others
This description of major activities and accomplishments of the Vocational-Technical Education Consortium of the States (V-TECS) since the second progress report of May, 1975, is designed to provide the reader with a basic understanding of the processes and procedures used by the consortium in achieving its major goal: The production of catalogs…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-13
... Explore Feasibility of Establishing a NIST/Industry Consortium on Neutron Measurements for Soft Materials.... SUMMARY: The National Institute of Standards and Technology (NIST) invites interested parties to attend a pre-consortium meeting on June 2-3, 2011 to be held on the NIST campus. The goal of the one-day...
ERIC Educational Resources Information Center
Pacific Resources for Education and Learning, Honolulu, HI.
The Pacific Eisenhower Mathematics and Science Regional Consortium was established at Pacific Resources for Education and Learning (PREL) in October, 1992 and completed its second funding cycle in February 2001. The Consortium is a collaboration among PREL, the Curriculum Research and Development Group (CRDG) at the University of Hawaii, and the…
Code of Federal Regulations, 2010 CFR
2010-04-01
... Evaluations § 1000.367 Will the Department evaluate a Tribe's/Consortium's performance of non-trust related... 25 Indians 2 2010-04-01 2010-04-01 false Will the Department evaluate a Tribe's/Consortium's performance of non-trust related programs? 1000.367 Section 1000.367 Indians OFFICE OF THE ASSISTANT SECRETARY...
ERIC Educational Resources Information Center
Sperazi, Laura; DePascale, Charles A.
The Massachusetts Workplace Literacy Consortium sought to upgrade work-related literacy skills at 22 partner sites in the state. Members included manufacturers, health care organizations, educational institutions, and labor unions. In its third year, the consortium served 1,179 workers with classes in English for speakers of other languages, adult…
Shahzad, Asim; Saddiqui, Samina; Bano, Asghari
2016-01-01
The objective of this study was to evaluate the role of a PGPR consortium and fertilizer, alone and in combination, on the physiology of maize grown under oily sludge stress as well as on soil nutrient status. The consortium was prepared from Bacillus cereus (Acc KR232400), Bacillus altitudinis (Acc KF859970), a Comamonas (Delftia) strain belonging to the family Comamonadaceae (Acc KF859971), and Stenotrophomonas maltophilia (Acc KF859973). The experiment was conducted in pots in a completely randomized design with four replicates and kept in the field. Oily sludge was mixed into the soil, and ammonium nitrate and diammonium phosphate (DAP) were added at 70 ug/g and 7 ug/g at sowing. Plants were harvested at 21 d for estimation of protein, proline, and the antioxidant enzymes superoxide dismutase (SOD) and peroxidase (POD). To study degradation, total petroleum hydrocarbons were extracted by Soxhlet extraction and the extract was analyzed by GC-FID at different periods after incubation. Combined application of the consortium and fertilizer enhanced germination percentage, protein, and proline content by 90, 130, and 99%, respectively, over untreated maize plants. Bioavailability of macro- and micronutrients was also enhanced with the consortium and fertilizer in oily sludge. The combined consortium and fertilizer treatment decreased superoxide dismutase (SOD) and peroxidase (POD) activities in the leaves of maize grown in oily sludge. Degradation of total petroleum hydrocarbons (TPHs) was 59% higher with combined application of the consortium and fertilizer than in untreated maize at 3 d. The bacterial consortium enhanced maize tolerance to oily sludge and enhanced degradation of total petroleum hydrocarbons (TPHs). Maize can be considered a tolerant plant species for remediating oily sludge-contaminated soils.
Eremenco, Sonya; Pease, Sheryl; Mann, Sarah; Berry, Pamela
2017-01-01
This paper describes the rationale and goals of the Patient-Reported Outcome (PRO) Consortium's instrument translation process. The PRO Consortium has developed a number of novel PRO measures which are in the process of qualification by the U.S. Food and Drug Administration (FDA) for use in clinical trials where endpoints based on these measures would support product labeling claims. Given the importance of FDA qualification of these measures, the PRO Consortium's Process Subcommittee determined that a detailed linguistic validation (LV) process was necessary to ensure that all translations of Consortium-developed PRO measures are performed using a standardized approach with the rigor required to meet regulatory and pharmaceutical industry expectations, as well as having a clearly defined instrument translation process that the translation industry can support. The consensus process involved gathering information about current best practices from 13 translation companies with expertise in LV, consolidating the findings to generate a proposed process, and obtaining iterative feedback from the translation companies and PRO Consortium member firms on the proposed process in two rounds of review in order to update existing principles of good practice in LV and to provide sufficient detail for the translation process to ensure consistency across PRO Consortium measures, sponsors, and translation companies. The consensus development resulted in a 12-step process that outlines universal and country-specific new translation approaches, as well as country-specific adaptations of existing translations. The PRO Consortium translation process will play an important role in maintaining the validity of the data generated through these measures by ensuring that they are translated by qualified linguists following a standardized and rigorous process that reflects best practice.
Bhutiani, Neal; Scoggins, Charles R; McMasters, Kelly M; Ethun, Cecilia G; Poultsides, George A; Pawlik, Timothy M; Weber, Sharon M; Schmidt, Carl R; Fields, Ryan C; Idrees, Kamran; Hatzaras, Ioannis; Shen, Perry; Maithel, Shishir K; Martin, Robert C G
2018-04-01
The objective of this study was to determine the impact of caudate resection on margin status and outcomes during resection of extrahepatic hilar cholangiocarcinoma. A database of 1,092 patients treated for biliary malignancies at institutions of the Extrahepatic Biliary Malignancy Consortium was queried for individuals undergoing curative-intent resection for extrahepatic hilar cholangiocarcinoma. Patients who did versus did not undergo concomitant caudate resection were compared with regard to demographic, baseline, and tumor characteristics as well as perioperative outcomes. A total of 241 patients underwent resection for a hilar cholangiocarcinoma, of whom 85 underwent caudate resection. Patients undergoing caudate resection were less likely to have a final positive margin (P = .01). Kaplan-Meier curve of overall survival for patients undergoing caudate resection indicated no improvement over patients not undergoing caudate resection (P = .16). On multivariable analysis, caudate resection was not associated with improved overall survival or recurrence-free survival, although lymph node positivity was associated with worse overall survival and recurrence-free survival, and adjuvant chemoradiotherapy was associated with improved overall survival and recurrence-free survival. Caudate resection is associated with a greater likelihood of margin-negative resection in patients with extrahepatic hilar cholangiocarcinoma. Precise preoperative imaging is critical to assess the extent of biliary involvement, so that all degrees of hepatic resections are possible at the time of the initial operation. Copyright © 2017 Elsevier Inc. All rights reserved.
Cancer Core Europe: a consortium to address the cancer care-cancer research continuum challenge.
Eggermont, Alexander M M; Caldas, Carlos; Ringborg, Ulrik; Medema, René; Tabernero, Josep; Wiestler, Otmar
2014-11-01
Cancer Core Europe is a transformative initiative in European cancer research, creating a consortium of six leading comprehensive cancer centres that will work together to address the cancer care-cancer research continuum. Prerequisites for joint translational and clinical research programs are very demanding. These require the creation of a virtual single 'e-hospital' and a powerful translational platform, inter-compatible clinical molecular profiling laboratories with a robust underlying computational biology pipeline, standardised functional and molecular imaging, and commonly agreed Standard Operating Procedures (SOPs) for liquid and tissue biopsy procurement, storage and processing, for molecular diagnostics, 'omics', functional genetics, immune-monitoring and other assessments. Importantly, it also requires a culture of data collection and data storage that provides complete longitudinal data sets to allow for effective data sharing and common database building, and to achieve a level of completeness of data that is required for conducting outcome research, taking into account our current understanding of cancers as communities of evolving clones. Cutting-edge basic research and technology development serve as an important driving force for innovative translational and clinical studies. Given the excellent track records of the six participants in these areas, Cancer Core Europe will be able to support the full spectrum of research required to address the cancer research-cancer care continuum. Cancer Core Europe also constitutes a unique environment to train the next generation of talent in innovative translational and clinical oncology. Copyright © 2014. Published by Elsevier Ltd.
Cohort profile: the Finnish Genetics of Pre-eclampsia Consortium (FINNPEC).
Jääskeläinen, Tiina; Heinonen, Seppo; Kajantie, Eero; Kere, Juha; Kivinen, Katja; Pouta, Anneli; Laivuori, Hannele
2016-11-10
The Finnish Genetics of Pre-eclampsia Consortium (FINNPEC) Study was established to set up a nationwide clinical and DNA database on women with and without pre-eclampsia (PE), including their partners and infants, in order to identify genetic risk factors for PE. FINNPEC is a cross-sectional case-control cohort collected from 5 university hospitals in Finland during 2008-2011. A total of 1450 patients with PE and 1065 pregnant control women without PE (aged 18-47 years) were recruited. Altogether, there were 1377 full triads (625 PE and 752 control triads). The established cohort holds both clinical and genetic information on mother-infant-father triads, representing a valuable resource for studying the pathogenesis of the disease. Furthermore, maternal biological samples (first and third trimester serum and placenta) will provide additional information for PE research. To date, research has encompassed studies on candidate genes, Sanger and next-generation sequencing, and various studies on the placenta. FINNPEC has also participated in the InterPregGen study, which is to date the largest investigation of maternal and fetal genetic factors underlying PE. Ongoing studies focus on elucidating the role of immunogenetic and metabolic factors in PE. Data on morbidity and mortality will be collected from mothers and fathers through links to the nationwide health registers. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
Mohana, Sarayu; Shrivastava, Shalini; Divecha, Jyoti; Madamwar, Datta
2008-02-01
Decolorization and degradation of the polyazo dye Direct Black 22 was carried out by a distillery spent wash-degrading mixed bacterial consortium, DMC. Response surface methodology (RSM) involving a central composite design (CCD) in four factors was successfully employed for the study and optimization of the decolorization process. The individual effects of, and interactions between, glucose concentration, yeast extract concentration, dye concentration, and inoculum size on dye decolorization were investigated and modeled. Under optimized conditions the bacterial consortium was able to decolorize the dye almost completely (>91%) within 12 h. The consortium was also able to decolorize 10 different azo dyes. The optimum combination of the four variables predicted through RSM was confirmed in confirmatory experiments, and hence this bacterial consortium holds potential for the treatment of industrial wastewater. Dye degradation products obtained during the course of decolorization were analyzed by HPTLC.
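RSM with a central composite design fits a second-order polynomial (linear, quadratic, and two-factor interaction terms) to the measured response and then locates the factor settings that maximize it. The sketch below, in Python with numpy, shows that general fitting step for four coded factors; the simulated responses and coefficients are illustrative placeholders, not the study's actual design points or model.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Central composite design in 4 coded factors:
# 16 factorial points (+/-1), 8 axial points (+/-2), and 3 center points.
factorial = np.array(list(itertools.product([-1.0, 1.0], repeat=4)))
axial = np.vstack([2.0 * np.eye(4), -2.0 * np.eye(4)])
center = np.zeros((3, 4))
X = np.vstack([factorial, axial, center])

def quadratic_terms(x):
    """[1, linear, squared, pairwise interaction] terms for one run."""
    terms = [1.0, *x, *(xi * xi for xi in x)]
    terms += [x[i] * x[j] for i, j in itertools.combinations(range(4), 2)]
    return terms

D = np.array([quadratic_terms(row) for row in X])

# Simulated decolorization response (%): a known quadratic surface plus noise.
true_coef = np.array([90, 3, 2, -4, 2.5, -2, -1.5, -1, -1.2,
                      0.8, 0.5, 0.3, 0.4, 0.2, 0.6])
y = D @ true_coef + rng.normal(0, 1.0, len(D))

# Fit the second-order RSM model by least squares.
coef, *_ = np.linalg.lstsq(D, y, rcond=None)

def predict(x):
    return float(np.array(quadratic_terms(x)) @ coef)

# Coarse grid search over the coded region to approximate the optimum settings.
grid = np.linspace(-2, 2, 9)
best = max(itertools.product(grid, repeat=4),
           key=lambda x: predict(np.array(x)))
print("Predicted optimum (coded units):", best,
      "predicted response:", round(predict(np.array(best)), 1), "%")
```

In practice the fitted coefficients are inspected with ANOVA and the optimum is found analytically or by desirability functions; the grid search here is only the simplest stand-in for that step.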
From Start-up to Sustainability: A Decade of Collaboration to Shape the Future of Nursing.
Gubrud, Paula; Spencer, Angela G; Wagner, Linda
This article describes progress the Oregon Consortium for Nursing Education has made toward addressing the academic progression goals provided by the 2011 Institute of Medicine's Future of Nursing: Leading Change, Advancing Health report. The history of the consortium's development is described, emphasizing the creation of an efficient and sustainable organizational infrastructure that supports a shared curriculum provided through a community college/university partnership. Data and analysis describing progress and challenges related to supporting a shared curriculum and increasing access and affordability for nursing education across the state are presented. We identified four crucial attributes of a collaborative community that have been cultivated to ensure the consortium continues to make progress toward reaching the Institute of Medicine's Future of Nursing goals. The Oregon Consortium for Nursing Education provides important lessons learned for other statewide consortia to consider when developing plans for sustainability.
Formulation of bacterial consortium as whole cell biocatalyst for degradation of oil compounds
NASA Astrophysics Data System (ADS)
Yetti, Elvi; A'la, Amalia; Luthfiyah, Nailul; Wijaya, Hans; Thontowi, Ahmad; Yopi
2017-11-01
In this research, we aim to investigate the formulation of a bacterial consortium as a whole-cell biocatalyst for degradation of oil compounds. We constructed microbial consortia from 4 (four) selected marine oil-degrading bacteria, yielding 15 (fifteen) combination cultures. The bacteria were from the collection of the Laboratory of Biocatalyst and Fermentation, Research Center for Biotechnology, Indonesian Institute of Sciences, and were designated as Labrenzia sp. MBTDCMFRIMab26, Labrenzia aggregata strain HQB397, Novosphingobium pentaromativorans strain PQ-3 16S, and Novosphingobium pentaromativorans strain US6-1. The mixtures, or bacterial consortia, denoted F1, F2, …, F15, consisted of 1, 2, 3 and 4 bacterial strains, respectively. The strains were selected based on the criterion that they displayed good growth in crude oil-containing media. Five bacterial formulations showed good potential as candidates for a microbial consortium. We will optimize these consortia with a carrier matrix chosen from biomass materials and also carry out oil content analysis.
The bioleaching potential of a bacterial consortium.
Latorre, Mauricio; Cortés, María Paz; Travisany, Dante; Di Genova, Alex; Budinich, Marko; Reyes-Jara, Angélica; Hödar, Christian; González, Mauricio; Parada, Pilar; Bobadilla-Fazzini, Roberto A; Cambiazo, Verónica; Maass, Alejandro
2016-10-01
This work presents the molecular foundation of a consortium of five efficient bacterial strains isolated from copper mines and currently used in state-of-the-art industrial-scale biotechnology. The strains Acidithiobacillus thiooxidans Licanantay, Acidiphilium multivorum Yenapatur, Leptospirillum ferriphilum Pañiwe, Acidithiobacillus ferrooxidans Wenelen and Sulfobacillus thermosulfidooxidans Cutipay were selected for genome sequencing based on metal tolerance, oxidation activity and copper bioleaching efficiency. An integrated model of metabolic pathways representing the bioleaching capability of this consortium was generated. Results revealed that greater efficiency in copper recovery may be explained by the higher functional potential of L. ferriphilum Pañiwe and At. thiooxidans Licanantay to oxidize iron and reduced inorganic sulfur compounds. The consortium had a greater capacity to resist copper, arsenic and chloride ions compared to previously described biomining strains. Specialization and particular components in these bacteria provided the consortium with a greater ability to bioleach copper sulfide ores. Copyright © 2016 Elsevier Ltd. All rights reserved.
Pérez-Valderrama, B; Arranz Arija, J A; Rodríguez Sánchez, A; Pinto Marín, A; Borrega García, P; Castellano Gaunas, D E; Rubio Romero, G; Maximiano Alonso, C; Villa Guzmán, J C; Puertas Álvarez, J L; Chirivella González, I; Méndez Vidal, M J; Juan Fita, M J; León-Mateos, L; Lázaro Quintela, M; García Domínguez, R; Jurado García, J M; Vélez de Mendizábal, E; Lambea Sorrosal, J J; García Carbonero, I; González del Alba, A; Suárez Rodríguez, C; Jiménez Gallego, P; Meana García, J A; García Marrero, R D; Gajate Borau, P; Santander Lobera, C; Molins Palau, C; López Brea, M; Fernández Parra, E M; Reig Torras, O; Basterretxea Badiola, L; Vázquez Estévez, S; González Larriba, J L
2016-04-01
Patients with metastatic renal cell carcinoma (mRCC) treated with first-line pazopanib were not included in the International Metastatic Renal Cell Carcinoma Database Consortium (IMDC) prognostic model. SPAZO (NCT02282579) was a nation-wide retrospective observational study designed to assess the effectiveness and validate the IMDC prognostic model in patients treated with first-line pazopanib in clinical practice. Data of 278 patients, treated with first-line pazopanib for mRCC in 34 centres in Spain, were locally recorded and externally validated. Mean age was 66 years; 68.3% were male, 93.5% had clear-cell type, 74.8% were nephrectomized, and 81.3% had ECOG 0-1. Metastatic sites were: lung 70.9%, lymph node 43.9%, bone 26.3%, soft tissue/skin 20.1%, liver 15.1%, CNS 7.2%, adrenal gland 6.5%, pleura/peritoneum 5.8%, pancreas 5%, and kidney 2.2%. After a median follow-up of 23 months, 76.4% had discontinued pazopanib (57.2% due to progression), 47.9% had received second-line targeted therapy, and 48.9% had died. According to the IMDC prognostic model, 19.4% had favourable risk (FR), 57.2% intermediate risk (IR), and 23.4% poor risk (PR). No unexpected toxicities were recorded. Response rate was 30.3% (FR: 44%, IR: 30%, PR: 17.3%). Median progression-free survival (whole population) was 11 months (32 in FR, 11 in IR, 4 in PR). Median and 2-year overall survival (whole population) were 22 months and 48.1%, respectively (FR: not reached and 81.6%, IR: 22 and 48.7%, PR: 7 and 18.8%). These estimations and their 95% confidence intervals are fully consistent with the outcomes predicted by the IMDC prognostic model. Our results validate the IMDC model for first-line pazopanib in mRCC and confirm the effectiveness and safety of this treatment. © The Author 2015. Published by Oxford University Press on behalf of the European Society for Medical Oncology. All rights reserved. For permissions, please email: journals.permissions@oup.com.
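The IMDC stratification used here assigns each patient to a risk group by counting adverse prognostic factors (commonly six: Karnofsky performance status <80%, less than one year from diagnosis to systemic therapy, anaemia, hypercalcaemia, neutrophilia, and thrombocytosis): zero factors is favourable risk, one or two intermediate, and three or more poor. A minimal sketch of that grouping follows, with hypothetical patient records used only for illustration.

```python
# Adverse prognostic factors in the IMDC (Heng) model.
IMDC_FACTORS = [
    "karnofsky_lt_80",   # Karnofsky performance status < 80%
    "dx_to_tx_lt_1yr",   # < 1 year from diagnosis to systemic therapy
    "anaemia",           # haemoglobin below lower limit of normal
    "hypercalcaemia",    # corrected calcium above upper limit of normal
    "neutrophilia",      # neutrophils above upper limit of normal
    "thrombocytosis",    # platelets above upper limit of normal
]

def imdc_risk_group(patient: dict) -> str:
    """Return 'favourable', 'intermediate', or 'poor' from boolean factor flags."""
    n = sum(bool(patient.get(f, False)) for f in IMDC_FACTORS)
    if n == 0:
        return "favourable"
    return "intermediate" if n <= 2 else "poor"

# Hypothetical patients (flags are illustrative only).
patients = [
    {"id": "A", "dx_to_tx_lt_1yr": True},
    {"id": "B", "anaemia": True, "neutrophilia": True, "thrombocytosis": True},
    {"id": "C"},
]
for p in patients:
    print(p["id"], imdc_risk_group(p))
```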
Wells, J Connor; Stukalin, Igor; Norton, Craig; Srinivas, Sandy; Lee, Jae Lyun; Donskov, Frede; Bjarnason, Georg A; Yamamoto, Haru; Beuselinck, Benoit; Rini, Brian I; Knox, Jennifer J; Agarwal, Neeraj; Ernst, D Scott; Pal, Sumanta K; Wood, Lori A; Bamias, Aristotelis; Alva, Ajjai S; Kanesvaran, Ravindran; Choueiri, Toni K; Heng, Daniel Y C
2017-02-01
The use of third-line targeted therapy (TTT) in metastatic renal cell carcinoma (mRCC) is not well characterized and varies due to the lack of robust data to guide treatment decisions. This study examined the use of third-line therapy in a large international population. To evaluate the use and efficacy of targeted therapy in a third-line setting. Twenty-five international cancer centers provided consecutive data on 4824 mRCC patients who were treated with an approved targeted therapy. One thousand and twelve patients (21%) received TTT and were included in the analysis. Patients were analyzed for overall survival (OS) and progression-free survival using Kaplan-Meier curves, and were evaluated for overall response. Cox regression analyses were used to determine the statistical association between OS and the six factors included in the International Metastatic Renal Cell Carcinoma Database Consortium (IMDC) prognostic model. Subgroup analysis was performed on patients stratified by their IMDC prognostic risk status. Everolimus was the most prevalent third-line therapy (27.5%), but sunitinib, sorafenib, pazopanib, temsirolimus, and axitinib were each used in ≥9% of patients. Patients receiving any TTT had an OS of 12.4 mo, a progression-free survival of 3.9 mo, and 61.1% of patients experienced an overall response of stable disease or better. Patients not receiving TTT had an OS of 2.1 mo. Patients with favorable- (7.2%) or intermediate-risk (65.3%) disease had the highest OS with TTT, 29.9 mo and 15.5 mo, respectively, while poor-risk (27.5%) patients survived 5.5 mo. Results are limited by the retrospective nature of the study. The use of TTT remains highly heterogeneous. The IMDC prognostic criteria can be used to stratify third-line patients. TTT use in favorable- and intermediate-risk patients was associated with the greatest OS. Patients with favorable- and intermediate-prognostic criteria disease treated with third-line targeted therapy have an associated longer overall survival compared with those with poor-risk disease. Copyright © 2016 European Association of Urology. Published by Elsevier B.V. All rights reserved.
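A minimal sketch of the survival analysis step described above, assuming the Python lifelines library and invented toy data: it fits a Kaplan-Meier estimator per IMDC risk group and reports the median overall survival, in the spirit of the registry analysis, without reproducing its actual dataset or its covariate-adjusted Cox models.

```python
# Hedged sketch: Kaplan-Meier overall survival by IMDC risk group.
# All values below are invented toy data, not the study's data.
import pandas as pd
from lifelines import KaplanMeierFitter

df = pd.DataFrame({
    "os_months": [29.9, 24.0, 15.5, 12.1, 5.5, 3.2, 36.0, 18.0],
    "died":      [1,    0,    1,    1,    1,   1,   1,    0],
    "imdc":      ["int", "int", "int", "poor", "poor", "poor", "fav", "fav"],
})

kmf = KaplanMeierFitter()
for group, sub in df.groupby("imdc"):
    kmf.fit(sub["os_months"], event_observed=sub["died"], label=group)
    print(group, "median OS (months):", kmf.median_survival_time_)
```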
Udo, Renate; Tcherny-Lessenot, Stéphanie; Brauer, Ruth; Dolin, Paul; Irvine, David; Wang, Yunxun; Auclert, Laurent; Juhaeri, Juhaeri; Kurz, Xavier; Abenhaim, Lucien; Grimaldi, Lamiae; De Bruin, Marie L
2016-03-01
To examine the robustness of findings of case-control studies on the association between acute liver injury (ALI) and antibiotic use in the following different situations: (i) Replication of a protocol in different databases, with different data types, as well as replication in the same database, but performed by a different research team. (ii) Varying algorithms to identify cases, with and without manual case validation. (iii) Different exposure windows for time at risk. Five case-control studies in four different databases were performed with a common study protocol as starting point to harmonize study outcome definitions, exposure definitions and statistical analyses. All five studies showed an increased risk of ALI associated with antibiotic use ranging from OR 2.6 (95% CI 1.3-5.4) to 7.7 (95% CI 2.0-29.3). Comparable trends could be observed in the five studies: (i) without manual validation the use of the narrowest definition for ALI showed higher risk estimates, (ii) narrow and broad algorithm definitions followed by manual validation of cases resulted in similar risk estimates, and (iii) the use of a larger window (30 days vs 14 days) to define time at risk led to a decrease in risk estimates. Reproduction of a study using a predefined protocol in different database settings is feasible, although assumptions had to be made and amendments in the protocol were inevitable. Despite differences, the strength of association was comparable between the studies. In addition, the impact of varying outcome definitions and time windows showed similar trends within the data sources. Copyright © 2015 John Wiley & Sons, Ltd.
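For readers who want the arithmetic behind the reported odds ratios, the sketch below computes an OR and a Wald 95% confidence interval from a 2x2 exposure-by-outcome table. The counts are invented and the function is illustrative only; the studies above used matched designs and adjusted models, not this raw calculation.

```python
# Hedged sketch: odds ratio and 95% CI from a 2x2 table (counts invented).
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a=exposed cases, b=exposed controls, c=unexposed cases, d=unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo, hi = (math.exp(math.log(or_) + s * z * se_log) for s in (-1, 1))
    return or_, lo, hi

print(odds_ratio_ci(30, 70, 15, 135))  # roughly OR 3.9 (95% CI ~2.0-7.6)
```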
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 2 2013-04-01 2013-04-01 false What happens if the Tribe/Consortium and bureau negotiators fail to reach an agreement? 1000.179 Section 1000.179 Indians OFFICE OF THE ASSISTANT SECRETARY... and bureau negotiators fail to reach an agreement? (a) If the Tribe/Consortium and bureau...
The NCI has awarded eight grants to create the Consortium for Molecular Characterization of Screen-Detected Lesions. The consortium has seven molecular characterization laboratories (MCLs) and a coordinating center, and is supported by the Division of Cancer Prevention and the Division of Cancer Biology.
25 CFR 1000.23 - How is a Tribe/Consortium admitted to the applicant pool?
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 2 2014-04-01 2014-04-01 false How is a Tribe/Consortium admitted to the applicant pool...-DETERMINATION AND EDUCATION ACT Selection of Additional Tribes for Participation in Tribal Self-Governance Admission into the Applicant Pool § 1000.23 How is a Tribe/Consortium admitted to the applicant pool? To be...
25 CFR 1000.23 - How is a Tribe/Consortium admitted to the applicant pool?
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 2 2013-04-01 2013-04-01 false How is a Tribe/Consortium admitted to the applicant pool...-DETERMINATION AND EDUCATION ACT Selection of Additional Tribes for Participation in Tribal Self-Governance Admission into the Applicant Pool § 1000.23 How is a Tribe/Consortium admitted to the applicant pool? To be...
Code of Federal Regulations, 2010 CFR
2010-07-01
... sign on behalf of the other participants and are binding on all consortium members with respect to the... signed by a single member on behalf of a consortium that is not a legal entity. For example, you should... sign on all members' behalf. Reporting Information About the Award ...
Code of Federal Regulations, 2011 CFR
2011-07-01
... sign on behalf of the other participants and are binding on all consortium members with respect to the... signed by a single member on behalf of a consortium that is not a legal entity. For example, you should... sign on all members' behalf. Reporting Information About the Award ...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-04
... Production Act of 1993--3D PDF Consortium, Inc. Notice is hereby given that, on November 8, 2012, pursuant to.... (``the Act''), 3D Consortium, Inc. (``3D PDF'') has filed written notifications simultaneously with the... remains open, and 3D PDF intends to file additional written notifications disclosing all changes in...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-14
... Production Act of 1993--3d PDF Consortium, Inc. Notice is hereby given that, on August 20, 2012, pursuant to.... (``the Act''), 3D Consortium, Inc. (``3D PDF'') has filed written notifications simultaneously with the... remains open, and 3D PDF intends to file additional written notifications disclosing all changes in...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-10
... Production Act of 1993--3D PDF Consortium, Inc. Notice is hereby given that, on April 19, 2013, pursuant to.... (``the Act''), 3D PDF Consortium, Inc. (``3D PDF'') has filed written notifications simultaneously with... project remains open, and 3D PDF intends to file additional written notifications disclosing all changes...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-03
... Production Act of 1993--3D PDF Consortium, Inc. Notice is hereby given that, on October 31, 2013, pursuant to.... (``the Act''), 3D PDF Consortium, Inc. (``3D PDF'') has filed written notifications simultaneously with... remains open, and 3D PDF intends to file additional written notifications disclosing all changes in...
AFT-QuEST Consortium Yearbook. Proceedings of the AFT-QuEST Consortium (April 22-26, 1973).
ERIC Educational Resources Information Center
American Federation of Teachers, Washington, DC.
This document is a report on the proceedings of the 1973 American Federation of Teachers-Quality Educational Standards in Teaching (AFT-QuEST) consortium sponsored by the AFT. Included in this document are the texts of speeches and outlines of workshops and discussions. The document is divided into the following sections: goals, major proposals,…
ERIC Educational Resources Information Center
Shaw, Kate
2016-01-01
The Philadelphia Education Research Consortium (PERC) was launched in July 2014 as an innovative place-based consortium of educational research partners from multiple sectors. Its primary objective is to provide research and analyses on some of the city's most pressing education issues. As such, PERC's research agenda is driven by both traditional…
ERIC Educational Resources Information Center
Beers, C. David; Ott, Richard W.
The Child Development Training Consortium, a Beacon College Project directed by San Juan College (SJC) is a collaborative effort of colleges and universities in New Mexico and Arizona. The consortium's major objective is to create child development training materials for community college faculty who teach "at-risk" Native American and…
Activities of the Alabama Consortium on forestry education and research, 1993-1999
John Schelhas
2002-01-01
The Alabama Consortium on Forestry Education and Research was established in 1992 to promote communication and collaboration among diverse institutions involved in forestry in the State of Alabama. It was organized to advance forestry education and research in ways that could not be accomplished by individual members alone. This report tells the story of the consortium...
ERIC Educational Resources Information Center
Bridge, Freda; Fisher, Roy; Webb, Keith
2003-01-01
The Consortium for Post-Compulsory Education and Training (CPCET) is a single subject consortium of further education and higher education providers of professional development relating to in-service teacher training for the whole of the post-compulsory sector. Involving more than 30 partners spread across the North of England, CPCET evolved from…
Prostate Cancer Clinical Consortium Clinical Research Site:Targeted Therapies
2015-10-01
AWARD NUMBER: W81XWH-14-2-0159. TITLE: Prostate Cancer Clinical Consortium Clinical Research Site: Targeted Therapies. PRINCIPAL INVESTIGATOR...Sep 2015... therapy resistance/sensitivity, identification of new therapeutic targets through high quality genomic analyses, providing access to the highest quality
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 2 2013-04-01 2013-04-01 false What happens to a Tribe's/Consortium's mature contract...-DETERMINATION AND EDUCATION ACT Retrocession § 1000.338 What happens to a Tribe's/Consortium's mature contract...? Retrocession has no effect on mature contract status, provided that the 3 most recent audits covering...
Code of Federal Regulations, 2010 CFR
2010-04-01
... Programs Establishing Self-Governance Base Budgets § 1000.107 Must a Tribe/Consortium with a base budget or... residual amounts? No, if a Tribe/Consortium negotiated amounts before January 16, 2001, it does not need to.... (c) Self-governance Tribes/Consortia are eligible for funding amounts for new or available programs...
Code of Federal Regulations, 2010 CFR
2010-04-01
...-governance activities for a member Tribe, that planning activity and report may be used to satisfy the planning requirements for the member Tribe if it applies for self-governance status on its own. (b) Submit... for Participation in Tribal Self-Governance Eligibility § 1000.18 May a Consortium member Tribe...
Northeast Artificial Intelligence Consortium Annual Report - 1988 Parallel Vision. Volume 9
1989-10-01
Supports the Northeast Artificial Intelligence Consortium (NAIC). Volume 9, Parallel Vision, report submitted by Christopher M. Brown and Randal C. Nelson, Syracuse University (Northeast Artificial Intelligence Consortium Annual Report - 1988).
Genesis of an oak-fire science consortium
Grabner, K.W.; Stambaugh, M. C.; Guyette, R.P.; Dey, D. C.; Willson, G.D.; Dey, D. C.; Stambaugh, M. C.; Clark, S.L.; Schweitzer, C. J.
2012-01-01
With respect to fire management and practices, one of the most overlooked regions lies in the middle of the country. In this region there is a critical need for both recognition of fire’s importance and sharing of fire information and expertise. Recently we proposed and were awarded funding by the Joint Fire Science Program to initiate the planning phase for a regional fire consortium. The purpose of the consortium will be to promote the dissemination of fire information across the interior United States and to identify fire information needs of oak-dominated communities such as woodlands, forests, savannas, and barrens. Geographically, the consortium region will cover: 1) the Interior Lowland Plateau Ecoregion in Illinois, Indiana, central Kentucky and Tennessee; 2) the Missouri, Arkansas, and Oklahoma Ozarks; 3) the Ouachita Mountains of Arkansas and Oklahoma; and 4) the Cross Timbers Region in Texas, Oklahoma, and Kansas. This region coincides with the southwestern half of the Central Hardwoods Forest Region. The tasks of this consortium will be to disseminate fire information, connect fire professionals, and efficiently address fire issues within our region. If supported, the success and the future direction of the consortium will be driven by end-users, their input, and involvement.
Cordova-Rosa, S M; Dams, R I; Cordova-Rosa, E V; Radetski, M R; Corrêa, A X R; Radetski, C M
2009-05-15
Time-course performance of a phenol-degrading indigenous bacterial consortium, and of Acinetobacter calcoaceticus var. anitratus, isolated from an industrial coal wastewater treatment plant was evaluated. This bacterial consortium was able to survive in the presence of phenol concentrations as high as 1200 mg L(-1), and the consortium degraded phenol faster than a pure culture of the A. calcoaceticus strain. In a batch system, 86% of phenol biodegradation occurred in around 30 h at pH 6.0, while at pH 3.0, 95.2% of phenol biodegradation occurred in 8 h. High phenol biodegradation (above 95%) by the mixed culture in a bioreactor was obtained in both continuous and batch systems, but when the test was carried out in coke gasification wastewater, no biodegradation was observed after 10 days at pH 9-11 for either the pure strain or the isolated consortium. An activated sludge with the same bacterial consortium characterized above was mixed with a textile sludge-contaminated soil with a phenol concentration of 19.48 mg kg(-1). After 20 days of bioaugmentation, the residual phenol concentration of the sludge-soil matrix was 1.13 mg kg(-1).
Bacterial community composition characterization of a lead-contaminated Microcoleus sp. consortium.
Giloteaux, Ludovic; Solé, Antoni; Esteve, Isabel; Duran, Robert
2011-08-01
A Microcoleus sp. consortium, obtained from the Ebro delta microbial mat, was maintained under different conditions including uncontaminated, lead-contaminated, and acidic conditions. Terminal restriction fragment length polymorphism and 16S rRNA gene library analyses were performed in order to determine the effect of lead and culture conditions on the Microcoleus sp. consortium. The bacterial composition inside the consortium revealed low diversity and the presence of specific terminal-restriction fragments under lead conditions. 16S rRNA gene library analyses showed that members of the consortium were affiliated with the Alpha, Beta, and Gammaproteobacteria and Cyanobacteria. Sequences closely related to Achromobacter spp., Alcaligenes faecalis, and Thiobacillus species were exclusively found under lead conditions, while sequences related to Geitlerinema sp., a cyanobacterium belonging to the Oscillatoriales, were not found in the presence of lead. This result showed a strong lead selection of the bacterial members present in the Microcoleus sp. consortium. Several of the 16S rRNA sequences were affiliated with nitrogen-fixing microorganisms including members of the Rhizobiaceae and the Sphingomonadaceae. Additionally, confocal laser scanning microscopy and scanning and transmission electron microscopy showed that under the lead-contaminated condition Microcoleus sp. cells were grouped and the number of electron-dense intracytoplasmic inclusions was increased.
Pugazhendi, Arulazhagan; Abbad Wazin, Hadeel; Qari, Huda; Basahi, Jalal Mohammad Al-Badry; Godon, Jean Jacques; Dhavamani, Jeyakumar
2017-10-01
Clean-up of contaminated wastewater remains a major challenge in petroleum refining. Here, we describe the capacity of a bacterial consortium enriched from a crude oil drilling site in Al-Khobar, Saudi Arabia, to utilize polycyclic aromatic hydrocarbons (PAHs) as the sole carbon source at 60°C. The consortium reduced low molecular weight (LMW; naphthalene, phenanthrene, fluorene and anthracene) and high molecular weight (HMW; pyrene, benzo(e)pyrene and benzo(k)fluoranthene) PAH loads of up to 1.5 g/L with removal efficiencies of 90% and 80%, respectively, within 10 days. PAH biodegradation was verified by the presence of PAH metabolites and the evolution of carbon dioxide (90 ± 3%). Biodegradation led to a reduction of the surface tension to 34 ± 1 mN/m, suggesting biosurfactant production by the consortium. Phylogenetic analysis of the consortium revealed the presence of the thermophilic PAH degraders Pseudomonas aeruginosa strain CEES1 (KU664514) and Bacillus thermosaudia strain CEES2 (KU664515). The consortium was further found to treat petroleum wastewater in a continuous stirred tank reactor with 96 ± 2% chemical oxygen demand removal and complete PAH degradation in 24 days.
Patel, Vilas; Jain, Siddharth; Madamwar, Datta
2012-03-01
A naphthalene-degrading bacterial consortium (DV-AL) was developed by an enrichment culture technique from sediment collected from the Alang-Sosiya ship breaking yard, Gujarat, India. 16S rRNA gene-based molecular analyses revealed that the bacterial consortium (DV-AL) consisted of four strains, namely Achromobacter sp. BAB239, Pseudomonas sp. DV-AL2, Enterobacter sp. BAB240 and Pseudomonas sp. BAB241. Consortium DV-AL was able to degrade 1000 ppm of naphthalene in Bushnell Haas medium (BHM) containing peptone (0.1%) as co-substrate with an initial pH of 8.0 at 37°C under shaking conditions (150 rpm) within 24 h. The maximum growth rate and naphthalene degradation rate were found to be 0.0389 h(-1) and 80 mg h(-1), respectively. Consortium DV-AL was able to utilize other aromatic and aliphatic hydrocarbons such as benzene, phenol, carbazole, petroleum oil, diesel fuel, phenanthrene, and 2-methylnaphthalene as sole carbon sources. Consortium DV-AL was also able to efficiently degrade naphthalene in the presence of other pollutants such as petroleum hydrocarbons and heavy metals. Copyright © 2011 Elsevier Ltd. All rights reserved.
Mathur, Ankita; Kumari, Jyoti; Parashar, Abhinav; T., Lavanya; Chandrasekaran, N.; Mukherjee, Amitava
2015-01-01
This study aimed to explore the toxicity of TiO2 nanoparticles at low concentrations (0.25, 0.50 and 1.00 μg/ml) on five bacterial isolates and their consortium in a wastewater medium under both dark and UVA conditions. To critically examine the toxic effects of the nanoparticles and the response mechanism(s) offered by the microbes, several aspects were monitored, viz. cell viability, ROS generation, SOD activity, membrane permeability, EPS release and biofilm formation. A dose- and time-dependent loss in viability was observed for the treated isolates and the consortium. At the highest dose, after 24 h, oxidative stress was examined, which showed more ROS generation and cell permeability and less SOD activity in the single isolates compared to the consortium. As a defense mechanism, EPS release was enhanced in the consortium relative to the single isolates, and was observed to be dose dependent. Similar results were noticed for biofilm formation, which substantially increased at the highest dose of nanoparticle exposure. In conclusion, the consortium showed more resistance against the toxic effects of the TiO2 nanoparticles than the individual isolates. PMID:26496250
Alcántara, S; Velasco, A; Revah, S
2004-10-01
Elemental sulfur formation by the partial oxidation of thiosulfate by both a sulfoxidizing consortium and Thiobacillus thioparus ATCC 23645 was studied under aerobic conditions in a chemostat. Steady state was attained with essentially total conversion to sulfate when the dissolved oxygen concentration was 5 mgO2 l(-1) and the dilution rate (D) was below 3.0 d(-1) for the consortium and 0.9 d(-1) for T. thioparus. The consortium formed elemental sulfur at steady state under oxygen limitation. Fifty percent of the theoretical elemental sulfur yield was obtained with a dissolved oxygen concentration of 0.2 mgO2 l(-1). Growth of T. thioparus was negatively affected at concentrations below 1.9 mgO2 l(-1). The consortium yield from batch cultures was 2.1 g (protein) mol(-1) (thiosulfate), which was comparable with the values obtained in the chemostat at dilution rates of 0.4 d(-1) and 1.2 d(-1). The consortium showed a maximum degradation rate of 0.105 g(thiosulfate) g(-1) (protein) min(-1) and a saturation constant for S2O3(2-) of 1.9 mM.
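A small worked example of the saturation kinetics quoted above, assuming a Michaelis-Menten/Monod-type rate law with the reported maximum rate (0.105 g thiosulfate g(-1) protein min(-1)) and saturation constant (1.9 mM); the substrate concentrations are chosen only for illustration.

```python
# Hedged sketch: Monod/Michaelis-Menten rate using the parameters quoted above.

def thiosulfate_oxidation_rate(s_mM, vmax=0.105, ks_mM=1.9):
    """Specific degradation rate (g thiosulfate / g protein / min) at substrate s."""
    return vmax * s_mM / (ks_mM + s_mM)

for s in (0.5, 1.9, 10.0, 50.0):  # at S = Ks the rate is half of Vmax
    print(f"S = {s:5.1f} mM -> v = {thiosulfate_oxidation_rate(s):.3f}")
```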
Multi-national, multi-lingual, multi-professional CATs: (Curriculum Analysis Tools).
Eisner, J
1995-01-01
A consortium of dental schools and allied dental programs was established in 1991 with the expressed purpose of creating a curriculum database program that was end-user modifiable [1]. In April of 1994, a beta version (Beta 2.5 written in FoxPro(TM) 2.5) of the software CATs, an acronym for Curriculum Analysis Tools, was released for use by over 30 of the consortium's 60 member institutions, while the remainder either waited for the Macintosh (TM) or Windows (TM) versions of the program or were simply not ready to begin an institutional curriculum analysis project. Shortly after this release, the design specifications were rewritten based on a thorough critique of the Beta 2.5 design and coding structures and user feedback. The result was Beta 3.0 which has been designed to accommodate any health professions curriculum, in any country that uses English or French as one of its languages. Given the program's extensive use of screen generation tools, it was quite easy to offer screen displays in a second language. As more languages become available as part of the Unified Medical Language System, used to document curriculum content, the program's design will allow their incorporation. When the software arrives at a new institution, the choice of language and health profession will have been preselected, leaving the Curriculum Database Manager to identify the country where the member institution is located. With these 'macro' end-user decisions completed, the database manager can turn to a more specific set of end-user questions including: 1) will the curriculum view selected for analysis be created by the course directors (provider entry of structured course outlines) or by the students (consumer entry of class session summaries)?; 2) which elements within the provided course outline or class session modules will be used?; 3) which, if any, internal curriculum validation measures will be included?; and 4) which, if any, external validation measures will be included. External measures can include accreditation standards, entry-level practitioner competencies, an index of learning behaviors, an index of discipline integration, or others defined by the institution. When data entry, which is secure to the course level, is complete users may choose to browse a variety of graphic representations of their curriculum, or either preview or print a variety of reports that offer more detail about the content and adequacy of their curriculum. The progress of all data entry can be monitored by the database manager over the course of an academic year, and all reports contain extensive missing data reports to ensure that the user knows whether they are studying complete or partial data. Institutions using the beta version of the program have reported considerable satisfaction with its functionality and have also offered a variety of design and interface enhancements. The anticipated release date for Curriculum Analysis Tools (CATs) is the first quarter of 1995.
Ang, Darwin N; Behrns, Kevin E
2013-07-01
The emphasis on high-quality care has spawned the development of quality programs, most of which focus on broad outcome measures across a diverse group of providers. Our aim was to investigate the clinical outcomes for a department of surgery with multiple service lines of patient care using a relational database. Mortality, length of stay (LOS), patient safety indicators (PSIs), and hospital-acquired conditions were examined for each service line. Expected values for mortality and LOS were derived from University HealthSystem Consortium regression models, whereas expected values for PSIs were derived from Agency for Healthcare Research and Quality regression models. Overall, 5200 patients were evaluated from the months of January through May of both 2011 (n = 2550) and 2012 (n = 2650). The overall observed-to-expected (O/E) ratio of mortality improved from 1.03 to 0.92. The overall O/E ratio for LOS improved from 0.92 to 0.89. PSIs that predicted mortality included postoperative sepsis (O/E:1.89), postoperative respiratory failure (O/E:1.83), postoperative metabolic derangement (O/E:1.81), and postoperative deep vein thrombosis or pulmonary embolus (O/E:1.8). Mortality and LOS can be improved by using a relational database with outcomes reported to specific service lines. Service line quality can be influenced by distribution of frequent reports, group meetings, and service line-directed interventions.
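Since the service-line comparison above rests on observed-to-expected ratios, a minimal sketch of that calculation follows. The expected values would come from the UHC and AHRQ risk-adjustment models mentioned in the abstract; the counts and service-line names below are invented placeholders.

```python
# Hedged sketch: observed-to-expected (O/E) ratios per service line.
# Expected counts would come from external risk-adjustment models; these are toys.

def oe_ratio(observed, expected):
    return observed / expected if expected else float("nan")

service_lines = {"colorectal": (12, 13.0), "hepatobiliary": (9, 9.8)}  # placeholders
for name, (obs, exp) in service_lines.items():
    print(f"{name}: O/E mortality = {oe_ratio(obs, exp):.2f}")
```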
IDOMAL: an ontology for malaria.
Topalis, Pantelis; Mitraka, Elvira; Bujila, Ioana; Deligianni, Elena; Dialynas, Emmanuel; Siden-Kiamos, Inga; Troye-Blomberg, Marita; Louis, Christos
2010-08-10
Ontologies are rapidly becoming a necessity for the design of efficient information technology tools, especially databases, because they permit the organization of stored data using logical rules and defined terms that are understood by both humans and machines. As a consequence, this enhances both the usage and the interoperability of databases and related resources. It is hoped that IDOMAL, the ontology of malaria, will prove a valuable instrument when implemented in both malaria research and control measures. The OBO-Edit 2 software was used for the construction of the ontology. IDOMAL is based on the Basic Formal Ontology (BFO) and follows the rules set by the OBO Foundry consortium. The first version of the malaria ontology covers both clinical and epidemiological aspects of the disease, as well as disease and vector biology. IDOMAL is meant to later become the nucleation site for a much larger ontology of vector-borne diseases, which will itself be an extension of a large ontology of infectious diseases (IDO). The latter is currently being developed in the framework of a large international collaborative effort. IDOMAL, already freely available in its first version, will form part of a suite of ontologies that will be used to drive IT tools and databases specifically constructed to help control malaria and, later, other vector-borne diseases. This suite already consists of the ontology described here as well as the one on insecticide resistance that has been available for some time. Additional components are being developed and introduced into IDOMAL.
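As an illustration of how such an OBO-format ontology can be consumed programmatically, the sketch below reads [Term] stanzas (id, name, is_a) from an OBO file. The file name is a placeholder and the parser covers only these three fields; it is not the tooling used by the IDOMAL developers.

```python
# Hedged sketch: minimal reader for [Term] stanzas in an OBO file.

def parse_obo_terms(path):
    terms, current = [], None
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if line == "[Term]":
                current = {"id": None, "name": None, "is_a": []}
                terms.append(current)
            elif current is not None and ": " in line:
                key, value = line.split(": ", 1)
                if key in ("id", "name"):
                    current[key] = value
                elif key == "is_a":
                    current["is_a"].append(value.split(" ! ")[0])
    return terms

# terms = parse_obo_terms("idomal.obo")  # placeholder path
```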
ASEAN Mineral Database and Information System (AMDIS)
NASA Astrophysics Data System (ADS)
Okubo, Y.; Ohno, T.; Bandibas, J. C.; Wakita, K.; Oki, Y.; Takahashi, Y.
2014-12-01
AMDIS was officially launched at the Fourth ASEAN Ministerial Meeting on Minerals on 28 November 2013. In cooperation with the Geological Survey of Japan, the web-based GIS was developed using Free and Open Source Software (FOSS) and Open Geospatial Consortium (OGC) standards. The system is composed of local databases and a centralized GIS. The local databases created and updated using the centralized GIS are accessible from the portal site. The system offers distinct advantages over traditional GIS: a global reach, a large number of users, better cross-platform capability, no charge for users or providers, ease of use, and unified updates. By raising the transparency of mineral information for mining companies and the public, AMDIS shows that mineral resources are abundant throughout the ASEAN region; however, there are many data gaps. We understand that such problems occur because of insufficient governance of mineral resources. The mineral governance we refer to is a concept that strengthens and maximizes the capacity and systems of the government institutions that manage the minerals sector. The elements of mineral governance include a) strengthening of the information infrastructure, b) technological and legal capacities of state-owned mining companies to fully engage with mining sponsors, c) government-led management of mining projects by supporting the project implementation units, d) government capacity in mineral management such as the control and monitoring of mining operations, and e) facilitation of regional and local development plans and their implementation with the private sector.
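Because AMDIS is built on OGC standards, a web-map request of the kind such a portal serves can be composed from standard WMS parameters. The sketch below builds a WMS 1.3.0 GetMap URL; the endpoint, layer name, and bounding box are illustrative assumptions, not AMDIS's actual service configuration.

```python
# Hedged sketch: composing an OGC WMS 1.3.0 GetMap URL with standard parameters.
from urllib.parse import urlencode

base = "https://example.org/geoserver/wms"  # placeholder endpoint
params = {
    "SERVICE": "WMS", "VERSION": "1.3.0", "REQUEST": "GetMap",
    "LAYERS": "asean:mineral_occurrences",  # placeholder layer name
    "CRS": "EPSG:4326",
    "BBOX": "-11,92,29,141",  # lat/lon axis order for EPSG:4326 in WMS 1.3.0
    "WIDTH": "800", "HEIGHT": "600",
    "FORMAT": "image/png",
}
print(f"{base}?{urlencode(params)}")
```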
NASA Astrophysics Data System (ADS)
Williams, J. W.; Grimm, E. C.; Ashworth, A. C.; Blois, J.; Charles, D. F.; Crawford, S.; Davis, E.; Goring, S. J.; Graham, R. W.; Miller, D. A.; Smith, A. J.; Stryker, M.; Uhen, M. D.
2017-12-01
The Neotoma Paleoecology Database supports global change research at the intersection of geology and ecology by providing a high-quality, community-curated data repository for paleoecological data. These data are widely used to study biological responses and feedbacks to past environmental change at local to global scales. The Neotoma data model is flexible and can store multiple kinds of fossil, biogeochemical, or physical variables measured from sedimentary archives. Data additions to Neotoma are growing and include >3.5 million observations, >16,000 datasets, and >8,500 sites. Dataset types include fossil pollen, vertebrates, diatoms, ostracodes, macroinvertebrates, plant macrofossils, insects, testate amoebae, geochronological data, and the recently added organic biomarkers, stable isotopes, and specimen-level data. Neotoma data can be found and retrieved in multiple ways, including the Explorer map-based interface, a RESTful Application Programming Interface, the neotoma R package, and digital object identifiers. Neotoma has partnered with the Paleobiology Database to produce a common data portal for paleobiological data, called the Earth Life Consortium. A new embargo management system is designed to allow investigators to put their data into Neotoma and then make use of Neotoma's value-added services. Neotoma's distributed scientific governance model is flexible and scalable, with many open pathways for welcoming new members, data contributors, stewards, and research communities. As the volume and variety of scientific data grow, community-curated data resources such as Neotoma have become foundational infrastructure for big data science.
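A minimal sketch of retrieving records through the RESTful API mentioned above, using Python's requests library. The endpoint path and query parameters are assumptions based on public Neotoma API conventions; consult the current API documentation or the neotoma R package for the exact routes.

```python
# Hedged sketch: querying a Neotoma-style REST endpoint for site records.
import requests

resp = requests.get(
    "https://api.neotomadb.org/v2.0/data/sites",  # assumed endpoint path
    params={"sitename": "%Devils Lake%", "limit": 5},  # illustrative parameters
    timeout=30,
)
resp.raise_for_status()
for site in resp.json().get("data", []):
    print(site.get("siteid"), site.get("sitename"))
```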
Jamal, Mamdoh T; Pugazhendi, Arulazhagan
2018-06-01
A halophilic bacterial consortium was enriched from Red Sea saline water and sediment samples collected from Abhor, Jeddah, Saudi Arabia. The consortium was able to degrade different low (above 90% for phenanthrene and fluorene) and high (69 ± 1.4% and 56 ± 1.8% at 50 and 100 mg/L of pyrene) molecular weight polycyclic aromatic hydrocarbons (PAHs) at different concentrations under saline conditions (40 g/L NaCl concentration). The cell hydrophobicity (91° ± 1°) and biosurfactant production (30 mN/m) confirmed potential bacterial cell interaction with PAHs to facilitate the biodegradation process. A co-metabolic study with phenanthrene as co-substrate during pyrene degradation recorded 90% degradation in 12 days. The consortium in a continuous stirred tank reactor with petroleum refinery wastewater showed complete and 90% degradation of low and high molecular weight PAHs, respectively. The reactor study also revealed 94 ± 1.8% chemical oxygen demand removal by the halophilic consortium under saline conditions (40 g/L NaCl concentration). The halophilic bacterial strains present in the consortium were identified as Ochrobactrum halosaudis strain CEES1 (KX377976), Stenotrophomonas maltophilia strain CEES2 (KX377977), Achromobacter xylosoxidans strain CEES3 (KX377978) and Mesorhizobium halosaudis strain CEES4 (KX377979). Thus, this promising halophilic consortium is highly recommended for use in saline petroleum wastewater treatment processes.
Muangchinda, Chanokporn; Rungsihiranrut, Adisan; Prombutara, Pinidphon; Soonglerdsongpha, Suwat; Pinyakong, Onruthai
2018-05-29
A bacterial consortium, named SWO, was enriched from crude oil-contaminated seawater from Phrao Bay in Rayong Province, Thailand, after a large oil spill in 2013. The bacterial consortium degraded a polycyclic aromatic hydrocarbon (PAH) mixture consisting of phenanthrene, anthracene, fluoranthene, and pyrene (50 mg L(-1) each) by approximately 73%, 69%, 52%, and 48%, respectively, within 21 days. This consortium exhibited excellent adaptation to a wide range of environmental conditions. It could degrade a mixture of four PAHs under a range of pH values (4.0-9.0), temperatures (25 °C-37 °C), and salinities (0-10 g L(-1) with NaCl). In addition, this consortium degraded 20-30% of benzo[a]pyrene and perylene (10 mg L(-1) each), high molecular weight PAHs, in the presence of other PAHs within 35 days, and degraded 40% of 2% (v/v) crude oil within 20 days. The 16S rRNA gene amplicon sequencing analysis demonstrated that Pseudomonas and Methylophaga were the dominant genera of consortium SWO in almost all treatments, while Pseudidiomarina, Thalassospira and Alcanivorax were predominant under higher salt concentrations. Moreover, Pseudomonas and Alcanivorax were dominant in the crude oil-degradation treatment. Our results suggest that the consortium SWO maintained its biodegradation ability by altering the bacterial community profile upon encountering changes in the environmental conditions. Copyright © 2018 Elsevier B.V. All rights reserved.
Bai, Naling; Abuduaini, Rexiding; Wang, Sheng; Zhang, Meinan; Zhu, Xufen; Zhao, Yuhua
2017-01-01
Nonylphenol (NP), ubiquitously detected as the degradation product of nonionic surfactants nonylphenol polyethoxylates, has been reported as an endocrine disrupter. However, most pure microorganisms can degrade only limited species of NP with low degradation efficiencies. To establish a microbial consortium that can effectively degrade different forms of NP, in this study, we isolated a facultative microbial consortium NP-M2 and characterized the biodegradation of NP by it. NP-M2 could degrade 75.61% and 89.75% of 1000 mg/L NP within 48 h and 8 days, respectively; an efficiency higher than that of any other consortium or pure microorganism reported so far. The addition of yeast extract promoted the biodegradation more significantly than that of glucose. Moreover, surface-active compounds secreted into the extracellular environment were hypothesized to promote high-efficiency metabolism of NP. The detoxification of NP by this consortium was determined. The degradation pathway was hypothesized to be initiated by oxidization of the benzene ring, followed by step-wise side-chain biodegradation. The bacterial composition of NP-M2 was determined using 16S rDNA library, and the consortium was found to mainly comprise members of the Sphingomonas, Pseudomonas, Alicycliphilus, and Acidovorax genera, with the former two accounting for 86.86% of the consortium. The high degradation efficiency of NP-M2 indicated that it could be a promising candidate for NP bioremediation in situ. Copyright © 2016 Elsevier Ltd. All rights reserved.
Ethun, Cecilia G; Lopez-Aguiar, Alexandra G; Pawlik, Timothy M; Poultsides, George; Idrees, Kamran; Fields, Ryan C; Weber, Sharon M; Cho, Clifford; Martin, Robert C; Scoggins, Charles R; Shen, Perry; Schmidt, Carl; Hatzaras, Ioannis; Bentrem, David; Ahmad, Syed; Abbott, Daniel; Kim, Hong Jin; Merchant, Nipun; Staley, Charles A; Kooby, David A; Maithel, Shishir K
2017-04-01
Distal cholangiocarcinoma (DC) and pancreatic ductal adenocarcinoma (PDAC) are often managed as 1 entity, yet direct comparisons are lacking. Our aim was to use 2 large multi-institutional databases to assess treatment, pathologic, and survival differences between these diseases. This study included patients with DC and PDAC who underwent curative-intent pancreaticoduodenectomy from 2000 to 2015 at 13 institutions comprising the US Extrahepatic Biliary Malignancy and Central Pancreas Consortiums. Primary endpoint was disease-specific survival (DSS). Of 1,463 patients, 224 (15%) had DC and 1,239 (85%) had PDAC. Compared with PDAC, DC patients were less likely to be margin-positive (19% vs 25%; p = 0.005), lymph node (LN)-positive (55% vs 69%; p < 0.001), and receive adjuvant therapy (57% vs 71%; p < 0.001). Of DC patients treated with adjuvant therapy, 62% got gemcitabine alone and 16% got gemcitabine/cisplatin. Distal cholangiocarcinoma was associated with improved median DSS (40 months) compared with PDAC (22 months; p < 0.001), which persisted on multivariable analysis (hazard ratio 0.65; 95% CI 0.50 to 0.84; p = 0.001). Lymph node involvement was the only factor independently associated with decreased DSS for both DC and PDAC. The DC/LN-positive patients had similar DSS as PDAC/LN-negative patients (p = 0.74). Adjuvant therapy (chemotherapy ± radiation) was associated with improved median DSS for PDAC/LN-positive patients (21 vs 13 months; p = 0.001), but not for DC patients (38 vs 40 months; p = 0.62), regardless of LN status. Distal cholangiocarcinoma and pancreatic ductal adenocarcinoma are distinct entities. Distal cholangiocarcinoma has a favorable prognosis compared with PDAC, yet current adjuvant therapy regimens are only associated with improved survival in PDAC, not DC. Therefore, treatment paradigms used for PDAC should not be extrapolated to DC, despite similar operative approaches, and novel therapies for DC should be explored. Copyright © 2016 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
Infrastructure for the life sciences: design and implementation of the UniProt website.
Jain, Eric; Bairoch, Amos; Duvaud, Severine; Phan, Isabelle; Redaschi, Nicole; Suzek, Baris E; Martin, Maria J; McGarvey, Peter; Gasteiger, Elisabeth
2009-05-08
The UniProt consortium was formed in 2002 by groups from the Swiss Institute of Bioinformatics (SIB), the European Bioinformatics Institute (EBI) and the Protein Information Resource (PIR) at Georgetown University, and soon afterwards the website http://www.uniprot.org was set up as a central entry point to UniProt resources. Requests to this address were redirected to one of the three organisations' websites. While these sites shared a set of static pages with general information about UniProt, their pages for searching and viewing data were different. To provide users with a consistent view and to cut the cost of maintaining three separate sites, the consortium decided to develop a common website for UniProt. Following several years of intense development and a year of public beta testing, the http://www.uniprot.org domain was switched to the newly developed site described in this paper in July 2008. The UniProt consortium is the main provider of protein sequence and annotation data for much of the life sciences community. The http://www.uniprot.org website is the primary access point to this data and to documentation and basic tools for the data. These tools include full text and field-based text search, similarity search, multiple sequence alignment, batch retrieval and database identifier mapping. This paper discusses the design and implementation of the new website, which was released in July 2008, and shows how it improves data access for users with different levels of experience, as well as to machines for programmatic access. http://www.uniprot.org/ is open for both academic and commercial use. The site was built with open source tools and libraries. Feedback is very welcome and should be sent to help@uniprot.org. The new UniProt website makes accessing and understanding UniProt easier than ever. The two main lessons learned are that getting the basics right for such a data provider website has huge benefits, but is not trivial and easy to underestimate, and that there is no substitute for using empirical data throughout the development process to decide on what is and what is not working for your users.
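A short example of the programmatic access the site is designed for: fetching a single entry in FASTA format over HTTP. The per-entry URL pattern shown is the classic www.uniprot.org form and may redirect on current servers, so treat it as illustrative rather than the definitive API.

```python
# Hedged sketch: retrieving one UniProt entry as FASTA via the classic URL form.
import requests

accession = "P05067"  # example accession; any valid UniProt accession works
resp = requests.get(f"https://www.uniprot.org/uniprot/{accession}.fasta", timeout=30)
resp.raise_for_status()
print(resp.text.splitlines()[0])  # FASTA header line
```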
Beauchet, Olivier; Allali, Gilles; Sekhon, Harmehr; Verghese, Joe; Guilain, Sylvie; Steinmetz, Jean-Paul; Kressig, Reto W.; Barden, John M.; Szturm, Tony; Launay, Cyrille P.; Grenier, Sébastien; Bherer, Louis; Liu-Ambrose, Teresa; Chester, Vicky L.; Callisaya, Michele L.; Srikanth, Velandai; Léonard, Guillaume; De Cock, Anne-Marie; Sawa, Ryuichi; Duque, Gustavo; Camicioli, Richard; Helbostad, Jorunn L.
2017-01-01
Background: Gait disorders, a highly prevalent condition in older adults, are associated with several adverse health consequences. Gait analysis allows qualitative and quantitative assessments of gait that improves the understanding of mechanisms of gait disorders and the choice of interventions. This manuscript aims (1) to give consensus guidance for clinical and spatiotemporal gait analysis based on the recorded footfalls in older adults aged 65 years and over, and (2) to provide reference values for spatiotemporal gait parameters based on the recorded footfalls in healthy older adults free of cognitive impairment and multi-morbidities. Methods: International experts working in a network of two different consortiums (i.e., Biomathics and Canadian Gait Consortium) participated in this initiative. First, they identified items of standardized information following the usual procedure of formulation of consensus findings. Second, they merged databases including spatiotemporal gait assessments with GAITRite® system and clinical information from the “Gait, cOgnitiOn & Decline” (GOOD) initiative and the Generation 100 (Gen 100) study. Only healthy—free of cognitive impairment and multi-morbidities (i.e., ≤ 3 therapeutics taken daily)—participants aged 65 and older were selected. Age, sex, body mass index, mean values, and coefficients of variation (CoV) of gait parameters were used for the analyses. Results: Standardized systematic assessment of three categories of items, which were demographics and clinical information, and gait characteristics (clinical and spatiotemporal gait analysis based on the recorded footfalls), were selected for the proposed guidelines. Two complementary sets of items were distinguished: a minimal data set and a full data set. In addition, a total of 954 participants (mean age 72.8 ± 4.8 years, 45.8% women) were recruited to establish the reference values. Performance of spatiotemporal gait parameters based on the recorded footfalls declined with increasing age (mean values and CoV) and demonstrated sex differences (mean values). Conclusions: Based on an international multicenter collaboration, we propose consensus guidelines for gait assessment and spatiotemporal gait analysis based on the recorded footfalls, and reference values for healthy older adults. PMID:28824393
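The reference values above are reported as means and coefficients of variation of spatiotemporal gait parameters; the sketch below shows that calculation (CoV = SD / mean x 100) on invented stride-time values, purely to make the summary statistics concrete.

```python
# Hedged sketch: mean and coefficient of variation for a gait parameter (toy data).
import statistics

stride_time_s = [1.04, 1.07, 1.02, 1.06, 1.05, 1.09, 1.03]
mean = statistics.mean(stride_time_s)
cov = statistics.stdev(stride_time_s) / mean * 100
print(f"mean = {mean:.3f} s, CoV = {cov:.1f}%")
```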
HPIDB 2.0: a curated database for host–pathogen interactions
Ammari, Mais G.; Gresham, Cathy R.; McCarthy, Fiona M.; Nanduri, Bindu
2016-01-01
Identification and analysis of host–pathogen interactions (HPI) is essential to study infectious diseases. However, HPI data are sparse in existing molecular interaction databases, especially for agricultural host–pathogen systems. Therefore, resources that annotate, predict and display the HPI that underpin infectious diseases are critical for developing novel intervention strategies. HPIDB 2.0 (http://www.agbase.msstate.edu/hpi/main.html) is a resource for HPI data, and contains 45,238 manually curated entries in the current release. Since the first description of the database in 2010, multiple enhancements to HPIDB data and interface services were made that are described here. Notably, HPIDB 2.0 now provides targeted biocuration of molecular interaction data. As a member of the International Molecular Exchange consortium, annotations provided by HPIDB 2.0 curators meet community standards to provide detailed contextual experimental information and facilitate data sharing. Moreover, HPIDB 2.0 provides access to rapidly available community annotations that capture minimum molecular interaction information to address immediate researcher needs for HPI network analysis. In addition to curation, HPIDB 2.0 integrates HPI from existing external sources and contains tools to infer additional HPI where annotated data are scarce. Compared to other interaction databases, our data collection approach ensures HPIDB 2.0 users access the most comprehensive HPI data from a wide range of pathogens and their hosts (594 pathogen and 70 host species, as of February 2016). Improvements also include enhanced search capacity, addition of Gene Ontology functional information, and implementation of network visualization. The changes made to HPIDB 2.0 content and interface ensure that users, especially agricultural researchers, are able to easily access and analyse high quality, comprehensive HPI data. All HPIDB 2.0 data are updated regularly, are publicly available for direct download, and are disseminated to other molecular interaction resources. Database URL: http://www.agbase.msstate.edu/hpi/main.html PMID:27374121
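To illustrate how curated interaction data of this kind are typically consumed, the sketch below reads interactor identifiers and taxon IDs from a PSI-MITAB tab-separated file. Whether HPIDB 2.0 exports exactly this layout is an assumption; the column positions follow the generic MITAB convention and the file name is a placeholder.

```python
# Hedged sketch: extract interactor IDs (columns 1-2) and taxids (columns 10-11)
# from a generic PSI-MITAB tab-separated interaction file.
import csv

def read_mitab_pairs(path):
    pairs = []
    with open(path, newline="", encoding="utf-8") as fh:
        for row in csv.reader(fh, delimiter="\t"):
            if len(row) >= 11 and not row[0].startswith("#"):
                pairs.append((row[0], row[1], row[9], row[10]))
    return pairs

# pairs = read_mitab_pairs("hpidb_interactions.mitab")  # placeholder file name
```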
Irinyi, Laszlo; Serena, Carolina; Garcia-Hermoso, Dea; Arabatzis, Michael; Desnos-Ollivier, Marie; Vu, Duong; Cardinali, Gianluigi; Arthur, Ian; Normand, Anne-Cécile; Giraldo, Alejandra; da Cunha, Keith Cassia; Sandoval-Denis, Marcelo; Hendrickx, Marijke; Nishikaku, Angela Satie; de Azevedo Melo, Analy Salles; Merseguel, Karina Bellinghausen; Khan, Aziza; Parente Rocha, Juliana Alves; Sampaio, Paula; da Silva Briones, Marcelo Ribeiro; e Ferreira, Renata Carmona; de Medeiros Muniz, Mauro; Castañón-Olivares, Laura Rosio; Estrada-Barcenas, Daniel; Cassagne, Carole; Mary, Charles; Duan, Shu Yao; Kong, Fanrong; Sun, Annie Ying; Zeng, Xianyu; Zhao, Zuotao; Gantois, Nausicaa; Botterel, Françoise; Robbertse, Barbara; Schoch, Conrad; Gams, Walter; Ellis, David; Halliday, Catriona; Chen, Sharon; Sorrell, Tania C; Piarroux, Renaud; Colombo, Arnaldo L; Pais, Célia; de Hoog, Sybren; Zancopé-Oliveira, Rosely Maria; Taylor, Maria Lucia; Toriello, Conchita; de Almeida Soares, Célia Maria; Delhaes, Laurence; Stubbe, Dirk; Dromer, Françoise; Ranque, Stéphane; Guarro, Josep; Cano-Lira, Jose F; Robert, Vincent; Velegraki, Aristea; Meyer, Wieland
2015-05-01
Human and animal fungal pathogens are a growing threat worldwide leading to emerging infections and creating new risks for established ones. There is a growing need for rapid and accurate identification of pathogens to enable early diagnosis and targeted antifungal therapy. Morphological and biochemical identification methods are time-consuming and require trained experts. Alternatively, molecular methods, such as DNA barcoding, a powerful and easy tool for rapid monophasic identification, offer a practical approach for species identification that is less demanding in terms of taxonomic expertise. However, its wide-spread use is still limited by a lack of quality-controlled reference databases and the evolving recognition and definition of new fungal species/complexes. An international consortium of medical mycology laboratories was formed aiming to establish a quality-controlled ITS database under the umbrella of the ISHAM working group on "DNA barcoding of human and animal pathogenic fungi." A new database, containing 2800 ITS sequences representing 421 fungal species, providing the medical community with a freely accessible tool at http://www.isham.org/ and http://its.mycologylab.org/ to rapidly and reliably identify most agents of mycoses, was established. The generated sequences included in the new database were used to evaluate the variation and overall utility of the ITS region for the identification of pathogenic fungi at the intra- and interspecies levels. The average intraspecies variation ranged from 0 to 2.25%. This highlighted selected pathogenic fungal species, such as the dermatophytes and emerging yeast, for which additional molecular methods/genetic markers are required for their reliable identification from clinical and veterinary specimens. © The Author 2015. Published by Oxford University Press on behalf of The International Society for Human and Animal Mycology. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
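The intraspecies variation figures quoted above (0 to 2.25%) are pairwise percent differences between aligned ITS sequences. The sketch below computes that quantity for two toy, pre-aligned sequences; it ignores alignment and ambiguity handling and is not the consortium's analysis pipeline.

```python
# Hedged sketch: percent difference between two pre-aligned sequences
# (equal length, '-' for gaps); sequences are toy examples.

def percent_difference(seq_a, seq_b):
    pairs = [(a, b) for a, b in zip(seq_a, seq_b) if a != "-" and b != "-"]
    mismatches = sum(1 for a, b in pairs if a != b)
    return 100.0 * mismatches / len(pairs)

s1 = "ACCTGCGGAAGGATCATTACCGAGT"
s2 = "ACCTGCGGAAGGATCATTACTGAGT"
print(f"{percent_difference(s1, s2):.2f}% difference")  # 4.00% on these toys
```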
Code of Federal Regulations, 2010 CFR
2010-04-01
... reasonable “wind up costs” incurred after the effective date of retrocession? 1000.316 Section 1000.316... Reassumption § 1000.316 May the Tribe/Consortium be reimbursed for actual and reasonable “wind up costs” incurred after the effective date of retrocession? Yes, the Tribe/Consortium may be reimbursed for actual...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-20
... Production Act of 1993--3D PDF Consortium, Inc. Notice is hereby given that, on March 27, 2012, pursuant to.... (``the Act''), 3D PDF Consortium, Inc. (``3D PDF'') has filed written notifications simultaneously with... Corporation, Boulder, CO; Aras Corporation, Andover, MA; Tetra 4D, LLC, Seattle, WA; Tech Soft 3D, Berkeley...
Legacy System Engineering, VPERC Consortium
2009-09-01
REPORT: Legacy System Engineering, VPERC Consortium, Final Report, University of Utah, for work ending July 15, 2009. ABSTRACT: This paper is one of three...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-29
... Production Act of 1993--3D PDF Consortium, Inc. Notice is hereby given that, on June 4, 2012, pursuant to.... (``the Act''), 3D PDF Consortium, Inc. (``3D PDF'') has filed written notifications simultaneously with... research project. Membership in this group research project remains open, and 3D PDF intends to file...