Lin, Hongli; Wang, Weisheng; Luo, Jiawei; Yang, Xuedong
2014-12-01
The aim of this study was to develop a personalized training system using the Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI) database, because collecting, annotating, and marking a large number of appropriate computed tomography (CT) scans, and providing the capability of dynamically selecting suitable training cases based on the performance levels of trainees and the characteristics of cases, are critical for developing an efficient training system. A novel approach is proposed to develop a personalized radiology training system for the interpretation of lung nodules in CT scans using the LIDC/IDRI database. The system provides a Content-Boosted Collaborative Filtering (CBCF) algorithm that predicts the difficulty level of each case for each trainee so that suitable cases can be selected to meet individual needs, and a diagnostic simulation tool that enables trainees to analyze and diagnose lung nodules with the help of an image processing tool and a nodule retrieval tool. Preliminary evaluation of the system shows that developing a personalized training system for the interpretation of lung nodules is needed and useful for enhancing the professional skills of trainees. The approach of developing personalized training systems using the LIDC/IDRI database is a feasible solution to the challenges of constructing specific training programs in terms of cost and training efficiency. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.
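As an illustration of the collaborative-filtering half of the CBCF idea described above, the following minimal Python sketch predicts a missing difficulty rating for a trainee from the ratings of similar trainees; the content-boosting step is only stubbed with a case-mean fill, and the matrix, function names, and values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Illustrative trainee-by-case difficulty ratings (1 = easy .. 5 = hard); 0 = unrated.
R = np.array([
    [4.0, 3.0, 0.0, 5.0],
    [3.0, 3.0, 2.0, 4.0],
    [5.0, 4.0, 3.0, 0.0],
])

def content_boost(ratings):
    """Stub for the content-based step: fill unrated cells with the case's mean
    observed rating (a stand-in for a model driven by case characteristics)."""
    filled = ratings.copy()
    for j in range(ratings.shape[1]):
        observed = ratings[:, j][ratings[:, j] > 0]
        filled[filled[:, j] == 0, j] = observed.mean() if observed.size else 3.0
    return filled

def predict_difficulty(boosted, trainee, case):
    """User-based collaborative filtering on the boosted (dense) matrix."""
    target = boosted[trainee]
    sims = boosted @ target / (
        np.linalg.norm(boosted, axis=1) * np.linalg.norm(target) + 1e-12)
    sims[trainee] = 0.0  # do not let the trainee's own row vote
    return float(sims @ boosted[:, case] / (sims.sum() + 1e-12))

boosted = content_boost(R)
print(f"Predicted difficulty of case 3 for trainee 2: {predict_difficulty(boosted, 2, 3):.2f}")
```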
DOE Office of Scientific and Technical Information (OSTI.GOV)
2011-02-15
Purpose: The development of computer-aided diagnostic (CAD) methods for lung nodule detection, classification, and quantitative assessment can be facilitated through a well-characterized repository of computed tomography (CT) scans. The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI) completed such a database, establishing a publicly available reference for the medical imaging research community. Initiated by the National Cancer Institute (NCI), further advanced by the Foundation for the National Institutes of Health (FNIH), and accompanied by the Food and Drug Administration (FDA) through active participation, this public-private partnership demonstrates the success of a consortium founded on a consensus-based process. Methods: Seven academic centers and eight medical imaging companies collaborated to identify, address, and resolve challenging organizational, technical, and clinical issues to provide a solid foundation for a robust database. The LIDC/IDRI Database contains 1018 cases, each of which includes images from a clinical thoracic CT scan and an associated XML file that records the results of a two-phase image annotation process performed by four experienced thoracic radiologists. In the initial blinded-read phase, each radiologist independently reviewed each CT scan and marked lesions belonging to one of three categories ("nodule ≥3 mm," "nodule <3 mm," and "non-nodule ≥3 mm"). In the subsequent unblinded-read phase, each radiologist independently reviewed their own marks along with the anonymized marks of the three other radiologists to render a final opinion. The goal of this process was to identify as completely as possible all lung nodules in each CT scan without requiring forced consensus. Results: The Database contains 7371 lesions marked "nodule" by at least one radiologist. 2669 of these lesions were marked "nodule ≥3 mm" by at least one radiologist, of which 928 (34.7%) received such marks from all four radiologists. These 2669 lesions include nodule outlines and subjective nodule characteristic ratings. Conclusions: The LIDC/IDRI Database is expected to provide an essential medical imaging research resource to spur CAD development, validation, and dissemination in clinical practice.
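For orientation, the sketch below shows how per-reader, per-category mark counts could be tallied from an annotation file of the two-phase kind described above; the element and attribute names are hypothetical placeholders, not the actual LIDC/IDRI XML schema, which should be taken from the published documentation.

```python
import xml.etree.ElementTree as ET
from collections import Counter

# Hypothetical element names for illustration only -- the real LIDC/IDRI XML
# schema differs and should be checked against the published documentation.
SAMPLE = """
<readingSessions>
  <readingSession reader="A">
    <mark category="nodule&gt;=3mm" id="n1"/>
    <mark category="nodule&lt;3mm"  id="n2"/>
  </readingSession>
  <readingSession reader="B">
    <mark category="nodule&gt;=3mm" id="n1"/>
    <mark category="non-nodule&gt;=3mm" id="x1"/>
  </readingSession>
</readingSessions>
"""

def count_marks(xml_text):
    """Tally how many marks of each category every reader placed on one scan."""
    root = ET.fromstring(xml_text)
    counts = Counter()
    for session in root.iter("readingSession"):
        reader = session.get("reader")
        for mark in session.iter("mark"):
            counts[(reader, mark.get("category"))] += 1
    return counts

for (reader, category), n in sorted(count_marks(SAMPLE).items()):
    print(f"reader {reader}: {n} x {category}")
```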
Dodd, Lori E; Wagner, Robert F; Armato, Samuel G; McNitt-Gray, Michael F; Beiden, Sergey; Chan, Heang-Ping; Gur, David; McLennan, Geoffrey; Metz, Charles E; Petrick, Nicholas; Sahiner, Berkman; Sayre, Jim
2004-04-01
Cancer of the lung and bronchus is the leading fatal malignancy in the United States. Five-year survival is low, but treatment of early stage disease considerably improves chances of survival. Advances in multidetector-row computed tomography technology provide detection of smaller lung nodules and offer a potentially effective screening tool. The large number of images per exam, however, requires considerable radiologist time for interpretation and is an impediment to clinical throughput. Thus, computer-aided diagnosis (CAD) methods are needed to assist radiologists with their decision making. To promote the development of CAD methods, the National Cancer Institute formed the Lung Image Database Consortium (LIDC). The LIDC is charged with developing the consensus and standards necessary to create an image database of multidetector-row computed tomography lung images as a resource for CAD researchers. To develop such a prospective database, its potential uses must be anticipated. The ultimate applications will influence the information that must be included along with the images, the relevant measures of algorithm performance, and the number of required images. In this article we outline assessment methodologies and statistical issues as they relate to several potential uses of the LIDC database. We review methods for performance assessment and discuss issues of defining "truth" as well as the complications that arise when truth information is not available. We also discuss issues about sizing and populating a database.
Developing consistent Landsat data sets for large area applications: the MRLC 2001 protocol
Chander, G.; Huang, Chengquan; Yang, Limin; Homer, Collin G.; Larson, C.
2009-01-01
One of the major efforts in large area land cover mapping over the last two decades was the completion of two U.S. National Land Cover Data sets (NLCD), developed with nominal 1992 and 2001 Landsat imagery under the auspices of the MultiResolution Land Characteristics (MRLC) Consortium. Following the successful generation of NLCD 1992, a second generation MRLC initiative was launched with two primary goals: (1) to develop a consistent Landsat imagery data set for the U.S. and (2) to develop a second generation National Land Cover Database (NLCD 2001). One of the key enhancements was the formulation of an image preprocessing protocol and implementation of a consistent image processing method. The core data set of the NLCD 2001 database consists of Landsat 7 Enhanced Thematic Mapper Plus (ETM+) images. This letter details the procedures for processing the original ETM+ images and more recent scenes added to the database. NLCD 2001 products include Anderson Level II land cover classes, percent tree canopy, and percent urban imperviousness at 30-m resolution derived from Landsat imagery. The products are freely available for download to the general public from the MRLC Consortium Web site at http://www.mrlc.gov.
1989-03-01
[OCR fragments from an automated photointerpretation testbed report.] Markov random field (MRF) theory provides a powerful alternative texture model and has prompted intensive research activity in MRF model-based texture analysis. Additional, and perhaps more powerful, features have to be incorporated into the image segmentation procedure, along with object detection, to support the interpretation process.
Mulrane, Laoighse; Rexhepaj, Elton; Smart, Valerie; Callanan, John J; Orhan, Diclehan; Eldem, Türkan; Mally, Angela; Schroeder, Susanne; Meyer, Kirstin; Wendt, Maria; O'Shea, Donal; Gallagher, William M
2008-08-01
The widespread use of digital slides has only recently come to the fore with the development of high-throughput scanners and high performance viewing software. This development, along with the optimisation of compression standards and image transfer techniques, has allowed the technology to be used in wide reaching applications including integration of images into hospital information systems and histopathological training, as well as the development of automated image analysis algorithms for prediction of histological aberrations and quantification of immunohistochemical stains. Here, the use of this technology in the creation of a comprehensive library of images of preclinical toxicological relevance is demonstrated. The images, acquired using the Aperio ScanScope CS and XT slide acquisition systems, form part of the ongoing EU FP6 Integrated Project, Innovative Medicines for Europe (InnoMed). In more detail, PredTox (abbreviation for Predictive Toxicology) is a subproject of InnoMed and comprises a consortium of 15 industrial (13 large pharma, 1 technology provider and 1 SME) and three academic partners. The primary aim of this consortium is to assess the value of combining data generated from 'omics technologies (proteomics, transcriptomics, metabolomics) with the results from more conventional toxicology methods, to facilitate further informed decision making in preclinical safety evaluation. A library of 1709 scanned images was created of full-face sections of liver and kidney tissue specimens from male Wistar rats treated with 16 proprietary and reference compounds of known toxicity; additional biological materials from these treated animals were separately used to create 'omics data, that will ultimately be used to populate an integrated toxicological database. In respect to assessment of the digital slides, a web-enabled digital slide management system, Digital SlideServer (DSS), was employed to enable integration of the digital slide content into the 'omics database and to facilitate remote viewing by pathologists connected with the project. DSS also facilitated manual annotation of digital slides by the pathologists, specifically in relation to marking particular lesions of interest. Tissue microarrays (TMAs) were constructed from the specimens for the purpose of creating a repository of tissue from animals used in the study with a view to later-stage biomarker assessment. As the PredTox consortium itself aims to identify new biomarkers of toxicity, these TMAs will be a valuable means of validation. In summary, a large repository of histological images was created enabling the subsequent pathological analysis of samples through remote viewing and, along with the utilisation of TMA technology, will allow the validation of biomarkers identified by the PredTox consortium. The population of the PredTox database with these digitised images represents the creation of the first toxicological database integrating 'omics and preclinical data with histological images.
Messay, Temesguen; Hardie, Russell C; Tuinstra, Timothy R
2015-05-01
We present new pulmonary nodule segmentation algorithms for computed tomography (CT). These include a fully-automated (FA) system, a semi-automated (SA) system, and a hybrid system. Like most traditional systems, the new FA system requires only a single user-supplied cue point. On the other hand, the SA system represents a new algorithm class requiring 8 user-supplied control points. This does increase the burden on the user, but we show that the resulting system is highly robust and can handle a variety of challenging cases. The proposed hybrid system starts with the FA system. If improved segmentation results are needed, the SA system is then deployed. The FA segmentation engine has 2 free parameters, and the SA system has 3. These parameters are adaptively determined for each nodule in a search process guided by a regression neural network (RNN). The RNN uses a number of features computed for each candidate segmentation. We train and test our systems using the new Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) data. To the best of our knowledge, this is one of the first nodule-specific performance benchmarks using the new LIDC-IDRI dataset. We also compare the performance of the proposed methods with several previously reported results on the same data used by those other methods. Our results suggest that the proposed FA system improves upon the state-of-the-art, and the SA system offers a considerable boost over the FA system. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
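A minimal sketch of the hybrid control flow described above (run the fully automated engine first, fall back to the semi-automated engine when the predicted segmentation quality is low) is given below; the segmentation engines and the regression-network quality score are stand-in stubs, and the threshold is an assumed illustrative value.

```python
import numpy as np

# Illustrative stand-ins: the real FA/SA engines and the regression-network
# quality score from the paper are not reproduced here.
def fa_segment(volume, cue_point):
    """Fully automated engine: one user-supplied cue point."""
    seg = np.zeros_like(volume, dtype=bool)
    seg[tuple(cue_point)] = True          # placeholder region
    return seg

def sa_segment(volume, control_points):
    """Semi-automated engine: eight user-supplied control points."""
    seg = np.zeros_like(volume, dtype=bool)
    for p in control_points:
        seg[tuple(p)] = True              # placeholder region
    return seg

def predicted_quality(seg):
    """Stand-in for the regression-network score over candidate segmentations."""
    return float(seg.mean())

def hybrid_segment(volume, cue_point, control_points, ok_threshold=0.5):
    """Run the FA engine first; fall back to the SA engine if the predicted
    segmentation quality is too low (the hybrid policy described above)."""
    seg = fa_segment(volume, cue_point)
    if predicted_quality(seg) < ok_threshold:
        seg = sa_segment(volume, control_points)
    return seg

volume = np.zeros((16, 16, 16))
seg = hybrid_segment(volume, cue_point=(8, 8, 8),
                     control_points=[(8, 8, k) for k in range(4, 12)])
print("segmented voxels:", int(seg.sum()))
```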
Scientific Use Cases for the Virtual Atomic and Molecular Data Center
NASA Astrophysics Data System (ADS)
Dubernet, M. L.; Aboudarham, J.; Ba, Y. A.; Boiziot, M.; Bottinelli, S.; Caux, E.; Endres, C.; Glorian, J. M.; Henry, F.; Lamy, L.; Le Sidaner, P.; Møller, T.; Moreau, N.; Rénié, C.; Roueff, E.; Schilke, P.; Vastel, C.; Zwoelf, C. M.
2014-12-01
The VAMDC Consortium is a worldwide consortium that federates interoperable atomic and molecular databases through an e-science infrastructure. The contained data are of the highest scientific quality and are crucial for many applications: astrophysics, atmospheric physics, fusion, plasma and lighting technologies, health, etc. In this paper we present astrophysical scientific use cases relating to the VAMDC e-infrastructure. These cover very different applications such as: (i) modeling the spectra of interstellar objects using the myXCLASS software tool implemented in the Common Astronomy Software Applications package (CASA) or using the CASSIS software tool, in its stand-alone version or implemented in the Herschel Interactive Processing Environment (HIPE); (ii) the use of Virtual Observatory tools accessing VAMDC databases; (iii) access to VAMDC from the Paris solar BASS2000 portal; (iv) the combination of tools and databases from the APIS service (Auroral Planetary Imaging and Spectroscopy); (v) the combination of heterogeneous data for application to the interstellar medium from the SPECTCOL tool.
NASA Astrophysics Data System (ADS)
Hancock, Matthew C.; Magnan, Jerry F.
2017-03-01
To determine the potential usefulness of quantified diagnostic image features as inputs to a CAD system, we investigate the predictive capabilities of statistical learning methods for classifying nodule malignancy, utilizing the Lung Image Database Consortium (LIDC) dataset, and only employ the radiologist-assigned diagnostic feature values for the lung nodules therein, as well as our derived estimates of the diameter and volume of the nodules from the radiologists' annotations. We calculate theoretical upper bounds on the classification accuracy that is achievable by an ideal classifier that only uses the radiologist-assigned feature values, and we obtain an accuracy of 85.74 (+/-1.14)% which is, on average, 4.43% below the theoretical maximum of 90.17%. The corresponding area-under-the-curve (AUC) score is 0.932 (+/-0.012), which increases to 0.949 (+/-0.007) when diameter and volume features are included, along with the accuracy to 88.08 (+/-1.11)%. Our results are comparable to those in the literature that use algorithmically-derived image-based features, which supports our hypothesis that lung nodules can be classified as malignant or benign using only quantified, diagnostic image features, and indicates the competitiveness of this approach. We also analyze how the classification accuracy depends on specific features, and feature subsets, and we rank the features according to their predictive power, statistically demonstrating the top four to be spiculation, lobulation, subtlety, and calcification.
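As a hedged sketch of this kind of experiment, the snippet below cross-validates a classifier on a synthetic stand-in for the radiologist-assigned feature vectors (plus diameter and volume); the data, the random-forest choice, and the label rule are illustrative assumptions rather than the statistical learning methods actually compared in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for the radiologist-assigned feature values (the real
# study uses the LIDC annotations); columns mirror the features named above.
feature_names = ["spiculation", "lobulation", "subtlety", "calcification",
                 "diameter", "volume"]
X = rng.uniform(1, 5, size=(500, len(feature_names)))
# Fake malignancy labels, loosely driven by spiculation + lobulation.
y = (X[:, 0] + X[:, 1] + rng.normal(0, 1, 500) > 6).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"accuracy {acc.mean():.3f} +/- {acc.std():.3f}")
print(f"AUC      {auc.mean():.3f} +/- {auc.std():.3f}")
```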
Automatic lung nodule graph cuts segmentation with deep learning false positive reduction
NASA Astrophysics Data System (ADS)
Sun, Wenqing; Huang, Xia; Tseng, Tzu-Liang Bill; Qian, Wei
2017-03-01
To automatically detect lung nodules from CT images, we designed a two-stage computer-aided detection (CAD) system. The first stage is graph cuts segmentation to identify and segment the nodule candidates, and the second stage is a convolutional neural network for false positive reduction. The dataset contains 595 CT cases randomly selected from the Lung Image Database Consortium and Image Database Resource Initiative (LIDC/IDRI), and the 305 pulmonary nodules that achieved diagnostic consensus among all four experienced radiologists were our detection targets. Considering each slice as an individual sample, 2844 nodules were included in our database. The graph cuts segmentation was conducted in a two-dimensional manner, and 2733 lung nodule ROIs were successfully identified and segmented. With false positive reduction by a seven-layer convolutional neural network, 2535 nodules remained detected while the false positive rate dropped to 31.6%. The average F-measure of segmented lung nodule tissue is 0.8501.
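The false-positive-reduction stage can be pictured with the small Keras sketch below, which trains a binary CNN on candidate ROI patches; the 64x64 patch size, layer widths, and toy data are assumptions for illustration and do not reproduce the seven-layer network or the graph-cuts stage from the study.

```python
import numpy as np
from tensorflow.keras import layers, models

# Minimal false-positive-reduction CNN for candidate ROIs (binary: nodule vs.
# non-nodule). Patch size and layer widths are illustrative only.
model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Toy stand-in data: candidate ROIs produced by a segmentation stage would be fed here.
x = np.random.rand(32, 64, 64, 1).astype("float32")
y = np.random.randint(0, 2, size=(32, 1))
model.fit(x, y, epochs=1, batch_size=8, verbose=0)
print(model.predict(x[:4], verbose=0).ravel())
```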
Dictionary learning-based CT detection of pulmonary nodules
NASA Astrophysics Data System (ADS)
Wu, Panpan; Xia, Kewen; Zhang, Yanbo; Qian, Xiaohua; Wang, Ge; Yu, Hengyong
2016-10-01
Segmentation of lung features is one of the most important steps for computer-aided detection (CAD) of pulmonary nodules with computed tomography (CT). However, irregular shapes, complicated anatomical background and poor pulmonary nodule contrast make CAD a very challenging problem. Here, we propose a novel scheme for feature extraction and classification of pulmonary nodules through dictionary learning from training CT images, which does not require accurately segmented pulmonary nodules. Specifically, two classification-oriented dictionaries and one background dictionary are learnt to solve a two-category problem. In terms of the classification-oriented dictionaries, we calculate sparse coefficient matrices to extract intrinsic features for pulmonary nodule classification. The support vector machine (SVM) classifier is then designed to optimize the performance. Our proposed methodology is evaluated with the lung image database consortium and image database resource initiative (LIDC-IDRI) database, and the results demonstrate that the proposed strategy is promising.
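A rough scikit-learn sketch of the dictionary-learning-plus-SVM pipeline is shown below: one small dictionary is learned per category, every patch is sparsely coded against the concatenated dictionary, and an SVM is trained on the coefficients. The toy Gaussian patches, dictionary sizes, and the omission of the separate background dictionary are simplifying assumptions, not the authors' configuration.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, SparseCoder
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy stand-in patches for the two categories (nodule vs. non-nodule); real
# inputs would be CT sub-images, and the paper also learns a background dictionary.
X_nodule = rng.normal(1.0, 1.0, size=(200, 64))
X_other = rng.normal(-1.0, 1.0, size=(200, 64))

# Learn one small dictionary per category, then code every patch against the
# concatenated dictionary; the sparse coefficients are the features.
dictionaries = []
for X_cls in (X_nodule, X_other):
    learner = MiniBatchDictionaryLearning(n_components=16, random_state=0).fit(X_cls)
    dictionaries.append(learner.components_)
D = np.vstack(dictionaries)

coder = SparseCoder(dictionary=D, transform_algorithm="omp",
                    transform_n_nonzero_coefs=5)
X = np.vstack([X_nodule, X_other])
y = np.array([1] * len(X_nodule) + [0] * len(X_other))
codes = coder.transform(X)

clf = SVC(kernel="rbf").fit(codes, y)
print("training accuracy:", clf.score(codes, y))
```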
Ferreira Junior, José Raniery; Oliveira, Marcelo Costa; de Azevedo-Marques, Paulo Mazzoncini
2016-12-01
Lung cancer is the leading cause of cancer-related deaths in the world, and its main manifestation is pulmonary nodules. Detection and classification of pulmonary nodules are challenging tasks that must be done by qualified specialists, but image interpretation errors make those tasks difficult. In order to aid radiologists in those hard tasks, it is important to integrate computer-based tools with the lesion detection, pathology diagnosis, and image interpretation processes. However, computer-aided diagnosis research faces the problem of not having enough shared medical reference data for the development, testing, and evaluation of computational methods for diagnosis. In order to minimize this problem, this paper presents a public nonrelational, document-oriented, cloud-based database of pulmonary nodules characterized by 3D texture attributes, identified by experienced radiologists and classified according to nine different subjective characteristics by the same specialists. Our goal with the development of this database is to improve computer-aided lung cancer diagnosis and pulmonary nodule detection and classification research through the deployment of this database in a cloud Database as a Service framework. Pulmonary nodule data were provided by the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI), image descriptors were acquired by volumetric texture analysis, and the database schema was developed using a document-oriented Not only Structured Query Language (NoSQL) approach. The proposed database currently holds 379 exams, 838 nodules, and 8237 images, of which 4029 are CT scans and 4208 are manually segmented nodules, and it is allocated in a MongoDB instance on a cloud infrastructure.
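To make the document-oriented schema concrete, here is a minimal pymongo sketch that stores one nodule document with 3D texture attributes and the nine radiologist-assigned characteristics and then queries it; the connection string, field names, and values are illustrative assumptions (a local MongoDB instance is assumed), not the deployed cloud schema.

```python
from pymongo import MongoClient

# Connection string and field names are illustrative; the deployed database
# described above runs as a cloud Database-as-a-Service instance.
client = MongoClient("mongodb://localhost:27017")
nodules = client["lidc_idri_demo"]["nodules"]

document = {
    "exam_id": "LIDC-0001",
    "nodule_id": 1,
    "texture_3d": {"energy": 0.12, "entropy": 4.7, "contrast": 31.5},
    "radiologist_characteristics": {
        "subtlety": 4, "internal_structure": 1, "calcification": 6,
        "sphericity": 3, "margin": 4, "lobulation": 2, "spiculation": 1,
        "texture": 5, "malignancy": 3,
    },
    "segmented_slices": 12,
}

nodules.insert_one(document)
# Document-oriented query: all nodules rated as highly spiculated.
for doc in nodules.find({"radiologist_characteristics.spiculation": {"$gte": 4}}):
    print(doc["exam_id"], doc["nodule_id"])
```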
Atomic and Molecular Databases, VAMDC (Virtual Atomic and Molecular Data Centre)
NASA Astrophysics Data System (ADS)
Dubernet, Marie-Lise; Zwölf, Carlo Maria; Moreau, Nicolas; Awa Ba, Yaya; VAMDC Consortium
2015-08-01
The "Virtual Atomic and Molecular Data Centre Consortium",(VAMDC Consortium, http://www.vamdc.eu) is a Consortium bound by an Memorandum of Understanding aiming at ensuring the sustainability of the VAMDC e-infrastructure. The current VAMDC e-infrastructure inter-connects about 30 atomic and molecular databases with the number of connected databases increasing every year: some databases are well-known databases such as CDMS, JPL, HITRAN, VALD,.., other databases have been created since the start of VAMDC. About 90% of our databases are used for astrophysical applications. The data can be queried, retrieved, visualized in a single format from a general portal (http://portal.vamdc.eu) and VAMDC is also developing standalone tools in order to retrieve and handle the data. VAMDC provides software and support in order to include databases within the VAMDC e-infrastructure. One current feature of VAMDC is the constrained environnement of description of data that ensures a higher quality for distribution of data; a future feature is the link of VAMDC with evaluation/validation groups. The talk will present the VAMDC Consortium and the VAMDC e infrastructure with its underlying technology, its services, its science use cases and its etension towards other communities than the academic research community.
Soft computing approach to 3D lung nodule segmentation in CT.
Badura, P; Pietka, E
2014-10-01
This paper presents a novel, multilevel approach to the segmentation of various types of pulmonary nodules in computed tomography studies. It is based on two branches of computational intelligence: fuzzy connectedness (FC) and evolutionary computation. First, the image and auxiliary data are prepared for the 3D FC analysis during the first stage of the algorithm, mask generation. Its main goal is to process some specific types of nodules connected to the pleura or vessels. It consists of basic image processing operations as well as dedicated routines for the specific cases of nodules. The evolutionary computation is performed on the image and seed points in order to shorten the FC analysis and improve its accuracy. After the FC application, the remaining vessels are removed during the postprocessing stage. The method has been validated using the first dataset of studies acquired and described by the Lung Image Database Consortium (LIDC) and by its latest release, the LIDC-IDRI (Image Database Resource Initiative) database. Copyright © 2014 Elsevier Ltd. All rights reserved.
Li, Wei; Cao, Peng; Zhao, Dazhe; Wang, Junbo
2016-01-01
Computer-aided detection (CAD) systems can assist radiologists by offering a second opinion on the early diagnosis of lung cancer. Classification and feature representation play critical roles in false-positive reduction (FPR) in lung nodule CAD. We design a deep convolutional neural network method for nodule classification, which has the advantages of automatically learned representations and strong generalization ability. A network structure specified for nodule images is proposed to solve the recognition of three types of nodules, that is, solid, semisolid, and ground glass opacity (GGO). The deep convolutional neural networks are trained on 62,492 region-of-interest (ROI) samples, including 40,772 nodules and 21,720 nonnodules from the Lung Image Database Consortium (LIDC) database. Experimental results demonstrate the effectiveness of the proposed method in terms of sensitivity and overall accuracy and that it consistently outperforms the competing methods.
Classification of pulmonary nodules in lung CT images using shape and texture features
NASA Astrophysics Data System (ADS)
Dhara, Ashis Kumar; Mukhopadhyay, Sudipta; Dutta, Anirvan; Garg, Mandeep; Khandelwal, Niranjan; Kumar, Prafulla
2016-03-01
Differentiation of malignant and benign pulmonary nodules is important for the prognosis of lung cancer. In this paper, benign and malignant nodules are classified using a support vector machine. Several shape-based and texture-based features are used to represent the pulmonary nodules in the feature space. A semi-automated technique is used for nodule segmentation. Relevant features are selected for efficient representation of nodules in the feature space. The proposed scheme and the competing technique are evaluated on a data set of 542 nodules of the Lung Image Database Consortium and Image Database Resource Initiative. The nodules with a composite rank of malignancy of "1" or "2" are considered benign and those with "4" or "5" are considered malignant. The area under the receiver operating characteristic curve is 0.9465 for the proposed method. The proposed method outperforms the competing technique.
Moran, Jean M; Feng, Mary; Benedetti, Lisa A; Marsh, Robin; Griffith, Kent A; Matuszak, Martha M; Hess, Michael; McMullen, Matthew; Fisher, Jennifer H; Nurushev, Teamour; Grubb, Margaret; Gardner, Stephen; Nielsen, Daniel; Jagsi, Reshma; Hayman, James A; Pierce, Lori J
A database in which patient data are compiled allows analytic opportunities for continuous improvements in treatment quality and comparative effectiveness research. We describe the development of a novel, web-based system that supports the collection of complex radiation treatment planning information from centers that use diverse techniques, software, and hardware for radiation oncology care in a statewide quality collaborative, the Michigan Radiation Oncology Quality Consortium (MROQC). The MROQC database seeks to enable assessment of physician- and patient-reported outcomes and quality improvement as a function of treatment planning and delivery techniques for breast and lung cancer patients. We created tools to collect anonymized data based on all plans. The MROQC system representing 24 institutions has been successfully deployed in the state of Michigan. Since 2012, dose-volume histogram and Digital Imaging and Communications in Medicine-radiation therapy plan data and information on simulation, planning, and delivery techniques have been collected. Audits indicated >90% accurate data submission and spurred refinements to data collection methodology. This model web-based system captures detailed, high-quality radiation therapy dosimetry data along with patient- and physician-reported outcomes and clinical data for a radiation therapy collaborative quality initiative. The collaborative nature of the project has been integral to its success. Our methodology can be applied to setting up analogous consortiums and databases. Copyright © 2016 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
Sleep atlas and multimedia database.
Penzel, T; Kesper, K; Mayer, G; Zulley, J; Peter, J H
2000-01-01
The ENN sleep atlas and database was set up on a dedicated server connected to the internet, providing services such as WWW, FTP, and telnet access. The database serves as a platform to promote the goals of the European Neurological Network, to exchange patient cases for second opinions between experts, and to create a case-oriented multimedia sleep atlas with descriptive text, images, and video clips of all known sleep disorders. The sleep atlas consists of a small public part and a large private part for members of the consortium. Twenty patient cases were collected and presented with educational information similar to published case reports. Case reports are complemented with images, video clips, and biosignal recordings. A Java-based viewer for biosignals provided in EDF format was installed in order to move freely within the sleep recordings without the need to download the full recording to the client.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowers, M; Robertson, S; Moore, J
Purpose: Advancement in Radiation Oncology (RO) practice develops through evidence-based medicine and clinical trials. Knowledge usable for treatment planning, decision support, and research is contained in our clinical data, stored in an Oncospace database. This data store and the tools for populating and analyzing it are compatible with standard RO practice and are shared with collaborating institutions. The question is: what protocol for system development and data sharing within an Oncospace Consortium? We focus our example on the technology and data meaning necessary to share across the Consortium. Methods: Oncospace consists of a database schema, planning and outcome data import, and web-based analysis tools. 1) Database: The Consortium implements a federated data store; each member collects and maintains its own data within an Oncospace schema. For privacy, PHI is contained within a single table, accessible to the database owner. 2) Import: Spatial dose data from treatment plans (Pinnacle or DICOM) is imported via Oncolink. Treatment outcomes are imported from an OIS (MOSAIQ). 3) Analysis: JHU has built a number of web pages to answer analysis questions. Oncospace data can also be analyzed via MATLAB or SAS queries. These materials are available to Consortium members, who contribute enhancements and improvements. Results: 1) The Oncospace Consortium now consists of RO centers at JHU, UVA, UW, and the University of Toronto. These members have successfully installed and populated Oncospace databases with over 1000 patients collectively. 2) Members contribute code and get updates via an SVN repository. Errors are reported and tracked via Redmine. Teleconferences include strategizing design and code reviews. 3) Federated databases were successfully queried remotely to combine multiple institutions' DVH data for dose-toxicity analysis (data combined from JHU and UW Oncospace). Conclusion: RO data sharing can be, and has been, effected according to the Oncospace Consortium model: http://oncospace.radonc.jhmi.edu/ . John Wong - SRA from Elekta; Todd McNutt - SRA from Elekta; Michael Bowers - funded by Elekta.
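The federated-query idea in the Results can be sketched as below, using in-memory SQLite databases as stand-ins for two institutions' Oncospace stores and pooling rows from a hypothetical DVH table; the table and column names are invented for illustration and are not the Oncospace schema.

```python
import sqlite3

# Stand-ins for two institutions' Oncospace databases; 'dvh' and its columns
# are hypothetical, not the actual Oncospace schema.
def make_demo_db(rows):
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE dvh "
                 "(patient_id TEXT, structure TEXT, dose_gy REAL, volume_pct REAL)")
    conn.executemany("INSERT INTO dvh VALUES (?, ?, ?, ?)", rows)
    return conn

site_a = make_demo_db([("A-001", "esophagus", 20.0, 35.0),
                       ("A-002", "esophagus", 20.0, 28.5)])
site_b = make_demo_db([("B-014", "esophagus", 20.0, 42.0)])

# Federated query: run the same SQL at each site and pool the anonymized rows.
pooled = []
for conn in (site_a, site_b):
    pooled += conn.execute(
        "SELECT patient_id, volume_pct FROM dvh "
        "WHERE structure = 'esophagus' AND dose_gy = 20.0").fetchall()

print("pooled esophagus V20 values:", pooled)
```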
de Sousa Costa, Robherson Wector; da Silva, Giovanni Lucca França; de Carvalho Filho, Antonio Oseas; Silva, Aristófanes Corrêa; de Paiva, Anselmo Cardoso; Gattass, Marcelo
2018-05-23
Lung cancer is the leading cause of cancer death among patients around the world, and it also has one of the lowest survival rates after diagnosis. Therefore, this study proposes a methodology for the diagnosis of lung nodules as benign or malignant tumors based on image processing and pattern recognition techniques. Mean phylogenetic distance (MPD) and the taxonomic diversity index (Δ) were used as texture descriptors. Finally, a genetic algorithm in conjunction with a support vector machine was applied to select the best training model. The proposed methodology was tested on computed tomography (CT) images from the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI), with a best sensitivity of 93.42%, specificity of 91.21%, accuracy of 91.81%, and area under the ROC curve of 0.94. The results demonstrate the promising performance of texture extraction techniques using mean phylogenetic distance and the taxonomic diversity index combined with phylogenetic trees. Graphical Abstract: Stages of the proposed methodology.
Remote sensing and GIS technology in the Global Land Ice Measurements from Space (GLIMS) Project
Raup, B.; Kääb, Andreas; Kargel, J.S.; Bishop, M.P.; Hamilton, G.; Lee, E.; Paul, F.; Rau, F.; Soltesz, D.; Khalsa, S.J.S.; Beedle, M.; Helm, C.
2007-01-01
Global Land Ice Measurements from Space (GLIMS) is an international consortium established to acquire satellite images of the world's glaciers, analyze them for glacier extent and changes, and to assess these change data in terms of forcings. The consortium is organized into a system of Regional Centers, each of which is responsible for glaciers in their region of expertise. Specialized needs for mapping glaciers in a distributed analysis environment require considerable work developing software tools: terrain classification emphasizing snow, ice, water, and admixtures of ice with rock debris; change detection and analysis; visualization of images and derived data; interpretation and archival of derived data; and analysis to ensure consistency of results from different Regional Centers. A global glacier database has been designed and implemented at the National Snow and Ice Data Center (Boulder, CO); parameters have been expanded from those of the World Glacier Inventory (WGI), and the database has been structured to be compatible with (and to incorporate) WGI data. The project as a whole was originated, and has been coordinated by, the US Geological Survey (Flagstaff, AZ), which has also led the development of an interactive tool for automated analysis and manual editing of glacier images and derived data (GLIMSView). This article addresses remote sensing and Geographic Information Science techniques developed within the framework of GLIMS in order to fulfill the goals of this distributed project. Sample applications illustrating the developed techniques are also shown. © 2006 Elsevier Ltd. All rights reserved.
Heng, Daniel Y C; Xie, Wanling; Regan, Meredith M; Harshman, Lauren C; Bjarnason, Georg A; Vaishampayan, Ulka N; Mackenzie, Mary; Wood, Lori; Donskov, Frede; Tan, Min-Han; Rha, Sun-Young; Agarwal, Neeraj; Kollmannsberger, Christian; Rini, Brian I; Choueiri, Toni K
2014-01-01
Background: The International Metastatic Renal-Cell Carcinoma Database Consortium model offers prognostic information for patients with metastatic renal-cell carcinoma. We tested the accuracy of the model in an external population and compared it with other prognostic models. Methods: We included patients with metastatic renal-cell carcinoma who were treated with first-line VEGF-targeted treatment at 13 international cancer centres and who were registered in the Consortium's database but had not contributed to the initial development of the Consortium Database model. The primary endpoint was overall survival. We compared the Database Consortium model with the Cleveland Clinic Foundation (CCF) model, the International Kidney Cancer Working Group (IKCWG) model, the French model, and the Memorial Sloan-Kettering Cancer Center (MSKCC) model by concordance indices and other measures of model fit. Findings: Overall, 1028 patients were included in this study, of whom 849 had complete data to assess the Database Consortium model. Median overall survival was 18·8 months (95% CI 17·6–21·4). The predefined Database Consortium risk factors (anaemia, thrombocytosis, neutrophilia, hypercalcaemia, Karnofsky performance status <80%, and <1 year from diagnosis to treatment) were independent predictors of poor overall survival in the external validation set (hazard ratios ranged between 1·27 and 2·08, concordance index 0·71, 95% CI 0·68–0·73). When patients were segregated into three risk categories, median overall survival was 43·2 months (95% CI 31·4–50·1) in the favourable risk group (no risk factors; 157 patients), 22·5 months (18·7–25·1) in the intermediate risk group (one to two risk factors; 440 patients), and 7·8 months (6·5–9·7) in the poor risk group (three or more risk factors; 252 patients; p<0·0001; concordance index 0·664, 95% CI 0·639–0·689). 672 patients had complete data to test all five models. The concordance index of the CCF model was 0·662 (95% CI 0·636–0·687), of the French model 0·640 (0·614–0·665), of the IKCWG model 0·668 (0·645–0·692), and of the MSKCC model 0·657 (0·632–0·682). The reported versus predicted number of deaths at 2 years was most similar in the Database Consortium model compared with the other models. Interpretation: The Database Consortium model is now externally validated and can be applied to stratify patients by risk in clinical trials and to counsel patients about prognosis. PMID:23312463
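The risk stratification reported in the Findings reduces to counting the six predefined risk factors, as in the small sketch below (0 factors: favourable, 1-2: intermediate, 3 or more: poor); the laboratory cut-offs are passed in as booleans because the abstract defines them relative to limits of normal rather than as fixed values.

```python
# The six Database Consortium risk factors listed above; a patient's risk group
# is defined by how many factors are present.
def imdc_risk_group(anaemia, thrombocytosis, neutrophilia, hypercalcaemia,
                    kps_below_80, under_1yr_from_diagnosis_to_treatment):
    n = sum([anaemia, thrombocytosis, neutrophilia, hypercalcaemia,
             kps_below_80, under_1yr_from_diagnosis_to_treatment])
    if n == 0:
        return "favourable"
    return "intermediate" if n <= 2 else "poor"

print(imdc_risk_group(anaemia=True, thrombocytosis=False, neutrophilia=False,
                      hypercalcaemia=False, kps_below_80=True,
                      under_1yr_from_diagnosis_to_treatment=False))  # -> intermediate
```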
Hancock, Matthew C.; Magnan, Jerry F.
2016-01-01
Abstract. In the assessment of nodules in CT scans of the lungs, a number of image-derived features are diagnostically relevant. Currently, many of these features are defined only qualitatively, so they are difficult to quantify from first principles. Nevertheless, these features (through their qualitative definitions and interpretations thereof) are often quantified via a variety of mathematical methods for the purpose of computer-aided diagnosis (CAD). To determine the potential usefulness of quantified diagnostic image features as inputs to a CAD system, we investigate the predictive capability of statistical learning methods for classifying nodule malignancy. We utilize the Lung Image Database Consortium dataset and only employ the radiologist-assigned diagnostic feature values for the lung nodules therein, as well as our derived estimates of the diameter and volume of the nodules from the radiologists’ annotations. We calculate theoretical upper bounds on the classification accuracy that are achievable by an ideal classifier that only uses the radiologist-assigned feature values, and we obtain an accuracy of 85.74 (±1.14)%, which is, on average, 4.43% below the theoretical maximum of 90.17%. The corresponding area-under-the-curve (AUC) score is 0.932 (±0.012), which increases to 0.949 (±0.007) when diameter and volume features are included and has an accuracy of 88.08 (±1.11)%. Our results are comparable to those in the literature that use algorithmically derived image-based features, which supports our hypothesis that lung nodules can be classified as malignant or benign using only quantified, diagnostic image features, and indicates the competitiveness of this approach. We also analyze how the classification accuracy depends on specific features and feature subsets, and we rank the features according to their predictive power, statistically demonstrating the top four to be spiculation, lobulation, subtlety, and calcification. PMID:27990453
Hancock, Matthew C; Magnan, Jerry F
2016-10-01
In the assessment of nodules in CT scans of the lungs, a number of image-derived features are diagnostically relevant. Currently, many of these features are defined only qualitatively, so they are difficult to quantify from first principles. Nevertheless, these features (through their qualitative definitions and interpretations thereof) are often quantified via a variety of mathematical methods for the purpose of computer-aided diagnosis (CAD). To determine the potential usefulness of quantified diagnostic image features as inputs to a CAD system, we investigate the predictive capability of statistical learning methods for classifying nodule malignancy. We utilize the Lung Image Database Consortium dataset and only employ the radiologist-assigned diagnostic feature values for the lung nodules therein, as well as our derived estimates of the diameter and volume of the nodules from the radiologists' annotations. We calculate theoretical upper bounds on the classification accuracy that are achievable by an ideal classifier that only uses the radiologist-assigned feature values, and we obtain an accuracy of 85.74 (±1.14)%, which is, on average, 4.43% below the theoretical maximum of 90.17%. The corresponding area-under-the-curve (AUC) score is 0.932 (±0.012), which increases to 0.949 (±0.007) when diameter and volume features are included and has an accuracy of 88.08 (±1.11)%. Our results are comparable to those in the literature that use algorithmically derived image-based features, which supports our hypothesis that lung nodules can be classified as malignant or benign using only quantified, diagnostic image features, and indicates the competitiveness of this approach. We also analyze how the classification accuracy depends on specific features and feature subsets, and we rank the features according to their predictive power, statistically demonstrating the top four to be spiculation, lobulation, subtlety, and calcification.
Ahmadi, Farshid Farnood; Ebadi, Hamid
2009-01-01
3D spatial data acquired from aerial and remote sensing images by photogrammetric techniques is one of the most accurate and economic data sources for GIS, map production, and spatial data updating. However, there are still many problems concerning the storage, structuring, and appropriate management of spatial data obtained using these techniques. Given the capabilities of spatial database management systems (SDBMSs), direct integration of photogrammetric systems and spatial database management systems can save the time and cost of producing and updating digital maps. This integration is accomplished by replacing digital maps with a single spatial database. Applying spatial databases overcomes the problem of managing spatial and attribute data in a coupled approach. This management approach is one of the main problems in GISs for using map products of photogrammetric workstations. Also, by means of these integrated systems, providing structured spatial data, based on OGC (Open GIS Consortium) standards and topological relations between different feature classes, is possible at the time of the feature digitizing process. In this paper, the integration of photogrammetric systems and SDBMSs is evaluated. Then, different levels of integration are described. Finally, the design, implementation, and testing of a software package called Integrated Photogrammetric and Oracle Spatial Systems (IPOSS) is presented.
NASA Astrophysics Data System (ADS)
Pathak, S. K.; Deshpande, N. J.
2007-10-01
The present scenario of the INDEST Consortium among engineering, science and technology (including astronomy and astrophysics) libraries in India is discussed. The Indian National Digital Library in Engineering Sciences & Technology (INDEST) Consortium is a major initiative of the Ministry of Human Resource Development, Government of India. The INDEST Consortium provides access to 16 full text e-resources and 7 bibliographic databases for 166 institutions as members who are taking advantage of cost effective access to premier resources in engineering, science and technology, including astronomy and astrophysics. Member institutions can access over 6500 e-journals from 1092 publishers. Out of these, over 150 e-journals are exclusively for the astronomy and physics community. The current study also presents a comparative analysis of the key features of nine major services, viz. ACM Digital Library, ASCE Journals, ASME Journals, EBSCO Databases (Business Source Premier), Elsevier's Science Direct, Emerald Full Text, IEEE/IEE Electronic Library Online (IEL), ProQuest ABI/INFORM and Springer Verlag's Link. In this paper, the limitations of this consortium are also discussed.
[Activity of NTDs Drug-discovery Research Consortium].
Namatame, Ichiji
2016-01-01
Neglected tropical diseases (NTDs) are an extremely important issue facing global health care. To improve "access to health" where people are unable to access adequate medical care due to poverty and weak healthcare systems, we have established two consortia: the NTD drug discovery research consortium and the pediatric praziquantel consortium. The NTD drug discovery research consortium, which involves six institutions from industry, government, and academia, as well as an international non-profit organization, is committed to developing anti-protozoan active compounds for three NTDs (leishmaniasis, Chagas disease, and African sleeping sickness). Each participating institute contributes efforts to accomplish the following: selection of drug targets based on information technology, and drug discovery by three different approaches (in silico drug discovery, "fragment evolution", which is a unique drug design method of Astellas Pharma, and phenotypic screening with Astellas' compound library). The consortium has established a brand new database (Integrated Neglected Tropical Disease Database; iNTRODB) and has selected target proteins for the in silico and fragment evolution drug discovery approaches. Thus far, we have identified a number of promising compounds that inhibit the target proteins, and we are currently trying to improve the anti-protozoan activity of these compounds. The pediatric praziquantel consortium was founded in July 2012 to develop and register a new pediatric praziquantel formulation for the treatment of schistosomiasis. Astellas Pharma has been a core member of this consortium since its establishment and has provided expertise and technology in the areas of pediatric formulation development and clinical development.
The Chicago Thoracic Oncology Database Consortium: A Multisite Database Initiative
Carey, George B; Tan, Yi-Hung Carol; Bokhary, Ujala; Itkonen, Michelle; Szeto, Kyle; Wallace, James; Campbell, Nicholas; Hensing, Thomas; Salgia, Ravi
2016-01-01
Objective: An increasing amount of clinical data is available to biomedical researchers, but specifically designed database and informatics infrastructures are needed to handle this data effectively. Multiple research groups should be able to pool and share this data in an efficient manner. The Chicago Thoracic Oncology Database Consortium (CTODC) was created to standardize data collection and facilitate the pooling and sharing of data at institutions throughout Chicago and across the world. We assessed the CTODC by conducting a proof of principle investigation on lung cancer patients who took erlotinib. This study does not look into epidermal growth factor receptor (EGFR) mutations and tyrosine kinase inhibitors, but rather it discusses the development and utilization of the database involved. Methods: We have implemented the Thoracic Oncology Program Database Project (TOPDP) Microsoft Access, the Thoracic Oncology Research Program (TORP) Velos, and the TORP REDCap databases for translational research efforts. Standard operating procedures (SOPs) were created to document the construction and proper utilization of these databases. These SOPs have been made available freely to other institutions that have implemented their own databases patterned on these SOPs. Results: A cohort of 373 lung cancer patients who took erlotinib was identified. The EGFR mutation statuses of patients were analyzed. Out of the 70 patients that were tested, 55 had mutations while 15 did not. In terms of overall survival and duration of treatment, the cohort demonstrated that EGFR-mutated patients had a longer duration of erlotinib treatment and longer overall survival compared to their EGFR wild-type counterparts who received erlotinib. Discussion: The investigation successfully yielded data from all institutions of the CTODC. While the investigation identified challenges, such as the difficulty of data transfer and potential duplication of patient data, these issues can be resolved with greater cross-communication between institutions of the consortium. Conclusion: The investigation described herein demonstrates the successful data collection from multiple institutions in the context of a collaborative effort. The data presented here can be utilized as the basis for further collaborative efforts and/or development of larger and more streamlined databases within the consortium. PMID:27092293
The Chicago Thoracic Oncology Database Consortium: A Multisite Database Initiative.
Won, Brian; Carey, George B; Tan, Yi-Hung Carol; Bokhary, Ujala; Itkonen, Michelle; Szeto, Kyle; Wallace, James; Campbell, Nicholas; Hensing, Thomas; Salgia, Ravi
2016-03-16
An increasing amount of clinical data is available to biomedical researchers, but specifically designed database and informatics infrastructures are needed to handle this data effectively. Multiple research groups should be able to pool and share this data in an efficient manner. The Chicago Thoracic Oncology Database Consortium (CTODC) was created to standardize data collection and facilitate the pooling and sharing of data at institutions throughout Chicago and across the world. We assessed the CTODC by conducting a proof of principle investigation on lung cancer patients who took erlotinib. This study does not look into epidermal growth factor receptor (EGFR) mutations and tyrosine kinase inhibitors, but rather it discusses the development and utilization of the database involved. We have implemented the Thoracic Oncology Program Database Project (TOPDP) Microsoft Access, the Thoracic Oncology Research Program (TORP) Velos, and the TORP REDCap databases for translational research efforts. Standard operating procedures (SOPs) were created to document the construction and proper utilization of these databases. These SOPs have been made available freely to other institutions that have implemented their own databases patterned on these SOPs. A cohort of 373 lung cancer patients who took erlotinib was identified. The EGFR mutation statuses of patients were analyzed. Out of the 70 patients that were tested, 55 had mutations while 15 did not. In terms of overall survival and duration of treatment, the cohort demonstrated that EGFR-mutated patients had a longer duration of erlotinib treatment and longer overall survival compared to their EGFR wild-type counterparts who received erlotinib. The investigation successfully yielded data from all institutions of the CTODC. While the investigation identified challenges, such as the difficulty of data transfer and potential duplication of patient data, these issues can be resolved with greater cross-communication between institutions of the consortium. The investigation described herein demonstrates the successful data collection from multiple institutions in the context of a collaborative effort. The data presented here can be utilized as the basis for further collaborative efforts and/or development of larger and more streamlined databases within the consortium.
Castro, Alfonso; Boveda, Carmen; Arcay, Bernardino; Sanjurjo, Pedro
2016-01-01
The detection of pulmonary nodules is one of the most studied problems in the field of medical image analysis due to the great difficulty in the early detection of such nodules and their social impact. The traditional approach involves the development of a multistage CAD system capable of informing the radiologist of the presence or absence of nodules. One stage in such systems is the detection of ROIs (regions of interest) that may be nodules in order to reduce the space of the problem. This paper evaluates fuzzy clustering algorithms that employ different classification strategies to achieve this goal. After characterising these algorithms, the authors propose a new algorithm and different variations to improve the results obtained initially. Finally, it is shown that the most recent developments in fuzzy clustering are able to detect regions that may be nodules in CT studies. The algorithms were evaluated using helical thoracic CT scans obtained from the database of the LIDC (Lung Image Database Consortium). PMID: 27517049
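As a rough illustration of the ROI-detection idea, the numpy sketch below runs a minimal fuzzy c-means on a 1-D toy distribution of CT-like intensities and keeps voxels with high membership in the brighter cluster; it is not the specific algorithm or the variations proposed by the authors, and the intensity values and threshold are invented for the example.

```python
import numpy as np

def fuzzy_cmeans(x, n_clusters=2, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means on 1-D samples (e.g., voxel intensities).
    Returns cluster centres and the membership matrix (n_clusters x n_samples)."""
    rng = np.random.default_rng(seed)
    u = rng.random((n_clusters, x.size))
    u /= u.sum(axis=0)                         # memberships sum to 1 per sample
    for _ in range(n_iter):
        um = u ** m
        centres = um @ x / um.sum(axis=1)
        dist = np.abs(x[None, :] - centres[:, None]) + 1e-12
        u = 1.0 / (dist ** (2 / (m - 1)))
        u /= u.sum(axis=0)
    return centres, u

# Toy CT-like intensities: dark lung parenchyma vs. brighter candidate tissue.
x = np.concatenate([np.random.normal(-800, 30, 500),   # air/parenchyma (HU)
                    np.random.normal(-50, 40, 80)])     # soft-tissue candidates
centres, u = fuzzy_cmeans(x, n_clusters=2)
candidate_mask = u[np.argmax(centres)] > 0.5            # high membership in bright cluster
print("centres (HU):", np.round(centres, 1), "| candidate voxels:", int(candidate_mask.sum()))
```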
Wain, Karen E; Riggs, Erin; Hanson, Karen; Savage, Melissa; Riethmaier, Darlene; Muirhead, Andrea; Mitchell, Elyse; Packard, Bethanny Smith; Faucett, W Andrew
2012-10-01
The International Standards for Cytogenomic Arrays (ISCA) Consortium is a worldwide collaborative effort dedicated to optimizing patient care by improving the quality of chromosomal microarray testing. The primary effort of the ISCA Consortium has been the development of a database of copy number variants (CNVs) identified during the course of clinical microarray testing. This database is a powerful resource for clinicians, laboratories, and researchers, and can be utilized for a variety of applications, such as facilitating standardized interpretations of certain CNVs across laboratories or providing phenotypic information for counseling purposes when published data is sparse. A recognized limitation to the clinical utility of this database, however, is the quality of clinical information available for each patient. Clinical genetic counselors are uniquely suited to facilitate the communication of this information to the laboratory by virtue of their existing clinical responsibilities, case management skills, and appreciation of the evolving nature of scientific knowledge. We intend to highlight the critical role that genetic counselors play in ensuring optimal patient care through contributing to the clinical utility of the ISCA Consortium's database, as well as the quality of individual patient microarray reports provided by contributing laboratories. Current tools, paper and electronic forms, created to maximize this collaboration are shared. In addition to making a professional commitment to providing complete clinical information, genetic counselors are invited to become ISCA members and to become involved in the discussions and initiatives within the Consortium.
ERIC Educational Resources Information Center
Painter, Derrick
1996-01-01
Discussion of dictionaries as databases focuses on the digitizing of The Oxford English dictionary (OED) and the use of Standard Generalized Mark-Up Language (SGML). Topics include the creation of a consortium to digitize the OED, document structure, relational databases, text forms, sequence, and discourse. (LRW)
The Lung Image Database Consortium (LIDC): ensuring the integrity of expert-defined "truth".
Armato, Samuel G; Roberts, Rachael Y; McNitt-Gray, Michael F; Meyer, Charles R; Reeves, Anthony P; McLennan, Geoffrey; Engelmann, Roger M; Bland, Peyton H; Aberle, Denise R; Kazerooni, Ella A; MacMahon, Heber; van Beek, Edwin J R; Yankelevitz, David; Croft, Barbara Y; Clarke, Laurence P
2007-12-01
Computer-aided diagnostic (CAD) systems fundamentally require the opinions of expert human observers to establish "truth" for algorithm development, training, and testing. The integrity of this "truth," however, must be established before investigators commit to this "gold standard" as the basis for their research. The purpose of this study was to develop a quality assurance (QA) model as an integral component of the "truth" collection process concerning the location and spatial extent of lung nodules observed on computed tomography (CT) scans to be included in the Lung Image Database Consortium (LIDC) public database. One hundred CT scans were interpreted by four radiologists through a two-phase process. For the first of these reads (the "blinded read phase"), radiologists independently identified and annotated lesions, assigning each to one of three categories: "nodule ≥3 mm," "nodule <3 mm," or "non-nodule ≥3 mm." For the second read (the "unblinded read phase"), the same radiologists independently evaluated the same CT scans, but with all of the annotations from the previously performed blinded reads presented; each radiologist could add to, edit, or delete their own marks; change the lesion category of their own marks; or leave their marks unchanged. The post-unblinded read set of marks was grouped into discrete nodules and subjected to the QA process, which consisted of identification of potential errors introduced during the complete image annotation process and correction of those errors. Seven categories of potential error were defined; any nodule with a mark that satisfied the criterion for one of these categories was referred to the radiologist who assigned that mark for either correction or confirmation that the mark was intentional. A total of 105 QA issues were identified across 45 (45.0%) of the 100 CT scans. Radiologist review resulted in modifications to 101 (96.2%) of these potential errors. Twenty-one lesions erroneously marked as lung nodules after the unblinded reads had this designation removed through the QA process. The establishment of "truth" must incorporate a QA process to guarantee the integrity of the datasets that will provide the basis for the development, training, and testing of CAD systems.
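The flavour of the QA pass can be sketched as below: marks grouped into discrete nodules are scanned for internal inconsistencies and queued for radiologist review. The abstract does not enumerate the seven error categories, so the single check shown (one reader assigning conflicting categories to the same nodule) and the toy data are purely illustrative.

```python
from collections import defaultdict

# Post-unblinded-read marks grouped into discrete nodules (toy data). Each mark
# records which radiologist placed it and the lesion category assigned.
marks = [
    {"nodule": "N1", "reader": "R1", "category": "nodule>=3mm"},
    {"nodule": "N1", "reader": "R2", "category": "nodule>=3mm"},
    {"nodule": "N2", "reader": "R3", "category": "nodule<3mm"},
    {"nodule": "N2", "reader": "R3", "category": "non-nodule>=3mm"},  # conflict
]

by_nodule_reader = defaultdict(set)
for m in marks:
    by_nodule_reader[(m["nodule"], m["reader"])].add(m["category"])

# Queue every (nodule, reader) pair with conflicting categories for review.
review_queue = [(nodule, reader, sorted(cats))
                for (nodule, reader), cats in by_nodule_reader.items()
                if len(cats) > 1]

for nodule, reader, cats in review_queue:
    print(f"QA issue: {reader} assigned conflicting categories {cats} to {nodule}")
```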
Adams, Hieab H H; Hilal, Saima; Schwingenschuh, Petra; Wittfeld, Katharina; van der Lee, Sven J; DeCarli, Charles; Vernooij, Meike W; Katschnig-Winter, Petra; Habes, Mohamad; Chen, Christopher; Seshadri, Sudha; van Duijn, Cornelia M; Ikram, M Kamran; Grabe, Hans J; Schmidt, Reinhold; Ikram, M Arfan
2015-12-01
Virchow-Robin spaces (VRS), or perivascular spaces, are compartments of interstitial fluid enclosing cerebral blood vessels and are potential imaging markers of various underlying brain pathologies. Despite a growing interest in the study of enlarged VRS, the heterogeneity in rating and quantification methods combined with small sample sizes have so far hampered advancement in the field. The Uniform Neuro-Imaging of Virchow-Robin Spaces Enlargement (UNIVRSE) consortium was established with primary aims to harmonize rating and analysis (www.uconsortium.org). The UNIVRSE consortium brings together 13 (sub)cohorts from five countries, totaling 16,000 subjects and over 25,000 scans. Eight different magnetic resonance imaging protocols were used in the consortium. VRS rating was harmonized using a validated protocol that was developed by the two founding members, with high reliability independent of scanner type, rater experience, or concomitant brain pathology. Initial analyses revealed risk factors for enlarged VRS including increased age, sex, high blood pressure, brain infarcts, and white matter lesions, but this varied by brain region. Early collaborative efforts between cohort studies with respect to data harmonization and joint analyses can advance the field of population (neuro)imaging. The UNIVRSE consortium will focus efforts on other potential correlates of enlarged VRS, including genetics, cognition, stroke, and dementia.
Martin, Tiphaine; Sherman, David J; Durrens, Pascal
2011-01-01
The Génolevures online database (URL: http://www.genolevures.org) stores and provides the data and results obtained by the Génolevures Consortium through several campaigns of genome annotation of the yeasts in the Saccharomycotina subphylum (hemiascomycetes). This database is dedicated to large-scale comparison of these genomes, storing not only the different chromosomal elements detected in the sequences, but also the logical relations between them. The database is divided into a public part, accessible to anyone through the Internet, and a private part where the Consortium members make genome annotations with our Magus annotation system; this system is used to annotate several related genomes in parallel. The public database is widely consulted and offers structured data, organized using a REST web site architecture that allows for automated requests. The implementation of the database, as well as its associated tools and methods, is evolving to cope with the influx of genome sequences produced by Next Generation Sequencing (NGS). Copyright © 2011 Académie des sciences. Published by Elsevier SAS. All rights reserved.
Computer aided lung cancer diagnosis with deep learning algorithms
NASA Astrophysics Data System (ADS)
Sun, Wenqing; Zheng, Bin; Qian, Wei
2016-03-01
Deep learning is considered a popular and powerful method in pattern recognition and classification. However, there are not many deep structured applications in the medical imaging diagnosis area, because large datasets are not always available for medical images. In this study we tested the feasibility of using deep learning algorithms for lung cancer diagnosis with cases from the Lung Image Database Consortium (LIDC) database. The nodules on each computed tomography (CT) slice were segmented according to marks provided by the radiologists. After down-sampling and rotating, we acquired 174,412 samples of 52 by 52 pixels each and the corresponding truth files. Three deep learning algorithms were designed and implemented: Convolutional Neural Network (CNN), Deep Belief Networks (DBNs), and Stacked Denoising Autoencoder (SDAE). To compare the performance of the deep learning algorithms with a traditional computer-aided diagnosis (CADx) system, we designed a scheme with 28 image features and a support vector machine. The accuracies of CNN, DBNs, and SDAE are 0.7976, 0.8119, and 0.7929, respectively; the accuracy of our designed traditional CADx is 0.7940, which is slightly lower than CNN and DBNs. We also noticed that the nodules mislabeled by DBNs are 4% larger than those mislabeled by the traditional CADx; this might result from the down-sampling process losing some size information about the nodules.
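As a rough illustration of the approach described above, the sketch below defines a small convolutional network for 52 by 52 CT patches in PyTorch. The layer sizes and the two-class output are illustrative assumptions, not the architecture used by the authors.

# Minimal sketch (not the authors' architecture): a small CNN that classifies
# 52x52 CT patches, in the spirit of the study above.
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),   # 52 -> 48 -> 24
            nn.Conv2d(16, 32, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),  # 24 -> 20 -> 10
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 10 * 10, 64), nn.ReLU(),
            nn.Linear(64, 2),   # two classes, e.g. nodule vs. non-nodule (assumption)
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = PatchCNN()
patches = torch.randn(8, 1, 52, 52)   # placeholder batch of 52x52 patches
logits = model(patches)               # shape (8, 2)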
The NCI Division of Cancer Prevention awarded eight grants to create the Consortium for Imaging and Biomarkers (CIB) on August 3, 2015. The eight lead investigators are combining imaging methods for the visualization of lesions with biomarkers to improve the accuracy of screening, early cancer detection, and the diagnosis of early-stage cancers.
NASA Astrophysics Data System (ADS)
Barache, C.; Bouquillon, S.; Carlucci, T.; Taris, F.; Michel, L.; Altmann, M.
2013-10-01
The Ground Based Optical Tracking (GBOT) group is part of the Data Processing and Analysis Consortium, the large consortium of over 400 scientists from many European countries charged by ESA with the scientific conduct of the Gaia mission. The GBOT group is in charge of the optical part of the tracking of the Gaia satellite. This optical tracking is necessary to allow the Gaia mission to fully reach its goal in terms of astrometric precision. These observations will be made daily, during the 5 years of the mission, using optical CCD frames taken by a small network of 1-2 m class telescopes located all over the world. The required accuracy of the satellite position determination, with respect to the stars in the field of view, is 20 mas. These optical satellite positions will be sent weekly by GBOT to the SOC at ESAC and used, together with other kinds of observations (radio ranging and Doppler), by the MOC at ESOC to improve the Gaia ephemeris. For this purpose, we developed a set of accurate astrometric reduction programs specially adapted to tracking moving objects. The inputs of these programs for each tracked target are an ephemeris and a set of FITS images. The outputs for each image are: a file containing all information about the detected objects, a catalogue file used for calibration, a TIFF file for visual explanation of the reduction result, and an improved FITS image header. The final result is an overview file containing only the data related to the target, extracted from all the images. These programs are written in GNU Fortran 95 and provide results in VOTable format (supported by Virtual Observatory protocols). All these results are sent automatically to the GBOT database, which is built with the SAADA freeware. The user of this database can archive and query the data but also, thanks to the delegate option provided by SAADA, select a set of images and directly run the GBOT reduction programs via a dedicated Web interface. For more information about SAADA (an Automatic System for Astronomy Data Archive, under GPL license and VO-compatible), see the related paper by Michel et al. (2013).
Cserhati, Matyas F.; Pandey, Sanjit; Beaudoin, James J.; Baccaglini, Lorena; Guda, Chittibabu; Fox, Howard S.
2015-01-01
We herein present the National NeuroAIDS Tissue Consortium-Data Coordinating Center (NNTC-DCC) database, which is the only available database for neuroAIDS studies that contains data in an integrated, standardized form. This database has been created in conjunction with the NNTC, which provides human tissue and biofluid samples to individual researchers to conduct studies focused on neuroAIDS. The database contains experimental datasets from 1206 subjects for the following categories (which are further broken down into subcategories): gene expression, genotype, proteins, endo-exo-chemicals, morphometrics and other (miscellaneous) data. The database also contains a wide variety of downloadable data and metadata for 95 HIV-related studies covering 170 assays from 61 principal investigators. The data represent 76 tissue types, 25 measurement types, and 38 technology types, and reaches a total of 33 017 407 data points. We used the ISA platform to create the database and develop a searchable web interface for querying the data. A gene search tool is also available, which searches for NCBI GEO datasets associated with selected genes. The database is manually curated with many user-friendly features, and is cross-linked to the NCBI, HUGO and PubMed databases. A free registration is required for qualified users to access the database. Database URL: http://nntc-dcc.unmc.edu PMID:26228431
ERIC Educational Resources Information Center
Berman, Paul; And Others
This first-year report of the National Effective Transfer Consortium (NETC) summarizes the progress made by the member colleges in creating standardized measures of actual and expected transfer rates and of transfer effectiveness, and establishing a database that would enable valid comparisons among NETC colleges. Following background information…
3D multi-view convolutional neural networks for lung nodule classification
Kang, Guixia; Hou, Beibei; Zhang, Ningbo
2017-01-01
The 3D convolutional neural network (CNN) is able to make full use of the spatial 3D context information of lung nodules, and the multi-view strategy has been shown to be useful for improving the performance of 2D CNN in classifying lung nodules. In this paper, we explore the classification of lung nodules using the 3D multi-view convolutional neural networks (MV-CNN) with both chain architecture and directed acyclic graph architecture, including 3D Inception and 3D Inception-ResNet. All networks employ the multi-view-one-network strategy. We conduct a binary classification (benign and malignant) and a ternary classification (benign, primary malignant and metastatic malignant) on Computed Tomography (CT) images from Lung Image Database Consortium and Image Database Resource Initiative database (LIDC-IDRI). All results are obtained via 10-fold cross validation. As regards the MV-CNN with chain architecture, results show that the performance of 3D MV-CNN surpasses that of 2D MV-CNN by a significant margin. Finally, a 3D Inception network achieved an error rate of 4.59% for the binary classification and 7.70% for the ternary classification, both of which represent superior results for the corresponding task. We compare the multi-view-one-network strategy with the one-view-one-network strategy. The results reveal that the multi-view-one-network strategy can achieve a lower error rate than the one-view-one-network strategy. PMID:29145492
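The following hedged sketch illustrates the multi-view-one-network idea in PyTorch: a single 3D convolutional trunk with shared weights processes several views of the same nodule, and the per-view features are averaged before classification. The layer sizes, view count, and pooling choice are assumptions for illustration only, not the architectures evaluated in the study.

# Hedged sketch of "multi-view-one-network": one shared 3D CNN trunk per view,
# features pooled across views before the classification head.
import torch
import torch.nn as nn

class MV3DCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool3d(1),
            nn.Flatten(),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, views):                     # views: (batch, n_views, 1, D, H, W)
        b, v = views.shape[:2]
        feats = self.trunk(views.flatten(0, 1))   # same weights applied to every view
        feats = feats.view(b, v, -1).mean(dim=1)  # average the per-view features
        return self.head(feats)

x = torch.randn(4, 3, 1, 32, 32, 32)              # 4 nodules, 3 views each (placeholder)
print(MV3DCNN(n_classes=3)(x).shape)              # ternary classification -> (4, 3)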
Cserhati, Matyas F; Pandey, Sanjit; Beaudoin, James J; Baccaglini, Lorena; Guda, Chittibabu; Fox, Howard S
2015-01-01
We herein present the National NeuroAIDS Tissue Consortium-Data Coordinating Center (NNTC-DCC) database, which is the only available database for neuroAIDS studies that contains data in an integrated, standardized form. This database has been created in conjunction with the NNTC, which provides human tissue and biofluid samples to individual researchers to conduct studies focused on neuroAIDS. The database contains experimental datasets from 1206 subjects for the following categories (which are further broken down into subcategories): gene expression, genotype, proteins, endo-exo-chemicals, morphometrics and other (miscellaneous) data. The database also contains a wide variety of downloadable data and metadata for 95 HIV-related studies covering 170 assays from 61 principal investigators. The data represent 76 tissue types, 25 measurement types, and 38 technology types, and reaches a total of 33,017,407 data points. We used the ISA platform to create the database and develop a searchable web interface for querying the data. A gene search tool is also available, which searches for NCBI GEO datasets associated with selected genes. The database is manually curated with many user-friendly features, and is cross-linked to the NCBI, HUGO and PubMed databases. A free registration is required for qualified users to access the database. © The Author(s) 2015. Published by Oxford University Press.
Broglio, Steven P; McCrea, Michael; McAllister, Thomas; Harezlak, Jaroslaw; Katz, Barry; Hack, Dallas; Hainline, Brian
2017-07-01
The natural history of mild traumatic brain injury (TBI) or concussion remains poorly defined and no objective biomarker of physiological recovery exists for clinical use. The National Collegiate Athletic Association (NCAA) and the US Department of Defense (DoD) established the Concussion Assessment, Research and Education (CARE) Consortium to study the natural history of clinical and neurobiological recovery after concussion in the service of improved injury prevention, safety and medical care for student-athletes and military personnel. The objectives of this paper were to (i) describe the background and driving rationale for the CARE Consortium; (ii) outline the infrastructure of the Consortium policies, procedures, and governance; (iii) describe the longitudinal 6-month clinical and neurobiological study methodology; and (iv) characterize special considerations in the design and implementation of a multicenter trial. Beginning Fall 2014, CARE Consortium institutions have recruited and enrolled 23,533 student-athletes and military service academy students (approximately 90% of eligible student-athletes and cadets; 64.6% male, 35.4% female). A total of 1174 concussions have been diagnosed in participating subjects, with both concussion and baseline cases deposited in the Federal Interagency Traumatic Brain Injury Research (FITBIR) database. Challenges have included coordinating regulatory issues across civilian and military institutions, operationalizing study procedures, neuroimaging protocol harmonization across sites and platforms, construction and maintenance of a relational database, and data quality and integrity monitoring. The NCAA-DoD CARE Consortium represents a comprehensive investigation of concussion in student-athletes and military service academy students. The richly characterized study sample and multidimensional approach provide an opportunity to advance the field of concussion science, not only among student athletes but in all populations at risk for mild TBI.
Completion of the National Land Cover Database (NLCD) 1992-2001 Land Cover Change Retrofit Product
The Multi-Resolution Land Characteristics Consortium has supported the development of two national digital land cover products: the National Land Cover Dataset (NLCD) 1992 and National Land Cover Database (NLCD) 2001. Substantial differences in imagery, legends, and methods betwe...
Wilson, Robert; McGuire, Christina; Mohun, Timothy
2016-01-01
The Deciphering the Mechanisms of Developmental Disorders (DMDD) consortium is a research programme set up to identify genes in the mouse, which if mutated (or knocked-out) result in embryonic lethality when homozygous, and initiate the study of why disruption of their function has such profound effects on embryo development and survival. The project uses a combination of comprehensive high resolution 3D imaging and tissue histology to identify abnormalities in embryo and placental structures of embryonic lethal lines. The image data we have collected and the phenotypes scored are freely available through the project website (http://dmdd.org.uk). In this article we describe the web interface to the images that allows the embryo data to be viewed at full resolution in different planes, discuss how to search the database for a phenotype, and our approach to organising the data for an embryo and a mutant line so it is easy to comprehend and intuitive to navigate. PMID:26519470
A hybrid CNN feature model for pulmonary nodule malignancy risk differentiation.
Wang, Huafeng; Zhao, Tingting; Li, Lihong Connie; Pan, Haixia; Liu, Wanquan; Gao, Haoqi; Han, Fangfang; Wang, Yuehai; Qi, Yifan; Liang, Zhengrong
2018-01-01
Malignancy risk differentiation of pulmonary nodules is one of the most challenging tasks in computer-aided diagnosis (CADx). Recently reported CADx methods or schemes based on texture and shape estimation have shown relatively satisfactory performance in differentiating the risk level of malignancy among nodules detected in lung cancer screening. However, existing CADx schemes tend to detect and analyze characteristics of pulmonary nodules from a statistical perspective according to local features only. Motivated by the learning ability of convolutional neural networks (CNNs), which emulate the human neural network for target recognition, and by our previous research on texture features, we present a hybrid model that takes into consideration both global and local features for pulmonary nodule differentiation using the largest public database, founded by the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI). By comparing three types of CNN models, two of which were newly proposed by us, we observed that the multi-channel CNN model yielded the best capacity for differentiating the malignancy risk of the nodules based on the projection of distributions of extracted features. Moreover, a CADx scheme using the new multi-channel CNN model outperformed our previously developed CADx scheme based on the 3D texture feature analysis method, increasing the computed area under the receiver operating characteristic curve (AUC) from 0.9441 to 0.9702.
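A minimal sketch of the evaluation metric quoted above (AUC improving from 0.9441 to 0.9702): the area under the ROC curve computed from per-nodule malignancy scores with scikit-learn. The labels and scores below are placeholders, not data from the study.

# Minimal sketch of the AUC comparison; y_true and the score lists are placeholders.
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1, 0, 1, 1, 0]                              # 1 = high malignancy risk
scores_texture = [0.2, 0.4, 0.6, 0.7, 0.3, 0.5, 0.9, 0.1]      # e.g., 3D texture CADx
scores_cnn     = [0.1, 0.3, 0.8, 0.9, 0.2, 0.7, 0.95, 0.15]    # e.g., multi-channel CNN

print("texture AUC:", roc_auc_score(y_true, scores_texture))
print("CNN AUC:    ", roc_auc_score(y_true, scores_cnn))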
Completion of the 2006 National Land Cover Database Update for the Conterminous United States
Under the organization of the Multi-Resolution Land Characteristics (MRLC) Consortium, the National Land Cover Database (NLCD) has been updated to characterize both land cover and land cover change from 2001 to 2006. An updated version of NLCD 2001 (Version 2.0) is also provided....
A novel computer-aided detection system for pulmonary nodule identification in CT images
NASA Astrophysics Data System (ADS)
Han, Hao; Li, Lihong; Wang, Huafeng; Zhang, Hao; Moore, William; Liang, Zhengrong
2014-03-01
Computer-aided detection (CADe) of pulmonary nodules from computed tomography (CT) scans is critical for assisting radiologists in identifying lung lesions at an early stage. In this paper, we propose a novel approach for CADe of lung nodules using a two-stage vector quantization (VQ) scheme. The first-stage VQ aims to extract the lung from the chest volume, while the second-stage VQ is designed to extract initial nodule candidates (INCs) within the lung volume. Rule-based expert filtering is then employed to prune obvious false positives (FPs) from the INCs, and the commonly used support vector machine (SVM) classifier is adopted to further reduce the FPs. The proposed system was validated on 100 CT scans randomly selected from the 262 scans that have at least one juxta-pleural nodule annotation in the publicly available Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) database. The two-stage VQ missed only 2 of the 207 nodules at agreement level 1, and INC detection took about 30 seconds per scan on average. Expert filtering reduced FPs by more than a factor of 18 while maintaining a sensitivity of 93.24%. Because INCs attached to the pleural wall are easily distinguished from those that are not, we investigated the feasibility of training different SVM classifiers to further reduce FPs from these two kinds of INCs. Experimental results indicated that SVM classification over the entire set of INCs was preferable, with the optimal operating point of our CADe system achieving a sensitivity of 89.4% at a specificity of 86.8%.
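The sketch below illustrates, in simplified form, what an intensity-based vector quantization stage can look like for the first step (lung extraction): k-means clustering of voxel intensities, keeping the low-attenuation cluster as a crude lung mask. It is a stand-in under stated assumptions, not the paper's two-stage VQ implementation.

# Simplified stand-in for the first-stage VQ: cluster voxel intensities with
# k-means and keep the low-attenuation (air/lung) cluster as a crude lung mask.
import numpy as np
from sklearn.cluster import KMeans

def crude_lung_mask(ct_volume_hu):
    """ct_volume_hu: 3D numpy array of Hounsfield units."""
    voxels = ct_volume_hu.reshape(-1, 1).astype(np.float32)
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(voxels)
    lung_label = int(np.argmin(km.cluster_centers_.ravel()))  # lower mean HU cluster
    return (km.labels_ == lung_label).reshape(ct_volume_hu.shape)

# Placeholder volume: air (-1000 HU) with a block of soft tissue (~40 HU).
vol = np.full((32, 64, 64), -1000.0)
vol[:, 20:40, 20:40] = 40.0
mask = crude_lung_mask(vol)
print(mask.mean())   # fraction of voxels assigned to the low-HU cluster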
NASA Technical Reports Server (NTRS)
Hjellming, R. M.
1992-01-01
AIPS++ is an Astronomical Information Processing System being designed and implemented by an international consortium of NRAO and six other radio astronomy institutions in Australia, India, the Netherlands, the United Kingdom, Canada, and the USA. AIPS++ is intended to replace the functionality of AIPS, to be more easily programmable, and will be implemented in C++ using object-oriented techniques. Programmability in AIPS++ is planned at three levels. The first level will be that of a command-line interpreter with characteristics similar to IDL and PV-Wave, but with an extensive set of operations appropriate to telescope data handling, image formation, and image processing. The second level will allow input and output of data between external FORTRAN programs and AIPS++ telescope and image databases. The third level will be in C++ with extensive use of class libraries for both basic operations and advanced applications. In addition to summarizing the above programmability characteristics, this talk will give an overview of the classes currently being designed for telescope data calibration and editing, image formation, and the 'toolkit' of mathematical 'objects' that will perform most of the processing in AIPS++.
Processing of CT images for analysis of diffuse lung disease in the lung tissue research consortium
NASA Astrophysics Data System (ADS)
Karwoski, Ronald A.; Bartholmai, Brian; Zavaletta, Vanessa A.; Holmes, David; Robb, Richard A.
2008-03-01
The goal of the Lung Tissue Research Consortium (LTRC) is to improve the management of diffuse lung diseases through a better understanding of the biology of Chronic Obstructive Pulmonary Disease (COPD) and fibrotic interstitial lung disease (ILD), including Idiopathic Pulmonary Fibrosis (IPF). Participants are subjected to a battery of tests including tissue biopsies, physiologic testing, clinical history reporting, and CT scanning of the chest. The LTRC is a repository from which investigators can request tissue specimens and test results, as well as semi-quantitative radiology reports, pathology reports, and automated quantitative image analysis results from the CT scan data performed by the LTRC core laboratories. The LTRC Radiology Core Laboratory (RCL), in conjunction with the Biomedical Imaging Resource (BIR), has developed novel processing methods for comprehensive characterization of pulmonary processes on volumetric high-resolution CT scans to quantify how these diseases manifest in radiographic images. Specifically, the RCL has implemented a semi-automated method for segmenting the anatomical regions of the lungs and airways. In these anatomic regions, automated quantification of pathologic features of disease, including emphysema volumes and tissue classification, is performed using both threshold techniques and advanced texture measures to determine the extent and location of emphysema, ground glass opacities, "honeycombing" (HC), "irregular linear" or "reticular" pulmonary infiltrates, and normal lung. Wall thickness measurements of the trachea and its branches to the 3rd and, to a limited extent, 4th order are also computed. The methods for processing, segmentation, and quantification are described. The results are reviewed and verified by an expert radiologist following processing and stored in the public LTRC database for use by pulmonary researchers. To date, over 1200 CT scans have been processed by the RCL, and the LTRC project is on target for recruitment of the 2200 patients, with 1800 CT scans in the repository, for the 5-year effort. Ongoing analysis of the results in the LTRC database by the LTRC participating institutions and outside investigators is underway to examine the clinical and physiological significance of the imaging features of these diseases and correlate these findings with quality of life and other important prognostic indicators of severity. In the future, the quantitative measures of disease may have greater utility by showing correlation with prognosis, disease severity, and other physiological parameters. These imaging features may provide non-invasive alternative endpoints or surrogate markers to alleviate the need for tissue biopsy or provide an accurate means to monitor the rate of disease progression or response to therapy.
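One of the threshold techniques mentioned above can be illustrated with an emphysema index: the fraction of lung voxels below a low-attenuation cutoff. The -950 HU cutoff is a value commonly used in the literature and is assumed here; the LTRC's actual thresholds and texture measures may differ.

# Hedged sketch of a threshold-based emphysema measure (fraction of lung voxels
# below a low-attenuation cutoff); cutoff and data are illustrative.
import numpy as np

def emphysema_index(ct_hu, lung_mask, cutoff_hu=-950.0):
    lung_voxels = ct_hu[lung_mask]
    return float((lung_voxels < cutoff_hu).mean())

# Placeholder data: a lung region in which 10% of voxels fall below the cutoff.
ct = np.full((100,), -850.0)
ct[:10] = -980.0
mask = np.ones_like(ct, dtype=bool)
print(emphysema_index(ct, mask))   # -> 0.1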
Caputo, Sandrine; Benboudjema, Louisa; Sinilnikova, Olga; Rouleau, Etienne; Béroud, Christophe; Lidereau, Rosette
2012-01-01
BRCA1 and BRCA2 are the two main genes responsible for predisposition to breast and ovarian cancers, as a result of protein-inactivating monoallelic mutations. It remains to be established whether many of the variants identified in these two genes, so-called unclassified/unknown variants (UVs), contribute to the disease phenotype or are simply neutral variants (or polymorphisms). Given the clinical importance of establishing their status, a nationwide effort to annotate these UVs was launched by laboratories belonging to the French GGC consortium (Groupe Génétique et Cancer), leading to the creation of the UMD-BRCA1/BRCA2 databases (http://www.umd.be/BRCA1/ and http://www.umd.be/BRCA2/). These databases have been endorsed by the French National Cancer Institute (INCa) and are designed to collect all variants detected in France, whether causal, neutral or UV. They differ from other BRCA databases in that they contain co-occurrence data for all variants. Using these data, the GGC French consortium has been able to classify certain UVs also contained in other databases. In this article, we report some novel UVs not contained in the BIC database and explore their impact in cancer predisposition based on a structural approach.
Mishra, Amrita
2014-01-01
Omics research infrastructure such as databases and bio-repositories requires effective governance to support pre-competitive research. Governance includes the use of legal agreements, such as Material Transfer Agreements (MTAs). We analyze the use of such agreements in the mouse research commons, including by two large-scale resource development projects: the International Knockout Mouse Consortium (IKMC) and International Mouse Phenotyping Consortium (IMPC). We combine an analysis of legal agreements and semi-structured interviews with 87 members of the mouse model research community to examine legal agreements in four contexts: (1) between researchers; (2) deposit into repositories; (3) distribution by repositories; and (4) exchanges between repositories, especially those that are consortium members of the IKMC and IMPC. We conclude that legal agreements for the deposit and distribution of research reagents should be kept as simple and standard as possible, especially when minimal enforcement capacity and resources exist. Simple and standardized legal agreements reduce transactional bottlenecks and facilitate the creation of a vibrant and sustainable research commons, supported by repositories and databases. PMID:24552652
[Development of a digital chest phantom for studies on energy subtraction techniques].
Hayashi, Norio; Taniguchi, Anna; Noto, Kimiya; Shimosegawa, Masayuki; Ogura, Toshihiro; Doi, Kunio
2014-03-01
Digital chest phantoms continue to play a significant role in optimizing imaging parameters for chest X-ray examinations. The purpose of this study was to develop a digital chest phantom for studies on energy subtraction techniques under ideal conditions without image noise. Computed tomography (CT) images from the LIDC (Lung Image Database Consortium) were employed to develop a digital chest phantom. The method consisted of the following four steps: 1) segmentation of the lung and bone regions on CT images; 2) creation of simulated nodules; 3) transformation to attenuation coefficient maps from the segmented images; and 4) projection from attenuation coefficient maps. To evaluate the usefulness of digital chest phantoms, we determined the contrast of the simulated nodules in projection images of the digital chest phantom using high and low X-ray energies, soft tissue images obtained by energy subtraction, and "gold standard" images of the soft tissues. Using our method, the lung and bone regions were segmented on the original CT images. The contrast of simulated nodules in soft tissue images obtained by energy subtraction closely matched that obtained using the gold standard images. We thus conclude that it is possible to carry out simulation studies based on energy subtraction techniques using the created digital chest phantoms. Our method is potentially useful for performing simulation studies for optimizing the imaging parameters in chest X-ray examinations.
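A hedged sketch of steps 3 and 4 of the method above: monoenergetic projections are formed from a (simplified) attenuation-coefficient volume via the Beer-Lambert law, and a soft-tissue image is obtained by weighted log subtraction. The attenuation values and weighting factor are illustrative assumptions, not the authors' parameters.

# Hedged sketch of projection and energy subtraction from attenuation maps.
import numpy as np

def project(mu_volume, voxel_size_cm=0.1):
    """Line integrals along axis 0; I = I0 * exp(-sum(mu * dz)) with I0 = 1."""
    path_integral = mu_volume.sum(axis=0) * voxel_size_cm
    return np.exp(-path_integral)            # transmitted fraction per detector pixel

# Two attenuation maps for the same anatomy at low and high tube energies (placeholders).
mu_low  = np.random.rand(64, 128, 128) * 0.4
mu_high = mu_low * 0.6                        # attenuation generally drops at higher energy

I_low, I_high = project(mu_low), project(mu_high)

# Energy subtraction: a weighted difference of log images to suppress one material.
w = 0.6                                       # weighting factor (would be tuned in practice)
soft_tissue = -np.log(I_high) - w * (-np.log(I_low))
print(soft_tissue.shape)                      # (128, 128) soft-tissue-weighted image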
NASA Astrophysics Data System (ADS)
Chambers, Kenneth; Pan-STARRS Team
2018-01-01
The Pan-STARRS1 Surveys are complete and the first data release, DR1, is available from the Mikulski Archive for Space Telescopes (MAST) at the Space Telescope Science Institute. The data include a database of measured attributes of 3 billion objects, stacked images, and metadata of the 3pi Steradian Survey. The DR1 contains all stationary objects with mean and stack photometry registered on the Gaia astrometric frame. DR2 is in preparation and will be released this winter with all the individual epoch images and time domain photometry and forced photometry on the individual epoch images. The characteristics of the Pan-STARRS1 Surveys will be presented, including image quality, depth, cadence, and coverage. Measured attributes include PSF model magnitudes, aperture magnitudes, Kron magnitudes, radial moments, Petrosian magnitudes, and de Vaucouleurs, exponential, and Sérsic magnitudes for extended objects. Images include total intensity, variance, and masks. An overview of the Pan-STARRS1 Surveys and data releases will be presented together with a brief description of the data collected since the end of the PS1 Science Consortium surveys, and the plans for the upcoming survey with PS1 and PS2 beginning in February 2018.
The Lung Image Database Consortium (LIDC): Ensuring the integrity of expert-defined “truth”
Armato, Samuel G.; Roberts, Rachael Y.; McNitt-Gray, Michael F.; Meyer, Charles R.; Reeves, Anthony P.; McLennan, Geoffrey; Engelmann, Roger M.; Bland, Peyton H.; Aberle, Denise R.; Kazerooni, Ella A.; MacMahon, Heber; van Beek, Edwin J.R.; Yankelevitz, David; Croft, Barbara Y.; Clarke, Laurence P.
2007-01-01
Rationale and Objectives Computer-aided diagnostic (CAD) systems fundamentally require the opinions of expert human observers to establish “truth” for algorithm development, training, and testing. The integrity of this “truth,” however, must be established before investigators commit to this “gold standard” as the basis for their research. The purpose of this study was to develop a quality assurance (QA) model as an integral component of the “truth” collection process concerning the location and spatial extent of lung nodules observed on computed tomography (CT) scans to be included in the Lung Image Database Consortium (LIDC) public database. Materials and Methods One hundred CT scans were interpreted by four radiologists through a two-phase process. For the first of these reads (the “blinded read phase”), radiologists independently identified and annotated lesions, assigning each to one of three categories: “nodule ≥ 3mm,” “nodule < 3mm,” or “non-nodule ≥ 3mm.” For the second read (the “unblinded read phase”), the same radiologists independently evaluated the same CT scans but with all of the annotations from the previously performed blinded reads presented; each radiologist could add marks, edit or delete their own marks, change the lesion category of their own marks, or leave their marks unchanged. The post-unblinded-read set of marks was grouped into discrete nodules and subjected to the QA process, which consisted of (1) identification of potential errors introduced during the complete image annotation process (such as two marks on what appears to be a single lesion or an incomplete nodule contour) and (2) correction of those errors. Seven categories of potential error were defined; any nodule with a mark that satisfied the criterion for one of these categories was referred to the radiologist who assigned that mark for either correction or confirmation that the mark was intentional. Results A total of 105 QA issues were identified across 45 (45.0%) of the 100 CT scans. Radiologist review resulted in modifications to 101 (96.2%) of these potential errors. Twenty-one lesions erroneously marked as lung nodules after the unblinded reads had this designation removed through the QA process. Conclusion The establishment of “truth” must incorporate a QA process to guarantee the integrity of the datasets that will provide the basis for the development, training, and testing of CAD systems. PMID:18035275
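To make the QA step more concrete, the sketch below shows one plausible automated check of the kind described: flagging two "nodule" marks from the same reader that lie closer together than a separation threshold, which may indicate duplicate marks on a single lesion. The record fields and the 3 mm threshold are hypothetical, not the LIDC's actual criteria; flagged issues would be referred back to the marking radiologist.

# Hedged sketch of one QA check: same-reader marks closer than a threshold.
import math

def flag_duplicate_marks(marks, min_separation_mm=3.0):
    """marks: list of dicts like {'reader': 'R1', 'x': .., 'y': .., 'z': ..} in mm."""
    issues = []
    for i in range(len(marks)):
        for j in range(i + 1, len(marks)):
            a, b = marks[i], marks[j]
            if a['reader'] != b['reader']:
                continue
            d = math.dist((a['x'], a['y'], a['z']), (b['x'], b['y'], b['z']))
            if d < min_separation_mm:
                issues.append((a['reader'], i, j, round(d, 2)))
    return issues   # each issue is referred to the reader for correction or confirmation

marks = [{'reader': 'R1', 'x': 10, 'y': 10, 'z': 5},
         {'reader': 'R1', 'x': 11, 'y': 10, 'z': 5},
         {'reader': 'R2', 'x': 50, 'y': 40, 'z': 20}]
print(flag_duplicate_marks(marks))   # [('R1', 0, 1, 1.0)]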
Call for participation in the neurogenetics consortium within the Human Variome Project.
Haworth, Andrea; Bertram, Lars; Carrera, Paola; Elson, Joanna L; Braastad, Corey D; Cox, Diane W; Cruts, Marc; den Dunnen, Johann T; Farrer, Matthew J; Fink, John K; Hamed, Sherifa A; Houlden, Henry; Johnson, Dennis R; Nuytemans, Karen; Palau, Francesc; Rayan, Dipa L Raja; Robinson, Peter N; Salas, Antonio; Schüle, Birgitt; Sweeney, Mary G; Woods, Michael O; Amigo, Jorge; Cotton, Richard G H; Sobrido, Maria-Jesus
2011-08-01
The rate of DNA variation discovery has accelerated the need to collate, store and interpret the data in a standardised, coherent way; doing so is becoming a critical step in maximising the impact of discovery on the understanding and treatment of human disease. This applies particularly to the field of neurology, as neurological function is impaired in many human disorders. Furthermore, the field of neurogenetics has proven to show remarkably complex genotype-to-phenotype relationships. To facilitate the collection of DNA sequence variation pertaining to neurogenetic disorders, we have initiated the "Neurogenetics Consortium" under the umbrella of the Human Variome Project. The Consortium's founding group consisted of basic researchers, clinicians, informaticians and database creators. This report outlines the strategic aims established at the preliminary meetings of the Neurogenetics Consortium and calls for the involvement of the wider neurogenetic community in enabling the development of this important resource.
NRA8-21 Cycle 2 RBCC Turbopump Risk Reduction
NASA Technical Reports Server (NTRS)
Ferguson, Thomas V.; Williams, Morgan; Marcu, Bogdan
2004-01-01
This project was composed of three sub-tasks. The objective of the first task was to use the CFD code INS3D to generate both on- and off-design predictions for the consortium optimized impeller flowfield. The results of the flow simulations are given in the first section. The objective of the second task was to construct a turbomachinery testing database comprised of measurements made on several different impellers, an inducer and a diffuser. The data was in the form of static pressure measurements as well as laser velocimeter measurements of velocities and flow angles within the stated components. Several databases with this information were created for these components. The third subtask objective was two-fold: first, to validate the Enigma CFD code for pump diffuser analysis, and secondly, to perform steady and unsteady analyses on some wide flow range diffuser concepts using Enigma. The code was validated using the consortium optimized impeller database and then applied to two different concepts for wide flow diffusers.
NASA Astrophysics Data System (ADS)
Machado, A. E.; Scharfen, G. R.; Barry, R. G.; Khalsa, S. S.; Raup, B.; Swick, R.; Troisi, V. J.; Wang, I.
2001-12-01
GLIMS (Global Land Ice Measurements from Space) is an international project to survey a majority of the world's glaciers with the accuracy and precision needed to assess recent changes and determine trends in glacial environments. This will be accomplished by: comprehensive periodic satellite measurements, coordinated distribution of screened image data, analysis of images at worldwide Regional Centers, validation of analyses, and a publicly accessible database. The primary data source will be from the ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer) instrument aboard the EOS Terra spacecraft, and Landsat ETM+ (Enhanced Thematic Mapper Plus), currently in operation. Approximately 700 ASTER images have been acquired with GLIMS gain settings as of mid-2001. GLIMS is a collaborative effort with the United States Geological Survey (USGS), the National Aeronautics and Space Administration (NASA), other U.S. Federal Agencies and a group of internationally distributed glaciologists at Regional Centers of expertise. The National Snow and Ice Data Center (NSIDC) is developing the information management system for GLIMS. We will ingest and maintain GLIMS-analyzed glacier data from Regional Centers and provide access to the data via the World Wide Web. The GLIMS database will include measurements (over time) of glacier length, area, boundaries, topography, surface velocity vectors, and snowline elevation, derived primarily from remote sensing data. The GLIMS information management system at NSIDC will provide an easy-to-use and widely accessible service for the glaciological community and other users needing information about the world's glaciers. The structure of the international GLIMS consortium, status of database development, sample imagery and derived analyses and user search and order interfaces will be demonstrated. More information on GLIMS is available at: http://www.glims.org/.
Footprint Representation of Planetary Remote Sensing Data
NASA Astrophysics Data System (ADS)
Walter, S. H. G.; Gasselt, S. V.; Michael, G.; Neukum, G.
The geometric outline of remote sensing image data, the so-called footprint, can be represented as a number of coordinate tuples. These polygons are associated with corresponding attribute information such as orbit name, ground and image resolution, solar longitude, and illumination conditions to form a powerful basis for classification of planetary experiment data. Speed, handling, and extended capabilities are the reasons for using geodatabases to store and access these data types. Techniques for such a spatial database of footprint data are demonstrated using the Relational Database Management System (RDBMS) PostgreSQL, spatially enabled by the PostGIS extension. As an example, footprints of the HRSC and OMEGA instruments, both onboard ESA's Mars Express orbiter, are generated and connected to attribute information. The aim is to provide high-resolution footprints of the OMEGA instrument to the science community for the first time and make them available for web-based mapping applications like the "Planetary Interactive GIS-on-the-Web Analyzable Database" (PIGWAD) produced by the USGS. Map overlays with HRSC or other instruments like MOC and THEMIS (footprint maps are already available for these instruments and can be integrated into the database) allow on-the-fly intersection and comparison as well as extended statistics of the data. Footprint polygons are generated one by one using standard software provided by the instrument teams. Attribute data are calculated and stored together with the geometric information. In the case of HRSC, the coordinates of the footprints are already available in the VICAR label of each image file. Using the VICAR RTL and PostgreSQL's libpq C library, they are loaded into the database using the Well-Known Text (WKT) notation of the Open Geospatial Consortium, Inc. (OGC). For the OMEGA instrument, image data are read using IDL routines developed and distributed by the OMEGA team. Image outlines are exported, together with relevant attribute data, to the industry-standard Shapefile format. These files are translated to a Structured Query Language (SQL) command sequence suitable for insertion into the PostGIS/PostgreSQL database using the shp2pgsql data loader provided by the PostGIS software. PostgreSQL's advanced features, such as geometry types, rules, operators, and functions, allow complex spatial queries and on-the-fly processing of data at the DBMS level, e.g., generalisation of the outlines. Processing done by the DBMS, visualisation via GIS systems, and utilisation for web-based applications like map servers will be demonstrated.
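A minimal sketch of the loading step described above, assuming a PostGIS-enabled PostgreSQL instance: a footprint polygon expressed in OGC Well-Known Text is inserted with psycopg2. The connection parameters, table name, column names, and SRID are hypothetical.

# Hedged sketch: insert a WKT footprint polygon into a hypothetical PostGIS table.
import psycopg2

footprint_wkt = "POLYGON((120.1 -14.2, 121.3 -14.2, 121.3 -15.0, 120.1 -15.0, 120.1 -14.2))"

conn = psycopg2.connect(dbname="footprints", user="gis", password="secret", host="localhost")
cur = conn.cursor()
cur.execute(
    """
    INSERT INTO hrsc_footprints (orbit_name, ground_resolution_m, geom)
    VALUES (%s, %s, ST_GeomFromText(%s, 4326))
    """,
    ("h0068_0000", 12.5, footprint_wkt),   # hypothetical orbit name and resolution
)
conn.commit()
cur.close()
conn.close()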
Prasad, Anjali; Helder, Meghana R; Brown, Dwight A; Schaff, Hartzell V
2016-10-01
The University HealthSystem Consortium (UHC) administrative database has been used increasingly as a quality indicator for hospitals and even individual surgeons. We aimed to determine the accuracy of cardiac surgical data in the administrative UHC database vs data in the clinical Society of Thoracic Surgeons database. We reviewed demographic and outcomes information of patients with aortic valve replacement (AVR), mitral valve replacement (MVR), and coronary artery bypass grafting (CABG) surgery between January 1, 2012, and December 31, 2013. Data collected in aggregate and compared across the databases included case volume, physician specialty coding, patient age and sex, comorbidities, mortality rate, and postoperative complications. In these 2 years, the UHC database recorded 1,270 AVRs, 355 MVRs, and 1,473 CABGs. The Society of Thoracic Surgeons database case volumes were less by 2% to 12% (1,219 AVRs; 316 MVRs; and 1,442 CABGs). Errors in physician specialty coding occurred in UHC data (AVR, 0.6%; MVR, 0.8%; and CABG, 0.7%). In matched patients from each database, demographic age and sex information was identical. Although definitions differed in the databases, percentages of patients with at least one comorbidity were similar. Hospital mortality rates were similar as well, but postoperative recorded complications differed greatly. In comparing the 2 databases, we found similarity in patient demographic information and percentage of patients with comorbidities. The small difference in volumes of each operation type and the larger disparity in postoperative complications between the databases were related to differences in data definition, data collection, and coding errors. Copyright © 2016 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
The Cardiac Safety Research Consortium ECG database.
Kligfield, Paul; Green, Cynthia L
2012-01-01
The Cardiac Safety Research Consortium (CSRC) ECG database was initiated to foster research using anonymized, XML-formatted, digitized ECGs with corresponding descriptive variables from placebo- and positive-control arms of thorough QT studies submitted to the US Food and Drug Administration (FDA) by pharmaceutical sponsors. The database can be expanded to other data that are submitted directly to CSRC from other sources, and currently includes digitized ECGs from patients with genotyped varieties of congenital long-QT syndrome; this congenital long-QT database is also linked to ambulatory electrocardiograms stored in the Telemetric and Holter ECG Warehouse (THEW). Thorough QT data sets are available from CSRC for unblinded development of algorithms for analysis of repolarization and for blinded comparative testing of algorithms developed for the identification of moxifloxacin, as used as a positive control in thorough QT studies. Policies and procedures for access to these data sets are available from CSRC, which has developed tools for statistical analysis of blinded new algorithm performance. A recently approved CSRC project will create a data set for blinded analysis of automated ECG interval measurements, whose initial focus will include comparison of four of the major manufacturers of automated electrocardiographs in the United States. CSRC welcomes application for use of the ECG database for clinical investigation. Copyright © 2012 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Minnett, R.; Koppers, A.; Jarboe, N.; Tauxe, L.; Constable, C.; Jonestrask, L.
2017-12-01
Challenges are faced by both new and experienced users interested in contributing their data to community repositories, in data discovery, or engaged in potentially transformative science. The Magnetics Information Consortium (https://earthref.org/MagIC) has recently simplified its data model and developed a new containerized web application to reduce the friction in contributing, exploring, and combining valuable and complex datasets for the paleo-, geo-, and rock magnetic scientific community. The new data model more closely reflects the hierarchical workflow in paleomagnetic experiments to enable adequate annotation of scientific results and ensure reproducibility. The new open-source (https://github.com/earthref/MagIC) application includes an upload tool that is integrated with the data model to provide early data validation feedback and ease the friction of contributing and updating datasets. The search interface provides a powerful full text search of contributions indexed by ElasticSearch and a wide array of filters, including specific geographic and geological timescale filtering, to support both novice users exploring the database and experts interested in compiling new datasets with specific criteria across thousands of studies and millions of measurements. The datasets are not large, but they are complex, with many results from evolving experimental and analytical approaches. These data are also extremely valuable due to the cost in collecting or creating physical samples and the, often, destructive nature of the experiments. MagIC is heavily invested in encouraging young scientists as well as established labs to cultivate workflows that facilitate contributing their data in a consistent format. This eLightning presentation includes a live demonstration of the MagIC web application, developed as a configurable container hosting an isomorphic Meteor JavaScript application, MongoDB database, and ElasticSearch search engine. Visitors can explore the MagIC Database through maps and image or plot galleries or search and filter the raw measurements and their derived hierarchy of analytical interpretations.
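As a rough illustration of the kind of filtered query behind such a search interface, the sketch below runs a latitude/longitude box and age-window filter against a MongoDB collection with pymongo. The database, collection, and field names are hypothetical and do not reflect MagIC's actual schema.

# Hedged sketch of a geographic + age filter; names are hypothetical placeholders.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
sites = client["magic"]["sites"]

cursor = sites.find({
    "lat": {"$gte": 30.0, "$lte": 50.0},
    "lon": {"$gte": -130.0, "$lte": -110.0},
    "age": {"$gte": 0.0, "$lte": 5.0},   # geological age window in Ma
}).limit(20)

for doc in cursor:
    print(doc.get("site_name"), doc.get("age"))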
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lopez Torres, E., E-mail: Ernesto.Lopez.Torres@cern.ch, E-mail: cerello@to.infn.it; Fiorina, E.; Pennazio, F.
Purpose: M5L, a fully automated computer-aided detection (CAD) system for the detection and segmentation of lung nodules in thoracic computed tomography (CT), is presented and validated on several image datasets. Methods: M5L is the combination of two independent subsystems, based on the Channeler Ant Model as a segmentation tool [lung channeler ant model (lungCAM)] and on the voxel-based neural approach. The lungCAM was upgraded with a scan equalization module and a new procedure to recover the nodules connected to other lung structures; its classification module, which makes use of a feed-forward neural network, is based on a small number ofmore » features (13), so as to minimize the risk of lacking generalization, which could be possible given the large difference between the size of the training and testing datasets, which contain 94 and 1019 CTs, respectively. The lungCAM (standalone) and M5L (combined) performance was extensively tested on 1043 CT scans from three independent datasets, including a detailed analysis of the full Lung Image Database Consortium/Image Database Resource Initiative database, which is not yet found in the literature. Results: The lungCAM and M5L performance is consistent across the databases, with a sensitivity of about 70% and 80%, respectively, at eight false positive findings per scan, despite the variable annotation criteria and acquisition and reconstruction conditions. A reduced sensitivity is found for subtle nodules and ground glass opacities (GGO) structures. A comparison with other CAD systems is also presented. Conclusions: The M5L performance on a large and heterogeneous dataset is stable and satisfactory, although the development of a dedicated module for GGOs detection could further improve it, as well as an iterative optimization of the training procedure. The main aim of the present study was accomplished: M5L results do not deteriorate when increasing the dataset size, making it a candidate for supporting radiologists on large scale screenings and clinical programs.« less
Large scale validation of the M5L lung CAD on heterogeneous CT datasets.
Torres, E Lopez; Fiorina, E; Pennazio, F; Peroni, C; Saletta, M; Camarlinghi, N; Fantacci, M E; Cerello, P
2015-04-01
M5L, a fully automated computer-aided detection (CAD) system for the detection and segmentation of lung nodules in thoracic computed tomography (CT), is presented and validated on several image datasets. M5L is the combination of two independent subsystems, based on the Channeler Ant Model as a segmentation tool [lung channeler ant model (lungCAM)] and on the voxel-based neural approach. The lungCAM was upgraded with a scan equalization module and a new procedure to recover the nodules connected to other lung structures; its classification module, which makes use of a feed-forward neural network, is based on a small number of features (13), so as to minimize the risk of lacking generalization, which could be possible given the large difference between the size of the training and testing datasets, which contain 94 and 1019 CTs, respectively. The lungCAM (standalone) and M5L (combined) performance was extensively tested on 1043 CT scans from three independent datasets, including a detailed analysis of the full Lung Image Database Consortium/Image Database Resource Initiative database, which is not yet found in the literature. The lungCAM and M5L performance is consistent across the databases, with a sensitivity of about 70% and 80%, respectively, at eight false positive findings per scan, despite the variable annotation criteria and acquisition and reconstruction conditions. A reduced sensitivity is found for subtle nodules and ground glass opacities (GGO) structures. A comparison with other CAD systems is also presented. The M5L performance on a large and heterogeneous dataset is stable and satisfactory, although the development of a dedicated module for GGOs detection could further improve it, as well as an iterative optimization of the training procedure. The main aim of the present study was accomplished: M5L results do not deteriorate when increasing the dataset size, making it a candidate for supporting radiologists on large scale screenings and clinical programs.
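The operating point quoted above (sensitivity at eight false positives per scan) can be computed from scored nodule candidates as sketched below; the candidate scores, labels, and scan identifiers are placeholders, and the exact evaluation protocol of the study may differ.

# Hedged sketch: sensitivity of a candidate detector at a fixed FP-per-scan budget.
import numpy as np

def sensitivity_at_fp_per_scan(scores, is_true_nodule, scan_ids, fp_budget=8.0):
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(is_true_nodule, dtype=bool)
    n_scans = len(set(scan_ids))
    n_pos = labels.sum()
    order = np.argsort(-scores)        # accept candidates from most to least confident
    tp = fp = 0
    sensitivity = 0.0
    for idx in order:
        if labels[idx]:
            tp += 1
        else:
            fp += 1
        if fp / n_scans <= fp_budget:  # still within the false-positive budget
            sensitivity = tp / n_pos
    return sensitivity

# Placeholder candidates from 2 scans: scores, truth flags, and owning scan.
print(sensitivity_at_fp_per_scan(
    scores=[0.9, 0.8, 0.7, 0.6, 0.5], is_true_nodule=[1, 0, 1, 0, 1],
    scan_ids=["s1", "s1", "s2", "s2", "s2"], fp_budget=1.0))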
LungMAP: The Molecular Atlas of Lung Development Program
Ardini-Poleske, Maryanne E.; Ansong, Charles; Carson, James P.; Corley, Richard A.; Deutsch, Gail H.; Hagood, James S.; Kaminski, Naftali; Mariani, Thomas J.; Potter, Steven S.; Pryhuber, Gloria S.; Warburton, David; Whitsett, Jeffrey A.; Palmer, Scott M.; Ambalavanan, Namasivayam
2017-01-01
The National Heart, Lung, and Blood Institute is funding an effort to create a molecular atlas of the developing lung (LungMAP) to serve as a research resource and public education tool. The lung is a complex organ with lengthy development time driven by interactive gene networks and dynamic cross talk among multiple cell types to control and coordinate lineage specification, cell proliferation, differentiation, migration, morphogenesis, and injury repair. A better understanding of the processes that regulate lung development, particularly alveologenesis, will have a significant impact on survival rates for premature infants born with incomplete lung development and will facilitate lung injury repair and regeneration in adults. A consortium of four research centers, a data coordinating center, and a human tissue repository provides high-quality molecular data of developing human and mouse lungs. LungMAP includes mouse and human data for cross correlation of developmental processes across species. LungMAP is generating foundational data and analysis, creating a web portal for presentation of results and public sharing of data sets, establishing a repository of young human lung tissues obtained through organ donor organizations, and developing a comprehensive lung ontology that incorporates the latest findings of the consortium. The LungMAP website (www.lungmap.net) currently contains more than 6,000 high-resolution lung images and transcriptomic, proteomic, and lipidomic human and mouse data and provides scientific information to stimulate interest in research careers for young audiences. This paper presents a brief description of research conducted by the consortium, database, and portal development and upcoming features that will enhance the LungMAP experience for a community of users. PMID:28798251
Homer, Collin G.; Dewitz, Jon; Yang, Limin; Jin, Suming; Danielson, Patrick; Xian, George Z.; Coulston, John; Herold, Nathaniel; Wickham, James; Megown, Kevin
2015-01-01
The National Land Cover Database (NLCD) provides nationwide data on land cover and land cover change at the native 30-m spatial resolution of the Landsat Thematic Mapper (TM). The database is designed to provide five-year cyclical updating of United States land cover and associated changes. The recent release of NLCD 2011 products now represents a decade of consistently produced land cover and impervious surface for the Nation across three periods: 2001, 2006, and 2011 (Homer et al., 2007; Fry et al., 2011). Tree canopy cover has also been produced for 2011 (Coulston et al., 2012; Coulston et al., 2013). With the release of NLCD 2011, the database provides the ability to move beyond simple change detection to monitoring and trend assessments. NLCD 2011 represents the latest evolution of NLCD products, continuing its focus on consistency, production, efficiency, and product accuracy. NLCD products are designed for widespread application in biology, climate, education, land management, hydrology, environmental planning, risk and disease analysis, telecommunications and visualization, and are available for no cost at http://www.mrlc.gov. NLCD is produced by a Federal agency consortium called the Multi-Resolution Land Characteristics Consortium (MRLC) (Wickham et al., 2014). In the consortium arrangement, the U.S. Geological Survey (USGS) leads NLCD land cover and imperviousness production for the bulk of the Nation; the National Oceanic and Atmospheric Administration (NOAA) completes NLCD land cover for the conterminous U.S. (CONUS) coastal zones; and the U.S. Forest Service (USFS) designs and produces the NLCD tree canopy cover product. Other MRLC partners collaborate through resource or data contribution to ensure NLCD products meet their respective program needs (Wickham et al., 2014).
ERIC Educational Resources Information Center
Kreie, Jennifer; Hashemi, Shohreh
2012-01-01
Data is a vital resource for businesses; therefore, it is important for businesses to manage and use their data effectively. Because of this, businesses value college graduates with an understanding of and hands-on experience working with databases, data warehouses and data analysis theories and tools. Faculty in many business disciplines try to…
A 30-meter spatial database for the nation's forests
Raymond L. Czaplewski
2002-01-01
The FIA vision for remote sensing originated in 1992 with the Blue Ribbon Panel on FIA, and it has since evolved into an ambitious performance target for 2003. FIA is joining a consortium of Federal agencies to map the Nation's land cover. FIA field data will help produce a seamless, standardized, national geospatial database for forests at the scale of 30-m...
Moscucci, Mauro; Share, David; Kline-Rogers, Eva; O'Donnell, Michael; Maxwell-Eward, Ann; Meengs, William L; Clark, Vivian L; Kraft, Phillip; De Franco, Anthony C; Chambers, James L; Patel, Kirit; McGinnity, John G; Eagle, Kim A
2002-10-01
The past decade has been characterized by increased scrutiny of outcomes of surgical and percutaneous coronary interventions (PCIs). This increased scrutiny has led to the development of regional, state, and national databases for outcome assessment and for public reporting. This report describes the initial development of a regional, collaborative, cardiovascular consortium and the progress made so far by this collaborative group. In 1997, a group of hospitals in the state Michigan agreed to create a regional collaborative consortium for the development of a quality improvement program in interventional cardiology. The project included the creation of a comprehensive database of PCIs to be used for risk assessment, feedback on absolute and risk-adjusted outcomes, and sharing of information. To date, information from nearly 20,000 PCIs have been collected. A risk prediction tool for death in the hospital and additional risk prediction tools for other outcomes have been developed from the data collected, and are currently used by the participating centers for risk assessment and for quality improvement. As the project enters into year 5, the participating centers are deeply engaged in the quality improvement phase, and expansion to a total of 17 hospitals with active PCI programs is in process. In conclusion, the Blue Cross Blue Shield of Michigan Cardiovascular Consortium is an example of a regional collaborative effort to assess and improve quality of care and outcomes that overcome the barriers of traditional market and academic competition.
A hierarchical SVG image abstraction layer for medical imaging
NASA Astrophysics Data System (ADS)
Kim, Edward; Huang, Xiaolei; Tan, Gang; Long, L. Rodney; Antani, Sameer
2010-03-01
As medical imaging rapidly expands, there is an increasing need to structure and organize image data for efficient analysis, storage and retrieval. In response, a large fraction of research in the areas of content-based image retrieval (CBIR) and picture archiving and communication systems (PACS) has focused on structuring information to bridge the "semantic gap", a disparity between machine and human image understanding. An additional consideration in medical images is the organization and integration of clinical diagnostic information. As a step towards bridging the semantic gap, we design and implement a hierarchical image abstraction layer using an XML-based language, Scalable Vector Graphics (SVG). Our method encodes features from the raw image and clinical information into an extensible "layer" that can be stored in an SVG document and efficiently searched. Any feature extracted from the raw image, including color, texture, orientation, size, and neighbor information, can be combined in our abstraction with high-level descriptions or classifications. Our representation can also natively characterize an image in a hierarchical tree structure to support multiple levels of segmentation. Furthermore, being a World Wide Web Consortium (W3C) standard, SVG can be displayed by most web browsers, interacted with by ECMAScript (a standardized scripting language, e.g. JavaScript, JScript), and indexed and retrieved by XML databases and XQuery. Using these open-source technologies enables straightforward integration into existing systems. Our results show that the flexibility and extensibility of our abstraction facilitate effective storage and retrieval of medical images.
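As a rough illustration of the kind of abstraction layer described above, the sketch below builds a small SVG document in which nested <g> groups carry a region contour together with feature attributes and a clinical label. It uses only Python's standard library; the element IDs, attribute names, and feature values are hypothetical and are not the authors' actual schema.

```python
# Illustrative sketch (not the authors' schema): encoding region features
# and a clinical label as a hierarchical SVG "abstraction layer".
import xml.etree.ElementTree as ET

SVG_NS = "http://www.w3.org/2000/svg"
ET.register_namespace("", SVG_NS)

def make_abstraction_layer(regions):
    """Build an SVG document whose nested <g> groups mirror a segmentation hierarchy."""
    svg = ET.Element(f"{{{SVG_NS}}}svg", width="512", height="512")
    layer = ET.SubElement(svg, f"{{{SVG_NS}}}g", id="abstraction-layer")
    for r in regions:
        g = ET.SubElement(layer, f"{{{SVG_NS}}}g", id=r["id"])
        # High-level description stored as attributes; sub-regions would nest below.
        g.set("data-label", r["label"])
        g.set("data-mean-intensity", str(r["mean_intensity"]))
        ET.SubElement(g, f"{{{SVG_NS}}}polygon",
                      points=" ".join(f"{x},{y}" for x, y in r["contour"]))
    return ET.ElementTree(svg)

# Hypothetical region "extracted" from a raw image.
tree = make_abstraction_layer([{
    "id": "region-1", "label": "lesion-candidate",
    "mean_intensity": 112.4,
    "contour": [(100, 100), (140, 105), (135, 150), (98, 142)],
}])
tree.write("abstraction.svg", xml_declaration=True, encoding="utf-8")
```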
National Maternal and Child Oral Health Resource Center
Construction of 3-D Earth Models for Station Specific Path Corrections by Dynamic Ray Tracing
2001-10-01
The numerical eikonal solution method of Vidale (1988) is being used by the MIT-led consortium. The model construction described in this report relies... assembled. REFERENCES: Barazangi, M., Fielding, E., Isacks, B. & Seber, D. (1996), Geophysical and Geological Databases and CTBT...; Fielding, E., Isacks, B.L., and Barazangi, M. (1992), A Network Accessible Geological and Geophysical Database for...
Improvements to the Magnetics Information Consortium (MagIC) Paleo and Rock Magnetic Database
NASA Astrophysics Data System (ADS)
Jarboe, N.; Minnett, R.; Tauxe, L.; Koppers, A. A. P.; Constable, C.; Jonestrask, L.
2015-12-01
The Magnetic Information Consortium (MagIC) database (http://earthref.org/MagIC/) continues to improve the ease of data uploading and editing, the creation of complex searches, data visualization, and data downloads for the paleomagnetic, geomagnetic, and rock magnetic communities. Online data editing is now available, eliminating the need for proprietary spreadsheet software. The data owner can change values in the database or delete entries through an HTML 5 web interface that resembles typical spreadsheets in behavior and use. Additive uploading now allows additions to data sets to be uploaded through a simple drag-and-drop interface. Searching the database has improved with the addition of more sophisticated search parameters and with the facility to use them in complex combinations. A comprehensive summary view of a search result has been added to aid quick data comprehension, while a raw data view is available if one desires to see all data columns as stored in the database. Data visualization plots (ARAI, equal area, demagnetization, Zijderveld, etc.) are presented with the data when appropriate to aid the user in understanding the dataset. MagIC data associated with individual contributions or from online searches may be downloaded in the tab-delimited MagIC text file format for subsequent offline use and analysis. With input from the paleomagnetic, geomagnetic, and rock magnetic communities, the MagIC database will continue to improve as a data warehouse and resource.
A New Interface for the Magnetics Information Consortium (MagIC) Paleo and Rock Magnetic Database
NASA Astrophysics Data System (ADS)
Jarboe, N.; Minnett, R.; Koppers, A. A. P.; Tauxe, L.; Constable, C.; Shaar, R.; Jonestrask, L.
2014-12-01
The Magnetic Information Consortium (MagIC) database (http://earthref.org/MagIC/) continues to improve the ease of uploading data, the creation of complex searches, data visualization, and data downloads for the paleomagnetic, geomagnetic, and rock magnetic communities. Data uploading has been simplified and no longer requires the use of the Excel SmartBook interface. Instead, properly formatted MagIC text files can be dragged-and-dropped onto an HTML 5 web interface. Data can be uploaded one table at a time to facilitate ease of uploading and data error checking is done online on the whole dataset at once instead of incrementally in an Excel Console. Searching the database has improved with the addition of more sophisticated search parameters and with the ability to use them in complex combinations. Searches may also be saved as permanent URLs for easy reference or for use as a citation in a publication. Data visualization plots (ARAI, equal area, demagnetization, Zijderveld, etc.) are presented with the data when appropriate to aid the user in understanding the dataset. Data from the MagIC database may be downloaded from individual contributions or from online searches for offline use and analysis in the tab delimited MagIC text file format. With input from the paleomagnetic, geomagnetic, and rock magnetic communities, the MagIC database will continue to improve as a data warehouse and resource.
Kamitsuji, Shigeo; Matsuda, Takashi; Nishimura, Koichi; Endo, Seiko; Wada, Chisa; Watanabe, Kenji; Hasegawa, Koichi; Hishigaki, Haretsugu; Masuda, Masatoshi; Kuwahara, Yusuke; Tsuritani, Katsuki; Sugiura, Kenkichi; Kubota, Tomoko; Miyoshi, Shinji; Okada, Kinya; Nakazono, Kazuyuki; Sugaya, Yuki; Yang, Woosung; Sawamoto, Taiji; Uchida, Wataru; Shinagawa, Akira; Fujiwara, Tsutomu; Yamada, Hisaharu; Suematsu, Koji; Tsutsui, Naohisa; Kamatani, Naoyuki; Liou, Shyh-Yuh
2015-06-01
Japan Pharmacogenomics Data Science Consortium (JPDSC) has assembled a database for conducting pharmacogenomics (PGx) studies in Japanese subjects. The database contains the genotypes of 2.5 million single-nucleotide polymorphisms (SNPs) and 5 human leukocyte antigen loci from 2994 Japanese healthy volunteers, as well as 121 kinds of clinical information, including self-reports, physiological data, hematological data and biochemical data. In this article, the reliability of our data was evaluated by principal component analysis (PCA) and association analysis for hematological and biochemical traits by using genome-wide SNP data. PCA of the SNPs showed that all the samples were collected from the Japanese population and that the samples were separated into two major clusters by birthplace, Okinawa and other than Okinawa, as had been previously reported. Among 87 SNPs that have been reported to be associated with 18 hematological and biochemical traits in genome-wide association studies (GWAS), the associations of 56 SNPs were replicated using our database. Statistical power simulations showed that the sample size of the JPDSC control database is large enough to detect genetic markers having a relatively strong association even when the case sample size is small. The JPDSC database will be useful as control data for conducting PGx studies to explore genetic markers to improve the safety and efficacy of drugs either during clinical development or in post-marketing.
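The population-structure check described above can be sketched as a PCA over a genotype matrix. The snippet below assumes genotypes coded as minor-allele counts (0/1/2) and uses synthetic data with scikit-learn; it illustrates the idea only and is not the JPDSC analysis pipeline.

```python
# Minimal sketch: population-structure check with PCA on a genotype matrix.
# Genotypes are coded as minor-allele counts (0/1/2); the data here are synthetic.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_samples, n_snps = 200, 5000
genotypes = rng.integers(0, 3, size=(n_samples, n_snps)).astype(float)

# Standardize each SNP before PCA, as is common in GWAS-style analyses.
genotypes -= genotypes.mean(axis=0)
std = genotypes.std(axis=0)
std[std == 0] = 1.0
genotypes /= std

pca = PCA(n_components=2)
pcs = pca.fit_transform(genotypes)
print("explained variance ratio:", pca.explained_variance_ratio_)
# Plotting PC1 vs PC2 would reveal clusters such as the Okinawa / non-Okinawa split.
```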
A new 3D texture feature based computer-aided diagnosis approach to differentiate pulmonary nodules
NASA Astrophysics Data System (ADS)
Han, Fangfang; Wang, Huafeng; Song, Bowen; Zhang, Guopeng; Lu, Hongbing; Moore, William; Zhao, Hong; Liang, Zhengrong
2013-02-01
To distinguish malignant pulmonary nodules from benign ones is of much importance in computer-aided diagnosis of lung diseases. In contrast to many previous methods based on assessing nodule shape or growth, the proposed three-dimensional (3D) texture-feature-based approach extracts fifty kinds of 3D textural features from gray-level, gradient, and curvature co-occurrence matrices and further derivatives of the nodule volume data. To evaluate the presented approach, the Lung Image Database Consortium public database was downloaded. Each case in the database contains an annotation file, which records the diagnosis results of up to four radiologists. To relieve the partial-volume effect, interpolation was applied to volume data with image slice thickness greater than 1 mm, and the downloaded datasets were divided into five groups to validate the proposed approach: one group with thickness less than 1 mm, and two thickness ranges (1 mm to 1.25 mm, and greater than 1.25 mm), each with and without interpolation. Because the support vector machine is based on statistical learning theory and aims to generalize to future data, it was chosen as the classifier to perform the differentiation task. Performance was measured by the area under the curve (AUC) of the receiver operating characteristic. From 284 nodules (122 malignant and 162 benign), the validation experiments reported a mean AUC of 0.9051 with a standard deviation of 0.0397, averaged over 100 randomizations.
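A minimal sketch of the evaluation stage described above (an SVM classifier scored by ROC AUC over repeated random splits) is shown below. The 50-dimensional feature matrix is a synthetic placeholder rather than the LIDC-derived texture features, and the train/test split strategy is an assumption, not taken from the paper.

```python
# Sketch of the evaluation stage only: SVM + ROC AUC over repeated random splits.
# The 50-dimensional "texture features" here are synthetic placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
X = rng.normal(size=(284, 50))          # 284 nodules x 50 texture features
y = np.r_[np.ones(122), np.zeros(162)]  # 122 malignant, 162 benign

aucs = []
for seed in range(100):                 # 100 randomizations, as in the study
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=seed)
    scaler = StandardScaler().fit(X_tr)
    clf = SVC(kernel="rbf", probability=True).fit(scaler.transform(X_tr), y_tr)
    scores = clf.predict_proba(scaler.transform(X_te))[:, 1]
    aucs.append(roc_auc_score(y_te, scores))

print(f"mean AUC = {np.mean(aucs):.4f} +/- {np.std(aucs):.4f}")
```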
Riccardi, Alessandro; Petkov, Todor Sergueev; Ferri, Gianluca; Masotti, Matteo; Campanini, Renato
2011-04-01
The authors presented a novel system for automated nodule detection in lung CT exams. The approach is based on (1) a lung tissue segmentation preprocessing step, composed of histogram thresholding, seeded region growing, and mathematical morphology; (2) a filtering step, whose aim is the preliminary detection of candidate nodules (via 3D fast radial filtering) and estimation of their geometrical features (via scale space analysis); and (3) a false positive reduction (FPR) step, comprising a heuristic FPR, which applies thresholds based on geometrical features, and a supervised FPR, which is based on support vector machines classification and which, in turn, is enhanced by a feature extraction algorithm based on maximum intensity projection processing and Zernike moments. The system was validated on 154 chest axial CT exams provided by the Lung Image Database Consortium public database. The authors obtained correct detection of 71% of nodules marked by all radiologists, with a false positive rate of 6.5 false positives per patient (FP/patient). A higher specificity of 2.5 FP/patient was reached with a sensitivity of 60%. An independent test on the ANODE09 competition database obtained an overall score of 0.310. The system shows a novel approach to the problem of lung nodule detection in CT scans: it relies on filtering techniques, image transforms, and descriptors rather than region growing and nodule segmentation, and the results are comparable to those of other recent systems in the literature and show little dependency on the different types of nodules, which is a good sign of robustness.
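The first preprocessing step (lung tissue segmentation by histogram thresholding plus morphology) could look roughly like the following scikit-image/SciPy sketch for a single axial slice. The -400 HU threshold, structuring-element size, and minimum object size are illustrative choices, not the authors' parameters.

```python
# Sketch of the lung-tissue preprocessing step only (thresholding + morphology),
# on a single CT slice in Hounsfield units; the -400 HU threshold is illustrative.
import numpy as np
from scipy import ndimage
from skimage import morphology

def segment_lungs(slice_hu: np.ndarray) -> np.ndarray:
    """Return a boolean mask of candidate lung tissue for one axial slice."""
    body_air = slice_hu < -400                 # air-like pixels (lungs + background)
    # Remove the background air connected to the image border.
    labels, _ = ndimage.label(body_air)
    border_labels = np.unique(np.r_[labels[0, :], labels[-1, :],
                                    labels[:, 0], labels[:, -1]])
    lungs = body_air & ~np.isin(labels, border_labels[border_labels != 0])
    # Morphological cleanup: close small gaps and drop tiny components.
    lungs = morphology.binary_closing(lungs, morphology.disk(5))
    lungs = morphology.remove_small_objects(lungs, min_size=500)
    return lungs

# Usage with a synthetic slice (real input would come from a CT exam):
fake_slice = np.full((512, 512), 40.0)
fake_slice[150:350, 100:220] = -800   # crude "lung" region
mask = segment_lungs(fake_slice)
print("lung pixels:", int(mask.sum()))
```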
Bonnie Ruefenacht; Robert Benton; Vicky Johnson; Tanushree Biswas; Craig Baker; Mark Finco; Kevin Megown; John Coulston; Ken Winterberger; Mark Riley
2015-01-01
A tree canopy cover (TCC) layer is one of three elements in the National Land Cover Database (NLCD) 2011 suite of nationwide geospatial data layers. In 2010, the USDA Forest Service (USFS) committed to creating the TCC layer as a member of the Multi-Resolution Land Characteristics (MRLC) consortium. A general methodology for creating the TCC layer was reported at the 2012 FIA...
LungMAP: The Molecular Atlas of Lung Development Program.
Ardini-Poleske, Maryanne E; Clark, Robert F; Ansong, Charles; Carson, James P; Corley, Richard A; Deutsch, Gail H; Hagood, James S; Kaminski, Naftali; Mariani, Thomas J; Potter, Steven S; Pryhuber, Gloria S; Warburton, David; Whitsett, Jeffrey A; Palmer, Scott M; Ambalavanan, Namasivayam
2017-11-01
The National Heart, Lung, and Blood Institute is funding an effort to create a molecular atlas of the developing lung (LungMAP) to serve as a research resource and public education tool. The lung is a complex organ with lengthy development time driven by interactive gene networks and dynamic cross talk among multiple cell types to control and coordinate lineage specification, cell proliferation, differentiation, migration, morphogenesis, and injury repair. A better understanding of the processes that regulate lung development, particularly alveologenesis, will have a significant impact on survival rates for premature infants born with incomplete lung development and will facilitate lung injury repair and regeneration in adults. A consortium of four research centers, a data coordinating center, and a human tissue repository provides high-quality molecular data of developing human and mouse lungs. LungMAP includes mouse and human data for cross correlation of developmental processes across species. LungMAP is generating foundational data and analysis, creating a web portal for presentation of results and public sharing of data sets, establishing a repository of young human lung tissues obtained through organ donor organizations, and developing a comprehensive lung ontology that incorporates the latest findings of the consortium. The LungMAP website (www.lungmap.net) currently contains more than 6,000 high-resolution lung images and transcriptomic, proteomic, and lipidomic human and mouse data and provides scientific information to stimulate interest in research careers for young audiences. This paper presents a brief description of research conducted by the consortium, database, and portal development and upcoming features that will enhance the LungMAP experience for a community of users. Copyright © 2017 the American Physiological Society.
Filmless PACS in a multiple facility environment
NASA Astrophysics Data System (ADS)
Wilson, Dennis L.; Glicksman, Robert A.; Prior, Fred W.; Siu, Kai-Yeung; Goldburgh, Mitchell M.
1996-05-01
A Picture Archiving and Communication System centered on a shared image file server can support a filmless hospital. Systems based on this architecture have proven themselves in over four years of clinical operation. Changes in healthcare delivery are causing radiology groups to support multiple facilities for remote clinic support and consolidation of services. There will be a corresponding need for communicating over a standardized wide area network (WAN). Interactive workflow, a natural extension to the single facility case, requires a means to work effectively and seamlessly across moderate to low speed communication networks. Several schemes for supporting a consortium of medical treatment facilities over a WAN are explored. Both centralized and distributed database approaches are evaluated against several WAN scenarios. Likewise, several architectures for distributing image file servers or buffers over a WAN are explored, along with the caching and distribution strategies that support them. An open system implementation is critical to the success of a wide area system. The role of the Digital Imaging and Communications in Medicine (DICOM) standard in supporting multi-facility and multi-vendor open systems is also addressed. An open system can be achieved by using a DICOM server to provide a view of the system-wide distributed database. The DICOM server interface to a local version of the global database lets a local workstation treat the multiple, distributed data servers as though they were one local server for purposes of examination queries. The query will recover information about the examination that will permit retrieval over the network from the server on which the examination resides. For efficiency reasons, the ability to build cross-facility radiologist worklists and clinician-oriented patient folders is essential. The technologies of the World-Wide-Web can be used to generate worklists and patient folders across facilities. A reliable broadcast protocol may be a convenient way to notify many different users and many image servers about new activities in the network of image servers. In addition to ensuring reliability of message delivery and global serialization of each broadcast message in the network, the broadcast protocol should not introduce significant communication overhead.
RNAcentral: an international database of ncRNA sequences
Williams, Kelly Porter
2014-10-28
The field of non-coding RNA biology has been hampered by the lack of availability of a comprehensive, up-to-date collection of accessioned RNA sequences. Here we present the first release of RNAcentral, a database that collates and integrates information from an international consortium of established RNA sequence databases. The initial release contains over 8.1 million sequences, including representatives of all major functional classes. A web portal (http://rnacentral.org) provides free access to data, search functionality, cross-references, source code and an integrated genome browser for selected species.
Sun, Wenqing; Zheng, Bin; Qian, Wei
2017-10-01
This study aimed to analyze the ability of automatically generated features extracted by deep structured algorithms in lung nodule CT image diagnosis, and to compare their performance with traditional computer-aided diagnosis (CADx) systems using hand-crafted features. All 1018 cases were acquired from the Lung Image Database Consortium (LIDC) public lung cancer database. The nodules were segmented according to four radiologists' markings, and 13,668 samples were generated by rotating every slice of the nodule images. Three multichannel ROI-based deep structured algorithms were designed and implemented in this study: a convolutional neural network (CNN), a deep belief network (DBN), and a stacked denoising autoencoder (SDAE). For comparison, we also implemented a CADx system using hand-crafted features including density, texture, and morphological features. The performance of every scheme was evaluated using 10-fold cross-validation and the area under the receiver operating characteristic curve (AUC). The highest observed AUC was 0.899±0.018, achieved by the CNN, which was significantly higher than that of the traditional CADx (AUC = 0.848±0.026). The result from the DBN was also slightly higher than CADx, while the SDAE was slightly lower. By visualizing the automatically generated features, we found some meaningful detectors, such as curvy-stroke detectors, in the deep structured schemes. The results show that deep structured algorithms with automatically generated features can achieve desirable performance in lung nodule diagnosis. With well-tuned parameters and a large enough dataset, deep learning algorithms can outperform the current popular CADx. We believe deep learning algorithms with a similar data preprocessing procedure can be used in other medical image analysis areas as well. Copyright © 2017. Published by Elsevier Ltd.
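For readers unfamiliar with the deep structured schemes mentioned above, the following is a minimal PyTorch sketch of a CNN for benign/malignant classification of ROI patches. The two-convolution architecture and 64x64 patch size are illustrative assumptions and do not reproduce the authors' multichannel design.

```python
# Minimal sketch of a CNN for benign/malignant ROI classification (PyTorch).
# The architecture and 64x64 patch size are illustrative, not the authors' design.
import torch
import torch.nn as nn

class NoduleCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 64 -> 32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32 -> 16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 2),                     # benign vs. malignant logits
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = NoduleCNN()
patches = torch.randn(8, 1, 64, 64)              # a synthetic mini-batch of ROIs
logits = model(patches)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 2, (8,)))
loss.backward()
print(logits.shape, float(loss))
```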
Correlation coefficient based supervised locally linear embedding for pulmonary nodule recognition.
Wu, Panpan; Xia, Kewen; Yu, Hengyong
2016-11-01
Dimensionality reduction techniques are developed to suppress the negative effects of the high-dimensional feature space of lung CT images on classification performance in computer-aided detection (CAD) systems for pulmonary nodule detection. An improved supervised locally linear embedding (SLLE) algorithm is proposed based on the concept of the correlation coefficient. The Spearman's rank correlation coefficient is introduced to adjust the distance metric in the SLLE algorithm to ensure that more suitable neighborhood points are identified, and thus to enhance the discriminating power of the embedded data. The proposed Spearman's rank correlation coefficient based SLLE (SC(2)SLLE) is implemented and validated in our pilot CAD system using a clinical dataset collected from the publicly available Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI). In particular, a representative CAD system for solitary pulmonary nodule detection is designed and implemented. After a sequence of medical image processing steps, 64 nodules and 140 non-nodules are extracted, and 34 representative features are calculated. The SC(2)SLLE, as well as the SLLE and LLE algorithms, are applied to reduce the dimensionality. Several quantitative measurements are also used to evaluate and compare the performances. Using a 5-fold cross-validation methodology, the proposed algorithm achieves 87.65% accuracy, 79.23% sensitivity, 91.43% specificity, and an 8.57% false positive rate, on average. Experimental results indicate that the proposed algorithm outperforms the original locally linear embedding and SLLE coupled with the support vector machine (SVM) classifier. Based on the preliminary results from a limited number of nodules in our dataset, this study demonstrates the great potential to improve the performance of a CAD system for nodule detection using the proposed SC(2)SLLE. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
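The core modification in SC(2)SLLE, replacing the Euclidean distance with a distance derived from Spearman's rank correlation when selecting neighborhood points, can be sketched as follows. This shows only the neighbor-selection ingredient on synthetic data; it is not the full SLLE embedding.

```python
# Sketch of the core idea only: choose nearest neighbors using a distance
# derived from Spearman's rank correlation instead of Euclidean distance.
import numpy as np
from scipy.stats import spearmanr

def spearman_distance_matrix(X: np.ndarray) -> np.ndarray:
    """Distance = 1 - Spearman correlation between the feature vectors of two samples."""
    rho, _ = spearmanr(X.T)          # (n_samples x n_samples) correlation matrix
    return 1.0 - rho

def k_nearest_neighbors(D: np.ndarray, k: int) -> np.ndarray:
    """Indices of the k nearest neighbors (excluding self) for each sample."""
    order = np.argsort(D, axis=1)
    return order[:, 1:k + 1]

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 34))        # e.g. 30 candidates x 34 features
D = spearman_distance_matrix(X)
print(k_nearest_neighbors(D, k=5)[:3])
```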
Wang, Qian; Song, Enmin; Jin, Renchao; Han, Ping; Wang, Xiaotong; Zhou, Yanying; Zeng, Jianchao
2009-06-01
The aim of this study was to develop a novel algorithm for segmenting lung nodules on three-dimensional (3D) computed tomographic images to improve the performance of computer-aided diagnosis (CAD) systems. The database used in this study consists of two data sets obtained from the Lung Imaging Database Consortium. The first data set, containing 23 nodules (22% irregular nodules, 13% nonsolid nodules, 17% nodules attached to other structures), was used for training. The second data set, containing 64 nodules (37% irregular nodules, 40% nonsolid nodules, 62% nodules attached to other structures), was used for testing. Two key techniques were developed in the segmentation algorithm: (1) a 3D extended dynamic programming model, with a newly defined internal cost function based on the information between adjacent slices, allowing parameters to be adapted to each slice, and (2) a multidirection fusion technique, which makes use of the complementary relationships among different directions to improve the final segmentation accuracy. The performance of this approach was evaluated by the overlap criterion, complemented by the true-positive fraction and the false-positive fraction criteria. The mean values of the overlap, true-positive fraction, and false-positive fraction for the first data set achieved using the segmentation scheme were 66%, 75%, and 15%, respectively, and the corresponding values for the second data set were 58%, 71%, and 22%, respectively. The experimental results indicate that this segmentation scheme can achieve better performance for nodule segmentation than two existing algorithms reported in the literature. The proposed 3D extended dynamic programming model is an effective way to segment sequential images of lung nodules. The proposed multidirection fusion technique is capable of reducing segmentation errors especially for no-nodule and near-end slices, thus resulting in better overall performance.
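The multidirection fusion idea can be illustrated with a simple majority vote over binary masks obtained along the three orthogonal directions, as in the sketch below. The voting rule is an assumption for illustration; the authors' actual fusion strategy may combine directions differently.

```python
# Sketch of multidirection fusion by majority vote over three binary masks
# (axial, coronal, sagittal). The authors' actual fusion rule may differ.
import numpy as np

def fuse_masks(axial: np.ndarray, coronal: np.ndarray, sagittal: np.ndarray) -> np.ndarray:
    """A voxel is kept if at least two of the three directional masks include it."""
    votes = axial.astype(int) + coronal.astype(int) + sagittal.astype(int)
    return votes >= 2

rng = np.random.default_rng(0)
shape = (32, 64, 64)                                   # a small nodule sub-volume
masks = [rng.random(shape) > 0.5 for _ in range(3)]    # stand-in segmentations
fused = fuse_masks(*masks)
print("fused voxels:", int(fused.sum()))
```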
1988-01-01
MONITORING ORGANIZATION Northeast Artificial (If applicaole)nelincCostum(AcRome Air Development Center (COCU) Inteligence Consortium (NAIC)I 6c. ADDRESS...f, Offell RADC-TR-88-1 1, Vol IV (of eight) Interim Technical ReportS June 1988 NORTHEAST ARTIFICIAL INTELLIGENCE CONSORTIUM ANNUAL REPORT 1986...13441-5700 EMENT NO NO NO ACCESSION NO62702F 5 8 71 " " over) I 58 27 13 " TITLE (Include Security Classification) NORTHEAST ARTIFICIAL INTELLIGENCE
Array Databases: Agile Analytics (not just) for the Earth Sciences
NASA Astrophysics Data System (ADS)
Baumann, P.; Misev, D.
2015-12-01
Gridded data, such as images, image timeseries, and climate datacubes, today are managed separately from the metadata, and with different, restricted retrieval capabilities. While databases are good at metadata modelled in tables, XML hierarchies, or RDF graphs, they traditionally do not support multi-dimensional arrays. This gap is being closed by Array Databases, pioneered by the scalable rasdaman ("raster data manager") array engine. Its declarative query language, rasql, extends SQL with array operators which are optimized and parallelized on the server side. Installations can easily be mashed up securely, thereby enabling large-scale location-transparent query processing in federations. Domain experts value the integration with their commonly used tools, leading to a quick learning curve. Earth, Space, and Life sciences, but also Social sciences as well as business, have massive amounts of data and complex analysis challenges that are answered by rasdaman. As of today, rasdaman is mature and in operational use on hundreds of Terabytes of timeseries datacubes, with transparent query distribution across more than 1,000 nodes. Additionally, its concepts have shaped international Big Data standards in the field, including the forthcoming array extension to ISO SQL, many of which are meanwhile supported by both open-source and commercial systems. In the geo field, rasdaman is the reference implementation for the Open Geospatial Consortium (OGC) Big Data standard, WCS, now also under adoption by ISO. Further, rasdaman is in the final stage of OSGeo incubation. In this contribution we present array queries a la rasdaman, describe the architecture and the novel optimization and parallelization techniques introduced in 2015, and put this in the context of the intercontinental EarthServer initiative, which utilizes rasdaman for enabling agile analytics on Petascale datacubes.
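To give a feel for the kind of operation a declarative array query expresses, the sketch below performs a spatial trim and a temporal aggregation over an in-memory NumPy "datacube". It is written in plain Python purely for illustration; in rasdaman the equivalent subset and reduction would be stated in rasql and evaluated and parallelized server-side.

```python
# Conceptual sketch only: the kind of operation an array query expresses,
# written here against an in-memory NumPy "datacube" (time x lat x lon),
# not in rasql itself.
import numpy as np

rng = np.random.default_rng(7)
datacube = rng.normal(loc=15.0, scale=5.0, size=(365, 180, 360))  # daily temperature

# "Trim" to a lat/lon window, then aggregate over time -- the sort of subset
# and reduction an array DBMS would evaluate close to the data.
window = datacube[:, 100:140, 200:260]
annual_mean_map = window.mean(axis=0)         # mean over the time axis
hottest_day = window.max(axis=(1, 2)).argmax()
print(annual_mean_map.shape, int(hottest_day))
```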
OPAC Missing Record Retrieval.
ERIC Educational Resources Information Center
Johnson, Karl E.
1996-01-01
When the Higher Education Library Information Network of Rhode Island transferred members' bibliographic data into a shared online public access catalog (OPAC), 10% of the University of Rhode Island's monograph records were missing. This article describes the consortium's attempts to retrieve records from the database and the effectiveness of…
Academic consortium for the evaluation of computer-aided diagnosis (CADx) in mammography
NASA Astrophysics Data System (ADS)
Mun, Seong K.; Freedman, Matthew T.; Wu, Chris Y.; Lo, Shih-Chung B.; Floyd, Carey E., Jr.; Lo, Joseph Y.; Chan, Heang-Ping; Helvie, Mark A.; Petrick, Nicholas; Sahiner, Berkman; Wei, Datong; Chakraborty, Dev P.; Clarke, Laurence P.; Kallergi, Maria; Clark, Bob; Kim, Yongmin
1995-04-01
Computer aided diagnosis (CADx) is a promising technology for the detection of breast cancer in screening mammography. A number of different approaches have been developed for CADx research that have achieved significant levels of performance. Research teams now recognize the need for a careful and detailed evaluation study of approaches to accelerate the development of CADx, to make CADx more clinically relevant, and to optimize the CADx algorithms based on unbiased evaluations. The results of such a comparative study may provide each of the participating teams with new insights into the optimization of their individual CADx algorithms. This consortium of experienced CADx researchers is working as a group to compare results of the algorithms and to optimize the performance of CADx algorithms by learning from each other. Each institution will be contributing an equal number of cases that will be collected under a standard protocol for case selection, truth determination, and data acquisition to establish a common and unbiased database for the evaluation study. An evaluation procedure for the comparison studies is being developed to analyze the results of individual algorithms for each of the test cases in the common database. Optimization of individual CADx algorithms can then be made based on the comparison studies. The consortium effort is expected to accelerate the eventual clinical implementation of CADx algorithms at participating institutions.
Rationale of the FIBROTARGETS study designed to identify novel biomarkers of myocardial fibrosis
Ferreira, João Pedro; Machu, Jean‐Loup; Girerd, Nicolas; Jaisser, Frederic; Thum, Thomas; Butler, Javed; González, Arantxa; Diez, Javier; Heymans, Stephane; McDonald, Kenneth; Gyöngyösi, Mariann; Firat, Hueseyin; Rossignol, Patrick; Pizard, Anne
2017-01-01
Aims: Myocardial fibrosis alters the cardiac architecture favouring the development of cardiac dysfunction, including arrhythmias and heart failure. Reducing myocardial fibrosis may improve outcomes through the targeted diagnosis and treatment of emerging fibrotic pathways. The European‐Commission‐funded 'FIBROTARGETS' is a multinational academic and industrial consortium with the main aims of (i) characterizing novel key mechanistic pathways involved in the metabolism of fibrillary collagen that may serve as biotargets, (ii) evaluating the potential anti‐fibrotic properties of novel or repurposed molecules interfering with the newly identified biotargets, and (iii) characterizing bioprofiles based on distinct mechanistic phenotypes involving the aforementioned biotargets. These pathways will be explored by performing a systematic and collaborative search for mechanisms and targets of myocardial fibrosis. These mechanisms will then be translated into individualized diagnostic tools and specific therapeutic pharmacological options for heart failure. Methods and results: The FIBROTARGETS consortium has merged data from 12 patient cohorts in a common database available to individual consortium partners. The database consists of >12 000 patients with a large spectrum of cardiovascular clinical phenotypes. It integrates community‐based population cohorts, cardiovascular risk cohorts, and heart failure cohorts. Conclusions: The FIBROTARGETS biomarker programme is aimed at exploring fibrotic pathways allowing the bioprofiling of patients into specific 'fibrotic' phenotypes and identifying new therapeutic targets that will potentially enable the development of novel and tailored anti‐fibrotic therapies for heart failure. PMID:28988439
Computerized lung cancer malignancy level analysis using 3D texture features
NASA Astrophysics Data System (ADS)
Sun, Wenqing; Huang, Xia; Tseng, Tzu-Liang; Zhang, Jianying; Qian, Wei
2016-03-01
Based on the likelihood of malignancy, the nodules in the Lung Image Database Consortium (LIDC) database are classified into five different levels. In this study, we tested the possibility of using three-dimensional (3D) texture features to identify the malignancy level of each nodule. Five groups of features were implemented and tested on 172 nodules with confident malignancy levels from four radiologists. These five feature groups are: grey level co-occurrence matrix (GLCM) features, local binary pattern (LBP) features, scale-invariant feature transform (SIFT) features, steerable features, and wavelet features. Because of the high dimensionality of our proposed features, multidimensional scaling (MDS) was used for dimension reduction. RUSBoost was applied to the extracted features for classification, due to its advantages in handling imbalanced datasets. Each group of features and the final combined features were used to classify nodules highly suspicious for cancer (level 5) and moderately suspicious (level 4). The results showed that the area under the curve (AUC) and accuracy are 0.7659 and 0.8365 when using the finalized features. These features were also tested on differentiating benign and malignant cases, and the reported AUC and accuracy were 0.8901 and 0.9353.
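One of the five feature families, GLCM texture descriptors, can be computed for a 2D region of interest with scikit-image as sketched below (the study uses 3D variants). The function names assume scikit-image 0.19 or later (graycomatrix/graycoprops); the patch and parameter choices are illustrative, not the authors' settings.

```python
# Sketch of one feature family only: 2D GLCM texture descriptors with scikit-image.
# (The study uses 3D features; this slice-level version just illustrates the idea.)
# Function names assume scikit-image >= 0.19 (graycomatrix / graycoprops).
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(3)
patch = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in ROI

glcm = graycomatrix(patch, distances=[1, 2], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
features = {prop: graycoprops(glcm, prop).ravel()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
for name, values in features.items():
    print(name, np.round(values, 3))
```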
A Four-Dimensional Probabilistic Atlas of the Human Brain
Mazziotta, John; Toga, Arthur; Evans, Alan; Fox, Peter; Lancaster, Jack; Zilles, Karl; Woods, Roger; Paus, Tomas; Simpson, Gregory; Pike, Bruce; Holmes, Colin; Collins, Louis; Thompson, Paul; MacDonald, David; Iacoboni, Marco; Schormann, Thorsten; Amunts, Katrin; Palomero-Gallagher, Nicola; Geyer, Stefan; Parsons, Larry; Narr, Katherine; Kabani, Noor; Le Goualher, Georges; Feidler, Jordan; Smith, Kenneth; Boomsma, Dorret; Pol, Hilleke Hulshoff; Cannon, Tyrone; Kawashima, Ryuta; Mazoyer, Bernard
2001-01-01
The authors describe the development of a four-dimensional atlas and reference system that includes both macroscopic and microscopic information on structure and function of the human brain in persons between the ages of 18 and 90 years. Given the presumed large but previously unquantified degree of structural and functional variance among normal persons in the human population, the basis for this atlas and reference system is probabilistic. Through the efforts of the International Consortium for Brain Mapping (ICBM), 7,000 subjects will be included in the initial phase of database and atlas development. For each subject, detailed demographic, clinical, behavioral, and imaging information is being collected. In addition, 5,800 subjects will contribute DNA for the purpose of determining genotype–phenotype–behavioral correlations. The process of developing the strategies, algorithms, data collection methods, validation approaches, database structures, and distribution of results is described in this report. Examples of applications of the approach are described for the normal brain in both adults and children as well as in patients with schizophrenia. This project should provide new insights into the relationship between microscopic and macroscopic structure and function in the human brain and should have important implications in basic neuroscience, clinical diagnostics, and cerebral disorders. PMID:11522763
Gianni, Daniele; McKeever, Steve; Yu, Tommy; Britten, Randall; Delingette, Hervé; Frangi, Alejandro; Hunter, Peter; Smith, Nicolas
2010-06-28
Sharing and reusing anatomical models over the Web offers a significant opportunity to progress the investigation of cardiovascular diseases. However, the current sharing methodology suffers from the limitations of static model delivery (i.e. embedding static links to the models within Web pages) and of a disaggregated view of the model metadata produced by publications and cardiac simulations in isolation. In the context of euHeart--a research project targeting the description and representation of cardiovascular models for disease diagnosis and treatment purposes--we aim to overcome the above limitations with the introduction of euHeartDB, a Web-enabled database for anatomical models of the heart. The database implements a dynamic sharing methodology by managing data access and by tracing all applications. In addition to this, euHeartDB establishes a knowledge link with the physiome model repository by linking geometries to CellML models embedded in the simulation of cardiac behaviour. Furthermore, euHeartDB uses the exFormat--a preliminary version of the interoperable FieldML data format--to effectively promote reuse of anatomical models, and currently incorporates Continuum Mechanics, Image Analysis, Signal Processing and System Identification Graphical User Interface (CMGUI), a rendering engine, to provide three-dimensional graphical views of the models populating the database. Currently, euHeartDB stores 11 cardiac geometries developed within the euHeart project consortium.
Lee, Jong Woo; LaRoche, Suzette; Choi, Hyunmi; Rodriguez Ruiz, Andres A; Fertig, Evan; Politsky, Jeffrey M; Herman, Susan T; Loddenkemper, Tobias; Sansevere, Arnold J; Korb, Pearce J; Abend, Nicholas S; Goldstein, Joshua L; Sinha, Saurabh R; Dombrowski, Keith E; Ritzl, Eva K; Westover, Michael B; Gavvala, Jay R; Gerard, Elizabeth E; Schmitt, Sarah E; Szaflarski, Jerzy P; Ding, Kan; Haas, Kevin F; Buchsbaum, Richard; Hirsch, Lawrence J; Wusthoff, Courtney J; Hopp, Jennifer L; Hahn, Cecil D
2016-04-01
The rapid expansion of the use of continuous critical care electroencephalogram (cEEG) monitoring and resulting multicenter research studies through the Critical Care EEG Monitoring Research Consortium has created the need for a collaborative data sharing mechanism and repository. The authors describe the development of a research database incorporating the American Clinical Neurophysiology Society standardized terminology for critical care EEG monitoring. The database includes flexible report generation tools that allow for daily clinical use. Key clinical and research variables were incorporated into a Microsoft Access database. To assess its utility for multicenter research data collection, the authors performed a 21-center feasibility study in which each center entered data from 12 consecutive intensive care unit monitoring patients. To assess its utility as a clinical report generating tool, three large volume centers used it to generate daily clinical critical care EEG reports. A total of 280 subjects were enrolled in the multicenter feasibility study. The duration of recording (median, 25.5 hours) varied significantly between the centers. The incidence of seizure (17.6%), periodic/rhythmic discharges (35.7%), and interictal epileptiform discharges (11.8%) was similar to previous studies. The database was used as a clinical reporting tool by 3 centers that entered a total of 3,144 unique patients covering 6,665 recording days. The Critical Care EEG Monitoring Research Consortium database has been successfully developed and implemented with a dual role as a collaborative research platform and a clinical reporting tool. It is now available for public download to be used as a clinical data repository and report generating tool.
Stanhope, Steven J; Wilken, Jason M; Pruziner, Alison L; Dearth, Christopher L; Wyatt, Marilynn; Ziemke, Gregg W; Strickland, Rachel; Milbourne, Suzanne A; Kaufman, Kenton R
2016-11-01
The Bridging Advanced Developments for Exceptional Rehabilitation (BADER) Consortium began in September 2011 as a cooperative agreement with the Department of Defense (DoD) Congressionally Directed Medical Research Programs Peer Reviewed Orthopaedic Research Program. A partnership was formed with DoD Military Treatment Facilities (MTFs), U.S. Department of Veterans Affairs (VA) Centers, the National Institutes of Health (NIH), academia, and industry to rapidly conduct innovative, high-impact, and sustainable clinically relevant research. The BADER Consortium has a unique research capacity-building focus that creates infrastructures and strategically connects and supports research teams to conduct multiteam research initiatives primarily led by MTF and VA investigators. BADER relies on strong partnerships with these agencies to strengthen and support orthopaedic rehabilitation research. Its focus is on the rapid forming and execution of projects focused on obtaining optimal functional outcomes for patients with limb loss and limb injuries. The Consortium is based on an NIH research capacity-building model that comprises essential research support components that are anchored by a set of BADER-funded and initiative-launching studies. Through a partnership with the DoD/VA Extremity Trauma and Amputation Center of Excellence, the BADER Consortium's research initiative-launching program has directly supported the identification and establishment of eight BADER-funded clinical studies. BADER's Clinical Research Core (CRC) staff, who are embedded within each of the MTFs, have supported an additional 37 non-BADER Consortium-funded projects. Additional key research support infrastructures that expedite the process for conducting multisite clinical trials include an omnibus Cooperative Research and Development Agreement and the NIH Clinical Trials Database. A 2015 Defense Health Board report highlighted the Consortium's vital role, stating the research capabilities of the DoD Advanced Rehabilitation Centers are significantly enhanced and facilitated by the BADER Consortium. Reprint & Copyright © 2016 Association of Military Surgeons of the U.S.
Distributed Access View Integrated Database (DAVID) system
NASA Technical Reports Server (NTRS)
Jacobs, Barry E.
1991-01-01
The Distributed Access View Integrated Database (DAVID) System, which was adopted by the Astrophysics Division for their Astrophysics Data System, is a solution to the system heterogeneity problem. The heterogeneous components of the Astrophysics problem are outlined. The Library and Library Consortium levels of the DAVID approach are described. The 'books' and 'kits' level is discussed. The Universal Object Typer Management System level is described. The relation of the DAVID project with the Small Business Innovative Research (SBIR) program is explained.
Pancreatic Cancer Detection Consortium (PCDC) | Division of Cancer Prevention
78 FR 18611 - Summit on Color in Medical Imaging; Cosponsored Public Workshop; Request for Comments
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-27
...] Summit on Color in Medical Imaging; Cosponsored Public Workshop; Request for Comments AGENCY: Food and...: The Food and Drug Administration (FDA) and cosponsor International Color Consortium (ICC) are announcing the following public workshop entitled ``Summit on Color in Medical Imaging: An International...
The CTSA Consortium's Catalog of Assets for Translational and Clinical Health Research (CATCHR)
Mapes, Brandy; Basford, Melissa; Zufelt, Anneliese; Wehbe, Firas; Harris, Paul; Alcorn, Michael; Allen, David; Arnim, Margaret; Autry, Susan; Briggs, Michael S.; Carnegie, Andrea; Chavis‐Keeling, Deborah; De La Pena, Carlos; Dworschak, Doris; Earnest, Julie; Grieb, Terri; Guess, Marilyn; Hafer, Nathaniel; Johnson, Tesheia; Kasper, Amanda; Kopp, Janice; Lockie, Timothy; Lombardo, Vincetta; McHale, Leslie; Minogue, Andrea; Nunnally, Beth; O'Quinn, Deanna; Peck, Kelly; Pemberton, Kieran; Perry, Cheryl; Petrie, Ginny; Pontello, Andria; Posner, Rachel; Rehman, Bushra; Roth, Deborah; Sacksteder, Paulette; Scahill, Samantha; Schieri, Lorri; Simpson, Rosemary; Skinner, Anne; Toussant, Kim; Turner, Alicia; Van der Put, Elaine; Wasser, June; Webb, Chris D.; Williams, Maija; Wiseman, Lori; Yasko, Laurel; Pulley, Jill
2014-01-01
Abstract The 61 CTSA Consortium sites are home to valuable programs and infrastructure supporting translational science and all are charged with ensuring that such investments translate quickly to improved clinical care. Catalog of Assets for Translational and Clinical Health Research (CATCHR) is the Consortium's effort to collect and make available information on programs and resources to maximize efficiency and facilitate collaborations. By capturing information on a broad range of assets supporting the entire clinical and translational research spectrum, CATCHR aims to provide the necessary infrastructure and processes to establish and maintain an open‐access, searchable database of consortium resources to support multisite clinical and translational research studies. Data are collected using rigorous, defined methods, with the resulting information made visible through an integrated, searchable Web‐based tool. Additional easy‐to‐use Web tools assist resource owners in validating and updating resource information over time. In this paper, we discuss the design and scope of the project, data collection methods, current results, and future plans for development and sustainability. With increasing pressure on research programs to avoid redundancy, CATCHR aims to make available information on programs and core facilities to maximize efficient use of resources. PMID:24456567
Enhancing Transfer Effectiveness: A Model for the 1990s.
ERIC Educational Resources Information Center
Berman, Paul; And Others
In an effort to identify effective transfer practices appropriate to different community college circumstances, and to establish a quantitative database that would enable valid comparisons of transfer between their 28 member institutions, the National Effective Transfer Consortium (NETC) sponsored a survey of more than 30,000 students attending…
The ICPSR and Social Science Research
ERIC Educational Resources Information Center
Johnson, Wendell G.
2008-01-01
The Inter-university Consortium for Political and Social Research (ICPSR), a unit within the Institute for Social Research at the University of Michigan, is the world's largest social science data archive. The data sets in the ICPSR database give the social sciences librarian/subject specialist an opportunity of providing value-added bibliographic…
The study is a consortium between the U.S. Environmental Protection Agency (National Risk Management Research Laboratory) and the U.S. Geological Survey (Baltimore and Dover). The objectives of this study are: (1) to develop a geohydrological database for paired agricultural wate...
Consortial IT Services: Collaborating To Reduce the Pain.
ERIC Educational Resources Information Center
Klonoski, Ed
The Connecticut Distance Learning Consortium (CTDLC) provides its 32 members with Information Technologies (IT) services including a portal Web site, course management software, course hosting and development, faculty training, a help desk, online assessment, and a student financial aid database. These services are supplied to two- and four-year…
The CNES Gaia Data Processing Center: A Challenge and its Solutions
NASA Astrophysics Data System (ADS)
Chaoul, Laurence; Valette, Veronique
2011-08-01
After a brief reminder of the ESA Gaia project, this paper presents the data processing consortium (DPAC) and then the CNES data processing centre (DPCC). We focus on the challenges in terms of organisational aspects, processing capabilities, and database volumetry, and how we deal with these topics.
Distributed databases for materials study of thermo-kinetic properties
NASA Astrophysics Data System (ADS)
Toher, Cormac
2015-03-01
High-throughput computational materials science provides researchers with the opportunity to rapidly generate large databases of materials properties. To rapidly add thermal properties to the AFLOWLIB consortium and Materials Project repositories, we have implemented an automated quasi-harmonic Debye model, the Automatic GIBBS Library (AGL). This enables us to screen thousands of materials for thermal conductivity, bulk modulus, thermal expansion and related properties. The search and sort functions of the online database can then be used to identify suitable materials for more in-depth study using more precise computational or experimental techniques. The AFLOW-AGL source code is public domain and will soon be released under the GNU-GPL license.
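As a worked example of one ingredient of the quasi-harmonic Debye model, the snippet below evaluates the standard Debye expression for the constant-volume heat capacity by numerical integration. The Debye temperature used is an arbitrary illustrative value, not an AGL-computed one.

```python
# Worked example of one Debye-model ingredient: constant-volume heat capacity
# C_v(T) = 9 n k_B (T/theta_D)^3 * Integral_0^{theta_D/T} x^4 e^x / (e^x - 1)^2 dx.
# The Debye temperature below is illustrative, not an AGL-computed value.
import numpy as np
from scipy.integrate import quad

K_B = 1.380649e-23  # J/K

def debye_heat_capacity(temperature: float, theta_d: float, n_atoms: float) -> float:
    """Heat capacity (J/K) of n_atoms atoms at the given temperature."""
    if temperature <= 0:
        return 0.0
    x_max = theta_d / temperature
    integral, _ = quad(lambda x: x**4 * np.exp(x) / (np.exp(x) - 1.0) ** 2, 0, x_max)
    return 9.0 * n_atoms * K_B * (temperature / theta_d) ** 3 * integral

n_avogadro = 6.02214076e23
for t in (100, 300, 1000):
    cv = debye_heat_capacity(t, theta_d=400.0, n_atoms=n_avogadro)
    print(f"T = {t:4d} K  ->  C_v = {cv:6.2f} J/(mol*K)")
```

At high temperature the result approaches the classical Dulong-Petit limit of about 24.9 J/(mol*K), a quick sanity check on the integration.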
Interoperability in planetary research for geospatial data analysis
NASA Astrophysics Data System (ADS)
Hare, Trent M.; Rossi, Angelo P.; Frigeri, Alessandro; Marmo, Chiara
2018-01-01
For more than a decade there has been a push in the planetary science community to support interoperable methods for accessing and working with geospatial data. Common geospatial data products for planetary research include image mosaics, digital elevation or terrain models, geologic maps, geographic location databases (e.g., craters, volcanoes) or any data that can be tied to the surface of a planetary body (including moons, comets or asteroids). Several U.S. and international cartographic research institutions have converged on mapping standards that embrace standardized geospatial image formats, geologic mapping conventions, U.S. Federal Geographic Data Committee (FGDC) cartographic and metadata standards, and notably on-line mapping services as defined by the Open Geospatial Consortium (OGC). The latter includes defined standards such as the OGC Web Mapping Services (simple image maps), Web Map Tile Services (cached image tiles), Web Feature Services (feature streaming), Web Coverage Services (rich scientific data streaming), and Catalog Services for the Web (data searching and discoverability). While these standards were developed for application to Earth-based data, they can be just as valuable for the planetary domain. Another initiative, called VESPA (Virtual European Solar and Planetary Access), will marry several of the above geoscience standards with astronomy-based standards as defined by the International Virtual Observatory Alliance (IVOA). This work outlines the current state of interoperability initiatives in use or in the process of being researched within the planetary geospatial community.
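As an example of how the OGC services mentioned above are exercised, the sketch below assembles a WMS 1.3.0 GetMap request URL. The endpoint, layer name, and CRS are hypothetical placeholders (an Earth CRS code is used purely for illustration); they do not refer to any particular planetary mapping service.

```python
# Sketch: assembling an OGC WMS 1.3.0 GetMap request URL.
# The endpoint and layer name are hypothetical placeholders.
from urllib.parse import urlencode

WMS_ENDPOINT = "https://example.org/planetary/wms"    # hypothetical service

params = {
    "SERVICE": "WMS",
    "VERSION": "1.3.0",
    "REQUEST": "GetMap",
    "LAYERS": "mars_shaded_relief",                   # hypothetical layer name
    "CRS": "EPSG:4326",                               # Earth CRS, purely illustrative
    "BBOX": "-30,-60,30,60",   # min lat, min lon, max lat, max lon (1.3.0 axis order)
    "WIDTH": "800",
    "HEIGHT": "400",
    "FORMAT": "image/png",
}
request_url = f"{WMS_ENDPOINT}?{urlencode(params)}"
print(request_url)
# The returned image could then be fetched with any HTTP client.
```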
NASA Astrophysics Data System (ADS)
Minnett, R.; Koppers, A. A. P.; Jarboe, N.; Tauxe, L.; Constable, C.; Jonestrask, L.; Shaar, R.
2014-12-01
Earth science grand challenges often require interdisciplinary and geographically distributed scientific collaboration to make significant progress. However, this organic collaboration between researchers, educators, and students only flourishes with the reduction or elimination of technological barriers. The Magnetics Information Consortium (http://earthref.org/MagIC/) is a grass-roots cyberinfrastructure effort envisioned by the geo-, paleo-, and rock magnetic scientific community to archive their wealth of peer-reviewed raw data and interpretations from studies on natural and synthetic samples. MagIC is dedicated to facilitating scientific progress towards several highly multidisciplinary grand challenges and the MagIC Database team is currently beta testing a new MagIC Search Interface and API designed to be flexible enough for the incorporation of large heterogeneous datasets and for horizontal scalability to tens of millions of records and hundreds of requests per second. In an effort to reduce the barriers to effective collaboration, the search interface includes a simplified data model and upload procedure, support for online editing of datasets amongst team members, commenting by reviewers and colleagues, and automated contribution workflows and data retrieval through the API. This web application has been designed to generalize to other databases in MagIC's umbrella website (EarthRef.org) so the Geochemical Earth Reference Model (http://earthref.org/GERM/) portal, Seamount Biogeosciences Network (http://earthref.org/SBN/), EarthRef Digital Archive (http://earthref.org/ERDA/) and EarthRef Reference Database (http://earthref.org/ERR/) will benefit from its development.
NASA Astrophysics Data System (ADS)
Heynderickx, Daniel
2012-07-01
The main objective of the SEPServer project (EU FP7 project 262773) is to produce a new tool, which greatly facilitates the investigation of solar energetic particles (SEPs) and their origin: a server providing SEP data, related electromagnetic (EM) observations and analysis methods, a comprehensive catalogue of the observed SEP events, and educational/outreach material on solar eruptions. The project is coordinated by the University of Helsinki. The project will combine data and knowledge from 11 European partners and several collaborating parties from Europe and US. The datasets provided by the consortium partners are collected in a MySQL database (using the ESA Open Data Interface under licence) on a server operated by DH Consultancy, which also hosts a web interface providing browsing, plotting and post-processing and analysis tools developed by the consortium, as well as a Solar Energetic Particle event catalogue. At this stage of the project, a prototype server has been established, which is presently undergoing testing by users inside the consortium. Using a centralized database has numerous advantages, including: homogeneous storage of the data, which eliminates the need for dataset specific file access routines once the data are ingested in the database; a homogeneous set of metadata describing the datasets on both a global and detailed level, allowing for automated access to and presentation of the various data products; standardised access to the data in different programming environments (e.g. php, IDL); elimination of the need to download data for individual data requests. SEPServer will, thus, add value to several space missions and Earth-based observations by facilitating the coordinated exploitation of and open access to SEP data and related EM observations, and promoting correct use of these data for the entire space research community. This will lead to new knowledge on the production and transport of SEPs during solar eruptions and facilitate the development of models for predicting solar radiation storms and calculation of expected fluxes/fluences of SEPs encountered by spacecraft in the interplanetary medium.
Freschi, Luca; Jeukens, Julie; Kukavica-Ibrulj, Irena; Boyle, Brian; Dupont, Marie-Josée; Laroche, Jérôme; Larose, Stéphane; Maaroufi, Halim; Fothergill, Joanne L.; Moore, Matthew; Winsor, Geoffrey L.; Aaron, Shawn D.; Barbeau, Jean; Bell, Scott C.; Burns, Jane L.; Camara, Miguel; Cantin, André; Charette, Steve J.; Dewar, Ken; Déziel, Éric; Grimwood, Keith; Hancock, Robert E. W.; Harrison, Joe J.; Heeb, Stephan; Jelsbak, Lars; Jia, Baofeng; Kenna, Dervla T.; Kidd, Timothy J.; Klockgether, Jens; Lam, Joseph S.; Lamont, Iain L.; Lewenza, Shawn; Loman, Nick; Malouin, François; Manos, Jim; McArthur, Andrew G.; McKeown, Josie; Milot, Julie; Naghra, Hardeep; Nguyen, Dao; Pereira, Sheldon K.; Perron, Gabriel G.; Pirnay, Jean-Paul; Rainey, Paul B.; Rousseau, Simon; Santos, Pedro M.; Stephenson, Anne; Taylor, Véronique; Turton, Jane F.; Waglechner, Nicholas; Williams, Paul; Thrane, Sandra W.; Wright, Gerard D.; Brinkman, Fiona S. L.; Tucker, Nicholas P.; Tümmler, Burkhard; Winstanley, Craig; Levesque, Roger C.
2015-01-01
The International Pseudomonas aeruginosa Consortium is sequencing over 1000 genomes and building an analysis pipeline for the study of Pseudomonas genome evolution, antibiotic resistance and virulence genes. Metadata, including genomic and phenotypic data for each isolate of the collection, are available through the International Pseudomonas Consortium Database (http://ipcd.ibis.ulaval.ca/). Here, we present our strategy and the results that emerged from the analysis of the first 389 genomes. With as yet unmatched resolution, our results confirm that P. aeruginosa strains can be divided into three major groups that are further divided into subgroups, some not previously reported in the literature. We also provide the first snapshot of P. aeruginosa strain diversity with respect to antibiotic resistance. Our approach will allow us to draw potential links between environmental strains and those implicated in human and animal infections, understand how patients become infected and how the infection evolves over time as well as identify prognostic markers for better evidence-based decisions on patient care. PMID:26483767
NASA Astrophysics Data System (ADS)
Huehnerhoff, Joseph; Ketzeback, William; Bradley, Alaina; Dembicky, Jack; Doughty, Caitlin; Hawley, Suzanne; Johnson, Courtney; Klaene, Mark; Leon, Ed; McMillan, Russet; Owen, Russell; Sayres, Conor; Sheen, Tyler; Shugart, Alysha
2016-08-01
The Astrophysical Research Consortium Telescope Imaging Camera, ARCTIC, is a new optical imaging camera now in use at the Astrophysical Research Consortium (ARC) 3.5m telescope at Apache Point Observatory (APO). As a facility instrument, the design criteria broadly encompassed many current and future science opportunities, and the components were built for quick repair or replacement, to minimize down-time. Examples include a quick change shutter, filter drive components accessible from the exterior and redundant amplifiers on the detector. The detector is a Semiconductor Technology Associates (STA) device with several key properties (e.g. high quantum efficiency, low read-noise, quick readout, minimal fringing, operational bandpass 350-950nm). Focal reducing optics (f/10.3 to f/8.0) were built to control aberrations over a 7.8'x7.8' field, with a plate scale of 0.11" per 0.15 micron pixel. The instrument body and dewar were designed to be simple and robust with only two components to the structure forward of the dewar, which in turn has minimal feedthroughs and permeation areas and holds a vacuum <10^-8 Torr. A custom shutter was also designed, using pneumatics as the driving force. This device provides exceptional performance and reduces heat near the optical path. Measured performance is repeatable at the 2 ms level and offers field uniformity to the same level of precision. The ARCTIC facility imager will provide excellent science capability with robust operation and minimal maintenance for the next decade or more at APO.
Yohda, Masafumi; Yagi, Osami; Takechi, Ayane; Kitajima, Mizuki; Matsuda, Hisashi; Miyamura, Naoaki; Aizawa, Tomoko; Nakajima, Mutsuyasu; Sunairi, Michio; Daiba, Akito; Miyajima, Takashi; Teruya, Morimi; Teruya, Kuniko; Shiroma, Akino; Shimoji, Makiko; Tamotsu, Hinako; Juan, Ayaka; Nakano, Kazuma; Aoyama, Misako; Terabayashi, Yasunobu; Satou, Kazuhito; Hirano, Takashi
2015-07-01
A Dehalococcoides-containing bacterial consortium that performed dechlorination of 0.20 mM cis-1,2-dichloroethene to ethene in 14 days was obtained from the sediment mud of a lotus field. To obtain detailed information about the consortium, the metagenome was analyzed using the short-read next-generation sequencer SOLiD 3. Matching the obtained sequence tags with the reference genome sequences indicated that the Dehalococcoides sp. in the consortium was highly homologous to Dehalococcoides mccartyi CBDB1 and BAV1. Sequence comparison with the reference sequence constructed from 16S rRNA gene sequences in a public database showed the presence of Sedimentibacter, Sulfurospirillum, Clostridium, Desulfovibrio, Parabacteroides, Alistipes, Eubacterium, Peptostreptococcus and Proteocatella in addition to Dehalococcoides sp. After further enrichment, the members of the consortium were narrowed down to almost three species. Finally, the full-length circular genome sequence of the Dehalococcoides sp. in the consortium, D. mccartyi IBARAKI, was determined by analyzing the metagenome with the single-molecule DNA sequencer PacBio RS. The accuracy of the sequence was confirmed by matching it to the tag sequences obtained by SOLiD 3. The genome is 1,451,062 nt and the number of CDS is 1566, which includes 3 rRNA genes and 47 tRNA genes. There are twenty-eight RDase genes that are accompanied by the genes for anchor proteins. The genome exhibits significant sequence identity with other Dehalococcoides spp. throughout the genome, but there is a significant difference in the distribution of RDase genes. The combination of a short-read next-generation DNA sequencer and a long-read single-molecule DNA sequencer gives detailed information about a bacterial consortium. Copyright © 2014 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Williams, J. W.; Ashworth, A. C.; Betancourt, J. L.; Bills, B.; Blois, J.; Booth, R.; Buckland, P.; Charles, D.; Curry, B. B.; Goring, S. J.; Davis, E.; Grimm, E. C.; Graham, R. W.; Smith, A. J.
2015-12-01
Community-supported data repositories (CSDRs) in paleoecology and paleoclimatology have a decades-long tradition and serve multiple critical scientific needs. CSDRs facilitate synthetic large-scale scientific research by providing open-access and curated data that employ community-supported metadata and data standards. CSDRs serve as a 'middle tail' or boundary organization between information scientists and the long-tail community of individual geoscientists collecting and analyzing paleoecological data. Over the past decades, a distributed network of CSDRs has emerged, each serving a particular suite of data and research communities, e.g. Neotoma Paleoecology Database, Paleobiology Database, International Tree Ring Database, NOAA NCEI for Paleoclimatology, Morphobank, iDigPaleo, and Integrated Earth Data Alliance. Recently, these groups have organized into a common Paleobiology Data Consortium dedicated to improving interoperability and sharing best practices and protocols. The Neotoma Paleoecology Database offers one example of an active and growing CSDR, designed to facilitate research into ecological and evolutionary dynamics during recent past global change. Neotoma combines a centralized database structure with distributed scientific governance via multiple virtual constituent data working groups. The Neotoma data model is flexible and can accommodate a variety of paleoecological proxies from many depositional contexts. Data input into Neotoma is done by trained Data Stewards, drawn from their communities. Neotoma data can be searched, viewed, and returned to users through multiple interfaces, including the interactive Neotoma Explorer map interface, REST-ful Application Programming Interfaces (APIs), the neotoma R package, and the Tilia stratigraphic software. Neotoma is governed by geoscientists and provides community engagement through training workshops for data contributors, stewards, and users. Neotoma is engaged in the Paleobiological Data Consortium and other efforts to improve interoperability among cyberinfrastructure in the paleogeosciences.
Das, Raima; Ghosh, Sankar Kumar
2017-04-01
The DNA repair pathway is a primary defense system that eliminates a wide variety of DNA damage. Any deficiencies in it are likely to cause the chromosomal instability that leads to cell malfunctioning and tumorigenesis. Genetic polymorphisms in DNA repair genes have demonstrated a significant association with cancer risk. Our study attempts to give a glimpse of the overall scenario of the germline polymorphisms in the DNA repair genes by taking into account the Exome Aggregation Consortium (ExAC) database as well as the Human Gene Mutation Database (HGMD) for evaluating the disease link, particularly in cancer. It has been found that the ExAC DNA repair dataset (which consists of 228 DNA repair genes) comprises 30.4% missense, 12.5% dbSNP-reported and 3.2% ClinVar-significant variants. 27% of all the missense variants have a deleterious SIFT score of 0.00 and 6% carry the most damaging PolyPhen-2 score of 1.00, thus affecting the protein structure and function. However, as per HGMD, only a fraction (1.2%) of ExAC DNA repair variants was found to be cancer-related, indicating that the remaining variants reported in both databases require further analysis. This, in turn, may expand the spectrum of reported cancer-linked variants in the DNA repair genes present in the ExAC database. Moreover, further in silico functional assay of the identified vital cancer-associated variants, which is essential to establish their actual biological significance, may shed some light on the field of targeted drug development in the near future. Copyright © 2017. Published by Elsevier B.V.
Bhutiani, Neal; Scoggins, Charles R; McMasters, Kelly M; Ethun, Cecilia G; Poultsides, George A; Pawlik, Timothy M; Weber, Sharon M; Schmidt, Carl R; Fields, Ryan C; Idrees, Kamran; Hatzaras, Ioannis; Shen, Perry; Maithel, Shishir K; Martin, Robert C G
2018-04-01
The objective of this study was to determine the impact of caudate resection on margin status and outcomes during resection of extrahepatic hilar cholangiocarcinoma. A database of 1,092 patients treated for biliary malignancies at institutions of the Extrahepatic Biliary Malignancy Consortium was queried for individuals undergoing curative-intent resection for extrahepatic hilar cholangiocarcinoma. Patients who did versus did not undergo concomitant caudate resection were compared with regard to demographic, baseline, and tumor characteristics as well as perioperative outcomes. A total of 241 patients underwent resection for a hilar cholangiocarcinoma, of whom 85 underwent caudate resection. Patients undergoing caudate resection were less likely to have a final positive margin (P = .01). Kaplan-Meier curve of overall survival for patients undergoing caudate resection indicated no improvement over patients not undergoing caudate resection (P = .16). On multivariable analysis, caudate resection was not associated with improved overall survival or recurrence-free survival, although lymph node positivity was associated with worse overall survival and recurrence-free survival, and adjuvant chemoradiotherapy was associated with improved overall survival and recurrence-free survival. Caudate resection is associated with a greater likelihood of margin-negative resection in patients with extrahepatic hilar cholangiocarcinoma. Precise preoperative imaging is critical to assess the extent of biliary involvement, so that all degrees of hepatic resections are possible at the time of the initial operation. Copyright © 2017 Elsevier Inc. All rights reserved.
Cancer Core Europe: a consortium to address the cancer care-cancer research continuum challenge.
Eggermont, Alexander M M; Caldas, Carlos; Ringborg, Ulrik; Medema, René; Tabernero, Josep; Wiestler, Otmar
2014-11-01
Cancer Core Europe is a transformative initiative for European cancer research, creating a consortium of six leading, excellent comprehensive cancer centres that will work together to address the cancer care-cancer research continuum. Prerequisites for joint translational and clinical research programs are very demanding. These require the creation of a virtual single 'e-hospital' and a powerful translational platform, inter-compatible clinical molecular profiling laboratories with a robust underlying computational biology pipeline, standardised functional and molecular imaging, commonly agreed Standard Operating Procedures (SOPs) for liquid and tissue biopsy procurement, storage and processing, for molecular diagnostics, 'omics', functional genetics, immune-monitoring and other assessments. Importantly, it also requires a culture of data collection and data storage that provides complete longitudinal data sets to allow for effective data sharing and common database building, and to achieve the level of completeness of data that is required for conducting outcome research, taking into account our current understanding of cancers as communities of evolving clones. Cutting-edge basic research and technology development serve as an important driving force for innovative translational and clinical studies. Given the excellent track records of the six participants in these areas, Cancer Core Europe will be able to support the full spectrum of research required to address the cancer research-cancer care continuum. Cancer Core Europe also constitutes a unique environment to train the next generation of talents in innovative translational and clinical oncology. Copyright © 2014. Published by Elsevier Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang Jiahui; Engelmann, Roger; Li Qiang
2007-12-15
Accurate segmentation of pulmonary nodules in computed tomography (CT) is an important and difficult task for computer-aided diagnosis of lung cancer. Therefore, the authors developed a novel automated method for accurate segmentation of nodules in three-dimensional (3D) CT. First, a volume of interest (VOI) was determined at the location of a nodule. To simplify nodule segmentation, the 3D VOI was transformed into a two-dimensional (2D) image by use of a key 'spiral-scanning' technique, in which a number of radial lines originating from the center of the VOI spirally scanned the VOI from the 'north pole' to the 'south pole'. The voxels scanned by the radial lines provided a transformed 2D image. Because the surface of a nodule in the 3D image became a curve in the transformed 2D image, the spiral-scanning technique considerably simplified the segmentation method and enabled reliable segmentation results to be obtained. A dynamic programming technique was employed to delineate the 'optimal' outline of a nodule in the 2D image, which corresponded to the surface of the nodule in the 3D image. The optimal outline was then transformed back into 3D image space to provide the surface of the nodule. An overlap between nodule regions provided by computer and by the radiologists was employed as a performance metric for evaluating the segmentation method. The database included two Lung Imaging Database Consortium (LIDC) data sets that contained 23 and 86 CT scans, respectively, with 23 and 73 nodules that were 3 mm or larger in diameter. For the two data sets, six and four radiologists manually delineated the outlines of the nodules as reference standards in a performance evaluation for nodule segmentation. The segmentation method was trained on the first and was tested on the second LIDC data sets. The mean overlap values were 66% and 64% for the nodules in the first and second LIDC data sets, respectively, which represented a higher performance level than those of two existing segmentation methods that were also evaluated by use of the LIDC data sets. The segmentation method provided relatively reliable results for pulmonary nodule segmentation and would be useful for lung cancer quantification, detection, and diagnosis.
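As context for the 'spiral-scanning' transformation described above, the following is a minimal illustrative sketch (not the authors' implementation) of how a cubic VOI can be unwrapped into a 2D image by sampling intensities along radial lines whose directions spiral from the north to the south pole; the array names, line/sample counts, and winding rate are hypothetical choices for the example.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def spiral_unwrap(voi, n_lines=180, n_samples=64):
    """Unwrap a 3D VOI into a 2D image: each row is one radial line,
    each column is one sample position along the radius."""
    center = (np.array(voi.shape) - 1) / 2.0
    max_radius = center.min()                       # stay inside the volume
    radii = np.linspace(0.0, max_radius, n_samples)

    # Polar angle sweeps 0 -> pi once; azimuth winds around several times,
    # so consecutive lines trace a spiral over the sphere of directions.
    theta = np.linspace(0.0, np.pi, n_lines)
    phi = np.linspace(0.0, 8 * 2 * np.pi, n_lines)

    out = np.zeros((n_lines, n_samples), dtype=float)
    for i in range(n_lines):
        direction = np.array([np.sin(theta[i]) * np.cos(phi[i]),
                              np.sin(theta[i]) * np.sin(phi[i]),
                              np.cos(theta[i])])
        # Sample points along this radial line, in index coordinates.
        coords = center[:, None] + direction[:, None] * radii[None, :]
        out[i] = map_coordinates(voi, coords, order=1, mode='nearest')
    return out

# Example: a synthetic spherical "nodule" of radius 8 voxels. Its surface
# appears at a (roughly) constant column in the unwrapped image, i.e. as a
# simple curve that a 2D delineation step (e.g. dynamic programming) can follow.
zz, yy, xx = np.mgrid[0:33, 0:33, 0:33]
nodule = ((zz - 16) ** 2 + (yy - 16) ** 2 + (xx - 16) ** 2 <= 8 ** 2).astype(float)
print(spiral_unwrap(nodule).shape)  # (180, 64)
```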
Consortium for Imaging and Biomarkers (CIB) | Division of Cancer Prevention
Overdiagnosis and false positives present significant clinical problems in the prevention, detection, and treatment of cancer. The consortium comprises 8 lead investigators combining imaging methods for the visualization of lesions with biomarkers to improve the accuracy of screening, early cancer detection, and the diagnosis of early-stage cancers.
NASA Astrophysics Data System (ADS)
Graham, Jim; Jarnevich, Catherine S.; Simpson, Annie; Newman, Gregory J.; Stohlgren, Thomas J.
2011-06-01
Invasive species are a universal global problem, but the information to identify them, manage them, and prevent invasions is stored around the globe in a variety of formats. The Global Invasive Species Information Network is a consortium of organizations working toward providing seamless access to these disparate databases via the Internet. A distributed network of databases can be created using the Internet and a standard web service protocol. There are two options to provide this integration. First, federated searches are being proposed to allow users to search "deep" web documents such as databases for invasive species. A second method is to create a cache of data from the databases for searching. We compare these two methods and show that federated searches will not provide the performance and flexibility required by users, and that a central cache of the data is required to improve performance.
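To illustrate the architectural trade-off discussed above, here is a schematic sketch (purely hypothetical provider interfaces and records, not GISIN code) contrasting a federated search, which queries every provider at request time and is bounded by the slowest provider, with a search against a locally maintained cache harvested ahead of time.

```python
import time

# Hypothetical provider interfaces; in a real network each call would be a
# web-service request with its own latency and availability.
def make_provider(records, latency):
    def search(term):
        time.sleep(latency)                        # simulate network delay
        return [r for r in records if term.lower() in r.lower()]
    return search

providers = [
    make_provider(["Pueraria montana", "Dreissena polymorpha"], latency=0.5),
    make_provider(["Dreissena polymorpha", "Sus scrofa"], latency=1.2),
]

def federated_search(term):
    """Query every provider at request time; response time is the sum of
    (or, with parallel requests, the maximum of) the provider latencies."""
    results = []
    for search in providers:
        results.extend(search(term))
    return results

# Central cache: data are harvested once into a local index, so user
# queries never have to wait on the remote providers.
cache = sorted({rec for search in providers for rec in search("")})

def cached_search(term):
    return [r for r in cache if term.lower() in r.lower()]

print(federated_search("Dreissena"))   # slow: waits on every provider
print(cached_search("Dreissena"))      # fast: local lookup only
```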
Sperm concentration and count are often used as indicators of environmental impacts on male reproductive health. Existing clinical databases may be biased towards sub-fertile men with low sperm counts and less is known about expected sperm count distributions in cohorts of ferti...
ERIC Educational Resources Information Center
Fast, Karl V.; Campbell, D. Grant
2001-01-01
Compares the implied ontological frameworks of the Open Archives Initiative Protocol for Metadata Harvesting and the World Wide Web Consortium's Semantic Web. Discusses current search engine technology, semantic markup, indexing principles of special libraries and online databases, and componentization and the distinction between data and…
Effect of segmentation algorithms on the performance of computerized detection of lung nodules in CT
Guo, Wei; Li, Qiang
2014-01-01
Purpose: The purpose of this study is to reveal how the performance of a lung nodule segmentation algorithm impacts the performance of lung nodule detection, and to provide guidelines for choosing an appropriate segmentation algorithm with appropriate parameters in a computer-aided detection (CAD) scheme. Methods: The database consisted of 85 CT scans with 111 nodules of 3 mm or larger in diameter from the standard CT lung nodule database created by the Lung Image Database Consortium. The initial nodule candidates were identified as those with strong response to a selective nodule enhancement filter. A uniform viewpoint reformation technique was applied to a three-dimensional nodule candidate to generate 24 two-dimensional (2D) reformatted images, which would be used to effectively distinguish between true nodules and false positives. Six different algorithms were employed to segment the initial nodule candidates in the 2D reformatted images. Finally, 2D features from the segmented areas in the 24 reformatted images were determined, selected, and classified for removal of false positives. Therefore, there were six similar CAD schemes, in which only the segmentation algorithms were different. The six segmentation algorithms included the fixed thresholding (FT), Otsu thresholding (OTSU), fuzzy C-means (FCM), Gaussian mixture model (GMM), Chan and Vese model (CV), and local binary fitting (LBF). The mean Jaccard index and the mean absolute distance (Dmean) were employed to evaluate the performance of segmentation algorithms, and the number of false positives at a fixed sensitivity was employed to evaluate the performance of the CAD schemes. Results: For the segmentation algorithms of FT, OTSU, FCM, GMM, CV, and LBF, the highest mean Jaccard indices between the segmented nodules and the ground truth were 0.601, 0.586, 0.588, 0.563, 0.543, and 0.553, respectively, and the corresponding Dmean were 1.74, 1.80, 2.32, 2.80, 3.48, and 3.18 pixels, respectively. With these segmentation results of the six segmentation algorithms, the six CAD schemes reported 4.4, 8.8, 3.4, 9.2, 13.6, and 10.4 false positives per CT scan at a sensitivity of 80%. Conclusions: When multiple algorithms are available for segmenting nodule candidates in a CAD scheme, the "optimal" segmentation algorithm did not necessarily lead to the "optimal" CAD detection performance. PMID:25186393
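For readers unfamiliar with the two segmentation metrics used above, the following is a minimal sketch (not the authors' code) of the Jaccard index and a symmetric mean absolute surface distance computed from binary masks; the distance is expressed in pixels, as in the study, and the toy masks are invented for the example.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def jaccard_index(seg, ref):
    """Jaccard index = |A intersect B| / |A union B| for two binary masks."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    union = np.logical_or(seg, ref).sum()
    return np.logical_and(seg, ref).sum() / union if union else 1.0

def boundary(mask):
    """One-pixel-thick boundary of a binary mask."""
    mask = mask.astype(bool)
    return mask & ~binary_erosion(mask)

def mean_absolute_distance(seg, ref):
    """Symmetric mean distance (in pixels) between the two mask boundaries."""
    bs, br = boundary(seg), boundary(ref)
    d_to_ref = distance_transform_edt(~br)   # distance to nearest ref boundary pixel
    d_to_seg = distance_transform_edt(~bs)   # distance to nearest seg boundary pixel
    return 0.5 * (d_to_ref[bs].mean() + d_to_seg[br].mean())

# Toy example: two slightly offset discs standing in for a segmented nodule
# and the radiologist-drawn ground truth.
yy, xx = np.mgrid[0:64, 0:64]
ref = (yy - 32) ** 2 + (xx - 32) ** 2 <= 10 ** 2
seg = (yy - 32) ** 2 + (xx - 34) ** 2 <= 10 ** 2
print(round(jaccard_index(seg, ref), 3), round(mean_absolute_distance(seg, ref), 2))
```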
An evaluation of consensus techniques for diagnostic interpretation
NASA Astrophysics Data System (ADS)
Sauter, Jake N.; LaBarre, Victoria M.; Furst, Jacob D.; Raicu, Daniela S.
2018-02-01
Learning diagnostic labels from image content has been the standard in computer-aided diagnosis. Most computer-aided diagnosis systems use low-level image features extracted directly from image content to train and test machine learning classifiers for diagnostic label prediction. When the ground truth for the diagnostic labels is not available, reference truth is generated from the experts' diagnostic interpretations of the image/region of interest. More specifically, when the label is uncertain, e.g. when multiple experts label an image and their interpretations are different, techniques to handle the label variability are necessary. In this paper, we compare three consensus techniques that are typically used to encode the variability in the experts' labeling of the medical data: mean, median and mode, and their effects on simple classifiers that can handle deterministic labels (decision trees) and probabilistic vectors of labels (belief decision trees). Given that the NIH/NCI Lung Image Database Consortium (LIDC) data provides interpretations for lung nodules by up to four radiologists, we leverage the LIDC data to evaluate and compare these consensus approaches when creating computer-aided diagnosis systems for lung nodules. First, low-level image features of nodules are extracted and paired with their radiologists' semantic ratings (1 = most likely benign, ..., 5 = most likely malignant); second, machine learning multi-class classifiers that handle deterministic labels (decision trees) and probabilistic vectors of labels (belief decision trees) are built to predict the lung nodules' semantic ratings. We show that the mean-based consensus generates the most robust classifier overall when compared to the median- and mode-based consensus. Lastly, the results of this study show that, when building CAD systems with uncertain diagnostic interpretation, it is important to evaluate different strategies for encoding and predicting the diagnostic label.
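As a concrete illustration of the three consensus strategies compared above, the sketch below reduces each nodule's set of radiologist malignancy ratings (1-5) to a single training label using the mean, median, and mode; the ratings shown are hypothetical LIDC-style values invented for the example, not actual LIDC annotations.

```python
from collections import Counter
import numpy as np

def consensus_labels(ratings):
    """Collapse per-nodule radiologist ratings (lists of 1-5 scores) into
    one label per nodule under three consensus strategies."""
    mean_lbl, median_lbl, mode_lbl = [], [], []
    for r in ratings:
        arr = np.asarray(r)
        mean_lbl.append(int(round(arr.mean())))            # mean, rounded to a class
        median_lbl.append(int(round(np.median(arr))))      # median, rounded to a class
        mode_lbl.append(Counter(r).most_common(1)[0][0])   # most frequent rating
    return mean_lbl, median_lbl, mode_lbl

# Hypothetical ratings from up to four radiologists per nodule.
ratings = [[1, 2, 2, 3], [5, 4, 5], [3, 3, 4, 5]]
print(consensus_labels(ratings))   # ([2, 5, 4], [2, 5, 4], [2, 5, 3])
```

The three strategies agree when readers agree, and they diverge exactly where the interpretations differ, which is why the choice of consensus rule can change the labels a classifier is trained on.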
NASA Technical Reports Server (NTRS)
Dennison, J. R.; Thomson, C. D.; Kite, J.; Zavyalov, V.; Corbridge, Jodie
2004-01-01
In an effort to improve the reliability and versatility of spacecraft charging models designed to assist spacecraft designers in accommodating and mitigating the harmful effects of charging on spacecraft, the NASA Space Environments and Effects (SEE) Program has funded development of facilities at Utah State University for the measurement of the electronic properties of both conducting and insulating spacecraft materials. We present here an overview of our instrumentation and capabilities, which are particularly well suited to study electron emission as related to spacecraft charging. These measurements include electron-induced secondary and backscattered yields, spectra, and angular resolved measurements as a function of incident energy, species and angle, plus investigations of ion-induced electron yields, photoelectron yields, sample charging and dielectric breakdown. Extensive surface science characterization capabilities are also available to fully characterize the samples in situ. Our measurements for a wide array of conducting and insulating spacecraft materials have been incorporated into the SEE Charge Collector Knowledge-base as a Database of Electronic Properties of Materials Applicable to Spacecraft Charging. This Database provides an extensive compilation of electronic properties, together with parameterization of these properties in a format that can be easily used with existing spacecraft charging engineering tools and with next generation plasma, charging, and radiation models. Tabulated properties in the Database include: electron-induced secondary electron yield, backscattered yield and emitted electron spectra; He, Ar and Xe ion-induced electron yields and emitted electron spectra; photoyield and solar emittance spectra; and materials characterization including reflectivity, dielectric constant, resistivity, arcing, optical microscopy images, scanning electron micrographs, scanning tunneling microscopy images, and Auger electron spectra. Further details of the instrumentation used for insulator measurements and representative measurements of insulating spacecraft materials are provided in other Spacecraft Charging Conference presentations. The NASA Space Environments and Effects Program, the Air Force Office of Scientific Research, the Boeing Corporation, NASA Graduate Research Fellowships, and the NASA Rocky Mountain Space Grant Consortium have provided support.
Green, Robert C; Goddard, Katrina A B; Jarvik, Gail P; Amendola, Laura M; Appelbaum, Paul S; Berg, Jonathan S; Bernhardt, Barbara A; Biesecker, Leslie G; Biswas, Sawona; Blout, Carrie L; Bowling, Kevin M; Brothers, Kyle B; Burke, Wylie; Caga-Anan, Charlisse F; Chinnaiyan, Arul M; Chung, Wendy K; Clayton, Ellen W; Cooper, Gregory M; East, Kelly; Evans, James P; Fullerton, Stephanie M; Garraway, Levi A; Garrett, Jeremy R; Gray, Stacy W; Henderson, Gail E; Hindorff, Lucia A; Holm, Ingrid A; Lewis, Michelle Huckaby; Hutter, Carolyn M; Janne, Pasi A; Joffe, Steven; Kaufman, David; Knoppers, Bartha M; Koenig, Barbara A; Krantz, Ian D; Manolio, Teri A; McCullough, Laurence; McEwen, Jean; McGuire, Amy; Muzny, Donna; Myers, Richard M; Nickerson, Deborah A; Ou, Jeffrey; Parsons, Donald W; Petersen, Gloria M; Plon, Sharon E; Rehm, Heidi L; Roberts, J Scott; Robinson, Dan; Salama, Joseph S; Scollon, Sarah; Sharp, Richard R; Shirts, Brian; Spinner, Nancy B; Tabor, Holly K; Tarczy-Hornoch, Peter; Veenstra, David L; Wagle, Nikhil; Weck, Karen; Wilfond, Benjamin S; Wilhelmsen, Kirk; Wolf, Susan M; Wynn, Julia; Yu, Joon-Ho
2016-06-02
Despite rapid technical progress and demonstrable effectiveness for some types of diagnosis and therapy, much remains to be learned about clinical genome and exome sequencing (CGES) and its role within the practice of medicine. The Clinical Sequencing Exploratory Research (CSER) consortium includes 18 extramural research projects, one National Human Genome Research Institute (NHGRI) intramural project, and a coordinating center funded by the NHGRI and National Cancer Institute. The consortium is exploring analytic and clinical validity and utility, as well as the ethical, legal, and social implications of sequencing via multidisciplinary approaches; it has thus far recruited 5,577 participants across a spectrum of symptomatic and healthy children and adults by utilizing both germline and cancer sequencing. The CSER consortium is analyzing data and creating publicly available procedures and tools related to participant preferences and consent, variant classification, disclosure and management of primary and secondary findings, health outcomes, and integration with electronic health records. Future research directions will refine measures of clinical utility of CGES in both germline and somatic testing, evaluate the use of CGES for screening in healthy individuals, explore the penetrance of pathogenic variants through extensive phenotyping, reduce discordances in public databases of genes and variants, examine social and ethnic disparities in the provision of genomics services, explore regulatory issues, and estimate the value and downstream costs of sequencing. The CSER consortium has established a shared community of research sites by using diverse approaches to pursue the evidence-based development of best practices in genomic medicine. Copyright © 2016 American Society of Human Genetics. All rights reserved.
de Faria, Eduardo B.; Barrow, Kory R.; Ruehle, Bradley T.; Parker, Jordan T.; Swartz, Elisa; Taylor-Howell, Cheryl; Kieta, Kaitlyn M.; Lees, Cynthia J.; Sleeper, Meg M.; Dobbin, Travis; Baron, Adam D.; Mohindra, Pranshu; MacVittie, Thomas J.
2015-01-01
Computed Tomography (CT) and Echocardiography (EC) are two imaging modalities that produce critical longitudinal data that can be analyzed for radiation-induced organ-specific injury to the lung and heart. The Medical Countermeasures Against Radiological Threats (MCART) consortium has a well-established animal model research platform that includes nonhuman primate (NHP) models of the acute radiation syndrome and the delayed effects of acute radiation exposure. These models call for a definition of the latency, incidence, severity, duration, and resolution of different organ-specific radiation-induced subsyndromes. The pulmonary subsyndromes and cardiac effects are a pair of inter-dependent syndromes impacted by exposure to potentially lethal doses of radiation. Establishing a connection between these will reveal important information about their interaction and progression of injury and recovery. Herein, we demonstrate the use of CT and EC data in the rhesus macaque models to define delayed organ injury thereby establishing: a) consistent and reliable methodology to assess radiation-induced damage to the lung and heart, b) an extensive database in normal age-matched NHP for key primary and secondary endpoints, c) identified problematic variables in imaging techniques and proposed solutions to maintain data integrity and d) initiated longitudinal analysis of potentially lethal radiation-induced damage to the lung and heart. PMID:26425907
Gamba, P.; Cavalca, D.; Jaiswal, K.S.; Huyck, C.; Crowley, H.
2012-01-01
In order to quantify earthquake risk of any selected region or a country of the world within the Global Earthquake Model (GEM) framework (www.globalquakemodel.org/), a systematic compilation of building inventory and population exposure is indispensable. Through a consortium of leading institutions and by engaging domain experts from multiple countries, the GED4GEM project has been working towards the development of a first comprehensive publicly available Global Exposure Database (GED). This geospatial exposure database will eventually facilitate global earthquake risk and loss estimation through GEM's OpenQuake platform. This paper provides an overview of the GED concepts, aims, datasets, and inference methodology, as well as the current implementation scheme, status and way forward.
Military Suicide Research Consortium
2014-10-01
increasing and decreasing (or even ceasing entirely) across different periods of time but still building on itself with each progressive episode...community from suicide. One study found that social norms, high levels of support, identification with role models, and high self-esteem help protect...in follow-up. o Conducted quality control checks of clinical data. Monitored safety, adverse events for DSMB reporting. Initiated Database
The future application of GML database in GIS
NASA Astrophysics Data System (ADS)
Deng, Yuejin; Cheng, Yushu; Jing, Lianwen
2006-10-01
In 2004, the Geography Markup Language (GML) Implementation Specification (version 3.1.1) was published by the Open Geospatial Consortium, Inc. Now more and more applications in geospatial data sharing and interoperability depend on GML. The primary purpose of designing GML is the exchange and transport of geo-information through standard modeling and encoding of geographic phenomena. However, the problems of how to organize and access large amounts of GML data effectively arise in applications. Research on GML databases focuses on these problems. The effective storage of GML data is a hot topic in the GIS community today. A GML Database Management System (GDBMS) mainly deals with the problem of storage and management of GML data. Two types of XML database are commonly distinguished: native XML databases and XML-enabled databases. Since GML is an application of the XML standard to geographic data, XML database systems can also be used for the management of GML. In this paper, we review the state of the art of XML databases, including storage, indexing, query languages, management systems and so on, and then move on to GML databases. At the end, the future prospects of GML databases in GIS applications are presented.
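As a small illustration of the kind of content a GML database must store and index, the sketch below parses a minimal GML 3 point feature with Python's standard library. The gml:Point/gml:pos elements and the OGC namespace are standard GML; the surrounding feature schema (City, name, location) is a hypothetical example, not taken from the paper.

```python
import xml.etree.ElementTree as ET

GML_NS = "http://www.opengis.net/gml"

# A minimal, hypothetical GML 3 fragment: one feature with a gml:Point geometry.
doc = f"""
<City xmlns:gml="{GML_NS}">
  <name>Beijing</name>
  <location>
    <gml:Point srsName="urn:ogc:def:crs:EPSG::4326">
      <gml:pos>39.90 116.40</gml:pos>
    </gml:Point>
  </location>
</City>
"""

root = ET.fromstring(doc)
name = root.findtext("name")
pos = root.findtext(f".//{{{GML_NS}}}Point/{{{GML_NS}}}pos")
lat, lon = map(float, pos.split())

# A GML-aware database would typically decompose such documents into
# spatially indexable geometry plus searchable attributes, e.g.:
record = {"name": name, "lat": lat, "lon": lon}
print(record)
```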
Content based information retrieval in forensic image databases.
Geradts, Zeno; Bijhold, Jurrien
2002-03-01
This paper gives an overview of the various available image databases and ways of searching these databases by image content. Developments in research groups working on searching in image databases are evaluated and compared with the forensic databases that exist. Forensic image databases of fingerprints, faces, shoeprints, handwriting, cartridge cases, drug tablets, and tool marks are described. The developments in these fields appear to be valuable for forensic databases, especially the MPEG-7 framework, in which searching in image databases is standardized. In the future, the combination of these databases (including DNA databases) can result in stronger forensic evidence.
Osman, Onur; Ucan, Osman N.
2008-01-01
Objective The purpose of this study was to develop a new method for automated lung nodule detection in serial section CT images using the characteristics of the 3D appearance of the nodules that distinguish them from the vessels. Materials and Methods Lung nodules were detected in four steps. First, to reduce the number of regions of interest (ROIs) and the computation time, the lung regions of the CTs were segmented using Genetic Cellular Neural Networks (G-CNN). Then, for each lung region, ROIs were specified using an 8-directional search; +1 or -1 values were assigned to each voxel. The 3D ROI image was obtained by combining all the 2-Dimensional (2D) ROI images. A 3D template was created to find the nodule-like structures in the 3D ROI image. Convolution of the 3D ROI image with the proposed template strengthens the shapes that are similar to those of the template and weakens the other ones. Finally, fuzzy rule-based thresholding was applied and the ROIs were found. To test the system's efficiency, we used 16 cases with a total of 425 slices, which were taken from the Lung Image Database Consortium (LIDC) dataset. Results The computer aided diagnosis (CAD) system achieved 100% sensitivity with 13.375 FPs per case when the nodule thickness was greater than or equal to 5.625 mm. Conclusion Our results indicate that the detection performance of our algorithm is satisfactory, and this may well improve the performance of computer-aided detection of lung nodules. PMID:18253070
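To make the template-convolution step described above more concrete, here is a simplified sketch (not the authors' G-CNN pipeline or their actual template) in which a zero-mean spherical 3D template is convolved with a ±1-valued ROI volume, so that compact, nodule-like blobs respond more strongly than elongated, vessel-like structures; all sizes and the fixed threshold standing in for the fuzzy rule-based step are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

def spherical_template(radius):
    """Zero-mean spherical template: positive inside the sphere, negative outside,
    so compact blobs score higher than elongated (vessel-like) structures."""
    r = int(radius)
    zz, yy, xx = np.mgrid[-r:r + 1, -r:r + 1, -r:r + 1]
    t = np.where(zz ** 2 + yy ** 2 + xx ** 2 <= radius ** 2, 1.0, -1.0)
    return t - t.mean()

def nodule_response(roi_volume, radius=4, threshold=0.5):
    """Convolve a +1/-1 ROI volume with the template and threshold the
    normalized response (a crude stand-in for fuzzy rule-based thresholding)."""
    response = convolve(roi_volume.astype(float), spherical_template(radius),
                        mode='constant')
    response /= np.abs(response).max() + 1e-9
    return response > threshold

# Toy volume: a sphere (nodule-like) and a thin cylinder (vessel-like).
vol = -np.ones((48, 48, 48))
zz, yy, xx = np.mgrid[0:48, 0:48, 0:48]
vol[(zz - 16) ** 2 + (yy - 16) ** 2 + (xx - 16) ** 2 <= 4 ** 2] = 1.0   # nodule
vol[(yy - 36) ** 2 + (xx - 36) ** 2 <= 2 ** 2] = 1.0                    # vessel
mask = nodule_response(vol)
print(mask[16, 16, 16], mask[24, 36, 36])  # nodule kept, vessel point suppressed
```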
Wide field imaging problems in radio astronomy
NASA Astrophysics Data System (ADS)
Cornwell, T. J.; Golap, K.; Bhatnagar, S.
2005-03-01
The new generation of synthesis radio telescopes now being proposed, designed, and constructed faces substantial problems in making images over wide fields of view. Such observations are required either to achieve the full sensitivity limit in crowded fields or for surveys. The Square Kilometre Array (SKA Consortium, Tech. Rep., 2004), now being developed by an international consortium of 15 countries, will require advances well beyond the current state of the art. We review the theory of synthesis radio telescopes for large fields of view. We describe a new algorithm, W projection, for correcting the non-coplanar baselines aberration. This algorithm has improved performance over those previously used (typically an order of magnitude in speed). Despite the advent of W projection, the computing hardware required for SKA wide field imaging is estimated to cost up to $500M (2015 dollars). This is about half the target cost of the SKA. Reconfigurable computing is one way in which the costs can be decreased dramatically.
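For background (this is standard interferometric imaging theory rather than a result quoted from the abstract, and sign and normalization conventions vary between references), the non-coplanar baselines aberration that W projection corrects enters through the w-dependent phase term in the measurement equation relating the measured visibility V(u,v,w) to the sky brightness I(l,m), with primary beam A(l,m):

```latex
V(u,v,w) \;=\; \iint \frac{A(l,m)\, I(l,m)}{\sqrt{1-l^{2}-m^{2}}}\;
  e^{-2\pi i \left[\, u l + v m + w\left(\sqrt{1-l^{2}-m^{2}} - 1\right) \right]}\, dl\, dm .
```

For a narrow field of view the term w(sqrt(1-l^2-m^2)-1) is negligible and a 2D Fourier transform suffices; over a wide field it is not, and W projection handles it by convolving each visibility onto the w = 0 plane with a w-dependent kernel, after which standard 2D imaging can be used.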
FORWARD: A Registry and Longitudinal Clinical Database to Study Fragile X Syndrome
Sherman, Stephanie L.; Kidd, Sharon A.; Riley, Catharine; Berry-Kravis, Elizabeth; Andrews, Howard F.; Miller, Robert M.; Lincoln, Sharyn; Swanson, Mark; Kaufmann, Walter E.; Brown, W. Ted
2017-01-01
BACKGROUND AND OBJECTIVE Advances in the care of patients with fragile X syndrome (FXS) have been hampered by lack of data. This deficiency has produced fragmentary knowledge regarding the natural history of this condition, healthcare needs, and the effects of the disease on caregivers. To remedy this deficiency, the Fragile X Clinic and Research Consortium was established to facilitate research. Through a collective effort, the Fragile X Clinic and Research Consortium developed the Fragile X Online Registry With Accessible Research Database (FORWARD) to facilitate multisite data collection. This report describes FORWARD and the way it can be used to improve health and quality of life of FXS patients and their relatives and caregivers. METHODS FORWARD collects demographic information on individuals with FXS and their family members (affected and unaffected) through a 1-time registry form. The longitudinal database collects clinician- and parent-reported data on individuals diagnosed with FXS, focused on those who are 0 to 24 years of age, although individuals of any age can participate. RESULTS The registry includes >2300 registrants (data collected September 7, 2009 to August 31, 2014). The longitudinal database includes data on 713 individuals diagnosed with FXS (data collected September 7, 2012 to August 31, 2014). Longitudinal data continue to be collected on enrolled patients along with baseline data on new patients. CONCLUSIONS FORWARD represents the largest resource of clinical and demographic data for the FXS population in the United States. These data can be used to advance our understanding of FXS: the impact of cooccurring conditions, the impact on the day-to-day lives of individuals living with FXS and their families, and short-term and long-term outcomes. PMID:28814539
FORWARD: A Registry and Longitudinal Clinical Database to Study Fragile X Syndrome.
Sherman, Stephanie L; Kidd, Sharon A; Riley, Catharine; Berry-Kravis, Elizabeth; Andrews, Howard F; Miller, Robert M; Lincoln, Sharyn; Swanson, Mark; Kaufmann, Walter E; Brown, W Ted
2017-06-01
Advances in the care of patients with fragile X syndrome (FXS) have been hampered by lack of data. This deficiency has produced fragmentary knowledge regarding the natural history of this condition, healthcare needs, and the effects of the disease on caregivers. To remedy this deficiency, the Fragile X Clinic and Research Consortium was established to facilitate research. Through a collective effort, the Fragile X Clinic and Research Consortium developed the Fragile X Online Registry With Accessible Research Database (FORWARD) to facilitate multisite data collection. This report describes FORWARD and the way it can be used to improve health and quality of life of FXS patients and their relatives and caregivers. FORWARD collects demographic information on individuals with FXS and their family members (affected and unaffected) through a 1-time registry form. The longitudinal database collects clinician- and parent-reported data on individuals diagnosed with FXS, focused on those who are 0 to 24 years of age, although individuals of any age can participate. The registry includes >2300 registrants (data collected September 7, 2009 to August 31, 2014). The longitudinal database includes data on 713 individuals diagnosed with FXS (data collected September 7, 2012 to August 31, 2014). Longitudinal data continue to be collected on enrolled patients along with baseline data on new patients. FORWARD represents the largest resource of clinical and demographic data for the FXS population in the United States. These data can be used to advance our understanding of FXS: the impact of cooccurring conditions, the impact on the day-to-day lives of individuals living with FXS and their families, and short-term and long-term outcomes. Copyright © 2017 by the American Academy of Pediatrics.
Rutledge, Jonathan W; Spencer, Horace; Moreno, Mauricio A
2014-07-01
The University HealthSystem Consortium (UHC) database collects discharge information on patients treated at academic health centers throughout the United States. We sought to use this database to identify outcome predictors for patients undergoing total laryngectomy. A secondary end point was to assess the validity of the UHC's predictive risk mortality model in this cohort of patients. Retrospective review. Academic medical centers (tertiary referral centers) and their affiliate hospitals in the United States. Using the UHC discharge database, we retrieved and analyzed data for 4648 patients undergoing total laryngectomy who were discharged between October 2007 and January 2011 from all of the member institutions. Demographics, comorbidities, institutional data, and outcomes were retrieved. The length of stay and overall costs were significantly higher among female patients (P < .0001), while age was a predictor of intensive care unit stay (P = .014). The overall complication rate was higher among Asians (P = .019) and in patients with anemia and diabetes compared with other comorbidities. The average institutional case load was 1.92 cases/mo; we found an inverse correlation (R = -0.47) between the institutional case load and length of stay (P < .0001). The UHC admit mortality risk estimator was found to be an accurate predictor not only of mortality (P < .0002) but also of intensive care unit admission and complication rate (P < .0001). This study provides an overview of laryngectomy outcomes in a contemporary cohort of patients treated at academic health centers. UHC admit mortality risk is an excellent outcome predictor and a valuable tool for risk stratification in these patients. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2014.
Tagliaferri, Luca; Kovács, György; Autorino, Rosa; Budrukkar, Ashwini; Guinot, Jose Luis; Hildebrand, Guido; Johansson, Bengt; Monge, Rafael Martìnez; Meyer, Jens E; Niehoff, Peter; Rovirosa, Angeles; Takàcsi-Nagy, Zoltàn; Dinapoli, Nicola; Lanzotti, Vito; Damiani, Andrea; Soror, Tamer; Valentini, Vincenzo
2016-08-01
The aim of the COBRA (Consortium for Brachytherapy Data Analysis) project is to create a multicenter group (consortium) and a web-based system for standardized data collection. The GEC-ESTRO (Groupe Européen de Curiethérapie - European Society for Radiotherapy & Oncology) Head and Neck (H&N) Working Group participated in the project and in the implementation of the consortium agreement, the ontology (data set) and the necessary COBRA software services, as well as the peer reviewing of the general anatomic site-specific COBRA protocol. The ontology was defined by a multicenter task group. Eleven centers from 6 countries signed an agreement and the consortium approved the ontology. We identified 3 tiers for the data set: Registry (epidemiology analysis), Procedures (prediction models and DSS), and Research (radiomics). The COBRA Storage System (C-SS) is not time-consuming as, thanks to the use of "brokers", data can be extracted directly from the single center's storage systems through a connection with a "structured query language database" (SQL-DB), Microsoft Access(®), FileMaker Pro(®), or Microsoft Excel(®). The system is also structured to perform automatic archiving directly from the treatment planning system or afterloading machine. The architecture is based on the concept of "on-purpose data projection". The C-SS architecture is privacy protecting because it will never make visible data that could identify an individual patient. The C-SS can also benefit from so-called "distributed learning" approaches, in which data never leave the collecting institution, while learning algorithms and proposed predictive models are commonly shared. Setting up a consortium is a feasible and practicable way to create an international and multi-system data sharing system. COBRA C-SS seems to be well accepted by all involved parties, primarily because it does not influence the center's own data storage technologies, procedures, and habits. Furthermore, the method preserves the privacy of all patients.
Stanhope, Steven J.; Wilken, Jason M.; Pruziner, Alison L.; Dearth, Christopher L.; Wyatt, Marilynn; Ziemke, CAPT Gregg W.; Strickland, Rachel; Milbourne, Suzanne A.; Kaufman, Kenton R.
2017-01-01
The Bridging Advanced Developments for Exceptional Rehabilitation (BADER) Consortium began in September 2011 as a cooperative agreement with the Department of Defense (DoD) Congressionally Directed Medical Research Programs Peer Reviewed Orthopaedic Research Program. A partnership was formed with DoD Military Treatment Facilities (MTFs), U.S. Department of Veterans Affairs (VA) Centers, the National Institutes of Health (NIH), academia, and industry to rapidly conduct innovative, high-impact, and sustainable clinically relevant research. The BADER Consortium has a unique research capacity-building focus that creates infrastructures and strategically connects and supports research teams to conduct multiteam research initiatives primarily led by MTF and VA investigators. BADER relies on strong partnerships with these agencies to strengthen and support orthopaedic rehabilitation research. Its focus is on the rapid forming and execution of projects focused on obtaining optimal functional outcomes for patients with limb loss and limb injuries. The Consortium is based on an NIH research capacity-building model that comprises essential research support components that are anchored by a set of BADER-funded and initiative-launching studies. Through a partnership with the DoD/VA Extremity Trauma and Amputation Center of Excellence, the BADER Consortium’s research initiative-launching program has directly supported the identification and establishment of eight BADER-funded clinical studies. BADER’s Clinical Research Core (CRC) staff, who are embedded within each of the MTFs, have supported an additional 37 non-BADER Consortium-funded projects. Additional key research support infrastructures that expedite the process for conducting multisite clinical trials include an omnibus Cooperative Research and Development Agreement and the NIH Clinical Trials Database. A 2015 Defense Health Board report highlighted the Consortium’s vital role, stating the research capabilities of the DoD Advanced Rehabilitation Centers are significantly enhanced and facilitated by the BADER Consortium. PMID:27849456
Biogeography of a human oral microbiome at the micron scale
Mark Welch, Jessica L.; Rossetti, Blair J.; Rieken, Christopher W.; Dewhirst, Floyd E.; Borisy, Gary G.
2016-01-01
The spatial organization of complex natural microbiomes is critical to understanding the interactions of the individual taxa that comprise a community. Although the revolution in DNA sequencing has provided an abundance of genomic-level information, the biogeography of microbiomes is almost entirely uncharted at the micron scale. Using spectral imaging fluorescence in situ hybridization as guided by metagenomic sequence analysis, we have discovered a distinctive, multigenus consortium in the microbiome of supragingival dental plaque. The consortium consists of a radially arranged, nine-taxon structure organized around cells of filamentous corynebacteria. The consortium ranges in size from a few tens to a few hundreds of microns in radius and is spatially differentiated. Within the structure, individual taxa are localized at the micron scale in ways suggestive of their functional niche in the consortium. For example, anaerobic taxa tend to be in the interior, whereas facultative or obligate aerobes tend to be at the periphery of the consortium. Consumers and producers of certain metabolites, such as lactate, tend to be near each other. Based on our observations and the literature, we propose a model for plaque microbiome development and maintenance consistent with known metabolic, adherence, and environmental considerations. The consortium illustrates how complex structural organization can emerge from the micron-scale interactions of its constituent organisms. The understanding that plaque community organization is an emergent phenomenon offers a perspective that is general in nature and applicable to other microbiomes. PMID:26811460
Asquith, William H.; Thompson, David B.; Cleveland, Theodore G.; Fang, Xing
2004-01-01
In the early 2000s, the Texas Department of Transportation funded several research projects to examine unit hydrograph and rainfall hyetograph techniques for hydrologic design in Texas for the estimation of design flows for stormwater drainage systems. A research consortium composed of Lamar University, Texas Tech University, the University of Houston, and the U.S. Geological Survey (USGS) was chosen to examine the unit hydrograph and rainfall hyetograph techniques. Rainfall and runoff data collected by the USGS at 91 streamflow-gaging stations in Texas formed a basis for the research. These data were collected as part of USGS small-watershed projects and urban watershed studies that began in the late 1950s and continued through most of the 1970s; a few gages were in operation in the mid-1980s. Selected hydrologic events from these studies were available in the form of over 220 printed reports, which offered the best aggregation of hydrologic data for the research objectives. Digital versions of the data did not exist. Therefore, significant effort was undertaken by the consortium to manually enter the data into a digital database from the printed record. The rainfall and runoff data for over 1,650 storms were entered. Considerable quality-control and quality-assurance efforts were conducted as the database was assembled and after assembly to enhance data integrity. This report documents the database and informs interested parties on its usage.
Inroads to predict in vivo toxicology-an introduction to the eTOX Project.
Briggs, Katharine; Cases, Montserrat; Heard, David J; Pastor, Manuel; Pognan, François; Sanz, Ferran; Schwab, Christof H; Steger-Hartmann, Thomas; Sutter, Andreas; Watson, David K; Wichard, Jörg D
2012-01-01
There is a widespread awareness that the wealth of preclinical toxicity data that the pharmaceutical industry has generated in recent decades is not exploited as efficiently as it could be. Enhanced data availability for compound comparison ("read-across"), or for data mining to build predictive tools, should lead to a more efficient drug development process and contribute to the reduction of animal use (3Rs principle). In order to achieve these goals, a consortium approach, grouping numbers of relevant partners, is required. The eTOX ("electronic toxicity") consortium represents such a project and is a public-private partnership within the framework of the European Innovative Medicines Initiative (IMI). The project aims at the development of in silico prediction systems for organ and in vivo toxicity. The backbone of the project will be a database consisting of preclinical toxicity data for drug compounds or candidates extracted from previously unpublished, legacy reports from thirteen European and European operation-based pharmaceutical companies. The database will be enhanced by incorporation of publically available, high quality toxicology data. Seven academic institutes and five small-to-medium size enterprises (SMEs) contribute with their expertise in data gathering, database curation, data mining, chemoinformatics and predictive systems development. The outcome of the project will be a predictive system contributing to early potential hazard identification and risk assessment during the drug development process. The concept and strategy of the eTOX project is described here, together with current achievements and future deliverables.
Ellingson, Benjamin M; Abrey, Lauren E; Nelson, Sarah J; Kaufmann, Timothy J; Garcia, Josep; Chinot, Olivier; Saran, Frank; Nishikawa, Ryo; Henriksson, Roger; Mason, Warren P; Wick, Wolfgang; Butowski, Nicholas; Ligon, Keith L; Gerstner, Elizabeth R; Colman, Howard; de Groot, John; Chang, Susan; Mellinghoff, Ingo; Young, Robert J; Alexander, Brian M; Colen, Rivka; Taylor, Jennie W; Arrillaga-Romany, Isabel; Mehta, Arnav; Huang, Raymond Y; Pope, Whitney B; Reardon, David; Batchelor, Tracy; Prados, Michael; Galanis, Evanthia; Wen, Patrick Y; Cloughesy, Timothy F
2018-04-05
In the current study, we pooled imaging data in newly diagnosed GBM patients from international multicenter clinical trials, single institution databases, and multicenter clinical trial consortiums to identify the relationship between post-operative residual enhancing tumor volume and overall survival (OS). Data from 1,511 newly diagnosed GBM patients from 5 data sources were included in the current study: 1) a single institution database from UCLA (N=398; Discovery); 2) patients from the Ben and Cathy Ivy Foundation for Early Phase Clinical Trials Network Radiogenomics Database (N=262 from 8 centers; Confirmation); 3) the chemoradiation placebo arm from an international phase III trial (AVAglio; N=394 from 120 locations in 23 countries; Validation); 4) the experimental arm from AVAglio examining chemoradiation plus bevacizumab (N=404 from 120 locations in 23 countries; Exploratory Set 1); and 5) an Alliance (N0874) Phase I/II trial of vorinostat plus chemoradiation (N=53; Exploratory Set 2). Post-surgical, residual enhancing disease was quantified using T1 subtraction maps. Multivariate Cox regression models were used to determine influence of clinical variables, MGMT status, and residual tumor volume on OS. A log-linear relationship was observed between post-operative, residual enhancing tumor volume and OS in newly diagnosed GBM treated with standard chemoradiation. Post-operative tumor volume is a prognostic factor for OS (P<0.01), regardless of therapy, age, and MGMT promoter methylation status. Post-surgical, residual contrast-enhancing disease significantly negatively influences survival in patients with newly diagnosed glioblastoma treated with chemoradiation with or without concomitant experimental therapy.
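As an illustration of the kind of model described above (a multivariable Cox regression in which log residual tumor volume enters alongside clinical covariates), here is a minimal sketch using the lifelines package; the data frame, column names, covariates, and simulated values are hypothetical stand-ins, not the pooled study data.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical patient-level table: survival time (months), event indicator,
# post-operative residual enhancing tumor volume (cm^3), and covariates.
rng = np.random.default_rng(0)
n = 200
volume = rng.lognormal(mean=1.0, sigma=1.0, size=n)
df = pd.DataFrame({
    "os_months": rng.exponential(scale=18 / (1 + 0.2 * np.log1p(volume)), size=n),
    "event": rng.integers(0, 2, size=n),
    "log_volume": np.log1p(volume),          # log-linear relationship with OS
    "age": rng.normal(58, 10, size=n),
    "mgmt_methylated": rng.integers(0, 2, size=n),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="os_months", event_col="event")
cph.print_summary()   # hazard ratios for log_volume, age, and MGMT status
```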
NASA Astrophysics Data System (ADS)
Duclos, D.; Lonnoy, J.; Guillerm, Q.; Jurie, F.; Herbin, S.; D'Angelo, E.
2008-04-01
The last five years have seen a renewal of Automatic Target Recognition applications, mainly because of the latest advances in machine learning techniques. In this context, large collections of image datasets are essential for training algorithms as well as for their evaluation. Indeed, the recent proliferation of recognition algorithms, generally applied to slightly different problems, makes their comparison through clean evaluation campaigns necessary. The ROBIN project tries to fulfil these two needs by putting unclassified datasets, ground truths, competitions and metrics for the evaluation of ATR algorithms at the disposal of the scientific community. The scope of this project includes single- and multi-class generic target detection and generic target recognition, in military and security contexts. To our knowledge, it is the first time that a database of this importance (several hundred thousand visible and infrared hand-annotated images) has been publicly released. Funded by the French Ministry of Defence (DGA) and by the French Ministry of Research, ROBIN is one of the ten Techno-vision projects. Techno-vision is a large and ambitious government initiative for building evaluation means for computer vision technologies, for various application contexts. ROBIN's consortium includes major companies and research centres involved in Computer Vision R&D in the field of defence: Bertin Technologies, CNES, ECA, DGA, EADS, INRIA, ONERA, MBDA, SAGEM, THALES. This paper, which first gives an overview of the whole project, focuses on one of ROBIN's key competitions, the SAGEM Defence Security database. This dataset contains more than eight hundred ground and aerial infrared images of six different vehicles in cluttered scenes including distracters. Two different sets of data are available for each target. The first set includes different views of each vehicle at close range in a "simple" background, and can be used to train algorithms. The second set contains many views of the same vehicle in different contexts and situations simulating operational scenarios.
Improving Global Building Exposure Data for Disaster Forecasting, Mitigation, and Response
NASA Astrophysics Data System (ADS)
Chen, R. S.; Huyck, C.; Lewis, G.; Becker, M.; Vinay, S.; Tralli, D.; Eguchi, R.
2013-12-01
This paper describes an exploratory study being performed under the NASA Applied Sciences Program whose goal is to integrate Earth science data and information for disaster forecasting, mitigation and response. Specifically, we are delivering EO-derived built environment data and information for use in catastrophe (CAT) models and loss estimation tools. CAT models and loss estimation tools typically use GIS exposure databases to characterize the real-world environment. These datasets are often a source of great uncertainty in the loss estimates, particularly in international events, because the data are incomplete, and sometimes inaccurate and disparate in quality from one region to another. Preliminary research by project team members as part of the Global Earthquake Model (GEM) consortium suggests that a strong relationship exists between the height and volume of built-up areas and NASA data products from the Suomi National Polar-Orbiting Partnership (NPP) Visible Infrared Imaging Radiometer Suite (VIIRS), the Moderate Resolution Imaging Spectroradiometer (MODIS), and the NASA Socioeconomic Data and Applications Center (SEDAC). Applying this knowledge within the framework of the GEM Global Exposure Database (GED) is significantly enhancing our ability to quantify building exposure, particularly in developing countries and emerging insurance markets. Global insurance products that have a more comprehensive basis for assessing risk and exposure - as from EO-derived data and information assimilated into CAT models and loss estimation tools - will a) help to transform the way in which we measure, monitor and assess the vulnerability of our communities globally and, in turn, b) help encourage the investments needed - especially in the developing world - stimulating economic growth and actions that would lead to a more disaster-resilient world. Improved building exposure data will also be valuable for near-real-time applications such as emergency response planning and post-disaster damage and needs assessment.
Reddy, T.B.K.; Thomas, Alex D.; Stamatis, Dimitri; Bertsch, Jon; Isbandi, Michelle; Jansson, Jakob; Mallajosyula, Jyothi; Pagani, Ioanna; Lobos, Elizabeth A.; Kyrpides, Nikos C.
2015-01-01
The Genomes OnLine Database (GOLD; http://www.genomesonline.org) is a comprehensive online resource to catalog and monitor genetic studies worldwide. GOLD provides up-to-date status on complete and ongoing sequencing projects along with a broad array of curated metadata. Here we report version 5 (v.5) of the database. The newly designed database schema and web user interface supports several new features including the implementation of a four level (meta)genome project classification system and a simplified intuitive web interface to access reports and launch search tools. The database currently hosts information for about 19 200 studies, 56 000 Biosamples, 56 000 sequencing projects and 39 400 analysis projects. More than just a catalog of worldwide genome projects, GOLD is a manually curated, quality-controlled metadata warehouse. The problems encountered in integrating disparate and varying quality data into GOLD are briefly highlighted. GOLD fully supports and follows the Genomic Standards Consortium (GSC) Minimum Information standards. PMID:25348402
The National Land Cover Database
Homer, Collin G.; Fry, Joyce A.; Barnes, Christopher A.
2012-01-01
The National Land Cover Database (NLCD) serves as the definitive Landsat-based, 30-meter resolution, land cover database for the Nation. NLCD provides spatial reference and descriptive data for characteristics of the land surface such as thematic class (for example, urban, agriculture, and forest), percent impervious surface, and percent tree canopy cover. NLCD supports a wide variety of Federal, State, local, and nongovernmental applications that seek to assess ecosystem status and health, understand the spatial patterns of biodiversity, predict effects of climate change, and develop land management policy. NLCD products are created by the Multi-Resolution Land Characteristics (MRLC) Consortium, a partnership of Federal agencies led by the U.S. Geological Survey. All NLCD data products are available for download at no charge to the public from the MRLC Web site: http://www.mrlc.gov.
ERIC Educational Resources Information Center
Mattern, Krista D.; Patterson, Brian F.
2011-01-01
The College Board formed a research consortium with four-year colleges and universities to build a national higher education database with the primary goal of validating the SAT® for use in college admission. The first sample included first-time, first-year students entering college in fall 2006, with 110 institutions providing students'…
ERIC Educational Resources Information Center
Mattern, Krista D.; Patterson, Brian F.
2006-01-01
The College Board formed a research consortium with four-year colleges and universities to build a national higher education database with the primary goal of validating the SAT®, which is used in college admission and consists of three sections: critical reading (SAT-CR), mathematics (SAT-M) and writing (SAT-W). This report builds on a body of…
ERIC Educational Resources Information Center
Mattern, Krista D.; Patterson, Brian F.
2012-01-01
The College Board formed a research consortium with four-year colleges and universities to build a national higher education database with the primary goal of validating the revised SAT®, which consists of three sections: critical reading (SAT-CR), mathematics (SAT-M), and writing (SAT-W), for use in college admission. A study by Mattern and…
NASA Technical Reports Server (NTRS)
1994-01-01
A consortium of researchers from the Jet Propulsion Laboratory and three other organizations used charge-coupled devices (CCDs) and other imaging enhancement technology to decipher previously unreadable portions of the Dead Sea Scrolls. The technique has potentially important implications for archeology.
Hybrid detection of lung nodules on CT scan images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Lin; Tan, Yongqiang; Schwartz, Lawrence H.
Purpose: The diversity of lung nodules poses difficulty for current computer-aided diagnostic (CAD) schemes for lung nodule detection on computed tomography (CT) scan images, especially in large-scale CT screening studies. We proposed a novel CAD scheme based on a hybrid method to address the challenges of detecting diverse lung nodules. Methods: The hybrid method proposed in this paper integrates several existing and widely used algorithms in the field of nodule detection, including morphological operations, dot enhancement based on the Hessian matrix, fuzzy connectedness segmentation, a local density maximum algorithm, a geodesic distance map, and regression tree classification. All of the adopted algorithms were organized into tree structures with multiple nodes. Each node in the tree structure aimed to deal with one type of lung nodule. Results: The method was evaluated on 294 CT scans from the Lung Image Database Consortium (LIDC) dataset. The CT scans were randomly divided into two independent subsets: a training set (196 scans) and a test set (98 scans). In total, the 294 CT scans contained 631 lung nodules, which were annotated by at least two radiologists participating in the LIDC project. The sensitivity and false positives per scan for the training set were 87% and 2.61; for the test set they were 85.2% and 3.13. Conclusions: The proposed hybrid method yielded high performance on the evaluation dataset and exhibits advantages over existing CAD schemes. We believe that the present method would be useful for a wide variety of CT imaging protocols used in both routine diagnosis and screening studies.
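As a rough illustration of one component named above, the sketch below (not the authors' code) shows how a Hessian-based dot-enhancement score for blob-like structures in a 3D CT volume might be computed; the scale parameter, the scoring formula, and all function names are assumptions for illustration only.

```python
# Illustrative sketch: Hessian-based dot enhancement for blob-like (nodule-like)
# structures in a 3D volume. Not the authors' implementation.
import numpy as np
from scipy.ndimage import gaussian_filter

def dot_enhancement(volume, sigma=2.0):
    """Return a per-voxel blob-likeness score from Hessian eigenvalues."""
    # Second-order Gaussian derivatives approximate the Hessian at scale sigma.
    h = {}
    for i, j in [(0, 0), (1, 1), (2, 2), (0, 1), (0, 2), (1, 2)]:
        order = [0, 0, 0]
        order[i] += 1
        order[j] += 1
        h[(i, j)] = gaussian_filter(volume.astype(np.float32), sigma, order=order)

    # Assemble the symmetric 3x3 Hessian for every voxel and take its eigenvalues.
    H = np.stack([
        np.stack([h[(0, 0)], h[(0, 1)], h[(0, 2)]], axis=-1),
        np.stack([h[(0, 1)], h[(1, 1)], h[(1, 2)]], axis=-1),
        np.stack([h[(0, 2)], h[(1, 2)], h[(2, 2)]], axis=-1),
    ], axis=-2)
    eig = np.linalg.eigvalsh(H)                 # ascending: l1 <= l2 <= l3
    l1, l3 = eig[..., 0], eig[..., 2]

    # Bright blobs have three negative eigenvalues; one common score is
    # |l3|^2 / |l1|, assigned only where all eigenvalues are negative.
    score = np.zeros_like(volume, dtype=np.float32)
    mask = np.all(eig < 0, axis=-1)
    score[mask] = (l3[mask] ** 2) / np.abs(l1[mask])
    return score
```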
HOSVD-Based 3D Active Appearance Model: Segmentation of Lung Fields in CT Images.
Wang, Qingzhu; Kang, Wanjun; Hu, Haihui; Wang, Bin
2016-07-01
An Active Appearance Model (AAM) is a computer vision model that can be used to effectively segment lung fields in CT images. However, the fitting result is often inadequate when the lungs are affected by high-density pathologies. To overcome this problem, we propose a Higher-order Singular Value Decomposition (HOSVD)-based three-dimensional (3D) AAM. An evaluation was performed on 310 diseased lungs from the Lung Image Database Consortium Image Collection. Other contemporary AAMs operate directly on patterns represented by vectors, i.e., before applying the AAM to a 3D lung volume, it has to be vectorized first into a vector pattern by some technique such as concatenation. However, some implicit structural or local contextual information may be lost in this transformation. Given the nature of the 3D lung volume, HOSVD is introduced to represent and process the lung in tensor space. Our method can not only operate directly on the original 3D tensor patterns, but also efficiently reduce computer memory usage. The evaluation resulted in an average Dice coefficient of 97.0% ± 0.59%, a mean absolute surface distance error of 1.0403 ± 0.5716 mm, a mean border positioning error of 0.9187 ± 0.5381 pixels, and a Hausdorff distance of 20.4064 ± 4.3855. Experimental results showed that our method delivered significantly better segmentation results compared with three other model-based lung segmentation approaches, namely 3D Snake, 3D ASM, and 3D AAM.
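For readers unfamiliar with HOSVD, the following minimal NumPy sketch (not the authors' implementation) shows how a 3D volume can be decomposed into factor matrices and a core tensor via SVDs of its mode-n unfoldings, which is the tensor-space representation the abstract refers to; the ranks and variable names are hypothetical.

```python
# Illustrative sketch: truncated HOSVD (Tucker decomposition) of a 3D volume.
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: move axis `mode` to the front and flatten the rest."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def hosvd(tensor, ranks):
    """Return the factor matrices and core tensor of a truncated HOSVD."""
    factors = []
    for mode, r in enumerate(ranks):
        # Left singular vectors of each unfolding span the mode-n subspace.
        u, _, _ = np.linalg.svd(unfold(tensor, mode), full_matrices=False)
        factors.append(u[:, :r])
    # Contract the tensor with each factor transpose; tensordot moves the
    # contracted mode to the end, so after three passes the axes are in order.
    core = tensor
    for u in factors:
        core = np.tensordot(core, u, axes=([0], [0]))
    return factors, core

# Toy usage on a random "volume"; a real lung volume would replace this.
vol = np.random.rand(32, 32, 16)
factors, core = hosvd(vol, ranks=(8, 8, 4))
approx = core
for u in factors:
    approx = np.tensordot(approx, u, axes=([0], [1]))
print(vol.shape, core.shape, np.linalg.norm(vol - approx) / np.linalg.norm(vol))
```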
A Web-based Alternative Non-animal Method Database for Safety Cosmetic Evaluations
Kim, Seung Won; Kim, Bae-Hwan
2016-01-01
Animal testing was used traditionally in the cosmetics industry to confirm product safety, but has begun to be banned; alternative methods to replace animal experiments are either in development, or are being validated, worldwide. Research data related to test substances are critical for developing novel alternative tests. Moreover, safety information on cosmetic materials has neither been collected in a database nor shared among researchers. Therefore, it is imperative to build and share a database of safety information on toxicological mechanisms and pathways collected through in vivo, in vitro, and in silico methods. We developed the CAMSEC database (named after the research team; the Consortium of Alternative Methods for Safety Evaluation of Cosmetics) to fulfill this purpose. On the same website, our aim is to provide updates on current alternative research methods in Korea. The database will not be used directly to conduct safety evaluations, but researchers or regulatory individuals can use it to facilitate their work in formulating safety evaluations for cosmetic materials. We hope this database will help establish new alternative research methods to conduct efficient safety evaluations of cosmetic materials. PMID:27437094
Wright, Judy M; Cottrell, David J; Mir, Ghazala
2014-07-01
To determine the optimal databases to search for studies of faith-sensitive interventions for treating depression. We examined 23 health, social science, religious, and grey literature databases searched for an evidence synthesis. Databases were prioritized by yield of (1) search results, (2) potentially relevant references identified during screening, (3) included references contained in the synthesis, and (4) included references that were available in the database. We assessed the impact of databases beyond MEDLINE, EMBASE, and PsycINFO by their ability to supply studies identifying new themes and issues. We identified pragmatic workload factors that influence database selection. PsycINFO was the best performing database within all priority lists. ArabPsyNet, CINAHL, Dissertations and Theses, EMBASE, Global Health, Health Management Information Consortium, MEDLINE, PsycINFO, and Sociological Abstracts were essential for our searches to retrieve the included references. Citation tracking activities and the personal library of one of the research teams made significant contributions of unique, relevant references. Religion studies databases (Am Theo Lib Assoc, FRANCIS) did not provide unique, relevant references. Literature searches for reviews and evidence syntheses of religion and health studies should include social science, grey literature, non-Western databases, personal libraries, and citation tracking activities. Copyright © 2014 Elsevier Inc. All rights reserved.
Illuminating the Depths of the MagIC (Magnetics Information Consortium) Database
NASA Astrophysics Data System (ADS)
Koppers, A. A. P.; Minnett, R.; Jarboe, N.; Jonestrask, L.; Tauxe, L.; Constable, C.
2015-12-01
The Magnetics Information Consortium (http://earthref.org/MagIC/) is a grass-roots cyberinfrastructure effort envisioned by the paleo-, geo-, and rock magnetic scientific community. Its mission is to archive their wealth of peer-reviewed raw data and interpretations from magnetics studies on natural and synthetic samples. Many of these valuable data are legacy datasets that were never published in their entirety, some resided in other databases that are no longer maintained, and others were never digitized from the field notebooks and lab work. Due to the volume of data collected, most studies, modern and legacy, only publish the interpreted results and, occasionally, a subset of the raw data. MagIC is making an extraordinary effort to archive these data in a single data model, including the raw instrument measurements if possible. This facilitates the reproducibility of the interpretations, the re-interpretation of the raw data as the community introduces new techniques, and the compilation of heterogeneous datasets that are otherwise distributed across multiple formats and physical locations. MagIC has developed tools to assist the scientific community in many stages of their workflow. Contributors easily share studies (in a private mode if so desired) in the MagIC Database with colleagues and reviewers prior to publication, publish the data online after the study is peer reviewed, and visualize their data in the context of the rest of the contributions to the MagIC Database. From organizing their data in the MagIC Data Model with an online editable spreadsheet, to validating the integrity of the dataset with automated plots and statistics, MagIC is continually lowering the barriers to transforming dark data into transparent and reproducible datasets. Additionally, this web application generalizes to other databases in MagIC's umbrella website (EarthRef.org) so that the Geochemical Earth Reference Model (http://earthref.org/GERM/) portal, Seamount Biogeosciences Network (http://earthref.org/SBN/), EarthRef Digital Archive (http://earthref.org/ERDA/) and EarthRef Reference Database (http://earthref.org/ERR/) benefit from its development.
MGH-USC Human Connectome Project Datasets with Ultra-High b-Value Diffusion MRI
Fan, Qiuyun; Witzel, Thomas; Nummenmaa, Aapo; Van Dijk, Koene R.A.; Van Horn, John D.; Drews, Michelle K.; Somerville, Leah H.; Sheridan, Margaret A.; Santillana, Rosario M.; Snyder, Jenna; Hedden, Trey; Shaw, Emily E.; Hollinshead, Marisa O.; Renvall, Ville; Zanzonico, Roberta; Keil, Boris; Cauley, Stephen; Polimeni, Jonathan R.; Tisdall, Dylan; Buckner, Randy L.; Wedeen, Van J.; Wald, Lawrence L.; Toga, Arthur W.; Rosen, Bruce R.
2015-01-01
The MGH-USC CONNECTOM MRI scanner housed at the Massachusetts General Hospital (MGH) is a major hardware innovation of the Human Connectome Project (HCP). The 3T CONNECTOM scanner is capable of producing magnetic field gradients of up to 300 mT/m strength for in vivo human brain imaging, which greatly shortens the time spent on diffusion encoding and decreases the signal loss due to T2 decay. To demonstrate the capability of the novel gradient system, data from healthy adult participants were acquired for this MGH-USC Adult Diffusion Dataset (N=35), minimally preprocessed, and shared through the Laboratory of Neuro Imaging Image Data Archive (LONI IDA) and the WU-Minn Connectome Database (ConnectomeDB). Another purpose of sharing the data is to facilitate methodological studies of diffusion MRI (dMRI) analyses utilizing high diffusion contrast, which perhaps is not easily feasible with standard MR gradient systems. In addition, acquisition of the MGH-Harvard-USC Lifespan Dataset is currently underway to include 120 healthy participants ranging from 8 to 90 years old, which will also be shared through LONI IDA and ConnectomeDB. Here we describe the efforts of the MGH-USC HCP consortium in acquiring and sharing the ultra-high b-value diffusion MRI data and provide a report on data preprocessing and access. We conclude with a demonstration of the example data, along with results of standard diffusion analyses, including q-ball Orientation Distribution Function (ODF) reconstruction and tractography. PMID:26364861
ERIC Educational Resources Information Center
Lee, Connie W.; Hinson, Tony M.
This publication is the final report of a 21-month project designed to (1) expand and refine the computer capabilities of the Vocational-Technical Education Consortium of States (V-TECS) to ensure rapid data access for generating routine and special occupational data-based reports; (2) develop and implement a computer storage and retrieval system…
NLCD tree canopy cover (TCC) maps of the contiguous United States and coastal Alaska
Robert Benton; Bonnie Ruefenacht; Vicky Johnson; Tanushree Biswas; Craig Baker; Mark Finco; Kevin Megown; John Coulston; Ken Winterberger; Mark Riley
2015-01-01
A tree canopy cover (TCC) map is one of three elements in the National Land Cover Database (NLCD) 2011 suite of nationwide geospatial data layers. In 2010, the USDA Forest Service (USFS) committed to creating the TCC layer as a member of the Multi-Resolution Land Characteristics (MRLC) consortium. A general methodology for creating the TCC layer was reported at the 2012 FIA...
ERIC Educational Resources Information Center
Mattern, Krista D.; Patterson, Brian F.
2013-01-01
The College Board formed a research consortium with four-year colleges and universities to build a national higher education database with the primary goal of validating the revised SAT for use in college admission. A study by Mattern and Patterson (2009) examined the relationship between SAT scores and retention to the second year. The sample…
ERIC Educational Resources Information Center
Mattern, Krista D.; Patterson, Brian F.
2012-01-01
The College Board formed a research consortium with four-year colleges and universities to build a national higher education database with the primary goal of validating the revised SAT for use in college admission. A study by Mattern and Patterson (2009) examined the relationship between SAT scores and retention to the second year of college. The…
Boutet, Emmanuel; Lieberherr, Damien; Tognolli, Michael; Schneider, Michel; Bansal, Parit; Bridge, Alan J; Poux, Sylvain; Bougueleret, Lydie; Xenarios, Ioannis
2016-01-01
The Universal Protein Resource (UniProt, http://www.uniprot.org ) consortium is an initiative of the SIB Swiss Institute of Bioinformatics (SIB), the European Bioinformatics Institute (EBI), and the Protein Information Resource (PIR) to provide the scientific community with a central resource for protein sequences and functional information. The UniProt consortium maintains the UniProt KnowledgeBase (UniProtKB), updated every 4 weeks, and several supplementary databases including the UniProt Reference Clusters (UniRef) and the UniProt Archive (UniParc). The Swiss-Prot section of the UniProt KnowledgeBase (UniProtKB/Swiss-Prot) contains publicly available, expertly and manually annotated protein sequences obtained from a broad spectrum of organisms. Plant protein entries are produced in the frame of the Plant Proteome Annotation Program (PPAP), with an emphasis on characterized proteins of Arabidopsis thaliana and Oryza sativa. High-level annotations provided by UniProtKB/Swiss-Prot are widely used to predict the annotation of newly available proteins through automatic pipelines. The purpose of this chapter is to present a guided tour of a UniProtKB/Swiss-Prot entry. We will also present some of the tools and databases that are linked to each entry.
Pan, Min; Cong, Peikuan; Wang, Yue; Lin, Changsong; Yuan, Ying; Dong, Jian; Banerjee, Santasree; Zhang, Tao; Chen, Yanling; Zhang, Ting; Chen, Mingqing; Hu, Peter; Zheng, Shu; Zhang, Jin; Qi, Ming
2011-12-01
The Human Variome Project (HVP) is an international consortium of clinicians, geneticists, and researchers from over 30 countries, aiming to facilitate the establishment and maintenance of standards, systems, and infrastructure for the worldwide collection and sharing of all genetic variations affecting human disease. The HVP-China Node will build new databases of genetic diseases and supplement existing ones. As the first effort, we have created a novel variant database of BRCA1 and BRCA2, mismatch repair (MMR) genes, and the APC gene for breast cancer, Lynch syndrome, and familial adenomatous polyposis (FAP), respectively, in the Chinese population, using the Leiden Open Variation Database (LOVD) format. We searched PubMed and some Chinese search engines to collect all the variants of these genes in the Chinese population that have already been detected and reported. There are some differences in the gene variants between the Chinese population and those of other ethnicities. The database is available online at http://www.genomed.org/LOVD/. Our database will be visible to users who survey other LOVD databases (e.g., by Google search, or by NCBI GeneTests search). Remote submissions are accepted, and the information is updated monthly. © 2011 Wiley Periodicals, Inc.
The IntAct molecular interaction database in 2012
Kerrien, Samuel; Aranda, Bruno; Breuza, Lionel; Bridge, Alan; Broackes-Carter, Fiona; Chen, Carol; Duesbury, Margaret; Dumousseau, Marine; Feuermann, Marc; Hinz, Ursula; Jandrasits, Christine; Jimenez, Rafael C.; Khadake, Jyoti; Mahadevan, Usha; Masson, Patrick; Pedruzzi, Ivo; Pfeiffenberger, Eric; Porras, Pablo; Raghunath, Arathi; Roechert, Bernd; Orchard, Sandra; Hermjakob, Henning
2012-01-01
IntAct is an open-source, open data molecular interaction database populated by data either curated from the literature or from direct data depositions. Two levels of curation are now available within the database, with both IMEx-level annotation and less detailed MIMIx-compatible entries currently supported. As from September 2011, IntAct contains approximately 275 000 curated binary interaction evidences from over 5000 publications. The IntAct website has been improved to enhance the search process and in particular the graphical display of the results. New data download formats are also available, which will facilitate the inclusion of IntAct's data in the Semantic Web. IntAct is an active contributor to the IMEx consortium (http://www.imexconsortium.org). IntAct source code and data are freely available at http://www.ebi.ac.uk/intact. PMID:22121220
Häupl, T; Skapenko, A; Hoppe, B; Skriner, K; Burkhardt, H; Poddubnyy, D; Ohrndorf, S; Sewerin, P; Mansmann, U; Stuhlmüller, B; Schulze-Koops, H; Burmester, G-R
2018-05-01
Rheumatic diseases are among the most common chronic inflammatory disorders. Besides severe pain and progressive destruction of the joints, rheumatoid arthritis (RA), spondyloarthritides (SpA), and psoriatic arthritis (PsA) impair working ability, reduce quality of life, and, if treated insufficiently, may increase mortality. With the introduction of biologics to treat these diseases, the demand for biomarkers for early diagnosis and therapeutic stratification has been growing continuously. The main goal of the ArthroMark consortium is to identify new biomarkers and to apply modern imaging technologies for diagnosis, follow-up assessment, and stratification of patients with RA, SpA, and PsA. With the development of new biomarkers for these diseases, the ArthroMark project contributes to research in chronic diseases of the musculoskeletal system. The cooperation between different national centers will utilize site-specific resources, such as biobanks and clinical studies, for sharing and productive networking of individual core areas in biomarker analysis. Joint data management and harmonization of data assessment, as well as best-practice characterization of patients with new imaging technologies, will optimize the quality of marker validation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borole, Abhijeet P; Hamilton, Choo Yieng; Vishnivetskaya, Tatiana A
2011-01-01
Using a pre-enriched microbial consortium as the inoculum and a continuous supply of carbon source, improvement in the performance of a three-dimensional, flow-through MFC anode utilizing a ferricyanide cathode was investigated. The power density increased from 170 W/m3 (1800 mW/m2) to 580 W/m3 (6130 mW/m2) when the carbon loading increased from 2.5 g/l-day to 50 g/l-day. The coulombic efficiency (CE) decreased from 90% to 23% with increasing carbon loading. The CEs are among the highest reported for glucose and lactate as the substrate, with the maximum current density reaching 15.1 A/m2. This suggests establishment of a very high performance exoelectrogenic microbial consortium at the anode. A maximum energy conversion efficiency of 54% was observed at a loading of 2.5 g/l-day. Biological characterization of the consortium showed the presence of Burkholderiales and Rhodocyclales as the dominant members. Imaging of the biofilms revealed thinner biofilms compared to the inoculum MFC, but a 1.9-fold higher power density.
The ENIGMA Consortium: large-scale collaborative analyses of neuroimaging and genetic data.
Thompson, Paul M; Stein, Jason L; Medland, Sarah E; Hibar, Derrek P; Vasquez, Alejandro Arias; Renteria, Miguel E; Toro, Roberto; Jahanshad, Neda; Schumann, Gunter; Franke, Barbara; Wright, Margaret J; Martin, Nicholas G; Agartz, Ingrid; Alda, Martin; Alhusaini, Saud; Almasy, Laura; Almeida, Jorge; Alpert, Kathryn; Andreasen, Nancy C; Andreassen, Ole A; Apostolova, Liana G; Appel, Katja; Armstrong, Nicola J; Aribisala, Benjamin; Bastin, Mark E; Bauer, Michael; Bearden, Carrie E; Bergmann, Orjan; Binder, Elisabeth B; Blangero, John; Bockholt, Henry J; Bøen, Erlend; Bois, Catherine; Boomsma, Dorret I; Booth, Tom; Bowman, Ian J; Bralten, Janita; Brouwer, Rachel M; Brunner, Han G; Brohawn, David G; Buckner, Randy L; Buitelaar, Jan; Bulayeva, Kazima; Bustillo, Juan R; Calhoun, Vince D; Cannon, Dara M; Cantor, Rita M; Carless, Melanie A; Caseras, Xavier; Cavalleri, Gianpiero L; Chakravarty, M Mallar; Chang, Kiki D; Ching, Christopher R K; Christoforou, Andrea; Cichon, Sven; Clark, Vincent P; Conrod, Patricia; Coppola, Giovanni; Crespo-Facorro, Benedicto; Curran, Joanne E; Czisch, Michael; Deary, Ian J; de Geus, Eco J C; den Braber, Anouk; Delvecchio, Giuseppe; Depondt, Chantal; de Haan, Lieuwe; de Zubicaray, Greig I; Dima, Danai; Dimitrova, Rali; Djurovic, Srdjan; Dong, Hongwei; Donohoe, Gary; Duggirala, Ravindranath; Dyer, Thomas D; Ehrlich, Stefan; Ekman, Carl Johan; Elvsåshagen, Torbjørn; Emsell, Louise; Erk, Susanne; Espeseth, Thomas; Fagerness, Jesen; Fears, Scott; Fedko, Iryna; Fernández, Guillén; Fisher, Simon E; Foroud, Tatiana; Fox, Peter T; Francks, Clyde; Frangou, Sophia; Frey, Eva Maria; Frodl, Thomas; Frouin, Vincent; Garavan, Hugh; Giddaluru, Sudheer; Glahn, David C; Godlewska, Beata; Goldstein, Rita Z; Gollub, Randy L; Grabe, Hans J; Grimm, Oliver; Gruber, Oliver; Guadalupe, Tulio; Gur, Raquel E; Gur, Ruben C; Göring, Harald H H; Hagenaars, Saskia; Hajek, Tomas; Hall, Geoffrey B; Hall, Jeremy; Hardy, John; Hartman, Catharina A; Hass, Johanna; Hatton, Sean N; Haukvik, Unn K; Hegenscheid, Katrin; Heinz, Andreas; Hickie, Ian B; Ho, Beng-Choon; Hoehn, David; Hoekstra, Pieter J; Hollinshead, Marisa; Holmes, Avram J; Homuth, Georg; Hoogman, Martine; Hong, L Elliot; Hosten, Norbert; Hottenga, Jouke-Jan; Hulshoff Pol, Hilleke E; Hwang, Kristy S; Jack, Clifford R; Jenkinson, Mark; Johnston, Caroline; Jönsson, Erik G; Kahn, René S; Kasperaviciute, Dalia; Kelly, Sinead; Kim, Sungeun; Kochunov, Peter; Koenders, Laura; Krämer, Bernd; Kwok, John B J; Lagopoulos, Jim; Laje, Gonzalo; Landen, Mikael; Landman, Bennett A; Lauriello, John; Lawrie, Stephen M; Lee, Phil H; Le Hellard, Stephanie; Lemaître, Herve; Leonardo, Cassandra D; Li, Chiang-Shan; Liberg, Benny; Liewald, David C; Liu, Xinmin; Lopez, Lorna M; Loth, Eva; Lourdusamy, Anbarasu; Luciano, Michelle; Macciardi, Fabio; Machielsen, Marise W J; Macqueen, Glenda M; Malt, Ulrik F; Mandl, René; Manoach, Dara S; Martinot, Jean-Luc; Matarin, Mar; Mather, Karen A; Mattheisen, Manuel; Mattingsdal, Morten; Meyer-Lindenberg, Andreas; McDonald, Colm; McIntosh, Andrew M; McMahon, Francis J; McMahon, Katie L; Meisenzahl, Eva; Melle, Ingrid; Milaneschi, Yuri; Mohnke, Sebastian; Montgomery, Grant W; Morris, Derek W; Moses, Eric K; Mueller, Bryon A; Muñoz Maniega, Susana; Mühleisen, Thomas W; Müller-Myhsok, Bertram; Mwangi, Benson; Nauck, Matthias; Nho, Kwangsik; Nichols, Thomas E; Nilsson, Lars-Göran; Nugent, Allison C; Nyberg, Lars; Olvera, Rene L; Oosterlaan, Jaap; Ophoff, Roel A; Pandolfo, Massimo; Papalampropoulou-Tsiridou, Melina; Papmeyer, Martina; 
Paus, Tomas; Pausova, Zdenka; Pearlson, Godfrey D; Penninx, Brenda W; Peterson, Charles P; Pfennig, Andrea; Phillips, Mary; Pike, G Bruce; Poline, Jean-Baptiste; Potkin, Steven G; Pütz, Benno; Ramasamy, Adaikalavan; Rasmussen, Jerod; Rietschel, Marcella; Rijpkema, Mark; Risacher, Shannon L; Roffman, Joshua L; Roiz-Santiañez, Roberto; Romanczuk-Seiferth, Nina; Rose, Emma J; Royle, Natalie A; Rujescu, Dan; Ryten, Mina; Sachdev, Perminder S; Salami, Alireza; Satterthwaite, Theodore D; Savitz, Jonathan; Saykin, Andrew J; Scanlon, Cathy; Schmaal, Lianne; Schnack, Hugo G; Schork, Andrew J; Schulz, S Charles; Schür, Remmelt; Seidman, Larry; Shen, Li; Shoemaker, Jody M; Simmons, Andrew; Sisodiya, Sanjay M; Smith, Colin; Smoller, Jordan W; Soares, Jair C; Sponheim, Scott R; Sprooten, Emma; Starr, John M; Steen, Vidar M; Strakowski, Stephen; Strike, Lachlan; Sussmann, Jessika; Sämann, Philipp G; Teumer, Alexander; Toga, Arthur W; Tordesillas-Gutierrez, Diana; Trabzuni, Daniah; Trost, Sarah; Turner, Jessica; Van den Heuvel, Martijn; van der Wee, Nic J; van Eijk, Kristel; van Erp, Theo G M; van Haren, Neeltje E M; van 't Ent, Dennis; van Tol, Marie-Jose; Valdés Hernández, Maria C; Veltman, Dick J; Versace, Amelia; Völzke, Henry; Walker, Robert; Walter, Henrik; Wang, Lei; Wardlaw, Joanna M; Weale, Michael E; Weiner, Michael W; Wen, Wei; Westlye, Lars T; Whalley, Heather C; Whelan, Christopher D; White, Tonya; Winkler, Anderson M; Wittfeld, Katharina; Woldehawariat, Girma; Wolf, Christiane; Zilles, David; Zwiers, Marcel P; Thalamuthu, Anbupalam; Schofield, Peter R; Freimer, Nelson B; Lawrence, Natalia S; Drevets, Wayne
2014-06-01
The Enhancing NeuroImaging Genetics through Meta-Analysis (ENIGMA) Consortium is a collaborative network of researchers working together on a range of large-scale studies that integrate data from 70 institutions worldwide. Organized into Working Groups that tackle questions in neuroscience, genetics, and medicine, ENIGMA studies have analyzed neuroimaging data from over 12,826 subjects. In addition, data from 12,171 individuals were provided by the CHARGE consortium for replication of findings, in a total of 24,997 subjects. By meta-analyzing results from many sites, ENIGMA has detected factors that affect the brain that no individual site could detect on its own, and that require larger numbers of subjects than any individual neuroimaging study has currently collected. ENIGMA's first project was a genome-wide association study identifying common variants in the genome associated with hippocampal volume or intracranial volume. Continuing work is exploring genetic associations with subcortical volumes (ENIGMA2) and white matter microstructure (ENIGMA-DTI). Working groups also focus on understanding how schizophrenia, bipolar illness, major depression and attention deficit/hyperactivity disorder (ADHD) affect the brain. We review the current progress of the ENIGMA Consortium, along with challenges and unexpected discoveries made on the way.
ERIC Educational Resources Information Center
Kim, Deok-Hwan; Chung, Chin-Wan
2003-01-01
Discusses the collection fusion problem of image databases, concerned with retrieving relevant images by content based retrieval from image databases distributed on the Web. Focuses on a metaserver which selects image databases supporting similarity measures and proposes a new algorithm which exploits a probabilistic technique using Bayesian…
1989-03-01
[...] Toys, is a model of the dinosaur Tyrannosaurus Rex. This particular test case is characterized by sharply discontinuous depths varying over a wide range. Recoverable figure captions: T. Rex scene, left and right images of the Tinker Toy object; T. Rex scene, connected contours extracted from the left and right images.
The Generation Challenge Programme Platform: Semantic Standards and Workbench for Crop Science
Bruskiewich, Richard; Senger, Martin; Davenport, Guy; Ruiz, Manuel; Rouard, Mathieu; Hazekamp, Tom; Takeya, Masaru; Doi, Koji; Satoh, Kouji; Costa, Marcos; Simon, Reinhard; Balaji, Jayashree; Akintunde, Akinnola; Mauleon, Ramil; Wanchana, Samart; Shah, Trushar; Anacleto, Mylah; Portugal, Arllet; Ulat, Victor Jun; Thongjuea, Supat; Braak, Kyle; Ritter, Sebastian; Dereeper, Alexis; Skofic, Milko; Rojas, Edwin; Martins, Natalia; Pappas, Georgios; Alamban, Ryan; Almodiel, Roque; Barboza, Lord Hendrix; Detras, Jeffrey; Manansala, Kevin; Mendoza, Michael Jonathan; Morales, Jeffrey; Peralta, Barry; Valerio, Rowena; Zhang, Yi; Gregorio, Sergio; Hermocilla, Joseph; Echavez, Michael; Yap, Jan Michael; Farmer, Andrew; Schiltz, Gary; Lee, Jennifer; Casstevens, Terry; Jaiswal, Pankaj; Meintjes, Ayton; Wilkinson, Mark; Good, Benjamin; Wagner, James; Morris, Jane; Marshall, David; Collins, Anthony; Kikuchi, Shoshi; Metz, Thomas; McLaren, Graham; van Hintum, Theo
2008-01-01
The Generation Challenge programme (GCP) is a global crop research consortium directed toward crop improvement through the application of comparative biology and genetic resources characterization to plant breeding. A key consortium research activity is the development of a GCP crop bioinformatics platform to support GCP research. This platform includes the following: (i) shared, public platform-independent domain models, ontology, and data formats to enable interoperability of data and analysis flows within the platform; (ii) web service and registry technologies to identify, share, and integrate information across diverse, globally dispersed data sources, as well as to access high-performance computational (HPC) facilities for computationally intensive, high-throughput analyses of project data; (iii) platform-specific middleware reference implementations of the domain model integrating a suite of public (largely open-access/-source) databases and software tools into a workbench to facilitate biodiversity analysis, comparative analysis of crop genomic data, and plant breeding decision making. PMID:18483570
Proceedings -- US Russian workshop on fuel cell technologies (in English;Russian)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, B.; Sylwester, A.
1996-04-01
On September 26-28, 1995, Sandia National Laboratories sponsored the first Joint US/Russian Workshop on Fuel Cell Technology at the Marriott Hotel in Albuquerque, New Mexico. This workshop brought together the US and Russian fuel cell communities as represented by users, producers, R and D establishments, and government agencies. Customer needs and potential markets in both countries were discussed to establish a customer focus for the workshop. Parallel technical sessions defined research needs and opportunities for collaboration to advance fuel cell technology. A desired outcome of the workshop was the formation of a Russian/American Fuel Cell Consortium to advance fuel cell technology for application in emerging markets in both countries. This consortium is envisioned to involve industry and national labs in both countries. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.
Desiderata for a Computer-Assisted Audit Tool for Clinical Data Source Verification Audits
Duda, Stephany N.; Wehbe, Firas H.; Gadd, Cynthia S.
2013-01-01
Clinical data auditing often requires validating the contents of clinical research databases against source documents available in health care settings. Currently available data audit software, however, does not provide features necessary to compare the contents of such databases to source data in paper medical records. This work enumerates the primary weaknesses of using paper forms for clinical data audits and identifies the shortcomings of existing data audit software, as informed by the experiences of an audit team evaluating data quality for an international research consortium. The authors propose a set of attributes to guide the development of a computer-assisted clinical data audit tool to simplify and standardize the audit process. PMID:20841814
Bannasch, Detlev; Mehrle, Alexander; Glatting, Karl-Heinz; Pepperkok, Rainer; Poustka, Annemarie; Wiemann, Stefan
2004-01-01
We have implemented LIFEdb (http://www.dkfz.de/LIFEdb) to link information regarding novel human full-length cDNAs generated and sequenced by the German cDNA Consortium with functional information on the encoded proteins produced in functional genomics and proteomics approaches. The database also serves as a sample-tracking system to manage the process from cDNA to experimental read-out and data interpretation. A web interface enables the scientific community to explore and visualize features of the annotated cDNAs and ORFs combined with experimental results, and thus helps to unravel new features of proteins with as yet unknown functions. PMID:14681468
1991-11-01
Recoverable titles and authors from this report excerpt: "Image Deblurring for Multiple-Point Impulse Responses," Bryan J. Stossel and Nicholas George; "Backscatter from a Tilted Rough Disc," Donald J. Schertler and Nicholas George; a contribution by Keith B. Farr and Nicholas George.
Carey, George B; Kazantsev, Stephanie; Surati, Mosmi; Rolle, Cleo E; Kanteti, Archana; Sadiq, Ahad; Bahroos, Neil; Raumann, Brigitte; Madduri, Ravi; Dave, Paul; Starkey, Adam; Hensing, Thomas; Husain, Aliya N; Vokes, Everett E; Vigneswaran, Wickii; Armato, Samuel G; Kindler, Hedy L; Salgia, Ravi
2012-01-01
Objective: An area of need in cancer informatics is the ability to store images in a comprehensive database as part of translational cancer research. To meet this need, we have implemented a novel tandem database infrastructure that facilitates image storage and utilisation. Background: We had previously implemented the Thoracic Oncology Program Database Project (TOPDP) database for our translational cancer research needs. While useful for many research endeavours, it is unable to store images, hence our need to implement an imaging database which could communicate easily with the TOPDP database. Methods: The Thoracic Oncology Research Program (TORP) imaging database was designed using the Research Electronic Data Capture (REDCap) platform, which was developed by Vanderbilt University. To demonstrate proof of principle and evaluate utility, we performed a retrospective investigation into tumour response for malignant pleural mesothelioma (MPM) patients treated at the University of Chicago Medical Center with either of two analogous chemotherapy regimens and consented to at least one of two UCMC IRB protocols, 9571 and 13473A. Results: A cohort of 22 MPM patients was identified using clinical data in the TOPDP database. After measurements were acquired, two representative CT images and 0–35 histological images per patient were successfully stored in the TORP database, along with clinical and demographic data. Discussion: We implemented the TORP imaging database to be used in conjunction with our comprehensive TOPDP database. While it requires an additional effort to use two databases, our database infrastructure facilitates more comprehensive translational research. Conclusions: The investigation described herein demonstrates the successful implementation of this novel tandem imaging database infrastructure, as well as the potential utility of investigations enabled by it. The data model presented here can be utilised as the basis for further development of other larger, more streamlined databases in the future. PMID:23103606
Standardizing the nomenclature of Martian impact crater ejecta morphologies
Barlow, Nadine G.; Boyce, Joseph M.; Costard, Francois M.; Craddock, Robert A.; Garvin, James B.; Sakimoto, Susan E.H.; Kuzmin, Ruslan O.; Roddy, David J.; Soderblom, Laurence A.
2000-01-01
The Mars Crater Morphology Consortium recommends the use of a standardized nomenclature system when discussing Martian impact crater ejecta morphologies. The system utilizes nongenetic descriptors to identify the various ejecta morphologies seen on Mars. This system is designed to facilitate communication and collaboration between researchers. Crater morphology databases will be archived through the U.S. Geological Survey in Flagstaff, where a comprehensive catalog of Martian crater morphologic information will be maintained.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reddy, Tatiparthi B. K.; Thomas, Alex D.; Stamatis, Dimitri
The Genomes OnLine Database (GOLD; http://www.genomesonline.org) is a comprehensive online resource to catalog and monitor genetic studies worldwide. GOLD provides up-to-date status on complete and ongoing sequencing projects along with a broad array of curated metadata. Within this paper, we report version 5 (v.5) of the database. The newly designed database schema and web user interface supports several new features including the implementation of a four level (meta)genome project classification system and a simplified intuitive web interface to access reports and launch search tools. The database currently hosts information for about 19 200 studies, 56 000 Biosamples, 56 000 sequencing projects and 39 400 analysis projects. More than just a catalog of worldwide genome projects, GOLD is a manually curated, quality-controlled metadata warehouse. The problems encountered in integrating disparate and varying quality data into GOLD are briefly highlighted. Lastly, GOLD fully supports and follows the Genomic Standards Consortium (GSC) Minimum Information standards.
The MIntAct project—IntAct as a common curation platform for 11 molecular interaction databases
Orchard, Sandra; Ammari, Mais; Aranda, Bruno; Breuza, Lionel; Briganti, Leonardo; Broackes-Carter, Fiona; Campbell, Nancy H.; Chavali, Gayatri; Chen, Carol; del-Toro, Noemi; Duesbury, Margaret; Dumousseau, Marine; Galeota, Eugenia; Hinz, Ursula; Iannuccelli, Marta; Jagannathan, Sruthi; Jimenez, Rafael; Khadake, Jyoti; Lagreid, Astrid; Licata, Luana; Lovering, Ruth C.; Meldal, Birgit; Melidoni, Anna N.; Milagros, Mila; Peluso, Daniele; Perfetto, Livia; Porras, Pablo; Raghunath, Arathi; Ricard-Blum, Sylvie; Roechert, Bernd; Stutz, Andre; Tognolli, Michael; van Roey, Kim; Cesareni, Gianni; Hermjakob, Henning
2014-01-01
IntAct (freely available at http://www.ebi.ac.uk/intact) is an open-source, open data molecular interaction database populated by data either curated from the literature or from direct data depositions. IntAct has developed a sophisticated web-based curation tool, capable of supporting both IMEx- and MIMIx-level curation. This tool is now utilized by multiple additional curation teams, all of whom annotate data directly into the IntAct database. Members of the IntAct team supply appropriate levels of training, perform quality control on entries and take responsibility for long-term data maintenance. Recently, the MINT and IntAct databases decided to merge their separate efforts to make optimal use of limited developer resources and maximize the curation output. All data manually curated by the MINT curators have been moved into the IntAct database at EMBL-EBI and are merged with the existing IntAct dataset. Both IntAct and MINT are active contributors to the IMEx consortium (http://www.imexconsortium.org). PMID:24234451
LMSD: LIPID MAPS structure database
Sud, Manish; Fahy, Eoin; Cotter, Dawn; Brown, Alex; Dennis, Edward A.; Glass, Christopher K.; Merrill, Alfred H.; Murphy, Robert C.; Raetz, Christian R. H.; Russell, David W.; Subramaniam, Shankar
2007-01-01
The LIPID MAPS Structure Database (LMSD) is a relational database encompassing structures and annotations of biologically relevant lipids. Structures of lipids in the database come from four sources: (i) the LIPID MAPS Consortium's core laboratories and partners; (ii) lipids identified by LIPID MAPS experiments; (iii) computationally generated structures for appropriate lipid classes; (iv) biologically relevant lipids manually curated from LIPID BANK, LIPIDAT, and other public sources. All the lipid structures in LMSD are drawn in a consistent fashion. In addition to a classification-based retrieval of lipids, users can search LMSD using either text-based or structure-based search options. The text-based search implementation supports data retrieval by any combination of these data fields: LIPID MAPS ID, systematic or common name, mass, formula, category, main class, and subclass. The structure-based search, in conjunction with optional data fields, provides the capability to perform a substructure search or exact match for the structure drawn by the user. Search results, in addition to structure and annotations, include relevant links to external databases. The LMSD is publicly available online. PMID:17098933
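As a hedged illustration of the structure-based search described above (not LMSD's actual implementation), the sketch below uses RDKit to run a substructure query and an exact-match comparison over a couple of hypothetical lipid records; the SMILES strings and record labels are stand-ins for database rows.

```python
# Illustrative sketch: substructure and exact-match search over lipid structures.
from rdkit import Chem

records = {
    "palmitic acid (example lipid record)": "CCCCCCCCCCCCCCCC(=O)O",
    "ethanol (non-lipid control)": "CCO",
}

query = Chem.MolFromSmarts("C(=O)O")            # carboxylic acid substructure
target = Chem.MolToSmiles(Chem.MolFromSmiles("CCCCCCCCCCCCCCCC(=O)O"))

for name, smiles in records.items():
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        continue                                # skip unparsable structures
    substruct_hit = mol.HasSubstructMatch(query)
    exact_hit = Chem.MolToSmiles(mol) == target  # canonical SMILES equality
    print(f"{name}: substructure={substruct_hit}, exact={exact_hit}")
```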
Education and Strategic Research Collaborations
Los Alamos National Laboratory, National Security Education Center: Energy Security Council, New Mexico Consortium, Geophysics, Planetary Physics, Signatures. Collaborations for education and strategic research, student…
Design and deployment of a large brain-image database for clinical and nonclinical research
NASA Astrophysics Data System (ADS)
Yang, Guo Liang; Lim, Choie Cheio Tchoyoson; Banukumar, Narayanaswami; Aziz, Aamer; Hui, Francis; Nowinski, Wieslaw L.
2004-04-01
An efficient database is an essential component for organizing diverse information on image metadata and patient information for research in medical imaging. This paper describes the design, development, and deployment of a large database system serving as a brain image repository that can be used across different platforms in various medical research projects. It forms the infrastructure that links hospitals and institutions together and shares data among them. The database contains patient-, pathology-, image-, research-, and management-specific data. The functionalities of the database system include image uploading, storage, indexing, downloading, and sharing, as well as database querying and management, with security and data anonymization concerns well taken care of. The structure of the database is a multi-tier client-server architecture with a Relational Database Management System, Security Layer, Application Layer, and User Interface. An image source adapter has been developed to handle most of the popular image formats. The database has a web-browser-based user interface and is easy to handle. We have used the Java programming language for its platform independence and vast function libraries. The brain image database can sort data according to clinically relevant information, which can be used effectively in research from the clinicians' point of view. The database is suitable for validation of algorithms on large populations of cases. Medical images for processing can be identified and organized based on information in image metadata. Clinical research in various pathologies can thus be performed with greater efficiency, and large image repositories can be managed more effectively. A prototype of the system has been installed in a few hospitals and is working to the satisfaction of the clinicians.
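A minimal sketch of the kind of relational layout and metadata query described above, assuming hypothetical table and column names (the actual schema is not given in the abstract):

```python
# Illustrative sketch: a tiny relational image-metadata store and query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE patient  (patient_id INTEGER PRIMARY KEY, anonymized_code TEXT, age INTEGER);
CREATE TABLE pathology(pathology_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE image    (image_id INTEGER PRIMARY KEY,
                       patient_id INTEGER REFERENCES patient(patient_id),
                       pathology_id INTEGER REFERENCES pathology(pathology_id),
                       modality TEXT, acquisition_date TEXT, file_uri TEXT);
""")
conn.execute("INSERT INTO patient VALUES (1, 'ANON-0001', 54)")
conn.execute("INSERT INTO pathology VALUES (1, 'glioma')")
conn.execute("INSERT INTO image VALUES (1, 1, 1, 'MR-T1', '2003-11-02', 'file:///repo/img1.dcm')")

# Query: all MR images of patients with a given pathology, e.g. for algorithm validation.
rows = conn.execute("""
    SELECT p.anonymized_code, i.modality, i.acquisition_date, i.file_uri
    FROM image i
    JOIN patient p   ON p.patient_id = i.patient_id
    JOIN pathology d ON d.pathology_id = i.pathology_id
    WHERE d.name = ? AND i.modality LIKE 'MR%'
""", ("glioma",)).fetchall()
print(rows)
```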
NASA Astrophysics Data System (ADS)
Yamamoto, Masa-Yuki; Okamoto, Sumito; Miyoshi, Terunori; Takamura, Yuzaburo; Aoshima, Akira; Hinokuchi, Jin
As one of the space educational projects in Japan, a triangulation observation project of TLEs (Transient Luminous Events: sprites, elves, blue jets, etc.) has been carried out since 2006 in collaboration between 29 Super Science High Schools (SSH) and Kochi University of Technology (KUT). Following the previous success of sprite observations by "Astro High-school" since 2004, the SSH consortium Kochi was established as a national space educational project supported by the Japan Science and Technology Agency (JST). A high-sensitivity CCD camera (Watec, Neptune-100) with a 6 mm F/1.4 C-mount lens (Fujinon) and motion-detection software (UFOCapture, SonotaCo) were given to each participating team in order to monitor the northern night sky of Japan with almost full coverage. During each school year since 2006 (a school year in Japan begins on April 1 and ends on March 31), thousands of TLE images were taken by many student teams, with a considerable number of successful triangulations; that is, (school year, number of TLE observations, number of triangulations) were (2006, 43, 3), (2007, 441, 95), (2008, 734, 115), and (2009, 337, 78). The observation campaign began in December 2006; numbers are as of February 28, 2010. Recently, some high schools started wide-field observations using multiple cameras, and others started VLF observations using handmade loop antennae and amplifiers. Information exchange within the SSH consortium Kochi, including scientific discussion, is frequent via KUT's mailing lists. Interactions with amateur observers in Japan are also made through the internet forum of "SonotaCo Network Japan" (http://sonotaco.jp). The project is a success not only as an educational project but also as a scientific one. In February 2008, simultaneous observations of elves were obtained; in November 2009, giant "graft-shaped" sprites driven by jets were clearly imaged together with VLF signals. Most recently, elves and sprite halos with stripe-like wave structures on airglow were successfully imaged in January 2010. In this talk, four years of activities of the SSH consortium Kochi will be presented by participating high-school students and teachers with their own impressions.
Mobile object retrieval in server-based image databases
NASA Astrophysics Data System (ADS)
Manger, D.; Pagel, F.; Widak, H.
2013-05-01
The increasing number of mobile phones equipped with powerful cameras leads to huge collections of user-generated images. To utilize the information in these images on site, image retrieval systems are becoming more and more popular for searching for similar objects in a user's own image database. As the computational performance and the memory capacity of mobile devices are constantly increasing, this search can often be performed on the device itself. This is feasible, for example, if the images are represented with global image features or if the search is done using EXIF or textual metadata. However, for larger image databases, if multiple users are meant to contribute to a growing image database, or if powerful content-based image retrieval methods with local features are required, a server-based image retrieval backend is needed. In this work, we present a content-based image retrieval system with a client-server architecture working with local features. On the server side, scalability to large image databases is addressed with the popular bag-of-words model with state-of-the-art extensions. The client end of the system focuses on a lightweight user interface that presents the most similar images from the database and highlights the visual information they share with the query image. Additionally, new images can be added to the database, making it a powerful and interactive tool for mobile content-based image retrieval.
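The following sketch (not the paper's system) illustrates the bag-of-visual-words idea mentioned above with ORB local features, a small k-means vocabulary, and normalized histogram matching; the file names, the vocabulary size, and the use of Euclidean k-means on binary ORB descriptors are simplifying assumptions.

```python
# Illustrative sketch: a minimal bag-of-visual-words index and query ranking.
import cv2
import numpy as np
from sklearn.cluster import MiniBatchKMeans

orb = cv2.ORB_create(nfeatures=500)

def local_descriptors(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, desc = orb.detectAndCompute(img, None)
    return desc if desc is not None else np.empty((0, 32), np.uint8)

def bow_histogram(desc, vocab):
    words = vocab.predict(desc.astype(np.float32))
    hist, _ = np.histogram(words, bins=np.arange(vocab.n_clusters + 1))
    hist = hist.astype(np.float32)
    return hist / (np.linalg.norm(hist) + 1e-9)   # L2-normalize for cosine similarity

# Server side: build the vocabulary and index the database images.
db_paths = ["db_001.jpg", "db_002.jpg"]           # hypothetical image files
all_desc = np.vstack([local_descriptors(p) for p in db_paths]).astype(np.float32)
vocab = MiniBatchKMeans(n_clusters=64, random_state=0).fit(all_desc)
index = {p: bow_histogram(local_descriptors(p), vocab) for p in db_paths}

# Client query: rank database images by histogram similarity to the query image.
query_hist = bow_histogram(local_descriptors("query.jpg"), vocab)
ranking = sorted(index.items(), key=lambda kv: -float(query_hist @ kv[1]))
print([p for p, _ in ranking])
```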
NASA Astrophysics Data System (ADS)
Raup, B. H.; Khalsa, S. S.; Armstrong, R.
2007-12-01
The Global Land Ice Measurements from Space (GLIMS) project has built a geospatial and temporal database of glacier data, composed of glacier outlines and various scalar attributes. These data are derived primarily from satellite imagery, such as from ASTER and Landsat. Each "snapshot" of a glacier is from a specific time, and the database is designed to store multiple snapshots representative of different times. We have implemented two web-based interfaces to the database; one enables exploration of the data via interactive maps (web map server), while the other allows searches based on text-field constraints. The web map server is an Open Geospatial Consortium (OGC) compliant Web Map Server (WMS) and Web Feature Server (WFS). This means that other web sites can display glacier layers from our site over the Internet, or retrieve glacier features in vector format. All components of the system are implemented using Open Source software: Linux, PostgreSQL, PostGIS (geospatial extensions to the database), MapServer (WMS and WFS), and several supporting components such as Proj.4 (a geographic projection library) and PHP. These tools are robust and provide a flexible and powerful framework for web mapping applications. As a service to the GLIMS community, the database contains metadata on all ASTER imagery acquired over glacierized terrain. Reduced-resolution versions of the images (browse imagery) can be viewed either as a layer in the MapServer application or overlaid on the virtual globe within Google Earth. The interactive map application allows the user to constrain by time what data appear on the map. For example, ASTER scenes or glacier outlines from 2002 only, or from autumn of any year, can be displayed. The system allows users to download their selected glacier data in a choice of formats. The results of a query based on spatial selection (using a mouse) or text-field constraints can be downloaded in any of these formats: ESRI shapefiles, KML (Google Earth), MapInfo, GML (Geography Markup Language), and GMT (Generic Mapping Tools). This "clip-and-ship" function allows users to download only the data they are interested in. Our flexible web interfaces to the database, which include various support layers (e.g., a layer to help collaborators identify satellite imagery over their region of expertise), will facilitate enhanced analysis of glacier systems, their distribution, and their impacts on other Earth systems.
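A hedged sketch of the kind of PostGIS query that could sit behind the "clip-and-ship" function, assuming a hypothetical table and column layout (the actual GLIMS schema is not shown here):

```python
# Illustrative sketch: select glacier outlines intersecting a bounding box and a
# time range from a PostGIS-enabled database. Table, column and connection
# parameters are hypothetical.
import psycopg2

conn = psycopg2.connect("dbname=glims_example user=reader")   # hypothetical DSN
bbox = (-74.0, -50.5, -72.5, -49.0)    # lon_min, lat_min, lon_max, lat_max

sql = """
    SELECT glacier_id, acquisition_date, ST_AsGML(outline) AS outline_gml
    FROM glacier_snapshot
    WHERE outline && ST_MakeEnvelope(%s, %s, %s, %s, 4326)
      AND acquisition_date BETWEEN %s AND %s
    ORDER BY acquisition_date;
"""
with conn, conn.cursor() as cur:
    cur.execute(sql, (*bbox, "2002-01-01", "2002-12-31"))
    for glacier_id, date, gml in cur.fetchall():
        print(glacier_id, date, gml[:60], "...")
```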
Kawano, Shin; Watanabe, Tsutomu; Mizuguchi, Sohei; Araki, Norie; Katayama, Toshiaki; Yamaguchi, Atsuko
2014-07-01
TogoTable (http://togotable.dbcls.jp/) is a web tool that adds user-specified annotations to a table that a user uploads. Annotations are drawn from several biological databases that use the Resource Description Framework (RDF) data model. TogoTable uses database identifiers (IDs) in the table as query keys for searching. RDF data, which form a network called Linked Open Data (LOD), can be searched from SPARQL endpoints using the SPARQL query language. Because TogoTable uses RDF, it can integrate annotations not only from the reference database to which the IDs originally belong, but also from externally linked databases via the LOD network. For example, annotations in the Protein Data Bank can be retrieved using GeneID through links provided by the UniProt RDF. Because RDF has been standardized by the World Wide Web Consortium, any database with annotations based on the RDF data model can be easily incorporated into this tool. We believe that TogoTable is a valuable Web tool, particularly for experimental biologists who need to process huge amounts of data such as high-throughput experimental output. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
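As an illustration of the SPARQL-based annotation retrieval described above (not TogoTable's own code), the sketch below queries an RDF endpoint for annotations of a column of UniProt accessions; the endpoint URL and the up:mnemonic predicate are assumptions for illustration.

```python
# Illustrative sketch: annotate a column of UniProt IDs via a SPARQL endpoint.
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("https://sparql.uniprot.org/sparql")   # assumed endpoint
ids = ["P04637", "P38398"]                                       # example accessions

values = " ".join(f"(<http://purl.uniprot.org/uniprot/{acc}>)" for acc in ids)
endpoint.setQuery(f"""
    PREFIX up: <http://purl.uniprot.org/core/>
    SELECT ?protein ?mnemonic WHERE {{
        VALUES (?protein) {{ {values} }}
        ?protein up:mnemonic ?mnemonic .
    }}
""")
endpoint.setReturnFormat(JSON)
results = endpoint.query().convert()

# Map each ID back to its retrieved annotation, ready to be added as a new column.
annotations = {
    row["protein"]["value"].rsplit("/", 1)[-1]: row["mnemonic"]["value"]
    for row in results["results"]["bindings"]
}
print(annotations)
```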
Hymenoptera Genome Database: integrating genome annotations in HymenopteraMine
Elsik, Christine G.; Tayal, Aditi; Diesh, Colin M.; Unni, Deepak R.; Emery, Marianne L.; Nguyen, Hung N.; Hagen, Darren E.
2016-01-01
We report an update of the Hymenoptera Genome Database (HGD) (http://HymenopteraGenome.org), a model organism database for insect species of the order Hymenoptera (ants, bees and wasps). HGD maintains genomic data for 9 bee species, 10 ant species and 1 wasp, including the versions of genome and annotation data sets published by the genome sequencing consortiums and those provided by NCBI. A new data-mining warehouse, HymenopteraMine, based on the InterMine data warehousing system, integrates the genome data with data from external sources and facilitates cross-species analyses based on orthology. New genome browsers and annotation tools based on JBrowse/WebApollo provide easy genome navigation and viewing of high-throughput sequence data sets, and can be used for collaborative genome annotation. All of the genomes and annotation data sets are combined into a single BLAST server that allows users to select and combine sequence data sets to search. PMID:26578564
Purchasing Officers, Often Unappreciated, Point to Importance of Their Campus Role.
ERIC Educational Resources Information Center
Heller, Scott
1985-01-01
College purchasing officers need to dispel their image as penny-pinchers looking for the low bid and purveyors of paperwork who keep employees from the products they want. With competitive bidding and consortium buying, universities have realized significant savings. (MLW)
Sihong Chen; Jing Qin; Xing Ji; Baiying Lei; Tianfu Wang; Dong Ni; Jie-Zhi Cheng
2017-03-01
The gap between computational and semantic features is one of the major factors that keep computer-aided diagnosis (CAD) performance from clinical usage. To bridge this gap, we exploit three multi-task learning (MTL) schemes to leverage heterogeneous computational features derived from deep learning models of stacked denoising autoencoder (SDAE) and convolutional neural network (CNN), as well as hand-crafted Haar-like and HoG features, for the description of 9 semantic features for lung nodules in CT images. We consider that relations may exist among the semantic features of "spiculation", "texture", "margin", etc., that can be explored with the MTL. The Lung Image Database Consortium (LIDC) data is adopted in this study for its rich annotation resources. The LIDC nodules were quantitatively scored w.r.t. 9 semantic features by 12 radiologists from several institutes in the U.S.A. By treating each semantic feature as an individual task, the MTL schemes select and map the heterogeneous computational features toward the radiologists' ratings with cross-validation evaluation schemes on 2400 randomly selected nodules from the LIDC dataset. The experimental results suggest that the predicted semantic scores from the three MTL schemes are closer to the radiologists' ratings than the scores from single-task LASSO and elastic net regression methods. The proposed semantic attribute scoring scheme may provide richer quantitative assessments of nodules for better support of diagnostic decision and management. Meanwhile, the capability of the automatic association of medical image contents with the clinical semantic terms by our method may also assist the development of medical search engines.
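One simple way to realize the "one task per semantic feature" idea is a multi-task regressor with a shared sparsity pattern across tasks. The sketch below uses scikit-learn's MultiTaskLasso on synthetic data; it illustrates only the regression setup, not the authors' SDAE/CNN feature pipeline or their specific MTL formulations.

    import numpy as np
    from sklearn.linear_model import MultiTaskLasso
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(0)
    X = rng.normal(size=(2400, 64))                  # computational features (synthetic stand-in)
    W = rng.normal(size=(64, 9))
    Y = X @ W + 0.1 * rng.normal(size=(2400, 9))     # 9 semantic ratings per nodule

    # All 9 tasks share one sparse set of selected features.
    model = MultiTaskLasso(alpha=0.1)
    Y_pred = cross_val_predict(model, X, Y, cv=10)
    rmse = np.sqrt(((Y - Y_pred) ** 2).mean(axis=0))
    print("per-feature RMSE:", np.round(rmse, 3))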
The Pan-STARRS data server and integrated data query tool
NASA Astrophysics Data System (ADS)
Guo, Jhen-Kuei; Chen, Wen-Ping; Lin, Chien-Cheng; Chen, Ying-Tung; Lin, Hsing-Wen
2013-06-01
The Pan-STARRS project is operated by an international consortium. Located on Haleakala, Hawaii, the Pan-STARRS telescope system patrols the entire visible sky several times a month, with the aim of identifying and characterizing celestial objects or phenomena that vary either in brightness (supernovae, novae, variable stars, etc.) or in position (comets, asteroids, near-Earth objects, Planet X, etc.). The PS1 science mission officially started in May 2010 and is expected to end at the end of 2013. As of early 2012, every patch of sky observable from Hawaii has been observed in at least 5 bands (g', r', i', z', y') for 5 to 40 epochs. We have set up a data repository at NCU to serve the users in Taiwan. The massive amounts of Pan-STARRS data are downloaded via the Internet from the Institute for Astronomy, University of Hawaii whenever new observations are obtained and processed. So far we have stored a total of 200 TB worth of data. In addition to star/galaxy catalogs, a postage stamp server provides access to FITS images. The Pan-STARRS Published Science Products Subsystem (PSPS) has recently passed its operational readiness review and allows users to query individual PS1 measurements. Here we present the data query tool to interface with the PS1 catalogs and postage stamp images, together with other complementary databases such as 2MASS and other data at IRSA (NASA/IPAC Infrared Science Archive).
BEaST: brain extraction based on nonlocal segmentation technique.
Eskildsen, Simon F; Coupé, Pierrick; Fonov, Vladimir; Manjón, José V; Leung, Kelvin K; Guizard, Nicolas; Wassef, Shafik N; Østergaard, Lasse Riis; Collins, D Louis
2012-02-01
Brain extraction is an important step in the analysis of brain images. The variability in brain morphology and the difference in intensity characteristics due to imaging sequences make the development of a general purpose brain extraction algorithm challenging. To address this issue, we propose a new robust method (BEaST) dedicated to producing consistent and accurate brain extraction. This method is based on nonlocal segmentation embedded in a multi-resolution framework. A library of 80 priors is semi-automatically constructed from the NIH-sponsored MRI study of normal brain development, the International Consortium for Brain Mapping, and the Alzheimer's Disease Neuroimaging Initiative databases. In testing, a mean Dice similarity coefficient of 0.9834±0.0053 was obtained when performing leave-one-out cross validation selecting only 20 priors from the library. Validation using the online Segmentation Validation Engine resulted in a top ranking position with a mean Dice coefficient of 0.9781±0.0047. Robustness of BEaST is demonstrated on all baseline ADNI data, resulting in a very low failure rate. The segmentation accuracy of the method is better than that of two widely used publicly available methods and recent state-of-the-art hybrid approaches. BEaST provides results comparable to a recent label fusion approach, while being 40 times faster and requiring a much smaller library of priors. Copyright © 2011 Elsevier Inc. All rights reserved.
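The Dice similarity coefficient reported above compares a candidate brain mask with a reference mask. A minimal NumPy version is shown below, assuming both masks are boolean arrays of identical shape; the example masks are synthetic.

    import numpy as np

    def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
        """Dice = 2 * |A intersect B| / (|A| + |B|); 1.0 means perfect overlap."""
        a = mask_a.astype(bool)
        b = mask_b.astype(bool)
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

    # Example: two nearly identical synthetic 3D masks
    ref = np.zeros((64, 64, 64), dtype=bool); ref[16:48, 16:48, 16:48] = True
    est = np.zeros_like(ref); est[17:48, 16:48, 16:48] = True
    print(round(dice_coefficient(ref, est), 4))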
Biotechnological potential of microbial consortia and future perspectives.
Bhatia, Shashi Kant; Bhatia, Ravi Kant; Choi, Yong-Keun; Kan, Eunsung; Kim, Yun-Gon; Yang, Yung-Hun
2018-05-15
Design of a microbial consortium is a newly emerging field that enables researchers to extend the frontiers of biotechnology from a pure culture to mixed cultures. A microbial consortium enables microbes to use a broad range of carbon sources. It provides microbes with robustness in response to environmental stress factors. Microbes in a consortium can perform complex functions that are impossible for a single organism. With advancement of technology, it is now possible to understand microbial interaction mechanism and construct consortia. Microbial consortia can be classified in terms of their construction, modes of interaction, and functions. Here we discuss different trends in the study of microbial functions and interactions, including single-cell genomics (SCG), microfluidics, fluorescent imaging, and membrane separation. Community profile studies using polymerase chain-reaction denaturing gradient gel electrophoresis (PCR-DGGE), amplified ribosomal DNA restriction analysis (ARDRA), and terminal restriction fragment-length polymorphism (T-RFLP) are also reviewed. We also provide a few examples of their possible applications in areas of biopolymers, bioenergy, biochemicals, and bioremediation.
Method for the reduction of image content redundancy in large image databases
Tobin, Kenneth William; Karnowski, Thomas P.
2010-03-02
A method of increasing information content for content-based image retrieval (CBIR) systems includes the steps of providing a CBIR database, the database having an index for a plurality of stored digital images using a plurality of feature vectors, the feature vectors corresponding to distinct descriptive characteristics of the images. A visual similarity parameter value is calculated based on a degree of visual similarity between feature vectors of an incoming image being considered for entry into the database and feature vectors associated with a most similar of the stored images. Based on this visual similarity parameter value, it is determined whether to store, and for how long to store, the feature vectors associated with the incoming image in the database.
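The store/expire decision described above hinges on a similarity score between the incoming image's feature vector and its nearest stored neighbor. The sketch below uses cosine similarity with a hypothetical threshold, since the record does not fix a particular similarity measure; it is a generic illustration of the redundancy check.

    import numpy as np

    def cosine_similarity(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    def should_store(incoming, stored_vectors, threshold=0.95):
        """Return (store?, best_similarity); skip storage when a near-duplicate already exists."""
        if not stored_vectors:
            return True, 0.0
        best = max(cosine_similarity(incoming, s) for s in stored_vectors)
        return best < threshold, best

    database = [np.random.rand(128) for _ in range(1000)]   # existing feature vectors
    new_image_features = np.random.rand(128)                # incoming image's feature vector
    print(should_store(new_image_features, database))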
DOE Office of Scientific and Technical Information (OSTI.GOV)
Setio, Arnaud A. A., E-mail: arnaud.arindraadiyoso@radboudumc.nl; Jacobs, Colin; Gelderblom, Jaap
Purpose: Current computer-aided detection (CAD) systems for pulmonary nodules in computed tomography (CT) scans have a good performance for relatively small nodules, but often fail to detect the much rarer larger nodules, which are more likely to be cancerous. We present a novel CAD system specifically designed to detect solid nodules larger than 10 mm. Methods: The proposed detection pipeline is initiated by a three-dimensional lung segmentation algorithm optimized to include large nodules attached to the pleural wall via morphological processing. An additional preprocessing step is used to mask out structures outside the pleural space to ensure that pleural and parenchymal nodules have a similar appearance. Next, nodule candidates are obtained via a multistage process of thresholding and morphological operations, to detect both larger and smaller candidates. After segmenting each candidate, a set of 24 features based on intensity, shape, blobness, and spatial context are computed. A radial basis support vector machine (SVM) classifier was used to classify nodule candidates, and performance was evaluated using ten-fold cross-validation on the full publicly available Lung Image Database Consortium database. Results: The proposed CAD system reaches a sensitivity of 98.3% (234/238) and 94.1% (224/238) for large nodules at an average of 4.0 and 1.0 false positives/scan, respectively. Conclusions: The authors conclude that the proposed dedicated CAD system for large pulmonary nodules can identify the vast majority of highly suspicious lesions in thoracic CT scans with a small number of false positives.
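The classification stage described above (a radial-basis SVM over 24 candidate features, evaluated with ten-fold cross-validation) can be sketched with scikit-learn as follows. The feature matrix here is synthetic and the hyperparameters are illustrative defaults, not those tuned by the authors.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    X = rng.normal(size=(5000, 24))     # 24 intensity/shape/blobness/context features per candidate
    y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=5000) > 1.5).astype(int)

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
    scores = cross_val_score(clf, X, y, cv=10, scoring="roc_auc")
    print("10-fold AUC: %.3f +/- %.3f" % (scores.mean(), scores.std()))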
Digital Dental X-ray Database for Caries Screening
NASA Astrophysics Data System (ADS)
Rad, Abdolvahab Ehsani; Rahim, Mohd Shafry Mohd; Rehman, Amjad; Saba, Tanzila
2016-06-01
A standard database is an essential requirement for comparing the performance of image analysis techniques. Hence, a main issue in dental image analysis is the lack of an available image database, which is provided in this paper. Periapical dental X-ray images, which are suitable for any analysis and approved by many dental experts, were collected. This type of dental radiographic imaging is common and inexpensive, and is normally used for dental disease diagnosis and abnormality detection. The database contains 120 periapical X-ray images covering the upper and lower jaws. The digital dental database was constructed to provide a source for researchers to use in comparing image analysis techniques and improving the performance of each technique.
Building An Integrated Neurodegenerative Disease Database At An Academic Health Center
Xie, Sharon X.; Baek, Young; Grossman, Murray; Arnold, Steven E.; Karlawish, Jason; Siderowf, Andrew; Hurtig, Howard; Elman, Lauren; McCluskey, Leo; Van Deerlin, Vivianna; Lee, Virginia M.-Y.; Trojanowski, John Q.
2010-01-01
Background It is becoming increasingly important to study common and distinct etiologies, clinical and pathological features, and mechanisms related to neurodegenerative diseases such as Alzheimer’s disease (AD), Parkinson’s disease (PD), amyotrophic lateral sclerosis (ALS), and frontotemporal lobar degeneration (FTLD). These comparative studies rely on powerful database tools to quickly generate data sets which match diverse and complementary criteria set by the studies. Methods In this paper, we present a novel Integrated NeuroDegenerative Disease (INDD) database developed at the University of Pennsylvania (Penn) through a consortium of Penn investigators. Since these investigators work on AD, PD, ALS and FTLD, this allowed us to achieve the goal of developing an INDD database for these major neurodegenerative disorders. We used Microsoft SQL Server as the platform with built-in “backwards” functionality to provide Access as a front-end client to interface with the database. We used PHP Hypertext Preprocessor to create the “front end” web interface and then integrated individual neurodegenerative disease databases using a master lookup table. We also present methods of data entry, database security, database backups, and database audit trails for this INDD database. Results We compare the results of a biomarker study using the INDD database to those using an alternative approach by querying individual databases separately. Conclusions We have demonstrated that the Penn INDD database has the ability to query multiple database tables from a single console with high accuracy and reliability. The INDD database provides a powerful tool for generating data sets in comparative studies across several neurodegenerative diseases. PMID:21784346
Partitioning medical image databases for content-based queries on a Grid.
Montagnat, J; Breton, V; E Magnin, I
2005-01-01
In this paper we study the impact of executing a medical image database query application on the grid. For lowering the total computation time, the image database is partitioned into subsets to be processed on different grid nodes. A theoretical model of the application complexity and estimates of the grid execution overhead are used to efficiently partition the database. We show results demonstrating that smart partitioning of the database can lead to significant improvements in terms of total computation time. Grids are promising for content-based image retrieval in medical databases.
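The partitioning trade-off described above can be captured by a very simple cost model: the makespan is the cost of submitting jobs plus the grid latency plus the processing time of the largest subset. The toy function below, with assumed constants, shows how an optimal number of partitions could be picked; it is not the authors' actual complexity model.

    import math

    def makespan(n_images, n_nodes, t_image=2.0, t_submit=1.0, t_latency=30.0):
        """Sequential job submission + queue latency + parallel per-node processing (seconds)."""
        per_node = math.ceil(n_images / n_nodes)
        return n_nodes * t_submit + t_latency + per_node * t_image

    n_images = 10000
    best = min(range(1, 257), key=lambda k: makespan(n_images, k))
    print("best number of nodes:", best, "makespan:", makespan(n_images, best), "s")

With these assumed constants the optimum lies near sqrt(n_images * t_image / t_submit), i.e., more nodes stop paying off once the per-node submission overhead dominates the shrinking per-node workload.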
Kingfisher: a system for remote sensing image database management
NASA Astrophysics Data System (ADS)
Bruzzo, Michele; Giordano, Ferdinando; Dellepiane, Silvana G.
2003-04-01
At present retrieval methods in remote sensing image database are mainly based on spatial-temporal information. The increasing amount of images to be collected by the ground station of earth observing systems emphasizes the need for database management with intelligent data retrieval capabilities. The purpose of the proposed method is to realize a new content based retrieval system for remote sensing images database with an innovative search tool based on image similarity. This methodology is quite innovative for this application, at present many systems exist for photographic images, as for example QBIC and IKONA, but they are not able to extract and describe properly remote image content. The target database is set by an archive of images originated from an X-SAR sensor (spaceborne mission, 1994). The best content descriptors, mainly texture parameters, guarantees high retrieval performances and can be extracted without losses independently of image resolution. The latter property allows DBMS (Database Management System) to process low amount of information, as in the case of quick-look images, improving time performance and memory access without reducing retrieval accuracy. The matching technique has been designed to enable image management (database population and retrieval) independently of dimensions (width and height). Local and global content descriptors are compared, during retrieval phase, with the query image and results seem to be very encouraging.
Rudnick, Paul A; Markey, Sanford P; Roth, Jeri; Mirokhin, Yuri; Yan, Xinjian; Tchekhovskoi, Dmitrii V; Edwards, Nathan J; Thangudu, Ratna R; Ketchum, Karen A; Kinsinger, Christopher R; Mesri, Mehdi; Rodriguez, Henry; Stein, Stephen E
2016-03-04
The Clinical Proteomic Tumor Analysis Consortium (CPTAC) has produced large proteomics data sets from the mass spectrometric interrogation of tumor samples previously analyzed by The Cancer Genome Atlas (TCGA) program. The availability of the genomic and proteomic data is enabling proteogenomic study for both reference (i.e., contained in major sequence databases) and nonreference markers of cancer. The CPTAC laboratories have focused on colon, breast, and ovarian tissues in the first round of analyses; spectra from these data sets were produced from 2D liquid chromatography-tandem mass spectrometry analyses and represent deep coverage. To reduce the variability introduced by disparate data analysis platforms (e.g., software packages, versions, parameters, sequence databases, etc.), the CPTAC Common Data Analysis Platform (CDAP) was created. The CDAP produces both peptide-spectrum-match (PSM) reports and gene-level reports. The pipeline processes raw mass spectrometry data according to the following: (1) peak-picking and quantitative data extraction, (2) database searching, (3) gene-based protein parsimony, and (4) false-discovery rate-based filtering. The pipeline also produces localization scores for the phosphopeptide enrichment studies using the PhosphoRS program. Quantitative information for each of the data sets is specific to the sample processing, with PSM and protein reports containing the spectrum-level or gene-level ("rolled-up") precursor peak areas and spectral counts for label-free or reporter ion log-ratios for 4plex iTRAQ. The reports are available in simple tab-delimited formats and, for the PSM-reports, in mzIdentML. The goal of the CDAP is to provide standard, uniform reports for all of the CPTAC data to enable comparisons between different samples and cancer types as well as across the major omics fields.
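Step (4) of the pipeline, false-discovery-rate filtering, is commonly implemented with the target-decoy approach (estimated FDR = decoys / targets above a score threshold). The sketch below shows that generic computation as an assumption about the general technique, not a description of the CDAP's exact implementation.

    def filter_psms_by_fdr(psms, fdr_threshold=0.01):
        """psms: list of (score, is_decoy). Keep target PSMs while the running FDR stays below the threshold.
        Simplified: stops at the first score where the estimated FDR exceeds the threshold."""
        ranked = sorted(psms, key=lambda p: p[0], reverse=True)
        accepted, targets, decoys = [], 0, 0
        for score, is_decoy in ranked:
            if is_decoy:
                decoys += 1
            else:
                targets += 1
            if decoys / max(targets, 1) > fdr_threshold:
                break
            if not is_decoy:
                accepted.append(score)
        return accepted

    example = [(98.2, False), (95.1, False), (92.3, False), (88.7, False), (70.2, True), (65.0, False)]
    print(len(filter_psms_by_fdr(example, fdr_threshold=0.05)))   # 4 target PSMs accepted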
Infrastructure resources for clinical research in amyotrophic lateral sclerosis.
Sherman, Alexander V; Gubitz, Amelie K; Al-Chalabi, Ammar; Bedlack, Richard; Berry, James; Conwit, Robin; Harris, Brent T; Horton, D Kevin; Kaufmann, Petra; Leitner, Melanie L; Miller, Robert; Shefner, Jeremy; Vonsattel, Jean Paul; Mitsumoto, Hiroshi
2013-05-01
Clinical trial networks, shared clinical databases, and human biospecimen repositories are examples of infrastructure resources aimed at enhancing and expediting clinical and/or patient oriented research to uncover the etiology and pathogenesis of amyotrophic lateral sclerosis (ALS), a rapidly progressive neurodegenerative disease that leads to the paralysis of voluntary muscles. The current status of such infrastructure resources, as well as opportunities and impediments, were discussed at the second Tarrytown ALS meeting held in September 2011. The discussion focused on resources developed and maintained by ALS clinics and centers in North America and Europe, various clinical trial networks, U.S. government federal agencies including the National Institutes of Health (NIH), the Agency for Toxic Substances and Disease Registry (ATSDR) and the Centers for Disease Control and Prevention (CDC), and several voluntary disease organizations that support ALS research activities. Key recommendations included 1) the establishment of shared databases among individual ALS clinics to enhance the coordination of resources and data analyses; 2) the expansion of quality-controlled human biospecimen banks; and 3) the adoption of uniform data standards, such as the recently developed Common Data Elements (CDEs) for ALS clinical research. The value of clinical trial networks such as the Northeast ALS (NEALS) Consortium and the Western ALS (WALS) Consortium was recognized, and strategies to further enhance and complement these networks and their research resources were discussed.
Automated processing of shoeprint images based on the Fourier transform for use in forensic science.
de Chazal, Philip; Flynn, John; Reilly, Richard B
2005-03-01
The development of a system for automatically sorting a database of shoeprint images based on the outsole pattern in response to a reference shoeprint image is presented. The database images are sorted so that those from the same pattern group as the reference shoeprint are likely to be at the start of the list. A database of 476 complete shoeprint images belonging to 140 pattern groups was established with each group containing two or more examples. A panel of human observers performed the grouping of the images into pattern categories. Tests of the system using the database showed that the first-ranked database image belongs to the same pattern category as the reference image 65 percent of the time and that a correct match appears within the first 5 percent of the sorted images 87 percent of the time. The system has translational and rotational invariance so that the spatial positioning of the reference shoeprint images does not have to correspond with the spatial positioning of the shoeprint images of the database. The performance of the system for matching partial-prints was also determined.
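Translation invariance of the kind described above is typically obtained by comparing Fourier magnitude spectra, which discard phase (and hence spatial shifts). The following NumPy sketch scores two binary shoeprint images this way; it is a generic illustration rather than the paper's exact descriptor.

    import numpy as np

    def magnitude_spectrum(image):
        spec = np.abs(np.fft.fft2(image))
        spec /= spec.sum()                      # normalize out global contrast
        return spec

    def spectral_similarity(img_a, img_b):
        a = magnitude_spectrum(img_a).ravel()
        b = magnitude_spectrum(img_b).ravel()
        a -= a.mean(); b -= b.mean()
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    base = np.zeros((128, 128)); base[40:90, 30:100:6] = 1.0       # striped outsole pattern
    shifted = np.roll(np.roll(base, 7, axis=0), -5, axis=1)        # same print, translated
    print(round(spectral_similarity(base, shifted), 4))            # close to 1.0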
Higaki, Takumi; Kutsuna, Natsumaro; Hasezawa, Seiichiro
2013-05-16
Intracellular configuration is an important feature of cell status. Recent advances in microscopic imaging techniques allow us to easily obtain a large number of microscopic images of intracellular structures. In this circumstance, automated microscopic image recognition techniques are of extreme importance to future phenomics/visible screening approaches. However, there was no benchmark microscopic image dataset for intracellular organelles in a specified plant cell type. We previously established the Live Images of Plant Stomata (LIPS) database, a publicly available collection of optical-section images of various intracellular structures of plant guard cells, as a model system of environmental signal perception and transduction. Here we report recent updates to the LIPS database and the establishment of a database table, LIPService. We updated the LIPS dataset and established a new interface named LIPService to promote efficient inspection of intracellular structure configurations. Cell nuclei, microtubules, actin microfilaments, mitochondria, chloroplasts, endoplasmic reticulum, peroxisomes, endosomes, Golgi bodies, and vacuoles can be filtered using probe names or morphometric parameters such as stomatal aperture. In addition to the serial optical sectional images of the original LIPS database, new volume-rendering data for easy web browsing of three-dimensional intracellular structures have been released to allow easy inspection of their configurations or relationships with cell status/morphology. We also demonstrated the utility of the new LIPS image database for automated organelle recognition of images from another plant cell image database with image clustering analyses. The updated LIPS database provides a benchmark image dataset for representative intracellular structures in Arabidopsis guard cells. The newly released LIPService allows users to inspect the relationship between organellar three-dimensional configurations and morphometrical parameters.
About CIB | Division of Cancer Prevention
The Consortium was created to improve cancer screening, early detection of aggressive cancer, assessment of cancer risk and cancer diagnosis, aimed at integrating multi-modality imaging strategies and multiplexed biomarker methodologies into a singular complementary approach. Investigators perform collaborative studies, exchange information, share knowledge and leverage common
MGH-USC Human Connectome Project datasets with ultra-high b-value diffusion MRI.
Fan, Qiuyun; Witzel, Thomas; Nummenmaa, Aapo; Van Dijk, Koene R A; Van Horn, John D; Drews, Michelle K; Somerville, Leah H; Sheridan, Margaret A; Santillana, Rosario M; Snyder, Jenna; Hedden, Trey; Shaw, Emily E; Hollinshead, Marisa O; Renvall, Ville; Zanzonico, Roberta; Keil, Boris; Cauley, Stephen; Polimeni, Jonathan R; Tisdall, Dylan; Buckner, Randy L; Wedeen, Van J; Wald, Lawrence L; Toga, Arthur W; Rosen, Bruce R
2016-01-01
The MGH-USC CONNECTOM MRI scanner housed at the Massachusetts General Hospital (MGH) is a major hardware innovation of the Human Connectome Project (HCP). The 3T CONNECTOM scanner is capable of producing a magnetic field gradient of up to 300 mT/m strength for in vivo human brain imaging, which greatly shortens the time spent on diffusion encoding, and decreases the signal loss due to T2 decay. To demonstrate the capability of the novel gradient system, data of healthy adult participants were acquired for this MGH-USC Adult Diffusion Dataset (N=35), minimally preprocessed, and shared through the Laboratory of Neuro Imaging Image Data Archive (LONI IDA) and the WU-Minn Connectome Database (ConnectomeDB). Another purpose of sharing the data is to facilitate methodological studies of diffusion MRI (dMRI) analyses utilizing high diffusion contrast, which perhaps is not easily feasible with standard MR gradient system. In addition, acquisition of the MGH-Harvard-USC Lifespan Dataset is currently underway to include 120 healthy participants ranging from 8 to 90 years old, which will also be shared through LONI IDA and ConnectomeDB. Here we describe the efforts of the MGH-USC HCP consortium in acquiring and sharing the ultra-high b-value diffusion MRI data and provide a report on data preprocessing and access. We conclude with a demonstration of the example data, along with results of standard diffusion analyses, including q-ball Orientation Distribution Function (ODF) reconstruction and tractography. Copyright © 2015 Elsevier Inc. All rights reserved.
Animal Detection in Natural Images: Effects of Color and Image Database
Zhu, Weina; Drewes, Jan; Gegenfurtner, Karl R.
2013-01-01
The visual system has a remarkable ability to extract categorical information from complex natural scenes. In order to elucidate the role of low-level image features for the recognition of objects in natural scenes, we recorded saccadic eye movements and event-related potentials (ERPs) in two experiments, in which human subjects had to detect animals in previously unseen natural images. We used a new natural image database (ANID) that is free of some of the potential artifacts that have plagued the widely used COREL images. Color and grayscale images picked from the ANID and COREL databases were used. In all experiments, color images induced a greater N1 EEG component at earlier time points than grayscale images. We suggest that this influence of color in animal detection may be masked by later processes when measuring reaction times. The ERP results of go/nogo and forced choice tasks were similar to those reported earlier. The non-animal stimuli induced a bigger N1 than animal stimuli in both the COREL and ANID databases. This result indicates that ultra-fast processing of animal images is possible irrespective of the particular database. With the ANID images, the difference between color and grayscale images is more pronounced than with the COREL images. The earlier use of the COREL images might have led to an underestimation of the contribution of color. Therefore, we conclude that the ANID image database is better suited for the investigation of the processing of natural scenes than other databases commonly used. PMID:24130744
NASA Astrophysics Data System (ADS)
Minnett, R.; Koppers, A.; Tauxe, L.; Constable, C.; Pisarevsky, S. A.; Jackson, M.; Solheid, P.; Banerjee, S.; Johnson, C.
2006-12-01
The Magnetics Information Consortium (MagIC) is commissioned to implement and maintain an online portal to a relational database populated by both rock and paleomagnetic data. The goal of MagIC is to archive all measurements and the derived properties for studies of paleomagnetic directions (inclination, declination) and intensities, and for rock magnetic experiments (hysteresis, remanence, susceptibility, anisotropy). MagIC is hosted under EarthRef.org at http://earthref.org/MAGIC/ and has two search nodes, one for paleomagnetism and one for rock magnetism. Both nodes provide query building based on location, reference, methods applied, material type and geological age, as well as a visual map interface to browse and select locations. The query result set is displayed in a digestible tabular format allowing the user to descend through hierarchical levels such as from locations to sites, samples, specimens, and measurements. At each stage, the result set can be saved and, if supported by the data, can be visualized by plotting global location maps, equal area plots, or typical Zijderveld, hysteresis, and various magnetization and remanence diagrams. User contributions to the MagIC database are critical to achieving a useful research tool. We have developed a standard data and metadata template (Version 2.1) that can be used to format and upload all data at the time of publication in Earth Science journals. Software tools are provided to facilitate population of these templates within Microsoft Excel. These tools allow for the import/export of text files and provide advanced functionality to manage and edit the data, and to perform various internal checks to maintain data integrity and prepare for uploading. The MagIC Contribution Wizard at http://earthref.org/MAGIC/upload.htm executes the upload and takes only a few minutes to process several thousand data records. The standardized MagIC template files are stored in the digital archives of EarthRef.org where they remain available for download by the public (in both text and Excel format). Finally, the contents of these template files are automatically parsed into the online relational database, making the data available for online searches in the paleomagnetic and rock magnetic search nodes. The MagIC database contains all data transferred from the IAGA paleomagnetic poles database (GPMDB), the lava flow paleosecular variation database (PSVRL), lake sediment database (SECVR) and the PINT database. Additionally, a substantial number of data compiled under the Time Averaged Field Investigations project is now included plus a significant fraction of the data collected at SIO and the IRM. Ongoing additions of legacy data include over 40 papers from studies on the Hawaiian Islands and Mexico, data compilations from archeomagnetic studies and updates to the lake sediment dataset.
Key features for ATA / ATR database design in missile systems
NASA Astrophysics Data System (ADS)
Özertem, Kemal Arda
2017-05-01
Automatic target acquisition (ATA) and automatic target recognition (ATR) are two vital tasks for missile systems, and having a robust detection and recognition algorithm is crucial for overall system performance. In order to have a robust target detection and recognition algorithm, an extensive image database is required. Automatic target recognition algorithms use the database of images in training and testing steps of algorithm. This directly affects the recognition performance, since the training accuracy is driven by the quality of the image database. In addition, the performance of an automatic target detection algorithm can be measured effectively by using an image database. There are two main ways for designing an ATA / ATR database. The first and easy way is by using a scene generator. A scene generator can model the objects by considering its material information, the atmospheric conditions, detector type and the territory. Designing image database by using a scene generator is inexpensive and it allows creating many different scenarios quickly and easily. However the major drawback of using a scene generator is its low fidelity, since the images are created virtually. The second and difficult way is designing it using real-world images. Designing image database with real-world images is a lot more costly and time consuming; however it offers high fidelity, which is critical for missile algorithms. In this paper, critical concepts in ATA / ATR database design with real-world images are discussed. Each concept is discussed in the perspective of ATA and ATR separately. For the implementation stage, some possible solutions and trade-offs for creating the database are proposed, and all proposed approaches are compared to each other with regards to their pros and cons.
Wavelet optimization for content-based image retrieval in medical databases.
Quellec, G; Lamard, M; Cazuguel, G; Cochener, B; Roux, C
2010-04-01
We propose in this article a content-based image retrieval (CBIR) method for diagnosis aid in medical fields. In the proposed system, images are indexed in a generic fashion, without extracting domain-specific features: a signature is built for each image from its wavelet transform. These image signatures characterize the distribution of wavelet coefficients in each subband of the decomposition. A distance measure is then defined to compare two image signatures and thus retrieve the most similar images in a database when a query image is submitted by a physician. To retrieve relevant images from a medical database, the signatures and the distance measure must be related to the medical interpretation of images. As a consequence, we introduce several degrees of freedom in the system so that it can be tuned to any pathology and image modality. In particular, we propose to adapt the wavelet basis, within the lifting scheme framework, and to use a custom decomposition scheme. Weights are also introduced between subbands. All these parameters are tuned by an optimization procedure, using the medical grading of each image in the database to define a performance measure. The system is assessed on two medical image databases: one for diabetic retinopathy follow up and one for screening mammography, as well as a general purpose database. Results are promising: a mean precision of 56.50%, 70.91% and 96.10% is achieved for these three databases, when five images are returned by the system. Copyright 2009 Elsevier B.V. All rights reserved.
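A generic, non-optimized version of the wavelet-signature idea can be written with PyWavelets: each image is decomposed, the mean absolute coefficient of each subband forms the signature, and a weighted distance compares signatures. The wavelet, level, and weights below are illustrative defaults, whereas the paper tunes them per database with a lifting-scheme optimization.

    import numpy as np
    import pywt

    def wavelet_signature(image, wavelet="db2", level=3):
        coeffs = pywt.wavedec2(image, wavelet, level=level)
        sig = [np.mean(np.abs(coeffs[0]))]                       # approximation subband
        for detail_level in coeffs[1:]:
            sig.extend(np.mean(np.abs(band)) for band in detail_level)
        return np.array(sig)

    def signature_distance(sig_a, sig_b, weights=None):
        w = np.ones_like(sig_a) if weights is None else weights  # per-subband weights
        return float(np.sum(w * np.abs(sig_a - sig_b)))

    query = np.random.rand(256, 256)        # stand-in for a query image
    candidate = np.random.rand(256, 256)    # stand-in for a database image
    print(signature_distance(wavelet_signature(query), wavelet_signature(candidate)))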
Jahanshad, Neda; Kochunov, Peter; Sprooten, Emma; Mandl, René C.; Nichols, Thomas E.; Almassy, Laura; Blangero, John; Brouwer, Rachel M.; Curran, Joanne E.; de Zubicaray, Greig I.; Duggirala, Ravi; Fox, Peter T.; Hong, L. Elliot; Landman, Bennett A.; Martin, Nicholas G.; McMahon, Katie L.; Medland, Sarah E.; Mitchell, Braxton D.; Olvera, Rene L.; Peterson, Charles P.; Starr, John M.; Sussmann, Jessika E.; Toga, Arthur W.; Wardlaw, Joanna M.; Wright, Margaret J.; Hulshoff Pol, Hilleke E.; Bastin, Mark E.; McIntosh, Andrew M.; Deary, Ian J.; Thompson, Paul M.; Glahn, David C.
2013-01-01
The ENIGMA (Enhancing NeuroImaging Genetics through Meta-Analysis) Consortium was set up to analyze brain measures and genotypes from multiple sites across the world to improve the power to detect genetic variants that influence the brain. Diffusion tensor imaging (DTI) yields quantitative measures sensitive to brain development and degeneration, and some common genetic variants may be associated with white matter integrity or connectivity. DTI measures, such as the fractional anisotropy (FA) of water diffusion, may be useful for identifying genetic variants that influence brain microstructure. However, genome-wide association studies (GWAS) require large populations to obtain sufficient power to detect and replicate significant effects, motivating a multi-site consortium effort. As part of an ENIGMA–DTI working group, we analyzed high-resolution FA images from multiple imaging sites across North America, Australia, and Europe, to address the challenge of harmonizing imaging data collected at multiple sites. Four hundred images of healthy adults aged 18–85 from four sites were used to create a template and corresponding skeletonized FA image as a common reference space. Using twin and pedigree samples of different ethnicities, we used our common template to evaluate the heritability of tract-derived FA measures. We show that our template is reliable for integrating multiple datasets by combining results through meta-analysis and unifying the data through exploratory mega-analyses. Our results may help prioritize regions of the FA map that are consistently influenced by additive genetic factors for future genetic discovery studies. Protocols and templates are publicly available at (http://enigma.loni.ucla.edu/ongoing/dti-working-group/). PMID:23629049
The comparative effectiveness of conventional and digital image libraries.
McColl, R I; Johnson, A
2001-03-01
Before introducing a hospital-wide image database to improve access, navigation and retrieval speed, a comparative study between a conventional slide library and a matching image database was undertaken to assess its relative benefits. Paired time trials and personal questionnaires revealed faster retrieval rates, higher image quality, and easier viewing for the pilot digital image database. Analysis of confidentiality, copyright and data protection exposed similar issues for both systems, thus concluding that the digital image database is a more effective library system. The authors suggest that in the future, medical images will be stored on large, professionally administered, centrally located file servers, allowing specialist image libraries to be tailored locally for individual users. The further integration of the database with web technology will enable cheap and efficient remote access for a wide range of users.
Public Relations in Special Libraries.
ERIC Educational Resources Information Center
Rutkowski, Hollace Ann; And Others
1991-01-01
This theme issue includes 11 articles on public relations (PR) in special libraries. Highlights include PR at the Special Libraries Association (SLA); sources for marketing research for libraries; developing a library image; sample PR releases; brand strategies for libraries; case studies; publicizing a consortium; and a bibliography of pertinent…
Renard, Bernhard Y.; Xu, Buote; Kirchner, Marc; Zickmann, Franziska; Winter, Dominic; Korten, Simone; Brattig, Norbert W.; Tzur, Amit; Hamprecht, Fred A.; Steen, Hanno
2012-01-01
Currently, the reliable identification of peptides and proteins is only feasible when thoroughly annotated sequence databases are available. Although sequencing capacities continue to grow, many organisms remain without reliable, fully annotated reference genomes required for proteomic analyses. Standard database search algorithms fail to identify peptides that are not exactly contained in a protein database. De novo searches are generally hindered by their restricted reliability, and current error-tolerant search strategies are limited by global, heuristic tradeoffs between database and spectral information. We propose a Bayesian information criterion-driven error-tolerant peptide search (BICEPS) and offer an open source implementation based on this statistical criterion to automatically balance the information of each single spectrum and the database, while limiting the run time. We show that BICEPS performs as well as current database search algorithms when such algorithms are applied to sequenced organisms, whereas BICEPS only uses a remotely related organism database. For instance, we use a chicken instead of a human database corresponding to an evolutionary distance of more than 300 million years (International Chicken Genome Sequencing Consortium (2004) Sequence and comparative analysis of the chicken genome provide unique perspectives on vertebrate evolution. Nature 432, 695–716). We demonstrate the successful application to cross-species proteomics with a 33% increase in the number of identified proteins for a filarial nematode sample of Litomosoides sigmodontis. PMID:22493179
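The Bayesian information criterion that drives BICEPS trades model fit against model complexity; in its generic form BIC = k·ln(n) − 2·ln(L). The small sketch below scores two hypothetical peptide explanations of a spectrum with this generic formula, purely to illustrate how extra sequence modifications are penalized; it is not the BICEPS scoring function itself.

    import math

    def bic(log_likelihood, n_parameters, n_observations):
        """Generic Bayesian information criterion: lower is better."""
        return n_parameters * math.log(n_observations) - 2.0 * log_likelihood

    n_peaks = 120   # observations: matched spectrum peaks (hypothetical)
    plain_match = bic(log_likelihood=-310.0, n_parameters=3, n_observations=n_peaks)
    with_two_mods = bic(log_likelihood=-302.0, n_parameters=7, n_observations=n_peaks)
    print("plain:", round(plain_match, 1), " with modifications:", round(with_two_mods, 1))
    # The more complex explanation wins only if its fit gain outweighs the ln(n) penalty.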
DOE Office of Scientific and Technical Information (OSTI.GOV)
Courteau, J.
1991-10-11
Since the Genome Project began several years ago, a plethora of databases have been developed or are in the works. They range from the massive Genome Data Base at Johns Hopkins University, the central repository of all gene mapping information, to small databases focusing on single chromosomes or organisms. Some are publicly available, others are essentially private electronic lab notebooks. Still others limit access to a consortium of researchers working on, say, a single human chromosome. An increasing number incorporate sophisticated search and analytical software, while others operate as little more than data lists. In consultation with numerous experts in the field, a list has been compiled of some key genome-related databases. The list was not limited to map and sequence databases but also included the tools investigators use to interpret and elucidate genetic data, such as protein sequence and protein structure databases. Because a major goal of the Genome Project is to map and sequence the genomes of several experimental animals, including E. coli, yeast, fruit fly, nematode, and mouse, the available databases for those organisms are listed as well. The author also includes several databases that are still under development - including some ambitious efforts that go beyond data compilation to create what are being called electronic research communities, enabling many users, rather than just one or a few curators, to add or edit the data and tag it as raw or confirmed.
A User's Applications of Imaging Techniques: The University of Maryland Historic Textile Database.
ERIC Educational Resources Information Center
Anderson, Clarita S.
1991-01-01
Describes the incorporation of textile images into the University of Maryland Historic Textile Database by a computer user rather than a computer expert. Selection of a database management system is discussed, and PICTUREPOWER, a system that integrates photographic quality images with text and numeric information in databases, is described. (three…
Comparison of computer versus manual determination of pulmonary nodule volumes in CT scans
NASA Astrophysics Data System (ADS)
Biancardi, Alberto M.; Reeves, Anthony P.; Jirapatnakul, Artit C.; Apanasovitch, Tatiyana; Yankelevitz, David; Henschke, Claudia I.
2008-03-01
Accurate nodule volume estimation is necessary in order to estimate the clinically relevant growth rate or change in size over time. An automated nodule volume-measuring algorithm was applied to a set of pulmonary nodules that were documented by the Lung Image Database Consortium (LIDC). The LIDC process model specifies that each scan is assessed by four experienced thoracic radiologists and that boundaries are to be marked around the visible extent of the nodules for nodules 3 mm and larger. Nodules were selected from the LIDC database with the following inclusion criteria: (a) they must have a solid component on a minimum of three CT image slices and (b) they must be marked by all four LIDC radiologists. A total of 113 nodules met the selection criterion with diameters ranging from 3.59 mm to 32.68 mm (mean 9.37 mm, median 7.67 mm). The centroid of each marked nodule was used as the seed point for the automated algorithm. 95 nodules (84.1%) were correctly segmented, but one was considered not meeting the first selection criterion by the automated method; for the remaining ones, eight (7.1%) were structurally too complex or extensively attached and 10 (8.8%) were considered not properly segmented after a simple visual inspection by a radiologist. Since the LIDC specifications, as aforementioned, instruct radiologists to include both solid and sub-solid parts, the automated method core capability of segmenting solid tissues was augmented to take into account also the nodule sub-solid parts. We ranked the distances of the automated method estimates and the radiologist-based estimates from the median of the radiologist-based values. The automated method was in 76.6% of the cases closer to the median than at least one of the values derived from the manual markings, which is a sign of a very good agreement with the radiologists' markings.
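The agreement statistic quoted above (the automated estimate being closer to the median of the radiologist-based values than at least one manual value) is easy to express directly. The sketch below assumes each nodule is summarized by four manual volume estimates and one automated estimate, all in the same units; the numbers are hypothetical.

    import numpy as np

    def closer_than_some_reader(manual_volumes, automated_volume):
        """True if the automated estimate is nearer the manual median than at least one manual value."""
        manual = np.asarray(manual_volumes, dtype=float)
        median = np.median(manual)
        return abs(automated_volume - median) < np.max(np.abs(manual - median))

    # One hypothetical nodule: four radiologist-derived volumes and an automated volume (mm^3)
    print(closer_than_some_reader([310.0, 356.0, 402.0, 298.0], 348.0))   # True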
2013-09-30
profiles of right whales Eubalaena glacialis from the North Atlantic Right Whale Consortium; 2) DNA profiles of sperm whales Physeter macrocephalus...of other cetacean databases in Wildbook format (e.g., North Atlantic right whales, sperm whales and Hector’s dolphins); 8) Supported continuing...of sperm whales, using samples collected during the 5-year Voyage of the Odyssey; and 3) DNA profiles of Hector’s dolphins from Cloudy Bay, New
CFD Aerothermodynamic Characterization Of The IXV Hypersonic Vehicle
NASA Astrophysics Data System (ADS)
Roncioni, P.; Ranuzzi, G.; Marini, M.; Battista, F.; Rufolo, G. C.
2011-05-01
In this paper, and in the framework of the ESA technical assistance activities for the IXV project, the numerical activities carried out by ASI/CIRA to support the development of Aerodynamic and Aerothermodynamic databases, independent of the ones developed by the IXV Industrial consortium, are reported. A general characterization of the IXV aerothermodynamic environment has also been provided for cross-checking and verification purposes. The work deals with the first-year activities of the Technical Assistance Contract agreed between the Italian Space Agency/CIRA and ESA.
Normative Databases for Imaging Instrumentation.
Realini, Tony; Zangwill, Linda M; Flanagan, John G; Garway-Heath, David; Patella, Vincent M; Johnson, Chris A; Artes, Paul H; Gaddie, Ian B; Fingeret, Murray
2015-08-01
To describe the process by which imaging devices undergo reference database development and regulatory clearance. The limitations and potential improvements of reference (normative) data sets for ophthalmic imaging devices will be discussed. A symposium was held in July 2013 in which a series of speakers discussed issues related to the development of reference databases for imaging devices. Automated imaging has become widely accepted and used in glaucoma management. The ability of such instruments to discriminate healthy from glaucomatous optic nerves, and to detect glaucomatous progression over time is limited by the quality of reference databases associated with the available commercial devices. In the absence of standardized rules governing the development of reference databases, each manufacturer's database differs in size, eligibility criteria, and ethnic make-up, among other key features. The process for development of imaging reference databases may be improved by standardizing eligibility requirements and data collection protocols. Such standardization may also improve the degree to which results may be compared between commercial instruments.
Building an integrated neurodegenerative disease database at an academic health center.
Xie, Sharon X; Baek, Young; Grossman, Murray; Arnold, Steven E; Karlawish, Jason; Siderowf, Andrew; Hurtig, Howard; Elman, Lauren; McCluskey, Leo; Van Deerlin, Vivianna; Lee, Virginia M-Y; Trojanowski, John Q
2011-07-01
It is becoming increasingly important to study common and distinct etiologies, clinical and pathological features, and mechanisms related to neurodegenerative diseases such as Alzheimer's disease, Parkinson's disease, amyotrophic lateral sclerosis, and frontotemporal lobar degeneration. These comparative studies rely on powerful database tools to quickly generate data sets that match diverse and complementary criteria set by the studies. In this article, we present a novel integrated neurodegenerative disease (INDD) database, which was developed at the University of Pennsylvania (Penn) with the help of a consortium of Penn investigators. Because the work of these investigators is based on Alzheimer's disease, Parkinson's disease, amyotrophic lateral sclerosis, and frontotemporal lobar degeneration, it allowed us to achieve the goal of developing an INDD database for these major neurodegenerative disorders. We used the Microsoft SQL server as a platform, with built-in "backwards" functionality to provide Access as a frontend client to interface with the database. We used PHP Hypertext Preprocessor to create the "frontend" web interface and then used a master lookup table to integrate individual neurodegenerative disease databases. We also present methods of data entry, database security, database backups, and database audit trails for this INDD database. Using the INDD database, we compared the results of a biomarker study with those using an alternative approach by querying individual databases separately. We have demonstrated that the Penn INDD database has the ability to query multiple database tables from a single console with high accuracy and reliability. The INDD database provides a powerful tool for generating data sets in comparative studies on several neurodegenerative diseases. Copyright © 2011 The Alzheimer's Association. Published by Elsevier Inc. All rights reserved.
FANTOM5 CAGE profiles of human and mouse reprocessed for GRCh38 and GRCm38 genome assemblies.
Abugessaisa, Imad; Noguchi, Shuhei; Hasegawa, Akira; Harshbarger, Jayson; Kondo, Atsushi; Lizio, Marina; Severin, Jessica; Carninci, Piero; Kawaji, Hideya; Kasukawa, Takeya
2017-08-29
The FANTOM5 consortium described the promoter-level expression atlas of human and mouse by using CAGE (Cap Analysis of Gene Expression) with single molecule sequencing. In the original publications, the GRCh37/hg19 and NCBI37/mm9 assemblies were used as the reference genomes of human and mouse, respectively; later, the Genome Reference Consortium released newer genome assemblies GRCh38/hg38 and GRCm38/mm10. To increase the utility of the atlas in forthcoming research, we reprocessed the data to make them available on the recent genome assemblies. The data include observed frequencies of transcription start sites (TSSs) based on the realignment of CAGE reads, and TSS peaks that are converted from those based on the previous reference. Annotations of the peak names were also updated based on the latest public databases. The reprocessed results enable us to examine frequencies of transcription initiation on the recent genome assemblies and to refer to promoters with updated information consistently across the genome assemblies.
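Moving annotations between assemblies generally means either realigning the raw reads (as done here) or lifting coordinates over with chain files. For a single TSS position, a lift-over can be sketched with the pyliftover package as below, assuming the UCSC hg19-to-hg38 chain file is available for download; the position shown is arbitrary.

    from pyliftover import LiftOver

    # Fetches/uses the UCSC hg19->hg38 chain file.
    lo = LiftOver("hg19", "hg38")

    # Convert one hypothetical TSS position (0-based) on chromosome 1.
    hits = lo.convert_coordinate("chr1", 1_000_000, "+")
    if hits:
        chrom, pos, strand, _score = hits[0]
        print(chrom, pos, strand)
    else:
        print("position is not mappable to GRCh38")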
Treu, Laura; Kougias, Panagiotis G; Campanaro, Stefano; Bassani, Ilaria; Angelidaki, Irini
2016-09-01
This research aimed to better characterize the biogas microbiome by means of high-throughput metagenomic sequencing and to elucidate the core microbial consortium existing in biogas reactors independently of the operational conditions. Assembly of shotgun reads followed by an established binning strategy resulted in the largest extraction to date of microbial genomes involved in biogas-producing systems. From the 236 extracted genome bins, it was remarkable that the vast majority of them could only be characterized at high taxonomic levels. This result confirms that the biogas microbiome is composed of a consortium of unknown species. A comparative analysis between the genome bins of the current study and those extracted from a previous metagenomic assembly demonstrated a similar phylogenetic distribution of the main taxa. Finally, this analysis led to the identification of a subset of common microbes that could be considered as the core essential group in biogas production. Copyright © 2016 Elsevier Ltd. All rights reserved.
Jensen, Chad D; Duraccio, Kara M; Barnett, Kimberly A; Stevens, Kimberly S
2016-12-01
Research examining effects of visual food cues on appetite-related brain processes and eating behavior has proliferated. Recently investigators have developed food image databases for use across experimental studies examining appetite and eating behavior. The food-pics image database represents a standardized, freely available image library originally validated in a large sample primarily comprised of adults. The suitability of the images for use with adolescents has not been investigated. The aim of the present study was to evaluate the appropriateness of the food-pics image library for appetite and eating research with adolescents. Three hundred and seven adolescents (ages 12-17) provided ratings of recognizability, palatability, and desire to eat, for images from the food-pics database. Moreover, participants rated the caloric content (high vs. low) and healthiness (healthy vs. unhealthy) of each image. Adolescents rated approximately 75% of the food images as recognizable. Approximately 65% of recognizable images were correctly categorized as high vs. low calorie and 63% were correctly classified as healthy vs. unhealthy in 80% or more of image ratings. These results suggest that a smaller subset of the food-pics image database is appropriate for use with adolescents. With some modifications to included images, the food-pics image database appears to be appropriate for use in experimental appetite and eating-related research conducted with adolescents. Copyright © 2016 Elsevier Ltd. All rights reserved.
Assembly: a resource for assembled genomes at NCBI
Kitts, Paul A.; Church, Deanna M.; Thibaud-Nissen, Françoise; Choi, Jinna; Hem, Vichet; Sapojnikov, Victor; Smith, Robert G.; Tatusova, Tatiana; Xiang, Charlie; Zherikov, Andrey; DiCuccio, Michael; Murphy, Terence D.; Pruitt, Kim D.; Kimchi, Avi
2016-01-01
The NCBI Assembly database (www.ncbi.nlm.nih.gov/assembly/) provides stable accessioning and data tracking for genome assembly data. The model underlying the database can accommodate a range of assembly structures, including sets of unordered contig or scaffold sequences, bacterial genomes consisting of a single complete chromosome, or complex structures such as a human genome with modeled allelic variation. The database provides an assembly accession and version to unambiguously identify the set of sequences that make up a particular version of an assembly, and tracks changes to updated genome assemblies. The Assembly database reports metadata such as assembly names, simple statistical reports of the assembly (number of contigs and scaffolds, contiguity metrics such as contig N50, total sequence length and total gap length) as well as the assembly update history. The Assembly database also tracks the relationship between an assembly submitted to the International Nucleotide Sequence Database Consortium (INSDC) and the assembly represented in the NCBI RefSeq project. Users can find assemblies of interest by querying the Assembly Resource directly or by browsing available assemblies for a particular organism. Links in the Assembly Resource allow users to easily download sequence and annotations for current versions of genome assemblies from the NCBI genomes FTP site. PMID:26578580
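Assemblies of interest can also be located programmatically through NCBI's E-utilities. The Biopython sketch below searches the Assembly database for an example organism query and prints record identifiers; the email address is a placeholder (NCBI requires a real contact address), and the query term is only illustrative.

    from Bio import Entrez

    Entrez.email = "you@example.org"     # placeholder; set your own address as NCBI requires

    # Example query: current human assemblies in the Assembly database.
    handle = Entrez.esearch(db="assembly",
                            term='"Homo sapiens"[Organism] AND latest[filter]',
                            retmax=5)
    record = Entrez.read(handle)
    handle.close()
    print(record["Count"], record["IdList"])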
Wickham, James D.; Homer, Collin G.; Vogelmann, James E.; McKerrow, Alexa; Mueller, Rick; Herold, Nate; Coluston, John
2014-01-01
The Multi-Resolution Land Characteristics (MRLC) Consortium demonstrates the national benefits of USA Federal collaboration. Starting in the mid-1990s as a small group with the straightforward goal of compiling a comprehensive national Landsat dataset that could be used to meet agencies’ needs, MRLC has grown into a group of 10 USA Federal Agencies that coordinate the production of five different products, including the National Land Cover Database (NLCD), the Coastal Change Analysis Program (C-CAP), the Cropland Data Layer (CDL), the Gap Analysis Program (GAP), and the Landscape Fire and Resource Management Planning Tools (LANDFIRE). As a set, the products include almost every aspect of land cover from impervious surface to detailed crop and vegetation types to fire fuel classes. Some products can be used for land cover change assessments because they cover multiple time periods. The MRLC Consortium has become a collaborative forum, where members share research, methodological approaches, and data to produce products using established protocols, and we believe it is a model for the production of integrated land cover products at national to continental scales. We provide a brief overview of each of the main products produced by MRLC and examples of how each product has been used. We follow that with a discussion of the impact of the MRLC program and a brief overview of future plans.
The role of expert searching in the Family Physicians' Inquiries Network (FPIN)*
Ward, Deborah; Meadows, Susan E.; Nashelsky, Joan E.
2005-01-01
Objective: This article describes the contributions of medical librarians, as members of the Family Physicians' Inquiries Network (FPIN), to the creation of a database of clinical questions and answers that allows family physicians to practice evidence-based medicine using high-quality information at the point of care. The medical librarians have contributed their evidence-based search expertise and knowledge of information systems that support the processes and output of the consortium. Methods: Since its inception, librarians have been included as valued members of the FPIN community. FPIN recognizes the search expertise of librarians, and each FPIN librarian must meet qualifications demonstrating appropriate experience and training in evidence-based medicine. The consortium works collaboratively to produce the Clinical Inquiries series published in family medicine publications. Results: Over 170 Clinical Inquiries have appeared in Journal of Family Practice (JFP) and American Family Physician (AFP). Surveys have shown that this series has become the most widely read part of the JFP Website. As a result, FPIN has formalized specific librarian roles that have helped build the organizational infrastructure. Conclusions: All of the activities of the consortium are highly collaborative, and the librarian community reflects that. The FPIN librarians are valuable and equal contributors to the process of creating, updating, and maintaining high-quality clinical information for practicing primary care physicians. Of particular value is the skill of expert searching that the librarians bring to FPIN's products. PMID:15685280
Akune, Yukie; Lin, Chi-Hung; Abrahams, Jodie L; Zhang, Jingyu; Packer, Nicolle H; Aoki-Kinoshita, Kiyoko F; Campbell, Matthew P
2016-08-05
Glycan structures attached to proteins are composed of diverse monosaccharide sequences and linkages that are produced from precursor nucleotide-sugars by a series of glycosyltransferases. Databases of these structures are an essential resource for the interpretation of analytical data and the development of bioinformatics tools. However, with no template to predict what structures are possible, the human glycan structure databases are incomplete and rely heavily on the curation of published, experimentally determined, glycan structure data. In this work, a library of 45 human glycosyltransferases was used to generate a theoretical database of N-glycan structures comprising 15 or fewer monosaccharide residues. Enzyme specificities were sourced from major online databases including Kyoto Encyclopedia of Genes and Genomes (KEGG) Glycan, Consortium for Functional Glycomics (CFG), Carbohydrate-Active enZymes (CAZy), GlycoGene DataBase (GGDB) and BRENDA. Based on the known activities, more than 1.1 million theoretical structures and 4.7 million synthetic reactions were generated and stored in our database, called UniCorn. Furthermore, we analyzed the differences between the predicted glycan structures in UniCorn and those contained in UniCarbKB (www.unicarbkb.org), a database which stores experimentally described glycan structures reported in the literature, and demonstrate that UniCorn can be used to aid in the assignment of ambiguous structures whilst also serving as a discovery database. Copyright © 2016 Elsevier Ltd. All rights reserved.
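The enumeration idea described above can be sketched as a simple breadth-first expansion in which enzyme rules extend a structure until a residue limit is reached. Everything in the sketch below (the rules, the tuple encoding of structures, the core structure) is a hypothetical simplification, not the UniCorn implementation.

```python
# Conceptual sketch (not the UniCorn implementation): enumerate theoretical
# glycan products by repeatedly applying simplified "glycosyltransferase"
# rules until a residue limit is reached. Structures are encoded here as
# plain residue tuples purely for illustration.
from collections import deque

MAX_RESIDUES = 15

# Hypothetical rules: (required terminal residue, residue added by the enzyme)
RULES = [
    ("GlcNAc", "Gal"),   # e.g. a galactosyltransferase
    ("Gal", "NeuAc"),    # e.g. a sialyltransferase
    ("Man", "GlcNAc"),   # e.g. a GlcNAc-transferase acting on mannose
]

def enumerate_structures(core):
    seen = {core}
    reactions = []
    queue = deque([core])
    while queue:
        structure = queue.popleft()
        if len(structure) >= MAX_RESIDUES:
            continue
        for acceptor, added in RULES:
            if structure[-1] == acceptor:            # enzyme recognizes the terminal residue
                product = structure + (added,)
                reactions.append((structure, added, product))
                if product not in seen:
                    seen.add(product)
                    queue.append(product)
    return seen, reactions

structures, reactions = enumerate_structures(("Man", "Man", "GlcNAc"))
print(len(structures), "structures,", len(reactions), "reactions")
```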
Variability in Standard Outcomes of Posterior Lumbar Fusion Determined by National Databases.
Joseph, Jacob R; Smith, Brandon W; Park, Paul
2017-01-01
National databases are used with increasing frequency in the spine surgery literature to evaluate patient outcomes. The differences between individual databases in relation to outcomes of lumbar fusion are not known. We evaluated the variability in standard outcomes of posterior lumbar fusion between the University HealthSystem Consortium (UHC) database and the Healthcare Cost and Utilization Project National Inpatient Sample (NIS). The NIS and UHC databases were queried for all posterior lumbar fusions (International Classification of Diseases, Ninth Revision code 81.07) performed in 2012. Patient demographics, comorbidities (including obesity), length of stay (LOS), in-hospital mortality, and complications such as urinary tract infection, deep venous thrombosis, pulmonary embolism, myocardial infarction, durotomy, and surgical site infection were collected using specific International Classification of Diseases, Ninth Revision codes. The analysis included 21,470 patients from the NIS database and 14,898 patients from the UHC database. Demographic data were not significantly different between databases. Obesity was more prevalent in UHC (P = 0.001). Mean LOS was 3.8 days in NIS and 4.55 days in UHC (P < 0.0001). Complications were significantly higher in UHC, including urinary tract infection, deep venous thrombosis, pulmonary embolism, myocardial infarction, surgical site infection, and durotomy. In-hospital mortality was similar between databases. The NIS and UHC databases had similar demographic patient populations undergoing posterior lumbar fusion. However, the UHC database reported a significantly higher complication rate and longer LOS. This difference may reflect academic institutions treating higher-risk patients; however, a definitive reason for the variability between databases is unknown. The inability to precisely determine the basis of the variability between databases highlights the limitations of using administrative databases for spinal outcome analysis. Copyright © 2016 Elsevier Inc. All rights reserved.
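For readers unfamiliar with administrative-database workflows, a minimal sketch of this kind of cohort selection is shown below, assuming the discharge records have been loaded into a pandas DataFrame. The file name and column names (PRCODE1, LOS, DIED) are hypothetical; real NIS and UHC extracts use their own layouts.

```python
# Minimal sketch of cohort selection from an administrative extract.
# File name and column names are hypothetical placeholders.
import pandas as pd

records = pd.read_csv("inpatient_2012.csv", dtype=str)          # hypothetical extract
records["LOS"] = pd.to_numeric(records["LOS"], errors="coerce")

# Posterior lumbar fusion: ICD-9-CM procedure code 81.07 (often stored as "8107")
fusion = records[records["PRCODE1"] == "8107"]

print("cases:", len(fusion))
print("mean LOS (days):", round(fusion["LOS"].mean(), 2))
print("in-hospital mortality:", (fusion["DIED"] == "1").mean())
```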
Holm, Sven; Russell, Greg; Nourrit, Vincent; McLoughlin, Niall
2017-01-01
A database of retinal fundus images, the DR HAGIS database, is presented. This database consists of 39 high-resolution color fundus images obtained from a diabetic retinopathy screening program in the UK. The NHS screening program uses service providers that employ different fundus and digital cameras, which results in a range of different image sizes and resolutions. Furthermore, patients enrolled in such programs often display other comorbidities in addition to diabetes. Therefore, in an effort to replicate the normal range of images examined by grading experts during screening, the DR HAGIS database consists of images of varying sizes and resolutions and four comorbidity subgroups, collectively defining the Diabetic Retinopathy, Hypertension, Age-related macular degeneration, and Glaucoma Image Set (DR HAGIS). For each image, the vasculature has been manually segmented to provide a realistic set of images on which to test automatic vessel extraction algorithms. Modified versions of two previously published vessel extraction algorithms were applied to this database to provide baseline measurements. A method based purely on the intensity of image pixels resulted in a mean segmentation accuracy of 95.83% ([Formula: see text]), whereas an algorithm based on Gabor filters generated an accuracy of 95.71% ([Formula: see text]).
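A minimal sketch of Gabor-filter-based vessel enhancement is given below for orientation; it is not the published algorithm evaluated in the study, and the file name, filter frequency, and orientation count are illustrative choices.

```python
# Minimal sketch of Gabor-filter-based vessel enhancement (illustrative
# parameters only; this is not the algorithm evaluated in the study).
import numpy as np
from skimage import io, color, filters

fundus = io.imread("fundus.png")            # hypothetical DR HAGIS image file
gray = color.rgb2gray(fundus)               # vessels contrast well in grayscale

# Take the maximum response over a bank of orientations at a fixed frequency
responses = []
for theta in np.linspace(0, np.pi, 12, endpoint=False):
    real, imag = filters.gabor(gray, frequency=0.15, theta=theta)
    responses.append(np.sqrt(real ** 2 + imag ** 2))
vesselness = np.max(responses, axis=0)

# Threshold the response map to obtain a binary vessel mask
mask = vesselness > filters.threshold_otsu(vesselness)
print("vessel pixel fraction:", mask.mean())
```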
CHRONIS: an animal chromosome image database.
Toyabe, Shin-Ichi; Akazawa, Kouhei; Fukushi, Daisuke; Fukui, Kiichi; Ushiki, Tatsuo
2005-01-01
We have constructed a database system named CHRONIS (CHROmosome and Nano-Information System) to collect images of animal chromosomes and related nanotechnological information. CHRONIS enables rapid sharing of information on chromosome research among cell biologists and researchers in other fields via the Internet. CHRONIS is also intended to serve as a liaison tool for researchers who work in different centers. The image database contains more than 3,000 color microscopic images, including karyotypic images obtained from more than 1,000 species of animals. Researchers can browse the contents of the database through a standard World Wide Web interface at the following URL: http://chromosome.med.niigata-u.ac.jp/chronis/servlet/chronisservlet. The system enables users to input new images into the database, to locate images of interest by keyword searches, and to display the images with detailed information. CHRONIS has a wide range of applications, such as searching for appropriate probes for fluorescence in situ hybridization, comparing various kinds of microscopic images of a single species, and finding researchers working in the same field of interest.
Content Based Image Matching for Planetary Science
NASA Astrophysics Data System (ADS)
Deans, M. C.; Meyer, C.
2006-12-01
Planetary missions generate large volumes of data. With the MER rovers still functioning on Mars, PDS contains over 7200 released images from the Microscopic Imagers alone. These data products are only searchable by keys such as the Sol, spacecraft clock, or rover motion counter index, with little connection to the semantic content of the images. We have developed a method for matching images based on the visual textures in images. For every image in a database, a series of filters compute the image response to localized frequencies and orientations. Filter responses are turned into a low dimensional descriptor vector, generating a 37 dimensional fingerprint. For images such as the MER MI, this represents a compression ratio of 99.9965% (the fingerprint is approximately 0.0035% the size of the original image). At query time, fingerprints are quickly matched to find images with similar appearance. Image databases containing several thousand images are preprocessed offline in a matter of hours. Image matches from the database are found in a matter of seconds. We have demonstrated this image matching technique using three sources of data. The first database consists of 7200 images from the MER Microscopic Imager. The second database consists of 3500 images from the Narrow Angle Mars Orbital Camera (MOC-NA), which were cropped into 1024×1024 sub-images for consistency. The third database consists of 7500 scanned archival photos from the Apollo Metric Camera. Example query results from all three data sources are shown. We have also carried out user tests to evaluate matching performance by hand labeling results. User tests verify approximately 20% false positive rate for the top 14 results for MOC NA and MER MI data. This means typically 10 to 12 results out of 14 match the query image sufficiently. This represents a powerful search tool for databases of thousands of images where the a priori match probability for an image might be less than 1%. Qualitatively, correct matches can also be confirmed by verifying MI images taken in the same z-stack, or MOC image tiles taken from the same image strip. False negatives are difficult to quantify as it would mean finding matches in the database of thousands of images that the algorithm did not detect.
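The matching pipeline described above can be summarized as: compute a small filter-bank descriptor per image offline, then rank database images by descriptor distance at query time. The sketch below illustrates that idea with a generic oriented-derivative filter bank; the actual mission filters and the 37-dimensional fingerprint are not reproduced here.

```python
# Sketch of the general idea: summarize each image's texture with a small
# filter-bank descriptor and match by nearest neighbour. The filter bank and
# the 36-value descriptor here are illustrative, not the mission pipeline.
import numpy as np
from scipy import ndimage

def fingerprint(image, orientations=6, scales=3):
    """Return a small descriptor of mean/std of oriented responses per scale."""
    feats = []
    img = image.astype(float)
    for s in range(scales):
        smoothed = ndimage.gaussian_filter(img, sigma=2 ** s)
        for k in range(orientations):
            angle = 180.0 * k / orientations
            rotated = ndimage.rotate(smoothed, angle, reshape=False, mode="nearest")
            response = ndimage.sobel(rotated, axis=1)   # oriented derivative
            feats.extend([response.mean(), response.std()])
    return np.asarray(feats)

def query(db_fingerprints, q, top_k=14):
    """Rank database images by Euclidean distance to the query fingerprint."""
    dists = np.linalg.norm(db_fingerprints - q, axis=1)
    return np.argsort(dists)[:top_k]
```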
The Magnetics Information Consortium (MagIC)
NASA Astrophysics Data System (ADS)
Johnson, C.; Constable, C.; Tauxe, L.; Koppers, A.; Banerjee, S.; Jackson, M.; Solheid, P.
2003-12-01
The Magnetics Information Consortium (MagIC) is a multi-user facility to establish and maintain a state-of-the-art relational database and digital archive for rock and paleomagnetic data. The goal of MagIC is to make such data generally available and to provide an information technology infrastructure for these and other research-oriented databases run by the international community. As its name implies, MagIC will not be restricted to paleomagnetic or rock magnetic data only, although MagIC will focus on these kinds of information during its setup phase. MagIC will be hosted under EarthRef.org at http://earthref.org/MAGIC/ where two "integrated" web portals will be developed, one for paleomagnetism (currently functional as a prototype that can be explored via the http://earthref.org/databases/PMAG/ link) and one for rock magnetism. The MagIC database will store all measurements and their derived properties for studies of paleomagnetic directions (inclination, declination) and their intensities, and for rock magnetic experiments (hysteresis, remanence, susceptibility, anisotropy). Ultimately, this database will allow researchers to study "on the internet" and to download important data sets that display paleo-secular variations in the intensity of the Earth's magnetic field over geological time, or that display magnetic data in typical Zijderveld, hysteresis/FORC and various magnetization/remanence diagrams. The MagIC database is completely integrated in the EarthRef.org relational database structure and thus benefits significantly from already-existing common database components, such as the EarthRef Reference Database (ERR) and Address Book (ERAB). The ERR allows researchers to find complete sets of literature resources as used in GERM (Geochemical Earth Reference Model), REM (Reference Earth Model) and MagIC. The ERAB contains addresses for all contributors to the EarthRef.org databases, and also for those who participated in data collection, archiving and analysis in the magnetic studies. Integration with these existing components will guarantee direct traceability to the original sources of the MagIC data and metadata. The MagIC database design focuses around the general workflow that results in the determination of typical paleomagnetic and rock magnetic analyses. This ensures that individual data points can be traced between the actual measurements and their associated specimen, sample, site, rock formation and locality. This permits a distinction between original and derived data, where the actual measurements are performed at the specimen level, and data at the sample level and higher are then derived products in the database. These relations will also allow recalculation of derived properties, such as site means, when new data becomes available for a specific locality. Data contribution to the MagIC database is critical in achieving a useful research tool. We have developed a standard data and metadata template that can be used to provide all data at the same time as publication. Software tools are provided to facilitate easy population of these templates. The tools allow for the import/export of data files in a delimited text format, and they provide some advanced functionality to validate data and to check internal coherence of the data in the template. During and after publication these standardized MagIC templates will be stored in the ERR database of EarthRef.org from where they can be downloaded at all times. 
Finally, the contents of these template files will be automatically parsed into the online relational database.
Improved Infrastructure for CDMS and JPL Molecular Spectroscopy Catalogues
NASA Astrophysics Data System (ADS)
Endres, Christian; Schlemmer, Stephan; Drouin, Brian; Pearson, John; Müller, Holger S. P.; Schilke, P.; Stutzki, Jürgen
2014-06-01
Over the past years, a new infrastructure for atomic and molecular databases has been developed within the framework of the Virtual Atomic and Molecular Data Centre (VAMDC). Standards for the representation of atomic and molecular data, as well as a set of protocols, have been established which now allow data to be retrieved from various databases through one portal and combined easily. Apart from spectroscopic databases such as the Cologne Database for Molecular Spectroscopy (CDMS), the Jet Propulsion Laboratory microwave, millimeter and submillimeter spectral line catalogue (JPL) and the HITRAN database, various databases on molecular collisions (BASECOL, KIDA) and reactions (UMIST) are connected. Together with other groups within the VAMDC consortium, we are working on common user tools to simplify access for new users and to tailor data requests for users with specific needs. This comprises in particular tools to support the analysis of complex observational data obtained with the ALMA telescope. In this presentation, requests to CDMS and JPL will be used to explain the basic concepts and the tools provided by VAMDC. In addition, a new portal to CDMS will be presented which has a number of new features, in particular meaningful quantum numbers, references linked to data points, access to state energies, and improved documentation. Fit files are accessible for download, and queries to other databases are possible.
NASA Astrophysics Data System (ADS)
Ramachandran S., Sindhu; George, Jose; Skaria, Shibon; V. V., Varun
2018-02-01
Lung cancer is the leading cause of cancer-related deaths in the world. The survival rate can be improved if the presence of lung nodules is detected early. This has also led to more focus being given to computer-aided detection (CAD) and diagnosis of lung nodules. The variability in shape, size, and texture of lung nodules is a challenge to be faced when developing these detection systems. In the proposed work, we use convolutional neural networks to learn the features for nodule detection, replacing the traditional method of handcrafting features such as geometric shape or texture. Our network uses the DetectNet architecture, based on YOLO (You Only Look Once), to detect nodules in CT scans of the lung. In this architecture, object detection is treated as a regression problem, with a single convolutional network simultaneously predicting multiple bounding boxes and class probabilities for those boxes. By training on chest CT scans from the Lung Image Database Consortium (LIDC) using NVIDIA DIGITS and the Caffe deep learning framework, we show that nodule detection with this single neural network can achieve reasonably low false positive rates with high sensitivity and precision.
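The detection-as-regression idea can be illustrated with a minimal PyTorch sketch: a single convolutional network maps a CT slice to a coarse grid in which every cell predicts an objectness score and a bounding box. This is a simplification for illustration only, not the DetectNet/DIGITS/Caffe model used in the study, and the input and layer sizes are arbitrary.

```python
# Minimal PyTorch sketch of detection-as-regression: one convolutional network
# maps a CT slice to a coarse grid where each cell predicts a confidence and a
# bounding box (x, y, w, h). Illustrative simplification, not DetectNet itself.
import torch
import torch.nn as nn

class GridDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # 5 outputs per grid cell: objectness + (x, y, w, h)
        self.head = nn.Conv2d(64, 5, kernel_size=1)

    def forward(self, x):
        out = self.head(self.backbone(x))            # (N, 5, H/8, W/8)
        conf = torch.sigmoid(out[:, :1])             # nodule confidence per cell
        boxes = out[:, 1:]                           # box regression outputs
        return conf, boxes

model = GridDetector()
slice_batch = torch.randn(2, 1, 512, 512)            # two CT slices (illustrative size)
conf, boxes = model(slice_batch)
print(conf.shape, boxes.shape)                        # (2, 1, 64, 64) and (2, 4, 64, 64)
```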
Retrieving high-resolution images over the Internet from an anatomical image database
NASA Astrophysics Data System (ADS)
Strupp-Adams, Annette; Henderson, Earl
1999-12-01
The Visible Human Data set is an important contribution to the national collection of anatomical images. To enhance the availability of these images, the National Library of Medicine has supported the design and development of a prototype object-oriented image database which imports, stores, and distributes high-resolution anatomical images in both pixel and voxel formats. One of the key database modules is its client-server Internet interface. This Web interface provides a query engine with retrieval access to high-resolution anatomical images that range in size from 100 KB for browser-viewable rendered images to 1 GB for anatomical structures in voxel file formats. The Web query and retrieval client-server system is composed of applet GUIs, servlets, and RMI application modules which communicate with each other to allow users to query for specific anatomical structures and to retrieve image data as well as associated anatomical images from the database. Selected images can be downloaded individually as single files via HTTP or downloaded in batch mode over the Internet to the user's machine through an applet that uses Netscape's Object Signing mechanism. The image database uses ObjectDesign's object-oriented DBMS, ObjectStore, which has a Java interface. The query and retrieval system has been tested with a Java-CDE window system and on the x86 architecture using Windows NT 4.0. This paper describes the Java applet client search engine that queries the database; the Java client module that enables users to view anatomical images online; the Java application server interface to the database, which organizes data returned to the user; and its distribution engine, which allows users to download image files individually and/or in batch mode.
GlycomeDB – integration of open-access carbohydrate structure databases
Ranzinger, René; Herget, Stephan; Wetter, Thomas; von der Lieth, Claus-Wilhelm
2008-01-01
Background Although carbohydrates are the third major class of biological macromolecules, after proteins and DNA, there is neither a comprehensive database for carbohydrate structures nor an established universal structure encoding scheme for computational purposes. Funding for further development of the Complex Carbohydrate Structure Database (CCSD or CarbBank) ceased in 1997, and since then several initiatives have developed independent databases with partially overlapping foci. For each database, different encoding schemes for residues and sequence topology were designed. Therefore, it is virtually impossible to obtain an overview of all deposited structures or to compare the contents of the various databases. Results We have implemented procedures which download the structures contained in the seven major databases, including GLYCOSCIENCES.de, the Consortium for Functional Glycomics (CFG), the Kyoto Encyclopedia of Genes and Genomes (KEGG) and the Bacterial Carbohydrate Structure Database (BCSDB). We have created a new database called GlycomeDB, containing all structures, their taxonomic annotations and references (IDs) for the original databases. More than 100,000 datasets were imported, resulting in more than 33,000 unique sequences now encoded in GlycomeDB using the universal format GlycoCT. Inconsistencies were found in all public databases, which were discussed and corrected in multiple feedback rounds with the responsible curators. Conclusion GlycomeDB is a new, publicly available database for carbohydrate sequences with a unified, all-encompassing structure encoding format and NCBI taxonomic referencing. The database is updated weekly and can be downloaded free of charge. The Java application GlycoUpdateDB is also available for establishing and updating a local installation of GlycomeDB. With the advent of GlycomeDB, the distributed islands of knowledge in glycomics are now bridged to form a single resource. PMID:18803830
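Conceptually, the integration step amounts to translating each source record into one canonical encoding, deduplicating on that encoding, and retaining the source IDs as cross-references. The sketch below illustrates this flow; the translator, the records, and the encoding are hypothetical stand-ins, not actual GlycoCT conversion code.

```python
# Conceptual sketch of the integration step: translate records from several
# source databases into one common encoding, deduplicate by that encoding,
# and keep the source IDs as cross-references. All values are hypothetical.
from collections import defaultdict

def to_common_encoding(source, native_sequence):
    # Placeholder for a per-database translator into a canonical format
    return f"canonical({native_sequence.strip().lower()})"

source_records = [
    ("CFG", "cfg:0001", "Gal(b1-4)GlcNAc"),
    ("KEGG", "G00001", "Gal(b1-4)GlcNAc "),      # same structure, different ID
    ("BCSDB", "bcsdb:42", "Man(a1-6)Man"),
]

merged = defaultdict(list)                        # canonical sequence -> source references
for source, source_id, sequence in source_records:
    merged[to_common_encoding(source, sequence)].append((source, source_id))

for canonical, refs in merged.items():
    print(canonical, "->", refs)
```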
Euclid Mission: Mapping the Geometry of the Dark Universe. Mission and Consortium Status
NASA Technical Reports Server (NTRS)
Rhodes, Jason
2011-01-01
Euclid concept: (1) A high-precision survey mission to map the geometry of the Dark Universe. (2) Optimized for two complementary cosmological probes: (2a) Weak Gravitational Lensing, (2b) Baryonic Acoustic Oscillations, (2c) additional probes: clusters, redshift space distortions, ISW. (3) A full extragalactic sky survey with a 1.2 m telescope at L2: (3a) Imaging: (3a-1) high-precision imaging at visible wavelengths, (3a-2) photometry/imaging in the near-infrared; (3b) near-infrared spectroscopy. (4) Synergy with ground-based surveys. (5) Legacy science for a wide range of fields in astronomy.
Seeing is believing: on the use of image databases for visually exploring plant organelle dynamics.
Mano, Shoji; Miwa, Tomoki; Nishikawa, Shuh-ichi; Mimura, Tetsuro; Nishimura, Mikio
2009-12-01
Organelle dynamics vary dramatically depending on cell type, developmental stage and environmental stimuli, so that various parameters, such as size, number and behavior, are required for the description of the dynamics of each organelle. Imaging techniques are superior to other techniques for describing organelle dynamics because these parameters are visually exhibited. Therefore, as the results can be seen immediately, investigators can more easily grasp organelle dynamics. At present, imaging techniques are emerging as fundamental tools in plant organelle research, and the development of new methodologies to visualize organelles and the improvement of analytical tools and equipment have allowed the large-scale generation of image and movie data. Accordingly, image databases that accumulate information on organelle dynamics are an increasingly indispensable part of modern plant organelle research. In addition, image databases are potentially rich data sources for computational analyses, as image and movie data reposited in the databases contain valuable and significant information, such as size, number, length and velocity. Computational analytical tools support image-based data mining, such as segmentation, quantification and statistical analyses, to extract biologically meaningful information from each database and combine them to construct models. In this review, we outline the image databases that are dedicated to plant organelle research and present their potential as resources for image-based computational analyses.
Photovoltaic Manufacturing Consortium (PVMC) – Enabling America’s Solar Revolution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Metacarpa, David
The U.S. Photovoltaic Manufacturing Consortium (US-PVMC) is an industry-led consortium created with the mission to accelerate the research, development, manufacturing, field testing, commercialization, and deployment of next-generation solar photovoltaic technologies. Formed as part of the U.S. Department of Energy's (DOE) SunShot initiative, and headquartered in New York State, PVMC is managed by the State University of New York Polytechnic Institute (SUNY Poly) at the Colleges of Nanoscale Science and Engineering. PVMC is a hybrid of an industry-led consortium and a manufacturing development facility, with capabilities for collaborative and proprietary industry engagement. Through its technology development programs, advanced manufacturing development facilities, system demonstrations, and reliability and testing capabilities, PVMC has demonstrated itself to be a recognized proving ground for innovative solar technologies and system designs. PVMC comprises multiple locations, with the core manufacturing and deployment support activities conducted at the Solar Energy Development Center (SEDC) and the core Si wafering and metrology technologies headed out of the University of Central Florida. The SEDC provides a pilot line for proof-of-concept prototyping, offering critical opportunities to demonstrate emerging concepts in PV manufacturing, such as evaluations of innovative materials, system components, and PV system designs. The facility, located in Halfmoon, NY, encompasses 40,000 square feet of dedicated PV development space. The infrastructure and capabilities housed at PVMC include PV system-level testing at the Prototype Demonstration Facility (PDF), manufacturing-scale cell and module fabrication at the Manufacturing Development Facility (MDF), and cell and module testing and reliability equipment on its PV pilot line, all integrated with a PV performance database and analytical characterizations for PVMC and its partners' test and commercial arrays. Additional development and deployment support is also housed at the SEDC, such as cost modeling and cost-model-based development activities for PV and thin-film modules, components, and system-level designs for reduced LCOE through lower installation hardware costs, labor reductions, soft costs, and reduced operations and maintenance costs. The progression of the consortium activities started with infrastructure and capabilities build-out focused on CIGS thin-film photovoltaics, with a particular focus on flexible cell and module production. As the marketplace changed and partners' objectives shifted, the consortium moved heavily toward deployment and market-pull activities, including balance of system, cost modeling, and installation cost reduction efforts, along with impacts on performance and DER operational costs. The consortium consisted of a wide array of PV supply chain companies, from equipment and component suppliers through national developers and installers, with a particular focus on commercial-scale deployments (typically 25 to 2MW installations). With DOE funding ending after the fifth budget period, the advantages and disadvantages of such a consortium are detailed, along with a review of potential avenues for self-sustainability.
National Land Cover Database 2011 (NLCD 2011) is the most recent national land cover product created by the Multi-Resolution Land Characteristics (MRLC) Consortium. NLCD 2011 provides - for the first time - the capability to assess wall-to-wall, spatially explicit, national land cover changes and trends across the United States from 2001 to 2011. As with the two previous NLCD land cover products, NLCD 2011 keeps the same 16-class land cover classification scheme that has been applied consistently across the United States at a spatial resolution of 30 meters. NLCD 2011 is based primarily on a decision-tree classification of circa 2011 Landsat satellite data. This dataset is associated with the following publication: Homer, C., J. Dewitz, L. Yang, S. Jin, P. Danielson, G. Xian, J. Coulston, N. Herold, J. Wickham, and K. Megown. Completion of the 2011 National Land Cover Database for the Conterminous United States – Representing a Decade of Land Cover Change Information. PHOTOGRAMMETRIC ENGINEERING AND REMOTE SENSING. American Society for Photogrammetry and Remote Sensing, Bethesda, MD, USA, 81(0): 345-354, (2015).
Significance of genome-wide association studies in molecular anthropology.
Gupta, Vipin; Khadgawat, Rajesh; Sachdeva, Mohinder Pal
2009-12-01
The successful advent of the genome-wide approach in association studies raises the hopes of human geneticists for solving the genetic maze of complex traits, especially disorders. This approach, which is replete with the application of cutting-edge technology and supported by big science projects (such as the Human Genome Project and, even more importantly, the International HapMap Project) and various important databases (SNP database, CNV database, etc.), has had unprecedented success in rapidly uncovering many of the genetic determinants of complex disorders. The application of this approach to the genetics of classical anthropological variables such as height, skin color, and eye color, and to other genome diversity projects, has certainly expanded the horizons of molecular anthropology. Therefore, in this article we propose a genome-wide association approach for molecular anthropological studies, drawing lessons from the exemplary study of the Wellcome Trust Case Control Consortium. We also highlight the importance and uniqueness of Indian population groups in facilitating the design of, and in finding optimum solutions for, other genome-wide association-related challenges.
Hymenoptera Genome Database: integrating genome annotations in HymenopteraMine.
Elsik, Christine G; Tayal, Aditi; Diesh, Colin M; Unni, Deepak R; Emery, Marianne L; Nguyen, Hung N; Hagen, Darren E
2016-01-04
We report an update of the Hymenoptera Genome Database (HGD) (http://HymenopteraGenome.org), a model organism database for insect species of the order Hymenoptera (ants, bees and wasps). HGD maintains genomic data for 9 bee species, 10 ant species and 1 wasp, including the versions of genome and annotation data sets published by the genome sequencing consortiums and those provided by NCBI. A new data-mining warehouse, HymenopteraMine, based on the InterMine data warehousing system, integrates the genome data with data from external sources and facilitates cross-species analyses based on orthology. New genome browsers and annotation tools based on JBrowse/WebApollo provide easy genome navigation, and viewing of high throughput sequence data sets and can be used for collaborative genome annotation. All of the genomes and annotation data sets are combined into a single BLAST server that allows users to select and combine sequence data sets to search. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
Comparison of photo-matching algorithms commonly used for photographic capture-recapture studies.
Matthé, Maximilian; Sannolo, Marco; Winiarski, Kristopher; Spitzen-van der Sluijs, Annemarieke; Goedbloed, Daniel; Steinfartz, Sebastian; Stachow, Ulrich
2017-08-01
Photographic capture-recapture is a valuable tool for obtaining demographic information on wildlife populations due to its noninvasive nature and cost-effectiveness. Recently, several computer-aided photo-matching algorithms have been developed to more efficiently match images of unique individuals in databases with thousands of images. However, the identification accuracy of these algorithms can severely bias estimates of vital rates and population size. Therefore, it is important to understand the performance and limitations of state-of-the-art photo-matching algorithms prior to implementation in capture-recapture studies involving possibly thousands of images. Here, we compared the performance of four photo-matching algorithms, Wild-ID, I3S Pattern+, APHIS, and AmphIdent, using multiple amphibian databases of varying image quality. We measured the performance of each algorithm and evaluated the performance in relation to database size and the number of matching images in the database. We found that algorithm performance differed greatly by algorithm and image database, with recognition rates ranging from 100% to 22.6% when limiting the review to the 10 highest ranking images. We found that recognition rate degraded marginally with increased database size and could be improved considerably with a higher number of matching images in the database. In our study, the pixel-based algorithm of AmphIdent exhibited superior recognition rates compared to the other approaches. We recommend carefully evaluating algorithm performance prior to using an algorithm to match a complete database. By choosing a suitable matching algorithm, databases of sizes that are unfeasible to match "by eye" can be easily translated to accurate individual capture histories necessary for robust demographic estimates.
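The headline metric above, recognition rate within the 10 highest-ranking images, can be computed as in the short sketch below; the ranked candidate lists shown are synthetic examples.

```python
# Sketch of the evaluation metric: the recognition rate is the fraction of
# query images whose true individual appears among the top-ranked candidates
# returned by a matching algorithm. The ranked lists here are synthetic.
def recognition_rate(ranked_candidates, true_ids, top_n=10):
    hits = sum(1 for ranks, truth in zip(ranked_candidates, true_ids)
               if truth in ranks[:top_n])
    return hits / len(true_ids)

# Two queries: the first finds its match at rank 3, the second not in the top 10
ranked = [["id7", "id2", "id5"] + ["x"] * 7,
          ["id9", "id4"] + ["x"] * 8]
truth = ["id5", "id1"]
print(recognition_rate(ranked, truth))   # 0.5
```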
Wolf, Dominik; Bocchetta, Martina; Preboske, Gregory M; Boccardi, Marina; Grothe, Michel J
2017-08-01
A harmonized protocol (HarP) for manual hippocampal segmentation on magnetic resonance imaging (MRI) has recently been developed by an international European Alzheimer's Disease Consortium-Alzheimer's Disease Neuroimaging Initiative project. We aimed to provide consensual certified HarP hippocampal labels in Montreal Neurological Institute (MNI) standard space to serve as a reference in automated image analyses. Manual HarP tracings on the high-resolution MNI152 standard space template by four expert certified HarP tracers were combined to obtain consensual bilateral hippocampus labels. The utility and validity of these reference labels are demonstrated in a simple atlas-based morphometry approach for automated calculation of HarP-compliant hippocampal volumes within SPM software. Individual tracings showed very high agreement among the four expert tracers (pairwise Jaccard indices 0.82-0.87). Automatically calculated hippocampal volumes were highly correlated (r = 0.89 and 0.91 for the left and right hippocampus, respectively) with gold standard volumes in the HarP benchmark data set (N = 135 MRIs), with a mean volume difference of 9% (standard deviation 7%). The consensual HarP hippocampus labels in the MNI152 template can serve as a reference standard for automated image analyses involving MNI standard space normalization. Copyright © 2017 the Alzheimer's Association. Published by Elsevier Inc. All rights reserved.
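A minimal sketch of atlas-based volume extraction with such reference labels is shown below: assuming a subject's spatially normalized, modulated grey-matter map and the consensual hippocampus label are both sampled on the MNI152 grid, the volume is the labelled voxel sum scaled by the voxel volume. The file names are placeholders, and this generic nibabel illustration is not the authors' SPM pipeline.

```python
# Minimal sketch of atlas-based volume extraction with a reference label.
# Assumes both images are sampled on the same MNI152 grid; file names are
# placeholders, and this is a generic illustration, not the authors' pipeline.
import numpy as np
import nibabel as nib

label_img = nib.load("harp_hippocampus_left_mni152.nii.gz")    # reference label
gm_img = nib.load("subject_modulated_gm_mni152.nii.gz")        # subject map

label = label_img.get_fdata() > 0.5
gm = gm_img.get_fdata()

voxel_volume_mm3 = np.prod(label_img.header.get_zooms()[:3])
volume_mm3 = float(gm[label].sum()) * voxel_volume_mm3
print(f"left hippocampal volume: {volume_mm3 / 1000:.2f} ml")
```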
World Wide Web and Internet: applications for radiologists.
Wunderbaldinger, P; Schima, W; Turetschek, K; Helbich, T H; Bankier, A A; Herold, C J
1999-01-01
Global exchange of information is one of the major sources of scientific progress in medicine. For management of the rapidly growing body of medical information, computers and their applications have become an indispensable scientific tool. Approximately 36 million computer users are part of a worldwide network called the Internet or "information highway" and have created a new infrastructure to promote rapid and efficient access to medical, and thus also to radiological, information. With the establishment of the World Wide Web (WWW) by a consortium of computer users who used a standardized, nonproprietary syntax termed HyperText Markup Language (HTML) for composing documents, it has become possible to provide interactive multimedia presentations to a wide audience. The extensive use of images in radiology makes education, worldwide consultation (review) and scientific presentation via the Internet a major beneficiary of this technical development. This is possible, since both information (text) as well as medical images can be transported via the Internet. Presently, the Internet offers an extensive database for radiologists. Since many radiologists and physicians have to be considered "Internet novices" and, hence, cannot yet avail themselves of the broad spectrum of the Internet, the aim of this article is to present a general introduction to the WWW/Internet and its applications for radiologists. All Internet sites mentioned in this article can be found at the following Internet address: http://www.univie.ac.at/radio/radio.html (Department of Radiology, University of Vienna)
2012-06-10
…and white light) and the longitudinal extent of the SEP event in the heliosphere. The STEREO SECCHI data are produced by a consortium of RAL (UK), NRL (USA), LMSAL (USA), GSFC (USA), MPS (Germany), CSL (Belgium), IOTA (France), …
Thermal Protection System Imagery Inspection Management System -TIIMS
NASA Technical Reports Server (NTRS)
Goza, Sharon; Melendrez, David L.; Henningan, Marsha; LaBasse, Daniel; Smith, Daniel J.
2011-01-01
TIIMS is used during the inspection phases of every mission to provide quick visual feedback, detailed inspection data, and determinations to the mission management team. The system consists of a visual Web page interface, an SQL database, and a graphical image generator. These combine to allow a user to quickly ascertain the status of the inspection process and the current determination of any problem zones. The TIIMS system allows inspection engineers to enter their determinations into a database and to link pertinent images and video to those database entries. The database then assigns criteria to each zone and tile and, via query, sends the information to a graphical image generation program. Using the official TIPS database tile positions and sizes, the graphical image generation program creates images of the current status of the orbiter, coloring zones and tiles based on a predefined key code. These images are then displayed on a Web page using customized JAVA scripts to display the appropriate zone of the orbiter based on the location of the user's cursor. The close-up graphic and database entry for a particular zone can then be seen by selecting that zone. This page contains links into the database to access the images used by the inspection engineers when they made the determinations entered into the database. The status of the inspection zones changes as determinations are refined, and is shown by the appropriate color code.
Using an image-extended relational database to support content-based image retrieval in a PACS.
Traina, Caetano; Traina, Agma J M; Araújo, Myrian R B; Bueno, Josiane M; Chino, Fabio J T; Razente, Humberto; Azevedo-Marques, Paulo M
2005-12-01
This paper presents a new Picture Archiving and Communication System (PACS), called cbPACS, which has content-based image retrieval capabilities. The cbPACS answers range and k-nearest-neighbor similarity queries, employing a relational database manager extended to support images. The images are compared through their features, which are extracted by an image-processing module and stored in the extended relational database. The database extensions were developed to answer similarity queries efficiently by taking advantage of specialized indexing methods. The main concept supporting the extensions is the definition, inside the relational manager, of distance functions based on features extracted from the images. An extension to the SQL language enables the construction of an interpreter that intercepts the extended commands and translates them to standard SQL, allowing any relational database server to be used. Currently, the system works with features based on the color distribution of the images, through normalized histograms as well as metric histograms. Metric histograms are invariant to scale, translation, and rotation of images, as well as to brightness transformations. The cbPACS is prepared to integrate new image features based on texture and on the shape of the main objects in the image.
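The translation concept can be illustrated with a small sketch: a similarity predicate over stored histogram features is answered by fetching candidate feature vectors with ordinary SQL and ranking them with a distance function in the application layer. The schema, the table, and the extended-SQL comment below are hypothetical, and SQLite stands in for whatever relational server is actually used.

```python
# Illustrative sketch of answering a k-nearest-neighbor similarity query over
# stored histogram features with ordinary SQL plus an in-application distance.
# The schema and the extended-SQL comment are hypothetical.
import sqlite3
import numpy as np

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE image_features (image_id TEXT, histogram BLOB)")

def store(image_id, hist):
    conn.execute("INSERT INTO image_features VALUES (?, ?)",
                 (image_id, np.asarray(hist, dtype=np.float32).tobytes()))

def knn(query_hist, k=3):
    # Conceptually: SELECT image_id FROM image_features
    #               WHERE histogram NEAR <query> STOP AFTER k   -- hypothetical extended SQL
    q = np.asarray(query_hist, dtype=np.float32)
    rows = conn.execute("SELECT image_id, histogram FROM image_features").fetchall()
    scored = [(float(np.abs(np.frombuffer(blob, dtype=np.float32) - q).sum()), image_id)
              for image_id, blob in rows]
    return [image_id for _, image_id in sorted(scored)[:k]]

store("exam-001", [0.2, 0.5, 0.3])
store("exam-002", [0.1, 0.1, 0.8])
print(knn([0.2, 0.4, 0.4], k=1))
```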
NASA Astrophysics Data System (ADS)
Nault, Kristie A.; Brucker, Melissa J.; Hammergren, Mark; Gyuk, Geza; Solontoi, Mike R.
2015-11-01
We present astrometric results for near-Earth objects (NEOs) targeted in the fourth quarter of 2014 and in 2015. This is part of Adler Planetarium’s NEO characterization and astrometric follow-up program, which uses the Astrophysical Research Consortium (ARC) 3.5-m telescope at Apache Point Observatory (APO). The program utilizes a 17% share of telescope time, amounting to a total of 500 hours per year. This time is divided into two-hour observing runs approximately every other night for astrometry and frequent half-night runs several times a month for spectroscopy (see poster by M. Hammergren et al.) and light curve studies (see poster by M. J. Brucker et al.). Observations were made using the Seaver Prototype Imaging Camera (SPIcam), a visible-wavelength, direct-imaging CCD camera with 2048 x 2048 pixels and a field of view of 4.78’ x 4.78’. Observations were made using 2 x 2 binning. Special emphasis has been placed on the smallest NEOs, particularly those around 140 m in diameter. Targets were selected based on absolute magnitude (prioritizing those with H > 25 mag to select small objects) and a 3σ uncertainty of less than 400” to ensure that the target is in the FOV. Targets were drawn from the Minor Planet Center (MPC) NEA Observing Planning Aid, the JPL What’s Observable tool, and the Spaceguard priority list and faint NEO list. As of August 2015, we have detected 670 NEOs for astrometric follow-up, in line with our goal of providing astrometry on a thousand NEOs per year. Astrometric calculations were done using the interactive software tool Astrometrica, which is used for data reduction focusing on the minor bodies of the solar system. The program includes automatic reference star identification from new-generation star catalogs, access to the complete MPC database of orbital elements, and automatic moving object detection and identification. This work is based on observations made using the 3.5-m telescope at Apache Point Observatory, owned and operated by the Astrophysical Research Consortium. We acknowledge support from the NASA NEOO award NNX14AL17G and thank the University of Chicago Astronomy and Astrophysics Department for observing time in 2014.
NASA Astrophysics Data System (ADS)
Koppers, A.; Tauxe, L.; Constable, C.; Pisarevsky, S.; Jackson, M.; Solheid, P.; Banerjee, S.; Johnson, C.; Genevey, A.; Delaney, R.; Baker, P.; Sbarbori, E.
2005-12-01
The Magnetics Information Consortium (MagIC) operates an online relational database including both rock and paleomagnetic data. The goal of MagIC is to store all measurements and their derived properties for studies of paleomagnetic directions (inclination, declination) and their intensities, and for rock magnetic experiments (hysteresis, remanence, susceptibility, anisotropy). MagIC is hosted under EarthRef.org at http://earthref.org/MAGIC/ and has two search nodes, one for paleomagnetism and one for rock magnetism. These nodes provide basic search capabilities based on location, reference, methods applied, material type and geological age, while allowing the user to drill down from sites all the way to the measurements. At each stage, the data can be saved and, if the available data support it, the data can be visualized by plotting equal area plots, VGP location maps or typical Zijderveld, hysteresis, FORC, and various magnetization and remanence diagrams. All plots are made in SVG (scalable vector graphics) and thus can be saved and easily read into the user's favorite graphics programs without loss of resolution. User contributions to the MagIC database are critical to achieving a useful research tool. We have developed a standard data and metadata template (version 1.6) that can be used to format and upload all data at the time of publication in Earth Science journals. Software tools are provided to facilitate easy population of these templates within Microsoft Excel. These tools allow for the import/export of text files, and they provide advanced functionality to manage and edit the data and to perform various internal checks to high-grade the data and make them ready for uploading. The uploading is all done online using the MagIC Contribution Wizard at http://earthref.org/MAGIC/upload.htm, which takes only a few minutes to process a contribution of approximately 5,000 data records. After uploading, these standardized MagIC template files will be stored in the digital archives of EarthRef.org, from where they can be downloaded at all times. Finally, the contents of these template files will be automatically parsed into the online relational database, making the data available for online searches in the paleomagnetic and rock magnetic search nodes. The MagIC database contains all data transferred from the IAGA paleomagnetic poles database (GPMDB), the lava flow paleosecular variation database (PSVRL), the lake sediment database (SECVR) and the PINT database. In addition, a substantial amount of data compiled under the Time Averaged Field Investigations project is now included, plus a significant fraction of the data collected at SIO and the IRM. Ongoing additions of legacy data include ~40 papers from studies on the Hawaiian Islands, data compilations from archeomagnetic studies, and updates to the lake sediment dataset.
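A sketch of parsing a delimited, multi-table contribution file of the general kind described above is given below. The marker-line convention used here ("table: <name>" followed by a header row) is a simplification for illustration; the real MagIC template layout is not reproduced.

```python
# Sketch of parsing a delimited, multi-table template into per-table records.
# The "table: <name>" marker convention is a simplification for illustration.
import csv
import io
from collections import defaultdict

sample = """table: sites
site\tlat\tlon
S1\t19.4\t-155.3
S2\t19.5\t-155.2
table: samples
sample\tsite\tazimuth
S1-01\tS1\t270
"""

def parse_contribution(text):
    tables = defaultdict(list)
    current, header = None, None
    for row in csv.reader(io.StringIO(text), delimiter="\t"):
        if not row:
            continue
        if row[0].startswith("table:"):
            current, header = row[0].split(":", 1)[1].strip(), None
        elif header is None:
            header = row
        else:
            tables[current].append(dict(zip(header, row)))
    return tables

contribution = parse_contribution(sample)
print(len(contribution["sites"]), "site records,", len(contribution["samples"]), "sample record")
```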
A data model and database for high-resolution pathology analytical image informatics.
Wang, Fusheng; Kong, Jun; Cooper, Lee; Pan, Tony; Kurc, Tahsin; Chen, Wenjin; Sharma, Ashish; Niedermayr, Cristobal; Oh, Tae W; Brat, Daniel; Farris, Alton B; Foran, David J; Saltz, Joel
2011-01-01
The systematic analysis of imaged pathology specimens often results in a vast amount of morphological information at both the cellular and sub-cellular scales. While microscopy scanners and computerized analysis are capable of capturing and analyzing data rapidly, microscopy image data remain underutilized in research and clinical settings. One major obstacle which tends to reduce wider adoption of these new technologies throughout the clinical and scientific communities is the challenge of managing, querying, and integrating the vast amounts of data resulting from the analysis of large digital pathology datasets. This paper presents a data model, which addresses these challenges, and demonstrates its implementation in a relational database system. This paper describes a data model, referred to as Pathology Analytic Imaging Standards (PAIS), and a database implementation, which are designed to support the data management and query requirements of detailed characterization of micro-anatomic morphology through many interrelated analysis pipelines on whole-slide images and tissue microarrays (TMAs). (1) Development of a data model capable of efficiently representing and storing virtual slide related image, annotation, markup, and feature information. (2) Development of a database, based on the data model, capable of supporting queries for data retrieval based on analysis and image metadata, queries for comparison of results from different analyses, and spatial queries on segmented regions, features, and classified objects. The work described in this paper is motivated by the challenges associated with characterization of micro-scale features for comparative and correlative analyses involving whole-slide tissue images and TMAs. Technologies for digitizing tissues have advanced significantly in the past decade. Slide scanners are capable of producing high-magnification, high-resolution images from whole slides and TMAs within several minutes. Hence, it is becoming increasingly feasible for basic, clinical, and translational research studies to produce thousands of whole-slide images. Systematic analysis of these large datasets requires efficient data management support for representing and indexing results from hundreds of interrelated analyses generating very large volumes of quantifications, such as shape and texture, and of classifications of the quantified features. We have designed a data model and a database to address the data management requirements of detailed characterization of micro-anatomic morphology through many interrelated analysis pipelines. The data model represents virtual slide related image, annotation, markup and feature information. The database supports a wide range of metadata and spatial queries on images, annotations, markups, and features. We currently have three databases running on a Dell PowerEdge T410 server with the CentOS 5.5 Linux operating system. The database server is IBM DB2 Enterprise Edition 9.7.2. The set of databases consists of 1) a TMA database containing image analysis results from 4740 cases of breast cancer, with 641 MB storage size; 2) an algorithm validation database, which stores markups and annotations from two segmentation algorithms and two parameter sets on 18 selected slides, with 66 GB storage size; and 3) an in silico brain tumor study database comprising results from 307 TCGA slides, with 365 GB storage size. The latter two databases also contain human-generated annotations and markups for regions and nuclei.
Modeling and managing pathology image analysis results in a database provides immediate benefits for the value and usability of data in a research study. The database provides powerful query capabilities, which are otherwise difficult or cumbersome to support by other approaches such as programming languages. Standardized, semantically annotated data representation and interfaces also make it possible to share image data and analysis results more efficiently.
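As an illustration of the kind of spatial query such a database supports, the sketch below finds segmented objects whose bounding boxes fall inside a query region of a given slide. The schema is a hypothetical simplification of the PAIS model, shown with SQLite rather than the DB2 deployment described above.

```python
# Illustrative spatial query: find segmented nuclei whose bounding boxes fall
# inside a query region of a slide. Hypothetical simplified schema; SQLite
# stands in for the DB2 deployment described above.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE markup (
    slide_id TEXT, object_id INTEGER, algorithm TEXT,
    min_x REAL, min_y REAL, max_x REAL, max_y REAL, area REAL)""")
conn.executemany("INSERT INTO markup VALUES (?,?,?,?,?,?,?,?)", [
    ("TCGA-01", 1, "seg-v1", 100, 100, 140, 150, 900.0),
    ("TCGA-01", 2, "seg-v1", 5000, 5200, 5060, 5250, 1800.0),
])

query_region = (0, 0, 1024, 1024)      # (min_x, min_y, max_x, max_y) in slide pixels
rows = conn.execute("""
    SELECT object_id, area FROM markup
    WHERE slide_id = ? AND algorithm = ?
      AND min_x >= ? AND min_y >= ? AND max_x <= ? AND max_y <= ?""",
    ("TCGA-01", "seg-v1", *query_region)).fetchall()
print(rows)   # [(1, 900.0)]
```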
DeMartini, Wendy B; Ichikawa, Laura; Yankaskas, Bonnie C; Buist, Diana; Kerlikowske, Karla; Geller, Berta; Onega, Tracy; Rosenberg, Robert D; Lehman, Constance D
2010-11-01
MRI is increasingly used for the detection of breast carcinoma. Little is known about breast MRI techniques among community practice facilities. The aim of this study was to evaluate equipment and acquisition techniques used by community facilities across the United States, including compliance with minimum standards by the ACRIN® 6667 Trial and the European Society of Breast Imaging. Breast Cancer Surveillance Consortium facilities performing breast MRI were identified and queried by survey regarding breast MRI equipment and technical parameters. Variables included scanner field strength, coil type, acquisition coverage, slice thickness, and the timing of the initial postcontrast sequence. Results were tallied and percentages of facilities meeting ACRIN® and European Society of Breast Imaging standards were calculated. From 23 facilities performing breast MRI, results were obtained from 14 (61%) facilities with 16 MRI scanners reporting 18 imaging parameters. Compliance with equipment recommendations of ≥1.5-T field strength was 94% and of a dedicated breast coil was 100%. Eighty-three percent of acquisitions used bilateral postcontrast techniques, and 78% used slice thickness ≤3 mm. The timing of initial postcontrast sequences ranged from 58 seconds to 8 minutes 30 seconds, with 63% meeting recommendations for completion within 4 minutes. Nearly all surveyed facilities met ACRIN and European Society of Breast Imaging standards for breast MRI equipment. The majority met standards for acquisition parameters, although techniques varied, in particular for the timing of initial postcontrast imaging. Further guidelines by the ACR Breast MRI Accreditation Program will be of importance in facilitating standardized and high-quality breast MRI. Copyright © 2010 American College of Radiology. Published by Elsevier Inc. All rights reserved.
JPEG2000 and dissemination of cultural heritage over the Internet.
Politou, Eugenia A; Pavlidis, George P; Chamzas, Christodoulos
2004-03-01
By applying the latest technologies in image compression for managing the storage of massive image data within cultural heritage databases, and by exploiting the universality of the Internet, we are now able not only to effectively digitize, record, and preserve cultural heritage, but also to promote its dissemination. In this work we present an application of the latest image compression standard, JPEG2000, to managing and browsing image databases, focusing on the image transmission aspect rather than database management and indexing. We combine the technologies of JPEG2000 image compression with client-server socket connections and a client browser plug-in, so as to provide an all-in-one package for remote browsing of JPEG2000-compressed image databases, suitable for the effective dissemination of cultural heritage.
Interhospital network system using the worldwide web and the common gateway interface.
Oka, A; Harima, Y; Nakano, Y; Tanaka, Y; Watanabe, A; Kihara, H; Sawada, S
1999-05-01
We constructed an interhospital network system using the World Wide Web (WWW) and the Common Gateway Interface (CGI). Original clinical images are digitized and stored as a database for educational and research purposes. Personal computers (PCs) are available for data processing and browsing. Our system is simple, as digitized images are stored on a Unix server machine. Images of important and interesting clinical cases are selected and registered into the image database using CGI. The main image format is 8- or 12-bit Joint Photographic Experts Group (JPEG). Original clinical images are finally stored on CD-ROM using a CD recorder. The image viewer can browse all of the images for one case at once as thumbnail pictures, and image quality can be selected depending on the user's purpose. Using the network system, clinical images of interesting cases can be rapidly transmitted to and discussed with other related hospitals. Data transmission from related hospitals takes 1 to 2 minutes per 500 Kbytes of data; more distant hospitals (e.g., Rakusai Hospital, Kyoto) take about 1 minute more. The mean number of accesses to our image database in a recent 3-month period was 470. There are a total of about 200 cases in our image database, acquired over the past 2 years. Our system is useful for communication and image handling between hospitals, and we describe the elements of our system and image database.
Mass-storage management for distributed image/video archives
NASA Astrophysics Data System (ADS)
Franchi, Santina; Guarda, Roberto; Prampolini, Franco
1993-04-01
The realization of an image/video database requires a specific design for both the database structures and the mass storage management. This issue was addressed in the project of the digital image/video database system designed at the IBM SEMEA Scientific & Technical Solution Center. Proper database structures have been defined to catalog image/video coding techniques with their related parameters, and the description of image/video contents. User workstations and servers are distributed along a local area network. Image/video files are not managed directly by the DBMS server; because of their large size, they are stored outside the database on network devices. The database contains the pointers to the image/video files and the description of the storage devices. The system can use different kinds of storage media, organized in a hierarchical structure. Three levels of functions are available to manage the storage resources. The functions of the lower level provide media management: they allow devices to be cataloged and device status and network location to be modified. The medium level manages image/video files on a physical basis; it handles file migration between high-capacity media and low-access-time media. The functions of the upper level work on image/video files on a logical basis, as they archive, move, and copy image/video data selected by user-defined queries. These functions are used to support the implementation of a storage management strategy. The database information about the characteristics of both storage devices and coding techniques is used by the third-level functions to fit delivery/visualization requirements and to reduce archiving costs.
The RECONS 25 Parsec Database: Who Are the Stars? Where Are the Planets?
NASA Astrophysics Data System (ADS)
Henry, Todd J.; Dieterich, S.; Hosey, A. D.; Ianna, P. A.; Jao, W.; Koerner, D. W.; Riedel, A. R.; Slatten, K. J.; Subasavage, J.; Winters, J. G.; RECONS
2013-01-01
Since 1994, RECONS (www.recons.org, REsearch Consortium On Nearby Stars) has been discovering and characterizing the Sun's neighbors. Nearby stars provide increased fluxes, larger astrometric perturbations, and higher probabilities for eventual resolution and detailed study of planets than similar stars at larger distances. Examination of the nearby stellar sample will reveal the prevalence and structure of solar systems, as well as the balance of Jovian and terrestrial worlds. These are the stars and planets that will ultimately be key in our search for life elsewhere. Here we outline what we know ... and what we don't know ... about the population of the nearest stars. We have expanded the original RECONS 10 pc horizon to 25 pc and are constructing a database that currently includes 2124 systems. By using the CTIO 0.9m telescope --- now operated by RECONS as part of the SMARTS Consortium --- we have published the first accurate parallaxes for 149 systems within 25 pc and currently have an additional 213 unpublished systems to add. Still, we predict that roughly two-thirds of the systems within 25 pc do not yet have accurate distance measurements. In addition to revealing the Sun's stellar neighbors, we have been using astrometric techniques to search for massive planets orbiting roughly 200 of the nearest red dwarfs. Unlike radial velocity searches, our astrometric effort is most sensitive to Jovian planets in Jovian orbits, i.e. those that span decades. We have now been monitoring stars for up to 13 years with positional accuracies of a few milliarcseconds per night. We have detected stellar and brown dwarf companions, as well as enigmatic, unseen secondaries, but have yet to reveal a single super-Jupiter ... a somewhat surprising result. In total, only 3% of stars within 25 pc are known to possess planets. It seems clear that we have a great deal of work to do to map out the stars, planets, and perhaps life in the solar neighborhood. This effort is supported by the NSF through grant AST-0908402 and via observations made possible by the SMARTS Consortium.
Yanagita, Satoshi; Imahana, Masato; Suwa, Kazuaki; Sugimura, Hitomi; Nishiki, Masayuki
2016-01-01
The Japanese Society of Radiological Technology (JSRT) standard digital image database contains many useful cases of chest X-ray images and has been used in many state-of-the-art studies. However, the pixel values of all the images were simply digitized as relative density values using a scanned-film digitizer. As a result, the pixel values are completely different from the standardized display-system input value of Digital Imaging and Communications in Medicine (DICOM), called the presentation value (P-value), which maintains visual consistency when images are observed on displays of different luminance. Therefore, we converted all the images of the JSRT standard digital image database to DICOM format and then converted the pixel values to P-values using a program developed in-house. Consequently, the JSRT standard digital image database has been modified so that the visual consistency of images is maintained across displays of different luminance.
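A heavily simplified sketch of the kind of pixel-value conversion described above. The actual conversion to P-values follows the DICOM Grayscale Standard Display Function (GSDF, PS3.14); here the GSDF is represented by an assumed luminance-per-JND lookup table, so the code only illustrates the overall density-to-luminance-to-P-value chain, not the authors' program.

```python
# Simplified, assumption-laden sketch of converting film-density pixel values
# to presentation values (P-values) via an assumed GSDF lookup table.
import numpy as np

def density_to_pvalue(pixels, gsdf_luminance, max_density=3.0, l_max=400.0):
    """pixels: relative optical density values from the film digitizer.
    gsdf_luminance: monotonically increasing luminance (cd/m^2) per JND index."""
    # Film density -> luminance (higher density = darker)
    luminance = l_max * 10.0 ** (-pixels.astype(float) * max_density / pixels.max())
    # Luminance -> JND index via the (assumed) GSDF table
    jnd = np.interp(luminance, gsdf_luminance, np.arange(gsdf_luminance.size))
    # Scale JND indices to an 8-bit presentation-value range
    pvals = np.round(255 * (jnd - jnd.min()) / (jnd.max() - jnd.min()))
    return pvals.astype(np.uint8)
```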
Creating a FIESTA (Framework for Integrated Earth Science and Technology Applications) with MagIC
NASA Astrophysics Data System (ADS)
Minnett, R.; Koppers, A. A. P.; Jarboe, N.; Tauxe, L.; Constable, C.
2017-12-01
The Magnetics Information Consortium (https://earthref.org/MagIC) has recently developed a containerized web application to considerably reduce the friction in contributing, exploring and combining valuable and complex datasets for the paleo-, geo- and rock magnetic scientific community. The data produced in this scientific domain are inherently hierarchical, and the community's evolving approaches to this scientific workflow, from sampling to taking measurements to multiple levels of interpretation, require a large and flexible data model to adequately annotate the results and ensure reproducibility. Historically, contributing such detail in a consistent format has been prohibitively time-consuming and often resulted in only publishing the highly derived interpretations. The new open-source (https://github.com/earthref/MagIC) application provides a flexible upload tool integrated with the data model to easily create a validated contribution and a powerful search interface for discovering datasets and combining them to enable transformative science. MagIC is hosted at EarthRef.org along with several interdisciplinary geoscience databases. A FIESTA (Framework for Integrated Earth Science and Technology Applications) is being created by generalizing MagIC's web application for reuse in other domains. The application relies on a single configuration document that describes the routing, data model, component settings and external services integrations. The container hosts an isomorphic Meteor JavaScript application, MongoDB database and ElasticSearch search engine. Multiple containers can be configured as microservices to serve portions of the application or rely on externally hosted MongoDB, ElasticSearch, or third-party services to efficiently scale computational demands. FIESTA is particularly well suited for many Earth Science disciplines with its flexible data model, mapping, account management, upload tool to private workspaces, reference metadata, image galleries, full text searches and detailed filters. EarthRef's Seamount Catalog of bathymetry and morphology data, EarthRef's Geochemical Earth Reference Model (GERM) databases, and Oregon State University's Marine and Geology Repository (http://osu-mgr.org) will benefit from custom adaptations of FIESTA.
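The "single configuration document" mentioned above might conceptually resemble the following Python dictionary; the keys and values are purely illustrative and do not reflect the actual FIESTA schema.

```python
# Hypothetical sketch of a FIESTA-style configuration document
# (illustrative keys only; not the real schema).
fiesta_config = {
    "routes": {
        "/": "Home",
        "/search": "SearchInterface",
        "/upload": "UploadTool",
    },
    "data_model": {
        "tables": ["contribution", "sites", "samples", "specimens", "measurements"],
        "validation": "strict",
    },
    "components": {
        "private_workspaces": True,
        "image_galleries": True,
    },
    "services": {
        "database": "mongodb://localhost:27017/magic",   # assumed endpoint
        "search": "http://localhost:9200",               # assumed ElasticSearch URL
    },
}
```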
WE-E-BRB-11: RIVIEW: A Web-Based Viewer for Radiotherapy.
Apte, A; Wang, Y; Deasy, J
2012-06-01
Collaborations involving radiotherapy data collection, such as the recently proposed international radiogenomics consortium, require robust, web-based tools to facilitate reviewing treatment planning information. We present the architecture and prototype characteristics of a web-based radiotherapy viewer. The web-based environment developed in this work consists of the following components: 1) Import of DICOM/RTOG data: CERR was leveraged to import DICOM/RTOG data and to convert it to database-friendly RT objects. 2) Extraction and storage of RT objects: the scan and dose distributions were stored as .png files per slice and view plane; the file locations were written to the MySQL database; structure contours and DVH curves were written to the database as numeric data. 3) Web interfaces to query, retrieve and visualize the RT objects: the web application was developed using HTML5 and Ruby on Rails (RoR) technology following the MVC philosophy. The open-source ImageMagick library was utilized to overlay scan, dose and structures. The application allows users to (i) QA the treatment plans associated with a study, (ii) query and retrieve patients matching an anonymized ID and study, (iii) review up to 4 plans simultaneously in 4 window panes, and (iv) plot DVH curves for the selected structures and dose distributions. A subset of data for lung cancer patients was used to prototype the system. Five user accounts were created with access to this study. The scans, doses, structures and DVHs for 10 patients were made available via the web application. A web-based system to facilitate QA and to support query, retrieval and visualization of RT data was prototyped. The RIVIEW system was developed using open-source and free technology such as MySQL and RoR. We plan to extend the RIVIEW system further to be useful in clinical trial data collection, outcomes research, cohort plan review and evaluation. © 2012 American Association of Physicists in Medicine.
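A small sketch of the storage scheme described in component 2, using SQLite as a stand-in for the MySQL server: per-slice PNG locations are recorded as rows, and DVH curves are stored as numeric points. Table and column names are assumptions, not the RIVIEW schema.

```python
# Illustrative sketch of recording slice PNG locations and DVH points.
import sqlite3

con = sqlite3.connect("riview_demo.db")
con.executescript("""
CREATE TABLE IF NOT EXISTS dose_slices (
    patient_id TEXT, view_plane TEXT, slice_index INTEGER, png_path TEXT);
CREATE TABLE IF NOT EXISTS dvh_points (
    patient_id TEXT, structure TEXT, dose_gy REAL, volume_pct REAL);
""")

# Register the PNG written for one transverse dose slice
con.execute("INSERT INTO dose_slices VALUES (?, ?, ?, ?)",
            ("pt001", "transverse", 42, "/data/pt001/dose/tra_042.png"))

# Store a few points of a DVH curve for one structure
for dose, vol in [(0.0, 100.0), (20.0, 85.5), (60.0, 12.3)]:
    con.execute("INSERT INTO dvh_points VALUES (?, ?, ?, ?)",
                ("pt001", "spinal_cord", dose, vol))
con.commit()
```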
Multi-Sensor Scene Synthesis and Analysis
1981-09-01
Fragmentary table-of-contents text; recoverable section titles include: Quad Trees for Image Representation and Processing; Databases (Definitions and Basic Concepts); Use of Databases in Hierarchical Scene Analysis; Use of Relational Tables; Multisensor Image Database Systems (MIDAS); Relational Database System for Pictures; Relational Pictorial Database.
Accrediting osteopathic postdoctoral training institutions.
Duffy, Thomas
2011-04-01
All postdoctoral training programs approved by the American Osteopathic Association are required to be part of an Osteopathic Postdoctoral Training Institution (OPTI) consortium. The author reviews recent activities related to OPTI operations, including the transfer of the OPTI Annual Report to an electronic database, revisions to the OPTI Accreditation Handbook, training at the 2010 OPTI Workshop, and new requirements of the American Osteopathic Association Commission on Osteopathic College Accreditation. The author also reviews the OPTI accreditation process, cites common commendations and deficiencies for reviews completed from 2008 to 2010, and provides an overview of plans for future improvements.
MODBASE, a database of annotated comparative protein structure models
Pieper, Ursula; Eswar, Narayanan; Stuart, Ashley C.; Ilyin, Valentin A.; Sali, Andrej
2002-01-01
MODBASE (http://guitar.rockefeller.edu/modbase) is a relational database of annotated comparative protein structure models for all available protein sequences matched to at least one known protein structure. The models are calculated by MODPIPE, an automated modeling pipeline that relies on PSI-BLAST, IMPALA and MODELLER. MODBASE uses the MySQL relational database management system for flexible and efficient querying, and the MODVIEW Netscape plugin for viewing and manipulating multiple sequences and structures. It is updated regularly to reflect the growth of the protein sequence and structure databases, as well as improvements in the software for calculating the models. For ease of access, MODBASE is organized into different datasets. The largest dataset contains models for domains in 304 517 out of 539 171 unique protein sequences in the complete TrEMBL database (23 March 2001); only models based on significant alignments (PSI-BLAST E-value < 10^-4) and models assessed to have the correct fold are included. Other datasets include models for target selection and structure-based annotation by the New York Structural Genomics Research Consortium, models for prediction of genes in the Drosophila melanogaster genome, models for structure determination of several ribosomal particles and models calculated by the MODWEB comparative modeling web server. PMID:11752309
Application of furniture images selection based on neural network
NASA Astrophysics Data System (ADS)
Wang, Yong; Gao, Wenwen; Wang, Ying
2018-05-01
In the construction of a 2-million-image furniture database, the low quality of the collected data is a key problem. A combination of a convolutional neural network (CNN) and a metric learning algorithm is proposed, which makes it possible to quickly and accurately remove duplicate and irrelevant samples from the furniture image database. This addresses the problems of complex image screening procedures, low accuracy, and long processing times. After the data quality is improved, the deep learning algorithm achieves excellent image matching performance in actual furniture retrieval applications.
Enhancing AFLOW Visualization using Jmol
NASA Astrophysics Data System (ADS)
Lanasa, Jacob; New, Elizabeth; Stefek, Patrik; Honaker, Brigette; Hanson, Robert; Aflow Collaboration
The AFLOW library is a database of theoretical solid-state structures and calculated properties created using high-throughput ab initio calculations. Jmol is a Java-based program capable of visualizing and analyzing complex molecular structures and energy landscapes. In collaboration with the AFLOW consortium, our goal is the enhancement of the AFLOWLIB database through the extension of Jmol's capabilities in the area of materials science. Modifications made to Jmol include the ability to read and visualize AFLOW binary alloy data files, the ability to extract information from these files using Jmol scripting macros that can be utilized in the creation of interactive web-based convex hull graphs, the capability to identify and classify local atomic environments by symmetry, and the ability to search one or more related crystal structures for atomic environments using a novel extension of inorganic polyhedron-based SMILES strings.
Image query and indexing for digital x rays
NASA Astrophysics Data System (ADS)
Long, L. Rodney; Thoma, George R.
1998-12-01
The web-based medical information retrieval system (WebMIRS) allows Internet access to databases containing 17,000 digitized x-ray spine images and associated text data from the National Health and Nutrition Examination Surveys (NHANES). WebMIRS allows SQL queries of the text and viewing of the returned text records and images using a standard browser. We are now working (1) to determine the utility of data derived directly from the images in our databases, and (2) to investigate the feasibility of computer-assisted or automated indexing of the images to support retrieval of images of interest to biomedical researchers in the field of osteoarthritis. To build an initial database based on image data, we are manually segmenting a subset of the vertebrae using techniques from vertebral morphometry. From this, we will derive vertebral features and add them to the database. This image-derived data will enhance the user's data access capability by enabling combined SQL/image-content queries.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowers, M; Robertson, S; Moore, J
Purpose: Late toxicity from radiation to critical structures limits the possible dose in radiation therapy. Perfectly conformal treatment of a target is not realizable, so the clinician must accept a certain level of collateral radiation to nearby OARs. But how much? General guidelines for healthy-tissue sparing guide RT treatment planning, but are these guidelines good enough to create the optimal plan given the individual patient anatomy? We propose a means to evaluate the planned dose level to an OAR using a multi-institutional data store of previously treated patients, so a clinician might reconsider planning objectives. Methods: The tool is built on Oncospace, a federated data-store system, which consists of planning data import, web-based analysis tools, and a database containing: 1) DVHs: dose by percent volume delivered to each ROI for each patient previously treated and included in the database; 2) Overlap Volume Histograms (OVHs): an anatomical measure defined as the percent volume of an ROI within a given distance of the target structures. Clinicians know which OARs are important to spare. For any ROI, Oncospace knows for which patients' anatomy that ROI was harder to plan in the past (the OVH is less). The planned dose should be close to the least dose of previous patients. The tool displays the dose those OARs were subjected to, and the clinician can make a determination about the planning objectives used. Multiple institutions contribute to the Oncospace Consortium, and their DVH and OVH data are combined and color coded in the output. Results: The Oncospace website provides a plan quality display tool which identifies harder-to-treat patients and graphically displays the dose delivered to them for comparison with the proposed plan. Conclusion: The Oncospace Consortium manages a data store of previously treated patients which can be used for quality checking new plans. Grant funding by Elekta.
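The OVH defined above (percent volume of an ROI within a given distance of the target) can be computed from binary masks with a distance transform, as in the sketch below. It ignores refinements such as signed distances inside the target and anisotropic voxel spacing, and is illustrative rather than the Oncospace implementation.

```python
# Illustrative Overlap Volume Histogram (OVH) computation from 3-D masks.
import numpy as np
from scipy.ndimage import distance_transform_edt

def overlap_volume_histogram(oar_mask, target_mask, distances_mm, voxel_mm=1.0):
    """oar_mask, target_mask: boolean 3-D arrays; distances_mm: iterable of thresholds."""
    # Distance (mm) from every voxel to the nearest target voxel
    dist_to_target = distance_transform_edt(~target_mask, sampling=voxel_mm)
    oar_distances = dist_to_target[oar_mask]
    total = oar_distances.size
    return [100.0 * np.count_nonzero(oar_distances <= d) / total
            for d in distances_mm]

# Example: fraction of a cord mask within 0, 5, ..., 30 mm of a PTV mask
# ovh = overlap_volume_histogram(cord_mask, ptv_mask, range(0, 35, 5), voxel_mm=2.0)
```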
Javid, Patrick J; Oron, Assaf P; Duggan, Christopher; Squires, Robert H; Horslen, Simon P
2017-09-05
The advent of regional multidisciplinary intestinal rehabilitation programs has been associated with improved survival in pediatric intestinal failure. Yet, the optimal timing of referral for intestinal rehabilitation remains unknown. We hypothesized that the degree of intestinal failure-associated liver disease (IFALD) at initiation of intestinal rehabilitation would be associated with overall outcome. The multicenter, retrospective Pediatric Intestinal Failure Consortium (PIFCon) database was used to identify all subjects with baseline bilirubin data. Conjugated bilirubin (CBili) was used as a marker for IFALD, and we stratified baseline bilirubin values as CBili<2 mg/dL, CBili 2-4 mg/dL, and CBili>4 mg/dL. The association between baseline CBili and mortality was examined using Cox proportional hazards regression. Of 272 subjects in the database, 191 (70%) children had baseline bilirubin data collected. 38% and 28% of patients had CBili >4 mg/dL and CBili <2 mg/dL, respectively, at baseline. All-cause mortality was 23%. On univariate analysis, mortality was associated with CBili 2-4 mg/dL, CBili >4 mg/dL, prematurity, race, and small bowel atresia. On regression analysis controlling for age, prematurity, and diagnosis, the risk of mortality was increased by 3-fold for baseline CBili 2-4 mg/dL (HR 3.25 [1.07-9.92], p=0.04) and 4-fold for baseline CBili >4 mg/dL (HR 4.24 [1.51-11.92], p=0.006). On secondary analysis, CBili >4 mg/dL at baseline was associated with a lower chance of attaining enteral autonomy. In children with intestinal failure treated at intestinal rehabilitation programs, more advanced IFALD at referral is associated with increased mortality and decreased prospect of attaining enteral autonomy. Early referral of children with intestinal failure to intestinal rehabilitation programs should be strongly encouraged. Treatment Study, Level III. Copyright © 2017 Elsevier Inc. All rights reserved.
Toward a standard reference database for computer-aided mammography
NASA Astrophysics Data System (ADS)
Oliveira, Júlia E. E.; Gueld, Mark O.; de A. Araújo, Arnaldo; Ott, Bastian; Deserno, Thomas M.
2008-03-01
Because of the lack of mammography databases with a large number of annotated images and identified characteristics such as pathology, type of breast tissue, and abnormality, the development of robust systems for computer-aided diagnosis is hampered. Integrated into the Image Retrieval in Medical Applications (IRMA) project, we present an available mammography database developed from the union of: the Mammographic Image Analysis Society Digital Mammogram Database (MIAS), the Digital Database for Screening Mammography (DDSM), the Lawrence Livermore National Laboratory (LLNL) collection, and routine images from the Rheinisch-Westfälische Technische Hochschule (RWTH) Aachen. Using the IRMA code, standardized coding of tissue type, tumor staging, and lesion description was developed according to the American College of Radiology (ACR) tissue codes and the ACR breast imaging reporting and data system (BI-RADS). The import was done automatically using scripts for image download, file format conversion, and browsing of file names, web pages, and information files. Disregarding the resolution, this resulted in a total of 10,509 reference images, of which 6,767 images are associated with an IRMA contour information feature file. In accordance with the respective license agreements, the database will be made freely available for research purposes and may be used for image-based evaluation campaigns such as the Cross Language Evaluation Forum (CLEF). We have also shown that it can be extended easily with further cases imported from a picture archiving and communication system (PACS).
Perez-Riverol, Yasset; Alpi, Emanuele; Wang, Rui; Hermjakob, Henning; Vizcaíno, Juan Antonio
2015-01-01
Compared to other data-intensive disciplines such as genomics, public deposition and storage of MS-based proteomics data are still less developed due to, among other reasons, the inherent complexity of the data and the variety of data types and experimental workflows. In order to address this need, several public repositories for MS proteomics experiments have been developed, each with different purposes in mind. The most established resources are the Global Proteome Machine Database (GPMDB), PeptideAtlas, and the PRIDE database. Additionally, there are other useful (in many cases recently developed) resources such as ProteomicsDB, Mass Spectrometry Interactive Virtual Environment (MassIVE), Chorus, MaxQB, PeptideAtlas SRM Experiment Library (PASSEL), Model Organism Protein Expression Database (MOPED), and the Human Proteinpedia. In addition, the ProteomeXchange consortium has been recently developed to enable better integration of public repositories and the coordinated sharing of proteomics information, maximizing its benefit to the scientific community. Here, we will review each of the major proteomics resources independently and some tools that enable the integration, mining and reuse of the data. We will also discuss some of the major challenges and current pitfalls in the integration and sharing of the data. PMID:25158685
Construction of an image database for newspaper articles using CTS
NASA Astrophysics Data System (ADS)
Kamio, Tatsuo
Nihon Keizai Shimbun, Inc. developed a system that builds an image database of newspaper articles automatically by using CTS (Computer Typesetting System). In addition to the articles and headlines entered into CTS, it reproduces the images of elements such as photographs and graphs for each article according to their position on the page; in effect, the computer itself clips the articles out of the newspaper. The image database is accumulated on magnetic and optical files and is output to users' facsimile machines. With the diffusion of CTS, the number of newspaper companies beginning to build article databases is increasing rapidly, and this system is the first attempt to build such a database automatically. This paper describes the CTS facilities that support the system and gives an outline of it.
Molecular Imaging and Contrast Agent Database (MICAD): evolution and progress.
Chopra, Arvind; Shan, Liang; Eckelman, W C; Leung, Kam; Latterner, Martin; Bryant, Stephen H; Menkens, Anne
2012-02-01
The purpose of writing this review is to showcase the Molecular Imaging and Contrast Agent Database (MICAD; www.micad.nlm.nih.gov ) to students, researchers, and clinical investigators interested in the different aspects of molecular imaging. This database provides freely accessible, current, online scientific information regarding molecular imaging (MI) probes and contrast agents (CA) used for positron emission tomography, single-photon emission computed tomography, magnetic resonance imaging, X-ray/computed tomography, optical imaging and ultrasound imaging. Detailed information on >1,000 agents in MICAD is provided in a chapter format and can be accessed through PubMed. Lists containing >4,250 unique MI probes and CAs published in peer-reviewed journals and agents approved by the United States Food and Drug Administration as well as a comma separated values file summarizing all chapters in the database can be downloaded from the MICAD homepage. Users can search for agents in MICAD on the basis of imaging modality, source of signal/contrast, agent or target category, pre-clinical or clinical studies, and text words. Chapters in MICAD describe the chemical characteristics (structures linked to PubChem), the in vitro and in vivo activities, and other relevant information regarding an imaging agent. All references in the chapters have links to PubMed. A Supplemental Information Section in each chapter is available to share unpublished information regarding an agent. A Guest Author Program is available to facilitate rapid expansion of the database. Members of the imaging community registered with MICAD periodically receive an e-mail announcement (eAnnouncement) that lists new chapters uploaded to the database. Users of MICAD are encouraged to provide feedback, comments, or suggestions for further improvement of the database by writing to the editors at micad@nlm.nih.gov.
Molecular Imaging and Contrast Agent Database (MICAD): Evolution and Progress
Chopra, Arvind; Shan, Liang; Eckelman, W. C.; Leung, Kam; Latterner, Martin; Bryant, Stephen H.; Menkens, Anne
2011-01-01
The purpose of writing this review is to showcase the Molecular Imaging and Contrast Agent Database (MICAD; www.micad.nlm.nih.gov) to students, researchers and clinical investigators interested in the different aspects of molecular imaging. This database provides freely accessible, current, online scientific information regarding molecular imaging (MI) probes and contrast agents (CA) used for positron emission tomography, single-photon emission computed tomography, magnetic resonance imaging, x-ray/computed tomography, optical imaging and ultrasound imaging. Detailed information on >1000 agents in MICAD is provided in a chapter format and can be accessed through PubMed. Lists containing >4250 unique MI probes and CAs published in peer-reviewed journals and agents approved by the United States Food and Drug Administration (FDA) as well as a CSV file summarizing all chapters in the database can be downloaded from the MICAD homepage. Users can search for agents in MICAD on the basis of imaging modality, source of signal/contrast, agent or target category, preclinical or clinical studies, and text words. Chapters in MICAD describe the chemical characteristics (structures linked to PubChem), the in vitro and in vivo activities and other relevant information regarding an imaging agent. All references in the chapters have links to PubMed. A Supplemental Information Section in each chapter is available to share unpublished information regarding an agent. A Guest Author Program is available to facilitate rapid expansion of the database. Members of the imaging community registered with MICAD periodically receive an e-mail announcement (eAnnouncement) that lists new chapters uploaded to the database. Users of MICAD are encouraged to provide feedback, comments or suggestions for further improvement of the database by writing to the editors at: micad@nlm.nih.gov PMID:21989943
Structured Forms Reference Set of Binary Images (SFRS)
National Institute of Standards and Technology Data Gateway
NIST Structured Forms Reference Set of Binary Images (SFRS) (Web, free access) The NIST Structured Forms Database (Special Database 2) consists of 5,590 pages of binary, black-and-white images of synthesized documents. The documents in this database are 12 different tax forms from the IRS 1040 Package X for the year 1988.
The UBIRIS.v2: a database of visible wavelength iris images captured on-the-move and at-a-distance.
Proença, Hugo; Filipe, Sílvio; Santos, Ricardo; Oliveira, João; Alexandre, Luís A
2010-08-01
The iris is regarded as one of the most useful traits for biometric recognition, and the dissemination of nationwide iris-based recognition systems is imminent. However, currently deployed systems rely on heavy imaging constraints to capture near-infrared images with enough quality. Also, all of the publicly available iris image databases contain data corresponding to such imaging constraints and are therefore exclusively suitable for evaluating methods designed to operate in these types of environments. The main purpose of this paper is to announce the availability of the UBIRIS.v2 database, a multisession iris image database which singularly contains data captured in the visible wavelength, at a distance (between four and eight meters) and on the move. This database is freely available for researchers concerned with visible-wavelength iris recognition and will be useful in assessing the feasibility and specifying the constraints of this type of biometric recognition.
Enriching public descriptions of marine phages using the Genomic Standards Consortium MIGS standard
Duhaime, Melissa Beth; Kottmann, Renzo; Field, Dawn; Glöckner, Frank Oliver
2011-01-01
In any sequencing project, the possible depth of comparative analysis is determined largely by the amount and quality of the accompanying contextual data. The structure, content, and storage of this contextual data should be standardized to ensure consistent coverage of all sequenced entities and facilitate comparisons. The Genomic Standards Consortium (GSC) has developed the "Minimum Information about Genome/Metagenome Sequences (MIGS/MIMS)" checklist for the description of genomes, and here we annotate all 30 publicly available marine bacteriophage sequences to the MIGS standard. These annotations build on existing International Nucleotide Sequence Database Collaboration (INSDC) records, and confirm, as expected, that current submissions lack most MIGS fields. MIGS fields were manually curated from the literature and placed in XML format as specified by the Genomic Contextual Data Markup Language (GCDML). These "machine-readable" reports were then analyzed to highlight patterns describing this collection of genomes. Completed reports are provided in GCDML. This work represents one step towards the annotation of our complete collection of genome sequences and shows the utility of capturing richer metadata along with raw sequences. PMID:21677864
Hoffman, James M; Dunnenberger, Henry M; Kevin Hicks, J; Caudle, Kelly E; Whirl Carrillo, Michelle; Freimuth, Robert R; Williams, Marc S; Klein, Teri E; Peterson, Josh F
2016-07-01
To move beyond a select few genes/drugs, the successful adoption of pharmacogenomics into routine clinical care requires a curated and machine-readable database of pharmacogenomic knowledge suitable for use in an electronic health record (EHR) with clinical decision support (CDS). Recognizing that EHR vendors do not yet provide a standard set of CDS functions for pharmacogenetics, the Clinical Pharmacogenetics Implementation Consortium (CPIC) Informatics Working Group is developing and systematically incorporating a set of EHR-agnostic implementation resources into all CPIC guidelines. These resources illustrate how to integrate pharmacogenomic test results in clinical information systems with CDS to facilitate the use of patient genomic data at the point of care. Based on our collective experience creating existing CPIC resources and implementing pharmacogenomics at our practice sites, we outline principles to define the key features of future knowledge bases and discuss the importance of these knowledge resources for pharmacogenomics and ultimately precision medicine. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Convolutional Neural Network-Based Finger-Vein Recognition Using NIR Image Sensors
Hong, Hyung Gil; Lee, Min Beom; Park, Kang Ryoung
2017-01-01
Conventional finger-vein recognition systems perform recognition based on the finger-vein lines extracted from the input images or image enhancement, and texture feature extraction from the finger-vein images. In these cases, however, the inaccurate detection of finger-vein lines lowers the recognition accuracy. In the case of texture feature extraction, the developer must experimentally decide on a form of the optimal filter for extraction considering the characteristics of the image database. To address this problem, this research proposes a finger-vein recognition method that is robust to various database types and environmental changes based on the convolutional neural network (CNN). In the experiments using the two finger-vein databases constructed in this research and the SDUMLA-HMT finger-vein database, which is an open database, the method proposed in this research showed a better performance compared to the conventional methods. PMID:28587269
Convolutional Neural Network-Based Finger-Vein Recognition Using NIR Image Sensors.
Hong, Hyung Gil; Lee, Min Beom; Park, Kang Ryoung
2017-06-06
Conventional finger-vein recognition systems perform recognition based on the finger-vein lines extracted from the input images or image enhancement, and texture feature extraction from the finger-vein images. In these cases, however, the inaccurate detection of finger-vein lines lowers the recognition accuracy. In the case of texture feature extraction, the developer must experimentally decide on a form of the optimal filter for extraction considering the characteristics of the image database. To address this problem, this research proposes a finger-vein recognition method that is robust to various database types and environmental changes based on the convolutional neural network (CNN). In the experiments using the two finger-vein databases constructed in this research and the SDUMLA-HMT finger-vein database, which is an open database, the method proposed in this research showed a better performance compared to the conventional methods.
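For illustration only, a minimal CNN of the general kind used for such finger-vein classification might look like the following PyTorch sketch; the layer sizes, 64x128 input resolution and class count are assumptions, not the architecture evaluated in the paper.

```python
# Minimal illustrative CNN classifier for single-channel finger-vein images.
import torch
import torch.nn as nn

class FingerVeinCNN(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 64x128 -> 32x64
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x64 -> 16x32
        )
        self.classifier = nn.Linear(32 * 16 * 32, num_classes)

    def forward(self, x):                          # x: (N, 1, 64, 128)
        x = self.features(x)
        return self.classifier(x.flatten(1))

# model = FingerVeinCNN(num_classes=106)
# logits = model(torch.randn(8, 1, 64, 128))
```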
1999-01-01
Fragmentary report text; recoverable content: the Portable Maintenance Data Store (PMDS), a solid-state memory device, and the Engine Titanium Consortium portable scanner, an inspection device bridging rapid imaging and robotics that can inspect a region approximately the size of its sensor element (about 3 inches in diameter) at one time.
A database for spectral image quality
NASA Astrophysics Data System (ADS)
Le Moan, Steven; George, Sony; Pedersen, Marius; Blahová, Jana; Hardeberg, Jon Yngve
2015-01-01
We introduce a new image database dedicated to multi-/hyperspectral image quality assessment. A total of nine scenes representing pseudo-flat surfaces of different materials (textile, wood, skin, etc.) were captured by means of a 160-band hyperspectral system with a spectral range between 410 and 1000 nm. Five spectral distortions were designed, applied to the spectral images, and subsequently compared in a psychometric experiment in order to provide a basis for applications such as the evaluation of spectral image difference measures. The database can be downloaded freely from http://www.colourlab.no/cid.
Document image database indexing with pictorial dictionary
NASA Astrophysics Data System (ADS)
Akbari, Mohammad; Azimi, Reza
2010-02-01
In this paper we introduce a new approach for information retrieval from a Persian document image database without using Optical Character Recognition (OCR). First, an attribute called the subword upper contour label is defined; then a pictorial dictionary is constructed based on this attribute for the subwords. With this approach we address two issues in document image retrieval: keyword spotting and retrieval according to document similarity. The proposed methods have been evaluated on a Persian document image database. The results have demonstrated the ability of this approach in document image information retrieval.
Lederer, Carsten W; Basak, A Nazli; Aydinok, Yesim; Christou, Soteroula; El-Beshlawy, Amal; Eleftheriou, Androulla; Fattoum, Slaheddine; Felice, Alex E; Fibach, Eitan; Galanello, Renzo; Gambari, Roberto; Gavrila, Lucian; Giordano, Piero C; Grosveld, Frank; Hassapopoulou, Helen; Hladka, Eva; Kanavakis, Emmanuel; Locatelli, Franco; Old, John; Patrinos, George P; Romeo, Giovanni; Taher, Ali; Traeger-Synodinos, Joanne; Vassiliou, Panayiotis; Villegas, Ana; Voskaridou, Ersi; Wajcman, Henri; Zafeiropoulos, Anastasios; Kleanthous, Marina
2009-01-01
Hemoglobin (Hb) disorders are common, potentially lethal monogenic diseases, posing a global health challenge. With worldwide migration and intermixing of carriers, demanding flexible health planning and patient care, hemoglobinopathies may serve as a paradigm for the use of electronic infrastructure tools in the collection of data, the dissemination of knowledge, the harmonization of treatment, and the coordination of research and preventive programs. ITHANET, a network covering thalassemias and other hemoglobinopathies, comprises 26 organizations from 16 countries, including non-European countries of origin for these diseases (Egypt, Israel, Lebanon, Tunisia and Turkey). Using electronic infrastructure tools, ITHANET aims to strengthen cross-border communication and data transfer, cooperative research and treatment of thalassemia, and to improve support and information of those affected by hemoglobinopathies. Moreover, the consortium has established the ITHANET Portal, a novel web-based instrument for the dissemination of information on hemoglobinopathies to researchers, clinicians and patients. The ITHANET Portal is a growing public resource, providing forums for discussion and research coordination, and giving access to courses and databases organized by ITHANET partners. Already a popular repository for diagnostic protocols and news related to hemoglobinopathies, the ITHANET Portal also provides a searchable, extendable database of thalassemia mutations and associated background information. The experience of ITHANET is exemplary for a consortium bringing together disparate organizations from heterogeneous partner countries to face a common health challenge. The ITHANET Portal as a web-based tool born out of this experience amends some of the problems encountered and facilitates education and international exchange of data and expertise for hemoglobinopathies.
Tagare, Hemant D.; Jaffe, C. Carl; Duncan, James
1997-01-01
Abstract Information contained in medical images differs considerably from that residing in alphanumeric format. The difference can be attributed to four characteristics: (1) the semantics of medical knowledge extractable from images is imprecise; (2) image information contains form and spatial data, which are not expressible in conventional language; (3) a large part of image information is geometric; (4) diagnostic inferences derived from images rest on an incomplete, continuously evolving model of normality. This paper explores the differentiating characteristics of text versus images and their impact on design of a medical image database intended to allow content-based indexing and retrieval. One strategy for implementing medical image databases is presented, which employs object-oriented iconic queries, semantics by association with prototypes, and a generic schema. PMID:9147338
Image database for digital hand atlas
NASA Astrophysics Data System (ADS)
Cao, Fei; Huang, H. K.; Pietka, Ewa; Gilsanz, Vicente; Dey, Partha S.; Gertych, Arkadiusz; Pospiech-Kurkowska, Sywia
2003-05-01
Bone age assessment is a procedure frequently performed in pediatric patients to evaluate growth disorders. A commonly used method is atlas matching by visual comparison of a hand radiograph with a small reference set from the old Greulich-Pyle atlas. We have developed a new digital hand atlas with a large set of clinically normal hand images from diverse ethnic groups. In this paper, we present our system design and implementation of the digital atlas database to support computer-aided atlas matching for bone age assessment. The system consists of a hand atlas image database, a computer-aided diagnostic (CAD) software module for image processing and atlas matching, and a Web user interface. Users can use a Web browser to push DICOM images, directly or indirectly from PACS, to the CAD server for a bone age assessment. Quantitative features of the examined image, which reflect skeletal maturity, are then extracted and compared with patterns from the atlas image database to assess the bone age. The digital atlas method, built on a large image database and current Internet technology, provides an alternative to supplement or replace the traditional method for a quantitative, accurate and cost-effective assessment of bone age.
Han, Guanghui; Liu, Xiabi; Han, Feifei; Santika, I Nyoman Tenaya; Zhao, Yanfeng; Zhao, Xinming; Zhou, Chunwu
2015-02-01
Lung computed tomography (CT) imaging signs play important roles in the diagnosis of lung diseases. In this paper, we review the significance of CT imaging signs in disease diagnosis and determine the inclusion criteria for CT scans and CT imaging signs in our database. We developed software for annotating abnormal regions and designed the storage scheme for CT images and annotation data. We then present a publicly available database of lung CT imaging signs, called LISS for short, which contains 271 CT scans and 677 abnormal regions within them. The 677 abnormal regions are divided into nine categories of common CT imaging signs of lung disease (CISLs). The ground truth of these CISL regions and the corresponding categories are provided. Furthermore, to make the database publicly available, all private data in the CT scans were eliminated or replaced with provisioned values. The main characteristic of our LISS database is that it is developed from the new perspective of CT imaging signs of lung diseases instead of the commonly considered lung nodules. It is therefore promising for computer-aided detection and diagnosis research and for medical education.
Open source database of images DEIMOS: extension for large-scale subjective image quality assessment
NASA Astrophysics Data System (ADS)
Vítek, Stanislav
2014-09-01
DEIMOS (Database of Images: Open Source) is an open-source database of images and video sequences for testing, verification and comparison of various image and/or video processing techniques such as compression, reconstruction and enhancement. This paper deals with an extension of the database that allows large-scale web-based subjective image quality assessment to be performed. The extension implements both an administrative and a client interface. The proposed system is aimed mainly at mobile communication devices and takes advantage of HTML5 technology, meaning that participants do not need to install any application and assessment can be performed using a web browser. The assessment campaign administrator can select images from the large database and then apply rules defined by various test procedure recommendations. The standard test procedures may be fully customized and saved as a template. Alternatively, the administrator can define a custom test using images from the pool and other components, such as evaluation forms and ongoing questionnaires. The image sequence is delivered to the online client, e.g. a smartphone or tablet, as a fully automated assessment sequence, or the viewer can decide on the timing of the assessment if required. Environmental data and viewing conditions (e.g. illumination, vibrations, GPS coordinates, etc.) may be collected and subsequently analyzed.
Constructing Benchmark Databases and Protocols for Medical Image Analysis: Diabetic Retinopathy
Kauppi, Tomi; Kämäräinen, Joni-Kristian; Kalesnykiene, Valentina; Sorri, Iiris; Uusitalo, Hannu; Kälviäinen, Heikki
2013-01-01
We address the performance evaluation practices for developing medical image analysis methods, in particular, how to establish and share databases of medical images with verified ground truth and solid evaluation protocols. Such databases support the development of better algorithms, execution of profound method comparisons, and, consequently, technology transfer from research laboratories to clinical practice. For this purpose, we propose a framework consisting of reusable methods and tools for the laborious task of constructing a benchmark database. We provide a software tool for medical image annotation helping to collect class label, spatial span, and expert's confidence on lesions and a method to appropriately combine the manual segmentations from multiple experts. The tool and all necessary functionality for method evaluation are provided as public software packages. As a case study, we utilized the framework and tools to establish the DiaRetDB1 V2.1 database for benchmarking diabetic retinopathy detection algorithms. The database contains a set of retinal images, ground truth based on information from multiple experts, and a baseline algorithm for the detection of retinopathy lesions. PMID:23956787
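One simple way to combine manual segmentations from multiple experts, as the framework above supports, is a confidence-weighted vote over the binary masks; the sketch below shows this idea and is not necessarily the fusion rule used for DiaRetDB1.

```python
# Illustrative fusion of several experts' binary lesion masks into one ground truth.
import numpy as np

def fuse_expert_masks(masks, confidences=None, threshold=0.5):
    """masks: list of equally shaped boolean arrays, one per expert."""
    masks = np.stack([m.astype(float) for m in masks])
    if confidences is None:
        confidences = np.ones(len(masks))
    w = np.asarray(confidences, dtype=float)
    consensus = np.tensordot(w / w.sum(), masks, axes=1)   # weighted mean per pixel
    return consensus >= threshold                          # boolean ground-truth mask

# fused = fuse_expert_masks([m1, m2, m3], confidences=[1.0, 0.5, 1.0])
```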
Active browsing using similarity pyramids
NASA Astrophysics Data System (ADS)
Chen, Jau-Yuen; Bouman, Charles A.; Dalton, John C.
1998-12-01
In this paper, we describe a new approach to managing large image databases, which we call active browsing. Active browsing integrates relevance feedback into the browsing environment, so that users can modify the database's organization to suit the desired task. Our method is based on a similarity pyramid data structure, which hierarchically organizes the database, so that it can be efficiently browsed. At coarse levels, the similarity pyramid allows users to view the database as large clusters of similar images. Alternatively, users can 'zoom into' finer levels to view individual images. We discuss relevance feedback for the browsing process, and argue that it is fundamentally different from relevance feedback for more traditional search-by-query tasks. We propose two fundamental operations for active browsing: pruning and reorganization. Both of these operations depend on a user-defined relevance set, which represents the image or set of images desired by the user. We present statistical methods for accurately pruning the database, and we propose a new 'worm hole' distance metric for reorganizing the database, so that members of the relevance set are grouped together.
Content based image retrieval using local binary pattern operator and data mining techniques.
Vatamanu, Oana Astrid; Frandeş, Mirela; Lungeanu, Diana; Mihalaş, Gheorghe-Ioan
2015-01-01
Content based image retrieval (CBIR) concerns the retrieval of similar images from image databases, using feature vectors extracted from images. These feature vectors globally describe the visual content of an image, e.g., its texture, colour, shape, and spatial relations. Herein, we propose the definition of feature vectors using the Local Binary Pattern (LBP) operator. A study was performed in order to determine the optimum LBP variant for the general definition of image feature vectors. The chosen LBP variant is then used to build an ultrasound image database and a database with images obtained from Wireless Capsule Endoscopy. The image indexing process is optimized using data clustering techniques for images belonging to the same class. Finally, the proposed indexing method is compared to the classical indexing technique, which is nowadays widely used.
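As a sketch of turning an image into an LBP-based feature vector, the snippet below uses scikit-image with the uniform LBP variant as an example (the study compares several variants before selecting one); the neighborhood parameters are assumptions.

```python
# Illustrative LBP histogram feature vector for CBIR.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_feature_vector(gray_image, points=8, radius=1):
    codes = local_binary_pattern(gray_image, points, radius, method="uniform")
    n_bins = points + 2                      # uniform patterns + one "non-uniform" bin
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist                              # normalized feature vector

# Images can then be ranked by, e.g., Euclidean or chi-square distance
# between their LBP histograms.
```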
Mission Connect Mild TBI Translational Research Consortium
2012-08-01
Fragmentary report text; recoverable content: at 6 months post-injury, patients are screened for anterior pituitary function (121 subjects enrolled), using measures including somatomedin (IGF-1), thyroid stimulating hormone (TSH), free thyroxine (Free T4), and prolactin, to determine the incidence of single and multiple pituitary hormone deficiencies; clinical characteristics, MRI imaging results, and EEG and MEG results are related to functional outcome.
Legal Medicine Information System using CDISC ODM.
Kiuchi, Takahiro; Yoshida, Ken-ichi; Kotani, Hirokazu; Tamaki, Keiji; Nagai, Hisashi; Harada, Kazuki; Ishikawa, Hirono
2013-11-01
We have developed a new database system for forensic autopsies, called the Legal Medicine Information System, using the Clinical Data Interchange Standards Consortium (CDISC) Operational Data Model (ODM). This system comprises two subsystems, namely the Institutional Database System (IDS) located in each institute and containing personal information, and the Central Anonymous Database System (CADS) located in the University Hospital Medical Information Network Center containing only anonymous information. CDISC ODM is used as the data transfer protocol between the two subsystems. Using the IDS, forensic pathologists and other staff can register and search for institutional autopsy information, print death certificates, and extract data for statistical analysis. They can also submit anonymous autopsy information to the CADS semi-automatically. This reduces the burden of double data entry, the time-lag of central data collection, and anxiety regarding legal and ethical issues. Using the CADS, various studies on the causes of death can be conducted quickly and easily, and the results can be used to prevent similar accidents, diseases, and abuse. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
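A heavily simplified illustration of emitting anonymous case data in an ODM-style XML structure. The element and attribute names follow the general ODM pattern (ClinicalData/SubjectData/FormData/ItemGroupData/ItemData), but the OIDs and items are hypothetical and the output is not validated against the schema used by the actual system.

```python
# Illustrative, non-validated ODM-style XML fragment for one anonymous case.
import xml.etree.ElementTree as ET

odm = ET.Element("ODM", FileType="Snapshot", FileOID="example-transfer-001")
clinical = ET.SubElement(odm, "ClinicalData", StudyOID="LMIS",
                         MetaDataVersionOID="1")
subject = ET.SubElement(clinical, "SubjectData", SubjectKey="ANON-0001")
form = ET.SubElement(subject, "FormData", FormOID="AUTOPSY")
group = ET.SubElement(form, "ItemGroupData", ItemGroupOID="FINDINGS")
ET.SubElement(group, "ItemData", ItemOID="CAUSE_OF_DEATH", Value="drowning")
ET.SubElement(group, "ItemData", ItemOID="AGE_GROUP", Value="40-49")

print(ET.tostring(odm, encoding="unicode"))
```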
NASA Astrophysics Data System (ADS)
Brazil, L.
2017-12-01
The Shale Network's extensive database of water quality observations enables educational experiences about the potential impacts of resource extraction with real data. Through open source tools that are developed and maintained by the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI), researchers, educators, and citizens can access and analyze the very same data that the Shale Network team has used in peer-reviewed publications about the potential impacts of hydraulic fracturing on water. The development of the Shale Network database has been made possible through collection efforts led by an academic team and involving numerous individuals from government agencies, citizen science organizations, and private industry. Thus far, CUAHSI-supported data tools have been used to engage high school students, university undergraduate and graduate students, as well as citizens so that all can discover how energy production impacts the Marcellus Shale region, which includes Pennsylvania and other nearby states. This presentation will describe these data tools, how the Shale Network has used them in developing educational material, and the resources available to learn more.
Kessler, Michael D.; Yerges-Armstrong, Laura; Taub, Margaret A.; Shetty, Amol C.; Maloney, Kristin; Jeng, Linda Jo Bone; Ruczinski, Ingo; Levin, Albert M.; Williams, L. Keoki; Beaty, Terri H.; Mathias, Rasika A.; Barnes, Kathleen C.; Boorgula, Meher Preethi; Campbell, Monica; Chavan, Sameer; Ford, Jean G.; Foster, Cassandra; Gao, Li; Hansel, Nadia N.; Horowitz, Edward; Huang, Lili; Ortiz, Romina; Potee, Joseph; Rafaels, Nicholas; Scott, Alan F.; Vergara, Candelaria; Gao, Jingjing; Hu, Yijuan; Johnston, Henry Richard; Qin, Zhaohui S.; Padhukasahasram, Badri; Dunston, Georgia M.; Faruque, Mezbah U.; Kenny, Eimear E.; Gietzen, Kimberly; Hansen, Mark; Genuario, Rob; Bullis, Dave; Lawley, Cindy; Deshpande, Aniket; Grus, Wendy E.; Locke, Devin P.; Foreman, Marilyn G.; Avila, Pedro C.; Grammer, Leslie; Kim, Kwang-YounA; Kumar, Rajesh; Schleimer, Robert; Bustamante, Carlos; De La Vega, Francisco M.; Gignoux, Chris R.; Shringarpure, Suyash S.; Musharoff, Shaila; Wojcik, Genevieve; Burchard, Esteban G.; Eng, Celeste; Gourraud, Pierre-Antoine; Hernandez, Ryan D.; Lizee, Antoine; Pino-Yanes, Maria; Torgerson, Dara G.; Szpiech, Zachary A.; Torres, Raul; Nicolae, Dan L.; Ober, Carole; Olopade, Christopher O.; Olopade, Olufunmilayo; Oluwole, Oluwafemi; Arinola, Ganiyu; Song, Wei; Abecasis, Goncalo; Correa, Adolfo; Musani, Solomon; Wilson, James G.; Lange, Leslie A.; Akey, Joshua; Bamshad, Michael; Chong, Jessica; Fu, Wenqing; Nickerson, Deborah; Reiner, Alexander; Hartert, Tina; Ware, Lorraine B.; Bleecker, Eugene; Meyers, Deborah; Ortega, Victor E.; Pissamai, Maul R. N.; Trevor, Maul R. N.; Watson, Harold; Araujo, Maria Ilma; Oliveira, Ricardo Riccio; Caraballo, Luis; Marrugo, Javier; Martinez, Beatriz; Meza, Catherine; Ayestas, Gerardo; Herrera-Paz, Edwin Francisco; Landaverde-Torres, Pamela; Erazo, Said Omar Leiva; Martinez, Rosella; Mayorga, Alvaro; Mayorga, Luis F.; Mejia-Mejia, Delmy-Aracely; Ramos, Hector; Saenz, Allan; Varela, Gloria; Vasquez, Olga Marina; Ferguson, Trevor; Knight-Madden, Jennifer; Samms-Vaughan, Maureen; Wilks, Rainford J.; Adegnika, Akim; Ateba-Ngoa, Ulysse; Yazdanbakhsh, Maria; O'Connor, Timothy D.
2016-01-01
To characterize the extent and impact of ancestry-related biases in precision genomic medicine, we use 642 whole-genome sequences from the Consortium on Asthma among African-ancestry Populations in the Americas (CAAPA) project to evaluate typical filters and databases. We find significant correlations between estimated African ancestry proportions and the number of variants per individual in all variant classification sets but one. The source of these correlations is highlighted in more detail by looking at the interaction between filtering criteria and the ClinVar and Human Gene Mutation databases. ClinVar's correlation, representing African ancestry-related bias, has changed over time amidst monthly updates, with the most extreme switch happening between March and April of 2014 (r=0.733 to r=−0.683). We identify 68 SNPs as the major drivers of this change in correlation. As long as ancestry-related bias when using these clinical databases is minimally recognized, the genetics community will face challenges with implementation, interpretation and cost-effectiveness when treating minority populations. PMID:27725664
MetaboLights: towards a new COSMOS of metabolomics data management.
Steinbeck, Christoph; Conesa, Pablo; Haug, Kenneth; Mahendraker, Tejasvi; Williams, Mark; Maguire, Eamonn; Rocca-Serra, Philippe; Sansone, Susanna-Assunta; Salek, Reza M; Griffin, Julian L
2012-10-01
Exciting funding initiatives are emerging in Europe and the US for metabolomics data production, storage, dissemination and analysis. This is based on a rich ecosystem of resources around the world, which has been built during the past ten years, including but not limited to resources such as MassBank in Japan and the Human Metabolome Database in Canada. Now, the European Bioinformatics Institute has launched MetaboLights, a database for metabolomics experiments and the associated metadata (http://www.ebi.ac.uk/metabolights). It is the first comprehensive, cross-species, cross-platform metabolomics database maintained by one of the major open access data providers in molecular biology. In October, the European COSMOS consortium will start its work on metabolomics data standardization, publication and dissemination workflows. The NIH in the US is establishing 6-8 metabolomics service cores as well as a national metabolomics repository. This communication reports on MetaboLights as a new resource for metabolomics research, summarises the related developments and outlines how they may consolidate the knowledge management in this third large omics field next to proteomics and genomics.
Structured Forms Reference Set of Binary Images II (SFRS2)
National Institute of Standards and Technology Data Gateway
NIST Structured Forms Reference Set of Binary Images II (SFRS2) (Web, free access) The second NIST database of structured forms (Special Database 6) consists of 5,595 pages of binary, black-and-white images of synthesized documents containing hand-print. The documents in this database are 12 different tax forms from the IRS 1040 Package X for the year 1988.
Low dose CT image restoration using a database of image patches
NASA Astrophysics Data System (ADS)
Ha, Sungsoo; Mueller, Klaus
2015-01-01
Reducing the radiation dose in CT imaging has become an active research topic and many solutions have been proposed to remove the significant noise and streak artifacts in the reconstructed images. Most of these methods operate within the domain of the image that is subject to restoration. This, however, poses limitations on the extent of filtering possible. We advocate to take into consideration the vast body of external knowledge that exists in the domain of already acquired medical CT images, since after all, this is what radiologists do when they examine these low quality images. We can incorporate this knowledge by creating a database of prior scans, either of the same patient or a diverse corpus of different patients, to assist in the restoration process. Our paper follows up on our previous work that used a database of images. Using images, however, is challenging since it requires tedious and error prone registration and alignment. Our new method eliminates these problems by storing a diverse set of small image patches in conjunction with a localized similarity matching scheme. We also empirically show that it is sufficient to store these patches without anatomical tags since their statistics are sufficiently strong to yield good similarity matches from the database and as a direct effect, produce image restorations of high quality. A final experiment demonstrates that our global database approach can recover image features that are difficult to preserve with conventional denoising approaches.
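The patch-database idea can be sketched as follows: for each patch of the noisy image, find the most similar patch in a database of previously acquired (higher-quality) patches and blend it with the observation. Real systems use much larger databases, localized similarity matching, and overlapping-patch aggregation; this is only a conceptual illustration with assumed patch size and blending weight.

```python
# Conceptual sketch of patch-database restoration for a noisy 2-D image.
import numpy as np

def restore_with_patch_db(noisy, patch_db, patch=8, blend=0.6):
    """noisy: 2-D array; patch_db: array (N, patch*patch) of prior clean patches."""
    out = noisy.astype(float).copy()
    for i in range(0, noisy.shape[0] - patch + 1, patch):
        for j in range(0, noisy.shape[1] - patch + 1, patch):
            p = noisy[i:i + patch, j:j + patch].astype(float).ravel()
            # nearest clean patch by Euclidean distance
            idx = np.argmin(((patch_db - p) ** 2).sum(axis=1))
            match = patch_db[idx].reshape(patch, patch)
            out[i:i + patch, j:j + patch] = (
                blend * match + (1 - blend) * p.reshape(patch, patch)
            )
    return out
```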
ClearedLeavesDB: an online database of cleared plant leaf images
2014-01-01
Background Leaf vein networks are critical to both the structure and function of leaves. A growing body of recent work has linked leaf vein network structure to the physiology, ecology and evolution of land plants. In the process, multiple institutions and individual researchers have assembled collections of cleared leaf specimens in which vascular bundles (veins) are rendered visible. In an effort to facilitate analysis and digitally preserve these specimens, high-resolution images are usually created, either of entire leaves or of magnified leaf subsections. In a few cases, collections of digital images of cleared leaves are available for use online. However, these collections do not share a common platform nor is there a means to digitally archive cleared leaf images held by individual researchers (in addition to those held by institutions). Hence, there is a growing need for a digital archive that enables online viewing, sharing and disseminating of cleared leaf image collections held by both institutions and individual researchers. Description The Cleared Leaf Image Database (ClearedLeavesDB), is an online web-based resource for a community of researchers to contribute, access and share cleared leaf images. ClearedLeavesDB leverages resources of large-scale, curated collections while enabling the aggregation of small-scale collections within the same online platform. ClearedLeavesDB is built on Drupal, an open source content management platform. It allows plant biologists to store leaf images online with corresponding meta-data, share image collections with a user community and discuss images and collections via a common forum. We provide tools to upload processed images and results to the database via a web services client application that can be downloaded from the database. Conclusions We developed ClearedLeavesDB, a database focusing on cleared leaf images that combines interactions between users and data via an intuitive web interface. The web interface allows storage of large collections and integrates with leaf image analysis applications via an open application programming interface (API). The open API allows uploading of processed images and other trait data to the database, further enabling distribution and documentation of analyzed data within the community. The initial database is seeded with nearly 19,000 cleared leaf images representing over 40 GB of image data. Extensible storage and growth of the database is ensured by using the data storage resources of the iPlant Discovery Environment. ClearedLeavesDB can be accessed at http://clearedleavesdb.org. PMID:24678985
ClearedLeavesDB: an online database of cleared plant leaf images.
Das, Abhiram; Bucksch, Alexander; Price, Charles A; Weitz, Joshua S
2014-03-28
Leaf vein networks are critical to both the structure and function of leaves. A growing body of recent work has linked leaf vein network structure to the physiology, ecology and evolution of land plants. In the process, multiple institutions and individual researchers have assembled collections of cleared leaf specimens in which vascular bundles (veins) are rendered visible. In an effort to facilitate analysis and digitally preserve these specimens, high-resolution images are usually created, either of entire leaves or of magnified leaf subsections. In a few cases, collections of digital images of cleared leaves are available for use online. However, these collections do not share a common platform nor is there a means to digitally archive cleared leaf images held by individual researchers (in addition to those held by institutions). Hence, there is a growing need for a digital archive that enables online viewing, sharing and disseminating of cleared leaf image collections held by both institutions and individual researchers. The Cleared Leaf Image Database (ClearedLeavesDB) is an online web-based resource for a community of researchers to contribute, access and share cleared leaf images. ClearedLeavesDB leverages resources of large-scale, curated collections while enabling the aggregation of small-scale collections within the same online platform. ClearedLeavesDB is built on Drupal, an open source content management platform. It allows plant biologists to store leaf images online with corresponding meta-data, share image collections with a user community and discuss images and collections via a common forum. We provide tools to upload processed images and results to the database via a web services client application that can be downloaded from the database. We developed ClearedLeavesDB, a database focusing on cleared leaf images that combines interactions between users and data via an intuitive web interface. The web interface allows storage of large collections and integrates with leaf image analysis applications via an open application programming interface (API). The open API allows uploading of processed images and other trait data to the database, further enabling distribution and documentation of analyzed data within the community. The initial database is seeded with nearly 19,000 cleared leaf images representing over 40 GB of image data. Extensible storage and growth of the database is ensured by using the data storage resources of the iPlant Discovery Environment. ClearedLeavesDB can be accessed at http://clearedleavesdb.org.
The readout and control system of the mid-size telescope prototype of the Cherenkov Telescope Array
NASA Astrophysics Data System (ADS)
Oya, I.; Anguner, O.; Behera, B.; Birsin, E.; Fuessling, M.; Melkumyan, D.; Schmidt, T.; Schwanke, U.; Sternberger, R.; Wegner, P.; Wiesand, S.; Cta Consortium,the
2014-06-01
The Cherenkov Telescope Array (CTA) is one of the major ground-based astronomy projects being pursued and will be the largest facility for ground-based γ-ray observations ever built. CTA will consist of two arrays: one in the Northern hemisphere composed of about 20 telescopes, and the other one in the Southern hemisphere composed of about 100 telescopes, both arrays containing telescopes of different types and sizes. A prototype for the Mid-Size Telescope (MST) with a diameter of 12 m has been installed in Berlin and is currently being commissioned. This prototype is composed of a mechanical structure, a drive system and mirror facets mounted with powered actuators to enable active control. Five Charge-Coupled Device (CCD) cameras and a wide set of sensors allow the evaluation of the performance of the instrument. The design of the control software follows concepts and tools under evaluation within the CTA consortium in order to provide a realistic test-bed for the middleware: 1) the readout and control system for the MST prototype is implemented with the Atacama Large Millimeter/submillimeter Array (ALMA) Common Software (ACS) distributed control middleware; 2) the OPen Connectivity-Unified Architecture (OPC UA) is used for hardware access; 3) the document-oriented MongoDB database is used for efficient storage of CCD images, logging and alarm information; and 4) MySQL and MongoDB databases are used for archiving the slow control monitoring data and for storing the operation configuration parameters. In this contribution, the details of the implementation of the control system for the MST prototype telescope are described.
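As a small, hypothetical illustration of point 3 above (MongoDB for CCD image and logging storage), the following Python sketch stores and retrieves CCD exposure metadata with pymongo. The host, database, collection, and field names are assumptions for the example, not the prototype's actual schema.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/")   # hypothetical host
db = client["mst_prototype"]                          # hypothetical database name

# Record one CCD exposure (metadata only; pixel data could live elsewhere).
db.ccd_images.insert_one({
    "camera": "lid_ccd_1",                            # hypothetical camera id
    "timestamp": "2014-06-01T22:14:03Z",
    "exposure_s": 0.5,
    "file_path": "/data/ccd/lid_ccd_1/20140601_221403.fits",
})

# Retrieve the ten most recent exposures for that camera.
for doc in db.ccd_images.find({"camera": "lid_ccd_1"}).sort("timestamp", -1).limit(10):
    print(doc["file_path"])
```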
Biomedical science journals in the Arab world.
Tadmouri, Ghazi O
2004-10-01
Medieval Arab scientists established the basis of medical practice and gave important attention to the publication of scientific results. At present, modern scientific publishing in the Arab world is in its developmental stage. There are fewer than 300 Arab biomedical journals, most of which are published in Egypt, Lebanon, and the Kingdom of Saudi Arabia. Yet, many of these journals do not have on-line access or are not indexed in major bibliographic databases. The majority of indexed journals, however, do not have a stable presence in the popular PubMed database, and their indexing has been discontinued since 2001. The exposure of Arab biomedical journals in international indices undoubtedly plays an important role in improving the scientific quality of these journals. The successful examples discussed in this review encourage us to call for the formation of a consortium of Arab biomedical journal publishers to assist in redressing the balance of the region from biomedical data consumption to data production.
Vizcaíno, Juan Antonio; Foster, Joseph M.; Martens, Lennart
2010-01-01
Although data deposition is not yet standard practice in the field of proteomics, several mass spectrometry (MS) based proteomics repositories are publicly available for the scientific community. The main existing resources are: the Global Proteome Machine Database (GPMDB), PeptideAtlas, the PRoteomics IDEntifications database (PRIDE), Tranche, and NCBI Peptidome. In this review the capabilities of each of these will be described, paying special attention to four key properties: data types stored, applicable data submission strategies, supported formats, and available data mining and visualization tools. Additionally, the data contents from model organisms will be enumerated for each resource. There are other valuable smaller and/or more specialized repositories but they will not be covered in this review. Finally, the concept behind the ProteomeXchange consortium, a collaborative effort among the main resources in the field, will be introduced. PMID:20615486
NASA Astrophysics Data System (ADS)
Michel, L.; Motch, C.; Pineau, F. X.
2009-05-01
As members of the Survey Science Consortium of the XMM-Newton mission, the Strasbourg Observatory is in charge of the real-time cross-correlations of X-ray data with archival catalogs. We are also committed to providing specific tools to handle these cross-correlations and propose identifications at other wavelengths. In order to do so, we developed a database generator (Saada) managing persistent links and supporting heterogeneous input datasets. This system makes it easy to build an archive containing numerous and complex links between individual items [1]. It also offers a powerful query engine able to select sources on the basis of the properties (existence, distance, colours) of the X-ray-archival associations. We present such a database in operation for the 2XMMi catalogue. This system is flexible enough to provide both a public data interface and a servicing interface which could be used in the framework of the Simbol-X ground segment.
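To make the idea of cross-correlating X-ray sources with archival catalogs concrete, here is a minimal positional cross-match sketch in Python; it simply pairs sources within a fixed angular radius and is not the Saada query engine, whose association properties (existence, distance, colours) go well beyond this.

```python
import numpy as np

def angular_sep_deg(ra1, dec1, ra2, dec2):
    """Great-circle separation in degrees (inputs in degrees)."""
    ra1, dec1, ra2, dec2 = map(np.radians, (ra1, dec1, ra2, dec2))
    cos_sep = (np.sin(dec1) * np.sin(dec2)
               + np.cos(dec1) * np.cos(dec2) * np.cos(ra1 - ra2))
    return np.degrees(np.arccos(np.clip(cos_sep, -1.0, 1.0)))

def cross_match(xray_sources, catalog, radius_arcsec=5.0):
    """Return (x-ray index, catalog index, separation in arcsec) for every
    catalog counterpart closer than the matching radius."""
    matches = []
    for i, (ra, dec) in enumerate(xray_sources):
        sep = angular_sep_deg(ra, dec, catalog[:, 0], catalog[:, 1]) * 3600.0
        for j in np.where(sep <= radius_arcsec)[0]:
            matches.append((i, j, sep[j]))
    return matches

# Toy usage with made-up coordinates (degrees).
xray = [(150.001, 2.200), (150.500, 2.300)]
optical = np.array([[150.0005, 2.2002], [150.900, 2.100]])
print(cross_match(xray, optical))
```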
Huber, Lara
2011-06-01
In the neurosciences, digital databases are increasingly becoming important tools for rendering and distributing data. This development is due to the growing impact of imaging-based trial design in cognitive neuroscience, including morphological as well as functional imaging technologies. As the case of the 'Laboratory of Neuro Imaging' (LONI) shows, databases are attributed a specific epistemological power: since the 1990s, databasing has been seen to foster the integration of neuroscientific data, although local regimes of data production, manipulation and interpretation are also challenging this development. Databasing in the neurosciences goes along with the introduction of new structures for integrating local data, hence establishing digital spaces of knowledge (epistemic spaces): at this stage, inherent norms of digital databases are affecting regimes of imaging-based trial design, for example clinical research into Alzheimer's disease.
Gao, Jun-Xue; Pei, Qiu-Yan; Li, Yun-Tao; Yang, Zhen-Juan
2015-06-01
The aim of this study was to create a database of anatomical ultrathin cross-sectional images of fetal hearts with different congenital heart diseases (CHDs) and preliminarily to investigate its clinical application. Forty Chinese fetal heart samples from induced labor due to different CHDs were cut transversely at 60-μm thickness. All thoracic organs were removed from the thoracic cavity after formalin fixation, embedded in optimum cutting temperature compound, and then frozen at -25°C for 2 hours. Subsequently, macro shots of the frozen serial sections were obtained using a digital camera in order to build a database of anatomical ultrathin cross-sectional images. Images in the database clearly displayed the fetal heart structures. After importing the images into three-dimensional software, the following functions could be realized: (1) based on the original database of transverse sections, databases of sagittal and coronal sections could be constructed; and (2) the original and constructed databases could be displayed continuously and dynamically, and rotated in arbitrary angles. They could also be displayed synchronically. The aforementioned functions of the database allowed for the retrieval of images and three-dimensional anatomy characteristics of the different fetal CHDs, and virtualization of fetal echocardiography findings. A database of 40 different cross-sectional fetal CHDs was established. An extensive database library of fetal CHDs, from which sonographers and students can study the anatomical features of fetal CHDs and virtualize fetal echocardiography findings via either centralized training or distance education, can be established in the future by accumulating further cases. Copyright © 2015. Published by Elsevier B.V.
Deeply learnt hashing forests for content based image retrieval in prostate MR images
NASA Astrophysics Data System (ADS)
Shah, Amit; Conjeti, Sailesh; Navab, Nassir; Katouzian, Amin
2016-03-01
The deluge in the size and heterogeneity of medical image databases necessitates content-based retrieval systems for their efficient organization. In this paper, we propose such a system to retrieve prostate MR images which share similarities in appearance and content with a query image. We introduce deeply learnt hashing forests (DL-HF) for this image retrieval task. DL-HF effectively leverages the semantic descriptiveness of deeply learnt Convolutional Neural Networks. This is used in conjunction with hashing forests, which are unsupervised random forests. DL-HF hierarchically parses the deep-learnt feature space to encode subspaces with compact binary code words. We propose a similarity-preserving feature descriptor called Parts Histogram, which is derived from DL-HF. Correlation defined on this descriptor is used as a similarity metric for retrieval from the database. Validation on a publicly available multi-center prostate MR image database established the validity of the proposed approach. The proposed method is fully automated without any user interaction and is not dependent on any external image standardization such as image normalization and registration. This image retrieval method is generalizable and is well-suited for retrieval in heterogeneous databases of other imaging modalities and anatomies.
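The following Python sketch gives a flavour of the hashing-forest step using scikit-learn's RandomTreesEmbedding as a stand-in for the unsupervised forest: each image descriptor is encoded by the leaves it reaches, giving a sparse binary code word, and retrieval ranks the database by correlation between code words. Random vectors stand in for the deeply learnt CNN features, and none of this reproduces the DL-HF or Parts Histogram details of the paper.

```python
import numpy as np
from sklearn.ensemble import RandomTreesEmbedding

# Stand-in descriptors: one row per database image (DL-HF would use CNN features).
rng = np.random.default_rng(0)
deep_features = rng.normal(size=(500, 128))

# Unsupervised forest: every tree maps a descriptor to one leaf, so each image
# gets a sparse binary code word over all leaves.
forest = RandomTreesEmbedding(n_estimators=32, max_depth=6, random_state=0)
codes = forest.fit_transform(deep_features)          # shape (n_images, n_leaves)

def retrieve(query_code, database_codes, top_k=5):
    """Rank database images by correlation between binary code words."""
    q = query_code.toarray().ravel()
    sims = [np.corrcoef(q, row)[0, 1] for row in database_codes.toarray()]
    return np.argsort(sims)[::-1][:top_k]

print(retrieve(codes[0], codes))                     # the query itself ranks first
```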
Integrating Digital Images into the Art and Art History Curriculum.
ERIC Educational Resources Information Center
Pitt, Sharon P.; Updike, Christina B.; Guthrie, Miriam E.
2002-01-01
Describes an Internet-based image database system connected to a flexible, in-class teaching and learning tool (the Madison Digital Image Database) developed at James Madison University to bring digital images to the arts and humanities classroom. Discusses content, copyright issues, ensuring system effectiveness, instructional impact, sharing the…
NASA Technical Reports Server (NTRS)
Cota, Glenn F.
2001-01-01
The overall goal of this effort is to acquire a large bio-optical database, encompassing most environmental variability in the Arctic, to develop algorithms for phytoplankton biomass and production and other optically active constituents. A large suite of bio-optical and biogeochemical observations has been collected in a variety of high-latitude ecosystems at different seasons. The Ocean Research Consortium of the Arctic (ORCA) is a collaborative effort between G.F. Cota of Old Dominion University (ODU), W.G. Harrison and T. Platt of the Bedford Institute of Oceanography (BIO), S. Sathyendranath of Dalhousie University and S. Saitoh of Hokkaido University. ORCA has now conducted 12 cruises and collected over 500 in-water optical profiles plus a variety of ancillary data. Observational suites typically include apparent optical properties (AOPs), inherent optical properties (IOPs), and a variety of ancillary observations including sun photometry, biogeochemical profiles, and productivity measurements. All quality-assured data have been submitted to NASA's SeaWIFS Bio-Optical Archive and Storage System (SeaBASS) data archive. Our algorithm development efforts address most of the potential bio-optical data products for the Sea-Viewing Wide Field-of-view Sensor (SeaWiFS), Moderate Resolution Imaging Spectroradiometer (MODIS), and GLI, and provide validation for specific areas of concern, i.e., high latitudes and coastal waters.
NASA Astrophysics Data System (ADS)
Oosthoek, J. H. P.; Flahaut, J.; Rossi, A. P.; Baumann, P.; Misev, D.; Campalani, P.; Unnithan, V.
2014-06-01
PlanetServer is a WebGIS system, currently under development, enabling the online analysis of Compact Reconnaissance Imaging Spectrometer (CRISM) hyperspectral data from Mars. It is part of the EarthServer project which builds infrastructure for online access and analysis of huge Earth Science datasets. Core functionality consists of the rasdaman Array Database Management System (DBMS) for storage, and the Open Geospatial Consortium (OGC) Web Coverage Processing Service (WCPS) for data querying. Various WCPS queries have been designed to access spatial and spectral subsets of the CRISM data. The client WebGIS, consisting mainly of the OpenLayers javascript library, uses these queries to enable online spatial and spectral analysis. Currently the PlanetServer demonstration consists of two CRISM Full Resolution Target (FRT) observations, surrounding the NASA Curiosity rover landing site. A detailed analysis of one of these observations is performed in the Case Study section. The current PlanetServer functionality is described step by step, and is tested by focusing on detecting mineralogical evidence described in earlier Gale crater studies. Both the PlanetServer methodology and its possible use for mineralogical studies will be further discussed. Future work includes batch ingestion of CRISM data and further development of the WebGIS and analysis tools.
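To illustrate what a WCPS request for a spectral subset might look like, here is a hedged Python sketch that submits a band-ratio query to a rasdaman/petascope endpoint over HTTP. The endpoint URL, coverage name, band identifiers, axis labels, and request parameters are placeholders, not the actual PlanetServer service or CRISM coverage names.

```python
import requests

ENDPOINT = "http://example.org/rasdaman/ows"          # placeholder service URL

# WCPS ProcessCoverages query: ratio of two bands over a spatial subset,
# encoded as PNG. Coverage, band, and axis names are hypothetical.
wcps_query = """
for $c in (frt_observation)
return encode(
    ($c.band_233 / $c.band_13)[ E(100:200), N(300:400) ],
    "image/png")
"""

response = requests.get(ENDPOINT, params={
    "service": "WCS", "version": "2.0.1",
    "request": "ProcessCoverages", "query": wcps_query,
})
with open("band_ratio.png", "wb") as fh:
    fh.write(response.content)
```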
DeitY-TU face database: its design, multiple camera capturing, characteristics, and evaluation
NASA Astrophysics Data System (ADS)
Bhowmik, Mrinal Kanti; Saha, Kankan; Saha, Priya; Bhattacharjee, Debotosh
2014-10-01
The development of the latest face databases is providing researchers with different and realistic problems that play an important role in the development of efficient algorithms for solving the difficulties during automatic recognition of human faces. This paper presents the creation of a new visual face database, named the Department of Electronics and Information Technology-Tripura University (DeitY-TU) face database. It contains face images of 524 persons belonging to different nontribes and Mongolian tribes of north-east India, with their anthropometric measurements for identification. Database images are captured within a room with controlled variations in illumination, expression, and pose along with variability in age, gender, accessories, make-up, and partial occlusion. Each image contains the combined primary challenges of face recognition, i.e., illumination, expression, and pose. This database also represents some new features: soft biometric traits such as mole, freckle, scar, etc., and facial anthropometric variations that may be helpful for researchers for biometric recognition. It also provides a comparative study of existing two-dimensional face image databases. The database has been tested using two baseline algorithms, linear discriminant analysis and principal component analysis, whose results may serve as reference performance scores for other researchers.
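For readers who want to reproduce a comparable baseline, the sketch below chains PCA ("eigenfaces") and linear discriminant analysis with a nearest-neighbour matcher using scikit-learn. It runs on random stand-in data here; the component counts and split are arbitrary assumptions, and this is not the evaluation protocol used for the DeitY-TU database.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

# Stand-in data: flattened grayscale face images and subject labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 32 * 32))
y = rng.integers(0, 20, size=400)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# PCA for dimensionality reduction, LDA for class separation,
# nearest-neighbour matching for identification.
baseline = make_pipeline(PCA(n_components=50),
                         LinearDiscriminantAnalysis(),
                         KNeighborsClassifier(n_neighbors=1))
baseline.fit(X_tr, y_tr)
print("rank-1 identification rate:", baseline.score(X_te, y_te))
```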
Interactive searching of facial image databases
NASA Astrophysics Data System (ADS)
Nicholls, Robert A.; Shepherd, John W.; Shepherd, Jean
1995-09-01
A set of psychological facial descriptors has been devised to enable computerized searching of criminal photograph albums. The descriptors have been used to encode image databases of up to twelve thousand images. Using a system called FACES, the databases are searched by translating a witness' verbal description into corresponding facial descriptors. Trials of FACES have shown that this coding scheme is more productive and efficient than searching traditional photograph albums. An alternative method of searching the encoded database using a genetic algorithm is currently being tested. The genetic search method does not require the witness to verbalize a description of the target but merely to indicate a degree of similarity between the target and a limited selection of images from the database. The major drawback of FACES is that it requires manual encoding of images. Research is being undertaken to automate the process; however, it will require an algorithm which can predict human descriptive values. Alternatives to human-derived coding schemes exist using statistical classifications of images. Since databases encoded using statistical classifiers do not have an obvious direct mapping to human-derived descriptors, a search method which does not require the entry of human descriptors is required. A genetic search algorithm is being tested for such a purpose.
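A minimal sketch of the genetic search idea, assuming a purely hypothetical face-code encoding: the witness's similarity ratings act as the fitness function, and crossover plus mutation breed the next set of candidate images shown to the witness. This illustrates the general technique, not the encoding or parameters of the system described above.

```python
import random

CODE_LENGTH = 20        # hypothetical number of facial descriptor values
POPULATION = 9          # images shown to the witness per generation

def random_code():
    return [random.randint(0, 4) for _ in range(CODE_LENGTH)]

def crossover(parent_a, parent_b):
    cut = random.randrange(1, CODE_LENGTH)
    return parent_a[:cut] + parent_b[cut:]

def mutate(code, rate=0.05):
    return [random.randint(0, 4) if random.random() < rate else v for v in code]

def evolve(population, witness_scores):
    """Breed a new generation, sampling parents in proportion to the
    witness's similarity ratings for the current images."""
    children = []
    while len(children) < POPULATION:
        a, b = random.choices(population, weights=witness_scores, k=2)
        children.append(mutate(crossover(a, b)))
    return children

population = [random_code() for _ in range(POPULATION)]
scores = [random.random() for _ in population]   # witness ratings in the real system
population = evolve(population, scores)
```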
Conversion of a traditional image archive into an image resource on compact disc.
Andrew, S M; Benbow, E W
1997-01-01
A traditional archive of pathology images organised on 35 mm slides was converted into a database of images stored on compact disc (CD-ROM), and textual descriptions were added to each image record. Students on a didactic pathology course found this resource useful as an aid to revision, despite relative computer illiteracy, and it is anticipated that students on a new problem based learning course, which incorporates experience with information technology, will benefit even more readily when they use the database as an educational resource. A text and image database on CD-ROM can be updated repeatedly, and the content manipulated to reflect the content and style of the courses it supports. PMID:9306931
Quanbeck, Stephanie M.; Brachova, Libuse; Campbell, Alexis A.; Guan, Xin; Perera, Ann; He, Kun; Rhee, Seung Y.; Bais, Preeti; Dickerson, Julie A.; Dixon, Philip; Wohlgemuth, Gert; Fiehn, Oliver; Barkan, Lenore; Lange, Iris; Lange, B. Markus; Lee, Insuk; Cortes, Diego; Salazar, Carolina; Shuman, Joel; Shulaev, Vladimir; Huhman, David V.; Sumner, Lloyd W.; Roth, Mary R.; Welti, Ruth; Ilarslan, Hilal; Wurtele, Eve S.; Nikolau, Basil J.
2012-01-01
Metabolomics is the methodology that identifies and measures global pools of small molecules (of less than about 1,000 Da) of a biological sample, which are collectively called the metabolome. Metabolomics can therefore reveal the metabolic outcome of a genetic or environmental perturbation of a metabolic regulatory network, and thus provide insights into the structure and regulation of that network. Because of the chemical complexity of the metabolome and limitations associated with individual analytical platforms for determining the metabolome, it is currently difficult to capture the complete metabolome of an organism or tissue, which is in contrast to genomics and transcriptomics. This paper describes the analysis of Arabidopsis metabolomics data sets acquired by a consortium that includes five analytical laboratories, bioinformaticists, and biostatisticians, which aims to develop and validate metabolomics as a hypothesis-generating functional genomics tool. The consortium is determining the metabolomes of Arabidopsis T-DNA mutant stocks, grown in standardized controlled environment optimized to minimize environmental impacts on the metabolomes. Metabolomics data were generated with seven analytical platforms, and the combined data is being provided to the research community to formulate initial hypotheses about genes of unknown function (GUFs). A public database (www.PlantMetabolomics.org) has been developed to provide the scientific community with access to the data along with tools to allow for its interactive analysis. Exemplary datasets are discussed to validate the approach, which illustrate how initial hypotheses can be generated from the consortium-produced metabolomics data, integrated with prior knowledge to provide a testable hypothesis concerning the functionality of GUFs. PMID:22645570
DeLisa, J A; Jain, S S; Kirshblum, S
1998-01-01
Decision makers at the federal and state level are considering, and some states have enacted, a reduction in total United States residency positions, a shift in emphasis from specialist to generalist training, a need for programs to join together in training consortia to determine local residency position allocation strategy, a reduction in funding of international medical graduates, and a reduction in funding beyond the first certificate or a total of five years. A 5-page, 24-item questionnaire was sent to all physiatry residency training directors. The objective was to develop a descriptive database of physiatry training programs and of how their institutions might respond to cuts in graduate medical education funding. Fifty-eight (73%) of the questionnaires were returned. Most training directors believe that their primary mission is to train general physiatrists and, to a much lesser extent, to train subspecialty or research fellows. Directors were asked how they might handle reductions in house staff, such as using physician extenders, shifting clinical workload to faculty, hiring additional faculty, and funding physiatry residents from practice plans and endowments. Physiatry has had little experience (29%; 17/58) with voluntary graduate medical education consortiums, but most (67%; 34/58) seem to feel that if a consortium system is mandated, they would favor a local or regional over a national body because they do not believe the specialty has a strong enough national stature. The major barriers to a consortium for graduate medical education allocation were governance, academic, fiscal, and bureaucratic issues, and competition.
Lowe, H. J.
1993-01-01
This paper describes Image Engine, an object-oriented, microcomputer-based, multimedia database designed to facilitate the storage and retrieval of digitized biomedical still images, video, and text using inexpensive desktop computers. The current prototype runs on Apple Macintosh computers and allows network database access via peer-to-peer file-sharing protocols. Image Engine supports both free text and controlled vocabulary indexing of multimedia objects. The latter is implemented using the TView thesaurus model developed by the author. The current prototype of Image Engine uses the National Library of Medicine's Medical Subject Headings (MeSH) vocabulary (with UMLS Meta-1 extensions) as its indexing thesaurus. PMID:8130596
Fiber pixelated image database
NASA Astrophysics Data System (ADS)
Shinde, Anant; Perinchery, Sandeep Menon; Matham, Murukeshan Vadakke
2016-08-01
Imaging of physically inaccessible parts of the body such as the colon at micron-level resolution is highly important in diagnostic medical imaging. Though flexible endoscopes based on the imaging fiber bundle are used for such diagnostic procedures, their inherent honeycomb-like structure creates fiber pixelation effects. This impedes the observer from perceiving the information in a captured image and hinders the direct use of image processing and machine intelligence techniques on the recorded signal. Significant efforts have been made by researchers in the recent past in the development and implementation of pixelation removal techniques. However, researchers have often used their own sets of images without making the source data available, which has limited the broader use and adaptability of these techniques. A database of pixelated images is the current requirement to meet the growing diagnostic needs in the healthcare arena. An innovative fiber pixelated image database is presented, which consists of pixelated images that are synthetically generated and experimentally acquired. The sample space encompasses test patterns of different scales, sizes, and shapes. It is envisaged that this proposed database will alleviate the current limitations associated with relevant research and development and will be of great help for researchers working on comb structure removal algorithms.
QBIC project: querying images by content, using color, texture, and shape
NASA Astrophysics Data System (ADS)
Niblack, Carlton W.; Barber, Ron; Equitz, Will; Flickner, Myron D.; Glasman, Eduardo H.; Petkovic, Dragutin; Yanker, Peter; Faloutsos, Christos; Taubin, Gabriel
1993-04-01
In the query by image content (QBIC) project we are studying methods to query large on-line image databases using the images' content as the basis of the queries. Examples of the content we use include color, texture, and shape of image objects and regions. Potential applications include medical (`Give me other images that contain a tumor with a texture like this one'), photo-journalism (`Give me images that have blue at the top and red at the bottom'), and many others in art, fashion, cataloging, retailing, and industry. Key issues include derivation and computation of attributes of images and objects that provide useful query functionality, retrieval methods based on similarity as opposed to exact match, query by image example or user-drawn image, the user interfaces, query refinement and navigation, high-dimensional database indexing, and automatic and semi-automatic database population. We currently have a prototype system written in X/Motif and C running on an RS/6000 that allows a variety of queries, and a test database of over 1000 images and 1000 objects populated from commercially available photo clip art images. In this paper we present the main algorithms for color, texture, shape, and sketch queries that we use, show example query results, and discuss future directions.
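As an illustration of one ingredient of content-based querying, the sketch below ranks a database by color similarity using joint RGB histograms and histogram intersection; it is a generic example, not the QBIC color algorithm, and the bin count and image sizes are arbitrary assumptions.

```python
import numpy as np

def color_histogram(image, bins=8):
    """Normalised joint RGB histogram as a simple color signature."""
    hist, _ = np.histogramdd(image.reshape(-1, 3),
                             bins=(bins, bins, bins),
                             range=((0, 256),) * 3)
    hist = hist.ravel()
    return hist / hist.sum()

def query_by_color(query_image, database_images, top_k=5):
    """Rank database images by histogram-intersection similarity."""
    q = color_histogram(query_image)
    sims = [np.minimum(q, color_histogram(img)).sum() for img in database_images]
    return np.argsort(sims)[::-1][:top_k]

# Toy usage with random stand-in RGB images.
rng = np.random.default_rng(0)
database = [rng.integers(0, 256, size=(64, 64, 3)) for _ in range(20)]
print(query_by_color(database[3], database))   # image 3 should rank itself first
```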
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yan, Y; Medin, P; Yordy, J
2014-06-01
Purpose: To present a strategy to integrate the imaging database of a VERO unit with a treatment management system (TMS) to improve clinical workflow and consolidate image data to facilitate clinical quality control and documentation. Methods: A VERO unit is equipped with both kV and MV imaging capabilities for IGRT treatments. It has its own imaging database behind a firewall. It has been a challenge to transfer images on this unit to a TMS in a radiation therapy clinic so that registered images can be reviewed remotely with an approval or rejection record. In this study, a software system, iPump-VERO, was developed to connect VERO and a TMS in our clinic. The patient database folder on the VERO unit was mapped to a read-only folder on a file server outside the VERO firewall. The application runs on a regular computer with read access to the patient database folder. It finds the latest registered images and fuses them in one of six predefined patterns before sending them via a DICOM connection to the TMS. The residual image registration errors are overlaid on the fused image to facilitate image review. Results: The fused images of either registered kV planar images or CBCT images are fully DICOM compatible. A sentinel module is built to sense new registered images with negligible computing resources from the VERO ExacTrac imaging computer. It takes a few seconds to fuse registered images and send them to the TMS. The whole process is automated without any human intervention. Conclusion: Transferring images over a DICOM connection is the easiest way to consolidate images of various sources in a TMS. Technically, the attending does not have to go to the VERO treatment console to review image registration prior to delivery. It is a useful tool for a busy clinic with a VERO unit.
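A highly simplified sketch of the sentinel idea follows: a loop watches the read-only mapping of the patient database for new registration results, fuses the registered pair into one review image, and hands it to a transfer routine. The directory name, file extension, and the stubbed loading and DICOM send are all placeholders, not the iPump-VERO implementation.

```python
import os
import time
import numpy as np

WATCH_DIR = "/mnt/vero_patient_db"   # hypothetical read-only mapping of the VERO database

def fuse_side_by_side(left, right):
    """Fuse two registered kV planar images into one review image
    (one of several possible predefined layouts)."""
    return np.hstack([left, right])

def send_to_tms(image, patient_id):
    """Placeholder for the DICOM transfer to the treatment management system."""
    print(f"sending fused image for {patient_id}, shape {image.shape}")

seen = set()
while True:                          # sentinel loop; sleeps to keep resource use negligible
    for entry in os.scandir(WATCH_DIR):
        if entry.name.endswith(".reg") and entry.path not in seen:  # hypothetical marker file
            seen.add(entry.path)
            left, right = np.zeros((512, 512)), np.zeros((512, 512))  # stand-in image load
            send_to_tms(fuse_side_by_side(left, right), patient_id=entry.name)
    time.sleep(5)
```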
NOAA Data Rescue of Key Solar Databases and Digitization of Historical Solar Images
NASA Astrophysics Data System (ADS)
Coffey, H. E.
2006-08-01
Over a number of years, the staff at NOAA National Geophysical Data Center (NGDC) has worked to rescue key solar databases by converting them to digital format and making them available via the World Wide Web. NOAA has had several data rescue programs where staff compete for funds to rescue important and critical historical data that are languishing in archives and at risk of being lost due to deteriorating condition, loss of any metadata or descriptive text that describe the databases, lack of interest or funding in maintaining databases, etc. The Solar-Terrestrial Physics Division at NGDC was able to obtain funds to key in some critical historical tabular databases. Recently the NOAA Climate Database Modernization Program (CDMP) funded a project to digitize historical solar images, producing a large online database of historical daily full disk solar images. The images include the wavelengths Calcium K, Hydrogen Alpha, and white light photos, as well as sunspot drawings and the comprehensive drawings of a multitude of solar phenomena on one daily map (Fraunhofer maps and Wendelstein drawings). Included in the digitization are high resolution solar H-alpha images taken at the Boulder Solar Observatory 1967-1984. The scanned daily images document many phases of solar activity, from decadal variation to rotational variation to daily changes. Smaller versions are available online. Larger versions are available by request. See http://www.ngdc.noaa.gov/stp/SOLAR/ftpsolarimages.html. The tabular listings and solar imagery will be discussed.
New Software for Ensemble Creation in the Spitzer-Space-Telescope Operations Database
NASA Technical Reports Server (NTRS)
Laher, Russ; Rector, John
2004-01-01
Some of the computer pipelines used to process digital astronomical images from NASA's Spitzer Space Telescope require multiple input images in order to generate high-level science and calibration products. The images are grouped into ensembles according to well-documented ensemble-creation rules by making explicit associations in the operations Informix database at the Spitzer Science Center (SSC). The advantage of this approach is that a simple database query can retrieve the required ensemble of pipeline input images. New and improved software for ensemble creation has been developed. The new software is much faster than the existing software because it uses pre-compiled database stored procedures written in Informix SPL (Stored Procedure Language). The new software is also more flexible because the ensemble-creation rules are now stored in and read from newly defined database tables. This table-driven approach was implemented so that ensemble rules can be inserted, updated, or deleted without modifying software.
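The table-driven idea can be sketched with a small SQLite example in Python: a rules table names the grouping columns, an images table holds per-image metadata, and a grouping query materialises ensemble membership. The schema and the "same AOR and channel" rule are assumptions for illustration, not the SSC's Informix schema or SPL procedures.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Hypothetical table-driven layout: rules describe how images are grouped,
# ensemble_members records which images belong together.
cur.executescript("""
CREATE TABLE ensemble_rules (rule_id INTEGER PRIMARY KEY, instrument TEXT, group_by TEXT);
CREATE TABLE images (image_id INTEGER PRIMARY KEY, instrument TEXT, aor_key INTEGER, channel INTEGER);
CREATE TABLE ensemble_members (ensemble_id INTEGER, image_id INTEGER);
""")
cur.execute("INSERT INTO ensemble_rules VALUES (1, 'IRAC', 'aor_key, channel')")
cur.executemany("INSERT INTO images VALUES (?, ?, ?, ?)",
                [(1, "IRAC", 100, 1), (2, "IRAC", 100, 1), (3, "IRAC", 101, 2)])

# Apply rule 1: group IRAC images sharing an AOR key and channel into ensembles.
cur.execute("""
SELECT aor_key, channel, GROUP_CONCAT(image_id) FROM images
WHERE instrument = 'IRAC' GROUP BY aor_key, channel
""")
for ensemble_id, (aor, channel, members) in enumerate(cur.fetchall(), start=1):
    for image_id in members.split(","):
        cur.execute("INSERT INTO ensemble_members VALUES (?, ?)", (ensemble_id, int(image_id)))
conn.commit()
```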
Kılıç, Sefa; Sagitova, Dinara M; Wolfish, Shoshannah; Bely, Benoit; Courtot, Mélanie; Ciufo, Stacy; Tatusova, Tatiana; O'Donovan, Claire; Chibucos, Marcus C; Martin, Maria J; Erill, Ivan
2016-01-01
Domain-specific databases are essential resources for the biomedical community, leveraging expert knowledge to curate published literature and provide access to referenced data and knowledge. The limited scope of these databases, however, poses important challenges on their infrastructure, visibility, funding and usefulness to the broader scientific community. CollecTF is a community-oriented database documenting experimentally validated transcription factor (TF)-binding sites in the Bacteria domain. In its quest to become a community resource for the annotation of transcriptional regulatory elements in bacterial genomes, CollecTF aims to move away from the conventional data-repository paradigm of domain-specific databases. Through the adoption of well-established ontologies, identifiers and collaborations, CollecTF has progressively become also a portal for the annotation and submission of information on transcriptional regulatory elements to major biological sequence resources (RefSeq, UniProtKB and the Gene Ontology Consortium). This fundamental change in database conception capitalizes on the domain-specific knowledge of contributing communities to provide high-quality annotations, while leveraging the availability of stable information hubs to promote long-term access and provide high-visibility to the data. As a submission portal, CollecTF generates TF-binding site information through direct annotation of RefSeq genome records, definition of TF-based regulatory networks in UniProtKB entries and submission of functional annotations to the Gene Ontology. As a database, CollecTF provides enhanced search and browsing, targeted data exports, binding motif analysis tools and integration with motif discovery and search platforms. This innovative approach will allow CollecTF to focus its limited resources on the generation of high-quality information and the provision of specialized access to the data.Database URL: http://www.collectf.org/. © The Author(s) 2016. Published by Oxford University Press.
Perez-Riverol, Yasset; Alpi, Emanuele; Wang, Rui; Hermjakob, Henning; Vizcaíno, Juan Antonio
2015-03-01
Compared to other data-intensive disciplines such as genomics, public deposition and storage of MS-based proteomics data are still less developed due to, among other reasons, the inherent complexity of the data and the variety of data types and experimental workflows. In order to address this need, several public repositories for MS proteomics experiments have been developed, each with different purposes in mind. The most established resources are the Global Proteome Machine Database (GPMDB), PeptideAtlas, and the PRIDE database. Additionally, there are other useful (in many cases recently developed) resources such as ProteomicsDB, Mass Spectrometry Interactive Virtual Environment (MassIVE), Chorus, MaxQB, PeptideAtlas SRM Experiment Library (PASSEL), Model Organism Protein Expression Database (MOPED), and the Human Proteinpedia. In addition, the ProteomeXchange consortium has been recently developed to enable better integration of public repositories and the coordinated sharing of proteomics information, maximizing its benefit to the scientific community. Here, we will review each of the major proteomics resources independently and some tools that enable the integration, mining and reuse of the data. We will also discuss some of the major challenges and current pitfalls in the integration and sharing of the data. © 2014 The Authors. PROTEOMICS published by Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Minnett, R.; Koppers, A. A. P.; Jarboe, N.; Jonestrask, L.; Tauxe, L.; Constable, C.
2016-12-01
The Magnetics Information Consortium (https://earthref.org/MagIC/) develops and maintains a database and web application for supporting the paleo-, geo-, and rock magnetic scientific community. Historically, this objective has been met with an Oracle database and a Perl web application at the San Diego Supercomputer Center (SDSC). The Oracle Enterprise Cluster at SDSC, however, was decommissioned in July of 2016 and the cost for MagIC to continue using Oracle became prohibitive. This provided MagIC with a unique opportunity to reexamine the entire technology stack and data model. MagIC has developed an open-source web application using the Meteor (http://meteor.com) framework and a MongoDB database. The simplicity of the open-source full-stack framework that Meteor provides has improved MagIC's development pace and the increased flexibility of the data schema in MongoDB encouraged the reorganization of the MagIC Data Model. As a result of incorporating actively developed open-source projects into the technology stack, MagIC has benefited from their vibrant software development communities. This has translated into a more modern web application that has significantly improved the user experience for the paleo-, geo-, and rock magnetic scientific community.
VizieR Online Data Catalog: MHOs toward 22 regions with H2 fluxes (Wolf-Chase+, 2017)
NASA Astrophysics Data System (ADS)
Wolf-Chase, G.; Arvidsson, K.; Smutko, M.
2018-03-01
We obtained H2 2.12um, H2 2.25um, and H2 continuum images of 26 regions thought to contain massive YSOs, using the Near-infrared Camera and Fabry-Perot Spectrometer (NICFPS) on the Astrophysical Research Consortium (ARC) 3.5m telescope at the Apache Point Observatory (APO) in Sunspot, NM. (3 data files).
Mission Connect Mild TBI Translational Research Consortium
2013-08-01
year 2 and 3 annual progress report we completed the morphometric and the diffusion tensor imaging (DTI) in cortical impact injury with and without...and morphometric measures. Response to EAB feedback: No concerns were expressed by the External Advisory Board about the MRI core. Manuscript...automatic analysis of morphometry, DTI, and MTR for both humans and rodents. Completed morphometric and DTI analysis in traumatically injured animals
Perceptual quality prediction on authentically distorted images using a bag of features approach
Ghadiyaram, Deepti; Bovik, Alan C.
2017-01-01
Current top-performing blind perceptual image quality prediction models are generally trained on legacy databases of human quality opinion scores on synthetically distorted images. Therefore, they learn image features that effectively predict human visual quality judgments of inauthentic and usually isolated (single) distortions. However, real-world images usually contain complex composite mixtures of multiple distortions. We study the perceptually relevant natural scene statistics of such authentically distorted images in different color spaces and transform domains. We propose a “bag of feature maps” approach that avoids assumptions about the type of distortion(s) contained in an image and instead focuses on capturing consistencies—or departures therefrom—of the statistics of real-world images. Using a large database of authentically distorted images, human opinions of them, and bags of features computed on them, we train a regressor to conduct image quality prediction. We demonstrate the competence of the features toward improving automatic perceptual quality prediction by testing a learned algorithm using them on a benchmark legacy database as well as on a newly introduced distortion-realistic resource called the LIVE In the Wild Image Quality Challenge Database. We extensively evaluate the perceptual quality prediction model and algorithm and show that it is able to achieve good-quality prediction power that is better than other leading models. PMID:28129417
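A toy end-to-end sketch of the feature-then-regress idea is shown below: mean-subtracted contrast-normalised (MSCN) coefficients, a standard natural-scene-statistics transform, are summarised into a tiny feature vector and mapped to opinion scores with a support vector regressor. The feature set is far smaller than the paper's bag of feature maps, and the random images and scores are stand-ins for a real training database.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.svm import SVR

def mscn_coefficients(gray, sigma=7 / 6):
    """Mean-subtracted contrast-normalised coefficients of a grayscale image."""
    mu = gaussian_filter(gray, sigma)
    var = gaussian_filter(gray * gray, sigma) - mu * mu
    return (gray - mu) / (np.sqrt(np.abs(var)) + 1.0)

def feature_vector(gray):
    """A toy 'bag of features': a few moments of the MSCN map."""
    mscn = mscn_coefficients(gray)
    return np.array([mscn.mean(), mscn.var(), np.mean(np.abs(mscn)), np.mean(mscn ** 4)])

# Stand-in training data: random images paired with random opinion scores.
rng = np.random.default_rng(0)
images = rng.random((50, 64, 64))
opinion_scores = rng.uniform(0, 100, size=50)

X = np.array([feature_vector(img) for img in images])
regressor = SVR(kernel="rbf").fit(X, opinion_scores)
print("predicted quality:", regressor.predict(X[:1]))
```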
Code of Federal Regulations, 2013 CFR
2013-04-01
Title 25 (Indians), § 1000.73: Once a Tribe/Consortium has been awarded a grant, may the Tribe/Consortium obtain information from a non-BIA bureau?
Building structural similarity database for metric learning
NASA Astrophysics Data System (ADS)
Jin, Guoxin; Pappas, Thrasyvoulos N.
2015-03-01
We propose a new approach for constructing databases for training and testing similarity metrics for structurally lossless image compression. Our focus is on structural texture similarity (STSIM) metrics and the matched-texture compression (MTC) approach. We first discuss the metric requirements for structurally lossless compression, which differ from those of other applications such as image retrieval, classification, and understanding. We identify "interchangeability" as the key requirement for metric performance, and partition the domain of "identical" textures into three regions, of "highest," "high," and "good" similarity. We design two subjective tests for data collection: the first relies on ViSiProG to build a database of "identical" clusters, and the second builds a database of image pairs with the "highest," "high," "good," and "bad" similarity labels. The data for the subjective tests are generated during the MTC encoding process, and consist of pairs of candidate and target image blocks. The context of the surrounding image is critical for training the metrics to detect lighting discontinuities, spatial misalignments, and other border artifacts that have a noticeable effect on perceptual quality. The identical texture clusters are then used for training and testing two STSIM metrics. The labelled image pair database will be used in future research.
Location-Driven Image Retrieval for Images Collected by a Mobile Robot
NASA Astrophysics Data System (ADS)
Tanaka, Kanji; Hirayama, Mitsuru; Okada, Nobuhiro; Kondo, Eiji
Mobile robot teleoperation is a method for a human user to interact with a mobile robot over time and distance. Successful teleoperation depends on how well images taken by the mobile robot are visualized to the user. To enhance the efficiency and flexibility of the visualization, an image retrieval system on such a robot's image database would be very useful. The main difference between the robot's image database and standard image databases is that various relevant images exist due to the variety of viewing conditions. The main contribution of this paper is to propose an efficient retrieval approach, named the location-driven approach, utilizing correlation between visual features and real-world locations of images. Combining the location-driven approach with the conventional feature-driven approach, our goal can be viewed as finding an optimal classifier between relevant and irrelevant feature-location pairs. An active learning technique based on support vector machines is extended for this aim.
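A minimal sketch of the classifier view of the problem, under assumed data shapes: each sample concatenates a visual descriptor with the world coordinates at which the image was taken, relevance labels come from user feedback, and an SVM scores unseen feature-location pairs so the most probably relevant images are shown first. This illustrates the general technique rather than the paper's extended active-learning procedure.

```python
import numpy as np
from sklearn.svm import SVC

# Each sample is a feature-location pair: a visual descriptor concatenated
# with the (x, y) world coordinates of the image (shapes are assumptions).
rng = np.random.default_rng(0)
visual = rng.normal(size=(200, 16))
location = rng.uniform(0, 50, size=(200, 2))
pairs = np.hstack([visual, location])
labels = rng.integers(0, 2, size=200)     # 1 = relevant, 0 = irrelevant (user feedback)

classifier = SVC(kernel="rbf", probability=True).fit(pairs, labels)

def rank_images(candidate_pairs, top_k=10):
    """Show the candidates the classifier deems most probably relevant."""
    scores = classifier.predict_proba(candidate_pairs)[:, 1]
    return np.argsort(scores)[::-1][:top_k]

print(rank_images(pairs))
```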
Engineering Ligninolytic Consortium for Bioconversion of Lignocelluloses to Ethanol and Chemicals.
Bilal, Muhammad; Nawaz, Muhammad Zohaib; Iqbal, Hafiz M N; Hou, Jialin; Mahboob, Shahid; Al-Ghanim, Khalid A; Cheng, Hairong
2018-01-01
Rising environmental concerns and the recent global scenario of cleaner production and consumption are leading to the design of green industrial processes to produce alternative fuels and chemicals. Although bioethanol is one of the most promising and eco-friendly alternatives to fossil fuels, its production from food and feed has received much negative criticism. The main objective of this study was to present the noteworthy potential of lignocellulosic biomass as an enormous and renewable biological resource. Particular focus was also given to engineering a ligninolytic consortium for bioconversion of lignocelluloses to ethanol and chemicals on a sustainable and environmentally friendly basis. Herein, an effort has been made to extensively review, analyze and compile salient information related to the topic of interest. Several authentic bibliographic databases including PubMed, Scopus, Elsevier, Springer, Bentham Science and other scientific databases were searched with utmost care, and inclusion/exclusion criteria were adopted to appraise the quality of the retrieved peer-reviewed research literature. Bioethanol production from lignocellulosic biomass can largely overcome the criticisms of first-generation ethanol since it utilizes inedible lignocellulosic feedstocks, primarily sourced from agriculture and forestry wastes. Two major polysaccharides in lignocellulosic biomass, namely cellulose and hemicellulose, constitute a complex lignocellulosic network by connecting with lignin, which is highly recalcitrant to depolymerization. Several attempts have been made to reduce the cost involved in the process through improving the pretreatment process. Meanwhile, the ligninolytic enzymes of white rot fungi (WRF), including laccase, lignin peroxidase (LiP), and manganese peroxidase (MnP), have emerged as versatile biocatalysts for delignification of several lignocellulosic residues. The first part of the review is mainly focused on engineering the ligninolytic consortium. In the second part, WRF and its unique ligninolytic enzyme-based bio-delignification of lignocellulosic biomass, enzymatic hydrolysis, and fermentation of hydrolyzed feedstock are discussed. The metabolic engineering, enzymatic engineering, and synthetic biology aspects of ethanol production and platform chemicals production are comprehensively reviewed in the third part. Towards the end, information is also given on future perspectives. In conclusion, given the present unpredicted scenario of energy and fuel crisis accompanied by global warming, lignocellulosic bioethanol holds great promise as an alternative to petroleum. Apart from bioethanol, the simultaneous production of other value-added products may improve the economics of the lignocellulosic bioethanol bioconversion process. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
Architecture for biomedical multimedia information delivery on the World Wide Web
NASA Astrophysics Data System (ADS)
Long, L. Rodney; Goh, Gin-Hua; Neve, Leif; Thoma, George R.
1997-10-01
Research engineers at the National Library of Medicine are building a prototype system for the delivery of multimedia biomedical information on the World Wide Web. This paper discusses the architecture and design considerations for the system, which will be used initially to make images and text from the third National Health and Nutrition Examination Survey (NHANES) publicly available. We categorized our analysis as follows: (1) fundamental software tools: we analyzed trade-offs among use of conventional HTML/CGI, X Window Broadway, and Java; (2) image delivery: we examined the use of unconventional TCP transmission methods; (3) database manager and database design: we discuss the capabilities and planned use of the Informix object-relational database manager and the planned schema for the NHANES database; (4) storage requirements for our Sun server; (5) user interface considerations; (6) the compatibility of the system with other standard research and analysis tools; (7) image display: we discuss considerations for consistent image display for end users. Finally, we discuss the scalability of the system in terms of incorporating larger or more databases of similar data, and the extendibility of the system for supporting content-based retrieval of biomedical images. The system prototype is called the Web-based Medical Information Retrieval System. An early version was built as a Java applet and tested on Unix, PC, and Macintosh platforms. This prototype used the MiniSQL database manager to do text queries on a small database of records of participants in the second NHANES survey. The full records and associated x-ray images were retrievable and displayable on a standard Web browser. A second version has now been built, also a Java applet, using the MySQL database manager.
A complete database for the Einstein imaging proportional counter
NASA Technical Reports Server (NTRS)
Helfand, David J.
1991-01-01
A complete database for the Einstein Imaging Proportional Counter (IPC) has been assembled. The original data that make up the archive are described, as well as the structure of the database, the Op-Ed analysis system, the technical advances achieved relative to the analysis of IPC data, the data products produced, and some uses to which the database has been put by scientists outside Columbia University over the past year.
Mazziotta, John C; Woods, Roger; Iacoboni, Marco; Sicotte, Nancy; Yaden, Kami; Tran, Mary; Bean, Courtney; Kaplan, Jonas; Toga, Arthur W
2009-02-01
In the course of developing an atlas and reference system for the normal human brain throughout the human age span from structural and functional brain imaging data, the International Consortium for Brain Mapping (ICBM) developed a set of "normal" criteria for subject inclusion and the associated exclusion criteria. The approach was to minimize inclusion of subjects with any medical disorders that could affect brain structure or function. In the past two years, a group of 1685 potential subjects responded to solicitation advertisements at one of the consortium sites (UCLA). Subjects were screened by a detailed telephone interview and then had an in-person history and physical examination. Of those who responded to the advertisement and considered themselves to be normal, only 31.6% (532 subjects) passed the telephone screening process. Of the 348 individuals who submitted to in-person history and physical examinations, only 51.7% passed these screening procedures. Thus, only 10.7% of those individuals who responded to the original advertisement qualified for imaging. The most frequent cause for exclusion in the second phase of subject screening was high blood pressure followed by abnormal signs on neurological examination. It is concluded that the majority of individuals who consider themselves normal by self-report are found not to be so by detailed historical interviews about underlying medical conditions and by thorough medical and neurological examinations. Recommendations are made with regard to the inclusion of subjects in brain imaging studies and the criteria used to select them.
Chrom, Pawel; Stec, Rafal; Bodnar, Lubomir; Szczylik, Cezary
2018-01-01
Purpose: The study investigated whether a replacement of neutrophil count and platelet count by neutrophil-to-lymphocyte ratio (NLR) and platelet-to-lymphocyte ratio (PLR) within the International Metastatic Renal Cell Carcinoma Database Consortium (IMDC) model would improve its prognostic accuracy. Materials and Methods: This retrospective analysis included consecutive patients with metastatic renal cell carcinoma treated with first-line tyrosine kinase inhibitors. The IMDC and modified-IMDC models were compared using: concordance index (CI), bias-corrected concordance index (BCCI), calibration plots, the Grønnesby and Borgan test, Bayesian Information Criterion (BIC), generalized R2, Integrated Discrimination Improvement (IDI), and continuous Net Reclassification Index (cNRI) for individual risk factors and the three risk groups. Results: Three hundred and twenty-one patients were eligible for analyses. The modified-IMDC model with an NLR value of 3.6 and a PLR value of 157 was selected for comparison with the IMDC model. Both models were well calibrated. All other measures favoured the modified-IMDC model over the IMDC model (CI, 0.706 vs. 0.677; BCCI, 0.699 vs. 0.671; BIC, 2,176.2 vs. 2,190.7; generalized R2, 0.238 vs. 0.202; IDI, 0.044; cNRI, 0.279 for individual risk factors; and CI, 0.669 vs. 0.641; BCCI, 0.669 vs. 0.641; BIC, 2,183.2 vs. 2,198.1; generalized R2, 0.163 vs. 0.123; IDI, 0.045; cNRI, 0.165 for the three risk groups). Conclusion: Incorporation of NLR and PLR in place of neutrophil count and platelet count improved the prognostic accuracy of the IMDC model. These findings require external validation before introduction into clinical practice. PMID:28253564
Chrom, Pawel; Stec, Rafal; Bodnar, Lubomir; Szczylik, Cezary
2018-01-01
The study investigated whether a replacement of neutrophil count and platelet count by neutrophil-to-lymphocyte ratio (NLR) and platelet-to-lymphocyte ratio (PLR) within the International Metastatic Renal Cell Carcinoma Database Consortium (IMDC) model would improve its prognostic accuracy. This retrospective analysis included consecutive patients with metastatic renal cell carcinoma treated with first-line tyrosine kinase inhibitors. The IMDC and modified-IMDC models were compared using: concordance index (CI), bias-corrected concordance index (BCCI), calibration plots, the Grønnesby and Borgan test, Bayesian Information Criterion (BIC), generalized R2, Integrated Discrimination Improvement (IDI), and continuous Net Reclassification Index (cNRI) for individual risk factors and the three risk groups. Three hundred and twenty-one patients were eligible for analyses. The modified-IMDC model with NLR value of 3.6 and PLR value of 157 was selected for comparison with the IMDC model. Both models were well calibrated. All other measures favoured the modified-IMDC model over the IMDC model (CI, 0.706 vs. 0.677; BCCI, 0.699 vs. 0.671; BIC, 2,176.2 vs. 2,190.7; generalized R2, 0.238 vs. 0.202; IDI, 0.044; cNRI, 0.279 for individual risk factors; and CI, 0.669 vs. 0.641; BCCI, 0.669 vs. 0.641; BIC, 2,183.2 vs. 2,198.1; generalized R2, 0.163 vs. 0.123; IDI, 0.045; cNRI, 0.165 for the three risk groups). Incorporation of NLR and PLR in place of neutrophil count and platelet count improved prognostic accuracy of the IMDC model. These findings require external validation before introducing into clinical practice.
Hayn, Matthew H; Hussain, Abid; Mansour, Ahmed M; Andrews, Paul E; Carpentier, Paul; Castle, Erik; Dasgupta, Prokar; Rimington, Peter; Thomas, Raju; Khan, Shamim; Kibel, Adam; Kim, Hyung; Manoharan, Murugesan; Menon, Mani; Mottrie, Alex; Ornstein, David; Peabody, James; Pruthi, Raj; Palou Redorta, Joan; Richstone, Lee; Schanne, Francis; Stricker, Hans; Wiklund, Peter; Chandrasekhar, Rameela; Wilding, Greg E; Guru, Khurshid A
2010-08-01
Robot-assisted radical cystectomy (RARC) has evolved as a minimally invasive alternative to open radical cystectomy for patients with invasive bladder cancer. We sought to define the learning curve for RARC by evaluating results from a multicenter, contemporary, consecutive series of patients who underwent this procedure. Utilizing the International Robotic Cystectomy Consortium database, a prospectively maintained and institutional review board-approved database, we identified 496 patients who underwent RARC by 21 surgeons at 14 institutions from 2003 to 2009. Cut-off points for operative time, lymph node yield (LNY), estimated blood loss (EBL), and margin positivity were identified. Using specifically designed statistical mixed models, we were able to inversely predict the number of patients required for an institution to reach the predetermined cut-off points. Mean operative time was 386 min, mean EBL was 408 ml, and mean LNY was 18. Overall, 34 of 482 patients (7%) had a positive surgical margin (PSM). Using statistical models, it was estimated that 21 patients were required for operative time to reach 6.5h and 8, 20, and 30 patients were required to reach an LNY of 12, 16, and 20, respectively. For all patients, PSM rates of <5% were achieved after 30 patients. For patients with pathologic stage higher than T2, PSM rates of <15% were achieved after 24 patients. RARC is a challenging procedure but is a technique that is reproducible throughout multiple centers. This report helps to define the learning curve for RARC and demonstrates an acceptable level of proficiency by the 30th case for proxy measures of RARC quality. Copyright (c) 2010 European Association of Urology. Published by Elsevier B.V. All rights reserved.
National Land Cover Database 2001 (NLCD01)
LaMotte, Andrew E.
2016-01-01
This 30-meter data set represents land use and land cover for the conterminous United States for the 2001 time period. The data have been arranged into four tiles to facilitate timely display and manipulation within a Geographic Information System (see http://water.usgs.gov/GIS/browse/nlcd01-partition.jpg). The National Land Cover Data Set for 2001 was produced through a cooperative project conducted by the Multi-Resolution Land Characteristics (MRLC) Consortium. The MRLC Consortium is a partnership of Federal agencies (http://www.mrlc.gov), consisting of the U.S. Geological Survey (USGS), the National Oceanic and Atmospheric Administration (NOAA), the U.S. Environmental Protection Agency (USEPA), the U.S. Department of Agriculture (USDA), the U.S. Forest Service (USFS), the National Park Service (NPS), the U.S. Fish and Wildlife Service (USFWS), the Bureau of Land Management (BLM), and the USDA Natural Resources Conservation Service (NRCS). One of the primary goals of the project is to generate a current, consistent, seamless, and accurate National Land Cover Database (NLCD) circa 2001 for the United States at medium spatial resolution. For a detailed definition and discussion on MRLC and the NLCD 2001 products, refer to Homer and others (2004), (see: http://www.mrlc.gov/mrlc2k.asp). The NLCD 2001 was created by partitioning the United States into mapping zones. A total of 68 mapping zones (see http://water.usgs.gov/GIS/browse/nlcd01-mappingzones.jpg), were delineated within the conterminous United States based on ecoregion and geographical characteristics, edge-matching features, and the size requirement of Landsat mosaics. Mapping zones encompass the whole or parts of several states. Questions about the NLCD mapping zones can be directed to the NLCD 2001 Land Cover Mapping Team at the USGS/EROS, Sioux Falls, SD (605) 594-6151 or mrlc@usgs.gov.
LaMotte, Andrew E.; Wieczorek, Michael
2010-01-01
This 30-meter resolution data set represents the imperviousness layer for the conterminous United States for the 2001 time period. The data have been arranged into four tiles to facilitate timely display and manipulation within a Geographic Information System (browse graphic: nlcd01-partition.jpg). The National Land Cover Data Set for 2001 was produced through a cooperative project conducted by the Multi-Resolution Land Characteristics (MRLC) Consortium. The MRLC Consortium is a partnership of Federal agencies (www.mrlc.gov), consisting of the U.S. Geological Survey (USGS), the National Oceanic and Atmospheric Administration (NOAA), the U.S. Environmental Protection Agency (USEPA), the U.S. Department of Agriculture (USDA), the U.S. Forest Service (USFS), the National Park Service (NPS), the U.S. Fish and Wildlife Service (USFWS), the Bureau of Land Management (BLM), and the USDA Natural Resources Conservation Service (NRCS). One of the primary goals of the project is to generate a current, consistent, seamless, and accurate National Land Cover Database (NLCD) circa 2001 for the United States at medium spatial resolution. For a detailed definition and discussion on MRLC and the NLCD 2001 products, refer to Homer and others (2004) and http://www.mrlc.gov/mrlc2k.asp. The NLCD 2001 was created by partitioning the United States into mapping zones. A total of 68 mapping zones (browse graphic: nlcd01-mappingzones.jpg) were delineated within the conterminous United States based on ecoregion and geographical characteristics, edge-matching features, and the size requirement of Landsat mosaics. Mapping zones encompass the whole or parts of several states. Questions about the NLCD mapping zones can be directed to the NLCD 2001 Land Cover Mapping Team at the USGS/EROS, Sioux Falls, SD (605) 594-6151 or mrlc@usgs.gov.
LaMotte, Andrew E.; Wieczorek, Michael
2010-01-01
This 30-meter resolution data set represents the tree canopy layer for the conterminous United States for the 2001 time period. The data have been arranged into four tiles to facilitate timely display and manipulation within a Geographic Information System (browse graphic: nlcd01-partition.jpg). The National Land Cover Data Set for 2001 was produced through a cooperative project conducted by the Multi-Resolution Land Characteristics (MRLC) Consortium. The MRLC Consortium is a partnership of Federal agencies (www.mrlc.gov), consisting of the U.S. Geological Survey (USGS), the National Oceanic and Atmospheric Administration (NOAA), the U.S. Environmental Protection Agency (USEPA), the U.S. Department of Agriculture (USDA), the U.S. Forest Service (USFS), the National Park Service (NPS), the U.S. Fish and Wildlife Service (USFWS), the Bureau of Land Management (BLM), and the USDA Natural Resources Conservation Service (NRCS). One of the primary goals of the project is to generate a current, consistent, seamless, and accurate National Land Cover Database (NLCD) circa 2001 for the United States at medium spatial resolution. For a detailed definition and discussion on MRLC and the NLCD 2001 products, refer to Homer and others (2004) and http://www.mrlc.gov/mrlc2k.asp. The NLCD 2001 was created by partitioning the United States into mapping zones. A total of 68 mapping zones (browse graphic: nlcd01-mappingzones.jpg) were delineated within the conterminous United States based on ecoregion and geographical characteristics, edge-matching features, and the size requirement of Landsat mosaics. Mapping zones encompass the whole or parts of several states. Questions about the NLCD mapping zones can be directed to the NLCD 2001 Land Cover Mapping Team at the USGS/EROS, Sioux Falls, SD (605) 594-6151 or mrlc@usgs.gov.
Image-guided decision support system for pulmonary nodule classification in 3D thoracic CT images
NASA Astrophysics Data System (ADS)
Kawata, Yoshiki; Niki, Noboru; Ohmatsu, Hironobu; Kusumoto, Masahiro; Kakinuma, Ryutaro; Mori, Kiyoshi; Yamada, Kozo; Nishiyama, Hiroyuki; Eguchi, Kenji; Kaneko, Masahiro; Moriyama, Noriyuki
2004-05-01
The purpose of this study is to develop an image-guided decision support system that assists decision-making in the clinical differential diagnosis of pulmonary nodules. The approach retrieves and displays nodules whose morphological and internal profiles are consistent with the nodule in question, using a three-dimensional (3-D) CT image database of pulmonary nodules with known diagnoses. Building the system requires solving the following issues: 1) categorizing the nodule database with respect to morphological and internal features, 2) quickly searching a large database for nodule images similar to an indeterminate nodule, and 3) reporting a malignancy likelihood computed from the similar nodule images. In particular, the first issue influences the design of the other two. Successful categorization of nodule patterns might also help physicians find important cues that distinguish benign from malignant nodules. This paper focuses on an approach to categorize the nodule database with respect to nodule shape and the CT density patterns inside the nodule.
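As a hedged illustration of the kind of morphological and internal (CT density) descriptors such a categorization could start from, the sketch below computes a few simple features from a binary nodule mask and its CT volume. The feature set and the synthetic example are assumptions for illustration, not the authors' features.

```python
# Minimal sketch of simple shape and density descriptors for a segmented nodule.
# `ct_volume` (Hounsfield units) and `mask` (boolean) are assumed inputs.
import numpy as np

def nodule_descriptors(ct_volume: np.ndarray, mask: np.ndarray, voxel_mm=(1.0, 1.0, 1.0)):
    voxel_vol = float(np.prod(voxel_mm))            # mm^3 per voxel
    volume = mask.sum() * voxel_vol                 # nodule volume in mm^3
    density = ct_volume[mask]
    # Equivalent-sphere diameter as a crude morphological descriptor.
    eq_diameter = 2.0 * (3.0 * volume / (4.0 * np.pi)) ** (1.0 / 3.0)
    return {
        "volume_mm3": volume,
        "equivalent_diameter_mm": eq_diameter,
        "mean_hu": float(density.mean()),
        "std_hu": float(density.std()),
    }

# Example with a synthetic 8 mm spherical nodule of about -100 HU.
zz, yy, xx = np.mgrid[:32, :32, :32]
mask = (zz - 16) ** 2 + (yy - 16) ** 2 + (xx - 16) ** 2 <= 4 ** 2
ct = np.full((32, 32, 32), -800.0)
ct[mask] = -100.0
print(nodule_descriptors(ct, mask))
```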
Distributed data collection for a database of radiological image interpretations
NASA Astrophysics Data System (ADS)
Long, L. Rodney; Ostchega, Yechiam; Goh, Gin-Hua; Thoma, George R.
1997-01-01
The National Library of Medicine, in collaboration with the National Center for Health Statistics and the National Institute for Arthritis and Musculoskeletal and Skin Diseases, has built a system for collecting radiological interpretations for a large set of x-ray images acquired as part of the data gathered in the second National Health and Nutrition Examination Survey. This system is capable of delivering across the Internet 5- and 10-megabyte x-ray images to Sun workstations equipped with X Window based 2048 X 2560 image displays, for the purpose of having these images interpreted for the degree of presence of particular osteoarthritic conditions in the cervical and lumbar spines. The collected interpretations can then be stored in a database at the National Library of Medicine, under control of the Illustra DBMS. This system is a client/server database application which integrates (1) distributed server processing of client requests, (2) a customized image transmission method for faster Internet data delivery, (3) distributed client workstations with high resolution displays, image processing functions and an on-line digital atlas, and (4) relational database management of the collected data.
International Cancer Genome Consortium Data Portal--a one-stop shop for cancer genomics data.
Zhang, Junjun; Baran, Joachim; Cros, A; Guberman, Jonathan M; Haider, Syed; Hsu, Jack; Liang, Yong; Rivkin, Elena; Wang, Jianxin; Whitty, Brett; Wong-Erasmus, Marie; Yao, Long; Kasprzyk, Arek
2011-01-01
The International Cancer Genome Consortium (ICGC) is a collaborative effort to characterize genomic abnormalities in 50 different cancer types. To make this data available, the ICGC has created the ICGC Data Portal. Powered by the BioMart software, the Data Portal allows each ICGC member institution to manage and maintain its own databases locally, while seamlessly presenting all the data in a single access point for users. The Data Portal currently contains data from 24 cancer projects, including ICGC, The Cancer Genome Atlas (TCGA), Johns Hopkins University, and the Tumor Sequencing Project. It consists of 3478 genomes and 13 cancer types and subtypes. Available open access data types include simple somatic mutations, copy number alterations, structural rearrangements, gene expression, microRNAs, DNA methylation and exon junctions. Additionally, simple germline variations are available as controlled access data. The Data Portal uses a web-based graphical user interface (GUI) to offer researchers multiple ways to quickly and easily search and analyze the available data. The web interface can assist in constructing complicated queries across multiple data sets. Several application programming interfaces are also available for programmatic access. Here we describe the organization, functionality, and capabilities of the ICGC Data Portal.
Intelligent distributed medical image management
NASA Astrophysics Data System (ADS)
Garcia, Hong-Mei C.; Yun, David Y.
1995-05-01
The rapid advancements in high performance global communication have accelerated cooperative image-based medical services to a new frontier. Traditional image-based medical services such as radiology and diagnostic consultation can now fully utilize multimedia technologies in order to provide novel services, including remote cooperative medical triage, distributed virtual simulation of operations, as well as cross-country collaborative medical research and training. Fast (efficient) and easy (flexible) retrieval of relevant images remains a critical requirement for the provision of remote medical services. This paper describes the database system requirements, identifies technological building blocks for meeting the requirements, and presents a system architecture for our target image database system, MISSION-DBS, which has been designed to fulfill the goals of Project MISSION (medical imaging support via satellite integrated optical network) -- an experimental high performance gigabit satellite communication network with access to remote supercomputing power, medical image databases, and 3D visualization capabilities in addition to medical expertise anywhere and anytime around the country. The MISSION-DBS design employs a synergistic fusion of techniques in distributed databases (DDB) and artificial intelligence (AI) for storing, migrating, accessing, and exploring images. The efficient storage and retrieval of voluminous image information is achieved by integrating DDB modeling and AI techniques for image processing while the flexible retrieval mechanisms are accomplished by combining attribute- based and content-based retrievals.
DIGITAL CARTOGRAPHY OF THE PLANETS: NEW METHODS, ITS STATUS, AND ITS FUTURE.
Batson, R.M.
1987-01-01
A system has been developed that establishes a standardized cartographic database for each of the 19 planets and major satellites that have been explored to date. Compilation of the databases involves both traditional and newly developed digital image processing and mosaicking techniques, including radiometric and geometric corrections of the images. Each database, or digital image model (DIM), is a digital mosaic of spacecraft images that have been radiometrically and geometrically corrected and photometrically modeled. During compilation, ancillary data files such as radiometric calibrations and refined photometric values for all camera lens and filter combinations and refined camera-orientation matrices for all images used in the mapping are produced.
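A hedged sketch of the basic radiometric step (dark-frame subtraction and flat-field division) that precedes geometric correction and mosaicking is shown below; the calibration frames and their handling are generic assumptions, not the USGS planetary pipeline.

```python
# Minimal sketch of first-order radiometric correction of a spacecraft frame.
# `raw`, `dark`, and `flat` are assumed to be co-registered 2-D arrays.
import numpy as np

def radiometric_correction(raw: np.ndarray, dark: np.ndarray, flat: np.ndarray) -> np.ndarray:
    """Subtract the dark current, divide by the normalized flat field,
    and clip negative values introduced by noise."""
    flat_norm = flat / flat.mean()
    corrected = (raw - dark) / np.clip(flat_norm, 1e-6, None)
    return np.clip(corrected, 0, None)
```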
Blokland, Gabriëlla A M; Del Re, Elisabetta C; Mesholam-Gately, Raquelle I; Jovicich, Jorge; Trampush, Joey W; Keshavan, Matcheri S; DeLisi, Lynn E; Walters, James T R; Turner, Jessica A; Malhotra, Anil K; Lencz, Todd; Shenton, Martha E; Voineskos, Aristotle N; Rujescu, Dan; Giegling, Ina; Kahn, René S; Roffman, Joshua L; Holt, Daphne J; Ehrlich, Stefan; Kikinis, Zora; Dazzan, Paola; Murray, Robin M; Di Forti, Marta; Lee, Jimmy; Sim, Kang; Lam, Max; Wolthusen, Rick P F; de Zwarte, Sonja M C; Walton, Esther; Cosgrove, Donna; Kelly, Sinead; Maleki, Nasim; Osiecki, Lisa; Picchioni, Marco M; Bramon, Elvira; Russo, Manuela; David, Anthony S; Mondelli, Valeria; Reinders, Antje A T S; Falcone, M Aurora; Hartmann, Annette M; Konte, Bettina; Morris, Derek W; Gill, Michael; Corvin, Aiden P; Cahn, Wiepke; Ho, New Fei; Liu, Jian Jun; Keefe, Richard S E; Gollub, Randy L; Manoach, Dara S; Calhoun, Vince D; Schulz, S Charles; Sponheim, Scott R; Goff, Donald C; Buka, Stephen L; Cherkerzian, Sara; Thermenos, Heidi W; Kubicki, Marek; Nestor, Paul G; Dickie, Erin W; Vassos, Evangelos; Ciufolini, Simone; Reis Marques, Tiago; Crossley, Nicolas A; Purcell, Shaun M; Smoller, Jordan W; van Haren, Neeltje E M; Toulopoulou, Timothea; Donohoe, Gary; Goldstein, Jill M; Seidman, Larry J; McCarley, Robert W; Petryshen, Tracey L
2018-05-01
Schizophrenia has a large genetic component, and the pathways from genes to illness manifestation are beginning to be identified. The Genetics of Endophenotypes of Neurofunction to Understand Schizophrenia (GENUS) Consortium aims to clarify the role of genetic variation in brain abnormalities underlying schizophrenia. This article describes the GENUS Consortium sample collection. We identified existing samples collected for schizophrenia studies consisting of patients, controls, and/or individuals at familial high-risk (FHR) for schizophrenia. Samples had single nucleotide polymorphism (SNP) array data or genomic DNA, clinical and demographic data, and neuropsychological and/or brain magnetic resonance imaging (MRI) data. Data were subjected to quality control procedures at a central site. Sixteen research groups contributed data from 5199 psychosis patients, 4877 controls, and 725 FHR individuals. All participants have relevant demographic data and all patients have relevant clinical data. The sex ratio is 56.5% male and 43.5% female. Significant differences exist between diagnostic groups for premorbid and current IQ (both p < 1×10⁻¹⁰). Data from a diversity of neuropsychological tests are available for 92% of participants, and 30% have structural MRI scans (half also have diffusion-weighted MRI scans). SNP data are available for 76% of participants. The ancestry composition is 70% European, 20% East Asian, 7% African, and 3% other. The Consortium is investigating the genetic contribution to brain phenotypes in a schizophrenia sample collection of >10,000 participants. The breadth of data across clinical, genetic, neuropsychological, and MRI modalities provides an important opportunity for elucidating the genetic basis of neural processes underlying schizophrenia. Copyright © 2017 Elsevier B.V. All rights reserved.
Tagliaferri, Luca; Gobitti, Carlo; Colloca, Giuseppe Ferdinando; Boldrini, Luca; Farina, Eleonora; Furlan, Carlo; Paiar, Fabiola; Vianello, Federica; Basso, Michela; Cerizza, Lorenzo; Monari, Fabio; Simontacchi, Gabriele; Gambacorta, Maria Antonietta; Lenkowicz, Jacopo; Dinapoli, Nicola; Lanzotti, Vito; Mazzarotto, Renzo; Russi, Elvio; Mangoni, Monica
2018-07-01
The big data approach offers a powerful alternative to evidence-based medicine: it could guide cancer management through the application of machine learning to large-scale data. The aim of the Thyroid CoBRA (Consortium for Brachytherapy Data Analysis) project is to develop a standardized web data collection system focused on thyroid cancer. The Metabolic Radiotherapy Working Group of the Italian Association of Radiation Oncology (AIRO) endorsed the implementation of a consortium directed to thyroid cancer management and data collection. The agreement conditions, the ontology of the collected data, and the related software services were defined by a multicentre ad hoc working group (WG). Six Italian cancer centres initially started the project and defined and signed the Thyroid CoBRA consortium agreement. Three data set tiers were identified: Registry, Procedures, and Research. The CoBRA Storage System (C-SS) appeared not to be time-consuming and to respect privacy, as data can be extracted directly from each centre's storage platform through a secured connection that ensures reliable encryption of sensitive data. Automatic data archiving can be performed directly from the hospital image storage systems or the radiotherapy treatment planning systems. The C-SS architecture will allow "cloud storage" or "distributed learning" approaches for predictive model definition and the further development of clinical decision support tools. The development of the Thyroid CoBRA data storage system (C-SS) through a multicentre consortium approach appeared to be a feasible way to set up a complex, privacy-preserving data-sharing system oriented to the management of thyroid cancer and, in the near future, of every cancer type. Copyright © 2018 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
A mathematical model of neuro-fuzzy approximation in image classification
NASA Astrophysics Data System (ADS)
Gopalan, Sasi; Pinto, Linu; Sheela, C.; Arun Kumar M., N.
2016-06-01
Image digitization and the explosion of the World Wide Web have made traditional text-based search an inefficient method for retrieving grassland image data from large databases. For a given query image, a content-based image retrieval (CBIR) system retrieves similar images from a large database. Advances in technology have increased the use of grassland image data in diverse areas such as agriculture, art galleries, education, and industry. In all these areas it is necessary to retrieve grassland image data efficiently from a large database in order to perform an assigned task and make a suitable decision. This paper proposes a CBIR system based on grassland image properties that uses a feed-forward back-propagation neural network for effective image retrieval. Fuzzy memberships play an important role in the input space of the proposed system, leading to a combined neuro-fuzzy approximation in image classification. The mathematical model underlying the proposed CBIR system clarifies the neuro-fuzzy approximation and the convergence of image features in a grassland image.
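A minimal sketch of the combined idea follows: Gaussian fuzzy memberships computed on an image feature vector form the input space of a small feed-forward network trained by back-propagation. The feature values, membership centres, and network shape are illustrative assumptions, not the paper's model.

```python
# Minimal neuro-fuzzy sketch: fuzzify scalar image features with Gaussian
# memberships, then classify with a small feed-forward network (scikit-learn).
import numpy as np
from sklearn.neural_network import MLPClassifier

CENTRES = np.array([0.2, 0.5, 0.8])   # "low", "medium", "high" membership centres
SIGMA = 0.15

def fuzzify(features: np.ndarray) -> np.ndarray:
    """Map each scalar feature in [0, 1] to three Gaussian membership degrees."""
    memberships = np.exp(-((features[:, :, None] - CENTRES) ** 2) / (2 * SIGMA ** 2))
    return memberships.reshape(len(features), -1)

# Synthetic stand-in for normalized colour/texture features of grassland images.
rng = np.random.default_rng(1)
features = rng.random((200, 4))                      # 4 normalized features per image
labels = (features[:, 0] + features[:, 2] > 1.0).astype(int)

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=1)
model.fit(fuzzify(features), labels)
print("training accuracy:", model.score(fuzzify(features), labels))
```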
A database system to support image algorithm evaluation
NASA Technical Reports Server (NTRS)
Lien, Y. E.
1977-01-01
The design is given of an interactive image database system IMDB, which allows the user to create, retrieve, store, display, and manipulate images through the facility of a high-level, interactive image query (IQ) language. The query language IQ permits the user to define false color functions, pixel value transformations, overlay functions, zoom functions, and windows. The user manipulates the images through generic functions. The user can direct images to display devices for visual and qualitative analysis. Image histograms and pixel value distributions can also be computed to obtain a quantitative analysis of images.
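The kinds of generic functions the IQ language exposes (pixel-value transformations, windows, histograms) map directly onto array operations. The sketch below is a loose NumPy analogue written for illustration; it is not the IMDB/IQ implementation, and the look-up-table "false colour" remap is only a stand-in.

```python
# Loose NumPy analogue of a few IQ-style generic image functions.
import numpy as np

def pixel_transform(image: np.ndarray, lut: np.ndarray) -> np.ndarray:
    """Apply a pixel-value transformation via a look-up table (8-bit image)."""
    return lut[image]

def window(image: np.ndarray, row: int, col: int, height: int, width: int) -> np.ndarray:
    """Extract a rectangular window from the image."""
    return image[row:row + height, col:col + width]

def histogram(image: np.ndarray, bins: int = 256) -> np.ndarray:
    """Pixel value distribution for quantitative analysis."""
    counts, _ = np.histogram(image, bins=bins, range=(0, bins))
    return counts

image = np.random.default_rng(2).integers(0, 256, size=(128, 128), dtype=np.uint8)
negative_lut = np.arange(255, -1, -1, dtype=np.uint8)   # simple intensity-inversion remap
print(histogram(window(pixel_transform(image, negative_lut), 0, 0, 64, 64)).sum())  # 64 * 64 = 4096
```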
Image ratio features for facial expression recognition application.
Song, Mingli; Tao, Dacheng; Liu, Zicheng; Li, Xuelong; Zhou, Mengchu
2010-06-01
Video-based facial expression recognition is a challenging problem in computer vision and human-computer interaction. To address this problem, texture features have been widely extracted and used because they can capture the image intensity changes caused by skin deformation. However, existing texture features encounter problems with albedo and lighting variations. To solve both problems, we propose a new texture feature called image ratio features. Compared with previously proposed texture features, e.g., high gradient component features, image ratio features are more robust to albedo and lighting variations. In addition, to further improve facial expression recognition accuracy based on image ratio features, we combine image ratio features with facial animation parameters (FAPs), which describe the geometric motions of facial feature points. The performance evaluation is based on the Carnegie Mellon University Cohn-Kanade database, our own database, and the Japanese Female Facial Expression database. Experimental results show that the proposed image ratio feature is more robust to albedo and lighting variations, and the combination of image ratio features and FAPs outperforms each feature alone. In addition, we study asymmetric facial expressions based on our own facial expression database and demonstrate the superior performance of our combined expression recognition system.
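A hedged sketch of the core idea behind ratio features follows: dividing an expression image by a reference image of the same aligned face approximately cancels albedo, leaving the intensity change due to skin deformation. The normalization and the synthetic check are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of an image-ratio feature map between an expression frame and
# a neutral reference frame of the same (aligned) face.
import numpy as np

def image_ratio(expression: np.ndarray, neutral: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Pixel-wise ratio; albedo appears in both numerator and denominator and cancels."""
    expr = expression.astype(np.float64)
    ref = neutral.astype(np.float64)
    return expr / np.maximum(ref, eps)

# Toy check: scaling both images by the same albedo/lighting factor leaves the ratio unchanged.
rng = np.random.default_rng(3)
neutral = rng.random((64, 64)) + 0.5
expression = neutral * (1.0 + 0.1 * rng.standard_normal((64, 64)))   # simulated deformation
assert np.allclose(image_ratio(2.0 * expression, 2.0 * neutral), image_ratio(expression, neutral))
```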
Feature maps driven no-reference image quality prediction of authentically distorted images
NASA Astrophysics Data System (ADS)
Ghadiyaram, Deepti; Bovik, Alan C.
2015-03-01
Current blind image quality prediction models rely on benchmark databases comprised of singly and synthetically distorted images, thereby learning image features that are only adequate to predict human perceived visual quality on such inauthentic distortions. However, real world images often contain complex mixtures of multiple distortions. Rather than a) discounting the effect of these mixtures of distortions on an image's perceptual quality and considering only the dominant distortion or b) using features that are only proven to be efficient for singly distorted images, we deeply study the natural scene statistics of authentically distorted images, in different color spaces and transform domains. We propose a feature-maps-driven statistical approach which avoids any latent assumptions about the type of distortion(s) contained in an image, and focuses instead on modeling the remarkable consistencies in the scene statistics of real world images in the absence of distortions. We design a deep belief network that takes model-based statistical image features derived from a very large database of authentically distorted images as input and discovers good feature representations by generalizing over different distortion types, mixtures, and severities, which are later used to learn a regressor for quality prediction. We demonstrate the remarkable competence of our features for improving automatic perceptual quality prediction on a benchmark database and on the newly designed LIVE Authentic Image Quality Challenge Database and show that our approach of combining robust statistical features and the deep belief network dramatically outperforms the state-of-the-art.
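One widely used family of model-based natural-scene-statistics features in this line of work is mean-subtracted, contrast-normalized (MSCN) coefficients. The sketch below computes them with a Gaussian local window as an illustration of the idea; it is not a reproduction of the paper's feature set, and the window size and constant are assumed values.

```python
# Minimal sketch of MSCN (mean-subtracted, contrast-normalized) coefficients,
# a common natural-scene-statistics feature for blind quality prediction.
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn(image: np.ndarray, sigma: float = 7.0 / 6.0, c: float = 1.0) -> np.ndarray:
    img = image.astype(np.float64)
    mu = gaussian_filter(img, sigma)                                          # local mean
    sigma_map = np.sqrt(np.abs(gaussian_filter(img * img, sigma) - mu * mu))  # local std
    return (img - mu) / (sigma_map + c)

image = np.random.default_rng(4).random((96, 96)) * 255.0
coeffs = mscn(image)
# For natural images the MSCN histogram is roughly Gaussian; its moments can feed a regressor.
print(coeffs.mean(), coeffs.std())
```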
Recent Developments in Cultural Heritage Image Databases: Directions for User-Centered Design.
ERIC Educational Resources Information Center
Stephenson, Christie
1999-01-01
Examines the Museum Educational Site Licensing (MESL) Project--a cooperative project between seven cultural heritage repositories and seven universities--as well as other developments of cultural heritage image databases for academic use. Reviews recent literature on image indexing and retrieval, interface design, and tool development, urging a…
EROS Main Image File: A Picture Perfect Database for Landsat Imagery and Aerial Photography.
ERIC Educational Resources Information Center
Jack, Robert F.
1984-01-01
Describes Earth Resources Observation System online database, which provides access to computerized images of Earth obtained via satellite. Highlights include retrieval system and commands, types of images, search strategies, other online functions, and interpretation of accessions. Satellite information, sources and samples of accessions, and…
A high-performance spatial database based approach for pathology imaging algorithm evaluation
Wang, Fusheng; Kong, Jun; Gao, Jingjing; Cooper, Lee A.D.; Kurc, Tahsin; Zhou, Zhengwen; Adler, David; Vergara-Niedermayr, Cristobal; Katigbak, Bryan; Brat, Daniel J.; Saltz, Joel H.
2013-01-01
Background: Algorithm evaluation provides a means to characterize variability across image analysis algorithms, validate algorithms by comparison with human annotations, combine results from multiple algorithms for performance improvement, and facilitate algorithm sensitivity studies. The sizes of images and image analysis results in pathology image analysis pose significant challenges in algorithm evaluation. We present an efficient parallel spatial database approach to model, normalize, manage, and query large volumes of analytical image result data. This provides an efficient platform for algorithm evaluation. Our experiments with a set of brain tumor images demonstrate the application, scalability, and effectiveness of the platform. Context: The paper describes an approach and platform for evaluation of pathology image analysis algorithms. The platform facilitates algorithm evaluation through a high-performance database built on the Pathology Analytic Imaging Standards (PAIS) data model. Aims: (1) Develop a framework to support algorithm evaluation by modeling and managing analytical results and human annotations from pathology images; (2) Create a robust data normalization tool for converting, validating, and fixing spatial data from algorithm or human annotations; (3) Develop a set of queries to support data sampling and result comparisons; (4) Achieve high performance computation capacity via a parallel data management infrastructure, parallel data loading and spatial indexing optimizations in this infrastructure. Materials and Methods: We have considered two scenarios for algorithm evaluation: (1) algorithm comparison where multiple result sets from different methods are compared and consolidated; and (2) algorithm validation where algorithm results are compared with human annotations. We have developed a spatial normalization toolkit to validate and normalize spatial boundaries produced by image analysis algorithms or human annotations. The validated data were formatted based on the PAIS data model and loaded into a spatial database. To support efficient data loading, we have implemented a parallel data loading tool that takes advantage of multi-core CPUs to accelerate data injection. The spatial database manages both geometric shapes and image features or classifications, and enables spatial sampling, result comparison, and result aggregation through expressive structured query language (SQL) queries with spatial extensions. To provide scalable and efficient query support, we have employed a shared nothing parallel database architecture, which distributes data homogenously across multiple database partitions to take advantage of parallel computation power and implements spatial indexing to achieve high I/O throughput. Results: Our work proposes a high performance, parallel spatial database platform for algorithm validation and comparison. This platform was evaluated by storing, managing, and comparing analysis results from a set of brain tumor whole slide images. The tools we develop are open source and available to download. Conclusions: Pathology image algorithm validation and comparison are essential to iterative algorithm development and refinement. One critical component is the support for queries involving spatial predicates and comparisons. In our work, we develop an efficient data model and parallel database approach to model, normalize, manage and query large volumes of analytical image result data. 
Our experiments demonstrate that the data partitioning strategy and the grid-based indexing result in good data distribution across database nodes and reduce I/O overhead in spatial join queries through parallel retrieval of relevant data and quick subsetting of datasets. The set of tools in the framework provide a full pipeline to normalize, load, manage and query analytical results for algorithm evaluation. PMID:23599905
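A hedged sketch of the kind of spatial comparison such validation queries support is shown below, done client-side with Shapely for one algorithm-produced boundary and one human annotation; the spatial database described here would express the analogous overlap test in SQL with spatial extensions, and the polygons are made up.

```python
# Minimal sketch: compare an algorithm-produced boundary with a human annotation
# using intersection-over-union (Jaccard), a typical validation measure.
from shapely.geometry import Polygon

def jaccard(a: Polygon, b: Polygon) -> float:
    inter = a.intersection(b).area
    union = a.union(b).area
    return inter / union if union > 0 else 0.0

algorithm_boundary = Polygon([(0, 0), (10, 0), (10, 10), (0, 10)])
human_annotation = Polygon([(2, 2), (12, 2), (12, 12), (2, 12)])
print(f"Jaccard overlap: {jaccard(algorithm_boundary, human_annotation):.2f}")  # ~0.47
```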
2017-09-14
SCI2016_0001: SOFIA/GREAT [O I] spectrum at 4.7 THz (63 μm) superimposed on a picture of Mars. Absorption line depth is approximately 10% of the continuum. The abundance of atomic oxygen computed from the data is less than expected from the Forget et al. 1999 global circulation & photochemical model. Credit: SOFIA/GREAT spectrum: NASA/DLR/USRA/DSI/MPIfR/GREAT Consortium/MPIfS/Rezac et al. 2015; Mars image: NASA
NASA Astrophysics Data System (ADS)
Konstantinidis, A.; Anaxagoras, T.; Esposito, M.; Allinson, N.; Speller, R.
2012-03-01
X-ray diffraction studies are used to identify specific materials, and several laboratory-based x-ray diffraction studies have been made for breast cancer diagnosis. Ideally, a large-area, low-noise, linear, wide-dynamic-range digital x-ray detector is required to perform x-ray diffraction measurements. Recently, digital detectors based on Complementary Metal-Oxide-Semiconductor (CMOS) Active Pixel Sensor (APS) technology have been used in x-ray diffraction studies. Two APS detectors, namely Vanilla and the Large Area Sensor (LAS), were developed by the Multidimensional Integrated Intelligent Imaging (MI-3) consortium to cover a range of scientific applications including x-ray diffraction. The MI-3 Plus consortium developed a novel large-area APS, named Dynamically Adjustable Medical Imaging Technology (DynAMITe), to combine the key characteristics of Vanilla and LAS with a number of extra features. The active area (12.8 × 13.1 cm²) of DynAMITe enables angle-dispersive x-ray diffraction (ADXRD). The current study demonstrates the feasibility of using DynAMITe for breast cancer diagnosis by identifying six breast-equivalent plastics. Further work will be done to optimize the system in order to perform ADXRD for identification of suspicious areas of breast tissue following a conventional mammogram taken with the same sensor.
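In angle-dispersive x-ray diffraction the scattering angle at a fixed wavelength identifies the lattice spacing through Bragg's law, n·λ = 2·d·sin(θ). The sketch below computes the expected Bragg angle for a given d-spacing; the numerical values are chosen only for illustration and are not taken from this study.

```python
# Bragg's law sketch for angle-dispersive diffraction: n * lambda = 2 * d * sin(theta).
import math

def bragg_angle_deg(wavelength_nm: float, d_spacing_nm: float, order: int = 1) -> float:
    """Return the Bragg angle theta (degrees) for a given d-spacing and wavelength."""
    s = order * wavelength_nm / (2.0 * d_spacing_nm)
    if not 0 < s <= 1:
        raise ValueError("No diffraction: sin(theta) out of range")
    return math.degrees(math.asin(s))

# Illustrative values: ~17.5 keV x-rays (0.071 nm) and a 0.45 nm d-spacing.
print(f"theta = {bragg_angle_deg(0.071, 0.45):.2f} degrees")
```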
Thompson, Paul M.
2016-01-01
Sex differences in brain development and aging are important to identify, as they may help to understand risk factors and outcomes in brain disorders that are more prevalent in one sex compared with the other. Brain imaging techniques have advanced rapidly in recent years, yielding detailed structural and functional maps of the living brain. Even so, studies are often limited in sample size, and inconsistent findings emerge, one example being varying findings regarding sex differences in the size of the corpus callosum. More recently, large‐scale neuroimaging consortia such as the Enhancing Neuro Imaging Genetics through Meta Analysis Consortium have formed, pooling together expertise, data, and resources from hundreds of institutions around the world to ensure adequate power and reproducibility. These initiatives are helping us to better understand how brain structure is affected by development, disease, and potential modulators of these effects, including sex. This review highlights some established and disputed sex differences in brain structure across the life span, as well as pitfalls related to interpreting sex differences in health and disease. We also describe sex‐related findings from the ENIGMA consortium, and ongoing efforts to better understand sex differences in brain circuitry. © 2016 The Authors. Journal of Neuroscience Research Published by Wiley Periodicals, Inc. PMID:27870421
Case retrieval in medical databases by fusing heterogeneous information.
Quellec, Gwénolé; Lamard, Mathieu; Cazuguel, Guy; Roux, Christian; Cochener, Béatrice
2011-01-01
A novel content-based heterogeneous information retrieval framework, particularly well suited to browse medical databases and support new generation computer aided diagnosis (CADx) systems, is presented in this paper. It was designed to retrieve possibly incomplete documents, consisting of several images and semantic information, from a database; more complex data types such as videos can also be included in the framework. The proposed retrieval method relies on image processing, in order to characterize each individual image in a document by their digital content, and information fusion. Once the available images in a query document are characterized, a degree of match, between the query document and each reference document stored in the database, is defined for each attribute (an image feature or a metadata). A Bayesian network is used to recover missing information if need be. Finally, two novel information fusion methods are proposed to combine these degrees of match, in order to rank the reference documents by decreasing relevance for the query. In the first method, the degrees of match are fused by the Bayesian network itself. In the second method, they are fused by the Dezert-Smarandache theory: the second approach lets us model our confidence in each source of information (i.e., each attribute) and take it into account in the fusion process for a better retrieval performance. The proposed methods were applied to two heterogeneous medical databases, a diabetic retinopathy database and a mammography screening database, for computer aided diagnosis. Precisions at five of 0.809 ± 0.158 and 0.821 ± 0.177, respectively, were obtained for these two databases, which is very promising.
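A minimal sketch of the simplest possible fusion step follows: per-attribute degrees of match are combined into one relevance score and the reference documents are ranked. It uses a plain weighted average rather than the paper's Bayesian-network or Dezert-Smarandache fusion, and the attribute names, weights, and case values are illustrative.

```python
# Minimal sketch: rank reference documents by a weighted average of per-attribute
# degrees of match (missing attributes are simply skipped here).
from typing import Dict

# Confidence weights per attribute (hypothetical values).
WEIGHTS = {"image_feature": 0.6, "age": 0.2, "lesion_count": 0.2}

def fuse(degrees_of_match: Dict[str, float]) -> float:
    """Weighted average over the attributes that are actually present."""
    total_weight = sum(WEIGHTS[a] for a in degrees_of_match)
    if total_weight == 0:
        return 0.0
    return sum(WEIGHTS[a] * d for a, d in degrees_of_match.items()) / total_weight

reference_documents = {
    "case_017": {"image_feature": 0.9, "age": 0.4},            # incomplete document
    "case_042": {"image_feature": 0.7, "age": 0.8, "lesion_count": 0.6},
    "case_105": {"image_feature": 0.3, "age": 0.9, "lesion_count": 0.9},
}
ranking = sorted(reference_documents, key=lambda doc: fuse(reference_documents[doc]), reverse=True)
print(ranking)
```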
Multimedia explorer: image database, image proxy-server and search-engine.
Frankewitsch, T.; Prokosch, U.
1999-01-01
Multimedia plays a major role in medicine. Databases containing images, movies, or other types of multimedia objects are increasing in number, especially on the WWW. However, no good retrieval mechanism or search engine currently exists to efficiently track down such multimedia sources in the vast amount of information provided by the WWW. In addition, the tools for searching databases are usually not adapted to the properties of images, and HTML pages do not allow complex searches. Establishing a more comfortable retrieval mechanism therefore involves a higher-level programming language such as Java. With this platform-independent language it is possible to create extensions to commonly used web browsers, and the resulting applets offer a graphical user interface for high-level navigation. We implemented a database using Java objects as the primary storage container, stored in a Java-controlled Oracle8 database. Navigation depends on a structured vocabulary enhanced by a semantic network. With this approach, multimedia objects can be encapsulated within a logical module for quick data retrieval. PMID:10566463
Carazo, J M; Stelzer, E H
1999-01-01
The BioImage Database Project collects and structures multidimensional data sets recorded by various microscopic techniques relevant to modern life sciences. It provides, as precisely as possible, the circumstances in which the sample was prepared and the data were recorded. It grants access to the actual data and maintains links between related data sets. In order to promote the interdisciplinary approach of modern science, it offers a large set of key words, which covers essentially all aspects of microscopy. Nonspecialists can, therefore, access and retrieve significant information recorded and submitted by specialists in other areas. A key issue of the undertaking is to exploit the available technology and to provide a well-defined yet flexible structure for dealing with data. Its pivotal element is, therefore, a modern object relational database that structures the metadata and ameliorates the provision of a complete service. The BioImage database can be accessed through the Internet. Copyright 1999 Academic Press.
Development of a 2001 National Land Cover Database for the United States
Homer, Collin G.; Huang, Chengquan; Yang, Limin; Wylie, Bruce K.; Coan, Michael
2004-01-01
Multi-Resolution Land Characterization 2001 (MRLC 2001) is a second-generation Federal consortium designed to create an updated pool of nation-wide Landsat 5 and 7 imagery and derive a second-generation National Land Cover Database (NLCD 2001). The objectives of this multi-layer, multi-source database are twofold: first, to provide consistent land cover for all 50 States, and second, to provide a data framework which allows flexibility in developing and applying each independent data component to a wide variety of other applications. Components in the database include the following: (1) normalized imagery for three time periods per path/row, (2) ancillary data, including a 30 m Digital Elevation Model (DEM) derived into slope, aspect, and slope position, (3) per-pixel estimates of percent imperviousness and percent tree canopy, (4) 29 classes of land cover data derived from the imagery, ancillary data, and derivatives, (5) classification rules, confidence estimates, and metadata from the land cover classification. This database is now being developed using a Mapping Zone approach, with 66 Zones in the continental United States and 23 Zones in Alaska. Results from three initial mapping Zones show single-pixel land cover accuracies ranging from 73 to 77 percent, imperviousness accuracies ranging from 83 to 91 percent, tree canopy accuracies ranging from 78 to 93 percent, and an estimated 50 percent increase in mapping efficiency over previous methods. The database has now entered the production phase and is being created using extensive partnering in the Federal government with planned completion by 2006.
The Multidimensional Integrated Intelligent Imaging project (MI-3)
NASA Astrophysics Data System (ADS)
Allinson, N.; Anaxagoras, T.; Aveyard, J.; Arvanitis, C.; Bates, R.; Blue, A.; Bohndiek, S.; Cabello, J.; Chen, L.; Chen, S.; Clark, A.; Clayton, C.; Cook, E.; Cossins, A.; Crooks, J.; El-Gomati, M.; Evans, P. M.; Faruqi, W.; French, M.; Gow, J.; Greenshaw, T.; Greig, T.; Guerrini, N.; Harris, E. J.; Henderson, R.; Holland, A.; Jeyasundra, G.; Karadaglic, D.; Konstantinidis, A.; Liang, H. X.; Maini, K. M. S.; McMullen, G.; Olivo, A.; O'Shea, V.; Osmond, J.; Ott, R. J.; Prydderch, M.; Qiang, L.; Riley, G.; Royle, G.; Segneri, G.; Speller, R.; Symonds-Tayler, J. R. N.; Triger, S.; Turchetta, R.; Venanzi, C.; Wells, K.; Zha, X.; Zin, H.
2009-06-01
MI-3 is a consortium of 11 universities and research laboratories whose mission is to develop complementary metal-oxide semiconductor (CMOS) active pixel sensors (APS) and to apply these sensors to a range of imaging challenges. A range of sensors has been developed: On-Pixel Intelligent CMOS (OPIC)—designed for in-pixel intelligence; FPN—designed to develop novel techniques for reducing fixed pattern noise; HDR—designed to develop novel techniques for increasing dynamic range; Vanilla/PEAPS—with digital and analogue modes and regions of interest, which has also been back-thinned; Large Area Sensor (LAS)—a novel, stitched LAS; and eLeNA—which develops a range of low noise pixels. Applications being developed include autoradiography, a gamma camera system, radiotherapy verification, tissue diffraction imaging, X-ray phase-contrast imaging, DNA sequencing and electron microscopy.
A novel method for efficient archiving and retrieval of biomedical images using MPEG-7
NASA Astrophysics Data System (ADS)
Meyer, Joerg; Pahwa, Ash
2004-10-01
Digital archiving and efficient retrieval of radiological scans have become critical steps in contemporary medical diagnostics. Since more and more images and image sequences (single scans or video) from various modalities (CT/MRI/PET/digital X-ray) are now available in digital formats (e.g., DICOM-3), hospitals and radiology clinics need to implement efficient protocols capable of managing the enormous amounts of data generated daily in a typical clinical routine. We present a method that appears to be a viable way to eliminate the tedious step of manually annotating image and video material for database indexing. MPEG-7 is a new framework that standardizes the way images are characterized in terms of color, shape, and other abstract, content-related criteria. A set of standardized descriptors that are automatically generated from an image is used to compare an image to other images in a database, and to compute the distance between two images for a given application domain. Text-based database queries can be replaced with image-based queries using MPEG-7. Consequently, image queries can be conducted without any prior knowledge of the keys that were used as indices in the database. Since the decoding and matching steps are not part of the MPEG-7 standard, this method also enables searches that were not planned by the time the keys were generated.
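A hedged, simplified analogue of the descriptor-and-distance idea follows: a normalized colour histogram stands in for an MPEG-7 colour descriptor, and an L1 distance compares two images. This is not the MPEG-7 standard itself, whose descriptors and matching are defined separately; the bin count and test images are assumptions.

```python
# Simplified stand-in for content-based matching: a colour histogram descriptor
# plus an L1 distance between descriptors (not the actual MPEG-7 descriptors).
import numpy as np

def colour_descriptor(image: np.ndarray, bins_per_channel: int = 8) -> np.ndarray:
    """Normalized joint histogram over an RGB image with values in 0..255."""
    quantized = (image.astype(np.int64) // (256 // bins_per_channel)).reshape(-1, 3)
    index = (quantized[:, 0] * bins_per_channel + quantized[:, 1]) * bins_per_channel + quantized[:, 2]
    hist = np.bincount(index, minlength=bins_per_channel ** 3).astype(np.float64)
    return hist / hist.sum()

def descriptor_distance(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.abs(a - b).sum())

rng = np.random.default_rng(5)
query = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
candidate = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
print(descriptor_distance(colour_descriptor(query), colour_descriptor(candidate)))
```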
PIPEMicroDB: microsatellite database and primer generation tool for pigeonpea genome
Sarika; Arora, Vasu; Iquebal, M. A.; Rai, Anil; Kumar, Dinesh
2013-01-01
Molecular markers play a significant role in crop improvement for desirable characteristics, such as high yield and resistance to disease, that will benefit the crop in the long term. Pigeonpea (Cajanus cajan L.) is a legume recently sequenced by a global consortium led by ICRISAT (Hyderabad, India) and has been analysed for gene prediction, synteny maps, markers, etc. We present the PIgeonPEa Microsatellite DataBase (PIPEMicroDB) with an automated primer designing tool for the pigeonpea genome, based on chromosome-wise as well as location-wise search of primers. A total of 123 387 short tandem repeats (STRs) were extracted from the publicly available pigeonpea genome using the MIcroSAtellite tool (MISA). The database is an online relational database based on a 'three-tier architecture' that catalogues information on microsatellites in MySQL, with a user-friendly interface developed using PHP. Searches for STRs may be customized by limiting their location on a chromosome as well as the number of markers in that range. This is a novel approach that has not been implemented in any existing marker database. The database has been further appended with Primer3 for primer designing of selected markers with left and right flankings of up to 500 bp. This will enable researchers to select markers of choice at a desired interval over the chromosome. Furthermore, one can use individual STRs of a targeted region of a chromosome to narrow down the location of a gene of interest or linked Quantitative Trait Loci (QTLs). Although this is an in silico approach, marker searches based on the characteristics and location of STRs are expected to be beneficial for researchers. Database URL: http://cabindb.iasri.res.in/pigeonpea/ PMID:23396298
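A hedged sketch of the kind of STR extraction a tool like MISA performs is shown below: a regular-expression scan for mono- to hexanucleotide motifs repeated beyond a minimum count. The thresholds, output fields, and example sequence are illustrative assumptions, not PIPEMicroDB's or MISA's exact settings.

```python
# Minimal sketch of short-tandem-repeat (STR) detection in a DNA sequence using
# a back-referencing regular expression (motif lengths 1-6, illustrative thresholds).
import re

MIN_REPEATS = {1: 10, 2: 6, 3: 5, 4: 5, 5: 5, 6: 5}   # assumed minimum repeat counts

def find_strs(sequence: str):
    """Yield (start, motif, repeat_count) for perfect tandem repeats."""
    for motif_len, min_rep in MIN_REPEATS.items():
        pattern = re.compile(r"([ACGT]{%d})\1{%d,}" % (motif_len, min_rep - 1))
        for match in pattern.finditer(sequence):
            motif = match.group(1)
            repeats = len(match.group(0)) // motif_len
            yield match.start(), motif, repeats

sequence = "ATATATATATATGGCCTTAGAGAGAGAGAGAGTTT"
for start, motif, repeats in find_strs(sequence):
    print(f"pos {start}: ({motif}){repeats}")
```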
Design and implementation of a fault-tolerant and dynamic metadata database for clinical trials
NASA Astrophysics Data System (ADS)
Lee, J.; Zhou, Z.; Talini, E.; Documet, J.; Liu, B.
2007-03-01
In recent imaging-based clinical trials, quantitative image analysis (QIA) and computer-aided diagnosis (CAD) methods are increasing in productivity due to higher resolution imaging capabilities. A radiology core conducting clinical trials analyzes an increasing number of treatment methods, and there is a growing quantity of metadata that needs to be stored and managed. These radiology centers are also collaborating with many off-site imaging field sites and need a way to communicate metadata between one another in a secure infrastructure. Our solution is to implement a data storage grid with a fault-tolerant and dynamic metadata database design to unify metadata from different clinical trial experiments and field sites. Although metadata from images follow the DICOM standard, clinical trials also produce metadata specific to regions-of-interest and quantitative image analysis. We have implemented a data access and integration (DAI) server layer where multiple field sites can access multiple metadata databases in the data grid through a single web-based grid service. The centralization of metadata database management simplifies the task of adding new databases into the grid and also decreases the risk of configuration errors seen in peer-to-peer grids. In this paper, we address the design and implementation of a data grid metadata storage that provides fault tolerance and dynamic integration for imaging-based clinical trials.
Menon, K Venugopal; Kumar, Dinesh; Thomas, Tessamma
2014-02-01
Study Design: Preliminary evaluation of a new tool. Objective: To ascertain whether the newly developed content-based image retrieval (CBIR) software can be used successfully to retrieve images of similar cases of adolescent idiopathic scoliosis (AIS) from a database to help plan treatment without adhering to a classification scheme. Methods: Sixty-two operated cases of AIS were entered into the newly developed CBIR database. Five new cases of different curve patterns were used as query images. The images were fed into the CBIR database, which retrieved similar images from the existing cases. These were analyzed by a senior surgeon for conformity to the query image. Results: Within the limits of variability set for the query system, all the resultant images conformed to the query image. One case had no similar match in the series. The other four retrieved several images that matched the query. No matching case was left out in the series. The postoperative images were then analyzed to check for surgical strategies. Broad guidelines for treatment could be derived from the results. More precise query settings, inclusion of bending films, and a larger database will enhance accurate retrieval and better decision making. Conclusion: The CBIR system is an effective tool for accurate documentation and retrieval of scoliosis images. Broad guidelines for surgical strategies can be derived from the postoperative images of the existing cases without adhering to any classification scheme.
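The abstract does not describe the retrieval features used, so the sketch below shows only the generic shape of a content-based image retrieval loop: a simple gray-level histogram descriptor ranked by cosine similarity. It is not the authors' software; a real scoliosis CBIR system would rely on much more specific shape features.

```python
import numpy as np

def histogram_feature(image, bins=64):
    """Normalized gray-level histogram as a simple global image descriptor."""
    hist, _ = np.histogram(np.asarray(image).ravel(), bins=bins, range=(0, 256))
    hist = hist.astype(float)
    return hist / (hist.sum() + 1e-12)

def retrieve_similar(query_image, database_images, top_k=5):
    """Rank database images by cosine similarity of their histogram descriptors."""
    q = histogram_feature(query_image)
    scored = []
    for idx, img in enumerate(database_images):
        f = histogram_feature(img)
        sim = float(np.dot(q, f) / (np.linalg.norm(q) * np.linalg.norm(f) + 1e-12))
        scored.append((sim, idx))
    return sorted(scored, reverse=True)[:top_k]
```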
Using DSDP/ODP/IODP core photographs and digital images in the classroom
NASA Astrophysics Data System (ADS)
Pereira, Hélder; Berenguer, Jean-Luc
2017-04-01
Since the late 1960s, several scientific ocean drilling programmes have been uncovering the history of the Earth hidden beneath the seafloor. The adventure began in 1968 with the Deep Sea Drilling Project (DSDP) and its special drill ship, the Glomar Challenger. The next stage was the Ocean Drilling Program (ODP) launched in 1985 with a new drill ship, the JOIDES Resolution. The exploration of the ocean seafloor continued, between 2003 and 2013, through the Integrated Ocean Drilling Program (IODP). During that time, in addition to the JOIDES Resolution, operated by the US, the scientists had at their service the Chikyu, operated by Japan, and Mission-Specific-Platforms, funded and implemented by the European Consortium for Ocean Research Drilling. Currently, scientific ocean drilling continues through the collaboration of scientists from 25 nations within the International Ocean Discovery Program (IODP). Over the last 50 years, the scientific ocean drilling expeditions conducted by these programmes have drilled and cored more than 3500 holes. The numerous sediment and rock samples recovered from the ocean floor have provided important insight into the active biological, chemical, and geological processes that have shaped the Earth over millions of years. During an expedition, once the 9.5-meter long cores arrive from the seafloor, the technicians label and cut them into 1.5-meter sections. Next, the shipboard scientists perform several analyses using non-destructive methods. Afterward, the technicians split the cores into two halves, the "working half", which scientists sample and use aboard the drilling platform, and the "archive half", which is kept in untouched condition after being visually described and photographed with a digital imaging system. The shipboard photographer also takes several close-up pictures of the archive-half core sections. This work presents some examples of how teachers can use DSDP/ODP/IODP core photographs and digital images, available through the Janus and LIMS online databases, to develop inquiry-based learning activities for secondary level students.
Progress of Interoperability in Planetary Research for Geospatial Data Analysis
NASA Astrophysics Data System (ADS)
Hare, T. M.; Gaddis, L. R.
2015-12-01
For nearly a decade there has been a push in the planetary science community to support interoperable methods of accessing and working with geospatial data. Common geospatial data products for planetary research include image mosaics, digital elevation or terrain models, geologic maps, geographic location databases (e.g., craters, volcanoes) or any data that can be tied to the surface of a planetary body (including moons, comets or asteroids). Several U.S. and international cartographic research institutions have converged on mapping standards that embrace standardized image formats that retain geographic information (e.g., GeoTiff, GeoJpeg2000), digital geologic mapping conventions, planetary extensions for symbols that comply with U.S. Federal Geographic Data Committee cartographic and geospatial metadata standards, and notably on-line mapping services as defined by the Open Geospatial Consortium (OGC). The latter includes defined standards such as the OGC Web Mapping Services (simple image maps), Web Feature Services (feature streaming), Web Coverage Services (rich scientific data streaming), and Catalog Services for the Web (data searching and discoverability). While these standards were developed for application to Earth-based data, they have been modified to support the planetary domain. The motivation to support common, interoperable data format and delivery standards is not only to improve access for higher-level products but also to address the increasingly distributed nature of the rapidly growing volumes of data. The strength of using an OGC approach is that it provides consistent access to data that are distributed across many facilities. While data-streaming standards are well supported by the more sophisticated tools used in the Geographic Information System (GIS) and remote sensing industries, they are also supported by many lightweight browsers, which facilitates large and small focused science applications and public use. Here we provide an overview of the interoperability initiatives that are currently ongoing in the planetary research community, examples of their successful application, and challenges that remain.
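Since the abstract centers on OGC web services, a small hedged example of the most basic of them, a WMS GetMap request, may help. The endpoint URL and layer name below are placeholders rather than a real planetary service; the key-value parameters are the standard WMS 1.1.1 ones.

```python
import requests

# Placeholder endpoint and layer; any OGC-compliant WMS accepts this request shape.
WMS_ENDPOINT = "https://example.org/planetary/wms"

params = {
    "SERVICE": "WMS",
    "VERSION": "1.1.1",
    "REQUEST": "GetMap",
    "LAYERS": "mars_shaded_relief",      # hypothetical layer name
    "SRS": "EPSG:4326",
    "BBOX": "-180,-90,180,90",           # minx,miny,maxx,maxy in degrees
    "WIDTH": "1024",
    "HEIGHT": "512",
    "FORMAT": "image/png",
}

response = requests.get(WMS_ENDPOINT, params=params, timeout=60)
response.raise_for_status()
with open("map.png", "wb") as f:
    f.write(response.content)
```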
Constructing a Database from Multiple 2D Images for Camera Pose Estimation and Robot Localization
NASA Technical Reports Server (NTRS)
Wolf, Michael; Ansar, Adnan I.; Brennan, Shane; Clouse, Daniel S.; Padgett, Curtis W.
2012-01-01
The LMDB (Landmark Database) Builder software identifies persistent image features (landmarks) in a scene viewed multiple times and precisely estimates the landmarks' 3D world positions. The software receives as input multiple 2D images of approximately the same scene, along with an initial guess of the camera poses for each image, and a table of features matched pair-wise in each frame. LMDB Builder aggregates landmarks across an arbitrarily large collection of frames with matched features. Range data from stereo vision processing can also be passed to improve the initial guess of the 3D point estimates. The LMDB Builder aggregates feature lists across all frames, manages the process to promote selected features to landmarks, and iteratively calculates the 3D landmark positions using the current camera pose estimates (via an optimal ray projection method), then improves the camera pose estimates using the 3D landmark positions. Finally, it extracts image patches for each landmark from auto-selected key frames and constructs the landmark database. The landmark database can then be used to estimate future camera poses (and therefore localize a robotic vehicle that may be carrying the cameras) by matching current imagery to landmark database image patches and using the known 3D landmark positions to estimate the current pose.
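The core geometric step described above, estimating a landmark's 3D position from matched 2D observations and known camera poses, can be illustrated with a standard two-view linear (DLT) triangulation. This is a generic textbook method, not the "optimal ray projection method" used by LMDB Builder.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2 : 3x4 camera projection matrices for the two frames.
    x1, x2 : (u, v) pixel coordinates of the matched feature in each frame.
    Returns the 3D point in world coordinates.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector with the smallest singular value
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]
```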
Lehman, Constance D; Arao, Robert F; Sprague, Brian L; Lee, Janie M; Buist, Diana S M; Kerlikowske, Karla; Henderson, Louise M; Onega, Tracy; Tosteson, Anna N A; Rauscher, Garth H; Miglioretti, Diana L
2017-04-01
Purpose To establish performance benchmarks for modern screening digital mammography and assess performance trends over time in U.S. community practice. Materials and Methods This HIPAA-compliant, institutional review board-approved study measured the performance of digital screening mammography interpreted by 359 radiologists across 95 facilities in six Breast Cancer Surveillance Consortium (BCSC) registries. The study included 1 682 504 digital screening mammograms performed between 2007 and 2013 in 792 808 women. Performance measures were calculated according to the American College of Radiology Breast Imaging Reporting and Data System, 5th edition, and were compared with published benchmarks by the BCSC, the National Mammography Database, and performance recommendations by expert opinion. Benchmarks were derived from the distribution of performance metrics across radiologists and were presented as 50th (median), 10th, 25th, 75th, and 90th percentiles, with graphic presentations using smoothed curves. Results Mean screening performance measures were as follows: abnormal interpretation rate (AIR), 11.6 (95% confidence interval [CI]: 11.5, 11.6); cancers detected per 1000 screens, or cancer detection rate (CDR), 5.1 (95% CI: 5.0, 5.2); sensitivity, 86.9% (95% CI: 86.3%, 87.6%); specificity, 88.9% (95% CI: 88.8%, 88.9%); false-negative rate per 1000 screens, 0.8 (95% CI: 0.7, 0.8); positive predictive value (PPV) 1, 4.4% (95% CI: 4.3%, 4.5%); PPV2, 25.6% (95% CI: 25.1%, 26.1%); PPV3, 28.6% (95% CI: 28.0%, 29.3%); cancers stage 0 or 1, 76.9%; minimal cancers, 57.7%; and node-negative invasive cancers, 79.4%. Recommended CDRs were achieved by 92.1% of radiologists in community practice, and 97.1% achieved recommended ranges for sensitivity. Only 59.0% of radiologists achieved recommended AIRs, and only 63.0% achieved recommended levels of specificity. Conclusion The majority of radiologists in the BCSC surpass cancer detection recommendations for screening mammography; however, AIRs continue to be higher than the recommended rate for almost half of radiologists interpreting screening mammograms. © RSNA, 2016 Online supplemental material is available for this article.
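For readers unfamiliar with the audit metrics quoted above, the sketch below computes simplified versions of them from outcome counts. The definitions are abbreviated relative to the full BI-RADS audit, and the example counts are illustrative values chosen only so the rates come out near those reported; they are not data from the study.

```python
def screening_metrics(tp, fp, tn, fn):
    """Simplified screening-audit metrics from outcome counts.

    tp: recalled (abnormal) screens with cancer in the follow-up period
    fp: recalled screens without cancer
    tn: non-recalled screens without cancer
    fn: non-recalled screens with cancer
    """
    total = tp + fp + tn + fn
    return {
        "abnormal_interpretation_rate_pct": 100.0 * (tp + fp) / total,
        "cancer_detection_rate_per_1000": 1000.0 * tp / total,
        "sensitivity_pct": 100.0 * tp / (tp + fn),
        "specificity_pct": 100.0 * tn / (tn + fp),
        "ppv1_pct": 100.0 * tp / (tp + fp),  # cancers per abnormal interpretation
    }

# Illustrative counts only (not study data)
print(screening_metrics(tp=51, fp=1109, tn=8764, fn=8))
```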
Katayama, Toshiaki; Arakawa, Kazuharu; Nakao, Mitsuteru; Ono, Keiichiro; Aoki-Kinoshita, Kiyoko F; Yamamoto, Yasunori; Yamaguchi, Atsuko; Kawashima, Shuichi; Chun, Hong-Woo; Aerts, Jan; Aranda, Bruno; Barboza, Lord Hendrix; Bonnal, Raoul Jp; Bruskiewich, Richard; Bryne, Jan C; Fernández, José M; Funahashi, Akira; Gordon, Paul Mk; Goto, Naohisa; Groscurth, Andreas; Gutteridge, Alex; Holland, Richard; Kano, Yoshinobu; Kawas, Edward A; Kerhornou, Arnaud; Kibukawa, Eri; Kinjo, Akira R; Kuhn, Michael; Lapp, Hilmar; Lehvaslaiho, Heikki; Nakamura, Hiroyuki; Nakamura, Yasukazu; Nishizawa, Tatsuya; Nobata, Chikashi; Noguchi, Tamotsu; Oinn, Thomas M; Okamoto, Shinobu; Owen, Stuart; Pafilis, Evangelos; Pocock, Matthew; Prins, Pjotr; Ranzinger, René; Reisinger, Florian; Salwinski, Lukasz; Schreiber, Mark; Senger, Martin; Shigemoto, Yasumasa; Standley, Daron M; Sugawara, Hideaki; Tashiro, Toshiyuki; Trelles, Oswaldo; Vos, Rutger A; Wilkinson, Mark D; York, William; Zmasek, Christian M; Asai, Kiyoshi; Takagi, Toshihisa
2010-08-21
Web services have become a key technology for bioinformatics, since life science databases are globally decentralized and the exponential increase in the amount of available data demands efficient systems without the need to transfer entire databases for every step of an analysis. However, various incompatibilities among database resources and analysis services make it difficult to connect and integrate these into interoperable workflows. To resolve this situation, we invited domain specialists from web service providers, client software developers, Open Bio* projects, the BioMoby project and researchers of emerging areas where a standard exchange data format is not well established, for an intensive collaboration entitled the BioHackathon 2008. The meeting was hosted by the Database Center for Life Science (DBCLS) and Computational Biology Research Center (CBRC) and was held in Tokyo from February 11th to 15th, 2008. In this report we highlight the work accomplished and the common issues that arose from this event, including the standardization of data exchange formats and services in the emerging fields of glycoinformatics, biological interaction networks, text mining, and phyloinformatics. In addition, common shared object development based on BioSQL, as well as technical challenges in large data management, asynchronous services, and security are discussed. Consequently, we improved the interoperability of web services in several fields; however, further cooperation among major database centers and continued collaborative efforts between service providers and software developers are still necessary for an effective advance in bioinformatics web service technologies.
Classroom Laboratory Report: Using an Image Database System in Engineering Education.
ERIC Educational Resources Information Center
Alam, Javed; And Others
1991-01-01
Describes an image database system assembled using separate computer components that was developed to overcome text-only computer hardware storage and retrieval limitations for a pavement design class. (JJK)
Major results of the MAARBLE project
NASA Astrophysics Data System (ADS)
Daglis, Ioannis A.; Bourdarie, Sebastien; Horne, Richard B.; Khotyaintsev, Yuri; Mann, Ian R.; Santolik, Ondrej; Turner, Drew L.; Balasis, Georgios
2016-04-01
The goal of the MAARBLE (Monitoring, Analyzing and Assessing Radiation Belt Loss and Energization) project was to shed light on the ways the dynamic evolution of the Van Allen belts is influenced by low-frequency electromagnetic waves. MAARBLE was implemented by a consortium of seven institutions (five European, one Canadian and one US) with support from the European Community's Seventh Framework Programme. The MAARBLE project employed multi-spacecraft monitoring of the geospace environment, complemented by ground-based monitoring, in order to analyze and assess the physical mechanisms leading to radiation belt particle energisation and loss. Particular attention was paid to the role of ULF/VLF waves. Within MAARBLE we created a database containing properties of ULF and VLF waves, based on measurements from the Cluster, THEMIS and CHAMP missions and from the CARISMA and IMAGE ground magnetometer networks. The database is now available to the scientific community through the Cluster Science Archive as auxiliary content. Based on the wave database, a statistical model of the wave activity dependent on the level of geomagnetic activity, solar wind forcing, and magnetospheric region has been developed. Multi-spacecraft particle measurements have been incorporated into data assimilation tools, leading to a more accurate estimate of the state of the radiation belts. The synergy of wave and particle observations is at the core of MAARBLE research studies of radiation belt dynamics. Results and conclusions from these studies will be presented in this paper. The MAARBLE (Monitoring, Analyzing and Assessing Radiation Belt Energization and Loss) collaborative research project has received funding from the European Union's Seventh Framework Programme (FP7-SPACE 2011-1) under grant agreement no. 284520. The complete MAARBLE Team: Ioannis A. Daglis, Sebastien Bourdarie, Richard B. Horne, Yuri Khotyaintsev, Ian R. Mann, Ondrej Santolik, Drew L. Turner, Georgios Balasis, Anastasios Anastasiadis, Vassilis Angelopoulos, David Barona, Eleni Chatzichristou, Stavros Dimitrakoudis, Marina Georgiou, Omiros Giannakis, Sarah Glauert, Benjamin Grison, Zuzana Hrbackova, Andy Kale, Christos Katsavrias, Tobias Kersten, Ivana Kolmasova, Didier Lazaro, Eva Macusova, Vincent Maget, Meghan Mella, Nigel Meredith, Fiori-Anastasia Metallinou, David Milling, Louis Ozeke, Constantinos Papadimitriou, George Ropokis, Ingmar Sandberg, Maria Usanova, Iannis Dandouras, David Sibeck, Eftyhia Zesta.
NASA Astrophysics Data System (ADS)
Stroker, K. J.; Jencks, J. H.; Eakins, B.
2016-12-01
The Index to Marine and Lacustrine Geological Samples (IMLGS) is a community designed and maintained resource enabling researchers to locate and request seafloor and lakebed geologic samples curated by partner institutions. The Index was conceived in the dawn of the digital age by representatives from U.S. academic and government marine core repositories and the NOAA National Geophysical Data Center, now the National Centers for Environmental Information (NCEI), at a 1977 meeting convened by the National Science Foundation (NSF). The Index is based on core concepts of community oversight, common vocabularies, consistent metadata and a shared interface. The Curators Consortium, international in scope, meets biennially to share ideas and discuss best practices. NCEI serves the group by providing database access and maintenance, a list server, digitizing support and long-term archival of sample metadata, data and imagery. Over three decades, participating curators have performed the laborious task of creating and contributing metadata for over 205,000 sea floor and lake-bed cores, grabs, and dredges archived in their collections. Some partners use the Index for primary web access to their collections while others use it to increase exposure of more in-depth institutional systems. The IMLGS has a persistent URL/Digital Object Identifier (DOI), as well as DOIs assigned to partner collections for citation and to provide a persistent link to curator collections. The Index is currently a geospatially-enabled relational database, publicly accessible via Web Feature and Web Map Services, and text- and ArcGIS map-based web interfaces. To provide as much knowledge as possible about each sample, the Index includes curatorial contact information and links to related data, information and images: 1) at participating institutions, 2) in the NCEI archive, and 3) through a Linked Data interface maintained by the Rolling Deck to Repository (R2R). Over 43,000 International GeoSample Numbers (IGSNs) linking to the System for Earth Sample Registration (SESAR) are included in anticipation of opportunities for interconnectivity with Integrated Earth Data Applications (IEDA) systems. The paper will discuss the database with the goal of increasing the connections and links to related data at partner institutions.
NASA Astrophysics Data System (ADS)
Attallah, Bilal; Serir, Amina; Chahir, Youssef; Boudjelal, Abdelwahhab
2017-11-01
Palmprint recognition systems are dependent on feature extraction. A method of feature extraction using higher discrimination information was developed to characterize palmprint images. In this method, two individual feature extraction techniques are applied to a discrete wavelet transform of a palmprint image, and their outputs are fused. The two techniques used in the fusion are the histogram of gradient and the binarized statistical image features. They are then evaluated using an extreme learning machine classifier before selecting a feature based on principal component analysis. Three palmprint databases, the Hong Kong Polytechnic University (PolyU) Multispectral Palmprint Database, Hong Kong PolyU Palmprint Database II, and the Delhi Touchless (IIDT) Palmprint Database, are used in this study. The study shows that our method effectively identifies and verifies palmprints and outperforms other methods based on feature extraction.
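A hedged sketch of the pipeline stage described above, a wavelet decomposition followed by two descriptors fused by concatenation, is given below using PyWavelets and scikit-image. The histogram of oriented gradients is real; the second descriptor is a plain intensity histogram standing in for BSIF, whose learned filters are not reproduced here, and no extreme learning machine or PCA step is shown.

```python
import numpy as np
import pywt
from skimage.feature import hog

def palmprint_features(image):
    """Fuse two descriptors computed on the wavelet approximation of a palmprint.

    Sketch only: the second descriptor is a simple intensity histogram used
    as a stand-in for BSIF, which requires its own pre-learned filters.
    """
    # Single-level 2D discrete wavelet transform; keep the approximation band
    approx, _ = pywt.dwt2(np.asarray(image, dtype=float), "db1")
    hog_vec = hog(approx, orientations=9, pixels_per_cell=(16, 16),
                  cells_per_block=(2, 2))
    hist_vec, _ = np.histogram(approx, bins=64)
    hist_vec = hist_vec / (hist_vec.sum() + 1e-12)
    return np.concatenate([hog_vec, hist_vec])
```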
Korean Variant Archive (KOVA): a reference database of genetic variations in the Korean population.
Lee, Sangmoon; Seo, Jihae; Park, Jinman; Nam, Jae-Yong; Choi, Ahyoung; Ignatius, Jason S; Bjornson, Robert D; Chae, Jong-Hee; Jang, In-Jin; Lee, Sanghyuk; Park, Woong-Yang; Baek, Daehyun; Choi, Murim
2017-06-27
Despite efforts to interrogate human genome variation through large-scale databases, systematic preference toward populations of Caucasian descendants has resulted in unintended reduction of power in studying non-Caucasians. Here we report a compilation of coding variants from 1,055 healthy Korean individuals (KOVA; Korean Variant Archive). The samples were sequenced to a mean depth of 75x, yielding 101 singleton variants per individual. Population genetics analysis demonstrates that the Korean population is a distinct ethnic group comparable to other discrete ethnic groups in Africa and Europe, providing a rationale for such independent genomic datasets. Indeed, KOVA conferred 22.8% increased variant filtering power in addition to the Exome Aggregation Consortium (ExAC) when used on Korean exomes. Functional assessment of nonsynonymous variants supported the presence of purifying selection in Koreans. Analysis of copy number variants detected 5.2 deletions and 10.3 amplifications per individual with an increased fraction of novel variants among smaller and rarer copy number variable segments. We also report a list of germline variants that are associated with increased tumor susceptibility. This catalog can function as a critical addition to the pre-existing variant databases in pursuing genetic studies of Korean individuals.
Earth science big data at users' fingertips: the EarthServer Science Gateway Mobile
NASA Astrophysics Data System (ADS)
Barbera, Roberto; Bruno, Riccardo; Calanducci, Antonio; Fargetta, Marco; Pappalardo, Marco; Rundo, Francesco
2014-05-01
The EarthServer project (www.earthserver.eu), funded by the European Commission under its Seventh Framework Program, aims at establishing open access and ad-hoc analytics on extreme-size Earth Science data, based on and extending leading-edge Array Database technology. The core idea is to use database query languages as the client/server interface to achieve barrier-free "mix & match" access to multi-source, any-size, multi-dimensional space-time data -- in short: "Big Earth Data Analytics" -- based on the open standards of the Open Geospatial Consortium Web Coverage Processing Service (OGC WCPS) and the W3C XQuery. EarthServer combines both, thereby achieving a tight data/metadata integration. In addition, the rasdaman Array Database System (www.rasdaman.com) is extended with further space-time coverage data types. On the server side, highly effective optimizations - such as parallel and distributed query processing - ensure scalability to Exabyte volumes. In this contribution we will report on the EarthServer Science Gateway Mobile, an app for both iOS and Android-based devices that allows users to seamlessly access some of the EarthServer applications using SAML-based federated authentication and fine-grained authorisation mechanisms.
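As an illustration of the WCPS-as-interface idea, the sketch below submits a WCPS query over HTTP. The endpoint, coverage name, and axis labels are placeholders, and the WCS ProcessCoverages key-value binding shown is one common way rasdaman-based servers accept such queries; a given EarthServer deployment may differ.

```python
import requests

# Placeholder endpoint; coverage name and axis labels are assumptions.
ENDPOINT = "https://example.org/rasdaman/ows"

wcps_query = (
    'for c in (SeaSurfaceTemperature) '
    'return encode(c[Lat(30:45), Long(-10:5), ansi("2013-06-01")], "png")'
)

resp = requests.get(ENDPOINT, params={
    "service": "WCS",
    "version": "2.0.1",
    "request": "ProcessCoverages",
    "query": wcps_query,
}, timeout=120)
resp.raise_for_status()
with open("subset.png", "wb") as f:
    f.write(resp.content)
```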
ERIC Educational Resources Information Center
Grooms, David W.
1988-01-01
Discusses the quality controls imposed on text and image data that is currently being converted from paper to digital images by the Patent and Trademark Office. The methods of inspection used on text and on images are described, and the quality of the data delivered thus far is discussed. (CLB)
Fernando, Ruani N; Chaudhari, Umesh; Escher, Sylvia E; Hengstler, Jan G; Hescheler, Jürgen; Jennings, Paul; Keun, Hector C; Kleinjans, Jos C S; Kolde, Raivo; Kollipara, Laxmikanth; Kopp-Schneider, Annette; Limonciel, Alice; Nemade, Harshal; Nguemo, Filomain; Peterson, Hedi; Prieto, Pilar; Rodrigues, Robim M; Sachinidis, Agapios; Schäfer, Christoph; Sickmann, Albert; Spitkovsky, Dimitry; Stöber, Regina; van Breda, Simone G J; van de Water, Bob; Vivier, Manon; Zahedi, René P; Vinken, Mathieu; Rogiers, Vera
2016-06-01
SEURAT-1 is a joint research initiative between the European Commission and Cosmetics Europe aiming to develop in vitro- and in silico-based methods to replace the in vivo repeated dose systemic toxicity test used for the assessment of human safety. As one of the building blocks of SEURAT-1, the DETECTIVE project focused on a key element on which in vitro toxicity testing relies: the development of robust and reliable, sensitive and specific in vitro biomarkers and surrogate endpoints that can be used for safety assessments of chronically acting toxicants, relevant for humans. The work conducted by the DETECTIVE consortium partners has established a screening pipeline of functional and "-omics" technologies, including high-content and high-throughput screening platforms, to develop and investigate human biomarkers for repeated dose toxicity in cellular in vitro models. Identification and statistical selection of highly predictive biomarkers in a pathway- and evidence-based approach constitute a major step in an integrated approach towards the replacement of animal testing in human safety assessment. To discuss the final outcomes and achievements of the consortium, a meeting was organized in Brussels. This meeting brought together data-producing and supporting consortium partners. The presentations focused on the current state of ongoing and concluding projects and the strategies employed to identify new relevant biomarkers of toxicity. The outcomes and deliverables, including the dissemination of results in data-rich "-omics" databases, were discussed as were the future perspectives of the work completed under the DETECTIVE project. Although some projects were still in progress and required continued data analysis, this report summarizes the presentations, discussions and the outcomes of the project.
Multiple Object Retrieval in Image Databases Using Hierarchical Segmentation Tree
ERIC Educational Resources Information Center
Chen, Wei-Bang
2012-01-01
The purpose of this research is to develop a new visual information analysis, representation, and retrieval framework for automatic discovery of salient objects of user's interest in large-scale image databases. In particular, this dissertation describes a content-based image retrieval framework which supports multiple-object retrieval. The…
Informatics in radiology: use of CouchDB for document-based storage of DICOM objects.
Rascovsky, Simón J; Delgado, Jorge A; Sanz, Alexander; Calvo, Víctor D; Castrillón, Gabriel
2012-01-01
Picture archiving and communication systems traditionally have depended on schema-based Structured Query Language (SQL) databases for imaging data management. To optimize database size and performance, many such systems store a reduced set of Digital Imaging and Communications in Medicine (DICOM) metadata, discarding informational content that might be needed in the future. As an alternative to traditional database systems, document-based key-value stores recently have gained popularity. These systems store documents containing key-value pairs that facilitate data searches without predefined schemas. Document-based key-value stores are especially suited to archive DICOM objects because DICOM metadata are highly heterogeneous collections of tag-value pairs conveying specific information about imaging modalities, acquisition protocols, and vendor-supported postprocessing options. The authors used an open-source document-based database management system (Apache CouchDB) to create and test two such databases; CouchDB was selected for its overall ease of use, capability for managing attachments, and reliance on HTTP and Representational State Transfer standards for accessing and retrieving data. A large database was created first in which the DICOM metadata from 5880 anonymized magnetic resonance imaging studies (1,949,753 images) were loaded by using a Ruby script. To provide the usual DICOM query functionality, several predefined "views" (standard queries) were created by using JavaScript. For performance comparison, the same queries were executed in both the CouchDB database and a SQL-based DICOM archive. The capabilities of CouchDB for attachment management and database replication were separately assessed in tests of a similar, smaller database. Results showed that CouchDB allowed efficient storage and interrogation of all DICOM objects; with the use of information retrieval algorithms such as map-reduce, all the DICOM metadata stored in the large database were searchable with only a minimal increase in retrieval time over that with the traditional database management system. Results also indicated possible uses for document-based databases in data mining applications such as dose monitoring, quality assurance, and protocol optimization. RSNA, 2012
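To make the map-view idea concrete, here is a hedged sketch that creates a CouchDB database, defines one map view, and queries it over CouchDB's standard HTTP API using the requests library. The database name, document field names (DICOM attributes stored as top-level keys), and the example UID are assumptions, not details from the article.

```python
import json
import requests

COUCH = "http://localhost:5984"   # default CouchDB address
DB = "dicom_metadata"             # database name is illustrative

# Create the database (CouchDB answers 412 if it already exists)
requests.put(f"{COUCH}/{DB}")

# A design document with one map view indexing documents by StudyInstanceUID.
# Field names assume DICOM metadata was stored as top-level document keys.
design = {
    "views": {
        "by_study": {
            "map": (
                "function(doc) {"
                "  if (doc.StudyInstanceUID) {"
                "    emit(doc.StudyInstanceUID,"
                "         {series: doc.SeriesInstanceUID, modality: doc.Modality});"
                "  }"
                "}"
            )
        }
    }
}
requests.put(f"{COUCH}/{DB}/_design/dicom", json=design)

# Query the view for one (hypothetical) study; keys are JSON-encoded strings
study_uid = "1.2.840.113619.2.55.3.1234"
r = requests.get(f"{COUCH}/{DB}/_design/dicom/_view/by_study",
                 params={"key": json.dumps(study_uid)})
print(r.json().get("rows", []))
```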
Kim, Youngwoo; Ge, Yinghui; Tao, Cheng; Zhu, Jianbing; Chapman, Arlene B; Torres, Vicente E; Yu, Alan S L; Mrug, Michal; Bennett, William M; Flessner, Michael F; Landsittel, Doug P; Bae, Kyongtae T
2016-04-07
Our study developed a fully automated method for segmentation and volumetric measurements of kidneys from magnetic resonance images in patients with autosomal dominant polycystic kidney disease and assessed the performance of the automated method with the reference manual segmentation method. Study patients were selected from the Consortium for Radiologic Imaging Studies of Polycystic Kidney Disease. At the enrollment of the Consortium for Radiologic Imaging Studies of Polycystic Kidney Disease Study in 2000, patients with autosomal dominant polycystic kidney disease were between 15 and 46 years of age with relatively preserved GFRs. Our fully automated segmentation method was based on a spatial prior probability map of the location of kidneys in abdominal magnetic resonance images and regional mapping with total variation regularization and propagated shape constraints that were formulated into a level set framework. T2-weighted magnetic resonance image sets of 120 kidneys were selected from 60 patients with autosomal dominant polycystic kidney disease and divided into the training and test datasets. The performance of the automated method in reference to the manual method was assessed by means of two metrics: Dice similarity coefficient and intraclass correlation coefficient of segmented kidney volume. The training and test sets were swapped for crossvalidation and reanalyzed. Successful segmentation of kidneys was performed with the automated method in all test patients. The segmented kidney volumes ranged from 177.2 to 2634 ml (mean, 885.4±569.7 ml). The mean Dice similarity coefficient ±SD between the automated and manual methods was 0.88±0.08. The mean correlation coefficient between the two segmentation methods for the segmented volume measurements was 0.97 (P<0.001 for each crossvalidation set). The results from the crossvalidation sets were highly comparable. We have developed a fully automated method for segmentation of kidneys from abdominal magnetic resonance images in patients with autosomal dominant polycystic kidney disease with varying kidney volumes. The performance of the automated method was in good agreement with that of the manual method. Copyright © 2016 by the American Society of Nephrology.
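The agreement metric used above, the Dice similarity coefficient, is standard and easy to compute on binary masks; the short sketch below shows the calculation (it is not the study's segmentation code).

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks.

    DSC = 2 * |A intersect B| / (|A| + |B|); 1.0 means perfect overlap.
    """
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * intersection / denom if denom else 1.0
```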
Incorporating the APS Catalog of the POSS I and Image Archive in ADS
NASA Technical Reports Server (NTRS)
Humphreys, Roberta M.
1998-01-01
The primary purpose of this contract was to develop the software to both create and access an on-line database of images from digital scans of the Palomar Sky Survey. This required modifying our DBMS (called Star Base) to create an image database from the actual raw pixel data from the scans. The digitized images are processed into a set of coordinate-reference index and pixel files that are stored in run-length files, thus achieving an efficient lossless compression. For efficiency and ease of referencing, each digitized POSS I plate is then divided into 900 subplates. Our custom DBMS maps each query into the corresponding POSS plate(s) and subplate(s). All images from the appropriate subplates are retrieved from disk with byte-offsets taken from the index files. These are assembled on-the-fly into a GIF image file for browser display, and a FITS format image file for retrieval. The FITS images have a pixel size of 0.33 arcseconds. The FITS header contains astrometric and photometric information. This method keeps the disk requirements manageable while allowing for future improvements. When complete, the APS Image Database will contain over 130 Gb of data. A set of web-based query forms is available on-line, as well as an on-line tutorial and documentation. The database is distributed to the Internet by a high-speed SGI server and a high-bandwidth disk system. URL is http://aps.umn.edu/IDB/. The image database software is written in perl and C and has been compiled on SGI computers with MIX5.3. A copy of the written documentation is included and the software is on the accompanying exabyte tape.
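The on-the-fly extraction of FITS subimages with preserved astrometry can be approximated today with astropy, as in the hedged sketch below. The file name, target coordinates, and cutout size are placeholders, and this is not the APS custom Star Base DBMS, only a generic illustration of cutting a WCS-aware subimage.

```python
import astropy.units as u
from astropy.coordinates import SkyCoord
from astropy.io import fits
from astropy.nddata import Cutout2D
from astropy.wcs import WCS

# File name and target position are placeholders.
with fits.open("poss1_plate.fits") as hdul:
    data = hdul[0].data
    wcs = WCS(hdul[0].header)

target = SkyCoord(ra=180.0 * u.deg, dec=30.0 * u.deg)
cutout = Cutout2D(data, position=target,
                  size=(5 * u.arcmin, 5 * u.arcmin), wcs=wcs)

# Write the cutout with an updated WCS so the astrometry is preserved
fits.PrimaryHDU(data=cutout.data,
                header=cutout.wcs.to_header()).writeto("cutout.fits",
                                                       overwrite=True)
```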
Techniques of Photometry and Astrometry with APASS, Gaia, and Pan-STARRs Results (Abstract)
NASA Astrophysics Data System (ADS)
Green, W.
2017-12-01
(Abstract only) The databases with the APASS DR9, Gaia DR1, and the Pan-STARRs 3pi DR1 data releases are publicly available for use. There is a bit of data-mining involved to download and manage these reference stars. This paper discusses the use of these databases to acquire accurate photometric references as well as techniques for improving results. Images are prepared in the usual way: zero, dark, flat-fields, and WCS solutions with Astrometry.net. Images are then processed with SExtractor to produce an ASCII table of identified photometric features. The database manages photometric catalogs and images converted to ASCII tables. Scripts convert the files into SQL and assimilate them into database tables. Using SQL techniques, each image star is merged with reference data to produce publishable results. The VYSOS has over 13,000 images of the ONC5 field to process with roughly 100 total fields in the campaign. This paper provides an overview of this daunting task.
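The merge of image detections with reference photometry that the abstract describes can be sketched as an SQL join on approximate coordinates. The sqlite3 example below is a hedged stand-in: the database file, table, and column names are hypothetical, and a crude fixed-size box match (with no cos(dec) correction or indexing) replaces the proper cone search a production pipeline would use.

```python
import sqlite3

TOL = 2.0 / 3600.0   # 2 arcsec matching box, in degrees (crude, no cos(dec) term)

conn = sqlite3.connect("photometry.db")   # hypothetical database file
cur = conn.cursor()

# Assumed tables: detections(image_id, ra, dec, inst_mag)
#                 refcat(ra, dec, v_mag)   e.g. loaded from APASS DR9
cur.execute("""
    SELECT d.image_id, d.ra, d.dec, d.inst_mag, r.v_mag,
           r.v_mag - d.inst_mag AS zero_point
    FROM detections AS d
    JOIN refcat AS r
      ON  r.ra  BETWEEN d.ra  - :tol AND d.ra  + :tol
      AND r.dec BETWEEN d.dec - :tol AND d.dec + :tol
""", {"tol": TOL})

for row in cur.fetchmany(10):
    print(row)
conn.close()
```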
2008-11-01
Although the image produced in Figure 9 is useful, the image itself is not the most important aspect of the process. ... climatology for the Scotian Shelf. The database is intended for use while ashore and also while at sea. Trial Q316 was the maiden voyage of the database ... the process of data transfer from external sources to the database, and also how the database can be restructured to be more accommodating of ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hendrickson, K; Phillips, M; Fishburn, M
Purpose: To implement a common database structure and user-friendly web-browser based data collection tools across several medical institutions to better support evidence-based clinical decision making and comparative effectiveness research through shared outcomes data. Methods: A consortium of four academic medical centers agreed to implement a federated database, known as Oncospace. Initial implementation has addressed issues of differences between institutions in workflow and in the types and breadth of structured information captured. This requires coordination of data collection from departmental oncology information systems (OIS), treatment planning systems, and hospital electronic medical records in order to include as much as possible of the multi-disciplinary clinical data associated with a patient's care. Results: The original database schema was well-designed and required only minor changes to meet institution-specific data requirements. Mobile browser interfaces for data entry and review for both the OIS and the Oncospace database were tailored for the workflow of individual institutions. Federation of database queries--the ultimate goal of the project--was tested using artificial patient data. The tests serve as proof-of-principle that the system as a whole--from data collection and entry to providing responses to research queries of the federated database--was viable. The resolution of inter-institutional use of patient data for research has not yet been completed. Conclusions: The migration from unstructured data, mainly in the form of notes and documents, to searchable, structured data is difficult. Making the transition requires the cooperation of many groups within the department and can be greatly facilitated by using the structured data to improve clinical processes and workflow. The original database schema design is critical to providing enough flexibility for multi-institutional use to improve each institution's ability to study outcomes, determine best practices, and support research. The project has demonstrated the feasibility of deploying a federated database environment for research purposes to multiple institutions.
Logue, Mark W; van Rooij, Sanne J H; Dennis, Emily L; Davis, Sarah L; Hayes, Jasmeet P; Stevens, Jennifer S; Densmore, Maria; Haswell, Courtney C; Ipser, Jonathan; Koch, Saskia B J; Korgaonkar, Mayuresh; Lebois, Lauren A M; Peverill, Matthew; Baker, Justin T; Boedhoe, Premika S W; Frijling, Jessie L; Gruber, Staci A; Harpaz-Rotem, Ilan; Jahanshad, Neda; Koopowitz, Sheri; Levy, Ifat; Nawijn, Laura; O'Connor, Lauren; Olff, Miranda; Salat, David H; Sheridan, Margaret A; Spielberg, Jeffrey M; van Zuiden, Mirjam; Winternitz, Sherry R; Wolff, Jonathan D; Wolf, Erika J; Wang, Xin; Wrocklage, Kristen; Abdallah, Chadi G; Bryant, Richard A; Geuze, Elbert; Jovanovic, Tanja; Kaufman, Milissa L; King, Anthony P; Krystal, John H; Lagopoulos, Jim; Bennett, Maxwell; Lanius, Ruth; Liberzon, Israel; McGlinchey, Regina E; McLaughlin, Katie A; Milberg, William P; Miller, Mark W; Ressler, Kerry J; Veltman, Dick J; Stein, Dan J; Thomaes, Kathleen; Thompson, Paul M; Morey, Rajendra A
2018-02-01
Many studies report smaller hippocampal and amygdala volumes in posttraumatic stress disorder (PTSD), but findings have not always been consistent. Here, we present the results of a large-scale neuroimaging consortium study on PTSD conducted by the Psychiatric Genomics Consortium (PGC)-Enhancing Neuroimaging Genetics through Meta-Analysis (ENIGMA) PTSD Working Group. We analyzed neuroimaging and clinical data from 1868 subjects (794 PTSD patients) contributed by 16 cohorts, representing the largest neuroimaging study of PTSD to date. We assessed the volumes of eight subcortical structures (nucleus accumbens, amygdala, caudate, hippocampus, pallidum, putamen, thalamus, and lateral ventricle). We used a standardized image-analysis and quality-control pipeline established by the ENIGMA consortium. In a meta-analysis of all samples, we found significantly smaller hippocampi in subjects with current PTSD compared with trauma-exposed control subjects (Cohen's d = -0.17, p = .00054), and smaller amygdalae (d = -0.11, p = .025), although the amygdala finding did not survive a significance level that was Bonferroni corrected for multiple subcortical region comparisons (p < .0063). Our study is not subject to the biases of meta-analyses of published data, and it represents an important milestone in an ongoing collaborative effort to examine the neurobiological underpinnings of PTSD and the brain's response to trauma. Published by Elsevier Inc.
Establishment of Kawasaki disease database based on metadata standard.
Park, Yu Rang; Kim, Jae-Jung; Yoon, Young Jo; Yoon, Young-Kwang; Koo, Ha Yeong; Hong, Young Mi; Jang, Gi Young; Shin, Soo-Yong; Lee, Jong-Keuk
2016-07-01
Kawasaki disease (KD) is a rare disease that occurs predominantly in infants and young children. To identify KD susceptibility genes and to develop a diagnostic test, a specific therapy, or a prevention method, collecting KD patients' clinical and genomic data is one of the major issues. For this purpose, the Kawasaki Disease Database (KDD) was developed through the efforts of the Korean Kawasaki Disease Genetics Consortium (KKDGC). KDD is a collection of 1292 sets of clinical data and genomic samples from 1283 patients at 13 KKDGC-participating hospitals. Each sample contains the relevant clinical data, genomic DNA and plasma samples isolated from patients' blood, omics data, and KD-associated genotype data. Clinical data were collected and saved using common data elements based on the ISO/IEC 11179 metadata standard. Data from two genome-wide association studies of 482 samples in total and whole exome sequencing data from 12 samples were also collected. In addition, KDD includes rare cases of KD (16 cases with family history, 46 cases with recurrence, 119 cases with intravenous immunoglobulin non-responsiveness, and 52 cases with coronary artery aneurysm). As the first public database for KD, KDD can significantly facilitate KD studies. All data in KDD are searchable and downloadable. KDD was implemented in PHP, MySQL and Apache, with all major browsers supported. Database URL: http://www.kawasakidisease.kr. © The Author(s) 2016. Published by Oxford University Press.
2016 update of the PRIDE database and its related tools
Vizcaíno, Juan Antonio; Csordas, Attila; del-Toro, Noemi; Dianes, José A.; Griss, Johannes; Lavidas, Ilias; Mayer, Gerhard; Perez-Riverol, Yasset; Reisinger, Florian; Ternent, Tobias; Xu, Qing-Wei; Wang, Rui; Hermjakob, Henning
2016-01-01
The PRoteomics IDEntifications (PRIDE) database is one of the world-leading data repositories of mass spectrometry (MS)-based proteomics data. Since the beginning of 2014, PRIDE Archive (http://www.ebi.ac.uk/pride/archive/) is the new PRIDE archival system, replacing the original PRIDE database. Here we summarize the developments in PRIDE resources and related tools since the previous update manuscript in the Database Issue in 2013. PRIDE Archive constitutes a complete redevelopment of the original PRIDE, comprising a new storage backend, data submission system and web interface, among other components. PRIDE Archive supports the most-widely used PSI (Proteomics Standards Initiative) data standard formats (mzML and mzIdentML) and implements the data requirements and guidelines of the ProteomeXchange Consortium. The wide adoption of ProteomeXchange within the community has triggered an unprecedented increase in the number of submitted data sets (around 150 data sets per month). We outline some statistics on the current PRIDE Archive data contents. We also report on the status of the PRIDE related stand-alone tools: PRIDE Inspector, PRIDE Converter 2 and the ProteomeXchange submission tool. Finally, we will give a brief update on the resources under development ‘PRIDE Cluster’ and ‘PRIDE Proteomes’, which provide a complementary view and quality-scored information of the peptide and protein identification data available in PRIDE Archive. PMID:26527722
PanScan, the Pancreatic Cancer Cohort Consortium, and the Pancreatic Cancer Case-Control Consortium
The Pancreatic Cancer Cohort Consortium consists of more than a dozen prospective epidemiologic cohort studies within the NCI Cohort Consortium, whose leaders work together to investigate the etiology and natural history of pancreatic cancer.
NASA Astrophysics Data System (ADS)
Kroneberger, Monika; Calleri, Andrea; Ulfers, Hendrik; Klossek, Andreas; Goepel, Michael
2017-09-01
The Meteosat Third Generation (MTG) program will ensure the continuity and enhancement of meteorological data from geostationary orbit as currently provided by the Meteosat Second Generation (MSG) system. OHB-Munich, as part of the core team consortium of the industrial prime contractor for the space segment Thales Alenia Space (France), is responsible for the Flexible Combined Imager - Telescope Assembly (FCI-TA) as well as the Infrared Sounder (IRS).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Ronald W.; Collins, Benjamin S.; Godfrey, Andrew T.
2016-12-09
In order to support engineering analysis of Virtual Environment for Reactor Analysis (VERA) model results, the Consortium for Advanced Simulation of Light Water Reactors (CASL) needs a tool that provides visualizations of HDF5 files that adhere to the VERAOUT specification. VERAView provides an interactive graphical interface for the visualization and engineering analyses of output data from VERA. The Python-based software provides instantaneous 2D and 3D images, 1D plots, and alphanumeric data from VERA multi-physics simulations.
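Since VERAView reads HDF5 files following the VERAOUT specification, a hedged sketch of opening such a file with h5py is shown below. The file name and the state-point/dataset paths used here are assumptions for illustration, not the actual VERAOUT layout, which is defined by the specification itself.

```python
import h5py

# File and dataset paths are illustrative; consult the VERAOUT
# specification for the real group and dataset layout.
with h5py.File("vera_output.h5", "r") as f:

    def describe(name, obj):
        """Print the shape and dtype of every dataset in the file."""
        if isinstance(obj, h5py.Dataset):
            print(name, obj.shape, obj.dtype)

    f.visititems(describe)

    # Read one (hypothetical) state-point dataset into a NumPy array
    if "STATE_0001/pin_powers" in f:
        pin_powers = f["STATE_0001/pin_powers"][()]
        print("pin_powers max:", pin_powers.max())
```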
Space Images for NASA JPL Android Version
NASA Technical Reports Server (NTRS)
Nelson, Jon D.; Gutheinz, Sandy C.; Strom, Joshua R.; Arca, Jeremy M.; Perez, Martin; Boggs, Karen; Stanboli, Alice
2013-01-01
This software addresses the demand for easily accessible NASA JPL images and videos by providing a user-friendly and simple graphical user interface that can be run via the Android platform from any location where an Internet connection is available. This app is complementary to the iPhone version of the application. A backend infrastructure stores, tracks, and retrieves space images from the JPL Photojournal and Institutional Communications Web server, and catalogs the information into a streamlined rating infrastructure. This system consists of four distinguishing components: image repository, database, server-side logic, and Android mobile application. The image repository contains images from various JPL flight projects. The database stores the image information as well as the user rating. The server-side logic retrieves the image information from the database and categorizes each image for display. The Android mobile application is an interfacing delivery system that retrieves the image information from the server for each Android mobile device user. Also created is a reporting and tracking system for charting and monitoring usage. Unlike other Android mobile image applications, this system uses the latest emerging technologies to produce image listings based directly on user input. This allows for countless combinations of images returned. The backend infrastructure uses industry-standard coding and database methods, enabling future software improvement and technology updates. The flexibility of the system design framework permits multiple levels of display possibilities and provides integration capabilities. Unique features of the software include image/video retrieval from a selected set of categories, image Web links that can be shared among e-mail users, sharing to Facebook/Twitter, marking as user's favorites, and image metadata searchable for instant results.
Mazziotta, John C.; Woods, Roger; Iacoboni, Marco; Sicotte, Nancy; Yaden, Kami; Tran, Mary; Bean, Courtney; Kaplan, Jonas; Toga, Arthur W.
2009-01-01
In the course of developing an atlas and reference system for the normal human brain throughout the human age span from structural and functional brain imaging data, the International Consortium for Brain Mapping (ICBM) developed a set of “normal” criteria for subject inclusion and the associated exclusion criteria. The approach was to minimize inclusion of subjects with any medical disorders that could affect brain structure or function. In the past two years, a group of 1,685 potential subjects responded to solicitation advertisements at one of the consortium sites (UCLA). Subjects were screened by a detailed telephone interview and then had an in-person history and physical examination. Of those who responded to the advertisement and considered themselves to be normal, only 31.6% (532 subjects) passed the telephone screening process. Of the 348 individuals who submitted to in-person history and physical examinations, only 51.7% passed these screening procedures. Thus, only 10.7% of those individuals who responded to the original advertisement qualified for imaging. The most frequent cause for exclusion in the second phase of subject screening was high blood pressure followed by abnormal signs on neurological examination. It is concluded that the majority of individuals who consider themselves normal by self-report are found not to be so by detailed historical interviews about underlying medical conditions and by thorough medical and neurological examinations. Recommendations are made with regard to the inclusion of subjects in brain imaging studies and the criteria used to select them. PMID:18775497
VIEWCACHE: An incremental pointer-based access method for autonomous interoperable databases
NASA Technical Reports Server (NTRS)
Roussopoulos, N.; Sellis, Timos
1992-01-01
One of the biggest problems facing NASA today is to provide scientists efficient access to a large number of distributed databases. Our pointer-based incremental database access method, VIEWCACHE, provides such an interface for accessing distributed data sets and directories. VIEWCACHE allows database browsing and searching, performing inter-database cross-referencing with no actual data movement between database sites. This organization and processing is especially suitable for managing Astrophysics databases, which are physically distributed all over the world. Once the search is complete, the set of collected pointers pointing to the desired data is cached. VIEWCACHE includes spatial access methods for accessing image data sets, which provide much easier query formulation by referring directly to the image and very efficient search for objects contained within a two-dimensional window. We will develop and optimize a VIEWCACHE External Gateway Access to database management systems to facilitate distributed database search.
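The two ideas in this abstract, caching pointers to remote objects instead of moving the data, and answering two-dimensional window queries over image data sets, can be illustrated with a small sketch; the class and field names below are assumptions, not the actual VIEWCACHE code:

```python
# Minimal sketch (not the actual VIEWCACHE implementation) of caching
# pointers returned by a cross-database search and answering 2D window
# queries against an image dataset.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass(frozen=True)
class Pointer:
    site: str        # which remote database holds the object
    object_id: str   # identifier at that site
    x: float         # position of the object in the image/sky plane
    y: float

class ViewCache:
    """Caches pointers returned by a cross-database search."""
    def __init__(self) -> None:
        self._pointers: List[Pointer] = []

    def add_results(self, pointers: List[Pointer]) -> None:
        # An incremental refresh would only append pointers for new or
        # changed objects; here we simply accumulate them.
        self._pointers.extend(pointers)

    def window_query(self, box: Tuple[float, float, float, float]) -> List[Pointer]:
        """Return cached pointers falling inside a window (xmin, ymin, xmax, ymax)."""
        xmin, ymin, xmax, ymax = box
        return [p for p in self._pointers
                if xmin <= p.x <= xmax and ymin <= p.y <= ymax]

if __name__ == "__main__":
    cache = ViewCache()
    cache.add_results([Pointer("site-a", "obj1", 1.0, 2.0),
                       Pointer("site-b", "obj2", 5.0, 5.0)])
    print(cache.window_query((0.0, 0.0, 3.0, 3.0)))  # only obj1 falls inside
```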
Associative memory model for searching an image database by image snippet
NASA Astrophysics Data System (ADS)
Khan, Javed I.; Yun, David Y.
1994-09-01
This paper presents an associative memory called multidimensional holographic associative computing (MHAC), which can potentially be used to perform feature-based image database queries using an image snippet. MHAC has the unique capability to selectively focus on specific segments of a query frame during associative retrieval. As a result, this model can perform search on the basis of featural significance described by a subset of the snippet pixels. This capability is critical for visual query in image databases because quite often the cognitive index features in the snippet are statistically weak. Unlike conventional artificial associative memories, MHAC uses a two-level representation and incorporates additional meta-knowledge about the reliability status of the segments of information it receives and forwards. In this paper we present the analysis of the focus characteristics of MHAC.
Rosenthal, Alex; Gabrielian, Andrei; Engle, Eric; Hurt, Darrell E; Alexandru, Sofia; Crudu, Valeriu; Sergueev, Eugene; Kirichenko, Valery; Lapitskii, Vladzimir; Snezhko, Eduard; Kovalev, Vassili; Astrovko, Andrei; Skrahina, Alena; Taaffe, Jessica; Harris, Michael; Long, Alyssa; Wollenberg, Kurt; Akhundova, Irada; Ismayilova, Sharafat; Skrahin, Aliaksandr; Mammadbayov, Elcan; Gadirova, Hagigat; Abuzarov, Rafik; Seyfaddinova, Mehriban; Avaliani, Zaza; Strambu, Irina; Zaharia, Dragos; Muntean, Alexandru; Ghita, Eugenia; Bogdan, Miron; Mindru, Roxana; Spinu, Victor; Sora, Alexandra; Ene, Catalina; Vashakidze, Sergo; Shubladze, Natalia; Nanava, Ucha; Tuzikov, Alexander; Tartakovsky, Michael
2017-11-01
The TB Portals program is an international consortium of physicians, radiologists, and microbiologists from countries with a heavy burden of drug-resistant tuberculosis working with data scientists and information technology professionals. Together, we have built the TB Portals, a repository of socioeconomic/geographic, clinical, laboratory, radiological, and genomic data from patient cases of drug-resistant tuberculosis backed by shareable, physical samples. Currently, there are 1,299 total cases from five country sites (Azerbaijan, Belarus, Moldova, Georgia, and Romania), 976 (75.1%) of which are multidrug or extensively drug resistant and 38.2%, 51.9%, and 36.3% of which contain X-ray, computed tomography (CT) scan, and genomic data, respectively. The top Mycobacterium tuberculosis lineages represented among collected samples are Beijing, T1, and H3, and single nucleotide polymorphisms (SNPs) that confer resistance to isoniazid, rifampin, ofloxacin, and moxifloxacin occur the most frequently. These data and samples have promoted drug discovery efforts and research into genomics and quantitative image analysis to improve diagnostics while also serving as a valuable resource for researchers and clinical providers. The TB Portals database and associated projects are continually growing, and we invite new partners and collaborations to our initiative. The TB Portals data and their associated analytical and statistical tools are freely available at https://tbportals.niaid.nih.gov/.
Building confidence and credibility into CAD with belief decision trees
NASA Astrophysics Data System (ADS)
Affenit, Rachael N.; Barns, Erik R.; Furst, Jacob D.; Rasin, Alexander; Raicu, Daniela S.
2017-03-01
Creating classifiers for computer-aided diagnosis in the absence of ground truth is a challenging problem. Using experts' opinions as reference truth is difficult because the variability in the experts' interpretations introduces uncertainty in the labeled diagnostic data. This uncertainty translates into noise, which can significantly affect the performance of any classifier on test data. To address this problem, we propose a new label set weighting approach to combine the experts' interpretations and their variability, as well as a selective iterative classification (SIC) approach that is based on conformal prediction. Using the NIH/NCI Lung Image Database Consortium (LIDC) dataset, in which four radiologists interpreted the lung nodule characteristics, including the degree of malignancy, we illustrate the benefits of the proposed approach. Our results show that the proposed 2-label-weighted approach significantly outperforms the original 5-label and 2-label-unweighted classification approaches in accuracy, by 39.9% and 7.6%, respectively. We also found that the weighted 2-label models produce root mean square error (RMSE) distributions whose skewness is higher by 1.05 and 0.61 for non-SIC and SIC, respectively. When each approach was combined with selective iterative classification, this further improved the accuracy of the 2-label-weighted approach by 7.5% over the original, and improved the skewness of the 5-label and 2-label-unweighted approaches by 0.22 and 0.44, respectively.
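As a concrete illustration of turning several radiologists' ratings into a weighted label set, the sketch below maps four 1-5 malignancy ratings to a soft two-label target; the binarization threshold and the use of the empirical vote distribution as weights are assumptions, not the authors' exact formulation:

```python
# Illustrative sketch of one way to turn four radiologists' 1-5 malignancy
# ratings into a weighted 2-label target, in the spirit of the label-set
# weighting idea above. The threshold and weighting rule are assumptions.
from collections import Counter
from typing import Dict, List

def weighted_two_label(ratings: List[int], threshold: int = 3) -> Dict[str, float]:
    """Map ratings (1-5, one per radiologist) to weights over {benign, malignant}."""
    if not ratings:
        raise ValueError("need at least one rating")
    votes = Counter("malignant" if r > threshold else "benign" for r in ratings)
    total = sum(votes.values())
    return {label: count / total for label, count in votes.items()}

if __name__ == "__main__":
    # Radiologists disagree (ratings 2, 3, 4, 5) -> soft label 0.5 / 0.5
    print(weighted_two_label([2, 3, 4, 5]))
    # Strong agreement (4, 5, 5, 4) -> all weight on "malignant"
    print(weighted_two_label([4, 5, 5, 4]))
```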
NASA Astrophysics Data System (ADS)
Sakano, Toshikazu; Furukawa, Isao; Okumura, Akira; Yamaguchi, Takahiro; Fujii, Tetsuro; Ono, Sadayasu; Suzuki, Junji; Matsuya, Shoji; Ishihara, Teruo
2001-08-01
The widespread use of digital technology in the medical field has led to a demand for a high-quality, high-speed, and user-friendly digital image presentation system for daily medical conferences. To fulfill this demand, we developed a presentation system for radiological and pathological images. It is composed of a super-high-definition (SHD) imaging system, a radiological image database (R-DB), a pathological image database (P-DB), and the network interconnecting these three. The R-DB consists of a 270GB RAID, a database server workstation, and a film digitizer. The P-DB includes an optical microscope, a four-million-pixel digital camera, a 90GB RAID, and a database server workstation. A 100Mbps Ethernet LAN interconnects all the sub-systems. Web-based system operation software was developed for easy operation. We installed the whole system in NTT East Kanto Hospital to evaluate it in the weekly case conferences. The SHD system could display digital full-color images of 2048 x 2048 pixels on a 28-inch CRT monitor. The doctors evaluated the image quality and size, and found them applicable to actual medical diagnosis. They also appreciated the short image switching time, which contributed to smooth presentation. Thus, we confirmed that the system's characteristics met the requirements.
Northern New Jersey Nursing Education Consortium: a partnership for graduate nursing education.
Quinless, F W; Levin, R F
1998-01-01
The purpose of this article is to describe the evolution and implementation of the Northern New Jersey Nursing Education consortium--a consortium of seven member institutions established in 1992. Details regarding the specific functions of the consortium relative to cross-registration of students in graduate courses, financial disbursement of revenue, faculty development activities, student services, library privileges, and institutional research review board mechanisms are described. The authors also review the administrative organizational structure through which the work conducted by the consortium occurs. Both the advantages and disadvantages of such a graduate consortium are explored, and specific examples of recent potential and real conflicts are fully discussed. The authors detail governance and structure of the consortium as a potential model for replication in other environments.
Akbari, Hamed; Bilello, Michel; Da, Xiao; Davatzikos, Christos
2015-01-01
Evaluating various algorithms for the inter-subject registration of brain magnetic resonance images (MRI) is a necessary topic receiving growing attention. Existing studies evaluated image registration algorithms in specific tasks or using specific databases (e.g., only for skull-stripped images, only for single-site images, etc.). Consequently, the choice of registration algorithms seems task- and usage/parameter-dependent. Nevertheless, recent large-scale, often multi-institutional imaging-related studies create the need and raise the question of whether some registration algorithms can 1) generally apply to various tasks/databases posing various challenges; 2) perform consistently well, and while doing so, 3) require minimal or ideally no parameter tuning. In seeking answers to this question, we evaluated 12 general-purpose registration algorithms for their generality, accuracy and robustness. We fixed their parameters at values suggested by algorithm developers as reported in the literature. We tested them in 7 databases/tasks, which present one or more of 4 commonly-encountered challenges: 1) inter-subject anatomical variability in skull-stripped images; 2) intensity inhomogeneity, noise and large structural differences in raw images; 3) imaging protocol and field-of-view (FOV) differences in multi-site data; and 4) missing correspondences in pathology-bearing images. In total, 7,562 registrations were performed. Registration accuracies were measured by (multi-)expert-annotated landmarks or regions of interest (ROIs). To ensure reproducibility, we used public software tools, public databases (whenever possible), and we fully disclose the parameter settings. We show evaluation results, and discuss the performances in light of algorithms' similarity metrics, transformation models and optimization strategies. We also discuss future directions for algorithm development and evaluations. PMID:24951685
An image database management system for conducting CAD research
NASA Astrophysics Data System (ADS)
Gruszauskas, Nicholas; Drukker, Karen; Giger, Maryellen L.
2007-03-01
The development of image databases for CAD research is not a trivial task. The collection and management of images and their related metadata from multiple sources is a time-consuming but necessary process. By standardizing and centralizing the methods by which these data are maintained, one can generate subsets of a larger database that match the specific criteria needed for a particular research project in a quick and efficient manner. A research-oriented management system of this type is highly desirable in a multi-modality CAD research environment. An online, web-based database system for the storage and management of research-specific medical image metadata was designed for use with four modalities of breast imaging: screen-film mammography, full-field digital mammography, breast ultrasound and breast MRI. The system was designed to consolidate data from multiple clinical sources and provide the user with the ability to anonymize the data. Input concerning the type of data to be stored as well as desired searchable parameters was solicited from researchers in each modality. The backbone of the database was created using MySQL. A robust and easy-to-use interface for entering, removing, modifying and searching information in the database was created using HTML and PHP. This standardized system can be accessed using any modern web-browsing software and is fundamental for our various research projects on computer-aided detection, diagnosis, cancer risk assessment, multimodality lesion assessment, and prognosis. Our CAD database system stores large amounts of research-related metadata and successfully generates subsets of cases that match the user's desired search criteria.
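The core pattern, a centralized metadata table from which modality-specific case subsets are pulled, can be sketched as follows; the field names and values are hypothetical, and sqlite3 stands in for the MySQL backend the authors describe:

```python
# Minimal sketch of a centralized table of research-specific image metadata
# from which case subsets matching a researcher's criteria are generated.
# Field names and values are hypothetical; sqlite3 stands in for MySQL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE case_metadata (
    case_id      TEXT PRIMARY KEY,
    modality     TEXT,      -- e.g. 'FFDM', 'SFM', 'US', 'MRI'
    source_site  TEXT,
    anonymized   INTEGER,   -- 1 if identifiers have been stripped
    lesion_type  TEXT,
    biopsy_truth TEXT)""")
conn.executemany("INSERT INTO case_metadata VALUES (?, ?, ?, ?, ?, ?)", [
    ("c001", "FFDM", "site-a", 1, "mass", "malignant"),
    ("c002", "US",   "site-b", 1, "mass", "benign"),
    ("c003", "FFDM", "site-a", 0, "calcification", "benign")])

def subset(modality: str, anonymized_only: bool = True):
    """Return case IDs matching the researcher's criteria."""
    query = "SELECT case_id FROM case_metadata WHERE modality = ?"
    if anonymized_only:
        query += " AND anonymized = 1"
    return [row[0] for row in conn.execute(query, (modality,))]

print(subset("FFDM"))  # -> ['c001']
```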
Visualization and manipulating the image of a formal data structure (FDS)-based database
NASA Astrophysics Data System (ADS)
Verdiesen, Franc; de Hoop, Sylvia; Molenaar, Martien
1994-08-01
A vector map is a terrain representation with a vector-structured geometry. Molenaar formulated an object-oriented formal data structure (FDS) for 3D single-valued vector maps. This FDS is implemented in a database (Oracle). In this study we describe a methodology for visualizing an FDS-based database and manipulating the image. A data set retrieved by querying the database is converted into an import file for a drawing application. An objective of this study is to allow an end-user to alter and add terrain objects in the image. The drawing application creates an export file that is compared with the import file. Differences between these files result in updates to the database, which involve consistency checks. In this study Autocad is used for visualizing and manipulating the image of the data set. A computer program has been written for the data exchange and conversion between Oracle and Autocad. The data structure of the FDS is compared to the data structure of Autocad, and the FDS data are converted into the equivalent Autocad structure.
Multiple Representations-Based Face Sketch-Photo Synthesis.
Peng, Chunlei; Gao, Xinbo; Wang, Nannan; Tao, Dacheng; Li, Xuelong; Li, Jie
2016-11-01
Face sketch-photo synthesis plays an important role in law enforcement and digital entertainment. Most of the existing methods only use pixel intensities as the feature. Since face images can be described using features from multiple aspects, this paper presents a novel multiple-representations-based face sketch-photo synthesis method that adaptively combines multiple representations to represent an image patch. In particular, it combines multiple features from face images processed using multiple filters and deploys Markov networks to exploit the interacting relationships between neighboring image patches. The proposed framework can be solved using an alternating optimization strategy, and it normally converges in only five outer iterations in the experiments. Our experimental results on the Chinese University of Hong Kong (CUHK) face sketch database, celebrity photos, CUHK Face Sketch FERET Database, IIIT-D Viewed Sketch Database, and forensic sketches demonstrate the effectiveness of our method for face sketch-photo synthesis. In addition, cross-database and database-dependent style-synthesis evaluations demonstrate the generalizability of this novel method and suggest promising solutions for face identification in forensic science.
10 CFR 603.515 - Qualification of a consortium.
Code of Federal Regulations, 2013 CFR
2013-01-01
... Energy DEPARTMENT OF ENERGY (CONTINUED) ASSISTANCE REGULATIONS TECHNOLOGY INVESTMENT AGREEMENTS Pre-Award Business Evaluation Recipient Qualification § 603.515 Qualification of a consortium. (a) A consortium that... under the agreement. (b) If the prospective recipient of a TIA is a consortium that is not formally...
10 CFR 603.515 - Qualification of a consortium.
Code of Federal Regulations, 2012 CFR
2012-01-01
... Energy DEPARTMENT OF ENERGY (CONTINUED) ASSISTANCE REGULATIONS TECHNOLOGY INVESTMENT AGREEMENTS Pre-Award Business Evaluation Recipient Qualification § 603.515 Qualification of a consortium. (a) A consortium that... under the agreement. (b) If the prospective recipient of a TIA is a consortium that is not formally...
10 CFR 603.515 - Qualification of a consortium.
Code of Federal Regulations, 2014 CFR
2014-01-01
... Energy DEPARTMENT OF ENERGY (CONTINUED) ASSISTANCE REGULATIONS TECHNOLOGY INVESTMENT AGREEMENTS Pre-Award Business Evaluation Recipient Qualification § 603.515 Qualification of a consortium. (a) A consortium that... under the agreement. (b) If the prospective recipient of a TIA is a consortium that is not formally...
Code of Federal Regulations, 2010 CFR
2010-04-01
... Participation in Tribal Self-Governance Withdrawal from A Consortium Annual Funding Agreement § 1000.33 What... the Consortium agreement is reduced if: (1) The Consortium, Tribe, OSG, and bureau agree it is...
Automatic classification and detection of clinically relevant images for diabetic retinopathy
NASA Astrophysics Data System (ADS)
Xu, Xinyu; Li, Baoxin
2008-03-01
We propose a novel approach to automatic classification of Diabetic Retinopathy (DR) images and retrieval of clinically-relevant DR images from a database. Given a query image, our approach first classifies the image into one of three categories: microaneurysm (MA), neovascularization (NV) and normal, and then it retrieves DR images that are clinically-relevant to the query image from an archival image database. In the classification stage, the query DR images are classified by the Multi-class Multiple-Instance Learning (McMIL) approach, where images are viewed as bags, each of which contains a number of instances corresponding to non-overlapping blocks, and each block is characterized by low-level features including color, texture, histogram of edge directions, and shape. McMIL first learns a collection of instance prototypes for each class that maximizes the Diverse Density function using the Expectation-Maximization algorithm. A nonlinear mapping is then defined using the instance prototypes and maps every bag to a point in a new multi-class bag feature space. Finally, a multi-class Support Vector Machine is trained in the multi-class bag feature space. In the retrieval stage, we retrieve images from the archival database that bear the same label as the query image and that are the top K nearest neighbors of the query image in terms of similarity in the multi-class bag feature space. The classification approach achieves high classification accuracy, and the retrieval of clinically-relevant images not only facilitates utilization of the vast amount of hidden diagnostic knowledge in the database, but also improves the efficiency and accuracy of DR lesion diagnosis and assessment.
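The retrieval stage can be sketched as below: each image (a bag of block features) is mapped to a point in a bag feature space via its distances to instance prototypes, and the top-K nearest neighbors sharing the query's predicted label are returned. The prototype-distance mapping here is a simplified stand-in for the Diverse-Density-based construction, and all data are synthetic:

```python
# Sketch of the retrieval stage: map bags of block features to a bag feature
# space using instance prototypes, then return the top-K nearest neighbors
# that share the query's predicted label. Simplified stand-in, synthetic data.
import numpy as np

def bag_to_point(bag: np.ndarray, prototypes: np.ndarray) -> np.ndarray:
    """bag: (n_blocks, d) block features; prototypes: (m, d).
    One coordinate per prototype: how close any block comes to it."""
    d = np.linalg.norm(bag[:, None, :] - prototypes[None, :, :], axis=2)
    return np.exp(-d.min(axis=0))  # point in the m-dimensional bag feature space

def retrieve(query_pt, query_label, db_pts, db_labels, k=3):
    same = [i for i, lab in enumerate(db_labels) if lab == query_label]
    dists = [(np.linalg.norm(db_pts[i] - query_pt), i) for i in same]
    return [i for _, i in sorted(dists)[:k]]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    prototypes = rng.normal(size=(4, 8))                 # m=4 prototypes, d=8
    db_bags = [rng.normal(size=(20, 8)) for _ in range(10)]
    db_labels = ["MA", "NV", "normal", "MA", "NV", "MA", "normal", "NV", "MA", "normal"]
    db_pts = np.array([bag_to_point(b, prototypes) for b in db_bags])
    query_pt = bag_to_point(rng.normal(size=(20, 8)), prototypes)
    print(retrieve(query_pt, "MA", db_pts, db_labels, k=2))
```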
Bernardo, Theresa M; Malinowski, Robert P
2005-01-01
In this article, advances in the application of medical media to education, clinical care, and research are explored and illustrated with examples, and their future potential is discussed. Impact is framed in terms of the Sloan Consortium's five pillars of quality education: access; student and faculty satisfaction; learning effectiveness; and cost effectiveness. (Hiltz SR, Zhang Y, Turoff M. Studies of effectiveness of learning networks. In Bourne J, Moore J, ed. Elements of Quality Online Education. Needham, MA: Sloan-Consortium, 2002:15-45). The alternatives for converting analog media (text, photos, graphics, sound, video, animations, radiographs) to digital media and direct digital capture are covered, as are options for storing, manipulating, retrieving, and sharing digital collections. Diagnostic imaging is given particular attention, clarifying the difference between computerized radiography and digital radiography and explaining the accepted standard (DICOM) and the advantages of Web PACS. Some novel research applications of medical media are presented.
Retinal Disease in Marfan Syndrome: From the Marfan Eye Consortium of Chicago.
Rahmani, Safa; Lyon, Alice T; Fawzi, Amani A; Maumenee, Irene H; Mets, Marilyn B
2015-10-01
To study the prevalence of peripheral retinal disease in patients with Marfan Syndrome (MFS). In this observational, cross-sectional case series, patients with MFS were recruited by the Marfan Eye Consortium of Chicago during the National Marfan Foundation's annual conference. Patients underwent a fully dilated exam by vitreoretinal specialists in addition to ultra-widefield fundus photography using a scanning laser ophthalmoscope (Optos 200Tx; Optos PLC, Dunfermline, Scotland, United Kingdom). Clinical examination revealed posterior segment pathology in 18% of eyes with increased incidence to 70% in patients with a subluxed lens. In six out of 10 subjects in whom the clinical exam was suboptimal (young age, small pupil, and limited cooperation), the Optos provided a superior view of the peripheral retina compared to clinical exam alone. Clinical exam of MFS patients revealed similar posterior segment pathology as noted in previous literature, with improved detection of peripheral retinal disease with the use of ultra-widefield imaging. Copyright 2015, SLACK Incorporated.
Singh, Anushikha; Dutta, Malay Kishore; Sharma, Dilip Kumar
2016-10-01
Identification of fundus images during transmission and storage in databases for tele-ophthalmology applications is an important issue in the modern era. The proposed work presents a novel, accurate method for generating a unique identification code for fundus images for tele-ophthalmology applications and storage in databases. Unlike existing methods of steganography and watermarking, this method does not tamper with the medical image, as nothing is embedded in this approach and there is no loss of medical information. A strategic combination of the unique blood vessel pattern and the patient ID is used to generate a unique identification code for digital fundus images: the segmented blood vessel pattern near the optic disc is strategically combined with the patient ID to generate a unique identification code for the image. The proposed method of medical image identification is tested on the publicly available DRIVE and MESSIDOR databases of fundus images, and the results are encouraging. Experimental results indicate the uniqueness of the identification code and lossless recovery of patient identity from the unique identification code for integrity verification of fundus images. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
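As a toy illustration of combining a segmented vessel pattern with a patient ID into a compact code, one could hash a vessel-mask patch around the optic disc together with the ID, as sketched below; this is not the paper's encoding (which additionally supports recovery of the patient identity), and the crop size, hashing step, and code format are assumptions:

```python
# Illustrative sketch only: derive an identification code by combining a
# segmented vessel pattern near the optic disc with a patient ID. The paper's
# actual encoding is not reproduced here; everything below is an assumption.
import hashlib
import numpy as np

def vessel_roi_bits(vessel_mask: np.ndarray, disc_center, size: int = 64) -> bytes:
    """Extract a size x size binary patch of the vessel mask around the optic disc."""
    r, c = disc_center
    half = size // 2
    patch = vessel_mask[r - half:r + half, c - half:c + half] > 0
    return np.packbits(patch.astype(np.uint8)).tobytes()

def identification_code(vessel_mask, disc_center, patient_id: str) -> str:
    payload = vessel_roi_bits(vessel_mask, disc_center) + patient_id.encode("utf-8")
    return hashlib.sha256(payload).hexdigest()[:32]  # 128-bit hex code

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    mask = (rng.random((512, 512)) > 0.9).astype(np.uint8)  # toy "vessel" mask
    print(identification_code(mask, disc_center=(256, 300), patient_id="P-0042"))
```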
Patel, Ashokkumar A; Gilbertson, John R; Showe, Louise C; London, Jack W; Ross, Eric; Ochs, Michael F; Carver, Joseph; Lazarus, Andrea; Parwani, Anil V; Dhir, Rajiv; Beck, J Robert; Liebman, Michael; Garcia, Fernando U; Prichard, Jeff; Wilkerson, Myra; Herberman, Ronald B; Becich, Michael J
2007-06-08
The Pennsylvania Cancer Alliance Bioinformatics Consortium (PCABC, http://www.pcabc.upmc.edu) is one of the first major project-based initiatives stemming from the Pennsylvania Cancer Alliance that was funded for four years by the Department of Health of the Commonwealth of Pennsylvania. The objective of this was to initiate a prototype biorepository and bioinformatics infrastructure with a robust data warehouse by developing a statewide data model (1) for bioinformatics and a repository of serum and tissue samples; (2) a data model for biomarker data storage; and (3) a public access website for disseminating research results and bioinformatics tools. The members of the Consortium cooperate closely, exploring the opportunity for sharing clinical, genomic and other bioinformatics data on patient samples in oncology, for the purpose of developing collaborative research programs across cancer research institutions in Pennsylvania. The Consortium's intention was to establish a virtual repository of many clinical specimens residing in various centers across the state, in order to make them available for research. One of our primary goals was to facilitate the identification of cancer-specific biomarkers and encourage collaborative research efforts among the participating centers. The PCABC has developed unique partnerships so that every region of the state can effectively contribute and participate. It includes over 80 individuals from 14 organizations, and plans to expand to partners outside the State. This has created a network of researchers, clinicians, bioinformaticians, cancer registrars, program directors, and executives from academic and community health systems, as well as external corporate partners - all working together to accomplish a common mission. The various sub-committees have developed a common IRB protocol template, common data elements for standardizing data collections for three organ sites, intellectual property/tech transfer agreements, and material transfer agreements that have been approved by each of the member institutions. This was the foundational work that has led to the development of a centralized data warehouse that has met each of the institutions' IRB/HIPAA standards. Currently, this "virtual biorepository" has over 58,000 annotated samples from 11,467 cancer patients available for research purposes. The clinical annotation of tissue samples is either done manually over the internet or semi-automated batch modes through mapping of local data elements with PCABC common data elements. The database currently holds information on 7188 cases (associated with 9278 specimens and 46,666 annotated blocks and blood samples) of prostate cancer, 2736 cases (associated with 3796 specimens and 9336 annotated blocks and blood samples) of breast cancer and 1543 cases (including 1334 specimens and 2671 annotated blocks and blood samples) of melanoma. These numbers continue to grow, and plans to integrate new tumor sites are in progress. Furthermore, the group has also developed a central web-based tool that allows investigators to share their translational (genomics/proteomics) experiment data on research evaluating potential biomarkers via a central location on the Consortium's web site. The technological achievements and the statewide informatics infrastructure that have been established by the Consortium will enable robust and efficient studies of biomarkers and their relevance to the clinical course of cancer. 
Studies resulting from the creation of the Consortium may allow for better classification of cancer types, more accurate assessment of disease prognosis, a better ability to identify the most appropriate individuals for clinical trial participation, and better surrogate markers of disease progression and/or response to therapy.
Exploring the feasibility of traditional image querying tasks for industrial radiographs
NASA Astrophysics Data System (ADS)
Bray, Iliana E.; Tsai, Stephany J.; Jimenez, Edward S.
2015-08-01
Although there have been great strides in object recognition with optical images (photographs), there has been comparatively little research into object recognition for X-ray radiographs. Our exploratory work contributes to this area by creating an object recognition system designed to recognize components from a related database of radiographs. Object recognition for radiographs must be approached differently than for optical images, because radiographs have much less color-based information to distinguish objects, and they exhibit transmission overlap that alters perceived object shapes. The dataset used in this work contained more than 55,000 intermixed radiographs and photographs, all in a compressed JPEG form and with multiple ways of describing pixel information. For this work, a robust and efficient system is needed to combat problems presented by properties of the X-ray imaging modality, the large size of the given database, and the quality of the images contained in said database. We have explored various pre-processing techniques to clean the cluttered and low-quality images in the database, and we have developed our object recognition system by combining multiple object detection and feature extraction methods. We present the preliminary results of the still-evolving hybrid object recognition system.
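The kind of pre-processing this abstract alludes to for cluttered, low-quality radiographs, local contrast enhancement, denoising, and an edge map for later shape-based matching, might look like the following sketch (not the authors' exact pipeline; requires OpenCV, and the file path is a placeholder):

```python
# A sketch (not the authors' exact pipeline) of generic pre-processing for
# low-quality transmission radiographs: contrast enhancement, denoising,
# and an edge map for later shape-based matching.
import cv2

def preprocess_radiograph(path: str):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        raise FileNotFoundError(path)
    # Local contrast enhancement helps with the low dynamic range of
    # transmission images in which objects overlap.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(img)
    denoised = cv2.GaussianBlur(enhanced, (5, 5), 0)
    edges = cv2.Canny(denoised, 50, 150)  # shape cues for object matching
    return denoised, edges

if __name__ == "__main__":
    cleaned, edge_map = preprocess_radiograph("example_radiograph.jpg")  # placeholder path
    cv2.imwrite("cleaned.png", cleaned)
    cv2.imwrite("edges.png", edge_map)
```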
25 CFR 1000.310 - What information must the Tribe's/Consortium's response contain?
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 2 2014-04-01 2014-04-01 false What information must the Tribe's/Consortium's response... INDIAN SELF-DETERMINATION AND EDUCATION ACT Reassumption § 1000.310 What information must the Tribe's/Consortium's response contain? (a) The Tribe's/Consortium's response must indicate the specific measures that...
25 CFR 1000.255 - May a Tribe/Consortium reallocate funds among construction programs?
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 2 2013-04-01 2013-04-01 false May a Tribe/Consortium reallocate funds among... INDIAN SELF-DETERMINATION AND EDUCATION ACT Construction § 1000.255 May a Tribe/Consortium reallocate funds among construction programs? Yes, a Tribe/Consortium may reallocate funds among construction...
25 CFR 1000.310 - What information must the Tribe's/Consortium's response contain?
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 2 2013-04-01 2013-04-01 false What information must the Tribe's/Consortium's response... INDIAN SELF-DETERMINATION AND EDUCATION ACT Reassumption § 1000.310 What information must the Tribe's/Consortium's response contain? (a) The Tribe's/Consortium's response must indicate the specific measures that...
25 CFR 1000.255 - May a Tribe/Consortium reallocate funds among construction programs?
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 2 2014-04-01 2014-04-01 false May a Tribe/Consortium reallocate funds among... INDIAN SELF-DETERMINATION AND EDUCATION ACT Construction § 1000.255 May a Tribe/Consortium reallocate funds among construction programs? Yes, a Tribe/Consortium may reallocate funds among construction...
24 CFR 943.124 - What elements must a consortium agreement contain?
Code of Federal Regulations, 2010 CFR
2010-04-01
... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false What elements must a consortium agreement contain? 943.124 Section 943.124 Housing and Urban Development Regulations Relating to Housing and... elements must a consortium agreement contain? (a) The consortium agreement among the participating PHAs...
24 CFR 943.124 - What elements must a consortium agreement contain?
Code of Federal Regulations, 2011 CFR
2011-04-01
... 24 Housing and Urban Development 4 2011-04-01 2011-04-01 false What elements must a consortium agreement contain? 943.124 Section 943.124 Housing and Urban Development REGULATIONS RELATING TO HOUSING AND... elements must a consortium agreement contain? (a) The consortium agreement among the participating PHAs...
24 CFR 943.124 - What elements must a consortium agreement contain?
Code of Federal Regulations, 2012 CFR
2012-04-01
... 24 Housing and Urban Development 4 2012-04-01 2012-04-01 false What elements must a consortium agreement contain? 943.124 Section 943.124 Housing and Urban Development REGULATIONS RELATING TO HOUSING AND... elements must a consortium agreement contain? (a) The consortium agreement among the participating PHAs...
24 CFR 943.124 - What elements must a consortium agreement contain?
Code of Federal Regulations, 2013 CFR
2013-04-01
... 24 Housing and Urban Development 4 2013-04-01 2013-04-01 false What elements must a consortium agreement contain? 943.124 Section 943.124 Housing and Urban Development REGULATIONS RELATING TO HOUSING AND... elements must a consortium agreement contain? (a) The consortium agreement among the participating PHAs...
24 CFR 943.124 - What elements must a consortium agreement contain?
Code of Federal Regulations, 2014 CFR
2014-04-01
... 24 Housing and Urban Development 4 2014-04-01 2014-04-01 false What elements must a consortium agreement contain? 943.124 Section 943.124 Housing and Urban Development REGULATIONS RELATING TO HOUSING AND... elements must a consortium agreement contain? (a) The consortium agreement among the participating PHAs...
NASA Astrophysics Data System (ADS)
Jarboe, N.; Minnett, R.; Constable, C.; Koppers, A. A.; Tauxe, L.
2013-12-01
The Magnetics Information Consortium (MagIC) is dedicated to supporting the paleomagnetic, geomagnetic, and rock magnetic communities through the development and maintenance of an online database (http://earthref.org/MAGIC/), data upload and quality control, searches, data downloads, and visualization tools. While MagIC has completed importing some of the IAGA paleomagnetic databases (TRANS, PINT, PSVRL, GPMDB) and continues to import others (ARCHEO, MAGST and SECVR), further individual data uploading from the community contributes a wealth of easily-accessible rich datasets. Previously uploading of data to the MagIC database required the use of an Excel spreadsheet using either a Mac or PC. The new method of uploading data utilizes an HTML 5 web interface where the only computer requirement is a modern browser. This web interface will highlight all errors discovered in the dataset at once instead of the iterative error checking process found in the previous Excel spreadsheet data checker. As a web service, the community will always have easy access to the most up-to-date and bug free version of the data upload software. The filtering search mechanism of the MagIC database has been changed to a more intuitive system where the data from each contribution is displayed in tables similar to how the data is uploaded (http://earthref.org/MAGIC/search/). Searches themselves can be saved as a permanent URL, if desired. The saved search URL could then be used as a citation in a publication. When appropriate, plots (equal area, Zijderveld, ARAI, demagnetization, etc.) are associated with the data to give the user a quicker understanding of the underlying dataset. The MagIC database will continue to evolve to meet the needs of the paleomagnetic, geomagnetic, and rock magnetic communities.
Semi-automatic feedback using concurrence between mixture vectors for general databases
NASA Astrophysics Data System (ADS)
Larabi, Mohamed-Chaker; Richard, Noel; Colot, Olivier; Fernandez-Maloigne, Christine
2001-12-01
This paper describes how a query system can exploit basic knowledge by employing semi-automatic relevance feedback to refine queries and reduce runtimes. For general databases, it is often useless to invoke complex attributes, because we do not have sufficient information about the images in the database. Moreover, these images can differ greatly from one another, and an attribute that is powerful for one database category may be nearly useless for the others. The idea is to use very simple features, such as color histograms, correlograms, and Color Coherence Vectors (CCV), to fill out the signature vector. Then, a number of mixture vectors is prepared, depending on the number of very distinctive categories in the database; a mixture vector contains the weight of each attribute that will be used to compute a similarity distance. We query the database successively with all the mixture vectors defined previously and then retain the first N images for each vector in order to build a mapping using the following information: is image I present in the results of several mixture vectors, and what is its rank in those results? This information allows us to switch the system to either unsupervised relevance feedback or the user's (supervised) feedback.
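A small sketch of the mixture-vector querying and merging step described above follows; the feature layout, weights, and merging rule (count of result lists containing an image, then best rank) are illustrative assumptions:

```python
# Sketch of the mixture-vector idea: each mixture vector weights per-feature
# distances, the database is queried once per mixture, and the top-N lists
# are merged by how often and how highly each image appears. Illustrative only.
import numpy as np

def weighted_distance(query, candidate, weights):
    """Per-feature distances combined with one mixture (weight) vector."""
    per_feature = np.abs(query - candidate).reshape(len(weights), -1).sum(axis=1)
    return float(np.dot(weights, per_feature))

def query_with_mixtures(query, database, mixtures, n_top=5):
    rankings = []
    for w in mixtures:
        d = [weighted_distance(query, img, w) for img in database]
        rankings.append(list(np.argsort(d)[:n_top]))
    # Merge: favor images that appear in several lists, then by best rank.
    score = {}
    for ranking in rankings:
        for rank, idx in enumerate(ranking):
            hits, best = score.get(idx, (0, n_top))
            score[idx] = (hits + 1, min(best, rank))
    return sorted(score, key=lambda i: (-score[i][0], score[i][1]))

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    db = rng.random((50, 3 * 16))          # 3 feature blocks of 16 bins each
    q = rng.random(3 * 16)
    mixtures = [np.array([1.0, 0.2, 0.2]), np.array([0.2, 1.0, 0.5])]
    print(query_with_mixtures(q, db, mixtures)[:5])
```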
Intelligent Interfaces for Mining Large-Scale RNAi-HCS Image Databases
Lin, Chen; Mak, Wayne; Hong, Pengyu; Sepp, Katharine; Perrimon, Norbert
2010-01-01
Recently, High-content screening (HCS) has been combined with RNA interference (RNAi) to become an essential image-based high-throughput method for studying genes and biological networks through RNAi-induced cellular phenotype analyses. However, a genome-wide RNAi-HCS screen typically generates tens of thousands of images, most of which remain uncategorized due to the inadequacies of existing HCS image analysis tools. Until now, it still requires highly trained scientists to browse a prohibitively large RNAi-HCS image database and produce only a handful of qualitative results regarding cellular morphological phenotypes. For this reason we have developed intelligent interfaces to facilitate the application of the HCS technology in biomedical research. Our new interfaces empower biologists with computational power not only to effectively and efficiently explore large-scale RNAi-HCS image databases, but also to apply their knowledge and experience to interactive mining of cellular phenotypes using Content-Based Image Retrieval (CBIR) with Relevance Feedback (RF) techniques. PMID:21278820
VIEWCACHE: An incremental pointer-based access method for autonomous interoperable databases
NASA Technical Reports Server (NTRS)
Roussopoulos, N.; Sellis, Timos
1993-01-01
One of the biggest problems facing NASA today is to provide scientists efficient access to a large number of distributed databases. Our pointer-based incremental data base access method, VIEWCACHE, provides such an interface for accessing distributed datasets and directories. VIEWCACHE allows database browsing and search performing inter-database cross-referencing with no actual data movement between database sites. This organization and processing is especially suitable for managing Astrophysics databases which are physically distributed all over the world. Once the search is complete, the set of collected pointers pointing to the desired data are cached. VIEWCACHE includes spatial access methods for accessing image datasets, which provide much easier query formulation by referring directly to the image and very efficient search for objects contained within a two-dimensional window. We will develop and optimize a VIEWCACHE External Gateway Access to database management systems to facilitate database search.
An end to end secure CBIR over encrypted medical database.
Bellafqira, Reda; Coatrieux, Gouenou; Bouslimi, Dalel; Quellec, Gwenole
2016-08-01
In this paper, we propose a new secure content-based image retrieval (SCBIR) system adapted to the cloud framework. This solution allows a physician to retrieve images of similar content within an outsourced and encrypted image database, without decrypting them. In contrast to existing CBIR approaches in the encrypted domain, the originality of the proposed scheme lies in the fact that the features extracted from the encrypted images are themselves encrypted. This is achieved by means of homomorphic encryption and two non-colluding servers, both of which we consider honest but curious. In this way, an end-to-end secure CBIR process is ensured. Experimental results carried out on a diabetic retinopathy database encrypted with the Paillier cryptosystem indicate that our SCBIR achieves retrieval performance as good as if images were processed in their non-encrypted form.
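The homomorphic property such a scheme relies on can be illustrated with the third-party python-paillier package (phe), which implements the Paillier cryptosystem mentioned above; this toy single-key example is not the authors' two-server protocol and only shows that a weighted sum over ciphertexts can be computed without decryption:

```python
# Toy illustration of Paillier homomorphism using the third-party
# python-paillier package ("phe"). NOT the authors' two-server protocol:
# it only shows a weighted sum computed directly on ciphertexts.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

features = [12, 7, 3]            # toy (plaintext) feature values
weights = [2, 5, 1]              # public weights applied by the server

# The client encrypts its features before outsourcing them.
enc_features = [public_key.encrypt(v) for v in features]

# The server computes a weighted sum directly on the ciphertexts:
# ciphertext additions and multiplications by plaintext scalars are
# supported homomorphically.
enc_score = enc_features[0] * weights[0]
for enc_v, w in zip(enc_features[1:], weights[1:]):
    enc_score = enc_score + enc_v * w

# Only the key holder can decrypt the result.
print(private_key.decrypt(enc_score))   # -> 2*12 + 5*7 + 1*3 = 62
```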
Palmprint Recognition Across Different Devices.
Jia, Wei; Hu, Rong-Xiang; Gui, Jie; Zhao, Yang; Ren, Xiao-Ming
2012-01-01
In this paper, the problem of Palmprint Recognition Across Different Devices (PRADD) is investigated, which has not been well studied so far. Since there is no publicly available PRADD image database, we created a non-contact PRADD image database containing 12,000 grayscale images captured from 100 subjects using three devices, i.e., one digital camera and two smart-phones. Due to the non-contact image acquisition used, rotation and scale changes between different images captured from the same palm are inevitable. We propose a robust method to calculate the palm width, which can be effectively used for scale normalization of palmprints. On this PRADD image database, we evaluate the recognition performance of three different methods, i.e., a subspace learning method, a correlation method, and an orientation coding-based method. Experimental results show that orientation coding-based methods achieved promising recognition performance for PRADD.
NASA Technical Reports Server (NTRS)
vonOfenheim, William H. C.; Heimerl, N. Lynn; Binkley, Robert L.; Curry, Marty A.; Slater, Richard T.; Nolan, Gerald J.; Griswold, T. Britt; Kovach, Robert D.; Corbin, Barney H.; Hewitt, Raymond W.
1998-01-01
This paper discusses the technical aspects of and the project background for the NASA Image eXchange (NIX). NIX, which provides a single entry point to search selected image databases at the NASA Centers, is a meta-search engine (i.e., a search engine that communicates with other search engines). It uses these distributed digital image databases to access photographs, animations, and their associated descriptive information (meta-data). NIX is available for use at the following URL: http://nix.nasa.gov/. NIX, which was sponsored by NASA's Scientific and Technical Information (STI) Program, currently serves images from seven NASA Centers. Plans are under way to link image databases from three additional NASA Centers. Images and their associated meta-data, which are accessible by NIX, reside at the originating Centers, and NIX utilizes a virtual central site that communicates with each of these sites. Incorporated into the virtual central site are several protocols to support searches from a diverse collection of database engines. The searches are performed in parallel to ensure optimization of response times. To augment the search capability, browse functionality with pre-defined categories has been built into NIX, thereby ensuring dissemination of 'best-of-breed' imagery. As a final recourse, NIX offers access to a help desk via an on-line form to help locate images and information either within the scope of NIX or from available external sources.
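The parallel fan-out pattern that a meta-search engine like this uses can be sketched as follows; the endpoint URLs and response format are placeholders rather than the actual NIX interfaces:

```python
# Sketch of the meta-search pattern: one query is fanned out in parallel to
# several center-specific image search services and the responses are merged.
# Endpoint URLs and the JSON shape are placeholders, not the real NIX APIs.
from concurrent.futures import ThreadPoolExecutor, as_completed
import requests

CENTER_ENDPOINTS = {
    "center-a": "https://images.example-center-a.nasa.gov/search",
    "center-b": "https://images.example-center-b.nasa.gov/search",
}

def search_center(name: str, url: str, query: str, timeout: float = 10.0):
    try:
        resp = requests.get(url, params={"q": query}, timeout=timeout)
        resp.raise_for_status()
        return name, resp.json().get("results", [])
    except requests.RequestException:
        return name, []  # a slow or failing site should not block the others

def meta_search(query: str):
    merged = []
    with ThreadPoolExecutor(max_workers=len(CENTER_ENDPOINTS)) as pool:
        futures = [pool.submit(search_center, n, u, query)
                   for n, u in CENTER_ENDPOINTS.items()]
        for fut in as_completed(futures):
            center, results = fut.result()
            merged.extend({"center": center, **r} for r in results)
    return merged

if __name__ == "__main__":
    print(meta_search("mars rover"))
```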
Content Based Image Retrieval based on Wavelet Transform coefficients distribution
Lamard, Mathieu; Cazuguel, Guy; Quellec, Gwénolé; Bekri, Lynda; Roux, Christian; Cochener, Béatrice
2007-01-01
In this paper we propose a content-based image retrieval method for diagnosis aid in medical fields. We characterize images without extracting significant features, by building signatures from the distribution of wavelet transform coefficients. Retrieval is carried out by computing signature distances between the query and database images. Several signatures are proposed; they use a model of the wavelet coefficient distribution. To enhance results, a weighted distance between signatures is used and an adapted wavelet basis is proposed. Retrieval efficiency is given for different databases, including a diabetic retinopathy, a mammography and a face database. Results are promising: the retrieval efficiency is higher than 95% for some cases using an optimization process. PMID:18003013
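A simplified sketch of this signature idea is given below: each wavelet detail subband is summarized by a couple of distribution parameters, and images are compared by a weighted distance between signatures. The authors model the coefficient distributions more carefully; the moments and weights here are assumptions (requires PyWavelets and NumPy):

```python
# Simplified sketch: describe each wavelet detail subband by two distribution
# parameters and compare images with a weighted distance between signatures.
# The moments and weights are assumptions made for illustration.
import numpy as np
import pywt

def wavelet_signature(img: np.ndarray, wavelet: str = "db2", level: int = 3):
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    sig = []
    for band in coeffs[1:]:                 # detail subbands per level
        for sub in band:                    # (horizontal, vertical, diagonal)
            sig.extend([np.mean(np.abs(sub)), np.std(sub)])
    return np.array(sig)

def signature_distance(sig_a, sig_b, weights=None):
    weights = np.ones_like(sig_a) if weights is None else weights
    return float(np.sum(weights * np.abs(sig_a - sig_b)))

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    query = rng.random((128, 128))
    database = [rng.random((128, 128)) for _ in range(5)]
    q_sig = wavelet_signature(query)
    dists = [signature_distance(q_sig, wavelet_signature(im)) for im in database]
    print(int(np.argmin(dists)))            # index of the closest image
```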
Baig, Sheharyar S; Strong, Mark; Rosser, Elisabeth; Taverner, Nicola V; Glew, Ruth; Miedzybrodzka, Zosia; Clarke, Angus; Craufurd, David; Quarrell, Oliver W
2016-10-01
Huntington's disease (HD) is a progressive neurodegenerative condition. At-risk individuals have accessed predictive testing via direct mutation testing since 1993. The UK Huntington's Prediction Consortium has collected anonymised data on UK predictive tests, annually, from 1993 to 2014: 9407 predictive tests were performed across 23 UK centres. Where gender was recorded, 4077 participants were male (44.3%) and 5122 were female (55.7%). The median age of participants was 37 years. The most common reason for predictive testing was to reduce uncertainty (70.5%). Of the 8441 predictive tests on individuals at 50% prior risk, 4629 (54.8%) were reported as mutation negative and 3790 (44.9%) were mutation positive, with 22 (0.3%) in the database being uninterpretable. Using a prevalence figure of 12.3 × 10^-5, the cumulative uptake of predictive testing in the 50% at-risk UK population from 1994 to 2014 was estimated at 17.4% (95% CI: 16.9-18.0%). We present the largest study conducted on predictive testing in HD. Our findings indicate that the vast majority of individuals at risk of HD (>80%) have not undergone predictive testing. Future therapies in HD will likely target presymptomatic individuals; therefore, identifying the at-risk population whose gene status is unknown is of significant public health value.
Papachristou, Georgios I; Machicado, Jorge D; Stevens, Tyler; Goenka, Mahesh Kumar; Ferreira, Miguel; Gutierrez, Silvia C; Singh, Vikesh K; Kamal, Ayesha; Gonzalez-Gonzalez, Jose A; Pelaez-Luna, Mario; Gulla, Aiste; Zarnescu, Narcis O; Triantafyllou, Konstantinos; Barbu, Sorin T; Easler, Jeffrey; Ocampo, Carlos; Capurso, Gabriele; Archibugi, Livia; Cote, Gregory A; Lambiase, Louis; Kochhar, Rakesh; Chua, Tiffany; Tiwari, Subhash Ch; Nawaz, Haq; Park, Walter G; de-Madaria, Enrique; Lee, Peter J; Wu, Bechien U; Greer, Phil J; Dugum, Mohannad; Koutroumpakis, Efstratios; Akshintala, Venkata; Gougol, Amir
2017-01-01
We have established a multicenter international consortium to better understand the natural history of acute pancreatitis (AP) worldwide and to develop a platform for future randomized clinical trials. The AP patient registry to examine novel therapies in clinical experience (APPRENTICE) was formed in July 2014. Detailed web-based questionnaires were then developed to prospectively capture information on demographics, etiology, pancreatitis history, comorbidities, risk factors, severity biomarkers, severity indices, health-care utilization, management strategies, and outcomes of AP patients. Between November 2015 and September 2016, a total of 20 sites (8 in the United States, 5 in Europe, 3 in South America, 2 in Mexico and 2 in India) prospectively enrolled 509 AP patients. All data were entered into the REDCap (Research Electronic Data Capture) database by participating centers and systematically reviewed by the coordinating site (University of Pittsburgh). The approaches and methodology are described in detail, along with an interim report on the demographic results. APPRENTICE, an international collaboration of tertiary AP centers throughout the world, has demonstrated the feasibility of building a large, prospective, multicenter patient registry to study AP. Analysis of the collected data may provide a greater understanding of AP and APPRENTICE will serve as a future platform for randomized clinical trials.
Retrospective access to data: the ENGAGE consent experience
Tassé, Anne Marie; Budin-Ljøsne, Isabelle; Knoppers, Bartha Maria; Harris, Jennifer R
2010-01-01
The rapid emergence of large-scale genetic databases raises issues at the nexus of medical law and ethics, as well as the need, at both national and international levels, for an appropriate and effective framework for their governance. This is even more so for retrospective access to data for secondary uses, wherein the original consent did not foresee such use. The first part of this paper provides a brief historical overview of the ethical and legal frameworks governing consent issues in biobanking generally, before turning to the secondary use of retrospective data in epidemiological biobanks. Such use raises particularly complex issues when (1) the original consent provided is restricted; (2) the minor research subject reaches legal age; (3) the research subject dies; or (4) samples and data were obtained during medical care. Our analysis demonstrates the inconclusive, and even contradictory, nature of guidelines and confirms the current lack of compatible regulations. The second part of this paper uses the European Network for Genetic and Genomic Epidemiology (ENGAGE Consortium) as a case study to illustrate the challenges of research using previously collected data sets in Europe. Our study of 52 ENGAGE consent forms and information documents shows that a broad range of mechanisms were developed to enable secondary use of the data that are part of the ENGAGE Consortium. PMID:20332813
The International Outer Planets Watch atmospheres node database of giant-planet images
NASA Astrophysics Data System (ADS)
Hueso, R.; Legarreta, J.; Sánchez-Lavega, A.; Rojas, J. F.; Gómez-Forrellad, J. M.
2011-10-01
The Atmospheres Node of the International Outer Planets Watch (IOPW) aims to encourage observations and study of the atmospheres of the giant planets. One of its main activities is to foster interaction between the professional and amateur astronomical communities by maintaining an online, fully searchable database of images of the giant planets obtained by amateur astronomers and available to both professionals and amateurs [1]. The IOPW database contains about 13,000 image observations of Jupiter and Saturn obtained in the visible range, with a few contributions for Uranus and Neptune. We describe the organization and structure of the database as posted on the Internet and, in particular, the PVOL (Planetary Virtual Observatory & Laboratory) software designed to manage the site and based on concepts from Virtual Observatory projects.
Development of Technology for Image-Guided Proton Therapy
2012-10-01
Scope-of-work milestone fragment: ... develop data analysis software; install and test tablet PCs. Year 2, ending 9/30/2009: design PET scanner; design mechanical gantry ... of the PET instrument; measure positron-emitting isotope production; use dual-energy CT and MRI to determine the composition of materials. Year ...: ... forms on tablet PCs. Phase 5 Scope of Work, Year 1, ending 9/30/2009: identify a vendor consortium to develop a solution for CBCT on or near ...
NASA Technical Reports Server (NTRS)
D'Sa, Eurico; Miller, Richard; DelCastillo, Carlos
2004-01-01
NASA's projects for the Mississippi River Coastal Margin Study include Mississippi River Interdisciplinary Research (MiRIR) and NASA Experimental Program to Stimulate Competitive Research (EPSCoR). These projects, undertaken with the help of Tulane University and the Louisiana Universities Marine Consortium (LUMCON) sampled water in the Gulf of Mexico to measure colored dissolved organic matter (CDOM). This viewgraph presentation contains images of each program's sampling strategy and equipment.
Jahanshad, Neda; Thompson, Paul M
2017-01-02
Sex differences in brain development and aging are important to identify, as they may help to understand risk factors and outcomes in brain disorders that are more prevalent in one sex compared with the other. Brain imaging techniques have advanced rapidly in recent years, yielding detailed structural and functional maps of the living brain. Even so, studies are often limited in sample size, and inconsistent findings emerge, one example being varying findings regarding sex differences in the size of the corpus callosum. More recently, large-scale neuroimaging consortia such as the Enhancing Neuro Imaging Genetics through Meta Analysis Consortium have formed, pooling together expertise, data, and resources from hundreds of institutions around the world to ensure adequate power and reproducibility. These initiatives are helping us to better understand how brain structure is affected by development, disease, and potential modulators of these effects, including sex. This review highlights some established and disputed sex differences in brain structure across the life span, as well as pitfalls related to interpreting sex differences in health and disease. We also describe sex-related findings from the ENIGMA consortium, and ongoing efforts to better understand sex differences in brain circuitry. © 2016 The Authors. Journal of Neuroscience Research Published by Wiley Periodicals, Inc.
Active Galactic Nuclei, Quasars, BL Lac Objects and X-Ray Background
NASA Technical Reports Server (NTRS)
Mushotzky, Richard (Technical Monitor); Elvis, Martin
2005-01-01
The XMM COSMOS survey is producing the anticipated large surface density of X-ray sources. The first batch of approx. 200 sources is being studied in relation to the large-scale structure derived from deep optical/near-IR imaging from Subaru and CFHT. The photometric redshifts from the opt/IR imaging program allow a first look at structure vs. redshift, identifying high-z clusters. A consortium of SAO, U. Arizona and the Carnegie Institute of Washington (Pasadena) has started a large program using the 6.5-meter Magellan telescopes in Chile with the prime objective of identifying the XMM X-ray sources in the COSMOS field. The first series of observing runs using the new IMACS multi-slit spectrograph on Magellan will take place in January and February of 2005. Some 300 spectra per field will be taken, including 70%-80% of the XMM sources in each field. The first four fields cover the center of the COSMOS field. A VLT consortium is set to obtain bulk redshifts of the field galaxies. The added accuracy of the spectroscopic redshifts over the photo-z's will allow much lower density structures to be seen: voids and filaments. The association of X-ray-selected AGNs and quasars with these filaments is a major motivation for our studies. Comparison to the deep VLA radio data now becoming available is about to begin.
HEROD: a human ethnic and regional specific omics database.
Zeng, Xian; Tao, Lin; Zhang, Peng; Qin, Chu; Chen, Shangying; He, Weidong; Tan, Ying; Xia Liu, Hong; Yang, Sheng Yong; Chen, Zhe; Jiang, Yu Yang; Chen, Yu Zong
2017-10-15
Genetic and gene expression variations within and between populations and across geographical regions have substantial effects on biological phenotypes, diseases, and therapeutic response. The development of precision medicines can be facilitated by OMICS studies of patients of specific ethnicity and geographic region. However, there is no adequate facility for broadly and conveniently accessing ethnic- and regional-specific OMICS data. Here, we introduce a new free database, HEROD, a human ethnic and regional specific OMICS database. Its first version contains the gene expression data of 53,070 patients with 169 diseases in seven ethnic populations from 193 cities/regions in 49 nations, curated from the Gene Expression Omnibus (GEO), the ArrayExpress Archive of Functional Genomics Data (ArrayExpress), The Cancer Genome Atlas (TCGA) and the International Cancer Genome Consortium (ICGC). Geographic region information for the curated patients was mainly extracted manually from the referenced publications of each original study. These data can be accessed and downloaded via keyword search, world-map search, and menu-bar search of disease name, International Classification of Diseases code, geographical region, location of sample collection, ethnic population, gender, age, sample source organ, patient type (patient or healthy), sample type (disease or normal tissue) and assay type on the web interface. The HEROD database is freely accessible at http://bidd2.nus.edu.sg/herod/index.php. The database and web interface are implemented in MySQL, PHP and HTML with all major browsers supported. Contact: phacyz@nus.edu.sg. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
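A minimal sketch of the kind of faceted lookup that the search modes described above (disease name, region, ethnic population, sample type, etc.) could translate into on the relational backend. The table and column names are assumptions for illustration, not the HEROD schema, and SQLite stands in for the MySQL backend mentioned in the abstract.

```python
# Hypothetical faceted query against a HEROD-like schema; table and column names are assumptions.
import sqlite3  # stand-in here for the MySQL backend described in the abstract

def search_samples(conn, **facets):
    """Build a parameterized query from whichever optional facets the user supplied."""
    clauses, params = [], []
    for column, value in facets.items():
        if value is not None:
            clauses.append(f"{column} = ?")
            params.append(value)
    sql = "SELECT sample_id, disease_name, geo_region, ethnic_population FROM expression_samples"
    if clauses:
        sql += " WHERE " + " AND ".join(clauses)
    return conn.execute(sql, params).fetchall()

# Tiny in-memory demonstration with made-up rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE expression_samples (sample_id, disease_name, geo_region, ethnic_population)")
conn.execute("INSERT INTO expression_samples VALUES (1, 'breast carcinoma', 'East Asia', 'Han Chinese')")
print(search_samples(conn, disease_name="breast carcinoma", geo_region="East Asia"))
```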
NASA Astrophysics Data System (ADS)
Das, I.; Oberai, K.; Sarathi Roy, P.
2012-07-01
Landslides manifest themselves in different mass-movement processes and are considered among the most complex natural hazards occurring on the earth's surface. Making landslide databases available online via the World Wide Web promotes the spread of landslide information to all stakeholders. The aim of this research is to present a comprehensive database for generating landslide hazard scenarios from the available historic records of landslides and geo-environmental factors, and to make them available over the Web using geospatial Free & Open Source Software (FOSS). FOSS reduces the cost of the project drastically, as proprietary software is very costly. Landslide data generated for the period 1982 to 2009 were compiled along the national highway road corridor in the Indian Himalayas. All the geo-environmental datasets, along with the landslide susceptibility map, were served through a WebGIS client interface. The open-source University of Minnesota (UMN) MapServer was used as the GIS server software for the web-enabled landslide geospatial database. A PHP/MapScript server-side application serves as the front end, and PostgreSQL with the PostGIS extension serves as the backend for the web-enabled landslide spatio-temporal databases. This dynamic virtual visualization process through a web platform brings the understanding of landslides and the resulting damage closer to the affected people and the user community. The landslide susceptibility dataset is also made available as an Open Geospatial Consortium (OGC) Web Feature Service (WFS), which can be accessed through any OGC-compliant open source or proprietary GIS software.
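Since the susceptibility layer is exposed as an OGC Web Feature Service, any OGC-compliant client can pull it with a standard GetFeature request. The sketch below shows such a request; the endpoint URL and the feature type name are placeholders, not the project's actual service.

```python
# Minimal WFS 1.1.0 GetFeature request for a landslide susceptibility layer.
# The endpoint URL and TYPENAME below are placeholders, not the project's actual service.
import requests

WFS_ENDPOINT = "http://example.org/cgi-bin/mapserv"  # hypothetical UMN MapServer endpoint

params = {
    "SERVICE": "WFS",
    "VERSION": "1.1.0",
    "REQUEST": "GetFeature",
    "TYPENAME": "landslide_susceptibility",  # assumed layer name
    "OUTPUTFORMAT": "GML2",
}
response = requests.get(WFS_ENDPOINT, params=params, timeout=30)
response.raise_for_status()
print(response.text[:500])  # beginning of the GML feature collection returned by the server
```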
AphidBase: A centralized bioinformatic resource for annotation of the pea aphid genome
Legeai, Fabrice; Shigenobu, Shuji; Gauthier, Jean-Pierre; Colbourne, John; Rispe, Claude; Collin, Olivier; Richards, Stephen; Wilson, Alex C. C.; Tagu, Denis
2015-01-01
AphidBase is a centralized bioinformatic resource that was developed to facilitate community annotation of the pea aphid genome by the International Aphid Genomics Consortium (IAGC). The AphidBase Information System, designed to organize and distribute genomic data and annotations for a large international community, was constructed using open-source software tools from the Generic Model Organism Database (GMOD). The system includes Apollo and GBrowse utilities as well as a wiki, BLAST search capabilities and a full-text search engine. AphidBase strongly supported community cooperation and coordination in the curation of gene models during community annotation of the pea aphid genome. AphidBase can be accessed at http://www.aphidbase.com. PMID:20482635
Code of Federal Regulations, 2010 CFR
2010-04-01
... requirements to maintain minimum standards for Tribe/Consortium management systems? 1000.396 Section 1000.396... AGREEMENTS UNDER THE TRIBAL SELF-GOVERNMENT ACT AMENDMENTS TO THE INDIAN SELF-DETERMINATION AND EDUCATION ACT... minimum standards for Tribe/Consortium management systems? Yes, the Tribe/Consortium must maintain...
25 CFR 1000.169 - How does a Tribe/Consortium initiate the information phase?
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 2 2014-04-01 2014-04-01 false How does a Tribe/Consortium initiate the information... of Initial Annual Funding Agreements § 1000.169 How does a Tribe/Consortium initiate the information phase? A Tribe/Consortium initiates the information phase by submitting a letter of interest to the...
25 CFR 1000.169 - How does a Tribe/Consortium initiate the information phase?
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 2 2013-04-01 2013-04-01 false How does a Tribe/Consortium initiate the information... of Initial Annual Funding Agreements § 1000.169 How does a Tribe/Consortium initiate the information phase? A Tribe/Consortium initiates the information phase by submitting a letter of interest to the...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-14
... International Consortium of Energy Managers; Notice of Preliminary Permit Application Accepted for Filing and... Consortium of Energy Managers filed an application, pursuant to section 4(f) of the Federal Power Act (FPA...: Rexford Wait, International Consortium of Energy Managers, 2416 Cades Way, Vista, CA 92083; (760) 599-0086...
Patel, Vilas; Patel, Janki; Madamwar, Datta
2013-09-15
A phenanthrene-degrading bacterial consortium (ASP) was developed using sediment from the Alang-Sosiya shipbreaking yard in Gujarat, India. 16S rRNA gene-based molecular analyses revealed that the bacterial consortium consisted of six bacterial strains: Bacillus sp. ASP1, Pseudomonas sp. ASP2, Stenotrophomonas maltophilia strain ASP3, Staphylococcus sp. ASP4, Geobacillus sp. ASP5 and Alcaligenes sp. ASP6. The consortium was able to degrade 300 ppm of phenanthrene and 1000 ppm of naphthalene within 120 h and 48 h, respectively. Tween 80 showed a positive effect on phenanthrene degradation. The consortium consumed phenanthrene at a maximum rate of 46 mg/h/l and degraded phenanthrene in the presence of other petroleum hydrocarbons. A microcosm study was conducted to test the consortium's bioremediation potential. Phenanthrene degradation increased from 61% to 94% in sediment bioaugmented with the consortium. Simultaneously, bacterial counts and dehydrogenase activities also increased in the bioaugmented sediment. These results suggest that microbial consortium bioaugmentation may be a promising technology for bioremediation. Copyright © 2013 Elsevier Ltd. All rights reserved.
Image-based diagnostic aid for interstitial lung disease with secondary data integration
NASA Astrophysics Data System (ADS)
Depeursinge, Adrien; Müller, Henning; Hidki, Asmâa; Poletti, Pierre-Alexandre; Platon, Alexandra; Geissbuhler, Antoine
2007-03-01
Interstitial lung diseases (ILDs) are a relatively heterogeneous group of around 150 illnesses with often nonspecific symptoms. The most complete imaging method for the characterisation of ILDs is high-resolution computed tomography (HRCT) of the chest, but correct interpretation of these images is difficult even for specialists, as many of the diseases are rare and thus little experience exists. Moreover, interpreting HRCT images requires knowledge of the context defined by the clinical data of the studied case. A computerised diagnostic aid tool based on HRCT images with associated medical data, retrieving similar cases of ILDs from a dedicated database, can provide quick and valuable information, for example to emergency radiologists. Experience from a pilot project highlighted the need for a detailed database containing high-quality annotations in addition to clinical data. The state of the art is reviewed to identify requirements for image-based diagnostic aid for interstitial lung disease with secondary data integration. The data acquisition steps are detailed. The selection of the most relevant clinical parameters was done in collaboration with lung specialists, based on the current literature and on the knowledge bases of computer-based diagnostic decision support systems. In order to perform high-quality annotations of the interstitial lung tissue in the HRCT images, annotation software with its own file format was implemented for DICOM images. A multimedia database was implemented to store ILD cases with clinical data and annotated image series. Cases from the University and University Hospitals of Geneva (HUG) are retrospectively and prospectively collected to populate the database. Currently, 59 cases with certified diagnoses and their clinical parameters are stored in the database, as well as 254 image series, of which 26 have their regions of interest annotated. The available data were used to test primary visual features for the classification of lung tissue patterns. These features show good discriminative properties for the separation of five classes of visual observations.
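A hedged sketch of assembling one case record of the kind described: an HRCT DICOM series paired with a certified diagnosis and clinical parameters. The field names and the structure of the record are illustrative assumptions, not the project's own annotation file format.

```python
# Sketch: assemble one ILD case record from an HRCT DICOM series plus clinical parameters.
# The record fields below are illustrative assumptions, not the project's own file format.
from pathlib import Path
import pydicom  # reads DICOM files

def load_case(series_dir, diagnosis, clinical_params):
    """Read an HRCT series and bundle it with its certified diagnosis and clinical data."""
    slices = [pydicom.dcmread(str(p)) for p in sorted(Path(series_dir).glob("*.dcm"))]
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))  # order slices by z position
    return {
        "diagnosis": diagnosis,                # certified diagnosis stored with the case
        "clinical": clinical_params,           # e.g. {"age": 54, "smoking": True}
        "num_slices": len(slices),
        "pixel_spacing": [float(v) for v in slices[0].PixelSpacing],
    }
```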
Digital hand atlas and computer-aided bone age assessment via the Web
NASA Astrophysics Data System (ADS)
Cao, Fei; Huang, H. K.; Pietka, Ewa; Gilsanz, Vicente
1999-07-01
A frequently used method of bone age assessment is atlas matching, in which a radiological examination of a hand image is compared against a reference set of atlas patterns of normal standards. We are in the process of developing a digital hand atlas with a large standard set of normal hand and wrist images that reflects skeletal maturity, race and sex differences, and current child development. The digital hand atlas will be used for computer-aided bone age assessment via the Web. We have designed and partially implemented a computer-aided diagnostic (CAD) system for Web-based bone age assessment. The system consists of a digital hand atlas, a relational image database and a Web-based user interface. The digital atlas is based on a large standard set of normal hand and wrist images with extracted bone objects and quantitative features. The image database uses content-based indexing to organize the hand images and their attributes and present them to users in a structured way. The Web-based user interface allows users to interact with the hand image database from browsers. Users can use a Web browser to push a clinical hand image to the CAD server for bone age assessment. Quantitative features of the examined image, which reflect skeletal maturity, are extracted and compared with patterns from the atlas database to assess the bone age. The relevant reference images and the final assessment report are then sent back to the user's browser via the Web. The digital atlas will remove the disadvantages of the current, out-of-date atlas and allow bone age assessment to be computerized and performed conveniently via the Web. In this paper, we present the system design and the Web-based client-server model for computer-assisted bone age assessment and our initial implementation of the digital atlas database.
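The abstract describes comparing quantitative features extracted from a clinical hand image against atlas patterns. A minimal nearest-neighbour sketch of that comparison follows, under the assumption that each atlas entry is a feature vector tagged with a reference bone age; the features and ages used here are random stand-ins, not the atlas data.

```python
# Minimal sketch: estimate bone age by matching an examined image's feature vector
# against atlas feature vectors (features and ages below are random stand-ins, not the real atlas).
import numpy as np

def estimate_bone_age(query_features, atlas_features, atlas_ages, k=3):
    """Return the mean reference age of the k nearest atlas patterns."""
    distances = np.linalg.norm(atlas_features - query_features, axis=1)
    nearest = np.argsort(distances)[:k]
    return float(np.mean(np.asarray(atlas_ages)[nearest]))

atlas_features = np.random.rand(50, 8)          # e.g. carpal/phalangeal shape features
atlas_ages = np.random.uniform(2, 18, size=50)  # reference ages in years
print(estimate_bone_age(np.random.rand(8), atlas_features, atlas_ages))
```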
Interactive Scene Analysis Module - A sensor-database fusion system for telerobotic environments
NASA Technical Reports Server (NTRS)
Cooper, Eric G.; Vazquez, Sixto L.; Goode, Plesent W.
1992-01-01
Accomplishing a task with telerobotics typically involves a combination of operator control/supervision and a 'script' of preprogrammed commands. These commands usually assume that the locations of various objects in the task space conform to some internal representation (database) of that task space. The ability to quickly and accurately verify the task environment against the internal database would improve the robustness of these preprogrammed commands. In addition, the on-line initialization and maintenance of a task-space database is difficult for operators using Cartesian coordinates alone. This paper describes the Interactive Scene Analysis Module (ISAM), developed to provide task-space database initialization and verification utilizing 3-D graphic overlay modelling, video imaging, and laser-radar-based range imaging. Through the fusion of task-space database information and image sensor data, a verifiable task-space model is generated, providing location and orientation data for objects in a task space. This paper also describes applications of ISAM in the Intelligent Systems Research Laboratory (ISRL) at NASA Langley Research Center, and discusses its performance with respect to representation accuracy and operator interface efficiency.
Development of educational image databases and e-books for medical physics training.
Tabakov, S; Roberts, V C; Jonsson, B-A; Ljungberg, M; Lewis, C A; Wirestam, R; Strand, S-E; Lamm, I-L; Milano, F; Simmons, A; Deane, C; Goss, D; Aitken, V; Noel, A; Giraud, J-Y; Sherriff, S; Smith, P; Clarke, G; Almqvist, M; Jansson, T
2005-09-01
Medical physics education and training requires the use of extensive imaging material and specific explanations. These requirements provide an excellent background for the application of e-Learning. The EU project consortia EMERALD and EMIT developed five volumes of such materials, now used in 65 countries. EMERALD developed e-Learning materials in three areas of medical physics (X-ray diagnostic radiology, nuclear medicine and radiotherapy). EMIT developed e-Learning materials in two further areas: ultrasound and magnetic resonance imaging. This paper describes the development of these e-Learning materials (consisting of e-books and educational image databases). The e-books include tasks that help in studying various equipment and methods. The text of these PDF e-books is hyperlinked with the respective images. The e-books are used through the reader's own Internet browser. Each Image Database (IDB) includes a browser, which displays hundreds of images of equipment, block diagrams and graphs, image quality examples, artefacts, etc. Both the e-books and the IDBs are engraved on five separate CD-ROMs. A demo of these materials is available at www.emerald2.net.
A digital library for medical imaging activities
NASA Astrophysics Data System (ADS)
dos Santos, Marcelo; Furuie, Sérgio S.
2007-03-01
This work presents the development of an electronic infrastructure to make available a free, online, multipurpose and multimodality medical image database. The proposed infrastructure implements a distributed architecture for a medical image database, authoring tools, and a repository for multimedia documents. It also includes a peer-review model that assures the quality of the dataset. This public repository provides a single point of access for medical images and related information to facilitate retrieval tasks. The proposed approach has also been used as an electronic teaching system in radiology.
Image-based query-by-example for big databases of galaxy images
NASA Astrophysics Data System (ADS)
Shamir, Lior; Kuminski, Evan
2017-01-01
Very large astronomical databases containing millions or even billions of galaxy images have become increasingly important tools in astronomy research. However, in many cases their very large size makes it difficult to analyze these data manually, reinforcing the need for computer algorithms that can automate the data analysis process. An example of such a task is the identification of galaxies of a certain morphology of interest. For instance, if a rare galaxy is identified, it is reasonable to expect that more galaxies of similar morphology exist in the database, but it is virtually impossible to search these databases manually to identify such galaxies. Here we describe a computer vision and pattern recognition methodology that receives a galaxy image as input and automatically searches a large dataset of galaxies to return a list of galaxies that are visually similar to the query galaxy. The returned list is not necessarily complete or clean, but it provides a substantial reduction of the original database into a smaller dataset in which the frequency of objects visually similar to the query galaxy is much higher. Experimental results show that the algorithm can identify rare galaxies such as ring galaxies among datasets of 10,000 astronomical objects.
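The pipeline described computes visual features for each galaxy image and ranks database entries by similarity to the query. A hedged sketch of that query-by-example pattern follows; the coarse intensity-histogram descriptor and the random images are stand-ins for the paper's actual features and data.

```python
# Hedged query-by-example sketch: rank database galaxies by feature distance to a query image.
# The descriptor here (a coarse intensity histogram) is a stand-in for the paper's features.
import numpy as np

def image_features(image, bins=32):
    """Very coarse visual descriptor: a normalized intensity histogram."""
    hist, _ = np.histogram(image, bins=bins, range=(0.0, 1.0), density=True)
    return hist

def rank_by_similarity(query_image, database_images, top_n=10):
    """Return indices of the database images visually closest to the query."""
    q = image_features(query_image)
    distances = [np.linalg.norm(image_features(img) - q) for img in database_images]
    return np.argsort(distances)[:top_n]

database = [np.random.rand(64, 64) for _ in range(1000)]  # placeholder galaxy cutouts
print(rank_by_similarity(np.random.rand(64, 64), database))
```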
[Research report of experimental database establishment of digitized virtual Chinese No.1 female].
Zhong, Shi-zhen; Yuan, Lin; Tang, Lei; Huang, Wen-hua; Dai, Jing-xing; Li, Jian-yi; Liu, Chang; Wang, Xing-hai; Li, Hua; Luo, Shu-qian; Qin, Dulie; Zeng, Shao-qun; Wu, Tao; Zhang, Mei-chao; Wu, Kun-cheng; Jiao, Pei-feng; Lu, Yun-tao; Chen, Hao; Li, Pei-liang; Gao, Yuan; Wang, Tong; Fan, Ji-hong
2003-03-01
To establish the digitized virtual Chinese No.1 female (VCH-F1) image database, a 19-year-old female cadaver was scanned by CT and MRI, and perfused with red filling material through the femoral artery before freezing and embedding. The whole body was cut with a JZ1500A vertical milling machine at a 0.2 mm inter-slice spacing. All images were captured with a Fuji FinePix S2 Pro camera. The body index of VCH-F1 was 94%. We cut 8,556 sections of the whole body; each image was 17.5 MB in size, and the whole database reached 149.7 GB. We have six versions of the database for different applications. Compared with other databases, VCH-F1 gives a good representation of the Chinese body shape, with colorful filling material in the blood vessels providing enough information for future registration and segmentation. Vertical embedding and cutting helped to retain a normal human physiological posture, and the image quality and operational efficiency were improved by various techniques such as one-time freezing and fixation, a double-temperature icehouse, a large-diameter milling disc and whole-body cutting.
25 CFR 1000.425 - How does a Tribe/Consortium request an informal conference?
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 2 2014-04-01 2014-04-01 false How does a Tribe/Consortium request an informal... INDIAN SELF-DETERMINATION AND EDUCATION ACT Appeals § 1000.425 How does a Tribe/Consortium request an informal conference? The Tribe/Consortium shall file its request for an informal conference with the office...
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 2 2013-04-01 2013-04-01 false Does a Tribe/Consortium have additional ongoing requirements to maintain minimum standards for Tribe/Consortium management systems? 1000.396 Section 1000.396... Miscellaneous Provisions § 1000.396 Does a Tribe/Consortium have additional ongoing requirements to maintain...
25 CFR 1000.222 - How does a Tribe/Consortium obtain a waiver?
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 2 2013-04-01 2013-04-01 false How does a Tribe/Consortium obtain a waiver? 1000.222...-DETERMINATION AND EDUCATION ACT Waiver of Regulations § 1000.222 How does a Tribe/Consortium obtain a waiver? To obtain a waiver, the Tribe/Consortium must: (a) Submit a written request from the designated Tribal...
25 CFR 1000.333 - How does a Tribe/Consortium retrocede a program?
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 2 2013-04-01 2013-04-01 false How does a Tribe/Consortium retrocede a program? 1000.333...-DETERMINATION AND EDUCATION ACT Retrocession § 1000.333 How does a Tribe/Consortium retrocede a program? The Tribe/Consortium must submit: (a) A written notice to: (1) The Office of Self-Governance for BIA...
25 CFR 1000.425 - How does a Tribe/Consortium request an informal conference?
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 2 2013-04-01 2013-04-01 false How does a Tribe/Consortium request an informal... INDIAN SELF-DETERMINATION AND EDUCATION ACT Appeals § 1000.425 How does a Tribe/Consortium request an informal conference? The Tribe/Consortium shall file its request for an informal conference with the office...
25 CFR 1000.400 - Can a Tribe/Consortium retain savings from programs?
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 2 2013-04-01 2013-04-01 false Can a Tribe/Consortium retain savings from programs? 1000...-DETERMINATION AND EDUCATION ACT Miscellaneous Provisions § 1000.400 Can a Tribe/Consortium retain savings from programs? Yes, for BIA programs, the Tribe/Consortium may retain savings for each fiscal year during which...
25 CFR 1000.315 - When must the Tribe/Consortium return funds to the Department?
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 2 2014-04-01 2014-04-01 false When must the Tribe/Consortium return funds to the... INDIAN SELF-DETERMINATION AND EDUCATION ACT Reassumption § 1000.315 When must the Tribe/Consortium return funds to the Department? The Tribe/Consortium must repay funds to the Department as soon as practical...
25 CFR 1000.333 - How does a Tribe/Consortium retrocede a program?
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 2 2014-04-01 2014-04-01 false How does a Tribe/Consortium retrocede a program? 1000.333...-DETERMINATION AND EDUCATION ACT Retrocession § 1000.333 How does a Tribe/Consortium retrocede a program? The Tribe/Consortium must submit: (a) A written notice to: (1) The Office of Self-Governance for BIA...
25 CFR 1000.315 - When must the Tribe/Consortium return funds to the Department?
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 2 2013-04-01 2013-04-01 false When must the Tribe/Consortium return funds to the... INDIAN SELF-DETERMINATION AND EDUCATION ACT Reassumption § 1000.315 When must the Tribe/Consortium return funds to the Department? The Tribe/Consortium must repay funds to the Department as soon as practical...
Metro-Minnesota Community Clinical Oncology Program (MM-CCOP) | Division of Cancer Prevention
The Metro-Minnesota Community Clinical Oncology Program (MMCCOP) has a long-standing history that demonstrates the success of the consortium, reflected in both the ongoing commitment of the original consortium members and the growth of the consortium from 1979 through 2014. The MMCCOP consortium represents an established community program base which began in
Laptop Computer - Based Facial Recognition System Assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
R. A. Cain; G. B. Singleton
2001-03-01
The objective of this project was to assess the performance of the leading commercial off-the-shelf (COTS) facial recognition software package when used as a laptop application. We performed the assessment to determine the system's usefulness for enrolling facial images in a database from remote locations and conducting real-time searches against a database of previously enrolled images. The assessment involved creating a database of 40 images and conducting two series of tests to determine the product's ability to recognize and match subject faces under varying conditions. This report describes the test results and includes a description of the factors affecting the results. After an extensive market survey and a review of the Facial Recognition Vendor Test 2000 (FRVT 2000), we selected Visionics' FaceIt® software package for evaluation. The FRVT 2000 was co-sponsored by the US Department of Defense (DOD) Counterdrug Technology Development Program Office, the National Institute of Justice, and the Defense Advanced Research Projects Agency (DARPA). Administered in May-June 2000, the FRVT 2000 assessed the capabilities of facial recognition systems that were then available for purchase on the US market. Our selection of this Visionics product does not indicate that it is the "best" facial recognition software package for all uses; it was the most appropriate package for the specific applications and requirements of this project. In this assessment, the system configuration was evaluated for its effectiveness in identifying individuals by searching facial images captured from video displays against those stored in a facial image database. An additional criterion was that the system be capable of operating discretely. For this application, an operational facial recognition system would consist of one central computer hosting the master image database, with multiple standalone systems configured with duplicates of the master operating in remote locations. Remote users could perform real-time searches where network connectivity is not available. As images are enrolled at the remote locations, periodic database synchronization is necessary.
A facial expression image database and norm for Asian population: a preliminary report
NASA Astrophysics Data System (ADS)
Chen, Chien-Chung; Cho, Shu-ling; Horszowska, Katarzyna; Chen, Mei-Yen; Wu, Chia-Ching; Chen, Hsueh-Chih; Yeh, Yi-Yu; Cheng, Chao-Min
2009-01-01
We collected 6604 images of 30 models displaying eight types of facial expression: happiness, anger, sadness, disgust, fear, surprise, contempt and neutral. Among them, the 406 most representative images, from 12 models, were rated by more than 200 human raters for perceived emotion category and intensity. Such a large number of emotion categories, models and raters is sufficient for most serious expression-recognition research, both in psychology and in computer science. All the models and raters are of Asian background, so this database can also be used when cultural background is a concern. In addition, 43 landmarks on each of the 291 rated frontal-view images were identified and recorded. This information should facilitate feature-based research on facial expression. Overall, the diversity of the images and the richness of the information should make our database and norm useful for a wide range of research.
Basic level scene understanding: categories, attributes and structures
Xiao, Jianxiong; Hays, James; Russell, Bryan C.; Patterson, Genevieve; Ehinger, Krista A.; Torralba, Antonio; Oliva, Aude
2013-01-01
A longstanding goal of computer vision is to build a system that can automatically understand a 3D scene from a single image. This requires extracting semantic concepts and 3D information from 2D images which can depict an enormous variety of environments that comprise our visual world. This paper summarizes our recent efforts toward these goals. First, we describe the richly annotated SUN database which is a collection of annotated images spanning 908 different scene categories with object, attribute, and geometric labels for many scenes. This database allows us to systematically study the space of scenes and to establish a benchmark for scene and object recognition. We augment the categorical SUN database with 102 scene attributes for every image and explore attribute recognition. Finally, we present an integrated system to extract the 3D structure of the scene and objects depicted in an image. PMID:24009590
Drug development and nonclinical to clinical translational databases: past and current efforts.
Monticello, Thomas M
2015-01-01
The International Consortium for Innovation and Quality (IQ) in Pharmaceutical Development is a science-focused organization of pharmaceutical and biotechnology companies. The mission of the Preclinical Safety Leadership Group (DruSafe) of the IQ is to advance science-based standards for nonclinical development of pharmaceutical products and to promote high-quality and effective nonclinical safety testing that can enable human risk assessment. DruSafe is creating an industry-wide database to determine the accuracy with which the interpretation of nonclinical safety assessments in animal models correctly predicts human risk in the early clinical development of biopharmaceuticals. This initiative aligns with the 2011 Food and Drug Administration strategic plan to advance regulatory science and modernize toxicology to enhance product safety. Although similar in concept to the initial industry-wide concordance data set conducted by International Life Sciences Institute's Health and Environmental Sciences Institute (HESI/ILSI), the DruSafe database will proactively track concordance, include exposure data and large and small molecules, and will continue to expand with longer duration nonclinical and clinical study comparisons. The output from this work will help identify actual human and animal adverse event data to define both the reliability and the potential limitations of nonclinical data and testing paradigms in predicting human safety in phase 1 clinical trials. © 2014 by The Author(s).
International Lymphoma Epidemiology Consortium
The InterLymph Consortium, or formally the International Consortium of Investigators Working on Non-Hodgkin's Lymphoma Epidemiologic Studies, is an open scientific forum for epidemiologic research in non-Hodgkin's lymphoma.
An online database for plant image analysis software tools.
Lobet, Guillaume; Draye, Xavier; Périlleux, Claire
2013-10-09
Recent years have seen an increase in methods for plant phenotyping using image analysis. These methods require new software solutions for data extraction and treatment. These solutions are instrumental in supporting various research pipelines, ranging from the localisation of cellular compounds to the quantification of tree canopies. However, due to the variety of existing tools and the lack of a central repository, it is challenging for researchers to identify the software best suited to their research. We present an online, manually curated database referencing more than 90 plant image analysis software solutions. The website, plant-image-analysis.org, presents each software package in a uniform and concise manner, enabling users to identify the available solutions for their experimental needs. The website also enables user feedback, evaluations and new software submissions. The plant-image-analysis.org database provides an overview of existing plant image analysis software. The aim of such a toolbox is to help users find solutions, and to provide developers a way to exchange and communicate about their work.
Programmed database system at the Chang Gung Craniofacial Center: part II--digitizing photographs.
Chuang, Shiow-Shuh; Hung, Kai-Fong; de Villa, Glenda H; Chen, Philip K T; Lo, Lun-Jou; Chang, Sophia C N; Yu, Chung-Chih; Chen, Yu-Ray
2003-07-01
Archival tools for digital images were developed mainly for advertising, do not fulfill clinical requirements, and are only beginning to mature. The storage of a large number of conventional photographic slides requires a lot of space and special conditions, and in spite of special precautions, degradation of the slides still occurs; the most common degradation is the appearance of fungus flecks. With recent advances in digital technology, it is now possible to store voluminous numbers of photographs on a computer hard drive and keep them for a long time. A self-programmed interface has been developed to integrate a database and an image browser system that can build and locate needed archive files in a matter of seconds with the click of a button. The hardware and software required by this system are commercially available. There are 25,200 patients recorded in the database, involving 24,331 procedures. The image files cover 6,384 patients with 88,366 digital picture files. From 1999 through 2002, NT$400,000 was saved using the new system. Photographs can be managed with the integrated database and browser software for archiving, which allows labeling of individual photographs with demographic information and browsing. Digitized images are not only more efficient and economical than conventional slide images, but they also facilitate clinical studies.
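A small sketch of the general pattern described, labeling digital photographs with demographic and procedure information in a relational database so that individual images can be located in seconds. The table and column names are made up for illustration and are not the Chang Gung system's schema.

```python
# Illustrative sketch of photograph archiving with demographic labels (schema names are assumptions).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE photos (
    photo_id   INTEGER PRIMARY KEY,
    patient_id TEXT,
    procedure  TEXT,
    taken_on   TEXT,
    file_path  TEXT)""")
conn.execute("INSERT INTO photos (patient_id, procedure, taken_on, file_path) VALUES (?, ?, ?, ?)",
             ("P001", "cleft lip repair", "2002-05-14", "/archive/P001/pre_op_01.jpg"))

# Locate all photographs for one patient and procedure with a single indexed lookup.
rows = conn.execute("SELECT file_path FROM photos WHERE patient_id = ? AND procedure = ?",
                    ("P001", "cleft lip repair")).fetchall()
print(rows)
```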
Loudos, George K; Papadimitroulas, Panagiotis G; Kagadis, George C
2014-01-01
Monte Carlo (MC) simulations play a crucial role in nuclear medicine imaging, since they can provide the ground truth for clinical acquisitions by integrating and quantifying all physical parameters that affect image quality. Over the last decade, a number of realistic computational anthropomorphic models have been developed to serve imaging as well as other biomedical engineering applications. The combination of MC techniques with realistic computational phantoms can provide a powerful tool for pre- and post-processing in imaging, data analysis and dosimetry. This work aims to create a global database of simulated Single Photon Emission Computed Tomography (SPECT) and Positron Emission Tomography (PET) exams; the methodology and the first elements are presented. Simulations are performed using the well-validated GATE open-source toolkit, standard anthropomorphic phantoms and activity distributions of various radiopharmaceuticals derived from the literature. The resulting images, projections and sinograms of each study are provided in the database and can be further exploited to evaluate processing and reconstruction algorithms. Patient studies with different characteristics are included in the database, and different computational phantoms were tested for the same acquisitions. These include the XCAT, Zubal and Virtual Family phantoms, some of which are used for the first time in nuclear imaging. The created database will be freely available, and our current work is toward its extension by simulating additional clinical pathologies.
Database Development for Ocean Impacts: Imaging, Outreach, and Rapid Response
2012-09-30
DISTRIBUTION STATEMENT A: Approved for public release; distribution is unlimited. [Report documentation page for "Database Development for Ocean Impacts: Imaging, Outreach, and Rapid Response"; the remaining legible fragment refers to the Applied Ocean Physics & Engineering department, WHOI, evaluating wear in mooring optical cables used in Right Whale monitoring.]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marsh, Amber; Harsch, Tim; Pitt, Julie
2007-08-31
The computer side of the IMAGE project consists of a collection of Perl scripts that perform a variety of tasks: scripts are available to insert, update and delete data in the underlying Oracle database, download data from NCBI's GenBank and other sources, and generate data files for download by interested parties. Web scripts make up the tracking interface and the various tools available on the project web site (image.llnl.gov) that provide a search interface to the database.
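The abstract mentions scripts that download data from NCBI's GenBank. The project's Perl scripts are not shown here; the sketch below simply illustrates one way such a download can be done against NCBI's public E-utilities interface, with an arbitrary placeholder accession.

```python
# Hedged sketch: fetch a GenBank record via NCBI E-utilities, analogous in purpose to the
# project's Perl download scripts (the accession below is an arbitrary placeholder).
import requests

EFETCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi"

def fetch_genbank(accession):
    """Download one GenBank flat-file record as text."""
    params = {"db": "nucleotide", "id": accession, "rettype": "gb", "retmode": "text"}
    response = requests.get(EFETCH, params=params, timeout=30)
    response.raise_for_status()
    return response.text

record = fetch_genbank("NM_000546")   # placeholder accession
print(record.splitlines()[0])         # LOCUS line of the GenBank flat file
```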
A neotropical Miocene pollen database employing image-based search and semantic modeling.
Han, Jing Ginger; Cao, Hongfei; Barb, Adrian; Punyasena, Surangi W; Jaramillo, Carlos; Shyu, Chi-Ren
2014-08-01
Digital microscopic pollen images are being generated with increasing speed and volume, producing opportunities to develop new computational methods that increase the consistency and efficiency of pollen analysis and provide the palynological community with a computational framework for information sharing and knowledge transfer. • Mathematical methods were used to assign trait semantics (abstract morphological representations) to the images of neotropical Miocene pollen and spores. Advanced database-indexing structures were built to compare and retrieve similar images based on their visual content. A Web-based system was developed to provide novel tools for automatic trait-semantic annotation and for image retrieval by trait semantics and visual content. • Mathematical models that map visual features to trait semantics can be used to annotate images with morphological semantics and to search image databases with improved reliability and productivity. Images can also be searched by visual content, providing users with customized emphases on traits such as color, shape, and texture. • Content- and semantic-based image searches provide a powerful computational platform for pollen and spore identification. The infrastructure outlined provides a framework for building a community-wide palynological resource, streamlining the process of manual identification, analysis, and species discovery.
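A hedged sketch of the two complementary query modes described above: filtering records by annotated trait semantics, then ranking the remaining candidates by a visual-content distance with user-chosen emphasis on color, shape and texture. The feature blocks, trait labels and weights are illustrative assumptions, not the system's actual descriptors.

```python
# Illustrative two-stage search: filter by trait semantics, then rank by weighted visual distance.
# Feature names, trait labels and weights are assumptions, not the system's actual descriptors.
import numpy as np

def weighted_distance(a, b, weights):
    """Distance with user-chosen emphasis on colour, shape and texture feature blocks."""
    return sum(w * np.linalg.norm(a[k] - b[k]) for k, w in weights.items())

def search(query_feats, records, required_traits, weights, top_n=5):
    """records: dicts with 'traits' (set of semantic labels) and 'features' (dict of arrays)."""
    candidates = [r for r in records if required_traits <= r["traits"]]
    candidates.sort(key=lambda r: weighted_distance(query_feats, r["features"], weights))
    return candidates[:top_n]

records = [{"traits": {"echinate", "tricolpate"},
            "features": {"color": np.random.rand(8), "shape": np.random.rand(4),
                         "texture": np.random.rand(6)}} for _ in range(100)]
query = {"color": np.random.rand(8), "shape": np.random.rand(4), "texture": np.random.rand(6)}
print(len(search(query, records, {"echinate"}, {"color": 0.5, "shape": 1.0, "texture": 1.0})))
```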