Peng, Jinye; Babaguchi, Noboru; Luo, Hangzai; Gao, Yuli; Fan, Jianping
2010-07-01
Digital video now plays an important role in supporting more profitable online patient training and counseling, and integrating patient training videos from multiple competitive organizations in the health care network will result in better offerings for patients. However, privacy concerns often prevent multiple competitive organizations from sharing and integrating their patient training videos. In addition, patients with infectious or chronic diseases may not want the online patient training organizations to identify who they are or even which video clips they are interested in. Thus, there is an urgent need to develop more effective techniques to protect both video content privacy and access privacy. In this paper, we have developed a new approach to constructing a distributed Hippocratic video database system for supporting more profitable online patient training and counseling. First, a new database modeling approach is developed to support concept-oriented video database organization and automatically assign a degree of privacy to the video content at each database level. Second, a new algorithm is developed to protect video content privacy at the level of individual video clips by automatically filtering out privacy-sensitive human objects. To integrate the patient training videos from multiple competitive organizations into a centralized video database indexing structure, a privacy-preserving video sharing scheme is developed to support privacy-preserving distributed classifier training and to prevent statistical inference from the videos that are shared for cross-validation of video classifiers. Our experiments on large-scale video databases have provided convincing results.
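The abstract above does not give the filtering algorithm itself, so the following is only a minimal sketch of the general idea behind content-privacy filtering: pixelating a privacy-sensitive region (for example, a detected face) so the person cannot be identified. The function name, the list-of-lists frame representation, and the assumption that a detector has already supplied bounding boxes are illustrative choices, not the paper's method.

```python
# Illustrative sketch only, not the paper's algorithm: mask a
# privacy-sensitive region of a grayscale frame by pixelation,
# assuming a detector has already produced the bounding box.

def pixelate_region(frame, box, block=4):
    """Replace each block x block tile inside `box` with its mean value.

    frame: 2-D list of grayscale pixel values (rows of ints)
    box:   (top, left, bottom, right), bottom/right exclusive
    """
    top, left, bottom, right = box
    for r0 in range(top, bottom, block):
        for c0 in range(left, right, block):
            r1 = min(r0 + block, bottom)
            c1 = min(c0 + block, right)
            tile = [frame[r][c] for r in range(r0, r1) for c in range(c0, c1)]
            mean = sum(tile) // len(tile)
            for r in range(r0, r1):
                for c in range(c0, c1):
                    frame[r][c] = mean
    return frame

# Example: mask a 4x4 region of an 8x8 frame
frame = [[r * 8 + c for c in range(8)] for r in range(8)]
pixelate_region(frame, (2, 2, 6, 6), block=4)
```

In a real system the same masking would be applied per frame after human-object detection; here the point is only that the masked region keeps coarse structure while individual detail is destroyed.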
ERIC Educational Resources Information Center
Bae, Kyoung-Il; Kim, Jung-Hyun; Huh, Soon-Young
2003-01-01
Discusses process information sharing among participating organizations in a virtual enterprise and proposes a federated process framework and system architecture that provide a conceptual design for effective implementation of process information sharing supporting the autonomy and agility of the organizations. Develops the framework using an…
Defending against Attribute-Correlation Attacks in Privacy-Aware Information Brokering
NASA Astrophysics Data System (ADS)
Li, Fengjun; Luo, Bo; Liu, Peng; Squicciarini, Anna C.; Lee, Dongwon; Chu, Chao-Hsien
Nowadays, increasing needs for information sharing arise due to extensive collaborations among organizations. Organizations desire to provide data access to their collaborators while preserving full control over the data and comprehensive privacy of their users. A number of information systems have been developed to provide efficient and secure information sharing. However, most of the solutions proposed so far are built atop conventional data warehousing or distributed database technologies.
NASA Astrophysics Data System (ADS)
Hueni, A.; Schweiger, A. K.
2015-12-01
Field spectrometry has substantially gained importance in vegetation ecology due to the increasing knowledge about causal ties between vegetation spectra and biochemical and structural plant traits. Additionally, worldwide databases enable the exchange of spectral and plant trait data and promote global research cooperation. This can be expected to further enhance the use of field spectrometers in ecological studies. However, the large amount of data collected during spectral field campaigns poses major challenges regarding data management, archiving and processing. The spectral database Specchio is designed to organize, manage, process and share spectral data and metadata. We provide an example for using Specchio based on leaf level spectra of prairie plant species collected during the 2015 field campaign of the Dimensions of Biodiversity research project, conducted at the Cedar Creek Long-Term Ecological Research site, in central Minnesota. We show how spectral data collections can be efficiently administered, organized and shared between distinct research groups and explore the capabilities of Specchio for data quality checks and initial processing steps.
A Support Database System for Integrated System Health Management (ISHM)
NASA Technical Reports Server (NTRS)
Schmalzel, John; Figueroa, Jorge F.; Turowski, Mark; Morris, John
2007-01-01
The development, deployment, operation and maintenance of Integrated Systems Health Management (ISHM) applications require the storage and processing of tremendous amounts of low-level data. This data must be shared in a secure and cost-effective manner between developers, and processed within several heterogeneous architectures. Modern database technology allows this data to be organized efficiently, while ensuring the integrity and security of the data. The extensibility and interoperability of the current database technologies also allows for the creation of an associated support database system. A support database system provides additional capabilities by building applications on top of the database structure. These applications can then be used to support the various technologies in an ISHM architecture. This presentation and paper propose a detailed structure and application description for a support database system, called the Health Assessment Database System (HADS). The HADS provides a shared context for organizing and distributing data as well as a definition of the applications that provide the required data-driven support to ISHM. This approach provides another powerful tool for ISHM developers, while also enabling novel functionality. This functionality includes: automated firmware updating and deployment, algorithm development assistance and electronic datasheet generation. The architecture for the HADS has been developed as part of the ISHM toolset at Stennis Space Center for rocket engine testing. A detailed implementation has begun for the Methane Thruster Testbed Project (MTTP) in order to assist in developing health assessment and anomaly detection algorithms for ISHM. The structure of this implementation is shown in Figure 1. The database structure consists of three primary components: the system hierarchy model, the historical data archive and the firmware codebase. 
The system hierarchy model replicates the physical relationships between system elements to provide the logical context for the database. The historical data archive provides a common repository for sensor data that can be shared between developers and applications. The firmware codebase is used by the developer to organize the intelligent element firmware into atomic units which can be assembled into complete firmware for specific elements.
The Primate Life History Database: A unique shared ecological data resource
Strier, Karen B.; Altmann, Jeanne; Brockman, Diane K.; Bronikowski, Anne M.; Cords, Marina; Fedigan, Linda M.; Lapp, Hilmar; Liu, Xianhua; Morris, William F.; Pusey, Anne E.; Stoinski, Tara S.; Alberts, Susan C.
2011-01-01
Summary The importance of data archiving, data sharing, and public access to data has received considerable attention. Awareness is growing among scientists that collaborative databases can facilitate these activities. We provide a detailed description of the collaborative life history database developed by our Working Group at the National Evolutionary Synthesis Center (NESCent) to address questions about life history patterns and the evolution of mortality and demographic variability in wild primates. Examples from each of the seven primate species included in our database illustrate the range of data incorporated and the challenges, decision-making processes, and criteria applied to standardize data across diverse field studies. In addition to the descriptive and structural metadata associated with our database, we also describe the process metadata (how the database was designed and delivered) and the technical specifications of the database. Our database provides a useful model for other researchers interested in developing similar types of databases for other organisms, while our process metadata may be helpful to other groups of researchers interested in developing databases for other types of collaborative analyses. PMID:21698066
Information And Data-Sharing Plan of IPY China Activity
NASA Astrophysics Data System (ADS)
Zhang, X.; Cheng, W.
2007-12-01
Polar data sharing is an effective way to address global-system and polar-science problems and to support interdisciplinary and sustainable study, as well as an important means of handling IPY scientific heritage and realizing IPY goals. In line with IPY data-sharing policies, the Information and Data-Sharing Plan was listed among the five sub-plans of the IPY Chinese Programme launched in March 2007: the scientific research program of the Prydz Bay, Amery Ice Shelf and Dome A transects (short title: 'PANDA'), the Arctic Scientific Research Expedition Plan, the International Cooperation Plan, the Information and Data-Sharing Plan, and Education and Outreach. Since the founding of the Antarctic Zhongshan Station in 1989, China has carried out systematic scientific expeditions and research in the Larsemann Hills, Prydz Bay and the neighbouring sea areas, organizing 14 Prydz Bay oceanographic investigations, 3 Amery Ice Shelf expeditions, 4 Grove Mountains expeditions and 5 inland ice cap scientific expeditions. Two comprehensive oceanographic investigations in the Arctic Ocean were conducted in 1999 and 2003, acquiring a large amount of data and samples along the PANDA section and in the fan areas of the Pacific Ocean sector of the Arctic Ocean. A mechanism for submitting, sharing and archiving basic data has been gradually set up since 2000. Presently, the Polar Science Database and the Polar Sample Resource Sharing Platform of China, aimed at sharing polar data and samples, have been initially established and have begun to provide sharing services to domestic and overseas users. According to the IPY Chinese Activity, 2 scientific expeditions in the Arctic Ocean, 3 in the Southern Ocean, 2 at the Amery Ice Shelf, 1 in the Grove Mountains and 2 inland ice cap expeditions to Dome A will be carried out during the IPY period.
Drawing on past experience and the work ahead, the Information and Data-Sharing Plan will, during 2007-2010, save, archive, and provide exchange and sharing services for the data obtained by scientific expeditions under the IPY Chinese Programme. Meanwhile, focusing on the east Antarctic Dome A-Grove Mountains-Zhongshan Station-Amery Ice Shelf-Prydz Bay section and the fan areas of the Pacific Ocean sector of the Arctic Ocean, the Plan will also collect and integrate IPY data and historical data and establish databases for the PANDA section and the Arctic Ocean. The details are as follows. On the basis of integrating the observational data acquired during China's expeditions, the Plan will, adopting portal technology, develop 5 subject databases (English versions included): (1) Database of the Zhongshan Station-Dome A inland ice cap section; (2) Database of ocean-ice-atmosphere-ice shelf interaction in east Antarctica; (3) Database of geological and glaciological advance and retreat evolution in the Grove Mountains; (4) Database of solar-terrestrial physics at Zhongshan Station; (5) Oceanographic database of the fan area of the Pacific Ocean sector of the Arctic Ocean. CN-NADC of PRIC is the institute responsible for the Plan; specifically, it coordinates and organizes the operation of the Plan, including data management, development of the data and information sharing portal, and international exchanges. The specific assignments under the Plan will be carried out by research institutes under CAS (Chinese Academy of Sciences), SOA (State Oceanic Administration), the State Bureau of Surveying and Mapping and the Ministry of Education.
Longstaff, Holly; Khramova, Vera; Portales-Casamar, Elodie; Illes, Judy
2015-01-01
Research on complex health conditions such as neurodevelopmental disorders increasingly relies on large-scale research and clinical studies that would benefit from data sharing initiatives. Organizations that share data stand to maximize the efficiency of invested research dollars, expedite research findings, minimize the burden on the patient community, and increase citation rates of publications associated with the data. This study examined ethics and governance information on websites of databases involving neurodevelopmental disorders to determine the availability of information on key factors crucial for comprehension of, and trust and participation in, such initiatives. We identified relevant databases using online keyword searches. Two researchers reviewed each of the websites and identified thematic content using principles from grounded theory. The content for each organization was interrogated using the gap analysis method. Sixteen websites from data sharing organizations met our inclusion criteria. Information about types of data and tissues stored, data access requirements and procedures, and protections for confidentiality were significantly addressed by data sharing organizations. However, special considerations for minors (absent from 63%), controls to check if data and tissues are being submitted (absent from 81%), disaster recovery plans (absent from 81%), and discussions of incidental findings (absent from 88%) emerged as major gaps in thematic website content. When present, content pertaining to special considerations for youth, along with other ethics guidelines and requirements, was scattered throughout the websites or available only from associated documents accessed through live links. The complexities of sharing data acquired from children and adolescents will only increase with advances in genomics and neuroscience.
Our findings suggest that there is a need to improve the consistency, depth and accessibility of governance and policies on which these collaborations can lean specifically for vulnerable young populations.
Expanding on Successful Concepts, Models, and Organization
If the goal of the AEP framework was to replace existing exposure models or databases for organizing exposure data with a concept, we would share Dr. von Göetz's concerns. Instead, the outcome we promote is broader use of an organizational framework for exposure science. The f...
NASA Astrophysics Data System (ADS)
Thakore, Arun K.; Sauer, Frank
1994-05-01
The organization of modern medical care environments into disease-related clusters, such as a cancer center, a diabetes clinic, etc., has the side effect of introducing multiple heterogeneous databases, often containing similar information, within the same organization. This heterogeneity fosters incompatibility and prevents the effective sharing of data amongst applications at different sites. Although integration of heterogeneous databases is now feasible, in the medical arena this is often an ad hoc process, not founded on proven database technology or formal methods. In this paper we illustrate the use of a high-level object-oriented semantic association method to model information found in different databases into an integrated conceptual global model. We provide examples from the medical domain to illustrate an integration approach resulting in a consistent global view, without compromising the autonomy of the underlying databases.
Marchant, Carol A; Briggs, Katharine A; Long, Anthony
2008-01-01
ABSTRACT Lhasa Limited is a not-for-profit organization that exists to promote the sharing of data and knowledge in chemistry and the life sciences. It has developed the software tools Derek for Windows, Meteor, and Vitic to facilitate such sharing. Derek for Windows and Meteor are knowledge-based expert systems that predict the toxicity and metabolism of a chemical, respectively. Vitic is a chemically intelligent toxicity database. An overview of each software system is provided along with examples of the sharing of data and knowledge in the context of their development. These examples include illustrations of (1) the use of data entry and editing tools for the sharing of data and knowledge within organizations; (2) the use of proprietary data to develop nonconfidential knowledge that can be shared between organizations; (3) the use of shared expert knowledge to refine predictions; (4) the sharing of proprietary data between organizations through the formation of data-sharing groups; and (5) the use of proprietary data to validate predictions. Sharing of chemical toxicity and metabolism data and knowledge in this way offers a number of benefits including the possibilities of faster scientific progress and reductions in the use of animals in testing. Maximizing the accessibility of data also becomes increasingly crucial as in silico systems move toward the prediction of more complex phenomena for which limited data are available.
MAPS: The Organization of a Spatial Database System Using Imagery, Terrain, and Map Data
1983-06-01
segments which share the same pixel position. Finally, in any large system, a logical partitioning of the database must be performed in order to avoid...
MPD: a pathogen genome and metagenome database
Zhang, Tingting; Miao, Jiaojiao; Han, Na; Qiang, Yujun; Zhang, Wen
2018-01-01
Abstract Advances in high-throughput sequencing have led to unprecedented growth in the amount of available genome sequencing data, especially for bacterial genomes, which has been accompanied by a challenge for the storage and management of such huge datasets. To facilitate bacterial research and related studies, we have developed the Mypathogen database (MPD), which provides access to users for searching, downloading, storing and sharing bacterial genomics data. The MPD represents the first pathogenic database for microbial genomes and metagenomes, and currently covers pathogenic microbial genomes (6604 genera, 11 071 species, 41 906 strains) and metagenomic data from host, air, water and other sources (28 816 samples). The MPD also functions as a management system for statistical and storage data that can be used by different organizations, thereby facilitating data sharing among different organizations and research groups. A user-friendly local client tool is provided to maintain the steady transmission of big sequencing data. The MPD is a useful tool for analysis and management in genomic research, especially for clinical Centers for Disease Control and epidemiological studies, and is expected to contribute to advancing knowledge on pathogenic bacteria genomes and metagenomes. Database URL: http://data.mypathogen.org PMID:29917040
Design and deployment of a large brain-image database for clinical and nonclinical research
NASA Astrophysics Data System (ADS)
Yang, Guo Liang; Lim, Choie Cheio Tchoyoson; Banukumar, Narayanaswami; Aziz, Aamer; Hui, Francis; Nowinski, Wieslaw L.
2004-04-01
An efficient database is an essential component of organizing diverse information on image metadata and patient information for research in medical imaging. This paper describes the design, development and deployment of a large database system serving as a brain image repository that can be used across different platforms in various medical researches. It forms the infrastructure that links hospitals and institutions together and shares data among them. The database contains patient-, pathology-, image-, research- and management-specific data. The functionalities of the database system include image uploading, storage, indexing, downloading and sharing as well as database querying and management with security and data anonymization concerns well taken care of. The structure of database is multi-tier client-server architecture with Relational Database Management System, Security Layer, Application Layer and User Interface. Image source adapter has been developed to handle most of the popular image formats. The database has a user interface based on web browsers and is easy to handle. We have used Java programming language for its platform independency and vast function libraries. The brain image database can sort data according to clinically relevant information. This can be effectively used in research from the clinicians' points of view. The database is suitable for validation of algorithms on large population of cases. Medical images for processing could be identified and organized based on information in image metadata. Clinical research in various pathologies can thus be performed with greater efficiency and large image repositories can be managed more effectively. The prototype of the system has been installed in a few hospitals and is working to the satisfaction of the clinicians.
Brody, Thomas; Yavatkar, Amarendra S; Kuzin, Alexander; Kundu, Mukta; Tyson, Leonard J; Ross, Jermaine; Lin, Tzu-Yang; Lee, Chi-Hon; Awasaki, Takeshi; Lee, Tzumin; Odenwald, Ward F
2012-01-01
Background: Phylogenetic footprinting has revealed that cis-regulatory enhancers consist of conserved DNA sequence clusters (CSCs). Currently, there is no systematic approach for enhancer discovery and analysis that takes full advantage of the sequence information within enhancer CSCs. Results: We have generated a Drosophila genome-wide database of conserved DNA consisting of >100,000 CSCs derived from EvoPrints spanning over 90% of the genome. cis-Decoder database search and alignment algorithms enable the discovery of functionally related enhancers. The program first identifies conserved repeat elements within an input enhancer and then searches the database for CSCs that score highly against the input CSC. Scoring is based on shared repeats as well as uniquely shared matches, and includes measures of the balance of shared elements, a diagnostic that has proven useful in predicting cis-regulatory function. To demonstrate the utility of these tools, a temporally restricted CNS neuroblast enhancer was used to identify other functionally related enhancers and analyze their structural organization. Conclusions: cis-Decoder reveals that co-regulating enhancers consist of combinations of overlapping shared sequence elements, providing insights into the mode of integration of multiple regulating transcription factors. The database and accompanying algorithms should prove useful in the discovery and analysis of enhancers involved in any developmental process. Developmental Dynamics 241:169-189, 2012. © 2011 Wiley Periodicals, Inc. Key findings: a genome-wide catalog of Drosophila conserved DNA sequence clusters; cis-Decoder discovers functionally related enhancers; functionally related enhancers share balanced sequence element copy numbers; many enhancers function during multiple phases of development. PMID:22174086
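The scoring idea described above (shared sequence elements plus a measure of how balanced their copy numbers are) can be illustrated with a toy sketch. The k-mer length and the balance formula below are invented for the example; they are not cis-Decoder's actual parameters or algorithm.

```python
# Hypothetical sketch in the spirit of shared-element scoring: count the
# k-mers two sequences share, and measure how balanced the copy numbers
# of each shared element are across the two sequences.
from collections import Counter

def kmer_counts(seq, k=6):
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def shared_element_score(seq_a, seq_b, k=6):
    a, b = kmer_counts(seq_a, k), kmer_counts(seq_b, k)
    shared = set(a) & set(b)
    if not shared:
        return 0, 0.0
    # raw score: total copies of shared elements across both sequences
    score = sum(a[m] + b[m] for m in shared)
    # balance: 1.0 when every shared element has equal copy numbers
    balance = sum(min(a[m], b[m]) / max(a[m], b[m]) for m in shared) / len(shared)
    return score, balance

s, bal = shared_element_score("TAATCCGATTAATCCG", "GGTAATCCGTT")
# Two 6-mers (TAATCC, AATCCG) are shared; each occurs twice in the first
# sequence and once in the second, so the balance measure is 0.5.
```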
A crystallographic perspective on sharing data and knowledge
NASA Astrophysics Data System (ADS)
Bruno, Ian J.; Groom, Colin R.
2014-10-01
The crystallographic community is in many ways an exemplar of the benefits and practices of sharing data. Since the inception of the technique, virtually every published crystal structure has been made available to others. This has been achieved through the establishment of several specialist data centres, including the Cambridge Crystallographic Data Centre, which produces the Cambridge Structural Database. Containing curated structures of small organic molecules, some containing a metal, the database has been produced for almost 50 years. This has required the development of complex informatics tools and an environment allowing expert human curation. As importantly, a financial model has evolved which has, to date, ensured the sustainability of the resource. However, the opportunities afforded by technological changes and changing attitudes to sharing data make it an opportune moment to review current practices.
Strabo: An App and Database for Structural Geology and Tectonics Data
NASA Astrophysics Data System (ADS)
Newman, J.; Williams, R. T.; Tikoff, B.; Walker, J. D.; Good, J.; Michels, Z. D.; Ash, J.
2016-12-01
Strabo is a data system designed to facilitate digital storage and sharing of structural geology and tectonics data. The data system allows researchers to store and share field and laboratory data as well as construct new multi-disciplinary data sets. Strabo is built on graph database technology, as opposed to a relational database, which provides the flexibility to define relationships between objects of any type. This framework allows observations to be linked in a complex and hierarchical manner that is not possible in traditional database topologies. Thus, the advantage of the Strabo data structure is the ability of graph databases to link objects in numerous and complex ways, in a manner that more accurately reflects the realities of collecting and organizing geological data sets. The data system is accessible via a mobile interface (iOS and Android devices) that allows these data to be stored, visualized, and shared during primary collection in the field or the laboratory. The Strabo Data System is underlain by the concept of a "Spot," which we define as any observation that characterizes a specific area. This can be anything from a strike and dip measurement of bedding to cross-cutting relationships between faults in complex dissected terrains. Each of these Spots can then contain other Spots and/or measurements (e.g., lithology, slickenlines, displacement magnitude). Hence, the Spot concept is applicable to all relationships and observation sets. Strabo is therefore capable of quantifying and digitally storing large spatial variations and complex geometries of naturally deformed rocks within hierarchically related maps and images. These approaches provide an observational fidelity comparable to a traditional field book, but with the added benefits of digital data storage, processing, and ease of sharing. This approach allows Strabo to integrate seamlessly into the workflow of most geologists.
Future efforts will focus on extending Strabo to other sub-disciplines as well as developing a desktop system for the enhanced collection and organization of microstructural data.
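As a rough illustration of the Spot concept described above, the sketch below models each observation as a node that can hold measurements and contain other Spots, so observations link into a hierarchy rather than fixed relational tables. Class and field names are invented for the example; this is not Strabo's actual schema or API.

```python
# Illustrative model of the "Spot" idea: any observation is a node that
# can carry measurements and nest other Spots, forming a graph/hierarchy.

class Spot:
    def __init__(self, name, **measurements):
        self.name = name
        self.measurements = measurements   # e.g. strike/dip, lithology
        self.children = []                 # nested Spots (sub-observations)

    def add(self, child):
        self.children.append(child)
        return child

    def walk(self):
        """Yield this Spot and all nested Spots, depth-first."""
        yield self
        for c in self.children:
            yield from c.walk()

# An outcrop Spot containing a bedding measurement and a fault observation
outcrop = Spot("outcrop-1", lithology="sandstone")
bedding = outcrop.add(Spot("bedding-1", strike=245, dip=30))
fault = outcrop.add(Spot("fault-1", slickenline_rake=40))
names = [s.name for s in outcrop.walk()]
```

Because any Spot can contain any other, cross-cutting or hierarchical field relationships can be recorded without deciding a fixed table layout in advance, which is the flexibility the abstract attributes to the graph approach.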
Developing Privacy Solutions for Sharing and Analyzing Healthcare Data
Motiwalla, Luvai; Li, Xiao-Bai
2013-01-01
The extensive use of electronic health data has increased privacy concerns. While most healthcare organizations are conscientious in protecting their data in their databases, very few organizations take enough precautions to protect data that is shared with third party organizations. Recently the regulatory environment has tightened the laws to enforce privacy protection. The goal of this research is to explore the application of data masking solutions for protecting patient privacy when data is shared with external organizations for research, analysis and other similar purposes. Specifically, this research project develops a system that protects data without removing sensitive attributes. Our application allows high quality data analysis with the masked data. Dataset-level properties and statistics remain approximately the same after data masking; however, individual record-level values are altered to prevent privacy disclosure. A pilot evaluation study on large real-world healthcare data shows the effectiveness of our solution in privacy protection. PMID:24285983
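The abstract does not specify the masking algorithm, so the following is only one simple illustration of the stated property: individual record-level values change while a dataset-level statistic (here, the mean) is preserved. The zero-recentered-noise approach and all names are illustrative assumptions, not the project's method.

```python
# Illustrative sketch: perturb each record's value with random noise,
# then recenter the noise so it sums to zero; individual values change
# but the dataset mean is preserved (up to floating-point rounding).
import random

def mask_preserving_mean(values, spread=5.0, seed=42):
    rng = random.Random(seed)
    noise = [rng.uniform(-spread, spread) for _ in values]
    shift = sum(noise) / len(noise)          # recenter noise to sum to zero
    return [v + n - shift for v, n in zip(values, noise)]

ages = [34, 51, 29, 62, 45]
masked = mask_preserving_mean(ages)
```

Real data-masking systems must also preserve other dataset-level properties (variances, correlations) and defend against re-identification, which is what makes the problem studied above nontrivial.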
Multisource feedback, human capital, and the financial performance of organizations.
Kim, Kyoung Yong; Atwater, Leanne; Patel, Pankaj C; Smither, James W
2016-11-01
We investigated the relationship between organizations' use of multisource feedback (MSF) programs and their financial performance. We proposed a moderated mediation framework in which the employees' ability and knowledge sharing mediate the relationship between MSF and organizational performance and the purpose for which MSF is used moderates the relationship of MSF with employees' ability and knowledge sharing. With a sample of 253 organizations representing 8,879 employees from 2005 to 2007 in South Korea, we found that MSF had a positive effect on organizational financial performance via employees' ability and knowledge sharing. We also found that when MSF was used for dual purpose (both administrative and developmental purposes), the relationship between MSF and knowledge sharing was stronger, and this interaction carried through to organizational financial performance. However, the purpose of MSF did not moderate the relationship between MSF and employees' ability. The theoretical relevance and practical implications of the findings are discussed. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Database constraints applied to metabolic pathway reconstruction tools.
Vilaplana, Jordi; Solsona, Francesc; Teixido, Ivan; Usié, Anabel; Karathia, Hiren; Alves, Rui; Mateo, Jordi
2014-01-01
Our group developed two biological applications, Biblio-MetReS and Homol-MetReS, accessing the same database of organisms with annotated genes. Biblio-MetReS is a data-mining application that facilitates the reconstruction of molecular networks based on automated text-mining analysis of published scientific literature. Homol-MetReS allows functional (re)annotation of proteomes, to properly identify both the individual proteins involved in the process(es) of interest and their function. It also enables the sets of proteins involved in the process(es) in different organisms to be compared directly. The efficiency of these biological applications is directly related to the design of the shared database. We classified and analyzed the different kinds of access to the database. Based on this study, we adjusted and tuned the configurable parameters of the database server to maximize the performance of the communication link to/from the database system. Different database technologies were analyzed. We started the study with a public relational SQL database, MySQL. Then, the same database was implemented in a MapReduce-based database named HBase. The results indicated that the standard configuration of MySQL gives acceptable performance for small or medium-sized databases. Nevertheless, tuning database parameters can greatly improve performance and lead to very competitive runtimes.
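As a hedged illustration of acting on the kind of access-pattern analysis described above (using Python's built-in sqlite3 in place of MySQL, with table and column names invented for the example), a column that dominates read queries can be indexed:

```python
# Illustrative only: a tiny annotated-genes table where repeated lookups
# by gene name are the dominant access pattern, so that column is indexed.
# sqlite3 stands in for MySQL; schema and data are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE genes (organism TEXT, gene TEXT, annotation TEXT)")
conn.executemany(
    "INSERT INTO genes VALUES (?, ?, ?)",
    [("E. coli", "lacZ", "beta-galactosidase"),
     ("E. coli", "recA", "DNA repair"),
     ("S. cerevisiae", "CDC28", "cyclin-dependent kinase")],
)
# Read-heavy workload: repeated lookups by gene name -> index that column
conn.execute("CREATE INDEX idx_gene ON genes (gene)")

row = conn.execute(
    "SELECT annotation FROM genes WHERE gene = ?", ("recA",)
).fetchone()
```

On a table of realistic size the index turns each lookup from a full table scan into a B-tree search; which columns deserve this treatment is exactly what the access classification described above determines.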
Technical Communication, Knowledge Management, and XML.
ERIC Educational Resources Information Center
Applen, J. D.
2002-01-01
Describes how technical communicators can become involved in knowledge management. Examines how technical communicators can teach organizations to design, access, and contribute to databases; alert them to new information; and facilitate trust and sharing. Concludes that successful technical communicators would do well to establish a culture that…
BioPAX – A community standard for pathway data sharing
Demir, Emek; Cary, Michael P.; Paley, Suzanne; Fukuda, Ken; Lemer, Christian; Vastrik, Imre; Wu, Guanming; D’Eustachio, Peter; Schaefer, Carl; Luciano, Joanne; Schacherer, Frank; Martinez-Flores, Irma; Hu, Zhenjun; Jimenez-Jacinto, Veronica; Joshi-Tope, Geeta; Kandasamy, Kumaran; Lopez-Fuentes, Alejandra C.; Mi, Huaiyu; Pichler, Elgar; Rodchenkov, Igor; Splendiani, Andrea; Tkachev, Sasha; Zucker, Jeremy; Gopinath, Gopal; Rajasimha, Harsha; Ramakrishnan, Ranjani; Shah, Imran; Syed, Mustafa; Anwar, Nadia; Babur, Ozgun; Blinov, Michael; Brauner, Erik; Corwin, Dan; Donaldson, Sylva; Gibbons, Frank; Goldberg, Robert; Hornbeck, Peter; Luna, Augustin; Murray-Rust, Peter; Neumann, Eric; Reubenacker, Oliver; Samwald, Matthias; van Iersel, Martijn; Wimalaratne, Sarala; Allen, Keith; Braun, Burk; Whirl-Carrillo, Michelle; Dahlquist, Kam; Finney, Andrew; Gillespie, Marc; Glass, Elizabeth; Gong, Li; Haw, Robin; Honig, Michael; Hubaut, Olivier; Kane, David; Krupa, Shiva; Kutmon, Martina; Leonard, Julie; Marks, Debbie; Merberg, David; Petri, Victoria; Pico, Alex; Ravenscroft, Dean; Ren, Liya; Shah, Nigam; Sunshine, Margot; Tang, Rebecca; Whaley, Ryan; Letovksy, Stan; Buetow, Kenneth H.; Rzhetsky, Andrey; Schachter, Vincent; Sobral, Bruno S.; Dogrusoz, Ugur; McWeeney, Shannon; Aladjem, Mirit; Birney, Ewan; Collado-Vides, Julio; Goto, Susumu; Hucka, Michael; Le Novère, Nicolas; Maltsev, Natalia; Pandey, Akhilesh; Thomas, Paul; Wingender, Edgar; Karp, Peter D.; Sander, Chris; Bader, Gary D.
2010-01-01
BioPAX (Biological Pathway Exchange) is a standard language to represent biological pathways at the molecular and cellular level. Its major use is to facilitate the exchange of pathway data (http://www.biopax.org). Pathway data captures our understanding of biological processes, but its rapid growth necessitates development of databases and computational tools to aid interpretation. However, the current fragmentation of pathway information across many databases with incompatible formats presents barriers to its effective use. BioPAX solves this problem by making pathway data substantially easier to collect, index, interpret and share. BioPAX can represent metabolic and signaling pathways, molecular and genetic interactions and gene regulation networks. BioPAX was created through a community process. Through BioPAX, millions of interactions organized into thousands of pathways across many organisms, from a growing number of sources, are available. Thus, large amounts of pathway data are available in a computable form to support visualization, analysis and biological discovery. PMID:20829833
Moskalev, Alexey; Chernyagina, Elizaveta; de Magalhães, João Pedro; Barardo, Diogo; Thoppil, Harikrishnan; Shaposhnikov, Mikhail; Budovsky, Arie; Fraifeld, Vadim E; Garazha, Andrew; Tsvetkov, Vasily; Bronovitsky, Evgeny; Bogomolov, Vladislav; Scerbacov, Alexei; Kuryan, Oleg; Gurinovich, Roman; Jellen, Leslie C; Kennedy, Brian; Mamoshina, Polina; Dobrovolskaya, Evgeniya; Aliper, Alex; Kaminsky, Dmitry; Zhavoronkov, Alex
2015-09-01
As the level of interest in aging research increases, there is a growing number of geroprotectors, or therapeutic interventions that aim to extend the healthy lifespan and repair or reduce aging-related damage in model organisms and, eventually, in humans. There is a clear need for a manually-curated database of geroprotectors to compile and index their effects on aging and age-related diseases and link these effects to relevant studies and multiple biochemical and drug databases. Here, we introduce the first such resource, Geroprotectors (http://geroprotectors.org). Geroprotectors is a public, rapidly explorable database that catalogs over 250 experiments involving over 200 known or candidate geroprotectors that extend lifespan in model organisms. Each compound has a comprehensive profile complete with biochemistry, mechanisms, and lifespan effects in various model organisms, along with information ranging from chemical structure, side effects, and toxicity to FDA drug status. These are presented in a visually intuitive, efficient framework fit for casual browsing or in-depth research alike. Data are linked to the source studies or databases, providing quick and convenient access to original data. The Geroprotectors database facilitates cross-study, cross-organism, and cross-discipline analysis and saves countless hours of inefficient literature and web searching. Geroprotectors is a one-stop, knowledge-sharing, time-saving resource for researchers seeking healthy aging solutions. PMID:26342919
Secure count query on encrypted genomic data.
Hasan, Mohammad Zahidul; Mahdi, Md Safiur Rahman; Sadat, Md Nazmus; Mohammed, Noman
2018-05-01
Human genomic information can yield more effective healthcare by guiding medical decisions. Therefore, genomics research is gaining popularity as it can identify potential correlations between a disease and a certain gene, which improves the safety and efficacy of drug treatment and can also support more effective prevention strategies [1]. To reduce sampling error and to increase the statistical accuracy of this type of research project, data from different sources need to be brought together, since a single organization does not necessarily possess the required amount of data. In this case, data sharing among multiple organizations must satisfy strict policies (for instance, HIPAA and PIPEDA) that have been enforced to regulate privacy-sensitive data sharing. Storage and computation on the shared data can be outsourced to a third-party cloud service provider equipped with enormous storage and computation resources. However, outsourcing data to a third party is associated with a potential risk of privacy violation for the participants whose genomic sequences or clinical profiles are used in these studies. In this article, we propose a method for secure sharing and computation on genomic data in a semi-honest cloud server. In particular, there are two main contributions. First, the proposed method can handle biomedical data containing both genotypes and phenotypes. Second, our proposed index tree scheme significantly reduces the computational overhead of executing secure count query operations. In our proposed method, the confidentiality of shared data is ensured through encryption, while the entire computation process remains efficient and scalable for cutting-edge biomedical applications.
We evaluated our proposed method in terms of efficiency on a database of Single-Nucleotide Polymorphism (SNP) sequences. Experimental results demonstrate that the execution time for a query of 50 SNPs in a database of 50,000 records, where each record contains 500 SNPs, is approximately 5 s; executing the same query on a database that also includes phenotypes requires 69.7 s. Copyright © 2018 Elsevier Inc. All rights reserved.
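To make the queried operation concrete, the sketch below shows the plaintext semantics of a count query over genotype records: it returns how many records match every SNP/genotype pair in the query. This is only an illustration of what the secure protocol computes, not the paper's encrypted index-tree scheme, and the record format, SNP identifiers, and function are hypothetical.

```python
# Plaintext sketch of a count query over genomic records.
# In the actual scheme the records are encrypted and the server
# answers the query via a secure index tree; here we only show
# the semantics of the operation. Record format is hypothetical.

def count_query(records, query):
    """Count records whose genotype matches every SNP in `query`.

    records: list of dicts mapping SNP id -> genotype string.
    query:   dict mapping SNP id -> required genotype string.
    """
    return sum(
        1
        for rec in records
        if all(rec.get(snp) == gt for snp, gt in query.items())
    )

# Toy database of three records with two SNPs each.
db = [
    {"rs1": "AA", "rs2": "AG"},
    {"rs1": "AA", "rs2": "GG"},
    {"rs1": "AT", "rs2": "AG"},
]
print(count_query(db, {"rs1": "AA"}))               # 2
print(count_query(db, {"rs1": "AA", "rs2": "AG"}))  # 1
```

The secure version performs the same matching, but over ciphertexts, with the index tree pruning the candidate records so the server need not touch every encrypted row.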
Malin, Bradley; Karp, David; Scheuermann, Richard H.
2010-01-01
Clinical researchers need to share data to support scientific validation and information reuse, and to comply with a host of regulations and directives from funders. Various organizations are constructing informatics resources in the form of centralized databases to ensure widespread availability of data derived from sponsored research. The widespread use of such open databases is contingent on the protection of patient privacy. In this paper, we review several aspects of the privacy-related problems associated with data sharing for clinical research from technical and policy perspectives. We begin with a review of existing policies for secondary data sharing and privacy requirements in the context of data derived from research and clinical settings. In particular, we focus on policies specified by the U.S. National Institutes of Health and the Health Insurance Portability and Accountability Act and touch upon how these policies are related to current, as well as future, use of data stored in public database archives. Next, we address aspects of data privacy and “identifiability” from a more technical perspective, and review how biomedical databanks can be exploited and seemingly anonymous records can be “re-identified” using various resources without compromising or hacking into secure computer systems. We highlight which data features specified in clinical research data models are potentially vulnerable or exploitable. In the process, we recount a recent privacy-related concern associated with the publication of aggregate statistics from pooled genome-wide association studies that has had a significant impact on the data sharing policies of NIH-sponsored databanks. Finally, we conclude with a list of recommendations that cover various technical, legal, and policy mechanisms that open clinical databases can adopt to strengthen data privacy protections as they move toward wider deployment and adoption. PMID:20051768
Carotenoids Database: structures, chemical fingerprints and distribution among organisms
2017-01-01
To promote understanding of how organisms are related via carotenoids, either evolutionarily or symbiotically, or in food chains through natural histories, we built the Carotenoids Database. This provides chemical information on 1117 natural carotenoids with 683 source organisms. For extracting organisms closely related through the biosynthesis of carotenoids, we offer a new similarity search system ‘Search similar carotenoids’ using our original chemical fingerprint ‘Carotenoid DB Chemical Fingerprints’. These Carotenoid DB Chemical Fingerprints describe the chemical substructure and the modification details based upon International Union of Pure and Applied Chemistry (IUPAC) semi-systematic names of the carotenoids. The fingerprints also allow (i) easier prediction of six biological functions of carotenoids: provitamin A, membrane stabilizers, odorous substances, allelochemicals, antiproliferative activity and reverse MDR activity against cancer cells, (ii) easier classification of carotenoid structures, (iii) partial and exact structure searching and (iv) easier extraction of structural isomers and stereoisomers. We believe this to be the first attempt to establish fingerprints using the IUPAC semi-systematic names. For extracting close profiled organisms, we provide a new tool ‘Search similar profiled organisms’. Our current statistics show some insights into natural history: carotenoids seem to have been spread largely by bacteria, as they produce C30, C40, C45 and C50 carotenoids, with the widest range of end groups, and they share a small portion of C40 carotenoids with eukaryotes. Archaea share an even smaller portion with eukaryotes. Eukaryotes then have evolved a considerable variety of C40 carotenoids. Considering carotenoids, eukaryotes seem more closely related to bacteria than to archaea aside from 16S rRNA lineage analysis. Database URL: http://carotenoiddb.jp PMID:28365725
Challenges and Experiences of Building Multidisciplinary Datasets across Cultures
NASA Astrophysics Data System (ADS)
Jamiyansharav, K.; Laituri, M.; Fernandez-Gimenez, M.; Fassnacht, S. R.; Venable, N. B. H.; Allegretti, A. M.; Reid, R.; Baival, B.; Jamsranjav, C.; Ulambayar, T.; Linn, S.; Angerer, J.
2017-12-01
Efficient data sharing and management are key challenges in multidisciplinary scientific research, and they are further complicated by a multicultural component. We address the construction of a complex database for social-ecological analysis in Mongolia. Funded by the National Science Foundation (NSF) Dynamics of Coupled Natural and Human (CNH) Systems program, the Mongolian Rangelands and Resilience (MOR2) project focuses on the vulnerability of Mongolian pastoral systems to climate change and their adaptive capacity. The MOR2 study spans three years of fieldwork in 36 paired districts (Soum) from 18 provinces (Aimag) of Mongolia, covering the steppe, mountain forest steppe, desert steppe, and eastern steppe ecological zones. Our project team is composed of hydrologists, social scientists, geographers, and ecologists. The MOR2 database includes multiple ecological, social, meteorological, geospatial, and hydrological datasets, as well as archives of original data and surveys in multiple formats. Managing this complex database requires significant organizational skill, attention to detail, and the ability to communicate among team members from diverse disciplines and across multiple institutions in the US and Mongolia. We describe the database's rich content, organization, structure, and complexity, and we discuss lessons learned, best practices, and recommendations for complex database management, sharing, and archiving in creating a cross-cultural, multidisciplinary database.
Database Constraints Applied to Metabolic Pathway Reconstruction Tools
Vilaplana, Jordi; Solsona, Francesc; Teixido, Ivan; Usié, Anabel; Karathia, Hiren; Alves, Rui; Mateo, Jordi
2014-01-01
Our group developed two biological applications, Biblio-MetReS and Homol-MetReS, accessing the same database of organisms with annotated genes. Biblio-MetReS is a data-mining application that facilitates the reconstruction of molecular networks based on automated text-mining analysis of published scientific literature. Homol-MetReS allows functional (re)annotation of proteomes, to properly identify both the individual proteins involved in the process(es) of interest and their function. It also enables the sets of proteins involved in the process(es) in different organisms to be compared directly. The efficiency of these biological applications is directly related to the design of the shared database. We classified and analyzed the different kinds of access to the database and, based on this study, tuned the configurable parameters of the database server to maximize the performance of the communication link to and from the database system. Different database technologies were analyzed: we started the study with a public relational SQL database, MySQL, and then implemented the same database in HBase, a MapReduce-based database. The results indicated that the standard configuration of MySQL gives acceptable performance for small or medium-sized databases. Nevertheless, tuning database parameters can greatly improve the performance and lead to very competitive runtimes. PMID:25202745
Blanquer, Ignacio; Hernandez, Vicente; Segrelles, Damià; Torres, Erik
2007-01-01
Today most European healthcare centers use digital formats for their image databases. TRENCADIS is a software architecture comprising a set of services for interconnecting, managing, and sharing selected parts of medical DICOM data for the development of training and decision-support tools. The organization of the distributed information in virtual repositories is based on semantic criteria. Different groups of researchers can organize themselves into a Virtual Organization (VO); these VOs are interested in specific target areas and share information concerning each area. Although the private part of the shared information is removed, special measures are taken to prevent access by non-authorized users. This paper describes the security model implemented as part of TRENCADIS. The paper first introduces the problem and presents our motivations. Section 1 defines the objectives, Section 2 presents an overview of the existing proposals per objective, and Section 3 outlines the overall architecture. Section 4 describes how TRENCADIS is architected to realize the security goals discussed in the previous sections; the different security services and components of the infrastructure are briefly explained, as well as the exposed interfaces. Finally, Section 5 concludes and gives some remarks on our future work.
Smith, Elise
2011-11-01
In this article, I study the challenges that make database and material bank sharing difficult for many researchers. I assert that if sharing is prima facie ethical (a view that I will defend), then any practices that limit sharing require justification. I argue that: 1) data and material sharing is ethical for many stakeholders; 2) there are, however, certain reasonable limits to sharing; and 3) the rationale and validity of arguments for any limitations to sharing must be made transparent. I conclude by providing general recommendations for how to ethically share databases and material banks.
The making of a pan-European organ transplant registry.
Smits, Jacqueline M; Niesing, Jan; Breidenbach, Thomas; Collett, Dave
2013-03-01
A European patient registry to track the outcomes of organ transplant recipients does not exist. As knowledge gleaned from large registries has already led to the creation of standards of care that gained widespread support from patients and healthcare providers, the European Union initiated a project to enable the creation of a European registry linking currently existing national databases. This report describes all functional, technical, and legal prerequisites which, once fulfilled, should allow the seamless sharing of national longitudinal data across temporal, geographical, and subspecialty boundaries. To create a platform that can effortlessly link multiple databases while maintaining the integrity of the existing national databases, the project defined the following crucial elements: (i) use of a common dictionary, (ii) use of a common database and refined data-uploading technology, (iii) use of standard methodology to allow uniform, protocol-driven, and meaningful long-term follow-up analyses, (iv) use of a quality assurance mechanism to guarantee the completeness and accuracy of the data collected, and (v) establishment of a solid legal framework that allows safe data exchange. © 2012 The Authors. Transplant International © 2012 European Society for Organ Transplantation. Published by Blackwell Publishing Ltd.
NASA Astrophysics Data System (ADS)
Albeke, S. E.; Perkins, D. G.; Ewers, S. L.; Ewers, B. E.; Holbrook, W. S.; Miller, S. N.
2015-12-01
The sharing of data and results is paramount for advancing scientific research. The Wyoming Center for Environmental Hydrology and Geophysics (WyCEHG) is a multidisciplinary group that is driving scientific breakthroughs to help manage water resources in the Western United States. WyCEHG is mandated by the National Science Foundation (NSF) to share its data; however, the infrastructure needed to share such diverse, complex, and massive amounts of data did not exist within the University of Wyoming. We developed an innovative framework to meet the data organization, sharing, and discovery requirements of WyCEHG by integrating open and closed source software, embedded metadata tags, semantic web technologies, and a web-mapping application. The infrastructure uses a Relational Database Management System as its foundation, providing a versatile platform to store, organize, and query myriad datasets in both structured and unstructured formats. Detailed metadata are fundamental to the utility of datasets. We tag data with Uniform Resource Identifiers (URIs) that point to concepts with formal descriptions (i.e., semantic ontologies), allowing users to search metadata based on the intended context rather than relying on conventional keyword searches. Additionally, WyCEHG data are geographically referenced. Using the ArcGIS API for JavaScript, we developed a web-mapping application leveraging database-linked spatial data services, providing a means to visualize and spatially query available data in an intuitive map environment. Using server-side scripting (PHP), the mapping application, in conjunction with semantic search modules, dynamically communicates with the database and file system, providing access to available datasets. Our approach provides a flexible, comprehensive infrastructure from which to store and serve WyCEHG's highly diverse research-based data. This framework has not only allowed WyCEHG to meet its data-stewardship requirements but can also provide a template for others to follow.
Network Configuration of Oracle and Database Programming Using SQL
NASA Technical Reports Server (NTRS)
Davis, Melton; Abdurrashid, Jibril; Diaz, Philip; Harris, W. C.
2000-01-01
A database can be defined as a collection of information organized in such a way that it can be retrieved and used. A database management system (DBMS) can further be defined as the tool that enables us to manage and interact with the database. The Oracle 8 Server is a state-of-the-art information management environment. It is a repository for very large amounts of data, and gives users rapid access to that data. The Oracle 8 Server allows for sharing of data between applications; the information is stored in one place and used by many systems. My research will focus primarily on SQL (Structured Query Language) programming. SQL is the way you define and manipulate data in Oracle's relational database. SQL is the industry standard adopted by all database vendors. When programming with SQL, you work on sets of data (i.e., information is not processed one record at a time).
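The set-oriented style the abstract describes, where one statement operates on all matching rows rather than processing records one at a time, can be illustrated with standard SQL. The example below uses Python's built-in sqlite3 module rather than Oracle, and the table and column names are invented for the illustration.

```python
# Set-based SQL illustration: a single UPDATE touches every matching
# row at once; no explicit record-by-record loop is written.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (name TEXT, dept TEXT, salary REAL)")
conn.executemany(
    "INSERT INTO emp VALUES (?, ?, ?)",
    [("Ada", "ENG", 100.0), ("Ben", "ENG", 90.0), ("Cy", "OPS", 80.0)],
)

# One statement gives every ENG employee a raise of 10.
conn.execute("UPDATE emp SET salary = salary + 10 WHERE dept = 'ENG'")

rows = conn.execute(
    "SELECT name, salary FROM emp WHERE dept = 'ENG' ORDER BY name"
).fetchall()
print(rows)  # [('Ada', 110.0), ('Ben', 100.0)]
```

The same UPDATE and SELECT statements would run essentially unchanged against an Oracle server, since SQL is the vendor-neutral standard the abstract refers to.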
Perez-Riverol, Yasset; Alpi, Emanuele; Wang, Rui; Hermjakob, Henning; Vizcaíno, Juan Antonio
2015-01-01
Compared to other data-intensive disciplines such as genomics, public deposition and storage of MS-based proteomics data are still less developed due to, among other reasons, the inherent complexity of the data and the variety of data types and experimental workflows. In order to address this need, several public repositories for MS proteomics experiments have been developed, each with different purposes in mind. The most established resources are the Global Proteome Machine Database (GPMDB), PeptideAtlas, and the PRIDE database. Additionally, there are other useful (in many cases recently developed) resources such as ProteomicsDB, the Mass Spectrometry Interactive Virtual Environment (MassIVE), Chorus, MaxQB, the PeptideAtlas SRM Experiment Library (PASSEL), the Model Organism Protein Expression Database (MOPED), and Human Proteinpedia. In addition, the ProteomeXchange consortium has recently been developed to enable better integration of public repositories and the coordinated sharing of proteomics information, maximizing its benefit to the scientific community. Here, we review each of the major proteomics resources independently, along with some tools that enable the integration, mining, and reuse of the data. We also discuss some of the major challenges and current pitfalls in the integration and sharing of the data. PMID:25158685
Hendrickx, Diana M; Boyles, Rebecca R; Kleinjans, Jos C S; Dearry, Allen
2014-12-01
A joint US-EU workshop on enhancing data sharing and exchange in toxicogenomics was held at the National Institute for Environmental Health Sciences. Currently, efficient reuse of data is hampered by problems related to public data availability, data quality, database interoperability (the ability to exchange information), standardization, and sustainability. At the workshop, experts from universities and research institutes presented databases, studies, organizations, and tools that attempt to deal with these problems. Furthermore, a case study was presented showing that combining toxicogenomics data from multiple resources leads to more accurate predictions in risk assessment. All participants agreed that there is a need for a web portal describing the diverse, heterogeneous data resources relevant to toxicogenomics research, and that linking more data resources would improve toxicogenomics data analysis. To outline a roadmap for enhancing interoperability between data resources, the participants recommend collecting user stories from the toxicogenomics research community on the barriers in data sharing and exchange that currently hamper answering certain research questions. These user stories may guide the prioritization of steps to be taken to enhance the integration of toxicogenomics databases.
The virtual microscopy database-sharing digital microscope images for research and education.
Lee, Lisa M J; Goldman, Haviva M; Hortsch, Michael
2018-02-14
Over the last 20 years, virtual microscopy has become the predominant mode of teaching the structural organization of cells, tissues, and organs, replacing the use of optical microscopes and glass slides in a traditional histology or pathology laboratory setting. Although virtual microscopy image files can easily be duplicated, creating them requires not only quality histological glass slides but also an expensive whole-slide microscopic scanner and massive data storage devices. These resources are not available to all educators and researchers, especially at new institutions in developing countries, leaving many schools without access to virtual microscopy resources. The Virtual Microscopy Database (VMD) is a new resource established to address this problem. It is a virtual image file-sharing website that allows researchers and educators easy access to a large repository of virtual histology and pathology image files. With support from the American Association of Anatomists (Bethesda, MD) and MBF Bioscience Inc. (Williston, VT), registration and use of the VMD are currently free of charge. However, the VMD site is restricted to faculty and staff of research and educational institutions. VMD users can upload their own collections of virtual slide files, as well as view and download image files deposited by other VMD clients for their own non-profit educational and research purposes. Anat Sci Educ. © 2018 American Association of Anatomists.
Airoldi, Edoardo M.; Bai, Xue; Malin, Bradley A.
2011-01-01
We live in an increasingly mobile world, which leads to the duplication of information across domains. Though organizations attempt to obscure the identities of their constituents when sharing information for worthwhile purposes, such as basic research, the uncoordinated nature of such environments can lead to privacy vulnerabilities. For instance, disparate healthcare providers can collect information on the same patient. Federal policy requires that such providers share “de-identified” sensitive data, such as biomedical (e.g., clinical and genomic) records. At the same time, such providers can share identified information, devoid of sensitive biomedical data, for administrative functions. On a provider-by-provider basis, the biomedical and identified records appear unrelated; however, links can be established when multiple providers’ databases are studied jointly. This problem, known as trail disclosure, is a generalized phenomenon and occurs because an individual’s location-access pattern can be matched across the shared databases. Due to technical and legal constraints, it is often difficult to coordinate between providers, so it is critical to assess the disclosure risk in distributed environments in order to develop techniques to mitigate such risks. Research on privacy protection has so far focused on developing technologies to suppress or encrypt identifiers associated with sensitive information. There is a growing body of work on the formal assessment of the disclosure risk of database entries in publicly shared databases, but less attention has been paid to the distributed setting. In this research, we review the trail disclosure problem in several domains with known vulnerabilities and show that disclosure risk is influenced by the distribution of how people visit service providers. Based on empirical evidence, we propose an entropy metric for assessing such risk in shared databases prior to their release. This metric assesses risk by leveraging the statistical characteristics of a visit distribution, as opposed to person-level data. It is computationally efficient and superior to existing risk assessment methods, which rely on ad hoc assessments that are often computationally expensive and unreliable. We evaluate our approach on a range of location-access patterns in simulated environments. Our results demonstrate that the approach is effective at estimating trail disclosure risks and that the amount of self-information contained in a distributed system is one of the main driving factors. PMID:21647242
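An entropy measure over a visit distribution, of the general flavor the abstract describes, can be sketched as plain Shannon entropy over provider-visit counts. This is a generic illustration under that assumption, not necessarily the paper's exact formulation, and the provider labels are invented.

```python
# Shannon entropy (in bits) of a provider-visit distribution.
# Used here as a generic illustration of a distribution-level
# statistic computed from visit counts rather than person-level data.
import math
from collections import Counter

def visit_entropy(visits):
    """visits: iterable of provider ids, one entry per recorded visit."""
    counts = Counter(visits)
    total = sum(counts.values())
    return -sum(
        (c / total) * math.log2(c / total) for c in counts.values()
    )

uniform = ["A", "B", "C", "D"]  # one visit to each of four providers
skewed = ["A", "A", "A", "B"]   # visits concentrated at one provider
print(visit_entropy(uniform))   # 2.0 (maximal for four providers)
print(visit_entropy(skewed))    # lower: the distribution is concentrated
```

Because the statistic depends only on aggregate counts, it can be computed before release without touching individual trails, which is the practical appeal of a distribution-level risk metric.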
Wullenweber, Andrea; Kroner, Oliver; Kohrman, Melissa; Maier, Andrew; Dourson, Michael; Rak, Andrew; Wexler, Philip; Tomljanovic, Chuck
2008-11-15
The rate of chemical synthesis and use has outpaced the development of risk values and the resolution of risk assessment methodology questions. In addition, available risk values derived by different organizations may vary due to scientific judgments, the mission of the organization, or use of more recently published data. Further, each organization derives values for a unique chemical list, so it can be challenging to locate data on a given chemical. Two Internet resources are available to address these issues. First, the International Toxicity Estimates for Risk (ITER) database (www.tera.org/iter) provides chronic human health risk assessment data from a variety of organizations worldwide in a side-by-side format, explains differences in risk values derived by different organizations, and links directly to each organization's website for more detailed information. It is also the only database that includes risk information from independent parties whose risk values have undergone independent peer review. Second, the Risk Information Exchange (RiskIE) is a database of in-progress chemical risk assessment work, and includes non-chemical information related to human health risk assessment, such as training modules, white papers, and risk documents. RiskIE is available at http://www.allianceforrisk.org/RiskIE.htm, and will join ITER on the National Library of Medicine's TOXNET (http://toxnet.nlm.nih.gov/). Together, ITER and RiskIE provide risk assessors essential tools for easily identifying and comparing available risk data, for sharing in-progress assessments, and for enhancing interaction among risk assessment groups to decrease duplication of effort and to harmonize risk assessment procedures across organizations.
Corpas, Manuel; Jimenez, Rafael C.; Bongcam-Rudloff, Erik; Budd, Aidan; Brazas, Michelle D.; Fernandes, Pedro L.; Gaeta, Bruno; van Gelder, Celia; Korpelainen, Eija; Lewitter, Fran; McGrath, Annette; MacLean, Daniel; Palagi, Patricia M.; Rother, Kristian; Taylor, Jan; Via, Allegra; Watson, Mick; Schneider, Maria Victoria; Attwood, Teresa K.
2015-01-01
Summary: Rapid technological advances have led to an explosion of biomedical data in recent years. The pace of change has inspired new collaborative approaches for sharing materials and resources to help train life scientists both in the use of cutting-edge bioinformatics tools and databases and in how to analyse and interpret large datasets. A prototype platform for sharing such training resources was recently created by the Bioinformatics Training Network (BTN). Building on this work, we have created a centralized portal for sharing training materials and courses, including a catalogue of trainers and course organizers, and an announcement service for training events. For course organizers, the portal provides opportunities to promote their training events; for trainers, the portal offers an environment for sharing materials, for gaining visibility for their work and promoting their skills; for trainees, it offers a convenient one-stop shop for finding suitable training resources and identifying relevant training events and activities locally and worldwide. Availability and implementation: http://mygoblet.org/training-portal Contact: manuel.corpas@tgac.ac.uk PMID:25189782
Integrating TRENCADIS components in gLite to share DICOM medical images and structured reports.
Blanquer, Ignacio; Hernández, Vicente; Salavert, José; Segrelles, Damià
2010-01-01
The problem of sharing medical information among different centres has been tackled by many projects. Several of them target the specific problem of sharing DICOM images and structured reports (DICOM-SR), such as the TRENCADIS project. In this paper we propose sharing and organizing DICOM data and DICOM-SR metadata, benefiting from existing deployed Grid infrastructures compliant with gLite, such as EGEE or the Spanish NGI. These infrastructures contribute a large amount of storage resources for creating knowledge databases and also provide metadata storage resources (such as AMGA) to semantically organize reports in a tree structure. We first present the extension of the TRENCADIS architecture to use gLite components (LFC, AMGA, SE) for the sake of increasing interoperability. Using the metadata from DICOM-SR, and maintaining its tree structure, enables federating different but compatible diagnostic structures and simplifies the definition of complex queries. This article describes how to do this in AMGA and shows an approach to efficiently code radiology reports to enable the multi-centre federation of data resources.
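AMGA's actual interface is not shown in the abstract; the toy below only illustrates the underlying idea of tree-structured report metadata queried by walking directory-like collections. All collection and field names are invented.

```python
# Invented collection tree mimicking AMGA's directory-like metadata layout.
tree = {
    "radiology": {
        "chest_ct": {
            "report_001": {"finding": "nodule"},
            "report_002": {"finding": "normal"},
        },
        "mammography": {
            "report_101": {"finding": "mass"},
        },
    },
}

def query(node, predicate, path=()):
    """Walk the collection tree, yielding (path, record) pairs that match."""
    for name, child in node.items():
        is_record = isinstance(child, dict) and child and all(
            not isinstance(v, dict) for v in child.values()
        )
        if is_record:
            if predicate(child):
                yield path + (name,), child
        elif isinstance(child, dict):
            yield from query(child, predicate, path + (name,))

# Federated-style query across sub-collections: all abnormal reports.
matches = list(query(tree, lambda r: r["finding"] != "normal"))
print([p[-1] for p, _ in matches])  # ['report_001', 'report_101']
```
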
Flow experience in teams: The role of shared leadership.
Aubé, Caroline; Rousseau, Vincent; Brunelle, Eric
2018-04-01
The present study tests a multilevel mediation model concerning the effect of shared leadership on team members' flow experience. Specifically, we investigate the mediating role of teamwork behaviors in the relationships between 2 complementary indicators of shared leadership (i.e., density and centralization) and flow. Based on a multisource approach, we collected data through observation and survey of 111 project teams (521 individuals) made up of university students participating in a project management simulation. The results show that density and centralization have both an additive effect and an interaction effect on teamwork behaviors, such that the relationship between density and teamwork behaviors is stronger when centralization is low. In addition, teamwork behaviors play a mediating role in the relationship between shared leadership and flow. Overall, the findings highlight the importance of promoting team-based shared leadership in organizations to favor the flow experience. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
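Density and centralization are standard social-network indicators; as a hedged sketch (the paper's exact operationalization is not given in the abstract), density is the share of possible directed leadership ties that are present, and Freeman-style in-degree centralization measures how much those ties concentrate on one member. The team matrix is invented.

```python
def density(adj):
    """Proportion of possible directed ties that are present (no self-ties)."""
    n = len(adj)
    ties = sum(adj[i][j] for i in range(n) for j in range(n) if i != j)
    return ties / (n * (n - 1))

def centralization(adj):
    """Freeman-style in-degree centralization: concentration of ties on one member."""
    n = len(adj)
    indeg = [sum(adj[i][j] for i in range(n)) for j in range(n)]
    mx = max(indeg)
    # The maximum possible sum of differences occurs in a star: (n-1)*(n-1).
    return sum(mx - d for d in indeg) / ((n - 1) ** 2)

# Illustrative 4-person team: member 0 is named as a leader by everyone else.
star = [[0, 0, 0, 0],
        [1, 0, 0, 0],
        [1, 0, 0, 0],
        [1, 0, 0, 0]]
print(density(star))         # 0.25 (3 of 12 possible ties)
print(centralization(star))  # 1.0 (fully centralized)
```
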
EcoliWiki: a wiki-based community resource for Escherichia coli
McIntosh, Brenley K.; Renfro, Daniel P.; Knapp, Gwendowlyn S.; Lairikyengbam, Chanchala R.; Liles, Nathan M.; Niu, Lili; Supak, Amanda M.; Venkatraman, Anand; Zweifel, Adrienne E.; Siegele, Deborah A.; Hu, James C.
2012-01-01
EcoliWiki is the community annotation component of the PortEco (http://porteco.org; formerly EcoliHub) project, an online data resource that integrates information on laboratory strains of Escherichia coli, its phages, plasmids and mobile genetic elements. As one of the early adopters of the wiki approach to model organism databases, EcoliWiki was designed to not only facilitate community-driven sharing of biological knowledge about E. coli as a model organism, but also to be interoperable with other data resources. EcoliWiki content currently covers genes from five laboratory E. coli strains, 21 bacteriophage genomes, F plasmid and eight transposons. EcoliWiki integrates the Mediawiki wiki platform with other open-source software tools and in-house software development to extend how wikis can be used for model organism databases. EcoliWiki can be accessed online at http://ecoliwiki.net. PMID:22064863
ACLAME: a CLAssification of Mobile genetic Elements, update 2010.
Leplae, Raphaël; Lima-Mendez, Gipsi; Toussaint, Ariane
2010-01-01
The ACLAME database is dedicated to the collection, analysis and classification of sequenced mobile genetic elements (MGEs, in particular phages and plasmids). In addition to providing information on MGE content, classifications are available at various levels of organization. At the gene/protein level, families group similar sequences that are expected to share the same function. Families of four or more proteins are manually assigned a functional annotation using the Gene Ontology and MeGO, a locally developed ontology dedicated to MGEs. At the genome level, evolutionarily cohesive modules group sets of protein families shared among MGEs. At the population level, networks display the reticulate evolutionary relationships among MGEs. To increase the coverage of the phage sequence space, ACLAME version 0.4 incorporates 760 high-quality predicted prophages selected from the Prophinder database. Most of the data can be downloaded from the freely accessible ACLAME web site (http://aclame.ulb.ac.be). The BLAST interface for querying the database has been extended, and numerous tools for in-depth analysis of the results have been added.
Open innovation: Towards sharing of data, models and workflows.
Conrado, Daniela J; Karlsson, Mats O; Romero, Klaus; Sarr, Céline; Wilkins, Justin J
2017-11-15
Sharing of resources across organisations to support open innovation is an old idea, but one that the scientific community is taking up at increasing speed, particularly where public sharing is concerned. The ability to address new questions, or to provide more precise answers to old ones, through merged information is among the attractive features of sharing. Increased efficiency through reuse, and increased reliability of scientific findings through enhanced transparency, are expected outcomes. In the field of pharmacometrics, efforts to publicly share data, models and workflows have recently started. Sharing individual-level longitudinal data for modelling requires solving legal, ethical and proprietary issues similar to many other fields, but there are also pharmacometrics-specific aspects regarding data formats, exchange standards and database properties. Several organisations (CDISC, C-Path, IMI, ISoP) are working to solve these issues and propose standards. There are also a number of initiatives aimed at collecting disease-specific databases suitable for drug-disease modelling: Alzheimer's disease (ADNI, CAMD), malaria (WWARN), oncology (PDS), Parkinson's disease (PPMI) and tuberculosis (CPTR, TB-PACTS, ReSeqTB). Organized sharing of pharmacometric executable model code and associated information has in the past been sparse, but a model repository intended for the purpose (the DDMoRe Model Repository) has recently been launched, and several other services can facilitate model sharing more generally. Pharmacometric workflows have matured over the last decades, and initiatives to capture those applied to analyses more fully are ongoing. To maximize both the impact of pharmacometrics and the knowledge extracted from clinical data, the scientific community needs to take ownership of, and create opportunities for, open innovation. Copyright © 2017 Elsevier B.V. All rights reserved.
The Cambridge Structural Database
Groom, Colin R.; Bruno, Ian J.; Lightfoot, Matthew P.; Ward, Suzanna C.
2016-01-01
The Cambridge Structural Database (CSD) contains a complete record of all published organic and metal–organic small-molecule crystal structures. The database has been in operation for over 50 years and continues to be the primary means of sharing structural chemistry data and knowledge across disciplines. As well as structures that are made public to support scientific articles, it includes many structures published directly as CSD Communications. All structures are processed both computationally and by expert structural chemistry editors prior to entering the database. A key component of this processing is the reliable association of the chemical identity of the structure studied with the experimental data. This important step helps ensure that data is widely discoverable and readily reusable. Content is further enriched through selective inclusion of additional experimental data. Entries are available to anyone through free CSD community web services. Linking services developed and maintained by the CCDC, combined with the use of standard identifiers, facilitate discovery from other resources. Data can also be accessed through CCDC and third party software applications and through an application programming interface. PMID:27048719
The UCSC Genome Browser database: extensions and updates 2013.
Meyer, Laurence R; Zweig, Ann S; Hinrichs, Angie S; Karolchik, Donna; Kuhn, Robert M; Wong, Matthew; Sloan, Cricket A; Rosenbloom, Kate R; Roe, Greg; Rhead, Brooke; Raney, Brian J; Pohl, Andy; Malladi, Venkat S; Li, Chin H; Lee, Brian T; Learned, Katrina; Kirkup, Vanessa; Hsu, Fan; Heitner, Steve; Harte, Rachel A; Haeussler, Maximilian; Guruvadoo, Luvina; Goldman, Mary; Giardine, Belinda M; Fujita, Pauline A; Dreszer, Timothy R; Diekhans, Mark; Cline, Melissa S; Clawson, Hiram; Barber, Galt P; Haussler, David; Kent, W James
2013-01-01
The University of California Santa Cruz (UCSC) Genome Browser (http://genome.ucsc.edu) offers online public access to a growing database of genomic sequence and annotations for a wide variety of organisms. The Browser is an integrated tool set for visualizing, comparing, analysing and sharing both publicly available and user-generated genomic datasets. As of September 2012, genomic sequence and a basic set of annotation 'tracks' are provided for 63 organisms, including 26 mammals, 13 non-mammal vertebrates, 3 invertebrate deuterostomes, 13 insects, 6 worms, yeast and sea hare. In the past year 19 new genome assemblies have been added, and we anticipate releasing another 28 in early 2013. Further, a large number of annotation tracks have been either added, updated by contributors or remapped to the latest human reference genome. Among these are an updated UCSC Genes track for human and mouse assemblies. We have also introduced several features to improve usability, including new navigation menus. This article provides an update to the UCSC Genome Browser database, which has been previously featured in the Database issue of this journal.
KaBOB: ontology-based semantic integration of biomedical databases.
Livingston, Kevin M; Bada, Michael; Baumgartner, William A; Hunter, Lawrence E
2015-04-23
The ability to query many independent biological databases using a common ontology-based semantic model would facilitate deeper integration and more effective utilization of these diverse and rapidly growing resources. Despite ongoing work toward shared data formats and linked identifiers, significant problems persist in establishing shared identity and shared meaning across heterogeneous biomedical data sources. We present five processes for semantic data integration that, when applied collectively, solve seven key problems. These processes include making explicit the differences between biomedical concepts and database records, aggregating sets of identifiers that denote the same biomedical concept across data sources, and using declaratively represented forward-chaining rules to take information that is variably represented in source databases and integrate it into a consistent biomedical representation. We demonstrate these processes and solutions by presenting KaBOB (the Knowledge Base Of Biomedicine), a knowledge base of semantically integrated data from 18 prominent biomedical databases using common representations grounded in Open Biomedical Ontologies. An instance of KaBOB with data about humans and seven major model organisms can be built using on the order of 500 million RDF triples. All source code for building KaBOB is available under an open-source license. KaBOB is an integrated knowledge base of biomedical data representationally based in prominent, actively maintained Open Biomedical Ontologies, thus enabling queries of the underlying data in terms of biomedical concepts (e.g., genes and gene products, interactions and processes) rather than features of source-specific data schemas or file formats.
KaBOB resolves many of the issues that routinely plague biomedical researchers intending to work with data from multiple data sources and provides a platform for ongoing data integration and development and for formal reasoning over a wealth of integrated biomedical data.
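One of the processes named above, aggregating sets of identifiers that denote the same biomedical concept, maps naturally onto a union-find (disjoint-set) structure. The sketch below is illustrative and not KaBOB's actual implementation; the cross-references model the common case of one gene carrying HGNC, UniProt and NCBI Gene identifiers.

```python
class DisjointSet:
    """Union-find over identifier strings, for grouping co-referent IDs."""
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

# Illustrative cross-references asserting that IDs denote the same gene.
xrefs = [("HGNC:11998", "UniProt:P04637"),
         ("UniProt:P04637", "NCBIGene:7157")]
ds = DisjointSet()
for a, b in xrefs:
    ds.union(a, b)

# Transitive closure: all three identifiers land in one concept group.
same = ds.find("HGNC:11998") == ds.find("NCBIGene:7157")
print(same)  # True
```
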
SSER: Species specific essential reactions database.
Labena, Abraham A; Ye, Yuan-Nong; Dong, Chuan; Zhang, Fa-Z; Guo, Feng-Biao
2017-04-19
Essential reactions are vital components of cellular networks. They are the foundations of synthetic biology and are potential candidate targets for antimetabolic drug design. In particular, if a single reaction is catalyzed by multiple enzymes, inhibiting the reaction would be a better option than targeting the enzymes or the corresponding enzyme-encoding genes. Existing databases such as BRENDA, BiGG, KEGG, BioModels, BioSilico and many others offer useful and comprehensive information on biochemical reactions, but none of them focuses specifically on essential reactions. Therefore, building a centralized repository for this class of reactions would be of great value. Here, we present a species-specific essential reactions database (SSER). The current version comprises essential biochemical and transport reactions of twenty-six organisms, identified via flux balance analysis (FBA) combined with manual curation of experimentally validated metabolic network models. Quantitative data on the number of essential reactions, the number of essential reactions associated with their respective enzyme-encoding genes, and the essential reactions shared across organisms are the main contents of the database. SSER is a prime source of essential reaction data and related gene and metabolite information, and it can significantly facilitate the reconstruction and analysis of metabolic network models and drug-target discovery studies. Users can browse, search, compare and download the essential reactions of organisms of their interest through the website http://cefg.uestc.edu.cn/sser.
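SSER identifies essential reactions with flux balance analysis, which requires a linear-programming solver; as a deliberately simplified stand-in, the sketch below calls a reaction "essential" when deleting it makes biomass unreachable by naive production reachability in a toy network. Reaction and metabolite names are invented.

```python
# Toy network: each reaction maps a set of substrates to a set of products.
reactions = {
    "R_upt": ({"glc_ext"}, {"glc"}),   # glucose uptake
    "R_gly": ({"glc"}, {"pyr"}),       # lumped glycolysis
    "R_alt": ({"glc"}, {"pyr"}),       # alternative route to pyruvate
    "R_bio": ({"pyr"}, {"biomass"}),   # biomass assembly
}

def producible(rxns, seed):
    """Metabolites reachable when every reaction with satisfied inputs fires."""
    have = set(seed)
    changed = True
    while changed:
        changed = False
        for subs, prods in rxns.values():
            if subs <= have and not prods <= have:
                have |= prods
                changed = True
    return have

def essential(name, seed=frozenset({"glc_ext"})):
    """Essential (in this simplified sense) if its knockout blocks biomass."""
    rest = {k: v for k, v in reactions.items() if k != name}
    return "biomass" not in producible(rest, seed)

# R_gly and R_alt back each other up, so only uptake and assembly are essential.
print([r for r in reactions if essential(r)])  # ['R_upt', 'R_bio']
```
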
The Brain Database: A Multimedia Neuroscience Database for Research and Teaching
Wertheim, Steven L.
1989-01-01
The Brain Database is an information tool designed to aid in the integration of clinical and research results in neuroanatomy and regional biochemistry. It can handle a wide range of data types, including natural images, 2- and 3-dimensional graphics, video, numeric data and text. It is organized around three main entities: structures, substances and processes. The database will support a wide variety of graphical interfaces; two sample interfaces have been made. This tool is intended to serve as one component of a system that would allow neuroscientists and clinicians 1) to represent clinical and experimental data within a common framework, 2) to compare results precisely between experiments and among laboratories, 3) to use computing tools as an aid in collaborative work, and 4) to contribute to a shared and accessible body of knowledge about the nervous system.
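A minimal sketch of the three-entity organization described above (structures, substances, processes), with invented records; the actual schema and relationships are not given in the abstract.

```python
from dataclasses import dataclass, field

@dataclass
class Substance:
    name: str                      # e.g. a neurotransmitter

@dataclass
class Structure:
    name: str                      # e.g. an anatomical region
    substances: list = field(default_factory=list)

@dataclass
class Process:
    name: str                      # e.g. a physiological process
    structures: list = field(default_factory=list)

# Illustrative records only, linking the three entity types together.
dopamine = Substance("dopamine")
striatum = Structure("striatum", substances=[dopamine])
movement = Process("voluntary movement", structures=[striatum])

print(movement.structures[0].substances[0].name)  # dopamine
```
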
Perez-Riverol, Yasset; Alpi, Emanuele; Wang, Rui; Hermjakob, Henning; Vizcaíno, Juan Antonio
2015-03-01
Compared to other data-intensive disciplines such as genomics, public deposition and storage of MS-based proteomics data are still less developed due to, among other reasons, the inherent complexity of the data and the variety of data types and experimental workflows. In order to address this need, several public repositories for MS proteomics experiments have been developed, each with different purposes in mind. The most established resources are the Global Proteome Machine Database (GPMDB), PeptideAtlas, and the PRIDE database. Additionally, there are other useful (in many cases recently developed) resources such as ProteomicsDB, Mass Spectrometry Interactive Virtual Environment (MassIVE), Chorus, MaxQB, PeptideAtlas SRM Experiment Library (PASSEL), Model Organism Protein Expression Database (MOPED), and the Human Proteinpedia. In addition, the ProteomeXchange consortium has been recently developed to enable better integration of public repositories and the coordinated sharing of proteomics information, maximizing its benefit to the scientific community. Here, we will review each of the major proteomics resources independently and some tools that enable the integration, mining and reuse of the data. We will also discuss some of the major challenges and current pitfalls in the integration and sharing of the data. © 2014 The Authors. PROTEOMICS published by Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Anderson, Beth M.; Stevens, Michael C.; Glahn, David C.; Assaf, Michal; Pearlson, Godfrey D.
2013-01-01
We present a modular, high-performance, open-source database system that incorporates popular neuroimaging database features with novel peer-to-peer sharing and a simple installation. An increasing number of imaging centers have created a massive amount of neuroimaging data since fMRI became popular more than 20 years ago, with much of that data unshared. The Neuroinformatics Database (NiDB) provides a stable platform to store and manipulate neuroimaging data and addresses several of the impediments to data sharing identified by the INCF Task Force on Neuroimaging Datasharing, including 1) motivation to share data, 2) technical issues, and 3) standards development. NiDB addresses these problems by 1) minimizing PHI use and providing a cost-effective, simple, locally stored platform, 2) storing and associating all data (including genome data) with a subject and creating a peer-to-peer sharing model, and 3) defining a sample, normalized data storage structure that is used in NiDB. NiDB not only simplifies the local storage and analysis of neuroimaging data, but also enables simple sharing of raw data and analysis methods, which may encourage further sharing. PMID:23912507
Carotenoids Database: structures, chemical fingerprints and distribution among organisms.
Yabuzaki, Junko
2017-01-01
To promote understanding of how organisms are related via carotenoids, whether evolutionarily, symbiotically, or through food chains across natural histories, we built the Carotenoids Database. This provides chemical information on 1117 natural carotenoids with 683 source organisms. For extracting organisms closely related through the biosynthesis of carotenoids, we offer a new similarity search system 'Search similar carotenoids' using our original chemical fingerprint 'Carotenoid DB Chemical Fingerprints'. These fingerprints describe the chemical substructure and the modification details based upon International Union of Pure and Applied Chemistry (IUPAC) semi-systematic names of the carotenoids. The fingerprints also allow (i) easier prediction of six biological functions of carotenoids: provitamin A, membrane stabilizers, odorous substances, allelochemicals, antiproliferative activity and reverse MDR activity against cancer cells, (ii) easier classification of carotenoid structures, (iii) partial and exact structure searching and (iv) easier extraction of structural isomers and stereoisomers. We believe this to be the first attempt to establish fingerprints using the IUPAC semi-systematic names. For extracting closely profiled organisms, we provide a new tool 'Search similar profiled organisms'. Our current statistics show some insights into natural history: carotenoids seem to have been spread largely by bacteria, as they produce C30, C40, C45 and C50 carotenoids, with the widest range of end groups, and they share a small portion of C40 carotenoids with eukaryotes. Archaea share an even smaller portion with eukaryotes. Eukaryotes then have evolved a considerable variety of C40 carotenoids. Judging by carotenoids, eukaryotes seem more closely related to bacteria than to archaea, 16S rRNA lineage analysis aside. Database URL: http://carotenoiddb.jp. © The Author(s) 2017. Published by Oxford University Press.
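The Carotenoid DB Chemical Fingerprints are an original encoding not reproduced here; the sketch below shows the generic pattern behind a similarity search like 'Search similar carotenoids': substructure-key fingerprints compared with the Tanimoto (Jaccard) coefficient. The keys are invented stand-ins, not the database's actual fingerprint bits.

```python
def tanimoto(a: set, b: set) -> float:
    """Tanimoto (Jaccard) similarity between two substructure-key fingerprints."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Invented substructure keys, loosely inspired by IUPAC semi-systematic naming.
beta_carotene = {"C40", "beta-end-group:2", "conjugated-db:11"}
zeaxanthin    = {"C40", "beta-end-group:2", "conjugated-db:11", "3-OH:2"}
lycopene      = {"C40", "psi-end-group:2", "conjugated-db:11"}

# Zeaxanthin differs from beta-carotene only by hydroxylation, so it scores higher.
print(tanimoto(beta_carotene, zeaxanthin))  # 0.75
print(tanimoto(beta_carotene, lycopene))    # 0.5
```
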
Marsh, Erin; Anderson, Eric D.
2015-01-01
Three ore deposits databases from previous studies were evaluated and combined with new known mineral occurrences into one database, which can now be used to manage information about the known mineral occurrences of Mauritania. The Microsoft Access 2010 database opens with the list of tables and forms held within the database and a Switchboard control panel from which to easily navigate through the existing mineral deposit data and to enter data for new deposit locations. The database is a helpful tool for the organization of the basic information about the mineral occurrences of Mauritania. It is suggested the database be administered by a single operator in order to avoid data overlap and override that can result from shared real time data entry. It is proposed that the mineral occurrence database be used in concert with the geologic maps, geophysics and geochemistry datasets, as a publically advertised interface for the abundant geospatial information that the Mauritanian government can provide to interested parties.
Han, Joo Hun; Bartol, Kathryn M; Kim, Seongsu
2015-03-01
Drawing upon line-of-sight (Lawler, 1990, 2000; Murphy, 1999) as a unifying concept, we examine the cross-level influence of organizational use of individual pay-for-performance (PFP), theorizing that its impact on individual employees' performance-reward expectancy is boosted by the moderating effects of immediate group managers' contingent reward leadership and organizational use of profit-sharing. Performance-reward expectancy is then expected to mediate the interactive effects of individual PFP with contingent reward leadership and profit-sharing on employee job performance. Analyses of cross-organizational and cross-level data from 912 employees in 194 workgroups from 45 companies reveal that organizations' individual PFP was positively related to employees' performance-reward expectancy, which was strengthened when it was accompanied by higher levels of contingent reward leadership and profit-sharing. Also, performance-reward expectancy significantly transmitted the effects of individual PFP onto job performance under higher levels of contingent reward leadership and profit-sharing, thus delineating cross-level mediating and moderating processes by which organizations' individual PFP is linked to important individual-level employee outcomes. Several theoretical and practical implications are discussed. PsycINFO Database Record (c) 2015 APA, all rights reserved.
Accident/Mishap Investigation System
NASA Technical Reports Server (NTRS)
Keller, Richard; Wolfe, Shawn; Gawdiak, Yuri; Carvalho, Robert; Panontin, Tina; Williams, James; Sturken, Ian
2007-01-01
InvestigationOrganizer (IO) is a Web-based collaborative information system that integrates the generic functionality of a database, a document repository, a semantic hypermedia browser, and a rule-based inference system with specialized modeling and visualization functionality to support accident/mishap investigation teams. This accessible, online structure is designed to support investigators by allowing them to make explicit, shared, and meaningful links among evidence, causal models, findings, and recommendations.
Supply Chain Collaboration: Information Sharing in a Tactical Operating Environment
2013-06-01
architecture, there are four tiers: Client (Web application clients), Presentation (Web server), Processing (Application server), and Data (Database…). Data for each organization in each period will be collected for analysis. i) Analyses and Validation: we will run statistical tests on these data, including Pareto analyses and confirmation, covering …notes, outstanding deliveries, and inventory.
Sharing and community curation of mass spectrometry data with GNPS
Nguyen, Don Duy; Watrous, Jeramie; Kapono, Clifford A; Luzzatto-Knaan, Tal; Porto, Carla; Bouslimani, Amina; Melnik, Alexey V; Meehan, Michael J; Liu, Wei-Ting; Crüsemann, Max; Boudreau, Paul D; Esquenazi, Eduardo; Sandoval-Calderón, Mario; Kersten, Roland D; Pace, Laura A; Quinn, Robert A; Duncan, Katherine R; Hsu, Cheng-Chih; Floros, Dimitrios J; Gavilan, Ronnie G; Kleigrewe, Karin; Northen, Trent; Dutton, Rachel J; Parrot, Delphine; Carlson, Erin E; Aigle, Bertrand; Michelsen, Charlotte F; Jelsbak, Lars; Sohlenkamp, Christian; Pevzner, Pavel; Edlund, Anna; McLean, Jeffrey; Piel, Jörn; Murphy, Brian T; Gerwick, Lena; Liaw, Chih-Chuang; Yang, Yu-Liang; Humpf, Hans-Ulrich; Maansson, Maria; Keyzers, Robert A; Sims, Amy C; Johnson, Andrew R.; Sidebottom, Ashley M; Sedio, Brian E; Klitgaard, Andreas; Larson, Charles B; P., Cristopher A Boya; Torres-Mendoza, Daniel; Gonzalez, David J; Silva, Denise B; Marques, Lucas M; Demarque, Daniel P; Pociute, Egle; O'Neill, Ellis C; Briand, Enora; Helfrich, Eric J. N.; Granatosky, Eve A; Glukhov, Evgenia; Ryffel, Florian; Houson, Hailey; Mohimani, Hosein; Kharbush, Jenan J; Zeng, Yi; Vorholt, Julia A; Kurita, Kenji L; Charusanti, Pep; McPhail, Kerry L; Nielsen, Kristian Fog; Vuong, Lisa; Elfeki, Maryam; Traxler, Matthew F; Engene, Niclas; Koyama, Nobuhiro; Vining, Oliver B; Baric, Ralph; Silva, Ricardo R; Mascuch, Samantha J; Tomasi, Sophie; Jenkins, Stefan; Macherla, Venkat; Hoffman, Thomas; Agarwal, Vinayak; Williams, Philip G; Dai, Jingqui; Neupane, Ram; Gurr, Joshua; Rodríguez, Andrés M. 
C.; Lamsa, Anne; Zhang, Chen; Dorrestein, Kathleen; Duggan, Brendan M; Almaliti, Jehad; Allard, Pierre-Marie; Phapale, Prasad; Nothias, Louis-Felix; Alexandrov, Theodore; Litaudon, Marc; Wolfender, Jean-Luc; Kyle, Jennifer E; Metz, Thomas O; Peryea, Tyler; Nguyen, Dac-Trung; VanLeer, Danielle; Shinn, Paul; Jadhav, Ajit; Müller, Rolf; Waters, Katrina M; Shi, Wenyuan; Liu, Xueting; Zhang, Lixin; Knight, Rob; Jensen, Paul R; Palsson, Bernhard O; Pogliano, Kit; Linington, Roger G; Gutiérrez, Marcelino; Lopes, Norberto P; Gerwick, William H; Moore, Bradley S; Dorrestein, Pieter C; Bandeira, Nuno
2017-01-01
The potential of the diverse chemistries present in natural products (NP) for biotechnology and medicine remains untapped because NP databases are not searchable with raw data and the NP community has no way to share data other than in published papers. Although mass spectrometry techniques are well-suited to high-throughput characterization of natural products, there is a pressing need for an infrastructure to enable sharing and curation of data. We present Global Natural Products Social Molecular Networking (GNPS, http://gnps.ucsd.edu), an open-access knowledge base for community-wide organization and sharing of raw, processed or identified tandem mass (MS/MS) spectrometry data. In GNPS, crowdsourced curation of freely available community-wide reference MS libraries will underpin improved annotations. Data-driven social networking should facilitate identification of spectra and foster collaborations. We also introduce the concept of 'living data' through continuous reanalysis of deposited data. PMID:27504778
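GNPS molecular networking relates spectra by a modified cosine score; the sketch below computes a plain cosine between two binned MS/MS peak lists as a simplified illustration (the real GNPS scoring also matches peaks shifted by the precursor mass difference). Peak values are invented.

```python
import math

def cosine(spec_a: dict, spec_b: dict) -> float:
    """Plain cosine similarity between two {mz_bin: intensity} spectra."""
    shared = set(spec_a) & set(spec_b)
    dot = sum(spec_a[m] * spec_b[m] for m in shared)
    na = math.sqrt(sum(v * v for v in spec_a.values()))
    nb = math.sqrt(sum(v * v for v in spec_b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Toy spectra (m/z bin -> intensity); two related compounds share most peaks.
s1 = {105: 40.0, 133: 100.0, 161: 20.0}
s2 = {105: 35.0, 133: 90.0, 179: 15.0}
print(round(cosine(s1, s2), 3))
```
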
Wang, Mingxun; Carver, Jeremy J; Phelan, Vanessa V; Sanchez, Laura M; Garg, Neha; Peng, Yao; Nguyen, Don Duy; Watrous, Jeramie; Kapono, Clifford A; Luzzatto-Knaan, Tal; Porto, Carla; Bouslimani, Amina; Melnik, Alexey V; Meehan, Michael J; Liu, Wei-Ting; Crüsemann, Max; Boudreau, Paul D; Esquenazi, Eduardo; Sandoval-Calderón, Mario; Kersten, Roland D; Pace, Laura A; Quinn, Robert A; Duncan, Katherine R; Hsu, Cheng-Chih; Floros, Dimitrios J; Gavilan, Ronnie G; Kleigrewe, Karin; Northen, Trent; Dutton, Rachel J; Parrot, Delphine; Carlson, Erin E; Aigle, Bertrand; Michelsen, Charlotte F; Jelsbak, Lars; Sohlenkamp, Christian; Pevzner, Pavel; Edlund, Anna; McLean, Jeffrey; Piel, Jörn; Murphy, Brian T; Gerwick, Lena; Liaw, Chih-Chuang; Yang, Yu-Liang; Humpf, Hans-Ulrich; Maansson, Maria; Keyzers, Robert A; Sims, Amy C; Johnson, Andrew R; Sidebottom, Ashley M; Sedio, Brian E; Klitgaard, Andreas; Larson, Charles B; P, Cristopher A Boya; Torres-Mendoza, Daniel; Gonzalez, David J; Silva, Denise B; Marques, Lucas M; Demarque, Daniel P; Pociute, Egle; O'Neill, Ellis C; Briand, Enora; Helfrich, Eric J N; Granatosky, Eve A; Glukhov, Evgenia; Ryffel, Florian; Houson, Hailey; Mohimani, Hosein; Kharbush, Jenan J; Zeng, Yi; Vorholt, Julia A; Kurita, Kenji L; Charusanti, Pep; McPhail, Kerry L; Nielsen, Kristian Fog; Vuong, Lisa; Elfeki, Maryam; Traxler, Matthew F; Engene, Niclas; Koyama, Nobuhiro; Vining, Oliver B; Baric, Ralph; Silva, Ricardo R; Mascuch, Samantha J; Tomasi, Sophie; Jenkins, Stefan; Macherla, Venkat; Hoffman, Thomas; Agarwal, Vinayak; Williams, Philip G; Dai, Jingqui; Neupane, Ram; Gurr, Joshua; Rodríguez, Andrés M C; Lamsa, Anne; Zhang, Chen; Dorrestein, Kathleen; Duggan, Brendan M; Almaliti, Jehad; Allard, Pierre-Marie; Phapale, Prasad; Nothias, Louis-Felix; Alexandrov, Theodore; Litaudon, Marc; Wolfender, Jean-Luc; Kyle, Jennifer E; Metz, Thomas O; Peryea, Tyler; Nguyen, Dac-Trung; VanLeer, Danielle; Shinn, Paul; Jadhav, Ajit; Müller, Rolf; Waters, Katrina M; Shi, 
Wenyuan; Liu, Xueting; Zhang, Lixin; Knight, Rob; Jensen, Paul R; Palsson, Bernhard O; Pogliano, Kit; Linington, Roger G; Gutiérrez, Marcelino; Lopes, Norberto P; Gerwick, William H; Moore, Bradley S; Dorrestein, Pieter C; Bandeira, Nuno
2016-08-09
The potential of the diverse chemistries present in natural products (NP) for biotechnology and medicine remains untapped because NP databases are not searchable with raw data and the NP community has no way to share data other than in published papers. Although mass spectrometry (MS) techniques are well-suited to high-throughput characterization of NP, there is a pressing need for an infrastructure to enable sharing and curation of data. We present Global Natural Products Social Molecular Networking (GNPS; http://gnps.ucsd.edu), an open-access knowledge base for community-wide organization and sharing of raw, processed, or identified tandem mass spectrometry (MS/MS) data. In GNPS, crowdsourced curation of freely available community-wide reference MS libraries will underpin improved annotations. Data-driven social networking should facilitate identification of spectra and foster collaborations. We also introduce the concept of 'living data' through continuous reanalysis of deposited data.
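Molecular networking of the kind GNPS performs rests on scoring the similarity of MS/MS spectra, with a cosine score over matched peaks as a core ingredient. The sketch below is a simplified illustration of that idea with invented peak lists; real GNPS scoring also handles precursor mass shifts and peak weighting, so this is not the actual GNPS implementation:

```python
import math

def cosine_score(spec_a, spec_b, tol=0.02):
    """Greedy cosine similarity between two peak lists [(m/z, intensity)].
    Simplified: each peak in A matches at most one peak in B within tol."""
    score, used = 0.0, set()
    for mz_a, ia in spec_a:
        for j, (mz_b, ib) in enumerate(spec_b):
            if j not in used and abs(mz_a - mz_b) <= tol:
                score += ia * ib
                used.add(j)
                break
    norm_a = math.sqrt(sum(i * i for _, i in spec_a))
    norm_b = math.sqrt(sum(i * i for _, i in spec_b))
    return score / (norm_a * norm_b)

a = [(100.0, 1.0), (150.0, 0.5)]
print(round(cosine_score(a, a), 3))  # 1.0 (identical spectra)
```

In a molecular network, pairs of spectra whose score exceeds a threshold are linked, clustering related compounds.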
The future application of GML database in GIS
NASA Astrophysics Data System (ADS)
Deng, Yuejin; Cheng, Yushu; Jing, Lianwen
2006-10-01
In 2004, the Geography Markup Language (GML) Implementation Specification (version 3.1.1) was published by the Open Geospatial Consortium, Inc. More and more applications in geospatial data sharing and interoperability now depend on GML. GML was designed primarily for the exchange and transport of geo-information through standard modeling and encoding of geographic phenomena. However, applications face the problem of how to organize and access large volumes of GML data effectively; research on GML databases focuses on these problems. The effective storage of GML data is a hot topic in the GIS community today. A GML Database Management System (GDBMS) deals mainly with the storage and management of GML data. Two types of XML database are commonly distinguished: native XML databases and XML-enabled databases. Since GML is an application of the XML standard to geographic data, XML database systems can also be used to manage GML. In this paper, we review the state of the art of XML databases, including storage, indexing, query languages and management systems, and then move on to GML databases. We conclude with the future prospects of GML databases in GIS applications.
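Because GML is an XML application, standard XML tooling can already parse and query it, which is what makes XML-enabled databases a viable storage option. A minimal illustration with a hypothetical single-point fragment (the GML namespace is the standard one, but the feature and coordinates are invented):

```python
import xml.etree.ElementTree as ET

# A minimal, hypothetical GML fragment: one point feature.
GML = """<gml:Point xmlns:gml="http://www.opengis.net/gml" srsName="EPSG:4326">
  <gml:pos>39.90 116.40</gml:pos>
</gml:Point>"""

def parse_point(gml_text):
    """Extract (lat, lon) from a gml:Point element."""
    ns = {"gml": "http://www.opengis.net/gml"}
    root = ET.fromstring(gml_text)
    lat, lon = map(float, root.find("gml:pos", ns).text.split())
    return lat, lon

print(parse_point(GML))  # (39.9, 116.4)
```

A native XML or GML database would add spatial indexing and query languages on top of this kind of parsing.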
Spatial Data Integration Using Ontology-Based Approach
NASA Astrophysics Data System (ADS)
Hasani, S.; Sadeghi-Niaraki, A.; Jelokhani-Niaraki, M.
2015-12-01
In today's world, the need for spatial data has become so crucial that many organizations have begun to produce it themselves. In some circumstances, obtaining integrated data in real time requires a sustainable mechanism for real-time integration; disaster management situations, for example, require obtaining real-time data from various sources of information. One of the main challenges in such situations is the high degree of heterogeneity between different organizations' data. To address this issue, we introduce an ontology-based method that provides sharing and integration capabilities for existing databases. In addition to resolving semantic heterogeneity, our proposed method also provides better access to information. Our approach consists of three steps. In the first step, the objects in a relational database are identified, the semantic relationships between them are modelled, and the ontology of each database is created. In the second step, the resulting ontology is inserted into the database, and the relationship of each ontology class is recorded in a newly created column in the database tables. The last step is a platform based on service-oriented architecture that allows integration of data using the concept of ontology mapping. The proposed approach, in addition to being fast and low cost, makes the process of data integration easy and leaves the underlying data unchanged, thus preserving the legacy applications built on it.
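The ontology-mapping step in approaches like this one can be pictured as rewriting each source's local schema terms into shared ontology concepts so that heterogeneous records become comparable. A toy sketch under that assumption (all database, column, and concept names here are hypothetical, not from the paper):

```python
# Toy sketch of ontology mapping: each local schema term is mapped to a
# shared ontology concept so records from heterogeneous databases can be
# merged under common field names. All names are hypothetical.

ONTOLOGY_MAP = {
    "db_a": {"road_nm": "RoadName", "len_km": "LengthKm"},
    "db_b": {"street": "RoadName", "length": "LengthKm"},
}

def integrate(source, record):
    """Rewrite a source record's keys into shared ontology concepts."""
    mapping = ONTOLOGY_MAP[source]
    return {mapping[k]: v for k, v in record.items() if k in mapping}

merged = [integrate("db_a", {"road_nm": "Azadi", "len_km": 4.2}),
          integrate("db_b", {"street": "Enghelab", "length": 3.1})]
print(merged[0]["RoadName"])  # Azadi
```

In the paper's architecture this rewriting would happen behind services in the service-oriented integration platform rather than in client code.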
Shared Web Information Systems for Heritage in Scotland and Wales - Flexibility in Partnership
NASA Astrophysics Data System (ADS)
Thomas, D.; McKeague, P.
2013-07-01
The Royal Commissions on the Ancient and Historical Monuments of Scotland and Wales were established in 1908 to investigate and record the archaeological and built heritage of their respective countries. The organisations have grown organically over the succeeding century, steadily developing their inventories and collections as card and paper indexes. Computerisation followed in the late 1980s and early 1990s, with RCAHMS releasing Canmore, an online searchable database, in 1998. Following a review of service provision in Wales, RCAHMW entered into partnership with RCAHMS in 2003 to deliver a database for their national inventories and collections. The resultant partnership enables both organisations to develop at their own pace whilst delivering efficiencies through a common experience and a shared IT infrastructure. Through innovative solutions the partnership has also delivered benefits to the wider historic environment community, providing online portals to a range of datasets, ultimately raising public awareness and appreciation of the heritage around them. Now celebrating its 10th year, Shared Web Information Systems for Heritage, or more simply SWISH, continues to underpin the work of both organisations in presenting information about the historic environment to the public.
Unification - An international aerospace information issue
NASA Technical Reports Server (NTRS)
Cotter, Gladys A.; Lahr, Thomas F.
1992-01-01
Scientific and Technical Information (STI) represents the results of large investments in research and development (R&D) and the expertise of a nation and is a valuable resource. For more than four decades, NASA and its predecessor organizations have developed and managed the preeminent aerospace information system. NASA obtains foreign materials through its international exchange relationships, continually increasing the comprehensiveness of the NASA Aerospace Database (NAD). The NAD is de facto the international aerospace database. This paper reviews current NASA goals and activities with a view toward maintaining compatibility among international aerospace information systems, eliminating duplication of effort, and sharing resources through international cooperation wherever possible.
Neuroimaging Data Sharing on the Neuroinformatics Database Platform
Book, Gregory A; Stevens, Michael; Assaf, Michal; Glahn, David; Pearlson, Godfrey D
2015-01-01
We describe the Neuroinformatics Database (NiDB), an open-source database platform for archiving, analysis, and sharing of neuroimaging data. Data from the multi-site projects Autism Brain Imaging Data Exchange (ABIDE), Bipolar-Schizophrenia Network on Intermediate Phenotypes parts one and two (B-SNIP1, B-SNIP2), and Monetary Incentive Delay task (MID) are available for download from the public instance of NiDB, with more projects sharing data as it becomes available. As demonstrated by making several large datasets available, NiDB is an extensible platform appropriately suited to archive and distribute shared neuroimaging data. PMID:25888923
BioMart Central Portal: an open database network for the biological community
Guberman, Jonathan M.; Ai, J.; Arnaiz, O.; Baran, Joachim; Blake, Andrew; Baldock, Richard; Chelala, Claude; Croft, David; Cros, Anthony; Cutts, Rosalind J.; Di Génova, A.; Forbes, Simon; Fujisawa, T.; Gadaleta, E.; Goodstein, D. M.; Gundem, Gunes; Haggarty, Bernard; Haider, Syed; Hall, Matthew; Harris, Todd; Haw, Robin; Hu, S.; Hubbard, Simon; Hsu, Jack; Iyer, Vivek; Jones, Philip; Katayama, Toshiaki; Kinsella, R.; Kong, Lei; Lawson, Daniel; Liang, Yong; Lopez-Bigas, Nuria; Luo, J.; Lush, Michael; Mason, Jeremy; Moreews, Francois; Ndegwa, Nelson; Oakley, Darren; Perez-Llamas, Christian; Primig, Michael; Rivkin, Elena; Rosanoff, S.; Shepherd, Rebecca; Simon, Reinhard; Skarnes, B.; Smedley, Damian; Sperling, Linda; Spooner, William; Stevenson, Peter; Stone, Kevin; Teague, J.; Wang, Jun; Wang, Jianxin; Whitty, Brett; Wong, D. T.; Wong-Erasmus, Marie; Yao, L.; Youens-Clark, Ken; Yung, Christina; Zhang, Junjun; Kasprzyk, Arek
2011-01-01
BioMart Central Portal is a first of its kind, community-driven effort to provide unified access to dozens of biological databases spanning genomics, proteomics, model organisms, cancer data, ontology information and more. Anybody can contribute an independently maintained resource to the Central Portal, allowing it to be exposed to and shared with the research community, and linking it with the other resources in the portal. Users can take advantage of the common interface to quickly utilize different sources without learning a new system for each. The system also simplifies cross-database searches that might otherwise require several complicated steps. Several integrated tools streamline common tasks, such as converting between ID formats and retrieving sequences. The combination of a wide variety of databases, an easy-to-use interface, robust programmatic access and the array of tools make Central Portal a one-stop shop for biological data querying. Here, we describe the structure of Central Portal and show example queries to demonstrate its capabilities. Database URL: http://central.biomart.org. PMID:21930507
Public variant databases: liability?
Thorogood, Adrian; Cook-Deegan, Robert; Knoppers, Bartha Maria
2017-07-01
Public variant databases support the curation, clinical interpretation, and sharing of genomic data, thus reducing harmful errors or delays in diagnosis. As variant databases are increasingly relied on in the clinical context, there is concern that negligent variant interpretation will harm patients and attract liability. This article explores the evolving legal duties of laboratories, public variant databases, and physicians in clinical genomics and recommends a governance framework for databases to promote responsible data sharing. Genet Med advance online publication 15 December 2016.
NASA Astrophysics Data System (ADS)
Bartolini, S.; Becerril, L.; Martí, J.
2014-11-01
One of the most important issues in modern volcanology is the assessment of volcanic risk, which will depend - among other factors - on both the quantity and quality of the available data and an optimum storage mechanism. This will require the design of purpose-built databases that take into account data format and availability and afford easy data storage and sharing, and will provide for a more complete risk assessment that combines different analyses but avoids any duplication of information. Data contained in any such database should facilitate spatial and temporal analysis that will (1) produce probabilistic hazard models for future vent opening, (2) simulate volcanic hazards and (3) assess their socio-economic impact. We describe the design of a new spatial database structure, VERDI (Volcanic managEment Risk Database desIgn), which allows different types of data, including geological, volcanological, meteorological, monitoring and socio-economic information, to be manipulated, organized and managed. A central design requirement is that VERDI serve as a tool for connecting different kinds of data sources, GIS platforms and modeling applications. We present an overview of the database design, its components and the attributes that play an important role in the database model. The potential of the VERDI structure and the possibilities it offers for data organization are shown through its application to El Hierro (Canary Islands). The VERDI database will provide scientists and decision makers with a useful tool to assist in conducting volcanic risk assessment and management.
Design Considerations for a Web-based Database System of ELISpot Assay in Immunological Research
Ma, Jingming; Mosmann, Tim; Wu, Hulin
2005-01-01
The enzyme-linked immunospot (ELISpot) assay has been a primary tool in immunological research (for example, on HIV-specific T cell responses). Because of the huge amount of data involved in ELISpot assay testing, a database system is needed for efficient data entry, easy retrieval, secure storage, and convenient data processing. In addition, the NIH has recently issued a policy to promote the sharing of research data (see http://grants.nih.gov/grants/policy/data_sharing). A Web-based database system would clearly benefit data sharing among broad research communities. Here are some considerations for a database system for the ELISpot assay (DBSEA). PMID:16779326
47 CFR 52.32 - Allocation of the shared costs of long-term number portability.
Code of Federal Regulations, 2012 CFR
2012-10-01
....21(h), of each regional database, as defined in § 52.21(1), shall recover the shared costs of long-term number portability attributable to that regional database from all telecommunications carriers providing telecommunications service in areas that regional database serves. Pursuant to its duties under...
47 CFR 52.32 - Allocation of the shared costs of long-term number portability.
Code of Federal Regulations, 2010 CFR
2010-10-01
....21(h), of each regional database, as defined in § 52.21(1), shall recover the shared costs of long-term number portability attributable to that regional database from all telecommunications carriers providing telecommunications service in areas that regional database serves. Pursuant to its duties under...
47 CFR 52.32 - Allocation of the shared costs of long-term number portability.
Code of Federal Regulations, 2011 CFR
2011-10-01
....21(h), of each regional database, as defined in § 52.21(1), shall recover the shared costs of long-term number portability attributable to that regional database from all telecommunications carriers providing telecommunications service in areas that regional database serves. Pursuant to its duties under...
47 CFR 52.32 - Allocation of the shared costs of long-term number portability.
Code of Federal Regulations, 2014 CFR
2014-10-01
....21(h), of each regional database, as defined in § 52.21(1), shall recover the shared costs of long-term number portability attributable to that regional database from all telecommunications carriers providing telecommunications service in areas that regional database serves. Pursuant to its duties under...
47 CFR 52.32 - Allocation of the shared costs of long-term number portability.
Code of Federal Regulations, 2013 CFR
2013-10-01
....21(h), of each regional database, as defined in § 52.21(1), shall recover the shared costs of long-term number portability attributable to that regional database from all telecommunications carriers providing telecommunications service in areas that regional database serves. Pursuant to its duties under...
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCluskey, Kevin; Baker, Scott E.
2017-02-17
As model organisms, filamentous fungi have been important since the beginning of modern biological inquiry and have benefitted from open data since the earliest genetic maps were shared. From early origins in simple Mendelian genetics of mating types, parasexual genetics of colony colour, and the foundational demonstration of the segregation of a nutritional requirement, the contribution of research systems utilising filamentous fungi has spanned the biochemical genetics era and the molecular genetics era, and is now at the very foundation of diverse omics approaches to research and development. Fungal model organisms have come from most major taxonomic groups, although Ascomycete filamentous fungi have seen the most sustained effort. In addition to the published material about filamentous fungi, shared molecular tools have found application in every area of fungal biology. Likewise, shared data have contributed to the success of model systems. Furthermore, the scale of data supporting research with filamentous fungi has grown by 10 to 12 orders of magnitude. From genetic to molecular maps, expression databases, and finally genome resources, the open and collaborative nature of the research communities has assured that the rising tide of data has lifted all of the research systems together.
Public variant databases: liability?
Thorogood, Adrian; Cook-Deegan, Robert; Knoppers, Bartha Maria
2017-01-01
Public variant databases support the curation, clinical interpretation, and sharing of genomic data, thus reducing harmful errors or delays in diagnosis. As variant databases are increasingly relied on in the clinical context, there is concern that negligent variant interpretation will harm patients and attract liability. This article explores the evolving legal duties of laboratories, public variant databases, and physicians in clinical genomics and recommends a governance framework for databases to promote responsible data sharing. Genet Med advance online publication 15 December 2016. PMID:27977006
NASA Technical Reports Server (NTRS)
Liaw, Morris; Evesson, Donna
1988-01-01
This is a manual for users of the Software Engineering and Ada Database (SEAD). SEAD was developed to provide an information resource to NASA and NASA contractors with respect to Ada-based resources and activities that are available or underway either in NASA or elsewhere in the worldwide Ada community. The sharing of such information will reduce the duplication of effort while improving quality in the development of future software systems. The manual describes the organization of the data in SEAD, the user interface from logging in to logging out, and concludes with a ten chapter tutorial on how to use the information in SEAD. Two appendices provide quick reference for logging into SEAD and using the keyboard of an IBM 3270 or VT100 computer terminal.
Rostami, Reza; Nahm, Meredith; Pieper, Carl F
2009-04-01
Despite a pressing and well-documented need for better sharing of information on clinical trials data quality assurance methods, many research organizations remain reluctant to publish descriptions of and results from their internal auditing and quality assessment methods. We present findings from a review of a decade of internal data quality audits performed at the Duke Clinical Research Institute, a large academic research organization that conducts data management for a diverse array of clinical studies, both academic and industry-sponsored. In so doing, we hope to stimulate discussions that could benefit the wider clinical research enterprise by providing insight into methods of optimizing data collection and cleaning, ultimately helping patients and furthering essential research. We present our audit methodologies, including sampling methods, audit logistics, sample sizes, counting rules used for error rate calculations, and characteristics of audited trials. We also present database error rates as computed according to two analytical methods, which we address in detail, and discuss the advantages and drawbacks of two auditing methods used during this 10-year period. Our review of the DCRI audit program indicates that higher data quality may be achieved from a series of small audits throughout the trial rather than through a single large database audit at database lock. We found that error rates trended upward from year to year in the period characterized by traditional audits performed at database lock (1997-2000), but consistently trended downward after periodic statistical process control type audits were instituted (2001-2006). These increases in data quality were also associated with cost savings in auditing, estimated at 1000 h per year, or the efforts of one-half of a full time equivalent (FTE). Our findings are drawn from retrospective analyses and are not the result of controlled experiments, and may therefore be subject to unanticipated confounding. 
In addition, the scope and type of audits we examine here are specific to our institution, and our results may not be broadly generalizable. Use of statistical process control methodologies may afford advantages over more traditional auditing methods, and further research will be necessary to confirm the reliability and usability of such techniques. We believe that open and candid discussion of data quality assurance issues among academic and clinical research organizations will ultimately benefit the entire research community in the coming era of increased data sharing and re-use.
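The abstract above does not reproduce the DCRI's exact counting rules, but audit error rates of this kind are generally a ratio of discrepancies found to fields inspected, often scaled to a per-10,000-fields reporting unit. A generic sketch under that assumption (the sample numbers are invented):

```python
# Generic sketch of a database audit error rate: errors found divided by
# fields inspected, scaled per 10,000 fields (a common reporting unit).
# This does not reproduce the DCRI paper's specific counting rules.

def error_rate_per_10k(errors_found, fields_inspected):
    """Errors per 10,000 audited fields."""
    if fields_inspected <= 0:
        raise ValueError("no fields inspected")
    return 10_000 * errors_found / fields_inspected

# e.g. 37 discrepancies found across 52,000 audited fields:
print(round(error_rate_per_10k(37, 52_000), 2))  # 7.12
```

Tracking this rate across a series of small periodic audits, rather than computing it once at database lock, is what enables the statistical-process-control style of monitoring the authors favour.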
2002-09-01
The federal government has 55 databases that deal with security threats, but inter-agency access depends on establishing agreements through which that information can be shared. True cooperation also will require government-wide commitment to enterprise architecture, integrated...
The BioGRID interaction database: 2013 update.
Chatr-Aryamontri, Andrew; Breitkreutz, Bobby-Joe; Heinicke, Sven; Boucher, Lorrie; Winter, Andrew; Stark, Chris; Nixon, Julie; Ramage, Lindsay; Kolas, Nadine; O'Donnell, Lara; Reguly, Teresa; Breitkreutz, Ashton; Sellam, Adnane; Chen, Daici; Chang, Christie; Rust, Jennifer; Livstone, Michael; Oughtred, Rose; Dolinski, Kara; Tyers, Mike
2013-01-01
The Biological General Repository for Interaction Datasets (BioGRID: http://thebiogrid.org) is an open access archive of genetic and protein interactions that are curated from the primary biomedical literature for all major model organism species. As of September 2012, BioGRID houses more than 500 000 manually annotated interactions from more than 30 model organisms. BioGRID maintains complete curation coverage of the literature for the budding yeast Saccharomyces cerevisiae, the fission yeast Schizosaccharomyces pombe and the model plant Arabidopsis thaliana. A number of themed curation projects in areas of biomedical importance are also supported. BioGRID has established collaborations and/or shares data records for the annotation of interactions and phenotypes with most major model organism databases, including Saccharomyces Genome Database, PomBase, WormBase, FlyBase and The Arabidopsis Information Resource. BioGRID also actively engages with the text-mining community to benchmark and deploy automated tools to expedite curation workflows. BioGRID data are freely accessible through both a user-defined interactive interface and in batch downloads in a wide variety of formats, including PSI-MI2.5 and tab-delimited files. BioGRID records can also be interrogated and analyzed with a series of new bioinformatics tools, which include a post-translational modification viewer, a graphical viewer, a REST service and a Cytoscape plugin.
The Native Plant Propagation Protocol Database: 16 years of sharing information
R. Kasten Dumroese; Thomas D. Landis
2016-01-01
The Native Plant Propagation Protocol Database was launched in 2001 to provide an online mechanism for sharing information about growing native plants. It relies on plant propagators to upload their protocols (detailed directions for growing particular native plants) so that others may benefit from their experience. Currently the database has nearly 3000 protocols and...
NONATObase: a database for Polychaeta (Annelida) from the Southwestern Atlantic Ocean.
Pagliosa, Paulo R; Doria, João G; Misturini, Dairana; Otegui, Mariana B P; Oortman, Mariana S; Weis, Wilson A; Faroni-Perez, Larisse; Alves, Alexandre P; Camargo, Maurício G; Amaral, A Cecília Z; Marques, Antonio C; Lana, Paulo C
2014-01-01
Networks can greatly advance data-sharing attitudes by providing organized and useful data sets on marine biodiversity in a friendly and shared scientific environment. NONATObase, the interactive database on polychaetes presented herein, will provide new macroecological and taxonomic insights into the Southwestern Atlantic region. The database was developed by the NONATO network, a team of South American researchers, who integrated available information on polychaetes from between 5°N and 80°S in the Atlantic Ocean and near the Antarctic. The guiding principle of the database is to keep free and open access to data based on partnerships. Its architecture consists of a relational database integrated in the MySQL and PHP framework. Its web application allows access to the data from three different directions: species (qualitative data), abundance (quantitative data) and data set (reference data). The database has built-in functionality, such as filtering of data on user-defined taxonomic levels and on characteristics of the site, sample, sampler, and mesh size used. Considering that there are still many taxonomic issues related to poorly known regional fauna, a scientific committee was created to work out consistent solutions to current misidentifications and the equivocal taxonomic status of some species. Expertise from this committee will be incorporated into NONATObase continually. The use of quantitative data was made possible by standardization of a sample unit. All data, maps of distribution and references from a data set or a specified query can be visualized and exported to data formats commonly used in statistical analysis or reference manager software. The NONATO network has been launched with NONATObase, a valuable resource for marine ecologists and taxonomists. The database is expected to grow in functionality as it proves useful, particularly regarding the challenges of dealing with molecular genetic data and tools to assess the effects of global environmental change.
Database URL: http://nonatobase.ufsc.br/.
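The three query directions described above (species, abundance, data set) with user-defined filters can be pictured as parameterized queries over a relational schema. A minimal in-memory sqlite3 sketch; the table, columns, and values are invented for illustration and are not NONATObase's actual design:

```python
import sqlite3

# Minimal relational sketch of a species-occurrence table; schema and
# values are hypothetical, not NONATObase's actual design.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE occurrence (
    species TEXT, family TEXT, site TEXT, mesh_mm REAL, abundance INTEGER)""")
con.executemany("INSERT INTO occurrence VALUES (?, ?, ?, ?, ?)", [
    ("Nereis sp.",     "Nereididae", "SiteA", 0.5, 12),
    ("Laeonereis sp.", "Nereididae", "SiteB", 1.0, 4),
    ("Owenia sp.",     "Oweniidae",  "SiteA", 0.5, 7),
])

# A user-defined filter on taxonomic level and mesh size, as the
# abstract describes: quantitative (abundance) data for one family.
rows = con.execute(
    "SELECT species, abundance FROM occurrence "
    "WHERE family = ? AND mesh_mm = ?", ("Nereididae", 0.5)).fetchall()
print(rows)  # [('Nereis sp.', 12)]
```

Standardizing the sample unit, as the authors note, is what makes abundance values from different surveys comparable in queries like this.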
NONATObase: a database for Polychaeta (Annelida) from the Southwestern Atlantic Ocean
Pagliosa, Paulo R.; Doria, João G.; Misturini, Dairana; Otegui, Mariana B. P.; Oortman, Mariana S.; Weis, Wilson A.; Faroni-Perez, Larisse; Alves, Alexandre P.; Camargo, Maurício G.; Amaral, A. Cecília Z.; Marques, Antonio C.; Lana, Paulo C.
2014-01-01
Networks can greatly advance data-sharing attitudes by providing organized and useful data sets on marine biodiversity in a friendly and shared scientific environment. NONATObase, the interactive database on polychaetes presented herein, will provide new macroecological and taxonomic insights into the Southwestern Atlantic region. The database was developed by the NONATO network, a team of South American researchers, who integrated available information on polychaetes from between 5°N and 80°S in the Atlantic Ocean and near the Antarctic. The guiding principle of the database is to keep free and open access to data based on partnerships. Its architecture consists of a relational database integrated in the MySQL and PHP framework. Its web application allows access to the data from three different directions: species (qualitative data), abundance (quantitative data) and data set (reference data). The database has built-in functionality, such as filtering of data on user-defined taxonomic levels and on characteristics of the site, sample, sampler, and mesh size used. Considering that there are still many taxonomic issues related to poorly known regional fauna, a scientific committee was created to work out consistent solutions to current misidentifications and the equivocal taxonomic status of some species. Expertise from this committee will be incorporated into NONATObase continually. The use of quantitative data was made possible by standardization of a sample unit. All data, maps of distribution and references from a data set or a specified query can be visualized and exported to data formats commonly used in statistical analysis or reference manager software. The NONATO network has been launched with NONATObase, a valuable resource for marine ecologists and taxonomists. The database is expected to grow in functionality as it proves useful, particularly regarding the challenges of dealing with molecular genetic data and tools to assess the effects of global environmental change.
Database URL: http://nonatobase.ufsc.br/ PMID:24573879
ERIC Educational Resources Information Center
Li, Rui; Liu, Min
2007-01-01
The purpose of this study is to examine the potential of using computer databases as cognitive tools to share learners' cognitive load and facilitate learning in a multimedia problem-based learning (PBL) environment designed for sixth graders. Two research questions were: (a) can the computer database tool share sixth-graders' cognitive load? and…
An international aerospace information system: A cooperative opportunity
NASA Technical Reports Server (NTRS)
Cotter, Gladys A.; Blados, Walter R.
1992-01-01
Scientific and technical information (STI) is a valuable resource which represents the results of large investments in research and development (R&D) and the expertise of a nation. NASA and its predecessor organizations have developed and managed the preeminent aerospace information system. We see information and information systems changing and becoming more international in scope. In Europe, consistent with joint R&D programs and a view toward a united Europe, we have seen the emergence of a European Aerospace Database concept. In addition, the development of aeronautics and astronautics in individual nations has also led to initiatives for national aerospace databases. Considering recent technological developments in information science and technology, as well as the reality of scarce resources in all nations, it is time to reconsider the mutually beneficial possibilities offered by cooperation and international resource sharing. We consider the new possibilities offered by cooperation among the various aerospace database efforts, working toward an international aerospace database initiative that can optimize the cost/benefit equation for all participants.
BioMart Central Portal: an open database network for the biological community.
Guberman, Jonathan M; Ai, J; Arnaiz, O; Baran, Joachim; Blake, Andrew; Baldock, Richard; Chelala, Claude; Croft, David; Cros, Anthony; Cutts, Rosalind J; Di Génova, A; Forbes, Simon; Fujisawa, T; Gadaleta, E; Goodstein, D M; Gundem, Gunes; Haggarty, Bernard; Haider, Syed; Hall, Matthew; Harris, Todd; Haw, Robin; Hu, S; Hubbard, Simon; Hsu, Jack; Iyer, Vivek; Jones, Philip; Katayama, Toshiaki; Kinsella, R; Kong, Lei; Lawson, Daniel; Liang, Yong; Lopez-Bigas, Nuria; Luo, J; Lush, Michael; Mason, Jeremy; Moreews, Francois; Ndegwa, Nelson; Oakley, Darren; Perez-Llamas, Christian; Primig, Michael; Rivkin, Elena; Rosanoff, S; Shepherd, Rebecca; Simon, Reinhard; Skarnes, B; Smedley, Damian; Sperling, Linda; Spooner, William; Stevenson, Peter; Stone, Kevin; Teague, J; Wang, Jun; Wang, Jianxin; Whitty, Brett; Wong, D T; Wong-Erasmus, Marie; Yao, L; Youens-Clark, Ken; Yung, Christina; Zhang, Junjun; Kasprzyk, Arek
2011-01-01
BioMart Central Portal is a first of its kind, community-driven effort to provide unified access to dozens of biological databases spanning genomics, proteomics, model organisms, cancer data, ontology information and more. Anybody can contribute an independently maintained resource to the Central Portal, allowing it to be exposed to and shared with the research community, and linking it with the other resources in the portal. Users can take advantage of the common interface to quickly utilize different sources without learning a new system for each. The system also simplifies cross-database searches that might otherwise require several complicated steps. Several integrated tools streamline common tasks, such as converting between ID formats and retrieving sequences. The combination of a wide variety of databases, an easy-to-use interface, robust programmatic access and the array of tools make Central Portal a one-stop shop for biological data querying. Here, we describe the structure of Central Portal and show example queries to demonstrate its capabilities.
P2P proteomics -- data sharing for enhanced protein identification
2012-01-01
Background: In order to tackle the important and challenging problem in proteomics of identifying known and new protein sequences using high-throughput methods, we propose a data-sharing platform that uses fully distributed P2P technologies to share specifications of peer-interaction protocols and service components. By using such a platform, information to be searched is no longer centralised in a few repositories but is gathered from experiments in peer proteomics laboratories, which can subsequently be searched by fellow researchers. Methods: The system distributively runs a data-sharing protocol, specified in the Lightweight Communication Calculus underlying the system, through which researchers interact via message passing. For this, researchers interact with the system through particular components that link to database querying systems based on BLAST and/or OMSSA and to GUI-based visualisation environments. We have tested the proposed platform with data drawn from preexisting MS/MS data reservoirs from the 2006 ABRF (Association of Biomolecular Resource Facilities) test sample, which was extensively tested during the ABRF Proteomics Standards Research Group 2006 worldwide survey. In particular, we have taken the data available from a subset of proteomics laboratories of Spain's National Institute for Proteomics, ProteoRed, a network for the coordination, integration and development of the Spanish proteomics facilities. Results and Discussion: We performed queries against nine databases including seven ProteoRed proteomics laboratories, the NCBI Swiss-Prot database and the local database of the CSIC/UAB Proteomics Laboratory. A detailed analysis of the results indicated the presence of a protein that was supported by other NCBI matches and highly scored matches in several proteomics labs. The analysis clearly indicated that the protein was a relatively highly concentrated contaminant that could be present in the ABRF sample.
This fact is evident from the information that could be derived from the proposed P2P proteomics system; however, it is not straightforward to arrive at the same conclusion by conventional means, as it is difficult to discard organic contamination of samples. The actual presence of this contaminant was only stated after the ABRF study of all the identifications reported by the laboratories. PMID:22293032
McQuilton, Peter; Gonzalez-Beltran, Alejandra; Rocca-Serra, Philippe; Thurston, Milo; Lister, Allyson; Maguire, Eamonn; Sansone, Susanna-Assunta
2016-01-01
BioSharing (http://www.biosharing.org) is a manually curated, searchable portal of three linked registries. These resources cover standards (terminologies, formats and models, and reporting guidelines), databases, and data policies in the life sciences, broadly encompassing the biological, environmental and biomedical sciences. Launched in 2011 and built by the same core team as the successful MIBBI portal, BioSharing harnesses community curation to collate and cross-reference resources across the life sciences from around the world. BioSharing makes these resources findable and accessible (the core of the FAIR principle). Every record is designed to be interlinked, providing a detailed description not only of the resource itself, but also of its relations with other life science infrastructures. Serving a variety of stakeholders, BioSharing cultivates a growing community, to which it offers diverse benefits. It is a resource for funding bodies and journal publishers to navigate the metadata landscape of the biological sciences; an educational resource for librarians and information advisors; a publicising platform for standard and database developers/curators; and a research tool for bench and computer scientists to plan their work. BioSharing is working with an increasing number of journals and other registries, for example linking standards and databases to training material and tools. Driven by an international Advisory Board, the BioSharing user base has grown by over 40% (by unique IP address) in the last year, thanks to successful engagement with researchers, publishers, librarians, developers and other stakeholders via several routes, including a joint RDA/Force11 working group and a collaboration with the International Society for Biocuration. In this article, we describe BioSharing, with a particular focus on community-led curation. Database URL: https://www.biosharing.org. © The Author(s) 2016. Published by Oxford University Press.
SSBD: a database of quantitative data of spatiotemporal dynamics of biological phenomena
Tohsato, Yukako; Ho, Kenneth H. L.; Kyoda, Koji; Onami, Shuichi
2016-01-01
Motivation: Rapid advances in live-cell imaging analysis and mathematical modeling have produced a large amount of quantitative data on spatiotemporal dynamics of biological objects ranging from molecules to organisms. There is now a crucial need to bring these large amounts of quantitative biological dynamics data together centrally in a coherent and systematic manner. This will facilitate the reuse of this data for further analysis. Results: We have developed the Systems Science of Biological Dynamics database (SSBD) to store and share quantitative biological dynamics data. SSBD currently provides 311 sets of quantitative data for single molecules, nuclei and whole organisms in a wide variety of model organisms from Escherichia coli to Mus musculus. The data are provided in Biological Dynamics Markup Language format and also through a REST API. In addition, SSBD provides 188 sets of time-lapse microscopy images from which the quantitative data were obtained and software tools for data visualization and analysis. Availability and Implementation: SSBD is accessible at http://ssbd.qbic.riken.jp. Contact: sonami@riken.jp PMID:27412095
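The REST access described above can be sketched as a small client. Everything here (the endpoint path, parameter names, and JSON shape) is a hypothetical illustration, not the documented SSBD API:

```python
import json
from urllib.parse import urlencode

# Hypothetical endpoint and parameter names, for illustration only;
# consult the SSBD site (http://ssbd.qbic.riken.jp) for the real API.
BASE = "http://ssbd.qbic.riken.jp/api/data"

def build_query(organism, fields=("id", "title"), limit=10):
    """Compose a query URL for quantitative dynamics records."""
    params = {"organism": organism, "fields": ",".join(fields), "limit": limit}
    return BASE + "?" + urlencode(params)

def parse_records(payload):
    """Extract (id, title) pairs from a JSON response body."""
    doc = json.loads(payload)
    return [(r["id"], r["title"]) for r in doc.get("records", [])]
```

A caller would fetch `build_query("Escherichia coli")` over HTTP and pass the response body to `parse_records`.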
Pathway Tools version 13.0: integrated software for pathway/genome informatics and systems biology
Paley, Suzanne M.; Krummenacker, Markus; Latendresse, Mario; Dale, Joseph M.; Lee, Thomas J.; Kaipa, Pallavi; Gilham, Fred; Spaulding, Aaron; Popescu, Liviu; Altman, Tomer; Paulsen, Ian; Keseler, Ingrid M.; Caspi, Ron
2010-01-01
Pathway Tools is a production-quality software environment for creating a type of model-organism database called a Pathway/Genome Database (PGDB). A PGDB such as EcoCyc integrates the evolving understanding of the genes, proteins, metabolic network and regulatory network of an organism. This article provides an overview of Pathway Tools capabilities. The software performs multiple computational inferences including prediction of metabolic pathways, prediction of metabolic pathway hole fillers and prediction of operons. It enables interactive editing of PGDBs by DB curators. It supports web publishing of PGDBs, and provides a large number of query and visualization tools. The software also supports comparative analyses of PGDBs, and provides several systems biology analyses of PGDBs including reachability analysis of metabolic networks, and interactive tracing of metabolites through a metabolic network. More than 800 PGDBs have been created using Pathway Tools by scientists around the world, many of which are curated DBs for important model organisms. Those PGDBs can be exchanged using a peer-to-peer DB sharing system called the PGDB Registry. PMID:19955237
NIST Gas Hydrate Research Database and Web Dissemination Channel.
Kroenlein, K; Muzny, C D; Kazakov, A; Diky, V V; Chirico, R D; Frenkel, M; Sloan, E D
2010-01-01
To facilitate advances in application of technologies pertaining to gas hydrates, a freely available data resource containing experimentally derived information about those materials was developed. This work was performed by the Thermodynamic Research Center (TRC), paralleling a highly successful database of thermodynamic and transport properties of molecular pure compounds and their mixtures. Population of the gas-hydrates database required development of guided data capture (GDC) software designed to convert experimental data and metadata into a well organized electronic format, as well as a relational database schema to accommodate all types of numerical and metadata within the scope of the project. To guarantee utility for the broad gas hydrate research community, TRC worked closely with the Committee on Data for Science and Technology (CODATA) task group for Data on Natural Gas Hydrates, an international data sharing effort, in developing a gas hydrate markup language (GHML). The fruits of these efforts are disseminated through the NIST Standard Reference Data Program [1] as the Clathrate Hydrate Physical Property Database (SRD #156). A web-based interface for this database, as well as scientific results from the Mallik 2002 Gas Hydrate Production Research Well Program [2], is deployed at http://gashydrates.nist.gov.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williamson, J.M.
1994-12-31
WhaleNet has established a network where students, educators, and scientists can interact and share data for use in interdisciplinary curricular and student research activities in classrooms around the world by utilizing telecommunications. This program enables students to participate in marine/whale research programs in real time with WhaleNet data and supplementary curriculum materials regardless of their geographic location. Systems have been established with research organizations and whale watch companies whereby research data are posted by scientists and students participating in whale watches on the WhaleNet bulletin board and shared with participating classrooms. WhaleNet presently has contacts with classrooms across the nation, and with research groups, whale watch organizations, science museums, and universities from Alaska to North Carolina, Hawaii to Maine, and Belize to Norway. WhaleNet has plans to make existing whale and fisheries research databases available for classroom use and to have research data from satellite tagging programs on various species of whales available for classroom access in real time.
High-resolution digital brain atlases: a Hubble telescope for the brain.
Jones, Edward G; Stone, James M; Karten, Harvey J
2011-05-01
We describe implementation of a method for digitizing at microscopic resolution brain tissue sections containing normal and experimental data and for making the content readily accessible online. Web-accessible brain atlases and virtual microscopes for online examination can be developed using existing computer and internet technologies. Resulting databases, made up of hierarchically organized, multiresolution images, enable rapid, seamless navigation through the vast image datasets generated by high-resolution scanning. Tools for visualization and annotation of virtual microscope slides enable remote and universal data sharing. Interactive visualization of a complete series of brain sections digitized at subneuronal levels of resolution offers fine grain and large-scale localization and quantification of many aspects of neural organization and structure. The method is straightforward and replicable; it can increase accessibility and facilitate sharing of neuroanatomical data. It provides an opportunity for capturing and preserving irreplaceable, archival neurohistological collections and making them available to all scientists in perpetuity, if resources could be obtained from hitherto uninterested agencies of scientific support. © 2011 New York Academy of Sciences.
Rostami, Reza; Nahm, Meredith; Pieper, Carl F.
2011-01-01
Background: Despite a pressing and well-documented need for better sharing of information on clinical trials data quality assurance methods, many research organizations remain reluctant to publish descriptions of and results from their internal auditing and quality assessment methods. Purpose: We present findings from a review of a decade of internal data quality audits performed at the Duke Clinical Research Institute, a large academic research organization that conducts data management for a diverse array of clinical studies, both academic and industry-sponsored. In so doing, we hope to stimulate discussions that could benefit the wider clinical research enterprise by providing insight into methods of optimizing data collection and cleaning, ultimately helping patients and furthering essential research. Methods: We present our audit methodologies, including sampling methods, audit logistics, sample sizes, counting rules used for error rate calculations, and characteristics of audited trials. We also present database error rates as computed according to two analytical methods, which we address in detail, and discuss the advantages and drawbacks of two auditing methods used during this ten-year period. Results: Our review of the DCRI audit program indicates that higher data quality may be achieved from a series of small audits throughout the trial rather than through a single large database audit at database lock. We found that error rates trended upward from year to year in the period characterized by traditional audits performed at database lock (1997–2000), but consistently trended downward after periodic statistical process control type audits were instituted (2001–2006). These increases in data quality were also associated with cost savings in auditing, estimated at 1000 hours per year, or the efforts of one-half of a full-time equivalent (FTE).
Limitations: Our findings are drawn from retrospective analyses and are not the result of controlled experiments, and may therefore be subject to unanticipated confounding. In addition, the scope and type of audits we examine here are specific to our institution, and our results may not be broadly generalizable. Conclusions: Use of statistical process control methodologies may afford advantages over more traditional auditing methods, and further research will be necessary to confirm the reliability and usability of such techniques. We believe that open and candid discussion of data quality assurance issues among academic and clinical research organizations will ultimately benefit the entire research community in the coming era of increased data sharing and re-use. PMID:19342467
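The two quantities at the heart of such audits, a counting-rule error rate and statistical process control limits for it, can be sketched as follows. This is a generic textbook p-chart, not the DCRI's actual counting rules:

```python
import math

def error_rate(n_errors, n_fields):
    """Error rate under a simple counting rule: errors per field inspected."""
    return n_errors / n_fields

def p_chart_limits(p_bar, sample_size):
    """3-sigma control limits for a proportion (p) chart; an audit whose
    error rate falls outside these limits signals a process shift."""
    sigma = math.sqrt(p_bar * (1 - p_bar) / sample_size)
    return max(0.0, p_bar - 3 * sigma), min(1.0, p_bar + 3 * sigma)
```

Periodic small audits compare each sample's `error_rate` against limits computed from the running average, rather than waiting for one large audit at database lock.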
Making open data work for plant scientists.
Leonelli, Sabina; Smirnoff, Nicholas; Moore, Jonathan; Cook, Charis; Bastow, Ruth
2013-11-01
Despite the clear demand for open data sharing, its implementation within plant science is still limited. This is, at least in part, because open data-sharing raises several unanswered questions and challenges to current research practices. In this commentary, some of the challenges encountered by plant researchers at the bench when generating, interpreting, and attempting to disseminate their data have been highlighted. The difficulties involved in sharing sequencing, transcriptomics, proteomics, and metabolomics data are reviewed. The benefits and drawbacks of three data-sharing venues currently available to plant scientists are identified and assessed: (i) journal publication; (ii) university repositories; and (iii) community and project-specific databases. It is concluded that community and project-specific databases are the most useful to researchers interested in effective data sharing, since these databases are explicitly created to meet the researchers' needs, support extensive curation, and embody a heightened awareness of what it takes to make data reuseable by others. Such bottom-up and community-driven approaches need to be valued by the research community, supported by publishers, and provided with long-term sustainable support by funding bodies and government. At the same time, these databases need to be linked to generic databases where possible, in order to be discoverable to the majority of researchers and thus promote effective and efficient data sharing. As we look forward to a future that embraces open access to data and publications, it is essential that data policies, data curation, data integration, data infrastructure, and data funding are linked together so as to foster data access and research productivity.
Collaborative Data Publication Utilizing the Open Data Repository's (ODR) Data Publisher
NASA Technical Reports Server (NTRS)
Stone, N.; Lafuente, B.; Bristow, T.; Keller, R. M.; Downs, R. T.; Blake, D.; Fonda, M.; Dateo, C.; Pires, A.
2017-01-01
Introduction: For small communities in diverse fields such as astrobiology, publishing and sharing data can be a difficult challenge. While large, homogenous fields often have repositories and existing data standards, small groups of independent researchers have few options for publishing standards and data that can be utilized within their community. In conjunction with teams at NASA Ames and the University of Arizona, the Open Data Repository's (ODR) Data Publisher has been conducting ongoing pilots to assess the needs of diverse research groups and to develop software to allow them to publish and share their data collaboratively. Objectives: The ODR's Data Publisher aims to provide an easy-to-use and implement software tool that will allow researchers to create and publish database templates and related data. The end product will facilitate both human-readable interfaces (web-based with embedded images, files, and charts) and machine-readable interfaces utilizing semantic standards. Characteristics: The Data Publisher software runs on the standard LAMP (Linux, Apache, MySQL, PHP) stack to provide the widest server base available. The software is based on Symfony (www.symfony.com) which provides a robust framework for creating extensible, object-oriented software in PHP. The software interface consists of a template designer where individual or master database templates can be created. A master database template can be shared by many researchers to provide a common metadata standard that will set a compatibility standard for all derivative databases. Individual researchers can then extend their instance of the template with custom fields, file storage, or visualizations that may be unique to their studies. This allows groups to create compatible databases for data discovery and sharing purposes while still providing the flexibility needed to meet the needs of scientists in rapidly evolving areas of research. 
Research: As part of this effort, a number of ongoing pilot and test projects are currently in progress. The Astrobiology Habitable Environments Database Working Group is developing a shared database standard using the ODR's Data Publisher and has a number of example databases where astrobiology data are shared. Soon these databases will be integrated via the template-based standard. Work with this group helps determine what data researchers in these diverse fields need to share and archive. Additionally, this pilot helps determine what standards are viable for sharing these types of data, from internally developed standards to existing open standards such as the Dublin Core (http://dublincore.org) and Darwin Core (http://rs.tdwg.org) metadata standards. Further studies are ongoing with the University of Arizona Department of Geosciences, where a number of mineralogy databases are being constructed within the ODR Data Publisher system. Conclusions: Through the ongoing pilots and discussions with individual researchers and small research teams, a definition of the tools desired by these groups is coming into focus. As the software development moves forward, the goal is to meet the publication and collaboration needs of these scientists in an unobtrusive and functional way.
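The master/derived template relationship described above can be sketched as a merge in which shared fields form the compatibility standard and custom fields may only extend it. The field names here are hypothetical:

```python
def extend_template(master, custom):
    """Derive a researcher's template from a shared master template.
    Custom fields may add to, but not shadow, the master standard."""
    clash = set(master) & set(custom)
    if clash:
        raise ValueError(f"custom fields shadow master fields: {sorted(clash)}")
    merged = dict(master)
    merged.update(custom)
    return merged

# Hypothetical example: a shared astrobiology standard plus one lab's fields.
master = {"sample_id": "string", "collection_date": "date"}
derived = extend_template(master, {"raman_shift": "float"})
```

Because every derived template contains the master fields unchanged, databases built from different derivatives remain queryable through the common standard.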
NASA Astrophysics Data System (ADS)
Liu, G.; Wu, C.; Li, X.; Song, P.
2013-12-01
The 3D urban geological information system has been a major part of the national urban geological survey project of China Geological Survey in recent years. Large amounts of multi-source and multi-subject data are to be stored in the urban geological databases. There are various models and vocabularies drafted and applied by industrial companies in urban geological data. Issues such as duplicate and ambiguous definitions of terms and different coding structures increase the difficulty of information sharing and data integration. To solve this problem, we proposed a national standard-driven information classification and coding method to effectively store and integrate urban geological data, and we applied data dictionary technology to achieve structured and standard data storage. The overall purpose of this work is to set up a common data platform to provide information sharing services. Research progress is as follows: (1) A unified classification and coding method for multi-source data based on national standards. Underlying national standards include GB 9649-88 for geology and GB/T 13923-2006 for geography. Current industrial models are compared with national standards to build a mapping table. The attributes of various urban geological data entity models are reduced to several categories according to their application phases and domains. Then a logical data model is set up as a standard format to design data file structures for a relational database. (2) A multi-level data dictionary for data standardization constraints. Three levels of data dictionary are designed: the model data dictionary is used to manage system database files and enhance maintenance of the whole database system; the attribute dictionary organizes fields used in database tables; the term and code dictionary is applied to provide a standard for the urban information system by adopting appropriate classification and coding methods; a comprehensive data dictionary manages system operation and security.
(3) An extension to the system's data management functions based on the data dictionary. The data item input constraint function makes use of the standard term and code dictionary to produce standardized input. The attribute dictionary organizes all the fields of an urban geological information database to ensure the consistency of term use for fields. The model dictionary is used to generate a database operation interface automatically, with standard semantic content supplied via the term and code dictionary. The above method and technology have been applied to the construction of the Fuzhou Urban Geological Information System, South-East China, with satisfactory results.
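The term-and-code dictionary constraint in point (3) can be sketched as a lookup that rejects any code not defined by the standard. The field and codes below are hypothetical, not drawn from GB 9649-88:

```python
# Hypothetical term-and-code dictionary entries, for illustration only.
TERM_CODES = {
    "lithology": {"01": "clay", "02": "silt", "03": "sand"},
}

def validate_field(field, code):
    """Constrain data entry to dictionary-defined codes, returning
    the standard term for a valid code."""
    codes = TERM_CODES.get(field)
    if codes is None:
        raise KeyError(f"field {field!r} not in attribute dictionary")
    if code not in codes:
        raise ValueError(f"code {code!r} not defined for field {field!r}")
    return codes[code]
```

Routing all input through such a check is what keeps term use consistent across database tables.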
Manheim, F.T.; Buchholtz ten Brink, Marilyn R.; Mecray, E.L.
1998-01-01
A comprehensive database of sediment chemistry and environmental parameters has been compiled for Boston Harbor and Massachusetts Bay. This work illustrates methodologies for rescuing and validating sediment data from heterogeneous historical sources. It greatly expands spatial and temporal data coverage of estuarine and coastal sediments. The database contains about 3500 samples containing inorganic chemical, organic, texture and other environmental data dating from 1955 to 1994. Cooperation with local and federal agencies as well as universities was essential in locating and screening documents for the database. More than 80% of references utilized came from sources with limited distribution (gray literature). Task sharing was facilitated by a comprehensive and clearly defined data dictionary for sediments. It also served as a data entry template and flat file format for data processing and as a basis for interpretation and graphical illustration. Standard QA/QC protocols are usually inapplicable to historical sediment data. In this work outliers and data quality problems were identified by batch screening techniques that also provide visualizations of data relationships and geochemical affinities. No data were excluded, but qualifying comments warn users of problem data. For Boston Harbor, the proportion of irreparable or seriously questioned data was remarkably small (<5%), although concentration values for metals and organic contaminants spanned 3 orders of magnitude for many elements or compounds. Data from the historical database provide alternatives to dated cores for measuring changes in surficial sediment contamination level with time. The data indicate that spatial inhomogeneity in harbor environments can be large with respect to sediment-hosted contaminants. 
Boston Inner Harbor surficial sediments showed decreases in concentrations of Cu, Hg, and Zn of 40 to 60% over a 17-year period.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ball, G.; Kuznetsov, V.; Evans, D.
We present the Data Aggregation System (DAS), a system for information retrieval and aggregation from heterogeneous sources of relational and non-relational data for the Compact Muon Solenoid experiment at the CERN Large Hadron Collider. The experiment currently has a number of organically developed data sources, including front-ends to a number of different relational databases and non-database data services which do not share common data structures or APIs (Application Programming Interfaces), and cannot at this stage be readily converged. DAS provides a single interface for querying all these services, a caching layer to speed up access to expensive underlying calls, and the ability to merge records from different data services pertaining to a single primary key.
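The single-interface-plus-cache design can be sketched as below. This is an illustrative reduction, not the actual DAS implementation:

```python
class Aggregator:
    """Query several data services through one interface, cache
    expensive calls, and merge records sharing a primary key."""

    def __init__(self, services):
        self.services = services   # name -> callable(key) -> dict of fields
        self.cache = {}            # (service name, key) -> cached record

    def fetch(self, name, key):
        tag = (name, key)
        if tag not in self.cache:  # caching layer: call each service at most once
            self.cache[tag] = self.services[name](key)
        return self.cache[tag]

    def merge(self, key):
        """Combine per-service records for one primary key."""
        record = {"key": key}
        for name in self.services:
            record.update(self.fetch(name, key))
        return record
```

A repeated `merge` for the same key is then served entirely from the cache.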
The Effect of Share 35 on Biliary Complications: an Interrupted Time Series Analysis.
Fleming, J N; Taber, D J; Axelrod, D; Chavin, K D
2018-05-16
The purpose of the Share 35 allocation policy was to improve liver transplant waitlist mortality, targeting high-MELD waitlisted patients. However, policy changes may also have unintended consequences that must be balanced with the primary desired outcome. We performed an interrupted time series analysis assessing the impact of Share 35 on biliary complications in a select national liver transplant population using the Vizient CDB/RM™ database. Liver transplants that occurred between October 2012 and September 2015 were included. There was a significant change in the incidence rate of biliary complications between the Pre-Share 35 (n=3,018) and Post-Share 35 (n=9,984) cohorts over time (p=0.023, r2=0.44). As a control, a subanalysis was performed throughout the same time period in Region 9 transplant centers, where a broad sharing agreement had previously been implemented. In the subanalysis, there was no change in the incidence rate of biliary complications between the two time periods. Length of stay and mean direct cost demonstrated a change after implementation of Share 35, although they did not reach statistical significance. While the target of improved waitlist mortality is of utmost importance for the equitable allocation of organs, unintended consequences of policy changes should be studied for a full assessment of a policy's impact. This article is protected by copyright. All rights reserved.
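An interrupted time series of this kind is commonly analysed with segmented regression: level and slope terms that switch on at the intervention. A generic sketch using least squares (not the authors' exact model):

```python
import numpy as np

def its_design(n_pre, n_post):
    """Design matrix: intercept, time, post-intervention level change,
    and post-intervention slope change."""
    t = np.arange(n_pre + n_post)
    level = (t >= n_pre).astype(float)
    slope = np.where(t >= n_pre, t - n_pre + 1, 0.0)
    return np.column_stack([np.ones_like(level), t, level, slope])

def fit_its(y, n_pre):
    """Estimate [intercept, pre-slope, level change, slope change]."""
    X = its_design(n_pre, len(y) - n_pre)
    beta, *_ = np.linalg.lstsq(X, np.asarray(y, dtype=float), rcond=None)
    return beta
```

A significant level-change or slope-change coefficient is what licenses a claim like the change in biliary complication rates reported above.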
Grouping and binding in visual short-term memory.
Quinlan, Philip T; Cohen, Dale J
2012-09-01
Findings of 2 experiments are reported that challenge the current understanding of visual short-term memory (VSTM). In both experiments, a single study display, containing 6 colored shapes, was presented briefly and then probed with a single colored shape. At stake is how VSTM retains a record of different objects that share common features. In the 1st experiment, 2 study items sometimes shared a common feature (either a shape or a color). The data revealed a color sharing effect, in which memory was much better for items that shared a common color than for items that did not. The 2nd experiment showed that the size of the color sharing effect depended on whether a single pair of items shared a common color or whether 2 pairs of items were so defined: memory for all items improved when 2 color groups were presented. In explaining performance, an account is advanced in which items compete for a fixed number of slots, but then memory recall for any given stored item is prone to error. A critical assumption is that items that share a common color are stored together in a slot as a chunk. The evidence provides further support for the idea that principles of perceptual organization may determine the manner in which items are stored in VSTM. PsycINFO Database Record (c) 2012 APA, all rights reserved.
Web-based Electronic Sharing and RE-allocation of Assets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leverett, Dave; Miller, Robert A.; Berlin, Gary J.
2002-09-09
The Electronic Asset Sharing Program is a web-based application that provides the capability for complex-wide sharing and reallocation of assets that are excess, underutilized, or unutilized. Through a web-based front end and a supporting hash database with a search engine, users can search for assets that they need, search for assets needed by others, enter assets they need, and enter assets they have available for reallocation. In addition, entire listings of available assets and needed assets can be viewed. The application is written in Java; the hash database and search engine are in Object-oriented Java Database Management (OJDBM). The application will be hosted on an SRS-managed server outside the firewall, and access will be controlled via a protected realm. An example of the application can be viewed at the following (temporary) URL: http://idgdev.srs.gov/servlet/srs.weshare.WeShare
Kuchinke, Wolfgang; Krauth, Christian; Bergmann, René; Karakoyun, Töresin; Woollard, Astrid; Schluender, Irene; Braasch, Benjamin; Eckert, Martin; Ohmann, Christian
2016-07-07
Data in the life sciences are being generated and stored in many different databases at an unprecedented rate. An ever increasing part of these data is human health data and therefore falls under data protected by legal regulations. As part of the BioMedBridges project, which created infrastructures that connect more than 10 ESFRI research infrastructures (RIs), the legal and ethical prerequisites of data sharing were examined employing a novel and pragmatic approach. We employed concepts from computer science to create legal requirement clusters that enable legal interoperability between databases for the areas of data protection, data security, Intellectual Property (IP), and security of biosample data. We analysed and extracted access rules and constraints from all data providers (databases) involved in the building of data bridges, covering many of Europe's most important databases. These requirement clusters were applied to five usage scenarios representing the data flow in different data bridges: Image bridge, Phenotype data bridge, Personalised medicine data bridge, Structural data bridge, and Biosample data bridge. A matrix was built to relate the important concepts from data protection regulations (e.g. pseudonymisation, identifiability, access control, consent management) to the results of the requirement clusters. An interactive user interface for querying the matrix for requirements necessary for compliant data sharing was created. To guide researchers through legal requirements without the need for legal expert knowledge, an interactive tool, the Legal Assessment Tool (LAT), was developed. LAT interactively provides researchers with a selection process to characterise the involved types of data and databases and provides suitable requirements and recommendations for concrete data access and sharing situations. The results provided by LAT are based on an analysis of the data access and sharing conditions for different kinds of data of major databases in Europe.
Human health data must be opened up for sharing for research purposes, and LAT is one of the means to achieve this aim. In summary, LAT provides requirements in an interactive way for compliant data access and sharing, with appropriate safeguards, restrictions, and responsibilities, by introducing a culture of responsibility and data governance when dealing with human data.
A general temporal data model and the structured population event history register
Clark, Samuel J.
2010-01-01
At this time there are 37 demographic surveillance system sites active in sub-Saharan Africa, Asia and Central America, and this number is growing continuously. These sites and other longitudinal population and health research projects generate large quantities of complex temporal data in order to describe, explain and investigate the event histories of individuals and the populations they constitute. This article presents possible solutions to some of the key data management challenges associated with those data. The fundamental components of a temporal system are identified and both they and their relationships to each other are given simple, standardized definitions. Further, a metadata framework is proposed to endow this abstract generalization with specific meaning and to bind the definitions of the data to the data themselves. The result is a temporal data model that is generalized, conceptually tractable, and inherently contains a full description of the primary data it organizes. Individual databases utilizing this temporal data model can be customized to suit the needs of their operators without modifying the underlying design of the database or sacrificing the potential to transparently share compatible subsets of their data with other similar databases. A practical working relational database design based on this general temporal data model is presented and demonstrated. This work has arisen out of experience with demographic surveillance in the developing world, and although the challenges and their solutions are more general, the discussion is organized around applications in demographic surveillance. An appendix contains detailed examples and working prototype databases that implement the examples discussed in the text. PMID:20396614
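A register along the lines the article describes, where individuals and their timestamped events are linked relationally, can be sketched as a small schema. The table and column names below are illustrative assumptions, not the article's actual design.

```python
# Minimal sketch of a temporal event-history register: individuals plus
# timestamped events, queryable per individual. Schema names are illustrative.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE individual (
    id INTEGER PRIMARY KEY,
    external_id TEXT NOT NULL
);
CREATE TABLE event (
    id INTEGER PRIMARY KEY,
    individual_id INTEGER NOT NULL REFERENCES individual(id),
    event_type TEXT NOT NULL,          -- e.g. birth, in-migration, death
    event_date TEXT NOT NULL           -- ISO 8601 dates order correctly as text
);
""")
con.execute("INSERT INTO individual VALUES (1, 'HH001-P01')")
con.executemany(
    "INSERT INTO event (individual_id, event_type, event_date) VALUES (?,?,?)",
    [(1, "birth", "1990-04-12"), (1, "in-migration", "2005-07-01")],
)

# Reconstruct one individual's event history in temporal order.
history = con.execute(
    "SELECT event_type FROM event WHERE individual_id=? ORDER BY event_date",
    (1,),
).fetchall()
print(history)
```

A production register would add metadata tables binding meaning to event types, as the article proposes, but the individual/event split is the structural core.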
Chiba, Hirokazu; Nishide, Hiroyo; Uchiyama, Ikuo
2015-01-01
Recently, various types of biological data, including genomic sequences, have been rapidly accumulating. To discover biological knowledge from such growing heterogeneous data, a flexible framework for data integration is necessary. Ortholog information is a central resource for interlinking corresponding genes among different organisms, and the Semantic Web provides a key technology for the flexible integration of heterogeneous data. We have constructed an ortholog database using the Semantic Web technology, aiming at the integration of numerous genomic data and various types of biological information. To formalize the structure of the ortholog information in the Semantic Web, we have constructed the Ortholog Ontology (OrthO). While the OrthO is a compact ontology for general use, it is designed to be extended to the description of database-specific concepts. On the basis of OrthO, we described the ortholog information from our Microbial Genome Database for Comparative Analysis (MBGD) in the form of Resource Description Framework (RDF) and made it available through the SPARQL endpoint, which accepts arbitrary queries specified by users. In this framework based on the OrthO, the biological data of different organisms can be integrated using the ortholog information as a hub. Besides, the ortholog information from different data sources can be compared with each other using the OrthO as a shared ontology. Here we show some examples demonstrating that the ortholog information described in RDF can be used to link various biological data such as taxonomy information and Gene Ontology. Thus, the ortholog database using the Semantic Web technology can contribute to biological knowledge discovery through integrative data analysis.
A Web-Based Information System for Field Data Management
NASA Astrophysics Data System (ADS)
Weng, Y. H.; Sun, F. S.
2014-12-01
A web-based field data management system has been designed and developed to allow field geologists to store, organize, manage, and share field data online. System requirements were first analyzed and clearly defined regarding what data are to be stored, who the potential users are, and what system functions are needed in order to deliver the right data in the right way to the right user. A 3-tiered architecture was adopted to create this secure, scalable system, which consists of a web browser at the front end, a database at the back end, and a functional logic server in the middle. Specifically, HTML, CSS, and JavaScript were used to implement the user interface in the front-end tier, the Apache web server runs PHP scripts in the middle tier, and a MySQL server provides the back-end database. The system accepts various types of field information, including image, audio, video, numeric, and text. It allows users to select data and populate them on either Google Earth or Google Maps for the examination of spatial relations. It also makes the sharing of field data easy by converting them into XML format that is both human-readable and machine-readable, and thus ready for reuse.
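The final step described above, exporting field records as XML so they are both human- and machine-readable, can be sketched as follows. The record fields and element names are illustrative assumptions, not the system's actual schema.

```python
# Sketch: serialize field-data records to a shareable XML document.
# Field and element names here are invented for illustration.
import xml.etree.ElementTree as ET

records = [
    {"site": "FS-01", "lat": "41.02", "lon": "-81.35", "note": "shale outcrop"},
    {"site": "FS-02", "lat": "41.05", "lon": "-81.31", "note": "sandstone bed"},
]

root = ET.Element("fieldData")
for rec in records:
    obs = ET.SubElement(root, "observation", site=rec["site"])
    ET.SubElement(obs, "location", lat=rec["lat"], lon=rec["lon"])
    ET.SubElement(obs, "note").text = rec["note"]

xml_text = ET.tostring(root, encoding="unicode")
print(xml_text)
```

Because the output is plain XML, any recipient can parse it back with a standard parser, which is the point of the format for data reuse.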
Data Sharing in Astrobiology: The Astrobiology Habitable Environments Database (AHED)
NASA Technical Reports Server (NTRS)
Lafuente, B.; Bristow, T.; Stone, N.; Pires, A.; Keller, R.; Downs, Robert; Blake, D.; Fonda, M.
2017-01-01
Astrobiology is a multidisciplinary area of scientific research focused on studying the origins of life on Earth and the conditions under which life might have emerged elsewhere in the universe. NASA uses the results of Astrobiology research to help define targets for future missions that are searching for life elsewhere in the universe. The understanding of complex questions in Astrobiology requires integration and analysis of data spanning a range of disciplines including biology, chemistry, geology, astronomy and planetary science. However, the lack of a centralized repository makes it difficult for Astrobiology teams to share data and benefit from resultant synergies. Moreover, in recent years, federal agencies are requiring that results of any federally funded scientific research must be available and useful for the public and the science community. The Astrobiology Habitable Environments Database (AHED), developed with a consolidated group of astrobiologists from different active research teams at NASA Ames Research Center, is designed to help to address these issues. AHED is a central, high-quality, long-term data repository for mineralogical, textural, morphological, inorganic and organic chemical, isotopic and other information pertinent to the advancement of the field of Astrobiology.
Private and Efficient Query Processing on Outsourced Genomic Databases.
Ghasemi, Reza; Al Aziz, Md Momin; Mohammed, Noman; Dehkordi, Massoud Hadian; Jiang, Xiaoqian
2017-09-01
Applications of genomic studies are spreading rapidly in many domains of science and technology, such as healthcare, biomedical research, direct-to-consumer services, and legal and forensic applications. However, there are a number of obstacles that make it hard to access and process a big genomic database for these applications. First, sequencing a genome is a time-consuming and expensive process. Second, it requires large-scale computation and storage systems to process genomic sequences. Third, genomic databases are often owned by different organizations, and thus not available for public usage. The cloud computing paradigm can be leveraged to facilitate the creation and sharing of big genomic databases for these applications. Genomic data owners can outsource their databases to a centralized cloud server to ease access to their databases. However, data owners are reluctant to adopt this model, as it requires outsourcing the data to an untrusted cloud service provider that may cause data breaches. In this paper, we propose a privacy-preserving model for outsourcing genomic data to a cloud. The proposed model enables query processing while providing privacy protection of genomic databases. Privacy of the individuals is guaranteed by permuting and adding fake genomic records in the database. These techniques allow the cloud to evaluate count and top-k queries securely and efficiently. Experimental results demonstrate that a count and a top-k query over 40 Single Nucleotide Polymorphisms (SNPs) in a database of 20,000 records take around 100 and 150 s, respectively.
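The permute-and-pad idea stated in the abstract can be shown in a toy form: shuffle the records, mix in owner-generated fakes, let the cloud count over the mixture, and correct the answer on the owner's side. The record layout and the correction step are assumptions for illustration, not the paper's actual protocol (which also covers top-k queries and cryptographic protections).

```python
# Toy sketch: outsource a genomic table after shuffling it and mixing in
# fake records. The untrusted cloud answers a count query over the mixed
# data; the owner subtracts the fake records' contribution afterwards.
import random

real = [{"snp_rs123": g} for g in ["AA", "AG", "GG", "AG", "AA"]]
fake = [{"snp_rs123": g} for g in ["AG", "AA"]]          # owner-made decoys

outsourced = real + fake
random.shuffle(outsourced)            # permute before uploading to the cloud

def cloud_count(db, snp, genotype):
    """What the cloud computes: a plain count over the records it holds."""
    return sum(1 for rec in db if rec[snp] == genotype)

noisy = cloud_count(outsourced, "snp_rs123", "AG")       # includes decoys
true_count = noisy - cloud_count(fake, "snp_rs123", "AG")  # owner-side fix-up
print(true_count)
```

The cloud never learns which of its records are real, yet the owner recovers the exact count, which is the trade at the heart of the scheme.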
The Plant Ontology: A Tool for Plant Genomics.
Cooper, Laurel; Jaiswal, Pankaj
2016-01-01
The use of controlled, structured vocabularies (ontologies) has become a critical tool for scientists in the post-genomic era of massive datasets. Adoption and integration of common vocabularies and annotation practices enables cross-species comparative analyses and increases data sharing and reusability. The Plant Ontology (PO; http://www.plantontology.org/ ) describes plant anatomy, morphology, and the stages of plant development, and offers a database of plant genomics annotations associated with the PO terms. The scope of the PO has grown from its original design covering only rice, maize, and Arabidopsis, and now includes terms to describe all green plants from angiosperms to green algae. This chapter introduces how the PO and other related ontologies are constructed and organized, including languages and software used for ontology development, and provides an overview of the key features. Detailed instructions illustrate how to search and browse the PO database and access the associated annotation data. Users are encouraged to provide input on the ontology through the online term request form and contribute datasets for integration in the PO database.
Integrating GIS, Archeology, and the Internet.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sera White; Brenda Ringe Pace; Randy Lee
2004-08-01
At the Idaho National Engineering and Environmental Laboratory's (INEEL) Cultural Resource Management Office, a newly developed Data Management Tool (DMT) is improving management and long-term stewardship of cultural resources. The fully integrated system links an archaeological database, a historical database, and a research database to spatial data through a customized user interface using ArcIMS and Active Server Pages. Components of the new DMT are tailored specifically to the INEEL and include automated data entry forms for historic and prehistoric archaeological sites, specialized queries and reports that address both yearly and project-specific documentation requirements, and unique field recording forms. The predictive modeling component increases the DMT's value for land use planning and long-term stewardship. The DMT enhances the efficiency of archive searches, improving customer service, oversight, and management of the large INEEL cultural resource inventory. In the future, the DMT will facilitate data sharing with regulatory agencies, tribal organizations, and the general public.
ERIC Educational Resources Information Center
Ohland, Matthew W.; Long, Russell A.
2016-01-01
Sharing longitudinal student record data and merging data from different sources is critical to addressing important questions being asked of higher education. The Multiple-Institution Database for Investigating Engineering Longitudinal Development (MIDFIELD) is a multi-institution, longitudinal, student record level dataset that is used to answer…
The performance of disk arrays in shared-memory database machines
NASA Technical Reports Server (NTRS)
Katz, Randy H.; Hong, Wei
1993-01-01
In this paper, we examine how disk arrays and shared-memory multiprocessors lead to an effective method for constructing database machines for general-purpose complex query processing. We show that disk arrays can lead to cost-effective storage systems if they are configured from suitably small form-factor disk drives. We introduce the storage system metric data temperature as a way to evaluate how well a disk configuration can sustain its workload, and we show that disk arrays can sustain the same data temperature as a more expensive mirrored-disk configuration. We use the metric to evaluate the performance of disk arrays in XPRS, an operational shared-memory multiprocessor database system being developed at the University of California, Berkeley.
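The data temperature metric can be illustrated with a small sketch. The reading of the metric as sustainable I/O rate per unit of stored data follows the abstract's usage; the drive counts, capacities, and I/O rates below are invented for illustration, not measurements from the paper.

```python
# Sketch of the "data temperature" metric: the I/O rate a configuration can
# sustain, divided by the data capacity it serves. Numbers are illustrative.
def data_temperature(ios_per_second: float, capacity_gb: float) -> float:
    """Sustainable I/O accesses per second, per gigabyte of stored data."""
    return ios_per_second / capacity_gb

# Hypothetical array: 8 small 1 GB drives, each sustaining 60 I/Os per second.
array_temp = data_temperature(ios_per_second=8 * 60, capacity_gb=8 * 1)

# Hypothetical mirrored pair: two 2 GB drives holding one 2 GB copy of the
# data; reads can be served by either drive, so both contribute throughput.
mirror_temp = data_temperature(ios_per_second=2 * 60, capacity_gb=2)

print(array_temp, mirror_temp)
```

In this toy configuration the two designs sustain the same temperature, mirroring the paper's claim that an array of small drives can match a far more expensive mirrored setup.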
Xenbase: Core features, data acquisition, and data processing.
James-Zorn, Christina; Ponferrada, Virgillio G; Burns, Kevin A; Fortriede, Joshua D; Lotay, Vaneet S; Liu, Yu; Brad Karpinka, J; Karimi, Kamran; Zorn, Aaron M; Vize, Peter D
2015-08-01
Xenbase, the Xenopus model organism database (www.xenbase.org), is a cloud-based, web-accessible resource that integrates the diverse genomic and biological data from Xenopus research. Xenopus frogs are one of the major vertebrate animal models used for biomedical research, and Xenbase is the central repository for the enormous amount of data generated using this model tetrapod. The goal of Xenbase is to accelerate discovery by enabling investigators to make novel connections between molecular pathways in Xenopus and human disease. Our relational database and user-friendly interface make these data easy to query and allow investigators to quickly interrogate and link different data types in ways that would otherwise be difficult, time consuming, or impossible. Xenbase also enhances the value of these data through high-quality gene expression curation and data integration, by providing bioinformatics tools optimized for Xenopus experiments, and by linking Xenopus data to other model organisms and to human data. Xenbase draws in data via pipelines that download data, parse the content, and save them into appropriate files and database tables. Furthermore, Xenbase makes these data accessible to the broader biomedical community by continually providing annotated data updates to organizations such as NCBI, UniProtKB, and Ensembl. Here, we describe our bioinformatics, genome-browsing tools, data acquisition and sharing, our community submitted and literature curation pipelines, text-mining support, gene page features, and the curation of gene nomenclature and gene models. © 2015 Wiley Periodicals, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowers, M; Robertson, S; Moore, J
Purpose: Advancement in Radiation Oncology (RO) practice develops through evidence-based medicine and clinical trials. Knowledge usable for treatment planning, decision support, and research is contained in our clinical data, stored in an Oncospace database. This data store and the tools for populating and analyzing it are compatible with standard RO practice and are shared with collaborating institutions. The question is: what protocol for system development and data sharing within an Oncospace Consortium? We focus our example on the technology and data meaning necessary to share across the Consortium. Methods: Oncospace consists of a database schema, planning and outcome data import, and web-based analysis tools. 1) Database: The Consortium implements a federated data store; each member collects and maintains its own data within an Oncospace schema. For privacy, PHI is contained within a single table, accessible to the database owner. 2) Import: Spatial dose data from treatment plans (Pinnacle or DICOM) are imported via Oncolink. Treatment outcomes are imported from an OIS (MOSAIQ). 3) Analysis: JHU has built a number of web pages to answer analysis questions. Oncospace data can also be analyzed via MATLAB or SAS queries. These materials are available to Consortium members, who contribute enhancements and improvements. Results: 1) The Oncospace Consortium now consists of RO centers at JHU, UVA, UW, and the University of Toronto. These members have successfully installed and populated Oncospace databases with over 1000 patients collectively. 2) Members contribute code and get updates via an SVN repository. Errors are reported and tracked via Redmine. Teleconferences include strategizing design and code reviews. 3) Federated databases were successfully queried remotely to combine multiple institutions' DVH data for dose-toxicity analysis.
Conclusion: RO data sharing can be and has been effected according to the Oncospace Consortium model: http://oncospace.radonc.jhmi.edu/ . John Wong - SRA from Elekta; Todd McNutt - SRA from Elekta; Michael Bowers - funded by Elekta.
SPARQLGraph: a web-based platform for graphically querying biological Semantic Web databases.
Schweiger, Dominik; Trajanoski, Zlatko; Pabinger, Stephan
2014-08-15
Semantic Web has established itself as a framework for using and sharing data across applications and database boundaries. Here, we present a web-based platform for querying biological Semantic Web databases in a graphical way. SPARQLGraph offers an intuitive drag & drop query builder, which converts the visual graph into a query and executes it on a public endpoint. The tool integrates several publicly available Semantic Web databases, including the databases of the just recently released EBI RDF platform. Furthermore, it provides several predefined template queries for answering biological questions. Users can easily create and save new query graphs, which can also be shared with other researchers. This new graphical way of creating queries for biological Semantic Web databases considerably facilitates usability as it removes the requirement of knowing specific query languages and database structures. The system is freely available at http://sparqlgraph.i-med.ac.at.
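The core transformation such a builder performs, turning a visual graph's edges into SPARQL triple patterns, can be sketched as follows. The edge representation, variable names, and prefixes are illustrative assumptions, not SPARQLGraph's internals.

```python
# Sketch: convert a small node/edge graph into a SPARQL SELECT query,
# the essential step a visual query builder automates. Edge format and
# prefixes are invented for illustration.
def graph_to_sparql(edges, select_vars):
    """edges: (subject, predicate, object) strings; variables start with '?'."""
    patterns = "\n".join(f"  {s} {p} {o} ." for s, p, o in edges)
    return f"SELECT {' '.join(select_vars)} WHERE {{\n{patterns}\n}}"

# A two-edge visual graph: a gene node labeled "TP53", linked to a group node.
edges = [
    ("?gene", "rdfs:label", '"TP53"'),
    ("?gene", "ortho:memberOf", "?group"),
]
query = graph_to_sparql(edges, ["?group"])
print(query)
```

The generated string would then be submitted to a public SPARQL endpoint, which is the execution step the abstract describes; endpoint access is omitted here.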
Taking control of your digital library: how modern citation managers do more than just referencing.
Mahajan, Amit K; Hogarth, D Kyle
2013-12-01
Physicians are constantly navigating the overwhelming body of medical literature available on the Internet. Although early citation managers were capable of limited searching of index databases and tedious bibliography production, modern versions of citation managers such as EndNote, Zotero, and Mendeley are powerful web-based tools for searching, organizing, and sharing medical literature. Effortless point-and-click functions provide physicians with the ability to develop robust digital libraries filled with literature relevant to their fields of interest. In addition to easily creating manuscript bibliographies, various citation managers allow physicians to readily access medical literature, share references for teaching purposes, collaborate with colleagues, and even participate in social networking. If physicians are willing to invest the time to familiarize themselves with modern citation managers, they will reap great benefits in the future.
NABIC: A New Access Portal to Search, Visualize, and Share Agricultural Genomics Data.
Seol, Young-Joo; Lee, Tae-Ho; Park, Dong-Suk; Kim, Chang-Kug
2016-01-01
The National Agricultural Biotechnology Information Center developed an access portal to search, visualize, and share agricultural genomics data with a focus on South Korean information and resources. The portal features an agricultural biotechnology database containing a wide range of omics data from public and proprietary sources. We collected 28.4 TB of data from 162 agricultural organisms, with 10 types of omics data comprising next-generation sequencing sequence read archive, genome, gene, nucleotide, DNA chip, expressed sequence tag, interactome, protein structure, molecular marker, and single-nucleotide polymorphism datasets. Our genomic resources contain information on five animals, seven plants, and one fungus, which is accessed through a genome browser. We also developed a data submission and analysis system as a web service, with easy-to-use functions and cutting-edge algorithms, including those for handling next-generation sequencing data.
NASA Astrophysics Data System (ADS)
Thessen, Anne E.; McGinnis, Sean; North, Elizabeth W.
2016-02-01
Process studies and coupled-model validation efforts in geosciences often require integration of multiple data types across time and space. For example, improved prediction of hydrocarbon fate and transport is an important societal need which fundamentally relies upon synthesis of oceanography and hydrocarbon chemistry. Yet, there are no publicly accessible databases which integrate these diverse data types in a georeferenced format, nor are there guidelines for developing such a database. The objective of this research was to analyze the process of building one such database to provide baseline information on data sources and data sharing and to document the challenges and solutions that arose during this major undertaking. The resulting Deepwater Horizon Database was approximately 2.4 GB in size and contained over 8 million georeferenced data points collected from industry, government databases, volunteer networks, and individual researchers. The major technical challenges that were overcome were reconciliation of terms, units, and quality flags, which was necessary to effectively integrate the disparate data sets. Assembling this database required the development of relationships with individual researchers and data managers, which often involved extensive e-mail contacts. The average number of emails exchanged per data set was 7.8. Of the 95 relevant data sets that were discovered, 38 (40%) were obtained, either in whole or in part. Over one third (36%) of the requests for data went unanswered. The majority of responses were received after the first request (64%) and within the first week of the first request (67%). Although fewer than half of the potentially relevant datasets were incorporated into the database, the level of sharing (40%) was high compared to some other disciplines where sharing can be as low as 10%.
Our suggestions for building integrated databases include budgeting significant time for e-mail exchanges, being cognizant of the cost versus benefits of pursuing reticent data providers, and building trust through clear, respectful communication and with flexible and appropriate attributions.
Krueger, Charles C.; Holbrook, Christopher; Binder, Thomas R.; Vandergoot, Christopher; Hayden, Todd A.; Hondorp, Darryl W.; Nate, Nancy; Paige, Kelli; Riley, Stephen; Fisk, Aaron T.; Cooke, Steven J.
2017-01-01
The Great Lakes Acoustic Telemetry Observation System (GLATOS), organized in 2012, aims to advance and improve conservation and management of Great Lakes fishes by providing information on behavior, habitat use, and population dynamics. GLATOS faced challenges during establishment, including a funding agency-imposed urgency to initiate projects, a lack of telemetry expertise, and managing a flood of data. GLATOS now connects 190+ investigators, provides project consultation, maintains a web-based data portal, contributes data to Ocean Tracking Network’s global database, loans equipment, and promotes science transfer to managers. The GLATOS database currently has 50+ projects, 39 species tagged, 8000+ fish released, and 150+ million tag detections. Lessons learned include (1) seek advice from others experienced in telemetry; (2) organize networks prior to when shared data is urgently needed; (3) establish a data management system so that all receivers can contribute to every project; (4) hold annual meetings to foster relationships; (5) involve fish managers to ensure relevancy; and (6) staff require full-time commitment to lead and coordinate projects and to analyze data and publish results.
Li, Li; Brunk, Brian P.; Kissinger, Jessica C.; Pape, Deana; Tang, Keliang; Cole, Robert H.; Martin, John; Wylie, Todd; Dante, Mike; Fogarty, Steven J.; Howe, Daniel K.; Liberator, Paul; Diaz, Carmen; Anderson, Jennifer; White, Michael; Jerome, Maria E.; Johnson, Emily A.; Radke, Jay A.; Stoeckert, Christian J.; Waterston, Robert H.; Clifton, Sandra W.; Roos, David S.; Sibley, L. David
2003-01-01
Large-scale EST sequencing projects for several important parasites within the phylum Apicomplexa were undertaken for the purpose of gene discovery. Included were several parasites of medical importance (Plasmodium falciparum, Toxoplasma gondii) and others of veterinary importance (Eimeria tenella, Sarcocystis neurona, and Neospora caninum). A total of 55,192 ESTs, deposited into dbEST/GenBank, were included in the analyses. The resulting sequences have been clustered into nonredundant gene assemblies and deposited into a relational database that supports a variety of sequence and text searches. This database has been used to compare the gene assemblies using BLAST similarity comparisons to the public protein databases to identify putative genes. Of these new entries, ∼15%–20% represent putative homologs with a conservative cutoff of p < 10^-9, thus identifying many conserved genes that are likely to share common functions with other well-studied organisms. Gene assemblies were also used to identify strain polymorphisms, examine stage-specific expression, and identify gene families. An interesting class of genes that are confined to members of this phylum and not shared by plants, animals, or fungi was identified. These genes likely mediate the novel biological features of members of the Apicomplexa and hence offer great potential for biological investigation and as possible therapeutic targets. [The sequence data from this study have been submitted to the dbEST division of GenBank for Toxoplasma gondii, Plasmodium falciparum, Sarcocystis neurona, Eimeria tenella, and Neospora caninum.] PMID:12618375
Burgarella, Sarah; Cattaneo, Dario; Masseroli, Marco
2006-01-01
We developed MicroGen, a multi-database, Web-based system for managing all the information characterizing spotted microarray experiments. It supports information gathering and storage according to the Minimum Information About Microarray Experiments (MIAME) standard. It also allows easy sharing of information and data among all the multidisciplinary actors involved in spotted microarray experiments. PMID:17238488
Increasing organ donation after cardiac death in trauma patients.
Joseph, Bellal; Khalil, Mazhar; Pandit, Viraj; Orouji Jokar, Tahereh; Cheaito, Ali; Kulvatunyou, Narong; Tang, Andrew; O'Keeffe, Terence; Vercruysse, Gary; Green, Donald J; Friese, Randall S; Rhee, Peter
2015-09-01
Donation after cardiac death (DCD) is not optimal but remains a valuable source of organ donation from trauma donors. The aim of this study was to assess national trends in DCD from trauma patients. A 12-year (2002 to 2013) retrospective analysis of the United Network for Organ Sharing database was performed. Outcome measures were the proportion of DCD donors over the years and the number and type of solid organs donated. DCD resulted in procurement of 16,248 solid organs from 8,724 donors. The number of organs donated per donor remained unchanged over the study period (P = .1). DCD increased significantly from 3.1% in 2002 to 14.6% in 2013 (P = .001). There was a significant increase in the proportion of kidney (2002: 3.4% vs 2013: 16.3%, P = .001) and liver (2002: 1.6% vs 2013: 5%, P = .041) donation among DCD donors over the study period. DCD from trauma donors provides a significant source of solid organs, and the proportion of DCD donors increased significantly over the last 12 years. Copyright © 2015 Elsevier Inc. All rights reserved.
Uhlirova, Hana; Tian, Peifang; Kılıç, Kıvılcım; Thunemann, Martin; Sridhar, Vishnu B; Chmelik, Radim; Bartsch, Hauke; Dale, Anders M; Devor, Anna; Saisan, Payam A
2018-05-04
The importance of sharing experimental data in neuroscience grows with the amount and complexity of data acquired and the variety of techniques used to obtain and process these data. However, the majority of experimental data, especially from individual studies in regular-sized laboratories, never reaches the wider research community. A graphical user interface (GUI) engine called Neurovascular Network Explorer 2.0 (NNE 2.0) has been created as a tool for simple and low-cost sharing and exploration of vascular imaging data. NNE 2.0 interacts with a database containing optogenetically-evoked dilation/constriction time-courses of individual vessels measured in mouse somatosensory cortex in vivo by 2-photon microscopy. NNE 2.0 enables selection and display of the time-courses based on different criteria (subject, branching order, cortical depth, vessel diameter, arteriolar tree) as well as simple mathematical manipulation (e.g. averaging, peak-normalization) and data export. It supports visualization of the vascular network in 3D and enables localization of the individual functional vessel diameter measurements within vascular trees. NNE 2.0, its source code, and the corresponding database are freely downloadable from the UCSD Neurovascular Imaging Laboratory website. The source code can be used to explore the associated database or as a template for databasing and sharing one's own experimental results, provided they follow the appropriate format.
Inborn errors of metabolism and the human interactome: a systems medicine approach.
Woidy, Mathias; Muntau, Ania C; Gersting, Søren W
2018-02-05
The group of inborn errors of metabolism (IEM) displays marked heterogeneity, and IEM can affect virtually all functions and organs of the human organism; what IEM share, however, is that their associated proteins function in metabolism. Most proteins carry out cellular functions by interacting with other proteins and are thus organized in biological networks. Therefore, diseases are rarely the consequence of single gene mutations alone but of the perturbations they cause in the related cellular network. Systematic approaches that integrate multi-omics and database information into biological networks have successfully expanded our knowledge of complex disorders, but network-based strategies have rarely been applied to study IEM. We analyzed IEM on a proteome scale and found that IEM-associated proteins are organized as a network of linked modules within the human interactome of protein interactions, the IEM interactome. Certain IEM disease groups formed self-contained disease modules, which were highly interlinked. On the other hand, we observed disease modules consisting of proteins from many different disease groups in the IEM interactome. Moreover, we explored the overlap between IEM and non-IEM disease genes and applied network medicine approaches to investigate shared biological pathways, clinical signs and symptoms, and links to drug targets. The provided resources may help to elucidate the molecular mechanisms underlying new IEM, to uncover the significance of disease-associated mutations, to identify new biomarkers, and to develop novel therapeutic strategies.
Global open data management in metabolomics.
Haug, Kenneth; Salek, Reza M; Steinbeck, Christoph
2017-02-01
Chemical biology employs chemical synthesis, analytical chemistry, and other tools to study biological systems. Recent advances in molecular biology, such as next-generation sequencing (NGS), have led to unprecedented insights into the evolution of organisms' biochemical repertoires. Because of the specific data-sharing culture in genomics, genomes from all kingdoms of life are readily available for further analysis by other researchers. While the genome expresses the potential of an organism to adapt to external influences, the metabolome presents a molecular phenotype that allows us to assess, in a dynamic way, the external influences under which an organism exists and develops. Steady advances in instrumentation towards high-throughput and high-resolution methods have led to a revival of analytical chemistry methods for measuring and analyzing the metabolome of organisms. This steady growth of metabolomics as a field is leading to an accumulation of big data across laboratories worldwide, similar to that observed in the other omics areas. This calls for the development of methods and technologies for handling such large datasets, distributing them efficiently, and enabling re-analysis. Here we describe the recently emerging ecosystem of global open-access databases and the data exchange efforts between them, as well as the foundations and obstacles that enable or prevent data sharing and re-analysis. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Sansone, Susanna-Assunta; Rocca-Serra, Philippe
2012-07-12
There are thousands of biology databases with hundreds of terminologies, reporting guidelines, representation models, and exchange formats to help annotate, report, and share bioscience investigations. It is evident, however, that researchers and bioinformaticians struggle to navigate the various standards and to find the appropriate database to collect, manage, and share data. Further, policy makers, funders, and publishers lack sufficient information to formulate their guidelines. In this paper, we highlight a number of key issues that can be used to turn these challenges into new opportunities. It is time for all stakeholders to work together to reconcile cause and effect and make the data-sharing culture functional and efficient.
Shabani, Mahsa; Bezuidenhout, Louise; Borry, Pascal
2014-11-01
Introducing data sharing practices into the genomic research arena has challenged the current mechanisms established to protect the rights of individuals and triggered policy considerations. To inform such policy deliberations, soliciting public and research participants' attitudes with respect to genomic data sharing is a necessity. The main electronic databases were searched in order to retrieve empirical studies investigating the attitudes of research participants and the public towards genomic data sharing through public databases. In the 15 included studies, participants' attitudes towards genomic data sharing revealed the influence of a constellation of interrelated factors, including personal perceptions of the controllability and sensitivity of data, potential risks and benefits of data sharing at the individual and social levels, and governance-level considerations. This analysis indicates that future policy responses and recruitment practices should be attentive to a wide variety of concerns in order to promote both responsible and progressive research.
Hierarchical data security in a Query-By-Example interface for a shared database.
Taylor, Merwyn
2002-06-01
Whenever a shared database resource containing critical patient data is created, protecting the contents of the database is a high-priority goal. This goal can be achieved by developing a Query-By-Example (QBE) interface, designed to access a shared database, and embedding within the QBE a hierarchical security module that limits access to the data. The security module ensures that researchers working in one clinic do not get access to data from another clinic. The security can be based on a flexible taxonomy structure that allows ordinary users to access data from individual clinics and super users to access data from all clinics. All researchers submit queries through the same interface, and the security module processes the taxonomy and user identifiers to limit access. Using this system, two different users with different access rights can submit the same query and get different results, thus reducing the need to create different interfaces for different clinics and access rights.
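The taxonomy-driven filtering described in this abstract can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation; all names (`CLINIC_TAXONOMY`, `allowed_clinics`, `filter_rows`, the row schema) are invented for the example.

```python
# Illustrative sketch of a hierarchical security module: a taxonomy maps a
# user's access scope to the set of clinics whose rows they may see, so the
# same query yields different results for different users.

# Taxonomy: "all" (super users) covers every clinic; ordinary users map to one leaf.
CLINIC_TAXONOMY = {
    "all": {"clinic_a", "clinic_b", "clinic_c"},
    "clinic_a": {"clinic_a"},
    "clinic_b": {"clinic_b"},
    "clinic_c": {"clinic_c"},
}

def allowed_clinics(user_scope: str) -> set:
    """Expand a user's taxonomy node into the set of visible clinics."""
    return CLINIC_TAXONOMY.get(user_scope, set())

def filter_rows(rows, user_scope):
    """Apply the security module to a query result before returning it."""
    visible = allowed_clinics(user_scope)
    return [r for r in rows if r["clinic"] in visible]

rows = [
    {"patient": 1, "clinic": "clinic_a"},
    {"patient": 2, "clinic": "clinic_b"},
]
# A super user sees both rows; a clinic_a researcher sees only the first.
```

In a real deployment the filter would be injected into the generated SQL rather than applied to result rows, but the taxonomy lookup is the same.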
NASA Astrophysics Data System (ADS)
Wang, Jian
2017-01-01
In order to move beyond the traditional PE teaching mode and realize the interconnection, interworking, and sharing of PE teaching resources, a distance PE teaching platform based on a broadband network is designed and a PE teaching information resource database is set up. The database design takes Windows NT 4/2000 Server as the operating system platform and Microsoft SQL Server 7.0 as the RDBMS, and adopts NAS technology for data storage and streaming technology for video service. Analysis of the system design and implementation shows that a dynamic PE teaching information resource sharing platform based on Web Services can realize loosely coupled collaboration and dynamic, active integration, and offers good integration, openness, and encapsulation. The distance PE teaching platform based on Web Services and the design scheme of the PE teaching information resource database can effectively realize the interconnection, interworking, and sharing of PE teaching resources and adapt to the demands of increasingly informatized PE teaching.
Research on Information Sharing Mechanism of Network Organization Based on Evolutionary Game
NASA Astrophysics Data System (ADS)
Wang, Lin; Liu, Gaozhi
2018-02-01
This article first elaborates the concept and effects of network organization and analyzes the capacity for information sharing. It then introduces evolutionary game theory and, given the various limitations on information sharing within network organizations, establishes an evolutionary game model and analyzes the dynamic evolution of information sharing through reasoning and simulation. The analysis shows that the initial state, the excess profits in the two players' payoff matrix, and the costs and risks of information sharing all influence the sharing decisions of network organization nodes.
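The evolutionary dynamics this abstract describes can be illustrated with a replicator-equation sketch. The payoff matrix, step size, and convergence behavior below are chosen purely for illustration and are not taken from the article.

```python
# Toy replicator dynamics for a population of network-organization nodes
# choosing between "share information" and "withhold". x is the fraction of
# sharers; the payoff matrix is a made-up stag-hunt-like example in which
# sharing pays off only when enough others share.

def replicator_step(x, payoff, dt=0.01):
    """One Euler step of dx/dt = x(1-x)(f_share - f_withhold)."""
    f_share = x * payoff[0][0] + (1 - x) * payoff[0][1]
    f_withhold = x * payoff[1][0] + (1 - x) * payoff[1][1]
    return x + dt * x * (1 - x) * (f_share - f_withhold)

payoff = [[3, 0],   # sharer vs sharer / sharer vs withholder
          [2, 1]]   # withholder vs sharer / withholder vs withholder
x = 0.6             # initial state: 60% of nodes share
for _ in range(2000):
    x = replicator_step(x, payoff)
# With this payoff matrix, f_share - f_withhold = 2x - 1, so the population
# converges toward full sharing whenever the initial share exceeds 0.5.
```

This mirrors the abstract's conclusion that the initial state and the payoff structure jointly determine whether sharing takes hold.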
2009-03-01
37 Figure 8 New Information Sharing Model from United States Intelligence Community Information Sharing...PRIDE while the Coast Guard has MISSLE and the newly constructed WATCHKEEPER. All these databases contain intelligence on incoming vessels...decisions making. Experts rely heavily on future projections as hallmarks of skilled performance." (Endsley et al. 2006) The SA model above
NASA Astrophysics Data System (ADS)
Williams, J. W.; Ashworth, A. C.; Betancourt, J. L.; Bills, B.; Blois, J.; Booth, R.; Buckland, P.; Charles, D.; Curry, B. B.; Goring, S. J.; Davis, E.; Grimm, E. C.; Graham, R. W.; Smith, A. J.
2015-12-01
Community-supported data repositories (CSDRs) in paleoecology and paleoclimatology have a decades-long tradition and serve multiple critical scientific needs. CSDRs facilitate synthetic large-scale scientific research by providing open-access and curated data that employ community-supported metadata and data standards. CSDRs serve as a 'middle tail' or boundary organization between information scientists and the long-tail community of individual geoscientists collecting and analyzing paleoecological data. Over the past decades, a distributed network of CSDRs has emerged, each serving a particular suite of data and research communities, e.g. Neotoma Paleoecology Database, Paleobiology Database, International Tree Ring Database, NOAA NCEI for Paleoclimatology, Morphobank, iDigPaleo, and Integrated Earth Data Alliance. Recently, these groups have organized into a common Paleobiology Data Consortium dedicated to improving interoperability and sharing best practices and protocols. The Neotoma Paleoecology Database offers one example of an active and growing CSDR, designed to facilitate research into ecological and evolutionary dynamics during recent past global change. Neotoma combines a centralized database structure with distributed scientific governance via multiple virtual constituent data working groups. The Neotoma data model is flexible and can accommodate a variety of paleoecological proxies from many depositional contexts. Data input into Neotoma is done by trained Data Stewards, drawn from their communities. Neotoma data can be searched, viewed, and returned to users through multiple interfaces, including the interactive Neotoma Explorer map interface, RESTful Application Programming Interfaces (APIs), the neotoma R package, and the Tilia stratigraphic software. Neotoma is governed by geoscientists and provides community engagement through training workshops for data contributors, stewards, and users.
Neotoma is engaged in the Paleobiology Data Consortium and other efforts to improve interoperability among cyberinfrastructure in the paleogeosciences.
WholeCellSimDB: a hybrid relational/HDF database for whole-cell model predictions
Karr, Jonathan R.; Phillips, Nolan C.; Covert, Markus W.
2014-01-01
Mechanistic ‘whole-cell’ models are needed to develop a complete understanding of cell physiology. However, extracting biological insights from whole-cell models requires running and analyzing large numbers of simulations. We developed WholeCellSimDB, a database for organizing whole-cell simulations. WholeCellSimDB was designed to enable researchers to search simulation metadata to identify simulations for further analysis, and quickly slice and aggregate simulation results data. In addition, WholeCellSimDB enables users to share simulations with the broader research community. The database uses a hybrid relational/hierarchical data format architecture to efficiently store and retrieve both simulation setup metadata and results data. WholeCellSimDB provides a graphical Web-based interface to search, browse, plot and export simulations; a JavaScript Object Notation (JSON) Web service to retrieve data for Web-based visualizations; a command-line interface to deposit simulations; and a Python API to retrieve data for advanced analysis. Overall, we believe WholeCellSimDB will help researchers use whole-cell models to advance basic biological science and bioengineering. Database URL: http://www.wholecellsimdb.org Source code repository URL: http://github.com/CovertLab/WholeCellSimDB PMID:25231498
Deeply learnt hashing forests for content based image retrieval in prostate MR images
NASA Astrophysics Data System (ADS)
Shah, Amit; Conjeti, Sailesh; Navab, Nassir; Katouzian, Amin
2016-03-01
The deluge in the size and heterogeneity of medical image databases necessitates content-based retrieval systems for their efficient organization. In this paper, we propose such a system to retrieve prostate MR images which share similarities in appearance and content with a query image. We introduce deeply learnt hashing forests (DL-HF) for this image retrieval task. DL-HF effectively leverages the semantic descriptiveness of deep-learnt Convolutional Neural Networks, used in conjunction with hashing forests, which are unsupervised random forests. DL-HF hierarchically parses the deep-learnt feature space to encode subspaces with compact binary code words. We propose a similarity-preserving feature descriptor called the Parts Histogram, which is derived from DL-HF. Correlation defined on this descriptor is used as a similarity metric for retrieval from the database. Validation on a publicly available multi-center prostate MR image database established the validity of the proposed approach. The proposed method is fully automated without any user interaction and is not dependent on external image standardization such as image normalization and registration. This image retrieval method is generalizable and well-suited for retrieval in heterogeneous databases, other imaging modalities, and anatomies.
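Retrieval over compact binary code words of the kind hashing forests produce is typically a nearest-neighbor ranking in Hamming space. The sketch below is a generic, hypothetical illustration of that step only; the paper's actual descriptor (the Parts Histogram) and its correlation-based similarity metric are not reproduced here.

```python
# Minimal sketch of ranking a database of compact binary code words by
# Hamming distance to a query code. Codes are stored as Python ints.

def hamming(a: int, b: int) -> int:
    """Hamming distance between two binary code words."""
    return bin(a ^ b).count("1")

def rank(query_code, database_codes):
    """Return database indices sorted by ascending Hamming distance."""
    return sorted(range(len(database_codes)),
                  key=lambda i: hamming(query_code, database_codes[i]))

db = [0b1010, 0b1111, 0b0000]
order = rank(0b1011, db)   # nearest codes first; 0b0000 is farthest
```

Because the codes are short bit strings, this ranking is fast even for large databases, which is the point of hashing-based retrieval.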
76 FR 55000 - Notice of Agricultural Management Assistance Organic Certification Cost-Share Program
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-06
...] Notice of Agricultural Management Assistance Organic Certification Cost-Share Program AGENCY... Departments of Agriculture for the Agricultural Management Assistance Organic Certification Cost-Share Program... organic certification cost-share funds. The AMS has allocated $1.5 million for this organic certification...
78 FR 5164 - Notice of Agricultural Management Assistance Organic Certification Cost-Share Program
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-24
...] Notice of Agricultural Management Assistance Organic Certification Cost-Share Program AGENCY... Departments of Agriculture for the Agricultural Management Assistance Organic Certification Cost-Share Program... organic certification cost-share funds. The AMS has allocated $1.425 million for this organic...
Sollie, Annet; Sijmons, Rolf H; Lindhout, Dick; van der Ploeg, Ans T; Rubio Gozalbo, M Estela; Smit, G Peter A; Verheijen, Frans; Waterham, Hans R; van Weely, Sonja; Wijburg, Frits A; Wijburg, Rudolph; Visser, Gepke
2013-07-01
Data sharing is essential for a better understanding of genetic disorders. Good phenotype coding plays a key role in this process. Unfortunately, the two most widely used coding systems in medicine, ICD-10 and SNOMED-CT, lack information necessary for the detailed classification and annotation of rare and genetic disorders. This prevents the optimal registration of such patients in databases and thus data-sharing efforts. To improve care and to facilitate research for patients with metabolic disorders, we developed a new coding system for metabolic diseases with a dedicated group of clinical specialists. Next, we compared the resulting codes with those in ICD and SNOMED-CT. No matches were found in 76% of cases in ICD-10 and in 54% in SNOMED-CT. We conclude that there are sizable gaps in the SNOMED-CT and ICD coding systems for metabolic disorders. There may be similar gaps for other classes of rare and genetic disorders. We have demonstrated that expert groups can help in addressing such coding issues. Our coding system has been made available to the ICD and SNOMED-CT organizations as well as to the Orphanet and HPO organizations for further public application and updates will be published online (www.ddrmd.nl and www.cineas.org). © 2013 WILEY PERIODICALS, INC.
Sampaio, Marcelo S; Chopra, Bhavna; Tang, Amy; Sureshkumar, Kalathil K
2018-07-01
The new kidney allocation system recommends local and regional sharing of deceased donor kidneys (DDK) with 86-100% Kidney Donor Profile Index (KDPI) to minimize discard. Regional sharing can increase cold ischemia time (CIT) which may negatively impact transplant outcomes. Using a same donor mate kidney model, we aimed to define a CIT that should be targeted to optimize outcomes. Using Organ Procurement and Transplant Network/United Network for Organ Sharing database, we identified recipients of DDK from 2000 to 2013 with ≥85% KDPI. From this cohort, three groups of mate kidney recipients were identified based on CIT: group 1 (≥24 vs. ≥12 to <24 h), group 2 (≥24 vs. <12 h), and group 3 (≥12 to <24 vs. <12 h). Adjusted delayed graft function (DGF), and graft and patient survivals were compared for mate kidneys. DGF risk was significantly lower for patients with CIT <12 vs. ≥24 h in group 2 (adjusted OR: 0.25, 95% CI: 0.12-0.57, P < 0.001) while trending lower for CIT ≥12 to <24 vs. ≥24 h in group 1 (adjusted OR: 0.78, 95% CI: 0.59-1.03, P = 0.08) and CIT <12 vs. ≥12 to <24 h in group 3 (adjusted OR: 0.74, 95% CI: 0.55-1.0, P = 0.05). Adjusted graft and patient survivals were similar between mate kidneys in all groups. Minimizing CIT improves outcomes with regional sharing of marginal kidneys. © 2018 Steunstichting ESOT.
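As a worked illustration of the kind of statistic reported above (e.g. the DGF odds ratio for CIT <12 vs. ≥24 h), the sketch below computes an unadjusted odds ratio with a Woolf 95% confidence interval from a 2x2 table. The counts are made up for the example; the paper's ORs are covariate-adjusted and cannot be reproduced from the abstract.

```python
# Unadjusted odds ratio and Woolf 95% CI from a 2x2 table [[a, b], [c, d]],
# e.g. a = DGF events in the short-CIT group, b = non-events, and c, d the
# same for the long-CIT group. Counts here are hypothetical.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Return (OR, lower, upper) using the Woolf log-OR standard error."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(8, 92, 25, 75)   # OR ~0.26: lower DGF odds
```

An interval excluding 1 (as in the paper's group 2 result) indicates a statistically significant difference in DGF odds between the CIT groups.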
75 FR 54590 - Notice of 2010 National Organic Certification Cost-Share Program
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-08
...] Notice of 2010 National Organic Certification Cost-Share Program AGENCY: Agricultural Marketing Service... Certification Cost-Share Funds. The AMS has allocated $22.0 million for this organic certification cost-share... National Organic Certification Cost- Share Program is authorized under 7 U.S.C. 6523, as amended by section...
A Utility Maximizing and Privacy Preserving Approach for Protecting Kinship in Genomic Databases.
Kale, Gulce; Ayday, Erman; Tastan, Oznur
2017-09-12
Rapid and low-cost sequencing of genomes has enabled widespread use of genomic data in research studies and personalized customer applications, where genomic data is shared in public databases. Although the identities of the participants are anonymized in these databases, sensitive information about individuals can still be inferred. One such piece of information is kinship. We define two routes through which kinship privacy can leak and propose a technique to protect kinship privacy against these risks while maximizing the utility of shared data. The method involves systematic identification of minimal portions of genomic data to mask as new participants are added to the database. Choosing the proper positions to hide is cast as an optimization problem in which the number of positions to mask is minimized subject to privacy constraints that ensure the familial relationships are not revealed. We evaluate the proposed technique on real genomic data. Results indicate that concurrent sharing of data pertaining to a parent and an offspring results in high risks to kinship privacy, whereas sharing data from more distant relatives together is often safer. We also show that the arrival order of family members has a high impact on the level of privacy risk and on the utility of sharing data. Available at: https://github.com/tastanlab/Kinship-Privacy. erman@cs.bilkent.edu.tr or oznur.tastan@cs.bilkent.edu.tr. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
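The masking idea can be illustrated with a deliberately simplified toy: greedily hide genomic positions until the apparent relatedness between two shared genotype vectors drops below a privacy threshold. This is not the authors' constraint-based optimization; the similarity measure, threshold, and greedy order are all stand-ins chosen for the example.

```python
# Toy kinship-masking sketch: genotypes are 0/1 vectors, "relatedness" is the
# fraction of unmasked positions where two individuals agree, and we mask
# agreeing positions one at a time until that fraction is at most a threshold.

def similarity(a, b, masked):
    """Fraction of unmasked positions where the two genotypes agree."""
    visible = [i for i in range(len(a)) if i not in masked]
    if not visible:
        return 0.0
    return sum(a[i] == b[i] for i in visible) / len(visible)

def mask_until_safe(a, b, threshold):
    """Greedily mask matching positions until similarity <= threshold."""
    masked = set()
    while similarity(a, b, masked) > threshold:
        # hide the lowest-index position that still reveals agreement
        i = next(i for i in range(len(a)) if i not in masked and a[i] == b[i])
        masked.add(i)
    return masked

parent    = [0, 1, 1, 0, 1, 1]
offspring = [0, 1, 1, 0, 0, 1]
masked = mask_until_safe(parent, offspring, threshold=0.5)
```

Even this toy shows the utility/privacy tension the paper optimizes: the closer the relatives, the more positions must be hidden before the shared data stops revealing kinship.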
Models in Translational Oncology: A Public Resource Database for Preclinical Cancer Research.
Galuschka, Claudia; Proynova, Rumyana; Roth, Benjamin; Augustin, Hellmut G; Müller-Decker, Karin
2017-05-15
The devastating diseases of human cancer are mimicked in basic and translational cancer research by a steadily increasing number of tumor models, a situation requiring a platform with standardized reports to share model data. The Models in Translational Oncology (MiTO) database, reviewed here, was developed as a unique Web platform aiming for a comprehensive overview of preclinical models covering genetically engineered organisms and models of transplantation, chemical/physical induction, or spontaneous development. MiTO serves data entry for metastasis profiles and interventions. Moreover, cell lines and animal lines, including tool strains, can be recorded. Hyperlinks for connection with other databases and file uploads as supplementary information are supported. Several communication tools are offered to facilitate exchange of information. Notably, intellectual property can be protected prior to publication by inventor-defined accessibility of any given model. Data recall is via a highly configurable keyword search. Genome editing is expected to change the spectrum of model organisms, a reason to open MiTO to species-independent data. Registered users may deposit their own model fact sheets (FS), which MiTO experts check for plausibility. Independently, manually curated FS are provided to principal investigators for revision and publication. Importantly, non-editable versions of reviewed FS can be cited in peer-reviewed journals. Cancer Res; 77(10); 2557-63. ©2017 AACR. ©2017 American Association for Cancer Research.
NABIC: A New Access Portal to Search, Visualize, and Share Agricultural Genomics Data
Seol, Young-Joo; Lee, Tae-Ho; Park, Dong-Suk; Kim, Chang-Kug
2016-01-01
The National Agricultural Biotechnology Information Center developed an access portal to search, visualize, and share agricultural genomics data with a focus on South Korean information and resources. The portal features an agricultural biotechnology database containing a wide range of omics data from public and proprietary sources. We collected 28.4 TB of data from 162 agricultural organisms, with 10 types of omics data comprising next-generation sequencing sequence read archive, genome, gene, nucleotide, DNA chip, expressed sequence tag, interactome, protein structure, molecular marker, and single-nucleotide polymorphism datasets. Our genomic resources contain information on five animals, seven plants, and one fungus, which is accessed through a genome browser. We also developed a data submission and analysis system as a web service, with easy-to-use functions and cutting-edge algorithms, including those for handling next-generation sequencing data. PMID:26848255
The Web-Database Connection Tools for Sharing Information on the Campus Intranet.
ERIC Educational Resources Information Center
Thibeault, Nancy E.
This paper evaluates four tools for creating World Wide Web pages that interface with Microsoft Access databases: DB Gateway, Internet Database Assistant (IDBA), Microsoft Internet Database Connector (IDC), and Cold Fusion. The system requirements and features of each tool are discussed. A sample application, "The Virtual Help Desk"…
The National Map - Orthoimagery Layer
2007-01-01
Many Federal, State, and local agencies use a common set of framework geographic information databases as a tool for economic and community development, land and natural resource management, and health and safety services. Emergency management and homeland security applications rely on this information. Private industry, nongovernmental organizations, and individual citizens use the same geographic data. Geographic information underpins an increasingly large part of the Nation's economy. The U.S. Geological Survey (USGS) is developing The National Map to be a seamless, continually maintained, and nationally consistent set of online, public domain, framework geographic information databases. The National Map will serve as a foundation for integrating, sharing, and using data easily and consistently. The data will be the source of revised paper topographic maps. The National Map includes digital orthorectified imagery; elevation data; vector data for hydrography, transportation, boundary, and structure features; geographic names; and land cover information.
Internet calculations of thermodynamic properties of substances: Some problems and results
NASA Astrophysics Data System (ADS)
Ustyuzhanin, E. E.; Ochkov, V. F.; Shishakov, V. V.; Rykov, S. V.
2016-11-01
Internet resources (databases, websites, and others) on the thermodynamic properties R = (p, T, s, ...) of technologically important substances are analyzed. These databases, put online by a number of organizations (the Joint Institute for High Temperatures of the Russian Academy of Sciences, Standartinform, the National Institute of Standards and Technology (USA), the Institute for Thermal Physics of the Siberian Branch of the Russian Academy of Sciences, etc.), are investigated. Software codes are elaborated in the form of “client functions” with the following characteristics: (i) they are placed on a remote server, and (ii) they serve as open interactive Internet resources. A client can use them to calculate the R properties of substances. “Complex client functions” are also considered; they are focused on combining (i) software codes elaborated for the design of power plants (PP) and (ii) client functions that calculate the R properties of working fluids for PP.
NASA Astrophysics Data System (ADS)
Wood, J. H.; Natali, S.
2014-12-01
The Global Decomposition Project (GDP) is a program designed to introduce and educate students and the general public about soil organic matter and decomposition through a standardized protocol for collecting, reporting, and sharing data. This easy-to-use, hands-on activity focuses on questions such as "How do environmental conditions control decomposition of organic matter in soil?" and "Why do some areas accumulate organic matter and others do not?" Soil organic matter is important to local ecosystems because it affects soil structure, regulates soil moisture and temperature, and provides energy and nutrients to soil organisms. It is also important globally because it stores a large amount of carbon, and when microbes "eat", or decompose, organic matter, they release greenhouse gases such as carbon dioxide and methane into the atmosphere, which affects the earth's climate. The protocol describes a commonly used method to measure decomposition using a paper made of cellulose, a component of plant cell walls. Participants can receive pre-made cellulose decomposition bags, or make decomposition bags using instructions in the protocol and easily obtained materials (e.g., window screen and lignin-free paper). Individual results will be shared with all participants and the broader public through an online database. We will present decomposition bag results from a research site in Alaskan tundra, as well as from a middle-school-student-led experiment in California. The GDP demonstrates how scientific methods can be extended to educate broader audiences, while at the same time, data collected by students and the public can provide new insight into global patterns of soil decomposition. The GDP provides a pathway for scientists and educators to interact and reach meaningful education and research goals.
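A standard way to summarize litterbag results like these is a first-order decomposition rate constant computed from mass loss. The formula below is common litterbag practice, not something specified in the GDP protocol text above, and the masses are invented for the example.

```python
# Decomposition rate constant k from exponential mass loss in a litterbag:
# m(t) = m0 * exp(-k t), so k = -ln(m_t / m0) / t.
import math

def decay_constant(m0, mt, years):
    """First-order decay constant (per year) from initial and remaining mass."""
    return -math.log(mt / m0) / years

# hypothetical example: a 10 g cellulose bag with 6 g remaining after 1 year
k = decay_constant(m0=10.0, mt=6.0, years=1.0)
```

Reporting k rather than raw mass loss lets participants compare sites with different deployment durations, which is what makes a shared global database of results useful.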
Detection of alternative splice variants at the proteome level in Aspergillus flavus.
Chang, Kung-Yen; Georgianna, D Ryan; Heber, Steffen; Payne, Gary A; Muddiman, David C
2010-03-05
Identification of proteins from proteolytic peptides or intact proteins plays an essential role in proteomics. Researchers use search engines to match the acquired peptide sequences to the target proteins. However, search engines depend on protein databases to provide candidates for consideration. Alternative splicing (AS), the mechanism by which the exons of pre-mRNAs can be spliced and rearranged to generate distinct mRNA and therefore protein variants, enables higher eukaryotic organisms, with only a limited number of genes, to have the requisite complexity and diversity at the proteome level. Multiple alternative isoforms from one gene often share common segments of sequence. However, many protein databases include only a limited number of isoforms to keep redundancy minimal. As a result, a database search might not identify a target protein even with high-quality tandem MS data and an accurate intact precursor ion mass. We computationally predicted an exhaustive list of putative isoforms of Aspergillus flavus proteins from 20,371 expressed sequence tags to investigate whether an alternative splicing protein database can assign a greater proportion of mass spectrometry data. The newly constructed AS database provided 9,807 new alternatively spliced variants in addition to 12,832 previously annotated proteins. Searches of the existing tandem MS spectra data set using the AS database identified 29 new proteins encoded by 26 genes. Nine fungal genes appeared to have multiple protein isoforms. In addition to the discovery of splice variants, the AS database also showed potential to improve genome annotation. In summary, the introduction of an alternative splicing database helps identify more proteins and unveils more information about a proteome.
NASA Astrophysics Data System (ADS)
Weltzin, J. F.; Browning, D. M.
2014-12-01
The USA National Phenology Network (USA-NPN; www.usanpn.org) is a national-scale science and monitoring initiative focused on phenology - the study of seasonal life-cycle events such as leafing, flowering, reproduction, and migration - as a tool to understand the response of biodiversity to environmental variation and change. USA-NPN provides a hierarchical, national monitoring framework that enables other organizations to leverage the capacity of the Network for their own applications - minimizing investment and duplication of effort - while promoting interoperability. Network participants can leverage: (1) Standardized monitoring protocols that have been broadly vetted, tested and published; (2) A centralized National Phenology Database (NPDb) for maintaining, archiving and replicating data, with standard metadata, terms-of-use, web-services, and documentation of QA/QC, plus tools for discovery, visualization and download of raw data and derived data products; and/or (3) A national in-situ, multi-taxa phenological monitoring system, Nature's Notebook, which enables participants to observe and record phenology of plants and animals - based on the protocols and information management system (IMS) described above - via either web or mobile applications. The protocols, NPDb and IMS, and Nature's Notebook represent a hierarchy of opportunities for involvement by a broad range of interested stakeholders, from individuals to agencies. For example, some organizations have adopted (e.g., the National Ecological Observatory Network or NEON) -- or are considering adopting (e.g., the Long-Term Agroecosystems Network or LTAR) -- the USA-NPN standardized protocols, but will develop their own database and IMS with web services to promote sharing of data with the NPDb. 
Other organizations (e.g., the Inventory and Monitoring Programs of the National Wildlife Refuge System and the National Park Service) have elected to use Nature's Notebook to support their phenological monitoring programs. We highlight the challenges and benefits of integrating phenology monitoring within existing and emerging national monitoring networks, and showcase opportunities that exist when standardized protocols are adopted and implemented to promote data interoperability and sharing.
76 FR 54999 - Notice of 2011 National Organic Certification Cost-Share Program
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-06
...] Notice of 2011 National Organic Certification Cost-Share Program AGENCY: Agricultural Marketing Service... for the National Organic Certification Cost- Share Program. SUMMARY: This Notice invites all States of...) for the allocation of National Organic Certification Cost-Share Funds. Beginning in Fiscal Year 2008...
Genic insights from integrated human proteomics in GeneCards.
Fishilevich, Simon; Zimmerman, Shahar; Kohn, Asher; Iny Stein, Tsippi; Olender, Tsviya; Kolker, Eugene; Safran, Marilyn; Lancet, Doron
2016-01-01
GeneCards is a one-stop shop for searchable human gene annotations (http://www.genecards.org/). Data are automatically mined from ∼120 sources and presented in an integrated web card for every human gene. We report the application of recent advances in proteomics to enhance gene annotation and classification in GeneCards. First, we constructed the Human Integrated Protein Expression Database (HIPED), a unified database of protein abundance in human tissues, based on the publicly available mass spectrometry (MS)-based proteomics sources ProteomicsDB, Multi-Omics Profiling Expression Database, Protein Abundance Across Organisms and The MaxQuant DataBase. The integrated database, residing within GeneCards, compares favourably with its individual sources, covering nearly 90% of human protein-coding genes. For gene annotation and comparisons, we first defined a protein expression vector for each gene, based on normalized abundances in 69 normal human tissues. This vector is portrayed in the GeneCards expression section as a bar graph, allowing visual inspection and comparison. These data are juxtaposed with transcriptome bar graphs. Using the protein expression vectors, we further defined a pairwise metric that helps assess expression-based pairwise proximity. This new metric for finding functional partners complements eight others, including sharing of pathways, gene ontology (GO) terms and domains, implemented in the GeneCards Suite. In parallel, we calculated proteome-based differential expression, highlighting a subset of tissues that overexpress a gene and subserving gene classification. This textual annotation allows users of VarElect, the suite's next-generation phenotyper, to more effectively discover causative disease variants. Finally, we define the protein-RNA expression ratio and correlation as yet another attribute of every gene in each tissue, adding further annotative information.
The results constitute a significant enhancement of several GeneCards sections and help promote and organize the genome-wide structural and functional knowledge of the human proteome. Database URL: http://www.genecards.org/. © The Author(s) 2016. Published by Oxford University Press.
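The expression-vector comparison described above can be sketched in a few lines. The abundance values below are invented, and cosine similarity is only one plausible choice of pairwise metric; the abstract does not specify the formula GeneCards actually uses.

```python
import numpy as np

# Hypothetical normalized protein abundances (rows: genes, entries:
# tissues).  The real HIPED vectors span 69 normal human tissues; three
# tissues are shown here purely for illustration.
expression = {
    "GENE_A": np.array([0.70, 0.20, 0.10]),
    "GENE_B": np.array([0.65, 0.25, 0.10]),
    "GENE_C": np.array([0.05, 0.15, 0.80]),
}

def proximity(u, v):
    # Cosine similarity between two expression vectors: near 1.0 for
    # genes with similar tissue profiles, near 0.0 for dissimilar ones.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# One score per unordered gene pair.
pairs = {(a, b): proximity(expression[a], expression[b])
         for a in expression for b in expression if a < b}
```

Under this sketch, GENE_A and GENE_B (similar profiles) score close to 1.0, while GENE_A and GENE_C do not, which is the kind of ranking a "functional partner" metric needs to produce.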
1992-05-01
ocean color for retrieving ocean k(490) values are examined. The validation of the optical database from the satellite is assessed through comparison...for sharing results of this validation study. We wish to thank J. Mueller for helpful discussions in optics and satellite processing and for sharing his...of these data products are displayable as 512 x 512 8-bit image maps compatible with the PC-SeaPak image format. Valid data ranges are from 1 to 255
Jiang, Xiaoqian; Sarwate, Anand D.; Ohno-Machado, Lucila
2013-01-01
Objective: Effective data sharing is critical for comparative effectiveness research (CER), but there are significant concerns about inappropriate disclosure of patient data. These concerns have spurred the development of new technologies for privacy-preserving data sharing and data mining. Our goal is to review existing and emerging techniques that may be appropriate for data sharing related to CER. Material and methods: We adapted a systematic review methodology to comprehensively search the research literature. We searched 7 databases and applied three stages of filtering based on titles, abstracts, and full text to identify those works most relevant to CER. Results: Based on agreement, and using the arbitrage of a third-party expert, we selected 97 articles for meta-analysis. Our findings are organized along major types of data sharing in CER applications (i.e., institution-to-institution, institution-hosted, and public release). We made recommendations based on specific scenarios. Limitation: We limited the scope of our study to methods that demonstrated practical impact, eliminating many theoretical studies of privacy that have been surveyed elsewhere. We further limited our study to data sharing for data tables, rather than complex genomic, set-valued, time series, text, image, or network data. Conclusion: State-of-the-art privacy-preserving technologies can guide the development of practical tools that will scale up the CER studies of the future. However, many challenges remain in this fast-moving field in terms of practical evaluations as well as applications to a wider range of data types. PMID:23774511
Riffle, Michael; Jaschob, Daniel; Zelter, Alex; Davis, Trisha N
2016-08-05
ProXL is a Web application and accompanying database designed for sharing, visualizing, and analyzing bottom-up protein cross-linking mass spectrometry data with an emphasis on structural analysis and quality control. ProXL is designed to be independent of any particular software pipeline. The import process is simplified by the use of the ProXL XML data format, which shields developers of data importers from the relative complexity of the relational database schema. The database and Web interfaces function equally well for any software pipeline and allow data from disparate pipelines to be merged and contrasted. ProXL includes robust public and private data sharing capabilities, including a project-based interface designed to ensure security and facilitate collaboration among multiple researchers. ProXL provides multiple interactive and highly dynamic data visualizations that facilitate structural-based analysis of the observed cross-links as well as quality control. ProXL is open-source, well-documented, and freely available at https://github.com/yeastrc/proxl-web-app .
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-22
...] Notice of Funds Availability: Agricultural Management Assistance Organic Certification Cost-Share Program... . SUPPLEMENTARY INFORMATION: This Organic Certification Cost-Share Program is part of the Agricultural Management... Wyoming. The AMS has allocated $1,352,850 for this organic certification cost- share program in Fiscal...
Panorama: A Targeted Proteomics Knowledge Base
2015-01-01
Panorama is a web application for storing, sharing, analyzing, and reusing targeted assays created and refined with Skyline, an increasingly popular Windows client software tool for targeted proteomics experiments. Panorama allows laboratories to store and organize curated results contained in Skyline documents with fine-grained permissions, which facilitates distributed collaboration and secure sharing of published and unpublished data via a web-browser interface. It is fully integrated with the Skyline workflow and supports publishing a document directly to a Panorama server from the Skyline user interface. Panorama captures the complete Skyline document information content in a relational database schema. Curated results published to Panorama can be aggregated and exported as chromatogram libraries. These libraries can be used in Skyline to pick optimal targets in new experiments and to validate peak identification of target peptides. Panorama is open-source and freely available. It is distributed as part of LabKey Server, an open-source biomedical research data management system. Laboratories and organizations can set up Panorama locally by downloading and installing the software on their own servers. They can also request freely hosted projects on https://panoramaweb.org, a Panorama server maintained by the Department of Genome Sciences at the University of Washington. PMID:25102069
Resource Sharing: A Necessity for the '80s.
ERIC Educational Resources Information Center
Lavo, Barbara, Comp.
Papers presented at a 1981 seminar on library resource sharing covered topics related to Australasian databases, Australian and New Zealand document delivery systems, and shared acquisition and cataloging for special libraries. The papers included: (1) "AUSINET: Australasia's Information Network?" by Ian McCallum; (2) "Australia/New…
Friedland-Little, Joshua M; Gajarski, Robert J; Yu, Sunkyung; Donohue, Janet E; Zamberlan, Mary C; Schumacher, Kurt R
2014-09-01
Repeat heart transplantation (re-HTx) is standard practice in many pediatric centers. There are limited data available on outcomes of third HTx after failure of a second graft. We sought to compare outcomes of third HTx in pediatric and young adult patients with outcomes of second HTx in comparable recipients. All recipients of a third HTx in whom the primary HTx occurred before 21 years of age were identified in the United Network for Organ Sharing database (1985 to 2011) and matched 1:3 with a control group of second HTx patients by age, era and re-HTx indication. Outcomes including survival, rejection and cardiac allograft vasculopathy (CAV) were compared between groups. There was no difference between third HTx patients (n = 27) and control second HTx patients (n = 79) with respect to survival (76% vs 80% at 1 year, 62% vs 58% at 5 years and 53% vs 34% at 10 years, p = 0.75), early (<1 year from HTx) rejection (33.3% vs 44.3%, p = 0.32) or CAV (14.8% vs 30.4%, p = 0.11). Factors associated with non-survival in third HTx patients included mechanical ventilation at listing or HTx, extracorporeal membrane oxygenation support at listing or HTx, and elevated serum bilirubin at HTx. Outcomes among recipients of a third HTx are similar to those with a second HTx in matched patients, with no difference in short- or long-term survival and comparable rates of early rejection and CAV. Although the occurrence of a third HTx remains relatively rare in the USA, consideration of a third HTx appears reasonable in appropriately selected patients. Copyright © 2014 International Society for Heart and Lung Transplantation. Published by Elsevier Inc. All rights reserved.
Health information and communication system for emergency management in a developing country, Iran.
Seyedin, Seyed Hesam; Jamali, Hamid R
2011-08-01
Disasters are fortunately rare occurrences. However, accurate and timely information and communication are vital to adequately prepare individual health organizations for such events. The current article investigates the health-related communication and information systems for emergency management in Iran. A mixed qualitative and quantitative methodology was used in this study. A sample of 230 health service managers was surveyed using a questionnaire, and 65 semi-structured interviews were also conducted with public health and therapeutic affairs managers who were responsible for emergency management. A range of problems was identified, including fragmentation of information, lack of local databases, lack of a clear information strategy, and lack of a formal system for logging disaster-related information at the regional or local level. Recommendations were made for improving the national emergency management information and communication system. The findings have implications for health organizations in developing and developed countries, especially in the Middle East. Creating disaster-related information databases, creating protocols and standards, setting an information strategy, training staff, and hosting a center for the information system in the Ministry of Health to centrally manage and share the data could improve the current information system.
USDA Branded Food Products Database, Release 2
USDA-ARS?s Scientific Manuscript database
The USDA Branded Food Products Database is the ongoing result of a Public-Private Partnership (PPP), whose goal is to enhance public health and the sharing of open data by complementing the USDA National Nutrient Database for Standard Reference (SR) with nutrient composition of branded foods and pri...
NASA Astrophysics Data System (ADS)
Cornell, Sarah
2015-04-01
It is time to collate a global community database of atmospheric water-soluble organic nitrogen deposition. Organic nitrogen (ON) has long been known to be globally ubiquitous in atmospheric aerosol and precipitation, with implications for air and water quality, climate, biogeochemical cycles, ecosystems and human health. The number of studies of atmospheric ON deposition has increased steadily in recent years, but to date there is no accessible global dataset, for either bulk ON or its major components. Improved qualitative and quantitative understanding of the organic nitrogen component is needed to complement the well-established knowledge base pertaining to other components of atmospheric deposition (cf. Vet et al 2014). Without this basic information, we are increasingly constrained in addressing the current dynamics and potential interactions of atmospheric chemistry, climate and ecosystem change. To see the full picture we need global data synthesis, more targeted data gathering, and models that let us explore questions about the natural and anthropogenic dynamics of atmospheric ON. Collectively, our research community already has a substantial amount of atmospheric ON data. Published reports extend back over a century and now have near-global coverage. However, datasets available from the literature are very piecemeal and too often lack crucially important information that would enable aggregation or re-use. I am initiating an open collaborative process to construct a community database, so we can begin to systematically synthesize these datasets (generally from individual studies at a local and temporally limited scale) to increase their scientific usability and statistical power for studies of global change and anthropogenic perturbation. In drawing together our disparate knowledge, we must address various challenges and concerns, not least about the comparability of analysis and sampling methodologies, and the known complexity of composition of ON. 
We need to discuss and develop protocols that work for diverse research needs. The database will need to be harmonized or merged into existing global N data initiatives. This presentation therefore launches a standing invitation for experts to contribute and share rain and aerosol ON and chemical composition data, and jointly refine the preliminary database structure and metadata requirements for optimal mutual use. Reference: Vet et al. (2014) A global assessment of precipitation chemistry… Atmos Environ 93: 3-100
Youn, Bora; Soley-Bori, Marina; Soria-Saucedo, Rene; Ryan, Colleen M; Schneider, Jeffrey C; Haynes, Alex B; Cabral, Howard J; Kazis, Lewis E
2016-03-01
Readmission rates after operative procedures are increasingly used as a measure of hospital care quality. Patient access to care may influence readmission rates. The objective of this study was to determine the relationship between patient cost-sharing, insurance arrangements, and the risk of postoperative readmissions. Using the MarketScan Research Database (n = 121,002), we examined privately insured, nonelderly patients who underwent abdominal surgery in 2010. The main outcome measures were risk-adjusted unplanned readmissions within 7 days and 30 days of discharge. Odds of readmission were compared with multivariable logistic regression models. In adjusted models, a $1,284 increase in patient out-of-pocket payments during the index admission (a difference of one standard deviation) was associated with a 19% decrease in the odds of 7-day readmission (odds ratio [OR] 0.81, 95% confidence interval [CI] 0.78-0.85) and a 17% decrease in the odds of 30-day readmission (OR 0.83, 95% CI 0.81-0.86). Patients in noncapitated point-of-service plans (OR 1.19, 95% CI 1.07-1.33), preferred provider organization plans (OR 1.11, 95% CI 1.03-1.19), and high-deductible plans (OR 1.12, 95% CI 1.00-1.26) were more likely to be readmitted within 30 days than patients in capitated health maintenance organization and point-of-service plans. Among privately insured, nonelderly patients, increased patient cost-sharing was associated with lower odds of 7-day and 30-day readmission after abdominal surgery. Insurance arrangements were also significantly associated with postoperative readmissions. Patient cost-sharing and insurance arrangements need consideration in the provision of equitable access to quality care. Copyright © 2016 Elsevier Inc. All rights reserved.
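Because the reported odds ratios come from a logistic model, the one-standard-deviation effect can be rescaled to other dollar amounts. This is a back-of-envelope reconstruction from the published OR, not the authors' fitted model:

```python
import math

# Published result: OR 0.81 for a one-SD ($1,284) increase in
# out-of-pocket payment for 7-day readmission.
sd_dollars = 1284.0
or_per_sd = 0.81

# Under a logistic model the log-odds change is linear in the covariate,
# so the per-dollar coefficient follows directly from the per-SD OR.
beta_per_dollar = math.log(or_per_sd) / sd_dollars

# Implied OR for a $500 increase in out-of-pocket payment.
or_per_500 = math.exp(beta_per_dollar * 500.0)
```

The rescaling only holds under the model's linearity-in-log-odds assumption, which is exactly what the multivariable logistic regression in the study imposes.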
Hofmarcher, M M
1998-09-01
To provide a conceptual framework for health planning activities in the "middle income" transition countries. Economic, demographic, and disease-related data in Central and Eastern European (CEE) countries, including Croatia and Austria, were compared to the European Union (EU) average. Data were selected from the databases provided by the World Health Organization, Organization for Economic Cooperation and Development, World Bank, United Nations, and the European Bank for Reconstruction and Development. Life expectancy and mortality were extrapolated until the year 2000 by using an exponential growth model for the WHO time-series data, starting in 1994. Death rates due to ischemic heart diseases (18%) and cerebrovascular diseases (13%) were selected to show frequent causes of death. Relative to the EU average, the gross domestic product (GDP) share of health expenditures in transition countries was disproportionate to wealth and premature death. The population in CEE countries was younger, and the share of people aged >65 was predicted to remain about 15% below the EU average and Austria. For Croatia, the share of people aged >65 would be on the increase, similar to the share predicted for Austria (slightly above the EU average). Mortality from selected non-communicable, chronic diseases is predicted to increase and remain relatively high. Mortality rates due to infectious diseases have been declining but remain comparatively high. The coexistence of demographic and epidemiological transition along with high mortality rates due to infectious diseases creates a "double burden". Economic transition has the potential to bring both increases in wealth and increases in life and health expectancy.
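The extrapolation step can be sketched as follows: fit an exponential growth model to a short time series by linear regression on the log scale, then project forward to the target year. The life-expectancy values below are invented; only the method mirrors the one described in the abstract.

```python
import numpy as np

# Illustrative life-expectancy series (years 1994-1997); the numbers are
# made up -- the study used WHO time-series data starting in 1994.
years = np.array([1994.0, 1995.0, 1996.0, 1997.0])
life_exp = np.array([72.0, 72.3, 72.6, 72.9])

# Exponential growth model y = a * exp(b * t): fitting a straight line
# to log(y) gives the growth rate b and log-intercept log(a).
t = years - years[0]
b, log_a = np.polyfit(t, np.log(life_exp), 1)

# Extrapolate to the study's target year, 2000.
projected_2000 = float(np.exp(log_a + b * (2000.0 - years[0])))
```

With roughly 0.3 years of gain per year on a base of 72, the fitted growth rate is small (about 0.4% per year), so the six-year projection stays close to a linear extrapolation.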
CottonGen: a genomics, genetics and breeding database for cotton research
USDA-ARS?s Scientific Manuscript database
CottonGen (http://www.cottongen.org) is a curated and integrated web-based relational database providing access to publicly available genomic, genetic and breeding data for cotton. CottonGen supersedes CottonDB and the Cotton Marker Database, with enhanced tools for easier data sharing, mining, vis...
Exploration of options for publishing databases and supplemental material in society journals
USDA-ARS?s Scientific Manuscript database
As scientific information becomes increasingly more abundant, there is increasing interest among members of our societies to share databases. These databases have great value, for example, in providing long-term perspectives of various scientific problems and for use by modelers to extend the inform...
Enhancing Knowledge Integration: An Information System Capstone Project
ERIC Educational Resources Information Center
Steiger, David M.
2009-01-01
This database project focuses on learning through knowledge integration; i.e., sharing and applying specialized (database) knowledge within a group, and combining it with other business knowledge to create new knowledge. Specifically, the Tiny Tots, Inc. project described below requires students to design, build, and instantiate a database system…
Wiley, Emily A.; Stover, Nicholas A.
2014-01-01
Use of inquiry-based research modules in the classroom has soared over recent years, largely in response to national calls for teaching that provides experience with scientific processes and methodologies. To increase the visibility of in-class studies among interested researchers and to strengthen their impact on student learning, we have extended the typical model of inquiry-based labs to include a means for targeted dissemination of student-generated discoveries. This initiative required: 1) creating a set of research-based lab activities with the potential to yield results that a particular scientific community would find useful and 2) developing a means for immediate sharing of student-generated results. Working toward these goals, we designed guides for course-based research aimed to fulfill the need for functional annotation of the Tetrahymena thermophila genome, and developed an interactive Web database that links directly to the official Tetrahymena Genome Database for immediate, targeted dissemination of student discoveries. This combination of research via the course modules and the opportunity for students to immediately “publish” their novel results on a Web database actively used by outside scientists culminated in a motivational tool that enhanced students’ efforts to engage the scientific process and pursue additional research opportunities beyond the course. PMID:24591511
NASA Technical Reports Server (NTRS)
Carvalho, Robert F.; Williams, James; Keller, Richard; Sturken, Ian; Panontin, Tina
2004-01-01
InvestigationOrganizer (IO) is a collaborative web-based system designed to support the conduct of mishap investigations. IO provides a common repository for a wide range of mishap related information, and allows investigators to make explicit, shared, and meaningful links between evidence, causal models, findings and recommendations. It integrates the functionality of a database, a common document repository, a semantic knowledge network, a rule-based inference engine, and causal modeling and visualization. Thus far, IO has been used to support four mishap investigations within NASA, ranging from a small property damage case to the loss of the Space Shuttle Columbia. This paper describes how the functionality of IO supports mishap investigations and the lessons learned from the experience of supporting two of the NASA mishap investigations: the Columbia Accident Investigation and the CONTOUR Loss Investigation.
Value of shared preclinical safety studies - The eTOX database.
Briggs, Katharine; Barber, Chris; Cases, Montserrat; Marc, Philippe; Steger-Hartmann, Thomas
2015-01-01
A first analysis of a database of shared preclinical safety data for 1214 small-molecule drugs and drug candidates, extracted from 3970 reports donated by thirteen pharmaceutical companies for the eTOX project (www.etoxproject.eu), is presented. Species, duration of exposure, and administration route data were analysed to assess whether large enough subsets of homogeneous data are available for building in silico predictive models. The prevalence of treatment-related effects for the different types of findings recorded was analysed. The eTOX ontology was used to determine the most common treatment-related clinical chemistry and histopathology findings reported in the database. The data were then mined to evaluate the sensitivity of established in vivo biomarkers for liver toxicity risk assessment. The value of the database for informing other projects during early drug development is illustrated by a case study.
Lowe, H. J.
1993-01-01
This paper describes Image Engine, an object-oriented, microcomputer-based, multimedia database designed to facilitate the storage and retrieval of digitized biomedical still images, video, and text using inexpensive desktop computers. The current prototype runs on Apple Macintosh computers and allows network database access via peer-to-peer file-sharing protocols. Image Engine supports both free-text and controlled-vocabulary indexing of multimedia objects. The latter is implemented using the TView thesaurus model developed by the author. The current prototype of Image Engine uses the National Library of Medicine's Medical Subject Headings (MeSH) vocabulary (with UMLS Meta-1 extensions) as its indexing thesaurus. PMID:8130596
Canadian ENGOs in governance of water resources: information needs and monitoring practices.
Kebo, Sasha; Bunch, Martin J
2013-11-01
Water quality monitoring involves a complex set of steps and a variety of approaches. Its goals include understanding of aquatic habitats, informing management and facilitating decision making, and educating citizens. Environmental nongovernmental organizations (ENGOs) are increasingly engaged in water quality monitoring and act as environmental watchdogs and stewards of water resources. These organizations exhibit different monitoring mandates. As government involvement in water quality monitoring continues to decline, it becomes essential that we understand their modi operandi. By doing so, we can enhance efficacy and encourage data sharing and communication. This research examined Canadian ENGOs that collect their own data on water quality with respect to water quality monitoring activities and information needs. This work had a twofold purpose: (1) to enhance knowledge about the Canadian ENGOs operating in the realm of water quality monitoring and (2) to guide and inform development of web-based geographic information systems (GIS) to support water quality monitoring, particularly using benthic macroinvertebrate protocols. A structured telephone survey was administered across 10 Canadian provinces to 21 ENGOs that undertake water quality monitoring. This generated information about barriers and challenges of data sharing, commonly collected metrics, human resources, and perceptions of volunteer-collected data. Results are presented on an aggregate level and among different groups of respondents. Use of geomatics technology was not consistent among respondents, and we found no noteworthy differences between organizations that did and did not use GIS tools. About one third of respondents did not employ computerized systems (including databases and spreadsheets) to support data management, analysis, and sharing. Despite their advantage as a holistic water quality indicator, benthic macroinvertebrates (BMIs) were not widely employed in stream monitoring. 
Although BMIs are particularly suitable for the purpose of citizen education, few organizations collected this metric, despite having public education and awareness as part of their mandate.
Shapiro, Johanna
2016-09-01
This article explores how medical anthropologist Howard Stein's poetry and his unique practice of sharing this poetry with the patients, physicians, and administrators who inspired it create ways of knowing that are at once revelatory and emancipatory. Stein's writing shows readers that poetry can be considered as a form of data and as a method of investigation into the processes of the human soul. Furthermore, it represents a kind of intervention that invites health professional readers toward connection, bridge building, and solidarity with their patients and with one another. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Liver transplant for cholestatic liver diseases.
Carrion, Andres F; Bhamidimarri, Kalyan Ram
2013-05-01
Cholestatic liver diseases include a group of diverse disorders with different epidemiology, pathophysiology, clinical course, and prognosis. Despite significant advances in the clinical care of patients with cholestatic liver diseases, liver transplant (LT) remains the only definitive therapy for end-stage liver disease, regardless of the underlying cause. As per the United Network for Organ Sharing database, the rate of cadaveric LT for cholestatic liver disease was 18% in 1991, 10% in 2000, and 7.8% in 2008. This review summarizes the available evidence on various common and rare cholestatic liver diseases, disease-specific issues, and pertinent aspects of LT. Copyright © 2013 Elsevier Inc. All rights reserved.
[A basic research to share Fourier transform near-infrared spectrum information resource].
Zhang, Lu-Da; Li, Jun-Hui; Zhao, Long-Lian; Zhao, Li-Li; Qin, Fang-Li; Yan, Yan-Lu
2004-08-01
A method to share the information resources in a database of Fourier transform near-infrared (FTNIR) spectra of agricultural products, and to utilize that spectral information fully, is explored in this paper. Mapping spectra from one instrument to another is studied so that spectral information can be expressed accurately across instruments. The mapped spectra are then used to establish a mathematical model for quantitative analysis without including standard samples. For the protein content of twenty-two wheat samples, the correlation coefficient r between the model estimates and the Kjeldahl values is 0.941 with a relative error of 3.28%, while r is 0.963 with a relative error of 2.4% for a second model established using standard samples. This shows that spectral information can be shared by using mapped spectra. It can therefore be concluded that the spectra in one FTNIR spectral database can be transformed into another instrument's mapped spectra, which makes full use of the information resources in the FTNIR database and realizes resource sharing between different instruments.
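The instrument-transfer idea above can be sketched with an assumed per-wavelength linear mapping estimated from transfer samples measured on both instruments; all numbers below are synthetic and the mapping form is an illustrative assumption, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)
n_transfer, n_wavelengths = 15, 100

# Hypothetical transfer samples measured on both instruments.
# Instrument B is assumed to read like instrument A up to a gain and
# offset at each wavelength, plus a little noise.
spectra_a = rng.random((n_transfer, n_wavelengths))
gain, offset = 1.05, 0.02
spectra_b = gain * spectra_a + offset + rng.normal(0, 0.001, spectra_a.shape)

# Fit a per-wavelength linear map B -> A by least squares.
coeffs = np.empty((n_wavelengths, 2))
for j in range(n_wavelengths):
    coeffs[j] = np.polyfit(spectra_b[:, j], spectra_a[:, j], 1)

def map_to_a(spectrum_b):
    """Map a spectrum measured on instrument B into instrument A's space."""
    return coeffs[:, 0] * spectrum_b + coeffs[:, 1]

# A new spectrum from instrument B, mapped into A's space, should then be
# usable by a calibration model built on instrument A.
new_a = rng.random(n_wavelengths)
new_b = gain * new_a + offset
print(float(np.max(np.abs(map_to_a(new_b) - new_a))) < 0.05)  # True
```

With the mapped spectra in instrument A's space, a model calibrated on A can be applied without re-measuring standard samples on B, which is the sharing scenario the abstract describes.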
Infrastructure resources for clinical research in amyotrophic lateral sclerosis.
Sherman, Alexander V; Gubitz, Amelie K; Al-Chalabi, Ammar; Bedlack, Richard; Berry, James; Conwit, Robin; Harris, Brent T; Horton, D Kevin; Kaufmann, Petra; Leitner, Melanie L; Miller, Robert; Shefner, Jeremy; Vonsattel, Jean Paul; Mitsumoto, Hiroshi
2013-05-01
Clinical trial networks, shared clinical databases, and human biospecimen repositories are examples of infrastructure resources aimed at enhancing and expediting clinical and/or patient oriented research to uncover the etiology and pathogenesis of amyotrophic lateral sclerosis (ALS), a rapidly progressive neurodegenerative disease that leads to the paralysis of voluntary muscles. The current status of such infrastructure resources, as well as opportunities and impediments, were discussed at the second Tarrytown ALS meeting held in September 2011. The discussion focused on resources developed and maintained by ALS clinics and centers in North America and Europe, various clinical trial networks, U.S. government federal agencies including the National Institutes of Health (NIH), the Agency for Toxic Substances and Disease Registry (ATSDR) and the Centers for Disease Control and Prevention (CDC), and several voluntary disease organizations that support ALS research activities. Key recommendations included 1) the establishment of shared databases among individual ALS clinics to enhance the coordination of resources and data analyses; 2) the expansion of quality-controlled human biospecimen banks; and 3) the adoption of uniform data standards, such as the recently developed Common Data Elements (CDEs) for ALS clinical research. The value of clinical trial networks such as the Northeast ALS (NEALS) Consortium and the Western ALS (WALS) Consortium was recognized, and strategies to further enhance and complement these networks and their research resources were discussed.
Accuracy of taxonomy prediction for 16S rRNA and fungal ITS sequences
2018-01-01
Prediction of taxonomy for marker gene sequences such as 16S ribosomal RNA (rRNA) is a fundamental task in microbiology. Most experimentally observed sequences diverge from reference sequences of authoritatively named organisms, creating a challenge for prediction methods. I assessed the accuracy of several algorithms using cross-validation by identity, a new benchmark strategy which explicitly models the variation in distances between query sequences and the closest entry in a reference database. When the accuracy of genus predictions was averaged over a representative range of identities with the reference database (100%, 99%, 97%, 95% and 90%), all tested methods had ≤50% accuracy on the currently popular V4 region of 16S rRNA. Accuracy was found to fall rapidly with identity; for example, better methods were found to have V4 genus prediction accuracy of ∼100% at 100% identity but ∼50% at 97% identity. The relationship between identity and taxonomy was quantified as the probability that a rank is the lowest shared by a pair of sequences with a given pairwise identity. With the V4 region, 95% identity was found to be a twilight zone where taxonomy is highly ambiguous because the probabilities that the lowest shared rank between pairs of sequences is genus, family, order or class are approximately equal. PMID:29682424
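The "lowest shared rank" measure above can be illustrated with a toy sketch: walk two lineages from domain downward and report the deepest rank at which they still agree. The lineages below are invented for illustration.

```python
# Ranks ordered from broadest to most specific.
RANKS = ["domain", "phylum", "class", "order", "family", "genus", "species"]

def lowest_shared_rank(lineage_a, lineage_b):
    """Return the deepest rank at which both lineages agree, or None."""
    shared = None
    for rank in RANKS:
        if lineage_a.get(rank) and lineage_a.get(rank) == lineage_b.get(rank):
            shared = rank
        else:
            break
    return shared

a = {"domain": "Bacteria", "phylum": "Firmicutes", "class": "Bacilli",
     "order": "Lactobacillales", "family": "Lactobacillaceae",
     "genus": "Lactobacillus", "species": "L. casei"}
b = dict(a, species="L. rhamnosus")           # same genus, different species
c = dict(a, family="Streptococcaceae",        # diverges at the family rank
         genus="Streptococcus", species="S. thermophilus")

print(lowest_shared_rank(a, b))  # genus
print(lowest_shared_rank(a, c))  # order
```

Binning many such pairs by their pairwise sequence identity yields the rank probabilities the abstract refers to, e.g. the "twilight zone" near 95% identity where several ranks are about equally likely.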
Baldwin, Thomas T; Basenko, Evelina; Harb, Omar; Brown, Neil A; Urban, Martin; Hammond-Kosack, Kim E; Bregitzer, Phil P
2018-06-01
There is no comprehensive storage for generated mutants of Fusarium graminearum or data associated with these mutants. Instead, researchers have relied on several independent, non-integrated databases. FgMutantDb was designed as a simple spreadsheet, accessible globally on the web, that functions as a centralized source of information on F. graminearum mutants. FgMutantDb aids in the maintenance and sharing of mutants within a research community. It will also serve as a platform for disseminating prepublication results as well as negative results that often go unreported. Additionally, the highly curated information on mutants in FgMutantDb will be shared with other databases (FungiDB, Ensembl, PhytoPath, and PHI-base) through updating reports. Here we describe the creation and potential usefulness of FgMutantDb to the F. graminearum research community, and provide a tutorial on its use. This type of database could be easily emulated for other fungal species. Published by Elsevier Inc.
Seabird databases and the new paradigm for scientific publication and attribution
Hatch, Scott A.
2010-01-01
For more than 300 years, the peer-reviewed journal article has been the principal medium for packaging and delivering scientific data. With new tools for managing digital data, a new paradigm is emerging—one that demands open and direct access to data and that enables and rewards a broad-based approach to scientific questions. Ground-breaking papers in the future will increasingly be those that creatively mine and synthesize vast stores of data available on the Internet. This is especially true for conservation science, in which essential data can be readily captured in standard record formats. For seabird professionals, a number of globally shared databases are in the offing, or should be. These databases will capture the salient results of inventories and monitoring, pelagic surveys, diet studies, and telemetry. A number of real or perceived barriers to data sharing exist, but none is insurmountable. Our discipline should take an important stride now by adopting a specially designed markup language for annotating and sharing seabird data.
ERIC Educational Resources Information Center
Friedman, Debra; Hoffman, Phillip
2001-01-01
Describes creation of a relational database at the University of Washington supporting ongoing academic planning at several levels and affecting the culture of decision making. Addresses getting started; sharing the database; questions, worries, and issues; improving access to high-demand courses; the advising function; management of instructional…
ERIC Educational Resources Information Center
Mason, Robert T.
2013-01-01
This research paper compares a database practicum at the Regis University College for Professional Studies (CPS) with technology oriented practicums at other universities. Successful andragogy for technology courses can motivate students to develop a genuine interest in the subject, share their knowledge with peers and can inspire students to…
Code of Federal Regulations, 2014 CFR
2014-10-01
... Wireless Telecommunications Bureau announces by public notice the implementation of a third-party database...) Provide an electronic copy of an interference analysis to the third-party database manager which...-party database managers shall receive and retain the interference analyses electronically and make them...
Code of Federal Regulations, 2012 CFR
2012-10-01
... Wireless Telecommunications Bureau announces by public notice the implementation of a third-party database...) Provide an electronic copy of an interference analysis to the third-party database manager which...-party database managers shall receive and retain the interference analyses electronically and make them...
Code of Federal Regulations, 2010 CFR
2010-10-01
... Wireless Telecommunications Bureau announces by public notice the implementation of a third-party database...) Provide an electronic copy of an interference analysis to the third-party database manager which...-party database managers shall receive and retain the interference analyses electronically and make them...
Database Security: What Students Need to Know
ERIC Educational Resources Information Center
Murray, Meg Coffin
2010-01-01
Database security is a growing concern evidenced by an increase in the number of reported incidents of loss of or unauthorized exposure to sensitive data. As the amount of data collected, retained and shared electronically expands, so does the need to understand database security. The Defense Information Systems Agency of the US Department of…
Code of Federal Regulations, 2013 CFR
2013-10-01
... Wireless Telecommunications Bureau announces by public notice the implementation of a third-party database...) Provide an electronic copy of an interference analysis to the third-party database manager which...-party database managers shall receive and retain the interference analyses electronically and make them...
Code of Federal Regulations, 2011 CFR
2011-10-01
... Wireless Telecommunications Bureau announces by public notice the implementation of a third-party database...) Provide an electronic copy of an interference analysis to the third-party database manager which...-party database managers shall receive and retain the interference analyses electronically and make them...
A Web-based Alternative Non-animal Method Database for Safety Cosmetic Evaluations
Kim, Seung Won; Kim, Bae-Hwan
2016-01-01
Animal testing was used traditionally in the cosmetics industry to confirm product safety, but has begun to be banned; alternative methods to replace animal experiments are either in development, or are being validated, worldwide. Research data related to test substances are critical for developing novel alternative tests. Moreover, safety information on cosmetic materials has neither been collected in a database nor shared among researchers. Therefore, it is imperative to build and share a database of safety information on toxicological mechanisms and pathways collected through in vivo, in vitro, and in silico methods. We developed the CAMSEC database (named after the research team; the Consortium of Alternative Methods for Safety Evaluation of Cosmetics) to fulfill this purpose. On the same website, our aim is to provide updates on current alternative research methods in Korea. The database will not be used directly to conduct safety evaluations, but researchers or regulatory individuals can use it to facilitate their work in formulating safety evaluations for cosmetic materials. We hope this database will help establish new alternative research methods to conduct efficient safety evaluations of cosmetic materials. PMID:27437094
A Web-based Alternative Non-animal Method Database for Safety Cosmetic Evaluations.
Kim, Seung Won; Kim, Bae-Hwan
2016-07-01
Animal testing was used traditionally in the cosmetics industry to confirm product safety, but has begun to be banned; alternative methods to replace animal experiments are either in development, or are being validated, worldwide. Research data related to test substances are critical for developing novel alternative tests. Moreover, safety information on cosmetic materials has neither been collected in a database nor shared among researchers. Therefore, it is imperative to build and share a database of safety information on toxicological mechanisms and pathways collected through in vivo, in vitro, and in silico methods. We developed the CAMSEC database (named after the research team; the Consortium of Alternative Methods for Safety Evaluation of Cosmetics) to fulfill this purpose. On the same website, our aim is to provide updates on current alternative research methods in Korea. The database will not be used directly to conduct safety evaluations, but researchers or regulatory individuals can use it to facilitate their work in formulating safety evaluations for cosmetic materials. We hope this database will help establish new alternative research methods to conduct efficient safety evaluations of cosmetic materials.
A Game Theoretic Framework for Analyzing Re-Identification Risk
Wan, Zhiyu; Vorobeychik, Yevgeniy; Xia, Weiyi; Clayton, Ellen Wright; Kantarcioglu, Murat; Ganta, Ranjit; Heatherly, Raymond; Malin, Bradley A.
2015-01-01
Given the wealth of insights that large databases of personal data can provide, many organizations aim to share data while protecting privacy by sharing de-identified data, but are concerned because various demonstrations show such data can be re-identified. Yet these investigations focus on how attacks can be perpetrated, not the likelihood they will be realized. This paper introduces a game theoretic framework that enables a publisher to balance re-identification risk with the value of sharing data, leveraging a natural assumption that a recipient only attempts re-identification if its potential gains outweigh the costs. We apply the framework to a real case study, where the value of the data to the publisher is the actual grant funding dollar amounts from a national sponsor and the re-identification gain of the recipient is the fine paid to a regulator for violation of federal privacy rules. There are three notable findings: 1) it is possible to achieve zero risk, in that the recipient never gains from re-identification, while sharing almost as much data as the optimal solution that allows for a small amount of risk; 2) the zero-risk solution enables sharing much more data than a commonly invoked de-identification policy of the U.S. Health Insurance Portability and Accountability Act (HIPAA); and 3) a sensitivity analysis demonstrates these findings are robust to order-of-magnitude changes in player losses and gains. In combination, these findings provide support that such a framework can enable pragmatic policy decisions about de-identified data sharing. PMID:25807380
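The core game-theoretic logic above can be sketched in a few lines: a rational recipient attacks only when its expected gain exceeds its cost, so the publisher can pick the most valuable sharing level at which attacking never pays off (a "zero-risk" choice). All payoffs and probabilities below are invented for illustration.

```python
def recipient_attacks(p_success, gain_if_success, attack_cost):
    """A rational recipient attacks only if expected gain exceeds cost."""
    return p_success * gain_if_success > attack_cost

def best_zero_risk_share(levels, attack_cost, gain_if_success):
    """Pick the most valuable sharing level at which no attack occurs.

    `levels` maps a sharing level to (publisher_value, p_success):
    more detail means more value, but higher re-identification odds.
    """
    safe = [(value, lvl) for lvl, (value, p) in levels.items()
            if not recipient_attacks(p, gain_if_success, attack_cost)]
    return max(safe)[1] if safe else None

levels = {                     # hypothetical publisher payoffs / risks
    "full":    (100, 0.30),
    "partial": (80,  0.10),
    "coarse":  (60,  0.01),
}
# With an attacker gain of 50 units and an attack cost of 10 units:
print(best_zero_risk_share(levels, attack_cost=10, gain_if_success=50))
# -> "partial": full sharing would invite an attack (0.30 * 50 > 10),
# but at "partial" the expected gain (0.10 * 50 = 5) is below the cost.
```

This mirrors finding (1) in the abstract: the zero-risk level can sit close to the risk-tolerant optimum while guaranteeing the recipient never profits from re-identification.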
Global land information system (GLIS) access to worldwide Landsat data
Smith, Timothy B.; Goodale, Katherine L.
1993-01-01
The Landsat Technical Working Group (LTWG) and the Landsat Ground Station Operations Working Group (LGSOWG) have encouraged Landsat receiving stations around the world to share information about their data holdings through the exchange of metadata records. Receiving stations forward their metadata records to the U.S. Geological Survey's EROS Data Center (EDC) on a quarterly basis. The EDC maintains the records for each station, coordinates changes to the database, and provides metadata to the stations as requested. The result is a comprehensive international database listing most of the world's Landsat data acquisitions. This exchange of information began in the early 1980s with the inclusion in the EDC database of scenes acquired by a receiving station in Italy. Through the years other stations have agreed to participate; currently ten of the seventeen stations actively share their metadata records. Coverage maps have been generated to depict the status of the database. The worldwide Landsat database is also available through the Global Land Information System (GLIS).
Software for Sharing and Management of Information
NASA Technical Reports Server (NTRS)
Chen, James R.; Wolfe, Shawn R.; Wragg, Stephen D.
2003-01-01
DIAMS is a set of computer programs that implements a system of collaborative agents that serve multiple, geographically distributed users communicating via the Internet. DIAMS provides a user interface as a Java applet that runs on each user's computer and that works within the context of the user's Internet-browser software. DIAMS helps all its users to manage, gain access to, share, and exchange information in databases that they maintain on their computers. One of the DIAMS agents is a personal agent that helps its owner find information most relevant to current needs. It provides software tools and utilities for users to manage their information repositories with dynamic organization and virtual views. Capabilities for generating flexible hierarchical displays are integrated with capabilities for indexed-query searching to support effective access to information. Automatic indexing methods are employed to support users' queries and communication between agents. The catalog of a repository is kept in object-oriented storage to facilitate sharing of information. Collaboration between users is aided by matchmaker agents and by automated exchange of information. The matchmaker agents are designed to establish connections between users who have similar interests and expertise.
de Carvalho, Elias César Araujo; Batilana, Adelia Portero; Simkins, Julie; Martins, Henrique; Shah, Jatin; Rajgor, Dimple; Shah, Anand; Rockart, Scott; Pietrobon, Ricardo
2010-02-19
Sharing of epidemiological and clinical data sets among researchers is poor at best, to the detriment of science and the community at large. The purpose of this paper is therefore to (1) describe a novel Web application designed to share information on study data sets focusing on epidemiological clinical research in a collaborative environment and (2) create a policy model placing this collaborative environment into the current scientific social context. The Database of Databases application was developed based on feedback from epidemiologists and clinical researchers requiring a Web-based platform that would allow for sharing of information about epidemiological and clinical study data sets in a collaborative environment. This platform should ensure that researchers can modify the information. Model-based predictions of the number of publications and the funding resulting from combinations of different policy implementation strategies (for metadata and data sharing) were generated using System Dynamics modeling. The application allows researchers to easily upload information about clinical study data sets, which is searchable and modifiable by other users in a wiki environment. All modifications are filtered by the database principal investigator in order to maintain quality control. The application has been extensively tested and currently contains 130 clinical study data sets from the United States, Australia, China and Singapore. Model results indicated that any policy implementation would be better than the current strategy, that metadata sharing is better than data sharing, and that combined policies achieve the best results in terms of publications.
Based on our empirical observations and resulting model, the social network environment surrounding the application can help epidemiologists and clinical researchers contribute and search for metadata in a collaborative environment, thus potentially facilitating collaboration among research communities distributed around the globe.
Drugs@FDA: FDA Approved Drug Products
Physical Samples Linked Data in Action
NASA Astrophysics Data System (ADS)
Ji, P.; Arko, R. A.; Lehnert, K.; Bristol, S.
2017-12-01
Most data and metadata related to physical samples currently reside in isolated relational databases driven by diverse data models. The challenge of sharing, interchanging, and integrating data from these different relational databases motivated us to publish Linked Open Data for collections of physical samples, using Semantic Web technologies including the Resource Description Framework (RDF), the SPARQL query language, and the Web Ontology Language (OWL). In the last few years, we have released four knowledge graphs centered on physical samples: the System for Earth Sample Registration (SESAR), the USGS National Geochemical Database (NGDC), the Ocean Biogeographic Information System (OBIS), and the EarthChem Database. Currently the four knowledge graphs contain over 12 million facts (triples) about objects of interest to the geoscience domain. Choosing appropriate domain ontologies to represent the context of the data is the core of the whole work. The GeoLink ontology, developed by the EarthCube GeoLink project, was used as the top level to represent common concepts like person, organization, and cruise. The physical sample ontology developed by the Interdisciplinary Earth Data Alliance (IEDA) and the Darwin Core vocabulary were used as the second level to describe details of geological samples and biological diversity. We also focused on finding and building the best tool chains to support the whole life cycle of publishing our linked data, including information retrieval, linked data browsing, and data visualization. Currently, Morph, Virtuoso Server, LodView, LodLive, and YASGUI are employed for converting, storing, representing, and querying data in a knowledge base (RDF triplestore). Persistent digital identifiers are another main focus.
Open Researcher & Contributor IDs (ORCIDs), International Geo Sample Numbers (IGSNs), Global Research Identifier Database (GRID) and other persistent identifiers were used to link different resources from various graphs with person, sample, organization, cruise, etc. This work is supported by the EarthCube "GeoLink" project (NSF# ICER14-40221 and others) and the "USGS-IEDA Partnership to Support a Data Lifecycle Framework and Tools" project (USGS# G13AC00381).
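The triple model behind these knowledge graphs can be sketched without any RDF library: each fact is a (subject, predicate, object) triple, and a query is a pattern with wildcards. All identifiers below (the IGSN, cruise, and ORCID values) are invented for illustration.

```python
# A dependency-free sketch of an RDF-style triple store.
triples = {
    ("igsn:ABC123",   "rdf:type",        "ex:PhysicalSample"),
    ("igsn:ABC123",   "ex:collectedOn",  "cruise:EX1701"),
    ("igsn:ABC123",   "ex:registeredBy", "orcid:0000-0000-0000-0001"),
    ("cruise:EX1701", "rdf:type",        "ex:Cruise"),
}

def match(pattern, store):
    """Return all triples matching an (s, p, o) pattern; None = wildcard."""
    return [t for t in store
            if all(p is None or p == v for p, v in zip(pattern, t))]

# "Which entities are physical samples?"
samples = match((None, "rdf:type", "ex:PhysicalSample"), triples)
print(samples)  # [('igsn:ABC123', 'rdf:type', 'ex:PhysicalSample')]

# "Everything we know about sample ABC123" -- linking the sample to a
# cruise and a person, as the persistent identifiers above are meant to do.
for s, p, o in sorted(match(("igsn:ABC123", None, None), triples)):
    print(p, o)
```

A SPARQL endpoint such as the Virtuoso setup mentioned above performs essentially this pattern matching, at scale and with inference, over the same triple representation.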
Evidence in the learning organization
Crites, Gerald E; McNamara, Megan C; Akl, Elie A; Richardson, W Scott; Umscheid, Craig A; Nishikawa, James
2009-01-01
Background Organizational leaders in business and medicine have been experiencing a similar dilemma: how to ensure that their organizational members are adopting work innovations in a timely fashion. Organizational leaders in healthcare have attempted to resolve this dilemma by offering specific solutions, such as evidence-based medicine (EBM), but organizations are still not systematically adopting evidence-based practice innovations as rapidly as expected by policy-makers (the knowing-doing gap problem). Some business leaders have adopted a systems-based perspective, called the learning organization (LO), to address a similar dilemma. Three years ago, the Society of General Internal Medicine's Evidence-based Medicine Task Force began an inquiry to integrate the EBM and LO concepts into one model to address the knowing-doing gap problem. Methods During the model development process, the authors searched several databases for relevant LO frameworks and their related concepts by using a broad search strategy. To identify the key LO frameworks and consolidate them into one model, the authors used consensus-based decision-making and a narrative thematic synthesis guided by several qualitative criteria. The authors subjected the model to external, independent review and improved upon its design with this feedback. Results The authors found seven LO frameworks particularly relevant to evidence-based practice innovations in organizations. The authors describe their interpretations of these frameworks for healthcare organizations, the process they used to integrate the LO frameworks with EBM principles, and the resulting Evidence in the Learning Organization (ELO) model. They also provide a health organization scenario to illustrate ELO concepts in application. Conclusion The authors intend, by sharing the LO frameworks and the ELO model, to help organizations identify their capacities to learn and share knowledge about evidence-based practice innovations. 
The ELO model will need further validation and improvement through its use in organizational settings and applied health services research. PMID:19323819
Local concurrent error detection and correction in data structures using virtual backpointers
NASA Technical Reports Server (NTRS)
Li, C. C.; Chen, P. P.; Fuchs, W. K.
1987-01-01
A new technique, based on virtual backpointers, for local concurrent error detection and correction in linked data structures is presented. Two new data structures, the Virtual Double Linked List and the B-tree with Virtual Backpointers, are described. For these structures, double errors can be detected in O(1) time and errors detected during forward moves can be corrected in O(1) time. The application of a concurrent auditor process to data structure error detection and correction is analyzed, and an implementation is described, to determine the effect on mean time to failure of a multi-user shared database system. The implementation utilizes a Sequent shared memory multiprocessor system operating on a shared database of Virtual Double Linked Lists.
Local concurrent error detection and correction in data structures using virtual backpointers
NASA Technical Reports Server (NTRS)
Li, Chung-Chi Jim; Chen, Paul Peichuan; Fuchs, W. Kent
1989-01-01
A new technique, based on virtual backpointers, for local concurrent error detection and correction in linked data structures is presented. Two new data structures, the Virtual Double Linked List and the B-tree with Virtual Backpointers, are described. For these structures, double errors can be detected in O(1) time and errors detected during forward moves can be corrected in O(1) time. The application of a concurrent auditor process to data structure error detection and correction is analyzed, and an implementation is described, to determine the effect on mean time to failure of a multi-user shared database system. The implementation utilizes a Sequent shared memory multiprocessor system operating on a shared database of Virtual Double Linked Lists.
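The flavor of local O(1) checking can be sketched with a toy, array-based list. This is not the papers' exact construction; it only assumes the general idea of storing redundant link information (here, an XOR of neighbor indices) so a forward traversal can validate each step locally instead of auditing the whole structure.

```python
NIL = 0  # index 0 serves as a sentinel

class VDLLNode:
    def __init__(self, next_idx, prev_idx):
        self.next = next_idx
        # "Virtual" backpointer: XOR of predecessor and successor
        # indices. The predecessor is recoverable as virtual ^ next,
        # so a mismatch flags a corrupted node during a forward move.
        self.virtual = prev_idx ^ next_idx

def check_step(nodes, cur, prev):
    """O(1) local check: does the node we just reached agree that we
    arrived from `prev`?"""
    node = nodes[cur]
    return (node.virtual ^ node.next) == prev

# Build the list 1 -> 2 -> 3 (keys are node indices; 0 is the sentinel).
nodes = {1: VDLLNode(2, NIL), 2: VDLLNode(3, 1), 3: VDLLNode(NIL, 2)}

prev, cur, ok = NIL, 1, True
while cur != NIL:
    ok = ok and check_step(nodes, cur, prev)
    prev, cur = cur, nodes[cur].next
print(ok)  # True: an intact list passes every local check

nodes[2].virtual ^= 4            # inject a single-field error
print(check_step(nodes, 2, 1))   # False: detected locally at node 2
```

Because each check touches only the current node, detection happens during normal forward traversal, which is what makes a concurrent auditor process cheap in a shared database setting.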
The Matchmaker Exchange: a platform for rare disease gene discovery.
Philippakis, Anthony A; Azzariti, Danielle R; Beltran, Sergi; Brookes, Anthony J; Brownstein, Catherine A; Brudno, Michael; Brunner, Han G; Buske, Orion J; Carey, Knox; Doll, Cassie; Dumitriu, Sergiu; Dyke, Stephanie O M; den Dunnen, Johan T; Firth, Helen V; Gibbs, Richard A; Girdea, Marta; Gonzalez, Michael; Haendel, Melissa A; Hamosh, Ada; Holm, Ingrid A; Huang, Lijia; Hurles, Matthew E; Hutton, Ben; Krier, Joel B; Misyura, Andriy; Mungall, Christopher J; Paschall, Justin; Paten, Benedict; Robinson, Peter N; Schiettecatte, François; Sobreira, Nara L; Swaminathan, Ganesh J; Taschner, Peter E; Terry, Sharon F; Washington, Nicole L; Züchner, Stephan; Boycott, Kym M; Rehm, Heidi L
2015-10-01
There are few better examples of the need for data sharing than in the rare disease community, where patients, physicians, and researchers must search for "the needle in a haystack" to uncover rare, novel causes of disease within the genome. Impeding the pace of discovery has been the existence of many small siloed datasets within individual research or clinical laboratory databases and/or disease-specific organizations, hoping for serendipitous occasions when two distant investigators happen to learn they have a rare phenotype in common and can "match" these cases to build evidence for causality. However, serendipity has never proven to be a reliable or scalable approach in science. As such, the Matchmaker Exchange (MME) was launched to provide a robust and systematic approach to rare disease gene discovery through the creation of a federated network connecting databases of genotypes and rare phenotypes using a common application programming interface (API). The core building blocks of the MME have been defined and assembled. Three MME services have now been connected through the API and are available for community use. Additional databases that support internal matching are anticipated to join the MME network as it continues to grow. © 2015 WILEY PERIODICALS, INC.
The Global Genome Biodiversity Network (GGBN) Data Standard specification
Droege, G.; Barker, K.; Seberg, O.; Coddington, J.; Benson, E.; Berendsohn, W. G.; Bunk, B.; Butler, C.; Cawsey, E. M.; Deck, J.; Döring, M.; Flemons, P.; Gemeinholzer, B.; Güntsch, A.; Hollowell, T.; Kelbert, P.; Kostadinov, I.; Kottmann, R.; Lawlor, R. T.; Lyal, C.; Mackenzie-Dodds, J.; Meyer, C.; Mulcahy, D.; Nussbeck, S. Y.; O'Tuama, É.; Orrell, T.; Petersen, G.; Robertson, T.; Söhngen, C.; Whitacre, J.; Wieczorek, J.; Yilmaz, P.; Zetzsche, H.; Zhang, Y.; Zhou, X.
2016-01-01
Genomic samples of non-model organisms are becoming increasingly important in a broad range of studies from developmental biology and biodiversity analyses to conservation. Genomic sample definition, description, quality, voucher information and metadata all need to be digitized and disseminated across scientific communities. This information needs to be concise and consistent in today’s ever-expanding bioinformatics era, so that complementary data aggregators can easily map databases to one another. In order to facilitate exchange of information on genomic samples and their derived data, the Global Genome Biodiversity Network (GGBN) Data Standard is intended to provide a platform based on a documented agreement to promote the efficient sharing and usage of genomic sample material and associated specimen information in a consistent way. The new data standard presented here builds upon existing standards commonly used within the community, extending them with the capability to exchange data on tissue, environmental and DNA samples as well as sequences. The GGBN Data Standard will reveal and democratize the hidden contents of biodiversity biobanks, for the convenience of everyone in the wider biobanking community. Technical tools exist for data providers to easily map their databases to the standard. Database URL: http://terms.tdwg.org/wiki/GGBN_Data_Standard PMID:27694206
The Matchmaker Exchange: A Platform for Rare Disease Gene Discovery
Philippakis, Anthony A.; Azzariti, Danielle R.; Beltran, Sergi; Brookes, Anthony J.; Brownstein, Catherine A.; Brudno, Michael; Brunner, Han G.; Buske, Orion J.; Carey, Knox; Doll, Cassie; Dumitriu, Sergiu; Dyke, Stephanie O.M.; den Dunnen, Johan T.; Firth, Helen V.; Gibbs, Richard A.; Girdea, Marta; Gonzalez, Michael; Haendel, Melissa A.; Hamosh, Ada; Holm, Ingrid A.; Huang, Lijia; Hurles, Matthew E.; Hutton, Ben; Krier, Joel B.; Misyura, Andriy; Mungall, Christopher J.; Paschall, Justin; Paten, Benedict; Robinson, Peter N.; Schiettecatte, François; Sobreira, Nara L.; Swaminathan, Ganesh J.; Taschner, Peter E.; Terry, Sharon F.; Washington, Nicole L.; Züchner, Stephan; Boycott, Kym M.; Rehm, Heidi L.
2015-01-01
There are few better examples of the need for data sharing than in the rare disease community, where patients, physicians, and researchers must search for “the needle in a haystack” to uncover rare, novel causes of disease within the genome. Impeding the pace of discovery has been the existence of many small siloed datasets within individual research or clinical laboratory databases and/or disease-specific organizations, hoping for serendipitous occasions when two distant investigators happen to learn they have a rare phenotype in common and can “match” these cases to build evidence for causality. However, serendipity has never proven to be a reliable or scalable approach in science. As such, the Matchmaker Exchange (MME) was launched to provide a robust and systematic approach to rare disease gene discovery through the creation of a federated network connecting databases of genotypes and rare phenotypes using a common application programming interface (API). The core building blocks of the MME have been defined and assembled. Three MME services have now been connected through the API and are available for community use. Additional databases that support internal matching are anticipated to join the MME network as it continues to grow. PMID:26295439
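The federated matching described above happens over a common API. The sketch below assembles (but does not send) a match request in the spirit of the published Matchmaker Exchange API; treat the exact field names as illustrative rather than normative.

```python
import json

def build_match_request(patient_id, hpo_terms, gene_symbols):
    """Assemble the JSON body for an MME-style /match query (nothing is sent).

    Phenotypes are expressed as HPO term IDs and candidate genes as gene
    symbols, following the general shape of the Matchmaker Exchange API.
    """
    return {
        "patient": {
            "id": patient_id,
            "features": [{"id": term} for term in hpo_terms],
            "genomicFeatures": [{"gene": {"id": g}} for g in gene_symbols],
        }
    }

# Hypothetical case: two phenotype terms and one candidate gene.
request_body = build_match_request(
    "case-0001", ["HP:0001263", "HP:0011968"], ["KMT2D"]
)
print(json.dumps(request_body, indent=2))
```

A real service would POST this body to a partner node's match endpoint and score the returned candidate cases; authentication and content-type negotiation are omitted here.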
Management of information in distributed biomedical collaboratories.
Keator, David B
2009-01-01
Organizing and annotating biomedical data in structured ways has gained much interest and focus in the last 30 years. Driven by decreases in digital storage costs and advances in genetics sequencing, imaging, electronic data collection, and microarray technologies, data is being collected at an alarming rate. The specialization of fields in biology and medicine demonstrates the need for somewhat different structures for storage and retrieval of data. For biologists, development is driven by the need for structured information and integration across a number of domains. For clinical researchers and hospitals, the driving developmental factors are the need for a structured medical record accessible, ideally, to any medical practitioner who might require it during the course of research or patient treatment, together with patient confidentiality and security. Scientific data management systems generally consist of a few core services: a backend database system, a front-end graphical user interface, and an export/import mechanism or data interchange format to both get data into and out of the database and share data with collaborators. The chapter introduces some existing databases, distributed file systems, and interchange languages used within the biomedical research and clinical communities for scientific data management and exchange.
NASA Astrophysics Data System (ADS)
Satheendran, S.; John, C. M.; Fasalul, F. K.; Aanisa, K. M.
2014-11-01
Web geoservices are the natural graduation of Geographic Information Systems into a distributed environment, accessible through a simple browser. They enable organizations to share domain-specific, rich and dynamic spatial information over the web. The present study attempted to design and develop a web-enabled GIS application for the School of Environmental Sciences, Mahatma Gandhi University, Kottayam, Kerala, India to publish various geographical databases to the public through its website. The development of this project is based upon open-source tools and techniques, and the resulting portal site is platform independent. The WebGIS framework GeoMoose is utilized, with Apache as the web server and the UMN MapServer as the map server. The portal provides various customised tools to query the geographical database in different ways and to search for facilities in the geographical area such as banks, attractive places, hospitals and hotels. The portal site was tested with the output geographical databases of two projects of the School: 1) the Tourism Information System for the Malabar region of Kerala State, consisting of the five northern districts, and 2) the geoenvironmental appraisal of the Athirappilly Hydroelectric Project, covering the entire Chalakkudy river basin.
DroSpeGe: rapid access database for new Drosophila species genomes.
Gilbert, Donald G
2007-01-01
The Drosophila species comparative genome database DroSpeGe (http://insects.eugenes.org/DroSpeGe/) has provided genome researchers with rapid, usable access to 12 new and old Drosophila genomes since its inception in 2004. Scientists can use, with minimal computing expertise, the wealth of new genome information for developing new insights into insect evolution. New genome assemblies provided by several sequencing centers have been annotated with known model organism gene homologies and gene predictions to provide basic comparative data. TeraGrid supplies the shared cyberinfrastructure for the primary computations. This genome database includes homologies to Drosophila melanogaster and eight other eukaryote model genomes, and gene predictions from several groups. BLAST searches of the newest assemblies are integrated with genome maps. GBrowse maps provide detailed views of cross-species aligned genomes. BioMart provides for data mining of annotations and sequences. Common chromosome maps identify major synteny among species. Potential gain and loss of genes is suggested by Gene Ontology groupings for genes of the new species. Summaries of essential genome statistics include sizes, genes found and predicted, homology among genomes, phylogenetic trees of species, and comparisons of several gene predictions for sensitivity and specificity in finding new and known genes.
NASA Astrophysics Data System (ADS)
Morgado, A.; Sánchez-Lavega, A.; Rojas, J. F.; Hueso, R.
2005-08-01
The collaboration between amateur astronomers and the professional community has been fruitful in many areas of astronomy. The development of the Internet has allowed a better-than-ever capability of sharing information worldwide and accessing other observers' data. For many years now the International Jupiter Watch (IJW) Atmospheric discipline has coordinated observational efforts for long-term studies of the atmosphere of Jupiter. The International Outer Planets Watch (IOPW) has extended these labours to the four outer planets. Here we present the Planetary Virtual Observatory & Laboratory (PVOL), a website database where we integrate IJW and IOPW images. At PVOL, observers can submit their data and professionals can search for images under a wide variety of useful criteria such as date and time, filters used, observer, or central meridian longitude. PVOL is intended to grow as an organized, easy-to-use database of amateur images of the outer planets. PVOL is located at http://www.pvol.ehu.es/ and coexists with the traditional IOPW site: http://www.ehu.es/iopw/ Acknowledgements: This work has been funded by Spanish MCYT PNAYA2003-03216, fondos FEDER and Grupos UPV 15946/2004. R. Hueso acknowledges a post-doc fellowship from Gobierno Vasco.
SmedGD 2.0: The Schmidtea mediterranea genome database
Robb, Sofia M.C.; Gotting, Kirsten; Ross, Eric; Sánchez Alvarado, Alejandro
2016-01-01
Planarians have emerged as excellent models for the study of key biological processes such as stem cell function and regulation, axial polarity specification, regeneration, and tissue homeostasis, among others. The most widely used organism for these studies is the free-living flatworm Schmidtea mediterranea. In 2007, the Schmidtea mediterranea Genome Database (SmedGD) was first released to provide a much needed resource for the small but growing planarian community. SmedGD 1.0 has been a repository for genome sequence, a draft assembly, and related experimental data (e.g., RNAi phenotypes, in situ hybridization images, and differential gene expression results). We report here a comprehensive update to SmedGD (SmedGD 2.0) that aims to expand its role as an interactive community resource. The new database includes more recent and up-to-date transcription data, and provides tools that enhance interconnectivity between different genome assemblies and transcriptomes, including next-generation assemblies for both the sexual and asexual biotypes of S. mediterranea. SmedGD 2.0 (http://smedgd.stowers.org) not only provides significantly improved gene annotations, but also tools for data sharing, attributes that will help both the planarian and biomedical communities to more efficiently mine the genomics and transcriptomics of S. mediterranea. PMID:26138588
Dataworks for GNSS: Software for Supporting Data Sharing and Federation of Geodetic Networks
NASA Astrophysics Data System (ADS)
Boler, F. M.; Meertens, C. M.; Miller, M. M.; Wier, S.; Rost, M.; Matykiewicz, J.
2015-12-01
Continuously-operating Global Navigation Satellite System (GNSS) networks are increasingly being installed globally for a wide variety of science and societal applications. GNSS enables Earth science research in areas including tectonic plate interactions, crustal deformation in response to loading by tectonics, magmatism, water and ice, and the dynamics of water - and thereby energy transfer - in the atmosphere at regional scale. The many individual scientists and organizations that set up GNSS stations globally are often open to sharing data, but lack the resources or expertise to deploy systems and software to manage and curate data and metadata and provide user tools that would support data sharing. UNAVCO previously gained experience in facilitating data sharing through the NASA-supported development of the Geodesy Seamless Archive Centers (GSAC) open source software. GSAC provides web interfaces and simple web services for data and metadata discovery and access, supports federation of multiple data centers, and simplifies transfer of data and metadata to long-term archives. The NSF supported the dissemination of GSAC to multiple European data centers forming the European Plate Observing System. To expand upon GSAC to provide end-to-end, instrument-to-distribution capability, UNAVCO developed Dataworks for GNSS with NSF funding to the COCONet project, and deployed this software on systems that are now operating as Regional GNSS Data Centers as part of the NSF-funded TLALOCNet and COCONet projects. Dataworks consists of software modules written in Python and Java for data acquisition, management and sharing. There are modules for GNSS receiver control and data download, a database schema for metadata, tools for metadata handling, ingest software to manage file metadata, data file management scripts, GSAC, scripts for mirroring station data and metadata from partner GSACs, and extensive software and operator documentation. 
UNAVCO plans to provide a cloud VM image of Dataworks that would allow standing up a Dataworks-enabled GNSS data center without requiring upfront investment in server hardware. By enabling data creators to organize their data and metadata for sharing, Dataworks helps scientists expand their data curation awareness and responsibility, and enhances data access for all.
Agrafiotis, Dimitris K; Alex, Simson; Dai, Heng; Derkinderen, An; Farnum, Michael; Gates, Peter; Izrailev, Sergei; Jaeger, Edward P; Konstant, Paul; Leung, Albert; Lobanov, Victor S; Marichal, Patrick; Martin, Douglas; Rassokhin, Dmitrii N; Shemanarev, Maxim; Skalkin, Andrew; Stong, John; Tabruyn, Tom; Vermeiren, Marleen; Wan, Jackson; Xu, Xiang Yang; Yao, Xiang
2007-01-01
We present ABCD, an integrated drug discovery informatics platform developed at Johnson & Johnson Pharmaceutical Research & Development, L.L.C. ABCD is an attempt to bridge multiple continents, data systems, and cultures using modern information technology and to provide scientists with tools that allow them to analyze multifactorial SAR and make informed, data-driven decisions. The system consists of three major components: (1) a data warehouse, which combines data from multiple chemical and pharmacological transactional databases, designed for supreme query performance; (2) a state-of-the-art application suite, which facilitates data upload, retrieval, mining, and reporting, and (3) a workspace, which facilitates collaboration and data sharing by allowing users to share queries, templates, results, and reports across project teams, campuses, and other organizational units. Chemical intelligence, performance, and analytical sophistication lie at the heart of the new system, which was developed entirely in-house. ABCD is used routinely by more than 1000 scientists around the world and is rapidly expanding into other functional areas within the J&J organization.
cMapper: gene-centric connectivity mapper for EBI-RDF platform.
Shoaib, Muhammad; Ansari, Adnan Ahmad; Ahn, Sung-Min
2017-01-15
In this era of biological big data, data integration has become a common task and a challenge for biologists. The Resource Description Framework (RDF) was developed to enable interoperability of heterogeneous datasets. The EBI-RDF platform enables an efficient data integration of six independent biological databases using RDF technologies and shared ontologies. However, to take advantage of this platform, biologists need to be familiar with RDF technologies and the SPARQL query language. To overcome this practical limitation of the EBI-RDF platform, we developed cMapper, a web-based tool that enables biologists to search the EBI-RDF databases in a gene-centric manner without a thorough knowledge of RDF and SPARQL. cMapper allows biologists to search data entities in the EBI-RDF platform that are connected to genes or small molecules of interest in multiple biological contexts. The input to cMapper consists of a set of genes or small molecules, and the output is a set of data entities in six independent EBI-RDF databases connected with the given genes or small molecules in the user's query. cMapper provides output to users in the form of a graph in which nodes represent data entities and edges represent connections between data entities and the inputted set of genes or small molecules. Furthermore, users can apply filters based on database, taxonomy, organ and pathways in order to focus on a core connectivity graph of their interest. Data entities from multiple databases are differentiated based on background colors. cMapper also enables users to investigate shared connections between genes or small molecules of interest. Users can view the output graph in a web browser or download it in either GraphML or JSON format. cMapper is available as a web application with an integrated MySQL database. The web application was developed using Java and deployed on a Tomcat server. We developed the user interface using HTML5, JQuery and the Cytoscape Graph API.
cMapper can be accessed at http://cmapper.ewostech.net Readers can download the development manual from the website http://cmapper.ewostech.net/docs/cMapperDocumentation.pdf. Source code is available at https://github.com/muhammadshoaib/cmapper Contact: smahn@gachon.ac.kr Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
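The gene-centric connectivity graph and per-database filter described above can be sketched with plain Python data structures. The entity identifiers and database names below are invented for illustration, not taken from the EBI-RDF platform.

```python
# Toy edge list: (gene, connected data entity, source database).
# Identifiers and database names are illustrative stand-ins.
edges = [
    ("TP53", "CHEMBL:CHEMBL301", "ChEMBL"),
    ("TP53", "ENSG00000141510", "Ensembl"),
    ("BRCA1", "REACT:R-HSA-5693532", "Reactome"),
]

def connectivity(genes, database_filter=None):
    """Return edges touching the query genes, optionally from one database.

    Mirrors cMapper's idea of filtering the connectivity graph by database
    so the user can focus on a core subgraph of interest.
    """
    return [
        (gene, entity, db)
        for gene, entity, db in edges
        if gene in genes and (database_filter is None or db == database_filter)
    ]

print(connectivity({"TP53"}, database_filter="Ensembl"))
# one edge: ('TP53', 'ENSG00000141510', 'Ensembl')
```

The real tool resolves such edges by translating the gene-centric query into SPARQL against the six EBI-RDF endpoints; this sketch only reproduces the filtering step on an already-materialized edge list.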
Organizing Diverse, Distributed Project Information
NASA Technical Reports Server (NTRS)
Keller, Richard M.
2003-01-01
SemanticOrganizer is a software application designed to organize and integrate information generated within a distributed organization or as part of a project that involves multiple, geographically dispersed collaborators. SemanticOrganizer incorporates the capabilities of database storage, document sharing, hypermedia navigation, and semantic interlinking into a system that can be customized to satisfy the specific information-management needs of different user communities. The program provides a centralized repository of information that is both secure and accessible to project collaborators via the World Wide Web. SemanticOrganizer's repository can be used to collect diverse information (including forms, documents, notes, data, spreadsheets, images, and sounds) from computers at collaborators' work sites. The program organizes the information using a unique network-structured conceptual framework, wherein each node represents a data record that contains not only the original information but also metadata (in effect, standardized data that characterize the information). Links among nodes express semantic relationships among the data records. The program features a Web interface through which users enter, interlink, and/or search for information in the repository. By use of this repository, the collaborators have immediate access to the most recent project information, as well as to archived information. A key advantage of SemanticOrganizer is its ability to interlink information together in a natural fashion using customized terminology and concepts that are familiar to a user community.
Issues in Big-Data Database Systems
2014-06-01
Berman, Jules K. (2013). Principles of Big Data: Preparing, Sharing, and Analyzing Complex Information. New York: Elsevier. 261 pp.
Fine-grained policy control in U.S. Army Research Laboratory (ARL) multimodal signatures database
NASA Astrophysics Data System (ADS)
Bennett, Kelly; Grueneberg, Keith; Wood, David; Calo, Seraphin
2014-06-01
The U.S. Army Research Laboratory (ARL) Multimodal Signatures Database (MMSDB) consists of a number of colocated relational databases representing a collection of data from various sensors. Role-based access to this data is granted to external organizations such as DoD contractors and other government agencies through a client Web portal. In the current MMSDB system, access control is only at the database and firewall level. In order to offer finer-grained security, changes to existing user profile schemas and authentication mechanisms are usually needed. In this paper, we describe a software middleware architecture and implementation that allows fine-grained access control to the MMSDB at a dataset, table, and row level. Result sets from MMSDB queries issued in the client portal are filtered with the use of a policy enforcement proxy, with minimal changes to the existing client software and database. Before resulting data is returned to the client, policies are evaluated to determine if the user or role is authorized to access the data. Policies can be authored to filter data at the row, table or column level of a result set. The system uses various technologies developed in the International Technology Alliance in Network and Information Science (ITA) for policy-controlled information sharing and dissemination. Use of the Policy Management Library provides a mechanism for the management and evaluation of policies to support finer-grained access to the data in the MMSDB system. The GaianDB is a policy-enabled, federated database that acts as a proxy between the client application and the MMSDB system.
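The row-level filtering performed by a policy enforcement proxy can be sketched as evaluating each result row against a set of policy predicates before the result set reaches the client. The roles and policies below are invented examples, not ARL's actual policy language.

```python
# Sketch of a policy enforcement proxy filtering query results row by row.
# Each policy is a predicate over (role, row); a row survives only if every
# policy permits it. Roles, rows, and the policy itself are hypothetical.
def enforce(rows, role, policies):
    """Return only the rows that every policy permits for this role."""
    return [row for row in rows if all(policy(role, row) for policy in policies)]

rows = [
    {"sensor": "acoustic", "classification": "public"},
    {"sensor": "seismic", "classification": "restricted"},
]

# Example policy: analysts see everything; other roles see only public rows.
only_public = lambda role, row: role == "analyst" or row["classification"] == "public"

print(enforce(rows, "contractor", [only_public]))
# [{'sensor': 'acoustic', 'classification': 'public'}]
```

Because the filtering happens in the proxy, the client portal and the backing databases need no schema changes, which matches the paper's stated design goal.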
Ambiguity of non-systematic chemical identifiers within and between small-molecule databases.
Akhondi, Saber A; Muresan, Sorel; Williams, Antony J; Kors, Jan A
2015-01-01
A wide range of chemical compound databases are currently available for pharmaceutical research. To retrieve compound information, including structures, researchers can query these chemical databases using non-systematic identifiers. These are source-dependent identifiers (e.g., brand names, generic names), which are usually assigned to the compound at the point of registration. The correctness of non-systematic identifiers (i.e., whether an identifier matches the associated structure) can only be assessed manually, which is cumbersome, but it is possible to automatically check their ambiguity (i.e., whether an identifier matches more than one structure). In this study we have quantified the ambiguity of non-systematic identifiers within and between eight widely used chemical databases. We also studied the effect of chemical structure standardization on reducing the ambiguity of non-systematic identifiers. The ambiguity of non-systematic identifiers within databases varied from 0.1 to 15.2 % (median 2.5 %). Standardization reduced the ambiguity only to a small extent for most databases. A wide range of ambiguity existed for non-systematic identifiers that are shared between databases (17.7-60.2 %, median of 40.3 %). Removing stereochemistry information provided the largest reduction in ambiguity across databases (median reduction 13.7 percentage points). Ambiguity of non-systematic identifiers within chemical databases is generally low, but ambiguity of non-systematic identifiers that are shared between databases, is high. Chemical structure standardization reduces the ambiguity to a limited extent. Our findings can help to improve database integration, curation, and maintenance.
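The within-database ambiguity measured in this study, i.e. the share of identifiers that map to more than one distinct structure, can be computed in a few lines. The identifier names and structure keys below are made up for illustration.

```python
from collections import defaultdict

def ambiguity_percent(pairs):
    """Percentage of identifiers mapping to more than one distinct structure.

    pairs: iterable of (identifier, structure_key) tuples, e.g. a name paired
    with a standardized structure representation such as an InChIKey.
    """
    structures = defaultdict(set)
    for identifier, structure in pairs:
        structures[identifier].add(structure)
    ambiguous = sum(1 for s in structures.values() if len(s) > 1)
    return 100.0 * ambiguous / len(structures)

# Hypothetical data: one consistent identifier, one ambiguous identifier.
pairs = [
    ("aspirin", "InChIKey-A"),
    ("aspirin", "InChIKey-A"),   # repeated but consistent
    ("tylenol", "InChIKey-B"),
    ("tylenol", "InChIKey-C"),   # same name, two structures -> ambiguous
]
print(ambiguity_percent(pairs))  # 50.0
```

The study's standardization step amounts to normalizing the structure keys (e.g. stripping stereochemistry) before this count, which merges near-duplicate structures and can lower the measured ambiguity.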
Ahmed, Zeeshan; Zeeshan, Saman; Fleischmann, Pauline; Rössler, Wolfgang; Dandekar, Thomas
2014-01-01
Field studies on arthropod ecology and behaviour require simple and robust monitoring tools, preferably with direct access to an integrated database. We have developed and here present a database tool allowing smart-phone based monitoring of arthropods. This smart phone application provides an easy solution to collect, manage and process the data in the field which has been a very difficult task for field biologists using traditional methods. To monitor our example species, the desert ant Cataglyphis fortis, we considered behavior, nest search runs, feeding habits and path segmentations including detailed information on solar position and azimuth calculation, ant orientation and time of day. For this we established a user friendly database system integrating the Ant-App-DB with a smart phone and tablet application, combining experimental data manipulation with data management and providing solar position and timing estimations without any GPS or GIS system. Moreover, the new desktop application Dataplus allows efficient data extraction and conversion from smart phone application to personal computers, for further ecological data analysis and sharing. All features, software code and database as well as Dataplus application are made available completely free of charge and sufficiently generic to be easily adapted to other field monitoring studies on arthropods or other migratory organisms. The software applications Ant-App-DB and Dataplus described here are developed using the Android SDK, Java, XML, C# and SQLite Database.
BLAST and FASTA similarity searching for multiple sequence alignment.
Pearson, William R
2014-01-01
BLAST, FASTA, and other similarity searching programs seek to identify homologous proteins and DNA sequences based on excess sequence similarity. If two sequences share much more similarity than expected by chance, the simplest explanation for the excess similarity is common ancestry (homology). The most effective similarity searches compare protein sequences, rather than DNA sequences, for sequences that encode proteins, and use expectation values, rather than percent identity, to infer homology. The BLAST and FASTA packages of sequence comparison programs provide programs for comparing protein and DNA sequences to protein databases (the most sensitive searches). Protein and translated-DNA comparisons to protein databases routinely allow evolutionary look-back times from 1 to 2 billion years; DNA:DNA searches are 5-10-fold less sensitive. BLAST and FASTA can be run on popular web sites, but can also be downloaded and installed on local computers. With local installation, target databases can be customized for the sequence data being characterized. With today's very large protein databases, search sensitivity can also be improved by searching smaller comprehensive databases, for example, a complete protein set from an evolutionarily neighboring model organism. By default, BLAST and FASTA use scoring strategies targeted for distant evolutionary relationships; for comparisons involving short domains or queries, or searches that seek relatively close homologs (e.g. mouse-human), shallower scoring matrices will be more effective. Both BLAST and FASTA provide very accurate statistical estimates, which can be used to reliably identify protein sequences that diverged more than 2 billion years ago.
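The abstract's advice to infer homology from expectation values rather than percent identity can be sketched as a simple threshold filter over search hits. The 1e-6 cutoff below is a common rule of thumb, not a value from the source, and the hit names are invented.

```python
# Sketch: keep hits whose E-value says the similarity is unlikely by chance.
# The E-value is the number of hits of at least this score expected at random
# in a database of this size, so smaller is stronger evidence of homology.
def likely_homologs(hits, e_threshold=1e-6):
    """hits: iterable of (subject_name, e_value); returns names passing the cutoff."""
    return [name for name, e_value in hits if e_value < e_threshold]

hits = [
    ("YP_12345", 3e-40),  # overwhelming evidence of homology
    ("XP_67890", 0.8),    # expected by chance; no inference possible
    ("NP_11111", 1e-9),   # still well below threshold
]
print(likely_homologs(hits))  # ['YP_12345', 'NP_11111']
```

Note that a hit with 30% identity can be a confident homolog while one with 40% identity is not, depending on alignment length and database size; that is exactly why the E-value, which accounts for both, is the recommended statistic.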
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fisher, D.
Concerns about the long-term viability of SFS as the metadata store for HPSS have been increasing. A concern that Transarc may discontinue support for SFS motivates us to consider alternative means to store HPSS metadata. The obvious alternative is a commercial database. Commercial databases have the necessary characteristics for storage of HPSS metadata records. They are robust and scalable and can easily accommodate the volume of data that must be stored. They provide programming interfaces, transactional semantics and a full set of maintenance and performance enhancement tools. A team was organized within the HPSS project to study and recommend an approach for the replacement of SFS. Members of the team are David Fisher, Jim Minton, Donna Mecozzi, Danny Cook, Bart Parliman and Lynn Jones. We examined several possible solutions to the problem of replacing SFS, and recommended on May 22, 2000, in a report to the HPSS Technical and Executive Committees, to change HPSS into a database application over either Oracle or DB2. We recommended either Oracle or DB2 on the basis of market share and technical suitability. Oracle and DB2 are dominant offerings in the market, and it is in the best interest of HPSS to use a major player's product. Both databases provide a suitable programming interface. Transaction management functions, support for multi-threaded clients and data manipulation languages (DML) are available. These findings were supported in meetings held with technical experts from both companies. In both cases, the evidence indicated that either database would provide the features needed to host HPSS.
Managing troubled data: Coastal data partnerships smooth data integration
Hale, S.S.; Hale, Miglarese A.; Bradley, M.P.; Belton, T.J.; Cooper, L.D.; Frame, M.T.; Friel, C.A.; Harwell, L.M.; King, R.E.; Michener, W.K.; Nicolson, D.T.; Peterjohn, B.G.
2003-01-01
Understanding the ecology, condition, and changes of coastal areas requires data from many sources. Broad-scale and long-term ecological questions, such as global climate change, biodiversity, and cumulative impacts of human activities, must be addressed with databases that integrate data from several different research and monitoring programs. Various barriers, including widely differing data formats, codes, directories, systems, and metadata used by individual programs, make such integration troublesome. Coastal data partnerships, by helping overcome technical, social, and organizational barriers, can lead to a better understanding of environmental issues, and may enable better management decisions. Characteristics of successful data partnerships include a common need for shared data, strong collaborative leadership, committed partners willing to invest in the partnership, and clear agreements on data standards and data policy. Emerging data and metadata standards that become widely accepted are crucial. New information technology is making it easier to exchange and integrate data. Data partnerships allow us to create broader databases than would be possible for any one organization to create by itself.
NASA Technical Reports Server (NTRS)
Rilee, Michael Lee; Kuo, Kwo-Sen
2017-01-01
The SpatioTemporal Adaptive Resolution Encoding (STARE) is a unifying scheme encoding geospatial and temporal information for organizing data on scalable computing/storage resources, minimizing expensive data transfers. STARE provides a compact representation that turns set-logic functions into integer operations, e.g. conditional sub-setting, taking into account representative spatiotemporal resolutions of the data in the datasets. STARE geo-spatiotemporally aligns data placements of diverse data on massive parallel resources to maximize performance. Automating important scientific functions (e.g. regridding) and computational functions (e.g. data placement) allows scientists to focus on domain-specific questions instead of expending their efforts and expertise on data processing. With STARE-enabled automation, SciDB (Scientific Database) plus STARE provides a database interface, reducing costly data preparation, increasing the volume and variety of interoperable data, and easing result sharing. Using SciDB plus STARE as part of an integrated analysis infrastructure dramatically eases combining diametrically different datasets.
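The core idea, turning spatial set logic into integer operations, can be illustrated with a toy hierarchical encoding. This is a simplified sketch under assumed conventions (a quadtree path packed into an integer plus a level), not the actual STARE bit layout:

```python
# Toy illustration of STARE-style indexing (NOT the real STARE encoding):
# hierarchical cell IDs are packed into integers so that spatial
# containment becomes a cheap integer prefix test.

def encode(quad_path):
    """Pack a quadtree path (sequence of digits 0-3) into (code, level)."""
    code = 0
    for q in quad_path:
        code = (code << 2) | q  # two bits per refinement level
    return code, len(quad_path)

def contains(outer, inner):
    """True if region 'outer' contains region 'inner' (bit-prefix test)."""
    (oc, ol), (ic, il) = outer, inner
    return il >= ol and (ic >> (2 * (il - ol))) == oc

region = encode([1, 2])        # coarse cell
cell   = encode([1, 2, 3, 0])  # finer cell nested inside it
print(contains(region, cell))  # True: 'cell' lies within 'region'
```

Because containment reduces to a shift and a comparison, conditional sub-setting over large collections becomes a scan of integer operations rather than geometric tests.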
The precategorical nature of visual short-term memory.
Quinlan, Philip T; Cohen, Dale J
2016-11-01
We conducted a series of recognition experiments that assessed whether visual short-term memory (VSTM) is sensitive to shared category membership of to-be-remembered (tbr) images of common objects. In Experiment 1 some of the tbr items shared the same basic level category (e.g., hand axe): Such items were no better retained than others. In the remaining experiments, displays contained different images of items from the same higher-level category (e.g., food: a bagel, a sandwich, a pizza). Evidence from the later experiments did suggest that participants were sensitive to the categorical relations present in the displays. However, when separate measures of sensitivity and bias were computed, the data revealed no effects on sensitivity, but a greater tendency to respond positively to noncategory items relative to items from the depicted category. Across all experiments, there was no evidence that items from a common category were better remembered than unique items. Previous work has shown that principles of perceptual organization do affect the storage and maintenance of tbr items. The present work shows that there are no corresponding conceptual principles of organization in VSTM. It is concluded that the sort of VSTM tapped by single probe recognition methods is precategorical in nature.
Semantically Enabling Knowledge Representation of Metamorphic Petrology Data
NASA Astrophysics Data System (ADS)
West, P.; Fox, P. A.; Spear, F. S.; Adali, S.; Nguyen, C.; Hallett, B. W.; Horkley, L. K.
2012-12-01
More and more metamorphic petrology data is being collected around the world, and is now being organized together into different virtual data portals by means of virtual organizations. For example, there is the virtual data portal Petrological Database (PetDB, http://www.petdb.org) of the Ocean Floor that is organizing scientific information about geochemical data of ocean floor igneous and metamorphic rocks; and also The Metamorphic Petrology Database (MetPetDB, http://metpetdb.rpi.edu) that is being created by a global community of metamorphic petrologists in collaboration with software engineers and data managers at Rensselaer Polytechnic Institute. The current focus is to provide the ability for scientists and researchers to register their data and search the databases for information regarding sample collections. What we present here is the next step in evolution of the MetPetDB portal, utilizing semantically enabled features such as discovery, data casting, faceted search, knowledge representation, and linked data as well as organizing information about the community and collaboration within the virtual community itself. We take the information that is currently represented in a relational database and make it available through web services, SPARQL endpoints, semantic and triple-stores where inferencing is enabled. We will be leveraging research that has taken place in virtual observatories, such as the Virtual Solar Terrestrial Observatory (VSTO) and the Biological and Chemical Oceanography Data Management Office (BCO-DMO); vocabulary work done in various communities such as Observations and Measurements (ISO 19156), FOAF (Friend of a Friend), Bibo (Bibliography Ontology), and domain specific ontologies; enabling provenance traces of samples and subsamples using the different provenance ontologies; and providing the much needed linking of data from the various research organizations into a common, collaborative virtual observatory. 
In addition to better representing and presenting the actual data, we also aim to organize and represent the knowledge and expertise behind the data. Domain experts hold a great deal of knowledge in their minds, in their presentations and publications, and elsewhere. This is not only a technical issue but also a social one: we need to encourage domain experts to share their knowledge in a form that can be searched and queried. With this additional focus, MetPetDB can be used more efficiently by domain experts and also by non-specialists, educating the public about the importance of the work being done and helping to develop future domain experts.
Prieto, Claudia I; Palau, María J; Martina, Pablo; Achiary, Carlos; Achiary, Andrés; Bettiol, Marisa; Montanaro, Patricia; Cazzola, María L; Leguizamón, Mariana; Massillo, Cintia; Figoli, Cecilia; Valeiras, Brenda; Perez, Silvia; Rentería, Fernando; Diez, Graciela; Yantorno, Osvaldo M; Bosch, Alejandra
2016-01-01
The epidemiological and clinical management of cystic fibrosis (CF) patients suffering from acute pulmonary exacerbations or chronic lung infections demands continuous updating of medical and microbiological processes associated with the constant evolution of pathogens during host colonization. In order to monitor the dynamics of these processes, it is essential to have expert systems capable of storing and subsequently extracting the information generated from different studies of the patients and microorganisms isolated from them. In this work we have designed and developed an on-line database, based on an information system, that allows users to store, manage and visualize data from clinical studies and microbiological analyses of bacteria obtained from the respiratory tract of patients suffering from cystic fibrosis. The information system, named Cystic Fibrosis Cloud database, is available at the http://servoy.infocomsa.com/cfc_database site and is composed of a main database and a web-based interface, which uses Servoy's product architecture based on Java technology. Although the CFC database system can be implemented as a local program for private use in CF centers, it can also be used, updated and shared by different users who can access the stored information in a systematic, practical and safe manner. The implementation of the CFC database could have a significant impact on the monitoring of respiratory infections, the prevention of exacerbations, the detection of emerging organisms, and the adequacy of control strategies for lung infections in CF patients.
A computational model to protect patient data from location-based re-identification.
Malin, Bradley
2007-07-01
Health care organizations must preserve a patient's anonymity when disclosing personal data. Traditionally, patient identity has been protected by stripping identifiers from sensitive data such as DNA. However, simple automated methods can re-identify patient data using public information. In this paper, we present a solution to prevent a threat to patient anonymity that arises when multiple health care organizations disclose data. In this setting, a patient's location visit pattern, or "trail", can re-identify seemingly anonymous DNA to patient identity. This threat exists because health care organizations (1) cannot prevent the disclosure of certain types of patient information and (2) do not know how to systematically avoid trail re-identification. In this paper, we develop and evaluate computational methods that health care organizations can apply to disclose patient-specific DNA records that are impregnable to trail re-identification. To prevent trail re-identification, we introduce a formal model called k-unlinkability, which enables health care administrators to specify different degrees of patient anonymity. Specifically, k-unlinkability is satisfied when the trail of each DNA record is linkable to no less than k identified records. We present several algorithms that enable health care organizations to coordinate their data disclosure, so that they can determine which DNA records can be shared without violating k-unlinkability. We evaluate the algorithms with the trails of patient populations derived from publicly available hospital discharge databases. Algorithm efficacy is evaluated using metrics based on real world applications, including the number of suppressed records and the number of organizations that disclose records. Our experiments indicate that it is unnecessary to suppress all patient records that initially violate k-unlinkability. Rather, only portions of the trails need to be suppressed. 
For example, if each hospital discloses 100% of its data on patients diagnosed with cystic fibrosis, then 48% of the DNA records are 5-unlinkable. A naïve solution would suppress the 52% of the DNA records that violate 5-unlinkability. However, by applying our protection algorithms, the hospitals can disclose 95% of the DNA records, all of which are 5-unlinkable. Similar findings hold for all populations studied. This research demonstrates that patient anonymity can be formally protected in shared databases. Our findings illustrate that significant quantities of patient-specific data can be disclosed with provable protection from trail re-identification. The configurability of our methods allows health care administrators to quantify the effects of different levels of privacy protection and formulate policy accordingly.
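The k-unlinkability condition above can be sketched in a few lines. This is a hedged illustration only: trails are modeled as sets of locations, and linkage is reduced to an exact-match test, which is a simplification of the paper's actual matching criterion.

```python
# Toy check of k-unlinkability: every DNA record's trail must be
# linkable to at least k identified records' trails.
from collections import Counter

def is_k_unlinkable(dna_trails, identified_trails, k):
    """True if each DNA trail matches at least k identified trails."""
    counts = Counter(frozenset(t) for t in identified_trails)
    return all(counts[frozenset(t)] >= k for t in dna_trails)

# Toy data: trails over three hospitals A, B, C.
identified = [{"A", "B"}, {"A", "B"}, {"A", "C"}]
dna = [{"A", "B"}]
print(is_k_unlinkable(dna, identified, k=2))  # True: {A,B} matches 2 identified trails
print(is_k_unlinkable(dna, identified, k=3))  # False: only 2 matches, so 3-unlinkability fails
```

A record failing the test would be a candidate for the partial trail suppression the paper describes, rather than outright removal.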
The SBOL Stack: A Platform for Storing, Publishing, and Sharing Synthetic Biology Designs.
Madsen, Curtis; McLaughlin, James Alastair; Mısırlı, Göksel; Pocock, Matthew; Flanagan, Keith; Hallinan, Jennifer; Wipat, Anil
2016-06-17
Recently, synthetic biologists have developed the Synthetic Biology Open Language (SBOL), a data exchange standard for descriptions of genetic parts, devices, modules, and systems. The goals of this standard are to allow scientists to exchange designs of biological parts and systems, to facilitate the storage of genetic designs in repositories, and to facilitate the description of genetic designs in publications. In order to achieve these goals, the development of an infrastructure to store, retrieve, and exchange SBOL data is necessary. To address this problem, we have developed the SBOL Stack, a Resource Description Framework (RDF) database specifically designed for the storage, integration, and publication of SBOL data. This database allows users to define a library of synthetic parts and designs as a service, to share SBOL data with collaborators, and to store designs of biological systems locally. The database also allows external data sources to be integrated by mapping them to the SBOL data model. The SBOL Stack includes two Web interfaces: the SBOL Stack API and SynBioHub. While the former is designed for developers, the latter allows users to upload new SBOL biological designs, download SBOL documents, search by keyword, and visualize SBOL data. Since the SBOL Stack is based on semantic Web technology, the inherent distributed querying functionality of RDF databases can be used to allow different SBOL stack databases to be queried simultaneously, and therefore, data can be shared between different institutes, centers, or other users.
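The distributed-querying idea behind the SBOL Stack can be illustrated with a minimal triple-pattern matcher. The triples, prefixes, and "site" stores below are invented for the example; a real deployment would use SPARQL endpoints over RDF stores:

```python
# Minimal sketch of federated triple-pattern querying, the mechanism an
# RDF-based store like the SBOL Stack relies on. Stores are plain lists
# of (subject, predicate, object) tuples; None acts as a wildcard.

def query(store, s=None, p=None, o=None):
    """Return all triples in 'store' matching the (s, p, o) pattern."""
    return [t for t in store
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Two hypothetical institutional stores sharing SBOL-style data.
site_a = [("part:pTac", "rdf:type", "sbol:Promoter")]
site_b = [("part:GFP", "rdf:type", "sbol:CDS"),
          ("part:pTac", "sbol:role", "inducible")]

# Federated query: the same pattern evaluated across both stores.
hits = query(site_a, p="rdf:type") + query(site_b, p="rdf:type")
print([t[0] for t in hits])  # ['part:pTac', 'part:GFP']
```

Because each store answers the same pattern independently, results from different institutes can be merged without centralizing the data, which is the property the abstract highlights.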
Menditto, Enrica; Bolufer De Gea, Angela; Cahir, Caitriona; Marengoni, Alessandra; Riegler, Salvatore; Fico, Giuseppe; Costa, Elisio; Monaco, Alessandro; Pecorelli, Sergio; Pani, Luca; Prados-Torres, Alexandra
2016-01-01
Computerized health care databases have been widely described as an excellent opportunity for research. The availability of "big data" has brought about a wave of innovation in projects when conducting health services research. Most of the available secondary data sources are restricted to the geographical scope of a given country and present heterogeneous structure and content. Under the umbrella of the European Innovation Partnership on Active and Healthy Ageing, collaborative work conducted by the partners of the group on "adherence to prescription and medical plans" identified the use of observational and large-population databases to monitor medication-taking behavior in the elderly. This article describes the methodology used to gather the information from available databases among the Adherence Action Group partners with the aim of improving data sharing on a European level. A total of six databases belonging to three different European countries (Spain, Republic of Ireland, and Italy) were included in the analysis. Preliminary results suggest that there are some similarities. However, these results should be applied in different contexts and European countries, supporting the idea that large European studies should be designed in order to get the most of already available databases.
Human Connectome Project Informatics: quality control, database services, and data visualization
Marcus, Daniel S.; Harms, Michael P.; Snyder, Abraham Z.; Jenkinson, Mark; Wilson, J Anthony; Glasser, Matthew F.; Barch, Deanna M.; Archie, Kevin A.; Burgess, Gregory C.; Ramaratnam, Mohana; Hodge, Michael; Horton, William; Herrick, Rick; Olsen, Timothy; McKay, Michael; House, Matthew; Hileman, Michael; Reid, Erin; Harwell, John; Coalson, Timothy; Schindler, Jon; Elam, Jennifer S.; Curtiss, Sandra W.; Van Essen, David C.
2013-01-01
The Human Connectome Project (HCP) has developed protocols, standard operating and quality control procedures, and a suite of informatics tools to enable high throughput data collection, data sharing, automated data processing and analysis, and data mining and visualization. Quality control procedures include methods to maintain data collection consistency over time, to measure head motion, and to establish quantitative modality-specific overall quality assessments. Database services developed as customizations of the XNAT imaging informatics platform support both internal daily operations and open access data sharing. The Connectome Workbench visualization environment enables user interaction with HCP data and is increasingly integrated with the HCP's database services. Here we describe the current state of these procedures and tools and their application in the ongoing HCP study. PMID:23707591
Developing Governance for Federated Community-based EHR Data Sharing
Lin, Ching-Ping; Stephens, Kari A.; Baldwin, Laura-Mae; Keppel, Gina A.; Whitener, Ron J.; Echo-Hawk, Abigail; Korngiebel, Diane
2014-01-01
Bi-directional translational pathways between scientific discoveries and primary care are crucial for improving individual patient care and population health. The Data QUEST pilot project is a program supporting data sharing amongst community based primary care practices and is built on a technical infrastructure to share electronic health record data. We developed a set of governance requirements from interviewing and collaborating with partner organizations. Recommendations from our partner organizations included: 1) partner organizations can physically terminate the link to the data sharing network and only approved data exits the local site; 2) partner organizations must approve or reject each query; 3) partner organizations and researchers must respect local processes, resource restrictions, and infrastructures; and 4) partner organizations can be seamlessly added and removed from any individual data sharing query or the entire network. PMID:25717404
The Strabo digital data system for Structural Geology and Tectonics
NASA Astrophysics Data System (ADS)
Tikoff, Basil; Newman, Julie; Walker, J. Doug; Williams, Randy; Michels, Zach; Andrews, Joseph; Bunse, Emily; Ash, Jason; Good, Jessica
2017-04-01
We are developing the Strabo data system for the structural geology and tectonics community. The data system will allow researchers to share primary data, apply new types of analytical procedures (e.g., statistical analysis), facilitate interaction with other geology communities, and allow new types of science to be done. The data system is based on a graph database, rather than relational database approach, to increase flexibility and allow geologically realistic relationships between observations and measurements. Development is occurring on: 1) A field-based application that runs on iOS and Android mobile devices and can function in either internet connected or disconnected environments; and 2) A desktop system that runs only in connected settings and directly addresses the back-end database. The field application also makes extensive use of images, such as photos or sketches, which can be hierarchically arranged with encapsulated field measurements/observations across all scales. The system also accepts Shapefile, GEOJSON, KML formats made in ArcGIS and QGIS, and will allow export to these formats as well. Strabo uses two main concepts to organize the data: Spots and Tags. A Spot is any observation that characterizes a specific area. Below GPS resolution, a Spot can be tied to an image (outcrop photo, thin section, etc.). Spots are related in a purely spatial manner (one spot encloses another spot, which encloses another, etc.). Tags provide a linkage between conceptually related spots. Together, this organization works seamlessly with the workflow of most geologists. We are expanding this effort to include microstructural data, as well as to the disciplines of sedimentology and petrology.
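The Spot/Tag organization can be sketched as a tiny graph model. The class and field names below are illustrative, not the actual Strabo schema: Spots nest spatially via a parent link, while Tags cut across the spatial hierarchy to connect conceptually related observations.

```python
# Hypothetical sketch of the Spot/Tag data model described above.
class Spot:
    """An observation characterizing a specific area; nests spatially."""
    def __init__(self, name, parent=None):
        self.name, self.parent, self.tags = name, parent, set()

def tagged(spots, tag):
    """All spots linked by a shared tag, regardless of spatial nesting."""
    return sorted(s.name for s in spots if tag in s.tags)

outcrop = Spot("outcrop-1")
fold = Spot("fold-A", parent=outcrop)        # a spot enclosed by another spot
thin_section = Spot("ts-12", parent=fold)    # below GPS resolution, tied to an image
for s in (fold, thin_section):
    s.tags.add("D2-deformation")             # conceptual linkage via a Tag
print(tagged([outcrop, fold, thin_section], "D2-deformation"))  # ['fold-A', 'ts-12']
```

The graph approach means a thin section, its outcrop, and unrelated spots elsewhere can all be retrieved by one Tag query, which a rigid relational schema would make awkward.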
Pirooznia, Mehdi; Gong, Ping; Guan, Xin; Inouye, Laura S; Yang, Kuan; Perkins, Edward J; Deng, Youping
2007-01-01
Background Eisenia fetida, commonly known as red wiggler or compost worm, belongs to the Lumbricidae family of the Annelida phylum. Little is known about its genome sequence although it has been extensively used as a test organism in terrestrial ecotoxicology. In order to understand its gene expression response to environmental contaminants, we cloned 4032 cDNAs or expressed sequence tags (ESTs) from two E. fetida libraries enriched with genes responsive to ten ordnance related compounds using suppressive subtractive hybridization-PCR. Results A total of 3144 good quality ESTs (GenBank dbEST accession number EH669363–EH672369 and EL515444–EL515580) were obtained from the raw clone sequences after cleaning. Clustering analysis yielded 2231 unique sequences including 448 contigs (from 1361 ESTs) and 1783 singletons. Comparative genomic analysis showed that 743 or 33% of the unique sequences shared high similarity with existing genes in the GenBank nr database. Provisional function annotation assigned 830 Gene Ontology terms to 517 unique sequences based on their homology with the annotated genomes of four model organisms Drosophila melanogaster, Mus musculus, Saccharomyces cerevisiae, and Caenorhabditis elegans. Seven percent of the unique sequences were further mapped to 99 Kyoto Encyclopedia of Genes and Genomes pathways based on their matching Enzyme Commission numbers. All the information is stored and retrievable at a high-performance, web-based and user-friendly relational database called EST model database or ESTMD version 2. Conclusion The ESTMD containing the sequence and annotation information of 4032 E. fetida ESTs is publicly accessible at . PMID:18047730
Bibliographic Databases Outside of the United States.
ERIC Educational Resources Information Center
McGinn, Thomas P.; And Others
1988-01-01
Eight articles describe the development, content, and structure of databases outside of the United States. Features discussed include library involvement, authority control, shared cataloging services, union catalogs, thesauri, abstracts, and distribution methods. Countries and areas represented are Latin America, Australia, the United Kingdom,…
Shortcomings in Information Sharing Facilitates Transnational Organized Crime
2017-06-09
A thesis presented to the Faculty of the U.S. Army Command and General Staff College, ATTN: ATZL-SWD-GD, Fort Leavenworth, KS 66027, June 2017.
Zhulin, Igor B.
2015-05-26
Databases play an increasingly important role in biology. They archive, store, maintain, and share information on genes, genomes, expression data, protein sequences and structures, metabolites and reactions, interactions, and pathways. All these data are critically important to microbiologists. Furthermore, microbiology has its own databases that deal with model microorganisms, microbial diversity, physiology, and pathogenesis. Thousands of biological databases are currently available, and it becomes increasingly difficult to keep up with their development. The purpose of this minireview is to provide a brief survey of current databases that are of interest to microbiologists.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jain, Anubhav; Persson, Kristin A.; Ceder, Gerbrand
2016-03-24
Materials innovations enable new technological capabilities and drive major societal advancements but have historically required long and costly development cycles. The Materials Genome Initiative (MGI) aims to greatly reduce this time and cost. Here, we focus on data reuse in the MGI and, in particular, discuss the impact on the research community of three different computational databases based on density functional theory methods. Finally, we discuss and provide recommendations on technical aspects of data reuse, outline remaining fundamental challenges, and present an outlook on the future of MGI's vision of data sharing.
Fernandez, H; Weber, J; Barnes, K; Wright, L; Levy, M
2016-01-01
The Share 35 policy for organ allocation, which was adopted in June 2013, allocates livers regionally for candidates with Model for End-Stage Liver Disease scores of 35 or greater. The authors analyzed the costs resulting from the increased movement of allografts related to this new policy. Using a sample of nine organ procurement organizations, representing 17% of the US population and 19% of the deceased donors in 2013, data were obtained on import and export costs before Share 35 implementation (June 15, 2012, to June 14, 2013) and after Share 35 implementation (June 15, 2013, to June 14, 2014). Results showed that liver import rates increased 42%, with an increased cost of 51%, while export rates increased 112%, with an increased cost of 127%. When the costs of importing and exporting allografts were combined, the total change in costs for all nine organ procurement organizations was $11 011 321 after Share 35 implementation. Extrapolating these costs nationally resulted in an increased yearly cost of $68 820 756 by population or $55 056 605 by number of organ donors. Any alternative allocation proposal needs to account for the financial implications to the transplant infrastructure.
Economic Analysis of Cyber Security
2006-07-01
Numerous organizations compile vulnerability databases and patch information, and track the number of incidents reported by U.S. organizations. Many of these are private organizations, such as security-focused vulnerability databases, which identify the software versions that are susceptible, including information on the method of…
Share 35 Changes Center Level Liver Acceptance Practices
Goldberg, David S.; Levine, Matthew; Karp, Seth; Gilroy, Richard; Peter, L
2017-01-01
Share 35 was implemented to provide improved access to organs for patients with MELD scores ≥35. However, little is known about the impact of Share 35 on organ offer acceptance rates. We evaluated all liver offers to adult patients that were ultimately transplanted between 1/1/2011–12/31/2015. The analyses focused on patients ranked in the top five positions of a given match run, and used multi-level mixed-effects models, clustering on individual waitlist candidate and transplant center. There was a significant interaction between Share 35 era and MELD category (p<0.001). Comparing offers to MELD score ≥35 patients, offers post-Share 35 were 36% less likely to be accepted compared to offers to MELD score ≥35 patients pre-Share 35 (adjusted OR: 0.64). There was no clinically meaningful difference in the DRI of livers that were declined for patients with an allocation MELD score ≥35 in the pre- vs post-Share 35 era. Organ offer acceptance rates for patients with an allocation MELD≥35 decreased in every region post-Share 35; the magnitude of these changes was bigger in regions 2, 3, 4, 5, 6, 7, and 11, compared to regions 8 and 9 that had regional sharing in place pre-Share 35. There were significant changes in organ offer acceptance rates at the center level pre- vs post-Share 35, and these changes varied across centers (p<0.001). Conclusions: In liver transplant candidates achieving a MELD score ≥35, liver acceptance of offers declined significantly after implementation of Share 35. The alterations in behavior at the center level suggest that practice patterns changed as a direct result of Share 35. Changes in organ acceptance under even broader organ sharing (redistricting) would likely be even greater, posing major logistical and operational challenges, while potentially increasing discard rates, thus decreasing the total number of transplants nationally. PMID:28240804
Performance analysis of static locking in replicated distributed database systems
NASA Technical Reports Server (NTRS)
Kuang, Yinghong; Mukkamala, Ravi
1991-01-01
Data replication and transaction deadlocks can severely affect the performance of distributed database systems. Many current evaluation techniques ignore these aspects because they are difficult to evaluate through analysis and time consuming to evaluate through simulation. A technique that combines simulation and analysis is used to closely illustrate the impact of deadlocks and to evaluate the performance of replicated distributed databases with both shared and exclusive locks.
Transportation-markings database : traffic control devices. Part I 2, Volume 3, additional studies
DOT National Transportation Integrated Search
1998-01-01
The Database (Part I 1, 2, 3, 4) of the Transportation-Markings: A Study in Communication monograph series draws together the several dimensions of T-M. It shares this drawing-together function with the General Classification (Part H). But, paradox...
Aboriginal Knowledge Traditions in Digital Environments
ERIC Educational Resources Information Center
Christie, Michael
2005-01-01
According to Manovich (2001), the database and the narrative are natural enemies, each competing for the same territory of human culture. Aboriginal knowledge traditions depend upon narrative through storytelling and other shared performances. The database objectifies and commodifies distillations of such performances and absorbs them into data…
DOT National Transportation Integrated Search
2001-01-01
The Database (Parts I 1, 2, 3, 4 of TRANSPORTATION-MARKINGS: A STUDY IN COMMUNICATION MONOGRAPH SERIES) draws together the several dimensions of T-M. It shares this drawing-together function with the General Classification (Part H). But, paradoxically...
Schell, Scott R
2006-02-01
Enforcement of the Health Insurance Portability and Accountability Act (HIPAA) began in April, 2003. Designed as a law mandating health insurance availability when coverage was lost, HIPAA imposed sweeping and broad-reaching protections of patient privacy. These changes dramatically altered clinical research by placing sizeable regulatory burdens upon investigators with threat of severe and costly federal and civil penalties. This report describes development of an algorithmic approach to clinical research database design based upon a central key-shared data (CK-SD) model allowing researchers to easily analyze, distribute, and publish clinical research without disclosure of HIPAA Protected Health Information (PHI). Three clinical database formats (small clinical trial, operating room performance, and genetic microchip array datasets) were modeled using standard structured query language (SQL)-compliant databases. The CK database was created to contain PHI data, whereas a shareable SD database was generated in real-time containing relevant clinical outcome information while protecting PHI items. Small (< 100 records), medium (< 50,000 records), and large (> 10^8 records) model databases were created, and the resultant data models were evaluated in consultation with an HIPAA compliance officer. The SD database models complied fully with HIPAA regulations, and resulting "shared" data could be distributed freely. Unique patient identifiers were not required for treatment or outcome analysis. Age data were resolved to single-integer years, grouping patients aged > 89 years. Admission, discharge, treatment, and follow-up dates were replaced with enrollment year, and follow-up/outcome intervals were calculated, eliminating the original data. Two additional data fields identified as PHI (treating physician and facility) were replaced with integer values, and the original data corresponding to these values were stored in the CK database.
Use of the algorithm at the time of database design did not increase cost or design effort. The CK-SD model for clinical database design provides an algorithm for investigators to create, maintain, and share clinical research data compliant with HIPAA regulations. This model is applicable to new projects and large institutional datasets, and should decrease regulatory efforts required for conduct of clinical research. Application of the design algorithm early in the clinical research enterprise does not increase cost or the effort of data collection.
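The date-handling and key-replacement steps described above can be sketched in a few lines; the field names and the integer-key scheme below are illustrative assumptions, not the paper's actual schema.

```python
from datetime import date

def split_record(record, central_key_store, next_id):
    """Split one clinical record into a shareable SD row, storing PHI
    (physician, facility) in the central-key (CK) store.

    All field names are hypothetical stand-ins for the CK-SD schema.
    """
    admit = record["admission_date"]
    followup = record["followup_date"]
    sd_row = {
        # Age resolved to whole years; patients aged >89 grouped together.
        "age_years": min(record["age_years"], 90),
        # Exact dates replaced by enrollment year plus a computed interval.
        "enrollment_year": admit.year,
        "followup_interval_days": (followup - admit).days,
        "outcome": record["outcome"],
        # Physician and facility replaced by opaque integer keys.
        "physician_id": next_id,
        "facility_id": next_id + 1,
    }
    # The identifying values themselves live only in the CK database.
    central_key_store[next_id] = record["physician"]
    central_key_store[next_id + 1] = record["facility"]
    return sd_row
```

The SD row can then be distributed freely, while re-identification requires access to the separately held CK store.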
Smith, Christopher Irwin; Tank, Shantel; Godsoe, William; Levenick, Jim; Strand, Eva; Esque, Todd C.; Pellmyr, Olle
2011-01-01
Comparative phylogeographic studies have had mixed success in identifying common phylogeographic patterns among co-distributed organisms. Whereas some have found broadly similar patterns across a diverse array of taxa, others have found that the histories of different species are more idiosyncratic than congruent. The variation in the results of comparative phylogeographic studies could indicate that the extent to which sympatrically-distributed organisms share common biogeographic histories varies depending on the strength and specificity of ecological interactions between them. To test this hypothesis, we examined demographic and phylogeographic patterns in a highly specialized, coevolved community – Joshua trees (Yucca brevifolia) and their associated yucca moths. This tightly-integrated, mutually interdependent community is known to have experienced significant range changes at the end of the last glacial period, so there is a strong a priori expectation that these organisms will show common signatures of demographic and distributional changes over time. Using a database of >5000 GPS records for Joshua trees, and multi-locus DNA sequence data from the Joshua tree and four species of yucca moth, we combined palaeodistribution modeling with coalescent-based analyses of demographic and phylogeographic history. We extensively evaluated the power of our methods to infer past population size and distributional changes by evaluating the effect of different inference procedures on our results, comparing our palaeodistribution models to Pleistocene-aged packrat midden records, and simulating DNA sequence data under a variety of alternative demographic histories. Together the results indicate that these organisms have shared a common history of population expansion, and that these expansions were broadly coincident in time. 
However, contrary to our expectations, none of our analyses indicated significant range or population size reductions at the end of the last glacial period, and the inferred demographic changes substantially predate Holocene climate changes.
Metabolonote: A Wiki-Based Database for Managing Hierarchical Metadata of Metabolome Analyses
Ara, Takeshi; Enomoto, Mitsuo; Arita, Masanori; Ikeda, Chiaki; Kera, Kota; Yamada, Manabu; Nishioka, Takaaki; Ikeda, Tasuku; Nihei, Yoshito; Shibata, Daisuke; Kanaya, Shigehiko; Sakurai, Nozomu
2015-01-01
Metabolomics – technology for comprehensive detection of small molecules in an organism – lags behind the other “omics” in terms of publication and dissemination of experimental data. Among the reasons for this are the difficulty of precisely recording information about complicated analytical experiments (metadata), the existence of various databases with their own metadata descriptions, and the low reusability of published data, with the result that submitters (the researchers who generate the data) are insufficiently motivated. To tackle these issues, we developed Metabolonote, a Semantic MediaWiki-based database designed specifically for managing metabolomic metadata. We also defined a metadata and data description format, called “Togo Metabolome Data” (TogoMD), with an ID system that is required for unique access to each level of the tree-structured metadata, such as study purpose, sample, analytical method, and data analysis. Separation of the management of metadata from that of data, and permission to attach related information to the metadata, provide advantages for submitters, readers, and database developers. The metadata are enriched with information such as links to comparable data, thereby functioning as a hub of related data resources. They also enhance not only readers’ understanding and use of data but also submitters’ motivation to publish the data. The metadata are computationally shared among other systems via APIs, which facilitates the construction of novel databases by database developers. A permission system that allows publication of immature metadata, together with feedback from readers, also helps submitters to improve their metadata. Hence, this aspect of Metabolonote, as a metadata preparation tool, is complementary to high-quality and persistent data repositories such as MetaboLights. A total of 808 metadata records for analyzed data obtained from 35 biological species are currently published. 
Metabolonote and related tools are available free of charge at http://metabolonote.kazusa.or.jp/. PMID:25905099
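The per-level ID system that TogoMD uses for its tree-structured metadata can be illustrated with a small sketch; the underscore-joined, zero-padded ID syntax here is a hypothetical stand-in for the actual TogoMD identifiers.

```python
def level_ids(study_id, samples, methods, analyses):
    """Generate hierarchical IDs for the four metadata levels described
    above (study purpose, sample, analytical method, data analysis).

    The concrete syntax is illustrative; the point is that every node in
    the metadata tree gets its own uniquely addressable ID.
    """
    ids = {study_id: "study"}
    for s in range(1, samples + 1):
        sid = f"{study_id}_S{s:02d}"          # one ID per sample
        ids[sid] = "sample"
        for m in range(1, methods + 1):
            mid = f"{sid}_M{m:02d}"           # one ID per analytical method
            ids[mid] = "method"
            for d in range(1, analyses + 1):
                ids[f"{mid}_D{d:02d}"] = "data analysis"
    return ids
```

Because every level carries its own ID, external systems can link to, say, a single analytical method without referencing the raw data beneath it.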
Sharing chemical structures with peer-reviewed publications. Are we there yet?
In the domain of chemistry one of the greatest benefits to publishing research is that data are shared. Unfortunately, the vast majority of chemical structure data remain locked up in document form, primarily as PDF files. Despite the explosive growth of online chemical databases...
Clerkin, Kevin J; Garan, Arthur Reshad; Wayda, Brian; Givens, Raymond C; Yuzefpolskaya, Melana; Nakagawa, Shunichi; Takeda, Koji; Takayama, Hiroo; Naka, Yoshifumi; Mancini, Donna M; Colombo, Paolo C; Topkara, Veli K
2016-10-01
Low socioeconomic status (SES) is a known risk factor for heart failure, for mortality among those with heart failure, and for poor post-heart transplant (HT) outcomes. This study sought to determine whether SES is associated with decreased waitlist survival during left ventricular assist device (LVAD) support and after HT. A total of 3361 adult patients bridged to primary HT with an LVAD between May 2004 and April 2014 were identified in the UNOS (United Network for Organ Sharing) database. SES was measured with the Agency for Healthcare Research and Quality SES index using data from the 2014 American Community Survey. In the study cohort, SES was not associated with the combined end point of death or delisting during LVAD support (P=0.30). In a cause-specific unadjusted model, those in the top (hazard ratio, 1.55; 95% confidence interval, 1.14-2.11; P=0.005) and second-greatest SES quartiles (hazard ratio, 1.50; 95% confidence interval, 1.10-2.04; P=0.01) had an increased risk of death on device support compared with the lowest SES quartile. Adjusting for clinical risk factors mitigated the increased risk. There was no association between SES and complications. Post-HT survival, both crude and adjusted, was decreased for patients in the lowest SES quartile compared with all other quartiles. Freedom from waitlist death or delisting was not affected by SES. Patients with a higher SES had an increased unadjusted risk of waitlist mortality during LVAD support, which was mitigated by adjusting for increased comorbid conditions. Low SES was associated with worse post-HT outcomes. Further study is needed to confirm and understand a differential effect of SES on post-transplant outcomes that was not seen during LVAD support before HT. © 2016 American Heart Association, Inc.
Bittle, Gregory J; Sanchez, Pablo G; Kon, Zachary N; Claire Watkins, A; Rajagopal, Keshava; Pierson, Richard N; Gammie, James S; Griffith, Bartley P
2013-08-01
Current lung transplantation guidelines stipulate that the ideal donor is aged younger than 55 years, but several institutions have reported that outcomes using donors aged 55 years and older are comparable with those of younger donors. We retrospectively reviewed the United Network for Organ Sharing (UNOS) database to identify all adult lung transplants in the United States between 2000 and 2010. Patients were stratified by donor age: 18 to 34 (reference), 35 to 54, 55 to 64, and ≥ 65 years. Primary outcomes included survival at 30 days and at 1, 3, and 5 years, and rates of bronchiolitis obliterans syndrome (BOS). Survival was assessed using the Kaplan-Meier method. Risk factors for mortality were identified by multivariable Cox and logistic regression. We identified 10,666 recipients with a median follow-up of 3 years (range, 0-10 years). Older donors were more likely to have died of cardiovascular or cerebrovascular causes, but there were no differences in recipient diagnosis, lung allocation score, or incidence of BOS as a function of donor age. The use of donors aged 55 to 64 years was not a risk factor for mortality at 1 year (odds ratio, 1.1; p = 0.304) or 3 years (odds ratio, 0.923; p = 0.571) compared with the reference group; however, use of donors aged ≥ 65 years was associated with increased mortality at both time points (odds ratios, 2.8 and 2.4; p < 0.02). Outcomes after lung transplantation using donors aged 55 to 64 years were similar to those observed with donors meeting conventional age criteria. Donors aged ≥ 65 years, however, were associated with decreased intermediate-term survival, although there was no increased risk of BOS for this group. Copyright © 2013 International Society for Heart and Lung Transplantation. Published by Elsevier Inc. All rights reserved.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-16
...-Regulatory Organizations; NYSE Arca, Inc.; Order Approving a Proposed Rule Change To List and Trade Shares of...,\\2\\ a proposed rule change to list and trade shares (``Shares'') of the SPDR Nuveen S&P High Yield... shares (``Shares'') under NYSE Arca Equities Rule 5.2(j)(3), Commentary .02, which governs the listing...
Gianni, Daniele; McKeever, Steve; Yu, Tommy; Britten, Randall; Delingette, Hervé; Frangi, Alejandro; Hunter, Peter; Smith, Nicolas
2010-06-28
Sharing and reusing anatomical models over the Web offers a significant opportunity to progress the investigation of cardiovascular diseases. However, the current sharing methodology suffers from the limitations of static model delivery (i.e. embedding static links to the models within Web pages) and of a disaggregated view of the model metadata produced by publications and cardiac simulations in isolation. In the context of euHeart--a research project targeting the description and representation of cardiovascular models for disease diagnosis and treatment purposes--we aim to overcome the above limitations with the introduction of euHeartDB, a Web-enabled database for anatomical models of the heart. The database implements a dynamic sharing methodology by managing data access and by tracing all applications. In addition to this, euHeartDB establishes a knowledge link with the physiome model repository by linking geometries to CellML models embedded in the simulation of cardiac behaviour. Furthermore, euHeartDB uses the exFormat--a preliminary version of the interoperable FieldML data format--to effectively promote reuse of anatomical models, and currently incorporates Continuum Mechanics, Image Analysis, Signal Processing and System Identification Graphical User Interface (CMGUI), a rendering engine, to provide three-dimensional graphical views of the models populating the database. Currently, euHeartDB stores 11 cardiac geometries developed within the euHeart project consortium.
Kottke, Thomas E; Pronk, Nico; Zinkel, Andrew R; Isham, George J
2017-01-01
Health care organizations can magnify the impact of their community service and other philanthropic activities by implementing programs that create shared value. By definition, shared value is created when an initiative generates benefit for the sponsoring organization while also generating societal and community benefit. Because the programs generate benefit for the sponsoring organizations, the magnitude of any particular initiative is limited only by the market for the benefit and not by the resources that are available for philanthropy. In this article we use three initiatives in sectors other than health care to illustrate the concept of shared value. We also present examples of five types of shared value programs that are sponsored by health care organizations: telehealth, worksite health promotion, school-based health centers, green and healthy housing, and clean and green health services. On the basis of the innovativeness of health care organizations that have already implemented programs that create shared value, we conclude that the opportunities for all health care organizations to create positive impact for individuals and communities through similar programs are large, and the limits have yet to be defined.
Decentralized Data Sharing of Tissue Microarrays for Investigative Research in Oncology
Chen, Wenjin; Schmidt, Cristina; Parashar, Manish; Reiss, Michael; Foran, David J.
2007-01-01
Tissue microarray technology (TMA) is a relatively new approach for efficiently and economically assessing protein and gene expression across large ensembles of tissue specimens. Tissue microarray technology holds great potential for reducing the time and cost associated with conducting research in tissue banking, proteomics, and outcome studies. However, the sheer volume of images and other data generated from even limited studies involving tissue microarrays quickly approaches the processing capacity and resources of a division or department. This challenge is compounded by the fact that large-scale projects in several areas of modern research rely upon multi-institutional efforts in which investigators and resources are spread out over multiple campuses, cities, and states. To address some of these data management issues, several leading institutions have begun to develop their own “in-house” systems independently, but such data will be only minimally useful if they are not accessible to others in the scientific community. Investigators at different institutions studying the same or related disorders might benefit from the synergy of sharing results. To facilitate sharing of TMA data across different database implementations, the Technical Standards Committee of the Association for Pathology Informatics organized workshops in an effort to establish a standardized TMA data exchange specification. The focus of our research does not relate to the establishment of standards for exchange; rather, it builds on these efforts and concentrates on the design, development, and deployment of a decentralized collaboratory for the unsupervised characterization, and the seamless and secure discovery and sharing, of TMA data. Specifically, we present a self-organizing, peer-to-peer indexing and discovery infrastructure for quantitatively assessing digitized TMAs. 
The system utilizes a novel, optimized decentralized search engine that supports flexible querying, while guaranteeing that once information has been stored in the system, it will be found with bounded costs. PMID:19081778
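A common way to obtain the "guaranteed find at bounded cost" property of such a decentralized index is consistent hashing, as used in distributed hash tables: every key maps deterministically to one peer, so any node can locate a stored record without flooding the network. The toy ring below is a generic sketch of that idea, not the authors' actual search engine.

```python
import hashlib
from bisect import bisect_right

def _ring_pos(key):
    # Map any string onto a fixed 32-bit ring position.
    return int(hashlib.sha256(key.encode()).hexdigest(), 16) % (2 ** 32)

class ConsistentHashRing:
    """Toy consistent-hash ring: each record is owned by the first peer
    clockwise from the record's hash, so lookups are deterministic and
    independent of which peer issues the query."""

    def __init__(self, peers):
        # Sort peers by their position on the ring.
        self.ring = sorted((_ring_pos(p), p) for p in peers)

    def owner_of(self, key):
        pos = _ring_pos(key)
        # First peer at or after pos, wrapping around the ring.
        idx = bisect_right(self.ring, (pos, "\uffff"))
        return self.ring[idx % len(self.ring)][1]
```

Because every participant computes the same owner for a given key, a record stored under its key is always found in O(log n) local work plus one routing step in this simplified model.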
Patricio, Harmony C.; Ainsley, Shaara M.; Andersen, Matthew E.; Beeman, John W.; Hewitt, David A.
2012-01-01
The Mekong River is one of the most biologically diverse rivers in the world, and it supports the most productive freshwater fisheries in the world. Millions of people in the Lower Mekong River Basin (LMB) countries of the Union of Myanmar (Burma), Lao People’s Democratic Republic, the Kingdom of Thailand, the Kingdom of Cambodia, and the Socialist Republic of Vietnam rely on the fisheries of the basin to provide a source of protein. The Mekong Fish Network Workshop was convened in Phnom Penh, Cambodia, in February 2012 to discuss the potential for coordinating fisheries monitoring among nations and the utility of establishing standard methods for short- and long-term monitoring and data sharing throughout the LMB. The concept for this network developed out of a frequently cited need for fisheries researchers in the LMB to share their knowledge with other scientists and decision-makers. A fish monitoring network could be a valuable forum for researchers to exchange ideas, store data, or access general information regarding fisheries studies in the LMB region. At the workshop, representatives from governments, nongovernmental organizations, and universities, as well as participating foreign technical experts, cited a great need for more international cooperation and technical support among them. Given the limited staff and resources of many institutions in the LMB, the success of the proposed network would depend on whether it could offer tools that would provide benefits to network participants. A potential tool discussed at the workshop was a user-friendly, Web-accessible portal and database that could help streamline data entry and storage at the institutional level, as well as facilitate communication and data sharing among institutions. The workshop provided a consensus to establish pilot standardized data collection and database efforts that will be further reviewed by the workshop participants. 
Overall, workshop participants agreed that this is the type of support that is greatly needed to answer their most pressing questions and to enable local researchers and resource managers to monitor and sustain the valuable and diverse aquatic life of the Mekong River.
DNApod: DNA polymorphism annotation database from next-generation sequence read archives.
Mochizuki, Takako; Tanizawa, Yasuhiro; Fujisawa, Takatomo; Ohta, Tazro; Nikoh, Naruo; Shimizu, Tokurou; Toyoda, Atsushi; Fujiyama, Asao; Kurata, Nori; Nagasaki, Hideki; Kaminuma, Eli; Nakamura, Yasukazu
2017-01-01
With the rapid advances in next-generation sequencing (NGS), datasets for DNA polymorphisms among various species and strains have been produced, stored, and distributed. However, reliability varies among these datasets because the experimental and analytical conditions used differ among assays. Furthermore, such datasets have been frequently distributed from the websites of individual sequencing projects. It is desirable to integrate DNA polymorphism data into one database featuring uniform quality control that is distributed from a single platform at a single place. DNA polymorphism annotation database (DNApod; http://tga.nig.ac.jp/dnapod/) is an integrated database that stores genome-wide DNA polymorphism datasets acquired under uniform analytical conditions, and this includes uniformity in the quality of the raw data, the reference genome version, and evaluation algorithms. DNApod genotypic data are re-analyzed whole-genome shotgun datasets extracted from sequence read archives, and DNApod distributes genome-wide DNA polymorphism datasets and known-gene annotations for each DNA polymorphism. This new database was developed for storing genome-wide DNA polymorphism datasets of plants, with crops being the first priority. Here, we describe our analyzed data for 679, 404, and 66 strains of rice, maize, and sorghum, respectively. The analytical methods are available as a DNApod workflow in an NGS annotation system of the DNA Data Bank of Japan and a virtual machine image. Furthermore, DNApod provides tables of links of identifiers between DNApod genotypic data and public phenotypic data. To advance the sharing of organism knowledge, DNApod offers basic and ubiquitous functions for multiple alignment and phylogenetic tree construction by using orthologous gene information.
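The identifier link tables connecting DNApod genotypic records to public phenotypic records amount to a simple join through an ID mapping; the sketch below uses hypothetical field names (`strain_id`, `accession`) purely for illustration.

```python
def link_phenotypes(genotypes, id_links, phenotypes):
    """Join genotype records to external phenotype records through an
    identifier link table.

    genotypes  -- list of dicts keyed by an internal strain ID
    id_links   -- maps internal strain IDs to external accessions
    phenotypes -- list of dicts keyed by external accession

    All field names are invented for this sketch.
    """
    pheno_by_accession = {p["accession"]: p for p in phenotypes}
    joined = []
    for g in genotypes:
        accession = id_links.get(g["strain_id"])
        if accession in pheno_by_accession:
            # Merge genotype and phenotype fields into one record.
            joined.append({**g, **pheno_by_accession[accession]})
    return joined
```

Strains without a link-table entry, or whose accession has no public phenotype record, are simply omitted from the joined result.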
NoSQL data model for semi-automatic integration of ethnomedicinal plant data from multiple sources.
Ningthoujam, Sanjoy Singh; Choudhury, Manabendra Dutta; Potsangbam, Kumar Singh; Chetia, Pankaj; Nahar, Lutfun; Sarker, Satyajit D; Basar, Norazah; Das Talukdar, Anupam
2014-01-01
Sharing traditional knowledge with the scientific community could refine scientific approaches to phytochemical investigation and conservation of ethnomedicinal plants. As such, integration of traditional knowledge with scientific data on a single sharing platform is greatly needed. However, ethnomedicinal data are available in heterogeneous formats, which depend on cultural aspects, survey methodology, and the focus of the study. Phytochemical and bioassay data are also available from many open sources in various standard and customised formats. The aim of this work was to design a flexible data model that could integrate both primary and curated ethnomedicinal plant data from multiple sources. The current model is based on MongoDB, one of the Not only Structured Query Language (NoSQL) databases. Although MongoDB does not enforce a schema, modifications were made so that the model could incorporate both standard and customised ethnomedicinal plant data formats from different sources. The model presented can integrate both primary and secondary data related to ethnomedicinal plants. Accommodation of disparate data was accomplished by a feature of this database that supports a different set of fields for each document. It also allows storage of similar data having different properties. The model presented is scalable to a highly complex level with continuing maturation of the database, and is applicable for storing, retrieving and sharing ethnomedicinal plant data. It can also serve as a flexible alternative to a relational and normalised database. Copyright © 2014 John Wiley & Sons, Ltd.
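The key property exploited here, that a schemaless collection can hold documents with entirely different field sets and still be queried uniformly, can be shown without MongoDB itself. The in-memory `find` below mimics MongoDB-style exact-match criteria documents, and the sample records are invented for illustration.

```python
def find(collection, criteria):
    """Return documents matching every key/value pair in `criteria`,
    in the style of a MongoDB exact-match query document.

    Documents may have completely different field sets (schemaless):
    a missing field simply fails to match, without any error.
    """
    return [
        doc for doc in collection
        if all(doc.get(key) == value for key, value in criteria.items())
    ]

# Two documents with disjoint field sets, stored in one "collection":
# a primary ethnobotanical survey record and a curated bioassay record.
plants = [
    {"species": "Ocimum sanctum", "local_name": "Tulsi", "use": "fever"},
    {"species": "Curcuma longa", "compounds": ["curcumin"],
     "assay": "anti-inflammatory"},
]
```

A query such as `find(plants, {"use": "fever"})` returns only the survey record, while `find(plants, {"assay": "anti-inflammatory"})` returns only the bioassay record, even though neither document carries the other's fields.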
The transcriptome of Lutzomyia longipalpis (Diptera: Psychodidae) male reproductive organs.
Azevedo, Renata V D M; Dias, Denise B S; Bretãs, Jorge A C; Mazzoni, Camila J; Souza, Nataly A; Albano, Rodolpho M; Wagner, Glauber; Davila, Alberto M R; Peixoto, Alexandre A
2012-01-01
It has been suggested that genes involved in the reproductive biology of insect disease vectors are potential targets for future alternative methods of control. Little is known about the molecular biology of reproduction in phlebotomine sand flies and there is no information available concerning genes that are expressed in male reproductive organs of Lutzomyia longipalpis, the main vector of American visceral leishmaniasis and a species complex. We generated 2678 high quality ESTs ("Expressed Sequence Tags") of L. longipalpis male reproductive organs that were grouped in 1391 non-redundant sequences (1136 singlets and 255 clusters). BLAST analysis revealed that only 57% of these sequences share similarity with a L. longipalpis female EST database. Although no more than 36% of the non-redundant sequences showed similarity to protein sequences deposited in databases, more than half of them presented the best-match hits with mosquito genes. Gene ontology analysis identified subsets of genes involved in biological processes such as protein biosynthesis and DNA replication, which are probably associated with spermatogenesis. A number of non-redundant sequences were also identified as putative male reproductive gland proteins (mRGPs), also known as male accessory gland protein genes (Acps). The transcriptome analysis of L. longipalpis male reproductive organs is one step further in the study of the molecular basis of the reproductive biology of this important species complex. It has allowed the identification of genes potentially involved in spermatogenesis as well as putative mRGPs sequences, which have been studied in many insect species because of their effects on female post-mating behavior and physiology and their potential role in sexual selection and speciation. These data open a number of new avenues for further research in the molecular and evolutionary reproductive biology of sand flies.
Distributed structure-searchable toxicity (DSSTox) public database network: a proposal.
Richard, Ann M; Williams, ClarLynda R
2002-01-29
The ability to assess the potential genotoxicity, carcinogenicity, or other toxicity of pharmaceutical or industrial chemicals based on chemical structure information is a highly coveted and shared goal of varied academic, commercial, and government regulatory groups. These diverse interests often employ different approaches and have different criteria and uses for toxicity assessments, but they share a need for unrestricted access to existing public toxicity data linked with chemical structure information. Currently, there exists no central repository of toxicity information, commercial or public, that adequately meets the data requirements for flexible analogue searching, Structure-Activity Relationship (SAR) model development, or building of chemical relational databases (CRD). The distributed structure-searchable toxicity (DSSTox) public database network is being proposed as a community-supported, web-based effort to address these shared needs of the SAR and toxicology communities. The DSSTox project has the following major elements: (1) to adopt and encourage the use of a common standard file format (structure data file (SDF)) for public toxicity databases that includes chemical structure, text and property information, and that can easily be imported into available CRD applications; (2) to implement a distributed source approach, managed by a DSSTox Central Website, that will enable decentralized, free public access to structure-toxicity data files, and that will effectively link knowledgeable toxicity data sources with potential users of these data from other disciplines (such as chemistry, modeling, and computer science); and (3) to engage public/commercial/academic/industry groups in contributing to and expanding this community-wide, public data sharing and distribution effort.
The DSSTox project's overall aims are to effect the closer association of chemical structure information with existing toxicity data, and to promote and facilitate structure-based exploration of these data within a common chemistry-based framework that spans toxicological disciplines.
Shared patients: multiple health and social care contact.
Keene, J; Swift, L; Bailey, S; Janacek, G
2001-07-01
The paper describes results from the 'Tracking Project', a new method for examining agency overlap, repeat service use and shared clients/patients amongst social and health care agencies in the community. This is the first project in this country to combine total population databases from a range of social, health care and criminal justice agencies to give a multidisciplinary database for one county (n = 97,162 cases), through standardised anonymisation of agency databases, using SOUNDEX, a software programme. A range of 20 community social and health care agencies were shown to have a large overlap with each other in a two-year period, indicating high proportions of shared patients/clients. Accident and Emergency is used as an example of major overlap: 16.2% (n = 39,992) of persons who attended a community agency had attended Accident and Emergency as compared to 8.2% (n = 775,000) of the total population of the county. Of these, 96% who had attended seven or more different community agencies had also attended Accident and Emergency. Further statistical analysis of Accident and Emergency attendance as a characteristic of community agency populations (n = 39,992) revealed that increasing frequency of attendance at Accident and Emergency was very strongly associated with increasing use of other services. That is, the patients who repeatedly attend Accident and Emergency are much more likely to attend a greater number of other agencies, indicating the possibility that these agencies share more problematic or difficult patients. Research questions arising from these data are discussed and future research methods suggested in order to derive predictors from the database and develop screening instruments to identify multiple agency attenders for targeting or multidisciplinary working. It is suggested that Accident and Emergency attendance might serve as an important predictor of multiple agency attendance.
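The standardised anonymisation step above relies on SOUNDEX, a phonetic encoding that maps similar-sounding surnames to the same short code so that records can be linked across databases without storing names. A minimal sketch of the classic encoding (simplified: the h/w adjacency rule of full American Soundex is omitted, and this is an illustration, not the project's actual software):

```python
def soundex(name: str) -> str:
    """Encode a surname as a letter plus three digits (simplified Soundex)."""
    # Letter-to-digit groups from the classic Soundex scheme
    groups = {"bfpv": "1", "cgjkqsxz": "2", "dt": "3",
              "l": "4", "mn": "5", "r": "6"}
    code = {c: d for letters, d in groups.items() for c in letters}
    name = name.lower()
    result = name[0].upper()
    prev = code.get(name[0], "")
    for ch in name[1:]:
        d = code.get(ch, "")
        # Append a digit only if it differs from the previous letter's code
        if d and d != prev:
            result += d
        prev = d  # vowels (and, in this simplified variant, h/w) reset prev
        if len(result) == 4:
            break
    return result.ljust(4, "0")  # pad short codes with zeros
```

Because "Robert" and "Rupert" both encode to R163, two agencies can match the same person's records while exchanging only the codes.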
ClearedLeavesDB: an online database of cleared plant leaf images
2014-01-01
Background Leaf vein networks are critical to both the structure and function of leaves. A growing body of recent work has linked leaf vein network structure to the physiology, ecology and evolution of land plants. In the process, multiple institutions and individual researchers have assembled collections of cleared leaf specimens in which vascular bundles (veins) are rendered visible. In an effort to facilitate analysis and digitally preserve these specimens, high-resolution images are usually created, either of entire leaves or of magnified leaf subsections. In a few cases, collections of digital images of cleared leaves are available for use online. However, these collections do not share a common platform nor is there a means to digitally archive cleared leaf images held by individual researchers (in addition to those held by institutions). Hence, there is a growing need for a digital archive that enables online viewing, sharing and disseminating of cleared leaf image collections held by both institutions and individual researchers. Description The Cleared Leaf Image Database (ClearedLeavesDB) is an online web-based resource for a community of researchers to contribute, access and share cleared leaf images. ClearedLeavesDB leverages resources of large-scale, curated collections while enabling the aggregation of small-scale collections within the same online platform. ClearedLeavesDB is built on Drupal, an open source content management platform. It allows plant biologists to store leaf images online with corresponding meta-data, share image collections with a user community and discuss images and collections via a common forum. We provide tools to upload processed images and results to the database via a web services client application that can be downloaded from the database. Conclusions We developed ClearedLeavesDB, a database focusing on cleared leaf images that combines interactions between users and data via an intuitive web interface.
The web interface allows storage of large collections and integrates with leaf image analysis applications via an open application programming interface (API). The open API allows uploading of processed images and other trait data to the database, further enabling distribution and documentation of analyzed data within the community. The initial database is seeded with nearly 19,000 cleared leaf images representing over 40 GB of image data. Extensible storage and growth of the database is ensured by using the data storage resources of the iPlant Discovery Environment. ClearedLeavesDB can be accessed at http://clearedleavesdb.org. PMID:24678985
ClearedLeavesDB: an online database of cleared plant leaf images.
Das, Abhiram; Bucksch, Alexander; Price, Charles A; Weitz, Joshua S
2014-03-28
Leaf vein networks are critical to both the structure and function of leaves. A growing body of recent work has linked leaf vein network structure to the physiology, ecology and evolution of land plants. In the process, multiple institutions and individual researchers have assembled collections of cleared leaf specimens in which vascular bundles (veins) are rendered visible. In an effort to facilitate analysis and digitally preserve these specimens, high-resolution images are usually created, either of entire leaves or of magnified leaf subsections. In a few cases, collections of digital images of cleared leaves are available for use online. However, these collections do not share a common platform nor is there a means to digitally archive cleared leaf images held by individual researchers (in addition to those held by institutions). Hence, there is a growing need for a digital archive that enables online viewing, sharing and disseminating of cleared leaf image collections held by both institutions and individual researchers. The Cleared Leaf Image Database (ClearedLeavesDB) is an online web-based resource for a community of researchers to contribute, access and share cleared leaf images. ClearedLeavesDB leverages resources of large-scale, curated collections while enabling the aggregation of small-scale collections within the same online platform. ClearedLeavesDB is built on Drupal, an open source content management platform. It allows plant biologists to store leaf images online with corresponding meta-data, share image collections with a user community and discuss images and collections via a common forum. We provide tools to upload processed images and results to the database via a web services client application that can be downloaded from the database. We developed ClearedLeavesDB, a database focusing on cleared leaf images that combines interactions between users and data via an intuitive web interface.
The web interface allows storage of large collections and integrates with leaf image analysis applications via an open application programming interface (API). The open API allows uploading of processed images and other trait data to the database, further enabling distribution and documentation of analyzed data within the community. The initial database is seeded with nearly 19,000 cleared leaf images representing over 40 GB of image data. Extensible storage and growth of the database is ensured by using the data storage resources of the iPlant Discovery Environment. ClearedLeavesDB can be accessed at http://clearedleavesdb.org.
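The open API described above accepts processed images and trait data from a client application. A hedged sketch of how such a client might assemble an upload request; the endpoint path and the JSON field names here are illustrative assumptions, not the documented ClearedLeavesDB protocol:

```python
import json
import urllib.request

def build_upload_request(image_path, metadata, endpoint):
    """Assemble (but do not send) an HTTP POST for an image-upload endpoint.

    The endpoint URL and field names are hypothetical stand-ins for the
    real web-services client's protocol.
    """
    body = json.dumps({"image": image_path, "metadata": metadata}).encode()
    return urllib.request.Request(
        endpoint, data=body,
        headers={"Content-Type": "application/json"}, method="POST")

req = build_upload_request(
    "quercus_alba_001.png",
    {"species": "Quercus alba", "magnification": "5x"},
    "http://clearedleavesdb.org/api/upload")  # illustrative path
```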
In the domain of chemistry one of the greatest benefits to publishing research is that data are shared. Unfortunately, the vast majority of chemical structure data remain locked up in document form, primarily as PDF files. Despite the explosive growth of online chemical databases...
Operator Influence of Unexploded Ordnance Sensor Technologies
2007-03-01
chart display ActiveX control Mscomct2.dll – date/time display ActiveX control Pnpscr.dll – Systran SCRAMNet replicated shared memory device...response value database rgm_p2.dll – Phase 2 shared memory API and implementation Commercial components StripM.ocx – strip chart display ActiveX
WholeCellSimDB: a hybrid relational/HDF database for whole-cell model predictions.
Karr, Jonathan R; Phillips, Nolan C; Covert, Markus W
2014-01-01
Mechanistic 'whole-cell' models are needed to develop a complete understanding of cell physiology. However, extracting biological insights from whole-cell models requires running and analyzing large numbers of simulations. We developed WholeCellSimDB, a database for organizing whole-cell simulations. WholeCellSimDB was designed to enable researchers to search simulation metadata to identify simulations for further analysis, and to quickly slice and aggregate simulation results data. In addition, WholeCellSimDB enables users to share simulations with the broader research community. The database uses a hybrid relational/hierarchical data format architecture to efficiently store and retrieve both simulation setup metadata and results data. WholeCellSimDB provides a graphical Web-based interface to search, browse, plot and export simulations; a JavaScript Object Notation (JSON) Web service to retrieve data for Web-based visualizations; a command-line interface to deposit simulations; and a Python API to retrieve data for advanced analysis. Overall, we believe WholeCellSimDB will help researchers use whole-cell models to advance basic biological science and bioengineering. Database URL: http://www.wholecellsimdb.org. Source code repository: http://github.com/CovertLab/WholeCellSimDB. © The Author(s) 2014. Published by Oxford University Press.
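The hybrid relational/hierarchical design keeps searchable setup metadata in a relational store while bulk results live in a hierarchical one (HDF5 in WholeCellSimDB). A toy sketch of that split, using SQLite for the metadata and a plain dict standing in for the HDF5 file; table and field names are invented for illustration:

```python
import sqlite3

# Relational side: small, indexed, searchable simulation metadata.
meta = sqlite3.connect(":memory:")
meta.execute("CREATE TABLE sims (id TEXT PRIMARY KEY, model TEXT, length_s REAL)")

# Hierarchical side: bulk time-series results, keyed by simulation and
# state path (a dict here stands in for an HDF5 file).
results = {}

def deposit(sim_id, model, length_s, states):
    """Store metadata relationally and results hierarchically."""
    meta.execute("INSERT INTO sims VALUES (?, ?, ?)", (sim_id, model, length_s))
    results[sim_id] = states

def find_sims(model):
    """Search touches only the metadata table; results load lazily."""
    rows = meta.execute("SELECT id FROM sims WHERE model = ?", (model,))
    return [r[0] for r in rows]

deposit("sim-001", "wholecell-mg", 3600.0, {"/mass/total": [1.0, 1.1, 1.3]})
deposit("sim-002", "wholecell-mg", 7200.0, {"/mass/total": [1.0, 1.2, 1.5]})
```

The point of the split is that a metadata query never has to scan the (potentially huge) results arrays.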
The Global Genome Biodiversity Network (GGBN) Data Standard specification.
Droege, G; Barker, K; Seberg, O; Coddington, J; Benson, E; Berendsohn, W G; Bunk, B; Butler, C; Cawsey, E M; Deck, J; Döring, M; Flemons, P; Gemeinholzer, B; Güntsch, A; Hollowell, T; Kelbert, P; Kostadinov, I; Kottmann, R; Lawlor, R T; Lyal, C; Mackenzie-Dodds, J; Meyer, C; Mulcahy, D; Nussbeck, S Y; O'Tuama, É; Orrell, T; Petersen, G; Robertson, T; Söhngen, C; Whitacre, J; Wieczorek, J; Yilmaz, P; Zetzsche, H; Zhang, Y; Zhou, X
2016-01-01
Genomic samples of non-model organisms are becoming increasingly important in a broad range of studies from developmental biology and biodiversity analyses to conservation. Genomic sample definition, description, quality, voucher information and metadata all need to be digitized and disseminated across scientific communities. This information needs to be concise and consistent in today's ever-expanding bioinformatic era, so that complementary data aggregators can easily map databases to one another. In order to facilitate exchange of information on genomic samples and their derived data, the Global Genome Biodiversity Network (GGBN) Data Standard is intended to provide a platform based on a documented agreement to promote the efficient sharing and usage of genomic sample material and associated specimen information in a consistent way. The new data standard presented here builds upon existing standards commonly used within the community, extending them with the capability to exchange data on tissue, environmental and DNA samples as well as sequences. The GGBN Data Standard will reveal and democratize the hidden contents of biodiversity biobanks, for the convenience of everyone in the wider biobanking community. Technical tools exist for data providers to easily map their databases to the standard. Database URL: http://terms.tdwg.org/wiki/GGBN_Data_Standard. © The Author(s) 2016. Published by Oxford University Press.
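Mapping a provider's local database to a shared standard, as the last sentence describes, amounts in the simplest case to renaming local columns to the standard's terms. A minimal illustration; the term names below are invented examples in the spirit of such a standard, not the GGBN Data Standard's normative vocabulary:

```python
# Hypothetical mapping from one provider's local column names to
# standard-style terms (term names illustrative only).
LOCAL_TO_STANDARD = {
    "tissue_kind": "preparationType",
    "dna_conc_ng_ul": "concentration",
    "voucher_no": "materialSampleID",
}

def to_standard(record):
    """Rename mapped fields; pass unmapped fields through unchanged."""
    return {LOCAL_TO_STANDARD.get(k, k): v for k, v in record.items()}

mapped = to_standard({"tissue_kind": "muscle", "dna_conc_ng_ul": 25.0})
```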
NASA Technical Reports Server (NTRS)
Liaw, Morris; Evesson, Donna
1988-01-01
Software Engineering and Ada Database (SEAD) was developed to provide an information resource to NASA and NASA contractors with respect to Ada-based resources and activities which are available or underway either in NASA or elsewhere in the worldwide Ada community. The sharing of such information will reduce duplication of effort while improving quality in the development of future software systems. SEAD data is organized into five major areas: information regarding education and training resources which are relevant to the life cycle of Ada-based software engineering projects such as those in the Space Station program; research publications relevant to NASA projects such as the Space Station Program and conferences relating to Ada technology; the latest progress reports on Ada projects completed or in progress both within NASA and throughout the free world; Ada compilers and other commercial products that support Ada software development; and reusable Ada components generated both within NASA and from elsewhere in the free world. This classified listing of reusable components shall include descriptions of tools, libraries, and other components of interest to NASA. Sources for the data include technical newsletters and periodicals, conference proceedings, the Ada Information Clearinghouse, product vendors, and project sponsors and contractors.
An adaptable XML based approach for scientific data management and integration
NASA Astrophysics Data System (ADS)
Wang, Fusheng; Thiel, Florian; Furrer, Daniel; Vergara-Niedermayr, Cristobal; Qin, Chen; Hackenberg, Georg; Bourgue, Pierre-Emmanuel; Kaltschmidt, David; Wang, Mo
2008-03-01
Increased complexity of scientific research poses new challenges to scientific data management. Meanwhile, scientific collaboration is becoming increasingly important and relies on integrating and sharing data from distributed institutions. We developed SciPort, a Web-based platform supporting scientific data management and integration through a central-server-based distributed architecture, where researchers can easily collect, publish, and share their complex scientific data across multiple institutions. SciPort provides an XML-based general approach to model complex scientific data by representing them as XML documents. The documents capture not only hierarchically structured data, but also images and raw data through references. In addition, SciPort provides an XML-based hierarchical organization of the overall data space to make it convenient for quick browsing. To provide generalization, schemas and hierarchies are customizable with XML-based definitions, so the system can be quickly adapted to different applications. While each institution can manage documents on a Local SciPort Server independently, selected documents can be published to a Central Server to form a global view of shared data across all sites. By storing documents in a native XML database, SciPort provides high schema extensibility and supports comprehensive queries through XQuery. By providing a unified and effective means for data modeling, data access and customization with XML, SciPort offers a flexible and powerful platform for sharing scientific data among scientific research communities, and has been successfully used in both biomedical research and clinical trials.
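The idea of modelling records as XML documents that mix structured fields with references to raw data can be sketched as follows. SciPort itself queries a native XML database with XQuery; an ElementTree path expression plays that role in this stand-in, and the element names are invented for illustration:

```python
import xml.etree.ElementTree as ET

# A toy SciPort-style document: hierarchical metadata plus a reference
# to raw image data rather than embedding it (element names illustrative).
doc = ET.fromstring("""
<document id="exp-42">
  <metadata><site>InstitutionA</site><study>trial-7</study></metadata>
  <data>
    <field name="subject_count">120</field>
    <imageRef href="scans/exp-42/slice-001.dcm"/>
  </data>
</document>
""")

# Path queries stand in for XQuery over the native XML store.
site = doc.findtext("metadata/site")
refs = [e.get("href") for e in doc.findall(".//imageRef")]
```

Because schemas are just XML definitions, adapting the system to a new application means changing documents, not database tables.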
An Adaptable XML Based Approach for Scientific Data Management and Integration.
Wang, Fusheng; Thiel, Florian; Furrer, Daniel; Vergara-Niedermayr, Cristobal; Qin, Chen; Hackenberg, Georg; Bourgue, Pierre-Emmanuel; Kaltschmidt, David; Wang, Mo
2008-02-20
Increased complexity of scientific research poses new challenges to scientific data management. Meanwhile, scientific collaboration is becoming increasingly important and relies on integrating and sharing data from distributed institutions. We developed SciPort, a Web-based platform supporting scientific data management and integration through a central-server-based distributed architecture, where researchers can easily collect, publish, and share their complex scientific data across multiple institutions. SciPort provides an XML-based general approach to model complex scientific data by representing them as XML documents. The documents capture not only hierarchically structured data, but also images and raw data through references. In addition, SciPort provides an XML-based hierarchical organization of the overall data space to make it convenient for quick browsing. To provide generalization, schemas and hierarchies are customizable with XML-based definitions, so the system can be quickly adapted to different applications. While each institution can manage documents on a Local SciPort Server independently, selected documents can be published to a Central Server to form a global view of shared data across all sites. By storing documents in a native XML database, SciPort provides high schema extensibility and supports comprehensive queries through XQuery. By providing a unified and effective means for data modeling, data access and customization with XML, SciPort offers a flexible and powerful platform for sharing scientific data among scientific research communities, and has been successfully used in both biomedical research and clinical trials.
Lee, Chris; Austin, Michael J
2012-01-01
Building on the literature related to evidence-based practice, knowledge management, and learning organizations, this cross-case analysis presents twelve works-in-progress in ten local public human service organizations seeking to develop their own knowledge sharing systems. The data for this cross-case analysis can be found in the various contributions to this Special Issue. The findings feature the developmental aspects of building a learning organization that include knowledge sharing systems featuring transparency, self-assessment, and dissemination and utilization. Implications for practice focus on the structure and processes involved in building knowledge sharing teams inside public human service organizations. Copyright © Taylor & Francis Group, LLC
Performance analysis of static locking in replicated distributed database systems
NASA Technical Reports Server (NTRS)
Kuang, Yinghong; Mukkamala, Ravi
1991-01-01
Data replication and transaction deadlocks can severely affect the performance of distributed database systems. Many current evaluation techniques ignore these aspects, because they are difficult to evaluate through analysis and time-consuming to evaluate through simulation. Here, a technique is discussed that combines simulation and analysis to closely illustrate the impact of deadlocks and to evaluate the performance of replicated distributed databases with both shared and exclusive locks.
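The shared/exclusive locking model under study grants either many concurrent readers or a single writer per data item. A single-node sketch of those semantics; a real replicated system would grant locks through a distributed lock manager and must additionally detect cross-site deadlocks, which this toy class does not attempt:

```python
import threading

class SharedExclusiveLock:
    """Minimal readers-writer lock: many shared holders OR one exclusive."""

    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0      # current shared holders
        self._writer = False   # exclusive holder present?

    def acquire_shared(self):
        with self._cond:
            while self._writer:          # readers wait only for a writer
                self._cond.wait()
            self._readers += 1

    def release_shared(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()  # wake a waiting writer

    def acquire_exclusive(self):
        with self._cond:
            while self._writer or self._readers:  # writer waits for everyone
                self._cond.wait()
            self._writer = True

    def release_exclusive(self):
        with self._cond:
            self._writer = False
            self._cond.notify_all()

lock = SharedExclusiveLock()
lock.acquire_shared(); lock.acquire_shared()   # two concurrent readers: OK
lock.release_shared(); lock.release_shared()
lock.acquire_exclusive()                        # now a single writer holds it
lock.release_exclusive()
```

Deadlock enters the picture when transactions hold locks on one replica while waiting for locks on another, which is exactly the interaction the paper's combined simulation/analysis technique evaluates.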
Enhancements to Demilitarization Process Maps Program (ProMap)
2016-10-14
map tool, ProMap, was improved by implementing new features, and sharing data with MIDAS and AMDIT databases. Specifically, process efficiency was...improved by 1) providing access to APE information contained in the AMDIT database directly from inside ProMap when constructing a process map, 2...what equipment can be efficiently used to demil a particular munition. Associated with this task was the upgrade of the AMDIT database so that
LeishCyc: a guide to building a metabolic pathway database and visualization of metabolomic data.
Saunders, Eleanor C; MacRae, James I; Naderer, Thomas; Ng, Milica; McConville, Malcolm J; Likić, Vladimir A
2012-01-01
The complexity of the metabolic networks in even the simplest organisms has raised new challenges in organizing metabolic information. To address this, specialized computer frameworks have been developed to capture, manage, and visualize metabolic knowledge. The leading databases of metabolic information are those organized under the umbrella of the BioCyc project, which consists of the reference database MetaCyc, and a number of pathway/genome databases (PGDBs) each focussed on a specific organism. A number of PGDBs have been developed for bacterial, fungal, and protozoan pathogens, greatly facilitating dissection of the metabolic potential of these organisms and the identification of new drug targets. Leishmania are protozoan parasites belonging to the family Trypanosomatidae that cause a broad spectrum of diseases in humans. In this work we use the LeishCyc database, the BioCyc database for Leishmania major, to describe how to build a BioCyc database from genomic sequences and associated annotations. By using metabolomic data generated in our group, we show how such databases can be utilized to elucidate specific changes in parasite metabolism.
Menditto, Enrica; Bolufer De Gea, Angela; Cahir, Caitriona; Marengoni, Alessandra; Riegler, Salvatore; Fico, Giuseppe; Costa, Elisio; Monaco, Alessandro; Pecorelli, Sergio; Pani, Luca; Prados-Torres, Alexandra
2016-01-01
Computerized health care databases have been widely described as an excellent opportunity for research. The availability of “big data” has brought about a wave of innovation in health services research projects. Most of the available secondary data sources are restricted to the geographical scope of a given country and present heterogeneous structure and content. Under the umbrella of the European Innovation Partnership on Active and Healthy Ageing, collaborative work conducted by the partners of the group on “adherence to prescription and medical plans” identified the use of observational and large-population databases to monitor medication-taking behavior in the elderly. This article describes the methodology used to gather the information from available databases among the Adherence Action Group partners with the aim of improving data sharing on a European level. A total of six databases belonging to three different European countries (Spain, Republic of Ireland, and Italy) were included in the analysis. Preliminary results suggest that there are some similarities. However, these results should be applied in different contexts and European countries, supporting the idea that large European studies should be designed in order to get the most out of already available databases. PMID:27358570
42 CFR 422.458 - Risk sharing with regional MA organizations for 2006 and 2007.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 42 Public Health 3 2013-10-01 2013-10-01 false Risk sharing with regional MA organizations for... Special Rules for MA Regional Plans § 422.458 Risk sharing with regional MA organizations for 2006 and 2007. (a) Terminology. For purposes of this section— Allowable costs means, with respect to an MA...
42 CFR 422.458 - Risk sharing with regional MA organizations for 2006 and 2007.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 42 Public Health 3 2014-10-01 2014-10-01 false Risk sharing with regional MA organizations for... Special Rules for MA Regional Plans § 422.458 Risk sharing with regional MA organizations for 2006 and 2007. (a) Terminology. For purposes of this section— Allowable costs means, with respect to an MA...
42 CFR 422.458 - Risk sharing with regional MA organizations for 2006 and 2007.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 42 Public Health 3 2011-10-01 2011-10-01 false Risk sharing with regional MA organizations for... for MA Regional Plans § 422.458 Risk sharing with regional MA organizations for 2006 and 2007. (a) Terminology. For purposes of this section— Allowable costs means, with respect to an MA regional plan offered...
42 CFR 422.458 - Risk sharing with regional MA organizations for 2006 and 2007.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 42 Public Health 3 2012-10-01 2012-10-01 false Risk sharing with regional MA organizations for... Special Rules for MA Regional Plans § 422.458 Risk sharing with regional MA organizations for 2006 and 2007. (a) Terminology. For purposes of this section— Allowable costs means, with respect to an MA...
DOT National Transportation Integrated Search
2009-01-01
The Database demonstrates the unity and commonality of T-M but presents each one in its separate state. Yet in that process the full panoply of T-M is unfolded, including their shared and connected state. There are thousands of Transportation-Markings...
Code of Federal Regulations, 2010 CFR
2010-10-01
....2 GHz to 12.7 GHz band. (a) NGSO FSS licensees shall maintain a subscriber database in a format that... database to enable the MVDDS licensee to determine whether the proposed MVDDS transmitting site meets the...
Pritchard, Emma
2001-05-01
The Royal College of Nursing Gerontological Nursing Programme is compiling a database of nurses in the United Kingdom and Eire who are using the RCN Assessment Tool for older people. This database could be used for nurses using the tool to network with each other, share issues and keep nurses in touch with any developments regarding the tool.
Integrating Digital Images into the Art and Art History Curriculum.
ERIC Educational Resources Information Center
Pitt, Sharon P.; Updike, Christina B.; Guthrie, Miriam E.
2002-01-01
Describes an Internet-based image database system connected to a flexible, in-class teaching and learning tool (the Madison Digital Image Database) developed at James Madison University to bring digital images to the arts and humanities classroom. Discusses content, copyright issues, ensuring system effectiveness, instructional impact, sharing the…
... for you to prevent causing harmful medication interactions. Organization tips Get into a routine of taking your ... Organ Sharing, a non-profit 501(c)(3) organization ...
Sridhar, Vishnu B; Tian, Peifang; Dale, Anders M; Devor, Anna; Saisan, Payam A
2014-01-01
We present database client software, Neurovascular Network Explorer 1.0 (NNE 1.0), which uses a MATLAB(®)-based Graphical User Interface (GUI) for interaction with a database of 2-photon single-vessel diameter measurements from our previous publication (Tian et al., 2010). These data are of particular interest for modeling the hemodynamic response. NNE 1.0 is downloaded by the user and then runs either as a MATLAB script or as a standalone program on a Windows platform. The GUI allows browsing the database according to parameters specified by the user, simple manipulation and visualization of the retrieved records (such as averaging and peak-normalization), and export of the results. Further, we provide the NNE 1.0 source code. With this source code, the user can build a database of their own experimental results, given the appropriate data structure and naming conventions, and thus share their data in a user-friendly format with other investigators. NNE 1.0 provides an example of a seamless and low-cost solution for the sharing of experimental data by a regular-size neuroscience laboratory and may serve as a general template, facilitating dissemination of biological results and accelerating data-driven modeling approaches.
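The record manipulations mentioned, averaging and peak-normalization, are simple point-wise operations. A sketch in Python rather than the tool's MATLAB; the function names are illustrative and the actual NNE 1.0 implementation may differ in detail:

```python
def average(records):
    """Point-wise average of equally long diameter time courses."""
    return [sum(col) / len(col) for col in zip(*records)]

def peak_normalize(series):
    """Scale a time course so its peak magnitude is 1."""
    peak = max(abs(x) for x in series)
    return [x / peak for x in series] if peak else list(series)

# Two toy single-vessel diameter-change records, averaged then normalized
avg = average([[0.0, 2.0, 4.0], [0.0, 4.0, 8.0]])
norm = peak_normalize(avg)
```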
Recent ride-sharing research and policy findings. Transportation Research Record
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mehranian, M.; Wachs, M.; Shoup, D.
1987-01-01
The five papers in the report deal with the following areas: parking cost and mode choices among downtown workers: a case study; duration of carpool and vanpool usage by clients of rides; a ride-sharing market analysis survey of commuter attitudes and behavior at a major suburban employment center; alternative access modes data-base project; formulating ride-sharing goals for transportation and air-quality plans: Southern California as a case study.
DISTRIBUTED STRUCTURE-SEARCHABLE TOXICITY ...
The ability to assess the potential genotoxicity, carcinogenicity, or other toxicity of pharmaceutical or industrial chemicals based on chemical structure information is a highly coveted and shared goal of varied academic, commercial, and government regulatory groups. These diverse interests often employ different approaches and have different criteria and use for toxicity assessments, but they share a need for unrestricted access to existing public toxicity data linked with chemical structure information. Currently, there exists no central repository of toxicity information, commercial or public, that adequately meets the data requirements for flexible analogue searching, SAR model development, or building of chemical relational databases (CRD). The Distributed Structure-Searchable Toxicity (DSSTox) Public Database Network is being proposed as a community-supported, web-based effort to address these shared needs of the SAR and toxicology communities. The DSSTox project has the following major elements: 1) to adopt and encourage the use of a common standard file format (SDF) for public toxicity databases that includes chemical structure, text and property information, and that can easily be imported into available CRD applications; 2) to implement a distributed source approach, managed by a DSSTox Central Website, that will enable decentralized, free public access to structure-toxicity data files, and that will effectively link knowledgeable toxicity data s
Stade, Björn; Seelow, Dominik; Thomsen, Ingo; Krawczak, Michael; Franke, Andre
2014-01-01
Next Generation Sequencing (NGS) of whole exomes or genomes is increasingly being used in human genetic research and diagnostics. Sharing NGS data with third parties can help physicians and researchers to identify causative or predisposing mutations for a specific sample of interest more efficiently. In many cases, however, the exchange of such data may collide with data privacy regulations. GrabBlur is a newly developed tool to aggregate and share NGS-derived single nucleotide variant (SNV) data in a public database, keeping individual samples unidentifiable. In contrast to other currently existing SNV databases, GrabBlur includes phenotypic information and contact details of the submitter of a given database entry. By means of GrabBlur human geneticists can securely and easily share SNV data from resequencing projects. GrabBlur can ease the interpretation of SNV data by offering basic annotations, genotype frequencies and in particular phenotypic information - given that this information was shared - for the SNV of interest. GrabBlur facilitates the combination of phenotypic and NGS data (VCF files) via a local interface or command line operations. Data submissions may include HPO (Human Phenotype Ontology) terms, other trait descriptions, NGS technology information and the identity of the submitter. Most of this information is optional and its provision at the discretion of the submitter. Upon initial intake, GrabBlur merges and aggregates all sample-specific data. If a certain SNV is rare, the sample-specific information is replaced with the submitter identity. Generally, all data in GrabBlur are highly aggregated so that they can be shared with others while ensuring maximum privacy. Thus, it is impossible to reconstruct complete exomes or genomes from the database or to re-identify single individuals. 
After the individual information has been sufficiently "blurred", the data can be uploaded into a publicly accessible domain where aggregated genotypes are provided alongside phenotypic information. A web interface allows querying the database and extracting gene-wise SNV information. If an interesting SNV is found, the interrogator can contact the submitter to exchange further information on the carrier and clarify, for example, whether the carrier's phenotype matches that of their own patient.
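The aggregation rule described above can be sketched in a few lines. This is only an illustration of the blurring idea under assumed field names (`snv`, `submitter`, `hpo`), not GrabBlur's actual implementation; the real tool works on VCF input and applies further safeguards.

```python
from collections import defaultdict

def blur_variants(observations, rarity_threshold=2):
    """Aggregate per-sample SNV observations into shareable records.
    For rare variants (carrier count below the threshold) the
    sample-level phenotype details are withheld; only the carrier
    count and the submitters' identities remain visible."""
    by_snv = defaultdict(list)
    for obs in observations:
        by_snv[obs["snv"]].append(obs)
    public = {}
    for snv, group in by_snv.items():
        record = {
            "carrier_count": len(group),
            "submitters": sorted({o["submitter"] for o in group}),
        }
        if len(group) >= rarity_threshold:
            # common enough: aggregated phenotype terms may be shared
            record["hpo_terms"] = sorted(
                {t for o in group for t in o.get("hpo", [])})
        public[snv] = record
    return public

observations = [
    {"snv": "chr1:g.100A>T", "submitter": "lab_a", "hpo": ["HP:0001250"]},
    {"snv": "chr1:g.100A>T", "submitter": "lab_b",
     "hpo": ["HP:0001250", "HP:0000252"]},
    {"snv": "chr2:g.200G>C", "submitter": "lab_a", "hpo": ["HP:0004322"]},
]
shared = blur_variants(observations)
```

The singleton variant on chr2 ends up exposing nothing but a count and a contact address, which is exactly the trade the abstract describes: re-identification becomes impossible, while follow-up contact stays possible.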
Gigwa-Genotype investigator for genome-wide analyses.
Sempéré, Guilhem; Philippe, Florian; Dereeper, Alexis; Ruiz, Manuel; Sarah, Gautier; Larmande, Pierre
2016-06-06
Exploring the structure of genomes and analyzing their evolution is essential to understanding the ecological adaptation of organisms. However, with the large amounts of data being produced by next-generation sequencing, computational challenges arise in terms of storage, search, sharing, analysis and visualization. This is particularly true with regards to studies of genomic variation, which are currently lacking scalable and user-friendly data exploration solutions. Here we present Gigwa, a web-based tool that provides an easy and intuitive way to explore large amounts of genotyping data by filtering it not only on the basis of variant features, including functional annotations, but also on genotype patterns. The data storage relies on MongoDB, which offers good scalability properties. Gigwa can handle multiple databases and may be deployed in either single- or multi-user mode. In addition, it provides a wide range of popular export formats. The Gigwa application is suitable for managing large amounts of genomic variation data. Its user-friendly web interface makes such processing widely accessible. It can either be simply deployed on a workstation or be used to provide a shared data portal for a given community of researchers.
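A query that combines a variant-level feature with a genotype pattern, the two filtering levels Gigwa exposes, can be illustrated in plain Python. In Gigwa itself the filter is expressed as a MongoDB query; the record layout below is hypothetical and stands in for that storage.

```python
def filter_variants(variants, annotation=None, min_alt_samples=0):
    """Keep variants that match a functional annotation AND a genotype
    pattern: at least `min_alt_samples` samples carry an ALT allele."""
    hits = []
    for v in variants:
        # variant-level filter: functional annotation
        if annotation is not None and annotation not in v.get("annotations", []):
            continue
        # genotype-level filter: count samples with any non-REF allele
        alt_samples = sum(
            1 for gt in v["genotypes"].values()
            if any(allele != 0 for allele in gt))
        if alt_samples >= min_alt_samples:
            hits.append(v["id"])
    return hits

variants = [
    {"id": "snp1", "annotations": ["missense_variant"],
     "genotypes": {"s1": (0, 1), "s2": (0, 0), "s3": (1, 1)}},
    {"id": "snp2", "annotations": ["synonymous_variant"],
     "genotypes": {"s1": (0, 0), "s2": (0, 1), "s3": (0, 0)}},
]
hits = filter_variants(variants, annotation="missense_variant",
                       min_alt_samples=2)
```

Pushing both filter levels into the database layer, as Gigwa does with MongoDB, is what keeps this kind of query scalable to millions of variants.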
Re-thinking organisms: The impact of databases on model organism biology.
Leonelli, Sabina; Ankeny, Rachel A
2012-03-01
Community databases have become crucial to the collection, ordering and retrieval of data gathered on model organisms, as well as to the ways in which these data are interpreted and used across a range of research contexts. This paper analyses the impact of community databases on research practices in model organism biology by focusing on the history and current use of four community databases: FlyBase, Mouse Genome Informatics, WormBase and The Arabidopsis Information Resource. We discuss the standards used by the curators of these databases for what counts as reliable evidence, acceptable terminology, appropriate experimental set-ups and adequate materials (e.g., specimens). On the one hand, these choices are informed by the collaborative research ethos characterising most model organism communities. On the other hand, the deployment of these standards in databases reinforces this ethos and gives it concrete and precise instantiations by shaping the skills, practices, values and background knowledge required of the database users. We conclude that the increasing reliance on community databases as vehicles to circulate data is having a major impact on how researchers conduct and communicate their research, which affects how they understand the biology of model organisms and its relation to the biology of other species. Copyright © 2011 Elsevier Ltd. All rights reserved.
Long term volcanic hazard analysis in the Canary Islands
NASA Astrophysics Data System (ADS)
Becerril, L.; Galindo, I.; Laín, L.; Llorente, M.; Mancebo, M. J.
2009-04-01
Historic volcanism in Spain is restricted to the Canary Islands, an archipelago of seven volcanic islands. Several historic eruptions have been registered in the last five hundred years. However, despite the large number of residents and tourists in the archipelago, only a few volcanic hazard studies have been carried out. These studies have mainly focused on the development of hazard maps for the islands of Lanzarote and Tenerife, especially for land-use planning. The main handicap for such studies in the Canary Islands is the scarcity of well-reported historical eruptions, together with a lack of geochronological, geochemical and structural data. In recent years, the use of Geographical Information Systems (GIS) and improvements in the modelling of volcanic processes have provided important tools for volcanic hazard assessment. Although these sophisticated programs are very useful, they must be fed with large amounts of data that, as in the case of the Canary Islands, are not always available. For this reason, the Spanish Geological Survey (IGME) is developing a complete geo-referenced database for long-term volcanic analysis in the Canary Islands. The Canarian Volcanic Hazard Database (HADA) is based on a GIS that helps organize and manage volcanic information efficiently. HADA includes the following groups of information: (1) 1:25,000-scale geologic maps, (2) 1:25,000 topographic maps, (3) geochronological data, (4) geochemical data, (5) structural information, and (6) climatic data. Data must pass quality control before they are included in the database, and new data are easily integrated. With the HADA database the IGME has begun a systematic organization of the existing data. In the near future, the IGME will generate new information to be included in HADA, such as volcanological maps of the islands and additional structural and geochronological data, to support long-term volcanic hazard analysis.
HADA will provide information of sufficient quality to map volcanic hazards and to run more reliable hazard models; in addition, it aims to become a data-sharing system that improves communication between researchers, reduces redundant work, and serves as the reference for geological research in the Canary Islands.
Srinivas, T R; Taber, D J; Su, Z; Zhang, J; Mour, G; Northrup, D; Tripathi, A; Marsden, J E; Moran, W P; Mauldin, P D
2017-03-01
We sought proof of concept of a Big Data solution incorporating longitudinal structured and unstructured patient-level data from electronic health records (EHR) to predict graft loss (GL) and mortality. For a quality improvement initiative, GL and mortality prediction models were constructed using baseline and follow-up data (0-90 days posttransplant; structured and unstructured for 1-year models; data up to 1 year for 3-year models) on adult solitary kidney transplant recipients transplanted during 2007-2015, as follows: Model 1: United Network for Organ Sharing (UNOS) data; Model 2: UNOS & Transplant Database (Tx Database) data; Model 3: UNOS, Tx Database & EHR comorbidity data; and Model 4: UNOS, Tx Database, EHR data, posttransplant trajectory data, and unstructured data. A 10% 3-year GL rate was observed among 891 patients (2007-2015). Layering of data sources improved model performance: Model 1: area under the curve (AUC), 0.66 (95% confidence interval [CI]: 0.60-0.72); Model 2: AUC, 0.68 (95% CI: 0.61-0.74); Model 3: AUC, 0.72 (95% CI: 0.66-0.77); Model 4: AUC, 0.84 (95% CI: 0.79-0.89). One-year GL (AUC, 0.87; Model 4) and 3-year mortality (AUC, 0.84; Model 4) models performed similarly. A Big Data approach significantly adds efficacy to GL and mortality prediction models and is EHR deployable to optimize outcomes. © 2016 The American Society of Transplantation and the American Society of Transplant Surgeons.
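The AUC figures above have a simple probabilistic reading: the chance that a randomly chosen graft-loss case receives a higher risk score than a randomly chosen non-case. A minimal rank-based computation of that quantity (not the authors' pipeline, just the metric) is:

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney identity:
    P(score of a random positive > score of a random negative),
    counting ties as one half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# toy example: 4 patients, 2 with graft loss (label 1)
print(auc([1, 1, 0, 0], [0.9, 0.4, 0.35, 0.8]))  # 0.75
```

Under this reading, moving from Model 1 (0.66) to Model 4 (0.84) means the richer data sources raise the probability of correctly ranking a case above a non-case from about two-thirds to about five-sixths.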
A critique of national solidarity in transnational organ sharing in Europe
Tretyakov, Konstantin
2018-01-01
In this article, I critically examine the principle of national solidarity in organ sharing across national borders. More specifically, I analyse the policy foundations of solidarity in the transnational allocation of organs and its implementation in the system of national balance points adopted in Europe. I argue that the system of national balance points is based on statist collectivism and therefore is oriented more toward collective, rather than individual welfare. The same collective welfare rationale is also evident from leading policy statements about self-sufficiency in organ donation that seem to assume that cross-border organ sharing can be wrong if collective welfare is violated. This collectivist system of organ sharing can produce unjust results to individual candidates for organ transplantation. I propose several measures to reform the existing solidarity-based framework for the procurement and allocation of organs in order to balance the collective and the individual welfare of the donors and recipients of organs. I also discuss the implications of adopting that proposal. PMID:29707215
Interfaces Leading Groups of Learners to Make Their Shared Problem-Solving Organization Explicit
ERIC Educational Resources Information Center
Moguel, P.; Tchounikine, P.; Tricot, A.
2012-01-01
In this paper, we consider collective problem-solving challenges and a particular structuring objective: lead groups of learners to make their shared problem-solving organization explicit. Such an objective may be considered as a way to lead learners to consider building and maintaining a shared organization, and/or as a way to provide a basis for…
Kottke, Thomas E; Pronk, Nico; Zinkel, Andrew R; Isham, George J
2017-01-01
Health care organizations can magnify the impact of their community service and other philanthropic activities by implementing programs that create shared value. By definition, shared value is created when an initiative generates benefit for the sponsoring organization while also generating societal and community benefit. Because the programs generate benefit for the sponsoring organizations, the magnitude of any particular initiative is limited only by the market for the benefit and not the resources that are available for philanthropy. In this article we use three initiatives in sectors other than health care to illustrate the concept of shared value. We also present examples of five types of shared value programs that are sponsored by health care organizations: telehealth, worksite health promotion, school-based health centers, green and healthy housing, and clean and green health services. On the basis of the innovativeness of health care organizations that have already implemented programs that create shared value, we conclude that the opportunities for all health care organizations to create positive impact for individuals and communities through similar programs are large, and the limits have yet to be defined. PMID:28488982
CARD 2017: expansion and model-centric curation of the comprehensive antibiotic resistance database
Jia, Baofeng; Raphenya, Amogelang R.; Alcock, Brian; Waglechner, Nicholas; Guo, Peiyao; Tsang, Kara K.; Lago, Briony A.; Dave, Biren M.; Pereira, Sheldon; Sharma, Arjun N.; Doshi, Sachin; Courtot, Mélanie; Lo, Raymond; Williams, Laura E.; Frye, Jonathan G.; Elsayegh, Tariq; Sardar, Daim; Westman, Erin L.; Pawlowski, Andrew C.; Johnson, Timothy A.; Brinkman, Fiona S.L.; Wright, Gerard D.; McArthur, Andrew G.
2017-01-01
The Comprehensive Antibiotic Resistance Database (CARD; http://arpcard.mcmaster.ca) is a manually curated resource containing high quality reference data on the molecular basis of antimicrobial resistance (AMR), with an emphasis on the genes, proteins and mutations involved in AMR. CARD is ontologically structured, model centric, and spans the breadth of AMR drug classes and resistance mechanisms, including intrinsic, mutation-driven and acquired resistance. It is built upon the Antibiotic Resistance Ontology (ARO), a custom built, interconnected and hierarchical controlled vocabulary allowing advanced data sharing and organization. Its design allows the development of novel genome analysis tools, such as the Resistance Gene Identifier (RGI) for resistome prediction from raw genome sequence. Recent improvements include extensive curation of additional reference sequences and mutations, development of a unique Model Ontology and accompanying AMR detection models to power sequence analysis, new visualization tools, and expansion of the RGI for detection of emergent AMR threats. CARD curation is updated monthly based on an interplay of manual literature curation, computational text mining, and genome analysis. PMID:27789705
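The ontological structure described above is what lets a single detected gene roll up to broader drug classes and resistance mechanisms. A toy sketch of that upward traversal follows; the term names are illustrative stand-ins, not real ARO accessions.

```python
def ancestors(term, is_a):
    """Walk an ARO-style 'is_a' hierarchy from a term up to its roots,
    collecting every broader category the term belongs to."""
    seen = []
    frontier = [term]
    while frontier:
        t = frontier.pop()
        for parent in is_a.get(t, []):
            if parent not in seen:
                seen.append(parent)
                frontier.append(parent)
    return seen

# toy hierarchy: a beta-lactamase gene rolls up to a resistance
# mechanism and a drug-class term
is_a = {
    "NDM-1": ["beta-lactamase"],
    "beta-lactamase": ["antibiotic inactivation", "beta-lactam resistance"],
}
lineage = ancestors("NDM-1", is_a)
```

This is the traversal that allows a tool like the RGI to report a raw sequence hit not just as one gene but under every mechanism and drug class it implies.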
Povey, Sue; Al Aqeel, Aida I; Cambon-Thomsen, Anne; Dalgleish, Raymond; den Dunnen, Johan T; Firth, Helen V; Greenblatt, Marc S; Barash, Carol Isaacson; Parker, Michael; Patrinos, George P; Savige, Judith; Sobrido, Maria-Jesus; Winship, Ingrid; Cotton, Richard GH
2010-01-01
More than 1,000 Web-based locus-specific variation databases (LSDBs) are listed on the Website of the Human Genetic Variation Society (HGVS). These individual efforts, which often relate phenotype to genotype, are a valuable source of information for clinicians, patients, and their families, as well as for basic research. The initiators of the Human Variome Project recently recognized that having access to some of the immense resources of unpublished information already present in diagnostic laboratories would provide critical data to help manage genetic disorders. However, there are significant ethical issues involved in sharing these data worldwide. An international working group presents second-generation guidelines addressing ethical issues relating to the curation of human LSDBs that provide information via a Web-based interface. It is intended that these should help current and future curators and may also inform the future decisions of ethics committees and legislators. These guidelines have been reviewed by the Ethics Committee of the Human Genome Organization (HUGO). Hum Mutat 31:–6, 2010. © 2010 Wiley-Liss, Inc. PMID:20683926
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bevins, N; Vanderhoek, M; Lang, S
2014-06-15
Purpose: Medical display monitor calibration and quality control present challenges to medical physicists. The purpose of this work is to demonstrate and share experiences with an open source package that allows for both initial monitor setup and routine performance evaluation. Methods: A software package, pacsDisplay, has been developed over the last decade to aid in the calibration of all monitors within the radiology group in our health system. The software is used to calibrate monitors to follow the DICOM Grayscale Standard Display Function (GSDF) via lookup tables installed on the workstation. Additional functionality facilitates periodic evaluations of both primary and secondary medical monitors to ensure satisfactory performance. This software is installed on all radiology workstations, and can also be run as a stand-alone tool from a USB disk. Recently, a database has been developed to store and centralize the monitor performance data and to provide long-term trends for compliance with internal standards and various accrediting organizations. Results: Implementation and utilization of pacsDisplay has resulted in improved monitor performance across the health system. Monitor testing is now performed at regular intervals and the software is being used across multiple imaging modalities. Monitor performance characteristics such as maximum and minimum luminance, ambient luminance and illuminance, color tracking, and GSDF conformity are loaded into a centralized database for system performance comparisons. Compliance reports for organizations such as MQSA, ACR, and TJC are generated automatically and stored in the same database. Conclusion: An open source software solution has simplified and improved the standardization of displays within our health system. This work serves as an example method for calibrating and testing monitors within an enterprise health system.
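Calibrating a display to the GSDF means building a lookup table whose driving levels land on luminances spaced uniformly in Just-Noticeable-Difference (JND) index. The GSDF itself is a published rational polynomial in the logarithm of the JND index (DICOM PS3.14); a direct transcription is sketched below. The coefficients are quoted from memory of the standard and should be verified against PS3.14 before any clinical use.

```python
import math

def gsdf_luminance(j):
    """Luminance in cd/m^2 for a JND index j in [1, 1023], per the
    DICOM PS3.14 Grayscale Standard Display Function."""
    a, b, c = -1.3011877, -2.5840191e-2, 8.0242636e-2
    d, e, f = -1.0320229e-1, 1.3646699e-1, 2.8745620e-2
    g, h = -2.5468404e-2, -3.1978977e-3
    k, m = 1.2992634e-4, 1.3635334e-3
    x = math.log(j)  # natural log of the JND index
    num = a + c * x + e * x**2 + g * x**3 + m * x**4
    den = 1 + b * x + d * x**2 + f * x**3 + h * x**4 + k * x**5
    return 10.0 ** (num / den)

# a 256-entry GSDF target table for an 8-bit display, spanning the
# JND range the display's measured min/max luminance would map onto
table = [gsdf_luminance(1 + i * (1023 - 1) / 255) for i in range(256)]
```

In practice a tool like pacsDisplay measures the monitor's actual luminance response and writes a workstation lookup table that bends it onto targets like these.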
The USA-NPN Information Management System: A tool in support of phenological assessments
NASA Astrophysics Data System (ADS)
Rosemartin, A.; Vazquez, R.; Wilson, B. E.; Denny, E. G.
2009-12-01
The USA National Phenology Network (USA-NPN) serves science and society by promoting a broad understanding of plant and animal phenology and the relationships among phenological patterns and all aspects of environmental change. Data management and information sharing are central to the USA-NPN mission. The USA-NPN develops, implements, and maintains a comprehensive Information Management System (IMS) to serve the needs of the network, including the collection, storage and dissemination of phenology data, access to phenology-related information, tools for data interpretation, and communication among partners of the USA-NPN. The IMS includes components for data storage, such as the National Phenology Database (NPD), and several online user interfaces to accommodate data entry, data download, data visualization and catalog searches for phenology-related information. The IMS is governed by a set of standards to ensure security, privacy, data access, and data quality. The National Phenology Database is designed to efficiently accommodate large quantities of phenology data, to be flexible to the changing needs of the network, and to provide for quality control. The database stores phenology data from multiple sources (e.g., partner organizations, researchers and citizen observers), and provides for integration with legacy datasets. Several services will be created to provide access to the data, including reports, visualization interfaces, and web services. These services will provide integrated access to phenology and related information for scientists, decision-makers and general audiences. Phenological assessments at any scale will rely on secure and flexible information management systems for the organization and analysis of phenology data. The USA-NPN’s IMS can serve phenology assessments directly, through data management and indirectly as a model for large-scale integrated data management.
Innovation in organ transplantation: A meeting report.
Fishman, Jay A; Greenwald, Melissa
2018-05-09
This workshop targeted opportunities to stimulate transformative innovation in organ transplantation. Participants reached consensus regarding the following: (1) Mechanisms are needed to improve the coordination of policy and oversight activities, given overlapping responsibilities for transplantation and clinical investigation among federal agencies. Innovative clinical trials span traditional administrative boundaries and include stakeholders with diverse interests. Participants identified the need for a governmental interagency working group to coordinate nationwide transplant-related activities. (2) Improvements are required in clinical metrics for transplantation, with alignment of performance goals across transplantation organizations and any development of data requirements being consistent with those goals. Database coordination among clinical centers, organ procurement organizations, regulatory agencies, and payers would facilitate research and better inform policy. New data requirements should provide actionable insights into clinical performance. (3) Innovative research seen as potentially adversely affecting Program-Specific Reports may reduce centers' participation. Cutting-edge research requires mitigation of risk-averse behaviors created by reporting of clinical outcomes data. Participants proposed a new review process in advance of implementation of clinical trials to guide "carve-outs" of transplant center outcomes data from Program-Specific Reports. Clinical transplantation will be advanced by the development of a shared and comprehensive research agenda to facilitate coordination of research and policy. © 2018 The American Society of Transplantation and the American Society of Transplant Surgeons.
LabKey Server: an open source platform for scientific data integration, analysis and collaboration.
Nelson, Elizabeth K; Piehler, Britt; Eckels, Josh; Rauch, Adam; Bellew, Matthew; Hussey, Peter; Ramsay, Sarah; Nathe, Cory; Lum, Karl; Krouse, Kevin; Stearns, David; Connolly, Brian; Skillman, Tom; Igra, Mark
2011-03-09
Broad-based collaborations are becoming increasingly common among disease researchers. For example, the Global HIV Enterprise has united cross-disciplinary consortia to speed progress towards HIV vaccines through coordinated research across the boundaries of institutions, continents and specialties. New, end-to-end software tools for data and specimen management are necessary to achieve the ambitious goals of such alliances. These tools must enable researchers to organize and integrate heterogeneous data early in the discovery process, standardize processes, gain new insights into pooled data and collaborate securely. To meet these needs, we enhanced the LabKey Server platform, formerly known as CPAS. This freely available, open source software is maintained by professional engineers who use commercially proven practices for software development and maintenance. Recent enhancements support: (i) Submitting specimen requests across collaborating organizations (ii) Graphically defining new experimental data types, metadata and wizards for data collection (iii) Transitioning experimental results from a multiplicity of spreadsheets to custom tables in a shared database (iv) Securely organizing, integrating, analyzing, visualizing and sharing diverse data types, from clinical records to specimens to complex assays (v) Interacting dynamically with external data sources (vi) Tracking study participants and cohorts over time (vii) Developing custom interfaces using client libraries (viii) Authoring custom visualizations in a built-in R scripting environment. Diverse research organizations have adopted and adapted LabKey Server, including consortia within the Global HIV Enterprise. Atlas is an installation of LabKey Server that has been tailored to serve these consortia. It is in production use and demonstrates the core capabilities of LabKey Server. Atlas now has over 2,800 active user accounts originating from approximately 36 countries and 350 organizations.
It tracks roughly 27,000 assay runs, 860,000 specimen vials and 1,300,000 vial transfers. Sharing data, analysis tools and infrastructure can speed the efforts of large research consortia by enhancing efficiency and enabling new insights. The Atlas installation of LabKey Server demonstrates the utility of the LabKey platform for collaborative research. Stable, supported builds of LabKey Server are freely available for download at http://www.labkey.org. Documentation and source code are available under the Apache License 2.0.
Patient/family views on data sharing in rare diseases: study in the European LeukoTreat project
Darquy, Sylviane; Moutel, Grégoire; Lapointe, Anne-Sophie; D'Audiffret, Diane; Champagnat, Julie; Guerroui, Samia; Vendeville, Marie-Louise; Boespflug-Tanguy, Odile; Duchange, Nathalie
2016-01-01
The purpose of this study was to explore patient and family views on the sharing of their medical data in the context of compiling a European leukodystrophies database. A survey questionnaire was delivered with help from referral centers and the European Leukodystrophies Association, and the questionnaires returned were analyzed both quantitatively and qualitatively. This study found that patients/families were strongly in favor of participating. Patients/families hold great hope and trust in the development of this type of research. They have a strong need for information and transparency on database governance, the conditions framing access to data, all research conducted, and partnerships with the pharmaceutical industry, and they also need access to results. Our findings provide ethics-driven arguments for a process combining initial broad consent with ongoing information, and for both we propose key deliverables to database participants. PMID:26081642
Cnudde, Peter; Rolfson, Ola; Nemes, Szilard; Kärrholm, Johan; Rehnberg, Clas; Rogmark, Cecilia; Timperley, John; Garellick, Göran
2016-10-04
Sweden offers a unique opportunity to researchers to construct comprehensive databases that encompass a wide variety of healthcare related data. Statistics Sweden and the National Board of Health and Welfare collect individual level data for all Swedish residents that ranges from medical diagnoses to socioeconomic information. In addition to the information collected by governmental agencies, the medical profession has initiated nationwide Quality Registers that collect data on specific diagnoses and interventions. The Quality Registers analyze activity within healthcare institutions, with the aims of improving clinical care and fostering clinical research. The Swedish Hip Arthroplasty Register (SHAR) has been collecting data since 1979. Joint replacement in general and hip replacement in particular is considered a success story with low mortality and complication rates. It is credited to the pioneering work of the SHAR that the revision rate following hip replacement surgery in Sweden is amongst the lowest in the world. This has been accomplished by the diligent follow-up of patients with feedback of outcomes to the providers of the healthcare, along with post-market surveillance of individual implant performance. During its existence SHAR has experienced constant organic growth. One major development was the introduction of the Patient Reported Outcome Measures program, giving a voice to the patients in healthcare performance evaluation. The next aim for SHAR is to integrate patients' wishes and expectations with the surgeons' expertise in the form of a Shared Decision-Making (SDM) instrument. The first step in building such an instrument is to assemble the necessary data. This involves linking the SHAR database with the two aforementioned governmental agencies. The linkage is done by the 10-digit personal identity number assigned at birth (or immigration) for every Swedish resident.
The anonymized data is stored on encrypted servers and can only be accessed after double identification. These data will serve as the starting point for several research projects and clinical improvement work.
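A linkage step of the kind described above can be made privacy-preserving by replacing the personal identity number with a keyed hash before records are joined, so the merged dataset never contains the identifier itself. The sketch below illustrates that generic pattern under invented field names; it is an assumption for illustration, not the registers' documented procedure.

```python
import hashlib
import hmac

def pseudonym(pin, secret):
    """Keyed hash of a personal identity number: stable enough for
    record linkage, but not reversible without the secret key."""
    return hmac.new(secret, pin.encode(), hashlib.sha256).hexdigest()

def link(register_a, register_b, secret):
    """Join two registers on the pseudonymized identity number and
    drop the direct identifier from the merged record."""
    index = {pseudonym(r["pin"], secret): r for r in register_a}
    linked = []
    for r in register_b:
        key = pseudonym(r["pin"], secret)
        if key in index:
            merged = {**index[key], **r}
            del merged["pin"]  # the direct identifier never leaves
            linked.append(merged)
    return linked

secret = b"registry-linkage-key"  # held only by the trusted linking party
shar = [{"pin": "191212121212", "implant": "cup_A"},
        {"pin": "200101019999", "implant": "cup_B"}]
agency = [{"pin": "191212121212", "income_band": 3}]
linked = link(shar, agency, secret)
```

Keeping the secret with a single trusted party means neither register can re-identify the other's records from the pseudonyms alone.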
D-PLACE: A Global Database of Cultural, Linguistic and Environmental Diversity
Kirby, Kathryn R.; Gray, Russell D.; Greenhill, Simon J.; Jordan, Fiona M.; Gomes-Ng, Stephanie; Bibiko, Hans-Jörg; Blasi, Damián E.; Botero, Carlos A.; Bowern, Claire; Ember, Carol R.; Leehr, Dan; Low, Bobbi S.; McCarter, Joe; Divale, William; Gavin, Michael C.
2016-01-01
From the foods we eat and the houses we construct, to our religious practices and political organization, to who we can marry and the types of games we teach our children, the diversity of cultural practices in the world is astounding. Yet, our ability to visualize and understand this diversity is limited by the ways it has been documented and shared: on a culture-by-culture basis, in locally-told stories or difficult-to-access repositories. In this paper we introduce D-PLACE, the Database of Places, Language, Culture, and Environment. This expandable and open-access database (accessible at https://d-place.org) brings together a dispersed corpus of information on the geography, language, culture, and environment of over 1400 human societies. We aim to enable researchers to investigate the extent to which patterns in cultural diversity are shaped by different forces, including shared history, demographics, migration/diffusion, cultural innovations, and environmental and ecological conditions. We detail how D-PLACE helps to overcome four common barriers to understanding these forces: (i) locating relevant cultural data, (ii) linking data from distinct sources using diverse ethnonyms, (iii) variable time and place foci for data, and (iv) spatial and historical dependencies among cultural groups that present challenges for analysis. D-PLACE facilitates the visualisation of relationships among cultural groups and between people and their environments, with results downloadable as tables, on a map, or on a linguistic tree. We also describe how D-PLACE can be used for exploratory, predictive, and evolutionary analyses of cultural diversity by a range of users, from members of the worldwide public interested in contrasting their own cultural practices with those of other societies, to researchers using large-scale computational phylogenetic analyses to study cultural evolution.
In summary, we hope that D-PLACE will enable new lines of investigation into the major drivers of cultural change and global patterns of cultural diversity. PMID:27391016
A framework for cross-observatory volcanological database management
NASA Astrophysics Data System (ADS)
Aliotta, Marco Antonio; Amore, Mauro; Cannavò, Flavio; Cassisi, Carmelo; D'Agostino, Marcello; Dolce, Mario; Mastrolia, Andrea; Mangiagli, Salvatore; Messina, Giuseppe; Montalto, Placido; Fabio Pisciotta, Antonino; Prestifilippo, Michele; Rossi, Massimo; Scarpato, Giovanni; Torrisi, Orazio
2017-04-01
In recent years it has been clearly shown that the multiparametric approach is the winning strategy for investigating the complex dynamics of volcanic systems. This involves the use of different sensor networks, each one dedicated to the acquisition of particular data useful for research and monitoring. The increasing interest devoted to the study of volcanological phenomena has led to the constitution of different research organizations or observatories, sometimes covering the same volcanoes, which acquire large amounts of data from sensor networks for multiparametric monitoring. At INGV we developed a framework, hereinafter called TSDSystem (Time Series Database System), which acquires data streams from several geophysical and geochemical permanent sensor networks (fed by different data sources such as ASCII files, ODBC connections, URLs, etc.) located in the main volcanic areas of Southern Italy, and relates them within a relational database management system. Furthermore, spatial data related to the different datasets are managed using a GIS module for sharing and visualization purposes. This standardization provides the ability to perform operations, such as querying and visualization, on many measures, synchronizing them on a common space and time scale. In order to share data between INGV observatories, and also with the Civil Protection Department, whose activity covers the same volcanic districts, we designed a "Master View" system that, starting from a number of instances of the TSDSystem framework (one for each observatory), makes possible the joint interrogation of data, both temporal and spatial, across instances located in different observatories, through web services technology (RESTful, SOAP). Similarly, it provides metadata for equipment using standard schemas (such as FDSN StationXML).
The "Master View" is also responsible for managing the data policy through a "who owns what" system, which associates viewing and download rights for particular spatial areas or time intervals with specific users or groups.
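As a rough illustration of the kind of merge such a "Master View" must perform, the sketch below combines time-series responses from two observatory instances onto a common time axis. The observatory names, field names, and data shapes are our own assumptions, not TSDSystem's actual API (which the abstract says is exposed via RESTful/SOAP web services).

```python
# Hypothetical sketch: merging per-observatory time series onto one axis.
# Observatory codes and sample tuples below are invented for illustration.

def merge_series(responses):
    """Merge {observatory: [(iso_timestamp, value), ...]} onto a common,
    time-sorted axis, tagging each sample with its source instance."""
    merged = []
    for observatory, samples in responses.items():
        for ts, value in samples:
            merged.append({"t": ts, "value": value, "source": observatory})
    # ISO-8601 timestamps in a uniform format sort correctly as strings.
    merged.sort(key=lambda row: row["t"])
    return merged

responses = {
    "INGV-OE": [("2017-03-01T00:00:00Z", 1.2), ("2017-03-01T00:10:00Z", 1.3)],
    "INGV-OV": [("2017-03-01T00:05:00Z", 0.8)],
}
rows = merge_series(responses)
```

In a real federation the per-observatory lists would arrive as web-service responses; the merge step, however, looks essentially like this regardless of transport.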
Analysis of disease-associated objects at the Rat Genome Database
Wang, Shur-Jen; Laulederkind, Stanley J. F.; Hayman, G. T.; Smith, Jennifer R.; Petri, Victoria; Lowry, Timothy F.; Nigam, Rajni; Dwinell, Melinda R.; Worthey, Elizabeth A.; Munzenmaier, Diane H.; Shimoyama, Mary; Jacob, Howard J.
2013-01-01
The Rat Genome Database (RGD) is the premier resource for genetic, genomic and phenotype data for the laboratory rat, Rattus norvegicus. In addition to organizing biological data from rats, the RGD team focuses on manual curation of gene–disease associations for rat, human and mouse. In this work, we have analyzed disease-associated strains, quantitative trait loci (QTL) and genes from rats. These disease objects form the basis for seven disease portals. Among disease portals, the cardiovascular disease and obesity/metabolic syndrome portals have the highest number of rat strains and QTL. These two portals share 398 rat QTL, and these shared QTL are highly concentrated on rat chromosomes 1 and 2. For disease-associated genes, we performed gene ontology (GO) enrichment analysis across portals using RatMine enrichment widgets. Fifteen GO terms, five from each GO aspect, were selected to profile enrichment patterns of each portal. Of the selected biological process (BP) terms, ‘regulation of programmed cell death’ was the top enriched term across all disease portals except in the obesity/metabolic syndrome portal where ‘lipid metabolic process’ was the most enriched term. ‘Cytosol’ and ‘nucleus’ were common cellular component (CC) annotations for disease genes, but only the cancer portal genes were highly enriched with ‘nucleus’ annotations. Similar enrichment patterns were observed in a parallel analysis using the DAVID functional annotation tool. The relationship between the preselected 15 GO terms and disease terms was examined reciprocally by retrieving rat genes annotated with these preselected terms. The individual GO term–annotated gene list showed enrichment in physiologically related diseases. For example, the ‘regulation of blood pressure’ genes were enriched with cardiovascular disease annotations, and the ‘lipid metabolic process’ genes with obesity annotations. 
Furthermore, we were able to enhance enrichment of neurological diseases by combining ‘G-protein coupled receptor binding’ annotated genes with ‘protein kinase binding’ annotated genes. Database URL: http://rgd.mcw.edu PMID:23794737
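For readers unfamiliar with the enrichment analyses mentioned above, tools such as the RatMine widgets and DAVID rest on an over-representation test of roughly this form, a hypergeometric tail probability; the gene counts below are invented for illustration.

```python
from math import comb

def enrichment_p(N, K, n, k):
    """P(X >= k) when n portal genes are drawn from N annotated genes,
    K of which carry the GO term, and k of the drawn genes do.
    math.comb returns 0 when the lower argument exceeds the upper,
    so out-of-range terms vanish naturally."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

# Illustrative numbers: 20,000 annotated genes, 400 with the term,
# 150 genes in a disease portal, 12 of them carrying the term.
p = enrichment_p(N=20000, K=400, n=150, k=12)
```

A small p here says the portal's genes carry the GO term far more often than a random draw would, which is the sense in which a term is "enriched" in a portal.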
ERIC Educational Resources Information Center
Peckover, Sue; Hall, Christopher; White, Sue
2009-01-01
A central element of the Every Child Matters reforms in England is a set of measures aimed at improving information sharing. Amongst these are the children's database and the Common Assessment Framework, both representing technological solutions to long-standing concerns about information sharing in child welfare. This article reports some findings…
Existing and Emerging Technologies in Education: A Descriptive Overview. CREATE Monograph Series.
ERIC Educational Resources Information Center
Bakke, Thomas W.
Second in a series of six monographs on the use of new technologies in the instruction of learning disabled students, the paper offers a descriptive overview of new technologies. Topics addressed include the following: (1) techniques for sharing computer resources (including aspects of networking, sharing information through databases, and the use…
Information-Sharing Application Standards for Integrated Government Systems
2010-12-01
4. Federated Search and Role-Based Data Access ... G. LESSONS FROM HSIN ... One of the original purposes of HSIN was to facilitate information sharing ... A recent search paradigm, federated search, allows separate systems to feed external data requests without the need for a huge centralized database
Toward a Tiered Model to Share Clinical Trial Data and Samples in Precision Oncology.
Broes, Stefanie; Lacombe, Denis; Verlinden, Michiel; Huys, Isabelle
2018-01-01
The recent revolution in science and technology applied to medical research has left in its wake a trail of biomedical data and human samples; however, its opportunities remain largely unfulfilled due to a number of legal, ethical, financial, strategic, and technical barriers. Precision oncology has been at the vanguard of efforts to leverage this potential of "Big data" and samples into meaningful solutions for patients, considering the need for new drug development approaches in this area (due to high costs, late-stage failures, and the molecular diversity of cancer). To harness the potential of the vast quantities of data and samples currently fragmented across databases and biobanks, it is critical to engage all stakeholders and share data and samples across research institutes. Here, we identified two general types of sharing strategies: first, open access models, characterized by the absence of any review panel or decision maker, and second, controlled access models, where some form of control is exercised by either the donor (i.e., patient), the data provider (i.e., initial organization), or an independent party. Further, we theoretically describe and provide examples of nine different strategies focused on greater sharing of patient data and material. These models provide varying levels of control, access to various data and/or samples, and different types of relationship between the donor, data provider, and data requester. We propose a tiered model to share clinical data and samples that takes into account privacy issues and respects sponsors' legitimate interests. Its implementation would help maximize the value of existing datasets, enabling researchers to unravel the complexity of tumor biology, identify novel biomarkers, and better redirect treatment strategies, ultimately helping patients with cancer.
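The two general strategy types can be caricatured in a few lines of code; the class and function names below are ours, not the authors', and the gatekeeping logic is deliberately minimal.

```python
from enum import Enum

class AccessModel(Enum):
    OPEN = "open"              # no review panel or decision maker
    CONTROLLED = "controlled"  # donor, data provider, or independent party decides

def grant_access(model, decision_maker_approved=False):
    """Open access needs no gatekeeper; controlled access defers to whoever
    holds the control (patient, initial organization, or independent party)."""
    return model is AccessModel.OPEN or decision_maker_approved
```

The nine concrete strategies the paper describes would differ in who plays the decision maker and over which data or samples, but they all reduce to placing (or omitting) this review step.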
The Global Earthquake Model - Past, Present, Future
NASA Astrophysics Data System (ADS)
Smolka, Anselm; Schneider, John; Stein, Ross
2014-05-01
The Global Earthquake Model (GEM) is a unique collaborative effort that aims to provide organizations and individuals with tools and resources for transparent assessment of earthquake risk anywhere in the world. By pooling data, knowledge and people, GEM acts as an international forum for collaboration and exchange. Sharing of data and risk information, best practices, and approaches across the globe are key to assessing risk more effectively. Through consortium driven global projects, open-source IT development and collaborations with more than 10 regions, leading experts are developing unique global datasets, best practice, open tools and models for seismic hazard and risk assessment. The year 2013 has seen the completion of ten global data sets or components addressing various aspects of earthquake hazard and risk, as well as two GEM-related, but independently managed regional projects SHARE and EMME. Notably, the International Seismological Centre (ISC) led the development of a new ISC-GEM global instrumental earthquake catalogue, which was made publicly available in early 2013. It has set a new standard for global earthquake catalogues and has found widespread acceptance and application in the global earthquake community. By the end of 2014, GEM's OpenQuake computational platform will provide the OpenQuake hazard/risk assessment software and integrate all GEM data and information products. 
The public release of OpenQuake is planned for the end of 2014, and will comprise the following datasets and models: • ISC-GEM Instrumental Earthquake Catalogue (released January 2013) • Global Earthquake History Catalogue [1000-1903] • Global Geodetic Strain Rate Database and Model • Global Active Fault Database • Tectonic Regionalisation Model • Global Exposure Database • Buildings and Population Database • Earthquake Consequences Database • Physical Vulnerabilities Database • Socio-Economic Vulnerability and Resilience Indicators • Seismic Source Models • Ground Motion (Attenuation) Models • Physical Exposure Models • Physical Vulnerability Models • Composite Index Models (social vulnerability, resilience, indirect loss) • Repository of national hazard models • Uniform global hazard model. Armed with these tools and databases, stakeholders worldwide will then be able to calculate, visualise and investigate earthquake risk, capture new data, and share their findings for joint learning. Earthquake hazard information will be able to be combined with data on exposure (buildings, population) and data on their vulnerability, for risk assessment around the globe. Furthermore, for a truly integrated view of seismic risk, users will be able to add social vulnerability and resilience indices and estimate the costs and benefits of different risk management measures. Having finished its first five-year Work Program at the end of 2013, GEM has entered its second five-year Work Program 2014-2018. Beyond maintaining and enhancing the products developed in Work Program 1, the second phase will have a stronger focus on regional hazard and risk activities, and on seeing GEM products used for risk assessment and risk management practice at regional, national and local scales.
Furthermore, GEM intends to partner with similar initiatives underway for other natural perils, which together are needed to provide the advanced risk assessment methods, tools and data that underpin global disaster risk reduction efforts under the Hyogo Framework for Action #2, to be launched in Sendai, Japan, in spring 2015.
Large scale database scrubbing using object oriented software components.
Herting, R L; Barnes, M R
1998-01-01
Now that case managers, quality improvement teams, and researchers use medical databases extensively, the ability to share and disseminate such databases while maintaining patient confidentiality is paramount. A process called scrubbing addresses this problem by removing personally identifying information while keeping the integrity of the medical information intact. Scrubbing entire databases, containing multiple tables, requires that the implicit relationships between data elements in different tables of the database be maintained. To address this issue we developed DBScrub, a Java program that interfaces with any JDBC compliant database and scrubs the database while maintaining the implicit relationships within it. DBScrub uses a small number of highly configurable object-oriented software components to carry out the scrubbing. We describe the structure of these software components and how they maintain the implicit relationships within the database.
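A minimal sketch of the idea behind DBScrub, under our own naming (the paper's actual components are Java/JDBC objects; this is not their API): replacing identifiers with deterministic pseudonyms keeps the implicit relationships between tables intact, so joins still line up after scrubbing.

```python
import hashlib
import sqlite3

def pseudonym(value, salt="demo-salt"):
    # Deterministic: the same identifier maps to the same token in every
    # table, preserving implicit cross-table relationships.
    return "P" + hashlib.sha256((salt + str(value)).encode()).hexdigest()[:8]

def scrub(conn, columns):
    """columns: (table, column) pairs that hold the same identifying value."""
    for table, column in columns:
        for (val,) in conn.execute(
                f"SELECT DISTINCT {column} FROM {table}").fetchall():
            conn.execute(f"UPDATE {table} SET {column} = ? WHERE {column} = ?",
                         (pseudonym(val), val))
    conn.commit()

# Toy two-table database linked by a medical record number (mrn).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (mrn TEXT, dx TEXT)")
conn.execute("CREATE TABLE visits (mrn TEXT, visit_date TEXT)")
conn.execute("INSERT INTO patients VALUES ('12345', 'asthma')")
conn.execute("INSERT INTO visits VALUES ('12345', '1998-02-01')")
scrub(conn, [("patients", "mrn"), ("visits", "mrn")])
joined = conn.execute(
    "SELECT COUNT(*) FROM patients p JOIN visits v ON p.mrn = v.mrn"
).fetchone()[0]
```

After scrubbing, the original identifier appears nowhere, yet the patients-to-visits join still returns the same rows, which is exactly the "implicit relationships maintained" property the abstract emphasizes.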
12 CFR 701.35 - Share, share draft, and share certificate accounts.
Code of Federal Regulations, 2012 CFR
2012-01-01
... AFFECTING CREDIT UNIONS ORGANIZATION AND OPERATION OF FEDERAL CREDIT UNIONS § 701.35 Share, share draft, and share certificate accounts. (a) Federal credit unions may offer share, share draft, and share...) A Federal credit union shall accurately represent the terms and conditions of its share, share draft...
12 CFR 701.35 - Share, share draft, and share certificate accounts.
Code of Federal Regulations, 2014 CFR
2014-01-01
... AFFECTING CREDIT UNIONS ORGANIZATION AND OPERATION OF FEDERAL CREDIT UNIONS § 701.35 Share, share draft, and share certificate accounts. (a) Federal credit unions may offer share, share draft, and share...) A Federal credit union shall accurately represent the terms and conditions of its share, share draft...
12 CFR 701.35 - Share, share draft, and share certificate accounts.
Code of Federal Regulations, 2013 CFR
2013-01-01
... AFFECTING CREDIT UNIONS ORGANIZATION AND OPERATION OF FEDERAL CREDIT UNIONS § 701.35 Share, share draft, and share certificate accounts. (a) Federal credit unions may offer share, share draft, and share...) A Federal credit union shall accurately represent the terms and conditions of its share, share draft...
ERIC Educational Resources Information Center
Lynch, Clifford A.
1997-01-01
Union catalogs and distributed search systems are two ways users can locate materials in print and electronic formats. This article examines the advantages and limitations of both approaches and argues that they should be considered complementary rather than competitive. Discusses technologies creating linkage between catalogs and databases and…
Improving the Scalability of an Exact Approach for Frequent Item Set Hiding
ERIC Educational Resources Information Center
LaMacchia, Carolyn
2013-01-01
Technological advances have led to the generation of large databases of organizational data recognized as an information-rich, strategic asset for internal analysis and sharing with trading partners. Data mining techniques can discover patterns in large databases including relationships considered strategically relevant to the owner of the data.…
Building a Faculty Publications Database: A Case Study
ERIC Educational Resources Information Center
Tabaei, Sara; Schaffer, Yitzchak; McMurray, Gregory; Simon, Bashe
2013-01-01
This case study shares the experience of building an in-house faculty publications database that was spearheaded by the Touro College and University System library in 2010. The project began with the intention of contributing to the college by collecting the research accomplishments of our faculty and staff, thereby also increasing library…
One for All: Maintaining a Single Schedule Database for Large Development Projects
NASA Technical Reports Server (NTRS)
Hilscher, R.; Howerton, G.
1999-01-01
Efficiently maintaining and controlling a single schedule database in an Integrated Product Team environment is a significant challenge. It's accomplished effectively with the right combination of tools, skills, strategy, creativity, and teamwork. We'll share our lessons learned maintaining a 20,000 plus task network on a 36 month project.
Concierge: Personal Database Software for Managing Digital Research Resources
Sakai, Hiroyuki; Aoyama, Toshihiro; Yamaji, Kazutsuna; Usui, Shiro
2007-01-01
This article introduces a desktop application, named Concierge, for managing personal digital research resources. Using simple operations, it enables storage of various types of files and indexes them based on content descriptions. A key feature of the software is a high level of extensibility. By installing optional plug-ins, users can customize and extend the usability of the software based on their needs. In this paper, we also introduce a few optional plug-ins: literature management, electronic laboratory notebook, and XooNIps client plug-ins. XooNIps is a content management system developed to share digital research resources among neuroscience communities. It has been adopted as the standard database system in Japanese neuroinformatics projects. Concierge, therefore, offers comprehensive support from management of personal digital research resources to their sharing in open-access neuroinformatics databases such as XooNIps. This interaction between personal and open-access neuroinformatics databases is expected to enhance the dissemination of digital research resources. Concierge is developed as an open source project; Mac OS X and Windows XP versions have been released at the official site (http://concierge.sourceforge.jp). PMID:18974800
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-31
... Organizations; NYSE Arca, Inc.; Notice of Designation of Longer Period for Commission Action on Proceedings To Determine Whether To Approve or Disapprove a Proposed Rule Change To List and Trade Shares of the iShares..., a proposed rule change to list and trade shares of the iShares Copper Trust ("Trust") pursuant...
Data Management Rubric for Video Data in Organismal Biology.
Brainerd, Elizabeth L; Blob, Richard W; Hedrick, Tyson L; Creamer, Andrew T; Müller, Ulrike K
2017-07-01
Standards-based data management facilitates data preservation, discoverability, and access for effective data reuse within research groups and across communities of researchers. Data sharing requires community consensus on standards for data management, such as storage and formats for digital data preservation, metadata (i.e., contextual data about the data) that should be recorded and stored, and data access. Video imaging is a valuable tool for measuring time-varying phenotypes in organismal biology, with particular application for research in functional morphology, comparative biomechanics, and animal behavior. The raw data are the videos, but videos alone are not sufficient for scientific analysis. Nearly endless videos of animals can be found on YouTube and elsewhere on the web, but these videos have little value for scientific analysis because essential metadata such as true frame rate, spatial calibration, genus and species, weight, age, etc. of organisms, are generally unknown. We have embarked on a project to build community consensus on video data management and metadata standards for organismal biology research. We collected input from colleagues at early stages, organized an open workshop, "Establishing Standards for Video Data Management," at the Society for Integrative and Comparative Biology meeting in January 2017, and then collected two more rounds of input on revised versions of the standards. The result we present here is a rubric consisting of nine standards for video data management, with three levels within each standard: good, better, and best practices. The nine standards are: (1) data storage; (2) video file formats; (3) metadata linkage; (4) video data and metadata access; (5) contact information and acceptable use; (6) camera settings; (7) organism(s); (8) recording conditions; and (9) subject matter/topic. 
The first four standards address data preservation and interoperability for sharing, whereas standards 5-9 establish minimum metadata standards for organismal biology video, and suggest additional metadata that may be useful for some studies. This rubric was developed with substantial input from researchers and students, but still should be viewed as a living document that should be further refined and updated as technology and research practices change. The audience for these standards includes researchers, journals, and granting agencies, and also the developers and curators of databases that may contribute to video data sharing efforts. We offer this project as an example of building community consensus for data management, preservation, and sharing standards, which may be useful for future efforts by the organismal biology research community. © The Author 2017. Published by Oxford University Press on behalf of the Society for Integrative and Comparative Biology.
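To make the rubric concrete, here is a hypothetical machine-readable sidecar touching all nine standards. The field names and values are illustrative assumptions on our part; the rubric prescribes what to record, not a file format.

```python
import json

# One illustrative metadata sidecar per video clip, keyed by the rubric's
# nine standards. All values are invented examples, not rubric requirements.
sidecar = {
    "data_storage": "institutional repository, two geographically separate copies",
    "video_file_format": "MP4 (H.264)",
    "metadata_linkage": "sidecar JSON stored alongside the video file",
    "access": "open after publication",
    "contact_and_acceptable_use": "lab PI; reuse permitted with attribution",
    "camera_settings": {"frame_rate_hz": 500.0, "calibration_mm_per_px": 0.12},
    "organism": {"genus": "Rattus", "species": "norvegicus", "mass_g": 310},
    "recording_conditions": {"temperature_c": 22, "setup": "level trackway"},
    "subject_matter": "locomotion, trot, steady speed",
}

text = json.dumps(sidecar, indent=2)  # what would be written next to the clip
```

Recording even this much alongside each clip addresses the abstract's complaint about web video: without true frame rate, spatial calibration, and organism details, footage has little scientific value.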
12 CFR 701.35 - Share, share draft, and share certificate accounts.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 12 Banks and Banking 6 2010-01-01 2010-01-01 false Share, share draft, and share certificate... AFFECTING CREDIT UNIONS ORGANIZATION AND OPERATION OF FEDERAL CREDIT UNIONS § 701.35 Share, share draft, and share certificate accounts. (a) Federal credit unions may offer share, share draft, and share...
12 CFR 701.35 - Share, share draft, and share certificate accounts.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 12 Banks and Banking 6 2011-01-01 2011-01-01 false Share, share draft, and share certificate... AFFECTING CREDIT UNIONS ORGANIZATION AND OPERATION OF FEDERAL CREDIT UNIONS § 701.35 Share, share draft, and share certificate accounts. (a) Federal credit unions may offer share, share draft, and share...
NASA Astrophysics Data System (ADS)
Cardille, J. A.; Gonzales, R.; Parrott, L.; Bai, J.
2009-12-01
How should researchers store and share data? For most of history, scientists with results and data to share have been mostly limited to books and journal articles. In recent decades, the advent of personal computers and shared data formats has made it feasible, though often cumbersome, to transfer data between individuals or among small groups. Meanwhile, the use of automatic samplers, simulation models, and other data-production techniques has increased greatly. The result is that there is more and more data to store, and a greater expectation that they will be available at the click of a button. In 10 or 20 years, will we still send emails to each other to learn about what data exist? The development of virtual globes like Google Earth and NASA WorldWind, and widespread familiarity with them, has created the potential, in just the last few years, to revolutionize the way we share data, search for and search through data, and understand the relationship between individual projects in research networks, where sharing and dissemination of knowledge is encouraged. For the last two years, we have been building the GeoSearch application, a cutting-edge online resource for the storage, sharing, search, and retrieval of data produced by research networks. Linking NASA's WorldWind globe platform, the data browsing toolkit prefuse, and SQL databases, GeoSearch's version 1.0 enables flexible searches and novel geovisualizations of large amounts of related scientific data. These data may be submitted to the database by individual researchers and processed by GeoSearch's data parser. Ultimately, data from research groups gathered in a research network would be shared among users via the platform. Access is not limited to the scientists themselves; administrators can determine which data can be presented publicly and which require group membership.
Under the auspices of Canada's Sustainable Forestry Management Network of Excellence, we have created a moderate-sized database of ecological measurements in forests; we expect to extend the approach to a Quebec lake research network encompassing decades of lake measurements. In this session, we will describe and present four related components of the new system: GeoSearch's globe-based searching and display of scientific data; prefuse-based visualization of social connections among members of a scientific research network; geolocation of research projects using Google Spreadsheets, KML, and Google Earth/Maps; and collaborative construction of a geolocated database of research articles. Each component is designed to have applications for scientists themselves as well as the general public. Although each implementation is in its infancy, we believe they could be useful to other researcher networks.
Scale and structure of capitated physician organizations in California.
Rosenthal, M B; Frank, R G; Buchanan, J L; Epstein, A M
2001-01-01
Physician organizations in California broke new ground in the 1980s by accepting capitated contracts and taking on utilization management functions. In this paper we present new data that document the scale, structure, and vertical affiliations of physician organizations that accept capitation in California. We provide information on capitated enrollment, the share of revenue derived by physician organizations from capitation contracts, and the scope of risk sharing with health maintenance organizations (HMOs). Capitation contracts and risk sharing dominate payment arrangements with HMOs. Physician organizations appear to have responded to capitation by affiliating with hospitals and management companies, adopting hybrid organizational structures, and consolidating into larger entities.
On leadership organizational intelligence/organizational stupidity: the leader's challenge.
Kerfoot, Karlene
2003-01-01
Creating organizations with a high IQ, or creating organizations without the necessary intelligence, largely determines the success or failure of the organization. Without structures such as shared leadership and other forms of participative management, the organization or unit cannot access and use the available information and wisdom in the organization. When nurses and other health care professionals do not feel that they have a shared stake and do not feel like citizens of the organization, they lack passion for the organization's work. When nurses feel a sense of shared ownership of and autonomy over clinical practice, terrific outcomes are achieved. Leaders must accept the challenge to build the infrastructure that leads to excellence in organizational IQ.
Relational databases for rare disease study: application to vascular anomalies.
Perkins, Jonathan A; Coltrera, Marc D
2008-01-01
To design a relational database integrating the clinical and basic science data needed for multidisciplinary treatment and research in the field of vascular anomalies, based on data points agreed on by the American Society of Pediatric Otolaryngology (ASPO) Vascular Anomalies Task Force. The database design enables sharing of data subsets in a Health Insurance Portability and Accountability Act (HIPAA)-compliant manner for multisite collaborative trials. Vascular anomalies pose diagnostic and therapeutic challenges. Our understanding of these lesions, and improvement of their treatment, is limited by nonstandard terminology, severity assessment, and measures of treatment efficacy. The rarity of these lesions places a premium on coordinated studies among multiple participant sites. The relational database design is conceptually centered on subjects having 1 or more lesions. Each anomaly can be tracked individually along with its treatment outcomes. This design allows for differentiation between treatment responses and untreated lesions' natural course. The relational database design eliminates data entry redundancy and results in extremely flexible search and data export functionality. Vascular anomaly programs in the United States. A relational database correlating clinical findings and photographic, radiologic, histologic, and treatment data for vascular anomalies was created for stand-alone and multiuser networked systems. Proof of concept for independent-site data gathering and HIPAA-compliant sharing of data subsets was demonstrated. The collaborative effort by the ASPO Vascular Anomalies Task Force to create the database helped define a common vascular anomaly data set. The resulting relational database software is a powerful tool to further the study of vascular anomalies and the development of evidence-based treatment innovation.
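The subject-lesion-treatment design described above can be sketched minimally with SQLite: one subject row links to many lesion rows, each lesion to zero or more treatment rows, so untreated lesions' natural course stays distinguishable from treated ones. The table and column names below are illustrative assumptions, not the ASPO data set.

```python
import sqlite3

# Illustrative subject -> lesion -> treatment schema (assumed names,
# not the ASPO schema).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE subject (
    subject_id INTEGER PRIMARY KEY,
    study_code TEXT UNIQUE            -- de-identified code for HIPAA-safe sharing
);
CREATE TABLE lesion (
    lesion_id INTEGER PRIMARY KEY,
    subject_id INTEGER REFERENCES subject(subject_id),
    site TEXT,
    diagnosis TEXT
);
CREATE TABLE treatment (
    treatment_id INTEGER PRIMARY KEY,
    lesion_id INTEGER REFERENCES lesion(lesion_id),
    modality TEXT,
    outcome TEXT
);
""")
conn.execute("INSERT INTO subject VALUES (1, 'S-001')")
conn.execute("INSERT INTO lesion VALUES (1, 1, 'cheek', 'lymphatic malformation')")
conn.execute("INSERT INTO lesion VALUES (2, 1, 'neck', 'venous malformation')")
conn.execute("INSERT INTO treatment VALUES (1, 1, 'sclerotherapy', 'partial response')")

# Lesion 2 has no treatment row, so a join cleanly separates
# untreated lesions from treated ones.
untreated = conn.execute("""
    SELECT l.lesion_id FROM lesion l
    LEFT JOIN treatment t ON t.lesion_id = l.lesion_id
    WHERE t.treatment_id IS NULL
""").fetchall()
print(untreated)  # -> [(2,)]
```

Because each fact lives in exactly one table, there is no data entry redundancy, and new queries (e.g., outcomes by modality) need no schema changes.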
An international aerospace information system - A cooperative opportunity
NASA Technical Reports Server (NTRS)
Blados, Walter R.; Cotter, Gladys A.
1992-01-01
This paper presents for consideration new possibilities for uniting the various aerospace database efforts toward a cooperative international aerospace database initiative that can optimize the cost-benefit equation for all members. The development of astronautics and aeronautics in individual nations has led to initiatives for national aerospace databases. Technological developments in information technology and science, as well as the reality of scarce resources, make it necessary to reconsider the mutually beneficial possibilities offered by cooperation and international resource sharing.
Data Sharing For Precision Medicine: Policy Lessons And Future Directions.
Blasimme, Alessandro; Fadda, Marta; Schneider, Manuel; Vayena, Effy
2018-05-01
Data sharing is a precondition of precision medicine. Numerous organizations have produced abundant guidance on data sharing. Despite such efforts, data are not being shared to a degree that can trigger the expected data-driven revolution in precision medicine. We set out to explore why. Here we report the results of a comprehensive analysis of data-sharing guidelines issued over the past two decades by multiple organizations. We found that the guidelines overlap on a restricted set of policy themes. However, we observed substantial fragmentation in the policy landscape across specific organizations and data types. This may have contributed to the current stalemate in data sharing. To move toward a more efficient data-sharing ecosystem for precision medicine, policy makers should explore innovative ways to cope with central policy themes such as privacy, consent, and data quality; focus guidance on interoperability, attribution, and public engagement; and promote data-sharing policies that can be adapted to multiple data types.
Community Organizing for Database Trial Buy-In by Patrons
ERIC Educational Resources Information Center
Pionke, J. J.
2015-01-01
Database trials do not often garner a lot of feedback. Using community-organizing techniques can not only potentially increase the amount of feedback received but also deepen the relationship between the librarian and his or her constituent group. This is a case study of the use of community-organizing techniques in a series of database trials for…
ERIC Educational Resources Information Center
Griffiths, Jose-Marie; And Others
This document contains validated activities and competencies needed by librarians working in a database distributor/service organization. The activities of professionals working in database distributor/service organizations are listed by function: Database Processing; Customer Support; System Administration; and Planning. The competencies are…
The purpose of this SOP is to describe the database storage organization, as well as the sources of data for each database used during the Arizona NHEXAS project and the "Border" study. Keywords: data; database; organization.
The National Human Exposure Assessment Sur...
Beta Coefficient and Market Share: Downloading and Processing Data from DIALOG to LOTUS 1-2-3.
ERIC Educational Resources Information Center
Popovich, Charles J.
This article briefly describes the topics "beta coefficient"--a measurement of the price volatility of a company's stock in relationship to the overall stock market--and "market share"--an average measurement for the overall stock market based on a specified group of stocks. It then selectively recommends a database (file) on…
Callender, C O; Koizumi, N; Miles, P V; Melancon, J K
2016-09-01
The purpose was to review the increase in minority organ donation. The methodology was based on the efforts of the DC Organ Donor Program and the Dow Take Initiative Program, which focused on increasing donors among African Americans (AAs). From 1982 to 1988, AA donor card signings increased from 20/month to 750/month, and Black donations doubled. A review of the data, including face-to-face grassroots presentations combined with national media, was conducted. Gallup polls in 1985 and 1990 indicated a tripling of Black awareness of transplantation and of the number of Blacks signing donor cards. Based on these successful methodologies, the National Minority Organ Tissue Transplant Education Program was established in 1991, targeting AA, Hispanic, Asian, and other ethnic groups. A review of the United Network for Organ Sharing (UNOS) database from 1990 to 2010 was performed. Nationally, ethnic minority organ donors per million (ODM) increased from 8-10 ODM (1982) to 35 ODM (AA and Latino/Hispanic) in 2002. In 1995, ODMs were White 34.2, Black 33.1, Hispanic 31.5, and Asian 17.9. In 2010, Black organ donors per million totaled 35.36 versus White 27.07, Hispanic 25.59, and Asian 14.70. Based on the data retrieved from UNOS in 2010, Blacks ranked above Whites and other ethnic minority populations as the number one ethnic group of organ donors per million in the US. Copyright © 2016 Elsevier Inc. All rights reserved.
42 CFR 480.143 - QIO involvement in shared health data systems.
Code of Federal Regulations, 2013 CFR
2013-10-01
... HUMAN SERVICES (CONTINUED) QUALITY IMPROVEMENT ORGANIZATIONS ACQUISITION, PROTECTION, AND DISCLOSURE OF QUALITY IMPROVEMENT ORGANIZATION INFORMATION Utilization and Quality Control Quality Improvement Organizations (QIOs) Disclosure of Confidential Information § 480.143 QIO involvement in shared health data...
42 CFR 480.143 - QIO involvement in shared health data systems.
Code of Federal Regulations, 2012 CFR
2012-10-01
... HUMAN SERVICES (CONTINUED) QUALITY IMPROVEMENT ORGANIZATIONS ACQUISITION, PROTECTION, AND DISCLOSURE OF QUALITY IMPROVEMENT ORGANIZATION INFORMATION Utilization and Quality Control Quality Improvement Organizations (QIOs) Disclosure of Confidential Information § 480.143 QIO involvement in shared health data...
42 CFR 480.143 - QIO involvement in shared health data systems.
Code of Federal Regulations, 2014 CFR
2014-10-01
... HUMAN SERVICES (CONTINUED) QUALITY IMPROVEMENT ORGANIZATIONS ACQUISITION, PROTECTION, AND DISCLOSURE OF QUALITY IMPROVEMENT ORGANIZATION INFORMATION Utilization and Quality Control Quality Improvement Organizations (QIOs) Disclosure of Confidential Information § 480.143 QIO involvement in shared health data...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-30
... Form (OMB Form No. 0917-0034). Need and Use of Information Collection: The IHS goal is to raise the... Prevention (HP/DP), Nursing, and Dental) have developed a centralized program database of Best/Promising Practices and Local Efforts (BPPPLE) and resources. The purpose of this collection is to develop a database...
AtlasT4SS: a curated database for type IV secretion systems.
Souza, Rangel C; del Rosario Quispe Saji, Guadalupe; Costa, Maiana O C; Netto, Diogo S; Lima, Nicholas C B; Klein, Cecília C; Vasconcelos, Ana Tereza R; Nicolás, Marisa F
2012-08-09
The type IV secretion system (T4SS) is a large family of macromolecule transporter systems, divided into three recognized sub-families according to their functions. The major sub-family is the conjugation system, which allows transfer of genetic material, as a nucleoprotein, via cell contact among bacteria. The conjugation system can also transfer genetic material from bacteria to eukaryotic cells, as in the T-DNA transfer of Agrobacterium tumefaciens to host plant cells. The effector protein transport system constitutes the second sub-family, and the third corresponds to the DNA uptake/release system. Genome analyses have revealed numerous T4SSs in Bacteria and Archaea. The purpose of this work was to organize, classify, and integrate the T4SS data into a single database, called AtlasT4SS - the first public database devoted exclusively to this prokaryotic secretion system. AtlasT4SS is a manually curated database that describes a large number of proteins related to the type IV secretion system reported so far in Gram-negative and Gram-positive bacteria, as well as in Archaea. The database was created using the RDBMS MySQL and the Catalyst web framework, which is based on the Perl programming language and the Model-View-Controller (MVC) design pattern. The current version holds a comprehensive collection of 1,617 T4SS proteins from 58 Bacteria (49 Gram-negative and 9 Gram-positive), one archaeon, and 11 plasmids. By applying the bi-directional best hit (BBH) relationship in pairwise genome comparison, it was possible to obtain a core set of 134 clusters of orthologous genes encoding T4SS proteins. In our database we present one way of classifying orthologous groups of T4SSs in a hierarchical classification scheme with three levels.
The first level comprises four classes that are based on the organization of genetic determinants, shared homologies, and evolutionary relationships: (i) F-T4SS, (ii) P-T4SS, (iii) I-T4SS, and (iv) GI-T4SS. The second level designates either a specific well-known protein family or an otherwise uncharacterized protein family. Finally, in the third level, each protein of an ortholog cluster is classified according to its involvement in a specific cellular process. The AtlasT4SS database is open access and is available at http://www.t4ss.lncc.br.
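The bi-directional best hit (BBH) criterion used above for building ortholog clusters can be sketched in a few lines: two genes are called orthologs when each is the other's best hit in a pairwise genome comparison. The gene names and best-hit tables below are toy data, not AtlasT4SS output.

```python
# Toy best-hit tables from a pairwise genome comparison
# (invented gene names, not AtlasT4SS data): gene -> its best
# hit in the other genome.
best_in_b = {"virB4_A": "virB4_B", "virD4_A": "traG_B"}
best_in_a = {"virB4_B": "virB4_A", "traG_B": "virB2_A"}

def bidirectional_best_hits(best_in_b, best_in_a):
    """Return pairs that are each other's best hit (BBH orthologs)."""
    return [(a, b) for a, b in best_in_b.items() if best_in_a.get(b) == a]

# virD4_A's best hit traG_B points back to a different gene,
# so only the reciprocal pair survives.
print(bidirectional_best_hits(best_in_b, best_in_a))  # -> [('virB4_A', 'virB4_B')]
```

Running this pairwise over all genomes and chaining the reciprocal pairs yields ortholog clusters of the kind the database reports (134 core clusters).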
A community effort to protect genomic data sharing, collaboration and outsourcing.
Wang, Shuang; Jiang, Xiaoqian; Tang, Haixu; Wang, Xiaofeng; Bu, Diyue; Carey, Knox; Dyke, Stephanie Om; Fox, Dov; Jiang, Chao; Lauter, Kristin; Malin, Bradley; Sofia, Heidi; Telenti, Amalio; Wang, Lei; Wang, Wenhao; Ohno-Machado, Lucila
2017-01-01
The human genome can reveal sensitive information and is potentially re-identifiable, which raises privacy and security concerns about sharing such data on wide scales. In 2016, we organized the third Critical Assessment of Data Privacy and Protection competition as a community effort to bring together biomedical informaticists, computer privacy and security researchers, and scholars in ethical, legal, and social implications (ELSI) to assess the latest advances in privacy-preserving techniques for protecting human genomic data. Teams were asked to develop novel protection methods for emerging genome privacy challenges in three scenarios: Track 1, data sharing through the Beacon service of the Global Alliance for Genomics and Health; Track 2, collaborative discovery of similar genomes between two institutions; and Track 3, data outsourcing to public cloud services. The latter two tracks represent continuing themes from our 2015 competition, while the former was new and a response to a recently established vulnerability. The winning strategy for Track 1 mitigated the privacy risk by hiding approximately 11% of the variation in the database while permitting around 160,000 queries, a significant improvement over the baseline. The winning strategies in Tracks 2 and 3 showed significant progress over the previous competition by achieving multiple orders of magnitude performance improvement in terms of computational runtime and memory requirements. The outcomes suggest that applying highly optimized privacy-preserving and secure computation techniques to safeguard genomic data sharing and analysis is useful. However, the results also indicate that further efforts are needed to refine these techniques into practical solutions.
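The Track 1 trade-off can be made concrete with a schematic sketch: a Beacon-style service answers only "is this variant present?", and withholding a fraction of variants (the winning strategy hid roughly 11%) reduces re-identification risk at the cost of some query utility. The random masking and variant strings below are stand-ins for illustration, not the winning method.

```python
import random

# Schematic Beacon-style service over invented variant identifiers.
# Hiding ~11% of variants is inspired by the Track 1 figure above;
# uniform random selection is a stand-in, not the actual strategy.
database = {f"chr1:{pos}:A>G" for pos in range(1000, 1100)}
rng = random.Random(0)
hidden = set(rng.sample(sorted(database), k=int(0.11 * len(database))))

def beacon_query(variant):
    """Answer presence only for unmasked variants; hidden ones look absent."""
    return variant in database and variant not in hidden

visible = sum(beacon_query(v) for v in database)
print(visible)  # -> 89 of the 100 variants remain queryable
```

Any real deployment would choose *which* variants to hide based on their re-identification power, which is exactly what distinguished the competing strategies.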
ERIC Educational Resources Information Center
Yang, Tung-Mou
2011-01-01
Information sharing and integration has long been considered an important approach for increasing organizational efficiency and performance. With advancements in information and communication technologies, sharing and integrating information across organizations becomes more attractive and practical to organizations. However, achieving…
PubChem BioAssay: A Decade's Development toward Open High-Throughput Screening Data Sharing.
Wang, Yanli; Cheng, Tiejun; Bryant, Stephen H
2017-07-01
High-throughput screening (HTS) is now routinely conducted for drug discovery by both pharmaceutical companies and screening centers at academic institutions and universities. Rapid advances in assay development, robot automation, and computer technology have led to the generation of terabytes of data in screening laboratories. Despite this technological progress in HTS productivity, less effort was devoted to HTS data integration and sharing. As a result, the huge amount of HTS data was rarely made available to the public. To fill this gap, the PubChem BioAssay database ( https://www.ncbi.nlm.nih.gov/pcassay/ ) was set up in 2004 to provide open access to screening results for chemicals and RNAi reagents. With more than 10 years of development and contributions from the community, PubChem has now become the largest public repository for chemical structures and biological data, providing an information platform that supports worldwide researchers in drug development, medicinal chemistry, and chemical biology research. This work presents a review of the HTS data content in the PubChem BioAssay database and the progress of data deposition to stimulate knowledge discovery and data sharing. It also describes the database's data standard and the basic utilities facilitating information access and use for new users.
42 CFR 480.143 - QIO involvement in shared health data systems.
Code of Federal Regulations, 2011 CFR
2011-10-01
... HUMAN SERVICES (CONTINUED) QUALITY IMPROVEMENT ORGANIZATIONS ACQUISITION, PROTECTION, AND DISCLOSURE OF QUALITY IMPROVEMENT ORGANIZATION REVIEW INFORMATION Utilization and Quality Control Quality Improvement Organizations (QIOs) Disclosure of Confidential Information § 480.143 QIO involvement in shared health data...
42 CFR 480.143 - QIO involvement in shared health data systems.
Code of Federal Regulations, 2010 CFR
2010-10-01
... HUMAN SERVICES (CONTINUED) QUALITY IMPROVEMENT ORGANIZATIONS ACQUISITION, PROTECTION, AND DISCLOSURE OF QUALITY IMPROVEMENT ORGANIZATION REVIEW INFORMATION Utilization and Quality Control Quality Improvement Organizations (QIOs) Disclosure of Confidential Information § 480.143 QIO involvement in shared health data...
Coping with Prescription Drug Cost Sharing: Knowledge, Adherence, and Financial Burden
Reed, Mary; Brand, Richard; Newhouse, Joseph P; Selby, Joe V; Hsu, John
2008-01-01
Objective: Assess patient knowledge of and response to drug cost sharing. Study Setting: Adult members of a large prepaid, integrated delivery system. Study Design/Data Collection: Telephone interviews with 932 participants (72 percent response rate) who reported knowledge of the structures and amounts of their prescription drug cost sharing. Participants reported cost-related changes in their drug adherence, any financial burden, and other cost-coping behaviors. Actual cost sharing amounts came from administrative databases. Principal Findings: Overall, 27 percent of patients knew all of their drug cost sharing structures and amounts. After adjustment for individual characteristics, additional patient cost sharing structures (tiers and caps) and higher copayment amounts were associated with reporting decreased adherence, financial burden, or other cost-coping behaviors. Conclusions: Patient knowledge of drug benefits is limited, especially for more complex cost sharing structures. Patients also report a range of responses to greater cost sharing, including decreasing adherence. PMID:18370979
2001-10-25
within one of the programmes sponsored by the European Commission. The system mainly consists of a shared care database in which each community facility, or group of facilities, is supported by a local area network (LAN). Each of these LANs is connected over...functions. The software is layered, so that the client application is not affected by how the servers are implemented or which database system they use
Data sharing system for lithography APC
NASA Astrophysics Data System (ADS)
Kawamura, Eiichi; Teranishi, Yoshiharu; Shimabara, Masanori
2007-03-01
We have developed a simple and cost-effective data sharing system between fabs for lithography advanced process control (APC). Lithography APC requires process flow, inter-layer information, history information, mask information, and so on, so an inter-APC data sharing system has become necessary when lots are to be processed in multiple fabs (usually two). The development cost and maintenance cost also have to be taken into account. The system handles the minimum information necessary to make trend predictions for the lots. Three types of data have to be shared for precise trend prediction. The first is device information for the lots, e.g., the process flow of the device and inter-layer information. The second is mask information from mask suppliers, e.g., pattern characteristics and pattern widths. The last is history data of the lots. Device information is an electronic file and easy to handle; the file format is common between APCs, and the file is uploaded into the database. For mask information sharing, mask information described in a common format is obtained from the mask vendor via a wide area network (WAN) and stored in the mask-information data server. This information is periodically transferred to one specific lithography-APC server and compiled into the database, and that server periodically delivers the mask information to every other lithography-APC server. The process-history data sharing system mainly consists of a function for delivering process-history data: when production lots are shipped to another fab, the product-related process-history data are delivered by the lithography-APC server at the shipping site. We have confirmed the function and effectiveness of the data sharing systems.
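The three shared record types above can be sketched as simple dataclasses joined into one lot-level input for trend prediction. All class and field names here are illustrative assumptions, not the actual system's schema.

```python
from dataclasses import dataclass

# Sketch of the three record types exchanged between lithography-APC
# servers; field names are assumptions for illustration only.
@dataclass
class DeviceInfo:
    device: str
    process_flow: list        # inter-layer information

@dataclass
class MaskInfo:
    mask_id: str
    pattern_width_nm: float   # from the mask supplier's common format

@dataclass
class HistoryRecord:
    lot_id: str
    layer: str
    overlay_offset_nm: float  # measured at the shipping fab

def trend_input(device, mask, history):
    """Join the three shared record types into one lot-level view."""
    return {"device": device.device, "mask": mask.mask_id,
            "offsets": [h.overlay_offset_nm for h in history]}

lot = trend_input(
    DeviceInfo("devA", ["M1", "M2"]),
    MaskInfo("MK-01", 90.0),
    [HistoryRecord("L001", "M1", 1.2), HistoryRecord("L001", "M2", -0.4)],
)
print(lot["offsets"])  # -> [1.2, -0.4]
```

The point of the join is that trend prediction at the receiving fab needs all three sources together; any one record type alone is insufficient.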
Education and training column: the learning collaborative.
MacDonald-Wilson, Kim L; Nemec, Patricia B
2015-03-01
This column describes the key components of a learning collaborative, with examples from the experience of 1 organization. A learning collaborative is a method for management, learning, and improvement of products or processes, and is a useful approach to implementation of a new service design or approach. This description draws from published material on learning collaboratives and the authors' experiences. The learning collaborative approach offers an effective method to improve service provider skills, provide support, and structure environments to result in lasting change for people using behavioral health services. This approach is consistent with psychiatric rehabilitation principles and practices, and serves to increase the overall capacity of the mental health system by structuring a process for discovering and sharing knowledge and expertise across provider agencies. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
Sterckx, Sigrid; Cockbain, Julian; Howard, Heidi; Huys, Isabelle; Borry, Pascal
2013-05-01
Recently, 23andMe announced that it had obtained its first patent, related to "polymorphisms associated with Parkinson's disease" (US-B-8187811). This announcement immediately sparked controversy in the community of 23andMe users and research participants, especially with regard to issues of transparency and trust. The purpose of this article was to analyze the patent portfolio of this prominent direct-to-consumer genetic testing company and discuss the potential ethical implications of patenting in this field for public participation in Web-based genetic research. We searched the publicly accessible patent database Espacenet as well as the commercially available database Micropatent for published patents and patent applications of 23andMe. Six patent families were identified for 23andMe. These included patent applications related to: genetic comparisons between grandparents and grandchildren, family inheritance, genome sharing, processing data from genotyping chips, gamete donor selection based on genetic calculations, finding relatives in a database, and polymorphisms associated with Parkinson disease. An important lesson to be drawn from this ongoing controversy seems to be that any (private or public) organization involved in research that relies on human participation, whether by providing information, body material, or both, needs to be transparent, not only about its research goals but also about its strategies and policies regarding commercialization.
Ramirez-Gonzalez, Ricardo; Caccamo, Mario; MacLean, Daniel
2011-10-01
Scientists now use high-throughput sequencing technologies and short-read assembly methods to create draft genome assemblies in just days. Assembly tools and workflow management environments make it easy for a non-specialist to implement complicated pipelines to produce genome assemblies and annotations very quickly. Such accessibility results in a proliferation of assemblies and associated files, often for many organisms. These assemblies are used as working references by many different workers, from bioinformaticians doing gene prediction to bench scientists designing primers for PCR. Here we describe Gee Fu, a database tool for genomic assembly and feature data, including next-generation sequence alignments. Gee Fu is a Ruby-on-Rails web application on a feature database that provides web and console interfaces for input, visualization of feature data via AnnoJ, access to data through a web-service interface, an API for direct data access by Ruby scripts, and access to feature data stored in BAM files. Gee Fu provides a platform for storing and sharing different versions of an assembly and associated features that can be accessed and updated by bench biologists and bioinformaticians in ways that are easy and useful for each. Availability: http://tinyurl.com/geefu. Contact: dan.maclean@tsl.ac.uk.
Kim, Su Ran; Lee, Hye Won; Jun, Ji Hee; Ko, Byoung-Seob
2017-03-01
Gan Mai Da Zao (GMDZ) decoction is widely used for the treatment of various diseases of the internal organs and of the central nervous system. The aim of this study is to investigate the effects of GMDZ decoction on neuropsychiatric disorders in animal models. We searched seven databases for randomized animal studies published until April 2015: PubMed, four Korean databases (DBpia, Oriental Medicine Advanced Searching Integrated System, Korean Studies Information Service System, and Research Information Sharing Service), and one Chinese database (China National Knowledge Infrastructure). Randomized animal studies were included if they tested the effects of GMDZ decoction on neuropsychiatric disorders. All articles were read in full, and data were extracted according to predefined criteria by two independent reviewers. From a total of 258 hits, six randomized controlled animal studies were included. Five studies used Sprague Dawley rat models of acute psychological stress, post-traumatic stress disorder, and unpredictable mild stress depression, whereas one study used a Kunming mouse model of prenatal depression. The results of the studies showed that GMDZ decoction improved the related outcomes. Regardless of the dose and concentration used, GMDZ decoction significantly improved neuropsychiatric disease-related outcomes in animal models. However, additional systematic and extensive studies should be conducted to establish a strong conclusion.
Pleurochrysome: A Web Database of Pleurochrysis Transcripts and Orthologs Among Heterogeneous Algae
Fujiwara, Shoko; Takatsuka, Yukiko; Hirokawa, Yasutaka; Tsuzuki, Mikio; Takano, Tomoyuki; Kobayashi, Masaaki; Suda, Kunihiro; Asamizu, Erika; Yokoyama, Koji; Shibata, Daisuke; Tabata, Satoshi; Yano, Kentaro
2016-01-01
Pleurochrysis is a coccolithophorid genus, which belongs to the Coccolithales in the Haptophyta. The genus has been used extensively for biological research, together with Emiliania in the Isochrysidales, to understand the distinctive features of the two coccolithophorid-containing orders. However, molecular biological research on Pleurochrysis, such as elucidation of the molecular mechanism behind coccolith formation, has not made great progress, at least in part because of a lack of comprehensive gene information. To provide such information to the research community, we built an open web database, the Pleurochrysome (http://bioinf.mind.meiji.ac.jp/phapt/), which currently stores 9,023 unique gene sequences (designated as UNIGENEs) assembled from expressed sequence tag sequences of P. haptonemofera as core information. The UNIGENEs were annotated with gene sequences sharing significant homology, conserved domains, Gene Ontology, KEGG Orthology, predicted subcellular localization, open reading frames, and orthologous relationships with genes of 10 other algal species, a cyanobacterium, and the yeast Saccharomyces cerevisiae. This sequence and annotation information can be easily accessed via several search functions. Besides fundamental functions such as BLAST and keyword searches, this database also offers search functions to explore orthologous genes in the 12 organisms and to seek novel genes. The Pleurochrysome will promote molecular biological and phylogenetic research on coccolithophorids and other haptophytes by helping scientists mine data from the primary transcriptome of P. haptonemofera. PMID:26746174
Population-Based Analysis and Projections of Liver Supply Under Redistricting.
Parikh, Neehar D; Marrero, Wesley J; Sonnenday, Christopher J; Lok, Anna S; Hutton, David W; Lavieri, Mariel S
2017-09-01
To reduce the geographic heterogeneity in liver transplant allocation, the United Network for Organ Sharing has proposed redistricting, which is affected by both donor supply and liver transplantation demand. We aimed to determine the impact of demographic changes on the redistricting proposal and to characterize the causes behind geographic heterogeneity in donor supply. We analyzed adult donors from 2002 to 2014 from the United Network for Organ Sharing database and calculated regional liver donation and utilization stratified by age, race, and body mass index. We used US population data to make regional projections of available donors from 2016 to 2025, incorporating the proposed 8-region redistricting plan. We used donors/100 000 population age 18 to 84 years (D/100K) as a measure of equity and calculated a coefficient of variation (standard deviation/mean) for each regional model. We performed an exploratory analysis in which we applied national rates of donation, utilization, or both to each regional model. The overall projected D/100K will decrease from 2.53 to 2.49 from 2016 to 2025. The coefficient of variation in 2016 is expected to be 20.3% in the 11-region model and 13.2% in the 8-region model. We found that standardizing regional donation and utilization rates would reduce geographic heterogeneity to 4.9% in the 8-region model and 4.6% in the 11-region model. The 8-region allocation model will reduce geographic variation in donor supply to a significant extent; however, we project that geographic disparity will marginally increase over time. Though challenging, interventions to better standardize donation and utilization rates would be impactful in reducing geographic heterogeneity in organ supply.
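The equity metric above, a coefficient of variation (standard deviation divided by the mean) over regional donor rates, can be sketched in a few lines. The regional D/100K values below are invented for illustration, not UNOS data.

```python
import statistics

def coefficient_of_variation(rates):
    """CV = population standard deviation / mean of regional D/100K rates."""
    return statistics.pstdev(rates) / statistics.mean(rates)

# Invented donors-per-100,000 values for an 11-region model
# (illustrative only, not UNOS projections).
eleven_region = [2.1, 2.9, 2.4, 3.1, 1.9, 2.6, 2.2, 3.0, 2.5, 2.8, 2.0]
print(f"{coefficient_of_variation(eleven_region):.1%}")
```

A lower CV means regional donor supply is more uniform, which is why standardizing donation and utilization rates drives the projected CV down toward 5%.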
OLYMPUS DISS - A Readily Implemented Geographic Data and Information Sharing System
NASA Astrophysics Data System (ADS)
Necsoiu, D. M.; Winfrey, B.; Murphy, K.; McKague, H. L.
2002-12-01
Electronic information technology has become a crucial component of business, government, and scientific organizations. In this technology era, many enterprises are moving away from the perception that information repositories are only a tool for decision-making. Instead, many organizations are learning that information systems capable of organizing and following the interrelations between information and both short-term and strategic organizational goals are assets themselves, with inherent value. The Olympus Data and Information Sharing System (DISS) is a system developed at the Center for Nuclear Waste Regulatory Analyses (CNWRA) to solve several difficult tasks associated with the management of geographical, geological, and geophysical data. Three of the tasks were to (1) gather the large amount of heterogeneous information that has accumulated over the operational lifespan of CNWRA, (2) store the data in a central, knowledge-based, searchable database, and (3) create quick, easy, convenient, and reliable access to that information. Faced with these difficult tasks, CNWRA identified the requirements for designing such a system. Key design criteria were: (a) ability to ingest different data formats (i.e., raster, vector, and tabular data); (b) minimal expense, using open-source and commercial off-the-shelf software; (c) seamless management of geospatial data, freeing up time for researchers to focus on analyses or algorithm development rather than on time-consuming format conversions; (d) controlled access; and (e) scalable architecture to meet new and continuing demands. Olympus DISS is a solution that can be easily adapted to small and mid-size enterprises dealing with heterogeneous geographic data. It uses established data standards, provides a flexible mechanism to build applications upon, and outputs geographic data in multiple, clear ways.
This abstract is an independent product of the CNWRA and does not necessarily reflect the views or regulatory position of the Nuclear Regulatory Commission.
A Taxonomic Search Engine: Federating taxonomic databases using web services
Page, Roderic DM
2005-01-01
Background The taxonomic name of an organism is a key link between different databases that store information on that organism. However, in the absence of a single, comprehensive database of organism names, individual databases lack an easy means of checking the correctness of a name. Furthermore, the same organism may have more than one name, and the same name may apply to more than one organism. Results The Taxonomic Search Engine (TSE) is a web application written in PHP that queries multiple taxonomic databases (ITIS, Index Fungorum, IPNI, NCBI, and uBIO) and summarises the results in a consistent format. It supports "drill-down" queries to retrieve a specific record. The TSE can optionally suggest alternative spellings the user can try. It also acts as a Life Science Identifier (LSID) authority for the source taxonomic databases, providing globally unique identifiers (and associated metadata) for each name. Conclusion The Taxonomic Search Engine is available at and provides a simple demonstration of the potential of the federated approach to providing access to taxonomic names. PMID:15757517
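The federation pattern the TSE abstract describes, querying several taxonomic sources and summarising the hits in one consistent format, can be sketched as follows. The source names (ITIS, IPNI, NCBI) are real, but the lookup functions below are hypothetical canned stand-ins; the actual TSE is a PHP application that queries live web services.

```python
# Federated-search sketch: query each source, then normalise every hit
# into one consistent record shape ({"source", "name", "raw"}).
# The three lookup functions are stand-ins, not real service clients.

def search_itis(name):
    # Hypothetical canned response standing in for a live ITIS query.
    return [{"tsn": "180092", "scientificName": name}]

def search_ipni(name):
    return []  # no match in this source

def search_ncbi(name):
    return [{"taxid": "9606", "name": name}]

# Each source pairs a query function with an extractor that maps its
# native record shape onto a common name field.
SOURCES = {
    "ITIS": (search_itis, lambda r: r.get("scientificName")),
    "IPNI": (search_ipni, lambda r: r.get("name")),
    "NCBI": (search_ncbi, lambda r: r.get("name")),
}

def federated_search(name):
    """Query every source and summarise the hits in a consistent format."""
    results = []
    for source, (query, extract_name) in SOURCES.items():
        for record in query(name):
            results.append({"source": source,
                            "name": extract_name(record),
                            "raw": record})
    return results

hits = federated_search("Homo sapiens")
```

The per-source extractor functions are what give the "consistent format": each source keeps its native record under `raw`, while downstream code only touches the normalised fields.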
The dynamics of shared leadership: building trust and enhancing performance.
Drescher, Marcus A; Korsgaard, M Audrey; Welpe, Isabell M; Picot, Arnold; Wigand, Rolf T
2014-09-01
In this study, we examined how the dynamics of shared leadership are related to group performance. We propose that, over time, the expansion of shared leadership within groups is related to growth in group trust. In turn, growth in group trust is related to performance improvement. Longitudinal data from 142 groups engaged in a strategic simulation game over a 4-month period provide support for positive changes in trust mediating the relationship between positive changes in shared leadership and positive changes in performance. Our findings contribute to the literature on shared leadership and group dynamics by demonstrating how the growth in shared leadership contributes to the emergence of trust and a positive performance trend over time. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Kim, Seckyoung Loretta; Yun, Seokhwa
2015-03-01
Considering the importance of coworkers and knowledge sharing in current business environment, this study intends to advance understanding by investigating the effect of coworker knowledge sharing on focal employees' task performance. Furthermore, by taking an interactional perspective, this study examines the boundary conditions of coworker knowledge sharing on task performance. Data from 149 samples indicate that there is a positive relationship between coworker knowledge sharing and task performance, and this relationship is strengthened when general self-efficacy or abusive supervision is low rather than high. Our findings suggest that the recipients' characteristics and leaders' behaviors could be important contingent factors that limit the effect of coworker knowledge sharing on task performance. Implications for theory and practice are discussed. PsycINFO Database Record (c) 2015 APA, all rights reserved.
Organization and dissemination of multimedia medical databases on the WWW.
Todorovski, L; Ribaric, S; Dimec, J; Hudomalj, E; Lunder, T
1999-01-01
In the paper, we focus on the problem of building and disseminating multimedia medical databases on the World Wide Web (WWW). The current results of the ongoing project of building a prototype dermatology images database and its WWW presentation are presented. The dermatology database is part of an ambitious plan to organize a network of medical institutions building distributed, federated multimedia databases on a much wider scale.
Software Engineering Laboratory (SEL) database organization and user's guide, revision 2
NASA Technical Reports Server (NTRS)
Morusiewicz, Linda; Bristow, John
1992-01-01
The organization of the Software Engineering Laboratory (SEL) database is presented. Included are definitions and detailed descriptions of the database tables and views, the SEL data, and system support data. The mapping from the SEL and system support data to the base table is described. In addition, techniques for accessing the database through the Database Access Manager for the SEL (DAMSEL) system and via the ORACLE structured query language (SQL) are discussed.
Software Engineering Laboratory (SEL) database organization and user's guide
NASA Technical Reports Server (NTRS)
So, Maria; Heller, Gerard; Steinberg, Sandra; Spiegel, Douglas
1989-01-01
The organization of the Software Engineering Laboratory (SEL) database is presented. Included are definitions and detailed descriptions of the database tables and views, the SEL data, and system support data. The mapping from the SEL and system support data to the base tables is described. In addition, techniques for accessing the database, through the Database Access Manager for the SEL (DAMSEL) system and via the ORACLE structured query language (SQL), are discussed.
International energy: Research organizations, 1988--1992. Revision 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hendricks, P.; Jordan, S.
This publication contains the standardized names of energy research organizations used in energy information databases. Involved in this cooperative task are (1) the technical staff of the US DOE Office of Scientific and Technical Information (OSTI) in cooperation with the member countries of the Energy Technology Data Exchange (ETDE) and (2) the International Nuclear Information System (INIS). ETDE member countries are also members of the International Nuclear Information System (INIS). Nuclear organization names recorded for INIS by these ETDE member countries are also included in the ETDE Energy Database. Therefore, these organization names are cooperatively standardized for use in both information systems. This publication identifies current organizations doing research in all energy fields, standardizes the format for recording these organization names in bibliographic citations, assigns a numeric code to facilitate data entry, and identifies report number prefixes assigned by these organizations. These research organization names may be used in searching the databases "Energy Science & Technology" on DIALOG and "Energy" on STN International. These organization names are also used in USDOE databases on the Integrated Technical Information System. Research organizations active in the past five years, as indicated by database records, were identified to form this publication. This directory includes approximately 31,000 organizations that reported energy-related literature from 1988 to 1992 and updates the DOE Energy Data Base: Corporate Author Entries.
OperomeDB: A Database of Condition-Specific Transcription Units in Prokaryotic Genomes.
Chetal, Kashish; Janga, Sarath Chandra
2015-01-01
Background. In prokaryotic organisms, a substantial fraction of adjacent genes are organized into operons: codirectionally organized genes in prokaryotic genomes with the presence of a common promoter and terminator. Although several available operon databases provide information with varying levels of reliability, very few resources provide experimentally supported results. Therefore, we believe that the biological community could benefit from having a new operon prediction database with operons predicted using next-generation RNA-seq datasets. Description. We present operomeDB, a database which provides an ensemble of all the predicted operons for bacterial genomes using available RNA-sequencing datasets across a wide range of experimental conditions. Although several studies have recently confirmed that prokaryotic operon structure is dynamic with significant alterations across environmental and experimental conditions, there are no comprehensive databases for studying such variations across prokaryotic transcriptomes. Currently, our database contains nine bacterial organisms and 168 transcriptomes for which we predicted operons. The user interface is simple and easy to use, in terms of visualization, downloading, and querying of data. In addition, because of its ability to load custom datasets, users can also compare their datasets with publicly available transcriptomic data of an organism. Conclusion. OperomeDB as a database should not only aid experimental groups working on transcriptome analysis of specific organisms but also enable studies related to computational and comparative operomics.
Magnetta, Michael J; Xing, Minzhi; Zhang, Di; Kim, Hyun S
2016-12-01
To investigate socioeconomic and demographic factors associated with transplantation outcomes in patients with hepatocellular carcinoma (HCC) treated with bridging locoregional therapy (LRT) before orthotopic liver transplantation (OLT). The United Network for Organ Sharing (UNOS) database was used to identify all patients in the United States with HCC who were listed for OLT between 2002 and 2013. Mean overall survival (OS) after OLT was stratified based on age, sex, ethnicity, transplant year, region, and insurance status. Kaplan-Meier estimation was used for survival analysis with log-rank test and Cox proportional hazards model to assess independent prognostic factors for OS. Of the 17,291 listed patients with HCC, 14,511 underwent OLT. Mean age was 57.4 years (76.8% male). Favorable sociodemographic factors were associated with increased rates of bridging LRT before OLT and longer wait time on the transplant list and were shown to be independent prognostic factors for prolonged OS after OLT using multivariate analysis. Favorable demographic factors included patient age < 60 years, donor age < 45 years, year of diagnosis between 2008 and 2013, UNOS regions 4 and 5, Asian ethnicity, high functional status, postgraduate education, private payer insurance, and employment at the time of OLT. Patients with favorable sociodemographics had higher rates of LRT before OLT performed for HCC cure. These patients had longer transplant wait times and longer OS after OLT. Copyright © 2016 SIR. Published by Elsevier Inc. All rights reserved.
Organ donation and transplantation-the Chennai experience in India.
Shroff, S; Rao, S; Kurian, G; Suresh, S
2007-04-01
Tamil Nadu has been at the forefront of medical care in the country. It was the first state in the country to start a living kidney transplant program. It was also the first state to successfully start a cadaver programme after the passing of the "Transplantation of Human Organ Act" of 1994, and in the last 5 years it has formed a network between hospitals for organ sharing. From 2000 to 2006 an organ sharing network operated in Tamil Nadu, facilitated by a non-government organization called MOHAN (acronym for Multi Organ Harvesting Aid Network) Foundation. Over 460 organs were shared during this period across two regions (Tamil Nadu and Hyderabad). In Tamil Nadu the shared organs have included 166 kidneys, 24 livers, 6 hearts, and 180 eyes. In 2003 a sharing network was initiated by MOHAN in Hyderabad, where the Tamil Nadu model was duplicated to some extent with some success, and 96 cadaver organs have been transplanted in the last 3 years. There are many advantages to organ sharing, including the cost economics. At present there is a large pool of brain-dead patients in the major cities of India who could become potential organ donors, but their organs are not being utilized owing to various logistical constraints. A multi-pronged strategy is required for the long-term success of this program. These years in Tamil Nadu have been years of learning, un-learning and relearning, and the program today has slowly matured into what can perhaps be evolved as an Indian model. In all these years there have been various difficulties in its implementation, and one of the key elements for the success of the program is the need to educate our own medical fraternity and seek their cooperation. The program requires trained counselors able to work in intensive care units. The government's support is pivotal if this program is to provide benefit to the common man.
MOHAN Foundation has accumulated considerable experience to be able to evolve a model to take this program to the national level and more so as it recently has been granted 100% tax exemption on all donations to form a countrywide network for organ sharing.
Managing Information Sharing within an Organizational Setting: A Social Network Perspective
ERIC Educational Resources Information Center
Hatala, John-Paul; Lutta, Joseph George
2009-01-01
Information sharing is critical to an organization's competitiveness and requires a free flow of information among members if the organization is to remain competitive. A review of the literature on organizational structure and information sharing was conducted to examine the research in this area. A case example illustrates how a social network…
ERIC Educational Resources Information Center
Shoemaker, Nikki
2014-01-01
Both practitioners and researchers recognize the increasing importance of knowledge sharing in organizations (Bock, Zmud, Kim, & Lee, 2005; Vera-Muñoz, Ho, & Chow, 2006). Knowledge sharing influences a firm's knowledge creation, organizational learning, performance achievement, growth, and competitive advantage (Bartol &…
Levin, Lia; Schwartz-Tayri, Talia
2017-06-01
Partnerships between service users and social workers are complex in nature and can be driven by both personal and contextual circumstances. This study sought to explore the relationship between social workers' involvement in shared decision making with service users, their attitudes towards service users in poverty, moral standards and health and social care organizations' policies towards shared decision making. Based on the responses of 225 licensed social workers from health and social care agencies in the public, private and third sectors in Israel, path analysis was used to test a hypothesized model. Structural attributions for poverty contributed to attitudes towards people who live in poverty, which led to shared decision making. Also, organizational support in shared decision making, and professional moral identity, contributed to ethical behaviour which led to shared decision making. The results of this analysis revealed that shared decision making may be a scion of branched roots planted in the relationship between ethics, organizations and Stigma. © 2016 The Authors. Health Expectations Published by John Wiley & Sons Ltd.
Westra, Daan; Angeli, Federica; Carree, Martin; Ruwaard, Dirk
2017-08-01
Cooperative inter-organizational relations are salient to healthcare delivery. However, they do not match with the pro-competitive healthcare reforms implemented in several countries. Healthcare organizations thus need to balance competition and cooperation in a situation of 'coopetition'. In this paper we study the individual and organizational determinants of coopetition versus those of cooperation in the price-competitive specialized care sector of the Netherlands. We use shared medical specialists as a proxy of collaboration between healthcare organizations. Based on a sample of 15,431 medical specialists and 371 specialized care organizations from March 2016, one logistic multi-level model is used to predict medical specialists' likelihood to be shared and another to predict their likelihood to be shared to a competitor. We find that different organizations share different specialists to competitors and non-competitors. Cooperation and coopetition are hence distinct organizational strategies in health care. Cooperation manifests through spin-off formation. Coopetition occurs most among organizations in the price-competitive market segment but in alternative geographical markets. Hence, coopetition in health care does not appear to be particularly anti-competitive. However, healthcare organizations seem reluctant to share their most specialized human resources, limiting the knowledge-sharing effects of this type of relation. Therefore, it remains unclear whether coopetition in health care is beneficial to patients. Copyright © 2017 Elsevier Ltd. All rights reserved.
GMODWeb: a web framework for the generic model organism database
O'Connor, Brian D; Day, Allen; Cain, Scott; Arnaiz, Olivier; Sperling, Linda; Stein, Lincoln D
2008-01-01
The Generic Model Organism Database (GMOD) initiative provides species-agnostic data models and software tools for representing curated model organism data. Here we describe GMODWeb, a GMOD project designed to speed the development of model organism database (MOD) websites. Sites created with GMODWeb provide integration with other GMOD tools and allow users to browse and search through a variety of data types. GMODWeb was built using the open source Turnkey web framework and is available from . PMID:18570664
Hasson, Uri; Skipper, Jeremy I; Wilde, Michael J; Nusbaum, Howard C; Small, Steven L
2008-01-15
The increasingly complex research questions addressed by neuroimaging research impose substantial demands on computational infrastructures. These infrastructures need to support management of massive amounts of data in a way that affords rapid and precise data analysis, to allow collaborative research, and to achieve these aims securely and with minimum management overhead. Here we present an approach that overcomes many current limitations in data analysis and data sharing. This approach is based on open source database management systems that support complex data queries as an integral part of data analysis, flexible data sharing, and parallel and distributed data processing using cluster computing and Grid computing resources. We assess the strengths of these approaches as compared to current frameworks based on storage of binary or text files. We then describe in detail the implementation of such a system and provide a concrete description of how it was used to enable a complex analysis of fMRI time series data.
Harmonizing the interpretation of genetic variants across the world: the Malaysian experience.
Hassan, Nik Norliza Nik; Plazzer, John-Paul; Smith, Timothy D; Halim-Fikri, Hashim; Macrae, Finlay; Zubaidi, A A L; Zilfalil, Bin Alwi
2016-02-26
Databases for gene variants are very useful for sharing genetic data and to facilitate the understanding of the genetic basis of diseases. This report summarises the issues surrounding the development of the Malaysian Human Variome Project Country Node. The focus is on human germline variants. Somatic variants, mitochondrial variants and other types of genetic variation have corresponding databases which are not covered here, as they have specific issues that do not necessarily apply to germline variations. The ethical, legal, social issues, intellectual property, ownership of the data, information technology implementation, and efforts to improve the standards and systems used in data sharing are discussed. An overarching framework such as provided by the Human Variome Project to co-ordinate activities is invaluable. Country Nodes, such as MyHVP, enable human gene variation associated with human diseases to be collected, stored and shared by all disciplines (clinicians, molecular biologists, pathologists, bioinformaticians) for a consistent interpretation of genetic variants locally and across the world.
Meta4: a web application for sharing and annotating metagenomic gene predictions using web services.
Richardson, Emily J; Escalettes, Franck; Fotheringham, Ian; Wallace, Robert J; Watson, Mick
2013-01-01
Whole-genome shotgun metagenomics experiments produce DNA sequence data from entire ecosystems, and provide a huge amount of novel information. Gene discovery projects require up-to-date information about sequence homology and domain structure for millions of predicted proteins to be presented in a simple, easy-to-use system. There is a lack of simple, open, flexible tools that allow the rapid sharing of metagenomics datasets with collaborators in a format they can easily interrogate. We present Meta4, a flexible and extensible web application that can be used to share and annotate metagenomic gene predictions. Proteins and predicted domains are stored in a simple relational database, with a dynamic front-end which displays the results in an internet browser. Web services are used to provide up-to-date information about the proteins from homology searches against public databases. Information about Meta4 can be found on the project website, code is available on Github, a cloud image is available, and an example implementation can be seen at.
MedBlock: Efficient and Secure Medical Data Sharing Via Blockchain.
Fan, Kai; Wang, Shangyang; Ren, Yanhui; Li, Hui; Yang, Yintang
2018-06-21
With the development of electronic information technology, electronic medical records (EMRs) have been a common way to store the patients' data in hospitals. They are stored in different hospitals' databases, even for the same patient. Therefore, it is difficult to construct a summarized EMR for one patient from multiple hospital databases due to the security and privacy concerns. Meanwhile, current EMRs systems lack a standard data management and sharing policy, making it difficult for pharmaceutical scientists to develop precise medicines based on data obtained under different policies. To solve the above problems, we proposed a blockchain-based information management system, MedBlock, to handle patients' information. In this scheme, the distributed ledger of MedBlock allows the efficient EMRs access and EMRs retrieval. The improved consensus mechanism achieves consensus of EMRs without large energy consumption and network congestion. In addition, MedBlock also exhibits high information security combining the customized access control protocols and symmetric cryptography. MedBlock can play an important role in the sensitive medical information sharing.
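The MedBlock abstract does not specify the system's internals, but the core property a blockchain-style distributed ledger gives EMR sharing, tamper evidence for records appended by different hospitals, can be illustrated with a minimal hash-linked chain. This is a generic sketch of the technique, not MedBlock's actual consensus mechanism or access-control protocol.

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's contents (excluding its own hash field) deterministically.
    payload = json.dumps({k: v for k, v in block.items() if k != "hash"},
                         sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append_record(chain, record):
    # Link each new block to the hash of the previous one.
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "prev": prev, "record": record}
    block["hash"] = block_hash(block)
    chain.append(block)

def verify(chain):
    # A chain verifies only if every block's hash matches its contents
    # and every link points at the preceding block's hash.
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
append_record(chain, {"patient": "P001", "hospital": "A", "note": "EMR entry"})
append_record(chain, {"patient": "P001", "hospital": "B", "note": "follow-up"})
```

Because each block commits to its predecessor's hash, silently editing any earlier EMR entry invalidates every subsequent link, which is what makes a shared, append-only medical ledger auditable across hospitals.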
International energy: Research organizations, 1986--1990
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hendricks, P.; Jordan, S.
The International Energy: Research Organizations publication contains the standardized names of energy research organizations used in energy information databases. Involved in this cooperative task are (1) the technical staff of the USDOE Office of Scientific and Technical Information (OSTI) in cooperation with the member countries of the Energy Technology Data Exchange (ETDE) and (2) the International Nuclear Information System (INIS). This publication identifies current organizations doing research in all energy fields, standardizes the format for recording these organization names in bibliographic citations, assigns a numeric code to facilitate data entry, and identifies report number prefixes assigned by these organizations. These research organization names may be used in searching the databases "Energy Science & Technology" on DIALOG and "Energy" on STN International. These organization names are also used in USDOE databases on the Integrated Technical Information System. Research organizations active in the past five years, as indicated by database records, were identified to form this publication. This directory includes approximately 34,000 organizations that reported energy-related literature from 1986 to 1990 and updates the DOE Energy Data Base: Corporate Author Entries.
Code of Federal Regulations, 2014 CFR
2014-10-01
... the Service Management System database without having an actual toll free subscriber for whom those... database; or (2) The Responsible Organization does not have an identified toll free subscriber agreeing to... database shall serve as that Responsible Organization's certification that there is an identified toll free...
Code of Federal Regulations, 2011 CFR
2011-10-01
... the Service Management System database without having an actual toll free subscriber for whom those... database; or (2) The Responsible Organization does not have an identified toll free subscriber agreeing to... database shall serve as that Responsible Organization's certification that there is an identified toll free...
Code of Federal Regulations, 2013 CFR
2013-10-01
... the Service Management System database without having an actual toll free subscriber for whom those... database; or (2) The Responsible Organization does not have an identified toll free subscriber agreeing to... database shall serve as that Responsible Organization's certification that there is an identified toll free...
Code of Federal Regulations, 2010 CFR
2010-10-01
... the Service Management System database without having an actual toll free subscriber for whom those... database; or (2) The Responsible Organization does not have an identified toll free subscriber agreeing to... database shall serve as that Responsible Organization's certification that there is an identified toll free...
Code of Federal Regulations, 2012 CFR
2012-10-01
... the Service Management System database without having an actual toll free subscriber for whom those... database; or (2) The Responsible Organization does not have an identified toll free subscriber agreeing to... database shall serve as that Responsible Organization's certification that there is an identified toll free...
The purpose of this SOP is to describe the database storage organization, and to describe the sources of data for each database used during the Arizona NHEXAS project and the Border study. Keywords: data; database; organization.
The U.S.-Mexico Border Program is sponsored by t...
A secure data outsourcing scheme based on Asmuth-Bloom secret sharing
NASA Astrophysics Data System (ADS)
Idris Muhammad, Yusuf; Kaiiali, Mustafa; Habbal, Adib; Wazan, A. S.; Sani Ilyasu, Auwal
2016-11-01
Data outsourcing is an emerging paradigm for data management in which a database is provided as a service by third-party service providers. One of the major benefits of offering database as a service is to provide organisations, which are unable to purchase expensive hardware and software to host their databases, with efficient data storage accessible online at a cheap rate. Despite that, several issues of data confidentiality, integrity, availability and efficient indexing of users' queries at the server side have to be addressed in the data outsourcing paradigm. Service providers have to guarantee that their clients' data are secured against internal (insider) and external attacks. This paper briefly analyses the existing indexing schemes in data outsourcing and highlights their advantages and disadvantages. Then, this paper proposes a secure data outsourcing scheme based on Asmuth-Bloom secret sharing which tries to address the issues in data outsourcing such as data confidentiality, availability and order preservation for efficient indexing.
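The Asmuth-Bloom scheme the paper builds on is a Chinese-remainder-theorem secret sharing method, and a toy round trip makes the mechanics concrete. The moduli below are illustrative values I chose to satisfy the scheme's condition (the product of the k smallest share moduli must exceed m0 times the product of the k-1 largest); the blinding factor `a` is fixed here for reproducibility but must be drawn at random in any real deployment.

```python
from math import prod

# Asmuth-Bloom (k-of-n) secret sharing sketch with toy parameters.
m0 = 3                 # secret space: 0 <= s < m0
moduli = [11, 13, 17]  # pairwise coprime share moduli, k = 2 of n = 3
k = 2                  # condition holds: 11 * 13 = 143 > 3 * 17 = 51

def share(s, a):
    y = s + a * m0                   # blind the secret with a multiple of m0
    assert y < prod(moduli[:k])      # y must be recoverable from any k shares
    return [y % m for m in moduli]   # share i is y reduced modulo m_i

def reconstruct(shares):
    # shares: list of (modulus, residue) pairs; any k of them suffice.
    M = prod(m for m, _ in shares[:k])
    y = 0
    for m, r in shares[:k]:
        Mi = M // m
        y += r * Mi * pow(Mi, -1, m)  # CRT recombination
    return (y % M) % m0               # unblind: s = y mod m0

shares = share(2, 15)  # secret s = 2, blinding factor a = 15, so y = 47
```

Any single share reveals nothing useful about `s` because many candidate values of `y` are consistent with one residue, while any two shares pin `y` down exactly via the CRT; this threshold property is what the outsourcing scheme exploits to spread a database across servers.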
Integration of a neuroimaging processing pipeline into a pan-canadian computing grid
NASA Astrophysics Data System (ADS)
Lavoie-Courchesne, S.; Rioux, P.; Chouinard-Decorte, F.; Sherif, T.; Rousseau, M.-E.; Das, S.; Adalat, R.; Doyon, J.; Craddock, C.; Margulies, D.; Chu, C.; Lyttelton, O.; Evans, A. C.; Bellec, P.
2012-02-01
The ethos of the neuroimaging field is quickly moving towards the open sharing of resources, including both imaging databases and processing tools. As a neuroimaging database represents a large volume of datasets and as neuroimaging processing pipelines are composed of heterogeneous, computationally intensive tools, such open sharing raises specific computational challenges. This motivates the design of novel dedicated computing infrastructures. This paper describes an interface between PSOM, a code-oriented pipeline development framework, and CBRAIN, a web-oriented platform for grid computing. This interface was used to integrate a PSOM-compliant pipeline for preprocessing of structural and functional magnetic resonance imaging into CBRAIN. We further tested the capacity of our infrastructure to handle a real large-scale project. A neuroimaging database including close to 1000 subjects was preprocessed using our interface and publicly released to help the participants of the ADHD-200 international competition. This successful experiment demonstrated that our integrated grid-computing platform is a powerful solution for high-throughput pipeline analysis in the field of neuroimaging.
Implementation of a health data-sharing infrastructure across diverse primary care organizations.
Cole, Allison M; Stephens, Kari A; Keppel, Gina A; Lin, Ching-Ping; Baldwin, Laura-Mae
2014-01-01
Practice-based research networks bring together academic researchers and primary care clinicians to conduct research that improves health outcomes in real-world settings. The Washington, Wyoming, Alaska, Montana, and Idaho region Practice and Research Network implemented a health data-sharing infrastructure across 9 clinics in 3 primary care organizations. Following implementation, we identified challenges and solutions. Challenges included working with diverse primary care organizations, adoption of health information data-sharing technology in a rapidly changing local and national landscape, and limited resources for implementation. Overarching solutions included working with a multidisciplinary academic implementation team, maintaining flexibility, and starting with an established network for primary care organizations. Approaches outlined may generalize to similar initiatives and facilitate adoption of health data sharing in other practice-based research networks.
Federated Search Tools in Fusion Centers: Bridging Databases in the Information Sharing Environment
2012-09-01
considerable variation in how fusion centers plan for, gather requirements, select and acquire federated search tools to bridge disparate databases ... centers, when considering integrating federated search tools; by evaluating the importance of the planning, requirements gathering, selection and ... acquisition processes for integrating federated search tools; by acknowledging the challenges faced by some fusion centers during these integration processes
Sujansky, Walter V; Faus, Sam A; Stone, Ethan; Brennan, Patricia Flatley
2010-10-01
Online personal health records (PHRs) enable patients to access, manage, and share selected elements of their own health information electronically. This capability creates the need for precise access-control mechanisms that restrict the sharing of data to that intended by the patient. The authors describe the design and implementation of an access-control mechanism for PHR repositories that is modeled on the eXtensible Access Control Markup Language (XACML) standard, but intended to reduce the cognitive and computational complexity of XACML. The authors implemented the mechanism entirely in a relational database system using ANSI-standard SQL statements. Based on a set of access-control rules encoded as relational table rows, the mechanism determines via a single SQL query whether a user who accesses patient data from a specific application is authorized to perform a requested operation on a specified data object. Testing of this query on a moderately large database has demonstrated execution times consistently below 100 ms. The authors include the details of the implementation, including algorithms, examples, and a test database, as supplementary materials. Copyright © 2010 Elsevier Inc. All rights reserved.
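The table-driven, single-query decision described above can be sketched as follows. This is a minimal illustration assuming a hypothetical `acl_rules` schema and a deny-overrides combining rule; it is not the authors' actual schema or policy model.

```python
import sqlite3

# Illustrative sketch (not the paper's schema): access-control rules stored
# as relational rows; one SQL query decides whether a (user, application,
# operation, object) request is authorized.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE acl_rules (
    user_id     TEXT,   -- '*' matches any user
    app_id      TEXT,   -- '*' matches any application
    operation   TEXT,   -- e.g. 'read', 'write'
    object_type TEXT,   -- e.g. 'medication', 'lab_result'
    effect      TEXT    -- 'permit' or 'deny'
);
INSERT INTO acl_rules VALUES
    ('alice', '*',     'read',  'lab_result', 'permit'),
    ('*',     'app_x', 'write', 'lab_result', 'deny');
""")

def is_authorized(user, app, op, obj_type):
    # Deny-overrides: any matching 'deny' rule wins; otherwise a matching
    # 'permit' rule grants access; no matching rule means denied by default
    # (MIN over zero rows yields NULL, which fails the == 1 check).
    row = conn.execute("""
        SELECT MIN(CASE effect WHEN 'deny' THEN 0 ELSE 1 END)
        FROM acl_rules
        WHERE (user_id = ? OR user_id = '*')
          AND (app_id = ? OR app_id = '*')
          AND operation = ? AND object_type = ?
    """, (user, app, op, obj_type)).fetchone()
    return row[0] == 1

print(is_authorized('alice', 'app_y', 'read', 'lab_result'))   # True
print(is_authorized('alice', 'app_x', 'write', 'lab_result'))  # False
```

The single round-trip per decision is what keeps such a design fast; the sub-100 ms timings reported above suggest a similarly compact query plan.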
Hierarchical Data Distribution Scheme for Peer-to-Peer Networks
NASA Astrophysics Data System (ADS)
Bhushan, Shashi; Dave, M.; Patel, R. B.
2010-11-01
In the past few years, peer-to-peer (P2P) networks have become an extremely popular mechanism for large-scale content sharing. P2P systems have focused on specific application domains (e.g., music files, video files) or on providing file-system-like capabilities. P2P is a powerful paradigm that provides a large-scale and cost-effective mechanism for data sharing, and a P2P system may also be used for storing data globally. Can a conventional database be implemented on a P2P system? Successful implementations of conventional databases on P2P systems are yet to be reported. In this paper we present a mathematical model for the replication of partitions and a hierarchy-based data distribution scheme for P2P networks. We also analyze the resource utilization and throughput of the P2P system with respect to availability when a conventional database is implemented over the P2P system with a variable query rate. Simulation results show that database partitions placed on peers with a higher availability factor perform better. Degradation index, throughput, and resource utilization are the parameters evaluated with respect to the availability factor.
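The idea of availability-aware partition placement can be sketched with a toy example. The peer names, availability figures, and greedy placement rule below are invented for illustration and are not the paper's mathematical model.

```python
# Toy illustration (not the paper's replication model): place k replicas of
# each database partition on the peers with the highest availability factors,
# then estimate the probability that at least one replica is reachable.
def place_partitions(partitions, peer_availability, k):
    # peer_availability: {peer_id: availability in [0, 1]}
    ranked = sorted(peer_availability, key=peer_availability.get, reverse=True)
    return {p: ranked[:k] for p in partitions}

def partition_availability(replica_peers, peer_availability):
    # P(at least one replica up) = 1 - product of per-peer failure probabilities,
    # assuming independent peer failures.
    unavail = 1.0
    for peer in replica_peers:
        unavail *= 1.0 - peer_availability[peer]
    return 1.0 - unavail

peers = {"p1": 0.9, "p2": 0.6, "p3": 0.8, "p4": 0.3}
placement = place_partitions(["orders", "customers"], peers, k=2)
print(placement["orders"])                                 # ['p1', 'p3']
print(partition_availability(placement["orders"], peers))  # ≈ 0.98
```

This mirrors the simulation finding quoted above: partitions hosted on higher-availability peers yield higher effective database availability.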
Malware detection and analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chiang, Ken; Lloyd, Levi; Crussell, Jonathan
Embodiments of the invention describe systems and methods for malicious software detection and analysis. A binary executable comprising obfuscated malware on a host device may be received, and incident data indicating a time when the binary executable was received and identifying processes operating on the host device may be recorded. The binary executable is analyzed via a scalable plurality of execution environments, including one or more non-virtual execution environments and one or more virtual execution environments, to generate runtime data and deobfuscation data attributable to the binary executable. At least some of the runtime data and deobfuscation data attributable to the binary executable is stored in a shared database, while at least some of the incident data is stored in a private, non-shared database.
Kashefpur, Masoud; Kafieh, Rahele; Jorjandi, Sahar; Golmohammadi, Hadis; Khodabande, Zahra; Abbasi, Mohammadreza; Teifuri, Nilufar; Fakharzadeh, Ali Akbar; Kashefpoor, Maryam; Rabbani, Hossein
2017-01-01
An online depository was introduced to share clinical ground truth with the public and provide open access for researchers to evaluate their computer-aided algorithms. PHP was used for web programming and MySQL for database management. The website was entitled "biosigdata.com." It was a fast, secure, and easy-to-use online database for medical signals and images. Freely registered users could download the datasets and could also share their own supplementary materials while maintaining their privacy (citation and fee). Commenting was also available for all datasets, and automatic sitemap generation and semi-automatic SEO indexing have been set up for the site. A comprehensive list of available websites for medical datasets is also presented as a supplementary file (http://journalonweb.com/tempaccess/4800.584.JMSS_55_16I3253.pdf).
Online Information Sharing About Risks: The Case of Organic Food.
Hilverda, Femke; Kuttschreuter, Margôt
2018-03-23
Individuals have to make sense of an abundance of information to decide whether or not to purchase certain food products. One of the means to sense-making is information sharing. This article reports on a quantitative study examining online information sharing behavior regarding the risks of organic food products. An online survey among 535 respondents was conducted in the Netherlands to examine the determinants of information sharing behavior, and their relationships. Structural equation modeling was applied to test both the measurement model and the structural model. Results showed that the intention to share information online about the risks of organic food was low. Conversations and email were the preferred channels to share information; of the social media Facebook stood out. The developed model was found to provide an adequate description of the data. It explained 41% of the variance in information sharing. Injunctive norms and outcome expectancies were most important in predicting online information sharing, followed by information-related determinants. Risk-perception-related determinants showed a significant, but weak, positive relationship with online information sharing. Implications for authorities communicating on risks associated with food are addressed. © 2018 The Authors Risk Analysis published by Wiley Periodicals, Inc. on behalf of Society for Risk Analysis.
DynGO: a tool for visualizing and mining of Gene Ontology and its associations
Liu, Hongfang; Hu, Zhang-Zhi; Wu, Cathy H
2005-01-01
Background A large volume of data and information about genes and gene products has been stored in various molecular biology databases. A major challenge for knowledge discovery using these databases is to identify related genes and gene products in disparate databases. The development of Gene Ontology (GO) as a common vocabulary for annotation allows integrated queries across multiple databases and identification of semantically related genes and gene products (i.e., genes and gene products that have similar GO annotations). Meanwhile, dozens of tools have been developed for browsing, mining or editing GO terms, their hierarchical relationships, or their "associated" genes and gene products (i.e., genes and gene products annotated with GO terms). Tools that allow users to directly search and inspect relations among all GO terms and their associated genes and gene products from multiple databases are needed. Results We present a standalone package called DynGO, which provides several advanced functionalities in addition to the standard browsing capability of the official GO browsing tool (AmiGO). DynGO allows users to conduct batch retrieval of GO annotations for a list of genes and gene products, and semantic retrieval of genes and gene products sharing similar GO annotations. The results are shown in an association tree organized according to GO hierarchies and supported with many dynamic display options such as sorting tree nodes or changing orientation of the tree. For GO curators and frequent GO users, DynGO provides fast and convenient access to GO annotation data. DynGO is generally applicable to any data set where the records are annotated with GO terms, as illustrated by two examples. Conclusion We have presented a standalone package DynGO that provides functionalities to search and browse GO and its association databases as well as several additional functions such as batch retrieval and semantic retrieval.
The complete documentation and software are freely available for download from the website. PMID:16091147
A web-based, relational database for studying glaciers in the Italian Alps
NASA Astrophysics Data System (ADS)
Nigrelli, G.; Chiarle, M.; Nuzzi, A.; Perotti, L.; Torta, G.; Giardino, M.
2013-02-01
Glaciers are among the best terrestrial indicators of climate change and thus glacier inventories have attracted a growing, worldwide interest in recent years. In Italy, the first official glacier inventory was completed in 1925 and 774 glacial bodies were identified. As the amount of data continues to increase, and new techniques become available, there is a growing demand for computer tools that can efficiently manage the collected data. The Research Institute for Geo-hydrological Protection of the National Research Council, in cooperation with the Departments of Computer Science and Earth Sciences of the University of Turin, created a database that provides a modern tool for storing, processing and sharing glaciological data. The database was developed according to the need to store heterogeneous information, which can be retrieved through a set of web search queries. The database's architecture is server-side and was designed using open-source software. The website interface, simple and intuitive, was intended to meet the needs of a distributed public: through this interface, any type of glaciological data can be managed, specific queries can be performed, and the results can be exported in a standard format. The use of a relational database to store and organize a large variety of information about Italian glaciers collected over the last hundred years constitutes a significant step forward in ensuring the safety and accessibility of such data. Moreover, the same benefits also apply to the enhanced operability for handling information in the future, including new and emerging types of data formats, such as geographic and multimedia files. Future developments include the integration of cartographic data, such as base maps, satellite images and vector data.
The relational database described in this paper will be the heart of a new geographic system that will merge data, data attributes and maps, leading to a complete description of Italian glacial environments.
Development and Mining of a Volatile Organic Compound Database
Abdullah, Azian Azamimi; Ono, Naoaki; Sugiura, Tadao; Morita, Aki Hirai; Katsuragi, Tetsuo; Muto, Ai; Nishioka, Takaaki; Kanaya, Shigehiko
2015-01-01
Volatile organic compounds (VOCs) are small molecules that exhibit high vapor pressure under ambient conditions and have low boiling points. Although VOCs contribute only a small proportion of the total metabolites produced by living organisms, they play an important role in chemical ecology, specifically in the biological interactions between organisms and ecosystems. VOCs are also important in the health care field as they are presently used as biomarkers to detect various human diseases. Information on VOCs has been scattered across the literature, and until now there has been no available database describing VOCs and their biological activities. To address this, we have developed the KNApSAcK Metabolite Ecology Database, which contains information on the relationships between VOCs and their emitting organisms. The KNApSAcK Metabolite Ecology Database is also linked with the KNApSAcK Core and KNApSAcK Metabolite Activity Databases to provide further information on the metabolites and their biological activities. The VOC database can be accessed online. PMID:26495281
Securely and Flexibly Sharing a Biomedical Data Management System
Wang, Fusheng; Hussels, Phillip; Liu, Peiya
2011-01-01
Biomedical database systems need not only to address the issues of managing complex data, but also to provide data security and access control. These include not only system-level security, but also instance-level access control, such as access to documents, schemas, or aggregations of information. The latter is becoming more important as multiple users can share a single scientific data management system to conduct their research, while data have to be protected before they are published or IP-protected. This problem is challenging because users' needs for data security vary dramatically from one application to another, in terms of whom to share with, what resources are to be shared, and at what access level. We developed a comprehensive data access framework for the biomedical data management system SciPort. SciPort provides fine-grained, multi-level, space-based access control of resources not only at the object level (documents and schemas), but also at the space level (sets of resources aggregated in a hierarchy). Furthermore, to simplify the management of users and privileges, a customizable role-based user model was developed. The access control is implemented efficiently by integrating access privileges into the backend XML database, so efficient queries are supported. The secure access approach we take makes it possible for multiple users to share the same biomedical data management system with flexible access management and high data security. PMID:21625285
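Space-level access resolution of the kind described above might look like the following sketch: a grant made on a space applies to everything nested beneath it, so a request is resolved by walking from the object up through its ancestor spaces. The path names, grant table, and walk-up rule are assumptions for illustration, not SciPort's implementation.

```python
# Illustrative sketch (assumed model, not SciPort's code): hierarchical,
# space-based access control where a grant on a space covers all nested
# spaces and documents.
parent = {                      # child -> parent space; None marks the root
    "/lab": None,
    "/lab/projectA": "/lab",
    "/lab/projectA/doc1": "/lab/projectA",
}
grants = {                      # (user, space_or_object) -> access level
    ("bob", "/lab/projectA"): "read",
    ("carol", "/lab"): "write",
}

def access_level(user, obj):
    # Walk from the object toward the root, returning the first grant found.
    node = obj
    while node is not None:
        if (user, node) in grants:
            return grants[(user, node)]
        node = parent.get(node)
    return None                 # no grant anywhere on the path: denied

print(access_level("bob", "/lab/projectA/doc1"))    # 'read'
print(access_level("carol", "/lab/projectA/doc1"))  # 'write'
print(access_level("dave", "/lab/projectA/doc1"))   # None
```

A role layer, as the abstract describes, would simply map users to roles and attach the grants to roles instead of individual users.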
ERIC Educational Resources Information Center
Cawthorne, Jon E.
2010-01-01
Shared leadership theory recognizes leader influence throughout the organization, not just from the top down. This study explores how middle managers from 22 academic libraries in the Pacific West perceive their own agreement, participation and recognition of shared leadership. This survey and framework is the first to examine the extent shared…
ERIC Educational Resources Information Center
Kanters, Michael A.; Bocarro, Jason N.; Filardo, Mary; Edwards, Michael B.; McKenzie, Thomas L.; Floyd, Myron F.
2014-01-01
Background: Partnerships between school districts and community-based organizations to share school facilities during afterschool hours can be an effective strategy for increasing physical activity. However, the perceived cost of shared use has been noted as an important reason for restricting community access to schools. This study examined…
25 Point Implementation Plan to Reform Federal information Technology Management
2010-12-09
6. Develop a strategy for shared services ... or shared services exist. Government officials have been trying to adopt best practices for years – from the Raines Rules of the 1990s through the ... Additionally, leveraging shared services of "commodity" applications such as e-mail across functional organizations allows organizations to redirect
C-ME: A 3D Community-Based, Real-Time Collaboration Tool for Scientific Research and Training
Kolatkar, Anand; Kennedy, Kevin; Halabuk, Dan; Kunken, Josh; Marrinucci, Dena; Bethel, Kelly; Guzman, Rodney; Huckaby, Tim; Kuhn, Peter
2008-01-01
The need for effective collaboration tools is growing as multidisciplinary proteome-wide projects and distributed research teams become more common. The resulting data is often quite disparate, stored in separate locations, and not contextually related. Collaborative Molecular Modeling Environment (C-ME) is an interactive community-based collaboration system that allows researchers to organize information, visualize data on a two-dimensional (2-D) or three-dimensional (3-D) basis, and share and manage that information with collaborators in real time. C-ME stores the information in industry-standard databases that are immediately accessible by appropriate permission within the computer network directory service or anonymously across the internet through the C-ME application or through a web browser. The system addresses two important aspects of collaboration: context and information management. C-ME allows a researcher to use a 3-D atomic structure model or a 2-D image as a contextual basis on which to attach and share annotations to specific atoms or molecules or to specific regions of a 2-D image. These annotations provide additional information about the atomic structure or image data that can then be evaluated, amended or added to by other project members. PMID:18286178
Electronic Medical Records in Greece and Oman: A Professional's Evaluation of Structure and Value.
Koutzampasopoulou Xanthidou, Ourania; Shuib, Liyana; Xanthidis, Dimitrios; Nicholas, David
2018-06-01
An Electronic Medical Record (EMR) is a patient's database record that can be transmitted securely. There is a diversity of EMR systems for different medical units to choose from. The structure and value of these systems are the focus of this qualitative study, from a medical professional's standpoint, as well as their economic value and whether they should be shared between health organizations. The study took place in the natural setting of the medical units' environments. A purposive sample of 40 professionals in Greece and Oman was interviewed. The study suggests that: (1) the demographics of the EMR should be divided into categories, not all of them accessible and/or visible by all; (2) the EMR system should follow an open architecture so that more categories and subcategories can be added as needed, following a possible business plan (an ERD is suggested); (3) the EMR should be implemented gradually, bearing in mind both medical and financial concerns; (4) sharing should be the patient's decision, as the owner of the record. Now that implementation and utilization have reached a certain level of maturity, it is useful to seek professionals' assessment of the structure and value of such a system.
Smith, Christopher Irwin; Tank, Shantel; Godsoe, William; Levenick, Jim; Strand, Eva; Esque, Todd; Pellmyr, Olle
2011-01-01
Comparative phylogeographic studies have had mixed success in identifying common phylogeographic patterns among co-distributed organisms. Whereas some have found broadly similar patterns across a diverse array of taxa, others have found that the histories of different species are more idiosyncratic than congruent. The variation in the results of comparative phylogeographic studies could indicate that the extent to which sympatrically-distributed organisms share common biogeographic histories varies depending on the strength and specificity of ecological interactions between them. To test this hypothesis, we examined demographic and phylogeographic patterns in a highly specialized, coevolved community – Joshua trees (Yucca brevifolia) and their associated yucca moths. This tightly-integrated, mutually interdependent community is known to have experienced significant range changes at the end of the last glacial period, so there is a strong a priori expectation that these organisms will show common signatures of demographic and distributional changes over time. Using a database of >5000 GPS records for Joshua trees, and multi-locus DNA sequence data from the Joshua tree and four species of yucca moth, we combined palaeodistribution modeling with coalescent-based analyses of demographic and phylogeographic history. We extensively evaluated the power of our methods to infer past population size and distributional changes by evaluating the effect of different inference procedures on our results, comparing our palaeodistribution models to Pleistocene-aged packrat midden records, and simulating DNA sequence data under a variety of alternative demographic histories. Together the results indicate that these organisms have shared a common history of population expansion, and that these expansions were broadly coincident in time.
However, contrary to our expectations, none of our analyses indicated significant range or population size reductions at the end of the last glacial period, and the inferred demographic changes substantially predate Holocene climate changes. PMID:22028785
PHYTOTOX: DATABASE DEALING WITH THE EFFECT OF ORGANIC CHEMICALS ON TERRESTRIAL VASCULAR PLANTS
A new database, PHYTOTOX, dealing with the direct effects of exogenously supplied organic chemicals on terrestrial vascular plants is described. The database consists of two files, a Reference File and Effects File. The Reference File is a bibliographic file of published research...
Yang, Tsun-Po; Beazley, Claude; Montgomery, Stephen B; Dimas, Antigone S; Gutierrez-Arcelus, Maria; Stranger, Barbara E; Deloukas, Panos; Dermitzakis, Emmanouil T
2010-10-01
Genevar (GENe Expression VARiation) is a database and Java tool designed to integrate multiple datasets and to provide analysis and visualization of associations between sequence variation and gene expression. Genevar allows researchers to investigate expression quantitative trait loci (eQTL) associations within a gene locus of interest in real time. The database and application can be installed on a standard computer in database mode and, in addition, on a server to share discoveries among affiliations or the broader community over the Internet via web services protocols. http://www.sanger.ac.uk/resources/software/genevar.
FunGene: the functional gene pipeline and repository.
Fish, Jordan A; Chai, Benli; Wang, Qiong; Sun, Yanni; Brown, C Titus; Tiedje, James M; Cole, James R
2013-01-01
Ribosomal RNA genes have become the standard molecular markers for microbial community analysis for good reasons, including universal occurrence in cellular organisms, availability of large databases, and ease of rRNA gene region amplification and analysis. As markers, however, rRNA genes have some significant limitations. The rRNA genes are often present in multiple copies, unlike most protein-coding genes. The slow rate of change in rRNA genes means that multiple species sometimes share identical 16S rRNA gene sequences, while many more species share identical sequences in the short 16S rRNA regions commonly analyzed. In addition, the genes involved in many important processes are not distributed in a phylogenetically coherent manner, potentially due to gene loss or horizontal gene transfer. While rRNA genes remain the most commonly used markers, key genes in ecologically important pathways, e.g., those involved in carbon and nitrogen cycling, can provide important insights into community composition and function not obtainable through rRNA analysis. However, working with ecofunctional gene data requires some tools beyond those required for rRNA analysis. To address this, our Functional Gene Pipeline and Repository (FunGene; http://fungene.cme.msu.edu/) offers databases of many common ecofunctional genes and proteins, as well as integrated tools that allow researchers to browse these collections and choose subsets for further analysis, build phylogenetic trees, test primers and probes for coverage, and download aligned sequences. Additional FunGene tools are specialized to process coding gene amplicon data. For example, FrameBot produces frameshift-corrected protein and DNA sequences from raw reads while finding the most closely related protein reference sequence. These tools can help provide better insight into microbial communities by directly studying key genes involved in important ecological processes.
Pisani, Elizabeth; Botchway, Stella
2017-01-01
Background: Increasingly, biomedical researchers are encouraged or required by research funders and journals to share their data, but there's very little guidance on how to do that equitably and usefully, especially in resource-constrained settings. We performed an in-depth case study of one data sharing pioneer: the WorldWide Antimalarial Resistance Network (WWARN). Methods: The case study included a records review, a quantitative analysis of WWARN-related publications, in-depth interviews with 47 people familiar with WWARN, and a witness seminar involving a sub-set of 11 interviewees. Results: WWARN originally aimed to collate clinical, in vitro, pharmacological and molecular data into linked, open-access databases intended to serve as a public resource to guide antimalarial drug treatment policies. Our study describes how WWARN navigated challenging institutional and academic incentive structures, alongside funders' reluctance to invest in capacity building in malaria-endemic countries, which impeded data sharing. The network increased data contributions by focusing on providing free, online tools to improve the quality and efficiency of data collection, and by inviting collaborative authorship on papers addressing policy-relevant questions that could only be answered through pooled analyses. By July 1, 2016, the database included standardised data from 103 molecular studies and 186 clinical trials, representing 135,000 individual patients. Developing the database took longer and cost more than anticipated, and efforts to increase equity for data contributors are on-going. However, analyses of the pooled data have generated new methods and influenced malaria treatment recommendations globally. Despite not achieving the initial goal of real-time surveillance, WWARN has developed strong data governance and curation tools, which are now being adapted relatively quickly for other diseases.
Conclusions: To be useful, data sharing requires investment in long-term infrastructure. To be feasible, it requires new incentive structures that favour the generation of reusable knowledge. PMID:29018840
NASA Astrophysics Data System (ADS)
Qun, Zeng; Xiaocheng, Zhong
Knowledge sharing means that individuals, teams, and organizations share knowledge with other members of the organization in the course of their activities through various means. This paper analyzes the factors that hinder knowledge sharing from a technical point of view, and chooses Blog technology to build a platform for improving knowledge sharing between individuals. The construction of the platform is an important foundation for information literacy education, and it can also be used to deliver online information literacy education. Finally, the paper gives a detailed analysis of the platform's functions, advantages, and disadvantages.
Harnessing Nutrigenomics: Development of web-based communication, databases, resources, and tools.
Kaput, Jim; Astley, Siân; Renkema, Marten; Ordovas, Jose; van Ommen, Ben
2006-03-01
Nutrient-gene interactions are responsible for maintaining health and preventing or delaying disease. Unbalanced diets for a given genotype lead to chronic diseases such as obesity, diabetes, and cardiovascular disease, and are likely to contribute to increased severity and/or early onset of many age-related diseases. Many nutrition and genetic studies still fail to properly include both variables in the design, execution, and analyses of human, laboratory animal, or cell culture experiments. The complexity of nutrient-gene interactions has led to the realization that strategic international alliances are needed to improve the completeness of nutrigenomic studies - a task beyond the capabilities of a single laboratory team. Eighty-eight researchers from 22 countries recently outlined the issues and challenges for harnessing nutritional genomics for public and personal health. The next step in the process of forming productive international alliances is the development of a virtual center for organizing collaborations and communications that fosters resource sharing, best-practice improvements, and the creation of databases. We describe here plans and initial efforts in creating the Nutrigenomics Information Portal, a web-based resource for the international nutrigenomics society. This portal aims at becoming the prime source of information and interaction for nutrigenomics scientists through a collaborative effort.
Zhu, Chengsheng; Miller, Maximilian
2018-01-01
Microbial functional diversification is driven by environmental factors, i.e. microorganisms inhabiting the same environmental niche tend to be more functionally similar than those from different environments. In some cases, even closely phylogenetically related microbes differ more across environments than across taxa. While microbial similarities are often reported in terms of taxonomic relationships, no existing databases directly link microbial functions to the environment. We previously developed a method for comparing microbial functional similarities on the basis of proteins translated from their sequenced genomes. Here, we describe fusionDB, a novel database that uses our functional data to represent 1374 taxonomically distinct bacteria annotated with available metadata: habitat/niche, preferred temperature, and oxygen use. Each microbe is encoded as a set of functions represented by its proteome and individual microbes are connected via common functions. Users can search fusionDB via combinations of organism names and metadata. Moreover, the web interface allows mapping new microbial genomes to the functional spectrum of reference bacteria, rendering interactive similarity networks that highlight shared functionality. fusionDB provides a fast means of comparing microbes, identifying potential horizontal gene transfer events, and highlighting key environment-specific functionality. PMID:29112720
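The function-set representation described above can be illustrated with a small sketch. The microbe names, function labels, and Jaccard threshold below are invented for illustration; fusionDB's actual similarity measure may differ.

```python
# Hedged sketch of the idea behind a functional similarity network: each
# microbe is a set of functions derived from its proteome, and two microbes
# are linked when they share enough functions (here, Jaccard similarity).
microbes = {
    "E_coli":     {"glycolysis", "flagellum", "nitrate_reduction"},
    "B_subtilis": {"glycolysis", "flagellum", "sporulation"},
    "T_maritima": {"glycolysis", "hydrogen_production"},
}

def jaccard(a, b):
    # |intersection| / |union| of the two function sets
    return len(a & b) / len(a | b)

def similarity_network(profiles, threshold):
    # Return edges (microbe, microbe, similarity) above the threshold.
    names = sorted(profiles)
    return [(x, y, round(jaccard(profiles[x], profiles[y]), 2))
            for i, x in enumerate(names) for y in names[i + 1:]
            if jaccard(profiles[x], profiles[y]) >= threshold]

print(similarity_network(microbes, threshold=0.4))
# [('B_subtilis', 'E_coli', 0.5)]
```

Shared functions between otherwise distant taxa in such a network are exactly the kind of signal the abstract mentions for flagging potential horizontal gene transfer.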
From Chaos to Content: An Integrated Approach to Government Web Sites
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demuth, Nora H.; Knudson, Christa K.
2005-01-03
The web development team of the Environmental Technology Directorate (ETD) at the U.S. Department of Energy’s Pacific Northwest National Laboratory (PNNL) redesigned the ETD website as a database-driven system, powered by the newly designed ETD Common Information System (ETD-CIS). The ETD website was redesigned in response to an analysis that showed the previous ETD websites were inefficient, costly, and lacking in a consistent focus. Redesigned and newly created websites based on a new ETD template provide a consistent image, meet or exceed accessibility standards, and are linked through a common database. The protocols used in developing the ETD website support integration of further organizational sites and facilitate internal use by staff and training on ETD website development and maintenance. Other PNNL organizations have approached the ETD web development team with an interest in applying the methods established by the ETD system. The ETD system protocol could potentially be used by other DOE laboratories to improve their website efficiency and content focus. “The tools by which we share science information must be as extraordinary as the information itself.” – DOE Science Director Raymond Orbach
Saporito, Salvatore; Van Riper, David; Wakchaure, Ashwini
2017-01-01
The School Attendance Boundary Information System is a social science data infrastructure project that assembles, processes, and distributes spatial data delineating K through 12th grade school attendance boundaries for thousands of school districts in the U.S. Although geography is a fundamental organizing feature of K to 12 education, until now school attendance boundary data have not been made readily available on a massive basis and in an easy-to-use format. The School Attendance Boundary Information System removes these barriers by linking spatial data delineating school attendance boundaries with tabular data describing the demographic characteristics of populations living within those boundaries. This paper explains why a comprehensive GIS database of K through 12 school attendance boundaries is valuable, how original spatial information delineating school attendance boundaries is collected from local agencies, and techniques for modeling and storing the data so they provide maximum flexibility to the user community. An important goal of this paper is to share the techniques used to assemble the SABINS database so that local and state agencies can apply a standard set of procedures and models as they gather data for their regions. PMID:29151773
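The linkage the paper describes, attendance-boundary polygons joined to tabular demographic records, can be illustrated with a minimal point-in-polygon aggregation. The boundaries, household records, and counts below are invented for the sketch and do not come from SABINS:

```python
# Minimal sketch of a spatial join between attendance boundaries and
# tabular demographics (all geometries and attributes are hypothetical).

def point_in_polygon(x, y, poly):
    """Ray-casting test: is (x, y) inside the polygon (list of vertices)?"""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside

# Two hypothetical school attendance boundaries (unit squares).
boundaries = {
    "school_A": [(0, 0), (1, 0), (1, 1), (0, 1)],
    "school_B": [(1, 0), (2, 0), (2, 1), (1, 1)],
}

# Tabular records: household location plus a demographic attribute.
households = [
    {"x": 0.5, "y": 0.5, "children": 2},
    {"x": 1.5, "y": 0.2, "children": 1},
    {"x": 0.9, "y": 0.9, "children": 3},
]

# Spatial join: aggregate children living within each attendance boundary.
children_per_school = {name: 0 for name in boundaries}
for h in households:
    for name, poly in boundaries.items():
        if point_in_polygon(h["x"], h["y"], poly):
            children_per_school[name] += h["children"]
            break

print(children_per_school)  # {'school_A': 5, 'school_B': 1}
```

Real GIS workflows would use polygon layers and census tabulations in a spatial database, but the join logic is the same.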
ERIC Educational Resources Information Center
Kalleberg, Arne L.; Knoke, David; Marsden, Peter V.; Spaeth, Joe L.
In 1991 the National Organizations Study (NOS) surveyed a number of U.S. businesses about their structure, context, and personnel practices to produce a database for answering questions about social behavior in work organizations. This book presents the results of that survey. The study aimed to create a national database on organizations--based…
The INGV Real Time Strong Motion Database
NASA Astrophysics Data System (ADS)
Massa, Marco; D'Alema, Ezio; Mascandola, Claudia; Lovati, Sara; Scafidi, Davide; Gomez, Antonio; Carannante, Simona; Franceschina, Gianlorenzo; Mirenna, Santi; Augliera, Paolo
2017-04-01
The INGV real time strong motion data sharing is assured by the INGV Strong Motion Database. ISMD (http://ismd.mi.ingv.it) was designed in the last months of 2011 in cooperation among different INGV departments, with the aim of organizing the distribution of the INGV strong-motion data using standard procedures for data acquisition and processing. The first version of the web portal was published soon after the occurrence of the 2012 Emilia (Northern Italy), Mw 6.1, seismic sequence. At that time ISMD was the first European real time web portal devoted to the engineering seismology community. After four years of successful operation, the thousands of accelerometric waveforms collected in the archive called for a technological upgrade of the system, in order to better organize the archiving of new data and to answer user requests more efficiently. ISMD 2.0 was based on PostgreSQL (www.postgresql.org), an open source object-relational database. The main purpose of the web portal is to distribute, within a few minutes of the origin time, the accelerometric waveforms and related metadata of Italian earthquakes with ML≥3.0. Data are provided both in raw SAC (counts) and automatically corrected ASCII (gal) formats. The web portal also provides, for each event, a detailed description of the ground motion parameters (i.e. Peak Ground Acceleration, Velocity and Displacement, Arias and Housner Intensities), data converted to velocity and displacement, response spectra up to 10.0 s, and general maps concerning the recent and historical seismicity of the area, together with information about its seismic hazard. The focal parameters of the events are provided by the INGV National Earthquake Center (CNT, http://cnt.rm.ingv.it). Moreover, the database provides a detailed site characterization section for each strong motion station, based on geological, geomorphological and geophysical information. At present (January 2017), ISMD includes 987 Italian earthquakes (121,185 waveforms) with ML≥3.0, recorded since 1 January 2012 by 204 accelerometric stations belonging to the INGV strong motion network and regional partners.
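The kind of ground-motion parameters ISMD reports can be derived directly from an acceleration trace; a minimal sketch for PGA and Arias intensity follows, with a short synthetic record standing in for a real accelerometric waveform (the numbers are invented for illustration):

```python
# Illustrative computation of two ground-motion parameters from a raw
# acceleration time series: Peak Ground Acceleration and Arias intensity.
import math

g = 9.81   # gravitational acceleration, m/s^2
dt = 0.01  # sample interval, s

# Synthetic acceleration samples in m/s^2 (a real record would come from
# the corrected waveforms a strong-motion portal distributes).
acc = [0.0, 0.3, -0.8, 1.2, -0.5, 0.1, 0.0]

# Peak Ground Acceleration: largest absolute acceleration in the record.
pga = max(abs(a) for a in acc)

# Arias intensity: Ia = (pi / (2 g)) * integral of a(t)^2 dt,
# here integrated with the trapezoidal rule over the sampled trace.
integral = sum((acc[i] ** 2 + acc[i + 1] ** 2) / 2 * dt
               for i in range(len(acc) - 1))
arias = math.pi / (2 * g) * integral

print(f"PGA = {pga:.2f} m/s^2, Arias intensity = {arias:.5f} m/s")
```

Peak velocity and displacement would follow the same pattern after numerically integrating the trace once and twice.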
Embedding learning from adverse incidents: a UK case study.
Eshareturi, Cyril; Serrant, Laura
2017-04-18
Purpose This paper reports on a regionally based UK study uncovering what has worked well in learning from adverse incidents in hospitals. The purpose of this paper is to review the incident investigation methodology used in identifying strengths or weaknesses and explore the use of a database as a tool to embed learning. Design/methodology/approach Documentary examination was conducted of all adverse incidents reported between 1 June 2011 and 30 June 2012 by three UK National Health Service hospitals. One root cause analysis report per adverse incident for each individual hospital was sent to an advisory group for a review. Using terms of reference supplied, the advisory group feedback was analysed using an inductive thematic approach. The emergent themes led to the generation of questions which informed seven in-depth semi-structured interviews. Findings "Time" and "work pressures" were identified as barriers to using adverse incident investigations as tools for quality enhancement. Methodologically, a weakness in approach was that no criteria influenced the techniques which were used in investigating adverse incidents. Regarding the sharing of learning, the use of a database as a tool to embed learning across the region was not supported. Practical implications Softer intelligence from adverse incident investigations could be usefully shared between hospitals through a regional forum. Originality/value The use of a database as a tool to facilitate the sharing of learning from adverse incidents across the health economy is not supported.
Operating System Support for Shared Hardware Data Structures
2013-01-31
Carbon [73] uses hardware queues to improve fine-grained multitasking for Recognition, Mining, and Synthesis. Compared to software approaches...web transaction processing, data mining, and multimedia. Early work in database processors [114, 96, 79, 111] reduces the costs of relational database...assignment can be solved statically or dynamically. Static assignment determines offline which data structures are assigned to use HWDS resources and at
A Case Study in Software Adaptation
2002-01-01
A Case Study in Software Adaptation. Giuseppe Valetto, Telecom Italia Lab, Via Reiss Romoli 274, 10148 Turin, Italy...configuration of the service; monitoring of database connectivity from within the service; monitoring of crashes and shutdowns of IM servers; monitoring of...of the IM server all share a relational database and a common runtime state repository, which make up the backend tier, and allow replicas to
2001-01-01
System (GCCS) Track Database Management System (TDBM) (3) GCCS Integrated Imagery and Intelligence (3) Intelligence Shared Data Server (ISDS) General ...The CTH is a powerful model that will allow more than just message systems to exchange information. It could be used for object-oriented databases, as...of the Naval Integrated Tactical Environmental System I (NITES I) is used as a case study to demonstrate the utility of this distributed component
Knowledge sharing within organizations: linking art, theory, scenarios and professional experience
NASA Technical Reports Server (NTRS)
Burton, Y. C.; Bailey, T.
2000-01-01
In this presentation, Burton and Bailey discuss the challenges and opportunities in developing knowledge sharing systems in organizations. Bailey provides a tool using imagery and collage for identifying and utilizing the diverse values and beliefs of individuals and groups. Burton reveals findings from a business research study that examines how social construction influences knowledge sharing among task-oriented groups.
ERIC Educational Resources Information Center
Caruso, Shirley J.
2017-01-01
This paper serves as an exploration into some of the ways in which organizations can promote, capture, share, and manage the valuable knowledge of their employees. The problem is that employees typically do not share valuable information, skills, or expertise with other employees or with the entire organization. The author uses research as well as…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rupcich, Franco; Badal, Andreu; Kyprianou, Iacovos
Purpose: The purpose of this study was to develop a database for estimating organ dose in a voxelized patient model for coronary angiography and brain perfusion CT acquisitions with any spectra and angular tube current modulation setting. The database enables organ dose estimation for existing and novel acquisition techniques without requiring Monte Carlo simulations. Methods: The study simulated transport of monoenergetic photons between 5 and 150 keV for 1000 projections over 360° through anthropomorphic voxelized female chest and head (0° and 30° tilt) phantoms and standard head and body CTDI dosimetry cylinders. The simulations resulted in tables of normalized dose deposition for several radiosensitive organs quantifying the organ dose per emitted photon for each incident photon energy and projection angle for coronary angiography and brain perfusion acquisitions. The values in a table can be multiplied by an incident spectrum and number of photons at each projection angle and then summed across all energies and angles to estimate total organ dose. Scanner-specific organ dose may be approximated by normalizing the database-estimated organ dose by the database-estimated CTDIvol and multiplying by a physical CTDIvol measurement. Two examples are provided demonstrating how to use the tables to estimate relative organ dose. In the first, the change in breast and lung dose during coronary angiography CT scans is calculated for reduced kVp, angular tube current modulation, and partial angle scanning protocols relative to a reference protocol. In the second example, the change in dose to the eye lens is calculated for a brain perfusion CT acquisition in which the gantry is tilted 30° relative to a nontilted scan. Results: Our database provides tables of normalized dose deposition for several radiosensitive organs irradiated during coronary angiography and brain perfusion CT scans.
Validation results indicate total organ doses calculated using our database are within 1% of those calculated using Monte Carlo simulations with the same geometry and scan parameters for all organs except red bone marrow (within 6%), and within 23% of published estimates for different voxelized phantoms. Results from the example of using the database to estimate organ dose for coronary angiography CT acquisitions show 2.1%, 1.1%, and -32% change in breast dose and 2.1%, -0.74%, and 4.7% change in lung dose for reduced kVp, tube current modulated, and partial angle protocols, respectively, relative to the reference protocol. Results show -19.2% difference in dose to eye lens for a tilted scan relative to a nontilted scan. The reported relative changes in organ doses are presented without quantification of image quality and are for the sole purpose of demonstrating the use of the proposed database. Conclusions: The proposed database and calculation method enable the estimation of organ dose for coronary angiography and brain perfusion CT scans utilizing any spectral shape and angular tube current modulation scheme by taking advantage of the precalculated Monte Carlo simulation results. The database can be used in conjunction with image quality studies to develop optimized acquisition techniques and may be particularly beneficial for optimizing dual kVp acquisitions for which numerous kV, mA, and filtration combinations may be investigated.
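The estimation procedure the abstract describes, multiplying precomputed per-photon dose tables by the incident spectrum and the photon count at each projection angle and then summing over energies and angles, reduces to a weighted double sum. The sketch below uses invented per-photon dose values, spectrum weights, and CTDIvol numbers; the real tables span 5-150 keV and 1000 projection angles:

```python
# Sketch of table-based organ dose estimation (all numbers invented).

# dose_table[angle][energy] = organ dose per emitted photon (mGy/photon)
dose_table = {
    0:   {60: 1.0e-12, 80: 2.0e-12},
    180: {60: 0.5e-12, 80: 1.5e-12},
}
# spectrum[energy] = relative number of photons at that energy
spectrum = {60: 0.6, 80: 0.4}
# photons_per_angle[angle] = emitted photons at that projection; angular
# tube current modulation simply varies these counts with angle.
photons_per_angle = {0: 1.0e9, 180: 2.0e9}

def organ_dose(dose_table, spectrum, photons_per_angle):
    """Total organ dose: sum over projection angles and photon energies."""
    return sum(
        photons_per_angle[angle] * spectrum[E] * per_photon
        for angle, row in dose_table.items()
        for E, per_photon in row.items()
    )

dose = organ_dose(dose_table, spectrum, photons_per_angle)

# Scanner-specific estimate: normalize by the database-estimated CTDIvol
# and scale by a physical CTDIvol measurement (hypothetical values).
ctdi_db, ctdi_measured = 10.0, 12.0
scanner_dose = dose / ctdi_db * ctdi_measured
print(dose, scanner_dose)
```

Comparing two protocols then amounts to evaluating this sum twice with different spectra or modulation schemes and taking the ratio.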
Haemophilia care in Europe - A survey of 37 countries.
Mahony, B O; Savini, L; Hara, J O; Bok, A
2017-07-01
The European Haemophilia Consortium (EHC) is an international non-profit organization representing 45 national patients' organizations in Europe. Every 3 years, the EHC circulates a survey to its national member organizations to assess the state of haemophilia care. The purpose of this exercise is to ascertain information about the organization of haemophilia care and treatment availability at national levels. Furthermore, the survey provides a basis from which the EHC is able to monitor the unmet need and the stability of care and treatment access in the individual member countries. Surveys are distributed to EHC member organizations in English and Russian. Patient organizations are encouraged to share the survey with local clinicians to ensure accuracy of responses. Part of the data collected is kept consistent across surveys to provide a longitudinal overview of treatment access, but topical items such as ageing are also included. Subsequently, completed surveys are transposed into a database for analysis and reporting. Thirty-seven responses were received from the 45 countries approached, representing an 82% response rate from members. Findings suggest increased access to treatment and some improvement in certain areas of care. However, access to treatment has declined or remained largely unchanged in some countries. The survey has been a successful exercise in enabling a greater understanding of the current haemophilia care landscape across Europe. However, there remain unmet needs in various aspects of patient care; specific examples include psychosocial care and general preparedness for an ageing haemophilia population. © 2017 John Wiley & Sons Ltd.
Towards building a team of intelligent robots
NASA Technical Reports Server (NTRS)
Varanasi, Murali R.; Mehrotra, R.
1987-01-01
Topics addressed include: collision-free motion planning of multiple robot arms; two-dimensional object recognition; and pictorial databases (storage and sharing of the representations of three-dimensional objects).
High-Performance Secure Database Access Technologies for HEP Grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matthew Vranicar; John Weicher
2006-04-17
The Large Hadron Collider (LHC) at the CERN Laboratory will become the largest scientific instrument in the world when it starts operations in 2007. Large Scale Analysis Computer Systems (computational grids) are required to extract rare signals of new physics from petabytes of LHC detector data. In addition to file-based event data, LHC data processing applications require access to large amounts of data in relational databases: detector conditions, calibrations, etc. U.S. high energy physicists demand efficient performance of grid computing applications in LHC physics research, where world-wide remote participation is vital to their success. To empower physicists with data-intensive analysis capabilities, a whole hyperinfrastructure of distributed databases cross-cuts a multi-tier hierarchy of computational grids. The crosscutting allows separation of concerns across both the global environment of a federation of computational grids and the local environment of a physicist’s computer used for analysis. Very few efforts are ongoing in the area of database and grid integration research. Most of these are outside of the U.S. and rely on traditional approaches to secure database access via an extraneous security layer separate from the database system core, preventing efficient data transfers. Our findings are shared by the Database Access and Integration Services Working Group of the Global Grid Forum, which states that “Research and development activities relating to the Grid have generally focused on applications where data is stored in files. However, in many scientific and commercial domains, database management systems have a central role in data storage, access, organization, authorization, etc, for numerous applications.” There is a clear opportunity for a technological breakthrough, requiring innovative steps to provide high-performance secure database access technologies for grid computing.
We believe that an innovative database architecture in which secure authorization is pushed into the database engine will eliminate inefficient data transfer bottlenecks. Furthermore, traditionally separated database and security layers provide an extra vulnerability, leaving a weak clear-text password authorization as the only protection on the database core systems. Due to the legacy limitations of the systems’ security models, the allowed passwords often cannot even comply with the DOE password guideline requirements. We see an opportunity for the tight integration of the secure authorization layer with the database server engine, resulting in both improved performance and improved security. Phase I has focused on the development of a proof-of-concept prototype using Argonne National Laboratory’s (ANL) Argonne Tandem-Linac Accelerator System (ATLAS) project as a test scenario. By developing a grid-security enabled version of the ATLAS project’s current relational database solution, MySQL, PIOCON Technologies aims to offer a more efficient solution to secure database access.
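The performance argument here, that in-engine authorization avoids shipping rows an external security layer would later discard, can be caricatured in a few lines. This is a conceptual sketch of the claim only, not the PIOCON/MySQL design; the rows and the owner-based policy are invented:

```python
# Toy "engine" contrasting where the authorization predicate runs.
# Rows, users, and the owner-based policy are hypothetical.

rows = [
    {"run": 1, "owner": "alice", "calib": 0.98},
    {"run": 2, "owner": "bob",   "calib": 1.02},
    {"run": 3, "owner": "alice", "calib": 0.97},
]

def scan_external_layer(rows, user):
    """Traditional layering: the engine ships every row across its
    boundary, and a separate security layer filters afterwards."""
    transferred = list(rows)  # full transfer happens first
    visible = [r for r in transferred if r["owner"] == user]
    return visible, len(transferred)  # rows that crossed the boundary

def scan_in_engine(rows, user):
    """Authorization pushed into the engine: the predicate is evaluated
    during the scan, so only authorized rows ever leave the engine."""
    visible = [r for r in rows if r["owner"] == user]
    return visible, len(visible)

ext_rows, ext_transferred = scan_external_layer(rows, "alice")
eng_rows, eng_transferred = scan_in_engine(rows, "alice")
assert ext_rows == eng_rows                  # same answer for the user...
print(ext_transferred, eng_transferred)      # ...but less data moved: 3 2
```

The same separation-of-layers point also explains the security claim: a filter applied outside the engine leaves the full dataset exposed to whatever sits between the two layers.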
Fei, Lin; Zhao, Jing; Leng, Jiahao; Zhang, Shujian
2017-10-12
The ALIPORC full-text database is a specialized full-text database of acupuncture literature from the Republic of China period. Construction began in 2015 and is still under way, focusing on acupuncture-related books, articles, and advertising documents written or published during the Republic of China period. The database aims to enable the sharing of acupuncture medical literature from this period through diverse retrieval approaches and accurate content presentation, to facilitate scholarly exchange, to reduce the paper damage caused by page turning, and to simplify the retrieval of rare literature. The authors explain the database in terms of its sources, characteristics, and current state of construction, and discuss improving the efficiency and integrity of the database and deepening the study of acupuncture literature from the Republic of China period.
Knowledge sharing within organizations: linking art, theory, scenarios and professional experience
NASA Technical Reports Server (NTRS)
Bailey, T.; Burton, Y. C.
2000-01-01
In this discussion, T. Bailey will be addressing the multiple paradigms within organizations using imagery. Dr. Burton will discuss the relationship between these paradigms and social exchanges that lead to knowledge sharing.
Vaccarino, Anthony L; Dharsee, Moyez; Strother, Stephen; Aldridge, Don; Arnott, Stephen R; Behan, Brendan; Dafnas, Costas; Dong, Fan; Edgecombe, Kenneth; El-Badrawi, Rachad; El-Emam, Khaled; Gee, Tom; Evans, Susan G; Javadi, Mojib; Jeanson, Francis; Lefaivre, Shannon; Lutz, Kristen; MacPhee, F Chris; Mikkelsen, Jordan; Mikkelsen, Tom; Mirotchnick, Nicholas; Schmah, Tanya; Studzinski, Christa M; Stuss, Donald T; Theriault, Elizabeth; Evans, Kenneth R
2018-01-01
Historically, research databases have existed in isolation with no practical avenue for sharing or pooling medical data into high dimensional datasets that can be efficiently compared across databases. To address this challenge, the Ontario Brain Institute's "Brain-CODE" is a large-scale neuroinformatics platform designed to support the collection, storage, federation, sharing and analysis of different data types across several brain disorders, as a means to understand common underlying causes of brain dysfunction and develop novel approaches to treatment. By providing researchers access to aggregated datasets that they otherwise could not obtain independently, Brain-CODE incentivizes data sharing and collaboration and facilitates analyses both within and across disorders and across a wide array of data types, including clinical, neuroimaging and molecular. The Brain-CODE system architecture provides the technical capabilities to support (1) consolidated data management to securely capture, monitor and curate data, (2) privacy and security best-practices, and (3) interoperable and extensible systems that support harmonization, integration, and query across diverse data modalities and linkages to external data sources. Brain-CODE currently supports collaborative research networks focused on various brain conditions, including neurodevelopmental disorders, cerebral palsy, neurodegenerative diseases, epilepsy and mood disorders. These programs are generating large volumes of data that are integrated within Brain-CODE to support scientific inquiry and analytics across multiple brain disorders and modalities. By providing access to very large datasets on patients with different brain disorders and enabling linkages to provincial, national and international databases, Brain-CODE will help to generate new hypotheses about the biological bases of brain disorders, and ultimately promote new discoveries to improve patient care.
Land, Victoria; Parry, Ruth; Seymour, Jane
2017-12-01
Shared decision making (SDM) is generally treated as good practice in health-care interactions. Conversation analytic research has yielded detailed findings about decision making in health-care encounters. To map decision making communication practices relevant to health-care outcomes in face-to-face interactions yielded by prior conversation analyses, and to examine their function in relation to SDM. We searched nine electronic databases (last search November 2016) and our own and other academics' collections. Published conversation analyses (no restriction on publication dates) using recordings of health-care encounters in English where the patient (and/or companion) was present and where the data and analysis focused on health/illness-related decision making. We extracted study characteristics, aims, findings relating to communication practices, how these functioned in relation to SDM, and internal/external validity issues. We synthesised findings aggregatively. Twenty-eight publications met the inclusion criteria. We sorted findings into 13 types of communication practices and organized these in relation to four elements of decision-making sequences: (i) broaching decision making; (ii) putting forward a course of action; (iii) committing or not (to the action put forward); and (iv) health-care practitioners' (HCPs') responses to patients' resistance or withholding of commitment. Patients have limited opportunities to influence decision making. HCPs' practices may constrain or encourage this participation. Patients, companions and HCPs together treat and undertake decision making as shared, though to varying degrees. Even for non-negotiable treatment trajectories, the spirit of SDM can be invoked through practices that encourage participation (eg by bringing the patient towards shared understanding of the decision's rationale). © 2017 The Authors Health Expectations Published by John Wiley & Sons Ltd.
12 CFR 563b.565 - What must the charitable organization include in its organizational documents?
Code of Federal Regulations, 2010 CFR
2010-01-01
... in its organizational documents? 563b.565 Section 563b.565 Banks and Banking OFFICE OF THRIFT... organizational documents? The charitable organization's charter (or trust agreement) and gift instrument must... community; (b) As long as the charitable organization controls shares, it must vote those shares in the same...
Kashefpur, Masoud; Kafieh, Rahele; Jorjandi, Sahar; Golmohammadi, Hadis; Khodabande, Zahra; Abbasi, Mohammadreza; Teifuri, Nilufar; Fakharzadeh, Ali Akbar; Kashefpoor, Maryam; Rabbani, Hossein
2017-01-01
An online depository was introduced to share clinical ground truth with the public and provide open access for researchers to evaluate their computer-aided algorithms. PHP was used for web programming and MySQL for database managing. The website was entitled “biosigdata.com.” It was a fast, secure, and easy-to-use online database for medical signals and images. Freely registered users could download the datasets and could also share their own supplementary materials while maintaining their privacies (citation and fee). Commenting was also available for all datasets, and automatic sitemap and semi-automatic SEO indexing have been set for the site. A comprehensive list of available websites for medical datasets is also presented as a Supplementary (http://journalonweb.com/tempaccess/4800.584.JMSS_55_16I3253.pdf). PMID:28487832
Developing a guideline to standardize the citation of bioresources in journal articles (CoBRA).
Bravo, Elena; Calzolari, Alessia; De Castro, Paola; Mabile, Laurence; Napolitani, Federica; Rossi, Anna Maria; Cambon-Thomsen, Anne
2015-02-17
Many biomedical publications refer to data obtained from collections of biosamples. Sharing such bioresources (biological samples, data, and databases) is paramount for the present governance of research. Recognition of the effort involved in generating, maintaining, and sharing high quality bioresources is poorly organized, which does not encourage sharing. At publication level, the recognition of such resources is often neglected and/or highly heterogeneous. This is a true handicap for the traceability of bioresource use. The aim of this article is to propose, for the first time, a guideline for reporting bioresource use in research articles, named CoBRA: Citation of BioResources in journal Articles. As standards for citing bioresources are still lacking, the members of the journal editors subgroup of the Bioresource Research Impact Factor (BRIF) initiative developed a standardized and appropriate citation scheme for such resources by informing stakeholders about the subject and raising awareness among scientists and in science editors' networks, mapping this topic among other relevant initiatives, promoting actions addressed to stakeholders, launching surveys, and organizing focused workshops. The European Association of Science Editors has adopted BRIF's suggestion to incorporate statements on biobanks in the Methods section of their guidelines. The BRIF subgroup agreed upon a proposed citation system: each individual bioresource that is used to perform a study and that is mentioned in the Methods section should be cited as an individual "reference [BIORESOURCE]" according to a delineated format. The EQUATOR (Enhancing the QUAlity and Transparency Of health Research) network mentioned the proposed reporting guideline in their "guidelines under development" section. Evaluating bioresources' use and impact requires that publications accurately cite such resources. 
Adopting the standard citation scheme described here will improve the quality of bioresource reporting and will allow their traceability in scientific publications, thus increasing the recognition of bioresources' value and relevance to research. Please see related article: http://dx.doi.org/10.1186/s12916-015-0284-9.
Can psychology walk the walk of open science?
Hesse, Bradford W
2018-01-01
An "open science movement" is gaining traction across many disciplines within the research enterprise but is also precipitating consternation among those who worry that too much disruption may be hampering professional productivity. Despite this disruption, proponents of open data collaboration have argued that some of the biggest problems of the 21st century need to be solved with the help of many people and that data sharing will be the necessary engine to make that happen. In the United States, a national strategic plan for data sharing encouraged the federally funded scientific agencies to (a) publish open data for community use in discoverable, machine-readable, and useful ways; (b) work with public and civil society organizations to set priorities for data to be shared; (c) support innovation and feedback on open data solutions; and (d) continue efforts to release and enhance high-priority data sets funded by taxpayer dollars. One of the more visible open data projects in the psychological sciences is the presidentially announced "Brain Research Through Advancing Innovative Neurotechnologies" (BRAIN) initiative. Lessons learned from initiatives such as these are instructive both from the perspective of open science within psychology and from the perspective of understanding the psychology of open science. Recommendations for creating better pathways to "walk the walk" in open science include (a) nurturing innovation and agile learning, (b) thinking outside the paradigm, (c) creating simplicity from complexity, and (d) participating in continuous learning evidence platforms. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Workplace social capital in nursing: an evolutionary concept analysis.
Read, Emily A
2014-05-01
To report an analysis of the concept of nurses' workplace social capital. Workplace social capital is an emerging concept in nursing with the potential to illuminate the value of social relationships at work. A common definition is needed. Concept analysis. The Cumulative Index to Nursing and Allied Health Literature (CINAHL), PubMed, PsycINFO and ProQuest Nursing databases were systematically searched using the keywords: workplace social capital, employee social capital, work environment, social capital and nursing. Sources published between January 1937 and November 2012 in English that described or studied the social capital of nurses at work were included. A total of 668 resources were found. After removing 241 duplicates, the literature was screened in two phases: (1) titles and abstracts were reviewed (n = 427); and (2) the remaining data sources were retrieved and read (n = 70). Eight sources were included in the final analysis. Attributes of nurses' workplace social capital included networks of social relationships at work, shared assets, and shared ways of knowing and being. Antecedents were communication, trust and positive leadership practices. Nurses' workplace social capital was associated with positive consequences for nurses, their patients and healthcare organizations. Nurses' workplace social capital is defined as nurses' shared assets and ways of being and knowing that are evident in, and available through, nurses' networks of social relationships at work. Future studies should examine and test relationships between antecedents and consequences of nurses' workplace social capital to better understand this important aspect of healthy professional practice environments. © 2013 John Wiley & Sons Ltd.
Efficient management of high level XMM-Newton science data products
NASA Astrophysics Data System (ADS)
Zolotukhin, Ivan
2015-12-01
As is the case for many large projects, XMM-Newton data have been used by the community to produce many valuable higher-level data products. However, even after 15 years of successful mission operation, the potential of these data is not yet fully realized, mostly due to logistical and data management issues. We present a web application, http://xmm-catalog.irap.omp.eu, to demonstrate that existing public high-level data collections generate significant added research value when organized and exposed properly. Several application features, such as access to the all-time XMM-Newton photon database and online fitting of extracted source spectra, were never available before. In this talk we share the best practices we worked out during the development of this website and discuss their potential use for other large projects generating astrophysical data.
Aposematism increases acoustic diversification and speciation in poison frogs
Santos, Juan C.; Baquero, Margarita; Barrio-Amorós, César; Coloma, Luis A.; Erdtmann, Luciana K.; Lima, Albertina P.; Cannatella, David C.
2014-01-01
Multimodal signals facilitate communication with conspecifics during courtship, but they can also alert eavesdropper predators. Hence, signallers face two pressures: enticing partners to mate and avoiding detection by enemies. Undefended organisms with limited escape abilities are expected to minimize predator recognition over mate attraction by limiting or modifying their signalling. Alternatively, organisms with anti-predator mechanisms such as aposematism (i.e. unprofitability signalled by warning cues) might elaborate mating signals as a consequence of reduced predation. We hypothesize that calls diversified in association with aposematism. To test this, we assembled a large acoustic signal database for a diurnal lineage of aposematic and cryptic/non-defended taxa, the poison frogs. First, we showed that aposematic and non-aposematic species share similar extinction rates, and aposematic lineages diversify more and rarely revert to the non-aposematic phenotype. We then characterized mating calls based on morphological (spectral), behavioural/physiological (temporal) and environmental traits. Of these, only spectral and temporal features were associated with aposematism. We propose that with the evolution of anti-predator defences, reduced predation facilitated the diversification of vocal signals, which then became elaborated or showy via sexual selection. PMID:25320164
Simple system--substantial share: the use of Dictyostelium in cell biology and molecular medicine.
Müller-Taubenberger, Annette; Kortholt, Arjan; Eichinger, Ludwig
2013-02-01
Dictyostelium discoideum offers unique advantages for studying fundamental cellular processes, host-pathogen interactions as well as the molecular causes of human diseases. The organism can be easily grown in large amounts and is amenable to diverse biochemical, cell biological and genetic approaches. Throughout their life cycle Dictyostelium cells are motile, and thus are perfectly suited to study random and directed cell motility with the underlying changes in signal transduction and the actin cytoskeleton. Dictyostelium is also increasingly used for the investigation of human disease genes and the crosstalk between host and pathogen. As a professional phagocyte it can be infected with several human bacterial pathogens and used to study the infection process. The availability of a large number of knock-out mutants renders Dictyostelium particularly useful for the elucidation and investigation of host cell factors. A powerful armory of molecular genetic techniques that have been continuously expanded over the years and a well curated genome sequence, which is accessible via the online database dictyBase, considerably strengthened Dictyostelium's experimental attractiveness and its value as model organism. Copyright © 2012 Elsevier GmbH. All rights reserved.
Crystallography Open Database – an open-access collection of crystal structures
Gražulis, Saulius; Chateigner, Daniel; Downs, Robert T.; Yokochi, A. F. T.; Quirós, Miguel; Lutterotti, Luca; Manakova, Elena; Butkus, Justas; Moeck, Peter; Le Bail, Armel
2009-01-01
The Crystallography Open Database (COD), which is a project that aims to gather all available inorganic, metal–organic and small organic molecule structural data in one database, is described. The database adopts an open-access model. The COD currently contains ∼80 000 entries in crystallographic information file format, with nearly full coverage of the International Union of Crystallography publications, and is growing in size and quality. PMID:22477773
76 FR 19376 - Statement of Organizations, Functions, and Delegations of Authority
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-07
... safety mission. These outside groups include academic organizations, private organizations, and other Federal Agencies. 3. Coordinates the access to large databases for pharmacoepidemiologic and..., procedures, training, and security or databases available to OSE. 3. Acts as focal point for all hardware...
Toward Phase IV, Populating the WOVOdat Database
NASA Astrophysics Data System (ADS)
Ratdomopurbo, A.; Newhall, C. G.; Schwandner, F. M.; Selva, J.; Ueda, H.
2009-12-01
One of the challenges for volcanologists is the fact that more and more people are likely to live on volcanic slopes. Information about volcanic activity during unrest should be accurate and rapidly distributed. As unrest may lead to eruption, evacuation may be necessary to minimize damage and casualties. The decision to evacuate people is usually based on the interpretation of monitoring data. Over the past several decades, volcano monitoring has used increasingly sophisticated instruments. A huge volume of data is collected in order to understand the state of activity and behaviour of a volcano. WOVOdat, the World Organization of Volcano Observatories (WOVO) Database of Volcanic Unrest, will provide context within which scientists can interpret the state of their own volcano, during and between crises. After a decision during the 2000 IAVCEI General Assembly to create WOVOdat, development has passed through several phases: Concept Development (Phase I, 2000-2002), Database Design (Phase II, 2003-2006) and Pilot Testing (Phase III, 2007-2008). For WOVOdat to be operational, two steps remain: Database Population (Phase IV) and Enhancement and Maintenance (Phase V). Since January 2009, the WOVOdat project has been hosted by the Earth Observatory of Singapore for at least a 5-year period. According to the original planning in 2002, this 5-year period will be used to complete Phase IV. As the WOVOdat design has not yet been tested for all types of data, 2009 is still reserved for building the back-end relational database management system (RDBMS) of WOVOdat and testing it with more complex data. Fine-tuning of the WOVOdat RDBMS design is being done with each new upload of observatory data. The next and main phase of WOVOdat development will be data population: managing data transfer from multiple observatory formats to the WOVOdat format.
Data population will depend on two important things: the availability of SQL databases at volcano observatories, and their data-sharing policies. Hence, strong collaboration with every WOVO observatory is important. For volcanoes where the data are not in an SQL system, the WOVOdat project will help scientists working on the volcano to start building an SQL database.
The CoFactor database: organic cofactors in enzyme catalysis.
Fischer, Julia D; Holliday, Gemma L; Thornton, Janet M
2010-10-01
Organic enzyme cofactors are involved in many enzyme reactions. Therefore, the analysis of cofactors is crucial to gain a better understanding of enzyme catalysis. To aid this, we have created the CoFactor database. CoFactor provides a web interface to access hand-curated data extracted from the literature on organic enzyme cofactors in biocatalysis, as well as automatically collected information. CoFactor includes information on the conformational and solvent accessibility variation of the enzyme-bound cofactors, as well as mechanistic and structural information about the hosting enzymes. The database is publicly available and can be accessed at http://www.ebi.ac.uk/thornton-srv/databases/CoFactor.
Establishing a shared vision in your organization. Winning strategies to empower your team members.
Rinke, W J
1989-01-01
Today's health-care climate demands that you manage your human resources more effectively. Meeting the dual challenges of providing more with less requires that you tap the vast hidden resources that reside in every one of your team members. Harnessing these untapped energies requires that all of your employees clearly understand the purpose, direction, and the desired future state of your laboratory. Once this image is widely shared, your team members will know their roles in the organization and the contributions they can make to attaining the organization's vision. This shared vision empowers people and enhances their self-esteem as they recognize they are accomplishing a worthy goal. You can create and install a shared vision in your laboratory by adhering to a five-step process. The result will be a unity of purpose that will release the untapped human resources in your organization so that you can do more with less.
Ecker, David J; Sampath, Rangarajan; Willett, Paul; Wyatt, Jacqueline R; Samant, Vivek; Massire, Christian; Hall, Thomas A; Hari, Kumar; McNeil, John A; Büchen-Osmond, Cornelia; Budowle, Bruce
2005-01-01
Background Thousands of different microorganisms affect the health, safety, and economic stability of populations. Many different medical and governmental organizations have created lists of the pathogenic microorganisms relevant to their missions; however, the nomenclature for biological agents on these lists and pathogens described in the literature is inexact. This ambiguity can be a significant block to effective communication among the diverse communities that must deal with epidemics or bioterrorist attacks. Results We have developed a database known as the Microbial Rosetta Stone. The database relates microorganism names, taxonomic classifications, diseases, specific detection and treatment protocols, and relevant literature. The database structure facilitates linkage to public genomic databases. This paper focuses on the information in the database for pathogens that impact global public health, emerging infectious organisms, and bioterrorist threat agents. Conclusion The Microbial Rosetta Stone is available at . The database provides public access to up-to-date taxonomic classifications of organisms that cause human diseases, improves the consistency of nomenclature in disease reporting, and provides useful links between different public genomic and public health databases. PMID:15850481
UCMP and the Internet help hospital libraries share resources.
Dempsey, R; Weinstein, L
1999-07-01
The Medical Library Center of New York (MLCNY), a medical library consortium founded in 1959, has specialized in supporting resource sharing and fostering technological advances. In 1961, MLCNY developed and continues to maintain the Union Catalog of Medical Periodicals (UCMP), a resource tool including detailed data about the collections of more than 720 medical library participants. UCMP was one of the first library tools to capitalize on the benefits of computer technology and, from the beginning, invited hospital libraries to play a substantial role in its development. UCMP, beginning with products in print and later in microfiche, helped to create a new resource sharing environment. Today, UCMP continues to capitalize on new technology by providing access via the Internet and an Oracle-based search system providing subscribers with the benefits of: a database that contains serial holdings information on an issue specific level, a database that can be updated in real time, a system that provides multi-type searching and allows users to define how the results will be sorted, and an ordering function that can more precisely target libraries that have a specific issue of a medical journal. Current development of a Web-based system will ensure that UCMP continues to provide cost effective and efficient resource sharing in future years.
Nonbibliographic Databases in a Corporate Health, Safety, and Environment Organization.
ERIC Educational Resources Information Center
Cubillas, Mary M.
1981-01-01
Summarizes the characteristics of TOXIN, CHEMFILE, and the Product Profile Information System (PPIS), nonbibliographic databases used by Shell Oil Company's Health, Safety, and Environment Organization. (FM)
Challenges in developing medicinal plant databases for sharing ethnopharmacological knowledge.
Ningthoujam, Sanjoy Singh; Talukdar, Anupam Das; Potsangbam, Kumar Singh; Choudhury, Manabendra Dutta
2012-05-07
Major research contributions in ethnopharmacology have generated a vast amount of data associated with medicinal plants. Computerized databases facilitate data management and analysis, making coherent information available to researchers, planners and other users. Web-based databases also facilitate knowledge transmission and feed the circle of information exchange between ethnopharmacological studies and the public audience. However, despite the development of many medicinal plant databases, a lack of uniformity is still discernible. This calls for defining a common standard to achieve the common objectives of ethnopharmacology. The aim of this study is to review the diversity of approaches to storing ethnopharmacological information in databases and to propose some minimal standards for these databases. A survey of articles on medicinal plant databases was conducted on the Internet using selective keywords. Grey literature and printed materials were also searched for information. The listed resources were critically analyzed for their approaches to content type, focus area and software technology. A need for rapid incorporation of traditional knowledge by compiling primary data has been identified. While citation collection is a common approach to information compilation, it cannot fully assimilate the local literature that reflects traditional knowledge. Standards for systematic evaluation and for checking the quality and authenticity of the data also need to be defined. Databases focusing on thematic areas, viz. traditional medicine systems, regional aspects, diseases and phytochemical information, are analyzed. Issues pertaining to data standards, data linking and unique identification need to be addressed, in addition to general issues such as lack of updates and sustainability. Against this background, suggestions are made on some minimum standards for the development of medicinal plant databases.
In spite of variations in approach, the existence of many overlapping features indicates redundancy of resources and effort. As developing global data in a single database may not be possible in view of culture-specific differences, efforts can be directed to specific regional areas. The existing scenario calls for a collaborative approach to defining a common standard for medicinal plant databases for knowledge sharing and scientific advancement. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
dbMDEGA: a database for meta-analysis of differentially expressed genes in autism spectrum disorder.
Zhang, Shuyun; Deng, Libin; Jia, Qiyue; Huang, Shaoting; Gu, Junwang; Zhou, Fankun; Gao, Meng; Sun, Xinyi; Feng, Chang; Fan, Guangqin
2017-11-16
Autism spectrum disorders (ASD) are hereditary, heterogeneous and biologically complex neurodevelopmental disorders. Individual studies on gene expression in ASD cannot provide clear consensus conclusions. Therefore, a systematic review to synthesize the current findings from brain tissues and a search tool to share the meta-analysis results are urgently needed. Here, we conducted a meta-analysis of brain gene expression profiles in the current reported human ASD expression datasets (with 84 frozen male cortex samples, 17 female cortex samples, 32 cerebellum samples and 4 formalin fixed samples) and knock-out mouse ASD model expression datasets (with 80 collective brain samples). Then, we applied R language software and developed an interactive shared and updated database (dbMDEGA) displaying the results of meta-analysis of data from ASD studies regarding differentially expressed genes (DEGs) in the brain. This database, dbMDEGA ( https://dbmdega.shinyapps.io/dbMDEGA/ ), is a publicly available web-portal for manual annotation and visualization of DEGs in the brain from data from ASD studies. This database uniquely presents meta-analysis values and homologous forest plots of DEGs in brain tissues. Gene entries are annotated with meta-values, statistical values and forest plots of DEGs in brain samples. This database aims to provide searchable meta-analysis results based on the current reported brain gene expression datasets of ASD to help detect candidate genes underlying this disorder. This new analytical tool may provide valuable assistance in the discovery of DEGs and the elucidation of the molecular pathogenicity of ASD. This database model may be replicated to study other disorders.
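The meta-analysis behind a resource like dbMDEGA combines per-study effect sizes for each gene into one pooled estimate, which is what the database's meta-values and forest plots summarize. As a hedged illustration only (the abstract says dbMDEGA was built with R, and its exact model is not stated here; the effect sizes and variances below are invented), a DerSimonian-Laird random-effects pooling of per-study log fold-changes for one gene might look like:

```python
import math

def meta_analysis(effects, variances):
    """DerSimonian-Laird random-effects meta-analysis of per-study
    effect sizes (e.g. log fold-changes for one gene)."""
    w = [1.0 / v for v in variances]                     # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Cochran's Q heterogeneity statistic and between-study variance tau^2
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    # Random-effects weights fold tau^2 into each study's variance
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))                      # SE of pooled effect
    return pooled, se, tau2

# Hypothetical effect sizes and variances from three brain-expression studies
pooled, se, tau2 = meta_analysis([0.8, 1.1, 0.5], [0.04, 0.09, 0.06])
```

Each per-study effect with its confidence interval, plus the pooled estimate, is exactly what a forest plot row displays.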
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-30
... and trade the shares of the following under NYSE Arca Equities Rule 8.600 (``Managed Fund Shares... proposes to list and trade the shares (``Shares'') of the PowerShares China A-Share Portfolio (``Fund... with the Commission as an open-end management investment company.\\6\\ \\4\\ A Managed Fund Share is a...
The USA National Phenology Network's Model for Collaborative Data Generation and Dissemination
NASA Astrophysics Data System (ADS)
Rosemartin, A.; Lincicome, A.; Denny, E. G.; Marsh, L.; Wilson, B. E.
2010-12-01
The USA National Phenology Network (USA-NPN) serves science and society by promoting a broad understanding of plant and animal phenology and the relationships among phenological patterns and all aspects of environmental change. The Network was founded as an NSF-funded Research Coordination Network, for the purpose of fostering collaboration among scientists, policy-makers and the general public to address the challenges posed by global change and its impact on ecosystems and human health. With this mission in mind, the USA-NPN has developed an Information Management System (IMS) to facilitate collaboration and participatory data collection and digitization. The IMS includes components for data storage, such as the National Phenology Database, as well as a Drupal website for information-sharing and data visualization, and a Java application for collection of contemporary observational data. The National Phenology Database is designed to efficiently accommodate large quantities of phenology data and to be flexible to the changing needs of the network. The database allows for the collection, storage and output of phenology data from multiple sources (e.g., partner organizations, researchers and citizen observers), as well as integration with legacy data sets. Participants in the network can submit records (as Drupal content types) for publications, legacy data sets and phenology-related festivals. The USA-NPN’s contemporary phenology data collection effort, Nature’s Notebook also draws on the contributions of participants. Citizen scientists around the country submit data through this Java application (paired with the Drupal site through a shared login) on the life cycle stages of plants and animals in their yards and parks. The North American Bird Phenology Program, now a part of the USA-NPN, also relies on web-based crowdsourcing. 
Participants in this program are transcribing 6 million scanned paper cards recording migratory bird arrivals, collected by observers across the United States from 1880 to 1970. The USA-NPN’s Information Management System represents a collaborative effort to collect, store, synthesize and output phenological data and information for plants, animals and the environment, and is poised to play a key role in understanding phenological responses to environmental and climatic change at local, regional and national scales.
NASA Astrophysics Data System (ADS)
Kuo, K. S.; Rilee, M. L.
2017-12-01
Existing pathways for bringing together massive, diverse Earth Science datasets for integrated analyses burden end users with data packaging and management details irrelevant to their domain goals. The major data repositories focus on archival, discovery, and dissemination of products (files) in a standardized manner. End users must download and then adapt these files using local resources and custom methods before analysis can proceed. This reduces scientific or other domain productivity, as scarce resources and expertise must be diverted to data processing. The Spatio-Temporal Adaptive Resolution Encoding (STARE) is a unifying scheme that encodes geospatial and temporal information for organizing data on scalable computing/storage resources, minimizing expensive data transfers. STARE provides a compact representation that turns set-logic functions, e.g. conditional subsetting, into integer operations, and it takes into account the representative spatiotemporal resolutions of the data in each dataset, which is needed to align the placement of geo-spatiotemporally diverse data on massively parallel resources. Automating important scientific functions (e.g. regridding) and computational functions (e.g. data placement) allows scientists to focus on domain-specific questions instead of expending their expertise on data processing. While STARE is not tied to any particular computing technology, we have used STARE for visualization and the SciDB array database to analyze Earth Science data on a 28-node compute cluster. STARE's automatic data placement and coupling of geometric and array indexing allow complicated data comparisons to be realized as straightforward database operations like "join." With STARE-enabled automation, SciDB+STARE provides a database interface, reducing costly data preparation, increasing the volume and variety of integrable data, and easing result sharing.
Using SciDB+STARE as part of an integrated analysis infrastructure, we demonstrate the dramatic ease of combining fundamentally different datasets, i.e. gridded (NMQ radar) vs. spacecraft swath (TRMM) data. SciDB+STARE is an important step towards a computational infrastructure for integrating and sharing diverse, complex Earth Science data and the science products derived from them.
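The abstract's central claim, that an encoding like STARE turns geometric matching into integer set logic, can be illustrated with a toy sketch. This is not the actual STARE encoding (STARE is based on a hierarchical mesh, not the simple Z-order key used here); a Morton key with prefix truncation merely stands in for the idea, and all names and coordinates below are hypothetical:

```python
def morton_key(lat, lon, level=10):
    """Interleave quantized latitude/longitude bits into one integer
    (a Z-order curve) -- a simplified stand-in for a STARE index."""
    ybits = min(int((lat + 90.0) / 180.0 * (1 << level)), (1 << level) - 1)
    xbits = min(int((lon + 180.0) / 360.0 * (1 << level)), (1 << level) - 1)
    key = 0
    for i in range(level):
        key |= ((xbits >> i) & 1) << (2 * i)      # lon bit -> even position
        key |= ((ybits >> i) & 1) << (2 * i + 1)  # lat bit -> odd position
    return key

def spatial_join(grid, swath, level=10, coarse=6):
    """Pair records from two datasets whose keys share the same coarse
    prefix: set logic on integers instead of geometry tests."""
    shift = 2 * (level - coarse)                  # drop fine-resolution bits
    index = {}
    for rec in grid:
        cell = morton_key(rec["lat"], rec["lon"], level) >> shift
        index.setdefault(cell, []).append(rec)
    return [(s, g) for s in swath
            for g in index.get(morton_key(s["lat"], s["lon"], level) >> shift, [])]
```

Because co-located records share key prefixes, the "join" reduces to hash matching on integers, which is the property that lets an array database place and compare diverse datasets cheaply.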
Implementation of the CUAHSI information system for regional hydrological research and workflow
NASA Astrophysics Data System (ADS)
Bugaets, Andrey; Gartsman, Boris; Bugaets, Nadezhda; Krasnopeyev, Sergey; Krasnopeyeva, Tatyana; Sokolov, Oleg; Gonchukov, Leonid
2013-04-01
Environmental research and education have become increasingly data-intensive as a result of the proliferation of digital technologies, instrumentation, and pervasive networks through which data are collected, generated, shared, and analyzed. Over the next decade, it is likely that science and engineering research will produce more scientific data than has been created over the whole of human history (Cox et al., 2006). Successfully using these data to achieve new scientific breakthroughs depends on the ability to access, organize, integrate, and analyze these large datasets. The new project of PGI FEB RAS (http://tig.dvo.ru), FERHRI (www.ferhri.org) and Primgidromet (www.primgidromet.ru) is focused on the creation of an open, unified hydrological information system, conforming to international standards, to support hydrological investigations, water management and forecasting systems. Within the hydrologic science community, the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (http://his.cuahsi.org) has been developing a distributed network of data sources and functions that are integrated using web services and that provide access to data, tools, and models enabling synthesis, visualization, and evaluation of hydrologic system behavior. On top of the CUAHSI technologies, the first two template databases were developed for primary datasets of special observations on experimental basins in the Far East region of Russia. The first database contains data from special observations performed at the former (1957-1994) Primorskaya Water-Balance Station (1500 km2). Measurements were carried out at 20 hydrological and 40 rain gauging stations and were published as special series, but only as hardcopy books. The database provides raw data from loggers with hourly and daily time support.
The second database, called «FarEastHydro», provides published standard daily measurements performed at the Roshydromet observation network (200 hydrological and meteorological stations) from 1930 through 1990. Both data resources are maintained in test mode at the project site http://gis.dvo.ru:81/, which is permanently updated. After this first success, the decision was made to use the CUAHSI technology as a basis for developing a hydrological information system to support data publishing and the workflow of Primgidromet, the regional office of the Federal State Hydrometeorological Agency. At the moment, the Primgidromet observation network is equipped with 34 automatic SEBA hydrological pressure sensor pneumatic gauges PS-Light-2 and 36 automatic SEBA weather stations. The large datasets generated by sensor networks are organized and stored within a central ODM database, which allows the data to be unambiguously interpreted with sufficient metadata and provides a traceable heritage from raw measurements to usable information. Organizing the data within a central CUAHSI ODM database was the most critical step, with several important implications. This technology is widespread and well documented, and it ensures that all datasets are publicly available and can readily be used by other investigators and developers to support additional analyses and hydrological modeling. Implementation of ODM within a relational database management system eliminates potential data manipulation errors and intermediate data processing steps. Wrapping the CUAHSI WaterOneFlow web service into an OpenMI 2.0 linkable component (www.openmi.org) allows seamless integration with well-known hydrological modeling systems.
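The value of a central ODM-style store is that every observation stays interpretable: each value is joined to site and variable metadata rather than living in a bare time-series file. The following is a much-simplified, hypothetical subset of the CUAHSI ODM layout (the real ODM schema has many more tables and fields; the table, column and site names here are illustrative), sketched with SQLite:

```python
import sqlite3

# Minimal ODM-like schema: one central DataValues table keyed to
# Sites and Variables metadata tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Sites     (SiteID INTEGER PRIMARY KEY, SiteCode TEXT, SiteName TEXT);
CREATE TABLE Variables (VariableID INTEGER PRIMARY KEY, VariableCode TEXT, UnitsName TEXT);
CREATE TABLE DataValues(ValueID INTEGER PRIMARY KEY,
                        DataValue REAL, LocalDateTime TEXT,
                        SiteID INTEGER REFERENCES Sites,
                        VariableID INTEGER REFERENCES Variables);
""")
conn.execute("INSERT INTO Sites VALUES (1, 'PRIM01', 'Example gauging station')")
conn.execute("INSERT INTO Variables VALUES (1, 'Qdaily', 'cubic meters per second')")
conn.executemany("INSERT INTO DataValues VALUES (NULL, ?, ?, 1, 1)",
                 [(12.4, "1957-06-01"), (15.1, "1957-06-02"), (11.8, "1957-06-03")])

# Retrieve a traceable series: every value carries its site, variable and units.
rows = conn.execute("""
    SELECT s.SiteCode, v.VariableCode, v.UnitsName, d.LocalDateTime, d.DataValue
    FROM DataValues d
    JOIN Sites s     ON s.SiteID = d.SiteID
    JOIN Variables v ON v.VariableID = d.VariableID
    ORDER BY d.LocalDateTime
""").fetchall()
```

A WaterOneFlow-style service is essentially this query exposed over the web, with the metadata joins guaranteeing that returned values are self-describing.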
Yang, Tsun-Po; Beazley, Claude; Montgomery, Stephen B.; Dimas, Antigone S.; Gutierrez-Arcelus, Maria; Stranger, Barbara E.; Deloukas, Panos; Dermitzakis, Emmanouil T.
2010-01-01
Summary: Genevar (GENe Expression VARiation) is a database and Java tool designed to integrate multiple datasets, and provides analysis and visualization of associations between sequence variation and gene expression. Genevar allows researchers to investigate expression quantitative trait loci (eQTL) associations within a gene locus of interest in real time. The database and application can be installed on a standard computer in database mode and, in addition, on a server to share discoveries among affiliations or the broader community over the Internet via web services protocols. Availability: http://www.sanger.ac.uk/resources/software/genevar Contact: emmanouil.dermitzakis@unige.ch PMID:20702402
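An eQTL association of the kind Genevar visualizes tests whether a gene's expression level varies with genotype at a nearby variant. As a minimal, hedged sketch (Genevar's own statistics are richer, and the dosages and expression values below are invented), regressing expression on allele dosage gives the per-allele effect:

```python
import math

def eqtl_association(genotypes, expression):
    """Least-squares slope of expression on allele dosage (0/1/2) and
    the Pearson correlation -- the simplest single-SNP eQTL test."""
    n = len(genotypes)
    mg = sum(genotypes) / n
    me = sum(expression) / n
    sxx = sum((g - mg) ** 2 for g in genotypes)
    syy = sum((e - me) ** 2 for e in expression)
    sxy = sum((g - mg) * (e - me) for g, e in zip(genotypes, expression))
    beta = sxy / sxx                  # expression change per copy of the allele
    r = sxy / math.sqrt(sxx * syy)    # strength of the association
    return beta, r

# Hypothetical SNP dosages and normalized expression for six individuals
beta, r = eqtl_association([0, 0, 1, 1, 2, 2], [1.0, 1.2, 2.1, 1.9, 3.0, 3.2])
```

A genome browser view of an eQTL is essentially this regression repeated for every variant in the locus of interest, with the resulting statistics plotted against genomic position.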
Bhattacharyya, Sanghita; Berhanu, Della; Taddesse, Nolawi; Srivastava, Aradhana; Wickremasinghe, Deepthi; Schellenberg, Joanna
2016-01-01
Many low- and middle-income countries have pluralistic health systems in which private for-profit and not-for-profit sectors complement the public sector: data shared across sectors can inform local decision-making. The third article in a series of four on district decision-making for health in low-income settings, this study shows the untapped potential of existing data by documenting the nature and type of data collected by the public and private health systems, data flow and sharing, use, and inter-sectoral linkages in India and Ethiopia. In two districts in each country, semi-structured interviews were conducted with administrators and data managers to understand the types of data maintained and the linkages with other sectors in terms of data sharing, flow and use. We created a database of all data elements maintained at the district level, categorized by form and according to the six World Health Organization health system blocks. We used content analysis to capture the type of data available for different health system levels. Data flow in the public health sectors of both countries is sequential, formal and systematic. Although multiple sources of data exist outside the public health system, there is little formal sharing of data between sectors. Though not fully operational, Ethiopia has better-developed formal structures for data sharing than India. In the private and public sectors, health data in both countries are collected in all six health system categories, with the greatest focus on service delivery data and limited coverage of supplies, health workforce, governance and contextual information. In the Indian private sector, there is a better balance of data across the six categories than in the public sector. In both India and Ethiopia the majority of data collected relate to maternal and child health. Both countries have huge potential for increased use of health data to guide district decision-making. PMID:27591203
2011-01-01
Background Multiple types of assays allow sensitive detection of virus-specific neutralizing antibodies. For example, the extent of antibody neutralization of HIV-1, SIV and SHIV can be measured in the TZM-bl cell line through the degree of luciferase reporter gene expression after infection. In the past, neutralization curves and titers for this standard assay have been calculated using an Excel macro. Updating all instances of such a macro with new techniques can be unwieldy and introduce non-uniformity across multi-lab teams. Using Excel also poses challenges in centrally storing, sharing and associating raw data files and results. Results We present LabKey Server's NAb tool for organizing, analyzing and securely sharing data, files and results for neutralizing antibody (NAb) assays, including the luciferase-based TZM-bl NAb assay. The customizable tool supports high-throughput experiments and includes a graphical plate template designer, allowing researchers to quickly adapt calculations to new plate layouts. The tool calculates the percent neutralization for each serum dilution based on luminescence measurements, fits a range of neutralization curves to titration results and uses these curves to estimate the neutralizing antibody titers for benchmark dilutions. Results, curve visualizations and raw data files are stored in a database and shared through a secure, web-based interface. NAb results can be integrated with other data sources based on sample identifiers. It is simple to make results public after publication by updating folder security settings. Conclusions Standardized tools for analyzing, archiving and sharing assay results can improve the reproducibility, comparability and reliability of results obtained across many labs. LabKey Server and its NAb tool are freely available as open source software at http://www.labkey.com under the Apache 2.0 license. 
Many members of the HIV research community can also access the LabKey Server NAb tool without installing the software by using the Atlas Science Portal (https://atlas.scharp.org). Atlas is an installation of LabKey Server. PMID:21619655
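The NAb tool's core calculation, percent neutralization per serum dilution from luminescence readings, can be sketched as below. The abstract does not give the exact formula; this is the conventional TZM-bl calculation (background-corrected reduction in signal relative to the virus-only control), and all variable names are illustrative:

```python
def percent_neutralization(sample_rlu, virus_ctrl_rlu, cell_ctrl_rlu):
    """Percent neutralization of one serum dilution from luminescence (RLU).

    The cell-only control (background) is subtracted from both the sample
    and the virus-only control; neutralization is the fractional reduction
    in the background-corrected signal, expressed as a percentage."""
    corrected_sample = sample_rlu - cell_ctrl_rlu
    corrected_virus = virus_ctrl_rlu - cell_ctrl_rlu
    return (1.0 - corrected_sample / corrected_virus) * 100.0

# A titration series: higher dilution -> less antibody -> less neutralization.
dilutions = [20, 60, 180, 540]
readings = [5_000, 40_000, 90_000, 110_000]   # RLU at each dilution
virus_ctrl, cell_ctrl = 120_000, 2_000

curve = [percent_neutralization(r, virus_ctrl, cell_ctrl) for r in readings]
```

A curve-fitting step (as the tool describes) would then interpolate this series to estimate the titer at benchmark dilutions such as ID50.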
de la Cruz, J Salvador; Sally, Mitchell B; Zatarain, John R; Crutchfield, Megan; Ramsey, Katrina; Nielsen, Jamison; Patel, Madhukar; Lapidus, Jodi; Orloff, Susan; Malinoski, Darren J
2015-10-01
Historically, strategies to reduce acute rejection and improve graft survival in kidney transplant recipients included blood transfusions (BTs) before transplantation. While advances in recipient immunosuppression have replaced this practice, the impact of BTs in the organ donor on recipient graft outcomes has not been evaluated. We hypothesized that BTs in organ donors after neurologic determination of death (DNDDs) translate into improved recipient renal graft outcomes, as measured by a decrease in delayed graft function (DGF). Donor demographics, critical care end points, the use of BTs, and graft outcome data were prospectively collected on DNDDs from March 2012 to October 2013 in the United Network for Organ Sharing Region 5 Donor Management Database. Propensity analysis determined each DNDD's probability of receiving packed red blood cells based on demographic and critical care data as well as provider bias. The primary outcome measure was the rate of DGF (dialysis in the first week after transplantation) across donor BT groups: no BT, any BT, 1 to 5, 6 to 10, or greater than 10 packed red blood cell units. Regression models determined the relationship between donor BTs and recipient DGF after accounting for known predictors of DGF as well as the propensity to receive a BT. Data were complete for 1,884 renal grafts from 1,006 DNDDs; 52% received any BT, 32% received 1 to 5 U, 11% received 6 to 10 U, and 9% received greater than 10 U of blood. Grafts from transfused donors had a lower rate of DGF compared with those from nontransfused donors (26% vs. 34%, p < 0.001). After adjusting for known confounders, grafts from donors with any BT had lower odds of DGF (odds ratio, 0.76; p = 0.030), and this effect was greatest in those with greater than 10 U transfused. Any BT in a DNDD was associated with a 23% decrease in the odds of recipients developing DGF, and this effect was more pronounced as the number of BTs increased.
Therapeutic study, level III; epidemiologic/prognostic study, level II.
ERIC Educational Resources Information Center
Myint-U, Athi; O'Donnell, Lydia; Osher, David; Petrosino, Anthony; Stueve, Ann
2008-01-01
Despite evidence that some dropout prevention programs have positive effects, whether districts in the region are using such evidence-based programs has not been documented. To generate and share knowledge on dropout programs and policies, this report details a project to create a searchable database with information on target audiences,…
Ontology-based geospatial data query and integration
Zhao, T.; Zhang, C.; Wei, M.; Peng, Z.-R.
2008-01-01
Geospatial data sharing is an increasingly important subject as large amounts of data are produced by a variety of sources, stored in incompatible formats, and accessed through different GIS applications. Past efforts to enable sharing have produced standardized data formats such as GML and data access protocols such as the Web Feature Service (WFS). While these standards help client applications gain access to heterogeneous data stored in different formats from diverse sources, the usability of that access is limited by the lack of data semantics encoded in the WFS feature types. Past research has used ontology languages to describe the semantics of geospatial data, but ontology-based queries cannot be applied directly to legacy data stored in databases or shapefiles, or to feature data in WFS services. This paper presents a method to enable ontology queries on spatial data available from WFS services and on data stored in databases. We do not create ontology instances explicitly and thus avoid the problems of data replication. Instead, user queries are rewritten as WFS getFeature requests and SQL queries to databases. The method also has the benefit of utilizing existing tools for databases, WFS, and GML while enabling queries based on ontology semantics. © 2008 Springer-Verlag Berlin Heidelberg.
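The rewriting step described above, from an ontology-level filter to a WFS GetFeature request and a SQL query, can be sketched roughly as follows. The class-to-table and property-to-column mappings here are hypothetical placeholders; the paper's actual mapping rules are richer than a flat dictionary:

```python
# Hypothetical mapping from ontology terms to relational and WFS names.
CLASS_TO_TABLE = {"hydro:GaugingStation": "stations"}
PROP_TO_COLUMN = {"hydro:riverName": "river_name"}

def rewrite_to_sql(onto_class, onto_prop, value):
    """Rewrite a simple ontology equality filter into SQL over legacy tables."""
    table = CLASS_TO_TABLE[onto_class]
    column = PROP_TO_COLUMN[onto_prop]
    return f"SELECT * FROM {table} WHERE {column} = '{value}'"

def rewrite_to_wfs(base_url, onto_class, onto_prop, value):
    """Rewrite the same filter into a WFS GetFeature request URL with an
    OGC Filter Encoding predicate (URL-encoding omitted for readability)."""
    type_name = CLASS_TO_TABLE[onto_class]
    ogc_filter = (f"<Filter><PropertyIsEqualTo>"
                  f"<PropertyName>{PROP_TO_COLUMN[onto_prop]}</PropertyName>"
                  f"<Literal>{value}</Literal>"
                  f"</PropertyIsEqualTo></Filter>")
    return (f"{base_url}?service=WFS&version=1.1.0&request=GetFeature"
            f"&typeName={type_name}&filter={ogc_filter}")
```

Because the rewrite happens at query time, no ontology instances are materialized and the underlying WFS and database tooling is used unchanged, which is the design point the abstract emphasizes.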
Wen, Can-Hong; Ou, Shao-Min; Guo, Xiao-Bo; Liu, Chen-Feng; Shen, Yan-Bo; You, Na; Cai, Wei-Hong; Shen, Wen-Jun; Wang, Xue-Qin; Tan, Hai-Zhu
2017-12-12
Breast cancer is a high-risk heterogeneous disease with myriad subtypes and complicated biological features. The Cancer Genome Atlas (TCGA) breast cancer database provides researchers with large-scale genome and clinical data via web portals and FTP services. Researchers are able to gain new insights into their fields and evaluate experimental discoveries with TCGA. However, TCGA's complex data formats and diverse files make it difficult to access and work with for researchers who have little experience with databases and bioinformatics. For ease of use, we built the breast cancer (B-CAN) platform, which enables data customization, data visualization, and a private data center. The B-CAN platform runs on an Apache server and interacts with the back-end MySQL database through PHP. Users can customize data based on their needs by combining tables from the original TCGA database and selecting variables from each table. The private data center is applicable to private data and two types of customized data. A key feature of B-CAN is that it provides both single-table and multiple-table displays. Customized data with one barcode corresponding to many records, and processed customized data, are allowed in the multiple-table display. B-CAN is an intuitive and highly efficient data-sharing platform.
BRCA Share: A Collection of Clinical BRCA Gene Variants.
Béroud, Christophe; Letovsky, Stanley I; Braastad, Corey D; Caputo, Sandrine M; Beaudoux, Olivia; Bignon, Yves Jean; Bressac-De Paillerets, Brigitte; Bronner, Myriam; Buell, Crystal M; Collod-Béroud, Gwenaëlle; Coulet, Florence; Derive, Nicolas; Divincenzo, Christina; Elzinga, Christopher D; Garrec, Céline; Houdayer, Claude; Karbassi, Izabela; Lizard, Sarab; Love, Angela; Muller, Danièle; Nagan, Narasimhan; Nery, Camille R; Rai, Ghadi; Revillion, Françoise; Salgado, David; Sévenet, Nicolas; Sinilnikova, Olga; Sobol, Hagay; Stoppa-Lyonnet, Dominique; Toulas, Christine; Trautman, Edwin; Vaur, Dominique; Vilquin, Paul; Weymouth, Katelyn S; Willis, Alecia; Eisenberg, Marcia; Strom, Charles M
2016-12-01
As next-generation sequencing increases access to human genetic variation, the challenge of determining clinical significance of variants becomes ever more acute. Germline variants in the BRCA1 and BRCA2 genes can confer substantial lifetime risk of breast and ovarian cancer. Assessment of variant pathogenicity is a vital part of clinical genetic testing for these genes. A database of clinical observations of BRCA variants is a critical resource in that process. This article describes BRCA Share™, a database created by a unique international alliance of academic centers and commercial testing laboratories. By integrating the content of the Universal Mutation Database generated by the French Unicancer Genetic Group with the testing results of two large commercial laboratories, Quest Diagnostics and Laboratory Corporation of America (LabCorp), BRCA Share™ has assembled one of the largest publicly accessible collections of BRCA variants currently available. Although access is available to academic researchers without charge, commercial participants in the project are required to pay a support fee and contribute their data. The fees fund the ongoing curation effort, as well as planned experiments to functionally characterize variants of uncertain significance. BRCA Share™ databases can therefore be considered as models of successful data sharing between private companies and the academic world. © 2016 WILEY PERIODICALS, INC.
TCW: Transcriptome Computational Workbench
Soderlund, Carol; Nelson, William; Willer, Mark; Gang, David R.
2013-01-01
Background The analysis of transcriptome data involves many steps and various programs, along with organization of large amounts of data and results. Without a methodical approach for storage, analysis and query, the resulting ad hoc analysis can lead to human error, loss of data and results, inefficient use of time, and lack of verifiability, repeatability, and extensibility. Methodology The Transcriptome Computational Workbench (TCW) provides Java graphical interfaces for methodical analysis for both single and comparative transcriptome data without the use of a reference genome (e.g. for non-model organisms). The singleTCW interface steps the user through importing transcript sequences (e.g. Illumina) or assembling long sequences (e.g. Sanger, 454, transcripts), annotating the sequences, and performing differential expression analysis using published statistical programs in R. The data, metadata, and results are stored in a MySQL database. The multiTCW interface builds a comparison database by importing sequence and annotation from one or more single TCW databases, executes the ESTscan program to translate the sequences into proteins, and then incorporates one or more clusterings, where the clustering options are to execute the orthoMCL program, compute transitive closure, or import clusters. Both singleTCW and multiTCW allow extensive query and display of the results, where singleTCW displays the alignment of annotation hits to transcript sequences, and multiTCW displays multiple transcript alignments with MUSCLE or pairwise alignments. The query programs can be executed on the desktop for fastest analysis, or from the web for sharing the results. Conclusion It is now affordable to buy a multi-processor machine, and easy to install Java and MySQL. By simply downloading the TCW, the user can interactively analyze, query and view their data. The TCW allows in-depth data mining of the results, which can lead to a better understanding of the transcriptome. 
TCW is freely available from www.agcol.arizona.edu/software/tcw. PMID:23874959
Nishimura, Osamu; Hirao, Yukako; Tarui, Hiroshi; Agata, Kiyokazu
2012-06-29
Planarians are considered to be among the extant animals close to one of the earliest groups of organisms that acquired a central nervous system (CNS) during evolution. Planarians have a bilobed brain with nine lateral branches from which a variety of external signals are projected into different portions of the main lobes. Various interneurons process different signals to regulate behavior and learning/memory. Furthermore, planarians have robust regenerative ability and are attracting attention as a new model organism for the study of regeneration. Here we conducted a large-scale EST analysis of the head region of the planarian Dugesia japonica to construct a database of the head-region transcriptome, and then performed comparative analyses among related species. A total of 54,752 high-quality EST reads were obtained from a head library of the planarian Dugesia japonica, and 13,167 unigene sequences were produced by de novo assembly. A new method devised here revealed that proteins related to metabolism and defense mechanisms have high flexibility of amino-acid substitutions within the planarian family. Eighty-two CNS-development genes were found in the planarian (cf. C. elegans 3; chicken 129). Comparative analysis revealed that 91% of the planarian CNS-development genes could be mapped onto the schistosome genome, but one-third of these shared genes were not expressed in the schistosome. We constructed a database that is a useful resource for comparative planarian transcriptome studies. Analysis comparing homologous genes between two planarian species showed that the potential of genes is important for accumulation of amino-acid substitutions. The presence of many CNS-development genes in our database supports the notion that the planarian has a fundamental brain with regard to evolution and development at not only the morphological/functional, but also the genomic, level.
In addition, our results indicate that the planarian CNS-development genes already existed before the divergence of planarians and schistosomes from their common ancestor.
Deployment and Evaluation of an Observations Data Model
NASA Astrophysics Data System (ADS)
Horsburgh, J. S.; Tarboton, D. G.; Zaslavsky, I.; Maidment, D. R.; Valentine, D.
2007-12-01
Environmental observations are fundamental to hydrology and water resources, and the way these data are organized and manipulated either enables or inhibits the analyses that can be performed. The CUAHSI Hydrologic Information System project is developing information technology infrastructure to support hydrologic science. This includes an Observations Data Model (ODM) that provides a new and consistent format for the storage and retrieval of environmental observations in a relational database designed to facilitate integrated analysis of large datasets collected by multiple investigators. Within this data model, observations are stored with sufficient ancillary information (metadata) about the observations to allow them to be unambiguously interpreted and used, and to provide traceable heritage from raw measurements to usable information. The design is based on a relational database model that exposes each single observation as a record, taking advantage of the capability of relational database systems to query on data values, enabling cross-dimension data retrieval and analysis. This data model has been deployed, as part of the HIS Server, at the WATERS Network test bed observatories across the U.S., where it serves as a repository for real-time data in the observatory information system. The ODM holds the data that are then made available to investigators and the public through web services and the Data Access System for Hydrology (DASH) map-based interface. In the WATERS Network test bed settings, the ODM has been used to ingest, analyze and publish data from a variety of sources and disciplines. This paper will present an evaluation of the effectiveness of this initial deployment and the revisions that are being instituted to address shortcomings.
The ODM represents a new, systematic way for hydrologists, scientists, and engineers to organize and share their data and thereby facilitate a fuller integrated understanding of water resources based on more extensive and fully specified information.
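The central ODM design choice, one row per observation with metadata factored into related tables so that queries can filter on data values, can be sketched with an in-memory SQLite database. The table and column names below follow the spirit of ODM but are illustrative, not the exact CUAHSI ODM schema, and the site and values are made up:

```python
import sqlite3

# One row per observation, with site and variable metadata in side tables.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Sites (SiteID INTEGER PRIMARY KEY, SiteName TEXT,
                    Latitude REAL, Longitude REAL);
CREATE TABLE Variables (VariableID INTEGER PRIMARY KEY,
                        VariableName TEXT, Units TEXT);
CREATE TABLE DataValues (ValueID INTEGER PRIMARY KEY,
                         DataValue REAL, LocalDateTime TEXT,
                         SiteID INTEGER REFERENCES Sites,
                         VariableID INTEGER REFERENCES Variables);
""")
con.execute("INSERT INTO Sites VALUES (1, 'Example River Gauge', 41.7, -111.8)")
con.execute("INSERT INTO Variables VALUES (1, 'Discharge', 'm^3/s')")
con.executemany(
    "INSERT INTO DataValues (DataValue, LocalDateTime, SiteID, VariableID) "
    "VALUES (?, ?, 1, 1)",
    [(12.3, "2007-10-01T00:00"), (15.8, "2007-10-01T01:00")])

# Because each observation is its own record, queries can filter on values,
# which is what enables the cross-dimension retrieval the ODM paper describes:
high = con.execute(
    "SELECT dv.DataValue FROM DataValues dv "
    "JOIN Variables v ON v.VariableID = dv.VariableID "
    "WHERE v.VariableName = 'Discharge' AND dv.DataValue > 13").fetchall()
```

The alternative (one column per variable, one row per timestamp) would make value-based filtering and the addition of new variables much harder, which is the trade-off the record-per-observation design resolves.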
Rot, Gregor; Parikh, Anup; Curk, Tomaz; Kuspa, Adam; Shaulsky, Gad; Zupan, Blaz
2009-08-25
Bioinformatics often leverages on recent advancements in computer science to support biologists in their scientific discovery process. Such efforts include the development of easy-to-use web interfaces to biomedical databases. Recent advancements in interactive web technologies require us to rethink the standard submit-and-wait paradigm, and craft bioinformatics web applications that share analytical and interactive power with their desktop relatives, while retaining simplicity and availability. We have developed dictyExpress, a web application that features a graphical, highly interactive explorative interface to our database that consists of more than 1000 Dictyostelium discoideum gene expression experiments. In dictyExpress, the user can select experiments and genes, perform gene clustering, view gene expression profiles across time, view gene co-expression networks, perform analyses of Gene Ontology term enrichment, and simultaneously display expression profiles for a selected gene in various experiments. Most importantly, these tasks are achieved through web applications whose components are seamlessly interlinked and immediately respond to events triggered by the user, thus providing a powerful explorative data analysis environment. dictyExpress is a precursor for a new generation of web-based bioinformatics applications with simple but powerful interactive interfaces that resemble that of the modern desktop. While dictyExpress serves mainly the Dictyostelium research community, it is relatively easy to adapt it to other datasets. We propose that the design ideas behind dictyExpress will influence the development of similar applications for other model organisms.
Towards Semantic e-Science for Traditional Chinese Medicine
Chen, Huajun; Mao, Yuxin; Zheng, Xiaoqing; Cui, Meng; Feng, Yi; Deng, Shuiguang; Yin, Aining; Zhou, Chunying; Tang, Jinming; Jiang, Xiaohong; Wu, Zhaohui
2007-01-01
Background Recent advances in Web and information technologies with the increasing decentralization of organizational structures have resulted in massive amounts of information resources and domain-specific services in Traditional Chinese Medicine. The massive volume and diversity of information and services available have made it difficult to achieve seamless and interoperable e-Science for knowledge-intensive disciplines like TCM. Therefore, information integration and service coordination are two major challenges in e-Science for TCM. We still lack sophisticated approaches to integrate scientific data and services for TCM e-Science. Results We present a comprehensive approach to build dynamic and extendable e-Science applications for knowledge-intensive disciplines like TCM based on semantic and knowledge-based techniques. The semantic e-Science infrastructure for TCM supports large-scale database integration and service coordination in a virtual organization. We use domain ontologies to integrate TCM database resources and services in a semantic cyberspace and deliver a semantically superior experience including browsing, searching, querying and knowledge discovering to users. We have developed a collection of semantic-based toolkits to facilitate TCM scientists and researchers in information sharing and collaborative research. Conclusion Semantic and knowledge-based techniques are suitable to knowledge-intensive disciplines like TCM. It's possible to build on-demand e-Science system for TCM based on existing semantic and knowledge-based techniques. The presented approach in the paper integrates heterogeneous distributed TCM databases and services, and provides scientists with semantically superior experience to support collaborative research in TCM discipline. PMID:17493289
Code of Federal Regulations, 2010 CFR
2010-01-01
... 12 Banks and Banking 4 2010-01-01 2010-01-01 false Extensions of credit to foreign organizations held by insured state nonmember banks; shares of foreign organizations held in connection with debts previously contracted. 347.114 Section 347.114 Banks and Banking FEDERAL DEPOSIT INSURANCE CORPORATION...
NASA Technical Reports Server (NTRS)
Mejzak, R. S.
1980-01-01
The distributed processing concept is defined in terms of control primitives, variables, and structures and their use in performing a decomposed discrete Fourier transform (DFT) application function. The design assumes interprocessor communications to be anonymous. In this scheme, all processors can access an entire common database by employing control primitives. Access to selected areas within the common database is random, enforced by a hardware lock, and determined by task and subtask pointers. This enables the number of processors in the configuration to be varied without any modifications to the control structure. Decompositional elements of the DFT application function in terms of tasks and subtasks are also described. The experimental hardware configuration consists of IMSAI 8080 chassis, which are independent 8-bit microcomputer units. These chassis are linked together into a multiple-processing system by means of a shared-memory facility. This facility consists of hardware providing a bus structure that enables up to six microcomputers to be interconnected, along with polling and arbitration logic so that only one processor has access to shared memory at any one time.
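The access pattern described, a lock admitting one processor at a time to a common database whose task pointer hands out the next subtask, can be sketched with threads standing in for the microcomputers. This is a modern illustration of the pattern only; the original ran on IMSAI 8080s with a hardware lock and bus arbitration, and all names below are illustrative:

```python
import threading

# Common database: a shared task pointer plus a results area. The lock
# stands in for the hardware lock that grants exclusive shared-memory access.
shared_db = {"task_pointer": 0, "results": []}
memory_lock = threading.Lock()
SUBTASKS = list(range(8))   # placeholder decomposition of the application

def processor(pid):
    """One worker: claim the next subtask under the lock, compute outside it."""
    while True:
        with memory_lock:                      # exclusive shared-memory access
            i = shared_db["task_pointer"]
            if i >= len(SUBTASKS):
                return                         # no work left
            shared_db["task_pointer"] = i + 1  # claim subtask i
        result = SUBTASKS[i] * SUBTASKS[i]     # stand-in for a DFT subtask
        with memory_lock:
            shared_db["results"].append((i, result))

threads = [threading.Thread(target=processor, args=(p,)) for p in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because workers claim work only through the shared pointer, the number of processors can change (three threads here) without touching the control structure, which mirrors the paper's claim.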
Protecting Data Privacy in Structured P2P Networks
NASA Astrophysics Data System (ADS)
Jawad, Mohamed; Serrano-Alvarado, Patricia; Valduriez, Patrick
P2P systems are increasingly used for efficient, scalable data sharing. Popular applications focus on massive file sharing. However, advanced applications such as online communities (e.g., medical or research communities) need to share private or sensitive data. Currently, in P2P systems, untrusted peers can easily violate data privacy by using data for malicious purposes (e.g., fraud, profiling). To prevent such behavior, the well-accepted Hippocratic database principle states that data owners should specify the purpose for which their data will be collected. In this paper, we apply these principles, together with reputation techniques, to support purpose and trust in structured P2P systems. Hippocratic databases enforce purpose-based privacy, while reputation techniques guarantee trust. We propose a P2P data privacy model which combines the Hippocratic principles and the trust notions. We also present the algorithms of PriServ, a DHT-based P2P privacy service that supports this model and prevents data privacy violations. We show, in a performance evaluation, that PriServ introduces a small overhead.
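The combination of purpose-based (Hippocratic) access control with a reputation threshold can be sketched as a single policy check. This is a minimal illustration of the idea, not PriServ's actual API; the data identifiers, policy fields, and threshold are all hypothetical:

```python
# Owner-published policies: each data item carries the purposes the owner
# authorized (Hippocratic principle) and a minimum reputation for requesters
# (a stand-in for PriServ's reputation-based trust techniques).
published = {
    "patient-record-42": {
        "allowed_purposes": {"research", "treatment"},
        "min_reputation": 0.7,
    },
}

def may_access(data_id, purpose, requester_reputation):
    """Grant access only if the stated purpose was authorized by the data
    owner and the requesting peer's reputation meets the owner's threshold."""
    policy = published[data_id]
    return (purpose in policy["allowed_purposes"]
            and requester_reputation >= policy["min_reputation"])
```

In a DHT setting the policy would travel with (or be resolved alongside) the key lookup, so the serving peer can evaluate this check before returning the data.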
MaizeGDB update: New tools, data, and interface for the maize model organism database
USDA-ARS?s Scientific Manuscript database
MaizeGDB is a highly curated, community-oriented database and informatics service to researchers focused on the crop plant and model organism Zea mays ssp. mays. Although some form of the maize community database has existed over the last 25 years, there have only been two major releases. In 1991, ...
Haber, Noah; Smith, Emily R; Moscoe, Ellen; Andrews, Kathryn; Audy, Robin; Bell, Winnie; Brennan, Alana T; Breskin, Alexander; Kane, Jeremy C; Karra, Mahesh; McClure, Elizabeth S; Suarez, Elizabeth A
2018-01-01
The pathway from evidence generation to consumption contains many steps which can lead to overstatement or misinformation. The proliferation of internet-based health news may encourage selection of media and academic research articles that overstate strength of causal inference. We investigated the state of causal inference in health research as it appears at the end of the pathway, at the point of social media consumption. We screened the NewsWhip Insights database for the most shared media articles on Facebook and Twitter reporting about peer-reviewed academic studies associating an exposure with a health outcome in 2015, extracting the 50 most-shared academic articles and media articles covering them. We designed and utilized a review tool to systematically assess and summarize studies' strength of causal inference, including generalizability, potential confounders, and methods used. These were then compared with the strength of causal language used to describe results in both academic and media articles. Two randomly assigned independent reviewers and one arbitrating reviewer from a pool of 21 reviewers assessed each article. We accepted the most shared 64 media articles pertaining to 50 academic articles for review, representing 68% of Facebook and 45% of Twitter shares in 2015. Thirty-four percent of academic studies and 48% of media articles used language that reviewers considered too strong for their strength of causal inference. Seventy percent of academic studies were considered low or very low strength of inference, with only 6% considered high or very high strength of causal inference. The most severe issues with academic studies' causal inference were reported to be omitted confounding variables and generalizability. Fifty-eight percent of media articles were found to have inaccurately reported the question, results, intervention, or population of the academic study. 
We find a large disparity between the strength of language as presented to the research consumer and the underlying strength of causal inference among the studies most widely shared on social media. However, because this sample was designed to be representative of the articles selected and shared on social media, it is unlikely to be representative of all academic and media work. More research is needed to determine how academic institutions, media organizations, and social network sharing patterns impact causal inference and language as received by the research consumer.
Smith, Emily R.; Moscoe, Ellen; Audy, Robin; Bell, Winnie; Brennan, Alana T.; Breskin, Alexander; Kane, Jeremy C.; Suarez, Elizabeth A.
2018-01-01
Background The pathway from evidence generation to consumption contains many steps which can lead to overstatement or misinformation. The proliferation of internet-based health news may encourage selection of media and academic research articles that overstate strength of causal inference. We investigated the state of causal inference in health research as it appears at the end of the pathway, at the point of social media consumption. Methods We screened the NewsWhip Insights database for the most shared media articles on Facebook and Twitter reporting about peer-reviewed academic studies associating an exposure with a health outcome in 2015, extracting the 50 most-shared academic articles and media articles covering them. We designed and utilized a review tool to systematically assess and summarize studies’ strength of causal inference, including generalizability, potential confounders, and methods used. These were then compared with the strength of causal language used to describe results in both academic and media articles. Two randomly assigned independent reviewers and one arbitrating reviewer from a pool of 21 reviewers assessed each article. Results We accepted the most shared 64 media articles pertaining to 50 academic articles for review, representing 68% of Facebook and 45% of Twitter shares in 2015. Thirty-four percent of academic studies and 48% of media articles used language that reviewers considered too strong for their strength of causal inference. Seventy percent of academic studies were considered low or very low strength of inference, with only 6% considered high or very high strength of causal inference. The most severe issues with academic studies’ causal inference were reported to be omitted confounding variables and generalizability. Fifty-eight percent of media articles were found to have inaccurately reported the question, results, intervention, or population of the academic study. 
Conclusions We find a large disparity between the strength of language as presented to the research consumer and the underlying strength of causal inference among the studies most widely shared on social media. However, because this sample was designed to be representative of the articles selected and shared on social media, it is unlikely to be representative of all academic and media work. More research is needed to determine how academic institutions, media organizations, and social network sharing patterns impact causal inference and language as received by the research consumer. PMID:29847549
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-29
... for the following securities: Index-Linked Exchangeable Notes; Equity Gold Shares; Trust Certificates; Commodity-Based Trust Shares; Currency Trust Shares; Commodity Index Trust Shares; Commodity Futures Trust Shares; Partnership Units; Trust Units; Managed Trust Securities; and Currency Warrants (together with...
Creating an Effective Network: The GRACEnet Example
NASA Astrophysics Data System (ADS)
Follett, R. F.; Del Grosso, S.
2008-12-01
Networking activities require time, work, and nurturing. The objective of this presentation is to share the experience gained from the Greenhouse gas Reduction through Agricultural Carbon Enhancement network (GRACEnet). GRACEnet, formally established in 2005 by the ARS/USDA, resulted from workshops, teleconferences, and other activities beginning in at least 2002. A critical factor in its formation was developing and formalizing a common vision, goals, and objectives, which was accomplished in a 2005 workshop. The 4-person steering committee (now 5) was charged with coordinating the part-time (0.05- to 0.5 SY/location) efforts across 30 ARS locations to develop four products: (1) a national database, (2) regional/national guidelines of management practices, (3) computer models, and (4) "state-of-knowledge" summary publications. All locations are asked to contribute data from their field studies to the database. Communication with all participants and periodic meetings are extremely important. Populating the database requires a shared vision of data sharing, a common format, and trust. Based on its e-mail list, GRACEnet has expanded from about 30 to nearly 70 participants. Annual reports and a new website help facilitate this activity.
ERIC Educational Resources Information Center
Lloyd-Strovas, Jenny D.; Arsuffi, Thomas L.
2016-01-01
We examined the diversity of environmental education (EE) in Texas, USA, by developing a framework to assess EE organizations and programs at a large scale: the Environmental Education Database of Organizations and Programs (EEDOP). This framework consisted of the following characteristics: organization/visitor demographics, pedagogy/curriculum,…
Private sector risk-sharing agreements in the United States: trends, barriers, and prospects.
Garrison, Louis P; Carlson, Josh J; Bajaj, Preeti S; Towse, Adrian; Neumann, Peter J; Sullivan, Sean D; Westrich, Kimberly; Dubois, Robert W
2015-09-01
Risk-sharing agreements (RSAs) between drug manufacturers and payers link coverage and reimbursement to real-world performance or utilization of medical products. These arrangements have garnered considerable attention in recent years. However, greater use outside the United States raises questions as to why their use has been limited in the US private sector, and whether their use might increase in the evolving US healthcare system. To understand current trends, success factors, and challenges in the use of RSAs, we conducted a review of RSAs, interviews, and a survey to understand key stakeholders' experiences and expectations for RSAs in the US private sector. Trends in the numbers of RSAs were assessed using a database of RSAs. We also conducted in-depth interviews with stakeholders from pharmaceutical companies, payer organizations, and industry experts in the United States and European Union. In addition, we administered an online survey with a broader audience to identify perceptions of the future of RSAs in the United States. Most manufacturers and payers expressed interest in RSAs and saw potential value in their use. Because of numerous barriers associated with outcomes-based agreements, stakeholders were more optimistic about financial-based RSAs. In the US private sector, however, there remains considerable interest; improved data systems and shifting incentives (via health reform and accountable care organizations) may generate more action. In the US commercial payer markets, there is continued interest among some manufacturers and payers in outcomes-based RSAs. Despite continued discussion and activity, the number of new agreements remains small.
Governance - Alignment and Configuration of Business Activities Task Group Report
2006-05-01
governance level and the Enterprise Model as a way of ensuring integration at the management and work/execution levels 3. Ensure shared services (i.e...Management Framework o QDR Organizational Model o Secretary of Defense 2006-2008 Priorities o Shared Services Defense Business Board...support for horizontal and vertical organizations • Move “supporting” organizations to shared services model May 2006 "Team Defense" 18 Task Group
Numerical cognition explains age-related changes in third-party fairness.
Chernyak, Nadia; Sandham, Beth; Harris, Paul L; Cordes, Sara
2016-10-01
Young children share fairly and expect others to do the same. Yet little is known about the underlying cognitive mechanisms that support fairness. We investigated whether children's numerical competencies are linked with their sharing behavior. Preschoolers (aged 2.5-5.5) participated in third-party resource allocation tasks in which they split a set of resources between 2 puppets. Children's numerical competence was assessed using the Give-N task (Sarnecka & Carey, 2008; Wynn, 1990). Numerical competence-specifically knowledge of the cardinal principle-explained age-related changes in fair sharing. Although many subset-knowers (those without knowledge of the cardinal principle) were still able to share fairly, they invoked turn-taking strategies and did not remember the number of resources they shared. These results suggest that numerical cognition serves as an important mechanism for fair sharing behavior, and that children employ different sharing strategies (division or turn-taking) depending on their numerical competence. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Chiu, Chia-Yen Chad; Owens, Bradley P; Tesluk, Paul E
2016-12-01
The present study was designed to produce novel theoretical insight regarding how leader humility and team member characteristics foster the conditions that promote shared leadership and when shared leadership relates to team effectiveness. Drawing on social information processing theory and adaptive leadership theory, we propose that leader humility facilitates shared leadership by promoting leadership-claiming and leadership-granting interactions among team members. We also apply dominance complementary theory to propose that team proactive personality strengthens the impact of leader humility on shared leadership. Finally, we predict that shared leadership will be most strongly related to team performance when team members have high levels of task-related competence. Using a sample of 62 Taiwanese professional work teams, we find support for our hypothesized model. The theoretical and practical implications of these results for team leadership, humility, team composition, and shared leadership are discussed. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
5 CFR 950.106 - PCFO expense recovery.
Code of Federal Regulations, 2014 CFR
2014-01-01
... VOLUNTARY ORGANIZATIONS General Provisions § 950.106 PCFO expense recovery. (a) The PCFO shall recover from... 950.106 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT (CONTINUED) CIVIL SERVICE REGULATIONS... shared proportionately by all the recipient organizations reflecting their percentage share of gross...
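The proportional recovery rule in this snippet amounts to simple pro-rata arithmetic: expenses are split among recipient organizations according to each one's percentage share of gross receipts. The sketch below is an illustration of that arithmetic only, not the regulation's actual accounting procedure; the organization names and dollar amounts are hypothetical.

```python
def prorate_expenses(total_expenses, gross_receipts):
    """Split total campaign expenses among recipient organizations
    in proportion to each one's share of gross receipts."""
    total_receipts = sum(gross_receipts.values())
    return {
        org: total_expenses * amount / total_receipts
        for org, amount in gross_receipts.items()
    }

# Hypothetical example: $1,000 of recoverable expenses, three recipients
# whose gross receipts split 50% / 30% / 20%.
shares = prorate_expenses(1000.0, {"A": 50000.0, "B": 30000.0, "C": 20000.0})
# → {'A': 500.0, 'B': 300.0, 'C': 200.0}
```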
5 CFR 950.106 - PCFO expense recovery.
Code of Federal Regulations, 2012 CFR
2012-01-01
... VOLUNTARY ORGANIZATIONS General Provisions § 950.106 PCFO expense recovery. (a) The PCFO shall recover from... 950.106 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT (CONTINUED) CIVIL SERVICE REGULATIONS... shared proportionately by all the recipient organizations reflecting their percentage share of gross...
5 CFR 950.106 - PCFO expense recovery.
Code of Federal Regulations, 2010 CFR
2010-01-01
... VOLUNTARY ORGANIZATIONS General Provisions § 950.106 PCFO expense recovery. (a) The PCFO shall recover from... 950.106 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT (CONTINUED) CIVIL SERVICE REGULATIONS... shared proportionately by all the recipient organizations reflecting their percentage share of gross...
5 CFR 950.106 - PCFO expense recovery.
Code of Federal Regulations, 2011 CFR
2011-01-01
... VOLUNTARY ORGANIZATIONS General Provisions § 950.106 PCFO expense recovery. (a) The PCFO shall recover from... 950.106 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT (CONTINUED) CIVIL SERVICE REGULATIONS... shared proportionately by all the recipient organizations reflecting their percentage share of gross...
5 CFR 950.106 - PCFO expense recovery.
Code of Federal Regulations, 2013 CFR
2013-01-01
... VOLUNTARY ORGANIZATIONS General Provisions § 950.106 PCFO expense recovery. (a) The PCFO shall recover from... 950.106 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT (CONTINUED) CIVIL SERVICE REGULATIONS... shared proportionately by all the recipient organizations reflecting their percentage share of gross...