Development of Databases on Iodine in Foods and Dietary Supplements
Ershow, Abby G.; Skeaff, Sheila A.; Merkel, Joyce M.; Pehrsson, Pamela R.
2018-01-01
Iodine is an essential micronutrient required for normal growth and neurodevelopment; thus, an adequate intake of iodine is particularly important for pregnant and lactating women, and throughout childhood. Low levels of iodine in the soil and groundwater are common in many parts of the world, often leading to diets that are low in iodine. Widespread salt iodization has eradicated severe iodine deficiency, but mild-to-moderate deficiency is still prevalent even in many developed countries. To understand patterns of iodine intake and to develop strategies for improving intake, it is important to characterize all sources of dietary iodine, and national databases on the iodine content of major dietary contributors (including foods, beverages, water, salts, and supplements) provide a key information resource. This paper discusses the importance of well-constructed databases on the iodine content of foods, beverages, and dietary supplements; the availability of iodine databases worldwide; and factors related to variability in iodine content that should be considered when developing such databases. We also describe current efforts in iodine database development in the United States, the use of iodine composition data to develop food fortification policies in New Zealand, and how iodine content databases might be used when considering the iodine intake and status of individuals and populations. PMID:29342090
Content based information retrieval in forensic image databases.
Geradts, Zeno; Bijhold, Jurrien
2002-03-01
This paper gives an overview of the various available image databases and of ways to search these databases by image content. Developments by research groups working on image database searching are evaluated and compared with the forensic databases that exist. Forensic image databases of fingerprints, faces, shoeprints, handwriting, cartridge cases, drug tablets, and tool marks are described. The developments in these fields appear to be valuable for forensic databases, especially the MPEG-7 framework, which standardizes searching in image databases. In the future, combining these databases (including DNA databases) can result in stronger forensic evidence.
Tufts Health Sciences Database: Lessons, Issues, and Opportunities.
ERIC Educational Resources Information Center
Lee, Mary Y.; Albright, Susan A.; Alkasab, Tarik; Damassa, David A.; Wang, Paul J.; Eaton, Elizabeth K.
2003-01-01
Describes a seven-year experience with developing the Tufts Health Sciences Database, a database-driven information management system that combines the strengths of a digital library, content delivery tools, and curriculum management. Identifies major effects on teaching and learning. Also addresses issues of faculty development, copyright and…
Phenol-Explorer: an online comprehensive database on polyphenol contents in foods.
Neveu, V; Perez-Jiménez, J; Vos, F; Crespy, V; du Chaffaut, L; Mennen, L; Knox, C; Eisner, R; Cruz, J; Wishart, D; Scalbert, A
2010-01-01
A number of databases on the plant metabolome describe the chemistry and biosynthesis of plant chemicals. However, no such database is specifically focused on foods and more precisely on polyphenols, one of the major classes of phytochemicals. As antioxidants, polyphenols influence human health and may play a role in the prevention of a number of chronic diseases such as cardiovascular diseases, some cancers or type 2 diabetes. To determine polyphenol intake in populations and study their association with health, it is essential to have detailed information on their content in foods. However this information is not easily collected due to the variety of their chemical structures and the variability of their content in a given food. Phenol-Explorer is the first comprehensive web-based database on polyphenol content in foods. It contains more than 37,000 original data points collected from 638 scientific articles published in peer-reviewed journals. The quality of these data has been evaluated before they were aggregated to produce final representative mean content values for 502 polyphenols in 452 foods. The web interface allows making various queries on the aggregated data to identify foods containing a given polyphenol or polyphenols present in a given food. For each mean content value, it is possible to trace all original content values and their literature sources. Phenol-Explorer is a major step forward in the development of databases on food constituents and the food metabolome. It should help researchers to better understand the role of phytochemicals in the technical and nutritional quality of food, and food manufacturers to develop tailor-made healthy foods. Database URL: http://www.phenol-explorer.eu.
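As a minimal illustration of the aggregation-with-traceability idea described above (using hypothetical table and field names, not Phenol-Explorer's actual schema), original literature values can be averaged into representative mean contents while each mean stays traceable to its source data points:

```python
# Sketch only: hypothetical data and field names, not Phenol-Explorer's schema.
import pandas as pd

# One row per original data point: (polyphenol, food, content, publication).
original = pd.DataFrame([
    {"polyphenol": "quercetin", "food": "onion", "content_mg_per_100g": 20.0, "source": "Smith 1998"},
    {"polyphenol": "quercetin", "food": "onion", "content_mg_per_100g": 32.5, "source": "Lee 2004"},
    {"polyphenol": "quercetin", "food": "apple", "content_mg_per_100g": 4.2,  "source": "Kim 2001"},
])

# Aggregated table: representative mean content per polyphenol-food pair.
aggregated = (original
              .groupby(["polyphenol", "food"], as_index=False)["content_mg_per_100g"]
              .mean())

# Query: foods containing a given polyphenol.
print(aggregated[aggregated["polyphenol"] == "quercetin"])

# Traceability: recover the original values and sources behind one mean.
trace = original[(original["polyphenol"] == "quercetin") & (original["food"] == "onion")]
print(trace[["content_mg_per_100g", "source"]])
```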
Peng, Jinye; Babaguchi, Noboru; Luo, Hangzai; Gao, Yuli; Fan, Jianping
2010-07-01
Digital video now plays an important role in supporting more profitable online patient training and counseling, and integration of patient training videos from multiple competitive organizations in the health care network will result in better offerings for patients. However, privacy concerns often prevent multiple competitive organizations from sharing and integrating their patient training videos. In addition, patients with infectious or chronic diseases may not want the online patient training organizations to identify who they are or even which video clips they are interested in. Thus, there is an urgent need to develop more effective techniques to protect both video content privacy and access privacy. In this paper, we have developed a new approach to construct a distributed Hippocratic video database system for supporting more profitable online patient training and counseling. First, a new database modeling approach is developed to support concept-oriented video database organization and to assign a degree of privacy to the video content at each database level automatically. Second, a new algorithm is developed to protect video content privacy at the level of individual video clips by filtering out the privacy-sensitive human objects automatically. In order to integrate the patient training videos from multiple competitive organizations for constructing a centralized video database indexing structure, a privacy-preserving video sharing scheme is developed to support privacy-preserving distributed classifier training and to prevent statistical inference from the videos that are shared for cross-validation of video classifiers. Our experiments on large-scale video databases have also provided very convincing results.
Polyamines in foods: development of a food database
Ali, Mohamed Atiya; Poortvliet, Eric; Strömberg, Roger; Yngve, Agneta
2011-01-01
Background: Knowing the levels of polyamines (putrescine, spermidine, and spermine) in different foods is of interest due to the association of these bioactive nutrients with health and disease. There is a lack of relevant information on their contents in foods. Objective: To develop a food polyamine database from published data by which polyamine intake and food contribution to this intake can be estimated, and to determine the levels of polyamines in Swedish dairy products. Design: Extensive literature search and laboratory analysis of selected Swedish dairy products. Polyamine contents in foods were collected through an extensive literature search of databases. Polyamines in different types of Swedish dairy products (milk with different fat percentages, yogurt, cheeses, and sour milk) were determined using high performance liquid chromatography (HPLC) equipped with a UV detector. Results: Fruits and cheese were the richest sources of putrescine, while vegetables and meat products were found to be rich in spermidine and spermine, respectively. The content of polyamines in cheese varied considerably between studies. Among the analyzed Swedish dairy products, matured cheese had the highest total polyamine contents, with values of 52.3, 1.2, and 2.6 mg/kg for putrescine, spermidine, and spermine, respectively. Low-fat milk had higher putrescine and spermidine contents, 1.2 and 1.0 mg/kg, respectively, than the other types of milk. Conclusions: The database aids other researchers in their quest for information regarding polyamine intake from foods. Connecting the polyamine contents in food with the Swedish Food Database allows for estimation of polyamine contents per portion. PMID:21249159
WLN's Database: New Directions.
ERIC Educational Resources Information Center
Ziegman, Bruce N.
1988-01-01
Describes features of the Western Library Network's database, including the database structure, authority control, contents, quality control, and distribution methods. The discussion covers changes in distribution necessitated by increasing telecommunications costs and the development of optical data disk products. (CLB)
Developing an A Priori Database for Passive Microwave Snow Water Retrievals Over Ocean
NASA Astrophysics Data System (ADS)
Yin, Mengtao; Liu, Guosheng
2017-12-01
A physically optimized a priori database is developed for Global Precipitation Measurement Microwave Imager (GMI) snow water retrievals over ocean. The initial snow water content profiles are derived from CloudSat Cloud Profiling Radar (CPR) measurements. A radiative transfer model, in which the single-scattering properties of nonspherical snowflakes are based on discrete dipole approximation results, is employed to simulate brightness temperatures and their gradients. Snow water content profiles are then optimized through a one-dimensional variational (1D-Var) method. After the 1D-Var optimization, the standard deviations of the differences between observed and simulated brightness temperatures are of a magnitude similar to the observation errors defined for the observation error covariance matrix, indicating that the variational method is successful. The optimized database is applied in a Bayesian snow water retrieval algorithm. The retrieval results indicate that the 1D-Var approach has a positive impact on the GMI-retrieved snow water content profiles by improving the physical consistency between snow water content profiles and observed brightness temperatures. The global distribution of snow water contents retrieved from the a priori database is compared with CloudSat CPR estimates. The two estimates show a similar pattern of global distribution, and the difference between their global means is small. In addition, we investigate the impact of using physical parameters to subset the database for snow water retrievals. It is shown that using total precipitable water to subset the database with 1D-Var optimization is beneficial for snow water retrievals.
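For reference, the cost function below is the standard form minimized by a 1D-Var retrieval; the notation is the textbook convention and is assumed here rather than taken from the paper.

```latex
% Standard 1D-Var cost function (textbook form; notation assumed, not the paper's).
J(\mathbf{x}) = \tfrac{1}{2}(\mathbf{x}-\mathbf{x}_b)^{\mathsf{T}}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b)
              + \tfrac{1}{2}\bigl(\mathbf{y}-H(\mathbf{x})\bigr)^{\mathsf{T}}\mathbf{R}^{-1}\bigl(\mathbf{y}-H(\mathbf{x})\bigr)
```

Here x is the snow water content profile, x_b the CloudSat-derived first guess, y the observed GMI brightness temperatures, H the radiative transfer (observation) operator, and B and R the background and observation error covariance matrices. Minimizing J drives the simulated-minus-observed brightness temperature residuals toward the errors assumed in R, which is consistent with the standard deviations reported in the abstract.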
The development of a composition database of gluten-free products.
Mazzeo, Teresa; Cauzzi, Silvia; Brighenti, Furio; Pellegrini, Nicoletta
2015-06-01
To develop a composition database of a number of foods representative of different categories of gluten-free products in the Italian diet. The database was built using the nutritional composition of the products, taking into consideration both the composition of the ingredients and the nutritional information reported on the product label. The nutrient composition of each ingredient was obtained from two Italian databases (European Institute of Oncology and the National Institute for Food and Nutrition). The study developed a food composition database including a total of sixty foods representative of different categories of gluten-free products sold on the Italian market. The composition of the products included in the database is given in terms of quantity of macro- and micronutrients per 100 g of product as sold, and includes the full range of nutrient data present in traditional databases of gluten-containing foods. As expected, most of the products had a high content of carbohydrates, and some of them can be labelled as a source of fibre (>3 g/100 g). Regarding micronutrients, among the products considered, breads, pizzas and snacks were especially high in Na content (>400-500 mg/100 g). This database provides an initial useful tool for future nutritional surveys on the dietary habits of coeliac people.
Urban Neighborhood Information Systems: Crime Prevention and Control Applications.
ERIC Educational Resources Information Center
Pattavina, April; Pierce, Glenn; Saiz, Alan
2002-01-01
Chronicles the need for and development of an interdisciplinary, integrated neighborhood-level database for Boston, Massachusetts, discussing database content and potential applications of this database to a range of criminal justice problems and initiatives (e.g., neighborhood crime patterns, needs assessment, and program planning and…
Application of new type of distributed multimedia databases to networked electronic museum
NASA Astrophysics Data System (ADS)
Kuroda, Kazuhide; Komatsu, Naohisa; Komiya, Kazumi; Ikeda, Hiroaki
1999-01-01
Recently, various kinds of multimedia application systems have been actively developed, building on advanced high-speed communication networks, computer processing technologies, and digital content-handling technologies. Against this background, this paper proposes a new distributed multimedia database system which can effectively perform cooperative retrieval among distributed databases. The proposed system introduces a new concept of a 'retrieval manager', which functions as an intelligent controller so that the user can recognize a set of distributed databases as one logical database. The logical database dynamically generates and executes a preferred combination of retrieval parameters on the basis of both directory data and the system environment. Moreover, a concept of 'domain' is defined in the system as a managing unit of retrieval; retrieval can be performed effectively through cooperative processing among multiple domains. Communication language and protocols are also defined in the system and are used in every communication action within it. A language interpreter in each machine translates the communication language into the internal language used by that machine. Using the language interpreter, internal modules such as the DBMS and user interface modules can be selected freely. A concept of 'content-set' is also introduced: a content-set is defined as a package of mutually related contents, and the system handles a content-set as one object. The user terminal can effectively control the display of retrieved contents by referring to data indicating the relations among the contents in the content-set. In order to verify the function of the proposed system, a networked electronic museum was built experimentally. The results of this experiment indicate that the proposed system can effectively retrieve the target contents under the control of a number of distributed domains, and that the system works effectively even as it grows large.
Search Filter Precision Can Be Improved By NOTing Out Irrelevant Content
Wilczynski, Nancy L.; McKibbon, K. Ann; Haynes, R. Brian
2011-01-01
Background: Most methodologic search filters developed for use in large electronic databases such as MEDLINE have low precision. One method that has been proposed but not tested for improving precision is NOTing out irrelevant content. Objective: To determine if search filter precision can be improved by NOTing out the text words and index terms assigned to those articles that are retrieved but are off-target. Design: Analytic survey. Methods: NOTing out unique terms in off-target articles and testing search filter performance in the Clinical Hedges Database. Main Outcome Measures: Sensitivity, specificity, precision and number needed to read (NNR). Results: For all purpose categories (diagnosis, prognosis and etiology) except treatment and for all databases (MEDLINE, EMBASE, CINAHL and PsycINFO), constructing search filters that NOTed out irrelevant content resulted in substantive improvements in NNR (over four-fold for some purpose categories and databases). Conclusion: Search filter precision can be improved by NOTing out irrelevant content. PMID:22195215
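As a minimal sketch of the NOTing-out idea (with made-up terms and filter syntax, not the study's actual Clinical Hedges filters), the snippet below NOTs out of a base filter only those terms that never occur in relevant articles, so precision can improve without sacrificing sensitivity:

```python
# Sketch of "NOTing out" irrelevant content from a search filter.
# Terms and filter syntax are illustrative, not the study's actual filters.
def not_out(base_filter: str, relevant_terms: set, offtarget_terms: set) -> str:
    # NOT out only terms unique to off-target articles, preserving sensitivity.
    unique_offtarget = sorted(offtarget_terms - relevant_terms)
    if not unique_offtarget:
        return base_filter
    return f"({base_filter}) NOT ({' OR '.join(unique_offtarget)})"

relevant = {"randomized controlled trial", "double-blind"}
offtarget = {"case report", "editorial", "double-blind"}
print(not_out("clinical trial[pt] OR random*", relevant, offtarget))
# (clinical trial[pt] OR random*) NOT (case report OR editorial)
```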
Development of a web-based video management and application processing system
NASA Astrophysics Data System (ADS)
Chan, Shermann S.; Wu, Yi; Li, Qing; Zhuang, Yueting
2001-07-01
How to facilitate efficient video manipulation and access in a web-based environment is becoming a popular trend for video applications. In this paper, we present a web-oriented video management and application processing system, based on our previous work on multimedia database and content-based retrieval. In particular, we extend the VideoMAP architecture with specific web-oriented mechanisms, which include: (1) Concurrency control facilities for the editing of video data among different types of users, such as Video Administrator, Video Producer, Video Editor, and Video Query Client; different users are assigned various priority levels for different operations on the database. (2) Versatile video retrieval mechanism which employs a hybrid approach by integrating a query-based (database) mechanism with content-based retrieval (CBR) functions; its specific language (CAROL/ST with CBR) supports spatio-temporal semantics of video objects, and also offers an improved mechanism to describe visual content of videos by content-based analysis method. (3) Query profiling database which records the 'histories' of various clients' query activities; such profiles can be used to provide the default query template when a similar query is encountered by the same kind of users. An experimental prototype system is being developed based on the existing VideoMAP prototype system, using Java and VC++ on the PC platform.
Viennas, Emmanouil; Komianou, Angeliki; Mizzi, Clint; Stojiljkovic, Maja; Mitropoulou, Christina; Muilu, Juha; Vihinen, Mauno; Grypioti, Panagiota; Papadaki, Styliani; Pavlidis, Cristiana; Zukic, Branka; Katsila, Theodora; van der Spek, Peter J.; Pavlovic, Sonja; Tzimas, Giannis; Patrinos, George P.
2017-01-01
FINDbase (http://www.findbase.org) is a comprehensive data repository that records the prevalence of clinically relevant genomic variants in various populations worldwide, such as pathogenic variants leading mostly to monogenic disorders and pharmacogenomics biomarkers. The database also records the incidence of rare genetic diseases in various populations, all in well-distinct data modules. Here, we report extensive data content updates in all data modules, with direct implications for clinical pharmacogenomics. We also report significant new developments in FINDbase, namely (i) the release of a new version of the ETHNOS software that catalyzes the development and curation of national/ethnic genetic databases, (ii) the migration of all FINDbase data content into 90 distinct national/ethnic mutation databases, all built around Microsoft's PivotViewer (http://www.getpivot.com) software, (iii) new data visualization tools, and (iv) the interrelation of FINDbase with the DruGeVar database, with direct implications for clinical pharmacogenomics. The abovementioned updates further enhance the impact of FINDbase as a key resource for Genomic Medicine applications. PMID:27924022
BDVC (Bimodal Database of Violent Content): A database of violent audio and video
NASA Astrophysics Data System (ADS)
Rivera Martínez, Jose Luis; Mijes Cruz, Mario Humberto; Rodríguez Vázqu, Manuel Antonio; Rodríguez Espejo, Luis; Montoya Obeso, Abraham; García Vázquez, Mireya Saraí; Ramírez Acosta, Alejandro Álvaro
2017-09-01
Nowadays there is a trend towards the use of unimodal databases for multimedia content description, organization and retrieval applications involving a single type of content, such as text, voice or images; bimodal databases, by contrast, make it possible to associate two different types of content semantically, such as audio-video or image-text, among others. The generation of a bimodal audio-video database implies the creation of a connection between the multimedia contents through the semantic relation that associates the actions in both types of information. This paper describes in detail the characteristics and methodology used for the creation of the bimodal database of violent content; the semantic relationship is established by the proposed concepts that describe the audiovisual information. The use of bimodal databases in applications related to audiovisual content processing allows an increase in semantic performance if and only if these applications process both types of content. This bimodal database contains 580 annotated audiovisual segments, with a duration of 28 minutes, divided into 41 classes. Bimodal databases are a tool for the generation of applications for the semantic web.
The Dietary Supplement Ingredient Database (DSID) - 3 release.
USDA-ARS?s Scientific Manuscript database
The Dietary Supplement Ingredient Database (DSID) provides analytically-derived estimates of ingredient content in dietary supplement (DS) products sold in the United States. DSID was developed by the Nutrient Data Laboratory (NDL) within the Agricultural Research Service, U.S. Department of Agricu...
Bibliographic Databases Outside of the United States.
ERIC Educational Resources Information Center
McGinn, Thomas P.; And Others
1988-01-01
Eight articles describe the development, content, and structure of databases outside of the United States. Features discussed include library involvement, authority control, shared cataloging services, union catalogs, thesauri, abstracts, and distribution methods. Countries and areas represented are Latin America, Australia, the United Kingdom,…
THE ART OF DATA MINING THE MINEFIELDS OF TOXICITY ...
Toxicity databases have a special role in predictive toxicology, providing ready access to historical information throughout the workflow of discovery, development, and product safety processes in drug development as well as in review by regulatory agencies. To provide accurate information within a hypothesis-building environment, the content of the databases needs to be rigorously modeled using standards and controlled vocabulary. The utilitarian purposes of databases vary widely, ranging from a source of (Q)SAR datasets for modelers to a basis for
ETHNOS: A versatile electronic tool for the development and curation of national genetic databases.
van Baal, Sjozef; Zlotogora, Joël; Lagoumintzis, George; Gkantouna, Vassiliki; Tzimas, Ioannis; Poulas, Konstantinos; Tsakalidis, Athanassios; Romeo, Giovanni; Patrinos, George P
2010-06-01
National and ethnic mutation databases (NEMDBs) are emerging online repositories, recording extensive information about the described genetic heterogeneity of an ethnic group or population. These resources facilitate the provision of genetic services and provide a comprehensive list of genomic variations among different populations. As such, they enhance awareness of the various genetic disorders. Here, we describe the features of the ETHNOS software, a simple but versatile tool based on a flat-file database that is specifically designed for the development and curation of NEMDBs. ETHNOS is freely available software that runs more than half of the NEMDBs currently available. Given the emerging need for NEMDBs in genetic testing services, and the fact that ETHNOS is the only off-the-shelf software available for NEMDB development and curation, its adoption in subsequent NEMDB development would contribute towards data content uniformity, unlike the diverse contents and quality of the available gene (locus)-specific databases. Finally, we allude to the potential applications of NEMDBs, not only as worldwide central allele frequency repositories, but also, and most importantly, as data warehouses of individual-level genomic data, hence allowing for a comprehensive ethnicity-specific documentation of genomic variation.
Silva, Cristina; Fresco, Paula; Monteiro, Joaquim; Rama, Ana Cristina Ribeiro
2013-08-01
Evidence-Based Practice requires health care decisions to be based on the best available evidence. The "Information Mastery" model proposes that clinicians should use sources of information whose relevance and validity have already been evaluated, provided at the point of care. Drug databases (DB) allow easy and fast access to information and have the benefit of more frequent content updates. Relevant information, in the context of drug therapy, is that which supports safe and effective use of medicines. Accordingly, the European Guideline on the Summary of Product Characteristics (EG-SmPC) was used as a standard to evaluate the inclusion of relevant information contents in DB. Objective: to develop and test a method to evaluate the relevancy of DB contents by assessing the inclusion of information items deemed relevant for effective and safe drug use. Method: hierarchical organisation and selection of the principles defined in the EG-SmPC; definition of criteria to assess inclusion of selected information items; creation of a categorisation and quantification system that allows score calculation; calculation of relative differences (RD) of scores for comparison with an "ideal" database, defined as the one that achieves the best quantification possible for each of the information items; pilot test on a sample of 9 drug databases, using 10 drugs frequently associated in the literature with morbidity and mortality and also widely consumed in Portugal. Main outcome measure: calculation of individual and global scores for clinically relevant information items of drug monographs in databases, using the categorisation and quantification system created. Results: (A) Method development: selection of sections, subsections, relevant information items and corresponding requisites; system to categorise and quantify their inclusion; score and RD calculation procedure. (B) Pilot test: scores were calculated for the 9 databases; globally, all databases evaluated differed significantly from the "ideal" database; some DB performed better, but performance was inconsistent at the subsection level within the same DB. Conclusion: the method developed allows quantification of the inclusion of relevant information items in DB and comparison with an "ideal" database. It is necessary to consult diverse DB in order to find all the relevant information needed to support clinical drug use.
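A minimal sketch of the scoring arithmetic described above, with hypothetical information items and maximum levels (the study's actual categorisation system is more detailed): each database is scored on how fully it includes each relevant item, and the relative difference (RD) is computed against an "ideal" database that attains the maximum on every item.

```python
# Hypothetical items and maximum quantification levels, for illustration only.
ITEM_MAX = {"contraindications": 2, "interactions": 2, "dose_adjustment": 2, "pregnancy": 2}

def db_score(inclusion: dict) -> int:
    # 'inclusion' maps each information item to its quantification level (0..max).
    return sum(min(level, ITEM_MAX[item]) for item, level in inclusion.items())

ideal = sum(ITEM_MAX.values())  # the "ideal" database scores the maximum everywhere
db_a = db_score({"contraindications": 2, "interactions": 1,
                 "dose_adjustment": 0, "pregnancy": 2})
rd = (db_a - ideal) / ideal     # negative RD: how far below the ideal database
print(f"score={db_a}/{ideal}, RD={rd:.0%}")  # score=5/8, RD=-38%
```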
Sharma, Vishal K; Fraulin, Frankie Og; Harrop, A Robertson; McPhalen, Donald F
2011-01-01
Databases are useful tools in clinical settings. The authors review the benefits and challenges associated with the development and implementation of an efficient electronic database for the multidisciplinary Vascular Birthmark Clinic at the Alberta Children's Hospital, Calgary, Alberta. The content and structure of the database were designed using the technical expertise of a data analyst from the Calgary Health Region. Relevant clinical and demographic data fields were included with the goal of documenting ongoing care of individual patients, and facilitating future epidemiological studies of this patient population. After completion of this database, 10 challenges encountered during development were retrospectively identified. Practical solutions for these challenges are presented. The challenges identified during the database development process included: identification of relevant data fields; balancing simplicity and user-friendliness with complexity and comprehensive data storage; database expertise versus clinical expertise; software platform selection; linkage of data from the previous spreadsheet to a new data management system; ethics approval for the development of the database and its utilization for research studies; ensuring privacy and limited access to the database; integration of digital photographs into the database; adoption of the database by support staff in the clinic; and maintaining up-to-date entries in the database. There are several challenges involved in the development of a useful and efficient clinical database. Awareness of these potential obstacles, in advance, may simplify the development of clinical databases by others in various surgical settings.
Pharmacology Portal: An Open Database for Clinical Pharmacologic Laboratory Services.
Karlsen Bjånes, Tormod; Mjåset Hjertø, Espen; Lønne, Lars; Aronsen, Lena; Andsnes Berg, Jon; Bergan, Stein; Otto Berg-Hansen, Grim; Bernard, Jean-Paul; Larsen Burns, Margrete; Toralf Fosen, Jan; Frost, Joachim; Hilberg, Thor; Krabseth, Hege-Merete; Kvan, Elena; Narum, Sigrid; Austgulen Westin, Andreas
2016-01-01
More than 50 Norwegian public and private laboratories provide one or more analyses for therapeutic drug monitoring or testing for drugs of abuse. Practices differ among laboratories, and analytical repertoires can change rapidly as new substances become available for analysis. The Pharmacology Portal was developed to provide an overview of these activities and to standardize the practices and terminology among laboratories. The Pharmacology Portal is a modern dynamic web database comprising all available analyses within therapeutic drug monitoring and testing for drugs of abuse in Norway. Content can be retrieved by using the search engine or by scrolling through substance lists. The core content is a substance registry updated by a national editorial board of experts within the field of clinical pharmacology. This ensures quality and consistency regarding substance terminologies and classification. All laboratories publish their own repertoires in a user-friendly workflow, adding laboratory-specific details to the core information in the substance registry. The user management system ensures that laboratories are restricted from editing content in the database core or in repertoires within other laboratory subpages. The portal is for nonprofit use, and has been fully funded by the Norwegian Medical Association, the Norwegian Society of Clinical Pharmacology, and the 8 largest pharmacologic institutions in Norway. The database server runs an open-source content management system that ensures flexibility with respect to further development projects, including the potential expansion of the Pharmacology Portal to other countries.
Browsing a Database of Multimedia Learning Material.
ERIC Educational Resources Information Center
Persico, Donatella; And Others
1992-01-01
Describes a project that addressed the problem of courseware reusability by developing a database structure suitable for organizing multimedia learning material in a given content domain. A prototype system that allows browsing a DBLM (Data Base of Learning Material) on earth science is described, and future plans are discussed. (five references)…
Database Marketplace 2010: Feast and Famine
ERIC Educational Resources Information Center
Tenopir, Carol; Baker, Gayle; Grogg, Jill
2010-01-01
With fancy new software developments and growth in both the richness of content and delivery options for information resources, the Database Marketplace 2010 is a feast for buyers. Unfortunately, institutional budget cuts may force more of a famine mentality--with belt-tightening for most, and only purchases that are life-sustaining being served…
Integrating Digital Images into the Art and Art History Curriculum.
ERIC Educational Resources Information Center
Pitt, Sharon P.; Updike, Christina B.; Guthrie, Miriam E.
2002-01-01
Describes an Internet-based image database system connected to a flexible, in-class teaching and learning tool (the Madison Digital Image Database) developed at James Madison University to bring digital images to the arts and humanities classroom. Discusses content, copyright issues, ensuring system effectiveness, instructional impact, sharing the…
A neotropical Miocene pollen database employing image-based search and semantic modeling
Han, Jing Ginger; Cao, Hongfei; Barb, Adrian; Punyasena, Surangi W.; Jaramillo, Carlos; Shyu, Chi-Ren
2014-01-01
• Premise of the study: Digital microscopic pollen images are being generated with increasing speed and volume, producing opportunities to develop new computational methods that increase the consistency and efficiency of pollen analysis and provide the palynological community a computational framework for information sharing and knowledge transfer. • Methods: Mathematical methods were used to assign trait semantics (abstract morphological representations) of the images of neotropical Miocene pollen and spores. Advanced database-indexing structures were built to compare and retrieve similar images based on their visual content. A Web-based system was developed to provide novel tools for automatic trait semantic annotation and image retrieval by trait semantics and visual content. • Results: Mathematical models that map visual features to trait semantics can be used to annotate images with morphology semantics and to search image databases with improved reliability and productivity. Images can also be searched by visual content, providing users with customized emphases on traits such as color, shape, and texture. • Discussion: Content- and semantic-based image searches provide a powerful computational platform for pollen and spore identification. The infrastructure outlined provides a framework for building a community-wide palynological resource, streamlining the process of manual identification, analysis, and species discovery. PMID:25202648
Database for content of mercury in Polish brown coal
NASA Astrophysics Data System (ADS)
Jastrząb, Krzysztof
2018-01-01
Poland is rated among the countries with the largest levels of mercury emission in Europe. According to information provided by the National Centre for Balancing and Management of Emissions (KOBiZE), more than 10.5 tons of mercury and its compounds were emitted into the atmosphere from the area of Poland in 2015. Within the scope of the BazaHg project, lasting from 2014 to 2015 and co-financed by the National Centre for Research and Development (NCBiR), a database was set up specifying the mercury content of Polish hard steam coal, coking coal and brown coal (lignite) grades. With regard to domestic brown coal, the database comprises information on coal grades from the 'Bełchatów', 'Adamów', 'Turów' and 'Sieniawa' brown coal mines. Currently the database contains 130 records with parameters of brown coal, where each record covers a technical analysis (content of moisture, ash and volatile matter), an elemental analysis (CHNS), the content of chlorine and mercury, as well as net calorific value and heat of combustion. The mercury content of the brown coal samples under test ranged from 44 to 985 μg of Hg/kg, with an average of 345 μg of Hg/kg. The established database is a reliable and trustworthy source of information about the mercury content of Polish fossil fuels. Combined with information about coal consumption by individual power stations and multiplied by appropriate emission coefficients, these data can serve as the basis for establishing the loads of mercury emitted into the atmosphere by individual stations and by the entire power sector in total. It will also enable Polish central organizations and individual business entities to implement a reasonable policy with respect to mercury emissions into the atmosphere.
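A back-of-the-envelope sketch of the emission-estimation use case described above: only the 345 μg/kg mean comes from the article; the consumption figure and emission factor are invented for illustration.

```python
# Illustrative estimate: Hg load = coal burned x mean Hg content x emission factor.
MEAN_HG_UG_PER_KG = 345      # mean Hg content of Polish brown coal (from the database)
coal_burned_t = 5_000_000    # hypothetical annual lignite consumption of one station, tonnes
emission_factor = 0.5        # hypothetical fraction of Hg reaching the atmosphere

hg_emitted_kg = coal_burned_t * 1000 * MEAN_HG_UG_PER_KG * 1e-9 * emission_factor
print(f"Estimated Hg emission: {hg_emitted_kg:.0f} kg/year")  # ~863 kg/year
```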
Data, knowledge and method bases in chemical sciences. Part IV. Current status in databases.
Braibanti, Antonio; Rao, Rupenaguntla Sambasiva; Rao, Gollapalli Nagesvara; Ramam, Veluri Anantha; Rao, Sattiraju Veera Venkata Satyanarayana
2002-01-01
Computer-readable databases have become an integral part of chemical research, from the planning of data acquisition to the interpretation of the information generated. The databases available today are numerical, spectral and bibliographic. Data representation by different schemes--relational, hierarchical and object-based--is demonstrated. A quality index (QI) throws light on the quality of the data. The objective, prospects and impact of database activity on expert systems are discussed. The number and size of corporate databases available on international networks have grown beyond a manageable number, leading to databases about their contents. Subsets of corporate or small databases have been developed by groups of chemists. The features and role of knowledge-based or intelligent databases are described.
[Development of an analyzing system for soil parameters based on NIR spectroscopy].
Zheng, Li-Hua; Li, Min-Zan; Sun, Hong
2009-10-01
A rapid estimation system for soil parameters based on spectral analysis was developed using object-oriented (OO) technology. A SOIL class was designed; an instance of the SOIL class is a soil sample object with a particular type, specific physical properties and spectral characteristics. By extracting the effective information from the spectral data of the modeling soil objects, a mapping model was established between the soil parameters and the spectral data, and the mapping model parameters could be saved in the model database. When forecasting the content of any soil parameter, the corresponding prediction model can be selected for objects with the same soil type and similar soil physical properties. After the target soil sample object is passed into the prediction model and processed by the system, an accurate forecast of the content of the target soil sample is obtained. The system includes modules for file operations, spectra pretreatment, sample analysis, calibration and validation, and sample content forecasting. The system was designed to run independently of the measuring equipment. The parameters and spectral data files (*.xls) of known soil samples can be input into the system. With various data pretreatments selected according to the conditions at hand, the predicted content appears in the terminal and the forecasting model can be stored in the model database. The system reads the prediction models and their parameters from the model database through the module interface, and the data of the tested samples are then transferred into the selected model. Finally, the content of soil parameters can be predicted by the developed system. The system was programmed with Visual C++ 6.0 and Matlab 7.0, and Access XP was used to create and manage the model database.
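A conceptual sketch of the object-oriented design described above. The actual system was written in Visual C++ and Matlab; the Python below and all names in it are illustrative only.

```python
# Illustrative sketch: SOIL objects couple sample properties with spectra;
# a model store maps (soil type, parameter) to a calibrated prediction model.
from dataclasses import dataclass

@dataclass
class Soil:
    soil_type: str
    physical_properties: dict      # e.g. {"texture": "silt"}
    spectrum: list                 # pretreated NIR reflectance values

@dataclass
class PredictionModel:
    soil_type: str
    parameter: str                 # e.g. "organic_matter"
    coefficients: list             # e.g. from a PLS calibration

    def predict(self, sample: Soil) -> float:
        # Toy linear model: dot product of spectrum and coefficients.
        return sum(c * x for c, x in zip(self.coefficients, sample.spectrum))

model_db = {}  # stands in for the model database (Access XP in the paper)

def forecast(sample: Soil, parameter: str) -> float:
    # Select the stored model matching the sample's soil type, then predict.
    model = model_db[(sample.soil_type, parameter)]
    return model.predict(sample)

model_db[("loess", "organic_matter")] = PredictionModel("loess", "organic_matter", [1.0, 0.5, -0.2])
print(forecast(Soil("loess", {"texture": "silt"}, [0.12, 0.34, 0.56])))  # 0.178
```

In a fuller implementation the model key would also encode the physical properties used to match "similar" samples, as the abstract describes.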
Teachers' Professional Development: A Content Analysis about the Tendencies in Studies
ERIC Educational Resources Information Center
Yurtseven, Nihal; Bademcioglu, Mehtap
2016-01-01
The purpose of this study is to carry out a content analysis about the studies on teachers' professional development and to determine the tendencies in these studies. Within this scope, 60 studies that were registered to Turkish National Thesis Centre and ProQuest database between the years 2005-2015 were examined. Of the 60 studies, 37 of them…
Generation of large scale urban environments to support advanced sensor and seeker simulation
NASA Astrophysics Data System (ADS)
Giuliani, Joseph; Hershey, Daniel; McKeown, David, Jr.; Willis, Carla; Van, Tan
2009-05-01
One of the key aspects of the design of a next-generation weapon system is the need to operate in cluttered and complex urban environments. Simulation systems rely on accurate representation of these environments and require automated software tools to construct the underlying 3D geometry and associated spectral and material properties, which are then formatted for various objective seeker simulation systems. Under an Air Force Small Business Innovative Research (SBIR) contract, we have developed an automated process to generate 3D urban environments with user-defined properties. These environments can be composed from a wide variety of source materials, including vector source data, pre-existing 3D models, and digital elevation models, and rapidly organized into a geo-specific visual simulation database. This intermediate representation can be easily inspected in the visible spectrum for content and organization and interactively queried for accuracy. Once the database contains the required contents, it can be exported into specific synthetic scene generation runtime formats, preserving the relationship between geometry and material properties. To date, an exporter for the Irma simulation system developed and maintained by AFRL/Eglin has been created, and a second exporter to the Real Time Composite Hardbody and Missile Plume (CHAMP) simulation system for real-time use is currently being developed. This process supports significantly more complex target environments than previous approaches to database generation. In this paper we describe the capabilities for content creation for simulation of advanced seeker processing algorithms and for sensor stimulation, including the overall database compilation process and sample databases produced and exported for the Irma runtime system. We also discuss the addition of object dynamics and viewer dynamics within the visual simulation in the Irma runtime environment.
SAADA: Astronomical Databases Made Easier
NASA Astrophysics Data System (ADS)
Michel, L.; Nguyen, H. N.; Motch, C.
2005-12-01
Many astronomers wish to share datasets with their community but lack the manpower to develop databases with the functionality required for high-level scientific applications. The SAADA project aims at automating the creation and deployment of such databases. A generic but scientifically relevant data model has been designed which allows one to build databases by providing only a limited number of product mapping rules. Databases created by SAADA rely on a relational database supporting JDBC, covered by a Java layer that includes a large amount of generated code. Such databases can simultaneously host spectra, images, source lists and plots. Data are grouped in user-defined collections whose content can be seen as one unique set per data type even if their formats differ. Datasets can be correlated with each other using qualified links. These links help, for example, to handle the nature of a cross-identification (e.g., a distance or a likelihood) or to describe their scientific content (e.g., by associating a spectrum with a catalog entry). The SAADA query engine is based on a language well suited to the data model which can handle constraints on linked data, in addition to classical astronomical queries. These constraints can be applied to the linked objects (number, class and attributes) and/or to the link qualifier values. Databases created by SAADA are accessed through a rich Web interface or a Java API. We are currently developing an interoperability module implementing VO protocols.
Multiple Object Retrieval in Image Databases Using Hierarchical Segmentation Tree
ERIC Educational Resources Information Center
Chen, Wei-Bang
2012-01-01
The purpose of this research is to develop a new visual information analysis, representation, and retrieval framework for automatic discovery of salient objects of user's interest in large-scale image databases. In particular, this dissertation describes a content-based image retrieval framework which supports multiple-object retrieval. The…
CALINVASIVES: a revolutionary tool to monitor invasive threats
M. Garbelotto; S. Drill; C. Powell; J. Malpas
2017-01-01
CALinvasives is a web-based relational database and content management system (CMS) cataloging the statewide distribution of invasive pathogens and pests and the plant hosts they impact. The database has been developed as a collaboration between the Forest Pathology and Mycology Laboratory at UC Berkeley and Calflora. CALinvasives will combine information on the...
Concierge: Personal Database Software for Managing Digital Research Resources
Sakai, Hiroyuki; Aoyama, Toshihiro; Yamaji, Kazutsuna; Usui, Shiro
2007-01-01
This article introduces a desktop application, named Concierge, for managing personal digital research resources. Using simple operations, it enables storage of various types of files and indexes them based on content descriptions. A key feature of the software is a high level of extensibility. By installing optional plug-ins, users can customize and extend the usability of the software based on their needs. In this paper, we also introduce a few optional plug-ins: literature management, electronic laboratory notebook, and XooNIps client plug-ins. XooNIps is a content management system developed to share digital research resources among neuroscience communities. It has been adopted as the standard database system in Japanese neuroinformatics projects. Concierge, therefore, offers comprehensive support from management of personal digital research resources to their sharing in open-access neuroinformatics databases such as XooNIps. This interaction between personal and open-access neuroinformatics databases is expected to enhance the dissemination of digital research resources. Concierge is developed as an open source project; Mac OS X and Windows XP versions have been released at the official site (http://concierge.sourceforge.jp). PMID:18974800
Nse, Odunaiya; Quinette, Louw; Okechukwu, Ogah
2015-09-01
Well-developed and validated lifestyle cardiovascular disease (CVD) risk factor questionnaires are the key to obtaining accurate information to enable planning of CVD prevention programs, which are a necessity in developing countries. We conducted this review to assess the methods and processes used for the development and content validation of lifestyle CVD risk factor questionnaires, and possibly to develop an evidence-based guideline for their development and content validation. Relevant databases at the Stellenbosch University library were searched for studies conducted between 2008 and 2012, published in English and conducted among humans, using the following databases: PubMed, CINAHL, PsycINFO and ProQuest. Search terms used were CVD risk factors, questionnaires, smoking, alcohol, physical activity and diet. Methods identified for the development of lifestyle CVD risk factor questionnaires were: review of literature, either systematic or traditional; involvement of experts and/or the target population using focus group discussions/interviews; clinical experience of the authors; and deductive reasoning of the authors. For validation, the methods used were the involvement of an expert panel, the use of the target population and factor analysis. Combining methods produces questionnaires with good content validity and other sound psychometric properties.
Applying cognitive load theory to the redesign of a conventional database systems course
NASA Astrophysics Data System (ADS)
Mason, Raina; Seton, Carolyn; Cooper, Graham
2016-01-01
Cognitive load theory (CLT) was used to redesign a Database Systems course for Information Technology students. The redesign was intended to address poor student performance and low satisfaction, and to provide a more relevant foundation in database design and use for subsequent studies and industry. The original course followed the conventional structure for a database course, covering database design first, then database development. Analysis showed the conventional course content was appropriate but the instructional materials used were too complex, especially for novice students. The redesign of instructional materials applied CLT to remove split attention and redundancy effects, to provide suitable worked examples and sub-goals, and included an extensive re-sequencing of content. The approach was primarily directed towards mid- to lower performing students and results showed a significant improvement for this cohort with the exam failure rate reducing by 34% after the redesign on identical final exams. Student satisfaction also increased and feedback from subsequent study was very positive. The application of CLT to the design of instructional materials is discussed for delivery of technical courses.
Development of a reference database for assessing dietary nitrate in vegetables.
Blekkenhorst, Lauren C; Prince, Richard L; Ward, Natalie C; Croft, Kevin D; Lewis, Joshua R; Devine, Amanda; Shinde, Sujata; Woodman, Richard J; Hodgson, Jonathan M; Bondonno, Catherine P
2017-08-01
Nitrate from vegetables improves vascular health with short-term intake. Whether this translates into improved long-term health outcomes has yet to be investigated. To enable reliable analysis of nitrate intake from food records, there is a strong need for a comprehensive database of the nitrate content of vegetables. A systematic literature search (1980-2016) was performed using the Medline, Agricola and Commonwealth Agricultural Bureaux abstracts databases. The nitrate content of vegetables database contains 4237 records from 255 publications, with data on 178 vegetables and 22 herbs and spices. The nitrate content of individual vegetables ranged from Chinese flat cabbage (median; range: 4240; 3004-6310 mg/kg FW) to corn (median; range: 12; 5-1091 mg/kg FW). The database was applied to estimate vegetable nitrate intake using 24-h dietary recalls (24-HDRs) and food frequency questionnaires (FFQs). Significant correlations were observed between urinary nitrate excretion and 24-HDR (r = 0.4, P = 0.013), between 24-HDR and 12-month FFQs (r = 0.5, P < 0.001), as well as between two 4-week FFQs administered 8 weeks apart (r = 0.86, P < 0.001). This comprehensive nitrate database allows quantification of dietary nitrate from a large variety of vegetables. It can be applied to dietary records to explore the associations between nitrate intake and health outcomes in human studies.
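A sketch of the validation step reported above, with synthetic numbers. The abstract does not state which correlation coefficient was used; Spearman's rank correlation is shown here as one common choice for such data.

```python
# Synthetic illustration of correlating a biomarker against recall-based intake.
from scipy.stats import spearmanr

urinary_nitrate = [55, 72, 40, 90, 63, 48, 81, 59]  # hypothetical mg/day
recall_nitrate  = [60, 80, 35, 95, 70, 40, 85, 65]  # hypothetical mg/day from 24-HDR + database

r, p = spearmanr(urinary_nitrate, recall_nitrate)
print(f"r = {r:.2f}, P = {p:.3f}")
```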
A Brief Review of RNA–Protein Interaction Database Resources
Yi, Ying; Zhao, Yue; Huang, Yan; Wang, Dong
2017-01-01
RNA–Protein interactions play critical roles in various biological processes. By collecting and analyzing the RNA–Protein interactions and binding sites from experiments and predictions, RNA–Protein interaction databases have become an essential resource for the exploration of the transcriptional and post-transcriptional regulatory network. Here, we briefly review several widely used RNA–Protein interaction database resources developed in recent years to provide a guide to these databases. The content and major functions of the databases are presented. The brief descriptions help users quickly choose the database containing the information they are interested in. In short, these RNA–Protein interaction database resources are continually updated, and their current state reflects ongoing efforts to identify and analyze the large number of RNA–Protein interactions. PMID:29657278
Monitoring the Sodium Content of Restaurant Foods: Public Health Challenges and Opportunities
Cogswell, Mary E.; Gunn, Janelle P.; Curtis, Christine J.; Rhodes, Donna; Hoy, Kathy; Pehrsson, Pamela; Nickle, Melissa; Merritt, Robert
2013-01-01
We reviewed methods of studies assessing restaurant foods’ sodium content and nutrition databases. We systematically searched the 1964–2012 literature and manually examined references in selected articles and studies. Twenty-six (5.2%) of the 499 articles we found met the inclusion criteria and were abstracted. Five were conducted nationally. Sodium content determination methods included laboratory analysis (n = 15), point-of-purchase nutrition information or restaurants’ Web sites (n = 8), and menu analysis with a nutrient database (n = 3). There is no comprehensive data system that provides all information needed to monitor changes in sodium or other nutrients among restaurant foods. Combining information from different sources and methods may help inform a comprehensive system to monitor sodium content reduction efforts in the US food supply and to develop future strategies. PMID:23865701
Content Independence in Multimedia Databases.
ERIC Educational Resources Information Center
de Vries, Arjen P.
2001-01-01
Investigates the role of data management in multimedia digital libraries, and its implications for the design of database management systems. Introduces the notions of content abstraction and content independence. Proposes a blueprint of a new class of database technology, which supports the basic functionality for the management of both content…
NASA Astrophysics Data System (ADS)
Siadat, Mohammad-Reza; Soltanian-Zadeh, Hamid; Fotouhi, Farshad A.; Elisevich, Kost
2003-01-01
This paper presents the development of a human brain multimedia database for surgical candidacy determination in temporal lobe epilepsy. The focus of the paper is on content-based image management, navigation and retrieval. Several medical image-processing methods, including our newly developed segmentation method, are utilized for information extraction/correlation and indexing. The input data include T1- and T2-weighted MRI, FLAIR MRI, and ictal and interictal SPECT modalities, with associated clinical data and EEG data analysis. The database can answer queries regarding issues such as the correlation between the attribute X of the entity Y and the outcome of a temporal lobe epilepsy surgery. The entity Y can be a brain anatomical structure such as the hippocampus. The attribute X can be either a functionality feature of the anatomical structure Y, calculated with SPECT modalities, such as signal average, or a volumetric/morphological feature of the entity Y, such as volume or average curvature. The outcome of the surgery can be any surgery assessment, such as memory quotient. A determination is made regarding surgical candidacy by analysis of both textual and image data. The current database system suggests a surgical determination for cases with a relatively small hippocampus and a high signal intensity average on FLAIR images within the hippocampus. This indication largely accords with the surgeons' expectations and observations. Moreover, as the database becomes more populated with patient profiles and individual surgical outcomes, data mining methods may uncover otherwise hidden correlations between the contents of different data modalities and the outcome of the surgery.
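Queries of the form "attribute X of entity Y versus surgical outcome" map naturally onto a relational join. The paper does not publish its schema, so the table and column names in the sketch below (structure_features, surgical_outcomes) are hypothetical illustrations of the idea rather than the system's actual design.

```python
import sqlite3

# Hypothetical schema: the paper does not publish its tables, so these
# names and columns are illustrative only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE structure_features (
    patient_id INTEGER, structure TEXT,      -- e.g. 'hippocampus'
    volume_mm3 REAL, flair_signal_avg REAL);
CREATE TABLE surgical_outcomes (
    patient_id INTEGER, memory_quotient REAL);
""")
# Pair hippocampal volume (attribute X of entity Y) with the memory
# quotient (surgical outcome) so the two can be correlated downstream.
rows = conn.execute("""
    SELECT f.volume_mm3, o.memory_quotient
    FROM structure_features f
    JOIN surgical_outcomes o ON o.patient_id = f.patient_id
    WHERE f.structure = 'hippocampus'
""").fetchall()
print(rows)  # empty here; each tuple would be one patient's (X, outcome)
```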
2017-01-01
[Garbled extraction of a US Army Research Laboratory report documentation page (ARL-TR-7921, Adelphi, MD 20783-1138); recoverable subject terms: server database, structured query language, information objects, instructions, maintenance, cursor-on-target events, unattended ground sensors; recoverable front-matter headings: Introduction; Computer and Software Development Tools Requirements; Database Maintenance.]
BtoxDB: a comprehensive database of protein structural data on toxin-antitoxin systems.
Barbosa, Luiz Carlos Bertucci; Garrido, Saulo Santesso; Marchetto, Reinaldo
2015-03-01
Toxin-antitoxin (TA) systems are diverse and abundant genetic modules in prokaryotic cells that are typically formed by two genes encoding a stable toxin and a labile antitoxin. Because TA systems are able to repress growth or kill cells and are considered to be important actors in cell persistence (multidrug resistance without genetic change), these modules are regarded as potential targets for alternative drug design. In this scenario, structural information for the proteins in these systems is highly valuable. In this report, we describe the development of a web-based system, named BtoxDB, that stores all protein structural data on TA systems. The BtoxDB database was implemented as a MySQL relational database using the PHP scripting language. Web interfaces were developed using HTML, CSS and JavaScript. The data were collected from the PDB, UniProt and Entrez databases. These data were appropriately filtered using specialized literature and our previous knowledge about toxin-antitoxin systems. The database provides three modules ("Search", "Browse" and "Statistics") that enable searches, acquisition of contents and access to statistical data. Direct links to matching external databases are also available. The compilation of all protein structural data on TA systems in one platform is highly useful for researchers interested in this content. BtoxDB is publicly available at http://www.gurupi.uft.edu.br/btoxdb. Copyright © 2015 Elsevier Ltd. All rights reserved.
Hildebrand, Ainslie M; Iansavichus, Arthur V; Haynes, R Brian; Wilczynski, Nancy L; Mehta, Ravindra L; Parikh, Chirag R; Garg, Amit X
2014-04-01
We frequently fail to identify articles relevant to the subject of acute kidney injury (AKI) when searching large bibliographic databases such as PubMed, Ovid Medline or Embase. To address this issue, we used computer automation to create information search filters to better identify articles relevant to AKI in these databases. We first manually reviewed a sample of 22,992 full-text articles and used prespecified criteria to determine whether each article contained AKI content or not. In the development phase (two-thirds of the sample), we developed and tested the performance of >1.3 million unique filters. Filters with high sensitivity and high specificity for the identification of AKI articles were then retested in the validation phase (the remaining third of the sample). We succeeded in developing and validating high-performance AKI search filters for each bibliographic database, with sensitivities and specificities in excess of 90%. Filters optimized for sensitivity reached at least 97.2% sensitivity, and filters optimized for specificity reached at least 99.5% specificity. The filters were complex; for example, one PubMed filter included >140 terms used in combination, including 'acute kidney injury', 'tubular necrosis', 'azotemia' and 'ischemic injury'. In proof-of-concept searches, physicians found more articles relevant to topics in AKI with the use of the filters. PubMed, Ovid Medline and Embase can be filtered for articles relevant to AKI in a reliable manner. These high-performance information filters are now available online and can be used to better identify AKI content in large bibliographic databases.
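The development-phase screening described above amounts to scoring each candidate Boolean filter for sensitivity and specificity against the manually labelled sample. A toy Python sketch of that scoring step; the articles and labels are invented stand-ins for the 22,992-article sample, and real filters combine far more terms.

```python
# Score a candidate OR-of-terms filter against a labelled article sample.
def matches(article_text, filter_terms):
    text = article_text.lower()
    return any(term in text for term in filter_terms)

def score(articles, labels, filter_terms):
    """labels[i] is True when article i truly contains AKI content."""
    hits = [matches(a, filter_terms) for a in articles]
    tp = sum(h and l for h, l in zip(hits, labels))
    fn = sum(not h and l for h, l in zip(hits, labels))
    tn = sum(not h and not l for h, l in zip(hits, labels))
    fp = sum(h and not l for h, l in zip(hits, labels))
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return sensitivity, specificity

articles = ["We report acute kidney injury after cardiac surgery.",
            "A study of asthma control in urban adults."]
labels = [True, False]
terms = ["acute kidney injury", "tubular necrosis", "azotemia"]
print(score(articles, labels, terms))  # (1.0, 1.0) on this toy sample
```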
NASA Astrophysics Data System (ADS)
Maffei, A. R.; Chandler, C. L.; Work, T.; Allen, J.; Groman, R. C.; Fox, P. A.
2009-12-01
Content Management Systems (CMSs) provide powerful features that can be of use to oceanographic (and other geo-science) data managers. However, in many instances, geo-science data management offices have previously designed customized schemas for their metadata. The WHOI Ocean Informatics initiative and the NSF-funded Biological and Chemical Oceanography Data Management Office (BCO-DMO) have jointly sponsored a project to port an existing relational database containing oceanographic metadata, along with an existing interface coded in Cold Fusion middleware, to a Drupal6 Content Management System. The goal was to translate all the existing database tables, input forms, website reports, and other features present in the existing system to employ Drupal CMS features. The replacement features include Drupal content types, CCK node-reference fields, themes, RDB, SPARQL, workflow, and a number of other supporting modules. Strategic use of some Drupal6 CMS features enables three separate but complementary interfaces that provide access to oceanographic research metadata via the MySQL database: 1) a Drupal6-powered front-end; 2) a standard SQL port (used to provide a Mapserver interface to the metadata and data); and 3) a SPARQL port (feeding a new faceted search capability being developed). Future plans include the creation of science ontologies, by scientist/technologist teams, that will drive the semantically enabled faceted search capabilities planned for the site. Incorporation of semantic technologies included in the future Drupal 7 core release is also anticipated. Using a public-domain CMS as opposed to proprietary middleware, and taking advantage of the many features of Drupal 6 that are designed to support semantically enabled interfaces, will help prepare the BCO-DMO database for interoperability with other ecosystem databases.
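The SPARQL port would accept queries along the following lines. The prefix, predicate names, and endpoint URL here are invented for illustration, since the abstract does not give the actual BCO-DMO vocabulary.

```python
# Hypothetical SPARQL query against the metadata triple store; the
# bcodmo: predicates are placeholders, not a published vocabulary.
query = """
PREFIX bcodmo: <http://example.org/bcodmo#>
SELECT ?dataset ?parameter WHERE {
    ?dataset bcodmo:fromCruise ?cruise ;
             bcodmo:parameter ?parameter .
    FILTER regex(str(?parameter), "chlorophyll", "i")
}
"""

# If the endpoint were public, SPARQLWrapper could execute it:
# from SPARQLWrapper import SPARQLWrapper, JSON
# sparql = SPARQLWrapper("http://example.org/sparql")  # placeholder URL
# sparql.setQuery(query)
# sparql.setReturnFormat(JSON)
# results = sparql.query().convert()
print(query.strip())
```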
2010-01-01
Background: A plant-based diet protects against chronic oxidative stress-related diseases. Dietary plants contain variable chemical families and amounts of antioxidants. It has been hypothesized that plant antioxidants may contribute to the beneficial health effects of dietary plants. Our objective was to develop a comprehensive food database consisting of the total antioxidant content of typical foods as well as other dietary items such as traditional medicine plants, herbs and spices, and dietary supplements. This database is intended for use in a wide range of nutritional research, from in vitro, cell, and animal studies to clinical trials and nutritional epidemiological studies. Methods: We procured samples from countries worldwide and assayed the samples for their total antioxidant content using a modified version of the FRAP assay. Results and sample information (such as country of origin, product and/or brand name) were registered for each individual food sample and constitute the Antioxidant Food Table. Results: The results demonstrate that there are several thousand-fold differences in the antioxidant content of foods. Spices, herbs and supplements include the most antioxidant-rich products in our study, some exceptionally high. Berries, fruits, nuts, chocolate, vegetables and products thereof constitute common foods and beverages with high antioxidant values. Conclusions: This database is, to the best of our knowledge, the most comprehensive Antioxidant Food Database published, and it shows that plant-based foods introduce significantly more antioxidants into the human diet than non-plant foods. Because of the large variations observed between otherwise comparable food samples, the study emphasizes the importance of using a comprehensive database combined with a detailed system for food registration in clinical and epidemiological studies. The present antioxidant database is therefore an essential research tool to further elucidate the potential health effects of phytochemical antioxidants in the diet. PMID:20096093
Integration of HTML documents into an XML-based knowledge repository.
Roemer, Lorrie K; Rocha, Roberto A; Del Fiol, Guilherme
2005-01-01
The Emergency Patient Instruction Generator (EPIG) is an electronic content compiler/viewer/editor developed by Intermountain Health Care. The content is vendor-licensed HTML patient discharge instructions. This work describes the process by which discharge instructions were converted from ASCII-encoded HTML to XML, then loaded into a database for use by EPIG.
Lin, Hongli; Wang, Weisheng; Luo, Jiawei; Yang, Xuedong
2014-12-01
The aim of this study was to develop a personalized training system using the Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI) database: collecting, annotating, and marking a large number of appropriate computed tomography (CT) scans, and providing the capability of dynamically selecting suitable training cases based on the performance levels of trainees and the characteristics of cases, are critical for developing an efficient training system. A novel approach is proposed to develop a personalized radiology training system for the interpretation of lung nodules in CT scans using the LIDC/IDRI database. It provides a Content-Boosted Collaborative Filtering (CBCF) algorithm for predicting the difficulty level of each case for each trainee when selecting suitable cases to meet individual needs, and a diagnostic simulation tool to enable trainees to analyze and diagnose lung nodules with the help of an image-processing tool and a nodule retrieval tool. Preliminary evaluation of the system shows that developing a personalized training system for the interpretation of lung nodules is needed and useful for enhancing the professional skills of trainees. The approach of developing personalized training systems using the LIDC/IDRI database is a feasible solution to the challenges of constructing specific training programs in terms of cost and training efficiency. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.
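The abstract does not include the authors' code, but the CBCF idea can be sketched: a sparse trainee-by-case difficulty matrix is first densified with content-based predictions driven by case feature vectors, and the filled matrix then feeds a standard collaborative-filtering step. A toy illustration under those assumptions; the features and scores are invented.

```python
import numpy as np

# Content-boosting step of CBCF: fill missing difficulty ratings with a
# feature-similarity-weighted average of the trainee's known ratings.
ratings = np.array([[3.0, np.nan, 4.0],
                    [np.nan, 2.0, 5.0]])   # trainee x case difficulty scores
case_features = np.array([[1.0, 0.2],      # e.g. nodule size, spiculation
                          [0.9, 0.3],
                          [0.1, 0.8]])

def content_fill(r, feats):
    filled = r.copy()
    for u in range(r.shape[0]):
        known = ~np.isnan(r[u])
        for i in np.where(np.isnan(r[u]))[0]:
            sims = feats[known] @ feats[i]          # similarity to case i
            filled[u, i] = np.dot(sims, r[u, known]) / sims.sum()
    return filled

dense = content_fill(ratings, case_features)
print(dense)  # these pseudo-ratings feed the collaborative-filtering step
```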
Boucaud, Dwayne W.; Nabel, Michael; Eggers, Christian H.
2013-01-01
Developing scientific expertise in the classroom involves promoting higher-order cognitive skills as well as content mastery. Effective use of constructivism can facilitate these outcomes. However, this is often difficult to accomplish when delivery of content is paramount. Utilizing many of the tenets of constructivist pedagogy, we have designed an Oxford-style debate assignment to be used in an introductory microbiology course. Two teams of students were assigned a debatable topic within microbiology. Over a five-week period, students completed an informative web page consisting of three parts: background on the topic, data-based positions for each side of the argument, and a data-based persuasive argument to support their assigned position. This was followed by an in-class presentation and debate. Analysis of student performance on knowledge-based questions shows that students retain debate-derived content acquired primarily outside of lectures significantly better than content delivered during a normal lecture. Importantly, students who performed poorly on the lecture-derived questions did as well on debate-derived questions as other students. Students also performed well on questions requiring higher-order cognitive skills and in synthesizing data-driven arguments in support of a position during the debate. Student perceptions of their knowledge base in areas covered by the debate and their skills in using scientific databases and analyzing primary literature showed a significant increase in pre- and post-assignment comparisons. Our data demonstrate that an Oxford-style debate can be used effectively to deliver relevant content, increase higher-order cognitive skills, and increase self-efficacy in science-specific skills, all contributing to developing expertise in the field. PMID:23858349
NASA Technical Reports Server (NTRS)
Maluf, David A.; Tran, Peter B.
2003-01-01
An object-relational database management system is an integrated hybrid cooperative approach that combines the best practices of both the relational model, utilizing SQL queries, and the object-oriented, semantic paradigm for supporting complex data creation. In this paper, a highly scalable, information-on-demand database framework, called NETMARK, is introduced. NETMARK takes advantage of the Oracle 8i object-relational database, using physical address data types for very efficient keyword search of records spanning both context and content. NETMARK was originally developed in early 2000 as a research and development prototype to address the vast amounts of unstructured and semi-structured documents existing within NASA enterprises. Today, NETMARK is a flexible, high-throughput open database framework for managing, storing, and searching unstructured or semi-structured arbitrary hierarchical models, such as XML and HTML.
An Extensible Schema-less Database Framework for Managing High-throughput Semi-Structured Documents
NASA Technical Reports Server (NTRS)
Maluf, David A.; Tran, Peter B.; La, Tracy; Clancy, Daniel (Technical Monitor)
2002-01-01
An object-relational database management system is an integrated hybrid cooperative approach that combines the best practices of both the relational model, utilizing SQL queries, and the object-oriented, semantic paradigm for supporting complex data creation. In this paper, a highly scalable, information-on-demand database framework, called NETMARK, is introduced. NETMARK takes advantage of the Oracle 8i object-relational database, using physical address data types for very efficient keyword searches of records for both context and content. NETMARK was originally developed in early 2000 as a research and development prototype to address the vast amounts of unstructured and semi-structured documents existing within NASA enterprises. Today, NETMARK is a flexible, high-throughput open database framework for managing, storing, and searching unstructured or semi-structured arbitrary hierarchical models, such as XML and HTML.
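NETMARK's Oracle internals are not given in these abstracts, but the core idea of schema-less keyword search spanning both context (the node path) and content (the node text) can be sketched over an arbitrary XML document. The flat in-memory index below is illustrative only.

```python
import xml.etree.ElementTree as ET

# Minimal schema-less keyword search over both "context" (element path)
# and "content" (element text) of an arbitrary hierarchical document.
doc = ET.fromstring(
    "<report><title>Wind tunnel test</title>"
    "<section><p>Results of the tunnel calibration.</p></section></report>")

def flatten(elem, path=""):
    here = f"{path}/{elem.tag}"
    yield here, (elem.text or "").strip()
    for child in elem:
        yield from flatten(child, here)

def keyword_search(root, word):
    word = word.lower()
    return [(p, t) for p, t in flatten(root)
            if word in p.lower() or word in t.lower()]

print(keyword_search(doc, "tunnel"))   # matches content in two nodes
print(keyword_search(doc, "section"))  # matches context (the path)
```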
Visualizing the semantic content of large text databases using text maps
NASA Technical Reports Server (NTRS)
Combs, Nathan
1993-01-01
A methodology for generating text map representations of the semantic content of text databases is presented. Text maps provide a graphical metaphor for conceptualizing and visualizing the contents and data interrelationships of large text databases. A set of experiments conducted against the TIPSTER corpus of Wall Street Journal articles is described. These experiments provide an introduction to current work in the representation and visualization of documents by way of their semantic content.
Correlated Attack Modeling (CAM)
2003-10-01
[Garbled extraction; recoverable content: a language (CAML) for describing attack models to a scenario recognition engine; a prototype of such an engine was developed using components of the EMERALD intrusion detection program. An example modeled result: the attacker gains information enabling remote access to a database (i.e., privileged login information, database layout). The implementation integrates two advanced technologies developed in the EMERALD program.]
Price, Curtis V.; Maupin, Molly A.
2014-01-01
The purpose of this report is to document the PSDB and explain the methods used to populate and update the data from the SDWIS, State datasets, and maps and geospatial imagery. This report describes 3 data tables and 11 domain tables, including field contents, data sources, and relations between tables. Although the PSDB is not available to the general public, this information should be useful for others who are developing other database systems to store and analyze public-supply system and facility data.
Tripal: a construction toolkit for online genome databases.
Ficklin, Stephen P; Sanderson, Lacey-Anne; Cheng, Chun-Huai; Staton, Margaret E; Lee, Taein; Cho, Il-Hyung; Jung, Sook; Bett, Kirstin E; Main, Doreen
2011-01-01
As the availability, affordability and magnitude of genomics and genetics research increase, so does the need to provide online access to the resulting data and analyses. Availability of a tailored online database is the desire of many investigators or research communities; however, managing the Information Technology infrastructure needed to create such a database can be an undesired distraction from primary research or potentially cost-prohibitive. Tripal provides simplified site development by merging the power of Drupal, a popular web Content Management System, with that of Chado, a community-derived database schema for storage of genomic, genetic and other related biological data. Tripal provides an interface that extends the content management features of Drupal to the data housed in Chado. Furthermore, Tripal provides a web-based Chado installer, genomic data loaders, and web-based editing of data for organisms, genomic features, biological libraries, controlled vocabularies and stock collections. Also available are Tripal extensions that support loading and visualization of NCBI BLAST, InterPro, Kyoto Encyclopedia of Genes and Genomes and Gene Ontology analyses, as well as an extension that provides integration of Tripal with GBrowse, a popular GMOD tool. An Application Programming Interface is available to allow creation of custom extensions by site developers, and the look-and-feel of the site is completely customizable through Drupal-based PHP template files. Addition of non-biological content and user management is afforded through Drupal. Tripal is an open-source and freely available software package found at http://tripal.sourceforge.net.
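Because Tripal exposes data stored in the standard Chado schema, community-documented tables such as feature, organism and cvterm can be queried directly. A self-contained sketch follows; SQLite stands in for the PostgreSQL server Chado normally runs on, and the inserted rows are invented examples.

```python
import sqlite3

# Chado-style tables (simplified): list the gene features for one organism.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE organism (organism_id INTEGER PRIMARY KEY, genus TEXT, species TEXT);
CREATE TABLE cvterm   (cvterm_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE feature  (feature_id INTEGER PRIMARY KEY, organism_id INTEGER,
                       type_id INTEGER, name TEXT, uniquename TEXT);
INSERT INTO organism VALUES (1, 'Malus', 'domestica');   -- invented row
INSERT INTO cvterm   VALUES (10, 'gene');
INSERT INTO feature  VALUES (100, 1, 10, 'MdMYB1', 'MDP0000123');
""")
rows = conn.execute("""
    SELECT f.uniquename, f.name
    FROM feature f
    JOIN organism o ON o.organism_id = f.organism_id
    JOIN cvterm  t ON t.cvterm_id   = f.type_id
    WHERE o.genus = ? AND o.species = ? AND t.name = 'gene'
""", ("Malus", "domestica")).fetchall()
print(rows)  # [('MDP0000123', 'MdMYB1')]
```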
Development of a database for the verification of trans-ionospheric remote sensing systems
NASA Astrophysics Data System (ADS)
Leitinger, R.
2005-08-01
Remote sensing systems need verification by means of in-situ data or by means of model data. In the case of ionospheric occultation inversion, ionosphere tomography and other imaging methods based on satellite-to-ground or satellite-to-satellite electron content, the availability of in-situ data with adequate spatial and temporal co-location is a very rare case indeed. Therefore the method of choice for verification is to produce artificial electron content data with realistic properties, subject these data to the inversion/retrieval method, compare the results with model data and apply a suitable type of “goodness of fit” classification. Inter-comparison of inversion/retrieval methods should be done with sets of artificial electron contents in a “blind” (or even “double blind”) way. The setup of a relevant database for the COST 271 Action is described. One part of the database will be made available to everyone interested in testing inversion/retrieval methods. The artificial electron content data are calculated by means of large-scale models that are “modulated” in a realistic way to include smaller-scale and dynamic structures, like troughs and traveling ionospheric disturbances.
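The "modulation" of large-scale models can be pictured as superposing a traveling wave on a smooth background profile. A toy sketch under that assumption; the Chapman-like layer shape and every parameter value are illustrative choices, not the COST 271 models themselves.

```python
import numpy as np

# Smooth background electron-density profile plus a traveling ionospheric
# disturbance (TID) expressed as a moving sinusoidal perturbation.
def electron_density(h_km, t_s, n_peak=1e12, h_peak=300.0, scale_km=60.0,
                     tid_amp=0.05, wavelength_km=200.0, period_s=1200.0):
    z = (h_km - h_peak) / scale_km
    background = n_peak * np.exp(1.0 - z - np.exp(-z))  # Chapman-like layer
    tid = 1.0 + tid_amp * np.sin(
        2.0 * np.pi * (h_km / wavelength_km - t_s / period_s))
    return background * tid

heights = np.linspace(100.0, 600.0, 6)
print(electron_density(heights, t_s=0.0))   # artificial profile at t = 0
```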
Four Current Awareness Databases: Coverage and Currency Compared.
ERIC Educational Resources Information Center
Jaguszewski, Janice M.; Kempf, Jody L.
1995-01-01
Discusses the usability and content of the following table of contents (TOC) databases selected by science and engineering librarians at the University of Minnesota Twin Cities: Current Contents on Diskette (CCoD), CARL Uncover2, Inside Information, and Contents1st. (AEF)
Access to digital library databases in higher education: design problems and infrastructural gaps.
Oswal, Sushil K
2014-01-01
After defining accessibility and usability, the author offers a broad survey of the research studies on digital content databases, which have thus far depended primarily on data drawn from studies conducted by sighted researchers with non-disabled users employing screen readers and low-vision devices. This article aims to produce a detailed description of the difficulties confronted by blind screen reader users with online library databases, which now hold most of the academic, peer-reviewed journal and periodical content essential for research and teaching in higher education. The approach taken here is borrowed from descriptive ethnography, which allows the author to create a complete picture of the accessibility and usability problems faced by an experienced academic user of digital library databases and screen readers. The author provides a detailed analysis of the different aspects of accessibility issues in digital databases under several headers, with a special focus on full-text PDF files. The author emphasizes that long-term studies with actual blind screen reader users, employing both qualitative and computerized research tools, can yield meaningful data for designers and developers to improve these databases to a level at which they begin to provide equal access to the blind.
Reading in the Content Areas. Learning Package No. 4.
ERIC Educational Resources Information Center
Collins, Norma; Smith, Carl, Comp.
Originally developed for the Department of Defense Schools (DoDDS) system, this learning package on reading in the content areas is designed for teachers who wish to upgrade or expand their teaching skills on their own. The package includes a comprehensive search of the ERIC database; a lecture giving an overview on the topic; the full text of…
Embracing the Archives: How NPR Librarians Turned Their Collection into a Workflow Tool
ERIC Educational Resources Information Center
Sin, Lauren; Daugert, Katie
2013-01-01
Several years ago, National Public Radio (NPR) librarians began developing a new content management system (CMS). It was intended to offer desktop access for all NPR-produced content, including transcripts, audio, and metadata. Fast-forward to 2011, and their shiny, new database, Artemis, was ready for debut. Their next challenge: to teach a staff…
WAIS Searching of the Current Contents Database
NASA Astrophysics Data System (ADS)
Banholzer, P.; Grabenstein, M. E.
The Homer E. Newell Memorial Library of NASA's Goddard Space Flight Center is developing capabilities to permit Goddard personnel to access electronic resources of the Library via the Internet. The Library's support services contractor, Maxima Corporation, and their subcontractor, SANAD Support Technologies, have recently developed a World Wide Web Home Page (http://www-library.gsfc.nasa.gov) to provide the primary means of access. The first searchable database to be made available through the Home Page to Goddard employees is Current Contents, from the Institute for Scientific Information (ISI). The initial implementation includes coverage of articles from the last few months of 1992 to the present. These records are augmented with abstracts and references, and often are more robust than equivalent records in bibliographic databases that currently serve the astronomical community. Maxima/SANAD selected Wais Incorporated's WAIS product with which to build the interface to Current Contents. This system allows access from Macintosh, IBM PC, and Unix hosts, which is an important feature for Goddard's multiplatform environment. The forms interface is structured to allow both fielded (author, article title, journal name, id number, keyword, subject term, and citation) and unfielded WAIS searches. The system allows a user to: retrieve individual journal article records; retrieve tables of contents of specific issues of journals; connect to articles with similar subject terms or keywords; connect to other issues of the same journal in the same year; and browse journal issues from an alphabetical list of indexed journal names.
TNAURice: Database on rice varieties released from Tamil Nadu Agricultural University
Ramalingam, Jegadeesan; Arul, Loganathan; Sathishkumar, Natarajan; Vignesh, Dhandapani; Thiyagarajan, Katiannan; Samiyappan, Ramasamy
2010-01-01
We developed TNAURice, a database comprising the rice varieties released from a public institution, Tamil Nadu Agricultural University (TNAU), Coimbatore, India. Backed by MS-SQL, with ASP.NET at the front end, this database provides information on both quantitative and qualitative descriptors of the rice varieties, inclusive of their parental details. Enabled by a user-friendly search utility, the database can be effectively searched by the varietal descriptors, and the entire contents are navigable as well. The database comes in handy to plant breeders involved in varietal improvement programs when deciding on the choice of parental lines. TNAURice is available for public access at http://www.btistnau.org/germdefault.aspx. PMID:21364829
NASA Technical Reports Server (NTRS)
Campbell, William J.; Roelofs, Larry H.; Short, Nicholas M., Jr.
1987-01-01
The National Space Science Data Center (NSSDC) has initiated an Intelligent Data Management (IDM) research effort which has as one of its components the development of an Intelligent User Interface (IUI). The intent of the latter is to develop a friendly and intelligent user interface service based on expert systems and natural language processing technologies. The purpose is to support the large number of potential scientific and engineering users who presently need space- and land-related research and technical data but who have little or no experience in query languages or understanding of the information content or architecture of the databases involved. This technical memorandum presents a prototype Intelligent User Interface Subsystem (IUIS) using the Crustal Dynamics Project Database as a test bed for the implementation of CRUDDES (the Crustal Dynamics Expert System). The knowledge base has more than 200 rules and represents a single application view and the architectural view. Operational performance using CRUDDES has allowed non-database users to obtain useful information from the database previously accessible only to an expert database user or the database designer.
Sagace: A web-based search engine for biomedical databases in Japan
2012-01-01
Background: In the big data era, biomedical research continues to generate a large amount of data, and the generated information is often stored in a database and made publicly available. Although combining data from multiple databases should accelerate further studies, the current number of life sciences databases is too large for researchers to grasp the features and contents of each one. Findings: We have developed Sagace, a web-based search engine that enables users to retrieve information from a range of biological databases (such as gene expression profiles and proteomics data) and biological resource banks (such as mouse models of disease and cell lines). With Sagace, users can search more than 300 databases in Japan. Sagace offers features tailored to biomedical research, including manually tuned ranking, faceted navigation to refine search results, and rich snippets constructed with retrieved metadata for each database entry. Conclusions: Sagace will be valuable for experts who are involved in biomedical research and drug development in both academia and industry. Sagace is freely available at http://sagace.nibio.go.jp/en/. PMID:23110816
Changing Beginning Teachers' Content Knowledge and Its Effects on Student Learning
ERIC Educational Resources Information Center
Sinelnikov, Oleg A.; Kim, Insook; Ward, Phillip; Curtner-Smith, Mathew; Li, Weidong
2016-01-01
Background: Lack of content knowledge (CK) is problematic in teaching in classroom subject areas and in physical education. There is a dearth of data-based research on interventions aimed at helping teachers acquire CK and, in turn, on the effects of CK on student learning. Aim: To investigate the effect of professional development, in the form of…
FRS exposes several REST services that allow developers to utilize a live feed of data from the FRS database. This web page is intended for a technical audience and describes the content and purpose of each service available.
Tufts Health Sciences Database: lessons, issues, and opportunities.
Lee, Mary Y; Albright, Susan A; Alkasab, Tarik; Damassa, David A; Wang, Paul J; Eaton, Elizabeth K
2003-03-01
The authors present their seven-year experience with developing the Tufts Health Sciences Database (Tufts HSDB), a database-driven information management system that combines the strengths of a digital library, content delivery tools, and curriculum management. They describe a future where online tools will provide a health sciences learning infrastructure that fosters the work of an increasingly interdisciplinary community of learners and allows content to be shared across institutions as well as with academic and commercial information repositories. The authors note the key partners in Tufts HSDB's success--the close collaboration of the health sciences library, educational affairs, and information technology staff. Tufts HSDB moved quickly from serving the medical curriculum to supporting Tufts' veterinary, dental, biomedical sciences, and nutrition schools, thus leveraging Tufts HSDB research and development with university-wide efforts including Internet2 middleware, wireless access, information security, and digital libraries. The authors identify major effects on teaching and learning, e.g., what is better taught with multimedia, how faculty preparation and student learning time can be more efficient and effective, how content integration for interdisciplinary teaching and learning is promoted, and how continuous improvement methods can be integrated. Also addressed are issues of faculty development, copyright and intellectual property, budgetary concerns, and coordinating IT across schools and hospitals. The authors describe Tufts' recent experience with sharing its infrastructure with other schools, and welcome inquiries from those wishing to explore national and international partnerships to create a truly open and integrated infrastructure for education across the health sciences.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abbott, Jennifer; Sandberg, Tami
The Wind-Wildlife Impacts Literature Database (WILD), formerly known as the Avian Literature Database, was created in 1997. The goal of the database was to begin tracking research that detailed the potential impact of wind energy development on birds. The Avian Literature Database was originally housed on a proprietary platform called Livelink ECM from OpenText and maintained by in-house technical staff. The initial set of records was added by library staff. A vital part of the newly launched Drupal-based WILD database is the Bibliography module. Many of the resources included in the database have digital object identifiers (DOIs). The bibliographic information for any item that has a DOI can be imported into the database using this module. This greatly reduces the amount of manual data entry required to add records to the database. The content available in WILD is international in scope, which can be easily discerned by looking at the tags available in the browse menu.
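The abstract does not detail how the Bibliography module resolves DOIs, but the general mechanism for fetching bibliographic metadata from a DOI is content negotiation against doi.org, which can return CSL JSON. A minimal sketch; the DOI in the commented call is a placeholder.

```python
import requests

def fetch_csl_metadata(doi):
    """Resolve a DOI to CSL JSON metadata via doi.org content negotiation."""
    resp = requests.get(
        f"https://doi.org/{doi}",
        headers={"Accept": "application/vnd.citationstyles.csl+json"},
        timeout=10)
    resp.raise_for_status()
    return resp.json()

# meta = fetch_csl_metadata("10.1000/example")  # placeholder DOI
# print(meta["title"])
```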
Human Variome Project Quality Assessment Criteria for Variation Databases.
Vihinen, Mauno; Hancock, John M; Maglott, Donna R; Landrum, Melissa J; Schaafsma, Gerard C P; Taschner, Peter
2016-06-01
Numerous databases containing information about DNA, RNA, and protein variations are available. Gene-specific variant databases (locus-specific variation databases, LSDBs) are typically curated and maintained for single genes or groups of genes for a certain disease or diseases. These databases are widely considered to be the most reliable information source for a particular gene/protein/disease, but it should also be made clear that they may have widely varying contents, infrastructure, and quality. Quality is very important to evaluate because these databases may affect health decision-making, research, and clinical practice. The Human Variome Project (HVP) established a Working Group for Variant Database Quality Assessment. The basic principle was to develop a simple system that nevertheless provides a good overview of the quality of a database. The HVP quality evaluation criteria that resulted are divided into four main components: data quality, technical quality, accessibility, and timeliness. This report elaborates on the developed quality criteria and how implementation of the quality scheme can be achieved. Examples are provided for the current status of the quality items in two different databases: BTKbase, an LSDB, and ClinVar, a central archive of submissions about variants and their clinical significance. © 2016 WILEY PERIODICALS, INC.
Desiderata for a Computer-Assisted Audit Tool for Clinical Data Source Verification Audits
Duda, Stephany N.; Wehbe, Firas H.; Gadd, Cynthia S.
2013-01-01
Clinical data auditing often requires validating the contents of clinical research databases against source documents available in health care settings. Currently available data audit software, however, does not provide features necessary to compare the contents of such databases to source data in paper medical records. This work enumerates the primary weaknesses of using paper forms for clinical data audits and identifies the shortcomings of existing data audit software, as informed by the experiences of an audit team evaluating data quality for an international research consortium. The authors propose a set of attributes to guide the development of a computer-assisted clinical data audit tool to simplify and standardize the audit process. PMID:20841814
Development Of New Databases For Tsunami Hazard Analysis In California
NASA Astrophysics Data System (ADS)
Wilson, R. I.; Barberopoulou, A.; Borrero, J. C.; Bryant, W. A.; Dengler, L. A.; Goltz, J. D.; Legg, M.; McGuire, T.; Miller, K. M.; Real, C. R.; Synolakis, C.; Uslu, B.
2009-12-01
The California Geological Survey (CGS) has partnered with other tsunami specialists to produce two statewide databases to facilitate the evaluation of tsunami hazard products for both emergency response and land-use planning and development. A robust, State-run tsunami deposit database is being developed that complements and expands on existing databases from the National Geophysical Data Center (global) and the USGS (Cascadia). Whereas these existing databases focus on references or individual tsunami layers, the new State-maintained database concentrates on the location and contents of individual borings/trenches that sample tsunami deposits. These data provide an important observational benchmark for evaluating the results of tsunami inundation modeling. CGS is collaborating with other states and sharing the database entry form with them to encourage its continued development beyond California’s coastline, so that historic tsunami deposits can be evaluated on a regional basis. CGS is also developing an internet-based tsunami source scenario database and forum where tsunami source experts and hydrodynamic modelers can discuss the validity of tsunami sources and their contribution to hazard assessments for California and other coastal areas bordering the Pacific Ocean. The database includes all distant and local tsunami sources relevant to California, starting with the forty scenarios evaluated during the creation of the recently completed statewide series of tsunami inundation maps for emergency response planning. Factors germane to probabilistic tsunami hazard analyses (PTHA), such as event histories and recurrence intervals, are also addressed in the database and discussed in the forum. Discussions with other tsunami source experts will help CGS determine what additional scenarios should be considered in PTHA for assessing the feasibility of generating products of value to local land-use planning and development.
Kobayashi, Norio; Ishii, Manabu; Takahashi, Satoshi; Mochizuki, Yoshiki; Matsushima, Akihiro; Toyoda, Tetsuro
2011-07-01
Global cloud frameworks for bioinformatics research databases have become huge and heterogeneous; solutions face various diametric challenges comprising cross-integration, retrieval, security and openness. To address this, as of March 2011 organizations including RIKEN published 192 mammalian, plant and protein life sciences databases having 8.2 million data records, integrated as Linked Open or Private Data (LOD/LPD) using SciNetS.org, the Scientists' Networking System. The huge quantity of linked data covered by this database integration framework is based on the Semantic Web, where researchers collaborate by managing metadata across public and private databases in a secured data space. This outstripped the data query capacity of existing interface tools like SPARQL. Actual research also requires specialized tools for data analysis using raw original data. To solve these challenges, in December 2009 we developed the lightweight Semantic-JSON interface to access each fragment of linked and raw life sciences data securely, under the control of programming languages popularly used by bioinformaticians such as Perl and Ruby. Researchers successfully used the interface across 28 million semantic relationships for biological applications including genome design, sequence processing, inference over phenotype databases, full-text search indexing and human-readable contents like ontology and LOD tree viewers. Semantic-JSON services of SciNetS.org are provided at http://semanticjson.org.
Wang, Yanli; Bryant, Stephen H.; Cheng, Tiejun; Wang, Jiyao; Gindulyte, Asta; Shoemaker, Benjamin A.; Thiessen, Paul A.; He, Siqian; Zhang, Jian
2017-01-01
PubChem's BioAssay database (https://pubchem.ncbi.nlm.nih.gov) has served as a public repository for small-molecule and RNAi screening data since 2004, providing open access to its data content for the community. PubChem accepts data submissions from worldwide researchers at academia, industry and government agencies. PubChem also collaborates with other chemical biology database stakeholders in data exchange. With over a decade's development effort, it has become an important information resource supporting drug discovery and chemical biology research. To facilitate data discovery, PubChem is integrated with all other databases at NCBI. In this work, we provide an update for the PubChem BioAssay database describing several recent developments, including added sources of research data, a redesigned BioAssay record page, a new BioAssay classification browser and new features in the Upload system facilitating data sharing. PMID:27899599
Field-Based Experiential Learning Using Mobile Devices
NASA Astrophysics Data System (ADS)
Hilley, G. E.
2015-12-01
Technologies such as GPS and cellular triangulation allow location-specific content to be delivered by mobile devices, but no mechanism currently exists to associate content shared between locations in a way that guarantees the delivery of coherent and non-redundant information at every location. Thus, experiential learning via mobile devices must currently take place along a predefined path, as in the case of a self-guided tour. I developed a mobile-device-based system that allows a person to move through a space along a path of their choosing, while receiving information in a way that guarantees delivery of appropriate background and location-specific information without producing redundancy of content between locations. This is accomplished by coupling content to knowledge-concept tags that are noted as fulfilled when users take prescribed actions. Similarly, the presentation of the content is related to the fulfillment of these knowledge-concept tags through logic statements that control the presentation. Content delivery is triggered by mobile-device geolocation including GPS/cellular navigation, and sensing of low-power Bluetooth proximity beacons. Together, these features implement a process that guarantees a coherent, non-redundant educational experience throughout a space, regardless of a learner's chosen path. The app that runs on the mobile device works in tandem with a server-side database and file-serving system that can be configured through a web-based GUI, and so content creators can easily populate and configure content with the system. Once the database has been updated, the new content is immediately available to the mobile devices when they arrive at the location at which content is required. Such a system serves as a platform for the development of field-based geoscience educational experiences, in which students can organically learn about core concepts at particular locations while individually exploring a space.
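The coupling of content to knowledge-concept tags can be sketched as simple set logic: an item is delivered only when its prerequisite tags are already fulfilled, and skipped once the tags it fulfils are covered, so any visiting order stays coherent and non-redundant. The item names and structure below are illustrative, not the author's implementation.

```python
# Each content item requires some knowledge-concept tags and fulfils others.
CONTENT = {
    "fault_intro":   {"requires": set(),        "fulfils": {"faulting"}},
    "offset_stream": {"requires": {"faulting"}, "fulfils": {"slip_rate"}},
    "scarp_detail":  {"requires": {"faulting"}, "fulfils": {"scarps"}},
}

def items_for_location(location_items, fulfilled):
    """Deliver background first, skip covered items, update fulfilled tags."""
    delivered, pending = [], list(location_items)
    progress = True
    while progress:
        progress = False
        for item in list(pending):
            spec = CONTENT[item]
            if spec["fulfils"] <= fulfilled:
                pending.remove(item)          # already covered: no redundancy
            elif spec["requires"] <= fulfilled:
                delivered.append(item)        # prerequisites met: deliver
                fulfilled |= spec["fulfils"]
                pending.remove(item)
                progress = True
    return delivered

seen = set()
# Background is delivered before the dependent item, whatever the order given.
print(items_for_location(["offset_stream", "fault_intro"], seen))
```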
Tacutu, Robi; Craig, Thomas; Budovsky, Arie; Wuttke, Daniel; Lehmann, Gilad; Taranukha, Dmitri; Costa, Joana; Fraifeld, Vadim E.; de Magalhães, João Pedro
2013-01-01
The Human Ageing Genomic Resources (HAGR, http://genomics.senescence.info) is a freely available online collection of research databases and tools for the biology and genetics of ageing. HAGR now features several databases with high-quality manually curated data: (i) GenAge, a database of genes associated with ageing in humans and model organisms; (ii) AnAge, an extensive collection of longevity records and complementary traits for >4000 vertebrate species; and (iii) GenDR, a newly incorporated database containing both gene mutations that interfere with dietary restriction-mediated lifespan extension and consistent gene expression changes induced by dietary restriction. Since its creation about 10 years ago, major efforts have been undertaken to maintain the quality of data in HAGR, while further continuing to develop, improve and extend it. This article briefly describes the content of HAGR and details the major updates since its previous publications, in terms of both structure and content. The completely redesigned interface, more intuitive and more integrative of HAGR resources, is also presented. Altogether, we hope that through its improvements, the current version of HAGR will continue to provide users with the most comprehensive and accessible resources available today in the field of biogerontology. PMID:23193293
Using an image-extended relational database to support content-based image retrieval in a PACS.
Traina, Caetano; Traina, Agma J M; Araújo, Myrian R B; Bueno, Josiane M; Chino, Fabio J T; Razente, Humberto; Azevedo-Marques, Paulo M
2005-12-01
This paper presents a new Picture Archiving and Communication System (PACS), called cbPACS, which has content-based image retrieval capabilities. The cbPACS answers range and k-nearest-neighbor similarity queries, employing a relational database manager extended to support images. The images are compared through their features, which are extracted by an image-processing module and stored in the extended relational database. The database extensions were developed with the aim of efficiently answering similarity queries by taking advantage of specialized indexing methods. The main concept supporting the extensions is the definition, inside the relational manager, of distance functions based on features extracted from the images. An extension to the SQL language enables the construction of an interpreter that intercepts the extended commands and translates them to standard SQL, allowing any relational database server to be used. At present, the implemented system works on features based on the color distribution of the images, through normalized histograms as well as metric histograms. Metric histograms are invariant to scale, translation and rotation of images, and also to brightness transformations. The cbPACS is prepared to integrate new image features based on texture and on the shape of the main objects in the image.
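The similarity machinery described above reduces to a distance function over normalized histograms plus a k-nearest-neighbor ranking. A toy sketch with 8-bin histograms and an L1 distance; the real system registers such distance functions inside the SQL engine and accelerates them with specialized indexes.

```python
import numpy as np

def l1_distance(h1, h2):
    """L1 distance between two normalized histograms."""
    return float(np.abs(h1 - h2).sum())

def knn(query_hist, stored, k=3):
    """Rank stored image ids by histogram distance to the query."""
    return sorted(stored, key=lambda i: l1_distance(query_hist, stored[i]))[:k]

rng = np.random.default_rng(0)
db = {i: (h := rng.random(8)) / h.sum() for i in range(10)}  # toy image DB
q = db[3] + 0.01                      # a query image very close to image 3
print(knn(q / q.sum(), db))           # image 3 ranks first
```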
Chaspari, Theodora; Soldatos, Constantin; Maragos, Petros
2015-01-01
The development of ecologically valid procedures for collecting reliable and unbiased emotional data is essential for computer interfaces with social and affective intelligence targeting patients with mental disorders. To this end, the Athens Emotional States Inventory (AESI) proposes the design, recording and validation of an audiovisual database for five emotional states: anger, fear, joy, sadness and neutral. The items of the AESI consist of sentences, each having content indicative of the corresponding emotion. Emotional content was assessed through a survey of 40 young participants with a questionnaire following the Latin square design. The emotional sentences that were correctly identified by 85% of the participants were recorded in a soundproof room with microphones and cameras. A preliminary validation of the AESI is performed through automatic emotion recognition experiments from speech. The resulting database contains 696 recorded utterances in the Greek language by 20 native speakers and has a total duration of approximately 28 min. Speech classification results yield accuracy of up to 75.15% for automatically recognizing the emotions in the AESI. These results indicate the usefulness of our approach for collecting emotional data with reliable content, balanced across classes and with reduced environmental variability.
Partitioning medical image databases for content-based queries on a Grid.
Montagnat, J; Breton, V; E Magnin, I
2005-01-01
In this paper we study the impact of executing a medical image database query application on the grid. To lower the total computation time, the image database is partitioned into subsets to be processed on different grid nodes. A theoretical model of the application complexity and estimates of the grid execution overhead are used to efficiently partition the database. We show results demonstrating that smart partitioning of the database can lead to significant improvements in terms of total computation time. Grids are promising for content-based image retrieval in medical databases.
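One simple reading of complexity-driven partitioning: give each node a slice of the database proportional to its estimated throughput, so all nodes finish at roughly the same time. The node speeds below are invented; the paper derives its partition from a fuller cost model that includes grid execution overheads.

```python
def partition(n_images, node_speeds):
    """Split n_images across nodes in proportion to relative speed."""
    total = sum(node_speeds)
    sizes = [int(n_images * s / total) for s in node_speeds]
    sizes[0] += n_images - sum(sizes)   # hand rounding remainder to node 0
    return sizes

print(partition(10000, [1.0, 2.0, 0.5]))  # [2858, 5714, 1428]
```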
ERIC Educational Resources Information Center
Battle, Gary M.; Allen, Frank H.; Ferrence, Gregory M.
2011-01-01
Parts 1 and 2 of this series described the educational value of experimental three-dimensional (3D) chemical structures determined by X-ray crystallography and retrieved from the crystallographic databases. In part 1, we described the information content of the Cambridge Structural Database (CSD) and discussed a representative teaching subset of…
Influencing Database Use in Public Libraries.
ERIC Educational Resources Information Center
Tenopir, Carol
1999-01-01
Discusses results of a survey of factors influencing database use in public libraries. Highlights the importance of content; ease of use; and importance of instruction. Tabulates importance indications for number and location of workstations, library hours, availability of remote login, usefulness and quality of content, lack of other databases,…
Manual for ERIC Awareness Workshop.
ERIC Educational Resources Information Center
Strohmenger, C. Todd; Lanham, Berma A.
This manual, designed to be used with a video tape, provides information for conducting a workshop to familiarize educators with the Educational Resources Information Center (ERIC). Objectives of the workshop include: (1) to develop an understanding of the contents and structure of the ERIC database; (2) to develop an understanding of ERIC as a…
Tutorial videos of bioinformatics resources: online distribution trial in Japan named TogoTV.
Kawano, Shin; Ono, Hiromasa; Takagi, Toshihisa; Bono, Hidemasa
2012-03-01
In recent years, biological web resources such as databases and tools have become more complex because of the enormous amounts of data generated in the field of life sciences. Traditional methods of distributing tutorials include publishing textbooks and posting web documents, but such static content cannot adequately describe recent dynamic web services. Due to improvements in computer technology, it is now possible to create dynamic content such as video with minimal effort and low cost on most modern computers. The ease of creating and distributing video tutorials instead of static content improves accessibility for researchers, annotators and curators. This article focuses on online video repositories for educational and tutorial videos provided by resource developers and users. It also describes a project in Japan named TogoTV (http://togotv.dbcls.jp/en/) and discusses, with examples, the production and distribution of high-quality tutorial videos that would be useful to viewers. This article intends to stimulate and encourage researchers who develop and use databases and tools to distribute how-to videos as a tool to enhance product usability.
Interactive bibliographical database on color
NASA Astrophysics Data System (ADS)
Caivano, Jose L.
2002-06-01
The paper describes the methodology and results of a project under development, aimed at the elaboration of an interactive bibliographical database on color in all fields of application: philosophy, psychology, semiotics, education, anthropology, physical and natural sciences, biology, medicine, technology, industry, architecture and design, arts, linguistics, geography, history. The project is initially based upon an already developed bibliography, published in different journals, updated on various occasions, and now available on the Internet, with more than 2,000 entries. The interactive database will amplify that bibliography, incorporating hyperlinks and contents (indexes, abstracts, keywords, introductions, or eventually the complete document), and devising mechanisms for information retrieval. The sources to be included are books, doctoral dissertations, multimedia publications, and reference works. The main arrangement will be chronological, but the design of the database will allow rearrangements or selections by different fields: subject, Decimal Classification System, author, language, country, publisher, etc. A further project is to develop another database, including color-specialized journals or newsletters, and articles on color published in international journals, arranged in this case by journal name and date of publication, but also allowing rearrangements or selections by author, subject and keywords.
Method for the reduction of image content redundancy in large image databases
Tobin, Kenneth William; Karnowski, Thomas P.
2010-03-02
A method of increasing information content for content-based image retrieval (CBIR) systems includes the steps of providing a CBIR database, the database having an index for a plurality of stored digital images using a plurality of feature vectors, the feature vectors corresponding to distinct descriptive characteristics of the images. A visual similarity parameter value is calculated based on the degree of visual similarity between the feature vectors of an incoming image being considered for entry into the database and the feature vectors associated with the most similar of the stored images. Based on this visual similarity parameter value, it is determined whether to store, or for how long to store, the feature vectors associated with the incoming image in the database.
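The redundancy test described here reduces, at its core, to comparing an incoming image's feature vector against its nearest neighbor already in the index. Below is a minimal sketch of that decision rule in Python; the cosine-similarity measure and the 0.95 cutoff are illustrative assumptions, not values taken from the patent:

```python
import numpy as np

def should_store(incoming, stored, threshold=0.95):
    """Return True if the incoming image is dissimilar enough to keep.

    incoming: 1-D feature vector; stored: 2-D array, one row per indexed
    image. Cosine similarity and the 0.95 cutoff are illustrative choices.
    """
    sims = stored @ incoming / (
        np.linalg.norm(stored, axis=1) * np.linalg.norm(incoming) + 1e-12)
    return float(sims.max()) < threshold
```

A production system would presumably also use the similarity value to set a retention period for near-duplicates, as the abstract suggests, rather than making a purely binary keep/discard decision.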
Fei, Lin; Zhao, Jing; Leng, Jiahao; Zhang, Shujian
2017-10-12
The ALIPORC full-text database is a specialized full-text database of acupuncture literature from the Republic of China period. Construction began in 2015 and is still ongoing, covering acupuncture-related books, articles and advertising documents written or published during the Republic of China. The database aims to share acupuncture medical literature from this period through diverse retrieval approaches and accurate content presentation, to facilitate exchange among scholars, to reduce the paper damage caused by repeated page-turning, and to simplify retrieval of rare literature. The authors explain the database in terms of its sources, its characteristics and the current state of its construction, and discuss improving the efficiency and completeness of the database and deepening the study of acupuncture literature of the Republic of China period.
The STEP database through the end-users eyes--USABILITY STUDY.
Salunke, Smita; Tuleu, Catherine
2015-08-15
The user-designed database of Safety and Toxicity of Excipients for Paediatrics ("STEP") was created to address the shared need of the drug development community to access relevant information on excipients effortlessly. Usability testing was performed to validate whether the database satisfies the needs of end-users. An evaluation framework was developed to assess usability. Participants performed scenario-based tasks and provided feedback and post-session usability ratings. Failure Mode Effect Analysis (FMEA) was performed to prioritize the problems and improvements to the STEP database design and functionalities. The study revealed several design vulnerabilities. Tasks such as limiting results, running complex queries, locating data and registering for database access were challenging. The three critical attributes identified as affecting the usability of the STEP database were (1) content and presentation, (2) navigation and search features, and (3) potential end-users. The evaluation framework proved to be an effective method for assessing database effectiveness and user satisfaction. This study provides strong initial support for the usability of the STEP database. Recommendations will be incorporated into the refinement of the database to improve its usability and increase user participation in the advancement of the database. Copyright © 2015 Elsevier B.V. All rights reserved.
The Orthology Ontology: development and applications.
Fernández-Breis, Jesualdo Tomás; Chiba, Hirokazu; Legaz-García, María Del Carmen; Uchiyama, Ikuo
2016-06-04
Computational comparative analysis of multiple genomes provides valuable opportunities for biomedical research. In particular, orthology analysis can play a central role in comparative genomics; it guides the establishment of evolutionary relations among genes of organisms and allows functional inference of gene products. However, the wide variation in current orthology databases necessitates research into the shareability of content that is generated by different tools and stored in different structures. Exchanging the content with other research communities requires making the meaning of the content explicit. The need for a common ontology has led to the creation of the Orthology Ontology (ORTH), following best practices in ontology construction. Here, we describe our model and the major entities of the ontology, which is implemented in the Web Ontology Language (OWL), followed by an assessment of the quality of the ontology and the application of ORTH to existing orthology datasets. This shareable ontology enables the development of Linked Orthology Datasets and of a meta-predictor of orthology through standardization of the representation of orthology databases. ORTH is freely available in OWL format to all users at http://purl.org/net/orth . The Orthology Ontology can serve as a framework for the semantic standardization of orthology content, and it will contribute to better exploitation of orthology resources in biomedical research. The results demonstrate the feasibility of developing shareable datasets using this ontology. Further applications will maximize its usefulness.
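Since ORTH is distributed as OWL, it can be inspected with any RDF toolkit. A minimal sketch using Python's rdflib follows, assuming the PURL above resolves to the OWL file; the SPARQL pattern simply lists declared classes and is not a query from the paper:

```python
from rdflib import Graph

g = Graph()
g.parse("http://purl.org/net/orth")  # assumes the PURL serves the OWL file

# List a few classes declared in the ontology
query = """
    SELECT ?cls WHERE { ?cls a <http://www.w3.org/2002/07/owl#Class> }
    LIMIT 10
"""
for row in g.query(query):
    print(row[0])
```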
The Israeli National Genetic database: a 10-year experience.
Zlotogora, Joël; Patrinos, George P
2017-03-16
The Israeli National and Ethnic Mutation database ( http://server.goldenhelix.org/israeli ) was launched in September 2006 on the ETHNOS software to include clinically relevant genomic variants reported among Jewish and Arab Israeli patients. In 2016, the database was reviewed and corrected according to ClinVar ( https://www.ncbi.nlm.nih.gov/clinvar ) and ExAC ( http://exac.broadinstitute.org ) database entries. The present article summarizes some key aspects of the development and continuous update of the database over a 10-year period, which could serve as a paradigm of successful database curation for other similar resources. In September 2016, there were 2444 entries in the database: 890 among Jews, 1376 among Israeli Arabs, and 178 among Palestinian Arabs, corresponding to an ~4× increase in data content compared to when the database was originally launched. While the Israeli Arab population is much smaller than the Jewish population, the number of pathogenic variants causing recessive disorders reported in the database is higher among Arabs (934) than among Jews (648). Nevertheless, the number of pathogenic variants classified as founder mutations in the database is smaller among Arabs (175) than among Jews (192). In 2016, the entire database content was compared to that of other databases such as ClinVar and ExAC. We show that a significant difference in the percentage of pathogenic variants from the Israeli genetic database that were present in ExAC was observed between the Jewish population (31.8%) and the Israeli Arab population (20.6%). The Israeli genetic database was launched in 2006 on the ETHNOS software and has been available online ever since. It allows querying the database by disorder and by ethnicity; however, many other features are not available, in particular the possibility to search by gene name. In addition, owing to the technical limitations of the previous ETHNOS software, new features and data are not included in the present online version of the database, and an upgrade is currently ongoing.
Ontological interpretation of biomedical database content.
Santana da Silva, Filipe; Jansen, Ludger; Freitas, Fred; Schulz, Stefan
2017-06-26
Biological databases store data about laboratory experiments, together with semantic annotations, in order to support data aggregation and retrieval. The exact meaning of such annotations in the context of a database record is often ambiguous. We address this problem by grounding implicit and explicit database content in a formal-ontological framework. By using a typical extract from the databases UniProt and Ensembl, annotated with content from GO, PR, ChEBI and NCBI Taxonomy, we created four ontological models (in OWL), which generate explicit, distinct interpretations under the BioTopLite2 (BTL2) upper-level ontology. The first three models interpret database entries as individuals (IND), defined classes (SUBC), and classes with dispositions (DISP), respectively; the fourth model (HYBR) is a combination of SUBC and DISP. For the evaluation of these four models, we consider (i) database content retrieval, using ontologies as query vocabulary; (ii) information completeness; and, (iii) DL complexity and decidability. The models were tested under these criteria against four competency questions (CQs). IND does not raise any ontological claim, besides asserting the existence of sample individuals and relations among them. Modelling patterns have to be created for each type of annotation referent. SUBC is interpreted regarding maximally fine-grained defined subclasses under the classes referred to by the data. DISP attempts to extract truly ontological statements from the database records, claiming the existence of dispositions. HYBR is a hybrid of SUBC and DISP and is more parsimonious regarding expressiveness and query answering complexity. For each of the four models, the four CQs were submitted as DL queries. This shows the ability to retrieve individuals with IND, and classes in SUBC and HYBR. DISP does not retrieve anything because the axioms with disposition are embedded in General Class Inclusion (GCI) statements. Ambiguity of biological database content is addressed by a method that identifies implicit knowledge behind semantic annotations in biological databases and grounds it in an expressive upper-level ontology. The result is a seamless representation of database structure, content and annotations as OWL models.
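To make the retrieval difference concrete: under SUBC a record becomes a defined class that a DL reasoner can classify and retrieve, whereas under DISP the dispositional claim sits inside a general class inclusion (GCI) whose complex left-hand side cannot be returned as a query answer. The pair of axioms below, in description-logic notation, is a hypothetical illustration of this contrast; the class and relation names are invented, not drawn from the paper's models:

```latex
% SUBC: record rendered as a maximally fine-grained defined subclass (retrievable)
\mathit{RecordX} \equiv \mathit{Protein} \sqcap \exists\,\mathit{hasAnnotation}.\mathit{GO\_0005515}

% DISP: dispositional claim embedded in a GCI (not retrievable as a class answer)
\mathit{Protein} \sqcap \exists\,\mathit{encodedBy}.\mathit{GeneX}
  \sqsubseteq \exists\,\mathit{hasDisposition}.\mathit{Binding}
```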
The Steward Observatory asteroid relational database
NASA Technical Reports Server (NTRS)
Sykes, Mark V.; Alvarezdelcastillo, Elizabeth M.
1992-01-01
The Steward Observatory Asteroid Relational Database (SOARD) was created as a flexible tool for undertaking studies of asteroid populations and sub-populations, for probing the biases intrinsic to asteroid databases, for ascertaining the completeness of data pertaining to specific problems, for aiding in the development of observational programs, and for developing pedagogical materials. To date, SOARD has compiled an extensive list of data available on asteroids and made it accessible through a single menu-driven database program. Users may obtain tailored lists of asteroid properties for any subset of asteroids, or output files suitable for plotting spectral data on individual asteroids. A browse capability allows the user to explore the contents of any data file. SOARD also offers an asteroid bibliography containing about 13,000 references. The program has online help as well as user and programmer documentation manuals. SOARD continues to provide data to fulfill requests by members of the astronomical community and will continue to grow as data are added to the database and new features are added to the program.
The experimental nuclear reaction data (EXFOR): Extended computer database and Web retrieval system
NASA Astrophysics Data System (ADS)
Zerkin, V. V.; Pritychenko, B.
2018-04-01
The EXchange FORmat (EXFOR) experimental nuclear reaction database and the associated Web interface provide access to the wealth of low- and intermediate-energy nuclear reaction physics data. This resource is based on numerical data sets and bibliographical information of ∼22,000 experiments since the beginning of nuclear science. The principles of the computer database organization, its extended contents and Web applications development are described. New capabilities for the data sets uploads, renormalization, covariance matrix, and inverse reaction calculations are presented. The EXFOR database, updated monthly, provides an essential support for nuclear data evaluation, application development, and research activities. It is publicly available at the websites of the International Atomic Energy Agency Nuclear Data Section, http://www-nds.iaea.org/exfor, the U.S. National Nuclear Data Center, http://www.nndc.bnl.gov/exfor, and the mirror sites in China, India and Russian Federation.
ESO telbib: Linking In and Reaching Out
NASA Astrophysics Data System (ADS)
Grothkopf, U.; Meakins, S.
2015-04-01
Measuring an observatory's research output is an integral part of its science operations. Like many other observatories, ESO tracks scholarly papers that use observational data from ESO facilities and uses state-of-the-art tools to create, maintain, and further develop the Telescope Bibliography database (telbib). While telbib started out as a stand-alone tool mostly used to compile lists of papers, it has since developed into a multi-faceted, interlinked system. The core of the telbib database is the set of links between scientific papers and the observational data, generated by the La Silla Paranal Observatory, that reside in the ESO archive. This functionality has also been deployed for ALMA data. In addition, telbib reaches out to several other systems, including ESO press releases, the NASA ADS Abstract Service, databases at the CDS Strasbourg, and impact scores at Altmetric.com. We illustrate these features to show how the interconnected telbib system enhances the content of the database as well as the user experience.
Frank, M S; Dreyer, K
2001-06-01
We describe a working software technology that enables educators to incorporate their expertise and teaching style into highly interactive and Socratic educational material for distribution on the world wide web. A graphically oriented interactive authoring system was developed to enable the computer novice to create and store within a database his or her domain expertise in the form of electronic knowledge. The authoring system supports and facilitates the input and integration of several types of content, including free-form, stylized text, miniature and full-sized images, audio, and interactive questions with immediate feedback. The system enables the choreography and sequencing of these entities for display within a web page as well as the sequencing of entire web pages within a case-based or thematic presentation. Images or segments of text can be hyperlinked with point-and-click to other entities such as adjunctive web pages, audio, or other images, cases, or electronic chapters. Miniature (thumbnail) images are automatically linked to their full-sized counterparts. The authoring system contains a graphically oriented word processor, an image editor, and capabilities to automatically invoke and use external image-editing software such as Photoshop. The system works in both local area network (LAN) and internet-centric environments. An internal metalanguage (invisible to the author but stored with the content) was invented to represent the choreographic directives that specify the interactive delivery of the content on the world wide web. A database schema was developed to objectify and store both this electronic knowledge and its associated choreographic metalanguage. A database engine was combined with page-rendering algorithms in order to retrieve content from the database and deliver it on the web in a Socratic style, assess the recipient's current fund of knowledge, and provide immediate feedback, thus simulating in-person interaction with a human expert. This technology enables the educator to choreograph a stylized, interactive delivery of his or her message using multimedia components assembled in virtually any order, spanning any number of web pages for a given case or theme. An educator can thus exercise precise influence on specific learning objectives, embody his or her personal teaching style within the content, and ultimately enhance its educational impact. The described technology amplifies the efforts of the educator and provides a more dynamic and enriching learning environment for web-based education.
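A schema that "objectifies" both content and its choreographic metalanguage can be made concrete with a toy relational sketch. The tables and columns below are hypothetical, written as a self-contained Python/sqlite3 script; they are not the schema from the article:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE content_item (
    id           INTEGER PRIMARY KEY,
    kind         TEXT NOT NULL,   -- 'text', 'image', 'audio', 'question'
    body         BLOB,
    choreography TEXT             -- metalanguage directives stored with content
);
CREATE TABLE page_sequence (
    page_id  INTEGER NOT NULL,
    item_id  INTEGER NOT NULL REFERENCES content_item(id),
    position INTEGER NOT NULL     -- display order within the page
);
""")
conn.execute(
    "INSERT INTO content_item (kind, body, choreography) VALUES (?, ?, ?)",
    ("question", b"What is the diagnosis?", "reveal-after-answer"),
)
```

The key design idea the abstract describes, storing the delivery directives alongside the content rather than in the rendering code, is what the `choreography` column stands in for here.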
Curating and Preserving the Big Canopy Database System: an Active Curation Approach using SEAD
NASA Astrophysics Data System (ADS)
Myers, J.; Cushing, J. B.; Lynn, P.; Weiner, N.; Ovchinnikova, A.; Nadkarni, N.; McIntosh, A.
2015-12-01
Modern research is increasingly dependent upon highly heterogeneous data and on the associated cyberinfrastructure developed to organize, analyze, and visualize that data. However, due to the complexity and custom nature of such combined data-software systems, it can be very challenging to curate and preserve them for the long term at reasonable cost and in a way that retains their scientific value. In this presentation, we describe how this challenge was met in preserving the Big Canopy Database (CanopyDB) system using an agile approach and leveraging the Sustainable Environment - Actionable Data (SEAD) DataNet project's hosted data services. The CanopyDB system was developed over more than a decade at Evergreen State College to address the needs of forest canopy researchers. It is an early yet sophisticated exemplar of the type of system that has become common in biological research and science in general, including multiple relational databases for different experiments, a custom database generation tool used to create them, an image repository, and desktop and web tools to access, analyze, and visualize this data. SEAD provides secure project spaces with a semantic content abstraction (typed content with arbitrary RDF metadata statements and relationships to other content), combined with a standards-based curation and publication pipeline resulting in packaged research objects with Digital Object Identifiers. Using SEAD, our cross-project team was able to incrementally ingest CanopyDB components (images, datasets, software source code, documentation, executables, and virtualized services) and to iteratively define and extend the metadata and relationships needed to document them. We believe that both the process, and the richness of the resultant standards-based (OAI-ORE) preservation object, hold lessons for the development of best-practice solutions for preserving scientific data in association with the tools and services needed to derive value from it.
The rules of the game: properties of a database of expository language samples.
Heilmann, John; Malone, Thomas O
2014-10-01
The authors created a database of expository oral language samples with the aims of describing the nature of students' expository discourse and providing benchmark data for typically developing preteen and teenage students. Using a favorite game or sport protocol, language samples were collected from 235 typically developing students in Grades 5, 6, 7, and 9. Twelve language measures were summarized from this database and analyses were completed to test for differences across ages and topics. To determine whether distinct dimensions of oral language could be captured with language measures from these expository samples, a factor analysis was completed. Modest differences were observed in language measures across ages and topics. The language measures were effectively classified into four distinct dimensions: syntactic complexity, expository content, discourse difficulties, and lexical diversity. Analysis of expository data provides a functional and curriculum-based assessment that has the potential to allow clinicians to document multiple dimensions of children's expressive language skills. Further development and testing of the database will establish the feasibility of using it to compare individual students' expository discourse skills to those of their typically developing peers.
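As a sketch of the dimensionality analysis reported here, a four-factor model can be fit to a samples-by-measures matrix with scikit-learn. The array shape mirrors the study (235 students, 12 language measures), but the data, and therefore the loadings, are placeholders:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
measures = rng.normal(size=(235, 12))  # placeholder for the 12 language measures

fa = FactorAnalysis(n_components=4, random_state=0).fit(measures)
loadings = fa.components_              # (4 factors x 12 measures)
print(loadings.shape)
```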
The crustal dynamics intelligent user interface anthology
NASA Technical Reports Server (NTRS)
Short, Nicholas M., Jr.; Campbell, William J.; Roelofs, Larry H.; Wattawa, Scott L.
1987-01-01
The National Space Science Data Center (NSSDC) has initiated an Intelligent Data Management (IDM) research effort which has, as one of its components, the development of an Intelligent User Interface (IUI). The intent of the IUI is to develop a friendly and intelligent user interface service based on expert systems and natural language processing technologies. The purpose of such a service is to support the large number of potential scientific and engineering users that have need of space and land-related research and technical data, but have little or no experience in query languages or understanding of the information content or architecture of the databases of interest. This document presents the design concepts, development approach and evaluation of the performance of a prototype IUI system for the Crustal Dynamics Project Database, which was developed using a microcomputer-based expert system tool (M.1), the natural language query processor THEMIS, and the graphics software system GSS. The IUI design is based on a multiple view representation of a database from both the user and database perspective, with intelligent processes to translate between the views.
Pérez-Jiménez, J; Neveu, V; Vos, F; Scalbert, A
2010-11-01
The diversity of the chemical structures of dietary polyphenols makes it difficult to estimate their total content in foods, and also to understand the role of polyphenols in health and the prevention of diseases. Global redox colorimetric assays have commonly been used to estimate the total polyphenol content in foods. However, these assays lack specificity. Contents of individual polyphenols have been determined by chromatography. These data, scattered in several hundred publications, have been compiled in the Phenol-Explorer database. The aim of this paper is to identify the 100 richest dietary sources of polyphenols using this database. Advanced queries in the Phenol-Explorer database (www.phenol-explorer.eu) allowed retrieval of information on the content of 502 polyphenol glycosides, esters and aglycones in 452 foods. Total polyphenol content was calculated as the sum of the contents of all individual polyphenols. These content values were compared with the content of antioxidants estimated using the Folin assay method in the same foods. These values were also extracted from the same database. Amounts per serving were calculated using common serving sizes. A list of the 100 richest dietary sources of polyphenols was produced, with contents varying from 15,000 mg per 100 g in cloves to 10 mg per 100 ml in rosé wine. The richest sources were various spices and dried herbs, cocoa products, some darkly coloured berries, some seeds (flaxseed) and nuts (chestnut, hazelnut) and some vegetables, including olive and globe artichoke heads. A list of the 89 foods and beverages providing more than 1 mg of total polyphenols per serving was established. A comparison of total polyphenol contents with antioxidant contents, as determined by the Folin assay, also showed that Folin values systematically exceed the total polyphenol content values. The comprehensive Phenol-Explorer data were used for the first time to identify the richest dietary sources of polyphenols and the foods contributing most significantly to polyphenol intake as inferred from their content per serving.
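The two derived quantities in this study are straightforward to reproduce: total polyphenol content is the sum over individual compounds per 100 g, and per-serving amounts rescale that total by a common serving size. A minimal sketch follows, using the clove total quoted above; the individual-compound split and the serving size are placeholders:

```python
# mg per 100 g; the 15,000 mg total for cloves is the value quoted in the
# abstract, but the split into individual compounds here is a placeholder
polyphenols_cloves = {"compound_a": 14000.0, "compound_b": 1000.0}

total_per_100g = sum(polyphenols_cloves.values())      # 15000 mg / 100 g
serving_size_g = 2.0                                   # assumed serving of ground cloves
per_serving = total_per_100g * serving_size_g / 100.0  # 300 mg per serving
print(per_serving)
```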
Kobayashi, Norio; Ishii, Manabu; Takahashi, Satoshi; Mochizuki, Yoshiki; Matsushima, Akihiro; Toyoda, Tetsuro
2011-01-01
As global cloud frameworks for bioinformatics research databases grow huge and heterogeneous, solutions face diametric challenges spanning cross-integration, retrieval, security and openness. To address this, as of March 2011 organizations including RIKEN published 192 mammalian, plant and protein life-science databases holding 8.2 million data records, integrated as Linked Open or Private Data (LOD/LPD) using SciNetS.org, the Scientists' Networking System. The huge quantity of linked data this database integration framework covers is based on the Semantic Web, where researchers collaborate by managing metadata across public and private databases in a secured data space. This outstripped the data query capacity of existing interface tools such as SPARQL. Actual research also requires specialized tools for data analysis using raw original data. To solve these challenges, in December 2009 we developed the lightweight Semantic-JSON interface to access each fragment of linked and raw life-science data securely under the control of programming languages popularly used by bioinformaticians, such as Perl and Ruby. Researchers successfully used the interface across 28 million semantic relationships for biological applications including genome design, sequence processing, inference over phenotype databases, full-text search indexing and human-readable contents such as ontology and LOD tree viewers. Semantic-JSON services of SciNetS.org are provided at http://semanticjson.org. PMID:21632604
United States national land cover database development: 1992-2001 and beyond
Yang, L.
2008-01-01
An accurate, up-to-date and spatially explicit national land cover database is required for monitoring the status and trends of the nation's terrestrial ecosystems, and for managing and conserving land resources at the national scale. Given the challenges and resources required to develop such a database, innovative and scientifically sound planning must be in place and a partnership formed among users from government agencies, research institutes and the private sector. In this paper, we summarize major scientific and technical issues regarding the development of the NLCD 1992 and 2001. Experiences and lessons learned from the project are documented with regard to project design, technical approaches, accuracy assessment strategy, and project implementation. Future improvements in developing the next-generation NLCD beyond 2001 are suggested, including: 1) enhanced satellite data preprocessing to correct atmospheric and adjacency effects and to normalize topography; 2) improved classification accuracy through comprehensive and consistent training data and new algorithm development; 3) a multi-resolution and multi-temporal database targeting major land cover changes and land cover database updates; 4) enriched database contents, including additional biophysical parameters and/or more detailed land cover classes, through synergizing multi-sensor, multi-temporal and multi-spectral satellite data and ancillary data; and 5) transformation of the NLCD project into a national land cover monitoring program. © 2008 IEEE.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rayl, K.D.; Gaasterland, T.
This paper presents an overview of the purpose, content, and design of a subset of the currently available biological databases, with an emphasis on protein databases. Databases included in this summary are 3D-ALI, Berlin RNA databank, Blocks, DSSP, EMBL Nucleotide Database, EMP, ENZYME, FSSP, GDB, GenBank, HSSP, LiMB, PDB, PIR, PKCDD, ProSite, and SWISS-PROT. The goal is to provide a starting point for researchers who wish to take advantage of the myriad available databases. Rather than providing a complete explanation of each database, we present its content and form by explaining the details of typical entries. Pointers to more complete "user guides" are included, along with general information on where to search for a new database.
A rudimentary database for three-dimensional objects using structural representation
NASA Technical Reports Server (NTRS)
Sowers, James P.
1987-01-01
A database which enables users to store and share the description of three-dimensional objects in a research environment is presented. The main objective of the design is to make it a compact structure that holds sufficient information to reconstruct the object. The database design is based on an object representation scheme which is information preserving, reasonably efficient, and yet economical in terms of the storage requirement. The determination of the needed data for the reconstruction process is guided by the belief that it is faster to do simple computations to generate needed data/information for construction than to retrieve everything from memory. Some recent techniques of three-dimensional representation that influenced the design of the database are discussed. The schema for the database and the structural definition used to define an object are given. The user manual for the software developed to create and maintain the contents of the database is included.
NASA Astrophysics Data System (ADS)
Estima, Jacinto Paulo Simoes
Traditional geographic information has been produced by mapping agencies and corporations, using highly skilled people as well as expensive precision equipment and procedures, in a very costly approach. The production of land use and land cover databases is just one example of this traditional approach. On the other hand, the amount of geographic information created and shared by citizens through the Web has increased exponentially during the last decade, resulting from the emergence and popularization of technologies such as Web 2.0, cloud computing, GPS and smartphones. Such a comprehensive amount of free geographic data may contain valuable information, opening great possibilities for significantly improving the production of land use and land cover databases. In this thesis we explored the feasibility of using geographic data from different user-generated spatial content initiatives in the process of land use and land cover database production. Data from Panoramio, Flickr and OpenStreetMap were explored in terms of their spatial and temporal distribution, and their distribution over the different land use and land cover classes. We then proposed a conceptual model to integrate data from suitable user-generated spatial content initiatives, based on dissimilarities identified among a comprehensive list of initiatives. Finally, we developed a prototype implementing the proposed integration model, which was then validated by using the prototype to solve four identified use cases. We concluded that data from user-generated spatial content initiatives have great value but should be integrated to increase their potential. The possibility of integrating data from such initiatives in an integration model was demonstrated, and the relevance of the integration model was shown for different use cases using the prototype.
Kingfisher: a system for remote sensing image database management
NASA Astrophysics Data System (ADS)
Bruzzo, Michele; Giordano, Ferdinando; Dellepiane, Silvana G.
2003-04-01
At present, retrieval methods in remote sensing image databases are mainly based on spatio-temporal information. The increasing number of images collected by the ground stations of earth observing systems emphasizes the need for database management with intelligent data retrieval capabilities. The purpose of the proposed method is to realize a new content-based retrieval system for remote sensing image databases with an innovative search tool based on image similarity. This methodology is quite innovative for this application: many systems exist for photographic images, for example QBIC and IKONA, but they are not able to properly extract and describe remote sensing image content. The target database is an archive of images originated from an X-SAR sensor (spaceborne mission, 1994). The best content descriptors, mainly texture parameters, guarantee high retrieval performance and can be extracted without loss, independently of image resolution. The latter property allows the DBMS (Database Management System) to process a low amount of information, as in the case of quick-look images, improving time performance and memory access without reducing retrieval accuracy. The matching technique has been designed to enable image management (database population and retrieval) independently of image dimensions (width and height). Local and global content descriptors are compared with the query image during the retrieval phase, and results seem to be very encouraging.
Odense Pharmacoepidemiological Database: A Review of Use and Content.
Hallas, Jesper; Hellfritzsch, Maja; Rix, Morten; Olesen, Morten; Reilev, Mette; Pottegård, Anton
2017-05-01
The Odense University Pharmacoepidemiological Database (OPED) is a prescription database established in 1990 by the University of Southern Denmark, covering reimbursed prescriptions from the county of Funen in Denmark and the region of Southern Denmark (1.2 million inhabitants). It is still active and thereby has more than 25 years of continuous coverage. In this MiniReview, we review its history, content, quality, coverage, governance and some of its uses. OPED's data include the Danish Civil Registration Number (CPR), which enables unambiguous linkage with virtually all other health-related registers in Denmark. Among its research uses, we review record linkage studies of drug effects, advanced drug utilization studies, some examples of method development and use of OPED as a sampling frame to recruit patients for field studies or clinical trials. With the advent of other, more comprehensive sources of prescription data in Denmark, OPED may still play a role in certain data-intensive regional studies. © 2017 Nordic Association for the Publication of BCPT (former Nordic Pharmacological Society).
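The CPR-based linkage that underpins OPED's record-linkage studies amounts to a key join between prescription records and any other CPR-keyed register. A minimal pandas sketch, with hypothetical file names and columns:

```python
import pandas as pd

# Hypothetical extracts, both keyed on the (pseudonymized) CPR number
rx = pd.read_csv("oped_prescriptions.csv")         # cpr, atc_code, dispense_date
admissions = pd.read_csv("hospital_register.csv")  # cpr, diagnosis, admit_date

# One row per (prescription, admission) pair for the same person
linked = rx.merge(admissions, on="cpr", how="inner")
print(len(linked))
```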
Intelligent Interfaces for Mining Large-Scale RNAi-HCS Image Databases
Lin, Chen; Mak, Wayne; Hong, Pengyu; Sepp, Katharine; Perrimon, Norbert
2010-01-01
Recently, high-content screening (HCS) has been combined with RNA interference (RNAi) to become an essential image-based high-throughput method for studying genes and biological networks through RNAi-induced cellular phenotype analyses. However, a genome-wide RNAi-HCS screen typically generates tens of thousands of images, most of which remain uncategorized due to the inadequacies of existing HCS image analysis tools. It still requires highly trained scientists to browse a prohibitively large RNAi-HCS image database and produce only a handful of qualitative results regarding cellular morphological phenotypes. For this reason we have developed intelligent interfaces to facilitate the application of HCS technology in biomedical research. Our new interfaces empower biologists with computational power not only to effectively and efficiently explore large-scale RNAi-HCS image databases, but also to apply their knowledge and experience to the interactive mining of cellular phenotypes using Content-Based Image Retrieval (CBIR) with Relevance Feedback (RF) techniques. PMID:21278820
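Relevance feedback in CBIR is classically implemented as a Rocchio-style update that moves the query vector toward images the biologist marks as relevant and away from the rest. The interfaces described here may use a different formulation; the sketch below is the textbook rule, with standard illustrative weights:

```python
import numpy as np

def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """One round of relevance feedback over image feature vectors."""
    q = alpha * np.asarray(query, dtype=float)
    if len(relevant):
        q = q + beta * np.mean(relevant, axis=0)
    if len(nonrelevant):
        q = q - gamma * np.mean(nonrelevant, axis=0)
    return q  # re-run the nearest-neighbor search with the updated query
```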
Multimodality medical image database for temporal lobe epilepsy
NASA Astrophysics Data System (ADS)
Siadat, Mohammad-Reza; Soltanian-Zadeh, Hamid; Fotouhi, Farshad A.; Elisevich, Kost
2003-05-01
This paper presents the development of a human brain multi-modality database for surgical candidacy determination in temporal lobe epilepsy. The focus of the paper is on content-based image management, navigation and retrieval. Several medical image-processing methods, including our newly developed segmentation method, are utilized for information extraction/correlation and indexing. The input data include T1-weighted, T2-weighted and FLAIR MRI and ictal/interictal SPECT modalities, with associated clinical data and EEG data analysis. The database can answer queries regarding issues such as the correlation between attribute X of entity Y and the outcome of temporal lobe epilepsy surgery. The entity Y can be a brain anatomical structure such as the hippocampus. The attribute X can be either a functional feature of the anatomical structure Y, calculated from SPECT modalities, such as average signal, or a volumetric/morphological feature of the entity Y, such as volume or average curvature. The outcome of the surgery can be any surgical assessment, such as the non-verbal Wechsler memory quotient. A determination regarding surgical candidacy is made by analysis of both textual and image data. The current database system suggests surgical candidacy for cases with a relatively small hippocampus and high average signal intensity on FLAIR images within the hippocampus. This indication matches the neurosurgeons' expectations/observations. Moreover, as the database becomes more populated with patient profiles and individual surgical outcomes, data mining methods may reveal partially invisible correlations between the contents of different data modalities and the outcome of the surgery.
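The kind of query the database is said to answer ("small hippocampus and high FLAIR signal, joined to surgical outcome") maps naturally onto SQL. The sketch below uses invented table and column names and illustrative thresholds, run against a hypothetical local SQLite mirror:

```python
import sqlite3

conn = sqlite3.connect("tle_multimodal.db")  # hypothetical local mirror
rows = conn.execute(
    """
    SELECT h.patient_id, h.volume_mm3, h.flair_signal_avg, o.memory_quotient
    FROM hippocampus h
    JOIN surgery_outcome o ON o.patient_id = h.patient_id
    WHERE h.volume_mm3 < ? AND h.flair_signal_avg > ?
    """,
    (2500.0, 120.0),  # illustrative thresholds, not values from the paper
).fetchall()
```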
EDCs DataBank: 3D-Structure database of endocrine disrupting chemicals.
Montes-Grajales, Diana; Olivero-Verbel, Jesus
2015-01-02
Endocrine disrupting chemicals (EDCs) are a group of compounds that affect the endocrine system, frequently found in everyday products and epidemiologically associated with several diseases. The purpose of this work was to develop EDCs DataBank, the only database of EDCs with three-dimensional structures. This database was built on MySQL using the EU list of potential endocrine disruptors and TEDX list. It contains the three-dimensional structures available on PubChem, as well as a wide variety of information from different databases and text mining tools, useful for almost any kind of research regarding EDCs. The web platform was developed employing HTML, CSS and PHP languages, with dynamic contents in a graphic environment, facilitating information analysis. Currently EDCs DataBank has 615 molecules, including pesticides, natural and industrial products, cosmetics, drugs and food additives, among other low molecular weight xenobiotics. Therefore, this database can be used to study the toxicological effects of these molecules, or to develop pharmaceuticals targeting hormone receptors, through docking studies, high-throughput virtual screening and ligand-protein interaction analysis. EDCs DataBank is totally user-friendly and the 3D-structures of the molecules can be downloaded in several formats. This database is freely available at http://edcs.unicartagena.edu.co. Copyright © 2014. Published by Elsevier Ireland Ltd.
Just-in-Time Correntropy Soft Sensor with Noisy Data for Industrial Silicon Content Prediction
Chen, Kun; Liang, Yu; Gao, Zengliang; Liu, Yi
2017-01-01
Development of accurate data-driven quality prediction models for industrial blast furnaces encounters several challenges, mainly because the collected data are nonlinear, non-Gaussian, and unevenly distributed. A just-in-time correntropy-based local soft sensing approach is presented to predict the silicon content in this work. Without cumbersome efforts for outlier detection, a correntropy support vector regression (CSVR) modeling framework is proposed to deal with the soft sensor development and outlier detection simultaneously. Moreover, with a continuously updating database and a clustering strategy, a just-in-time CSVR (JCSVR) method is developed. Consequently, more accurate prediction and efficient implementations of JCSVR can be achieved. The better prediction performance of JCSVR is validated on online silicon content prediction, compared with traditional soft sensors. PMID:28786957
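A rough sketch of the just-in-time idea follows: cluster the historical database once, then fit a local regressor on the query's cluster only. Standard epsilon-SVR from scikit-learn is used as a stand-in for the correntropy variant (CSVR) developed in the paper, so this illustrates only the "just-in-time plus clustering" scaffolding:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVR

def jit_predict(X_hist, y_hist, x_query, n_clusters=5):
    """Local prediction: fit a model on the query's cluster only."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X_hist)
    label = km.predict(x_query.reshape(1, -1))[0]
    mask = km.labels_ == label
    model = SVR(kernel="rbf").fit(X_hist[mask], y_hist[mask])  # stand-in for CSVR
    return model.predict(x_query.reshape(1, -1))[0]
```

In practice the clustering would be refreshed as the database is continuously updated, which is the "just-in-time" aspect the abstract emphasizes.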
Automating the training development process for mission flight operations
NASA Technical Reports Server (NTRS)
Scott, Carol J.
1994-01-01
Traditional methods of developing training do not effectively support the changing needs of operational users in a multimission environment. The Automated Training Development System (ATDS) provides advantages over conventional methods in quality, quantity, turnaround, database maintenance, and focus on individualized instruction. The Operations System Training Group at JPL performed a six-month study to assess the potential of ATDS to automate curriculum development and to generate and maintain course materials. To begin the study, the group acquired readily available hardware and participated in a two-week training session to introduce the process. ATDS combines training's traditional information-gathering with a hierarchical method for interleaving the elements. The program can be described fairly simply: a comprehensive list of candidate tasks determines the content of the database; from that database, selected critical tasks dictate which skill and knowledge competencies to include in course material for the target audience. The training developer adds pertinent planning information about each task to the database, and ATDS then generates a tailored set of instructional material based on the specified selection criteria.
Comparison of flavonoid intake assessment methods.
Ivey, Kerry L; Croft, Kevin; Prince, Richard L; Hodgson, Jonathan M
2016-09-14
Flavonoids are a diverse group of polyphenolic compounds found in high concentrations in many plant foods and beverages. High flavonoid intake has been associated with reduced risk of chronic disease. To date, population-based studies have used the United States Department of Agriculture (USDA) food content database to determine habitual flavonoid intake. More recently, a new flavonoid food content database, Phenol-Explorer (PE), has been developed; however, the level of agreement between the two databases is yet to be explored. The aim was to compare the methods used to create each database and to explore the level of agreement between flavonoid intake estimates derived from USDA and PE data. The study population included 1063 randomly selected women aged over 75 years. Two separate intake estimates were determined using food composition data from the USDA and PE databases. There were many similarities in the methods used to create each database; however, several methodological differences manifest themselves in differences in flavonoid intake estimates between the two databases. Despite differences in net estimates, there was a strong level of agreement between total-flavonoid, flavanol, flavanone and anthocyanidin intake estimates derived from each database. Intake estimates for flavanol monomers showed greater agreement than those for flavanol polymers. The level of agreement between the two databases was weakest for the flavonol and flavone intake estimates. In this population, the application of USDA and PE source data yielded highly correlated intake estimates for total flavonoids, flavanols, flavanones and anthocyanidins; for these subclasses, the USDA and PE databases may be used interchangeably in epidemiological investigations. There was poorer correlation between intake estimates for flavonols and flavones due to differences in USDA and PE methodologies. Individual flavonoid compound groups that comprise the flavonoid subclasses had varying levels of agreement. As such, when determining the appropriate database for calculating flavonoid intake variables, it is important to consider the methodologies underpinning database creation and which foods are important contributors to dietary intake in the population of interest.
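Agreement between the two sets of intake estimates can be quantified with a rank correlation, which is one plausible reading of the "level of agreement" analyses described above. In the sketch below the arrays are placeholders for per-participant estimates:

```python
import numpy as np
from scipy.stats import spearmanr

usda = np.array([120.0, 310.5, 95.2, 410.0])  # mg/day per participant (placeholder)
pe   = np.array([115.0, 298.0, 101.3, 395.5])

rho, p_value = spearmanr(usda, pe)
print(f"Spearman rho = {rho:.3f} (p = {p_value:.3g})")
```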
Rothwell, Joseph A.; Perez-Jimenez, Jara; Neveu, Vanessa; Medina-Remón, Alexander; M'Hiri, Nouha; García-Lobato, Paula; Manach, Claudine; Knox, Craig; Eisner, Roman; Wishart, David S.; Scalbert, Augustin
2013-01-01
Polyphenols are a major class of bioactive phytochemicals whose consumption may play a role in the prevention of a number of chronic diseases such as cardiovascular diseases, type II diabetes and cancers. Phenol-Explorer, launched in 2009, is the only freely available web-based database on the content of polyphenols in food and their in vivo metabolism and pharmacokinetics. Here we report the third release of the database (Phenol-Explorer 3.0), which adds data on the effects of food processing on polyphenol contents in foods. Data on >100 foods, covering 161 polyphenols or groups of polyphenols before and after processing, were collected from 129 peer-reviewed publications and entered into new tables linked to the existing relational design. The effect of processing on polyphenol content is expressed in the form of retention factor coefficients, or the proportion of a given polyphenol retained after processing, adjusted for change in water content. The result is the first database on the effects of food processing on polyphenol content and, following the model initially defined for Phenol-Explorer, all data may be traced back to original sources. The new update will allow polyphenol scientists to more accurately estimate polyphenol exposure from dietary surveys. Database URL: http://www.phenol-explorer.eu PMID:24103452
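One plausible way to compute a retention factor "adjusted for change in water content" is to express both contents on a dry-weight basis before taking the ratio. The helper below encodes that assumption; it is a sketch, not the formula documented by Phenol-Explorer:

```python
def retention_factor(c_processed, c_raw, dm_processed, dm_raw):
    """Proportion of a polyphenol retained after processing.

    c_*: content in mg per 100 g fresh weight; dm_*: dry-matter fraction.
    Dry-weight normalization is an assumption standing in for the
    database's water-content adjustment.
    """
    return (c_processed / dm_processed) / (c_raw / dm_raw)

# e.g. boiling leaches polyphenols but also changes the water content
print(retention_factor(c_processed=40.0, c_raw=100.0,
                       dm_processed=0.10, dm_raw=0.12))
```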
NASA Astrophysics Data System (ADS)
Diaz-Merced, Wanda Liz; Casado, Johanna; Garcia, Beatriz; Aarnio, Alicia; Knierman, Karen; Monkiewicz, Jacqueline
2018-01-01
"Big Data" is a subject that has taken on special relevance today, particularly in astrophysics, where continuous advances in technology are leading to ever larger data sets. A multimodal approach to the perception of astronomical data (achieved through sonification used for the processing of data) increases the detection of signals at very low signal-to-noise ratios and is of special importance for achieving greater inclusion in the field of astronomy. In the last ten years, different software tools have been developed that perform the sonification of astronomical data from tables or databases; among the best known, and in multiplatform development, are Sonification Sandbox, MathTrack, and xSonify. In order to determine the accessibility of software, we propose to begin by carrying out a conformity analysis against ISO (International Standard Organization) 9241-171:2008. This standard establishes the general guidelines that must be taken into account for accessibility in software design, and it applies to software used at work, in public places, and at home. To analyze the accessibility of web databases, we take into account the Web Content Accessibility Guidelines (WCAG) 2.0, accepted and published by ISO in the ISO/IEC 40500:2012 standard. In this poster, we present a User Centered Design (UCD), Human Computer Interaction (HCI), and User Experience (UX) framework to address a non-segregational provision of access to bibliographic and telemetry databases in astronomy. Our framework is based on an ISO evaluation of a selection of databases such as ADS, Simbad and SDSS. The WCAG 2.0 and ISO 9241-171:2008 should not be taken as absolute accessibility standards: these guidelines are very general, are not absolute, and do not address particularities. They are not a substitute for UCD, HCI and UX design and evaluation. Based on our results, this research presents the framework for a focus group and qualitative data analysis aimed at laying the foundations for the employment of UCD functionalities in astronomical databases.
Preclinical tests of an android based dietary logging application.
Kósa, István; Vassányi, István; Pintér, Balázs; Nemes, Márta; Kámánné, Krisztina; Kohut, László
2014-01-01
The paper describes the first, preclinical evaluation of a dietary logging application developed at the University of Pannonia, Hungary. The mobile user interface is briefly introduced. The three evaluation phases examined the completeness and content of the dietary database and the time expenditure of the mobile-based diet logging procedure. The results show that although there are substantial individual differences between various dietary databases, the expected difference in nutrient content is below 10% for a typical institutional menu list. Another important finding is that the time needed to record meals can be reduced to about 3 minutes daily, especially if the user uses set-based search. A well-designed user interface on a mobile device is a viable and reliable way to deliver a personalized lifestyle support service.
Garrido-Martín, Diego; Pazos, Florencio
2018-02-27
The exponential accumulation of new sequences in public databases is expected to improve the performance of all the approaches for predicting protein structural and functional features. Nevertheless, this was never assessed or quantified for some widely used methodologies, such as those aimed at detecting functional sites and functional subfamilies in protein multiple sequence alignments. Using raw protein sequences as only input, these approaches can detect fully conserved positions, as well as those with a family-dependent conservation pattern. Both types of residues are routinely used as predictors of functional sites and, consequently, understanding how the sequence content of the databases affects them is relevant and timely. In this work we evaluate how the growth and change with time in the content of sequence databases affect five sequence-based approaches for detecting functional sites and subfamilies. We do that by recreating historical versions of the multiple sequence alignments that would have been obtained in the past based on the database contents at different time points, covering a period of 20 years. Applying the methods to these historical alignments allows quantifying the temporal variation in their performance. Our results show that the number of families to which these methods can be applied sharply increases with time, while their ability to detect potentially functional residues remains almost constant. These results are informative for the methods' developers and final users, and may have implications in the design of new sequencing initiatives.
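The fully conserved positions these methods start from can be scored directly from an alignment, for instance with per-column Shannon entropy (zero for full conservation). A minimal sketch, independent of any of the five evaluated tools:

```python
import math
from collections import Counter

def column_entropy(column):
    """Shannon entropy (bits) of one alignment column; 0 = fully conserved."""
    counts = Counter(column)
    n = len(column)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

alignment = ["MKVLS", "MKILS", "MRVLT"]  # toy alignment, one sequence per row
scores = [column_entropy(col) for col in zip(*alignment)]
print(scores)  # columns 0 and 3 are fully conserved -> 0.0
```

Family-dependent conservation patterns, the other residue type discussed above, require partitioning the sequences into subfamilies first and then comparing per-subfamily column distributions, which is where the five evaluated methods differ.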
Creation of the NaSCoRD Database
DOE Office of Scientific and Technical Information (OSTI.GOV)
Denman, Matthew R.; Jankovsky, Zachary Kyle; Stuart, William
This report was written as part of a United States Department of Energy (DOE), Office of Nuclear Energy, Advanced Reactor Technologies program funded project to re-create the capabilities of the legacy Centralized Reliability Database Organization (CREDO) database. The CREDO database provided a record of component design and performance documentation across various systems that used sodium as a working fluid. Regaining this capability will allow the DOE complex and the domestic sodium reactor industry to better understand how previous systems were designed and built, in order to improve the design and operation of future loops. The contents of this report include: an overview of the current state of domestic sodium reliability databases; a summary of the ongoing effort to improve, understand, and process the CREDO information; a summary of the initial efforts to develop a unified sodium reliability database called the Sodium System Component Reliability Database (NaSCoRD); and an explanation of how potential users can access the domestic sodium reliability databases and of the type of information that can be accessed from them.
NASA Astrophysics Data System (ADS)
Shanafield, Harold; Shamblin, Stephanie; Devarakonda, Ranjeet; McMurry, Ben; Walker Beaty, Tammy; Wilson, Bruce; Cook, Robert B.
2011-02-01
The FLUXNET global network of regional flux tower networks serves to coordinate the regional and global analysis of eddy covariance based CO2, water vapor and energy flux measurements taken at more than 500 sites in continuous long-term operation. The FLUXNET database presently contains information about the location, characteristics, and data availability of each of these sites. To facilitate the coordination and distribution of this information, we redesigned the underlying database and associated web site. We chose the PostgreSQL database as a platform based on its performance, stability and GIS extensions. PostgreSQL allows us to enhance our search and presentation capabilities, which will in turn provide increased functionality for users seeking to understand the FLUXNET data. The redesigned database will also significantly decrease the burden of managing such highly varied data. The website is being developed using the Drupal content management system, which provides many community-developed modules and a robust framework for custom feature development. In parallel, we are working with the regional networks to ensure that the information in the FLUXNET database is identical to that in the regional networks. Going forward, we also plan to develop an automated way to synchronize information with the regional networks.
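The GIS extensions mentioned here (PostGIS, in all likelihood) enable spatial searches over tower locations, such as "sites within a radius of a point". A hypothetical sketch via psycopg2 follows; the database name, table, and columns are invented for illustration:

```python
import psycopg2

conn = psycopg2.connect(dbname="fluxnet", user="reader", host="localhost")
cur = conn.cursor()
# Find sites within 100 km of a point (lon, lat); table/columns hypothetical
cur.execute(
    """
    SELECT site_id, site_name
    FROM sites
    WHERE ST_DWithin(location::geography,
                     ST_SetSRID(ST_MakePoint(%s, %s), 4326)::geography,
                     %s)
    """,
    (-105.55, 40.03, 100_000),
)
print(cur.fetchall())
```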
Development and evaluation of a dynamic web-based application.
Hsieh, Yichuan; Brennan, Patricia Flatley
2007-10-11
Traditional consumer health informatics (CHI) applications developed for the lay public on the Web have commonly been written in Hypertext Markup Language (HTML). As genetics knowledge advances rapidly and requires timely updating of information, a different content structure is needed to facilitate information delivery. This poster presents the process of developing a dynamic, database-driven Web CHI application.
The Biomolecular Crystallization Database Version 4: expanded content and new features.
Tung, Michael; Gallagher, D Travis
2009-01-01
The Biological Macromolecular Crystallization Database (BMCD) has been a publicly available resource since 1988, providing a curated archive of information on crystal growth for proteins and other biological macromolecules. The BMCD content has recently been expanded to include 14 372 crystal entries. The resource continues to be freely available at http://xpdb.nist.gov:8060/BMCD4. In addition, the software has been adapted to support the Java-based Lucene query language, enabling detailed searching over specific parameters, and explicit search of parameter ranges is offered for five numeric variables. Extensive tools have been developed for import and handling of data from the RCSB Protein Data Bank. The updated BMCD is called version 4.02 or BMCD4. BMCD4 entries have been expanded to include macromolecule sequence, enabling more elaborate analysis of relations among protein properties, crystal-growth conditions and the geometric and diffraction properties of the crystals. The BMCD version 4.02 contains greatly expanded content and enhanced search capabilities to facilitate scientific analysis and design of crystal-growth strategies.
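Lucene's query syntax supports the numeric-range searches described here directly. The field names below are invented for illustration; BMCD4's own documentation defines the real ones:

```python
# A Lucene-style query string combining a term search with two numeric ranges
query = "macromolecule:lysozyme AND ph:[4.0 TO 6.5] AND temperature:[290 TO 298]"
# Such a string would be submitted through the BMCD4 search interface or any
# application using the Lucene query parser; the field names are hypothetical.
print(query)
```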
Basner, Jodi E; Theisz, Katrina I; Jensen, Unni S; Jones, C David; Ponomarev, Ilya; Sulima, Pawel; Jo, Karen; Eljanne, Mariam; Espey, Michael G; Franca-Koh, Jonathan; Hanlon, Sean E; Kuhn, Nastaran Z; Nagahara, Larry A; Schnell, Joshua D; Moore, Nicole M
2013-12-01
Development of effective quantitative indicators and methodologies to assess the outcomes of cross-disciplinary collaborative initiatives has the potential to improve scientific program management and scientific output. This article highlights an example of a prospective evaluation that has been developed to monitor and improve progress of the National Cancer Institute Physical Sciences-Oncology Centers (PS-OC) program. Study data, including collaboration information, was captured through progress reports and compiled using the web-based analytic database: Interdisciplinary Team Reporting, Analysis, and Query Resource. Analysis of collaborations was further supported by data from the Thomson Reuters Web of Science database, MEDLINE database, and a web-based survey. Integration of novel and standard data sources was augmented by the development of automated methods to mine investigator pre-award publications, assign investigator disciplines, and distinguish cross-disciplinary publication content. The results highlight increases in cross-disciplinary authorship collaborations from pre- to post-award years among the primary investigators and confirm that a majority of cross-disciplinary collaborations have resulted in publications with cross-disciplinary content that rank in the top third of their field. With these evaluation data, PS-OC Program officials have provided ongoing feedback to participating investigators to improve center productivity and thereby facilitate a more successful initiative. Future analysis will continue to expand these methods and metrics to adapt to new advances in research evaluation and changes in the program.
Li, Min; Dong, Xiang-yu; Liang, Hao; Leng, Li; Zhang, Hui; Wang, Shou-zhi; Li, Hui; Du, Zhi-Qiang
2017-05-20
Effective management and analysis of precisely recorded phenotypic traits are important components of the selection and breeding of superior livestock. Over two decades, we divergently selected chicken lines for abdominal fat content at Northeast Agricultural University (Northeast Agricultural University High and Low Fat, NEAUHLF) and collected a large volume of phenotypic data related to the investigation of the molecular genetic basis of adipose tissue deposition in broilers. To effectively and systematically store, manage and analyze these phenotypic data, we built the NEAUHLF Phenome Database (NEAUHLFPD). NEAUHLFPD includes the following phenotypic records: pedigree (generations 1-19) and 29 phenotypes, such as body sizes and weights, carcass traits and their corresponding rates. The design and construction of NEAUHLFPD were executed as follows: (1) Framework design. We used Apache as our web server, MySQL and Navicat as database management tools, and PHP as the HTML-embedded language to create a dynamic, interactive website. (2) Structural components. The main interface provides a detailed introduction to the composition and function of the database, along with index buttons for its basic structure. The functional modules of NEAUHLFPD have two main components: the first module is the physical storage space for phenotypic data, in which functional manipulations of the data can be performed, such as indexing, filtering, range-setting, searching, etc.; the second module calculates basic descriptive statistics, where data filtered from the database can be used to compute basic statistical parameters and to perform simultaneous conditional sorting. NEAUHLFPD can be used to effectively store and manage not only phenotypic but also genotypic and genomic data, which will facilitate further investigation of the molecular genetic basis of chicken adipose tissue growth and development, and expedite the selection and breeding of broilers with low fat content.
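As an illustration of the second functional module (basic descriptive statistics over filtered records), the following is a minimal sketch in Python against a MySQL backend. The production system described above implements this in PHP, and the table and column names here (phenotypes, trait_value, etc.) are invented for the example.

    # Sketch: filter phenotype records, then compute descriptive statistics.
    import statistics
    import pymysql  # any DB-API driver for MySQL would do

    conn = pymysql.connect(host="localhost", user="reader", database="neauhlfpd")
    with conn.cursor() as cur:
        # Hypothetical filter: abdominal fat weight, lean line, generation 10.
        cur.execute(
            "SELECT trait_value FROM phenotypes "
            "WHERE trait_name=%s AND line=%s AND generation=%s",
            ("abdominal_fat_weight", "low", 10))
        values = [row[0] for row in cur.fetchall()]

    print("n =", len(values))
    print("mean =", statistics.mean(values))
    print("sd =", statistics.stdev(values))
    print("range =", (min(values), max(values)))
    conn.close()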
Document creation, linking, and maintenance system
Claghorn, Ronald [Pasco, WA
2011-02-15
A document creation and citation system designed to maintain a database of reference documents. The content of a selected document may be automatically scanned and indexed by the system. The selected documents may also be manually indexed by a user prior to the upload. The indexed documents may be uploaded and stored within a database for later use. The system allows a user to generate new documents by selecting content within the reference documents stored within the database and inserting the selected content into a new document. The system allows the user to customize and augment the content of the new document. The system also generates citations to the selected content retrieved from the reference documents. The citations may be inserted into the new document in the appropriate location and format, as directed by the user. The new document may be uploaded into the database and included with the other reference documents. The system also maintains the database of reference documents so that when changes are made to a reference document, the author of a document referencing the changed document will be alerted to make appropriate changes to his document. The system also allows visual comparison of documents so that the user may see differences in the text of the documents.
ERIC Educational Resources Information Center
Sievert, MaryEllen C.; Verbeck, Alison F.
1984-01-01
This overview of 47 online sources for physics information available in the United States--including sub-field databases, transdisciplinary databases, and multidisciplinary databases--notes content, print source, language, time coverage, and databank. Two discipline-specific databases (SPIN and PHYSICS BRIEFS) are also discussed. (EJS)
NASA Astrophysics Data System (ADS)
Antani, Sameer K.; Natarajan, Mukil; Long, Jonathan L.; Long, L. Rodney; Thoma, George R.
2005-04-01
The article describes the status of our ongoing R&D at the U.S. National Library of Medicine (NLM) towards the development of an advanced multimedia database biomedical information system that supports content-based image retrieval (CBIR). NLM maintains a collection of 17,000 digitized spinal X-rays along with text survey data from the Second National Health and Nutritional Examination Survey (NHANES II). These data serve as a rich data source for epidemiologists and researchers of osteoarthritis and musculoskeletal diseases. It is currently possible to access these through text keyword queries using our Web-based Medical Information Retrieval System (WebMIRS). CBIR methods developed specifically for biomedical images could offer direct visual searching of these images by means of an example image or user sketch. We are building a system which supports hybrid queries that have text and image-content components. R&D goals include developing algorithms for robust image segmentation for localizing and identifying relevant anatomy, labeling the segmented anatomy based on its pathology, developing suitable indexing and similarity matching methods for images and image features, and associating the survey text information for query and retrieval along with the image data. Some highlights of the system, developed in MATLAB and Java, are: use of a networked or local centralized database for text and image data; flexibility to incorporate new research work; a means to control access to system components under development; and use of XML for structured reporting. The article details the design, features, and algorithms in this third revision of this prototype system, CBIR3.
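The hybrid text-plus-image queries described above can be pictured as a two-stage filter-then-rank operation. The sketch below is a deliberately simplified illustration, not NLM's actual algorithms: the record layout, the keyword filter, and Euclidean ranking over precomputed shape features are all assumptions.

    # Sketch: filter records on survey text, then rank by feature similarity.
    import numpy as np

    def hybrid_query(records, keyword, query_features, top_k=5):
        # records: dicts with 'text' (survey data) and 'features'
        # (a precomputed vertebral shape descriptor as a numpy array).
        candidates = [r for r in records if keyword.lower() in r["text"].lower()]
        def distance(r):
            return np.linalg.norm(r["features"] - query_features)
        return sorted(candidates, key=distance)[:top_k]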
Legacy2Drupal: Conversion of an existing relational oceanographic database to a Drupal 7 CMS
NASA Astrophysics Data System (ADS)
Work, T. T.; Maffei, A. R.; Chandler, C. L.; Groman, R. C.
2011-12-01
Content Management Systems (CMSs) such as Drupal provide powerful features that can be of use to oceanographic (and other geo-science) data managers. However, in many instances, geo-science data management offices have already designed and implemented customized schemas for their metadata. The NSF-funded Biological and Chemical Oceanography Data Management Office (BCO-DMO) has ported an existing relational database containing oceanographic metadata, along with an existing interface coded in ColdFusion middleware, to a Drupal 7 Content Management System. This is an update on an effort described as a proof-of-concept in poster IN21B-1051, presented at AGU2009. The BCO-DMO project has translated all the existing database tables, input forms, website reports, and other features present in the existing system into Drupal CMS features. The replacement features are made possible by the use of Drupal content types, CCK node-reference fields, a custom theme, and a number of other supporting modules. This presentation describes the process used to migrate content in the original BCO-DMO metadata database to Drupal 7, some problems encountered during migration, and the modules used to migrate the content successfully. Strategic use of Drupal 7 CMS features that enable three separate but complementary interfaces to provide access to oceanographic research metadata will also be covered: 1) a Drupal 7-powered user front-end; 2) RESTful JSON web services (providing a MapServer interface to the metadata and data); and 3) a SPARQL interface to a semantic representation of the repository metadata (this feeding a new faceted search capability currently under development). The existing BCO-DMO ontology, developed in collaboration with Rensselaer Polytechnic Institute's Tetherless World Constellation, makes strategic use of pre-existing ontologies and will be used to drive semantically-enabled faceted search capabilities planned for the site. At this point, the use of semantic technologies included in the Drupal 7 core is anticipated. Using a public domain CMS as opposed to proprietary middleware, and taking advantage of the many features of Drupal 7 that are designed to support semantically-enabled interfaces, will help prepare the BCO-DMO and other science data repositories for interoperability between systems that serve ecosystem research data.
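As an illustration of the third interface listed above, the following sketch queries a SPARQL endpoint for dataset titles. The endpoint URL is a placeholder and the predicates are generic Dublin Core terms; BCO-DMO's actual ontology and service details may differ.

    # Sketch: query a (hypothetical) SPARQL endpoint for dataset titles.
    import requests

    ENDPOINT = "https://example.org/bco-dmo/sparql"  # placeholder URL
    query = """
    PREFIX dcterms: <http://purl.org/dc/terms/>
    SELECT ?dataset ?title WHERE {
      ?dataset dcterms:title ?title .
    } LIMIT 10
    """
    resp = requests.get(ENDPOINT, params={"query": query, "format": "json"})
    resp.raise_for_status()
    # Standard SPARQL JSON results format: results -> bindings.
    for row in resp.json()["results"]["bindings"]:
        print(row["dataset"]["value"], "-", row["title"]["value"])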
Content and Design Features of Academic Health Sciences Libraries' Home Pages.
McConnaughy, Rozalynd P; Wilson, Steven P
2018-01-01
The goal of this content analysis was to identify commonly used content and design features of academic health sciences library home pages. After developing a checklist, data were collected from 135 academic health sciences library home pages. The core components of these library home pages included a contact phone number, a contact email address, an Ask-a-Librarian feature, the physical address, a feedback/suggestions link, subject guides, a discovery tool or database-specific search box, multimedia, social media, a site search option, a responsive web design, and a copyright year or update date.
Menon, K Venugopal; Kumar, Dinesh; Thomas, Tessamma
2014-02-01
Study Design: Preliminary evaluation of a new tool. Objective: To ascertain whether the newly developed content-based image retrieval (CBIR) software can be used successfully to retrieve images of similar cases of adolescent idiopathic scoliosis (AIS) from a database to help plan treatment without adhering to a classification scheme. Methods: Sixty-two operated cases of AIS were entered into the newly developed CBIR database. Five new cases of different curve patterns were used as query images. The images were fed into the CBIR database, which retrieved similar images from the existing cases. These were analyzed by a senior surgeon for conformity to the query image. Results: Within the limits of variability set for the query system, all the resultant images conformed to the query image. One case had no similar match in the series. The other four retrieved several images that matched the query. No matching case was left out in the series. The postoperative images were then analyzed to check for surgical strategies. Broad guidelines for treatment could be derived from the results. More precise query settings, inclusion of bending films, and a larger database will enhance accurate retrieval and better decision making. Conclusion: The CBIR system is an effective tool for accurate documentation and retrieval of scoliosis images. Broad guidelines for surgical strategies can be made from the postoperative images of the existing cases without adhering to any classification scheme.
A Content Markup Language for Data Services
NASA Astrophysics Data System (ADS)
Noviello, C.; Acampa, P.; Mango Furnari, M.
Network content delivery and document sharing are possible using a variety of technologies, such as distributed databases, service-oriented applications, and so forth. The development of such systems is a complex job, because the document life cycle involves strong cooperation between domain experts and software developers. Furthermore, emerging software methodologies, such as service-oriented architecture and knowledge organization (e.g., the semantic web), have not really solved the problems faced in a real distributed and cooperating setting. In this chapter the authors' efforts to design and deploy a distributed and cooperating content management system are described. The main features of the system are a user-configurable document type definition and a management middleware layer, which allows CMS developers to orchestrate the composition of specialized software components around the structure of a document. The chapter also reports some of the experience gained in deploying the developed framework in a cultural heritage dissemination setting.
Design and Establishment of Quality Model of Fundamental Geographic Information Database
NASA Astrophysics Data System (ADS)
Ma, W.; Zhang, J.; Zhao, Y.; Zhang, P.; Dang, Y.; Zhao, T.
2018-04-01
In order to make the quality evaluation of Fundamental Geographic Information Databases (FGIDB) more comprehensive, objective and accurate, this paper studies and establishes a quality model of FGIDB, formed by the standardization of database construction and quality control, the conformity of data-set quality, and the functionality of the database management system. It also designs the overall principles, contents and methods of quality evaluation for FGIDB, providing a basis and reference for carrying out quality control and quality evaluation of FGIDB. The paper designs the quality elements, evaluation items and properties of the Fundamental Geographic Information Database step by step, based on the quality model framework. Organically connected, these quality elements and evaluation items constitute the quality model of the Fundamental Geographic Information Database. This model is the foundation for stipulating quality requirements and for quality evaluation of the Fundamental Geographic Information Database, and is of great significance for quality assurance in the design and development stage, requirement formulation in the testing and evaluation stage, and construction of a standard system for quality evaluation technology of the Fundamental Geographic Information Database.
NASA Astrophysics Data System (ADS)
Trumpy, Eugenio; Manzella, Adele
2017-02-01
The Italian National Geothermal Database (BDNG) is the largest collection of Italian geothermal data and was set up in the 1980s. It has since been updated both in terms of content and management tools: information on deep wells and thermal springs (with temperatures > 30 °C) is currently organized and stored in a PostgreSQL relational database management system, which guarantees high performance, data security and easy access through different client applications. The BDNG is the core of the Geothopica website, whose webGIS tool allows different types of users to access geothermal data, to visualize multiple types of datasets, and to perform integrated analyses. The webGIS tool has recently been improved with two specially designed, programmed and implemented visualization tools that display well lithology and underground temperature data. This paper describes the contents of the database and its software and data updates, as well as the webGIS tool, including the new tools for lithology and temperature visualization. The geoinformation organized in the database and accessible through Geothopica is of use not only for geothermal purposes, but also for any kind of georesource and CO2 storage project requiring the organization of, and access to, deep underground data. Geothopica also supports project developers, researchers, and decision makers in the assessment, management and sustainable deployment of georesources.
1993-01-01
H. Wegner for developing the tactical air and ground force databases and producing the campaign results. Thanks are also due to Group Captain Michael ... Jackson, RAF, for developing the evaluation criteria for NATO's tactical air force reductions during his stay at RAND.
From Chaos to Content: An Integrated Approach to Government Web Sites
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demuth, Nora H.; Knudson, Christa K.
2005-01-03
The web development team of the Environmental Technology Directorate (ETD) at the U.S. Department of Energy’s Pacific Northwest National Laboratory (PNNL) redesigned the ETD website as a database-driven system, powered by the newly designed ETD Common Information System (ETD-CIS). The ETD website was redesigned in response to an analysis that showed the previous ETD websites were inefficient, costly, and lacking in a consistent focus. Redesigned and newly created websites based on a new ETD template provide a consistent image, meet or exceed accessibility standards, and are linked through a common database. The protocols used in developing the ETD website support integration of further organizational sites and facilitate internal use by staff and training on ETD website development and maintenance. Other PNNL organizations have approached the ETD web development team with an interest in applying the methods established by the ETD system. The ETD system protocol could potentially be used by other DOE laboratories to improve their website efficiency and content focus. “The tools by which we share science information must be as extraordinary as the information itself.” – DOE Science Director Raymond Orbach
Technology and Its Use in Education: Present Roles and Future Prospects
ERIC Educational Resources Information Center
Courville, Keith
2011-01-01
(Purpose) This article describes two current trends in Educational Technology: distributed learning and electronic databases. (Findings) Topics addressed in this paper include: (1) distributed learning as a means of professional development; (2) distributed learning for content visualization; (3) usage of distributed learning for educational…
NOAA Photo Library -- Skip Theberge (NOAA Central Library): collection development, site content, image digitization, and database construction. Kristin Ward (NOAA Central Library): HTML page construction.
Managing Tradeoffs in the Electronic Age.
ERIC Educational Resources Information Center
Wagner, A. Ben
2003-01-01
Provides an overview of the development of electronic resources over the past three decades, discussing key features, disadvantages, and benefits of traditional online databases and CD-ROM and Web-based resources. Considers the decision to shift collections and resources toward purely digital formats, ownership of content, licensing, and user…
Tripal v1.1: a standards-based toolkit for construction of online genetic and genomic databases.
Sanderson, Lacey-Anne; Ficklin, Stephen P; Cheng, Chun-Huai; Jung, Sook; Feltus, Frank A; Bett, Kirstin E; Main, Dorrie
2013-01-01
Tripal is an open-source freely available toolkit for construction of online genomic and genetic databases. It aims to facilitate development of community-driven biological websites by integrating the GMOD Chado database schema with Drupal, a popular website creation and content management software. Tripal provides a suite of tools for interaction with a Chado database and display of content therein. The tools are designed to be generic to support the various ways in which data may be stored in Chado. Previous releases of Tripal have supported organisms, genomic libraries, biological stocks, stock collections and genomic features, their alignments and annotations. Also, Tripal and its extension modules provided loaders for commonly used file formats such as FASTA, GFF, OBO, GAF, BLAST XML, KEGG heir files and InterProScan XML. Default generic templates were provided for common views of biological data, which could be customized using an open Application Programming Interface to change the way data are displayed. Here, we report additional tools and functionality that are part of release v1.1 of Tripal. These include (i) a new bulk loader that allows a site curator to import data stored in a custom tab delimited format; (ii) full support of every Chado table for Drupal Views (a powerful tool allowing site developers to construct novel displays and search pages); (iii) new modules including 'Feature Map', 'Genetic', 'Publication', 'Project', 'Contact' and the 'Natural Diversity' modules. Tutorials, mailing lists, download and set-up instructions, extension modules and other documentation can be found at the Tripal website located at http://tripal.info. DATABASE URL: http://tripal.info/.
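To picture what the new bulk loader consumes, here is a minimal sketch of reading a custom tab-delimited file into records ready for loading into Chado. The column layout is hypothetical, since the real loader lets the site curator define the template.

    # Sketch: parse a curator-defined tab-delimited file into record dicts.
    import csv

    def read_bulk_file(path, column_names):
        with open(path, newline="") as fh:
            reader = csv.reader(fh, delimiter="\t")
            for row in reader:
                if not row or row[0].startswith("#"):
                    continue  # skip blank lines and comments
                yield dict(zip(column_names, row))

    # Hypothetical three-column template: feature name, type, organism.
    for record in read_bulk_file("features.tsv", ["name", "type", "organism"]):
        print(record)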
Tripal v1.1: a standards-based toolkit for construction of online genetic and genomic databases
Sanderson, Lacey-Anne; Ficklin, Stephen P.; Cheng, Chun-Huai; Jung, Sook; Feltus, Frank A.; Bett, Kirstin E.; Main, Dorrie
2013-01-01
Tripal is an open-source freely available toolkit for construction of online genomic and genetic databases. It aims to facilitate development of community-driven biological websites by integrating the GMOD Chado database schema with Drupal, a popular website creation and content management software. Tripal provides a suite of tools for interaction with a Chado database and display of content therein. The tools are designed to be generic to support the various ways in which data may be stored in Chado. Previous releases of Tripal have supported organisms, genomic libraries, biological stocks, stock collections and genomic features, their alignments and annotations. Also, Tripal and its extension modules provided loaders for commonly used file formats such as FASTA, GFF, OBO, GAF, BLAST XML, KEGG heir files and InterProScan XML. Default generic templates were provided for common views of biological data, which could be customized using an open Application Programming Interface to change the way data are displayed. Here, we report additional tools and functionality that are part of release v1.1 of Tripal. These include (i) a new bulk loader that allows a site curator to import data stored in a custom tab delimited format; (ii) full support of every Chado table for Drupal Views (a powerful tool allowing site developers to construct novel displays and search pages); (iii) new modules including ‘Feature Map’, ‘Genetic’, ‘Publication’, ‘Project’, ‘Contact’ and the ‘Natural Diversity’ modules. Tutorials, mailing lists, download and set-up instructions, extension modules and other documentation can be found at the Tripal website located at http://tripal.info. Database URL: http://tripal.info/ PMID:24163125
JPRS Report, Science & Technology, China: Energy
1989-06-26
certain areas such as modular HTGR technology. In nuclear power development we currently face both challenges and opportunities, both risks and... CONTENTS (26 June 1989), NATIONAL DEVELOPMENTS: No Easy Solution Seen...; ...Be Developed [XINHUA, 16 May 89]; National Oil Firm Sets 5-Year Goals [CEI Database, 9 May 89]; Zhongyuan Oil Field Is Among Fastest...
New model for distributed multimedia databases and its application to networking of museums
NASA Astrophysics Data System (ADS)
Kuroda, Kazuhide; Komatsu, Naohisa; Komiya, Kazumi; Ikeda, Hiroaki
1998-02-01
This paper proposes a new distributed multimedia database system in which databases storing MPEG-2 videos and/or super-high-definition images are connected together through B-ISDNs, and also describes an example of the networking of museums on the basis of the proposed database system. The proposed database system introduces the new concept of a 'retrieval manager', which functions as an intelligent controller so that the user can recognize a set of image databases as one logical database. A user terminal issues a content retrieval request to the retrieval manager located nearest to the user terminal on the network. The retrieved contents are then sent directly through the B-ISDNs to the user terminal from the server that stores the designated contents. In this case, the designated logical database dynamically generates the best combination of retrieval parameters, such as the data transfer path, on the basis of the current state of the system. The generated retrieval parameters are then applied to select the most suitable data transfer path on the network, so that the best combination of these parameters is adapted to the distributed multimedia database system.
Smith, Constance M; Finger, Jacqueline H; Kadin, James A; Richardson, Joel E; Ringwald, Martin
2014-10-01
Because molecular mechanisms of development are extraordinarily complex, the understanding of these processes requires the integration of pertinent research data. Using the Gene Expression Database for Mouse Development (GXD) as an example, we illustrate the progress made toward this goal, and discuss relevant issues that apply to developmental databases and developmental research in general. Since its first release in 1998, GXD has served the scientific community by integrating multiple types of expression data from publications and electronic submissions and by making these data freely and widely available. Focusing on endogenous gene expression in wild-type and mutant mice and covering data from RNA in situ hybridization, in situ reporter (knock-in), immunohistochemistry, reverse transcriptase-polymerase chain reaction, Northern blot, and Western blot experiments, the database has grown tremendously over the years in terms of data content and search utilities. Currently, GXD includes over 1.4 million annotated expression results and over 260,000 images. All these data and images are readily accessible to many types of database searches. Here we describe the data and search tools of GXD; explain how to use the database most effectively; discuss how we acquire, curate, and integrate developmental expression information; and describe how the research community can help in this process. Copyright © 2014 The Authors Developmental Dynamics published by Wiley Periodicals, Inc. on behalf of American Association of Anatomists.
Ng, Curtise K C; White, Peter; McKay, Janice C
2009-04-01
Increasingly, the use of web database portfolio systems is noted in medical and health education, and for continuing professional development (CPD). However, the functions of existing systems are not always aligned with the corresponding pedagogy and hence reflection is often lost. This paper presents the development of a tailored web database portfolio system with Picture Archiving and Communication System (PACS) connectivity, which is based on the portfolio pedagogy. Following a pre-determined portfolio framework, a system model with the components of web, database and mail servers, server side scripts, and a Query/Retrieve (Q/R) broker for conversion between Hypertext Transfer Protocol (HTTP) requests and Q/R service class of Digital Imaging and Communication in Medicine (DICOM) standard, is proposed. The system was piloted with seventy-seven volunteers. A tailored web database portfolio system (http://radep.hti.polyu.edu.hk) was developed. Technological arrangements for reinforcing portfolio pedagogy include popup windows (reminders) with guidelines and probing questions of 'collect', 'select' and 'reflect' on evidence of development/experience, limitation in the number of files (evidence) to be uploaded, the 'Evidence Insertion' functionality to link the individual uploaded artifacts with reflective writing, capability to accommodate diversity of contents and convenient interfaces for reviewing portfolios and communication. Evidence to date suggests the system supports users to build their portfolios with sound hypertext reflection under a facilitator's guidance, and with reviewers to monitor students' progress providing feedback and comments online in a programme-wide situation.
Rothwell, Joseph A; Perez-Jimenez, Jara; Neveu, Vanessa; Medina-Remón, Alexander; M'hiri, Nouha; García-Lobato, Paula; Manach, Claudine; Knox, Craig; Eisner, Roman; Wishart, David S; Scalbert, Augustin
2013-01-01
Polyphenols are a major class of bioactive phytochemicals whose consumption may play a role in the prevention of a number of chronic diseases such as cardiovascular diseases, type II diabetes and cancers. Phenol-Explorer, launched in 2009, is the only freely available web-based database on the content of polyphenols in food and their in vivo metabolism and pharmacokinetics. Here we report the third release of the database (Phenol-Explorer 3.0), which adds data on the effects of food processing on polyphenol contents in foods. Data on >100 foods, covering 161 polyphenols or groups of polyphenols before and after processing, were collected from 129 peer-reviewed publications and entered into new tables linked to the existing relational design. The effect of processing on polyphenol content is expressed in the form of retention factor coefficients, or the proportion of a given polyphenol retained after processing, adjusted for change in water content. The result is the first database on the effects of food processing on polyphenol content and, following the model initially defined for Phenol-Explorer, all data may be traced back to original sources. The new update will allow polyphenol scientists to more accurately estimate polyphenol exposure from dietary surveys.
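A worked example of the retention-factor idea may help. The adjustment for water content shown below (comparing contents on a dry-matter basis) is one plausible reading of the description above, and the numbers are invented for illustration.

    # Sketch: retention factor = proportion of a polyphenol retained after
    # processing, adjusted for the change in water content (dry-matter basis).
    raw_content = 50.0       # mg/100 g fresh weight, before processing
    cooked_content = 20.0    # mg/100 g, after processing
    raw_water = 0.80         # water fraction before processing
    cooked_water = 0.85      # water fraction after processing

    # Express both contents per unit dry matter, then take the ratio.
    rf = (cooked_content / (1 - cooked_water)) / (raw_content / (1 - raw_water))
    print(f"retention factor = {rf:.2f}")  # about 0.53 with these numbers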
Managing Contention and Timing Constraints in a Real-Time Database System
1995-01-01
In order to realize many of these goals, StarBase is constructed on top of RT-Mach, a real-time operating system developed at Carnegie Mellon...University [11]. StarBase differs from previous RT-DBMS work [1, 2, 3] in that (a) it relies on a real-time operating system which provides priority...CPU and resource scheduling provided by the underlying real-time operating system. Issues of data contention are dealt with by use of a priority
CD-ROM End-User Instruction: A Planning Model.
ERIC Educational Resources Information Center
Johnson, Mary E.; Rosen, Barbara S.
1990-01-01
Discusses methods and content of library instruction for CD-ROM searching in terms of the needs of end-users. Instructional methods explored include staff instruction, structured instruction, database documentation, tutorials and help screens, and floaters. Suggestions for effective instruction in transfer of skills, database content, database…
Record linkage for pharmacoepidemiological studies in cancer patients.
Herk-Sukel, Myrthe P P van; Lemmens, Valery E P P; Poll-Franse, Lonneke V van de; Herings, Ron M C; Coebergh, Jan Willem W
2012-01-01
An increasing need has developed for the post-approval surveillance of (new) anti-cancer drugs by means of pharmacoepidemiology and outcomes research in the area of oncology. To create an overview that makes researchers aware of the available database linkages in Northern America and Europe which facilitate pharmacoepidemiology and outcomes research in cancer patients. In addition to our own database, i.e. the Eindhoven Cancer Registry (ECR) linked to the PHARMO Record Linkage System, we considered database linkages between a population-based cancer registry and an administrative healthcare database that at least contains information on drug use and offers a longitudinal perspective on healthcare utilization. Eligible database linkages were limited to those that had been used in multiple published English-language articles included in PubMed. The HMO Cancer Research Network (CRN) in the US was excluded from this review, as an overview of the linked databases participating in the CRN is already provided elsewhere. Researchers who had worked with the data resources included in our review were contacted for additional information and verification of the data presented in the overview. The following database linkages were included: the Surveillance, Epidemiology, and End-Results-Medicare; cancer registry data linked to Medicaid; Canadian cancer registries linked to population-based drug databases; the Scottish cancer registry linked to the Tayside drug dispensing data; linked databases in the Nordic countries of Europe: Norway, Sweden, Finland and Denmark; and the ECR-PHARMO linkage in the Netherlands. Descriptive characteristics of the included database linkages comprise population size, generalizability of the population, year of first data availability, contents of the cancer registry, contents of the administrative healthcare database, the possibility to select a cancer-free control cohort, and linkage to other healthcare databases. The linked databases offer a longitudinal perspective, allowing for observations of health care utilization before, during, and after cancer diagnosis. They create new powerful data resources for the monitoring of post-approval drug utilization, as well as a framework to explore the (cost-)effectiveness of new, often expensive, anti-cancer drugs as used in everyday practice. Copyright © 2011 John Wiley & Sons, Ltd.
Wickham, James; Riitters, Kurt; Vogt, Peter; Costanza, Jennifer; Neale, Anne
2017-11-01
Landscape context is an important factor in restoration ecology, but the use of landscape context for site prioritization has not been as fully developed. We used morphological image processing to identify candidate ecological restoration areas based on their proximity to existing natural vegetation. We identified 1,102,720 candidate ecological restoration areas across the continental United States. Candidate ecological restoration areas were concentrated in the Great Plains and eastern United States. We populated the database of candidate ecological restoration areas with 17 attributes related to site content and context, including factors such as soil fertility and roads (site content), and number and area of potentially conjoined vegetated regions (site context) to facilitate its use for site prioritization. We demonstrate the utility of the database in the state of North Carolina, U.S.A. for a restoration objective related to restoration of water quality (mandated by the U.S. Clean Water Act), wetlands, and forest. The database will be made publicly available on the U.S. Environmental Protection Agency's EnviroAtlas website (http://enviroatlas.epa.gov) for stakeholders interested in ecological restoration.
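The morphological image processing step described above can be sketched with a simple binary dilation: non-vegetated cells within a small neighborhood of existing natural vegetation become candidates. The land-cover coding and neighborhood size below are toy assumptions, not the authors' actual parameters.

    # Sketch: flag non-vegetated cells adjacent to natural vegetation.
    import numpy as np
    from scipy import ndimage

    # Toy land-cover grid: 1 = natural vegetation, 0 = other land.
    cover = np.array([[1, 1, 0, 0],
                      [1, 0, 0, 0],
                      [0, 0, 0, 1],
                      [0, 0, 1, 1]])

    vegetation = cover == 1
    # Cells within a 3x3 neighborhood of vegetation, excluding vegetation itself.
    near_vegetation = ndimage.binary_dilation(vegetation, np.ones((3, 3), bool))
    candidates = near_vegetation & ~vegetation
    print(candidates.astype(int))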
Wickham, James; Riitters, Kurt; Vogt, Peter; Costanza, Jennifer; Neale, Anne
2018-01-01
Landscape context is an important factor in restoration ecology, but the use of landscape context for site prioritization has not been as fully developed. We used morphological image processing to identify candidate ecological restoration areas based on their proximity to existing natural vegetation. We identified 1,102,720 candidate ecological restoration areas across the continental United States. Candidate ecological restoration areas were concentrated in the Great Plains and eastern United States. We populated the database of candidate ecological restoration areas with 17 attributes related to site content and context, including factors such as soil fertility and roads (site content), and number and area of potentially conjoined vegetated regions (site context) to facilitate its use for site prioritization. We demonstrate the utility of the database in the state of North Carolina, U.S.A. for a restoration objective related to restoration of water quality (mandated by the U.S. Clean Water Act), wetlands, and forest. The database will be made publicly available on the U.S. Environmental Protection Agency's EnviroAtlas website (http://enviroatlas.epa.gov) for stakeholders interested in ecological restoration. PMID:29683130
ERIC Educational Resources Information Center
Benson, Phil, Ed.; Reinders, Hayo, Ed.
2011-01-01
This comprehensive exploration of theoretical and practical aspects of out-of-class teaching and learning, from a variety of perspectives and in various settings around the world, includes a theoretical overview of the field, 11 data-based case studies, and practical advice on materials development for independent learning. Contents of this book…
DOE Office of Scientific and Technical Information (OSTI.GOV)
None Available
To make the web work better for science, OSTI has developed state-of-the-art technologies and services including a deep web search capability. The deep web includes content in searchable databases available to web users but not accessible by popular search engines, such as Google. This video provides an introduction to the deep web search engine.
Design and Development of Web-Based Information Literacy Tutorials
ERIC Educational Resources Information Center
Su, Shiao-Feng; Kuo, Jane
2010-01-01
The current study conducts a thorough content analysis of recently built or up-to-date high-quality web-based information literacy tutorials contributed by academic libraries in a peer-reviewed database, PRIMO. This research analyzes the topics/skills PRIMO tutorials consider essential and the teaching strategies they consider effective. The…
Corporate Web Sites in Traditional Print Advertisements.
ERIC Educational Resources Information Center
Pardun, Carol J.; Lamb, Larry
1999-01-01
Describes the Web presence in print advertisements to determine how marketers are creating bridges between traditional advertising and the Internet. Content analysis showed Web addresses in print ads; categories of advertisers most likely to link print ads with Web sites; and whether the Web site attempts to develop a database of potential…
Advanced Image Search: A Strategy for Creating Presentation Boards
ERIC Educational Resources Information Center
Frey, Diane K.; Hines, Jean D.; Swinker, Mary E.
2008-01-01
Finding relevant digital images to create presentation boards requires advanced search skills. This article describes a course assignment involving a technique designed to develop students' literacy skills with respect to locating images of desired quality and content from Internet databases. The assignment was applied in a collegiate apparel…
The U.S. Environmental Protection Agency (EPA), through its ToxCast program, is developing predictive toxicity approaches that will use in vitro high-throughput screening (HTS), high-content screening (HCS) and toxicogenomic data to predict in vivo toxicity phenotypes. There are ...
ERIC Educational Resources Information Center
Nixon, Carol, Comp.
This book contains the presentations of the 16th annual Computers in Libraries Conference. Contents include: "Creating New Services & Opportunities through Web Databases"; "Influencing Change and Student Learning through Staff Development"; "Top Ten Navigation Tips"; "Library of the Year: Gwinnett County…
Xanthophyll (lutein, zeaxanthin) content in fruits, vegetables, and corn and egg products
USDA-ARS?s Scientific Manuscript database
Lutein and zeaxanthin are carotenoids that are selectively taken up into the macula of the eye where they are thought to protect against the development of age-related macular degeneration. Current dietary databases make it difficult to ascertain their individual roles in eye health because their co...
Planned and ongoing projects (pop) database: development and results.
Wild, Claudia; Erdös, Judit; Warmuth, Marisa; Hinterreiter, Gerda; Krämer, Peter; Chalon, Patrice
2014-11-01
The aim of this study was to present the development, structure and results of a database on planned and ongoing health technology assessment (HTA) projects (POP Database) in Europe. The POP Database (POP DB) was set up in an iterative process from a basic Excel sheet to a multifunctional electronic online database. The functionalities, such as the search terminology, the procedures to fill and update the database, the access rules to enter the database, as well as the maintenance roles, were defined in a multistep participatory feedback loop with EUnetHTA Partners. The POP Database has become an online database that hosts not only the titles and MeSH categorizations, but also some basic information on status and contact details about the listed projects of EUnetHTA Partners. Currently, it stores more than 1,200 planned, ongoing or recently published projects of forty-three EUnetHTA Partners from twenty-four countries. Because the POP Database aims to facilitate collaboration, it also provides a matching system to assist in identifying similar projects. Overall, more than 10 percent of the projects in the database are identical both in terms of pathology (indication or disease) and technology (drug, medical device, intervention). In addition, approximately 30 percent of the projects are similar, meaning that they have at least some overlap in content. Although the POP DB is successful concerning regular updates of most national HTA agencies within EUnetHTA, little is known about its actual effects on collaborations in Europe. Moreover, many non-nationally nominated HTA producing agencies neither have access to the POP DB nor can share their projects.
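The matching system's distinction between identical projects (same pathology and technology) and similar projects (some overlap in content) can be sketched as a simple set comparison. The data structures below are illustrative, not the actual POP DB schema.

    # Sketch: classify a pair of projects as identical, similar, or unrelated.
    def classify_pair(a, b):
        if (a["pathology"] == b["pathology"]
                and a["technology"] == b["technology"]):
            return "identical"
        if a["pathology"] & b["pathology"] or a["technology"] & b["technology"]:
            return "similar"
        return "unrelated"

    p1 = {"pathology": {"melanoma"}, "technology": {"PD-1 inhibitor"}}
    p2 = {"pathology": {"melanoma"}, "technology": {"PD-1 inhibitor", "surgery"}}
    print(classify_pair(p1, p2))  # similar (technology sets overlap but differ)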
NASA Astrophysics Data System (ADS)
Damm, Bodo; Klose, Martin
2014-05-01
This contribution presents an initiative to develop a national landslide database for the Federal Republic of Germany. It highlights the structure and contents of the landslide database, outlines its major data sources and the strategy of information retrieval, and exemplifies the database's potential in applied landslide impact research, including statistics of landslide damage, repair, and mitigation. Thanks to systematic regional data compilation, the landslide database offers a differentiated data pool of more than 5,000 data sets and over 13,000 single data files. It dates back to 1137 AD and covers landslide sites throughout Germany. In seven main data blocks, the landslide database stores, besides information on landslide types, dimensions, and processes, additional data on soil and bedrock properties, geomorphometry, and climatic or other major triggering events. A peculiarity of this landslide database is its storage of data sets on land use effects, damage impacts, hazard mitigation, and landslide costs. Compilation of landslide data is based on a two-tier strategy of data collection. The first step of information retrieval includes systematic web content mining and exploration of online archives of emergency agencies, fire and police departments, and news organizations. Web and RSS feeds, and soon also a focused web crawler, enable effective nationwide data collection for recent landslides. On the basis of this information, in-depth data mining is performed to deepen and diversify the data pool in key landslide areas. This makes it possible to gather detailed landslide information from, amongst others, agency records, geotechnical reports, climate statistics, maps, and satellite imagery. Landslide data are extracted from these information sources using a mix of methods, including statistical techniques, imagery analysis, and qualitative text interpretation. The landslide database is currently being migrated to a spatial database system running on PostgreSQL/PostGIS. This provides advanced functionality for spatial data analysis and forms the basis for future data provision and visualization using a WebGIS application. Analysis of landslide database contents shows that in most parts of Germany landslides primarily affect transportation infrastructures. Although with distinctly lower frequency, recent landslides are also recorded to cause serious damage to hydraulic facilities and waterways, supply and disposal infrastructures, sites of cultural heritage, as well as forest, agricultural, and mining areas. The main types of landslide damage are failure of cut and fill slopes, destruction of retaining walls, street lights, and forest stocks, burial of roads, backyards, and garden areas, as well as crack formation in foundations, sewer lines, and building walls. Landslide repair and mitigation at transportation infrastructures is dominated by simple solutions such as catch barriers or rock fall drapery. These solutions are often undersized and fail under stress. The use of costly slope stabilization or protection systems has proven to reduce these risks effectively over longer maintenance cycles. The right balancing of landslide mitigation is thus a crucial problem in managing landslide risks. The development and analysis of such landslide databases helps to support decision-makers in finding efficient solutions to minimize landslide risks for human beings, infrastructures, and financial assets.
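The first tier of the collection strategy, scanning web and RSS feeds for landslide reports, might look like the following sketch. The feed URLs and keywords are placeholders; the project's focused crawler is considerably more elaborate.

    # Sketch: scan news RSS feeds for landslide-related reports.
    import feedparser  # assumed available (pip install feedparser)

    FEEDS = ["https://example.org/regional-news.rss"]  # placeholder URL
    KEYWORDS = ("landslide", "rockfall", "slope failure", "debris flow")

    for url in FEEDS:
        feed = feedparser.parse(url)
        for entry in feed.entries:
            text = (entry.get("title", "") + " "
                    + entry.get("summary", "")).lower()
            if any(k in text for k in KEYWORDS):
                print(entry.get("published", "n.d."), "-", entry.get("link"))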
GlycomeDB – integration of open-access carbohydrate structure databases
Ranzinger, René; Herget, Stephan; Wetter, Thomas; von der Lieth, Claus-Wilhelm
2008-01-01
Background: Although carbohydrates are the third major class of biological macromolecules, after proteins and DNA, there is neither a comprehensive database for carbohydrate structures nor an established universal structure encoding scheme for computational purposes. Funding for further development of the Complex Carbohydrate Structure Database (CCSD or CarbBank) ceased in 1997, and since then several initiatives have developed independent databases with partially overlapping foci. For each database, different encoding schemes for residues and sequence topology were designed. Therefore, it is virtually impossible to obtain an overview of all deposited structures or to compare the contents of the various databases. Results: We have implemented procedures which download the structures contained in the seven major databases, e.g. GLYCOSCIENCES.de, the Consortium for Functional Glycomics (CFG), the Kyoto Encyclopedia of Genes and Genomes (KEGG) and the Bacterial Carbohydrate Structure Database (BCSDB). We have created a new database called GlycomeDB, containing all structures, their taxonomic annotations and references (IDs) for the original databases. More than 100000 datasets were imported, resulting in more than 33000 unique sequences now encoded in GlycomeDB using the universal format GlycoCT. Inconsistencies were found in all public databases, which were discussed and corrected in multiple feedback rounds with the responsible curators. Conclusion: GlycomeDB is a new, publicly available database for carbohydrate sequences with a unified, all-encompassing structure encoding format and NCBI taxonomic referencing. The database is updated weekly and can be downloaded free of charge. The JAVA application GlycoUpdateDB is also available for establishing and updating a local installation of GlycomeDB. With the advent of GlycomeDB, the distributed islands of knowledge in glycomics are now bridged to form a single resource. PMID:18803830
The Missing Link: Context Loss in Online Databases
ERIC Educational Resources Information Center
Mi, Jia; Nesta, Frederick
2005-01-01
Full-text databases do not allow for the complexity of the interaction of the human eye and brain with printed matter. As a result, both content and context may be lost. The authors propose additional indexing fields that would maintain the content and context of print in electronic formats.
The Western New York Health Resources Project: developing access to local health information.
Gray, S A; O'Shea, R; Petty, M E; Loonsk, J
1998-07-01
The Western New York Health Resources Project was created to fill a gap in online access to local health information resources describing the health of a defined geographic area. The project sought to identify and describe information scattered among many institutions, agencies, and individuals, and to create a database that would be widely accessible. The project proceeded in three phases with initial phases supported by grant funding. This paper describes the database development and selection of content, and concludes that a national online network of local health data representing the various geographic regions of the United States would contribute to the quality of health care in general.
The Western New York Health Resources Project: developing access to local health information.
Gray, S A; O'Shea, R; Petty, M E; Loonsk, J
1998-01-01
The Western New York Health Resources Project was created to fill a gap in online access to local health information resources describing the health of a defined geographic area. The project sought to identify and describe information scattered among many institutions, agencies, and individuals, and to create a database that would be widely accessible. The project proceeded in three phases with initial phases supported by grant funding. This paper describes the database development and selection of content, and concludes that a national online network of local health data representing the various geographic regions of the United States would contribute to the quality of health care in general. PMID:9681168
Europe PMC: a full-text literature database for the life sciences and platform for innovation
2015-01-01
This article describes recent developments of Europe PMC (http://europepmc.org), the leading database for life science literature. Formerly known as UKPMC, the service was rebranded in November 2012 as Europe PMC to reflect the scope of the funding agencies that support it. Several new developments have enriched Europe PMC considerably since then. Europe PMC now offers RESTful web services to access both articles and grants, powerful search tools such as citation-count sort order and data citation features, a service to add publications to your ORCID, a variety of export formats, and an External Links service that enables any related resource to be linked from Europe PMC content. PMID:25378340
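As an example of the RESTful article search mentioned above, the sketch below calls the public search endpoint and prints a few hits. The endpoint and parameters follow the service's public documentation as recalled here, though details should be verified against the current API.

    # Sketch: search Europe PMC via its RESTful web service.
    import requests

    url = "https://www.ebi.ac.uk/europepmc/webservices/rest/search"
    params = {"query": "glycomics database", "format": "json", "pageSize": 5}
    resp = requests.get(url, params=params)
    resp.raise_for_status()
    for hit in resp.json()["resultList"]["result"]:
        print(hit.get("id"), hit.get("title"))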
Nuclear Energy Infrastructure Database Fitness and Suitability Review
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heidrich, Brenden
In 2014, the Deputy Assistant Secretary for Science and Technology Innovation (NE-4) initiated the Nuclear Energy-Infrastructure Management Project by tasking the Nuclear Science User Facilities (NSUF) to create a searchable and interactive database of all pertinent NE supported or related infrastructure. This database will be used for analyses to establish needs, redundancies, efficiencies, distributions, etc. in order to best understand the utility of NE’s infrastructure and inform the content of the infrastructure calls. The NSUF developed the database by utilizing data and policy direction from a wide variety of reports from the Department of Energy, the National Research Council, the International Atomic Energy Agency and various other federal and civilian resources. The NEID contains data on 802 R&D instruments housed in 377 facilities at 84 institutions in the US and abroad. A Database Review Panel (DRP) was formed to review and provide advice on the development, implementation and utilization of the NEID. The panel is comprised of five members with expertise in nuclear energy-associated research. It was intended that they represent the major constituencies associated with nuclear energy research: academia, industry, research reactor, national laboratory, and Department of Energy program management. The Nuclear Energy Infrastructure Database Review Panel concludes that the NSUF has succeeded in creating a capability and infrastructure database that identifies and documents the major nuclear energy research and development capabilities across the DOE complex. The effort to maintain and expand the database will be ongoing. Detailed information on many facilities must be gathered from associated institutions and added to complete the database. The data must be validated and kept current to capture facility and instrumentation status as well as to cover new acquisitions and retirements.
Basner, Jodi E.; Theisz, Katrina I.; Jensen, Unni S.; Jones, C. David; Ponomarev, Ilya; Sulima, Pawel; Jo, Karen; Eljanne, Mariam; Espey, Michael G.; Franca-Koh, Jonathan; Hanlon, Sean E.; Kuhn, Nastaran Z.; Nagahara, Larry A.; Schnell, Joshua D.; Moore, Nicole M.
2013-01-01
Development of effective quantitative indicators and methodologies to assess the outcomes of cross-disciplinary collaborative initiatives has the potential to improve scientific program management and scientific output. This article highlights an example of a prospective evaluation that has been developed to monitor and improve progress of the National Cancer Institute Physical Sciences—Oncology Centers (PS-OC) program. Study data, including collaboration information, was captured through progress reports and compiled using the web-based analytic database: Interdisciplinary Team Reporting, Analysis, and Query Resource. Analysis of collaborations was further supported by data from the Thomson Reuters Web of Science database, MEDLINE database, and a web-based survey. Integration of novel and standard data sources was augmented by the development of automated methods to mine investigator pre-award publications, assign investigator disciplines, and distinguish cross-disciplinary publication content. The results highlight increases in cross-disciplinary authorship collaborations from pre- to post-award years among the primary investigators and confirm that a majority of cross-disciplinary collaborations have resulted in publications with cross-disciplinary content that rank in the top third of their field. With these evaluation data, PS-OC Program officials have provided ongoing feedback to participating investigators to improve center productivity and thereby facilitate a more successful initiative. Future analysis will continue to expand these methods and metrics to adapt to new advances in research evaluation and changes in the program. PMID:24808632
Evaluation of contents-based image retrieval methods for a database of logos on drug tablets
NASA Astrophysics Data System (ADS)
Geradts, Zeno J.; Hardy, Huub; Poortman, Anneke; Bijhold, Jurrien
2001-02-01
In this research an evaluation has been made of the different ways of contents based image retrieval of logos of drug tablets. On a database of 432 illicitly produced tablets (mostly containing MDMA), we have compared different retrieval methods. Two of these methods were available from commercial packages, QBIC and Imatch, where the implementation of the contents based image retrieval methods are not exactly known. We compared the results for this database with the MPEG-7 shape comparison methods, which are the contour-shape, bounding box and region-based shape methods. In addition, we have tested the log polar method that is available from our own research.
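The log polar method evaluated above rests on resampling an image on a log-polar grid about its center, so that changes of scale and rotation become translations of the resampled array. The following is a toy re-implementation for grayscale images, not the authors' code.

    # Sketch: log-polar resampling and a normalized-correlation similarity.
    import numpy as np

    def log_polar(image, n_r=64, n_theta=64):
        h, w = image.shape
        cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
        max_r = np.hypot(cy, cx)
        rs = np.exp(np.linspace(0, np.log(max_r), n_r))   # log-spaced radii
        thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
        out = np.zeros((n_r, n_theta), dtype=float)
        for i, r in enumerate(rs):
            ys = np.clip((cy + r * np.sin(thetas)).astype(int), 0, h - 1)
            xs = np.clip((cx + r * np.cos(thetas)).astype(int), 0, w - 1)
            out[i] = image[ys, xs]  # nearest-neighbor sampling
        return out

    def similarity(a, b):
        fa, fb = log_polar(a).ravel(), log_polar(b).ravel()
        fa, fb = fa - fa.mean(), fb - fb.mean()
        return float(fa @ fb / (np.linalg.norm(fa) * np.linalg.norm(fb) + 1e-12))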
Nuclear Energy Infrastructure Database Description and User’s Manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heidrich, Brenden
In 2014, the Deputy Assistant Secretary for Science and Technology Innovation initiated the Nuclear Energy (NE)–Infrastructure Management Project by tasking the Nuclear Science User Facilities, formerly the Advanced Test Reactor National Scientific User Facility, to create a searchable and interactive database of all pertinent NE-supported and -related infrastructure. This database, known as the Nuclear Energy Infrastructure Database (NEID), is used for analyses to establish needs, redundancies, efficiencies, distributions, etc., to best understand the utility of NE’s infrastructure and inform the content of infrastructure calls. The Nuclear Science User Facilities developed the database by utilizing data and policy direction from a variety of reports from the U.S. Department of Energy, the National Research Council, the International Atomic Energy Agency, and various other federal and civilian resources. The NEID currently contains data on 802 research and development instruments housed in 377 facilities at 84 institutions in the United States and abroad. The effort to maintain and expand the database is ongoing. Detailed information on many facilities must be gathered from associated institutions and added to complete the database. The data must be validated and kept current to capture facility and instrumentation status as well as to cover new acquisitions and retirements. This document provides a short tutorial on the navigation of the NEID web portal at NSUF-Infrastructure.INL.gov.
NASA STI Database, Aerospace Database and ARIN coverage of 'space law'
NASA Technical Reports Server (NTRS)
Buchan, Ronald L.
1992-01-01
The space-law coverage provided by the NASA STI Database, the Aerospace Database, and ARIN is briefly described. Particular attention is given to the space law content of the two databases and of ARIN, the NASA Thesaurus space law terminology, space law publication forms, and the availability of the space law literature.
ERMes: Open Source Simplicity for Your E-Resource Management
ERIC Educational Resources Information Center
Doering, William; Chilton, Galadriel
2009-01-01
ERMes, the latest version of an electronic resource management system (ERM), is a relational database; content in different tables connects to, and works with, content in other tables. ERMes requires Access 2007 (Windows) or Access 2008 (Mac) to operate, as the database utilizes functionality not available in previous versions of Microsoft Access. The…
CliniWeb: managing clinical information on the World Wide Web.
Hersh, W R; Brown, K E; Donohoe, L C; Campbell, E M; Horacek, A E
1996-01-01
The World Wide Web is a powerful new way to deliver on-line clinical information, but several problems limit its value to health care professionals: content is highly distributed and difficult to find, clinical information is not separated from non-clinical information, and the current Web technology is unable to support some advanced retrieval capabilities. A system called CliniWeb has been developed to address these problems. CliniWeb is an index to clinical information on the World Wide Web, providing a browsing and searching interface to clinical content at the level of the health care student or provider. Its database contains a list of clinical information resources on the Web that are indexed by terms from the Medical Subject Headings disease tree and retrieved with the assistance of SAPHIRE. Limitations of the processes used to build the database are discussed, together with directions for future research.
Shuttle-Data-Tape XML Translator
NASA Technical Reports Server (NTRS)
Barry, Matthew R.; Osborne, Richard N.
2005-01-01
JSDTImport is a computer program for translating native Shuttle Data Tape (SDT) files from American Standard Code for Information Interchange (ASCII) format into databases in other formats. JSDTImport solves the problem of organizing the SDT content, affording flexibility to enable users to choose how to store the information in a database to better support client and server applications. JSDTImport can be dynamically configured by use of a simple Extensible Markup Language (XML) file. JSDTImport uses this XML file to define how each record and field will be parsed, its layout and definition, and how the resulting database will be structured. JSDTImport also includes a client application programming interface (API) layer that provides abstraction for the data-querying process. The API enables a user to specify the search criteria to apply in gathering all the data relevant to a query. The API can be used to organize the SDT content and translate it into a native XML database. The XML format is structured into efficient sections, enabling excellent query performance by use of the XPath query language. Optionally, the content can be translated into a Structured Query Language (SQL) database for fast, reliable SQL queries on standard database server computers.
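The configuration-driven parsing described above can be illustrated compactly. Below is a minimal sketch in Python (JSDTImport itself is not Python) of reading fixed-width records according to a field layout declared in XML and loading them into a database; the layout format, table name, field names, and sample record are hypothetical, not the actual JSDTImport schema.

```python
import sqlite3
import xml.etree.ElementTree as ET

# Hypothetical layout definition; the real JSDTImport XML schema is richer.
CONFIG = """
<layout table="sdt_records">
  <field name="measurement_id" start="0" length="8"/>
  <field name="description" start="8" length="24"/>
  <field name="units" start="32" length="8"/>
</layout>
"""

layout = ET.fromstring(CONFIG)
fields = [{"name": f.get("name"),
           "start": int(f.get("start")),
           "length": int(f.get("length"))}
          for f in layout.findall("field")]

def parse_line(line, fields):
    # Slice each fixed-width field out of one raw record.
    return tuple(line[f["start"]:f["start"] + f["length"]].strip()
                 for f in fields)

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE %s (%s)" %
            (layout.get("table"), ", ".join(f["name"] for f in fields)))

sample = "V74X2301Main engine chamber px  psia    "
con.execute("INSERT INTO sdt_records VALUES (?, ?, ?)",
            parse_line(sample, fields))
print(con.execute("SELECT * FROM sdt_records").fetchall())
```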
Team X Spacecraft Instrument Database Consolidation
NASA Technical Reports Server (NTRS)
Wallenstein, Kelly A.
2005-01-01
In the past decade, many changes have been made to Team X's process of designing each spacecraft, with the purpose of making the overall procedure more efficient over time. One such improvement is the use of information databases from previous missions, designs, and research. By referring to these databases, members of the design team can locate relevant instrument data and significantly reduce the total time they spend on each design. The files in these databases were stored in several different formats with various levels of accuracy. During the past two months, efforts have been made to combine and organize these files. The main focus was in the Instruments department, where spacecraft subsystems are designed based on mission measurement requirements. A common database was developed for all instrument parameters using Microsoft Excel to minimize the time and confusion experienced when searching through files stored in several different formats and locations. Organizing this collection of information has made the files within it more easily searchable. Additionally, the new Excel database offers the option of importing its contents into a more efficient database management system in the future. This potential for expansion enables the database to grow and acquire more search features as needed.
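As a sketch of the import path mentioned at the end of the abstract, the consolidated spreadsheet could be loaded into a relational store in a few lines; the file name and column names below are assumptions for illustration, not the actual Team X schema.

```python
import sqlite3
import pandas as pd

# Load the consolidated instrument spreadsheet (hypothetical file/columns).
df = pd.read_excel("instrument_database.xlsx")  # columns assumed: name, mass_kg, ...
con = sqlite3.connect("instruments.db")
df.to_sql("instruments", con, if_exists="replace", index=False)

# The payoff: parameter searches become one-line queries.
rows = con.execute(
    "SELECT name, mass_kg FROM instruments WHERE mass_kg <= ?", (15.0,)
).fetchall()
print(rows)
```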
Foot and Ankle Fellowship Websites: An Assessment of Accessibility and Quality.
Hinds, Richard M; Danna, Natalie R; Capo, John T; Mroczek, Kenneth J
2017-08-01
The Internet has been reported to be the first informational resource for many fellowship applicants. The objective of this study was to assess the accessibility of orthopaedic foot and ankle fellowship websites and to evaluate the quality of information provided via program websites. The American Orthopaedic Foot and Ankle Society (AOFAS) and the Fellowship and Residency Electronic Interactive Database (FREIDA) fellowship databases were accessed to generate a comprehensive list of orthopaedic foot and ankle fellowship programs. The databases were reviewed for links to fellowship program websites and compared with program websites accessed from a Google search. Accessible fellowship websites were then analyzed for the quality of recruitment and educational content pertinent to fellowship applicants. Forty-seven orthopaedic foot and ankle fellowship programs were identified. The AOFAS database featured direct links to 7 (15%) fellowship websites, with the independent Google search yielding direct links to 29 (62%) websites. No direct website links were provided in the FREIDA database. Thirty-six accessible websites were analyzed for content. Program websites featured a mean of 44% (range = 5% to 75%) of the total assessed content. The most commonly presented recruitment and educational content was a program description (94%) and description of fellow operative experience (83%), respectively. There is substantial variability in the accessibility and quality of orthopaedic foot and ankle fellowship websites. Recognition of deficits in accessibility and content quality may assist foot and ankle fellowships in improving program information online. Level IV.
The SeaDataNet data products: regional temperature and salinity historical data collections
NASA Astrophysics Data System (ADS)
Simoncelli, Simona; Coatanoan, Christine; Bäck, Orjan; Sagen, Helge; Scoy, Serge; Myroshnychenko, Volodymyr; Schaap, Dick; Schlitzer, Reiner; Iona, Sissy; Fichaut, Michele
2016-04-01
Temperature and Salinity (TS) historical data collections covering the time period 1900-2013 were created for each European marginal sea (Arctic Sea, Baltic Sea, Black Sea, North Sea, North Atlantic Ocean and Mediterranean Sea) within the framework of the SeaDataNet2 (SDN) EU project, and they are now available as ODV collections through the SeaDataNet web catalog at http://sextant.ifremer.fr/en/web/seadatanet/. Two versions have been published, representing snapshots of the SDN database content at two different times: V1.1 (January 2014) and V2 (March 2015). A Quality Control Strategy (QCS) has been developed and continuously refined in order to improve the quality of the SDN database content and to create the best product derived from SDN data. The QCS was originally implemented in collaboration with the MyOcean2 and MyOcean Follow On projects in order to develop a true synergy at the regional level to serve the operational oceanography and climate change communities. The QCS involved the Regional Coordinators, responsible for the scientific assessment, the National Oceanographic Data Centers (NODC), and the data providers, who, on the basis of the data quality assessment outcome, checked and where necessary corrected anomalies in the original data. The QCS consists of four main phases: 1) data harvesting from the central CDI; 2) file and parameter aggregation; 3) quality check analysis at regional level; 4) analysis and correction of data anomalies. The approach is iterative to facilitate the upgrade of the SDN database content, and it also allows the versioning of data products with the release of new regional data collections at the end of each QCS loop. The SDN data collections and the QCS will be presented and the results summarized.
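As an illustration of the kind of check run in phase 3 of the QCS, the sketch below flags temperature and salinity values outside broad plausibility ranges; the ranges and record layout are illustrative assumptions, not the SDN thresholds.

```python
# Minimal plausibility check over one cast; ranges are illustrative only.
PLAUSIBLE = {"TEMP": (-2.5, 40.0),   # temperature, deg C
             "PSAL": (0.0, 42.0)}    # practical salinity

def flag_profile(profile):
    """Return (parameter, depth, value) triples outside plausible ranges."""
    anomalies = []
    for param, samples in profile.items():
        lo, hi = PLAUSIBLE[param]
        for depth, value in samples:
            if not lo <= value <= hi:
                anomalies.append((param, depth, value))
    return anomalies

cast = {"TEMP": [(0, 14.2), (50, 11.8), (100, 99.9)],   # 99.9: bad value
        "PSAL": [(0, 38.1), (50, 38.4), (100, 38.6)]}
print(flag_profile(cast))   # -> [('TEMP', 100, 99.9)]
```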
Medical applications: a database and characterization of apps in Apple iOS and Android platforms.
Seabrook, Heather J; Stromer, Julie N; Shevkenek, Cole; Bharwani, Aleem; de Grood, Jill; Ghali, William A
2014-08-27
Medical applications (apps) for smart phones and tablet computers are growing in number and are commonly used in healthcare. In this context, there is a need for a diverse community of app users, medical researchers, and app developers to better understand the app landscape. In mid-2012, we undertook an environmental scan and classification of the medical app landscape in the two dominant platforms by searching the medical category of the Apple iTunes and Google Play app download sites. We identified target audiences, functions, costs and content themes using app descriptions and captured these data in a database. We only included apps released or updated between October 1, 2011 and May 31, 2012, with a primary "medical" app store categorization, in English, that contained health or medical content. Our sample of Android apps was limited to the most popular apps in the medical category. Our final sample of Apple iOS (n = 4561) and Android (n = 293) apps illustrates a diverse medical app landscape. The proportion of Apple iOS apps for the public (35%) and for physicians (36%) is similar. Few Apple iOS apps specifically target nurses (3%). Within the Android apps, those targeting the public dominated in our sample (51%). The distribution of app functions is similar in both platforms, with reference being the most common function. Most app functions and content themes vary considerably by target audience. Social media apps are more common for patients and the public, while conference apps target physicians. We characterized existing medical apps and illustrated their diversity in terms of target audience, main functions, cost and healthcare topic. The resulting app database is a resource for app users, app developers and health informatics researchers.
None Available
2018-02-06
To make the web work better for science, OSTI has developed state-of-the-art technologies and services including a deep web search capability. The deep web includes content in searchable databases available to web users but not accessible by popular search engines, such as Google. This video provides an introduction to the deep web search engine.
TOC/DOC: "It Has Changed the Way I Do Science".
ERIC Educational Resources Information Center
Douglas, Kimberly; Roth, Dana L.
1997-01-01
Describes a user-based automated service developed at the California Institute of Technology that combines access to journal article databases with an in-house document delivery system. TOC/DOC (Tables of Contents/Document Delivery) has undergone a conceptual change from a catalog of locally-held journal articles to a broader, more retrospective…
The Case for Creating a Scholars Portal to the Web: A White Paper.
ERIC Educational Resources Information Center
Campbell, Jerry D.
2001-01-01
Considers the need for reliable, scholarly access to the Web and suggests that the Association for Research Libraries, in partnership with OCLC and the Library of Congress, develop a so-called scholar's portal. Topics include quality content; enhanced library services; and gateway functions, including access to commercial databases and focused…
Geoinformatics paves the way for a zoo information system
NASA Astrophysics Data System (ADS)
Michel, Ulrich
2008-10-01
The use of modern electronic media offers new ways of (environmental) knowledge transfer. All kinds of information can be made quickly available and queryable, and can be processed individually. The Institute for Geoinformatics and Remote Sensing (IGF), in collaboration with the Osnabrueck Zoo, is developing a zoo information system, especially for new media (e.g. mobile devices), which provides information about the animals living there, their natural habitats, and their endangerment status. In this way, multimedia information is offered to zoo visitors. The implementation of the 2D/3D components is realized with modern database and MapServer technologies. Among other technologies, the VRML (Virtual Reality Modeling Language) standard is used for the 3D visualization so that it can be viewed in every conventional web browser. A mobile information system for Pocket PCs, smartphones, and Ultra Mobile PCs (UMPC) is also being developed. All contents, including the coordinates, are stored in a PostgreSQL database. Data input, processing, and other administrative operations are executed through a content management system (CMS).
Integral nuclear data validation using experimental spent nuclear fuel compositions
Gauld, Ian C.; Williams, Mark L.; Michel-Sendis, Franco; ...
2017-07-19
Measurements of the isotopic contents of spent nuclear fuel provide experimental data that are a prerequisite for validating computer codes and nuclear data for many spent fuel applications. Under the auspices of the Organisation for Economic Co-operation and Development (OECD) Nuclear Energy Agency (NEA) and guidance of the Expert Group on Assay Data of Spent Nuclear Fuel of the NEA Working Party on Nuclear Criticality Safety, a new database of expanded spent fuel isotopic compositions has been compiled. The database, Spent Fuel Compositions (SFCOMPO) 2.0, includes measured data for more than 750 fuel samples acquired from 44 different reactors and representing eight different reactor technologies. Measurements for more than 90 isotopes are included. This new database provides data essential for establishing the reliability of code systems for inventory predictions, but it also has broader potential application to nuclear data evaluation. Furthermore, the database, together with adjoint-based sensitivity and uncertainty tools for transmutation systems developed to quantify the importance of nuclear data on nuclide concentrations, is described.
Yue, Zongliang; Zheng, Qi; Neylon, Michael T; Yoo, Minjae; Shin, Jimin; Zhao, Zhiying; Tan, Aik Choon
2018-01-01
Abstract Integrative Gene-set, Network and Pathway Analysis (GNPA) is a powerful data analysis approach developed to help interpret high-throughput omics data. In PAGER 1.0, we demonstrated that researchers can gain unbiased and reproducible biological insights with the introduction of PAGs (Pathways, Annotated-lists and Gene-signatures) as the basic data representation elements. In PAGER 2.0, we improve the utility of integrative GNPA by significantly expanding the coverage of PAGs and PAG-to-PAG relationships in the database, defining a new metric to quantify PAG data qualities, and developing new software features to simplify online integrative GNPA. Specifically, we included 84 282 PAGs spanning 24 different data sources that cover human diseases, published gene-expression signatures, drug–gene, miRNA–gene interactions, pathways and tissue-specific gene expressions. We introduced a new normalized Cohesion Coefficient (nCoCo) score to assess the biological relevance of genes inside a PAG, and RP-score to rank genes and assign gene-specific weights inside a PAG. The companion web interface contains numerous features to help users query and navigate the database content. The database content can be freely downloaded and is compatible with third-party Gene Set Enrichment Analysis tools. We expect PAGER 2.0 to become a major resource in integrative GNPA. PAGER 2.0 is available at http://discovery.informatics.uab.edu/PAGER/. PMID:29126216
MGIS: managing banana (Musa spp.) genetic resources information and high-throughput genotyping data
Guignon, V.; Sempere, G.; Sardos, J.; Hueber, Y.; Duvergey, H.; Andrieu, A.; Chase, R.; Jenny, C.; Hazekamp, T.; Irish, B.; Jelali, K.; Adeka, J.; Ayala-Silva, T.; Chao, C.P.; Daniells, J.; Dowiya, B.; Effa effa, B.; Gueco, L.; Herradura, L.; Ibobondji, L.; Kempenaers, E.; Kilangi, J.; Muhangi, S.; Ngo Xuan, P.; Paofa, J.; Pavis, C.; Thiemele, D.; Tossou, C.; Sandoval, J.; Sutanto, A.; Vangu Paka, G.; Yi, G.; Van den houwe, I.; Roux, N.
2017-01-01
Abstract Unraveling the genetic diversity held in genebanks on a large scale is underway, due to advances in Next-generation sequence (NGS) based technologies that produce high-density genetic markers for a large number of samples at low cost. Genebank users should be in a position to identify and select germplasm from the global genepool based on a combination of passport, genotypic and phenotypic data. To facilitate this, a new generation of information systems is being designed to efficiently handle data and link it with other external resources such as genome or breeding databases. The Musa Germplasm Information System (MGIS), the database for global ex situ-held banana genetic resources, has been developed to address those needs in a user-friendly way. In developing MGIS, we selected a generic database schema (Chado), the robust content management system Drupal for the user interface, and Tripal, a set of Drupal modules which links the Chado schema to Drupal. MGIS allows germplasm collection examination, accession browsing, advanced search functions, and germplasm orders. Additionally, we developed unique graphical interfaces to compare accessions and to explore them based on their taxonomic information. Accession-based data has been enriched with publications, genotyping studies and associated genotyping datasets reporting on germplasm use. Finally, an interoperability layer has been implemented to facilitate the link with complementary databases like the Banana Genome Hub and the MusaBase breeding database. Database URL: https://www.crop-diversity.org/mgis/ PMID:29220435
Wollbrett, Julien; Larmande, Pierre; de Lamotte, Frédéric; Ruiz, Manuel
2013-04-15
In recent years, a large amount of "-omics" data have been produced. However, these data are stored in many different species-specific databases that are managed by different institutes and laboratories. Biologists often need to find and assemble data from disparate sources to perform certain analyses. Searching for these data and assembling them is a time-consuming task. The Semantic Web helps to facilitate interoperability across databases. A common approach involves the development of wrapper systems that map a relational database schema onto existing domain ontologies. However, few attempts have been made to automate the creation of such wrappers. We developed a framework, named BioSemantic, for the creation of Semantic Web Services that are applicable to relational biological databases. This framework makes use of both Semantic Web and Web Services technologies and can be divided into two main parts: (i) the generation and semi-automatic annotation of an RDF view; and (ii) the automatic generation of SPARQL queries and their integration into Semantic Web Services backbones. We have used our framework to integrate genomic data from different plant databases. BioSemantic is a framework that was designed to speed integration of relational databases. We present how it can be used to speed the development of Semantic Web Services for existing relational biological databases. Currently, it creates and annotates RDF views that enable the automatic generation of SPARQL queries. Web Services are also created and deployed automatically, and the semantic annotations of our Web Services are added automatically using SAWSDL attributes. BioSemantic is downloadable at http://southgreen.cirad.fr/?q=content/Biosemantic.
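Once a BioSemantic-generated service exposes a relational database through an RDF view, clients can interrogate it with the generated SPARQL. The sketch below shows the general shape of such a query from Python; the endpoint URL and vocabulary are hypothetical placeholders, not part of BioSemantic.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Hypothetical endpoint and ontology; substitute a real deployment.
sparql = SPARQLWrapper("http://example.org/plant-genomics/sparql")
sparql.setQuery("""
    PREFIX ex: <http://example.org/ontology#>
    SELECT ?gene ?name WHERE {
        ?gene a ex:Gene ;
              ex:geneName ?name .
    } LIMIT 10
""")
sparql.setReturnFormat(JSON)
results = sparql.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["gene"]["value"], binding["name"]["value"])
```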
A pilot GIS database of active faults of Mt. Etna (Sicily): A tool for integrated hazard evaluation
NASA Astrophysics Data System (ADS)
Barreca, Giovanni; Bonforte, Alessandro; Neri, Marco
2013-02-01
A pilot GIS-based system has been implemented for the assessment and analysis of hazard related to active faults affecting the eastern and southern flanks of Mt. Etna. The system structure was developed in the ArcGis® environment and consists of different thematic datasets that include spatially referenced arc features and an associated database. Arc-type features, georeferenced into the WGS84 Ellipsoid UTM zone 33 Projection, represent the five main fault systems that develop in the analysed region. The backbone of the GIS-based system is constituted by the large amount of information which was collected from the literature and then stored and properly geocoded in a digital database. This consists of thirty-five alpha-numeric fields which include all fault parameters available from the literature, such as location, kinematics, landform, slip rate, etc. Although the system has been implemented according to the most common procedures used by GIS developers, the architecture and content of the database represent a pilot backbone for digital storing of fault parameters, providing a powerful tool in modelling hazard related to the active tectonics of Mt. Etna. The database collects, organises and shares all currently available scientific information about the active faults of the volcano. Furthermore, thanks to the strong effort spent on defining the fields of the database, the structure proposed in this paper is open to the collection of further data coming from future improvements in the knowledge of the fault systems. By layering additional user-specific geographic information and managing the proposed database (topological querying), a great diversity of hazard and vulnerability maps can be produced by the user. This is a proposal of a backbone for a comprehensive geographical database of fault systems, universally applicable to other sites.
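The attribute queries described above reduce to simple relational selections once the thirty-five fields are stored digitally. Below is a minimal sketch with an in-memory table; the field names and rows are illustrative assumptions, not the actual database content.

```python
import sqlite3

# Tiny stand-in for the fault attribute table; fields echo the parameters
# listed above (kinematics, slip rate, ...). Rows are invented.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE faults
               (name TEXT, fault_system TEXT, kinematics TEXT,
                slip_rate_mm_yr REAL)""")
con.executemany("INSERT INTO faults VALUES (?, ?, ?, ?)", [
    ("Fault A", "System 1", "normal", 2.1),
    ("Fault B", "System 2", "right-lateral", 0.8),
])

# e.g. select fast-slipping normal faults as input to a hazard layer
for row in con.execute("SELECT name, slip_rate_mm_yr FROM faults "
                       "WHERE kinematics = 'normal' "
                       "AND slip_rate_mm_yr > 1.0"):
    print(row)
```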
Gluten content of medications.
Cruz, Joseph E; Cocchio, Craig; Lai, Pak Tsun; Hermes-DeSantis, Evelyn
2015-01-01
The establishment of a database for the identification of the presence of gluten in excipients of prescription medications is described. While resources are available to ascertain the gluten content of a given medication, these resources are incomplete and often do not contain a source and date of contact. The drug information service (DIS) at Robert Wood Johnson University Hospital (RWJUH) determined that directly contacting the manufacturer of a product is the best method to determine the gluten content of medications. The DIS sought to establish a resource for use within the institution and create directions for obtaining this information from manufacturers to ensure uniformity of the data collected. To determine the gluten content of a medication, the DIS analyzed the manufacturer's package insert to identify any statement indicating that the product contained gluten or inactive ingredients from known sources of gluten. If there was any question about the source of an inactive ingredient or if no information about gluten content appeared in the package insert, the manufacturer of the particular formulation of the queried medication was contacted to provide clarification. Manufacturers' responses were collected, and medications were categorized as "gluten free," "contains gluten," or "possibly contains gluten." To date, the DIS at RWJUH has received queries about 84 medications and has cataloged their gluten content. The DIS at RWJUH developed a database that categorizes the gluten status of medications, allowing clinicians to easily identify drugs that are safe for patients with celiac disease. Copyright © 2015 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
Integrating the IA2 Astronomical Archive in the VO: The VO-Dance Engine
NASA Astrophysics Data System (ADS)
Molinaro, M.; Laurino, O.; Smareglia, R.
2012-09-01
Virtual Observatory (VO) protocols and standards are maturing, and the astronomical community is asking for astrophysical data to be easily reachable. This means data centers have to intensify their efforts to provide the data they manage not only through proprietary portals and services but also through interoperable resources developed on the basis of the IVOA (International Virtual Observatory Alliance) recommendations. Here we present the work and ideas developed at the IA2 (Italian Astronomical Archive) data center hosted by the INAF-OATs (Italian Institute for Astrophysics - Trieste Astronomical Observatory) to reach this goal. The core is an application, VO-Dance (written in Java), that can translate the content of existing DB and archive structures into VO-compliant resources. This application, in turn, relies on a database (potentially DBMS independent) to store the translation-layer information of each resource and auxiliary content (UCDs, field names, authorizations, policies, etc.). The last element is an administrative interface (currently developed using the Django python framework) that allows the data center administrators to set up and maintain resources. Because this deployment is platform independent, with a highly customizable database and administrative interface, the package, once stable and easily distributable, can also be used by single astronomers or groups to set up their own resources from their public datasets.
Total choline and choline-containing moieties of commercially available pulses.
Lewis, Erin D; Kosik, Sarah J; Zhao, Yuan-Yuan; Jacobs, René L; Curtis, Jonathan M; Field, Catherine J
2014-06-01
Estimating dietary choline intake can be challenging due to missing foods in the current United States Department of Agriculture (USDA) database. The objectives of the study were to quantify the choline-containing moieties and the total choline content of a variety of pulses available in North America and to use the expanded compositional database to determine the potential contribution of pulses to dietary choline intake. Commonly consumed pulses (n = 32) were analyzed by hydrophilic interaction liquid chromatography-tandem mass spectrometry (HILIC LC-MS/MS) and compared to the current USDA database. Cooking was found to reduce the relative contribution of free choline and to increase the contribution of phosphatidylcholine to total choline for most pulses (P < 0.05). Using the expanded database to estimate the choline content of recipes using pulses as meat alternatives resulted in a different estimation of choline content per serving (±30%) compared to the USDA database. These results suggest that when pulses are a large part of a meal or diet, accurate food composition data should be used.
Challenges in developing medicinal plant databases for sharing ethnopharmacological knowledge.
Ningthoujam, Sanjoy Singh; Talukdar, Anupam Das; Potsangbam, Kumar Singh; Choudhury, Manabendra Dutta
2012-05-07
Major research contributions in ethnopharmacology have generated vast amounts of data associated with medicinal plants. Computerized databases facilitate data management and analysis, making coherent information available to researchers, planners and other users. Web-based databases also facilitate knowledge transmission and feed the circle of information exchange between ethnopharmacological studies and the public audience. However, despite the development of many medicinal plant databases, a lack of uniformity is still discernible. This calls for defining a common standard to achieve the common objectives of ethnopharmacology. The aim of the study is to review the diversity of approaches in storing ethnopharmacological information in databases and to provide some minimal standards for these databases. A survey for articles on medicinal plant databases was conducted on the Internet using selected keywords. Grey literature and printed materials were also searched for information. Listed resources were critically analyzed for their approaches in content type, focus area and software technology. The necessity of rapidly incorporating traditional knowledge by compiling primary data is evident. While citation collection is a common approach to information compilation, it cannot fully assimilate the local literature that reflects traditional knowledge. Standards for systematic evaluation and for checking the quality and authenticity of the data also need to be defined. Databases focusing on thematic areas, viz. traditional medicine systems, regional coverage, disease and phytochemical information, are analyzed. Issues pertaining to data standards, data linking and unique identification need to be addressed, in addition to general issues such as lack of updates and sustainability. Against this background, suggestions have been made on some minimum standards for the development of medicinal plant databases. In spite of variations in approaches, the existence of many overlapping features indicates redundancy of resources and efforts. As the development of global data in a single database may not be possible in view of culture-specific differences, efforts can be directed to specific regional areas. The existing scenario calls for a collaborative approach to defining a common standard in medicinal plant databases for knowledge sharing and scientific advancement. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Barbosa-Silva, A; Pafilis, E; Ortega, J M; Schneider, R
2007-12-11
Data integration has become an important task for biological database providers. The current model for data exchange among different sources simplifies the way users access distinct information. The evolution of data representation from HTML to XML enabled programs, instead of humans, to interact with biological databases. We present here SRS.php, a PHP library that can interact with the data integration Sequence Retrieval System (SRS). The library has been written using SOAP definitions and permits programmatic communication with SRS through web services. Interactions take place by invoking the methods described in the WSDL and exchanging XML messages. The current functions available in the library have been built to access specific data stored in any of 90 different databases (such as UNIPROT, KEGG and GO) using the same query syntax format. Including the described functions in the source of scripts written in PHP enables them to act as web-service clients to the SRS server. The functions permit one to query the whole content of any SRS database, to list specific records in these databases, to get specific fields from the records, and to link any record between any pair of linked databases. The case study presented exemplifies use of the library to retrieve information regarding registries of a Plant Defense Mechanisms database. The Plant Defense Mechanisms database is currently being developed, and SRS.php is proposed as a means of acquiring the data for the warehousing tasks related to its setup and maintenance.
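Although SRS.php targets PHP, the same WSDL-described operations could be called from any SOAP-capable language. Below is a sketch in Python using the zeep SOAP client; the WSDL location and operation names are hypothetical placeholders echoing the list/get-field functions described above.

```python
from zeep import Client

# Hypothetical WSDL location; substitute a real SRS service description.
client = Client("http://example.org/srs/service?wsdl")

# List records matching a query in one databank, then fetch one field,
# mirroring the query / list / get-field functions described above.
entry_ids = client.service.getEntryList(database="UNIPROT",
                                        query="defensin")
for entry_id in entry_ids:
    length = client.service.getEntryFieldValue(database="UNIPROT",
                                               entry=entry_id,
                                               field="SeqLength")
    print(entry_id, length)
```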
ERIC Educational Resources Information Center
Uzunboylu, Huseyin; Genc, Zeynep
2017-01-01
The purpose of this study is to determine the recent trends in foreign language learning through mobile learning. The study was conducted employing document analysis and related content analysis among the qualitative research methodology. Through the search conducted on Scopus database with the key words "mobile learning and foreign language…
Geothermal NEPA Database on OpenEI (Poster)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Young, K. R.; Levine, A.
2014-09-01
The National Renewable Energy Laboratory (NREL) developed the Geothermal National Environmental Policy Act (NEPA) Database as a platform for government agencies and industry to access and maintain information related to geothermal NEPA documents. The data were collected to inform analyses of NEPA timelines, and the collected data were made publically available via this tool in case others might find the data useful. NREL staff and contractors collected documents from agency websites, during visits to the two busiest Bureau of Land Management (BLM) field offices for geothermal development, and through email and phone call requests from other BLM field offices. They then entered the information into the database, hosted by Open Energy Information (http://en.openei.org/wiki/RAPID/NEPA). The long-term success of the project will depend on the willingness of federal agencies, industry, and others to populate the database with NEPA and related documents, and to use the data for their own analyses. As the information and capabilities of the database expand, developers and agencies can save time on new NEPA reports by accessing a single location to research related activities, their potential impacts, and previously proposed and imposed mitigation measures. NREL used a wiki platform to allow industry and agencies to maintain the content in the future so that it continues to provide relevant and accurate information to users.
Vivar, Juan C; Pemu, Priscilla; McPherson, Ruth; Ghosh, Sujoy
2013-08-01
Abstract Unparalleled technological advances have fueled an explosive growth in the scope and scale of biological data and have propelled life sciences into the realm of "Big Data" that cannot be managed or analyzed by conventional approaches. Big Data in the life sciences are driven primarily by a diverse collection of 'omics'-based technologies, including genomics, proteomics, metabolomics, transcriptomics, metagenomics, and lipidomics. Gene-set enrichment analysis is a powerful approach for interrogating large 'omics' datasets, leading to the identification of biological mechanisms associated with observed outcomes. While several factors influence the results from such analyses, the impact of the contents of pathway databases is often under-appreciated. Pathway databases often contain variously named pathways that overlap with one another to varying degrees. Ignoring such redundancies during pathway analysis can lead to the designation of several pathways as significant due to high content-similarity, rather than truly independent biological mechanisms. Statistically, such dependencies also result in correlated p values and overdispersion, leading to biased results. We investigated the level of redundancy in multiple pathway databases and observed large discrepancies in the nature and extent of pathway overlap. This prompted us to develop the application ReCiPa (Redundancy Control in Pathway Databases), which controls redundancies in pathway databases based on user-defined thresholds. Analysis of genomic and genetic datasets, using ReCiPa-generated overlap-controlled versions of KEGG and Reactome pathways, led to a reduction in redundancy among the top-scoring gene-sets and allowed for the inclusion of additional gene-sets representing possibly novel biological mechanisms. Using obesity as an example, bioinformatic analysis further demonstrated that gene-sets identified from overlap-controlled pathway databases show stronger evidence of prior association with obesity compared to pathways identified from the original databases.
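The redundancy-control idea can be sketched with a simple overlap rule: merge gene sets whose pairwise similarity exceeds a user-defined threshold. The greedy union-merge below illustrates the general approach, not the exact ReCiPa algorithm; the pathway names and genes are invented.

```python
def jaccard(a, b):
    """Jaccard similarity of two gene sets."""
    return len(a & b) / len(a | b)

def collapse(pathways, threshold=0.5):
    """Greedily merge gene sets whose overlap exceeds the threshold."""
    merged = []
    for name, genes in pathways.items():
        for entry in merged:
            if jaccard(entry["genes"], genes) >= threshold:
                entry["names"].append(name)   # record the merged names
                entry["genes"] |= genes       # take the union of genes
                break
        else:
            merged.append({"names": [name], "genes": set(genes)})
    return merged

pathways = {"Insulin signaling": {"INS", "INSR", "IRS1", "PIK3CA"},
            "Insulin receptor cascade": {"INSR", "IRS1", "PIK3CA", "AKT1"},
            "Wnt signaling": {"WNT1", "CTNNB1", "APC"}}
print(collapse(pathways))   # first two entries collapse into one
```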
A Summary of Pavement and Material-Related Databases within the Texas Department of Transportation
DOT National Transportation Integrated Search
1999-09-01
This report summarizes important content and operational details about five different materials and pavement databases currently used by the Texas Department of Transportation (TxDOT). These databases include the Pavement Management Information Syste...
Automated Database Mediation Using Ontological Metadata Mappings
Marenco, Luis; Wang, Rixin; Nadkarni, Prakash
2009-01-01
Objective To devise an automated approach for integrating federated database information using database ontologies constructed from their extended metadata. Background One challenge of database federation is that the granularity of representation of equivalent data varies across systems. Dealing effectively with this problem is analogous to dealing with precoordinated vs. postcoordinated concepts in biomedical ontologies. Model Description The authors describe an approach based on ontological metadata mapping rules defined with elements of a global vocabulary, which allows a query specified at one granularity level to fetch data, where possible, from databases within the federation that use different granularities. This is implemented in OntoMediator, a newly developed production component of our previously described Query Integrator System. OntoMediator's operation is illustrated with a query that accesses three geographically separate, interoperating databases. An example based on SNOMED also illustrates the applicability of high-level rules to support the enforcement of constraints that can prevent inappropriate curator or power-user actions. Summary A rule-based framework simplifies the design and maintenance of systems where categories of data must be mapped to each other, for the purpose of either cross-database query or for curation of the contents of compositional controlled vocabularies. PMID:19567801
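A toy sketch of a granularity-bridging mapping rule of the kind described: a query phrased against a global, coarser concept fans out to the finer-grained codes each member database uses. The vocabularies, database names, and codes below are invented for illustration.

```python
# Hypothetical mapping rules from a global concept to per-database codes.
RULES = {
    "Neuron": {                                    # global vocabulary concept
        "db_a": ["Neuron"],                        # precoordinated, coarse
        "db_b": ["PyramidalCell", "GranuleCell"],  # postcoordinated, finer
    },
}

def expand(concept, database):
    """Translate a global concept into the codes one member DB understands."""
    return RULES.get(concept, {}).get(database, [])

print(expand("Neuron", "db_b"))   # -> ['PyramidalCell', 'GranuleCell']
```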
The DREO Elint Browser Utility (DEBU) reference manual
NASA Astrophysics Data System (ADS)
Ford, Barbara; Jones, David
1992-04-01
An electronic intelligence (ELINT) database browsing tool called DEBU has been developed that allows databases such as ELP, Kilting, EWIR, and AFEWC to be reviewed and analyzed from a user-friendly environment on a personal computer. DEBU's basic function is to allow users to examine the contents of user-selected subfiles of user-selected emitters of user-selected databases. DEBU augments this functionality with support for selecting (filtering) and combining subsets of emitters by user-selected attributes such as name, parameter type, or parameter value. DEBU provides facilities for examining histograms and x-y plots of selected parameters, for doing ambiguity analysis and mode-level analysis, and for generating and printing a variety of reports. A manual is provided for users of DEBU, including descriptions and illustrations of menus and windows.
ERIC Educational Resources Information Center
Eyler, Amy A.; Brownson, Ross C.; Aytur, Semra A.; Cradock, Angie L.; Doescher, Mark; Evenson, Kelly R.; Kerr, Jacqueline; Maddock, Jay; Pluto, Delores L.; Steinman, Lesley; Tompkins, Nancy O'Hara; Troped, Philip; Schmid, Thomas L.
2010-01-01
Objectives: To develop a comprehensive inventory of state physical education (PE) legislation, examine trends in bill introduction, and compare bill factors. Methods: State PE legislation from January 2001 to July 2007 was identified using a legislative database. Analysis included components of evidence-based school PE from the Community Guide and…
Examination of Studies on Technology-Assisted Collaborative Learning Published between 2010-2014
ERIC Educational Resources Information Center
Arnavut, Ahmet; Özdamli, Fezile
2016-01-01
This study is a content analysis of the articles about technology-assisted collaborative learning published in the Science Direct database between 2010 and 2014. Developing technology has become a topic that we encounter in every aspect of our lives. Educators deal with the contribution and integration of technology into education.…
Human Resource Development Practices in Russia: A Structured Literature Review
ERIC Educational Resources Information Center
Plakhotnik, Maria S.
2005-01-01
This literature review aimed to investigate the literature on HRD in Russian enterprises, U.S. firms in Russia, or U.S.-Russian joint ventures to determine the role and function of HRD practitioners in creating a successful economic transition. Thirty-three articles were selected through a database search and examined using content analysis. Three…
Gene: a gene-centered information resource at NCBI.
Brown, Garth R; Hem, Vichet; Katz, Kenneth S; Ovetsky, Michael; Wallin, Craig; Ermolaeva, Olga; Tolstoy, Igor; Tatusova, Tatiana; Pruitt, Kim D; Maglott, Donna R; Murphy, Terence D
2015-01-01
The National Center for Biotechnology Information's (NCBI) Gene database (www.ncbi.nlm.nih.gov/gene) integrates gene-specific information from multiple data sources. NCBI Reference Sequence (RefSeq) genomes for viruses, prokaryotes and eukaryotes are the primary foundation for Gene records in that they form the critical association between sequence and a tracked gene upon which additional functional and descriptive content is anchored. Additional content is integrated based on the genomic location and RefSeq transcript and protein sequence data. The content of a Gene record represents the integration of curation and automated processing from RefSeq, collaborating model organism databases, consortia such as Gene Ontology, and other databases within NCBI. Records in Gene are assigned unique, tracked integers as identifiers. The content (citations, nomenclature, genomic location, gene products and their attributes, phenotypes, sequences, interactions, variation details, maps, expression, homologs, protein domains and external databases) is available via interactive browsing through NCBI's Entrez system, via NCBI's Entrez programming utilities (E-Utilities and Entrez Direct) and for bulk transfer by FTP. Published by Oxford University Press on behalf of Nucleic Acids Research 2014. This work is written by (a) US Government employee(s) and is in the public domain in the US.
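As a sketch of programmatic access through the Entrez programming utilities mentioned above, the snippet below retrieves one Gene record's summary from the public esummary endpoint; the URL and JSON layout reflect NCBI's public documentation at the time of writing and should be verified against the current docs.

```python
import requests

# Public esummary endpoint; 672 is the human BRCA1 GeneID.
resp = requests.get(
    "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esummary.fcgi",
    params={"db": "gene", "id": "672", "retmode": "json"},
    timeout=30,
)
record = resp.json()["result"]["672"]
print(record["name"], "-", record["description"])
```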
Data-Base Software For Tracking Technological Developments
NASA Technical Reports Server (NTRS)
Aliberti, James A.; Wright, Simon; Monteith, Steve K.
1996-01-01
Technology Tracking System (TechTracS) computer program developed for use in storing and retrieving information on technology and related patent information developed under auspices of NASA Headquarters and NASA's field centers. Contents of database include multiple scanned still images and QuickTime movies as well as text. TechTracS includes word-processing, report-editing, chart-and-graph-editing, and search-editing subprograms. Extensive keyword searching capabilities enable rapid location of technologies, innovators, and companies. System performs routine functions automatically and serves multiple users.
Scholarly Online Database Use in Higher Education: A Faculty Survey.
ERIC Educational Resources Information Center
Piotrowski, Chris; Perdue, Bob; Armstrong, Terry
2005-01-01
The present study reports the results of a survey conducted at the University of West Florida concerning faculty usage and views toward online databases. Most respondents (N=46) felt quite satisfied with scholarly database availability through the university library. However, some faculty suggested that databases such as Current Contents and…
The Cystic Fibrosis Database: Content and Research Opportunities.
ERIC Educational Resources Information Center
Shaw, William M., Jr.; And Others
1991-01-01
Describes the files contained in the Cystic Fibrosis (CF) database and discusses educational and research opportunities using this database. Topics discussed include queries, evaluating the relevance of items retrieved, and use of the database in an online searching course in the School of Information and Library Science at the University of North…
ERIC Educational Resources Information Center
Kim, Deok-Hwan; Chung, Chin-Wan
2003-01-01
Discusses the collection fusion problem of image databases, concerned with retrieving relevant images by content based retrieval from image databases distributed on the Web. Focuses on a metaserver which selects image databases supporting similarity measures and proposes a new algorithm which exploits a probabilistic technique using Bayesian…
Van Wave, Timothy W; Decker, Michael
2003-04-01
Development of a method using marketing research data to assess food purchase behavior and consequent nutrient availability for purposes of nutrition surveillance, evaluation of intervention effects, and epidemiologic studies of diet-health relationships. Data collected on household food purchases accrued over a 13-week period were selected by using Universal Product Code numbers and household characteristics from a marketing research database. Universal Product Code numbers for 39,408 dairy product purchases were linked to a standard reference for food composition to estimate the nutrient content of foods purchased over time. Two thousand one hundred sixty-one households located in Victoria, Texas, and surrounding communities were active members of a frequent shopper program. Demographic characteristics of sample households and the nutrient content of their dairy product purchases were analyzed using frequency distribution, cross tabulation, analysis of variance, and t test procedures. A method using marketing research data was successfully applied to estimate household purchases of specific foods and their nutrient content from a marketing database containing hundreds of thousands of records. Differences in the distribution of dairy product purchases and their concomitant nutrients between Hispanic and non-Hispanic households were significant (P<.01 and P<.001, respectively) and sustained over time. Purchase records from large, nationally representative panels of shoppers, such as those maintained by major market research companies, might be used to accomplish detailed longitudinal epidemiologic studies or surveillance of national food- and nutrient-purchasing patterns within and between countries and segments of their respective populations.
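The core linkage step, joining UPC-keyed purchase records to a UPC-keyed composition reference and aggregating nutrients per household, can be sketched as follows; the UPCs, column names, and nutrient values are invented for illustration.

```python
import pandas as pd

# Purchase records keyed by UPC (household panel side); values invented.
purchases = pd.DataFrame({
    "household_id": [101, 101, 102],
    "upc": ["070470003027", "011110384621", "070470003027"],
    "quantity": [2, 1, 3],
})
# Food composition reference keyed by the same UPCs (values invented).
composition = pd.DataFrame({
    "upc": ["070470003027", "011110384621"],
    "calcium_mg_per_unit": [300.0, 450.0],
})

linked = purchases.merge(composition, on="upc", how="left")
linked["calcium_mg"] = linked["quantity"] * linked["calcium_mg_per_unit"]
print(linked.groupby("household_id")["calcium_mg"].sum())
```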
Sauer, Ursula G; Wächter, Thomas; Hareng, Lars; Wareing, Britta; Langsch, Angelika; Zschunke, Matthias; Alvers, Michael R; Landsiedel, Robert
2014-06-01
The knowledge-based search engine Go3R, www.Go3R.org, has been developed to assist scientists from industry and regulatory authorities in collecting comprehensive toxicological information with a special focus on identifying available alternatives to animal testing. The semantic search paradigm of Go3R makes use of expert knowledge on 3Rs methods and regulatory toxicology, laid down in the ontology, a network of concepts, terms, and synonyms, to recognize the contents of documents. Search results are automatically sorted into a dynamic table of contents presented alongside the list of documents retrieved. This table of contents allows the user to quickly filter the set of documents by topics of interest. Documents containing hazard information are automatically assigned to a user interface following the endpoint-specific IUCLID5 categorization scheme required, e.g. for REACH registration dossiers. For this purpose, complex endpoint-specific search queries were compiled and integrated into the search engine (based upon a gold standard of 310 references that had been assigned manually to the different endpoint categories). Go3R sorts 87% of the references concordantly into the respective IUCLID5 categories. Currently, Go3R searches in the 22 million documents available in the PubMed and TOXNET databases. However, it can be customized to search in other databases including in-house databanks. Copyright © 2013 Elsevier Ltd. All rights reserved.
16 CFR 1102.12 - Manufacturer comments.
Code of Federal Regulations, 2012 CFR
2012-01-01
... PUBLICLY AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE Content Requirements § 1102.12 Manufacturer... Database if such manufacturer comment meets the following requirements: (1) Manufacturer comment relates to... publication in the Database. (2) Unique identifier. A manufacturer comment must state the unique identifier...
16 CFR 1102.12 - Manufacturer comments.
Code of Federal Regulations, 2011 CFR
2011-01-01
... PUBLICLY AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE (Eff. Jan. 10, 2011) Content Requirements... private labeler in the Database if such manufacturer comment meets the following requirements: (1... that is submitted for publication in the Database. (2) Unique identifier. A manufacturer comment must...
16 CFR 1102.12 - Manufacturer comments.
Code of Federal Regulations, 2014 CFR
2014-01-01
... PUBLICLY AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE Content Requirements § 1102.12 Manufacturer... Database if such manufacturer comment meets the following requirements: (1) Manufacturer comment relates to... publication in the Database. (2) Unique identifier. A manufacturer comment must state the unique identifier...
Code of Federal Regulations, 2011 CFR
2011-01-01
... AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE (Eff. Jan. 10, 2011) Notice and Disclosure... of the contents of the Consumer Product Safety Information Database, particularly with respect to the accuracy, completeness, or adequacy of information submitted by persons outside of the CPSC. The Database...
Robasky, Kimberly; Bulyk, Martha L
2011-01-01
The Universal PBM Resource for Oligonucleotide-Binding Evaluation (UniPROBE) database is a centralized repository of information on the DNA-binding preferences of proteins as determined by universal protein-binding microarray (PBM) technology. Each entry for a protein (or protein complex) in UniPROBE provides the quantitative preferences for all possible nucleotide sequence variants ('words') of length k ('k-mers'), as well as position weight matrix (PWM) and graphical sequence logo representations of the k-mer data. In this update, we describe >130% expansion of the database content, incorporation of a protein BLAST (blastp) tool for finding protein sequence matches in UniPROBE, the introduction of UniPROBE accession numbers and additional database enhancements. The UniPROBE database is available at http://uniprobe.org.
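A position weight matrix of the kind served by UniPROBE can score sequence words directly. Below is a toy sketch with an invented 3-position matrix (UniPROBE entries cover all words of length k, typically k = 8); the log-odds scoring against a uniform background is a standard convention, not UniPROBE-specific.

```python
import math

# Hypothetical 3-position PWM: probability of each base at each position.
PWM = [{"A": 0.7, "C": 0.1, "G": 0.1, "T": 0.1},
       {"A": 0.1, "C": 0.1, "G": 0.7, "T": 0.1},
       {"A": 0.1, "C": 0.1, "G": 0.1, "T": 0.7}]
BACKGROUND = 0.25   # uniform background base frequency

def log_odds(word):
    """Sum of per-position log-odds of the word versus background."""
    return sum(math.log2(PWM[i][base] / BACKGROUND)
               for i, base in enumerate(word))

print(log_odds("AGT"))   # consensus word scores highest
print(log_odds("CCA"))   # non-consensus word scores low
```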
SFINX-a drug-drug interaction database designed for clinical decision support systems.
Böttiger, Ylva; Laine, Kari; Andersson, Marine L; Korhonen, Tuomas; Molin, Björn; Ovesjö, Marie-Louise; Tirkkonen, Tuire; Rane, Anders; Gustafsson, Lars L; Eiermann, Birgit
2009-06-01
The aim was to develop a drug-drug interaction database (SFINX) to be integrated into decision support systems or to be used in website solutions for clinical evaluation of interactions. Key elements such as substance properties and names, drug formulations, text structures and references were defined before development of the database. Standard operating procedures for literature searches, text writing rules and a classification system for clinical relevance and documentation level were determined. ATC codes, CAS numbers and country-specific codes for substances were identified and quality assured to ensure safe integration of SFINX into other data systems. Much effort was put into giving short and practical advice regarding clinically relevant drug-drug interactions. SFINX includes over 8,000 interaction pairs and is integrated into Swedish and Finnish computerised decision support systems. Over 31,000 physicians and pharmacists are receiving interaction alerts through SFINX. User feedback is collected for continuous improvement of the content. SFINX is a potentially valuable tool delivering instant information on drug interactions during prescribing and dispensing.
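At its core, a decision-support integration of such a database is an order-independent lookup on a drug pair. Below is a toy sketch; the example pair, classification codes, and advice text are invented, not SFINX content.

```python
# Toy interaction store keyed by unordered substance pairs (values invented).
INTERACTIONS = {
    frozenset(["simvastatin", "clarithromycin"]): {
        "relevance": "D",        # illustrative clinical-relevance class
        "documentation": "3",    # illustrative documentation level
        "advice": "Consider pausing the statin during the antibiotic course.",
    },
}

def check(drug_a, drug_b):
    """Order-independent lookup of one drug pair."""
    return INTERACTIONS.get(frozenset([drug_a.lower(), drug_b.lower()]))

alert = check("Clarithromycin", "Simvastatin")
if alert:
    print(alert["relevance"], alert["advice"])
```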
Palaeo-sea-level and palaeo-ice-sheet databases: problems, strategies, and perspectives
NASA Astrophysics Data System (ADS)
Düsterhus, André; Rovere, Alessio; Carlson, Anders E.; Horton, Benjamin P.; Klemann, Volker; Tarasov, Lev; Barlow, Natasha L. M.; Bradwell, Tom; Clark, Jorie; Dutton, Andrea; Gehrels, W. Roland; Hibbert, Fiona D.; Hijma, Marc P.; Khan, Nicole; Kopp, Robert E.; Sivan, Dorit; Törnqvist, Torbjörn E.
2016-04-01
Sea-level and ice-sheet databases have driven numerous advances in understanding the Earth system. We describe the challenges and offer the best strategies that can be adopted to build self-consistent and standardised databases of geological and geochemical information used to archive palaeo-sea-levels and palaeo-ice-sheets. There are three phases in the development of a database: (i) measurement, (ii) interpretation, and (iii) database creation. Measurement should include the objective description of the position and age of a sample, description of associated geological features, and quantification of uncertainties. Interpretation of the sample may have a subjective component, but it should always include uncertainties and alternative or contrasting interpretations, with any exclusion of existing interpretations requiring a full justification. During the creation of a database, an approach based on accessibility, transparency, trust, availability, continuity, completeness, and communication of content (ATTAC3) must be adopted. It is essential to consider the community that creates and benefits from a database. We conclude that funding agencies should not only consider the creation of original data in specific research-question-oriented projects, but also include the possibility of using part of the funding for IT-related and database creation tasks, which are essential to guarantee accessibility and maintenance of the collected data.
The Pan European Phenological Database PEP725: Data Content and Data Quality Control Procedures
NASA Astrophysics Data System (ADS)
Jurkovic, Anita; Hübner, Thomas; Koch, Elisabeth; Lipa, Wolfgang; Scheifinger, Helfried; Ungersböck, Markus; Zach-Hermann, Susanne
2014-05-01
Phenology - the study of the timing of recurring biological events in the animal and plant world - has become an important approach for climate change impact studies in recent years. It is therefore a "conditio sine qua non" to collect, archive, digitize, control, and update phenological datasets. Thus, and with regard to cross-border cooperation and activities, it was necessary to establish, operate, and promote a pan-European phenological database (PEP725). Such a database - designed and tested under COST Action 725 in 2004 and further developed and maintained in the framework of the EUMETNET program PEP725 - collects data from different European governmental and non-governmental institutions and thus offers a unique compilation of plant phenological observations. The data follow the same classification scheme - the so-called BBCH coding system - which makes datasets comparable. Europe has a long tradition in the observation of phenological events: the history of collecting phenological data and their usage in climatology began in 1751. The first datasets in PEP725 date back to 1868. However, only a few observations are available until 1950. From 1951 onwards, the phenological networks all over Europe developed rapidly: currently, PEP725 provides about 9 million records from 23 European countries (covering approximately 50% of Europe). To supply the data in good and uniform quality, it is essential to establish and develop data quality control procedures. Consequently, one of the main tasks within PEP725 is the conception of a multi-stage quality control. Currently the tests comprise, in sequence, completeness, plausibility, time consistency, climatological, and statistical checks. In a nutshell, the poster presents the status quo of the data content of the PEP725 database and the incipient stages of the quality controls in use and planned. For more details, we refer to the PEP725 website (http://www.pep725.eu) and invite additional institutions and regional services to join our program.
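The stepwise checks listed above can be sketched as a small pipeline over one observation record; the record layout and thresholds below are illustrative assumptions, not the PEP725 implementation.

```python
def qc_record(rec, prev_year_doy=None):
    """Run staged checks on one phenological observation; return issues."""
    issues = []
    # 1) completeness: all required fields present
    for key in ("station", "species", "bbch", "year", "doy"):
        if rec.get(key) in (None, ""):
            issues.append(f"missing {key}")
    # 2) plausibility: day of year must fall inside the calendar year
    if not 1 <= rec.get("doy", 0) <= 366:
        issues.append("implausible day of year")
    # 3) time consistency: flag large jumps versus the previous year
    if prev_year_doy is not None and abs(rec["doy"] - prev_year_doy) > 60:
        issues.append("inconsistent with previous year")
    return issues

rec = {"station": 1391, "species": "Betula pendula",
       "bbch": 11, "year": 2013, "doy": 412}
print(qc_record(rec, prev_year_doy=110))   # both checks fire on this record
```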
16 CFR § 1102.12 - Manufacturer comments.
Code of Federal Regulations, 2013 CFR
2013-01-01
... PUBLICLY AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE Content Requirements § 1102.12 Manufacturer... Database if such manufacturer comment meets the following requirements: (1) Manufacturer comment relates to... publication in the Database. (2) Unique identifier. A manufacturer comment must state the unique identifier...
Study of Temporal Effects on Subjective Video Quality of Experience.
Bampis, Christos George; Zhi Li; Moorthy, Anush Krishna; Katsavounidis, Ioannis; Aaron, Anne; Bovik, Alan Conrad
2017-11-01
HTTP adaptive streaming is being increasingly deployed by network content providers, such as Netflix and YouTube. By dividing video content into data chunks encoded at different bitrates, a client is able to request the appropriate bitrate for the segment to be played next based on the estimated network conditions. However, this can introduce a number of impairments, including compression artifacts and rebuffering events, which can severely impact an end-user's quality of experience (QoE). We have recently created a new video quality database, which simulates a typical video streaming application, using long video sequences and interesting Netflix content. Going beyond previous efforts, the new database contains highly diverse and contemporary content, and it includes the subjective opinions of a sizable number of human subjects regarding the effects on QoE of both rebuffering and compression distortions. We observed that rebuffering is always obvious and unpleasant to subjects, while bitrate changes may be less obvious due to content-related dependencies. Transient bitrate drops were preferable over rebuffering only on low complexity video content, while consistently low bitrates were poorly tolerated. We evaluated different objective video quality assessment algorithms on our database and found that objective video quality models are unreliable for QoE prediction on videos suffering from both rebuffering events and bitrate changes. This implies the need for more general QoE models that take into account objective quality models, rebuffering-aware information, and memory. The publicly available video content as well as metadata for all of the videos in the new database can be found at http://live.ece.utexas.edu/research/LIVE_NFLXStudy/nflx_index.html.
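As a rough illustration of the client-side bitrate decision described in this abstract, the sketch below selects the highest rung of an encoding ladder that the estimated throughput can sustain; the ladder values and safety margin are invented, and real players add buffer-occupancy logic that is omitted here.

```python
# Toy adaptive-bitrate chooser: pick the highest encoded bitrate the estimated
# network throughput can sustain, falling back to the lowest rung to limit
# rebuffering. Ladder values and safety margin are illustrative assumptions.
BITRATE_LADDER_KBPS = [235, 750, 1750, 3000, 5800]

def choose_bitrate(estimated_throughput_kbps, safety=0.8):
    usable = estimated_throughput_kbps * safety
    candidates = [b for b in BITRATE_LADDER_KBPS if b <= usable]
    return max(candidates) if candidates else BITRATE_LADDER_KBPS[0]

print(choose_bitrate(2400))   # -> 1750
```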
An integrated developmental model for studying identity content in context.
Galliher, Renee V; McLean, Kate C; Syed, Moin
2017-11-01
Historically, identity researchers have placed greater emphasis on processes of identity development (how people develop their identities) and less on the content of identity (what the identity is). The relative neglect of identity content may reflect the lack of a comprehensive framework to guide research. In this article, we provide such a comprehensive framework for the study of the content of identity, including 4 levels of analysis. At the broadest level, we situate individual identity within historical, cultural, and political contexts, elaborating on identity development within the context of shifting cultural norms, values, and attitudes. Histories of prejudice and discrimination are relevant in shaping intersections among historically marginalized identities. Second, we examine social roles as unique and central contexts for identity development, such that relationship labels become integrated into a larger identity constellation. Third, domains of individual or personal identity content intersect to yield a sense of self in which various aspects are subjectively experienced as an integrated whole. We explore the negotiation of culturally marginalized and dominant identity labels, as well as idiosyncratic aspects of identities based on unique characteristics or group memberships. Finally, we argue that the content of identity is enacted at the level of everyday interactions, the "micro-level" of identity. The concepts of identity conflict, coherence, and compartmentalization are presented as strategies used to navigate identity content across these 4 levels. This framework serves as an organizing tool for the current literature, as well as for designing future studies on identity development. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Zhao, Xudong; Liu, Liang; Hu, Chengping; Chen, Fazhan; Sun, Xirong
2017-07-01
It has been nearly 40 years since the reform and opening up of Mainland China. The mental health services system has developed rapidly as a part of the profound socioeconomic changes that ensued. However, its development has not been as substantial as in other areas of medical care. For the current qualitative systematic review, we searched databases including China Biology Medicine disc, Weipu, China National Knowledge Infrastructure, the Wanfang digital periodical full-text database, China's important newspaper full-text database, the China Statistical Yearbook database, etc. The content of primary research, literature, and policy papers about the evolution and development of Chinese mental health services was systematically reviewed and analysed using thematic analysis. Two main themes, relating to the necessity and feasibility of reforming the current mental health services system, emerged. We discuss 5 corresponding subthemes under the umbrella of the necessity of improving the current treatment, rehabilitation, prevention, and service systems, and 7 requirements for the feasibility of reforming the current system. We conclude that as the development of the Chinese economy and the spirit of humanistic care continue, the improvement and reform of the mental health services system are both necessary and feasible. Copyright © 2017 John Wiley & Sons, Ltd.
Software Application for Supporting the Education of Database Systems
ERIC Educational Resources Information Center
Vágner, Anikó
2015-01-01
The article introduces an application which supports the education of database systems, particularly the teaching of SQL and PL/SQL in Oracle Database Management System environment. The application has two parts, one is the database schema and its content, and the other is a C# application. The schema is to administrate and store the tasks and the…
Floegel, Anna; Kim, Dae-Ok; Chung, Sang-Jin; Song, Won O; Fernandez, Maria Luz; Bruno, Richard S; Koo, Sung I; Chun, Ock K
2010-09-01
Estimation of total antioxidant intake is the first step to investigate the protective effects of antioxidants on oxidative stress-mediated disease. The present study was designed to develop an algorithm to estimate total antioxidant capacity (TAC) of the US diet. TAC of individual antioxidants and 50 popular antioxidant-rich food items in the US diet were determined by 2,2'-azino-bis-3-ethylbenzthiazoline-6-sulphonic acid (ABTS) assay and the 1,1-diphenyl-2-picrylhydrazyl (DPPH) assay. Theoretical TAC of foods was calculated as the sum of individual antioxidant capacities of compounds. The top 10 TAC food items in the US diet according to standard serving size were blueberry > plum > green tea > strawberry > green tea (decaffeinated) > red wine > grape juice > black tea > cherry > grape. Major contributors to TAC were the total phenolic content (r = 0.952, P < 0.001) and flavonoid content (r = 0.827, P < 0.001) of 50 foods. Theoretical TAC was positively correlated to experimental TAC of 50 foods determined by the ABTS assay (r = 0.833, P < 0.001) and the DPPH assay (r = 0.696, P < 0.001), and to TAC from the USDA database for the oxygen radical absorbance capacity (r = 0.484, P = 0.001, n = 44). The TAC database of the US diet has been established and validated. In future studies, TAC of the US diet can be linked to biomarkers of chronic disease.
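The additive model used for the theoretical TAC can be sketched in a few lines: the estimate is the content-weighted sum of each compound's measured capacity. The compound names and numeric values below are invented for illustration and are not values from the study.

```python
# Content-weighted additive TAC sketch: theoretical TAC is the sum over
# compounds of (content per 100 g) x (per-compound antioxidant capacity).
# All compound names and numeric values are invented for illustration.
antioxidant_capacity = {        # assumed mmol Trolox equivalents per mg
    "quercetin": 0.011,
    "catechin": 0.008,
    "ascorbic_acid": 0.006,
}

def theoretical_tac(composition_mg_per_100g):
    """composition: compound -> mg per 100 g of the food."""
    return sum(mg * antioxidant_capacity[c]
               for c, mg in composition_mg_per_100g.items()
               if c in antioxidant_capacity)

blueberry = {"quercetin": 7.7, "catechin": 5.2, "ascorbic_acid": 9.7}
print(f"theoretical TAC: {theoretical_tac(blueberry):.3f} mmol TE per 100 g")
```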
Lack, N
2001-08-01
The introduction of the modified data set for quality assurance in obstetrics (formerly the perinatal survey) in Lower Saxony and Bavaria as early as 1999 created an urgent requirement for a correspondingly new statistical analysis of the revised data. The general outline of a new data reporting concept was originally presented by the Bavarian Commission for Perinatology and Neonatology at the Munich Perinatal Conference in November 1997. These ideas are germinal to the content and layout of the new quality report for obstetrics, currently in its nationwide harmonisation phase coordinated by the federal office for quality assurance in hospital care. A flexible, modular, database-oriented analysis tool developed in Bavaria is now in its second year of successful operation. The functionalities of this system are described in detail.
Harst, Lorenz; Timpel, Patrick; Otto, Lena; Wollschlaeger, Bastian; Richter, Peggy; Schlieter, Hannes
2018-01-01
This paper presents an approach for the evaluation of finished telemedicine projects using qualitative methods. Telemedicine applications are said to improve the performance of health care systems. While there are countless telemedicine projects, the vast majority never crosses the threshold from testing to implementation and diffusion. Projects were collected from German project databases in the area of telemedicine following systematically developed criteria. In a testing phase, ten projects were subjected to a qualitative content analysis to identify limitations, the need for further research, and lessons learned. Using Mayring's method of inductive category development, six categories of possible future research were derived. The proposed method is thus an important contribution to diffusion and translation research regarding telemedicine, as it is applicable to the systematic study of project databases.
A controlled nursing vocabulary for indexing and information retrieval.
Pekkala, Eila; Saranto, Kaija; Tallberg, Marianne; Ensio, Anneli; Junttila, Kristiina
2006-01-01
The lack of a nursing thesaurus in Finnish has emerged as a problem for nursing professionals searching for nursing knowledge and for librarians indexing literature in databases. The Finnish Nursing Education Society launched a project focusing on the development of a nursing vocabulary and the compilation of a thesaurus. The content of the vocabulary was created by six experts using the Delphi technique. The validity of the vocabulary was tested twice for indexing nursing research, and the vocabulary has since been revised. The vocabulary can be used for indexing and information retrieval purposes. The main challenge is to ensure that nurses can easily find national as well as international nursing research in databases and enhance research utilization.
Reactome graph database: Efficient access to complex pathway data
Korninger, Florian; Viteri, Guilherme; Marin-Garcia, Pablo; Ping, Peipei; Wu, Guanming; Stein, Lincoln; D’Eustachio, Peter
2018-01-01
Reactome is a free, open-source, open-data, curated and peer-reviewed knowledgebase of biomolecular pathways. One of its main priorities is to provide easy and efficient access to its high quality curated data. At present, biological pathway databases typically store their contents in relational databases. This limits access efficiency because there are performance issues associated with queries traversing highly interconnected data. The same data in a graph database can be queried more efficiently. Here we present the rationale behind the adoption of a graph database (Neo4j) as well as the new ContentService (REST API) that provides access to these data. The Neo4j graph database and its query language, Cypher, provide efficient access to the complex Reactome data model, facilitating easy traversal and knowledge discovery. The adoption of this technology greatly improved query efficiency, reducing the average query time by 93%. The web service built on top of the graph database provides programmatic access to Reactome data by object oriented queries, but also supports more complex queries that take advantage of the new underlying graph-based data storage. By adopting graph database technology we are providing a high performance pathway data resource to the community. The Reactome graph database use case shows the power of NoSQL database engines for complex biological data types. PMID:29377902
Reactome graph database: Efficient access to complex pathway data.
Fabregat, Antonio; Korninger, Florian; Viteri, Guilherme; Sidiropoulos, Konstantinos; Marin-Garcia, Pablo; Ping, Peipei; Wu, Guanming; Stein, Lincoln; D'Eustachio, Peter; Hermjakob, Henning
2018-01-01
Reactome is a free, open-source, open-data, curated and peer-reviewed knowledgebase of biomolecular pathways. One of its main priorities is to provide easy and efficient access to its high quality curated data. At present, biological pathway databases typically store their contents in relational databases. This limits access efficiency because there are performance issues associated with queries traversing highly interconnected data. The same data in a graph database can be queried more efficiently. Here we present the rationale behind the adoption of a graph database (Neo4j) as well as the new ContentService (REST API) that provides access to these data. The Neo4j graph database and its query language, Cypher, provide efficient access to the complex Reactome data model, facilitating easy traversal and knowledge discovery. The adoption of this technology greatly improved query efficiency, reducing the average query time by 93%. The web service built on top of the graph database provides programmatic access to Reactome data by object oriented queries, but also supports more complex queries that take advantage of the new underlying graph-based data storage. By adopting graph database technology we are providing a high performance pathway data resource to the community. The Reactome graph database use case shows the power of NoSQL database engines for complex biological data types.
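A small sketch of programmatic access to the ContentService follows; the endpoint path, the example stable identifier, and the response field are assumptions made for illustration, not a verified API contract.

```python
import json
import urllib.request

# Hedged sketch of querying the Reactome ContentService (REST API) described
# above. The /data/query/{id} path, the example stable ID, and the
# 'displayName' field are assumptions, not a verified contract.
BASE = "https://reactome.org/ContentService"

def query_entity(stable_id):
    # Internally the service resolves such lookups against the Neo4j graph,
    # conceptually something like: MATCH (e {stId: $id}) RETURN e
    with urllib.request.urlopen(f"{BASE}/data/query/{stable_id}") as resp:
        return json.load(resp)

if __name__ == "__main__":
    entity = query_entity("R-HSA-68886")   # hypothetical example stable ID
    print(entity.get("displayName"))
```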
The Status of Statewide Subscription Databases
ERIC Educational Resources Information Center
Krueger, Karla S.
2012-01-01
This qualitative content analysis presents subscription databases available to school libraries through statewide purchases. The results may help school librarians evaluate grade and subject-area coverage, make comparisons to recommended databases, and note potential suggestions for their states to include in future contracts or for local…
16 CFR 1102.10 - Reports of harm.
Code of Federal Regulations, 2012 CFR
2012-01-01
... AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE Content Requirements § 1102.10 Reports of harm. (a... they have a public safety purpose. (b) Manner of submission. To be entered into the Database, reports... Commission will publish in the Publicly Available Consumer Product Safety Information Database reports of...
Bigger Is (Maybe) Better: Librarians' Views of Interdisciplinary Databases
ERIC Educational Resources Information Center
Gilbert, Julie K.
2010-01-01
This study investigates librarians' satisfaction with general interdisciplinary databases for undergraduate research and explores possibilities for improving these databases. Results from a national survey suggest that librarians at a variety of institutions are relatively satisfied overall with the content and usability of general,…
16 CFR 1102.10 - Reports of harm.
Code of Federal Regulations, 2014 CFR
2014-01-01
... AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE Content Requirements § 1102.10 Reports of harm. (a... they have a public safety purpose. (b) Manner of submission. To be entered into the Database, reports... Commission will publish in the Publicly Available Consumer Product Safety Information Database reports of...
16 CFR 1102.10 - Reports of harm.
Code of Federal Regulations, 2011 CFR
2011-01-01
... AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE (Eff. Jan. 10, 2011) Content Requirements § 1102.10... they have a public safety purpose. (b) Manner of submission. To be entered into the Database, reports... Commission will publish in the Publicly Available Consumer Product Safety Information Database reports of...
Sullivan, Catherine M; Leon, Janeen B; Sehgal, Ashwini R
2007-09-01
Phosphorus-containing additives are increasingly being added to food products. We sought to determine the potential impact of these additives. We focused on chicken products as an example. We purchased a variety of chicken products, prepared them according to package directions, and performed laboratory analyses to determine their actual phosphorus content. We used ESHA Food Processor SQL Software (version 9.8, ESHA Research, Salem, OR) to determine the expected phosphorus content of each product. Of 38 chicken products, 35 (92%) had phosphorus-containing additives listed among their ingredients. For every category of chicken products containing additives, the actual phosphorus content was greater than the content expected from the nutrient database. For example, actual phosphorus content exceeded expected phosphorus content by an average of 84 mg/100 g for breaded breast strips. There was also a great deal of variation within each category. For example, the difference between actual and expected phosphorus content ranged from 59 to 165 mg/100 g for breast patties. Two 100-g servings of additive-containing products contained, on average, 440 mg of phosphorus, or about half the total daily recommended intake for dialysis patients. Phosphorus-containing additives significantly increase the amount of phosphorus in chicken products. Available nutrient databases do not reflect this higher phosphorus content, and the variation between similar products makes it impossible for patients and dietitians to accurately estimate phosphorus content. We recommend that dialysis patients limit their intake of additive-containing products, and that the phosphorus content of food products be included on nutrition facts labels.
Sullivan, Catherine M.; Leon, Janeen B.; Sehgal, Ashwini R.
2007-01-01
Objective: Phosphorus-containing additives are increasingly added to food products. We sought to determine the potential impact of these additives. We focused on chicken products as an example. Methods: We purchased a variety of chicken products, prepared them according to package directions, and performed laboratory analyses to determine their actual phosphorus content. We used ESHA Food Processor SQL Software to determine the expected phosphorus content of each product. Results: Of 38 chicken products, 35 (92%) had phosphorus-containing additives listed among their ingredients. For every category of chicken products containing additives, the actual phosphorus content was greater than the content expected from the nutrient database. For example, actual phosphorus content exceeded expected phosphorus content by an average of 84 mg/100 g for breaded breast strips. There was also a great deal of variation within each category. For example, the difference between actual and expected phosphorus content ranged from 59 to 165 mg/100 g for breast patties. Two 100 g servings of additive-containing products contain, on average, 440 mg of phosphorus, or about half the total daily recommended intake for dialysis patients. Conclusion: Phosphorus-containing additives significantly increase the amount of phosphorus in chicken products. Available nutrient databases do not reflect this higher phosphorus content, and the variation between similar products makes it impossible for patients and dietitians to accurately estimate phosphorus content. We recommend that dialysis patients limit their intake of additive-containing products and that the phosphorus content of food products be included on nutrition facts labels. PMID:17720105
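The serving arithmetic reported in both versions of this record is easy to reproduce; the 900 mg/day limit below is an assumed illustrative guideline, since the abstracts state only "about half the total daily recommended intake" for dialysis patients.

```python
# Worked check of the figures above: two 100 g servings of additive-containing
# chicken averaging 220 mg phosphorus per 100 g yield 440 mg. The 900 mg/day
# limit is an assumption used only to make the "about half" ratio concrete.
servings, serving_g = 2, 100
phosphorus_mg_per_100g = 220
assumed_daily_limit_mg = 900

intake_mg = servings * (serving_g / 100) * phosphorus_mg_per_100g
print(f"{intake_mg:.0f} mg = {intake_mg / assumed_daily_limit_mg:.0%} "
      "of the assumed daily limit")   # 440 mg, roughly half
```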
ERIC Educational Resources Information Center
Shabajee, Paul; Miller, Libby; Dingley, Andy
A group of research projects based at HP-Labs Bristol, the University of Bristol (England) and ARKive (a new large multimedia database project focused on the world's biodiversity, based in the United Kingdom) is working to develop a flexible model for the indexing of multimedia collections that allows users to annotate content utilizing extensible…
An overview of the national EEOC ADA research project.
McMahon, Brian T; Edwards, Ronald; Rumrill, Phillip D; Hursh, Norman
2005-01-01
The authors outline the development and scope of the National EEOC ADA Research Project which resulted from a cooperative agreement between the Equal Employment Opportunity Commission and Virginia Commonwealth University. Research questions, the EEOC database, extraction of study data, limitations of the data, the organization of research teams, and the contents of this special issue of WORK are described.
Following the Yellow Brick Road to Simplified Link Management
ERIC Educational Resources Information Center
Engard, Nicole C.
2005-01-01
Jenkins Law Library is the oldest law library in America, and has a reputation for offering great content not only to local attorneys, but also to the entire legal research community. In this article, the author, who is Web manager at Jenkins, describes the development of an automated process by which research links can be added to the database so…
16 CFR 1102.16 - Additional information.
Code of Federal Regulations, 2012 CFR
2012-01-01
... PUBLICLY AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE Content Requirements § 1102.16 Additional... in the Database any additional information it determines to be in the public interest, consistent...
16 CFR 1102.16 - Additional information.
Code of Federal Regulations, 2014 CFR
2014-01-01
... PUBLICLY AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE Content Requirements § 1102.16 Additional... in the Database any additional information it determines to be in the public interest, consistent...
Ab Initio Design of Potent Anti-MRSA Peptides based on Database Filtering Technology
Mishra, Biswajit; Wang, Guangshun
2012-01-01
To meet the challenge of antibiotic resistance worldwide, a new generation of antimicrobials must be developed [1]. This communication demonstrates ab initio design of potent peptides against methicillin-resistant Staphylococcus aureus (MRSA). Our idea is that the peptide is very likely to be active when the most probable parameters are utilized in each step of the design. We derived the most probable parameters (e.g. amino acid composition, peptide hydrophobic content, and net charge) from the antimicrobial peptide database [2] by developing a database filtering technology (DFT). Different from classic cationic antimicrobial peptides, usually with high cationicity, DFTamP1, the first anti-MRSA peptide designed using this technology, is a short peptide with high hydrophobicity but low cationicity. Such a molecular design made the peptide highly potent. Indeed, the peptide caused bacterial surface damage and killed community-associated MRSA USA300 in 60 minutes. Structural determination of DFTamP1 by NMR spectroscopy revealed a broad hydrophobic surface, providing a basis for its potency against MRSA, which is known to deploy positively charged moieties on the surface as a mechanism for resistance. A combination of our ab initio design with database screening [3] led to yet another peptide with enhanced potency. Because of simple composition, short length, stability to proteases, and membrane targeting, the designed peptides are attractive leads for developing novel anti-MRSA therapeutics. Our database-derived design concept can be applied to the design of peptide mimicries to combat MRSA as well. PMID:22803960
Ab initio design of potent anti-MRSA peptides based on database filtering technology.
Mishra, Biswajit; Wang, Guangshun
2012-08-01
To meet the challenge of antibiotic resistance worldwide, a new generation of antimicrobials must be developed. This communication demonstrates ab initio design of potent peptides against methicillin-resistant Staphylococcus aureus (MRSA). Our idea is that the peptide is very likely to be active when the most probable parameters are utilized in each step of the design. We derived the most probable parameters (e.g., amino acid composition, peptide hydrophobic content, and net charge) from the antimicrobial peptide database by developing a database filtering technology (DFT). Different from classic cationic antimicrobial peptides usually with high cationicity, DFTamP1, the first anti-MRSA peptide designed using this technology, is a short peptide with high hydrophobicity but low cationicity. Such a molecular design made the peptide highly potent. Indeed, the peptide caused bacterial surface damage and killed community-associated MRSA USA300 in 60 min. Structural determination of DFTamP1 by NMR spectroscopy revealed a broad hydrophobic surface, providing a basis for its potency against MRSA known to deploy positively charged moieties on the surface as a mechanism for resistance. Our ab initio design combined with database screening led to yet another peptide with enhanced potency. Because of the simple composition, short length, stability to proteases, and membrane targeting, the designed peptides are attractive leads for developing novel anti-MRSA therapeutics. Our database-derived design concept can be applied to the design of peptide mimicries to combat MRSA as well.
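The filtering idea lends itself to a short sketch: scan a peptide set and keep the most probable value of each design parameter. The peptide list, the hydrophobic residue set, and the rounding granularity are illustrative assumptions, not the published DFT procedure.

```python
from collections import Counter

# Sketch of database filtering: take the most probable value of each design
# parameter across a peptide set. The peptide list, hydrophobic residue set,
# and 0.1 rounding of the hydrophobic fraction are illustrative assumptions.
HYDROPHOBIC = set("AILMFWVC")
POSITIVE, NEGATIVE = set("KR"), set("DE")

def most_probable(values):
    return Counter(values).most_common(1)[0][0]

def dft_parameters(peptides):
    lengths = [len(p) for p in peptides]
    hydro = [round(sum(aa in HYDROPHOBIC for aa in p) / len(p), 1)
             for p in peptides]
    charge = [sum(aa in POSITIVE for aa in p) - sum(aa in NEGATIVE for aa in p)
              for p in peptides]
    return {"length": most_probable(lengths),
            "hydrophobic_fraction": most_probable(hydro),
            "net_charge": most_probable(charge)}

db = ["GLFDIIKKIAESF", "KWKLFKKIEKVGQNIR", "GIGKFLHSAKKFGKAFVGEIMNS"]
print(dft_parameters(db))
```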
MPEG-7 audio-visual indexing test-bed for video retrieval
NASA Astrophysics Data System (ADS)
Gagnon, Langis; Foucher, Samuel; Gouaillier, Valerie; Brun, Christelle; Brousseau, Julie; Boulianne, Gilles; Osterrath, Frederic; Chapdelaine, Claude; Dutrisac, Julie; St-Onge, Francis; Champagne, Benoit; Lu, Xiaojian
2003-12-01
This paper reports on the development status of a Multimedia Asset Management (MAM) test-bed for content-based indexing and retrieval of audio-visual documents within the MPEG-7 standard. The project, called "MPEG-7 Audio-Visual Document Indexing System" (MADIS), specifically targets the indexing and retrieval of video shots and key frames from documentary film archives, based on audio-visual content like face recognition, motion activity, speech recognition and semantic clustering. The MPEG-7/XML encoding of the film database is done off-line. The description decomposition is based on a temporal decomposition into visual segments (shots), key frames and audio/speech sub-segments. The visible outcome will be a web site, accessible to members of the Canadian National Film Board (NFB) Cineroute site, that allows video retrieval using a proprietary XQuery-based search engine. For example, end-users will be able to ask for movie shots in the database that were produced in a specific year, that contain the face of a specific actor speaking a specific word, and in which there is no motion activity. Video streaming is performed over the high-bandwidth CA*net network deployed by CANARIE, a public Canadian Internet development organization.
Content based image retrieval using local binary pattern operator and data mining techniques.
Vatamanu, Oana Astrid; Frandeş, Mirela; Lungeanu, Diana; Mihalaş, Gheorghe-Ioan
2015-01-01
Content-based image retrieval (CBIR) concerns the retrieval of similar images from image databases, using feature vectors extracted from images. These feature vectors globally describe the visual content of an image in terms of, e.g., texture, colour, shape, and spatial relations. Herein, we propose the definition of feature vectors using the Local Binary Pattern (LBP) operator. A study was performed in order to determine the optimum LBP variant for the general definition of image feature vectors. The chosen LBP variant was subsequently used to build an ultrasound image database, and a database with images obtained from Wireless Capsule Endoscopy. The image indexing process is optimized using data clustering techniques for images belonging to the same class. Finally, the proposed indexing method is compared to the classical indexing technique, which is nowadays widely used.
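As a sketch of the feature-extraction step, the basic 3x3 LBP operator below encodes each pixel by thresholding its eight neighbours against the centre and summarises the image as a normalised 256-bin code histogram; this is the classic operator, not necessarily the exact variant the study selected.

```python
import numpy as np

# Classic 3x3 LBP: threshold the 8 neighbours of each pixel against the
# centre, pack the results into an 8-bit code, and use the normalised
# 256-bin code histogram as the image's feature vector. This is the basic
# operator, not necessarily the exact LBP variant chosen in the study.
def lbp_histogram(img):
    img = np.asarray(img, dtype=np.int32)
    center = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:img.shape[0] - 1 + dy,
                        1 + dx:img.shape[1] - 1 + dx]
        codes |= (neighbour >= center).astype(np.int32) << bit
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()

rng = np.random.default_rng(0)
print(lbp_histogram(rng.integers(0, 256, (64, 64))).shape)   # -> (256,)
```

Query and database images can then be ranked by a histogram distance, with per-class clustering used to speed up indexing as the abstract describes.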
Khan, Aihab; Husain, Syed Afaq
2013-01-01
We put forward a fragile zero watermarking scheme to detect and characterize malicious modifications made to a database relation. Most of the existing watermarking schemes for relational databases introduce intentional errors or permanent distortions as marks into the database original content. These distortions inevitably degrade the data quality and data usability as the integrity of a relational database is violated. Moreover, these fragile schemes can detect malicious data modifications but do not characterize the tampering attack, that is, the nature of tampering. The proposed fragile scheme is based on a zero watermarking approach to detect malicious modifications made to a database relation. In zero watermarking, the watermark is generated (constructed) from the contents of the original data rather than by the introduction of permanent distortions as marks into the data. As a result, the proposed scheme is distortion-free; thus, it also resolves the inherent conflict between security and imperceptibility. The proposed scheme also characterizes the malicious data modifications to quantify the nature of tampering attacks. Experimental results show that even minor malicious modifications made to a database relation can be detected and characterized successfully.
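The zero-watermarking idea can be sketched as follows: derive a watermark from the relation's own content (here, per-tuple keyed hashes) and register it with a trusted party; re-deriving it later detects and localises modifications without distorting the data. The hashing scheme is an illustration, not the paper's construction.

```python
import hashlib

# Distortion-free (zero) watermark sketch: the watermark is derived from the
# relation's own tuples with a secret key and stored with a trusted party;
# re-deriving it later detects and localises changes. This keyed per-tuple
# hash is an illustration, not the paper's exact construction, and it assumes
# tuple order and count stay fixed.
def make_watermark(rows, key="secret"):
    return [hashlib.sha256((key + "|".join(map(str, row))).encode()).hexdigest()
            for row in rows]

def tampered_rows(rows, watermark, key="secret"):
    current = make_watermark(rows, key)
    return [i for i, (a, b) in enumerate(zip(current, watermark)) if a != b]

original = [(1, "alice", 70), (2, "bob", 85)]
wm = make_watermark(original)
modified = [(1, "alice", 70), (2, "bob", 99)]
print(tampered_rows(modified, wm))   # -> [1]
```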
SIMS: addressing the problem of heterogeneity in databases
NASA Astrophysics Data System (ADS)
Arens, Yigal
1997-02-01
The heterogeneity of remotely accessible databases -- with respect to contents, query language, semantics, organization, etc. -- presents serious obstacles to convenient querying. The SIMS (single interface to multiple sources) system addresses this global integration problem. It does so by defining a single language for describing the domain about which information is stored in the databases and using this language as the query language. Each database to which SIMS is to provide access is modeled using this language. The model describes a database's contents, organization, and other relevant features. SIMS uses these models, together with a planning system drawing on techniques from artificial intelligence, to decompose a given user's high-level query into a series of queries against the databases and other data manipulation steps. The retrieval plan is constructed so as to minimize data movement over the network and maximize parallelism to increase execution speed. SIMS can recover from network failures during plan execution by obtaining data from alternate sources, when possible. SIMS has been demonstrated in the domains of medical informatics and logistics, using real databases.
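The mediator idea can be caricatured in a few lines: per-source models map a domain-level concept onto each database's own schema, and the planner fans a single high-level query out into per-source queries. The mappings and query shapes below are invented; the real SIMS planner also minimizes data movement, parallelizes execution, and handles failover, none of which this shows.

```python
# Toy mediator: per-source models map the domain-level concept onto each
# database's own schema, and one high-level query fans out into per-source
# queries. Mappings and query shapes are invented for illustration.
SOURCE_MODELS = {
    "db_a": {"patient": "patients_table"},
    "db_b": {"patient": "person_records"},
    "db_c": {"supply": "logistics_items"},
}

def plan(concept, filters):
    """Return (source, local_table, filters) sub-queries for every source
    whose model covers the requested domain concept."""
    return [(src, model[concept], filters)
            for src, model in SOURCE_MODELS.items() if concept in model]

print(plan("patient", {"age_gt": 65}))   # queries db_a and db_b, not db_c
```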
ECLSS evolution: Advanced instrumentation interface requirements. Volume 3: Appendix C
NASA Technical Reports Server (NTRS)
1991-01-01
An Advanced ECLSS (Environmental Control and Life Support System) Technology Interfaces Database was developed primarily to provide ECLSS analysts with a centralized and portable source of ECLSS technology interface requirements data. The database contains 20 technologies which were previously identified in the MDSSC ECLSS Technologies database. The primary interfaces of interest in this database are fluid, electrical, and data/control interfaces, and resupply requirements. Each record contains fields describing the function and operation of the technology. Fields include: an interface diagram, a description of applicable design points and operating ranges, and an explanation of data, as required. A complete set of data was entered for six of the twenty components, including Solid Amine Water Desorbed (SAWD), Thermoelectric Integrated Membrane Evaporation System (TIMES), Electrochemical Carbon Dioxide Concentrator (EDC), Solid Polymer Electrolysis (SPE), Static Feed Electrolysis (SFE), and BOSCH. Additional data were collected for Reverse Osmosis Water Reclamation - Potable (ROWRP), Reverse Osmosis Water Reclamation - Hygiene (ROWRH), Static Feed Solid Polymer Electrolyte (SFSPE), Trace Contaminant Control System (TCCS), and Multifiltration Water Reclamation - Hygiene (MFWRH). A summary of the database contents is presented in this report.
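A rough sketch of the per-technology record layout this summary describes is given below; the field names are paraphrases of the fields listed above, not the database's actual schema.

```python
from dataclasses import dataclass, field

# Hypothetical record layout mirroring the fields described above; names are
# paraphrases for illustration, not the actual database schema.
@dataclass
class TechnologyRecord:
    name: str                                  # e.g. "SPE"
    function: str                              # function and operation summary
    fluid_interfaces: list = field(default_factory=list)
    electrical_interfaces: list = field(default_factory=list)
    data_control_interfaces: list = field(default_factory=list)
    resupply_requirements: list = field(default_factory=list)
    design_points: str = ""                    # applicable design points/ranges

spe = TechnologyRecord(name="SPE",
                       function="Oxygen generation by solid polymer electrolysis")
print(spe.name)
```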
NASA Technical Reports Server (NTRS)
Ho, C. Y.; Li, H. H.
1989-01-01
A computerized comprehensive numerical database system on the mechanical, thermophysical, electronic, electrical, magnetic, optical, and other properties of various types of technologically important materials such as metals, alloys, composites, dielectrics, polymers, and ceramics has been established and is operational at the Center for Information and Numerical Data Analysis and Synthesis (CINDAS) of Purdue University. This is an on-line, interactive, menu-driven, user-friendly database system. Users can easily search, retrieve, and manipulate the data from the database system without learning a special query language, special commands, or standardized names of materials, properties, variables, etc. It enables both the direct mode of search/retrieval of data for specified materials, properties, independent variables, etc., and the inverted mode of search/retrieval of candidate materials that meet a set of specified requirements (i.e., computer-aided materials selection). It also enables tabular and graphical displays and on-line data manipulations such as unit conversion, variable transformation, and statistical analysis of the retrieved data. The development, content, accessibility, etc., of the database system are presented and discussed.
NASA Astrophysics Data System (ADS)
Chan, V. S.; Wong, C. P. C.; McLean, A. G.; Luo, G. N.; Wirth, B. D.
2013-10-01
The Xolotl code under development by PSI-SciDAC will enhance predictive modeling capability of plasma-facing materials under burning plasma conditions. The availability and application of experimental data to compare to code-calculated observables are key requirements to validate the breadth and content of physics included in the model and ultimately gain confidence in its results. A dedicated effort has been in progress to collect and organize a) a database of relevant experiments and their publications as previously carried out at sample exposure facilities in US and Asian tokamaks (e.g., DIII-D DiMES, and EAST MAPES), b) diagnostic and surface analysis capabilities available at each device, and c) requirements for future experiments with code validation in mind. The content of this evolving database will serve as a significant resource for the plasma-material interaction (PMI) community. Work supported in part by the US Department of Energy under GA-DE-SC0008698, DE-AC52-07NA27344 and DE-AC05-00OR22725.
Finding knowledge translation articles in CINAHL.
Lokker, Cynthia; McKibbon, K Ann; Wilczynski, Nancy L; Haynes, R Brian; Ciliska, Donna; Dobbins, Maureen; Davis, David A; Straus, Sharon E
2010-01-01
The process of moving research into practice has a number of names, including knowledge translation (KT). Researchers and decision makers need to be able to readily access the literature on KT for the field to grow and to evaluate the existing evidence. Objective: to develop and validate search filters for finding KT articles in the database Cumulative Index to Nursing and Allied Health (CINAHL). Methods: a gold standard database was constructed by hand searching and classifying articles from 12 journals as KT Content, KT Applications, and KT Theory. Outcome measures: sensitivity, specificity, precision, and accuracy of the search filters. Results: optimized search filters had fairly low sensitivity and specificity for KT Content (58.4% and 64.9%, respectively), while sensitivity and specificity increased for retrieving KT Application (67.5% and 70.2%) and KT Theory articles (70.4% and 77.8%). Conclusion: search filter performance was suboptimal, reflecting the broad base of disciplines and vocabularies used by KT researchers. Such diversity makes retrieval of KT studies in CINAHL difficult.
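The four reported measures follow from a standard confusion matrix over the gold standard; the sketch below restates them, with invented counts in the example call scaled to reproduce the reported KT Content percentages (the study itself reports only percentages).

```python
# Standard retrieval diagnostics computed from a confusion matrix against the
# hand-classified gold standard; the counts in the example call are invented.
def filter_metrics(tp, fp, fn, tn):
    return {"sensitivity": tp / (tp + fn),   # share of gold KT articles found
            "specificity": tn / (tn + fp),   # share of non-KT correctly skipped
            "precision":   tp / (tp + fp),
            "accuracy":    (tp + tn) / (tp + fp + fn + tn)}

print(filter_metrics(tp=292, fp=1510, fn=208, tn=2790))
```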
5SRNAdb: an information resource for 5S ribosomal RNAs.
Szymanski, Maciej; Zielezinski, Andrzej; Barciszewski, Jan; Erdmann, Volker A; Karlowski, Wojciech M
2016-01-04
Ribosomal 5S RNA (5S rRNA) is the ubiquitous RNA component found in the large subunit of ribosomes in all known organisms. Due to its small size, abundance, and evolutionary conservation, 5S rRNA has for many years been used as a model molecule in studies on RNA structure, RNA-protein interactions, and molecular phylogeny. 5SRNAdb (http://combio.pl/5srnadb/) is the first database that provides a high-quality reference set of ribosomal 5S RNAs (5S rRNA) across the three domains of life. Here, we give an overview of new developments in the database and associated web tools since 2002, including updates to database content, curation processes, and user web interfaces. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
The BRENDA enzyme information system - from a database to an expert system.
Schomburg, I; Jeske, L; Ulbrich, M; Placzek, S; Chang, A; Schomburg, D
2017-11-10
Enzymes, representing the largest and by far most complex group of proteins, play an essential role in all processes of life, including metabolism, gene expression, cell division, the immune system, and others. Their function, also connected to most diseases and to stress control, makes them interesting targets for research and applications in biotechnology, medical treatments, and diagnosis. Their functional parameters and other properties are collected, integrated, and made available to the scientific community in the BRaunschweig ENzyme DAtabase (BRENDA). Over the last 30 years BRENDA has developed into one of the most highly used biological databases worldwide. The data contents, the process of data acquisition, data integration and control, the ways to access the data, and the visualizations provided by the website are described and discussed. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Description of the MHS Health Level 7 Chemistry Laboratory for Public Health Surveillance
2012-09-01
This technical document discusses the HL7 chemistry database by providing a history of the dataset and its contents, explaining the creation of chemistry/serology records, describing the pathway of the data into health surveillance activities, and assessing the data source for its usefulness in public health surveillance. HL7 data also include radiology, anatomic pathology reports, and pharmacy transactions.
QBIC project: querying images by content, using color, texture, and shape
NASA Astrophysics Data System (ADS)
Niblack, Carlton W.; Barber, Ron; Equitz, Will; Flickner, Myron D.; Glasman, Eduardo H.; Petkovic, Dragutin; Yanker, Peter; Faloutsos, Christos; Taubin, Gabriel
1993-04-01
In the query by image content (QBIC) project we are studying methods to query large on-line image databases using the images' content as the basis of the queries. Examples of the content we use include color, texture, and shape of image objects and regions. Potential applications include medical ("Give me other images that contain a tumor with a texture like this one"), photo-journalism ("Give me images that have blue at the top and red at the bottom"), and many others in art, fashion, cataloging, retailing, and industry. Key issues include derivation and computation of attributes of images and objects that provide useful query functionality, retrieval methods based on similarity as opposed to exact match, query by image example or user-drawn image, the user interfaces, query refinement and navigation, high-dimensional database indexing, and automatic and semi-automatic database population. We currently have a prototype system written in X/Motif and C running on an RS/6000 that allows a variety of queries, and a test database of over 1000 images and 1000 objects populated from commercially available photo clip art images. In this paper we present the main algorithms for color, texture, shape, and sketch queries that we use, show example query results, and discuss future directions.
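A minimal sketch of similarity-based colour query in this spirit follows: images are indexed by normalised colour histograms and ranked by histogram distance rather than exact match. QBIC's actual colour metric used a cross-similarity matrix between bins; plain L1 distance stands in for it here, and the bin count is arbitrary.

```python
import numpy as np

# Colour-histogram indexing with similarity ranking, in the spirit of the
# colour queries above. Plain L1 distance stands in for QBIC's actual
# cross-similarity colour metric; the bin count is an arbitrary choice.
def color_histogram(img, bins=8):
    """img: HxWx3 uint8 array -> flattened, normalised RGB histogram."""
    hist, _ = np.histogramdd(img.reshape(-1, 3), bins=(bins,) * 3,
                             range=((0, 256),) * 3)
    return hist.ravel() / hist.sum()

def rank_by_color(query_img, database_imgs):
    q = color_histogram(query_img)
    dists = [np.abs(q - color_histogram(im)).sum() for im in database_imgs]
    return np.argsort(dists)            # indices, most similar first

rng = np.random.default_rng(1)
imgs = [rng.integers(0, 256, (32, 32, 3)) for _ in range(3)]
print(rank_by_color(imgs[0], imgs)[0])  # -> 0, the query matches itself
```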
16 CFR § 1102.10 - Reports of harm.
Code of Federal Regulations, 2013 CFR
2013-01-01
... PUBLICLY AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE Content Requirements § 1102.10 Reports of... they have a public safety purpose. (b) Manner of submission. To be entered into the Database, reports... Commission will publish in the Publicly Available Consumer Product Safety Information Database reports of...
Gazan, Rozenn; Barré, Tangui; Perignon, Marlène; Maillot, Matthieu; Darmon, Nicole; Vieux, Florent
2018-01-01
The holistic approach required to assess diet sustainability is hindered by the lack of comprehensive databases compiling relevant food metrics. Those metrics are generally scattered across different data sources with various levels of aggregation, hampering their matching. The objective was to develop a general methodology to compile food metrics describing diet sustainability dimensions into a single database and to apply it to the French context. Each step of the methodology is detailed: indicator and food metric identification and selection, food list definition, food matching, and value assignment. For the French case, nutrient and contaminant content, bioavailability factors, distribution of dietary intakes, portion sizes, food prices, and greenhouse gas emission, acidification, and marine eutrophication estimates were allocated to 212 commonly consumed generic foods. This generic database compiling 279 metrics will allow the simultaneous evaluation of the four dimensions of diet sustainability, namely the health, economic, social, and environmental dimensions. Copyright © 2016 Elsevier Ltd. All rights reserved.
Hierarchical data security in a Query-By-Example interface for a shared database.
Taylor, Merwyn
2002-06-01
Whenever a shared database resource, containing critical patient data, is created, protecting the contents of the database is a high priority goal. This goal can be achieved by developing a Query-By-Example (QBE) interface, designed to access a shared database, and embedding within the QBE a hierarchical security module that limits access to the data. The security module ensures that researchers working in one clinic do not get access to data from another clinic. The security can be based on a flexible taxonomy structure that allows ordinary users to access data from individual clinics and super users to access data from all clinics. All researchers submit queries through the same interface and the security module processes the taxonomy and user identifiers to limit access. Using this system, two different users with different access rights can submit the same query and get different results thus reducing the need to create different interfaces for different clinics and access rights.
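A toy version of the taxonomy-based security module might look like this: the same query text is filtered through the caller's position in a clinic taxonomy, so ordinary users see one clinic and super users see all. Names and structure are invented for illustration.

```python
# Taxonomy-based access filter in the spirit of the abstract above: identical
# queries return different results depending on the caller's access scope.
# The taxonomy, field names, and rows are illustrative assumptions.
TAXONOMY = {"all_clinics": ["clinic_a", "clinic_b"]}

def allowed_clinics(user_scope):
    # a scope is either a leaf clinic or an internal taxonomy node
    return set(TAXONOMY.get(user_scope, [user_scope]))

def run_query(rows, user_scope):
    """rows: dicts with a 'clinic' field; the security module filters the
    result set rather than requiring per-clinic interfaces."""
    visible = allowed_clinics(user_scope)
    return [r for r in rows if r["clinic"] in visible]

data = [{"clinic": "clinic_a", "id": 1}, {"clinic": "clinic_b", "id": 2}]
print(len(run_query(data, "clinic_a")))      # -> 1 (ordinary user)
print(len(run_query(data, "all_clinics")))   # -> 2 (super user)
```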
NASA Astrophysics Data System (ADS)
van Rensburg, L.; Claassens, S.; Bezuidenhout, J. J.; Jansen van Rensburg, P. J.
2009-03-01
The much-publicised problem of major asbestos pollution and related health issues in South Africa has called for action to be taken to remedy the situation. The aim of this project was to establish a prioritisation index that would provide a scientifically based sequence in which polluted asbestos mines in Southern Africa ought to be rehabilitated. It was reasoned that a computerised database capable of calculating such a Rehabilitation Prioritisation Index (RPI) would be a fruitful departure from the previously used subjective selection, which was prone to human bias. The database was developed in Microsoft Access, and both quantitative and qualitative data were used for the calculation of the RPI value. The logical database structure consists of a number of mines, each consisting of a number of dumps, for which a number of samples have been analysed to determine asbestos fibre contents. For this system to be accurate as well as relevant, the data in the database should be revalidated and updated on a regular basis.
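How a per-mine index could be rolled up from the mine, dump, and sample structure is sketched below; the weighting (mean fibre content times a proximity factor) is an assumption, as the actual RPI combines further quantitative and qualitative variables not shown here.

```python
from statistics import mean

# Hypothetical roll-up over the mine -> dump -> sample hierarchy described
# above. The weighting scheme and all numbers are illustrative assumptions,
# not the project's actual RPI formula.
mines = {
    "mine_x": {
        "dumps": [
            {"samples_fibre_pct": [2.1, 3.4, 1.8], "proximity_factor": 1.5},
            {"samples_fibre_pct": [0.4, 0.9], "proximity_factor": 1.0},
        ]
    }
}

def rpi(mine):
    # each dump contributes its mean measured fibre content, weighted by an
    # assumed proximity-to-population factor
    return sum(mean(d["samples_fibre_pct"]) * d["proximity_factor"]
               for d in mine["dumps"])

for name, mine in mines.items():
    print(name, round(rpi(mine), 2))
```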
Hydroacoustic propagation grids for the CTBT knowledge database: BBN Technical Memorandum W1303
DOE Office of Scientific and Technical Information (OSTI.GOV)
J. Angell
1998-05-01
The Hydroacoustic Coverage Assessment Model (HydroCAM) has been used to develop components of the hydroacoustic knowledge database required by operational monitoring systems, particularly the US National Data Center (NDC). The database, which consists of travel time, amplitude correction, and travel time standard deviation grids, is planned to support the source location, discrimination, and estimation functions of the monitoring network. The grids will also be used under the current BBN subcontract to support an analysis of the performance of the International Monitoring System (IMS) and national sensor systems. This report describes the format and contents of the hydroacoustic knowledge base grids, and the procedures and model parameters used to generate these grids. Comparisons between the knowledge grids, measured data, and other modeled results are presented to illustrate the strengths and weaknesses of the current approach. A recommended approach for augmenting the knowledge database with a database of expected spectral/waveform characteristics is provided in the final section of the report.
"Hyperstat": an educational and working tool in epidemiology.
Nicolosi, A
1995-01-01
The work of a researcher in epidemiology is based on studying literature, planning studies, gathering data, analyzing data and writing results. The researcher therefore needs to perform more or less simple calculations, to consult or quote literature, to consult textbooks about certain issues or procedures, and to look up specific formulas. There are no programs conceived as a workstation to assist the different aspects of the researcher's work in an integrated fashion. A hypertextual system was developed which supports different stages of the epidemiologist's work. It combines database management, statistical analysis or planning, and literature searches. The software was developed on Apple Macintosh by using Hypercard 2.1 as a database and HyperTalk as a programming language. The program is structured in 7 "stacks" or files: Procedures; Statistical Tables; Graphs; References; Text; Formulas; Help. Each stack has its own management system with an automated Table of Contents. Stacks contain "cards" which make up the databases and carry executable programs. The programs are of four kinds: association; statistical procedure; formatting (input/output); database management. The system performs general statistical procedures, procedures applicable to epidemiological studies only (follow-up and case-control), and procedures for clinical trials. All commands are given by clicking the mouse on self-explanatory "buttons". In order to perform calculations, the user only needs to enter the data into the appropriate cells and then click on the selected procedure's button. The system has a hypertextual structure. The user can go from a procedure to other cards following the preferred order of succession and according to built-in associations. The user can access different levels of knowledge or information from any stack he is consulting or operating. From every card, the user can go to a selected procedure to perform statistical calculations, to the reference database management system, to the textbook in which all procedures and issues are discussed in detail, to the database of statistical formulas with an automated table of contents, to statistical tables with an automated table of contents, or to the help module. The program has a very user-friendly interface and leaves the user free to use the same format he would use on paper. The interface does not require special skills. It reflects the Macintosh philosophy of using windows, buttons and the mouse. This allows the user to perform complicated calculations without losing the "feel" of the data, to weigh alternatives, and to run simulations. This program shares many features in common with hypertexts. It has an underlying network database where the nodes consist of text, graphics, executable procedures, and combinations of these; the nodes in the database correspond to windows on the screen; the links between the nodes in the database are visible as "active" text or icons in the windows; the text is read by following links and opening new windows. The program is especially useful as an educational tool, directed to medical and epidemiology students. The combination of computing capabilities with a textbook and databases of formulas and literature references makes the program versatile and attractive as a learning tool.
The program is also helpful in the work done at the desk, where the researcher examines results, consults literature, explores different analytic approaches, plans new studies, or writes grant proposals or scientific articles.
Content-based image retrieval on mobile devices
NASA Astrophysics Data System (ADS)
Ahmad, Iftikhar; Abdullah, Shafaq; Kiranyaz, Serkan; Gabbouj, Moncef
2005-03-01
The content-based image retrieval area possesses tremendous potential for exploration and utilization, for researchers and industry alike, due to its promising results. Expeditious retrieval of desired images requires indexing of the content in large-scale databases along with extraction of low-level features based on the content of these images. With the recent advances in wireless communication technology and the availability of multimedia-capable phones, it has become vital to enable query operations on image databases and to retrieve results based on image content. In this paper we present a content-based image retrieval system for mobile platforms, providing the capability of content-based query to any mobile device that supports the Java platform. The system consists of a light-weight client application running on a Java-enabled device and a server containing a servlet running inside a Java-enabled web server. The server responds to image queries using efficient native code over a selected image database. The client application, running on a mobile phone, is able to initiate a query request, which is handled by a servlet on the server that finds the closest match to the queried image. The retrieved results are transmitted over the mobile network and the images are displayed on the mobile phone. We conclude that such a system serves as a basis for content-based information retrieval on wireless devices, which must cope with factors such as the constraints of hand-held devices and the reduced network bandwidth available in mobile environments.
The Internet as a communication tool for orthopedic spine fellowships in the United States.
Silvestre, Jason; Guzman, Javier Z; Skovrlj, Branko; Overley, Samuel C; Cho, Samuel K; Qureshi, Sheeraz A; Hecht, Andrew C
2015-04-01
Orthopedic residents seeking additional training in spine surgery commonly use the Internet to manage their fellowship applications. Although studies have assessed the accessibility and content of Web sites in other medical specialties, none have looked at orthopedic spine fellowship Web sites (SFWs). The purpose of this study was to evaluate the accessibility of information from commonly used databases and assess the content of SFWs. This was a Web site accessibility and content evaluation study. A comprehensive list of available orthopedic spine fellowship programs was compiled by accessing program lists from the SF Match, North American Spine Society, Fellowship and Residency Electronic Interactive Database (FREIDA), and Orthopaedicsone.com (Ortho1). These databases were assessed for accessibility of information including viable links to SFWs and responsive program contacts. A Google search was used to identify SFWs not readily available on these national databases. SFWs were evaluated based on online education and recruitment content. Evaluators found 45 SFWs of 63 active programs (71%). Available SFWs were often not readily accessible from national program lists, and no program afforded a direct link to their SFW from SF Match. Approximately half of all programs responded via e-mail. Although many programs described surgical experience (91%) and research requirements (87%) during the fellowship, less than half mentioned didactic instruction (46%), journal clubs (41%), and national meetings or courses attended (28%). Evaluators found an average 45% of fellow recruitment content. Comparison of SFWs by program characteristics revealed three significant differences. Programs with greater than one fellowship position had greater online education content than programs with a single fellow (p=.022). Spine fellowships affiliated with an orthopedic residency program maintained greater education (p=.006) and recruitment (p=.046) content on their SFWs. Most orthopedic spine surgery programs underuse the Internet for fellow education and recruitment. The inaccessibility of information and paucity of content on SFWs allow for future opportunity to optimize these resources. Copyright © 2015 Elsevier Inc. All rights reserved.
Ethics across the computer science curriculum: privacy modules in an introductory database course.
Appel, Florence
2005-10-01
This paper describes the author's experience of infusing an introductory database course with privacy content, and the on-going project entitled Integrating Ethics Into the Database Curriculum, that evolved from that experience. The project, which has received funding from the National Science Foundation, involves the creation of a set of privacy modules that can be implemented systematically by database educators throughout the database design thread of an undergraduate course.
16 CFR § 1102.16 - Additional information.
Code of Federal Regulations, 2013 CFR
2013-01-01
... PUBLICLY AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE Content Requirements § 1102.16 Additional... in the Database any additional information it determines to be in the public interest, consistent...
16 CFR 1102.16 - Additional information.
Code of Federal Regulations, 2011 CFR
2011-01-01
... PUBLICLY AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE (Eff. Jan. 10, 2011) Content Requirements... notices, the CPSC shall include in the Database any additional information it determines to be in the...
PubChem BioAssay: A Decade's Development toward Open High-Throughput Screening Data Sharing.
Wang, Yanli; Cheng, Tiejun; Bryant, Stephen H
2017-07-01
High-throughput screening (HTS) is now routinely conducted for drug discovery by both pharmaceutical companies and screening centers at academic institutions and universities. Rapid advances in assay development, robotic automation, and computer technology have led to the generation of terabytes of data in screening laboratories. Despite this technology development toward HTS productivity, fewer efforts have been devoted to HTS data integration and sharing. As a result, the huge amount of HTS data has rarely been made available to the public. To fill this gap, the PubChem BioAssay database ( https://www.ncbi.nlm.nih.gov/pcassay/ ) was set up in 2004 to provide open access to screening results tested on chemicals and RNAi reagents. With more than 10 years' development and contributions from the community, PubChem has now become the largest public repository for chemical structures and biological data, providing an information platform to researchers worldwide in support of drug development, medicinal chemistry studies, and chemical biology research. This work presents a review of the HTS data content in the PubChem BioAssay database and the progress of data deposition aimed at stimulating knowledge discovery and data sharing. It also provides a description of the database's data standard and the basic utilities facilitating information access and use for new users.
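Programmatic access to BioAssay records can be sketched via PubChem's PUG REST interface; the URL pattern below follows PubChem's documented scheme, but the exact path, the example AID, and the response layout should be treated as assumptions here.

```python
import json
import urllib.request

# Hedged sketch of fetching a PubChem BioAssay description through PUG REST.
# The path pattern, example AID, and response layout are assumptions based on
# PubChem's documented scheme, not a verified contract.
BASE = "https://pubchem.ncbi.nlm.nih.gov/rest/pug"

def assay_description(aid):
    url = f"{BASE}/assay/aid/{aid}/description/JSON"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

if __name__ == "__main__":
    desc = assay_description(1000)   # hypothetical example assay ID
    print(json.dumps(desc, indent=2)[:400])
```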
Recent updates and developments to plant genome size databases
Garcia, Sònia; Leitch, Ilia J.; Anadon-Rosell, Alba; Canela, Miguel Á.; Gálvez, Francisco; Garnatje, Teresa; Gras, Airy; Hidalgo, Oriane; Johnston, Emmeline; Mas de Xaxars, Gemma; Pellicer, Jaume; Siljak-Yakovlev, Sonja; Vallès, Joan; Vitales, Daniel; Bennett, Michael D.
2014-01-01
Two plant genome size databases have been recently updated and/or extended: the Plant DNA C-values database (http://data.kew.org/cvalues), and GSAD, the Genome Size in Asteraceae database (http://www.asteraceaegenomesize.com). While the first provides information on nuclear DNA contents across land plants and some algal groups, the second is focused on one of the largest and most economically important angiosperm families, Asteraceae. Genome size data have numerous applications: they can be used in comparative studies on genome evolution, or as a tool to appraise the cost of whole-genome sequencing programs. The growing interest in genome size and increasing rate of data accumulation has necessitated the continued update of these databases. Currently, the Plant DNA C-values database (Release 6.0, Dec. 2012) contains data for 8510 species, while GSAD has 1219 species (Release 2.0, June 2013), representing increases of 17 and 51%, respectively, in the number of species with genome size data, compared with previous releases. Here we provide overviews of the most recent releases of each database, and outline new features of GSAD. The latter include (i) a tool to visually compare genome size data between species, (ii) the option to export data and (iii) a webpage containing information about flow cytometry protocols. PMID:24288377
Wind turbine reliability database update.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peters, Valerie A.; Hill, Roger Ray; Stinebaugh, Jennifer A.
2009-03-01
This report documents the status of the Sandia National Laboratories' Wind Plant Reliability Database. Included in this report are updates on the form and contents of the Database, which stems from a five-step process of data partnerships, data definition and transfer, data formatting and normalization, analysis, and reporting. Selected observations are also reported.
Database Support for Research in Public Administration
ERIC Educational Resources Information Center
Tucker, James Cory
2005-01-01
This study examines the extent to which databases support student and faculty research in the area of public administration. A list of journals in public administration, public policy, political science, public budgeting and finance, and other related areas was compared to the journal content list of six business databases. These databases…
Characteristics of Resources Represented in the OCLC CORC Database.
ERIC Educational Resources Information Center
Connell, Tschera Harkness; Prabha, Chandra
2002-01-01
Examines the characteristics of Web resources in Online Computer Library Center's (OCLC) Cooperative Online Resource Catalog (CORC) in terms of subject matter, source of content, publication patterns, and units of information chosen for representation in the database. Suggests that the ability to successfully use a database depends on…
Academic Journal Embargoes and Full Text Databases.
ERIC Educational Resources Information Center
Brooks, Sam
2003-01-01
Documents the reasons for embargoes of academic journals in full text databases (i.e., publisher-imposed delays on the availability of full text content) and provides insight regarding common misconceptions. Tables present data on selected journals covering a cross-section of subjects and publishers and comparing two full text business databases.…
Evaluation of personal digital assistant drug information databases for the managed care pharmacist.
Lowry, Colleen M; Kostka-Rokosz, Maria D; McCloskey, William W
2003-01-01
Personal digital assistants (PDAs) are becoming a necessity for practicing pharmacists. They offer a time-saving and convenient way to obtain current drug information. Several software companies now offer general drug information databases for use on handheld computers. PDAs priced less than 200 US dollars often have limited memory capacity; therefore, the user must choose from a growing list of general drug information database options in order to maximize utility without exceeding memory capacity. This paper reviews the attributes of available general drug information software databases for the PDA. It provides information on the content, advantages, limitations, pricing, memory requirements, and accessibility of drug information software databases. Ten drug information databases were subjectively analyzed and evaluated based on information from the product's Web site, vendor Web sites, and our own experience. Some of these databases have attractive auxiliary features such as kinetics calculators, disease references, drug-drug and drug-herb interaction tools, and clinical guidelines, which may make them more useful to the PDA user. Not all drug information databases are equal with regard to content, author credentials, frequency of updates, and memory requirements. The user must therefore evaluate databases for completeness, currency, and cost effectiveness before purchase. In addition, consideration should be given to the ease of use and flexibility of individual programs.
Thakar, Sambhaji B; Ghorpade, Pradnya N; Kale, Manisha V; Sonawane, Kailas D
2015-01-01
Fern plants are known for their ethnomedicinal applications, but the large body of information on medicinal ferns is scattered through the literature in text form, so database development is an appropriate way to consolidate it. Given the importance of medicinally useful fern plants, we developed a web-based database containing information about several groups of ferns, their medicinal uses, chemical constituents, and protein/enzyme sequences isolated from different fern plants. The fern ethnomedicinal plant database is a comprehensive, content-managed, web-based database system used to retrieve a collection of factual knowledge related to ethnomedicinal fern species. Most of the protein/enzyme sequences have been extracted from the NCBI Protein sequence database. For each species, the family name, identification, NCBI taxonomy ID, geographical occurrence, conditions treated, plant parts used, ethnomedicinal importance, and morphological characteristics were collected from scientific literature and journals available in text form. Links to NCBI BLAST, InterPro, phylogeny, and Clustal W web resources have also been provided for future comparative studies, so users can find information related to fern plants and their medicinal applications in one place. This fern ethnomedicinal plant database includes information on 100 medicinal fern species. The database should be advantageous for deriving information for computational drug discovery, and useful to botanists and botanically interested readers, pharmacologists, researchers, biochemists, plant biotechnologists, ayurvedic practitioners, doctors and pharmacists, traditional medicine users, farmers, agricultural students and teachers at universities and colleges, and fern plant lovers. This effort should provide essential knowledge for users about applications for drug discovery and the conservation of fern species around the world, and help to create social awareness.
Windshear certification data base for forward-look detection systems
NASA Technical Reports Server (NTRS)
Switzer, George F.; Hinton, David A.; Proctor, Fred H.
1994-01-01
Described is an introduction to a comprehensive database that is to be used for certification testing of airborne forward-look windshear detection systems. The database was developed by NASA Langley Research Center, at the request of the Federal Aviation Administration (FAA), to support the industry initiative to certify and produce forward-looking windshear detection equipment. The database contains high-resolution three-dimensional fields for meteorological variables that may be sensed by forward-looking systems. The database is made up of seven case studies generated by the Terminal Area Simulation System, a state-of-the-art numerical system for the realistic modeling of windshear phenomena. The selected cases contained in the certification documentation represent a wide spectrum of windshear events. The database will be used with vendor-developed sensor simulation software and vendor-collected ground-clutter data to demonstrate detection performance in a variety of meteorological conditions using NASA/FAA pre-defined path scenarios for each of the certification cases. A brief outline of the contents and sample plots from the database documentation are included. These plots show fields of hazard factor, or F-factor (Bowles 1990), radar reflectivity, and velocity vectors on a horizontal plane overlaid with the applicable certification paths. In the F-factor plots, regions of 0.105 and above signify hazardous, performance-decreasing windshear, while negative values indicate regions of performance-increasing windshear. The values of F-factor are based on 1-km averaged segments along horizontal flight paths, assuming an air speed of 150 knots (approx. 75 m/s). The database has been released to vendors participating in the certification process. The database and associated document have been transferred to the FAA for archival storage and distribution.
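For readers unfamiliar with the hazard metric, the Bowles (1990) F-factor combines the along-path wind gradient with the vertical wind. The following is a minimal sketch of computing 1-km averaged F-factor values along a horizontal path using the standard formulation; the certification database's exact averaging and sign conventions may differ in detail.

```python
import numpy as np

G = 9.81     # gravitational acceleration, m/s^2
V = 75.0     # assumed airspeed along the path (~150 knots), m/s
SEG = 1000.0 # averaging segment length, m

def f_factor_1km(x, wind_x, wind_z):
    """1-km averaged F-factor along a horizontal path.

    x      : along-path distance samples, m (monotonic numpy array)
    wind_x : horizontal wind component along the path, m/s
    wind_z : vertical wind, positive up, m/s
    Bowles (1990): F = (dWx/dt)/g - w/V, with dWx/dt = V * dWx/dx
    for a quasi-steady traverse at airspeed V; positive F is
    performance-decreasing.
    """
    dwx_dx = np.gradient(wind_x, x)
    f_local = (V * dwx_dx) / G - wind_z / V
    # Average over consecutive 1-km segments, as in the database plots.
    edges = np.arange(x[0], x[-1] + SEG, SEG)
    idx = np.digitize(x, edges)
    return np.array([f_local[idx == k].mean() for k in np.unique(idx)])
```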
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karp, Peter D.
Pathway Tools is a systems-biology software package written by SRI International (SRI) that produces Pathway/Genome Databases (PGDBs) for organisms with a sequenced genome. Pathway Tools also provides a wide range of capabilities for analyzing predicted metabolic networks and user-generated omics data. More than 5,000 academic, industrial, and government groups have licensed Pathway Tools. This user community includes researchers at all three DOE bioenergy centers, as well as academic and industrial metabolic engineering (ME) groups. An integral part of the Pathway Tools software is MetaCyc, a large, multiorganism database of metabolic pathways and enzymes that SRI and its academic collaborators manually curate. This project included two main goals: I. Enhance the MetaCyc content of bioenergy-related enzymes and pathways. II. Develop computational tools for engineering metabolic pathways that satisfy specified design goals, in particular for bioenergy-related pathways. In part I, SRI proposed to significantly expand the coverage of bioenergy-related metabolic information in MetaCyc, followed by the generation of organism-specific PGDBs for all energy-relevant organisms sequenced at the DOE Joint Genome Institute (JGI). Part I objectives included: 1: Expand the content of MetaCyc to include bioenergy-related enzymes and pathways. 2: Enhance the Pathway Tools software to enable display of complex polymer degradation processes. 3: Create new PGDBs for the energy-related organisms sequenced by JGI, update existing PGDBs with new MetaCyc content, and make these data available to JBEI via the BioCyc website. In part II, SRI proposed to develop an efficient computational tool for the engineering of metabolic pathways. Part II objectives included: 4: Develop computational tools for generating metabolic pathways that satisfy specified design goals, enabling users to specify parameters such as starting and ending compounds, and preferred or disallowed intermediate compounds. The pathways were to be generated using metabolic reactions from a reference database (DB). 5: Develop computational tools for ranking the pathways generated in objective (4) according to their optimality. The ranking criteria include stoichiometric yield, the number and cost of additional inputs and the cofactor compounds required by the pathway, pathway length, and pathway energetics. 6: Develop tools for visualizing generated pathways to facilitate the evaluation of a large space of generated pathways.
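Objective 4 amounts to a path search over a reaction database with user-specified start, goal, and disallowed intermediates. The sketch below is a generic breadth-first formulation of that idea over a toy reaction set; it is not SRI's actual algorithm, which additionally handles stoichiometry, cofactors, and ranking.

```python
from collections import deque

# Toy reaction database: reaction id -> (substrates, products).
REACTIONS = {
    "R1": ({"glucose"}, {"glucose-6-P"}),
    "R2": ({"glucose-6-P"}, {"fructose-6-P"}),
    "R3": ({"fructose-6-P"}, {"pyruvate"}),
}

def find_pathway(start, goal, banned=frozenset()):
    """Breadth-first search for a shortest reaction chain from start
    to goal, skipping any disallowed intermediate compounds."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        compound, path = queue.popleft()
        if compound == goal:
            return path
        for rid, (subs, prods) in REACTIONS.items():
            if compound in subs:
                for nxt in prods - seen:
                    if nxt in banned:
                        continue
                    seen.add(nxt)
                    queue.append((nxt, path + [rid]))
    return None  # no pathway under the given constraints

print(find_pathway("glucose", "pyruvate"))  # ['R1', 'R2', 'R3']
```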
Bloch-Mouillet, E
1999-01-01
This paper aims to provide technical and practical advice about finding references using Current Contents on disk (Macintosh or PC) or via the Internet (FTP). Seven editions are published each week. They are all organized in the same way and have the same search engine. The Life Sciences edition, extensively used in medical research, is presented here in detail, as an example. This methodological note explains, in French, how to use this reference database. It is designed to be a practical guide for browsing and searching the database, and particularly for creating search profiles adapted to the needs of researchers.
A cloud-based multimodality case file for mobile devices.
Balkman, Jason D; Loehfelm, Thomas W
2014-01-01
Recent improvements in Web and mobile technology, along with the widespread use of handheld devices in radiology education, provide unique opportunities for creating scalable, universally accessible, portable image-rich radiology case files. A cloud database and a Web-based application for radiologic images were developed to create a mobile case file with reasonable usability, download performance, and image quality for teaching purposes. A total of 75 radiology cases related to breast, thoracic, gastrointestinal, musculoskeletal, and neuroimaging subspecialties were included in the database. Breast imaging cases are the focus of this article, as they best demonstrate handheld display capabilities across a wide variety of modalities. This case subset also illustrates methods for adapting radiologic content to cloud platforms and mobile devices. Readers will gain practical knowledge about storage and retrieval of cloud-based imaging data, an awareness of techniques used to adapt scrollable and high-resolution imaging content for the Web, and an appreciation for optimizing images for handheld devices. The evaluation of this software demonstrates the feasibility of adapting images from most imaging modalities to mobile devices, even in cases of full-field digital mammograms, where high resolution is required to represent subtle pathologic features. The cloud platform allows cases to be added and modified in real time by using only a standard Web browser with no application-specific software. Challenges remain in developing efficient ways to generate, modify, and upload radiologic and supplementary teaching content to this cloud-based platform. Online supplemental material is available for this article. ©RSNA, 2014.
Bhagwat, Seema A; Haytowitz, David B; Wasswa-Kintu, Shirley I; Pehrsson, Pamela R
2015-08-14
The scientific community continues to be interested in potential links between flavonoid intakes and beneficial health effects associated with certain chronic diseases such as CVD, some cancers and type 2 diabetes. Three separate flavonoid databases (Flavonoids, Isoflavones and Proanthocyanidins) developed by the USDA Agricultural Research Service since 1999, with frequent updates, have been used to estimate dietary flavonoid intakes and investigate their health effects. However, each of these databases contains only a limited number of foods. The USDA has constructed a new Expanded Flavonoids Database for approximately 2900 commonly consumed foods, using analytical values from its existing flavonoid databases (Flavonoid Release 3.1 and Isoflavone Release 2.0) as the foundation to calculate values for all twenty-nine flavonoid compounds included in these two databases. Thus, the new database provides full flavonoid profiles for twenty-nine predominant dietary flavonoid compounds for every food in the database. Original analytical values in Flavonoid Release 3.1 and Isoflavone Release 2.0 for corresponding foods were retained in the newly constructed database. Proanthocyanidins are not included in the expanded database. The process of formulating the new database involved various calculation techniques. This article describes the process of populating values for the twenty-nine flavonoid compounds for every food in the dataset, along with challenges encountered and resolutions suggested. The new expanded flavonoid database, released on the Nutrient Data Laboratory's website, will provide uniformity in estimations of flavonoid content in foods and will be a valuable tool for epidemiological studies to assess dietary intakes.
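The retain-analytical, fill-calculated pattern described above can be illustrated in a few lines of pandas; this is a toy sketch with made-up values, not USDA's actual procedure, which uses several calculation techniques.

```python
import pandas as pd

# Toy tables: rows are foods, columns are flavonoid compounds (mg/100 g).
analytical = pd.DataFrame(
    {"quercetin": [2.1, None], "kaempferol": [None, 0.4]},
    index=["apple", "kale"],
)
calculated = pd.DataFrame(
    {"quercetin": [2.0, 7.7], "kaempferol": [0.1, 0.5]},
    index=["apple", "kale"],
)

# Retain original analytical values wherever they exist; fall back to
# calculated estimates only for the gaps, yielding full profiles.
expanded = analytical.combine_first(calculated)
print(expanded)
```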
Paula, Débora P.; Linard, Benjamin; Crampton-Platt, Alex; Srivathsan, Amrita; Timmermans, Martijn J. T. N.; Sujii, Edison R.; Pires, Carmen S. S.; Souza, Lucas M.; Andow, David A.; Vogler, Alfried P.
2016-01-01
Characterizing trophic networks is fundamental to many questions in ecology, but this typically requires painstaking efforts, especially to identify the diet of small generalist predators. Several efforts have been devoted to developing suitable molecular tools to determine predatory trophic interactions through gut content analysis, and the challenge has been to achieve simultaneously high taxonomic breadth and resolution. General and practical methods are still needed, preferably independent of PCR amplification of barcodes, to recover a broader range of interactions. Here we applied shotgun sequencing of the DNA from arthropod predator gut contents, extracted from four common coccinellid and dermapteran predators co-occurring in an agroecosystem in Brazil. By matching unassembled reads against six DNA reference databases obtained from public databases and newly assembled mitogenomes, and filtering for high overlap length and identity, we identified prey and other foreign DNA in the predator guts. Good taxonomic breadth and resolution were achieved (93% of prey identified to species or genus), but with low recovery of matching reads. Two to nine trophic interactions were found for these predators, some of which were only inferred by the presence of parasitoids and components of the microbiome known to be associated with aphid prey. Intraguild predation was also found, including among closely related ladybird species. Uncertainty arises from the lack of comprehensive reference databases and reliance on low numbers of matching reads, accentuating the risk of false positives. We discuss caveats and some future prospects that could improve the use of direct DNA shotgun sequencing to characterize arthropod trophic networks. PMID:27622637
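The read-matching step (match unassembled reads to reference databases, then filter for high overlap length and identity) can be sketched as a post-filter on BLAST tabular output. The thresholds below are placeholders, not the study's values.

```python
import csv
from collections import Counter

# Hypothetical thresholds; the study filtered for "high overlap length
# and identity" without these being its exact cutoffs.
MIN_IDENT, MIN_LEN = 98.0, 100

def prey_hits(blast_tab):
    """Tally filtered read matches per reference sequence from BLAST
    tabular output (-outfmt 6: qseqid sseqid pident length ...)."""
    counts = Counter()
    with open(blast_tab) as fh:
        for row in csv.reader(fh, delimiter="\t"):
            pident, length = float(row[2]), int(row[3])
            if pident >= MIN_IDENT and length >= MIN_LEN:
                counts[row[1]] += 1   # row[1] = matched reference id
    return counts
```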
NASA Astrophysics Data System (ADS)
Liu, S.; Wei, Y.; Post, W. M.; Cook, R. B.; Schaefer, K.; Thornton, M. M.
2013-05-01
The Unified North American Soil Map (UNASM) was developed to provide more accurate regional soil information for terrestrial biosphere modeling. The UNASM combines information from the state-of-the-art US STATSGO2 and Soil Landscape of Canada (SLCs) databases. The area not covered by these datasets is filled using the Harmonized World Soil Database version 1.21 (HWSD1.21). The UNASM contains maximum soil depth derived from the data source as well as seven soil attributes (including sand, silt, and clay content, gravel content, organic carbon content, pH, and bulk density) for the topsoil layer (0-30 cm) and the subsoil layer (30-100 cm), respectively, at a spatial resolution of 0.25 degrees in latitude and longitude. There are pronounced differences in the spatial distributions of soil properties and soil organic carbon between UNASM and HWSD, but the UNASM overall provides more detailed and higher-quality information, particularly in Alaska and central Canada. To provide a more accurate and up-to-date estimate of the soil organic carbon stock in North America, we incorporated the Northern Circumpolar Soil Carbon Database (NCSCD) into the UNASM. The estimate of total soil organic carbon mass in the upper 100 cm soil profile based on the improved UNASM is 365.96 Pg, of which 23.1% is under trees, 14.1% is in shrubland, and 4.6% is in grassland and cropland. These UNASM data will provide a resource for use in terrestrial ecosystem modeling, both for input of soil characteristics and for benchmarking model output.
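Carbon stocks of the kind reported here are typically computed per grid cell from exactly the layer attributes the UNASM provides (bulk density, organic carbon content, gravel content, layer depth). The sketch below shows that standard arithmetic with illustrative values; it is not the authors' exact procedure.

```python
# Soil organic carbon stock for one grid-cell layer:
#   stock (kg C m^-2) = bulk density (g cm^-3) x OC fraction
#                       x layer depth (cm) x 10, reduced by gravel content.
def soc_stock_kg_m2(bulk_density, oc_percent, depth_cm, gravel_percent=0.0):
    fine_earth = 1.0 - gravel_percent / 100.0
    return bulk_density * (oc_percent / 100.0) * depth_cm * 10.0 * fine_earth

# Topsoil (0-30 cm) plus subsoil (30-100 cm), as layered in the UNASM;
# the attribute values here are illustrative, not UNASM data.
total = soc_stock_kg_m2(1.3, 2.0, 30) + soc_stock_kg_m2(1.4, 0.8, 70)
print(f"{total:.1f} kg C per m^2")   # summed over cell areas -> Pg totals
```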
NASA Astrophysics Data System (ADS)
Liu, S.; Wei, Y.; Post, W. M.; Cook, R. B.; Schaefer, K.; Thornton, M. M.
2012-10-01
The Unified North American Soil Map (UNASM) was developed to provide more accurate regional soil information for terrestrial biosphere modeling. The UNASM combines information from the state-of-the-art US STATSGO2 and Soil Landscape of Canada (SLCs) databases. The area not covered by these datasets is filled with the Harmonized World Soil Database version 1.1 (HWSD1.1). The UNASM contains maximum soil depth derived from the data source as well as seven soil attributes (including sand, silt, and clay content, gravel content, organic carbon content, pH, and bulk density) for the topsoil layer (0-30 cm) and the subsoil layer (30-100 cm), respectively, at a spatial resolution of 0.25° in latitude and longitude. There are pronounced differences in the spatial distributions of soil properties and soil organic carbon between UNASM and HWSD, but the UNASM overall provides more detailed and higher-quality information, particularly in Alaska and central Canada. To provide a more accurate and up-to-date estimate of the soil organic carbon stock in North America, we incorporated the Northern Circumpolar Soil Carbon Database (NCSCD) into the UNASM. The estimate of total soil organic carbon mass in the upper 100 cm soil profile based on the improved UNASM is 347.70 Pg, of which 24.7% is under trees, 14.2% under shrubs, 1.3% under grasses, and 3.8% under crops. These UNASM data will provide a resource for use in land surface and terrestrial biogeochemistry modeling, both for input of soil characteristics and for benchmarking model output.
Content and Workflow Management for Library Websites: Case Studies
ERIC Educational Resources Information Center
Yu, Holly, Ed.
2005-01-01
Using database-driven web pages or web content management (WCM) systems to manage increasingly diverse web content and to streamline workflows is a commonly practiced solution recognized in libraries today. However, limited library web content management models and funding constraints prevent many libraries from purchasing commercially available…
Science information systems: Archive, access, and retrieval
NASA Technical Reports Server (NTRS)
Campbell, William J.
1991-01-01
The objective of this research is to develop technology for the automated characterization and interactive retrieval and visualization of very large, complex scientific data sets. Technologies will be developed for the following specific areas: (1) rapidly archiving data sets; (2) automatically characterizing and labeling data in near real-time; (3) providing users with the ability to browse contents of databases efficiently and effectively; (4) providing users with the ability to access and retrieve system independent data sets electronically; and (5) automatically alerting scientists to anomalies detected in data.
ERIC Educational Resources Information Center
Battle, Gary M.; Allen, Frank H.; Ferrence, Gregory M.
2011-01-01
Parts 1 and 2 of this series described the educational value of experimental three-dimensional (3D) chemical structures determined by X-ray crystallography and retrieved from the crystallographic databases. In part 1, we described the information content of the Cambridge Structural Database (CSD) and discussed a representative teaching subset of…
Moore, Jeffrey C; Spink, John; Lipp, Markus
2012-04-01
Food ingredient fraud and economically motivated adulteration are emerging risks, but a comprehensive compilation of information about known problematic ingredients and detection methods does not currently exist. The objectives of this research were to collect such information from publicly available articles in scholarly journals and general media, organize it into a database, and review and analyze the data to identify trends. The result is a database, to be published in the US Pharmacopeial Convention's Food Chemicals Codex, 8th edition, that includes 1305 records, among them 1000 records with analytical methods, collected from 677 references. Olive oil, milk, honey, and saffron were the most common targets for adulteration reported in scholarly journals, and potentially harmful issues identified include spices diluted with lead chromate and lead tetraoxide, substitution of Chinese star anise with toxic Japanese star anise, and melamine adulteration of high-protein-content foods. High-performance liquid chromatography and infrared spectroscopy were the most common analytical detection procedures, and chemometric data analysis was used in a large number of reports. Future expansion of this database will include additional publicly available articles published before 1980 and in other languages, as well as data outside the public domain. The authors recommend in-depth analyses of individual incidents. This report describes the development and application of a database of food ingredient fraud issues from publicly available references. The database provides baseline information and data useful to governments, agencies, and individual companies assessing the risks of specific products produced in specific regions as well as products distributed and sold in other regions. In addition, the report describes current analytical technologies for detecting food fraud and identifies trends and developments. © 2012 US Pharmacopeia. Journal of Food Science © 2012 Institute of Food Technologists®
Exploration of the Chemical Space of Public Genomic Databases
The current project aims to chemically index the content of public genomic databases to make these data accessible in relation to other publicly available, chemically-indexed toxicological information. By evaluating the chemical space of public genomic data in relation to public toxicological data, it is possible to identify classes of chemicals on which to develop methodologies for the integration of chemogenomic data into predictive toxicology.
Development of Asset Fault Signatures for Prognostic and Health Management in the Nuclear Industry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vivek Agarwal; Nancy J. Lybeck; Randall Bickford
2014-06-01
Proactive online monitoring in the nuclear industry is being explored using the Electric Power Research Institute's Fleet-Wide Prognostic and Health Management (FW-PHM) Suite software. The FW-PHM Suite is a set of web-based diagnostic and prognostic tools and databases that serves as an integrated health monitoring architecture. The FW-PHM Suite has four main modules: Diagnostic Advisor, Asset Fault Signature (AFS) Database, Remaining Useful Life Advisor, and Remaining Useful Life Database. This paper focuses on the development of asset fault signatures to assess the health status of generator step-up transformers and emergency diesel generators in nuclear power plants. Asset fault signatures describe the distinctive features, based on technical examinations, that can be used to detect a specific fault type. At the most basic level, fault signatures are composed of an asset type, a fault type, and a set of one or more fault features (symptoms) that are indicative of the specified fault. The AFS Database is populated with asset fault signatures via a content development exercise that is based on the results of intensive technical research and on the knowledge and experience of technical experts. The developed fault signatures capture this knowledge and implement it in a standardized approach, thereby streamlining the diagnostic and prognostic process. This will support the automation of proactive online monitoring techniques in nuclear power plants to diagnose incipient faults, perform proactive maintenance, and estimate the remaining useful life of assets.
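At the basic level described above, a fault signature is a small record: an asset type, a fault type, and a set of fault features. A minimal sketch of that structure and a naive matcher follows; the asset, fault, and feature names are invented for illustration, and the FW-PHM Suite's actual schema is richer.

```python
from dataclasses import dataclass, field

@dataclass
class FaultSignature:
    asset_type: str
    fault_type: str
    features: list[str] = field(default_factory=list)  # observable symptoms

# Illustrative entry only; not drawn from the AFS Database.
afs_db = [
    FaultSignature(
        asset_type="emergency diesel generator",
        fault_type="turbocharger fouling",
        features=["elevated exhaust temperature", "reduced boost pressure"],
    ),
]

def matching_signatures(asset_type, observed):
    """Return signatures whose features all appear in the observations."""
    return [s for s in afs_db
            if s.asset_type == asset_type and set(s.features) <= set(observed)]
```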
POLLUX: a program for simulated cloning, mutagenesis and database searching of DNA constructs.
Dayringer, H E; Sammons, S A
1991-04-01
Computer support for research in biotechnology has developed rapidly and has provided several tools to aid the researcher. This report describes the capabilities of new computer software developed in this laboratory to aid in the documentation and planning of experiments in molecular biology. The program, POLLUX, provides a graphical medium for the entry, edit and manipulation of DNA constructs and a textual format for display and edit of construct descriptive data. Program operation and procedures are designed to mimic the actual laboratory experiments with respect to capability and the order in which they are performed. Flexible control over the content of the computer-generated displays and program facilities is provided by a mouse-driven menu interface. Programmed facilities for mutagenesis, simulated cloning and searching of the database from networked workstations are described.
USDA-ARS?s Scientific Manuscript database
Epidemiologic studies show inverse associations between flavonoid intake and chronic disease risk. However, a lack of comprehensive databases of the flavonoid content of foods has hindered efforts to fully characterize population intake. Using a newly released database of flavonoid values, we soug...
ERIC Educational Resources Information Center
Turner, Laura
2001-01-01
Focuses on the Deep Web, defined as Web content in searchable databases of the type that can be found only by direct query. Discusses the problems of indexing; inability to find information not indexed in the search engine's database; and metasearch engines. Describes 10 sites created to access online databases or directly search them. Lists ways…
The Consolidated Human Activity Database — Master Version (CHAD-Master) Technical Memorandum
This technical memorandum contains information about the Consolidated Human Activity Database -- Master version, including CHAD contents; an inventory of variables (questionnaire files and event files); CHAD codes; and references.
Ku, Ho Suk; Kim, Sungho; Kim, HyeHyeon; Chung, Hee-Joon; Park, Yu Rang; Kim, Ju Han
2014-04-01
Health Avatar Beans was developed for the management of chronic kidney disease and end-stage renal disease (ESRD). This article describes the DialysisNet system in Health Avatar Beans for the seamless management of ESRD based on the personal health record. For hemodialysis data modeling, we identified common data elements for hemodialysis information (CDEHI). We used the ASTM Continuity of Care Record (CCR) and ISO/IEC 11179 as the method for compliance with a standard model for the CDEHI. Following the contents of the ASTM CCR, we mapped the CDEHI to those contents and created metadata from them. The data were transformed and parsed into the database and verified according to the ASTM CCR/XML schema definition (XSD). DialysisNet was created as an iPad application. The contents of the CDEHI were categorized for effective management. For the evaluation of information transfer, we used CarePlatform, which was developed for data access. The metadata of the CDEHI in DialysisNet were exchanged by the CarePlatform with semantic interoperability. The CDEHI was separated into a content list for individual patient data, a content list for hemodialysis center data, a consultation and transfer form, and clinical decision support data. After matching to the CCR, the CDEHI was transformed to metadata, which was then transformed to XML and validated against the ASTM CCR/XSD. DialysisNet gives specific consideration to visualization, graphics, images, statistics, and the database. We created the DialysisNet application, which can integrate and manage data sources for hemodialysis information based on CCR standards.
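The transform-then-verify step (XML checked against the CCR XSD before loading) can be illustrated with generic schema validation. File names below are placeholders, and lxml stands in for whatever validator the project actually used.

```python
from lxml import etree

# Validate a CCR-style XML export against its XSD before loading it
# into the database; both file names are illustrative placeholders.
schema = etree.XMLSchema(etree.parse("ccr.xsd"))
doc = etree.parse("dialysis_record.xml")

if schema.validate(doc):
    print("record conforms to the schema")
else:
    for err in schema.error_log:
        print(err.line, err.message)
```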
Enabling search over encrypted multimedia databases
NASA Astrophysics Data System (ADS)
Lu, Wenjun; Swaminathan, Ashwin; Varna, Avinash L.; Wu, Min
2009-02-01
Performing information retrieval tasks while preserving data confidentiality is a desirable capability when a database is stored on a server maintained by a third-party service provider. This paper addresses the problem of enabling content-based retrieval over encrypted multimedia databases. Search indexes, along with multimedia documents, are first encrypted by the content owner and then stored onto the server. Through jointly applying cryptographic techniques, such as order preserving encryption and randomized hash functions, with image processing and information retrieval techniques, secure indexing schemes are designed to provide both privacy protection and rank-ordered search capability. Retrieval results on an encrypted color image database and security analysis of the secure indexing schemes under different attack models show that data confidentiality can be preserved while retaining very good retrieval performance. This work has promising applications in secure multimedia management.
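The secure-index idea can be illustrated with keyed random projections: only holders of the secret key can reproduce the binary codes used for matching. This is a toy sketch of the concept, not the paper's actual scheme, which combines order-preserving encryption and randomized hashes with image-retrieval techniques.

```python
import numpy as np

def keyed_binary_code(feature, key, n_bits=64):
    """Hash an image feature vector to a binary code via random
    projections seeded by a secret key; without the key, the projection
    directions (and hence the index) cannot be reproduced."""
    rng = np.random.default_rng(key)
    planes = rng.standard_normal((n_bits, feature.size))
    return (planes @ feature > 0).astype(np.uint8)

def hamming(a, b):
    return int(np.count_nonzero(a != b))

key = 123456789  # secret held by the content owner
query = keyed_binary_code(np.random.rand(128), key)
index = [keyed_binary_code(np.random.rand(128), key) for _ in range(5)]
ranked = sorted(range(len(index)), key=lambda i: hamming(query, index[i]))
print(ranked)  # database items ordered by similarity to the query
```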
Barbara, Angela M; Dobbins, Maureen; Brian Haynes, R; Iorio, Alfonso; Lavis, John N; Raina, Parminder; Levinson, Anthony J
2017-07-11
The objective of this work was to provide easy access to reliable health information, based on good-quality research, that will help health care professionals learn what works best for seniors to stay as healthy as possible, manage health conditions, and build supportive health systems. Meeting the demands of our aging population requires that clinicians provide high-quality care for older adults, that public health professionals deliver disease prevention and health promotion strategies across the life span, and that policymakers address the economic and social need to create a robust health system and a healthy society for all ages. The McMaster Optimal Aging Portal's (Portal) professional bibliographic database contains high-quality scientific evidence about optimal aging specifically targeted to clinicians, public health professionals and policymakers. The database content comes from three information services: McMaster Premium LiteratUre Service (MacPLUS™), Health Evidence™ and Health Systems Evidence. The Portal is continually updated, freely accessible online, easily searchable, and provides email-based alerts when new records are added. The database is being continually assessed for value, usability and use. A number of improvements are planned, including French-language translation of content, increased linkages between related records within the Portal database, and inclusion of additional types of content. While this article focuses on the professional database, the Portal also houses resources for patients, caregivers and the general public, which may also be of interest to geriatric practitioners and researchers.
A COSTAR interface using WWW technology.
Rabbani, U.; Morgan, M.; Barnett, O.
1998-01-01
The concentration of industry on modern relational databases has left many nonrelational and proprietary databases without support for integration with new technologies. Emerging interface tools and data-access methodologies can be applied only with difficulty to medical record systems that have proprietary data representations. Users of such medical record systems usually must access their clinical content through keyboard-intensive and time-consuming interfaces. COSTAR is a legacy ambulatory medical record system developed over 25 years ago that is still popular and extensively used at the Massachusetts General Hospital. We define a model for using middle-layer services to extract and cache data from non-relational databases, and present an intuitive World-Wide Web interface to COSTAR. This model has been implemented and successfully piloted in the Internal Medicine Associates at Massachusetts General Hospital. PMID:9929310
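The middle-layer model (extract from the legacy store once, serve repeat requests from a cache) can be sketched generically as below. The SQLite cache and the fetch_from_costar placeholder are illustrative assumptions, not the paper's implementation.

```python
import sqlite3, time

# Middle-layer cache: records are pulled from the legacy, non-relational
# store once, then served from SQLite until they expire.
CACHE_TTL = 3600  # seconds before a cached record is refreshed

conn = sqlite3.connect("costar_cache.db")
conn.execute("""CREATE TABLE IF NOT EXISTS records
                (patient_id TEXT PRIMARY KEY, payload TEXT, fetched REAL)""")

def get_record(patient_id, fetch_from_costar):
    """fetch_from_costar is a placeholder for the site-specific extractor."""
    row = conn.execute(
        "SELECT payload, fetched FROM records WHERE patient_id=?",
        (patient_id,)).fetchone()
    if row and time.time() - row[1] < CACHE_TTL:
        return row[0]                          # serve the cached copy
    payload = fetch_from_costar(patient_id)    # slow legacy access
    conn.execute("INSERT OR REPLACE INTO records VALUES (?,?,?)",
                 (patient_id, payload, time.time()))
    conn.commit()
    return payload
```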
Chado controller: advanced annotation management with a community annotation system.
Guignon, Valentin; Droc, Gaëtan; Alaux, Michael; Baurens, Franc-Christophe; Garsmeur, Olivier; Poiron, Claire; Carver, Tim; Rouard, Mathieu; Bocs, Stéphanie
2012-04-01
We developed a controller that is compliant with the Chado database schema, GBrowse and genome annotation-editing tools such as Artemis and Apollo. It enables the management of public and private data, monitors manual annotation (with controlled vocabularies, structural and functional annotation controls) and stores versions of annotation for all modified features. The Chado controller uses PostgreSQL and Perl. Availability: the Chado Controller package is available for download at http://www.gnpannot.org/content/chado-controller and runs on any Unix-like operating system; documentation is available at http://www.gnpannot.org/content/chado-controller-doc; the system can be tested using the GNPAnnot Sandbox at http://www.gnpannot.org/content/gnpannot-sandbox-form. Contact: valentin.guignon@cirad.fr; stephanie.sidibe-bocs@cirad.fr. Supplementary data are available at Bioinformatics online.
Content Is King: Databases Preserve the Collective Information of Science.
Yates, John R
2018-04-01
Databases store sequence information experimentally gathered to create resources that further science. In the last 20 years databases have become critical components of fields like proteomics where they provide the basis for large-scale and high-throughput proteomic informatics. Amos Bairoch, winner of the Association of Biomolecular Resource Facilities Frederick Sanger Award, has created some of the important databases proteomic research depends upon for accurate interpretation of data.
Vibrating-Wire, Supercooled Liquid Water Content Sensor Calibration and Characterization Progress
NASA Technical Reports Server (NTRS)
King, Michael C.; Bognar, John A.; Guest, Daniel; Bunt, Fred
2016-01-01
NASA conducted a winter 2015 field campaign using weather balloons at the NASA Glenn Research Center to generate a validation database for the NASA Icing Remote Sensing System. The weather balloons carried a specialized, disposable, vibrating-wire sensor to determine supercooled liquid water content aloft. Significant progress has been made to calibrate and characterize these sensors. Calibration testing of the vibrating-wire sensors was carried out in a specially developed, low-speed, icing wind tunnel, and the results were analyzed. The sensor ice accretion behavior was also documented and analyzed. Finally, post-campaign evaluation of the balloon soundings revealed a gradual drift in the sensor data with increasing altitude. This behavior was analyzed and a method to correct for the drift in the data was developed.
NASA Astrophysics Data System (ADS)
Lahinta, A.; Haris, I.; Abdillah, T.
2017-03-01
The aim of this paper is to describe a developed application of the Simple Object Access Protocol (SOAP) as a model for improving libraries' digital content findability on the library web. The study applies the XML text-based protocol to collect data about libraries' visibility in book search results. Models from the integrated Web Services Description Language (WSDL) and Universal Description, Discovery and Integration (UDDI) are applied to analyse SOAP as an element within the system. The results showed that the developed SOAP application with a multi-tier architecture can help people simply access the website on the library server of Gorontalo Province, and supports access to digital collections, subscription databases, and library catalogs in each regency or city library in Gorontalo Province.
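A SOAP exchange of the kind the paper builds on is simply an XML envelope posted over HTTP. The sketch below shows a minimal SOAP 1.1 call; the endpoint URL, namespace, and SearchBooks operation are hypothetical placeholders, not the Gorontalo system's actual service contract (which would be defined in its WSDL).

```python
import requests

ENDPOINT = "http://library.example.gorontalo.go.id/soap"  # placeholder
BODY = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <SearchBooks xmlns="urn:library:catalog">
      <title>iodine</title>
    </SearchBooks>
  </soap:Body>
</soap:Envelope>"""

resp = requests.post(
    ENDPOINT,
    data=BODY.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8",
             "SOAPAction": "urn:library:catalog#SearchBooks"},
    timeout=30,
)
print(resp.status_code, resp.text[:200])  # XML response envelope
```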
Nutrient estimation from an FFQ developed for a black Zimbabwean population
Merchant, Anwar T; Dehghan, Mahshid; Chifamba, Jephat; Terera, Getrude; Yusuf, Salim
2005-01-01
Background There is little information in the literature on methods of food composition database development to calculate nutrient intake from food frequency questionnaire (FFQ) data. The aim of this study is to describe the development of an FFQ and a food composition table to calculate nutrient intake in a Black Zimbabwean population. Methods Trained interviewers collected 24-hour dietary recalls (24 hr DR) from high and low income families in urban and rural Zimbabwe. Based on these data and input from local experts we developed an FFQ, containing a list of frequently consumed foods, standard portion sizes, and categories of consumption frequency. We created a food composition table of the foods found in the FFQ so that we could compute nutrient intake. We used the USDA nutrient database as the main resource because it is relatively complete, updated, and easily accessible. To choose the food item in the USDA nutrient database that most closely matched the nutrient content of the local food we referred to a local food composition table. Results Almost all the participants ate sadza (maize porridge) at least 5 times a week, and about half had matemba (fish) and caterpillar more than once a month. Nutrient estimates obtained from the FFQ data by using the USDA and Zimbabwean food composition tables were similar for total energy intake intra class correlation (ICC) = 0.99, and carbohydrate (ICC = 0.99), but different for vitamin A (ICC = 0.53), and total folate (ICC = 0.68). Conclusion We have described a standardized process of FFQ and food composition database development for a Black Zimbabwean population. PMID:16351722
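Nutrient intake from an FFQ is computed by multiplying consumption frequency, portion size, and the per-gram nutrient values drawn from the food composition table. A minimal sketch with made-up composition values (not the Zimbabwean or USDA figures) follows.

```python
# Daily nutrient intake from FFQ responses:
#   intake = sum over foods of servings/day x portion (g) x nutrient per g
# Composition values below are illustrative only.
FOOD_COMP = {
    "sadza":   {"energy_kcal": 1.1, "carb_g": 0.24},
    "matemba": {"energy_kcal": 2.4, "carb_g": 0.0},
}

def daily_intake(ffq):
    """ffq: list of (food, servings_per_day, portion_g) tuples."""
    totals = {}
    for food, per_day, portion in ffq:
        for nutrient, per_g in FOOD_COMP[food].items():
            totals[nutrient] = totals.get(nutrient, 0.0) + per_day * portion * per_g
    return totals

print(daily_intake([("sadza", 2.0, 300), ("matemba", 0.2, 50)]))
```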
Joseph‐Williams, Natalie; Edwards, Adrian; Elwyn, Glyn
2011-01-01
Background: Regret is a common consequence of decisions, including those decisions related to individuals' health. Several assessment instruments have been developed that attempt to measure decision regret. However, recent research has highlighted the complexity of regret. Given its relevance to shared decision making, it is important to understand its conceptualization and the instruments used to measure it. Objectives: To review current conceptions of regret; to systematically identify instruments used to measure decision regret and assess whether they capture recent conceptualizations of regret. Search strategy: Five electronic databases were searched in 2008. Search strategies used a combination of MeSH terms (or database equivalent) and free-text searching under the following key headings: 'decision' and 'regret' and 'measurement'. Follow-up manual searches were also performed. Inclusion criteria: Articles were included if they reported the development and psychometric testing of an instrument designed to measure decision regret, or the use of a previously developed and tested instrument. Main results: Thirty-two articles were included: 10 report the development and validation of an instrument that measures decision regret and 22 report the use of a previously developed and tested instrument. Content analysis found that existing instruments for the measurement of regret do not capture current conceptualizations of regret and do not enable the construct of regret to be measured comprehensively. Conclusions: Existing instrumentation requires further development. There is also a need to clarify the purpose for using regret assessment instruments, as this will, and should, focus their future application. PMID:20860776
Sugar composition of French royal jelly for comparison with commercial and artificial sugar samples.
Daniele, Gaëlle; Casabianca, Hervé
2012-09-15
A gas chromatographic method was developed to quantify the major and minor sugars of 400 royal jellies (RJs). Their contents were compared in relation to geographical origins and different production methods. A reliable database was established from the analysis of 290 RJs harvested in different French areas, taking into account the diversity of geographical origin, harvesting season, and the forage sources available in the environment corresponding to the natural food of the bees: pollen and nectar. Around 30 RJ samples produced by Italian beekeepers, about sixty from the French market, and around thirty derived from feeding experiments were analysed and compared with our database. Fructose and glucose contents are in the ranges 2.3-7.8% and 3.4-7.7%, respectively, whatever the RJ's origin. On the contrary, differences in minor sugar composition are observed. Indeed, sucrose and erlose contents in French RJs are less than 1.7% and 0.3%, respectively, whereas they reach 3.9% and 2.0% in some commercial samples and 5.1% and 1.7% in RJs produced from feeding experiments. This study could be used to discriminate different production methods and provide an additional tool for identifying unknown commercial RJs. Copyright © 2012 Elsevier Ltd. All rights reserved.
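Screening an unknown sample against the authentic French ranges reported above reduces to simple threshold checks on the minor sugars. This is a bare-bones illustration of that comparison, not the study's full discrimination method.

```python
# Upper limits for authentic French RJ taken from the abstract above
# (sucrose < 1.7 %, erlose < 0.3 %, fresh weight).
FRENCH_LIMITS = {"sucrose": 1.7, "erlose": 0.3}

def flag_sample(sugars):
    """sugars: dict of sugar name -> content in % fresh weight.
    Returns the minor sugars exceeding the authentic French ranges."""
    return [name for name, limit in FRENCH_LIMITS.items()
            if sugars.get(name, 0.0) > limit]

sample = {"fructose": 5.2, "glucose": 4.9, "sucrose": 3.9, "erlose": 2.0}
print(flag_sample(sample))  # ['sucrose', 'erlose'] -> atypical of French RJ
```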
Costello, Mark J.; Bouchet, Philippe; Boxshall, Geoff; Fauchald, Kristian; Gordon, Dennis; Hoeksema, Bert W.; Poore, Gary C. B.; van Soest, Rob W. M.; Stöhr, Sabine; Walter, T. Chad; Vanhoorne, Bart; Decock, Wim; Appeltans, Ward
2013-01-01
The World Register of Marine Species is an over 90% complete open-access inventory of all marine species names. Here we illustrate the scale of the problems with species names, synonyms, and their classification, and describe how WoRMS publishes online quality assured information on marine species. Within WoRMS, over 100 global, 12 regional and 4 thematic species databases are integrated with a common taxonomy. Over 240 editors from 133 institutions and 31 countries manage the content. To avoid duplication of effort, content is exchanged with 10 external databases. At present WoRMS contains 460,000 taxonomic names (from Kingdom to subspecies), 368,000 species level combinations of which 215,000 are currently accepted marine species names, and 26,000 related but non-marine species. Associated information includes 150,000 literature sources, 20,000 images, and locations of 44,000 specimens. Usage has grown linearly since its launch in 2007, with about 600,000 unique visitors to the website in 2011, and at least 90 organisations from 12 countries using WoRMS for their data management. By providing easy access to expert-validated content, WoRMS improves quality control in the use of species names, with consequent benefits to taxonomy, ecology, conservation and marine biodiversity research and management. The service manages information on species names that would otherwise be overly costly for individuals, and thus minimises errors in the application of nomenclature standards. WoRMS' content is expanding to include host-parasite relationships, additional literature sources, locations of specimens, images, distribution range, ecological, and biological data. Species are being categorised as introduced (alien, invasive), of conservation importance, and on other attributes. These developments have a multiplier effect on its potential as a resource for biodiversity research and management. As a consequence of WoRMS, we are witnessing improved communication within the scientific community, and anticipate increased taxonomic efficiency and quality control in marine biodiversity research and management. PMID:23505408
Uses of the Drupal CMS Collaborative Framework in the Woods Hole Scientific Community (Invited)
NASA Astrophysics Data System (ADS)
Maffei, A. R.; Chandler, C. L.; Work, T. T.; Shorthouse, D.; Furfey, J.; Miller, H.
2010-12-01
Organizations that comprise the Woods Hole scientific community (Woods Hole Oceanographic Institution, Marine Biological Laboratory, USGS Woods Hole Coastal and Marine Science Center, Woods Hole Research Center, NOAA NMFS Northeast Fisheries Science Center, SEA Education Association) have a long history of collaborative activity regarding computing, computer network and information technologies that support common, inter-disciplinary science needs. Over the past several years there has been growing interest in the use of the Drupal Content Management System (CMS) playing a variety of roles in support of research projects resident at several of these organizations. Many of these projects are part of science programs that are national and international in scope. Here we survey the current uses of Drupal within the Woods Hole scientific community and examine the reasons it has been adopted. The promise of emerging semantic features in the Drupal framework is examined, and projections of how pre-existing Drupal-based websites might benefit are made. Closer examination of Drupal software design exposes it as more than simply a content management system. The flexibility of its architecture; the power of its taxonomy module; the care taken in nurturing the open-source developer community that surrounds it (including organized and often well-attended code sprints); the ability to bind emerging software technologies as Drupal modules; the careful selection process used in adopting core functionality; multi-site hosting and cross-site deployment of updates; and a recent trend towards development of use-case-inspired Drupal distributions all cast Drupal as a general-purpose application deployment framework. Recent work in the semantic arena casts Drupal as an emerging RDF framework as well. Examples of roles played by Drupal-based websites within the Woods Hole scientific community that will be discussed include: science data metadata database, organization main website, biological taxonomy development, bibliographic database, physical media data archive inventory manager, disaster-response website development framework, science project task management, science conference planning, and spreadsheet-to-database converter.
IDD Info: a software to manage surveillance data of Iodine Deficiency Disorders.
Liu, Peng; Teng, Bai-Jun; Zhang, Shu-Bin; Su, Xiao-Hui; Yu, Jun; Liu, Shou-Jun
2011-08-01
IDD Info, a new software package for managing survey data on Iodine Deficiency Disorders (IDD), is presented in this paper. IDD Info aims to create IDD project databases, process and analyze various national or regional surveillance data, and produce final reports. It provides a series of functions for choosing a database from existing ones, revising it, selecting indicators from a pool to establish a database, and adding indicators to the pool. It also provides simple tools to scan one database and compare two databases, to set IDD standard parameters, to analyze data by single or multiple indicators, and finally to produce a typeset report with customized content. IDD Info was developed using the Chinese national IDD surveillance data of 2005. Its validity was evaluated by comparison with the survey report produced by the China CDC. IDD Info is a professional analysis tool, which speeds up IDD data analysis by about 14.28% with respect to standard reference routines. It consequently enhances analysis performance and user compliance. IDD Info is a practical and accurate means of managing the multifarious IDD surveillance data that can be widely used by non-statisticians in national and regional IDD surveillance. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Belmonte, M
In this article we review two of the main types of Internet information service for finding bibliographic references and journals, together with electronic publications on the Internet, with particular emphasis on those related to the neurosciences. The main bibliographic indices are: 1. MEDLINE. By definition, this is the bibliographic database. It is an online version of a smaller-format publication, issued weekly, containing the title pages and summaries of most of the biomedical journals. It is based on the Index Medicus, a bibliographic index (on paper) which annually collects references to the most important biomedical journals. 2. EMBASE (Excerpta Medica). A direct competitor to MEDLINE, although it has the disadvantage of lacking government subsidies and being privately financed only. This bibliographic database, produced by the publisher Elsevier of the Netherlands, covers approximately 3,500 biomedical journals from 110 countries, and is particularly useful for articles on drugs and toxicology. 3. Current Contents. The index Current Contents is a classic in this field, much appreciated by scientists in all areas: medicine, social sciences, technology, arts and humanities. At present, it is available in an online version known as CCC (Current Contents Connect), accessible through the web, but only to subscribers. There is a growing tendency towards the publication of biomedical journals on the Internet. Its full development, if correctly carried out, will provide the opportunity to have the best information available and will greatly benefit all those who are already using new information technology.
ERIC Educational Resources Information Center
Hoover, Ryan E.
This study examines (1) subject content, (2) file size, (3) types of documents indexed, (4) range of years spanned, and (5) level of indexing and abstracting in five databases which collectively provide extensive coverage of the forestry and forest products industries: AGRICOLA, CAB ABSTRACTS, FOREST PRODUCTS (AIDS), PAPERCHEM, and PIRA. The…
DSSTox Website Launch: Improving Public Access to Databases for Building Structure-Toxicity Prediction Models
Richard, Ann M
US Environmental Protection Agency, Research Triangle Park, NC, USA
Distributed: decentralized set of standardized, field-delimited databases, each separately authored and maintained, that are able to accommodate diverse toxicity data content. Structure-searchable: standard-format (SDF) structure-data files that can be readily imported into available chemical relational databases and structure-searched. Tox: toxicity data as it exists in widely disparate forms in current public databases, spanning diverse toxicity endpoints, test systems, levels of biological content, degrees of summarization, and information content. Introduction: the economic and social pressures to reduce the need for animal testing and to better anticipate the potential for human and eco-toxicity of environmental, industrial, or pharmaceutical chemicals are as pressing today as at any time prior. However, the goal of predicting chemical toxicity in its many manifestations, the 'T' in 'ADMET' (adsorption, distribution, metabolism, elimination, toxicity), remains one of the most difficult and largely unmet challenges in a chemical screening paradigm [1]. It is widely acknowledged that the single greatest hurdle to improving structure-activity relationship (SAR) toxicity prediction capabilities, in both the pharmaceutical and environmental regulation arenas, is the lack of suffici...
Deeply learnt hashing forests for content based image retrieval in prostate MR images
NASA Astrophysics Data System (ADS)
Shah, Amit; Conjeti, Sailesh; Navab, Nassir; Katouzian, Amin
2016-03-01
Deluge in the size and heterogeneity of medical image databases necessitates content-based retrieval systems for their efficient organization. In this paper, we propose such a system to retrieve prostate MR images which share similarities in appearance and content with a query image. We introduce deeply learnt hashing forests (DL-HF) for this image retrieval task. DL-HF effectively leverages the semantic descriptiveness of deep-learnt Convolutional Neural Networks, used in conjunction with hashing forests, which are unsupervised random forests. DL-HF hierarchically parses the deep-learnt feature space to encode subspaces with compact binary code words. We propose a similarity-preserving feature descriptor called Parts Histogram, which is derived from DL-HF. Correlation defined on this descriptor is used as a similarity metric for retrieval from the database. Validation on a publicly available multi-center prostate MR image database established the validity of the proposed approach. The proposed method is fully automated without any user interaction and is not dependent on any external image standardization like image normalization and registration. This image retrieval method is generalizable and is well-suited for retrieval in heterogeneous databases of other imaging modalities and anatomies.
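The hashing-forest step (unsupervised random trees mapping each deep feature vector to a sparse binary leaf code, with code overlap serving as similarity) can be approximated with an off-the-shelf embedding. The sketch below stands in for DL-HF's specific construction, which it does not reproduce, and uses random vectors in place of CNN descriptors.

```python
import numpy as np
from sklearn.ensemble import RandomTreesEmbedding

feats = np.random.rand(100, 256)     # stand-in for deep-learnt descriptors
forest = RandomTreesEmbedding(n_estimators=32, max_depth=6, random_state=0)
codes = forest.fit_transform(feats)  # sparse binary leaf-membership codes

# Rank database images by the number of forest leaves shared with the query.
query = codes[0]
overlap = (codes @ query.T).toarray().ravel()
ranked = np.argsort(-overlap)[1:6]   # top-5 neighbours, excluding the query
print(ranked)
```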
A Tool for Conditions Tag Management in ATLAS
NASA Astrophysics Data System (ADS)
Sharmazanashvili, A.; Batiashvili, G.; Gvaberidze, G.; Shekriladze, L.; Formica, A.; Atlas Collaboration
2014-06-01
ATLAS Conditions data include about 2 TB in a relational database and 400 GB of files referenced from the database. Conditions data are entered and retrieved using COOL, the API for accessing data in the LCG Conditions Database infrastructure, and are managed using an ATLAS-customized, Python-based tool set. Conditions data are required for every reconstruction and simulation job, so access to them is crucial for all aspects of ATLAS data taking and analysis, as well as for the preceding tasks that derive optimal corrections for reconstruction. Optimized sets of conditions for processing are accomplished using strict version control on those conditions: a process which assigns COOL Tags to sets of conditions, and then unifies those conditions over data-taking intervals into a COOL Global Tag. This Global Tag identifies the set of conditions used to process data, so that the underlying conditions can be uniquely identified with 100% reproducibility should the processing be executed again. Understanding shifts in the underlying conditions from one tag to another, and ensuring interval completeness for all detectors for a set of runs to be processed, is a complex task requiring tools beyond the above-mentioned Python utilities. Therefore, a JavaScript/PHP-based utility called the Conditions Tag Browser (CTB) has been developed. The CTB gives detector and conditions experts the ability to navigate through the different databases and COOL folders; explore the content of given tags and the differences between them, as well as their extent in time; and visualize the content of channels associated with leaf tags. This report describes the structure of the CTB and its PHP/JavaScript function classes.
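To make the tag hierarchy concrete, here is a minimal, hypothetical sketch of the mapping the abstract describes: a Global Tag resolving each conditions folder to a leaf tag valid over a run interval. The folder names, tag labels, and data structure are invented for illustration and are not the actual COOL API:

from dataclasses import dataclass

@dataclass(frozen=True)
class LeafTag:
    folder: str          # e.g. "/PIXEL/Align" (hypothetical folder name)
    tag: str             # version label for that folder's conditions
    run_range: tuple     # (first_run, last_run) the tag covers

GLOBAL_TAG = {
    "/PIXEL/Align": LeafTag("/PIXEL/Align", "PixAlign-v03", (200000, 220000)),
    "/TRT/Calib":   LeafTag("/TRT/Calib",   "TrtCalib-v11", (200000, 215000)),
}

def resolve(folder: str, run: int) -> str:
    """Return the leaf tag to use for a folder at a given run number."""
    leaf = GLOBAL_TAG[folder]
    first, last = leaf.run_range
    if not first <= run <= last:
        raise KeyError(f"no conditions for {folder} at run {run}")
    return leaf.tag

print(resolve("/PIXEL/Align", 210000))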
David N. Bengston; David P. Fan
1999-01-01
An indicator of the level of conflict over natural resource management was developed and applied to the case of U.S. national forest policy and management. Computer-coded content analysis was used to identify expressions of conflict in a national database of almost 10,000 news media stories about the U.S. Forest Service. Changes in the amount of news media discussion...
ERIC Educational Resources Information Center
Spence, Michelle; Mawhinney, Tara; Barsky, Eugene
2012-01-01
Science and engineering libraries have an important role to play in preserving the intellectual content in research areas of the departments they serve. This study employs bibliographic data from the Web of Science database to examine how much research material is required to cover 90% of faculty citations in civil engineering and computer…
Using Object-Oriented Databases for Implementation of Interactive Electronic Technical Manuals
1992-03-01
…analytical process applied throughout the system acquisition program in order to define supportability-related design factors and to ensure development of a… Node Alternatives (NODEALTS) is a list of mutually exclusive nodes, grouped together by the fact that they apply to different… contextual situations. The content-specific layer NODEALTS element is a reference to a set of nodes that might apply in different situations. No hierarchy…
Ballemans, Judith; Kempen, Gertrudis IJM; Zijlstra, GA Rixt
2011-01-01
Objective: This study aimed to provide an overview of the development, content, feasibility, and effectiveness of existing orientation and mobility training programmes in the use of the identification cane. Data sources: A systematic bibliographic database search in PubMed, PsycINFO, ERIC, CINAHL and the Cochrane Library was performed, in combination with expert consultation (n = 42; orientation and mobility experts) and hand-searching of reference lists. Review methods: Selection criteria included a description of the development, content, feasibility, or effectiveness of orientation and mobility training in the use of the identification cane. Two reviewers independently assessed eligibility and methodological quality. A narrative/qualitative data analysis method was applied to extract data from the documents obtained. Results: The sensitive database search and hand-searching of reference lists revealed 248 potentially relevant abstracts. None met the eligibility criteria. Expert consultation resulted in the inclusion of six documents, in which the information presented on orientation and mobility training in the use of the identification cane was incomplete and of low methodological quality. Conclusion: Our review of the literature showed a lack of well-described protocols and studies on orientation and mobility training in identification cane use. PMID:21795405
USDA-ARS?s Scientific Manuscript database
The sodium concentration (mg/100g) for 23 of 125 Sentinel Foods were identified in the 2009 CDC Packaged Food Database (PFD) and compared with data in the USDA’s 2013 Standard Reference 26 (SR 26) database. Sentinel Foods are foods and beverages identified by USDA to be monitored as primary indicat...
ERIC: Sphinx or Golden Griffin?
ERIC Educational Resources Information Center
Lopez, Manuel D.
1989-01-01
Evaluates the Educational Resources Information Center (ERIC) database. Summarizes ERIC's history and organization, and discusses criticisms concerning access, currency, and database content. Reviews role of component clearinghouses, indexing practices, thesaurus structure, international coverage, and comparative studies. Finds ERIC a valuable…
The Design and Product of National 1:1000000 Cartographic Data of Topographic Map
NASA Astrophysics Data System (ADS)
Wang, Guizhi
2016-06-01
The National Administration of Surveying, Mapping and Geoinformation launched the national fundamental geographic information database dynamic-update project in 2012. Under this project, the 1:50,000 database is updated once a year, and the 1:250,000 database is generalized from it and updated in linkage with it. In 2014, the latest results of the 1:250,000 database were used to comprehensively update the 1:1,000,000 digital line graph database and, at the same time, to generate cartographic data for topographic maps and digital elevation model data. This article mainly introduces the national 1:1,000,000 cartographic data of topographic maps, including feature content, database structure, database-driven mapping technology, and workflow.
Riebschleger, Joanne; Onaga, Esther; Tableman, Betty; Bybee, Deborah
2014-09-01
This research explores consumer parents' recommendations for developing psychoeducation programs for their minor children. Data were drawn from a purposive sample of 3 focus groups of parent consumers of a community mental health agency. The research question was: "What do consumer parents recommend for developing psychoeducation programs for their minor children?" Parents recommended content foci of mental illness, recovery, heritability, stigma, and coping. The next step is youth psychoeducation intervention development and evaluation. Parents, youth, and professionals should be included in the program planning. (PsycINFO Database Record (c) 2014 APA, all rights reserved).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Shishi; Wei, Yaxing; Post, Wilfred M
2013-01-01
The Unified North American Soil Map (UNASM) was developed to provide more accurate regional soil information for terrestrial biosphere modeling. The UNASM combines information from the state-of-the-art U.S. STATSGO2 and Soil Landscape of Canada (SLC) databases. The area not covered by these datasets is filled with the Harmonized World Soil Database version 1.1 (HWSD1.1). The UNASM contains maximum soil depth derived from the data source as well as seven soil attributes (sand, silt, and clay content, gravel content, organic carbon content, pH, and bulk density) for the top soil layer (0-30 cm) and the sub soil layer (30-100 cm), respectively, at a spatial resolution of 0.25 degrees in latitude and longitude. There are pronounced differences in the spatial distributions of soil properties and soil organic carbon between the UNASM and the HWSD, but the UNASM overall provides more detailed and higher-quality information, particularly in Alaska and central Canada. To provide a more accurate and up-to-date estimate of the soil organic carbon stock in North America, we incorporated the Northern Circumpolar Soil Carbon Database (NCSCD) into the UNASM. The estimate of total soil organic carbon mass in the upper 100 cm soil profile based on the improved UNASM is 347.70 Pg, of which 24.7% is under trees, 14.2% under shrubs, 1.3% under grasses, and 3.8% under crops. This UNASM data will provide a resource for use in land surface and terrestrial biogeochemistry modeling, both as input of soil characteristics and for benchmarking model output.
NASA Astrophysics Data System (ADS)
Wei, Y.; Liu, S.; Huntzinger, D. N.; Michalak, A. M.; Post, W. M.; Cook, R. B.; Schaefer, K. M.; Thornton, M.
2014-12-01
The Unified North American Soil Map (UNASM) was developed by the Multi-scale Synthesis and Terrestrial Model Intercomparison Project (MsTMIP) to provide more accurate regional soil information for terrestrial biosphere modeling. The UNASM combines information from the state-of-the-art US STATSGO2 and Soil Landscape of Canada (SLC) databases. The area not covered by these datasets is filled using the Harmonized World Soil Database version 1.21 (HWSD1.21). The UNASM contains maximum soil depth derived from the data source as well as seven soil attributes (sand, silt, and clay content, gravel content, organic carbon content, pH, and bulk density) for the topsoil layer (0-30 cm) and the subsoil layer (30-100 cm), respectively, at a spatial resolution of 0.25 degrees in latitude and longitude. There are pronounced differences in the spatial distributions of soil properties and soil organic carbon between the UNASM and the HWSD, but the UNASM overall provides more detailed and higher-quality information, particularly in Alaska and central Canada. To provide a more accurate and up-to-date estimate of the soil organic carbon stock in North America, we incorporated the Northern Circumpolar Soil Carbon Database (NCSCD) into the UNASM. The estimate of total soil organic carbon mass in the upper 100 cm soil profile based on the improved UNASM is 365.96 Pg, of which 23.1% is under trees, 14.1% in shrubland, and 4.6% in grassland and cropland. The UNASM data have been provided as a resource for use in the terrestrial ecosystem modeling of MsTMIP, both as input of soil characteristics and for benchmarking model output.
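As a worked example of how the seven UNASM soil attributes combine, the sketch below computes a per-layer soil organic carbon stock from organic carbon content, bulk density, layer depth, and gravel content; the input values are illustrative, not UNASM data:

def soc_stock_kg_m2(oc_percent: float, bulk_density_g_cm3: float,
                    depth_cm: float, gravel_percent: float) -> float:
    """Soil organic carbon stock (kg C / m^2) for a single soil layer."""
    # kg C/m^2 = OC fraction * BD (g/cm^3) * depth (cm) * 10 * (1 - gravel fraction)
    return (oc_percent / 100.0) * bulk_density_g_cm3 * depth_cm * 10.0 \
           * (1.0 - gravel_percent / 100.0)

# Topsoil layer (0-30 cm) with 2% OC, 1.3 g/cm^3 bulk density, 5% gravel:
print(round(soc_stock_kg_m2(2.0, 1.3, 30.0, 5.0), 2), "kg C/m^2")  # -> 7.41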
Multiple-Feature Extracting Modules Based Leak Mining System Design
Cho, Ying-Chiang; Pan, Jen-Yi
2013-01-01
Over the years, human dependence on the Internet has increased dramatically. A large amount of information is placed on the Internet and retrieved from it daily, which makes web security in terms of online information a major concern. In recent years, the most problematic issues in web security have been e-mail address leakage and SQL injection attacks. There are many possible causes of information leakage, such as inadequate precautions during the programming process, which lead to the leakage of e-mail addresses entered online or insufficient protection of database information, a loophole that enables malicious users to steal online content. In this paper, we implement a crawler mining system that is equipped with SQL injection vulnerability detection, by means of an algorithm developed for the web crawler. In addition, we analyze portal sites of the governments of various countries or regions in order to investigate the information leaking status of each site. Subsequently, we analyze the database structure and content of each site, using the data collected. Thus, we make use of practical verification in order to focus on information security and privacy through black-box testing. PMID:24453892
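A minimal sketch of the kind of black-box SQL injection probe such a crawler can perform, using only Python's standard library: append a stray quote to each query parameter and scan the response for database error signatures. The signature list and any target URL are illustrative, and such probes should only be run against systems you are authorized to test:

from urllib.parse import urlsplit, parse_qsl, urlencode, urlunsplit
from urllib.request import urlopen

ERROR_SIGNATURES = ["sql syntax", "unclosed quotation mark", "ora-00933"]

def looks_injectable(url: str) -> bool:
    """Probe each query parameter of a URL for naive SQL injection errors."""
    parts = urlsplit(url)
    params = parse_qsl(parts.query)
    for i, (key, value) in enumerate(params):
        probed = params.copy()
        probed[i] = (key, value + "'")          # inject a stray quote
        probe_url = urlunsplit(parts._replace(query=urlencode(probed)))
        body = urlopen(probe_url).read().decode(errors="ignore").lower()
        if any(sig in body for sig in ERROR_SIGNATURES):
            return True
    return False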
The Cambridge Structural Database
Groom, Colin R.; Bruno, Ian J.; Lightfoot, Matthew P.; Ward, Suzanna C.
2016-01-01
The Cambridge Structural Database (CSD) contains a complete record of all published organic and metal–organic small-molecule crystal structures. The database has been in operation for over 50 years and continues to be the primary means of sharing structural chemistry data and knowledge across disciplines. As well as structures that are made public to support scientific articles, it includes many structures published directly as CSD Communications. All structures are processed both computationally and by expert structural chemistry editors prior to entering the database. A key component of this processing is the reliable association of the chemical identity of the structure studied with the experimental data. This important step helps ensure that data is widely discoverable and readily reusable. Content is further enriched through selective inclusion of additional experimental data. Entries are available to anyone through free CSD community web services. Linking services developed and maintained by the CCDC, combined with the use of standard identifiers, facilitate discovery from other resources. Data can also be accessed through CCDC and third party software applications and through an application programming interface. PMID:27048719
Parkhill, Anne; Hill, Kelvin
2009-03-01
The Australian National Stroke Foundation appointed a search specialist to find the best available evidence for the second edition of its Clinical Guidelines for Acute Stroke Management, and to identify the relative effectiveness of differing evidence sources for the guideline update. We searched and reviewed references from five valid evidence sources for clinical and economic questions: (i) electronic databases; (ii) reference lists of relevant systematic reviews, guidelines, and/or primary studies; (iii) tables of contents of a number of key journals for the last 6 months; (iv) internet/grey literature; and (v) experts. Reference sources were recorded, quantified, and analysed. In the clinical portion of the guidelines document, there was greater use of previous knowledge and of sources other than electronic databases, while the economic section made greater use of electronic databases. The results confirmed that searchers need to be aware of the context and range of sources for evidence searches. For the best available evidence, searchers cannot rely solely on electronic databases and need to encompass many different media and sources.
Flow unsteadiness effects on boundary layers
NASA Technical Reports Server (NTRS)
Murthy, Sreedhara V.
1989-01-01
The development of boundary layers at high subsonic speeds in the presence of either mass-flux fluctuations or acoustic disturbances (the two most important parameters in the unsteadiness environment affecting the aerodynamics of a flight vehicle) was investigated. A high-quality database for generating detailed information concerning free-stream flow unsteadiness effects on boundary layer growth and transition at high subsonic and transonic speeds is described. The database will be generated with a two-pronged approach: (1) from a detailed review of the existing research literature and wind tunnel calibration databases, and (2) from detailed tests in the Boundary Layer Apparatus for Subsonic and Transonic flow Affected by Noise Environment (BLASTANE). Special instrumentation, including hot-wire anemometry, the buried-wire gage technique, and laser velocimetry, was used to obtain skin friction and turbulent shear stress data along the entire boundary layer for various free-stream noise levels, turbulence content, and pressure gradients. This database will be useful for improving the methodology for correcting wind tunnel test data in flight predictions and will be helpful for making improvements in turbulence modeling laws.
Kong, Anthony Pak-Hin; Law, Sam-Po; Kwan, Connie Ching-Yin; Lai, Christy; Lam, Vivian
2014-01-01
Gestures are commonly used together with spoken language in human communication. One major limitation of gesture investigations in the existing literature lies in the fact that the coding of forms and functions of gestures has not been clearly differentiated. This paper first described a recently developed Database of Speech and GEsture (DoSaGE) based on independent annotation of gesture forms and functions among 119 neurologically unimpaired right-handed native speakers of Cantonese (divided into three age and two education levels), and presented findings of an investigation examining how gesture use was related to age and linguistic performance. Consideration of these two factors, for which normative data are currently very limited or lacking in the literature, is relevant and necessary when one evaluates gesture employment among individuals with and without language impairment. Three speech tasks, including monologue of a personally important event, sequential description, and story-telling, were used for elicitation. The EUDICO Linguistic ANnotator (ELAN) software was used to independently annotate each participant’s linguistic information of the transcript, forms of gestures used, and the function for each gesture. About one-third of the subjects did not use any co-verbal gestures. While the majority of gestures were non-content-carrying, which functioned mainly for reinforcing speech intonation or controlling speech flow, the content-carrying ones were used to enhance speech content. Furthermore, individuals who are younger or linguistically more proficient tended to use fewer gestures, suggesting that normal speakers gesture differently as a function of age and linguistic performance. PMID:25667563
Kleinboelting, Nils; Huep, Gunnar; Weisshaar, Bernd
2017-01-01
SimpleSearch provides access to a database containing information about T-DNA insertion lines of the GABI-Kat collection of Arabidopsis thaliana mutants. These mutants are an important tool for reverse genetics, and GABI-Kat is the second largest collection of such T-DNA insertion mutants. Insertion sites were deduced from flanking sequence tags (FSTs), and the database contains information about mutant plant lines as well as insertion alleles. Here, we describe improvements within the interface (available at http://www.gabi-kat.de/db/genehits.php) and with regard to the database content that have been realized in the last five years. These improvements include the integration of the Araport11 genome sequence annotation data containing the recently updated A. thaliana structural gene descriptions, an updated visualization component that displays groups of insertions with very similar insertion positions, mapped confirmation sequences, and primers. The visualization component provides a quick way to identify insertions of interest, and access to improved data about the exact structure of confirmed insertion alleles. In addition, the database content has been extended by incorporating additional insertion alleles that were detected during the confirmation process, as well as by adding new FSTs that have been produced during continued efforts to complement gaps in FST availability. Finally, the current database content regarding predicted and confirmed insertion alleles as well as primer sequences has been made available as downloadable flat files. © The Author 2016. Published by Oxford University Press on behalf of Japanese Society of Plant Physiologists.
Tourassi, Georgia D; Harrawood, Brian; Singh, Swatee; Lo, Joseph Y; Floyd, Carey E
2007-01-01
The purpose of this study was to evaluate image similarity measures employed in an information-theoretic computer-assisted detection (IT-CAD) scheme. The scheme was developed for content-based retrieval and detection of masses in screening mammograms. The study is aimed toward an interactive clinical paradigm where physicians query the proposed IT-CAD scheme on mammographic locations that are either visually suspicious or indicated as suspicious by other cuing CAD systems. The IT-CAD scheme provides an evidence-based, second opinion for query mammographic locations using a knowledge database of mass and normal cases. In this study, eight entropy-based similarity measures were compared with respect to retrieval precision and detection accuracy using a database of 1820 mammographic regions of interest. The IT-CAD scheme was then validated on a separate database for false positive reduction of progressively more challenging visual cues generated by an existing, in-house mass detection system. The study showed that the image similarity measures fall into one of two categories; one category is better suited to the retrieval of semantically similar cases while the second is more effective with knowledge-based decisions regarding the presence of a true mass in the query location. In addition, the IT-CAD scheme yielded a substantial reduction in false-positive detections while maintaining high detection rate for malignant masses.
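The abstract does not enumerate its eight measures; as one representative entropy-based similarity measure, the sketch below estimates mutual information between two regions from a joint grey-level histogram, with toy arrays standing in for mammographic regions of interest:

import numpy as np

def mutual_information(a: np.ndarray, b: np.ndarray, bins: int = 32) -> float:
    """Mutual information (bits) estimated from a joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                  # avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(1)
query = rng.random((64, 64))
print(mutual_information(query, query))           # self-similarity is maximal
print(mutual_information(query, rng.random((64, 64))))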
A content review of cognitive process measures used in pain research within adult populations.
Day, M A; Lang, C P; Newton-John, T R O; Ehde, D M; Jensen, M P
2017-01-01
Previous research suggests that measures of cognitive process may be confounded by the inclusion of items that also assess cognitive content. The primary aims of this content review were to: (1) identify the domains of cognitive processes assessed by measures used in pain research; and (2) determine whether pain-specific cognitive process measures with adequate psychometric properties exist. PsycINFO, CINAHL, PsycARTICLES, MEDLINE, and Academic Search Complete databases were searched to identify the measures of cognitive process used in pain research. Identified measures were double coded and the measures' items were rated as: (1) cognitive content; (2) cognitive process; (3) behavioural/social; and/or (4) emotional coping/responses to pain. A total of 319 scales were identified; of these, 29 were coded as providing an unconfounded assessment of cognitive process, and 12 were pain-specific. The cognitive process domains assessed in these measures are Absorption, Dissociation, Reappraisal, Distraction/Suppression, Acceptance, Rumination, Non-Judgment, and Enhancement. Pain-specific, unconfounded measures were identified for Dissociation, Reappraisal, Distraction/Suppression, and Acceptance. Psychometric properties of all 319 scales are reported in supplementary material. To understand the importance of cognitive processes in influencing pain outcomes, as well as in explaining the efficacy of pain treatments, valid and pain-specific cognitive process measures that are not confounded with non-process domains (e.g., cognitive content) are needed. The findings of this content review suggest that future research focused on developing cognitive process measures is critical to advancing our understanding of the mechanisms that underlie effective pain treatment. Many cognitive process measures used in pain research contain a 'mix' of items that assess cognitive process, cognitive content, and behavioural/emotional responses. Databases searched: PsycINFO, CINAHL, PsycARTICLES, MEDLINE and Academic Search Complete. This review describes the domains assessed by measures assessing cognitive processes in pain research, as well as the strengths and limitations of these measures. © 2016 European Pain Federation - EFIC®.
Multi-source and ontology-based retrieval engine for maize mutant phenotypes
Green, Jason M.; Harnsomburana, Jaturon; Schaeffer, Mary L.; Lawrence, Carolyn J.; Shyu, Chi-Ren
2011-01-01
Model Organism Databases, including the various plant genome databases, collect and enable access to massive amounts of heterogeneous information, including sequence data, gene product information, images of mutant phenotypes, etc., as well as textual descriptions of many of these entities. While a variety of basic browsing and search capabilities are available to allow researchers to query and peruse the names and attributes of phenotypic data, next-generation search mechanisms that allow querying and ranking of text descriptions are much less common. In addition, the plant community needs an innovative way to leverage the existing links in these databases to search groups of text descriptions simultaneously. Furthermore, though much time and effort have been afforded to the development of plant-related ontologies, the knowledge embedded in these ontologies remains largely unused in available plant search mechanisms. Addressing these issues, we have developed a unique search engine for mutant phenotypes from MaizeGDB. This advanced search mechanism integrates various text description sources in MaizeGDB to aid a user in retrieving desired mutant phenotype information. Currently, descriptions of mutant phenotypes, loci and gene products are utilized collectively for each search, though expansion of the search mechanism to include other sources is straightforward. The retrieval engine, to our knowledge, is the first engine to exploit the content and structure of available domain ontologies, currently the Plant and Gene Ontologies, to expand and enrich retrieval results in major plant genomic databases. Database URL: http://www.PhenomicsWorld.org/QBTA.php PMID:21558151
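A hedged sketch of the ontology-driven expansion idea: walk an is-a hierarchy to collect a query term's descendants before matching text descriptions. The miniature hierarchy and mutant descriptions are invented for illustration, not actual Plant Ontology or MaizeGDB content:

CHILDREN = {
    "leaf": ["leaf blade", "leaf sheath"],
    "leaf blade": ["leaf margin"],
}

def expand(term: str) -> set[str]:
    """Return the term plus all of its descendants in the is-a hierarchy."""
    terms = {term}
    for child in CHILDREN.get(term, []):
        terms |= expand(child)
    return terms

descriptions = {
    "mutant-17": "narrow leaf margin, pale",
    "mutant-42": "short internodes",
}
query_terms = expand("leaf")
hits = {m for m, text in descriptions.items()
        if any(t in text for t in query_terms)}
print(sorted(hits))   # -> ['mutant-17']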
Development of a medical module for disaster information systems.
Calik, Elif; Atilla, Rıdvan; Kaya, Hilal; Aribaş, Alirıza; Cengiz, Hakan; Dicle, Oğuz
2014-01-01
This study aims to develop a medical module, as part of a disaster information system, that provides a real-time medical information flow about the pre-hospital processes of disaster health care, with records transferred, stored, and processed electronically over the internet. To provide information flow among professionals in a disaster, coordinate the health care team, and deliver complete information to the designated people in real time, the database applications were built with a Microsoft Access database and the SQL query language, and the system was developed on the Microsoft .NET platform in C#. The disaster information system medical module was designed for use in the disaster area, field hospitals, nearby hospitals, temporary living areas such as tent cities, and dispatch vehicles, and to provide information flow between medical officials and data centres. To allow fast recording of disaster victim data, health care professionals were given access to a database designed through process analysis and the creation of minimal datasets. The database fields permit both the entry of new data and the searching of data recorded before the disaster. A web application gives data entry and query access to the database through interfaces matched to each user's login credentials and access level. The homepage and user interfaces built on the database were made available to users through the web site www.afmedinfo.com. This study offers a recommendation on how disaster information systems can be used in the field of health, develops awareness that a disaster information system should not be perceived only as an early warning system, and sets out the content and distinctive features of the health care practices of disaster information systems.
eHealth Networking Information Systems - The New Quality of Information Exchange.
Messer-Misak, Karin; Reiter, Christoph
2017-01-01
The development and introduction of platforms that enable interdisciplinary exchange on current developments and projects in the area of eHealth have been stimulated by different authorities. The aim of this project was to develop a repository of eHealth projects that makes the wealth of such projects visible and enables mutual learning through the sharing of experiences and good practice. The content of the database and the search criteria, as well as their categories, were determined in close coordination and cooperation with stakeholders from the specialist areas. Technically, we used Java Server Faces (JSF) to implement the frontend of the web application. Access to structured information on projects can support stakeholders in combining skills and knowledge residing in different places to create new solutions and approaches within a network of evolving competencies and opportunities. A regional database is the beginning of a structured collection and presentation of projects, which can then be incorporated into a broader context. The next step will be to unify this information transparently.
A systematic review of publications studies on medical tourism
Masoud, Ferdosi; Alireza, Jabbari; Mahmoud, Keyvanara; Zahra, Agharahimi
2013-01-01
Introduction: Medical tourism is a complex area of study. Materials and Methods: Full-text articles from the Institute for Scientific Information (ISI), Science Direct, Emerald, Oxford, Magiran, and Scientific Information Database (SID) databases were used to systematically examine articles about medical tourism published in the interval 2000-2011. The articles obtained were analyzed using descriptive statistics and content-analysis categories. Results: Among the 28 articles reviewed, 11 were research articles, three were case studies from Mexico, India, Hungary, Germany, and Iran, and 14 were case studies or reviews of documents and data. The main topics of study included the definition of medical tourism, medical tourists' motivation, the development of medical tourism, ethical issues in medical tourism, impacts on health, and medical tourism marketing. Conclusion: The findings indicate how medical tourism is defined in various articles and what motivates medical tourists. However, most studies point to the benefits of medical tourism for developing countries, while studies from more developed countries reflect its consequences. PMID:24251287
Content-based video indexing and searching with wavelet transformation
NASA Astrophysics Data System (ADS)
Stumpf, Florian; Al-Jawad, Naseer; Du, Hongbo; Jassim, Sabah
2006-05-01
Biometric databases form an essential tool in the fight against international terrorism, organised crime and fraud. Various government and law enforcement agencies have their own biometric databases consisting of combination of fingerprints, Iris codes, face images/videos and speech records for an increasing number of persons. In many cases personal data linked to biometric records are incomplete and/or inaccurate. Besides, biometric data in different databases for the same individual may be recorded with different personal details. Following the recent terrorist atrocities, law enforcing agencies collaborate more than before and have greater reliance on database sharing. In such an environment, reliable biometric-based identification must not only determine who you are but also who else you are. In this paper we propose a compact content-based video signature and indexing scheme that can facilitate retrieval of multiple records in face biometric databases that belong to the same person even if their associated personal data are inconsistent. We shall assess the performance of our system using a benchmark audio visual face biometric database that has multiple videos for each subject but with different identity claims. We shall demonstrate that retrieval of relatively small number of videos that are nearest, in terms of the proposed index, to any video in the database results in significant proportion of that individual biometric data.
Suicide reporting content analysis: abstract development and reliability.
Gould, Madelyn S; Midle, Jennifer Bassett; Insel, Beverly; Kleinman, Marjorie
2007-01-01
Despite substantial research on media influences and the development of media guidelines on suicide reporting, research on the specifics of media stories that facilitate suicide contagion has been limited. The goal of the present study was to develop a content analytic strategy to code features in media suicide reports presumed to be influential in suicide contagion and determine the interrater reliability of the qualitative characteristics abstracted from newspaper stories. A random subset of 151 articles from a database of 1851 newspaper suicide stories published during 1988 through 1996, which were collected as part of a national study in the United States to identify factors associated with the initiation of youth suicide clusters, were evaluated. Using a well-defined content-analysis procedure, the agreement between raters in scoring key concepts of suicide reports from the headline, the pictorial presentation, and the text were evaluated. The results show that while the majority of variables in the content analysis were very reliable, assessed using the kappa statistic, and obtained excellent percentages of agreement, the reliability of complicated constructs, such as sensationalizing, glorifying, or romanticizing the suicide, was comparatively low. The data emphasize that before effective guidelines and responsible suicide reporting can ensue, further explication of suicide story constructs is necessary to ensure the implementation and compliance of responsible reporting on behalf of the media.
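Interrater reliability of the kind reported here is commonly assessed with Cohen's kappa; the sketch below computes it for two raters coding the same articles, with invented ratings (a minimal illustration, not the study's actual coding system):

from collections import Counter

def cohens_kappa(r1: list, r2: list) -> float:
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    n = len(r1)
    observed = sum(a == b for a, b in zip(r1, r2)) / n
    c1, c2 = Counter(r1), Counter(r2)
    expected = sum(c1[k] * c2[k] for k in c1) / n**2   # chance agreement
    return (observed - expected) / (1 - expected)

rater1 = ["yes", "yes", "no", "no", "yes", "no"]
rater2 = ["yes", "no",  "no", "no", "yes", "no"]
print(round(cohens_kappa(rater1, rater2), 3))   # -> 0.667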
Conversion of a traditional image archive into an image resource on compact disc.
Andrew, S M; Benbow, E W
1997-01-01
A traditional archive of pathology images organised on 35 mm slides was converted into a database of images stored on compact disc (CD-ROM), and textual descriptions were added to each image record. Students on a didactic pathology course found this resource useful as an aid to revision, despite relative computer illiteracy, and it is anticipated that students on a new problem-based learning course, which incorporates experience with information technology, will benefit even more readily when they use the database as an educational resource. A text and image database on CD-ROM can be updated repeatedly, and the content manipulated to reflect the content and style of the courses it supports. PMID:9306931
Content Based Image Retrieval based on Wavelet Transform coefficients distribution
Lamard, Mathieu; Cazuguel, Guy; Quellec, Gwénolé; Bekri, Lynda; Roux, Christian; Cochener, Béatrice
2007-01-01
In this paper we propose a content based image retrieval method for diagnosis aid in medical fields. We characterize images without extracting significant features by using distribution of coefficients obtained by building signatures from the distribution of wavelet transform. The research is carried out by computing signature distances between the query and database images. Several signatures are proposed; they use a model of wavelet coefficient distribution. To enhance results, a weighted distance between signatures is used and an adapted wavelet base is proposed. Retrieval efficiency is given for different databases including a diabetic retinopathy, a mammography and a face database. Results are promising: the retrieval efficiency is higher than 95% for some cases using an optimization process. PMID:18003013
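A minimal sketch of the signature idea, assuming the PyWavelets package: summarize the distribution of detail-subband coefficients per image and compare images by a weighted distance between signatures. The per-subband standard deviation and the toy images are simplifications of the paper's coefficient-distribution model:

import numpy as np
import pywt

def wavelet_signature(img: np.ndarray, level: int = 2) -> np.ndarray:
    """Std-dev of each detail subband as a simple distribution summary."""
    coeffs = pywt.wavedec2(img, "haar", level=level)
    return np.array([band.std() for detail in coeffs[1:] for band in detail])

def signature_distance(s1: np.ndarray, s2: np.ndarray, weights=None) -> float:
    """Weighted L1 distance between two signatures."""
    weights = np.ones_like(s1) if weights is None else weights
    return float(np.sum(weights * np.abs(s1 - s2)))

rng = np.random.default_rng(2)
query, candidate = rng.random((64, 64)), rng.random((64, 64))
print(signature_distance(wavelet_signature(query), wavelet_signature(candidate)))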
Becoming customer-driven: one health system's story.
Bagnell, A
1998-01-01
Market research was done by Crozer-Keystone Health System to better understand the new health care consumer. The information will assist in developing, promoting, and delivering products and services of maximum value to current and prospective consumers. The system is responding by bundling and delivering products and services around consumer-based dimensions, developing new and better ways to improve customer convenience, access, and service. Operationalizing these initiatives for change involves building an information infrastructure of extensive content and customer databases, using new technologies to customize communications and ultimately service components.
Content and Accessibility of Shoulder and Elbow Fellowship Web Sites in the United States.
Young, Bradley L; Oladeji, Lasun O; Cichos, Kyle; Ponce, Brent
2016-01-01
Increasing numbers of training physicians are using the Internet to gather information about graduate medical education programs. The content and accessibility of web sites that provide this information have been demonstrated to influence applicants' decisions. Assessments of orthopedic fellowship web sites including sports medicine, pediatrics, hand and spine have found varying degrees of accessibility and material. The purpose of this study was to evaluate the accessibility and content of the American Shoulder and Elbow Surgeons (ASES) fellowship web sites (SEFWs). A complete list of ASES programs was obtained from a database on the ASES web site. The accessibility of each SEFWs was assessed by the existence of a functioning link found in the database and through Google®. Then, the following content areas of each SEFWs were evaluated: fellow education, faculty/previous fellow information, and recruitment. At the time of the study, 17 of the 28 (60.7%) ASES programs had web sites accessible through Google®, and only five (17.9%) had functioning links in the ASES database. Nine programs lacked a web site. Concerning web site content, the majority of SEFWs contained information regarding research opportunities, research requirements, case descriptions, meetings and conferences, teaching responsibilities, attending faculty, the application process, and a program description. Fewer than half of the SEFWs provided information regarding rotation schedules, current fellows, previous fellows, on-call expectations, journal clubs, medical school of current fellows, residency of current fellows, employment of previous fellows, current research, and previous research. A large portion of ASES fellowship programs lacked functioning web sites, and even fewer provided functioning links through the ASES database. Valuable information for potential applicants was largely inadequate across present SEFWs.
Seniors' Online Communities: A Quantitative Content Analysis
ERIC Educational Resources Information Center
Nimrod, Galit
2010-01-01
Purpose: To examine the contents and characteristics of seniors' online communities and to explore their potential benefits to older adults. Design and Methods: Quantitative content analysis of a full year's data from 14 leading online communities using a novel computerized system. The overall database included 686,283 messages. Results: There was…
Changing the Latitudes and Attitudes about Content Analysis Research
ERIC Educational Resources Information Center
Brank, Eve M.; Fox, Kathleen A.; Youstin, Tasha J.; Boeppler, Lee C.
2008-01-01
The current research employs the use of content analysis to teach research methods concepts among students enrolled in an upper division research methods course. Students coded and analyzed Jimmy Buffett song lyrics rather than using a downloadable database or collecting survey data. Students' knowledge of content analysis concepts increased after…
Chado Controller: advanced annotation management with a community annotation system
Guignon, Valentin; Droc, Gaëtan; Alaux, Michael; Baurens, Franc-Christophe; Garsmeur, Olivier; Poiron, Claire; Carver, Tim; Rouard, Mathieu; Bocs, Stéphanie
2012-01-01
Summary: We developed a controller that is compliant with the Chado database schema, GBrowse and genome annotation-editing tools such as Artemis and Apollo. It enables the management of public and private data, monitors manual annotation (with controlled vocabularies, structural and functional annotation controls) and stores versions of annotation for all modified features. The Chado controller uses PostgreSQL and Perl. Availability: The Chado Controller package is available for download at http://www.gnpannot.org/content/chado-controller and runs on any Unix-like operating system; documentation is available at http://www.gnpannot.org/content/chado-controller-doc. The system can be tested using the GNPAnnot Sandbox at http://www.gnpannot.org/content/gnpannot-sandbox-form. Contact: valentin.guignon@cirad.fr; stephanie.sidibe-bocs@cirad.fr. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:22285827
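For readers unfamiliar with Chado, the hedged sketch below reads annotated features from a Chado/PostgreSQL instance using psycopg2. The connection settings are placeholders, and the query assumes Chado's standard feature and cvterm tables rather than anything specific to the Chado Controller itself:

import psycopg2

# Placeholder connection settings for a local Chado instance.
conn = psycopg2.connect(dbname="chado", user="reader", host="localhost")
with conn, conn.cursor() as cur:
    # Join each feature to its type term (e.g. "gene") via cvterm.
    cur.execute("""
        SELECT f.uniquename, t.name AS feature_type
        FROM feature f
        JOIN cvterm t ON t.cvterm_id = f.type_id
        WHERE t.name = %s
        LIMIT 10;
    """, ("gene",))
    for uniquename, ftype in cur.fetchall():
        print(uniquename, ftype)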
Software Sharing Enables Smarter Content Management
NASA Technical Reports Server (NTRS)
2007-01-01
In 2004, NASA established a technology partnership with Xerox Corporation to develop high-tech knowledge management systems while providing new tools and applications that support the Vision for Space Exploration. In return, NASA provides research and development assistance to Xerox to progress its product line. The first result of the technology partnership was a new system called the NX Knowledge Network (based on Xerox DocuShare CPX). Created specifically for NASA's purposes, this system combines Netmark, a practical database content management software created by the Intelligent Systems Division of NASA's Ames Research Center, with complementary software from Xerox's global research centers and DocuShare. NX Knowledge Network was tested at the NASA Astrobiology Institute, and is widely used for document management at Ames, Langley Research Center, within the Mission Operations Directorate at Johnson Space Center, and at the Jet Propulsion Laboratory, for mission-related tasks.
Conjunctive programming: An interactive approach to software system synthesis
NASA Technical Reports Server (NTRS)
Tausworthe, Robert C.
1992-01-01
This report introduces a technique of software documentation called conjunctive programming and discusses its role in the development and maintenance of software systems. The report also describes the conjoin tool, an adjunct to assist practitioners. Aimed at supporting software reuse while conforming with conventional development practices, conjunctive programming is defined as the extraction, integration, and embellishment of pertinent information obtained directly from an existing database of software artifacts, such as specifications, source code, configuration data, link-edit scripts, utility files, and other relevant information, into a product that achieves desired levels of detail, content, and production quality. Conjunctive programs typically include automatically generated tables of contents, indexes, cross references, bibliographic citations, tables, and figures (including graphics and illustrations). This report presents an example of conjunctive programming by documenting the use and implementation of the conjoin program.
Life-Cycle Cost Database. Volume II. Appendices E, F, and G. Sample Data Development.
1983-01-01
Prepared by the Bendix Field Engineering Corporation, Columbia, Maryland. The report presents sample life-cycle cost data, with sections covering an introduction, objectives, an engineering survey, and a system description. The sample data concern custodial and maintenance tasks (e.g., damp mopping and buffing, routine and heavy-duty vacuuming, strip and refinish, machine scrubbing, surface shampooing, extraction cleaning, pick-up, and repair by location) in a typical administrative-type building over a 25-year period; an on-site survey was conducted by Bendix Field Engineering.
Content Based Image Matching for Planetary Science
NASA Astrophysics Data System (ADS)
Deans, M. C.; Meyer, C.
2006-12-01
Planetary missions generate large volumes of data. With the MER rovers still functioning on Mars, PDS contains over 7200 released images from the Microscopic Imagers alone. These data products are only searchable by keys such as the Sol, spacecraft clock, or rover motion counter index, with little connection to the semantic content of the images. We have developed a method for matching images based on the visual textures they contain. For every image in a database, a series of filters computes the image's response to localized frequencies and orientations. The filter responses are turned into a low-dimensional descriptor vector, generating a 37-dimensional fingerprint. For images such as the MER MI, this represents a compression ratio of 99.9965% (the fingerprint is approximately 0.0035% the size of the original image). At query time, fingerprints are quickly matched to find images with similar appearance. Image databases containing several thousand images are preprocessed offline in a matter of hours; image matches from the database are found in a matter of seconds. We have demonstrated this image matching technique using three sources of data. The first database consists of 7200 images from the MER Microscopic Imager. The second consists of 3500 images from the Narrow Angle Mars Orbital Camera (MOC-NA), which were cropped into 1024×1024 sub-images for consistency. The third consists of 7500 scanned archival photos from the Apollo Metric Camera. Example query results from all three data sources are shown. We have also carried out user tests to evaluate matching performance by hand-labeling results. The user tests verify an approximately 20% false-positive rate within the top 14 results for the MOC-NA and MER MI data, meaning that typically 10 to 12 of the 14 results match the query image sufficiently. This represents a powerful search tool for databases of thousands of images, where the a priori match probability for an image might be less than 1%. Qualitatively, correct matches can also be confirmed by verifying MI images taken in the same z-stack, or MOC image tiles taken from the same image strip. False negatives are difficult to quantify, as quantifying them would mean finding matches in a database of thousands of images that the algorithm did not detect.
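A hedged sketch of the fingerprint-and-match pipeline described: filter each image, reduce the responses to a short descriptor vector, and rank database entries by descriptor distance. The crude oriented-gradient "filter bank" here is a stand-in for the mission pipeline's actual localized frequency/orientation filters:

import numpy as np

def fingerprint(img: np.ndarray) -> np.ndarray:
    """Summarize oriented filter responses as a short descriptor vector."""
    gy, gx = np.gradient(img.astype(float))
    responses = [gx, gy, gx + gy, gx - gy]          # 4 crude "orientations"
    return np.array([f(r) for r in responses for f in (np.mean, np.std)])

rng = np.random.default_rng(3)
database = rng.random((500, 32, 32))                # stand-in image archive
index = np.stack([fingerprint(im) for im in database])

query = database[42] + 0.01 * rng.random((32, 32))  # near-duplicate query
dists = np.linalg.norm(index - fingerprint(query), axis=1)
print(int(np.argmin(dists)))                        # expected to recover 42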
Neurosurgery Residency Websites: A Critical Evaluation.
Skovrlj, Branko; Silvestre, Jason; Ibeh, Chinwe; Abbatematteo, Joseph M; Mocco, J
2015-09-01
To evaluate the accessibility of educational and recruitment content of Neurosurgery Residency Websites (NRWs), program lists from the Fellowship and Residency Electronic Interactive Database (FREIDA), the Electronic Residency Application Service (ERAS), and the American Association of Neurological Surgeons (AANS) were accessed for the 2015 Match. These databases were assessed for accessibility of information and responsive program contacts. The presence of online recruitment and education variables was assessed, and correlations between program characteristics and website comprehensiveness were made. All 103 neurosurgery residency programs had an NRW. The AANS database provided the most viable website links, with 65 (63%). No links existed for 5 (5%) programs. A minority of program contacts responded via e-mail (46%). A minority of recruitment (46%) and educational (49%) variables were available on the NRWs. Larger programs, as defined by the number of yearly residency spots and clinical faculty, maintained greater online content than smaller programs. Similar trends were seen with programs affiliated with a ranked medical school and hospital. Multiple prior studies have demonstrated that medical students applying to neurosurgery rely heavily on residency program websites. As such, the paucity of content on NRWs presents an opportunity to optimize online resources for neurosurgery training. Ensuring that individual programs provide relevant content, making that content easier to find, and adhering to established web design principles could increase the usability of NRWs. Copyright © 2015 Elsevier Inc. All rights reserved.
Adapting a large database of point of care summarized guidelines: a process description.
Delvaux, Nicolas; Van de Velde, Stijn; Aertgeerts, Bert; Goossens, Martine; Fauquert, Benjamin; Kunnamo, Ilka; Van Royen, Paul
2017-02-01
Questions posed at the point of care (POC) can be answered using POC summarized guidelines. To implement a national POC information resource, we subscribed to a large database of POC summarized guidelines to complement locally available guidelines. Our challenge was in developing a sustainable strategy for adapting almost 1000 summarized guidelines. The aim of this paper was to describe our process for adapting a database of POC summarized guidelines. An adaptation process based on the ADAPTE framework was tailored to be used by a heterogeneous group of participants. Guidelines were assessed on content and on applicability to the Belgian context. To improve efficiency, we chose to first aim our efforts towards those guidelines most important to primary care doctors. Over a period of 3 years, we screened about 80% of 1000 international summarized guidelines. For those guidelines identified as most important for primary care doctors, we noted that in about half of the cases, remarks were made concerning content. On the other hand, at least two-thirds of all screened guidelines required no changes when evaluating their local usability. Adapting a large body of POC summarized guidelines using a formal adaptation process is possible, even when faced with limited resources. This can be done by creating an efficient and collaborative effort and ensuring user-friendly procedures. Our experiences show that even though in most cases guidelines can be adopted without adaptations, careful review of guidelines developed in a different context remains necessary. Streamlining international efforts in adapting international POC information resources and adopting similar adaptation processes may lessen duplication efforts and prove more cost-effective. © 2015 The Authors. Journal of Evaluation in Clinical Practice published by John Wiley & Sons, Ltd.
Healthier vending machines in workplaces: both possible and effective.
Gorton, Delvina; Carter, Julie; Cvjetan, Branko; Ni Mhurchu, Cliona
2010-03-19
To develop healthier vending guidelines and assess their effect on the nutrient content and sales of snack products sold through hospital vending machines, and on staff satisfaction. Nutrition guidelines for healthier vending machine products were developed and implemented in 14 snack vending machines at two hospital sites in Auckland, New Zealand. The guidelines comprised threshold criteria for energy, saturated fat, sugar, and sodium content of vended foods. Sales data were collected prior to introduction of the guidelines (March-May 2007), and again post-introduction (March-May 2008). A food composition database was used to assess impact of the intervention on nutrient content of purchases. A staff survey was also conducted pre- and post-intervention to assess acceptability. Pre-intervention, 16% of staff used vending machines once a week or more, with little change post-intervention (15%). The guidelines resulted in a substantial reduction in the amount of energy (-24%), total fat (-32%), saturated fat (-41%), and total sugars (-30%) per 100 g product sold. Sales volumes were not affected, and the proportion of staff satisfied with vending machine products increased. Implementation of nutrition guidelines in hospital vending machines led to substantial improvements in nutrient content of vending products sold. Wider implementation of these guidelines is recommended.
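Because the guidelines are threshold criteria, they are straightforward to encode; the sketch below checks a product against per-serving nutrient limits. The threshold values and the sample product are invented for illustration and are not the published criteria:

THRESHOLDS = {"energy_kj": 1000, "sat_fat_g": 3.0, "sugar_g": 20.0, "sodium_mg": 400}

def meets_guidelines(product: dict) -> bool:
    """True if every nutrient (per serving) is at or below its threshold."""
    return all(product[k] <= limit for k, limit in THRESHOLDS.items())

snack = {"name": "fruit bar", "energy_kj": 750, "sat_fat_g": 1.2,
         "sugar_g": 18.0, "sodium_mg": 90}
print(snack["name"], "allowed:", meets_guidelines(snack))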
WWW database of optical constants for astronomy
NASA Astrophysics Data System (ADS)
Henning, Th.; Il'In, V. B.; Krivova, N. A.; Michel, B.; Voshchinnikov, N. V.
1999-04-01
The database we announce contains references to papers, data files and links to Internet resources related to measurements and calculations of the optical constants of materials of astronomical interest: different silicates, ices, oxides, sulfides, carbides, and carbonaceous species from amorphous carbon to graphite and diamonds, etc. We describe the general structure and content of the database, which now has free access via the Internet: http://www.astro.spbu.ru/JPDOC/entry.html or http://www.astro.uni-jena.de/Users/database/entry.html
Palaeo sea-level and ice-sheet databases: problems, strategies and perspectives
NASA Astrophysics Data System (ADS)
Rovere, Alessio; Düsterhus, André; Carlson, Anders; Barlow, Natasha; Bradwell, Tom; Dutton, Andrea; Gehrels, Roland; Hibbert, Fiona; Hijma, Marc; Horton, Benjamin; Klemann, Volker; Kopp, Robert; Sivan, Dorit; Tarasov, Lev; Törnqvist, Torbjorn
2016-04-01
Databases of palaeoclimate data have driven many major developments in understanding the Earth system. The measurement and interpretation of palaeo sea-level and ice-sheet data that form such databases pose considerable challenges to the scientific communities that use them for further analyses. In this paper, we build on the experience of the PALSEA (PALeo constraints on SEA level rise) community, a working group within the PAGES (Past Global Changes) project, to describe the challenges and best strategies that can be adopted to build a self-consistent and standardised database of geological and geochemical data related to palaeo sea levels and ice sheets. Our aim in this paper is to identify key points that need attention and subsequent funding when undertaking the task of database creation. We conclude that any sea-level or ice-sheet database must be divided into three instances: i) measurement; ii) interpretation; iii) database creation. Measurement should include position, age, description of geological features, and quantification of uncertainties, all described as objectively as possible. Interpretation can be subjective, but it should always include uncertainties and all the possible interpretations, without unjustified a priori exclusions. We propose that, in the creation of a database, an approach based on Accessibility, Transparency, Trust, Availability, Continued updating, Completeness and Communication of content (ATTAC3) must be adopted. It is also essential to consider the community structure that creates and benefits from a database. We conclude that funding sources should consider addressing not only the creation of original data in specific research-question-oriented projects, but also the possibility of using part of the funding for IT-related and database creation tasks, which are essential to guarantee accessibility and maintenance of the collected data.
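The three-instance structure proposed above is easy to make concrete. The following is a minimal sketch in Python of how measurement, interpretation, and database record could be kept as separate layers; all class and field names are hypothetical illustrations, not the PALSEA schema.

```python
# A minimal sketch of the measurement / interpretation / database separation.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Measurement:
    position: Tuple[float, float, float]  # (lat, lon, elevation); datum recorded elsewhere
    age_ka: float                         # age estimate, thousands of years
    age_uncertainty_ka: float
    description: str                      # objective description of the geological feature

@dataclass
class Interpretation:
    relative_sea_level_m: float
    uncertainty_m: float
    rationale: str                        # subjective reasoning, recorded explicitly
    alternatives: List[str] = field(default_factory=list)  # no a priori exclusions

@dataclass
class DatabaseRecord:
    measurement: Measurement
    interpretations: List[Interpretation]  # all plausible readings travel with the data
    source: str                            # literature citation, for transparency and trust
```

Keeping the subjective interpretation in its own layer, with an explicit list of alternatives, is what allows later re-interpretation without touching the measured data.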
Named Entity Recognition in a Hungarian NL Based QA System
NASA Astrophysics Data System (ADS)
Tikkl, Domonkos; Szidarovszky, P. Ferenc; Kardkovacs, Zsolt T.; Magyar, Gábor
In the WoW project, our purpose is to create a complex search interface with the following features: search in the deep web content of contracted partners' databases, processing of Hungarian natural language (NL) questions and their transformation into SQL queries for database access, and image search supported by a visual thesaurus that describes the visual content of images in a structural form (also in Hungarian). This paper primarily focuses on a particular problem of the question processing task: entity recognition. Before going into details we give a short overview of the project's aims.
Fish and chips: Various methodologies demonstrate utility of a 16,006-gene salmonid microarray
von Schalburg, Kristian R; Rise, Matthew L; Cooper, Glenn A; Brown, Gordon D; Gibbs, A Ross; Nelson, Colleen C; Davidson, William S; Koop, Ben F
2005-01-01
Background We have developed and fabricated a salmonid microarray containing cDNAs representing 16,006 genes. The genes spotted on the array have been stringently selected from Atlantic salmon and rainbow trout expressed sequence tag (EST) databases. The EST databases presently contain over 300,000 sequences from over 175 salmonid cDNA libraries derived from a wide variety of tissues and different developmental stages. In order to evaluate the utility of the microarray, a number of hybridization techniques and screening methods have been developed and tested. Results We have analyzed and evaluated the utility of a microarray containing 16,006 (16K) salmonid cDNAs in a variety of potential experimental settings. We quantified the amount of transcriptome binding that occurred in cross-species, organ complexity and intraspecific variation hybridization studies. We also developed a methodology to rapidly identify and confirm the contents of a bacterial artificial chromosome (BAC) library containing Atlantic salmon genomic DNA. Conclusion We validate and demonstrate the usefulness of the 16K microarray over a wide range of teleosts, even for transcriptome targets from species distantly related to salmonids. We show the potential of the use of the microarray in a variety of experimental settings through hybridization studies that examine the binding of targets derived from different organs and tissues. Intraspecific variation in transcriptome expression is evaluated and discussed. Finally, BAC hybridizations are demonstrated as a rapid and accurate means to identify gene content. PMID:16164747
Lunar e-Library: Putting Space History to Work
NASA Technical Reports Server (NTRS)
McMahan, Tracy A.; Shea, Charlotte A.; Finckenor, Miria
2006-01-01
As NASA plans and implements the Vision for Space Exploration, managers, engineers, and scientists need historically important information that is readily available and easily accessed. The Lunar e-Library - a searchable collection of 1100 electronic (.PDF) documents - makes it easy to find critical technical data and lessons learned and put space history knowledge in action. The Lunar e-Library, a DVD knowledge database, was developed by NASA to shorten research time and put knowledge at users' fingertips. Funded by NASA's Space Environments and Effects (SEE) Program headquartered at Marshall Space Flight Center (MSFC) and the MSFC Materials and Processes Laboratory, the goal of the Lunar e-Library effort was to identify key lessons learned from Apollo and other lunar programs and missions and to provide technical information from those programs in an easy-to-use format. The SEE Program began distributing the Lunar e-Library knowledge database in 2006. This paper describes the Lunar e-Library development process (including a description of the databases and resources used to acquire the documents) and the contents of the DVD product, demonstrates its usefulness with focused searches, and provides information on how to obtain this free resource.
2016 update of the PRIDE database and its related tools
Vizcaíno, Juan Antonio; Csordas, Attila; del-Toro, Noemi; Dianes, José A.; Griss, Johannes; Lavidas, Ilias; Mayer, Gerhard; Perez-Riverol, Yasset; Reisinger, Florian; Ternent, Tobias; Xu, Qing-Wei; Wang, Rui; Hermjakob, Henning
2016-01-01
The PRoteomics IDEntifications (PRIDE) database is one of the world-leading data repositories of mass spectrometry (MS)-based proteomics data. Since the beginning of 2014, PRIDE Archive (http://www.ebi.ac.uk/pride/archive/) has been the new PRIDE archival system, replacing the original PRIDE database. Here we summarize the developments in PRIDE resources and related tools since the previous update manuscript in the Database Issue in 2013. PRIDE Archive constitutes a complete redevelopment of the original PRIDE, comprising a new storage backend, data submission system and web interface, among other components. PRIDE Archive supports the most-widely used PSI (Proteomics Standards Initiative) data standard formats (mzML and mzIdentML) and implements the data requirements and guidelines of the ProteomeXchange Consortium. The wide adoption of ProteomeXchange within the community has triggered an unprecedented increase in the number of submitted data sets (around 150 data sets per month). We outline some statistics on the current PRIDE Archive data contents. We also report on the status of the PRIDE related stand-alone tools: PRIDE Inspector, PRIDE Converter 2 and the ProteomeXchange submission tool. Finally, we give a brief update on the resources under development ‘PRIDE Cluster’ and ‘PRIDE Proteomes’, which provide a complementary view and quality-scored information of the peptide and protein identification data available in PRIDE Archive. PMID:26527722
MIRNA-DISTILLER: A Stand-Alone Application to Compile microRNA Data from Databases.
Rieger, Jessica K; Bodan, Denis A; Zanger, Ulrich M
2011-01-01
MicroRNAs (miRNA) are small non-coding RNA molecules of ∼22 nucleotides which regulate large numbers of genes by binding to seed sequences at the 3'-untranslated region of target gene transcripts. The target mRNA is then usually degraded or its translation is inhibited, thus resulting in posttranscriptional down-regulation of gene expression at the mRNA and/or protein level. Due to the bioinformatic difficulties in predicting functional miRNA binding sites, several publicly available databases have been developed that predict miRNA binding sites based on different algorithms. The parallel use of different databases is currently indispensable, but highly inconvenient and time consuming, especially when working with numerous genes of interest. We have therefore developed a new stand-alone program, termed MIRNA-DISTILLER, which compiles miRNA data for given target genes from public databases. Currently implemented are TargetScan, microCosm, and miRDB, which may be queried independently, pairwise, or together to calculate the respective intersections. Data are stored locally for application of further analysis tools including freely definable biological parameter filters, customized output lists for both miRNAs and target genes, and various graphical facilities. The software, a data example file and a tutorial are freely available at http://www.ikp-stuttgart.de/content/language1/html/10415.asp PMID:22303335
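The intersection queries described above reduce to simple set logic, sketched below in Python; the miRNA names are invented placeholders, and MIRNA-DISTILLER's actual implementation is not shown in the paper.

```python
# A minimal sketch: target predictions for one gene from three databases,
# with pairwise and three-way intersections. All miRNA names are placeholders.
targetscan = {"miR-21", "miR-34a", "miR-155", "let-7a"}
microcosm  = {"miR-21", "miR-155", "miR-222"}
mirdb      = {"miR-21", "miR-34a", "miR-222"}

pairwise = {
    ("TargetScan", "microCosm"): targetscan & microcosm,
    ("TargetScan", "miRDB"):     targetscan & mirdb,
    ("microCosm", "miRDB"):      microcosm & mirdb,
}
consensus = targetscan & microcosm & mirdb  # predicted by all three sources

for pair, shared in pairwise.items():
    print(pair, sorted(shared))
print("All three:", sorted(consensus))      # here: ['miR-21']
```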
Jensen, Chad D; Duraccio, Kara M; Barnett, Kimberly A; Stevens, Kimberly S
2016-12-01
Research examining effects of visual food cues on appetite-related brain processes and eating behavior has proliferated. Recently, investigators have developed food image databases for use across experimental studies examining appetite and eating behavior. The food-pics image database represents a standardized, freely available image library originally validated in a large sample composed primarily of adults. The suitability of the images for use with adolescents has not been investigated. The aim of the present study was to evaluate the appropriateness of the food-pics image library for appetite and eating research with adolescents. Three hundred and seven adolescents (ages 12-17) provided ratings of recognizability, palatability, and desire to eat for images from the food-pics database. Moreover, participants rated the caloric content (high vs. low) and healthiness (healthy vs. unhealthy) of each image. Adolescents rated approximately 75% of the food images as recognizable. Approximately 65% of recognizable images were correctly categorized as high vs. low calorie and 63% were correctly classified as healthy vs. unhealthy in 80% or more of image ratings. These results suggest that a smaller subset of the food-pics image database is appropriate for use with adolescents. With some modifications to included images, the food-pics image database appears to be appropriate for use in experimental appetite and eating-related research conducted with adolescents. Copyright © 2016 Elsevier Ltd. All rights reserved.
Library of molecular associations: curating the complex molecular basis of liver diseases.
Buchkremer, Stefan; Hendel, Jasmin; Krupp, Markus; Weinmann, Arndt; Schlamp, Kai; Maass, Thorsten; Staib, Frank; Galle, Peter R; Teufel, Andreas
2010-03-20
Systems biology approaches offer novel insights into the development of chronic liver diseases. Current genomic databases supporting systems biology analyses are mostly based on microarray data. Although these data often cover genome-wide expression, the validity of single microarray experiments remains questionable. However, for systems biology approaches addressing the interactions of molecular networks, comprehensive but also highly validated data are necessary. We have therefore generated the first comprehensive database for published molecular associations in human liver diseases. It is based on abstracts published in PubMed and is aimed at closing the gap between the genome-wide coverage of low validity from microarray data and individual highly validated data from PubMed. After an initial text mining process, the extracted abstracts were all manually validated to confirm content and potential genetic associations and may therefore be highly trusted. All data were stored in a publicly available database, Library of Molecular Associations http://www.medicalgenomics.org/databases/loma/news, currently holding approximately 1260 confirmed molecular associations for chronic liver diseases such as HCC, CCC, liver fibrosis, NASH/fatty liver disease, AIH, PBC, and PSC. We furthermore transformed these data into a powerful resource for molecular liver research by connecting them to multiple biomedical information resources. Together, this is the first available database providing a comprehensive view and analysis options for published molecular associations on multiple liver diseases.
[Application of atomic absorption spectrometry in the engine knock detection].
Chen, Li-Dan
2013-02-01
Existing diagnosis methods based on human experience, as well as instrument-assisted auxiliary diagnosis methods, make it difficult to diagnose engine knock quickly. Atomic absorption spectrometry was therefore used in an innovative way to detect automobile engine knock. After determining the Fe, Al, Cu, Cr and Pb content in 35 groups of Audi A6 engine oil samples, covering 2,000-70,000 kilometers of travel at sampling intervals of 2,000 kilometers, a database of the main metal content of the same automobile engine at different mileages was established. The research shows that the main metal content fluctuates within a certain range. In practical engineering applications, determining the main metal content of engine oil and comparing it with the database values can help diagnose the type and location of engine knock without disassembling the engine, reduce vehicle maintenance costs, and improve the accuracy of engine knock fault diagnosis.
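As a rough illustration of the comparison step described above, the sketch below flags metals whose measured concentration falls outside the fluctuation range stored for a given mileage band. All numeric ranges and sample values are invented placeholders, not data from the study.

```python
# A minimal sketch of comparing a measured oil sample against the database's
# expected fluctuation range for one mileage band. Values are placeholders.
REFERENCE_RANGES = {  # mg/kg, hypothetical band for one mileage interval
    "Fe": (5.0, 40.0),
    "Cu": (1.0, 15.0),
    "Al": (2.0, 20.0),
}

def flag_abnormal(sample: dict) -> list:
    """Return the metals whose measured content falls outside the band."""
    return [m for m, v in sample.items()
            if m in REFERENCE_RANGES
            and not (REFERENCE_RANGES[m][0] <= v <= REFERENCE_RANGES[m][1])]

print(flag_abnormal({"Fe": 55.2, "Cu": 8.1, "Al": 12.0}))  # -> ['Fe']
```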
Whetzel, Patricia L.; Grethe, Jeffrey S.; Banks, Davis E.; Martone, Maryann E.
2015-01-01
The NIDDK Information Network (dkNET; http://dknet.org) was launched to serve the needs of basic and clinical investigators in metabolic, digestive and kidney disease by facilitating access to research resources that advance the mission of the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK). By research resources, we mean the multitude of data, software tools, materials, services, projects and organizations available to researchers in the public domain. Most of these are accessed via web-accessible databases or web portals, each developed, designed and maintained by numerous different projects, organizations and individuals. While many of the large government-funded databases, maintained by agencies such as the European Bioinformatics Institute and the National Center for Biotechnology Information, are well known to researchers, many more that have been developed by and for the biomedical research community are unknown or underutilized. At least part of the problem is the nature of dynamic databases, which are considered part of the “hidden” web, that is, content that is not easily accessed by search engines. dkNET was created specifically to address the challenge of connecting researchers to research resources via these types of community databases and web portals. dkNET functions as a “search engine for data”, searching across millions of database records contained in hundreds of biomedical databases developed and maintained by independent projects around the world. A primary focus of dkNET is centers and projects specifically created to provide high quality data and resources to NIDDK researchers. Through the novel data ingest process used in dkNET, additional data sources can easily be incorporated, allowing it to scale with the growth of digital data and the needs of the dkNET community. Here, we provide an overview of the dkNET portal and its functions. We show how dkNET can be used to address a variety of use cases that involve searching for research resources. PMID:26393351
DrugBank 5.0: a major update to the DrugBank database for 2018.
Wishart, David S; Feunang, Yannick D; Guo, An C; Lo, Elvis J; Marcu, Ana; Grant, Jason R; Sajed, Tanvir; Johnson, Daniel; Li, Carin; Sayeeda, Zinat; Assempour, Nazanin; Iynkkaran, Ithayavani; Liu, Yifeng; Maciejewski, Adam; Gale, Nicola; Wilson, Alex; Chin, Lucy; Cummings, Ryan; Le, Diana; Pon, Allison; Knox, Craig; Wilson, Michael
2018-01-04
DrugBank (www.drugbank.ca) is a web-enabled database containing comprehensive molecular information about drugs, their mechanisms, their interactions and their targets. First described in 2006, DrugBank has continued to evolve over the past 12 years in response to marked improvements to web standards and changing needs for drug research and development. This year's update, DrugBank 5.0, represents the most significant upgrade to the database in more than 10 years. In many cases, existing data content has grown by 100% or more over the last update. For instance, the total number of investigational drugs in the database has grown by almost 300%, the number of drug-drug interactions has grown by nearly 600% and the number of SNP-associated drug effects has grown more than 3000%. Significant improvements have been made to the quantity, quality and consistency of drug indications, drug binding data as well as drug-drug and drug-food interactions. A great deal of brand new data have also been added to DrugBank 5.0. This includes information on the influence of hundreds of drugs on metabolite levels (pharmacometabolomics), gene expression levels (pharmacotranscriptomics) and protein expression levels (pharmacoproteomics). New data have also been added on the status of hundreds of new drug clinical trials and existing drug repurposing trials. Many other important improvements in the content, interface and performance of the DrugBank website have been made and these should greatly enhance its ease of use, utility and potential applications in many areas of pharmacological research, pharmaceutical science and drug education. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
NASA Astrophysics Data System (ADS)
Hugelius, G.; Tarnocai, C.; Broll, G.; Canadell, J. G.; Kuhry, P.; Swanson, D. K.
2012-08-01
High latitude terrestrial ecosystems are key components in the global carbon (C) cycle. Estimates of global soil organic carbon (SOC), however, do not include updated estimates of SOC storage in permafrost-affected soils or representation of the unique pedogenic processes that affect these soils. The Northern Circumpolar Soil Carbon Database (NCSCD) was developed to quantify the SOC stocks in the circumpolar permafrost region (18.7 × 10⁶ km²). The NCSCD is a polygon-based digital database compiled from harmonized regional soil classification maps in which data on soil order coverage have been linked to pedon data (n = 1647) from the northern permafrost regions to calculate SOC content and mass. In addition, new gridded datasets at different spatial resolutions have been generated to facilitate research applications using the NCSCD (standard raster formats for use in Geographic Information Systems and Network Common Data Form files common for applications in numerical models). This paper describes the compilation of the NCSCD spatial framework, the soil sampling and soil analysis procedures used to derive SOC content in pedons from North America and Eurasia and the formatting of the digital files that are available online. The potential applications and limitations of the NCSCD in spatial analyses are also discussed. The database has the doi:10.5879/ecds/00000001. An open access data-portal with all the described GIS-datasets is available online at: http://dev1.geo.su.se/bbcc/dev/ncscd/.
NASA Astrophysics Data System (ADS)
Hugelius, G.; Tarnocai, C.; Broll, G.; Canadell, J. G.; Kuhry, P.; Swanson, D. K.
2013-01-01
High-latitude terrestrial ecosystems are key components in the global carbon (C) cycle. Estimates of global soil organic carbon (SOC), however, do not include updated estimates of SOC storage in permafrost-affected soils or representation of the unique pedogenic processes that affect these soils. The Northern Circumpolar Soil Carbon Database (NCSCD) was developed to quantify the SOC stocks in the circumpolar permafrost region (18.7 × 10⁶ km²). The NCSCD is a polygon-based digital database compiled from harmonized regional soil classification maps in which data on soil order coverage have been linked to pedon data (n = 1778) from the northern permafrost regions to calculate SOC content and mass. In addition, new gridded datasets at different spatial resolutions have been generated to facilitate research applications using the NCSCD (standard raster formats for use in geographic information systems and Network Common Data Form files common for applications in numerical models). This paper describes the compilation of the NCSCD spatial framework, the soil sampling and soil analytical procedures used to derive SOC content in pedons from North America and Eurasia and the formatting of the digital files that are available online. The potential applications and limitations of the NCSCD in spatial analyses are also discussed. The database has the doi:10.5879/ecds/00000001. An open access data portal with all the described GIS-datasets is available online at: http://www.bbcc.su.se/data/ncscd/.
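As an illustration of how gridded NetCDF products like those described above might be used, here is a minimal sketch with the netCDF4 library. The file and variable names are assumptions for illustration, not the NCSCD's actual conventions, and a tiny stand-in file is written first so the read-back is runnable.

```python
# A minimal sketch of reading a gridded SOC product and summing a total stock.
from netCDF4 import Dataset
import numpy as np

# Create a tiny stand-in file; a real analysis would open the NCSCD product
# downloaded from the data portal instead.
with Dataset("soc_demo.nc", "w") as ds:
    ds.createDimension("y", 2)
    ds.createDimension("x", 2)
    v = ds.createVariable("soc_kg_per_m2", "f4", ("y", "x"))  # hypothetical name
    v[:] = [[10.0, 25.0], [40.0, 5.0]]

with Dataset("soc_demo.nc") as ds:
    soc = ds.variables["soc_kg_per_m2"][:]

cell_area_m2 = 1.0e8  # assume 10 km x 10 km cells, purely illustrative
total_pg = float(np.sum(soc)) * cell_area_m2 * 1e-12  # kg C -> Pg C
print(f"Total SOC stock in demo grid: {total_pg:.4f} Pg C")
```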
Setchell, Kenneth D R; Cole, Sidney J
2003-07-02
The reliability of databases on the isoflavone composition of foods designed to estimate dietary intakes is contingent on the assumption that soy foods are consistent in their isoflavone content. To validate this, total and individual isoflavone compositions were determined by HPLC for two different soy protein isolates used in the commercial manufacture of soy foods over a 3-year period (n = 30/isolate) and 85 samples of 40 different brands of soy milks. Total isoflavone concentrations differed markedly between the soy protein isolates, varying by 200-300% over 3 years, whereas the protein content varied by only 3%. Total isoflavone content varied by up to 5-fold among different commercial soy milks and was not consistent between repeat purchases. Whole soybean milks had significantly higher isoflavone levels than those made from soy protein isolates (mean +/- SD, 63.6 +/- 21.9 mg/L, n = 43, vs 30.2 +/- 5.8 mg/L, n = 38, respectively, p < 0.0001), although some isolated soy protein-based milks were similar in content to "whole bean" varieties. The ratio of genistein to daidzein isoflavone forms was higher in isolated soy protein-based versus "whole bean" soy milks (2.72 +/- 0.24 vs 1.62 +/- 0.47, respectively, p < 0.0001), and the greatest variability in isoflavone content was observed among brands of whole bean soy milks. These studies illustrate large variability in the isoflavone content of isolated soy proteins used in food manufacture and in commercial soy milks and reinforce the need to accurately determine the isoflavone content of foods used in dietary intervention studies while exposing the limitations of food databases for estimating daily isoflavone intakes.
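The group comparison reported above can be illustrated with a short sketch. The values below are invented placeholders with roughly the reported means, not the study's measurements, and a Welch two-sample t-test stands in for whatever test the authors used.

```python
# A minimal sketch: whole-bean versus isolate-based soy milk isoflavone
# content (mg/L), compared with a two-sample t-test. Values are placeholders.
from statistics import mean, stdev
from scipy import stats

whole_bean = [62.1, 85.3, 41.7, 70.2, 55.9, 66.4]  # hypothetical mg/L values
isolate    = [29.8, 33.4, 25.1, 31.2, 36.0, 27.9]

t, p = stats.ttest_ind(whole_bean, isolate, equal_var=False)  # Welch's t-test
print(f"whole bean {mean(whole_bean):.1f}±{stdev(whole_bean):.1f} vs "
      f"isolate {mean(isolate):.1f}±{stdev(isolate):.1f} mg/L, p={p:.4f}")
```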
LCS Content Document Application
NASA Technical Reports Server (NTRS)
Hochstadt, Jake
2011-01-01
My project at KSC during my spring 2011 internship was to develop a Ruby on Rails application to manage Content Documents. A Content Document is a collection of documents and information that describes what software is installed on a Launch Control System computer. It's important for us to make sure the tools we use every day are secure, up-to-date, and properly licensed. Previously, keeping track of this information was done with Excel and Word files passed between different personnel. The goal of the new application is to be able to manage and access the Content Documents through a single database-backed web application. Our LCS team will benefit greatly from this app. Admins will be able to log in securely to keep track of and update the software installed on each computer in a timely manner. We also included export features, such as attaching additional documents that can be downloaded from the web application. The finished application will ease the process of managing Content Documents while streamlining the procedure. Ruby on Rails is a very powerful web framework, and I am grateful to have had the opportunity to build this application.
Analysis of the Genome and Chromium Metabolism-Related Genes of Serratia sp. S2.
Dong, Lanlan; Zhou, Simin; He, Yuan; Jia, Yan; Bai, Qunhua; Deng, Peng; Gao, Jieying; Li, Yingli; Xiao, Hong
2018-05-01
This study investigates the genome sequence of Serratia sp. S2. The genomic DNA of Serratia sp. S2 was extracted and a sequencing library was constructed. Sequencing was carried out on the Illumina 2000 platform and complete genomic sequences were obtained. Gene function annotation and bioinformatics analysis were performed by comparison with known databases. The genome size of Serratia sp. S2 was 5,604,115 bp and the G+C content was 57.61%. There were 5373 protein-coding genes, of which 3732, 3614, and 3942 were annotated in the GO, KEGG, and COG databases, respectively. There were 12 genes related to chromium metabolism in the Serratia sp. S2 genome. The whole genome sequence of Serratia sp. S2 has been submitted to the GenBank database under accession number LNRP00000000. Our findings may provide a theoretical basis for the subsequent development of new biotechnology to remediate environmental chromium pollution.
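The G+C content figure reported above comes from a standard calculation, sketched below; the sequence is a made-up fragment, and the reported 57.61% refers to the full 5.6 Mb assembly.

```python
# A minimal sketch of the (G+C)/(A+C+G+T) calculation behind the reported value.
def gc_content(seq: str) -> float:
    counted = [b for b in seq.upper() if b in "ACGT"]  # ignore ambiguity codes
    gc = sum(b in "GC" for b in counted)
    return 100.0 * gc / len(counted)

print(f"G+C content: {gc_content('ATGGCGTTACGCCGTAGGCCTAA'):.2f}%")
```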
Metadata tables to enable dynamic data modeling and web interface design: the SEER example.
Weiner, Mark; Sherr, Micah; Cohen, Abigail
2002-04-01
A wealth of information addressing health status, outcomes and resource utilization is compiled and made available by various government agencies. While exploration of the data is possible using existing tools, in general, would-be users of these resources must acquire CD-ROMs or download data from the web and upload the data into their own databases. Where web interfaces exist, they are highly structured, limiting the kinds of queries that can be executed. This work develops a web-based database interface engine whose content and structure are generated through interaction with a metadata table. The result is a dynamically generated web interface that can easily accommodate changes in the underlying data model by altering the metadata table, rather than requiring changes to the interface code. This paper discusses the background and implementation of the metadata table and web-based front end and provides examples of its use with the NCI's Surveillance, Epidemiology and End Results (SEER) database.
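To make the metadata-table idea concrete, here is a minimal sketch in which the query form is rendered entirely from metadata rows, so a data-model change only requires editing the table. The field names are hypothetical, not the actual SEER metadata layout.

```python
# A minimal sketch of a metadata-driven interface: the form is generated from
# metadata rows, never hard-coded. Column and label names are placeholders.
METADATA = [  # (column, label, widget, allowed values)
    ("site",      "Cancer site",       "select", ["breast", "lung", "colon"]),
    ("diag_year", "Year of diagnosis", "range",  None),
    ("sex",       "Sex",               "select", ["male", "female"]),
]

def render_form(metadata) -> str:
    """Emit an HTML query form built entirely from the metadata table."""
    parts = ["<form method='get'>"]
    for col, label, widget, values in metadata:
        if widget == "select":
            opts = "".join(f"<option>{v}</option>" for v in values)
            parts.append(f"<label>{label} <select name='{col}'>{opts}</select></label>")
        else:  # a min/max pair for range-style fields
            parts.append(f"<label>{label} <input name='{col}_min'> to "
                         f"<input name='{col}_max'></label>")
    parts.append("</form>")
    return "\n".join(parts)

print(render_form(METADATA))
```

Adding a new queryable column then means inserting one metadata row; the interface code itself never changes.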
Egorova, K.S.; Kondakova, A.N.; Toukach, Ph.V.
2015-01-01
Carbohydrates are biological building blocks participating in diverse and crucial processes at both the cellular and organism levels. They protect individual cells, establish intracellular interactions, take part in the immune reaction and participate in many other processes. Glycosylation is considered one of the most important modifications of proteins and other biologically active molecules. Still, the data on the enzymatic machinery involved in carbohydrate synthesis and processing are scattered, and progress in its study is hindered by the vast bulk of accumulated genetic information not supported by any experimental evidence for the functions of the proteins that are encoded by these genes. In this article, we present novel instruments for statistical analysis of glycomes in taxa. These tools may be helpful for investigating carbohydrate-related enzymatic activities in various groups of organisms and for comparison of their carbohydrate content. The instruments are developed on the Carbohydrate Structure Database (CSDB) platform and are available freely on the CSDB website at http://csdb.glycoscience.ru. Database URL: http://csdb.glycoscience.ru PMID:26337239
Panagos, Panos; Ballabio, Cristiano; Yigini, Yusuf; Dunbar, Martha B
2013-01-01
Under the European Union Thematic Strategy for Soil Protection, the European Commission Directorate-General for the Environment and the European Environmental Agency (EEA) identified a decline in soil organic carbon and soil losses by erosion as priorities for the collection of policy relevant soil data at European scale. Moreover, the estimation of soil organic carbon content is of crucial importance for soil protection and for climate change mitigation strategies. Soil organic carbon is one of the attributes of the recently developed LUCAS soil database. The request for data on soil organic carbon and other soil attributes arose from an on-going debate about efforts to establish harmonized datasets for all EU countries with data on soil threats in order to support modeling activities and display variations in these soil conditions across Europe. In 2009, the European Commission's Joint Research Centre conducted the LUCAS soil survey, sampling ca. 20,000 points across 23 EU member states. This article describes the results obtained from analyzing the soil organic carbon data in the LUCAS soil database. The collected data were compared with the modeled European topsoil organic carbon content data developed at the JRC. The best fitted comparison was performed at NUTS2 level and showed underestimation of modeled data in southern Europe and overestimation in the new central eastern member states. There is a good correlation in certain regions for countries such as the United Kingdom, Slovenia, Italy, Ireland, and France. Here we assess the feasibility of producing comparable estimates of the soil organic carbon content at NUTS2 regional level for the European Union (EU27) and draw a comparison with existing modeled data. In addition to the data analysis, we suggest how the modeled data can be improved in future updates with better calibration of the model. Copyright © 2012 Elsevier B.V. All rights reserved.
Rojas-Macias, Miguel A; Ståhle, Jonas; Lütteke, Thomas; Widmalm, Göran
2015-03-01
Escherichia coli O-antigen database (ECODAB) is a web-based application to support the collection of E. coli O-antigen structures, polymerase and flippase amino acid sequences, NMR chemical shift data of O-antigens as well as information on glycosyltransferases (GTs) involved in the assembly of O-antigen polysaccharides. The database content has been compiled from the scientific literature. Furthermore, the system has evolved from being a repository to one that can be used for generating novel data on its own. GT specificity is suggested through sequence comparison with GTs whose function is known. The migration of ECODAB to a relational database has allowed the automation of all processes to update, retrieve and present information, thereby endowing the system with greater flexibility and improved overall performance. ECODAB is freely available at http://www.casper.organ.su.se/ECODAB/. Currently, data on 169 E. coli unique O-antigen entries and 338 GTs is covered. Moreover, the scope of the database has been extended so that polysaccharide structure and related information from other bacteria subsequently can be added, for example, from Streptococcus pneumoniae. © The Author 2014. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
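The specificity-suggestion feature described above can be sketched as follows: a glycosyltransferase of unknown function is compared against GTs of known function, and the annotation of the closest match above a threshold is proposed. The sequences and annotations are made-up placeholders, and difflib's ratio is a crude stand-in for the alignment-based comparison a real system would use.

```python
# A minimal sketch of annotation transfer by sequence similarity. Real
# pipelines would use BLAST-style alignment, not difflib.
from difflib import SequenceMatcher

KNOWN_GTS = {  # hypothetical sequence fragments -> hypothetical annotations
    "MSKVLIAGHT": "adds alpha-L-Rhap to D-Glcp",
    "MKTIIALSYG": "adds beta-D-GlcpNAc to D-Galp",
}

def suggest_function(query: str, threshold: float = 0.8):
    """Return the annotation of the most similar known GT, if similar enough."""
    best = max(KNOWN_GTS, key=lambda s: SequenceMatcher(None, query, s).ratio())
    score = SequenceMatcher(None, query, best).ratio()
    return KNOWN_GTS[best] if score >= threshold else None

print(suggest_function("MSKVLIAGHS"))  # near-match returns the first annotation
```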
Bagherian, Hossein; Farahbakhsh, Mohammad; Rabiei, Reza; Moghaddasi, Hamid; Asadi, Farkhondeh
2017-12-01
To obtain necessary information for managing communicable diseases, different countries have developed national communicable diseases surveillance systems (NCDSS). Exploiting the lessons learned from leading countries in the development of surveillance systems provides the foundation for developing these systems in other countries. In this study, the information and organizational structure of NCDSS in developed countries were reviewed. The study reviewed publications on the organizational structure, content and data flow of NCDSS in the United States of America (USA), Australia and Germany that were published in English between 2000 and 2016. The publications were identified by searching the CINAHL, Science Direct, ProQuest, PubMed and Google Scholar databases and the related databases in the selected countries. Thirty-four studies were investigated. All of the reviewed countries have implemented an NCDSS. In the majority of countries, the department of health (DoH) is responsible for managing this system. The reviewed countries have created a minimum data set for reporting communicable diseases data and information. For developing an NCDSS, establishing coordinating centers, setting effective policies and procedures, providing appropriate communication infrastructures for data exchange and defining a communicable diseases minimum data set are essential.
Yayac, Michael; Javandal, Mitra; Mulcahey, Mary K
2017-01-01
A substantial number of orthopaedic surgeons apply for sports medicine fellowships after residency completion. The Internet is one of the most important resources applicants use to obtain information about fellowship programs, with the program website serving as one of the most influential sources. The American Orthopaedic Society for Sports Medicine (AOSSM), San Francisco Match (SFM), and Arthroscopy Association of North America (AANA) maintain databases of orthopaedic sports medicine fellowship programs. A 2013 study evaluated the content and accessibility of the websites for accredited orthopaedic sports medicine fellowships. To reassess these websites based on the same parameters and compare the results with those of the study published in 2013 to determine whether any improvement has been made in fellowship website content or accessibility. Cross-sectional study. We reviewed all existing websites for the 95 accredited orthopaedic sports medicine fellowships included in the AOSSM, SFM, and AANA databases. Accessibility of the websites was determined by performing a Google search for each program. A total of 89 sports fellowship websites were evaluated for overall content. Websites for the remaining 6 programs could not be identified, so they were not included in content assessment. Of the 95 accredited sports medicine fellowships, 49 (52%) provided links in the AOSSM database, 89 (94%) in the SFM database, and 24 (25%) in the AANA database. Of the 89 websites, 89 (100%) provided a description of the program, 62 (70%) provided selection process information, and 40 (45%) provided a link to the SFM website. Two searches through Google were able to identify links to 88% and 92% of all accredited programs. The majority of accredited orthopaedic sports medicine fellowship programs fail to utilize the Internet to its full potential as a resource to provide applicants with detailed information about the program, which could help residents in the selection and ranking process. Orthopaedic sports medicine fellowship websites that are easily accessible through the AOSSM, SFM, AANA, or Google and that provide all relevant information for applicants would simplify the process of deciding where to apply, interview, and ultimately how to rank orthopaedic sports medicine fellowship programs for the Orthopaedic Sports Medicine Fellowship Match.
Southan, Christopher; Várkonyi, Péter; Muresan, Sorel
2009-07-06
Since 2004, public cheminformatic databases and their collective functionality for exploring relationships between compounds, protein sequences, literature and assay data have advanced dramatically. In parallel, commercial sources that extract and curate such relationships from journals and patents have also been expanding. This work updates a previous comparative study of databases chosen because of their bioactive content, availability of downloads and facility to select informative subsets. Where they could be calculated, extracted compounds per journal article were in the range of 12 to 19, but compound-per-protein counts increased with document numbers. Chemical structure filtration to facilitate standardised comparisons typically reduced source counts by between 5% and 30%. The pair-wise overlaps between 23 databases and subsets were determined, as well as changes between 2006 and 2008. While all compound sets have increased, PubChem has doubled to 14.2 million. The 2008 comparison matrix shows not only overlap but also unique content across all sources. Many of the detailed differences could be attributed to individual strategies for data selection and extraction. While there has been a big increase in patent-derived structures entering PubChem since 2006, GVKBIO contains over 0.8 million unique structures from this source. Venn diagrams showed extensive overlap between compounds extracted by independent expert curation from journals by GVKBIO, WOMBAT (both commercial) and BindingDB (public), but each included unique content. In contrast, the approved drug collections from GVKBIO, MDDR (commercial) and DrugBank (public) showed surprisingly low overlap. Aggregating all commercial sources established that while 1 million compounds overlapped with PubChem, 1.2 million did not. On the basis of chemical structure content per se, public sources have covered an increasing proportion of commercial databases over the last two years. However, commercial products included in this study provide links between compounds and information from patents and journals at a larger scale than current public efforts. They also continue to capture a significant proportion of unique content. Our results thus demonstrate not only an encouraging overall expansion of data-supported bioactive chemical space but also that both commercial and public sources are complementary for its exploration.
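The pairwise-overlap matrix described above reduces to set intersections over standardized structure identifiers (for example, InChIKeys). A minimal sketch with placeholder source names and keys:

```python
# A minimal sketch of pairwise overlap and unique content between sources.
from itertools import combinations

sources = {  # identifiers are placeholders standing in for InChIKeys
    "pubchem":   {"KEY-A", "KEY-B", "KEY-C", "KEY-D"},
    "wombat":    {"KEY-B", "KEY-C", "KEY-E"},
    "bindingdb": {"KEY-C", "KEY-D", "KEY-E", "KEY-F"},
}

for (a, sa), (b, sb) in combinations(sources.items(), 2):
    shared = len(sa & sb)
    print(f"{a} vs {b}: {shared} shared, "
          f"{len(sa) - shared} unique to {a}, {len(sb) - shared} unique to {b}")
```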
Code of Federal Regulations, 2010 CFR
2010-01-01
... database resulting from the transformation of the ENC by ECDIS for appropriate use, updates to the ENC by... of the 1974 SOLAS Convention. Electronic Navigational Chart (ENC) means a database, standardized as to content, structure, and format, issued for use with ECDIS on the authority of government...
Reach for Reference. Science Online
ERIC Educational Resources Information Center
Safford, Barbara Ripp
2004-01-01
This brief article describes the database, Science Online, from Facts on File. Science is defined broadly in this database to include archeology, computer technology, medicine, inventions, and mathematics, as well as biology, chemistry, earth sciences, and astronomy. Content also is divided into format categories for browsing purposes:…
ECOS E-MATRIX Methane and Volatile Organic Carbon (VOC) Emissions Best Practices Database
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parisien, Lia
2016-01-31
This final scientific/technical report on the ECOS e-MATRIX Methane and Volatile Organic Carbon (VOC) Emissions Best Practices Database provides a disclaimer and acknowledgement, table of contents, executive summary, description of project activities, and briefing/technical presentation link.
Off-Site Indexing: A Cottage Industry.
ERIC Educational Resources Information Center
Fay, Catherine H.
1984-01-01
Briefly describes use of off-site staffing--indexers, abstractors, editors--in the production of two major databases: Management Contents and The Computer Data Base. Discussion covers the production sequence; database administrator; off-site indexer; savings (office space, furniture and equipment costs, salaries, and overhead); and problems…
Makadia, Rupa; Matcho, Amy; Ma, Qianli; Knoll, Chris; Schuemie, Martijn; DeFalco, Frank J; Londhe, Ajit; Zhu, Vivienne; Ryan, Patrick B
2015-01-01
Objectives To evaluate the utility of applying the Observational Medical Outcomes Partnership (OMOP) Common Data Model (CDM) across multiple observational databases within an organization and to apply standardized analytics tools for conducting observational research. Materials and methods Six deidentified patient-level datasets were transformed to the OMOP CDM. We evaluated the extent of information loss that occurred through the standardization process. We developed a standardized analytic tool to replicate the cohort construction process from a published epidemiology protocol and applied the analysis to all 6 databases to assess time-to-execution and comparability of results. Results Transformation to the CDM resulted in minimal information loss across all 6 databases. Patients and observations were excluded due to identified data quality issues in the source systems; 96% to 99% of condition records and 90% to 99% of drug records were successfully mapped into the CDM using the standard vocabulary. The full cohort replication and descriptive baseline summary was executed for 2 cohorts in 6 databases in less than 1 hour. Discussion The standardization process improved data quality, increased efficiency, and facilitated cross-database comparisons to support a more systematic approach to observational research. Comparisons across data sources showed consistency in the impact of inclusion criteria using the protocol, and identified differences in patient characteristics and coding practices across databases. Conclusion Standardizing data structure (through a CDM), content (through a standard vocabulary with source code mappings), and analytics can enable an institution to apply a network-based approach to observational research across multiple, disparate observational health databases. PMID:25670757
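A minimal sketch of a CDM-style cohort query follows, run against a toy sqlite3 instance: the table and column names follow the public OMOP CDM specification, but the concept ID and rows are placeholders, and a real study would encode the protocol's full inclusion criteria.

```python
# A minimal sketch of cohort construction against OMOP CDM-style tables.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE person (person_id INTEGER PRIMARY KEY, year_of_birth INTEGER);
CREATE TABLE condition_occurrence (
    person_id INTEGER, condition_concept_id INTEGER, condition_start_date TEXT);
INSERT INTO person VALUES (1, 1960), (2, 1975);
INSERT INTO condition_occurrence VALUES
    (1, 201826, '2010-03-01'), (1, 201826, '2012-07-15'), (2, 4329847, '2011-01-02');
""")

cohort = conn.execute("""
    SELECT co.person_id, MIN(co.condition_start_date) AS index_date
    FROM condition_occurrence co
    JOIN person p ON p.person_id = co.person_id
    WHERE co.condition_concept_id = 201826   -- placeholder concept ID
    GROUP BY co.person_id
""").fetchall()
print(f"Cohort size: {len(cohort)}; entries: {cohort}")
```

Because every source database shares this structure and vocabulary after transformation, the same query runs unchanged across all of them, which is what enables the sub-hour cross-database replication reported above.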
Mobile object retrieval in server-based image databases
NASA Astrophysics Data System (ADS)
Manger, D.; Pagel, F.; Widak, H.
2013-05-01
The increasing number of mobile phones equipped with powerful cameras leads to huge collections of user-generated images. To utilize the information of the images on site, image retrieval systems are becoming more and more popular as a way to search for similar objects in one's own image database. As the computational performance and the memory capacity of mobile devices are constantly increasing, this search can often be performed on the device itself. This is feasible, for example, if the images are represented with global image features or if the search is done using EXIF or textual metadata. However, for larger image databases, if multiple users are meant to contribute to a growing image database or if powerful content-based image retrieval methods with local features are required, a server-based image retrieval backend is needed. In this work, we present a content-based image retrieval system with a client-server architecture working with local features. On the server side, the scalability to large image databases is addressed with the popular bag-of-word model with state-of-the-art extensions. The client end of the system focuses on a lightweight user interface presenting the most similar images of the database, highlighting the visual information which is common with the query image. Additionally, new images can be added to the database, making it a powerful and interactive tool for mobile content-based image retrieval.
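The bag-of-words retrieval the server side relies on can be sketched briefly: local descriptors are quantized against a visual vocabulary, and images are compared by their word histograms. Random vectors stand in for real local features (such as SIFT) to keep the sketch self-contained; the paper's state-of-the-art extensions are not shown.

```python
# A minimal sketch of bag-of-visual-words retrieval with stand-in descriptors.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
train_desc = rng.normal(size=(1000, 64))                   # stand-in local descriptors
vocab = KMeans(n_clusters=50, n_init=10).fit(train_desc)   # the visual vocabulary

def bow_histogram(descriptors: np.ndarray) -> np.ndarray:
    """Quantize descriptors to visual words and return a normalized histogram."""
    words = vocab.predict(descriptors)
    hist = np.bincount(words, minlength=50).astype(float)
    return hist / hist.sum()

query = bow_histogram(rng.normal(size=(120, 64)))
database = [bow_histogram(rng.normal(size=(150, 64))) for _ in range(5)]
ranked = sorted(range(5), key=lambda i: np.linalg.norm(query - database[i]))
print("Most similar database images first:", ranked)
```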
The European Radiobiology Archives (ERA)--content, structure and use illustrated by an example.
Gerber, G B; Wick, R R; Kellerer, A M; Hopewell, J W; Di Majo, V; Dudoignon, N; Gössner, W; Stather, J
2006-01-01
The European Radiobiology Archives (ERA), supported by the European Commission and the European Late Effect Project Group (EULEP), together with the US National Radiobiology Archives (NRA) and the Japanese Radiobiology Archives (JRA), have collected all information still available on long-term animal experiments, including some selected human studies. The archives consist of a database in Microsoft Access, a website, databases of references and information on the use of the database. At present, the archives contain a description of the exposure conditions, animal strains, etc. from approximately 350,000 individuals; data on survival and pathology are available from approximately 200,000 individuals. Care has been taken to render pathological diagnoses compatible among different studies and to allow the lumping of pathological diagnoses into more general classes. 'Forms' in Access with an underlying computer code facilitate the use of the database. This paper describes the structure and content of the archives and gives an example of a possible analysis of such data.
Volcanoes of the World: Reconfiguring a scientific database to meet new goals and expectations
NASA Astrophysics Data System (ADS)
Venzke, Edward; Andrews, Ben; Cottrell, Elizabeth
2015-04-01
The Smithsonian Global Volcanism Program's (GVP) database of Holocene volcanoes and eruptions, Volcanoes of the World (VOTW), originated in 1971, and was largely populated with content from the IAVCEI Catalog of Active Volcanoes and some independent datasets. Volcanic activity reported by Smithsonian's Bulletin of the Global Volcanism Network and USGS/SI Weekly Activity Reports (and their predecessors), published research, and other varied sources has expanded the database significantly over the years. Three editions of the VOTW were published in book form, creating a catalog with new ways to display data that included regional directories, a gazetteer, and a 10,000-year chronology of eruptions. The widespread dissemination of the data in electronic media since the first GVP website in 1995 has created new challenges and opportunities for this unique collection of information. To better meet current and future goals and expectations, we have recently transitioned VOTW into a SQL Server database. This process included significant schema changes to the previous relational database, data auditing, and content review. We replaced a disparate, confusing, and changeable volcano numbering system with unique and permanent volcano numbers. We reconfigured structures for recording eruption data to allow greater flexibility in describing the complexity of observed activity, adding the ability to distinguish episodes within eruptions (in time and space) and events (including dates) rather than characteristics that take place during an episode. We have added a reference link field in multiple tables to enable attribution of sources at finer levels of detail. We now store and connect synonyms and feature names in a more consistent manner, which will allow morphological features to be given unique numbers and linked to specific eruptions or samples; if the designated overall volcano name is also a morphological feature, it is then also listed and described as that feature. One especially significant audit involved re-evaluating the categories of evidence used to include a volcano in the Holocene list, and reviewing in detail the entries in low-certainty categories. Concurrently, we developed a new data entry system that may in the future allow trusted users outside of Smithsonian to input data into VOTW. A redesigned website now provides new search tools and data download options. We are collaborating with organizations that manage volcano and eruption databases, physical sample databases, and geochemical databases to allow real-time connections and complex queries. VOTW serves the volcanological community by providing a clear and consistent core database of distinctly identified volcanoes and eruptions to advance goals in research, civil defense, and public outreach.
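The eruption/episode/event hierarchy described above maps naturally onto relational tables. A minimal sketch in sqlite3 follows; the column names are illustrative guesses, not the actual VOTW schema.

```python
# A minimal sketch of the volcano -> eruption -> episode -> event hierarchy.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE volcano  (volcano_number INTEGER PRIMARY KEY,   -- unique, permanent
                       name TEXT);
CREATE TABLE eruption (eruption_id INTEGER PRIMARY KEY,
                       volcano_number INTEGER REFERENCES volcano,
                       start_date TEXT, end_date TEXT);
CREATE TABLE episode  (episode_id INTEGER PRIMARY KEY,
                       eruption_id INTEGER REFERENCES eruption,
                       location TEXT);                        -- distinct in time/space
CREATE TABLE event    (event_id INTEGER PRIMARY KEY,
                       episode_id INTEGER REFERENCES episode,
                       event_type TEXT, event_date TEXT);     -- dated, not just descriptive
""")
print("schema created")
```

Separating episodes from events is what lets a single eruption record carry dated activity at several vents without overloading one row with free-text characteristics.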
CBS Genome Atlas Database: a dynamic storage for bioinformatic results and sequence data.
Hallin, Peter F; Ussery, David W
2004-12-12
Currently, new bacterial genomes are being published on a monthly basis. With the growing amount of genome sequence data, there is a demand for a flexible and easy-to-maintain structure for storing sequence data and results from bioinformatic analysis. More than 150 sequenced bacterial genomes are now available, and comparisons of properties for taxonomically similar organisms are not readily available to many biologists. In addition to the most basic information, such as AT content, chromosome length, tRNA count and rRNA count, a large number of more complex calculations are needed to perform detailed comparative genomics. DNA structural calculations like curvature and stacking energy, and DNA compositions like base skews, oligo skews and repeats at the local and global level, are just a few of the analyses presented on the CBS Genome Atlas web page. Complex analyses, changing methods and frequent addition of new models are factors that require a dynamic database layout. Using basic tools like the GNU Make system, csh, Perl and MySQL, we have created a flexible database environment for storing and maintaining such results for a collection of complete microbial genomes. Currently, these results amount to more than 220 pieces of information. The backbone of this solution consists of a program package written in Perl, which enables administrators to synchronize and update the database content. The MySQL database has been connected to the CBS web server via PHP4, to present dynamic web content to users outside the center. This solution is tightly fitted to existing server infrastructure, and the solutions proposed here can perhaps serve as a template for other research groups to solve database issues. A web-based user interface which is dynamically linked to the Genome Atlas Database can be accessed via www.cbs.dtu.dk/services/GenomeAtlas/. This paper has a supplemental information page which links to the examples presented: www.cbs.dtu.dk/services/GenomeAtlas/suppl/bioinfdatabase.
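Many of the Atlas quantities mentioned above are simple sequence statistics. As one example, here is a minimal sketch of GC skew, (G - C)/(G + C), computed in fixed windows along a sequence; the sequence is a made-up fragment.

```python
# A minimal sketch of windowed GC skew, one of the simpler Atlas calculations.
def gc_skew(seq: str, window: int = 10):
    skews = []
    for i in range(0, len(seq) - window + 1, window):
        w = seq[i:i + window].upper()
        g, c = w.count("G"), w.count("C")
        skews.append((g - c) / (g + c) if g + c else 0.0)  # avoid division by zero
    return skews

print(gc_skew("ATGGGCGCGTTTAACCCGGGATATCCCGCG", window=10))
```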
Frank, M S; Dreyer, K
2001-06-01
We describe a virtual web site hosting technology that enables educators in radiology to emblazon and make available for delivery on the world wide web their own interactive educational content, free from dependencies on in-house resources and policies. This suite of technologies includes a graphically oriented software application, designed for the computer novice, to facilitate the input, storage, and management of domain expertise within a database system. The database stores this expertise as choreographed and interlinked multimedia entities including text, imagery, interactive questions, and audio. Case-based presentations or thematic lectures can be authored locally, previewed locally within a web browser, then uploaded at will as packaged knowledge objects to an educator's (or department's) personal web site housed within a virtual server architecture. This architecture can host an unlimited number of unique educational web sites for individuals or departments in need of such service. Each virtual site's content is stored within that site's protected back-end database connected to Internet Information Server (Microsoft Corp, Redmond WA) using a suite of Active Server Page (ASP) modules that incorporate Microsoft's Active Data Objects (ADO) technology. Each person's or department's electronic teaching material appears as an independent web site with different levels of access--controlled by a username-password strategy--for teachers and students. There is essentially no static hypertext markup language (HTML). Rather, all pages displayed for a given site are rendered dynamically from case-based or thematic content that is fetched from that virtual site's database. The dynamically rendered HTML is displayed within a web browser in a Socratic fashion that can assess the recipient's current fund of knowledge while providing instantaneous user-specific feedback. Each site is emblazoned with the logo and identification of the participating institution. Individuals with teacher-level access can use a web browser to upload new content as well as manage content already stored on their virtual site. Each virtual site stores, collates, and scores participants' responses to the interactive questions posed on line. This virtual web site strategy empowers the educator with an end-to-end solution for creating interactive educational content and hosting that content within the educator's personalized and protected educational site on the world wide web, thus providing a valuable outlet that can magnify the impact of his or her talents and contributions.
EXFOR – a global experimental nuclear reaction data repository: Status and new developments
Semkova, Valentina; Otuka, Naohiko; Mikhailiukova, Marina; ...
2017-09-13
Members of the International Network of Nuclear Reaction Data Centres (NRDC) have collaborated since the 1960s on the worldwide collection, compilation and dissemination of experimental nuclear reaction data. New publications are systematically compiled, and all agreed data assembled and incorporated within the EXFOR database. Here, recent upgrades to achieve greater completeness of the contents are described, along with reviews and adjustments of the compilation rules for specific types of data.
Carbon - Bulk Density Relationships for Highly Weathered Soils of the Americas
NASA Astrophysics Data System (ADS)
Nave, L. E.
2014-12-01
Soils are dynamic natural bodies composed of mineral and organic materials. As a result of this mixed composition, essential properties of soils such as their apparent density, organic and mineral contents are typically correlated. Negative relationships between bulk density (Db) and organic matter concentration provide well-known examples across a broad range of soils, and such quantitative relationships among soil properties are useful for a variety of applications. First, gap-filling or data interpolation often are necessary to develop large soil carbon (C) datasets; furthermore, limitations of access to analytical instruments may preclude C determinations for every soil sample. In such cases, equations to derive soil C concentrations from basic measures of soil mass, volume, and density offer significant potential for purposes of soil C stock estimation. To facilitate estimation of soil C stocks on highly weathered soils of the Americas, I used observations from the International Soil Carbon Network (ISCN) database to develop carbon - bulk density prediction equations for Oxisols and Ultisols. Within a small sample set of georeferenced Oxisols (n=89), 29% of the variation in A horizon C concentrations can be predicted from Db. Including the A-horizon sand content improves predictive capacity to 35%. B horizon C concentrations (n=285) were best predicted by Db and clay content, but were more variable than A-horizons (only 10% of variation explained by linear regression). Among Ultisols, a larger sample set allowed investigation of specific horizons of interest. For example, C concentrations of plowed A (Ap) horizons are predictable based on Db, sand and silt contents (n=804, r²=0.38); gleyed argillic (Btg) horizon concentrations are predictable from Db, sand and clay contents (n=190, r²=0.23). Because soil C stock estimates are more sensitive to variation in soil mass and volume determinations than to variation in C concentration, prediction equations such as these may be used on carefully collected samples to constrain soil C stocks. The geo-referenced ISCN database allows users the opportunity to derive similar predictive relationships among measured soil parameters; continued input of new datasets from highly weathered soils of the Americas will improve the precision of these prediction equations.
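The prediction equations described above are ordinary least-squares fits. Here is a minimal sketch of fitting A-horizon carbon concentration on bulk density and sand content; the data points are invented placeholders, not ISCN observations.

```python
# A minimal sketch of a carbon - bulk density prediction equation via OLS.
import numpy as np

# columns: bulk density Db (g/cm3), sand content (%), observed C (%)
data = np.array([
    [1.45, 62, 0.9], [1.10, 48, 2.1], [0.95, 35, 3.0],
    [1.30, 70, 1.2], [1.05, 40, 2.5], [1.50, 66, 0.8],
])
X = np.column_stack([np.ones(len(data)), data[:, 0], data[:, 1]])  # intercept, Db, sand
beta, *_ = np.linalg.lstsq(X, data[:, 2], rcond=None)
print(f"C% = {beta[0]:.2f} + {beta[1]:.2f}*Db + {beta[2]:.3f}*sand")
```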
Coherent Microwave Scattering Model of Marsh Grass
NASA Astrophysics Data System (ADS)
Duan, Xueyang; Jones, Cathleen E.
2017-12-01
In this work, we developed an electromagnetic scattering model to analyze radar scattering from tall-grass-covered lands such as wetlands and marshes. The model adopts the generalized iterative extended boundary condition method (GIEBCM) algorithm, previously developed for buried cylindrical media such as vegetation roots, to simulate the scattering from the grass layer. The major challenge of applying GIEBCM to tall grass is the extremely time-consuming iteration among the large number of short subcylinders building up the grass. To overcome this issue, we extended the GIEBCM to a multilevel GIEBCM, or M-GIEBCM, in which we first use GIEBCM to calculate a T matrix (transition matrix) database of "straws" with various lengths, thicknesses, orientations, curvatures, and dielectric properties; we then construct the grass with a group of straws from the database and apply GIEBCM again to calculate the T matrix of the overall grass scene. The grass T matrix is converted to an S matrix (scattering matrix) and combined with the ground S matrix, which is computed using the stabilized extended boundary condition method, to obtain the total scattering. In this article, we demonstrate the capability of the model by simulating scattering from scenes with different grass densities, grass structures, grass water contents, and ground moisture contents. This model will help with radar experiment design and image interpretation for marshland and wetland observations.
A comprehensive strategy for designing a Web-based medical curriculum.
Zucker, J.; Chase, H.; Molholt, P.; Bean, C.; Kahn, R. M.
1996-01-01
In preparing for a full-featured online curriculum, it is necessary to develop scalable strategies for software design that will support the pedagogical goals of the curriculum and address the issues of acquisition and updating of materials, robust content-based linking, and integration of the online materials into other methods of learning. A complete online curriculum, as distinct from an individual computerized module, must provide dynamic updating of both content and structure and an easy pathway from the professor's notes to the finished online product. At the College of Physicians and Surgeons, we are developing such strategies, including a scripted text conversion process that uses the Hypertext Markup Language (HTML) as structural markup rather than as display markup, automated linking by the use of relational databases and the Unified Medical Language System (UMLS), and integration of text, images, and multimedia, along with interface designs which promote multiple contexts and collaborative study. PMID:8947624
Information Technology and the Evolution of the Library
2009-03-01
[Diagram residue: Resource Commons / Repository / Federated Search / ILS (GLADIS/Pathfinder, Millennium) / Catalog / Circulation / Acquisitions / Digital Object Content.] ...content management services to help centralize and distribute digital content from across the institution, software to allow for seamless federated searching across multiple databases, and imaging software to allow for daily reimaging of terminals to reduce security concerns that otherwise...
2010-09-01
[Front-matter residue: table of contents, list of figures, and acronym list from a report on the SCIL (Social-Cultural Content in Language) architecture; acronyms defined include LAN (Local Area Network) and ODBC (Open Database Connectivity).]
Integration of Information Retrieval and Database Management Systems.
ERIC Educational Resources Information Center
Deogun, Jitender S.; Raghavan, Vijay V.
1988-01-01
Discusses the motivation for integrating information retrieval and database management systems, and proposes a probabilistic retrieval model in which records in a file may be composed of attributes (formatted data items) and descriptors (content indicators). The details and resolutions of difficulties involved in integrating such systems are…
SWS: accessing SRS sites contents through Web Services.
Romano, Paolo; Marra, Domenico
2008-03-26
Web Services and Workflow Management Systems can support the creation and deployment of network systems able to automate data analysis and retrieval processes in biomedical research. Web Services have been implemented at bioinformatics centres and workflow systems have been proposed for biological data analysis. New databanks are often developed with these technologies in mind, but many existing databases do not allow programmatic access, so only a fraction of available databanks can be queried through programmatic interfaces. SRS is a well-known indexing and search engine for biomedical databanks offering public access to many databanks and analysis tools. Unfortunately, these data are not easily and efficiently accessible through Web Services. We have developed 'SRS by WS' (SWS), a tool that makes information available in SRS sites accessible through Web Services. Information on known sites is maintained in a database, srsdb. SWS consists of a suite of Web Services that can query both srsdb, for information on sites and databases, and SRS sites. SWS returns results in a text-only format and can be accessed through a WSDL compliant client. SWS enables interoperability between workflow systems and SRS implementations, by also managing access to alternative sites, in order to cope with network and maintenance problems, and selecting the most up-to-date among available systems. The development and implementation of Web Services allowing programmatic access to an exhaustive set of biomedical databases can significantly improve the automation of in-silico analysis. SWS supports this activity by making biological databanks that are managed in public SRS sites available through a programmatic interface.
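To illustrate how a WSDL-compliant client might drive a service of this kind from a workflow, here is a minimal sketch using the Python zeep SOAP library. The WSDL location and operation names are assumptions made for illustration, not SWS's actual interface.

```python
# Hedged sketch of calling a WSDL-described Web Service such as SWS from a
# workflow. The WSDL URL and the operation names are hypothetical.
from zeep import Client

client = Client("http://example.org/sws/sws.wsdl")    # hypothetical WSDL location

# Ask the sites database which sites serve a given databank, then query one.
sites = client.service.getSites(databank="UNIPROT")   # hypothetical operation
result = client.service.query(site=sites[0], databank="UNIPROT",
                              query="P53_HUMAN")      # hypothetical operation
print(result)   # SWS-style services return results in a text-only format
```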
Quantifying VOC emissions from polymers: A case study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schulze, J.K.; Qasem, J.S.; Snoddy, R.
1996-12-31
Evaluating residual volatile organic compound emissions emanating from low-density polyethylene can pose significant challenges. These challenges include quantifying emissions from: (a) multiple process lines with different operating conditions; (b) several different comonomers; (c) variations of comonomer content in each grade; and (d) over 120 grades of LDPE. This presentation is a Case Study outlining a project to develop grade-specific emission data for low-density polyethylene pellets. This study included extensive laboratory analyses and required the development of a relational database to compile analytical results, calculate the mean concentration and standard deviation, and generate emissions reports.
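A minimal sketch of the kind of relational-database aggregation the project relied on, computing the mean concentration and standard deviation per grade; the table layout and values are invented for the example, and an in-memory SQLite database stands in for the project's system.

```python
# Illustrative per-grade aggregation of analytical results, of the sort the
# relational database described above was built to produce. Table and column
# names are invented for the example.
import sqlite3

con = sqlite3.connect(":memory:")   # stands in for the project's relational database
con.executescript("""
CREATE TABLE analyses (grade TEXT, comonomer TEXT, voc_ppm REAL);
INSERT INTO analyses VALUES
  ('LD-100', 'butene', 41.2), ('LD-100', 'butene', 38.7),
  ('LD-200', 'hexene', 55.0), ('LD-200', 'hexene', 57.9);
""")
# SQLite has no built-in STDEV, so derive it from sums and sums of squares.
rows = con.execute("""
    SELECT grade, COUNT(*), AVG(voc_ppm), SUM(voc_ppm * voc_ppm), SUM(voc_ppm)
    FROM analyses GROUP BY grade
""").fetchall()
for grade, n, mean, sumsq, total in rows:
    var = (sumsq - total * total / n) / (n - 1) if n > 1 else 0.0
    print(f"{grade}: n={n}, mean={mean:.1f} ppm, sd={var ** 0.5:.1f} ppm")
```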
A radiology department intranet: development and applications.
Willing, S J; Berland, L L
1999-01-01
An intranet is a "private Internet" that uses the protocols of the World Wide Web to share information resources within a company or with the company's business partners and clients. The hardware requirements for an intranet begin with a dedicated Web server permanently connected to the departmental network. The heart of a Web server is the hypertext transfer protocol (HTTP) service, which receives a page request from a client's browser and transmits the page back to the client. Although knowledge of hypertext markup language (HTML) is not essential for authoring a Web page, a working familiarity with HTML is useful, as is knowledge of programming and database management. Security can be ensured by using scripts to write information in hidden fields or by means of "cookies." Interfacing databases and database management systems with the Web server and conforming the user interface to HTML syntax can be achieved by means of the common gateway interface (CGI), Active Server Pages (ASP), or other methods. An intranet in a radiology department could include the following types of content: on-call schedules, work schedules and a calendar, a personnel directory, resident resources, memorandums and discussion groups, software for a radiology information system, and databases.
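As a rough modern analogue of the CGI/ASP pattern described, the sketch below serves a dynamically rendered HTML page from a departmental database on each request; the database file, table, and sample data are hypothetical.

```python
# Minimal sketch of database-backed dynamic page generation, a modern stand-in
# for the CGI/ASP approach described above. All names and data are invented.
import sqlite3
from wsgiref.simple_server import make_server

DB = "radiology_intranet.db"        # hypothetical departmental database
with sqlite3.connect(DB) as con:    # seed a toy on-call table so the sketch runs
    con.execute("CREATE TABLE IF NOT EXISTS on_call (day TEXT, radiologist TEXT)")
    con.execute("INSERT INTO on_call VALUES ('Monday', 'Dr. Example')")

def app(environ, start_response):
    # Each request re-queries the database, so the page is always current.
    rows = sqlite3.connect(DB).execute(
        "SELECT day, radiologist FROM on_call ORDER BY day").fetchall()
    body = ("<html><body><h1>On-Call Schedule</h1><ul>"
            + "".join(f"<li>{day}: {name}</li>" for day, name in rows)
            + "</ul></body></html>")
    start_response("200 OK", [("Content-Type", "text/html; charset=utf-8")])
    return [body.encode("utf-8")]

if __name__ == "__main__":
    make_server("", 8000, app).serve_forever()   # serve on the departmental LAN
```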
Magnetic Fields for All: The GPIPS Community Web-Access Portal
NASA Astrophysics Data System (ADS)
Carveth, Carol; Clemens, D. P.; Pinnick, A.; Pavel, M.; Jameson, K.; Taylor, B.
2007-12-01
The new GPIPS website portal provides community users with an intuitive and powerful interface to query the data products of the Galactic Plane Infrared Polarization Survey. The website, which was built using PHP for the front end and MySQL for the database back end, allows users to issue queries based on galactic or equatorial coordinates, GPIPS-specific identifiers, polarization information, magnitude information, and several other attributes. The returns are presented in HTML tables, with the added option of either downloading or being emailed an ASCII file including the same or more information from the database. Other functionalities of the website include providing details of the status of the Survey (which fields have been observed or are planned to be observed), techniques involved in data collection and analysis, and descriptions of the database contents and names. For this initial launch of the website, users may access the GPIPS polarization point source catalog and the deep coadd photometric point source catalog. Future planned developments include a graphics-based method for querying the database, as well as tools to combine neighboring GPIPS images into larger image files for both polarimetry and photometry. This work is partially supported by NSF grant AST-0607500.
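A simplified sketch of the kind of parameterized coordinate-range query such a portal issues against its database back end; the table and column names are invented, and SQLite stands in for MySQL to keep the example self-contained.

```python
# Sketch of a coordinate-range query of the sort a survey portal runs against
# its back-end database. Schema and data are hypothetical.
import sqlite3   # stand-in for the MySQL back end in this self-contained example

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sources (gpips_id TEXT, glon REAL, glat REAL, pol_pct REAL)")
con.execute("INSERT INTO sources VALUES ('GPIPS-0001', 31.2, 0.4, 2.1)")

glon_min, glon_max, glat_min, glat_max = 30.0, 32.0, -1.0, 1.0
rows = con.execute(
    "SELECT gpips_id, glon, glat, pol_pct FROM sources "
    "WHERE glon BETWEEN ? AND ? AND glat BETWEEN ? AND ?",
    (glon_min, glon_max, glat_min, glat_max)).fetchall()
print(rows)   # results would then be formatted as HTML tables or an ASCII file
```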
Xu, Huilei; Baroukh, Caroline; Dannenfelser, Ruth; Chen, Edward Y; Tan, Christopher M; Kou, Yan; Kim, Yujin E; Lemischka, Ihor R; Ma'ayan, Avi
2013-01-01
High content studies that profile mouse and human embryonic stem cells (m/hESCs) using various genome-wide technologies such as transcriptomics and proteomics are constantly being published. However, efforts to integrate such data to obtain a global view of the molecular circuitry in m/hESCs are lagging behind. Here, we present an m/hESC-centered database called Embryonic Stem Cell Atlas from Pluripotency Evidence integrating data from many recent diverse high-throughput studies including chromatin immunoprecipitation followed by deep sequencing, genome-wide inhibitory RNA screens, gene expression microarrays or RNA-seq after knockdown (KD) or overexpression of critical factors, immunoprecipitation followed by mass spectrometry proteomics and phosphoproteomics. The database provides web-based interactive search and visualization tools that can be used to build subnetworks and to identify known and novel regulatory interactions across various regulatory layers. The web-interface also includes tools to predict the effects of combinatorial KDs by additive effects controlled by sliders, or through simulation software implemented in MATLAB. Overall, the Embryonic Stem Cell Atlas from Pluripotency Evidence database is a comprehensive resource for the stem cell systems biology community. Database URL: http://www.maayanlab.net/ESCAPE
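A minimal sketch of the additive combinatorial-knockdown idea described above: the predicted profile of a double knockdown is a slider-weighted sum of the single-knockdown profiles. Gene names and log-fold-change values are invented for illustration.

```python
# Sketch of additive prediction of combinatorial knockdown (KD) effects: the
# predicted effect on each target gene is a weighted sum of single-KD effects.
import numpy as np

genes = ["Pou5f1", "Sox2", "Gata4"]
lfc_kd_a = np.array([-1.2, -0.8, 0.5])    # hypothetical single-KD expression profile
lfc_kd_b = np.array([-0.4, -0.3, 0.9])    # hypothetical second single-KD profile

w_a, w_b = 1.0, 1.0                       # slider-style weights per knockdown
predicted = w_a * lfc_kd_a + w_b * lfc_kd_b
for gene, value in zip(genes, predicted):
    print(f"{gene}: predicted log2 fold change = {value:+.2f}")
```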
Making your database available through Wikipedia: the pros and cons.
Finn, Robert D; Gardner, Paul P; Bateman, Alex
2012-01-01
Wikipedia, the online encyclopedia, is the most famous wiki in use today. It contains over 3.7 million pages of content, with many pages written on scientific subject matters that include peer-reviewed citations, yet are written in an accessible manner and generally reflect the consensus opinion of the community. In this, the 19th Annual Database Issue of Nucleic Acids Research, there are 11 articles that describe the use of a wiki in relation to a biological database. In this commentary, we discuss how biological databases can be integrated with Wikipedia, thereby utilising the pre-existing infrastructure, tools and, above all, large community of authors (or Wikipedians). The limitations to the content that can be included in Wikipedia are highlighted, with examples drawn from articles found in this issue and other wiki-based resources, indicating why other wiki solutions are necessary. We discuss the merits of using open wikis, like Wikipedia, versus other models, with particular reference to potential vandalism. Finally, we raise the question of the future role of dedicated database biocurators in the context of the thousands of crowdsourced, community annotations that are now being stored in wikis. PMID:22144683
Database technology and the management of multimedia data in the Mirror project
NASA Astrophysics Data System (ADS)
de Vries, Arjen P.; Blanken, H. M.
1998-10-01
Multimedia digital libraries require an open distributed architecture instead of a monolithic database system. In the Mirror project, we use the Monet extensible database kernel to manage different representations of multimedia objects. To maintain independence between content, meta-data, and the creation of meta-data, we allow distribution of data and operations using CORBA. This open architecture introduces new problems for data access. From an end user's perspective, the problem is how to search the available representations to fulfill an actual information need; the conceptual gap between human perceptual processes and the meta-data is too large. From a system's perspective, several representations of the data may semantically overlap or be irrelevant. We address these problems with an iterative query process and active user participation through relevance feedback. A retrieval model based on inference networks assists the user with query formulation. The integration of this model into the database design has two advantages. First, the user can query both the logical and the content structure of multimedia objects. Second, the use of different data models in the logical and the physical database design provides data independence and allows algebraic query optimization. We illustrate query processing with a music retrieval application.
Tonkin, Emma; Brimblecombe, Julie; Wycherley, Thomas Philip
2017-03-01
Smartphone applications are increasingly being used to support nutrition improvement in community settings. However, there is a scarcity of practical literature to support researchers and practitioners in choosing or developing health applications. This work maps the features, key content, theoretical approaches, and methods of consumer testing of applications intended for nutrition improvement in community settings. A systematic, scoping review methodology was used to map published, peer-reviewed literature reporting on applications with a specific nutrition-improvement focus intended for use in the community setting. After screening, articles were grouped into 4 categories: dietary self-monitoring trials, nutrition improvement trials, application description articles, and qualitative application development studies. For mapping, studies were also grouped into categories based on the target population and aim of the application or program. Of the 4818 titles identified from the database search, 64 articles were included. The broad categories of features found to be included in applications generally corresponded to different behavior change support strategies common to many classic behavioral change models. Key content of applications generally focused on food composition, with tailored feedback most commonly used to deliver educational content. Consumer testing before application deployment was reported in just over half of the studies. Collaboration between practitioners and application developers promotes an appropriate balance of evidence-based content and functionality. This work provides a unique resource for program development teams and practitioners seeking to use an application for nutrition improvement in community settings. PMID:28298274
ERIC Educational Resources Information Center
Kurtz, Michael J.; Eichorn, Guenther; Accomazzi, Alberto; Grant, Carolyn S.; Demleitner, Markus; Murray, Stephen S.; Jones, Michael L. W.; Gay, Geri K.; Rieger, Robert H.; Millman, David; Bruggemann-Klein, Anne; Klein, Rolf; Landgraf, Britta; Wang, James Ze; Li, Jia; Chan, Desmond; Wiederhold, Gio; Pitti, Daniel V.
1999-01-01
Includes six articles that discuss a digital library for astronomy; comparing evaluations of digital collection efforts; cross-organizational access management of Web-based resources; searching scientific bibliographic databases based on content-based relations between documents; semantics-sensitive retrieval for digital picture libraries; and…
NASA Astrophysics Data System (ADS)
Endres, Christian P.; Schlemmer, Stephan; Schilke, Peter; Stutzki, Jürgen; Müller, Holger S. P.
2016-09-01
The Cologne Database for Molecular Spectroscopy, CDMS, was founded in 1998 to provide in its catalog section line lists of mostly molecular species which are or may be observed in various astronomical sources (usually) by radio astronomical means. The line lists contain transition frequencies with qualified accuracies, intensities, quantum numbers, as well as further auxiliary information. They have been generated from critically evaluated experimental line lists, mostly from laboratory experiments, employing established Hamiltonian models. Separate entries exist for different isotopic species and usually also for different vibrational states. As of December 2015, the number of entries is 792. They are available online as ascii tables with additional files documenting information on the entries. The Virtual Atomic and Molecular Data Centre, VAMDC, was founded more than 5 years ago as a common platform for atomic and molecular data. This platform facilitates exchange not only between spectroscopic databases related to astrophysics or astrochemistry, but also with collisional and kinetic databases. A dedicated infrastructure was developed to provide a common data format in the various databases enabling queries to a large variety of databases on atomic and molecular data at once. For CDMS, the incorporation in VAMDC was combined with several modifications on the generation of CDMS catalog entries. Here we introduce related changes to the data structure and the data content in the CDMS. The new data scheme allows us to incorporate all previous data entries but in addition allows us also to include entries based on new theoretical descriptions. Moreover, the CDMS entries have been transferred into a MySQL database format. These developments within the VAMDC framework have in part been driven by the needs of the astronomical community to be able to deal efficiently with large data sets obtained with the Herschel Space Telescope or, more recently, with the Atacama Large Millimeter Array.
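To illustrate the kind of query such catalog entries support, the sketch below filters a small line list by frequency range and intensity. The column layout is a simplified stand-in, not the actual CDMS catalog format; the example rows are the three lowest rotational transitions of CO.

```python
# Sketch of filtering a CDMS-style line list by frequency range and minimum
# intensity. The column layout here is simplified and hypothetical; the rows
# are the CO J=1-0, 2-1, and 3-2 transition frequencies (MHz).
import pandas as pd
from io import StringIO

catalog = StringIO("""freq_mhz unc_mhz lg_int qn_up qn_low
115271.2018 0.0005 -5.0105 1 0
230538.0000 0.0005 -4.1197 2 1
345795.9899 0.0005 -3.6118 3 2
""")
lines = pd.read_csv(catalog, sep=r"\s+")
band = lines[(lines.freq_mhz >= 200000) & (lines.freq_mhz <= 300000)
             & (lines.lg_int > -4.5)]
print(band)   # candidate transitions within the chosen frequency window
```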
An end to end secure CBIR over encrypted medical database.
Bellafqira, Reda; Coatrieux, Gouenou; Bouslimi, Dalel; Quellec, Gwenole
2016-08-01
In this paper, we propose a new secure content-based image retrieval (SCBIR) system adapted to the cloud framework. This solution allows a physician to retrieve images of similar content within an outsourced and encrypted image database, without decrypting them. In contrast to existing CBIR approaches in the encrypted domain, the originality of the proposed scheme stands on the fact that the features extracted from the encrypted images are themselves encrypted. This is achieved by means of homomorphic encryption and two non-colluding servers, both of which we consider honest but curious. In that way an end-to-end secure CBIR process is ensured. Experimental results carried out on a diabetic retinopathy database encrypted with the Paillier cryptosystem indicate that our SCBIR achieves retrieval performance as good as if images were processed in their non-encrypted form.
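Schemes of this kind rest on the additive homomorphism of the Paillier cryptosystem. The sketch below shows that primitive using the python-paillier (phe) library; it illustrates only the underlying building block, not the paper's two-server protocol.

```python
# Illustration of the Paillier homomorphic property that end-to-end secure
# CBIR relies on: a server can combine encrypted values without decrypting.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

enc_d1 = public_key.encrypt(0.42)   # e.g., an encrypted feature distance
enc_d2 = public_key.encrypt(0.17)
enc_sum = enc_d1 + enc_d2           # addition performed on ciphertexts
enc_scaled = enc_d1 * 3             # multiplication by a plaintext constant

print(private_key.decrypt(enc_sum))     # ~0.59; only the key holder can read it
print(private_key.decrypt(enc_scaled))  # ~1.26
```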
Text Mining to Support Gene Ontology Curation and Vice Versa.
Ruch, Patrick
2017-01-01
In this chapter, we explain how text mining can support the curation of molecular biology databases dealing with protein functions. We also show how curated data can play a disruptive role in the development of text mining methods. We review a decade of efforts to improve the automatic assignment of Gene Ontology (GO) descriptors, the reference ontology for the characterization of genes and gene products. To illustrate the high potential of this approach, we compare the performances of an automatic text categorizer and show a large improvement of +225% in both precision and recall on benchmarked data. We argue that automatic text categorization functions can ultimately be embedded into a Question-Answering (QA) system to answer questions related to protein functions. Because GO descriptors can be relatively long and specific, traditional QA systems cannot answer such questions. A new type of QA system, so-called Deep QA, which uses machine learning methods trained with curated contents, is thus emerging. Finally, future advances of text mining instruments are directly dependent on the availability of high-quality annotated contents at every curation step. Database workflows must start recording explicitly all the data they curate and ideally also some of the data they do not curate.
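A toy sketch of an automatic text categorizer for GO assignment, using TF-IDF features and a linear one-vs-rest classifier. The training pairs are invented miniatures; a real system of the kind reviewed would train on large curated corpora.

```python
# Minimal text-categorization sketch: map free text to GO descriptors with
# TF-IDF features and a linear classifier. Training pairs are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["kinase phosphorylates serine residues on the substrate",
         "protein binds double-stranded DNA at the promoter",
         "transporter moves glucose across the plasma membrane",
         "receptor kinase autophosphorylation activates signalling"]
labels = ["GO:0016301",   # kinase activity
          "GO:0003677",   # DNA binding
          "GO:0055085",   # transmembrane transport
          "GO:0016301"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(texts, labels)
# Expected to predict GO:0016301 given the overlapping kinase vocabulary.
print(clf.predict(["this enzyme phosphorylates threonine residues"]))
```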
Whelan, Brendan; Moros, Eduardo G; Fahrig, Rebecca; Deye, James; Yi, Thomas; Woodward, Michael; Keall, Paul; Siewerdsen, Jeff H
2017-04-01
To produce and maintain a database of National Institutes of Health (NIH) funding of the American Association of Physicists in Medicine (AAPM) members, to perform a top-level analysis of these data, and to make these data (hereafter referred to as the AAPM research database) available for the use of the AAPM and its members. NIH-funded research dating back to 1985 is available for public download through the NIH exporter website, and AAPM membership information dating back to 2002 was supplied by the AAPM. To link these two sources of data, a data mining algorithm was developed in Matlab. The false-positive rate was manually estimated based on a random sample of 100 records, and the false-negative rate was assessed by comparing against 99 member-supplied PI_ID numbers. The AAPM research database was queried to produce an analysis of trends and demographics in research funding dating from 2002 to 2015. A total of 566 PI_ID numbers were matched to AAPM members. False-positive and -negative rates were respectively 4% (95% CI: 1-10%, N = 100) and 10% (95% CI: 5-18%, N = 99). Based on analysis of the AAPM research database, in 2015 the NIH awarded $USD 110M to members of the AAPM. The four NIH institutes which historically awarded the most funding to AAPM members were the National Cancer Institute, National Institute of Biomedical Imaging and Bioengineering, National Heart Lung and Blood Institute, and National Institute of Neurological Disorders and Stroke. In 2015, over 85% of the total NIH research funding awarded to AAPM members was via these institutes, representing 1.1% of their combined budget. In the same year, 2.0% of AAPM members received NIH funding for a total of $116M, which is lower than the historic mean of $120M (in 2015 USD). A database of NIH-funded research awarded to AAPM members has been developed and tested using a data mining approach, and a top-level analysis of funding trends has been performed. Current funding of AAPM members is lower than the historic mean. The database will be maintained by members of the Working group for the development of a research database (WGDRD) on an annual basis, and is available to the AAPM, its committees, working groups, and members for download through the AAPM electronic content website. A wide range of questions regarding financial and demographic funding trends can be addressed by these data. This report has been approved for publication by the AAPM Science Council.
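A hedged sketch of the record-linkage step such a project requires: matching member names to principal-investigator records on a normalized key, with candidate matches left for manual review. The actual algorithm was implemented in Matlab and its details are not given here; field names and data below are purely illustrative.

```python
# Illustrative name-matching join between a member roster and PI records,
# keyed on normalized last name plus first initial. All data are invented.
def key(last, first):
    return (last.strip().lower(), first.strip().lower()[:1])

members = [("Smith", "Jane"), ("Keall", "Paul")]
nih_pis = [("SMITH", "JANE B", "PI_9001"), ("KEALL", "PAUL J", "PI_9002"),
           ("SMITH", "JOHN", "PI_9003")]

index = {}
for last, first, pi_id in nih_pis:
    index.setdefault(key(last, first), []).append(pi_id)

for last, first in members:
    matches = index.get(key(last, first), [])
    # Manual review of samples like these yields false-positive/negative rates.
    print(f"{first} {last}: candidate PI_IDs {matches}")
```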
Public Opinion Poll Question Databases: An Evaluation
ERIC Educational Resources Information Center
Woods, Stephen
2007-01-01
This paper evaluates five polling resources: iPOLL, Polling the Nations, Gallup Brain, Public Opinion Poll Question Database, and Polls and Surveys. Content was evaluated on disclosure standards from major polling organizations, scope on a model for public opinion polls, and presentation on a flow chart discussing search limitations and usability.
Trials by Juries: Suggested Practices for Database Trials
ERIC Educational Resources Information Center
Ritterbush, Jon
2012-01-01
Librarians frequently utilize product trials to assess the content and usability of a database prior to committing funds to a new subscription or purchase. At the 2012 Electronic Resources and Libraries Conference in Austin, Texas, three librarians presented a panel discussion on their institutions' policies and practices regarding database…
2002-01-01
to the OODBMS approach. The ORDBMS approach produced such research prototypes as Postgres [155] and Starburst [67], and commercial products such as... [Reference-list residue: Stonebraker and Kemnitz, "The POSTGRES Next-Generation Database Management System," Communications of the ACM, 34(10):78-92, 1991.]
NASA Technical Reports Server (NTRS)
Serke, David J.; King, Michael Christopher; Hansen, Reid; Reehorst, Andrew L.
2016-01-01
National Aeronautics and Space Administration (NASA) and the National Center for Atmospheric Research (NCAR) have developed an icing remote sensing technology that has demonstrated skill at detecting and classifying icing hazards in a vertical column above an instrumented ground station. This technology has recently been extended to provide volumetric coverage surrounding an airport. Building on the existing vertical pointing system, the new method for providing volumetric coverage utilizes a vertical pointing cloud radar, a multi-frequency microwave radiometer with azimuth and elevation pointing, and a NEXRAD radar. The new terminal area icing remote sensing system processes the data streams from these instruments to derive temperature, liquid water content, and cloud droplet size for each examined point in space. These data are then combined to ultimately provide icing hazard classification along defined approach paths into an airport. To date, statistical comparisons of the vertical profiling technology have been made to Pilot Reports and Icing Forecast Products. With the extension into relatively large area coverage and the output of microphysical properties in addition to icing severity, the use of these comparators is not appropriate and a more rigorous assessment is required. NASA conducted a field campaign during the early months of 2015 to develop a database to enable the assessment of the new terminal area icing remote sensing system and further refinement of terminal area icing weather information technologies in general. In addition to the ground-based remote sensors listed earlier, in-situ icing environment measurements by weather balloons were performed to produce a comprehensive comparison database. Balloon data gathered consisted of temperature, humidity, pressure, super-cooled liquid water content, and 3-D position with time. Comparison data plots of weather balloon and remote measurements, weather balloon flight paths, bulk comparisons of integrated liquid water content and icing cloud extent agreement, and terminal-area hazard displays are presented. Discussions of agreement quality and paths for future development are also included.
NASA Astrophysics Data System (ADS)
Reisser, Moritz; Purves, Ross; Schmidt, Michael W. I.; Abiven, Samuel
2016-08-01
Pyrogenic carbon (PyC) is considered one of the most stable components in soil and can represent more than 30% of total soil organic carbon (SOC). However, few estimates of global PyC stock or distribution exist and thus PyC is not included in any global carbon cycle models, despite its potential major relevance for the soil pool. To obtain a global picture, we reviewed the literature for published PyC content in SOC data. We generated the first PyC database, including more than 560 measurements from 55 studies. Despite limitations due to the heterogeneous distribution of the studied locations and gaps in the database, we were able to produce a worldwide PyC inventory. We found that global PyC represents on average 13.7% of the SOC and can reach up to 60%, making it one of the largest groups of identifiable compounds in soil, together with polysaccharides. We observed a consistent range of PyC content in SOC, despite the diverse methods of quantification. We tested the PyC content against different environmental explanatory variables: fire and land use (fire characteristics, land use, net primary productivity), climate (temperature, precipitation, climatic zones, altitude) and pedogenic properties (clay content, pH, SOC content). Surprisingly, soil properties explain PyC content the most. Soils with clay content higher than 50% contain significantly more PyC (> 30% of the SOC) than soils with clay content lower than 5% (< 6% of the SOC). Alkaline soils contain at least 50% more PyC than acidic soils. Furthermore, climatic conditions, represented by climatic zone or mean temperature or precipitation, correlate significantly with the PyC content. By contrast, fire characteristics could only explain PyC content if site-specific information was available. Datasets derived from remote sensing did not explain the PyC content. To show the potential of this database, we used it in combination with other global datasets to create a worldwide PyC content map and stock estimate, which resulted in around 200 Pg PyC for the uppermost 2 meters. These modelled estimates indicated a clear mismatch between the location of the current PyC studies and the geographical zones where we expect high PyC stocks.
Tobacco imagery in video games: ratings and gamer recall.
Forsyth, Susan R; Malone, Ruth E
2016-09-01
To assess whether tobacco content found in video games was appropriately labelled for tobacco-related content by the Entertainment and Software Ratings Board (ESRB). Sixty-five gamer participants (self-identified age range 13-50) were interviewed in-person (n=25) or online (n=40) and asked (A) to list favourite games and (B) to name games that they could recall containing tobacco content. The ESRB database was searched for all games mentioned to ascertain whether they had been assigned tobacco-related content descriptors. Games were independently assessed for tobacco content by examining user-created game wiki sites and watching YouTube videos of gameplay. Games with tobacco-related ESRB content descriptors and/or with tobacco imagery verified by researchers were considered to contain tobacco content. Games identified by participants as including tobacco but lacking verifiable tobacco content were treated as not containing tobacco content. Participants recalled playing 140 unique games, of which 118 were listed in the ESRB database. Participants explicitly recalled tobacco content in 31% (37/118) of the games, of which 94% (35/37) included independently verified tobacco content. Only 8% (9/118) of the games had received ESRB tobacco-related content descriptors, but researchers verified that 42% (50/118) contained such content; 42% (49/118) of games were rated 'M' for mature (content deemed appropriate for ages 17+). Of these, 76% (37/49) contained verified tobacco content; however, only 4% (2/49) received ESRB tobacco-related content descriptors. Gamers are exposed to tobacco imagery in many video games. The ESRB is not a reliable source for determining whether video games contain tobacco imagery.
Maalouf, Joyce; Cogswell, Mary E.; Yuan, Keming; Martin, Carrie; Gillespie, Cathleen; Ahuja, Jaspreet KC; Pehrsson, Pamela; Merritt, Robert
2015-01-01
The sodium concentrations (mg/100 g) of 23 of 125 Sentinel Foods (e.g., white bread) were identified in the 2009 CDC Packaged Food Database (PFD) and compared with data in the USDA's 2013 National Nutrient Database for Standard Reference (SR 26). Sentinel Foods are foods identified by USDA to be monitored as primary indicators to assess changes in the sodium content of commercially processed foods from stores and restaurants. Overall, 937 products were evaluated in the CDC PFD, with between 3 (one brand of ready-to-eat cereal) and 126 products (white bread) evaluated per selected food. The mean sodium concentrations of 17 of the 23 (74%) selected foods in the CDC PFD were 90%–110% of the mean sodium concentrations in SR 26, and differences in sodium concentration were statistically significant for 6 Sentinel Foods. The sodium concentration of most of the Sentinel Foods, as selected in the PFD, appeared to represent the sodium concentrations of the corresponding food category. The results of our study help improve the understanding of how nutrition information compares between national analytic values and the label, and whether the selected Sentinel Foods represent their corresponding food category as indicators for assessment of change in the sodium content of the food supply. PMID:26484010
Involving Practicing Scientists in K-12 Science Teacher Professional Development
NASA Astrophysics Data System (ADS)
Bertram, K. B.
2011-12-01
The Science Teacher Education Program (STEP) offered a unique framework for creating professional development courses focused on Arctic research from 2006-2009. Under the STEP framework, science, technology, engineering, and math (STEM) training was delivered by teams of practicing Arctic researchers in partnership with master teachers with 20+ years experience teaching STEM content in K-12 classrooms. Courses based on the framework were offered to educators across Alaska. STEP offered in-person summer-intensive institutes and follow-on audio-conferenced field-test courses during the academic year, supplemented by online scientist mentorship for teachers. During STEP courses, teams of scientists offered in-depth STEM content instruction at the graduate level for teachers of all grade levels. STEP graduate-level training culminated in the translation of information and data learned from Arctic scientists into standard-aligned lessons designed for immediate use in K-12 classrooms. This presentation will focus on research that explored the question: To what degree was scientist involvement beneficial to teacher training and to what degree was STEP scientist involvement beneficial to scientist instructors? Data sources reveal consistently high levels of ongoing (4 year) scientist and teacher participation; high STEM content learning outcomes for teachers; high STEM content learning outcomes for students; high ratings of STEP courses by scientists and teachers; and a discussion of the reasons scientists indicate they benefited from STEP involvement. Analyses of open-ended comments by teachers and scientists support and clarify these findings. A grounded theory approach was used to analyze teacher and scientist qualitative feedback. Comments were coded and patterns analyzed in three databases. The vast majority of teacher open-ended comments indicate that STEP involvement improved K-12 STEM classroom instruction, and the vast majority of scientist open-ended comments focus on the benefits scientists received from networking with K-12 teachers. The classroom lessons resulting from STEP have been so popular among teachers, the Alaska Department of Education and Early Development recently contracted with the PI to create a website that will make the STEP database open to teachers across Alaska. When the Alaska Department of Education and Early Development launched the new website in August 2011, the name of the STEP program was changed to the Alaska K-12 Science Curricular Initiative (AKSCI). The STEP courses serving as the foundation to the new AKSCI site are located under the "History" tab of the new website.
Graphical user interfaces for symbol-oriented database visualization and interaction
NASA Astrophysics Data System (ADS)
Brinkschulte, Uwe; Siormanolakis, Marios; Vogelsang, Holger
1997-04-01
In this approach, two basic services designed for the engineering of computer-based systems are combined: a symbol-oriented man-machine service and a high-speed database service. The man-machine service is used to build graphical user interfaces (GUIs) for the database service; these interfaces are stored using the database service. The idea is to create a GUI-builder and a GUI-manager for the database service based upon the man-machine service using the concept of symbols. With user-definable and predefined symbols, database contents can be visualized and manipulated in a very flexible and intuitive way. Using the GUI-builder and GUI-manager, users can build and operate their own graphical user interfaces for a given database according to their needs without writing a single line of code.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garnier, Ch.; Mailhe, P.; Sontheimer, F.
2007-07-01
Fuel performance is a key factor for minimizing operating costs in nuclear plants. One of the important aspects of fuel performance is fuel rod design, based upon reliable tools able to verify the safety of current fuel solutions, prevent potential issues in new core managements and guide the invention of tomorrow's fuels. AREVA is developing its future global fuel rod code COPERNIC3, which is able to calculate the thermal-mechanical behavior of advanced fuel rods in nuclear plants. Some of the best practices to achieve this goal are described, by reviewing the three pillars of a fuel rod code: the database, the modelling and the computer and numerical aspects. At first, the COPERNIC3 database content is described, accompanied by the tools developed to effectively exploit the data. Then is given an overview of the main modelling aspects, by emphasizing the thermal, fission gas release and mechanical sub-models. In the last part, numerical solutions are detailed in order to increase the computational performance of the code, with a presentation of software configuration management solutions. (authors)
NeisseriaBase: a specialised Neisseria genomic resource and analysis platform.
Zheng, Wenning; Mutha, Naresh V R; Heydari, Hamed; Dutta, Avirup; Siow, Cheuk Chuen; Jakubovics, Nicholas S; Wee, Wei Yee; Tan, Shi Yang; Ang, Mia Yang; Wong, Guat Jah; Choo, Siew Woh
2016-01-01
Background. The gram-negative Neisseria is associated with two of the most potent human epidemic diseases: meningococcal meningitis and gonorrhoea. In both cases, disease is caused by bacteria colonizing human mucosal membrane surfaces. Overall, the genus shows great diversity and genetic variation, mainly due to its ability to acquire and incorporate genetic material from a diverse range of sources through horizontal gene transfer. Although a number of databases exist for the Neisseria genomes, they are mostly focused on the pathogenic species. In this study we present the freely available NeisseriaBase, a database dedicated to the genus Neisseria encompassing the complete and draft genomes of 15 pathogenic and commensal Neisseria species. Methods. The genomic data were retrieved from the National Center for Biotechnology Information (NCBI) and annotated using the RAST server, and then stored in the MySQL database. The protein-coding genes were further analyzed to obtain information such as GC content (%), predicted hydrophobicity and molecular weight (Da) using in-house Perl scripts. The web application was developed following the secure four-tier web application architecture: (1) client workstation, (2) web server, (3) application server, and (4) database server. The web interface was constructed using PHP, JavaScript, jQuery, AJAX and CSS, utilizing the model-view-controller (MVC) framework. The in-house developed bioinformatics tools implemented in NeisseriaBase were developed using Python, Perl, BioPerl and R languages. Results. Currently, NeisseriaBase houses 603,500 Coding Sequences (CDSs), 16,071 RNAs and 13,119 tRNA genes from 227 Neisseria genomes. The database is equipped with interactive web interfaces. Incorporation of the JBrowse genome browser in the database enables fast and smooth browsing of Neisseria genomes. NeisseriaBase includes the standard BLAST program to facilitate homology searching, and for Virulence Factor Database (VFDB) specific homology searches, the VFDB BLAST is also incorporated into the database. In addition, NeisseriaBase is equipped with in-house designed tools such as the Pairwise Genome Comparison tool (PGC) for comparative genomic analysis and the Pathogenomics Profiling Tool (PathoProT) for the comparative pathogenomics analysis of Neisseria strains. Discussion. This user-friendly database not only provides access to a host of genomic resources on Neisseria but also enables high-quality comparative genome analysis, which is crucial for the expanding scientific community interested in Neisseria research. This database is freely available at http://neisseria.um.edu.my. PMID:27017950
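Of the per-gene statistics mentioned, GC content is simple to reproduce. The sketch below is a Python equivalent of what such in-house scripts compute (the database's own scripts were written in Perl); the sequence is a toy example.

```python
# GC content (%) of a coding sequence: fraction of G and C bases, as computed
# per gene in resources like the one described above. Toy sequence below.
def gc_content(seq: str) -> float:
    seq = seq.upper()
    return 100.0 * (seq.count("G") + seq.count("C")) / len(seq)

cds = "ATGGCGTGCAACGGCTAA"   # toy coding sequence
print(f"GC content: {gc_content(cds):.1f}%")   # 55.6%
```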
de Medeiros, Ana Claudia Torres; da Nóbrega, Maria Miriam Lima; Rodrigues, Rosalina Aparecida Partezani; Fernandes, Maria das Graças Melo
2013-01-01
To develop nursing diagnosis statements for the elderly based on the Activities of Living Model and on the International Classification for Nursing Practice. This descriptive and exploratory study was carried out in two stages: 1) collection of terms and concepts considered clinically and culturally relevant for nursing care delivered to the elderly, in order to develop a database of terms, and 2) development of nursing diagnosis statements for the elderly in primary health care, based on the guidelines of the International Council of Nurses and on the database of terms for nursing practice involving the elderly. A total of 414 terms were identified and submitted to a content validation process, with the participation of ten nursing experts, which resulted in 263 validated terms. These terms were submitted to cross-mapping with the terms of the International Classification for Nursing Practice, resulting in the identification of 115 listed terms and 148 non-listed terms, which constituted the database of terms, from which 127 nursing diagnosis statements were prepared and classified into factors that affect the performance of the elderly's activities of living: 69 into biological factors, 19 into psychological, 31 into sociocultural, five into environmental, and three into political-economic factors. After clinical validation, these statements can serve as a guide for nursing consultations with elderly patients, without ignoring clinical experience, critical thinking and decision-making.
Mitigation of Fluorosis - A Review
Dodamani, Arun S.; Jadhav, Harish C.; Naik, Rahul G.; Deshmukh, Manjiri A.
2015-01-01
Fluoride is required for normal development and growth of the body. It is found in plentiful quantity in the environment, and the fluoride content of drinking water is the largest contributor to daily fluoride intake. The behaviour of fluoride ions in the human organism can be regarded as that of a “double-edged sword”: fluoride is beneficial in small amounts but toxic in large amounts. Excessive consumption of fluoride in various forms leads to the development of fluorosis. Fluorosis is a major health problem in 24 countries, including India, which lies in the geographical fluoride belt. Various technologies are being used to remove fluoride from water, but the problem has still not been rooted out. The purpose of this paper is to review the available treatment modalities for fluorosis, available technologies for fluoride removal from water, and ongoing fluorosis mitigation programs, based on a literature survey. Medline was the primary database used in the literature search. Other databases included PubMed, Web of Science, Google Scholar, WHO, Ebscohost, Science Direct, the Google search engine, etc. PMID:26266235
NASA Astrophysics Data System (ADS)
Adrian, Brian; Zollman, Dean; Stevens, Scott
2006-02-01
To demonstrate how state-of-the-art video databases can address issues related to the lack of preparation of many physics teachers, we have created the prototype Physics Teaching Web Advisory (Pathway). Pathway's Synthetic Interviews and related video materials are beginning to provide pre-service and out-of-field in-service teachers with much-needed professional development and well-prepared teachers with new perspectives on teaching physics. The prototype was limited to a demonstration of the systems. Now, with an additional grant we will extend the system and conduct research and evaluation on its effectiveness. This project will provide virtual expert help on issues of pedagogy and content. In particular, the system will convey, by example and explanation, contemporary ideas about the teaching of physics and applications of physics education research. The research effort will focus on the value of contemporary technology to address the continuing education of teachers who are teaching in a field in which they have not been trained.
NASA Astrophysics Data System (ADS)
Zhan, Aibin; Bao, Zhenmin; Wang, Mingling; Chang, Dan; Yuan, Jian; Wang, Xiaolong; Hu, Xiaoli; Liang, Chengzhu; Hu, Jingjie
2008-05-01
The EST database of the Pacific abalone (Haliotis discus) was mined for developing microsatellite markers. A total of 1476 EST sequences were registered in GenBank when data mining was performed. Fifty sequences (approximately 3.4%) were found to contain one or more microsatellites. Based on the length and GC content of the flanking regions, cluster analysis and BLASTN, 13 microsatellite-containing ESTs were selected for PCR primer design. The results showed that 10 out of 13 primer pairs could amplify scorable PCR products and showed polymorphism. The number of alleles ranged from 2 to 13 and the values of Ho and He varied from 0.1222 to 0.8611 and 0.2449 to 0.9311, respectively. No significant linkage disequilibrium (LD) between any pairs of these loci was found, and 6 of 10 loci conformed to the Hardy-Weinberg equilibrium (HWE). These EST-SSRs are therefore potential tools for studies of intraspecies variation and hybrid identification.
Tang, Qi; Li, Jiajia; Sun, Mei; Lv, Jun; Gai, Ruoyan; Mei, Lin; Xu, Lingzhong
2015-02-01
Over the past few decades, the field of food security has witnessed numerous problems and incidents that have garnered public attention. Given this serious situation, the food traceability system (FTS) has become part of the expanding food safety continuum to reduce the risk of food safety problems. This article reviews a great deal of the related literature and results from previous studies of FTS to corroborate this contention. This article describes the development and benefits of FTS in developed countries like the United States of America (USA), Japan, and some European countries. Problems with existing FTS in China are noted, including a lack of a complete database, inadequate laws and regulations, and lagging technological research into FTS. This article puts forward several suggestions for the future, including improvement of information websites, clarification of regulatory responsibilities, and promotion of technological research.
How Documentalists Update SIMBAD
NASA Astrophysics Data System (ADS)
Buga, M.; Bot, C.; Brouty, M.; Bruneau, C.; Brunet, C.; Cambresy, L.; Eisele, A.; Genova, F.; Lesteven, S.; Loup, C.; Neuville, M.; Oberto, A.; Ochsenbein, F.; Perret, E.; Siebert, A.; Son, E.; Vannier, P.; Vollmer, B.; Vonflie, P.; Wenger, M.; Woelfel, F.
2015-04-01
The Strasbourg astronomical Data Center (CDS) was created in 1972 and has had a major role in astronomy for more than forty years. CDS develops a service called SIMBAD that provides basic data, cross-identifications, bibliography, and measurements for astronomical objects outside the solar system. It offers the scientific community added-value content that is updated daily by a team of documentalists working in close collaboration with astronomers and IT specialists. We explain how the CDS staff updates SIMBAD with object citations in the main astronomical journals, as well as with astronomical data and measurements. We also explain how the identification is made between the objects found in the literature and those already existing in SIMBAD. We show the steps followed by the documentalist team to update the database using different tools developed at CDS, like the sky visualizer Aladin and the large catalogues and survey database VizieR. As a direct result of this teamwork, SIMBAD integrates almost 10,000 bibliographic references per year. The service receives more than 400,000 queries per day.
Vidaña-Pérez, Dèsirée; Braverman-Bronstein, Ariela; Basto-Abreu, Ana; Barrientos-Gutierrez, Inti; Hilscher, Rainer; Barrientos-Gutierrez, Tonatiuh
2018-01-11
Background: Video games are widely used by children and adolescents and have become a significant source of exposure to sexual content. Despite evidence of the important role of media in the development of sexual attitudes and behaviours, little attention has been paid to monitoring sexual content in video games. Methods: Data were obtained on sexual content and ratings for 23,722 video games from 1994 to 2013 from the Entertainment Software Rating Board database; release dates and information on the top 100 selling video games were also obtained. The yearly prevalence of sexual content according to rating categories was calculated. Trends and comparisons were estimated using Joinpoint regression. Results: Sexual content was present in 13% of the video games. Games rated 'Mature' had the highest prevalence of sexual content (34.5%), followed by 'Teen' (30.7%) and 'E10+' (21.3%). Over time, sexual content decreased in the 'Everyone' category, 'E10+' maintained a low prevalence, and 'Teen' and 'Mature' showed marked increases. Both top and non-top video games showed constant increases, with top selling video games having 10.1% more sexual content across the period of study. Conclusion: Over the last 20 years, the prevalence of sexual content has increased in video games with a 'Teen' or 'Mature' rating. Further studies are needed to quantify the potential association between sexual content in video games and sexual behaviour in children and adolescents.
Case retrieval in medical databases by fusing heterogeneous information.
Quellec, Gwénolé; Lamard, Mathieu; Cazuguel, Guy; Roux, Christian; Cochener, Béatrice
2011-01-01
A novel content-based heterogeneous information retrieval framework, particularly well suited to browsing medical databases and supporting new generation computer-aided diagnosis (CADx) systems, is presented in this paper. It was designed to retrieve possibly incomplete documents, consisting of several images and semantic information, from a database; more complex data types such as videos can also be included in the framework. The proposed retrieval method relies on image processing, in order to characterize each individual image in a document by its digital content, and on information fusion. Once the available images in a query document are characterized, a degree of match between the query document and each reference document stored in the database is defined for each attribute (an image feature or a metadata item). A Bayesian network is used to recover missing information if need be. Finally, two novel information fusion methods are proposed to combine these degrees of match, in order to rank the reference documents by decreasing relevance for the query. In the first method, the degrees of match are fused by the Bayesian network itself. In the second method, they are fused by the Dezert-Smarandache theory: the second approach lets us model our confidence in each source of information (i.e., each attribute) and take it into account in the fusion process for better retrieval performance. The proposed methods were applied to two heterogeneous medical databases, a diabetic retinopathy database and a mammography screening database, for computer-aided diagnosis. Precisions at five of 0.809 ± 0.158 and 0.821 ± 0.177, respectively, were obtained for these two databases, which is very promising.
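A much-simplified illustration of the final ranking step: per-attribute degrees of match are fused into one relevance score. A confidence-weighted average stands in for the paper's Bayesian-network and Dezert-Smarandache fusion, and all attribute names, values, and weights below are invented.

```python
# Simplified fusion-and-ranking sketch; not the paper's fusion methods.
def fuse(degrees, weights):
    """Combine per-attribute degrees of match into a single relevance score."""
    total = sum(weights.values())
    return sum(weights[a] * d for a, d in degrees.items()) / total

# Hypothetical per-attribute degrees of match in [0, 1] for two cases.
reference_docs = {
    "case_042": {"image_texture": 0.91, "age": 0.60, "lesion_count": 0.75},
    "case_117": {"image_texture": 0.55, "age": 0.95, "lesion_count": 0.40},
}
# Hypothetical confidence in each source of information (each attribute).
confidence = {"image_texture": 3.0, "age": 1.0, "lesion_count": 2.0}

# Rank reference documents by decreasing relevance for the query.
ranked = sorted(reference_docs,
                key=lambda c: fuse(reference_docs[c], confidence),
                reverse=True)
print(ranked)
```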
NASA Astrophysics Data System (ADS)
Radhakrishnan, A.; Balaji, V.; Schweitzer, R.; Nikonov, S.; O'Brien, K.; Vahlenkamp, H.; Burger, E. F.
2016-12-01
There are distinct phases in the development cycle of an Earth system model. During the model development phase, scientists make changes to code and parameters and require rapid access to results for evaluation. During the production phase, scientists may make an ensemble of runs with different settings and produce large quantities of output that must be further analyzed and quality controlled for scientific papers and submission to international projects such as the Climate Model Intercomparison Project (CMIP). During this phase, provenance is a key concern: being able to track back from outputs to inputs. We will discuss one of the paths taken at GFDL in delivering tools across this lifecycle, offering on-demand analysis of data by integrating the use of GFDL's in-house FRE-Curator, Unidata's THREDDS, and NOAA PMEL's Live Access Servers (LAS). Experience over this lifecycle suggests that a major difficulty in developing analysis capabilities lies only partially in the scientific content; much of the effort is devoted to answering the questions "where is the data?" and "how do I get to it?". FRE-Curator is the name of a database-centric paradigm used at NOAA GFDL to ingest information about the model runs into an RDBMS (the Curator database). The components of FRE-Curator are integrated into the Flexible Runtime Environment workflow and can be invoked during climate model simulation. The front end to FRE-Curator, known as the Model Development Database Interface (MDBI), provides in-house web-based access to GFDL experiments: metadata, analysis output, and more. In order to provide on-demand visualization, MDBI uses the Live Access Server, a highly configurable web server designed to provide flexible access to geo-referenced scientific data that makes use of OPeNDAP. Model output saved in GFDL's tape archive, the size of the database and experiments, and continuous model development initiatives with more dynamic configurations all add complexity and challenges to providing an on-demand visualization experience to our GFDL users.
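A toy sketch of the database-centric idea behind FRE-Curator, using SQLite; the table layout and values are invented for illustration and do not reflect GFDL's actual Curator schema.

```python
# Minimal run-metadata ingest sketch: record provenance so that outputs
# can be traced back to inputs. Schema and values are hypothetical.
import sqlite3

conn = sqlite3.connect("curator.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS runs (
        run_id TEXT PRIMARY KEY,
        experiment TEXT,
        input_namelist TEXT,  -- provenance: which configuration produced it
        output_path TEXT      -- answers "where is the data?"
    )""")
conn.execute("INSERT OR REPLACE INTO runs VALUES (?, ?, ?, ?)",
             ("r0042", "CM4_piControl", "input.nml", "/archive/r0042/"))
conn.commit()

# Answer "how do I get to it?" for one experiment.
for (path,) in conn.execute(
        "SELECT output_path FROM runs WHERE experiment = ?",
        ("CM4_piControl",)):
    print(path)
```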
Shaath, M Kareem; Yeranosian, Michael G; Ippolito, Joseph A; Adams, Mark R; Sirkin, Michael S; Reilly, Mark C
2018-05-02
Orthopaedic trauma fellowship applicants use online-based resources when researching information on potential U.S. fellowship programs. The 2 primary sources for identifying programs are the Orthopaedic Trauma Association (OTA) database and the San Francisco Match (SF Match) database. Previous studies in other orthopaedic subspecialty areas have demonstrated considerable discrepancies among fellowship programs. The purpose of this study was to analyze content and availability of information on orthopaedic trauma surgery fellowship web sites. The online databases of the OTA and SF Match were reviewed to determine the availability of embedded program links or external links for the included programs. Thereafter, a Google search was performed for each program individually by typing the program's name, followed by the term "orthopaedic trauma fellowship." All identified fellowship web sites were analyzed for accessibility and content. Web sites were evaluated for comprehensiveness in mentioning key components of the orthopaedic trauma surgery curriculum. By consensus, we refined the final list of variables utilizing the methodology of previous studies on the topic. We identified 54 OTA-accredited fellowship programs, offering 87 positions. The majority (94%) of programs had web sites accessible through a Google search. Of the 51 web sites found, all (100%) described their program. Most commonly, hospital affiliation (88%), operative experiences (76%), and rotation overview (65%) were listed, and, least commonly, interview dates (6%), selection criteria (16%), on-call requirements (20%), and fellow evaluation criteria (20%) were listed. Programs with ≥2 fellows provided more information with regard to education content (p = 0.0001) and recruitment content (p = 0.013). Programs with Accreditation Council for Graduate Medical Education (ACGME) accreditation status also provided greater information with regard to education content (odds ratio, 4.0; p = 0.0001). Otherwise, no differences were seen by region, residency affiliation, medical school affiliation, or hospital affiliation. The SF Match and OTA databases provide few direct links to fellowship web sites. Individual program web sites do not effectively and completely convey information about the programs. The Internet is an underused resource for fellow recruitment. The lack of information on these sites allows for future opportunity to optimize this resource.
Computer-Assisted Telephone Screening: A New System for Patient Evaluation and Recruitment
Radcliffe, Jeanne M.; Latham, Georgia S.; Sunderland, Trey; Lawlor, Brian A.
1990-01-01
Recruitment of subjects for research studies is a time-consuming process for any research coordinator. This paper introduces three computerized databases designed to help screen potential research candidates by telephone. The three programs discussed are designed to evaluate specific populations: geriatric patients (i.e., Alzheimer's patients), patients with affective disorders, and normal volunteers. The interview content, software development, and utility of these programs are discussed, with particular focus on how they can be helpful in the research setting.
Preliminary surficial geologic map database of the Amboy 30 x 60 minute quadrangle, California
Bedford, David R.; Miller, David M.; Phelps, Geoffrey A.
2006-01-01
The surficial geologic map database of the Amboy 30x60 minute quadrangle presents characteristics of surficial materials for an area approximately 5,000 km2 in the eastern Mojave Desert of California. This map consists of new surficial mapping conducted between 2000 and 2005, as well as compilations of previous surficial mapping. Surficial geology units are mapped and described based on depositional process and age categories that reflect the mode of deposition, pedogenic effects occurring post-deposition, and, where appropriate, the lithologic nature of the material. The physical properties recorded in the database focus on those that drive hydrologic, biologic, and physical processes such as particle size distribution (PSD) and bulk density. This version of the database is distributed with point data representing locations of samples for both laboratory determined physical properties and semi-quantitative field-based information. Future publications will include the field and laboratory data as well as maps of distributed physical properties across the landscape tied to physical process models where appropriate. The database is distributed in three parts: documentation, spatial map-based data, and printable map graphics of the database. Documentation includes this file, which provides a discussion of the surficial geology and describes the format and content of the map data, a database 'readme' file, which describes the database contents, and FGDC metadata for the spatial map information. Spatial data are distributed as Arc/Info coverage in ESRI interchange (e00) format, or as tabular data in the form of DBF3-file (.DBF) file formats. Map graphics files are distributed as Postscript and Adobe Portable Document Format (PDF) files, and are appropriate for representing a view of the spatial database at the mapped scale.
DamaGIS: a multisource geodatabase for collection of flood-related damage data
NASA Astrophysics Data System (ADS)
Saint-Martin, Clotilde; Javelle, Pierre; Vinet, Freddy
2018-06-01
Every year in France, recurring flood events result in several million euros of damage, and reducing the heavy consequences of floods has become a high priority. However, actions to reduce the impact of floods are often hindered by the lack of damage data on past flood events. The present paper introduces a new database for the collection and assessment of flood-related damage. The DamaGIS database offers an innovative bottom-up approach to gather and identify damage data from multiple sources, including new media. The study area has been defined as the south of France, considering the high frequency of floods there over the past years. This paper presents the structure and contents of the database, along with operating instructions so that damage data can continue to be collected within it. It also describes an easily reproducible method to assess the severity of flood damage regardless of the location or date of occurrence; a simplified sketch of such a scoring scheme is given below. A first analysis of the damage contents is provided in order to assess data quality and the relevance of the database. According to this analysis, despite its lack of comprehensiveness, the DamaGIS database presents many advantages: high data accuracy, simplicity of use, accessibility in multiple formats, and open access. The DamaGIS database is available at https://doi.org/10.5281/zenodo.1241089.
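The following is an illustration only: the attributes and weights are invented and do not reproduce the DamaGIS severity scheme.

```python
# Hypothetical ordinal severity score for one geolocated damage record,
# reproducible regardless of the location or date of the event.
def damage_severity(record):
    """Return an ordinal severity score from hypothetical damage attributes."""
    score = 0
    if record.get("casualties", 0) > 0:
        score += 3                       # human impact dominates the score
    if record.get("buildings_damaged", 0) > 0:
        score += 2
    if record.get("roads_cut", False):
        score += 1
    return score

print(damage_severity({"casualties": 0,
                       "buildings_damaged": 4,
                       "roads_cut": True}))   # -> 3
```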
Wavelet optimization for content-based image retrieval in medical databases.
Quellec, G; Lamard, M; Cazuguel, G; Cochener, B; Roux, C
2010-04-01
In this article we propose a content-based image retrieval (CBIR) method for diagnosis aid in medical fields. In the proposed system, images are indexed in a generic fashion, without extracting domain-specific features: a signature is built for each image from its wavelet transform. These image signatures characterize the distribution of wavelet coefficients in each subband of the decomposition. A distance measure is then defined to compare two image signatures and thus retrieve the most similar images in a database when a query image is submitted by a physician. To retrieve relevant images from a medical database, the signatures and the distance measure must be related to the medical interpretation of images. As a consequence, we introduce several degrees of freedom in the system so that it can be tuned to any pathology and image modality. In particular, we propose to adapt the wavelet basis, within the lifting scheme framework, and to use a custom decomposition scheme. Weights are also introduced between subbands. All these parameters are tuned by an optimization procedure, using the medical grading of each image in the database to define a performance measure. The system is assessed on two medical image databases, one for diabetic retinopathy follow-up and one for screening mammography, as well as a general-purpose database. Results are promising: a mean precision of 56.50%, 70.91% and 96.10% is achieved for these three databases when five images are returned by the system. Copyright 2009 Elsevier B.V. All rights reserved.
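The signature-and-distance core of such a system can be sketched with PyWavelets; the paper's basis adaptation via the lifting scheme and the optimization of subband weights are omitted, and the wavelet, level, and images below are illustrative.

```python
# Wavelet-signature CBIR sketch: per-subband coefficient statistics plus a
# weighted Euclidean distance. Not the paper's optimized system.
import numpy as np
import pywt

def signature(image, wavelet="db2", level=3):
    """Characterize the distribution of wavelet coefficients per subband."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    sig = []
    for detail in coeffs[1:]:            # detail subbands at each level
        for band in detail:              # horizontal/vertical/diagonal
            sig.extend([np.mean(np.abs(band)), np.std(band)])
    return np.array(sig)

def distance(sig_a, sig_b, weights=None):
    """Weighted Euclidean distance between two image signatures."""
    w = np.ones_like(sig_a) if weights is None else weights
    return float(np.sqrt(np.sum(w * (sig_a - sig_b) ** 2)))

img_query = np.random.rand(128, 128)     # stand-in for a medical image
img_ref = np.random.rand(128, 128)
print(distance(signature(img_query), signature(img_ref)))
```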
Evidence generation from healthcare databases: recommendations for managing change.
Bourke, Alison; Bate, Andrew; Sauer, Brian C; Brown, Jeffrey S; Hall, Gillian C
2016-07-01
There is an increasing reliance on databases of healthcare records for pharmacoepidemiology and other medical research, and such resources are often accessed over a long period of time so it is vital to consider the impact of changes in data, access methodology and the environment. The authors discuss change in communication and management, and provide a checklist of issues to consider for both database providers and users. The scope of the paper is database research, and changes are considered in relation to the three main components of database research: the data content itself, how it is accessed, and the support and tools needed to use the database. Copyright © 2016 John Wiley & Sons, Ltd.
NASA Technical Reports Server (NTRS)
Lancaster, Jeff; Dillard, Michael; Alves, Erin; Olofinboba, Olu
2014-01-01
The User Guide details the Access Database provided with the Flight Deck Interval Management (FIM) Display Elements, Information, & Annunciations program. The goal of this User Guide is to support ease of use and the ability to quickly retrieve and select items of interest from the Database. The Database includes FIM Concepts identified in a literature review preceding the publication of this document. Only items that are directly related to FIM (e.g., spacing indicators), which change or enable FIM (e.g., menu with control buttons), or which are affected by FIM (e.g., altitude reading) are included in the database. The guide has been expanded from previous versions to cover database structure, content, and search features with voiced explanations.
Valdez-Moreno, Martha; Quintal-Lizama, Carolina; Gómez-Lozano, Ricardo; García-Rivas, María del Carmen
2012-01-01
Background In the Mexican Caribbean, the exotic lionfish Pterois volitans has become a species of great concern because of its predatory habits and rapid expansion onto the Mesoamerican coral reef, the second largest continuous reef system in the world. This is the first report of DNA identification of the stomach contents of lionfish using the barcode of life reference database (BOLD). Methodology/Principal Findings We confirm with barcoding that only Pterois volitans is apparently present in the Mexican Caribbean. We analyzed the stomach contents of 157 specimens of P. volitans from various locations in the region. Based on DNA matches in the Barcode of Life Database (BOLD) and GenBank, we identified fishes from five orders, 14 families, 22 genera and 34 species in the stomach contents. The families with the most species represented were Gobiidae and Apogonidae. Some prey taxa are commercially important species. Seven species were new records for the Mexican Caribbean: Apogon mosavi, Coryphopterus venezuelae, C. thrix, C. tortugae, Lythrypnus minimus, Starksia langi and S. ocellata. DNA matches, as well as the presence of intact lionfish in the stomach contents, indicate some degree of cannibalism, a behavior confirmed in this species for the first time. We obtained 45 distinct crustacean prey sequences, of which only 20 taxa could be identified from the BOLD and GenBank databases. The matches were primarily to Decapoda, but only a single taxon could be identified to the species level, Euphausia americana. Conclusions/Significance This technique proved to be an efficient and useful method, especially since prey species could be identified from partially-digested remains. The primary limitation is the lack of comprehensive coverage of potential prey species in the region in the BOLD and GenBank databases, especially among invertebrates. PMID:22675470
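For context, a barcode read can be matched against GenBank programmatically; the sketch below uses Biopython's NCBI BLAST client with a dummy COI fragment. BOLD's own identification engine, which the study also used, is not shown.

```python
# Hedged sketch: submit a (dummy) COI barcode fragment to NCBI BLAST and
# print the top hits. Network access and an arbitrary sequence assumed.
from Bio.Blast import NCBIWWW, NCBIXML

coi_fragment = "ACTATACCTAATCTTCGGCGCATGAGCTGGAATAGTGGGCAC"  # dummy read

handle = NCBIWWW.qblast("blastn", "nt", coi_fragment)
record = NCBIXML.read(handle)
for alignment in record.alignments[:3]:
    # Title of the matched GenBank entry and the E-value of the best HSP.
    print(alignment.title, alignment.hsps[0].expect)
```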
[Teleteaching in otorhinolaryngology. Part 2: the European database "medicstream"].
Fuchs, M; Strauß, G; Werner, T; Bootz, F
2003-02-01
Searching the medical literature is mainly characterized by the use of databases. The time needed for reviewing and printing a paper causes a delay between the first presentation of scientific results in a lecture and the appearance of the corresponding journal publication. Moreover, the possibilities for integrating multimedia data are limited. Thanks to the development of new compression algorithms for audiovisual digitization software, the near-simultaneous transmission of scientific presentations, both visually and aurally, has been possible since 1999. The telemedicine section of the interdisciplinary study group on image-guided surgical navigation at the University of Leipzig has developed "medicstream" as a unique audiovisual scientific internet database. It allows the documentation of presentations at congresses and courses, including the discussion, over a freely accessible, widely available homepage, using an in-house streaming technology. Before admission to the database, all presentations are examined for content by an authorized editorial committee consisting of representatives of different medical specialties. A total of 392 presentations from seven scientific meetings can be selected through an integrated search engine and viewed with Windows Media Player or RealPlayer using a conventional or faster internet connection to the homepage www.medicstream.de. The quality of the audiovisual transmission depends on the data transmission rate on the receiver side; even the minimum variant of 56 kB/s provides good recognizability of the contents. In an analysis period of 242 days, we registered 44,199 accesses and 4,488 attendances. The telemedicine database "medicstream" can optimize quality and extend medical education through live streaming as well as by archiving scientific presentations as audiovisual data.
Sunshine Act: shedding light on inaccurate disclosures at a gynecologic annual meeting.
Thompson, Jennifer C; Volpe, Katherine A; Bridgewater, Lindsay K; Qeadan, Fares; Dunivan, Gena C; Komesu, Yuko M; Cichowski, Sara B; Jeppson, Peter C; Rogers, Rebecca G
2016-11-01
Physicians and hospital systems often have relationships with biomedical manufacturers to develop new ideas and products and to further education. Because this relationship can influence medical research and practice, the reporting of disclosures is necessary to reveal any potential bias and inform consumers. The Sunshine Act was created to develop a new reporting system for these financial relationships, called the Open Payments database. Currently, all disclosures submitted with research to scientific meetings are at the discretion of the physician. We hypothesized that financial relationships between authors and the medical industry are underreported. We aimed to describe concordance between physicians' financial disclosures listed in the abstract book from the 41st annual scientific meeting of the Society of Gynecologic Surgeons and physician payments reported to the Center for Medicaid and Medicare Services Open Payments database for the same year. Authors and scientific committee members responsible for the content of the 41st annual scientific meeting of the Society of Gynecologic Surgeons were identified from the published abstract book; each abstract listed disclosures for each author. Abstract disclosures were compared with the transactions recorded in the Center for Medicaid and Medicare Services Open Payments database for concordance. Two authors reviewed each nondisclosed Center for Medicaid and Medicare Services listing to determine the relatedness between the company listed and the abstract content. Abstracts and disclosures of 335 physicians meeting inclusion criteria were reviewed. A total of 209 of 335 physicians (62%) had transactions reported in the Center for Medicaid and Medicare Services, which totaled $1.99 million. Twenty-four of 335 physicians (7%) listed companies with their abstracts; the disclosures of 5 of those 24 physicians were concordant with the Center for Medicaid and Medicare Services. The total amount of all nondisclosed transactions was $1.3 million. Transactions reported in the Center for Medicaid and Medicare Services associated with a single physician ranged from $11.72 to $405,903.36. Of the 209 physicians with Center for Medicaid and Medicare Services transactions that were not disclosed, the majority (68%) had at least 1 company listed in the Center for Medicaid and Medicare Services that was determined after review to be related to the subject of their abstract. Voluntary disclosure of financial relationships was poor, and the majority of unlisted disclosures in the abstract book were companies related to the scientific content of the abstract. Better transparency is needed from physicians responsible for the content presented at gynecological scientific meetings. Copyright © 2016 Elsevier Inc. All rights reserved.
Voss, Erica A; Makadia, Rupa; Matcho, Amy; Ma, Qianli; Knoll, Chris; Schuemie, Martijn; DeFalco, Frank J; Londhe, Ajit; Zhu, Vivienne; Ryan, Patrick B
2015-05-01
To evaluate the utility of applying the Observational Medical Outcomes Partnership (OMOP) Common Data Model (CDM) across multiple observational databases within an organization and to apply standardized analytics tools for conducting observational research. Six deidentified patient-level datasets were transformed to the OMOP CDM. We evaluated the extent of information loss that occurred through the standardization process. We developed a standardized analytic tool to replicate the cohort construction process from a published epidemiology protocol and applied the analysis to all 6 databases to assess time-to-execution and comparability of results. Transformation to the CDM resulted in minimal information loss across all 6 databases. Patients and observations were excluded due to identified data quality issues in the source system; 96% to 99% of condition records and 90% to 99% of drug records were successfully mapped into the CDM using the standard vocabulary. The full cohort replication and descriptive baseline summary was executed for 2 cohorts in 6 databases in less than 1 hour. The standardization process improved data quality, increased efficiency, and facilitated cross-database comparisons to support a more systematic approach to observational research. Comparisons across data sources showed consistency in the impact of inclusion criteria using the protocol and identified differences in patient characteristics and coding practices across databases. Standardizing data structure (through a CDM), content (through a standard vocabulary with source code mappings), and analytics can enable an institution to apply a network-based approach to observational research across multiple, disparate observational health databases. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.
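Because the CDM standardizes table and column names, one cohort query can run unchanged on every transformed database. A minimal sketch follows; the connection and concept ID are illustrative and this is not the study's protocol.

```python
# Cohort-construction sketch against standard OMOP CDM tables
# (person, condition_occurrence). Assumes the CDM tables already exist
# in the database being opened.
import sqlite3

conn = sqlite3.connect("omop_cdm.db")   # stand-in for any CDM database
cohort = conn.execute("""
    SELECT DISTINCT co.person_id
    FROM condition_occurrence AS co
    JOIN person AS p ON p.person_id = co.person_id
    WHERE co.condition_concept_id = 201820   -- e.g. diabetes mellitus
      AND p.year_of_birth <= 1960
""").fetchall()
print(len(cohort), "persons in cohort")
```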
Ito, Kei
2010-01-01
A digital brain atlas is a kind of image database that specifically provides information about neurons and glial cells in the brain. It has various advantages that are unmatched by conventional paper-based atlases. Such advantages, however, may become disadvantages if appropriate care is not taken. Because digital atlases can provide an unlimited amount of data, they should be designed to minimize redundancy and keep the records consistent, even though the records may be added incrementally by different staff members. The fact that digital atlases can easily be revised necessitates a system to assure that users can access previous versions that might have been cited in papers at a particular period. To pass our knowledge on to our descendants, such databases should be maintained for a very long period, well over 100 years, like printed books and papers. Technical and organizational measures to enable long-term archiving should be considered seriously. Compared to the initial development of the database, subsequent efforts to increase the quality and quantity of its contents are not regarded highly, because such tasks do not materialize in the form of publications. This fact strongly discourages continuous expansion of, and external contributions to, digital atlases after their initial launch. To solve these problems, the role of biocurators is vital. Appreciation of the scientific achievements of people who do not write papers, and establishment of a secure academic career path for them, are indispensable for recruiting talent for this very important job. PMID:20661458
Sodium and sugar in complementary infant and toddler foods sold in the United States.
Cogswell, Mary E; Gunn, Janelle P; Yuan, Keming; Park, Sohyun; Merritt, Robert
2015-03-01
To evaluate the sodium and sugar content of US commercial infant and toddler foods. We used a 2012 nutrient database of 1074 US infant and toddler foods and drinks developed from a commercial database, manufacturer Web sites, and major grocery stores. Products were categorized on the basis of their main ingredients and the US Food and Drug Administration's reference amounts customarily consumed per eating occasion (RACC). Sodium and sugar contents and the presence of added sugars were determined. All but 2 of the 657 infant vegetables, dinners, fruits, dry cereals, and ready-to-serve mixed grains and fruits were low sodium (≤140 mg/RACC). The majority of these foods did not contain added sugars; however, 41 of 79 infant mixed grains and fruits contained ≥1 added sugar, and 35 also contained >35% of calories from sugar. Seventy-two percent of 72 toddler dinners were high in sodium content (>210 mg/RACC). Toddler dinners contained an average of 2295 mg of sodium per 1000 kcal (212 mg/100 g). Savory infant/toddler snacks (n = 34) contained an average of 1382 mg of sodium per 1000 kcal (486 mg/100 g); 1 was high sodium. Thirty-two percent of toddler dinners and the majority of toddler cereal bars/breakfast pastries, fruits, and infant/toddler snacks, desserts, and juices contained ≥1 added sugar. Commercial toddler foods and infant or toddler snacks, desserts, and juice drinks are of potential concern due to sodium or sugar content. Pediatricians should advise parents to look carefully at labels when selecting commercial toddler foods and to limit salty snacks, sweet desserts, and juice drinks. Copyright © 2015 by the American Academy of Pediatrics.
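A small sketch of applying the sodium cutoffs reported above to a product table; the file and column names are hypothetical, not the study's database.

```python
# Illustrative sketch: flag high-sodium toddler foods using the
# >210 mg per RACC cutoff cited above, and compute sodium density.
import pandas as pd

foods = pd.read_csv("toddler_foods.csv")   # hypothetical nutrient export

foods["high_sodium"] = foods["sodium_mg_per_racc"] > 210
foods["sodium_mg_per_1000kcal"] = foods["sodium_mg"] / foods["kcal"] * 1000

# Share of high-sodium products per category, as a percentage.
print(foods.groupby("category")["high_sodium"].mean().mul(100).round(1))
```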
Communication Lower Bounds and Optimal Algorithms for Programs that Reference Arrays - Part 1
2013-05-14
Examples include tensor contractions, the direct N-body algorithm, database join, and computing matrix powers A^k. Section 8 summarizes the results and outlines the contents of Part 2 of this paper, which discusses how to compute lower bounds; the analysis begins from a geometric model.
Tag Content Access Control with Identity-based Key Exchange
NASA Astrophysics Data System (ADS)
Yan, Liang; Rong, Chunming
2010-09-01
Radio Frequency Identification (RFID) technology, used to identify objects and users, has recently been applied in many areas such as retail and supply chains. Preventing unauthorized readout of tag content is a core RFID privacy problem. The hash-lock access control protocol makes a tag release its content only to a reader that knows the secret key shared between them. However, to obtain the shared secret key required by this protocol, the reader needs to communicate with a back-end database. In this paper, we propose to use an identity-based secret key exchange approach to generate the secret key required for the hash-lock access control protocol. With this approach, not only is a back-end database connection no longer needed, but the tag cloning problem can be eliminated at the same time.
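The hash-lock idea is easy to model. The toy sketch below shows a tag releasing its content only for the correct key; the paper's identity-based key exchange, which removes the back-end database lookup, is abstracted into a pre-shared session key.

```python
# Toy hash-lock model: the tag stores metaID = hash(key) and unlocks only
# for a reader presenting a key that hashes to the stored metaID.
import hashlib

def h(key: bytes) -> bytes:
    return hashlib.sha256(key).digest()

class Tag:
    def __init__(self, content: str, key: bytes):
        self.content = content
        self.meta_id = h(key)     # tag stays locked, exposing only metaID

    def query(self, key: bytes):
        # Release content only if the supplied key matches the lock.
        return self.content if h(key) == self.meta_id else None

# Stand-in for the key produced by the identity-based exchange.
shared_key = b"session-key-from-identity-based-exchange"
tag = Tag("pallet #4711, 24 units", shared_key)
print(tag.query(b"wrong-key"))   # None: unauthorized readout prevented
print(tag.query(shared_key))     # content released to authorized reader
```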
Fatty acid, cholesterol, vitamin, and mineral content of cooked beef cuts from a national study
USDA-ARS?s Scientific Manuscript database
The U.S. Department of Agriculture (USDA) provides foundational nutrient data for U.S. and international databases. For currency of retail beef data in USDA’s database, a nationwide comprehensive study obtained samples by primal categories using a statistically based sampling plan, resulting in 72 ...
Dangers of Noncritical Use of Historical Plague Data
Roosen, Joris
2018-01-01
Researchers have published several articles on historical plague epidemics using impressive digital databases that contain thousands of recorded outbreaks across Europe over the past several centuries. Through the digitization of preexisting data sets, scholars have unprecedented access to the historical record of plague occurrences. However, although these databases offer new research opportunities, noncritical use and reproduction of preexisting data sets can also limit our understanding of how infectious diseases evolved. Many scholars have performed investigations using Jean-Noël Biraben's data, which contain information on mentions of plague from various kinds of sources, many of which were not cited. When scholars fail to apply source criticism or do not reflect on the content of the data they use, the reliability of their results becomes highly questionable. Researchers using these databases going forward need to verify and restrict content spatially and temporally, and historians should be encouraged to compile the work.
NASA Astrophysics Data System (ADS)
Karam, Lina J.; Zhu, Tong
2015-03-01
The varying quality of face images is an important challenge that limits the effectiveness of face recognition technology when applied in real-world applications. Existing face image databases do not consider the effect of distortions that commonly occur in real-world environments. This database (QLFW) represents an initial attempt to provide a set of labeled face images spanning the wide range of quality, from no perceived impairment to strong perceived impairment for face detection and face recognition applications. Types of impairment include JPEG2000 compression, JPEG compression, additive white noise, Gaussian blur and contrast change. Subjective experiments are conducted to assess the perceived visual quality of faces under different levels and types of distortions and also to assess the human recognition performance under the considered distortions. One goal of this work is to enable automated performance evaluation of face recognition technologies in the presence of different types and levels of visual distortions. This will consequently enable the development of face recognition systems that can operate reliably on real-world visual content in the presence of real-world visual distortions. Another goal is to enable the development and assessment of visual quality metrics for face images and for face detection and recognition applications.
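The distortion types listed above can be generated with standard imaging tools. A sketch with Pillow and NumPy follows; the levels are chosen arbitrarily and do not reproduce the QLFW protocol, and a synthetic image stands in for a face photograph.

```python
# Generating JPEG compression, Gaussian blur, contrast change, and additive
# white noise for a test image; parameters are illustrative only.
import io
import numpy as np
from PIL import Image, ImageEnhance, ImageFilter

face = Image.fromarray((np.random.rand(128, 128) * 255).astype(np.uint8))

blurred = face.filter(ImageFilter.GaussianBlur(radius=3))   # Gaussian blur
low_contrast = ImageEnhance.Contrast(face).enhance(0.4)     # contrast change

buf = io.BytesIO()
face.save(buf, format="JPEG", quality=10)                   # JPEG compression
buf.seek(0)
jpeg_degraded = Image.open(buf)

arr = np.asarray(face, dtype=np.float32)
noisy = np.clip(arr + np.random.normal(0, 15, arr.shape), 0, 255)
white_noise = Image.fromarray(noisy.astype(np.uint8))       # additive noise
```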
GeneStoryTeller: a mobile app for quick and comprehensive information retrieval of human genes
Eleftheriou, Stergiani V.; Bourdakou, Marilena M.; Athanasiadis, Emmanouil I.; Spyrou, George M.
2015-01-01
In the last few years, mobile devices such as smartphones and tablets have become an integral part of everyday life, due to their rapid software/hardware development, as well as the increased portability they offer. Nevertheless, up to now, only a few apps capable of fast and robust access to services have been developed in the field of bioinformatics. We have developed GeneStoryTeller, a mobile application for Android platforms, where users are able to instantly retrieve information regarding any recorded human gene, derived from eight publicly available databases, as a summary story. Complementary information regarding gene-drug interactions, functional annotation and disease associations for each selected gene is also provided in the gene story. The most challenging part of developing GeneStoryTeller was to keep a balance between storing data locally within the app and obtaining updated content dynamically via a network connection. This was accomplished with the implementation of an administrative site where data are curated and synchronized with the application, requiring minimal human intervention. Database URL: http://bioserver-3.bioacademy.gr/Bioserver/GeneStoryTeller/. PMID:26055097
NASA Video Catalog. Supplement 12
NASA Technical Reports Server (NTRS)
2002-01-01
This issue of the NASA Video Catalog cites 1878 video productions listed in the NASA STI Database. The videos listed have been developed by the NASA centers, covering Shuttle mission press conferences; fly-bys of planets; aircraft design, testing and performance; environmental pollution; lunar and planetary exploration; and many other categories related to manned and unmanned space exploration. Each entry in the publication consists of a standard bibliographic citation accompanied by an abstract. The entries are arranged by STAR categories. A complete Table of Contents describes the scope of each category. For users looking for specific titles, a Title Index is available. A Subject Term Index, based on the NASA Thesaurus, is also included. Guidelines for usage of NASA audio/visual material, ordering information, and order forms are also provided.
[Design and establishment of modern literature database about acupuncture Deqi].
Guo, Zheng-rong; Qian, Gui-feng; Pan, Qiu-yin; Wang, Yang; Xin, Si-yuan; Li, Jing; Hao, Jie; Hu, Ni-juan; Zhu, Jiang; Ma, Liang-xiao
2015-02-01
A search on acupuncture Deqi was conducted in four Chinese-language biomedical databases (CNKI, Wan-Fang, VIP and CBM) and the PubMed database, using keywords such as "Deqi", "needle sensation", "needling feeling", "needle feel" and "obtaining qi". A "Modern Literature Database for Acupuncture Deqi" was then established using Microsoft SQL Server 2005 Express Edition, specifying the contents, data types, information structure and logical constraints of the system table fields. From this database, detailed inquiries can be made about general information on clinical trials, acupuncturists' experience, ancient medical works, comprehensive literature, etc. The present database lays a foundation for subsequent evaluation of literature quality concerning Deqi and for data mining of as yet undetected Deqi knowledge.
Description of 'REQUEST-KYUSHYU' for KYUKEICHO regional data base
NASA Astrophysics Data System (ADS)
Takimoto, Shin'ichi
Kyushu Economic Research Association (a foundational juridical person) recently initiated the regional database service 'REQUEST-Kyushu'. It is a full-scale database compiled from the information and know-how that the Association has accumulated over forty years. It comprises a regional information database of journal and newspaper articles and a statistical information database of economic statistics. The former is searched on a personal computer, and the search result (original text) is then sent by facsimile. The latter is also searched on a personal computer, where the data can be processed, edited or downloaded. This paper describes the characteristics, content and system outline of 'REQUEST-Kyushu'.
Wright, T.L.; Takahashi, T.J.
1998-01-01
The Hawaii bibliographic database has been created to contain all of the literature, from 1779 to the present, pertinent to the volcanological history of the Hawaiian-Emperor volcanic chain. References are entered in a PC- and Macintosh-compatible EndNote Plus bibliographic database with keywords and abstracts or (if no abstract) with annotations as to content. Keywords emphasize location, discipline, process, identification of new chemical data or age determinations, and type of publication. The database is updated approximately three times a year and is available for download from an FTP site. The bibliography contained 8460 references at the time this paper was submitted for publication. Use of the database greatly enhances the power and completeness of library searches for anyone interested in Hawaiian volcanism.
Developing High-resolution Soil Database for Regional Crop Modeling in East Africa
NASA Astrophysics Data System (ADS)
Han, E.; Ines, A. V. M.
2014-12-01
The most readily available soil data for regional crop modeling in Africa is the World Inventory of Soil Emission Potentials (WISE) dataset, which has 1125 soil profiles for the world but does not extensively cover Ethiopia, Kenya, Uganda and Tanzania in East Africa. Another available dataset is HC27 (Harvest Choice by IFPRI), in a gridded format (10 km) but composed of generic soil profiles based on only three criteria (texture, rooting depth, and organic carbon content). In this paper, we present the development and application of a high-resolution (1 km), gridded soil database for regional crop modeling in East Africa. Basic soil information is extracted from the Africa Soil Information Service (AfSIS), which provides essential soil properties (bulk density, soil organic carbon, soil pH and percentages of sand, silt and clay) for 6 different standardized soil layers (5, 15, 30, 60, 100 and 200 cm) at 1 km resolution. Soil hydraulic properties (e.g., field capacity and wilting point) are derived from the AfSIS soil dataset using well-proven pedo-transfer functions and are customized for DSSAT-CSM soil data requirements; a schematic example of this derivation step is sketched below. The crop model is used to evaluate crop yield forecasts using the new high-resolution soil database and compared with WISE and HC27. We will also present results from DSSAT loosely coupled with a hydrologic model (VIC) to assimilate root-zone soil moisture. Creating a grid-based soil database that provides a consistent soil input for the two different models (DSSAT and VIC) is a critical part of this work. The soil database is expected to contribute to future applications of DSSAT crop simulation in East Africa, where food security is highly vulnerable.
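The following is a heavily hedged illustration of the pedo-transfer step: the functional form and coefficients are placeholders, not the well-proven PTFs the paper uses.

```python
# Placeholder pedo-transfer functions: derive volumetric soil hydraulic
# inputs from basic AfSIS-style properties. Coefficients are invented.
def field_capacity(clay_pct, sand_pct, org_c_pct):
    """Volumetric water content at field capacity (illustrative linear PTF)."""
    return 0.25 + 0.005 * clay_pct - 0.001 * sand_pct + 0.01 * org_c_pct

def wilting_point(clay_pct, org_c_pct):
    """Volumetric water content at wilting point (illustrative linear PTF)."""
    return 0.05 + 0.004 * clay_pct + 0.005 * org_c_pct

# One hypothetical 1-km grid cell, 0-5 cm layer:
print(field_capacity(clay_pct=28, sand_pct=40, org_c_pct=1.2))
print(wilting_point(clay_pct=28, org_c_pct=1.2))
```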
Meier, Benjamin Mason; Cabrera, Oscar A; Ayala, Ana; Gostin, Lawrence O
2012-06-15
The O'Neill Institute for National and Global Health Law at Georgetown University, the World Health Organization, and the Lawyers Collective have come together to develop a searchable Global Health and Human Rights Database that maps the intersection of health and human rights in judgments, international and regional instruments, and national constitutions. Where states long remained unaccountable for violations of health-related human rights, litigation has arisen as a central mechanism in an expanding movement to create rights-based accountability. Facilitated by the incorporation of international human rights standards in national law, this judicial enforcement has supported the implementation of rights-based claims, giving meaning to states' longstanding obligations to realize the highest attainable standard of health. Yet despite these advancements, there has been insufficient awareness of the international and domestic legal instruments enshrining health-related rights and little understanding of the scope and content of litigation upholding these rights. As this accountability movement evolves, the Global Health and Human Rights Database seeks to chart this burgeoning landscape of international instruments, national constitutions, and judgments for health-related rights. Employing international legal research to document and catalogue these three interconnected aspects of human rights for the public's health, the Database's categorization by human rights, health topics, and regional scope provides a comprehensive means of understanding health and human rights law. Through these categorizations, the Global Health and Human Rights Database serves as a basis for analogous legal reasoning across states to serve as precedents for future cases, for comparative legal analysis of similar health claims in different country contexts, and for empirical research to clarify the impact of human rights judgments on public health outcomes. Copyright © 2012 Meier, Nygren-Krug, Cabrera, Ayala, and Gostin.
The inclusion of an online journal in PubMed central - a difficult path.
Grech, Victor
2016-01-01
The indexing of a journal in a prominent database (such as PubMed) is an important imprimatur. Journals accepted for inclusion in PubMed Central (PMC) are automatically indexed in PubMed but must provide the entire contents of their publications as XML-tagged (Extensible Markup Language) data files compliant with PubMed's document type definition (DTD). This paper describes the various attempts that the journal Images in Paediatric Cardiology made in its efforts to convert the journal contents (including all of the extant backlog) to PMC-compliant XML for archiving and indexing in PubMed after the journal was accepted for inclusion by the database.
Annotations of Mexican bullfighting videos for semantic index
NASA Astrophysics Data System (ADS)
Montoya Obeso, Abraham; Oropesa Morales, Lester Arturo; Fernando Vázquez, Luis; Cocolán Almeda, Sara Ivonne; Stoian, Andrei; García Vázquez, Mireya Saraí; Zamudio Fuentes, Luis Miguel; Montiel Perez, Jesús Yalja; de la O Torres, Saul; Ramírez Acosta, Alejandro Alvaro
2015-09-01
Video annotation is important for web indexing and browsing systems. Indeed, in order to evaluate the performance of video query and mining techniques, databases with concept annotations are required. It is therefore necessary to generate a database with a semantic index that represents the digital content of the Mexican bullfighting atmosphere. This paper proposes a scheme for making complex annotations in a video within the frame of a multimedia search engine project. Each video is partitioned using our segmentation algorithm, which creates shots of different lengths and numbers of frames (a generic shot-boundary sketch is shown below). To make complex annotations on the video, we use the ELAN software. The annotations are done in two steps: first, we note the overall content of each shot; second, we describe the actions through camera parameters such as direction, position and depth. As a consequence, we obtain a more complete descriptor of every action. In both cases we use the concepts of the TRECVid 2014 dataset, and we also propose new concepts. This methodology allows us to generate a database with the information necessary to create descriptors and algorithms capable of detecting actions, in order to automatically index and classify new bullfighting multimedia content.
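The paper's own segmentation algorithm is not detailed in the abstract; as a generic stand-in, a histogram-difference shot-boundary detector can be sketched with OpenCV. The threshold and file name below are arbitrary.

```python
# Generic shot-boundary detection: a cut is declared when consecutive
# frame histograms correlate poorly. Not the authors' algorithm.
import cv2

cap = cv2.VideoCapture("corrida.mp4")   # hypothetical bullfighting video
prev_hist, shot_starts, idx = None, [0], 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hist = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8],
                        [0, 256, 0, 256, 0, 256])
    hist = cv2.normalize(hist, hist).flatten()
    if prev_hist is not None:
        # Low correlation between consecutive histograms suggests a cut.
        if cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL) < 0.6:
            shot_starts.append(idx)
    prev_hist, idx = hist, idx + 1
cap.release()
print(len(shot_starts), "shots, starting at frames", shot_starts[:10])
```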
SIRSALE: integrated video database management tools
NASA Astrophysics Data System (ADS)
Brunie, Lionel; Favory, Loic; Gelas, J. P.; Lefevre, Laurent; Mostefaoui, Ahmed; Nait-Abdesselam, F.
2002-07-01
Video databases became an active field of research during the last decade. The main objective of such systems is to provide users with capabilities to search, access and play back distributed stored video data in the same friendly way as they do for traditional distributed databases. Hence, such systems need to deal with hard issues: (a) video documents generate huge volumes of data and are time sensitive (streams must be delivered at a specific bitrate), and (b) the content of video data is very hard to extract automatically and needs to be annotated by humans. To cope with these issues, many approaches have been proposed in the literature, including data models, query languages, video indexing, etc. In this paper, we present SIRSALE: a set of video database management tools that allow users to manipulate video documents and streams stored in large distributed repositories. All the proposed tools are based on generic models that can be customized for specific applications using ad hoc adaptation modules. More precisely, SIRSALE allows users to: (a) browse video documents by structure (sequences, scenes, shots) and (b) query the video database content by using a graphical tool adapted to the nature of the target video documents. This paper also presents an annotation interface which allows archivists to describe the content of video documents. All these tools are coupled to a video player integrating remote VCR functionalities and are based on active network technology. We also present how dedicated active services enable optimized transport of video streams (with Tamanoir active nodes). We then describe experiments using SIRSALE on an archive of news videos and soccer matches. The system has been demonstrated to professionals with positive feedback. Finally, we discuss open issues and present some perspectives.
Smith, Leann V; Blake, Jamilia J; Graves, Scott L; Vaughan-Jensen, Jessica; Pulido, Ryne; Banks, Courtney
2016-09-01
The recruitment of culturally and linguistically diverse students to graduate programs is critical to the overall growth and development of school psychology as a field. Program websites serve as an effective recruitment tool for attracting prospective students, yet there is limited research on how school psychology programs use their websites to recruit diverse students. The purpose of this study was to evaluate whether school psychology program websites include sufficient levels of diversity-related content critical for attracting diverse applicants. The website content of 250 professional psychology programs (165 school psychology training programs and 85 clinical and counseling psychology programs) was examined for the presence of themes of diversity and multiculturalism that prospective racially/ethnically and linguistically diverse students deem important when selecting a graduate program. Results indicated that school psychology programs had less diversity-related content on their websites relative to clinical and counseling psychology programs. Implications for improving the recruitment of racially/ethnically and linguistically diverse students through websites are discussed. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Licensing and Negotiations for Electronic Content
ERIC Educational Resources Information Center
Crawford, Amy R.
2008-01-01
This article provides an overview of the basic characteristics of database, or eContent, license agreements, defines general licensing terms, maps the anatomy of an electronic resources subscription agreement, and discusses negotiating skills and techniques for library staff. (Contains a list of additional resources and a sample agreement.)
The mineral content of tap water in United States households
USDA-ARS?s Scientific Manuscript database
The composition of tap water contributes to dietary intake of minerals. The USDA’s Nutrient Data Laboratory (NDL) conducted a study of the mineral content of residential tap water, to generate current data for the USDA National Nutrient Database. Sodium, potassium, calcium, magnesium, iron, copper...
Duchrow, Timo; Shtatland, Timur; Guettler, Daniel; Pivovarov, Misha; Kramer, Stefan; Weissleder, Ralph
2009-01-01
Background The breadth of biological databases and their information content continues to increase exponentially. Unfortunately, our ability to query such sources is still often suboptimal. Here, we introduce and apply community voting, database-driven text classification, and visual aids as a means to incorporate distributed expert knowledge, to automatically classify database entries and to efficiently retrieve them. Results Using a previously developed peptide database as an example, we compared several machine learning algorithms in their ability to classify abstracts of published literature results into categories relevant to peptide research, such as related or not related to cancer, angiogenesis, molecular imaging, etc. Ensembles of bagged decision trees met the requirements of our application best. No other algorithm consistently performed better in comparative testing. Moreover, we show that the algorithm produces meaningful class probability estimates, which can be used to visualize the confidence of automatic classification during the retrieval process. To allow viewing long lists of search results enriched by automatic classifications, we added a dynamic heat map to the web interface. We take advantage of community knowledge by enabling users to cast votes in Web 2.0 style in order to correct automated classification errors, which triggers reclassification of all entries. We used a novel framework in which the database "drives" the entire vote aggregation and reclassification process to increase speed while conserving computational resources and keeping the method scalable. In our experiments, we simulate community voting by adding various levels of noise to nearly perfectly labelled instances, and show that, under such conditions, classification can be improved significantly. Conclusion Using PepBank as a model database, we show how to build a classification-aided retrieval system that gathers training data from the community, is completely controlled by the database, scales well with concurrent change events, and can be adapted to add text classification capability to other biomedical databases. The system can be accessed at . PMID:19799796
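The classification component can be approximated with scikit-learn: bagged decision trees over bag-of-words abstract features yield the class-probability estimates that drive the confidence heat map. The data and labels below are invented.

```python
# Bagged decision trees with probability estimates, in the spirit of the
# paper's ensemble classifier. Toy corpus; scikit-learn >= 1.2 keyword.
from sklearn.ensemble import BaggingClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.tree import DecisionTreeClassifier

abstracts = ["peptide inhibits tumor angiogenesis in vivo",
             "peptide probe for molecular imaging of plaques",
             "synthesis protocol for cyclic peptides"]
labels = [1, 1, 0]   # e.g. related (1) / not related (0) to cancer research

X = CountVectorizer().fit_transform(abstracts)
clf = BaggingClassifier(estimator=DecisionTreeClassifier(), n_estimators=50)
clf.fit(X, labels)

# predict_proba gives the confidence values a heat map could visualize.
print(clf.predict_proba(X[:1]))
```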
SeqHound: biological sequence and structure database as a platform for bioinformatics research
2002-01-01
Background SeqHound has been developed as an integrated biological sequence, taxonomy, annotation and 3-D structure database system. It provides a high-performance server platform for bioinformatics research in a locally-hosted environment. Results SeqHound is based on the National Center for Biotechnology Information data model and programming tools. It offers daily updated contents of all Entrez sequence databases in addition to 3-D structural data and information about sequence redundancies, sequence neighbours, taxonomy, complete genomes, functional annotation including Gene Ontology terms and literature links to PubMed. SeqHound is accessible via a web server through a Perl, C or C++ remote API or an optimized local API. It provides functionality necessary to retrieve specialized subsets of sequences, structures and structural domains. Sequences may be retrieved in FASTA, GenBank, ASN.1 and XML formats. Structures are available in ASN.1, XML and PDB formats. Emphasis has been placed on complete genomes, taxonomy, domain and functional annotation as well as 3-D structural functionality in the API, while fielded text indexing functionality remains under development. SeqHound also offers a streamlined WWW interface for simple web-user queries. Conclusions The system has proven useful in several published bioinformatics projects such as the BIND database and offers a cost-effective infrastructure for research. SeqHound will continue to develop and be provided as a service of the Blueprint Initiative at the Samuel Lunenfeld Research Institute. The source code and examples are available under the terms of the GNU public license at the Sourceforge site http://sourceforge.net/projects/slritools/ in the SLRI Toolkit. PMID:12401134
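SeqHound's remote API is offered for Perl, C and C++; as a rough Python analogue of the kind of retrieval it supports (sequence records in FASTA format), here is a Biopython sketch against NCBI Entrez. This is not SeqHound's own API.

```python
# Fetch one GenBank nucleotide record as FASTA from NCBI Entrez.
from Bio import Entrez, SeqIO

Entrez.email = "you@example.org"   # courtesy identification required by NCBI
handle = Entrez.efetch(db="nucleotide", id="NM_000546",
                       rettype="fasta", retmode="text")
record = SeqIO.read(handle, "fasta")
print(record.id, len(record.seq))
```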
Zeni, Mary Beth
2012-03-01
The purpose of this study was to evaluate if paediatric asthma educational intervention studies included in the Cochrane Collaboration database incorporated concepts of health literacy. Inclusion criteria were established to identify review categories in the Cochrane Collaboration database specific to paediatric asthma educational interventions. Articles that met the inclusion criteria were selected from the Cochrane Collaboration database in 2010. The health literacy definition from Healthy People 2010 was used to develop a 4-point a priori rating scale to determine the extent a study reported aspects of health literacy in the development of an educational intervention for parents and/or children. Five Cochrane review categories met the inclusion criteria; 75 studies were rated for health literacy content regarding educational interventions with families and children living with asthma. A priori criteria were used for the rating process. While 52 (69%) studies had no information pertaining to health literacy, 23 (31%) reported an aspect of health literacy. Although all studies maintained the rigorous standards of randomized clinical trials, a model of health literacy was not reported regarding the design and implementation of interventions. While a more comprehensive health literacy model for the development of educational interventions with families and children may have been available after the reviewed studies were conducted, general literacy levels still could have been addressed. The findings indicate a need to incorporate health literacy in the design of client-centred educational interventions and in the selection criteria of relevant Cochrane reviews. Inclusion assures that health literacy is as important as randomization and statistical analyses in the research design of educational interventions and may even assure participation of people with literacy challenges. © 2012 The Author. International Journal of Evidence-Based Healthcare © 2012 The Joanna Briggs Institute.
2012-09-01
This work measures the relative performance of several conventional SQL and NoSQL ("not only SQL", i.e., non-relational) databases on a set of one billion file block hashes, in the context of digital forensics and sector hashing.
Systems and methods for automatically identifying and linking names in digital resources
Parker, Charles T.; Lyons, Catherine M.; Roston, Gerald P.; Garrity, George M.
2017-06-06
The present invention provides systems and methods for automatically identifying name-like strings in digital resources, matching these name-like strings against a set of names held in an expertly curated database, and, for those name-like strings found in said database, enhancing the content by associating additional matter with the name, wherein said matter includes information about the names held within said database and pointers to other digital resources which include the same name and its synonyms.
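A toy rendition of the patented pipeline: find name-like strings with a pattern, look them up in a curated table, and attach additional matter to the matches. The pattern and the curated entries below are invented.

```python
# Identify name-like strings, match against a curated names table, and
# enhance matches with associated data. Illustrative only.
import re

curated = {
    "Escherichia coli": {"taxon_id": "562", "see_also": ["E. coli"]},
    "Bacillus subtilis": {"taxon_id": "1423", "see_also": ["B. subtilis"]},
}

NAME_LIKE = re.compile(r"\b[A-Z][a-z]+ [a-z]+\b")   # crude binomial pattern

text = "Cultures of Escherichia coli and Bacillus subtilis were compared."
for candidate in NAME_LIKE.findall(text):
    entry = curated.get(candidate)
    if entry:
        # Enhance the resource: associate curated matter with the found name.
        print(candidate, "->", entry["taxon_id"], entry["see_also"])
```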
Staradmin -- Starlink User Database Maintainer
NASA Astrophysics Data System (ADS)
Fish, Adrian
The subject of this SSN is a utility called STARADMIN. This utility allows the system administrator to build and maintain a Starlink User Database (UDB). The principal source of information for each user is a text file named after their username. The content of each file is a list consisting of one keyword followed by the relevant user data per line. These user database files reside in a single directory. The STARADMIN program is used to manipulate these user data files and to automatically generate user summary lists.
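A minimal sketch of parsing one such keyword-per-line user file into a record; the exact Starlink file format, keywords, and directory layout are assumptions.

```python
# Parse 'KEYWORD value' lines from one per-user file into a dict.
from pathlib import Path

def read_user_record(path):
    """Return {keyword: value} for one user database file."""
    record = {}
    for line in Path(path).read_text().splitlines():
        if line.strip():
            keyword, _, value = line.partition(" ")
            record[keyword] = value.strip()
    return record

# e.g. a file named after the username in the UDB directory:
# print(read_user_record("/star/udb/jbloggs"))
```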
Worldwide nanotechnology development: a comparative study of USPTO, EPO, and JPO patents (1976-2004)
NASA Astrophysics Data System (ADS)
Li, Xin; Lin, Yiling; Chen, Hsinchun; Roco, Mihail C.
2007-12-01
To assess worldwide development of nanotechnology, this paper compares the numbers and contents of nanotechnology patents in the United States Patent and Trademark Office (USPTO), European Patent Office (EPO), and Japan Patent Office (JPO). It uses the patent databases as indicators of nanotechnology trends via bibliographic analysis, content map analysis, and citation network analysis on nanotechnology patents per country, institution, and technology field. The numbers of nanotechnology patents published in USPTO and EPO have continued to increase quasi-exponentially since 1980, while those published in JPO stabilized after 1993. Institutions and individuals located in the same region as a repository's patent office make a higher contribution to the nanotechnology patent publication in that repository (a "home advantage" effect). The USPTO and EPO databases had similar high-productivity contributing countries and technology fields with large numbers of patents, but quite different high-impact countries and technology fields when ranked by the average number of citations received. Bibliographic analysis on USPTO and EPO patents shows that researchers in the United States and Japan published larger numbers of patents than those in other countries, and that their patents were more frequently cited by other patents. Nanotechnology patents covered physics research topics in all three repositories. In addition, USPTO showed the broadest coverage of biomedical and electronics areas. The analysis of citations by technology field indicates that USPTO had a clear pattern of knowledge diffusion from highly cited fields to less cited fields, while in EPO knowledge exchange occurred mainly among highly cited fields.
DOE technology information management system database study report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Widing, M.A.; Blodgett, D.W.; Braun, M.D.
1994-11-01
To support the missions of the US Department of Energy (DOE) Special Technologies Program, Argonne National Laboratory is defining the requirements for an automated software system that will search electronic databases on technology. This report examines the work done and results to date. Argonne studied existing commercial and government sources of technology databases in five general areas: on-line services, patent database sources, government sources, aerospace technology sources, and general technology sources. First, it conducted a preliminary investigation of these sources to obtain information on the content, cost, frequency of updates, and other aspects of their databases. The Laboratory then performed detailed examinations of at least one source in each area. On this basis, Argonne recommended which databases should be incorporated in DOE's Technology Information Management System.
Information, intelligence, and interface: the pillars of a successful medical information system.
Hadzikadic, M; Harrington, A L; Bohren, B F
1995-01-01
This paper addresses three key issues facing developers of clinical and/or research medical information systems. 1. INFORMATION. The basic function of every database is to store information about the phenomenon under investigation. There are many ways to organize information in a computer; however, only a few will prove optimal for any real-life situation. Computer science theory has developed several approaches to database structure, with relational theory leading in popularity among end users [8]. Strict conformance to the rules of relational database design rewards the user with consistent data and flexible access to that data. A properly defined database structure minimizes redundancy, i.e., multiple storage of the same information. Redundancy introduces problems when updating a database, since the repeated value has to be updated in all locations--missing even a single value corrupts the whole database, and incorrect reports are produced [8]. To avoid such problems, relational theory offers a formal mechanism for determining the number and content of data files. These files not only preserve the conceptual schema of the application domain, but allow a virtually unlimited number of reports to be efficiently generated. 2. INTELLIGENCE. Flexible access enables the user to harvest additional value from collected data. This value is usually gained via reports defined at the time of database design. Although these reports are indispensable, more information can be extracted from the database with the proper tools. For example, machine learning, a sub-discipline of artificial intelligence, has been successfully used to extract knowledge from databases of varying size by uncovering correlations among fields and records [1-6, 9]. This knowledge, represented in the form of decision trees, production rules, and probabilistic networks, clearly adds a flavor of intelligence to the data collection and manipulation system. 3. INTERFACE. Despite the obvious importance of collecting data and extracting knowledge, current systems often impede these processes. Problems stem from a lack of user friendliness and functionality. To overcome these problems, several features of a successful human-computer interface have been identified [7], including the following "golden" rules of dialog design [7]: consistency, use of shortcuts for frequent users, informative feedback, organized sequences of actions, simple error handling, easy reversal of actions, user-oriented focus of control, and reduced short-term memory load. To this list of rules, we added visual representation of both data and query results, since our experience has demonstrated that users react much more positively to visual rather than textual information. In our design of the Orthopaedic Trauma Registry--under development at the Carolinas Medical Center--we have made every effort to follow the above rules. The results were rewarding: the end users not only want to use the product, but also to participate in its development.
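The redundancy argument in point 1 can be made concrete with a two-table sketch. In the following Python/sqlite3 fragment (table and column names are hypothetical, not drawn from the registry), the patient's name is stored exactly once and referenced by key, so a correction touches a single row and no stale copies can survive in the encounter records:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE patients (
            patient_id INTEGER PRIMARY KEY,
            name TEXT NOT NULL
        );
        CREATE TABLE encounters (
            encounter_id INTEGER PRIMARY KEY,
            patient_id INTEGER NOT NULL REFERENCES patients(patient_id),
            injury_code TEXT
        );
    """)
    conn.execute("INSERT INTO patients VALUES (1, 'Jon Q Public')")
    conn.execute("INSERT INTO encounters VALUES (10, 1, 'S72.0'), (11, 1, 'S82.2')")
    # Correcting the name updates one row; every encounter still joins to it.
    conn.execute("UPDATE patients SET name = 'John Q Public' WHERE patient_id = 1")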
NASA Technical Reports Server (NTRS)
Mallasch, Paul G.
1993-01-01
This volume contains the complete software system documentation for the Federal Communications Commission (FCC) Transponder Loading Data Conversion Software (FIX-FCC). This software was written to facilitate the formatting and conversion of FCC Transponder Occupancy (Loading) Data before it is loaded into the NASA Geosynchronous Satellite Orbital Statistics Database System (GSOSTATS). The information that the FCC supplies to NASA is in report form and must be converted into a form readable by the database management software used in the GSOSTATS application. Both the User's Guide and the Software Maintenance Manual are contained in this document. This volume of documentation passed an independent quality assurance review and certification by the Product Assurance and Security Office of the Planning Research Corporation (PRC). The manuals were reviewed for format, content, and readability. The Software Management and Assurance Program (SMAP) life cycle and documentation standards were used in the development of this document and, accordingly, in the review. Refer to the System/Software Test/Product Assurance Report for the Geosynchronous Satellite Orbital Statistics Database System (GSOSTATS) for additional information.
ACLAME: a CLAssification of Mobile genetic Elements, update 2010.
Leplae, Raphaël; Lima-Mendez, Gipsi; Toussaint, Ariane
2010-01-01
The ACLAME database is dedicated to the collection, analysis and classification of sequenced mobile genetic elements (MGEs), in particular phages and plasmids. In addition to providing information on the content of MGEs, classifications are available at various levels of organization. At the gene/protein level, families group similar sequences that are expected to share the same function. Families of four or more proteins are manually assigned a functional annotation using the Gene Ontology and the locally developed ontology MeGO, dedicated to MGEs. At the genome level, evolutionary cohesive modules group sets of protein families shared among MGEs. At the population level, networks display the reticulate evolutionary relationships among MGEs. To increase the coverage of the phage sequence space, ACLAME version 0.4 incorporates 760 high-quality predicted prophages selected from the Prophinder database. Most of the data can be downloaded from the freely accessible ACLAME web site (http://aclame.ulb.ac.be). The BLAST interface for querying the database has been extended, and numerous tools for in-depth analysis of the results have been added.
Logic programming to infer complex RNA expression patterns from RNA-seq data.
Weirick, Tyler; Militello, Giuseppe; Ponomareva, Yuliya; John, David; Döring, Claudia; Dimmeler, Stefanie; Uchida, Shizuka
2018-03-01
To meet the increasing demand in the field, numerous long noncoding RNA (lncRNA) databases are available. However, given that many lncRNAs are expressed only in certain cell types and/or in a time-dependent manner, most lncRNA databases fall short of providing such expression profiles. We developed a strategy using logic programming to handle the complex organization of organs, their tissues and cell types, as well as gender and developmental time points. To showcase this strategy, we introduce 'RenalDB' (http://renaldb.uni-frankfurt.de), a database providing expression profiles of RNAs in major organs with a focus on kidney tissues and cells. RenalDB uses logic programming to describe complex anatomy, sample metadata, and the logical relationships defining expression, enrichment, or specificity. We validated the content of RenalDB with biological experiments and functionally characterized two long intergenic noncoding RNAs: LOC440173 is important for cell growth or cell survival, whereas PAXIP1-AS1 is a regulator of cell death. We anticipate that RenalDB will be used as a first step toward functional studies of lncRNAs in the kidney.
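The logic-programming approach can be illustrated with a toy fragment: anatomical facts and per-sample measurements are stated as data, and a relation such as "expressed in the kidney" is derived by rules that walk the anatomy. A minimal sketch in Python (the facts, names, and expression threshold are invented for illustration; RenalDB's actual rule language is not shown in this record):

    # Facts: anatomy as part-of links, and per-sample expression levels.
    PART_OF = {"proximal tubule": "nephron", "nephron": "kidney"}
    LEVEL = {("LOC440173", "proximal tubule"): 8.2,
             ("PAXIP1-AS1", "kidney"): 0.4}

    def within(tissue, organ):
        # Rule: a tissue is within an organ if a chain of part-of facts links them.
        while tissue is not None:
            if tissue == organ:
                return True
            tissue = PART_OF.get(tissue)
        return False

    def expressed_in(rna, organ, threshold=1.0):
        # Rule: an RNA is expressed in an organ if any contained tissue exceeds the threshold.
        return any(level >= threshold and within(tissue, organ)
                   for (r, tissue), level in LEVEL.items() if r == rna)

    assert expressed_in("LOC440173", "kidney")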
Huser, Vojtech; Sincan, Murat; Cimino, James J
2014-01-01
Personalized medicine, the ability to tailor diagnostic and treatment decisions for individual patients, is seen as the evolution of modern medicine. We characterize here the informatics resources available today or envisioned in the near future that can support clinical interpretation of genomic test results. We assume a clinical sequencing scenario (germline whole-exome sequencing) in which a clinical specialist, such as an endocrinologist, needs to tailor patient management decisions within his or her specialty (targeted findings) but relies on a genetic counselor to interpret off-target incidental findings. We characterize the genomic input data and list various types of knowledge bases that provide genomic knowledge for generating clinical decision support. We highlight the need for patient-level databases with detailed lifelong phenotype content in addition to genotype data and provide a list of recommendations for personalized medicine knowledge bases and databases. We conclude that no single knowledge base can currently support all aspects of personalized recommendations and that consolidation of several current resources into larger, more dynamic and collaborative knowledge bases may offer a future path forward. PMID:25276091
Recent Developments in the NIST Atomic Databases
NASA Astrophysics Data System (ADS)
Kramida, Alexander
2011-05-01
New versions of the NIST Atomic Spectra Database (ASD, v. 4.0) and three bibliographic databases (Atomic Energy Levels and Spectra, v. 2.0, Atomic Transition Probabilities, v. 9.0, and Atomic Line Broadening and Shapes, v. 3.0) have recently been released. In this contribution I will describe the main changes in the way users get the data through the Web. The contents of ASD have been significantly extended. In particular, the data on highly ionized tungsten (W III-LXXIV) have been added from a recently published NIST compilation. The tables for Fe I and Fe II have been replaced with newer, much more extensive lists (10000 lines for Fe I). The other updated or new spectra include H, D, T, He I-II, Li I-III, Be I-IV, B I-V, C I-II, N I-II, O I-II, Na I-X, K I-XIX, and Hg I. The new version of ASD now incorporates data on isotopes of several elements. I will describe some of the issues the NIST ASD Team faces when updating the data.
Globe Teachers Guide and Photographic Data on the Web
NASA Technical Reports Server (NTRS)
Kowal, Dan
2004-01-01
The task of managing the GLOBE Online Teacher's Guide during this time period focused on transforming the technology behind the delivery system of this document. The web application was transformed from a flat-file retrieval system to a dynamic database access approach. The new methodology utilizes Java Server Pages (JSP) on the front end and an Oracle relational database on the back end. This new approach allows users of the web site, mainly teachers, to access content efficiently by grade level and/or by investigation or educational concept area. Moreover, teachers can gain easier access to data sheets and lab and field guides. The new online guide also includes updated content for all GLOBE protocols. The GLOBE web management team was given documentation for maintaining the new application, including instructions for modifying the JSP templates and managing database content; it was delivered to the team by the end of October 2003. The National Geophysical Data Center (NGDC) continued to manage the school study site photos on the GLOBE website. During this same period, 333 study site photo images were added to the GLOBE database and posted on the web for 64 schools. Documentation for processing study site photos was also delivered to the new GLOBE web management team. Lastly, assistance was provided in transferring reference applications such as the Cloud and LandSat quizzes and the Earth Systems Online Poster from NGDC servers to GLOBE servers, along with documentation for maintaining these applications.
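The flat-file-to-database shift described here boils down to retrieving page content by query rather than by filename. A minimal sketch of the idea in Python, with sqlite3 standing in for the Oracle backend and a hypothetical schema:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE sections (
        grade_level TEXT, investigation TEXT, position INTEGER,
        title TEXT, body TEXT)""")

    def guide_sections(grade_level, investigation):
        # Fetch guide content filtered by grade level and investigation area,
        # the two access paths the redesigned site offers teachers.
        return conn.execute(
            "SELECT title, body FROM sections"
            " WHERE grade_level = ? AND investigation = ? ORDER BY position",
            (grade_level, investigation)).fetchall()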
Measurement and modeling of unsaturated hydraulic conductivity
Perkins, Kim S.; Elango, Lakshmanan
2011-01-01
The unsaturated zone plays an extremely important hydrologic role that influences water quality and quantity, ecosystem function and health, the connection between atmospheric and terrestrial processes, nutrient cycling, soil development, and natural hazards such as flooding and landslides. Unsaturated hydraulic conductivity is one of the main properties considered to govern flow; however, it is very difficult to measure accurately. Knowledge of the highly nonlinear relationship between unsaturated hydraulic conductivity (K) and volumetric water content (θ) is required for widely used models of water flow and solute transport processes in the unsaturated zone. Because measurement of the unsaturated hydraulic conductivity of sediments is costly and time consuming, models that estimate this property from more easily measured bulk physical properties are commonly used instead. In hydrologic studies, calculations based on property-transfer models informed by hydraulic property databases are often used in lieu of measured data from the site of interest. Reliance on database-informed predicted values, including those produced with neural networks, has become increasingly common. Hydraulic properties predicted using databases may be adequate in some applications, but not others. This chapter discusses, by way of examples, various techniques used to measure and model hydraulic conductivity as a function of water content, K(θ). The parameters that describe the K(θ) curve obtained by different methods are used directly in Richards' equation-based numerical models, which have some degree of sensitivity to those parameters. This chapter explores the complications of using laboratory-measured or estimated properties for field-scale investigations, to shed light on how adequately the processes are represented. Additionally, some more recent concepts for representing unsaturated-zone flow processes are discussed.
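For concreteness, one widely used parametric form for the K(θ) curve is the van Genuchten-Mualem model (chosen here as an illustration; the chapter discusses several measurement and modeling approaches). A short Python sketch, with parameter values typical of a loam used purely as an example:

    def k_unsat(theta, theta_r, theta_s, k_s, n, L=0.5):
        # van Genuchten-Mualem: K = Ks * Se^L * (1 - (1 - Se^(1/m))^m)^2, m = 1 - 1/n,
        # where Se = (theta - theta_r) / (theta_s - theta_r) is effective saturation.
        m = 1.0 - 1.0 / n
        se = (theta - theta_r) / (theta_s - theta_r)
        se = min(max(se, 0.0), 1.0)
        return k_s * se**L * (1.0 - (1.0 - se**(1.0 / m))**m) ** 2

    # Example loam-like parameters: k_unsat(0.25, theta_r=0.078, theta_s=0.43,
    # k_s=25.0, n=1.56) returns K in the same units as k_s (e.g., cm/day).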
Implementation of a WAP-based telemedicine system for patient monitoring.
Hung, Kevin; Zhang, Yuan-Ting
2003-06-01
Many parties have already demonstrated telemedicine applications that use cellular phones and the Internet. A current trend in telecommunication is the convergence of wireless communication and computer network technologies, and the emergence of wireless application protocol (WAP) devices is an example. Since WAP will also be a common feature of future mobile communication devices, it is worthwhile to investigate its use in telemedicine. This paper describes the implementation of, and experiences with, a WAP-based telemedicine system for patient monitoring developed in our laboratory. It utilizes WAP devices as mobile access terminals for general inquiry and patient-monitoring services. Authorized users can browse patients' general data, monitored blood pressure (BP), and electrocardiogram (ECG) on WAP devices in store-and-forward mode. The applications, written in Wireless Markup Language (WML), WMLScript, and Perl, reside on a content server. A MySQL relational database system was set up to store the BP readings, ECG data, patient records, clinic and hospital information, and doctors' appointments with patients. A wireless ECG subsystem was built for recording ambulatory ECG in an indoor environment and for storing ECG data in the database. For testing, a WAP phone compliant with WAP 1.1 was used at GSM 1800 MHz over circuit-switched data (CSD) to connect to the content server through a WAP gateway provided by a mobile phone service provider in Hong Kong. Data were successfully retrieved from the database and displayed on the WAP phone. The system shows that WAP can be feasible for remote patient monitoring and patient data retrieval.
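The storage side of such a system is easy to sketch. A minimal Python/sqlite3 stand-in for the MySQL tables described (the column names are hypothetical):

    import sqlite3, time

    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE bp_readings (
        patient_id INTEGER, taken_at REAL, systolic INTEGER, diastolic INTEGER)""")

    def store_bp(patient_id, systolic, diastolic):
        # Store one blood-pressure reading; the WAP pages later read these rows
        # back in store-and-forward fashion for display on the phone.
        conn.execute("INSERT INTO bp_readings VALUES (?, ?, ?, ?)",
                     (patient_id, time.time(), systolic, diastolic))
        conn.commit()

    store_bp(1, 118, 76)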
Selection. ERIC Processing Manual, Section III.
ERIC Educational Resources Information Center
Brandhorst, Ted, Ed.
Rules and guidelines are provided governing the selection of documents and journal articles to be included in the ERIC database. Selection criteria are described under the five headings: (1) Appropriateness of content/subject matter; (2) Suitability of format, medium, document type; (3) Quality of content; (4) Legibility and reproducibility; (5)…
Content Analysis of the Increasing Trend of Information Brokers.
ERIC Educational Resources Information Center
Wilson, Luellen
This study tracks the increasing use of the term "information broker" in the professional library literature, and looks at the extent to which the professional literature reflects the increasing trend of information commodification. The content of three online databases was analyzed: LIBRARY LITERATURE, LISA, and ABI/INFORM. Four terms…
PIRIA: a general tool for indexing, search, and retrieval of multimedia content
NASA Astrophysics Data System (ADS)
Joint, Magali; Moellic, Pierre-Alain; Hede, P.; Adam, P.
2004-05-01
The Internet is a continuously expanding source of multimedia content and information. Many products are in development to search, retrieve, and understand multimedia content, but most current image search/retrieval engines rely on an image database manually pre-indexed with keywords. Computers are still powerless to understand the semantic meaning of still or animated image content. Piria (Program for the Indexing and Research of Images by Affinity), the search engine we have developed, brings this possibility closer to reality. Piria is a novel search engine that uses the query-by-example method. A user query is submitted to the system, which then returns a list of images ranked by similarity, obtained by a metric distance that operates on every indexed image signature. These indexed images are compared according to several different classifiers, not only keywords but also form, color, and texture, taking into account geometric transformations and variances such as rotation, symmetry, and mirroring. Form: edges extracted by an efficient segmentation algorithm. Color: histogram, semantic color segmentation, and spatial color relationships. Texture: texture wavelets and local edge patterns. If required, Piria is also able to fuse results from multiple classifiers with a new classification of index categories: Single Indexer Single Call (SISC), Single Indexer Multiple Call (SIMC), Multiple Indexers Single Call (MISC), or Multiple Indexers Multiple Call (MIMC). Commercial and industrial applications are explored and discussed, as well as current and future development.
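At its core, query by example over indexed signatures is a nearest-neighbour ranking under some metric. A toy Python sketch (plain Euclidean distance over precomputed feature vectors; Piria's actual signatures and distances are richer than this):

    import math

    def distance(sig_a, sig_b):
        # Euclidean metric between two image signatures (feature vectors).
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(sig_a, sig_b)))

    def query_by_example(query_sig, index):
        # Rank every indexed image by its distance to the query signature.
        return sorted(index, key=lambda item: distance(query_sig, item["signature"]))

    index = [{"name": "img1.jpg", "signature": [0.1, 0.8, 0.3]},
             {"name": "img2.jpg", "signature": [0.9, 0.2, 0.5]}]
    ranked = query_by_example([0.2, 0.7, 0.3], index)  # img1.jpg ranks first

Fusing multiple classifiers, as in the SISC/SIMC/MISC/MIMC categories above, then amounts to combining the per-classifier rankings or distances before sorting.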
The Development of Vocational Vehicle Drive Cycles and Segmentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duran, Adam W.; Phillips, Caleb T.; Konan, Arnaud M.
Under a collaborative interagency agreement between the U.S. Environmental Protection Agency and the U.S. Department of Energy (DOE), the National Renewable Energy Laboratory (NREL) performed a series of in-depth analyses to characterize on-road driving behavior, including distributions of vehicle speed, idle time, accelerations and decelerations, and other driving metrics of medium- and heavy-duty vocational vehicles operating within the United States. As part of this effort, NREL researchers segmented U.S. medium- and heavy-duty vocational vehicle driving characteristics into three distinct operating groups, or clusters, using real-world drive cycle data collected at 1 Hz and stored in NREL's Fleet DNA database. The Fleet DNA database contains millions of miles of historical real-world drive cycle data captured from medium- and heavy-duty vehicles operating across the United States, encompassing data from existing DOE activities as well as contributions from industry stakeholder participants. For this project, data captured from 913 unique vehicles comprising 16,250 days of operation were drawn from the Fleet DNA database and examined. The Fleet DNA data used as a source for this analysis were collected from a total of 30 unique fleets/data providers operating across 22 unique geographic locations spread across the United States, including locations with topography ranging from the foothills of Denver, Colorado, to the flats of Miami, Florida. The range of fleets, geographic locations, and total number of vehicles analyzed ensures results that include the influence of these factors. While no analysis will be perfect without unlimited resources and data, it is the researchers' understanding that the Fleet DNA database is the largest and most thorough publicly accessible vocational vehicle usage database currently in operation. This report includes an introduction to the Fleet DNA database and the data contained within, a presentation of the results of the statistical analysis performed by NREL, a review of the logistic model developed to predict cluster membership, and a discussion and detailed summary of the development of the vocational drive cycle weights and representative transient drive cycles for testing and simulation. Additional discussion of known limitations and potential future work is also included in the report.
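The two modeling steps named in the report, clustering drive-cycle metrics into operating groups and then predicting cluster membership, can be sketched with scikit-learn (the features and cluster count below are illustrative, not NREL's actual feature set):

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LogisticRegression

    # Illustrative per-vehicle metrics: [avg speed (mph), idle fraction, stops/mile]
    X = np.array([[12.0, 0.45, 3.1], [48.0, 0.05, 0.2],
                  [25.0, 0.20, 1.0], [14.5, 0.40, 2.7]])

    # Step 1: group vehicles into operating clusters from their drive-cycle metrics.
    clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

    # Step 2: fit a logistic model so new vehicles can be assigned to a cluster.
    membership = LogisticRegression(max_iter=1000).fit(X, clusters)
    print(membership.predict([[30.0, 0.15, 0.8]]))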
Farina-Henry, Eva; Waterston, Leo B; Blaisdell, Laura L
2015-07-22
This paper presents the first formal evaluation of social media (SM) use in the National Children's Study (NCS). The NCS is a prospective, longitudinal study of the effects of environment and genetics on children's health, growth, and development. The Study employed a multifaceted community outreach campaign in combination with an SM campaign to educate participants and their communities about the Study. SM essentially erases geographic differences between people due to its omnipresence, an important consideration in this multi-site national study. Using SM in the research setting requires an understanding of potential threats to confidentiality and privacy and of the role that posted content plays as an extension of the informed consent process. This pilot demonstrates the feasibility of creating linkages and databases to measure and compare SM using new content and engagement metrics. The metrics presented include basic use metrics for Facebook as well as newly created metrics to assist with Facebook content and engagement analyses. Increasing Likes per month demonstrates that online communities can be quickly generated. The content and engagement analyses describe what types of post content NCS Study Centers were using, what topics they were posting about, and what the online NCS communities found most engaging. These metrics highlight opportunities to optimize time and effort when determining the content of future posts. Further research is warranted on content analysis, optimal metrics to describe engagement in research, the role of localized content and stakeholders, and social media use in participant recruitment.
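Engagement metrics of the kind described reduce to simple ratios over post-level counts. A small illustrative calculation in Python (the field names and the particular ratio are assumptions, not the Study's exact metric definitions):

    posts = [{"likes": 12, "comments": 3, "shares": 1, "reach": 410},
             {"likes": 30, "comments": 9, "shares": 4, "reach": 980}]

    def engagement_rate(post):
        # Interactions per person reached, one common engagement measure.
        return (post["likes"] + post["comments"] + post["shares"]) / post["reach"]

    rates = [round(engagement_rate(p), 3) for p in posts]  # [0.039, 0.044]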
Keshtiari, Niloofar; Kuhlmann, Michael; Eslami, Moharram; Klann-Delius, Gisela
2015-03-01
Research on emotional speech often requires valid stimuli for assessing perceived emotion through prosody and lexical content. To date, no comprehensive emotional speech database for Persian has been officially available. The present article reports the process of designing, compiling, and evaluating a comprehensive emotional speech database for colloquial Persian. The database contains a set of 90 validated novel Persian sentences classified in five basic emotional categories (anger, disgust, fear, happiness, and sadness), as well as a neutral category. These sentences were validated in two experiments by a group of 1,126 native Persian speakers. The sentences were articulated by two native Persian speakers (one male, one female) in three conditions: (1) congruent (emotional lexical content articulated in a congruent emotional voice), (2) incongruent (neutral sentences articulated in an emotional voice), and (3) baseline (all emotional and neutral sentences articulated in a neutral voice). The speech materials comprise about 470 sentences. The validity of the database was evaluated by a group of 34 native speakers in a perception test. Utterances recognized at better than five times chance performance (71.4 %) were regarded as valid portrayals of the target emotions. Acoustic analysis of the valid emotional utterances revealed differences in pitch, intensity, and duration, attributes that may help listeners to correctly classify the intended emotion. The database is designed to be used as a reliable material source (for both text and speech) in future cross-cultural or cross-linguistic studies of emotional speech, and it is available for academic research purposes free of charge. To access the database, please contact the first author.
ERIC Educational Resources Information Center
Nemeth, Erik
2010-01-01
Discovery of academic literature through Web search engines challenges the traditional role of specialized research databases. Creation of literature outside academic presses and peer-reviewed publications expands the content for scholarly research within a particular field. The resulting body of literature raises the question of whether scholars…
ERIC Educational Resources Information Center
Fagan, Judy Condit
2001-01-01
Discusses the need for libraries to routinely redesign their Web sites, and presents a case study that describes how a Perl-driven database at Southern Illinois University's library improved Web site organization and patron access, simplified revisions, and allowed staff unfamiliar with HTML to update content. (Contains 56 references.) (Author/LRW)
ERIC Educational Resources Information Center
Jukic, Nenad; Gray, Paul
2008-01-01
This paper describes the value that information systems faculty and students in classes dealing with database management, data warehousing, decision support systems, and related topics, could derive from the use of the Teradata University Network (TUN), a free comprehensive web-portal. A detailed overview of TUN functionalities and content is…