Sample records for local relational database

  1. Adding glycaemic index and glycaemic load functionality to DietPLUS, a Malaysian food composition database and diet intake calculator.

    PubMed

    Shyam, Sangeetha; Wai, Tony Ng Kock; Arshad, Fatimah

    2012-01-01

    This paper outlines the methodology used to add glycaemic index (GI) and glycaemic load (GL) functionality to DietPLUS, a Microsoft Excel-based Malaysian food composition database and diet intake calculator. Locally determined GI values and published international GI databases were used as the source of GI values. Previously published methodology for GI value assignment was modified to add GI and GL calculators to the database. Two popular local low-GI foods were added to the DietPLUS database, bringing the total number of foods in the database to 838. Overall, of the 539 major carbohydrate foods in the Malaysian Food Composition Database, 243 (45%) food items had local Malaysian values or were directly matched to the international GI databases, and another 180 (33%) were linked to closely related foods in the GI databases used. The mean ± SD dietary GI and GL of the dietary intake of 63 women with previous gestational diabetes mellitus, calculated using DietPLUS version 3, were 62 ± 6 and 142 ± 45, respectively. These values were comparable to those reported in other local studies. DietPLUS version 3, a simple Microsoft Excel-based programme, aids calculation of dietary GI and GL for Malaysian diets based on food records.

  2. Integrating a local database into the StarView distributed user interface

    NASA Technical Reports Server (NTRS)

    Silberberg, D. P.

    1992-01-01

    A distributed user interface to the Space Telescope Data Archive and Distribution Service (DADS) known as StarView is being developed. The DADS architecture consists of the data archive as well as a relational database catalog describing the archive. StarView is a client/server system in which the user interface is the front-end client to the DADS catalog and archive servers. Users query the DADS catalog from the StarView interface. Query commands are transmitted via a network and evaluated by the database. The results are returned via the network and are displayed on StarView forms. Based on the results, users decide which data sets to retrieve from the DADS archive. Archive requests are packaged by StarView and sent to DADS, which returns the requested data sets to the users. The advantages of distributed client/server user interfaces over traditional one-machine systems are well known. Since users run software on machines separate from the database, the overall client response time is much faster. Also, since the server is free to process only database requests, the database response time is much faster. Disadvantages inherent in this architecture are slow overall database access due to network delays, the lack of a 'get previous row' command, and the fact that refinements of a previously issued query must be submitted to the database server even though the domain of values has already been returned by the previous query. This architecture also does not allow users to cross-correlate DADS catalog data with other catalogs. Clearly, a distributed user interface would be more powerful if it overcame these disadvantages. A local database is being integrated into StarView to overcome them. When a query is made through a StarView form, which is often composed of fields from multiple tables, it is translated to an SQL query and issued to the DADS catalog. At the same time, a local database table is created to contain the resulting rows of the query. The returned rows are displayed on the form as well as inserted into the local database table. Identical results are produced by reissuing the query to either the DADS catalog or the local table. Relational databases do not provide a 'get previous row' function because of the inherent complexity of retrieving previous rows of multiple-table joins. However, since this function is easily implemented on a single table, StarView uses the local table to retrieve the previous row. Also, StarView issues subsequent query refinements to the local table instead of the DADS catalog, eliminating the network transmission overhead. Finally, other catalogs can be imported into the local database for cross-correlation with local tables. Overall, it is believed that this is a more powerful architecture for distributed database user interfaces.
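
    A minimal sketch of the local-cache pattern this record describes, using Python's sqlite3 as a stand-in for StarView's local database; the table layout, column names and sample rows are hypothetical:

    ```python
    import sqlite3

    # Rows that would arrive over the network from the remote DADS catalog
    # (invented observation IDs, instruments and exposure times).
    remote_rows = [
        ("obs001", "WFPC2", 12.4),
        ("obs002", "FOC", 9.1),
        ("obs003", "WFPC2", 15.7),
    ]

    local = sqlite3.connect(":memory:")  # the local cache database
    local.execute("CREATE TABLE results (obs_id TEXT, instrument TEXT, exptime REAL)")
    local.executemany("INSERT INTO results VALUES (?, ?, ?)", remote_rows)

    # 'Get previous row' is trivial against a single local table...
    rows = local.execute("SELECT * FROM results ORDER BY obs_id").fetchall()
    current = 2
    previous_row = rows[current - 1]

    # ...and query refinements run against the cache instead of the remote server.
    refined = local.execute(
        "SELECT obs_id FROM results WHERE instrument = ? AND exptime > ?",
        ("WFPC2", 10.0),
    ).fetchall()
    print(previous_row, refined)
    ```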

  3. Chapter 51: How to Build a Simple Cone Search Service Using a Local Database

    NASA Astrophysics Data System (ADS)

    Kent, B. R.; Greene, G. R.

    The cone search service protocol will be examined from the server side in this chapter. A simple cone search service will be set up and configured locally using MySQL. Data will be read into a table, and Java JDBC will be used to connect to the database. Readers will understand the VO cone search specification and how to use it to query a database on their local systems and return an XML/VOTable file based on an input of RA/DEC coordinates and a search radius. The cone search in this example will be deployed as a Java servlet. The resulting cone search can be tested with a verification service. This basic setup can be used with other languages and relational databases.
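
    Server-side, a cone search reduces to a geometric filter over a positions table. The sketch below uses Python with sqlite3 instead of the chapter's Java/JDBC and MySQL stack, with invented source names and coordinates; the great-circle separation formula is the standard one:

    ```python
    import math, sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE sources (name TEXT, ra REAL, dec REAL)")  # degrees
    db.executemany("INSERT INTO sources VALUES (?, ?, ?)",
                   [("A", 10.68, 41.27), ("B", 10.70, 41.20), ("C", 150.0, 2.2)])

    def cone_search(ra0, dec0, radius):
        """Return sources within `radius` degrees of (ra0, dec0)."""
        hits = []
        for name, ra, dec in db.execute("SELECT name, ra, dec FROM sources"):
            # great-circle separation in degrees (clamped for acos safety)
            sep = math.degrees(math.acos(min(1.0,
                math.sin(math.radians(dec0)) * math.sin(math.radians(dec))
                + math.cos(math.radians(dec0)) * math.cos(math.radians(dec))
                * math.cos(math.radians(ra0 - ra)))))
            if sep <= radius:
                hits.append((name, ra, dec))
        return hits

    # A real service would serialize these rows as an XML/VOTable response.
    print(cone_search(10.69, 41.25, 0.1))
    ```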

  4. BIOSPIDA: A Relational Database Translator for NCBI.

    PubMed

    Hagen, Matthew S; Lee, Eva K

    2010-11-13

    As the volume and availability of biological databases continue their widespread growth, it has become increasingly difficult for research scientists to identify all relevant information for biological entities of interest. Details of nucleotide sequences, gene expression, molecular interactions, and three-dimensional structures are maintained across many different databases. Retrieving all necessary information requires an integrated system that can query multiple databases with minimized overhead. This paper introduces a universal parser and relational schema translator that can be utilized for all NCBI databases in Abstract Syntax Notation (ASN.1). The data models for OMIM, Entrez-Gene, PubMed, MMDB and GenBank have been successfully converted into relational databases, and all are easily linkable, helping to answer complex biological questions. These tools enable research scientists to locally integrate databases from NCBI without significant workload or development time.
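
    The core move of a relational schema translator is flattening nested records into parent and child tables joined on a key. A toy illustration of that idea in Python with sqlite3 (the record and schema below are invented for illustration, not BIOSPIDA's actual output or real NCBI content):

    ```python
    import sqlite3

    # Toy nested record standing in for a parsed ASN.1 entry.
    entry = {"gene_id": 672, "symbol": "BRCA1",
             "synonyms": ["IRIS", "PSCP"],
             "xrefs": [{"db": "MIM", "id": "113705"}]}

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE gene (gene_id INTEGER PRIMARY KEY, symbol TEXT)")
    db.execute("CREATE TABLE synonym (gene_id INTEGER, name TEXT)")
    db.execute("CREATE TABLE xref (gene_id INTEGER, db TEXT, acc TEXT)")

    # Flatten: scalar fields go to the parent table, repeated fields to child
    # tables keyed on the parent id -- the essence of a schema translator.
    db.execute("INSERT INTO gene VALUES (?, ?)", (entry["gene_id"], entry["symbol"]))
    db.executemany("INSERT INTO synonym VALUES (?, ?)",
                   [(entry["gene_id"], s) for s in entry["synonyms"]])
    db.executemany("INSERT INTO xref VALUES (?, ?, ?)",
                   [(entry["gene_id"], x["db"], x["id"]) for x in entry["xrefs"]])

    print(db.execute("""SELECT g.symbol, x.db, x.acc FROM gene g
                        JOIN xref x ON x.gene_id = g.gene_id""").fetchall())
    ```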

  5. Development of a database system for near-future climate change projections under the Japanese National Project SI-CAT

    NASA Astrophysics Data System (ADS)

    Nakagawa, Y.; Kawahara, S.; Araki, F.; Matsuoka, D.; Ishikawa, Y.; Fujita, M.; Sugimoto, S.; Okada, Y.; Kawazoe, S.; Watanabe, S.; Ishii, M.; Mizuta, R.; Murata, A.; Kawase, H.

    2017-12-01

    Analyses of large ensemble data are quite useful for producing probabilistic projections of the effects of climate change. Ensemble data of "+2K future climate simulations" are currently being produced by the Japanese national project "Social Implementation Program on Climate Change Adaptation Technology (SI-CAT)" as part of the database for Policy Decision making for Future climate change (d4PDF; Mizuta et al. 2016) produced by the Program for Risk Information on Climate Change. These data consist of global warming simulations and regional downscaling simulations. Because the data volumes are too large (a few petabytes) to download to a user's local computer, a user-friendly system is required to search and download the data that satisfy users' requests. Under SI-CAT, we are developing a database system for near-future climate change projections that provides functions for users to find the data they need. The system mainly consists of a relational database, a data download function and a user interface. The relational database, using PostgreSQL, is the key component: temporally and spatially compressed data are registered in it. As a first step, we developed the relational database for precipitation, temperature and typhoon track data according to requests by SI-CAT members. The data download function, using the Open-source Project for a Network Data Access Protocol (OPeNDAP), allows users to download temporally and spatially extracted data based on search results obtained from the relational database. We have also developed a web-based user interface for the relational database and the data download function. A prototype of the system is currently in operational testing on our local server, and the system will be released on the Data Integration and Analysis System Program (DIAS) in fiscal year 2017. The techniques underlying the system might be quite useful for simulation and observational data in other research fields. We report the current status of development and some case studies of the system.
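
    A sketch of the search step such a system might perform, assuming a hypothetical summary table of temporally and spatially compressed records; sqlite3 stands in for PostgreSQL, and all table names, columns and values are invented:

    ```python
    import sqlite3

    db = sqlite3.connect(":memory:")
    # Hypothetical summary table: one row per variable, run, region tile and month.
    db.execute("""CREATE TABLE summaries (
        variable TEXT, run_id TEXT, lat_min REAL, lat_max REAL,
        lon_min REAL, lon_max REAL, month TEXT, mean_value REAL)""")
    db.execute("INSERT INTO summaries VALUES "
               "('precip', '+2K_r01', 34, 36, 138, 140, '2035-07', 210.5)")

    # Search step: find runs/tiles matching a user's request; the matching
    # records would then drive an OPeNDAP subset download of the full data.
    rows = db.execute("""SELECT run_id, mean_value FROM summaries
                         WHERE variable = 'precip'
                           AND month BETWEEN '2035-01' AND '2035-12'
                           AND lat_min <= 35.5 AND lat_max >= 35.0
                           AND lon_min <= 139.8 AND lon_max >= 139.5""").fetchall()
    print(rows)
    ```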

  6. BIOSPIDA: A Relational Database Translator for NCBI

    PubMed Central

    Hagen, Matthew S.; Lee, Eva K.

    2010-01-01

    As the volume and availability of biological databases continue their widespread growth, it has become increasingly difficult for research scientists to identify all relevant information for biological entities of interest. Details of nucleotide sequences, gene expression, molecular interactions, and three-dimensional structures are maintained across many different databases. Retrieving all necessary information requires an integrated system that can query multiple databases with minimized overhead. This paper introduces a universal parser and relational schema translator that can be utilized for all NCBI databases in Abstract Syntax Notation (ASN.1). The data models for OMIM, Entrez-Gene, PubMed, MMDB and GenBank have been successfully converted into relational databases, and all are easily linkable, helping to answer complex biological questions. These tools enable research scientists to locally integrate databases from NCBI without significant workload or development time. PMID:21347013

  7. Local television news reporting of kidney disease.

    PubMed

    Jaffery, Jonathan B; Jacobson, Lynn M; Goldstein, Kenneth M; Pribble, James M

    2006-12-01

    Local television is the primary news source for the majority of Americans. This study aims to describe how local news reports on kidney disease. Using our searchable database of health-related late local news segments from 2002, we identified stories with the key words kidney, hypertension, blood pressure, or diabetes. This database is a representative sample of the late local news on 122 stations in the 50 largest US media markets, comprising 60% of the population. The content of each identified story was reviewed to determine whether it mentioned: (1) chronic kidney disease (CKD), (2) screening for kidney disease, or (3) kidney disease as a potential complication (for blood pressure- or diabetes-related stories). Only 2 of 1,799 database news stories (0.11%) included "kidney" as a summary key word; neither referred to CKD, screening, or complications of other diseases. Of the 19 stories about hypertension or blood pressure (1.06% of all stories) and the 14 stories about diabetes (0.78% of all stories), none mentioned these criteria. Despite efforts to increase public awareness of and screening for CKD, local television news (the most important news source for a majority of Americans) did little to help achieve these goals. Further work will be needed to confirm whether this paucity of coverage varies over time and to determine why so little attention is given to CKD. Educating physicians and public relations personnel who advocate for kidney disease about journalists' needs may be an important step to help advance public awareness of CKD.

  8. Effects of soil water holding capacity on evapotranspiration and irrigation scheduling

    USDA-ARS's Scientific Manuscript database

    The USDA Natural Resources Conservation Service (NRCS), through the National Cooperative Soil Survey, developed three soil geographic databases that are appropriate for acquiring soil information at the national, regional, and local scales. These relational databases include the National Soil Geogra...

  9. Surveillance of obesity-related policies in multiple environments: the Missouri Obesity, Nutrition, and Activity Policy Database, 2007-2009.

    PubMed

    Haire-Joshu, Debra; Elliott, Michael; Schermbeck, Rebecca; Taricone, Elsa; Green, Scoie; Brownson, Ross C

    2010-07-01

    The objective of this study was to develop the Missouri Obesity, Nutrition, and Activity Policy Database, a geographically representative baseline of Missouri's existing obesity-related local policies on healthy eating and physical activity. The database is organized to reflect 7 local environments (government, community, health care, worksite, school, after school, and child care) and to describe the prevalence of obesity-related policies in these environments. We employed a stratified nested cluster design using key informant interviews and review of public records to sample 2,356 sites across the 7 target environments for the presence or absence of obesity-related policies. The school environment had the most policies (88%), followed by after school (47%) and health care (32%). Community, government, and child care environments reported smaller proportions of obesity-related policies but higher rates of funding for these policies. Worksite environments had low numbers of obesity-related policies and low funding levels (17% and 6%, respectively). Sixteen of the sampled counties had high obesity-related policy occurrence; 65 had moderate and 8 had low occurrences. Except in Missouri schools, the presence of obesity-related policies is limited. More obesity-related policies are needed so that people have access to environments that support the model behaviors necessary to halt the obesity epidemic. The Missouri Obesity, Nutrition, and Activity Policy Database provides a benchmark for evaluating progress toward the development of obesity-related policies across multiple environments in Missouri.

  10. Interconnecting heterogeneous database management systems

    NASA Technical Reports Server (NTRS)

    Gligor, V. D.; Luckenbaugh, G. L.

    1984-01-01

    It is pointed out that there is still a great need for the development of improved communication between remote, heterogeneous database management systems (DBMS). Problems regarding effective communication between distributed DBMSs are primarily related to significant differences between local data managers, local data models and representations, and local transaction managers. A system of interconnected DBMSs that exhibit such differences is called a network of distributed, heterogeneous DBMSs. In order to achieve effective interconnection of remote, heterogeneous DBMSs, the users must have uniform, integrated access to the different DBMSs. The present investigation is mainly concerned with an analysis of the existing approaches to interconnecting heterogeneous DBMSs, taking into account four experimental DBMS projects.

  11. Organizing, exploring, and analyzing antibody sequence data: the case for relational-database managers.

    PubMed

    Owens, John

    2009-01-01

    Technological advances in the acquisition of DNA and protein sequence information, and the resulting onrush of data, can quickly overwhelm the scientist unprepared for the volume of information that must be evaluated and carefully dissected to discover its significance. Few laboratories have the luxury of dedicated personnel to organize, analyze, or consistently record a mix of arriving sequence data. A methodology based on a modern relational-database manager is presented that is both a natural storage vessel for antibody sequence information and a conduit for organizing and exploring sequence data and accompanying annotation text. The expertise necessary to implement such a plan is equal to that required by electronic word processors or spreadsheet applications. Antibody sequence projects maintained as independent databases are selectively unified by the relational-database manager into larger database families that contribute to local analyses, reports, and interactive HTML pages, or are exported to facilities dedicated to sophisticated sequence analysis techniques. Database files are transposable among current versions of Microsoft Windows, Macintosh, and UNIX operating systems.

  12. New tools and methods for direct programmatic access to the dbSNP relational database.

    PubMed

    Saccone, Scott F; Quan, Jiaxi; Mehta, Gaurang; Bolze, Raphael; Thomas, Prasanth; Deelman, Ewa; Tischfield, Jay A; Rice, John P

    2011-01-01

    Genome-wide association studies often incorporate information from public biological databases in order to provide a biological reference for interpreting the results. The dbSNP database is an extensive source of information on single nucleotide polymorphisms (SNPs) for many different organisms, including humans. We have developed free software that will download and install a local MySQL implementation of the dbSNP relational database for a specified organism. We have also designed a system for classifying dbSNP tables in terms of common tasks we wish to accomplish using the database. For each task we have designed a small set of custom tables that facilitate task-related queries and provide entity-relationship diagrams for each task composed from the relevant dbSNP tables. In order to expose these concepts and methods to a wider audience we have developed web tools for querying the database and browsing documentation on the tables and columns to clarify the relevant relational structure. All web tools and software are freely available to the public at http://cgsmd.isi.edu/dbsnpq. Resources such as these for programmatically querying biological databases are essential for viably integrating biological information into genetic association experiments on a genome-wide scale.
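
    A sketch of the task-oriented custom-table idea, using sqlite3 as a stand-in for the local MySQL installation the paper describes; the wide source table is drastically simplified and its columns and rows are invented for illustration:

    ```python
    import sqlite3

    db = sqlite3.connect(":memory:")
    # Hypothetical slice of a locally installed SNP table.
    db.execute("CREATE TABLE snp (rs_id INTEGER, chrom TEXT, pos INTEGER, gene TEXT)")
    db.executemany("INSERT INTO snp VALUES (?, ?, ?, ?)",
                   [(7412, '19', 44908822, 'APOE'),
                    (429358, '19', 44908684, 'APOE')])

    # Task-oriented custom table: a narrow position-lookup table built once,
    # so the common 'where is rs<N>?' query never touches the wide source tables.
    db.execute("CREATE TABLE task_position AS SELECT rs_id, chrom, pos FROM snp")
    print(db.execute("SELECT chrom, pos FROM task_position WHERE rs_id = 7412").fetchone())
    ```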

  13. Graphical tool for navigation within the semantic network of the UMLS metathesaurus on a locally installed database.

    PubMed

    Frankewitsch, T; Prokosch, H U

    2000-01-01

    Knowledge in the environment of information technologies is bound to structured vocabularies. Medical data dictionaries are necessary for uniquely describing findings such as diagnoses, procedures or functions. We therefore decided to locally install a version of the Unified Medical Language System (UMLS) of the U.S. National Library of Medicine as a repository for defining entries of a medical multimedia database. Because of the requirement to extend the vocabulary with new concepts and relations between existing concepts, a graphical tool for appending new items to the database has been developed. Although the database is an instance of a semantic network, focusing on a single entry makes it possible to reduce the net to a tree within that detail. Based on graph theory, nodes of concepts and nodes of knowledge are defined. The UMLS additionally offers the specification of sub-relations, which can be represented too. Using this view, it is possible to manage these 1:n relations in a simple tree view. On this basis, an explorer-like graphical user interface has been realised to add new concepts and to define new relationships between these and existing entries, adapting the UMLS for specific purposes such as describing medical multimedia objects.

  14. SPINS: standardized protein NMR storage. A data dictionary and object-oriented relational database for archiving protein NMR spectra.

    PubMed

    Baran, Michael C; Moseley, Hunter N B; Sahota, Gurmukh; Montelione, Gaetano T

    2002-10-01

    Modern protein NMR spectroscopy laboratories have a rapidly growing need for an easily queried local archival system of raw experimental NMR datasets. SPINS (Standardized ProteIn Nmr Storage) is an object-oriented relational database that provides facilities for high-volume NMR data archival, organization of analyses, and dissemination of results to the public domain by automatic preparation of the header files required for submission of data to the BioMagResBank (BMRB). The current version of SPINS coordinates the process from data collection to BMRB deposition of raw NMR data by standardizing and integrating the storage and retrieval of these data in a local laboratory file system. Additional facilities include a data mining query tool, graphical database administration tools, and an NMRStar v2.1.1 file generator. SPINS also includes a user-friendly internet-based graphical user interface, which is optionally integrated with Varian VNMR NMR data collection software. This paper provides an overview of the data model underlying the SPINS database system, a description of its implementation in Oracle, and an outline of future plans for the SPINS project.

  15. New tools and methods for direct programmatic access to the dbSNP relational database

    PubMed Central

    Saccone, Scott F.; Quan, Jiaxi; Mehta, Gaurang; Bolze, Raphael; Thomas, Prasanth; Deelman, Ewa; Tischfield, Jay A.; Rice, John P.

    2011-01-01

    Genome-wide association studies often incorporate information from public biological databases in order to provide a biological reference for interpreting the results. The dbSNP database is an extensive source of information on single nucleotide polymorphisms (SNPs) for many different organisms, including humans. We have developed free software that will download and install a local MySQL implementation of the dbSNP relational database for a specified organism. We have also designed a system for classifying dbSNP tables in terms of common tasks we wish to accomplish using the database. For each task we have designed a small set of custom tables that facilitate task-related queries and provide entity-relationship diagrams for each task composed from the relevant dbSNP tables. In order to expose these concepts and methods to a wider audience we have developed web tools for querying the database and browsing documentation on the tables and columns to clarify the relevant relational structure. All web tools and software are freely available to the public at http://cgsmd.isi.edu/dbsnpq. Resources such as these for programmatically querying biological databases are essential for viably integrating biological information into genetic association experiments on a genome-wide scale. PMID:21037260

  16. Task III: Development of an Effective Computational Methodology for Body Force Representation of High-speed Rotor 37

    NASA Technical Reports Server (NTRS)

    Tan, Choon-Sooi; Suder, Kenneth (Technical Monitor)

    2003-01-01

    A framework for an effective computational methodology for characterizing the stability and the impact of distortion in high-speed multi-stage compressors is being developed. The methodology consists of using a few isolated-blade-row Navier-Stokes solutions for each blade row to construct a body force database. The purpose of the body force database is to replace each blade row in a multi-stage compressor by a body force distribution that produces the same pressure rise and flow turning. To do this, each body force database is generated in such a way that it can respond to changes in local flow conditions. Once the database is generated, no further Navier-Stokes computations are necessary. The process is repeated for every blade row in the multi-stage compressor. The body forces are then embedded as source terms in an Euler solver. The method is developed to have the capability to compute the performance in a flow that has radial as well as circumferential non-uniformity with a length scale larger than a blade pitch; thus it can potentially be used to characterize the stability of a compressor under design. It is these two latter features, as well as the accompanying procedure for obtaining the body force representation, that distinguish the present methodology from the streamline curvature method. The overall computational procedures have been developed. A dimensional analysis was carried out to determine the local flow conditions for parameterizing the magnitudes of the local body force representation of blade rows. An Euler solver was modified to embed the body forces as source terms. The results from the dimensional analysis show that the body forces can be parameterized in terms of the two relative flow angles, the relative Mach number, and the Reynolds number. For flow in a high-speed transonic blade row, they can be parameterized in terms of the local relative Mach number alone.

  17. Experiment on building Sundanese lexical database based on WordNet

    NASA Astrophysics Data System (ADS)

    Dewi Budiwati, Sari; Nurani Setiawan, Novihana

    2018-03-01

    Sundanese is the second-largest local language in Indonesia. It is now rarely used in everyday conversation, since Indonesian serves as the national language. We built a Sundanese lexical database based on WordNet and the Indonesian WordNet as an alternative way to preserve the language as part of the local culture. WordNet was chosen because Sundanese has three levels of word delivery, called the language code of conduct. Web user participants were involved in this research to specify Sundanese semantic relations, and an expert linguist validated the relations. The merge methodology was implemented in this experiment. Some words have direct equivalents in WordNet, while others have none because the corresponding concepts do not exist in the other culture.

  18. Fish Karyome: A karyological information network database of Indian Fishes.

    PubMed

    Nagpure, Naresh Sahebrao; Pathak, Ajey Kumar; Pati, Rameshwar; Singh, Shri Prakash; Singh, Mahender; Sarkar, Uttam Kumar; Kushwaha, Basdeo; Kumar, Ravindra

    2012-01-01

    'Fish Karyome', a database of karyological information on Indian fishes, has been developed that serves as a central source for karyotype data about Indian fishes compiled from the published literature. Fish Karyome is intended to serve as a liaison tool for researchers and contains karyological information on 171 of the 2,438 finfish species reported in India; it is publicly available via the World Wide Web. The database provides information on chromosome number, morphology, sex chromosomes, karyotype formula and cytogenetic markers. Additionally, it provides phenotypic information that includes species name, classification, locality of sample collection, common name, local name, sex, geographical distribution, and IUCN Red List status. Fish and karyotype images and references for the 171 finfish species have also been included in the database. Fish Karyome was developed using SQL Server 2008, a relational database management system, Microsoft's ASP.NET 2008 and Macromedia's FLASH technology under the Windows 7 operating environment. The system also enables users to input new information and images into the database, and to search and view the information and images of interest using various search options. Fish Karyome has a wide range of applications in species characterization and identification, sex determination, chromosomal mapping, karyo-evolution and the systematics of fishes.

  19. PMAG: Relational Database Definition

    NASA Astrophysics Data System (ADS)

    Keizer, P.; Koppers, A.; Tauxe, L.; Constable, C.; Genevey, A.; Staudigel, H.; Helly, J.

    2002-12-01

    The Scripps center for Physical and Chemical Earth References (PACER) was established to help create databases for reference data and make them available to the Earth science community. As part of these efforts PACER supports GERM, REM and PMAG and maintains multiple online databases under the http://earthref.org umbrella website. This website has been built on top of a relational database that allows for the archiving and electronic access to a great variety of data types and formats, permitting data queries using a wide range of metadata. These online databases are designed in Oracle 8.1.5 and they are maintained at the San Diego Supercomputer Center. They are directly available via http://earthref.org/databases/. A prototype of the PMAG relational database is now operational within the existing EarthRef.org framework under http://earthref.org/databases/PMAG/. As will be shown in our presentation, the PMAG design focuses around the general workflow that results in the determination of typical paleo-magnetic analyses. This ensures that individual data points can be traced between the actual analysis and the specimen, sample, site, locality and expedition it belongs to. These relations guarantee traceability of the data by distinguishing between original and derived data, where the actual (raw) measurements are performed on the specimen level, and data on the sample level and higher are then derived products in the database. These relations may also serve to recalculate site means when new data becomes available for that locality. The PMAG data records are extensively described in terms of metadata. These metadata are used when scientists search through this online database in order to view and download their needed data. They minimally include method descriptions for field sampling, laboratory techniques and statistical analyses. They also include selection criteria used during the interpretation of the data and, most importantly, critical information about the site location (latitude, longitude, elevation), geography (continent, country, region), geological setting (lithospheric plate or block, tectonic setting), geological age (age range, timescale name, stratigraphic position) and materials (rock type, classification, alteration state). Each data point and method description is also related to its peer-reviewed reference [citation ID] as archived in the EarthRef Reference Database (ERR). This guarantees direct traceability all the way to its original source, where the user can find the bibliography of each PMAG reference along with every abstract, data table, technical note and/or appendix that are available in digital form and that can be downloaded as PDF/JPEG images and Microsoft Excel/Word data files. This may help scientists and teachers in performing their research since they have easy access to all the scientific data. It also allows for checking potential errors during the digitization process. Please visit the PMAG website at http://earthref.org/PMAG/ for more information.
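
    A compact illustration of the traceability chain this record describes (raw measurements at the specimen level, derived means at the site level) expressed as foreign-key relations in sqlite3; the schema is a simplification for illustration, not the actual PMAG/Oracle design, and the plain arithmetic mean below ignores the Fisher statistics a real recalculation would use:

    ```python
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
    CREATE TABLE site     (site_id INTEGER PRIMARY KEY, locality TEXT, lat REAL, lon REAL);
    CREATE TABLE sample   (sample_id INTEGER PRIMARY KEY, site_id INTEGER REFERENCES site);
    CREATE TABLE specimen (specimen_id INTEGER PRIMARY KEY, sample_id INTEGER REFERENCES sample);
    -- raw measurements attach at the specimen level; site means are derived
    CREATE TABLE measurement (specimen_id INTEGER REFERENCES specimen,
                              declination REAL, inclination REAL);
    """)
    db.execute("INSERT INTO site VALUES (1, 'Hawaii', 19.4, -155.3)")
    db.execute("INSERT INTO sample VALUES (10, 1)")
    db.execute("INSERT INTO specimen VALUES (100, 10)")
    db.executemany("INSERT INTO measurement VALUES (?, ?, ?)",
                   [(100, 3.5, 35.1), (100, 4.1, 34.6)])

    # Recompute a (naive) site mean by walking measurement -> specimen -> sample -> site.
    print(db.execute("""SELECT s.site_id, AVG(m.declination), AVG(m.inclination)
                        FROM measurement m
                        JOIN specimen sp ON sp.specimen_id = m.specimen_id
                        JOIN sample   sa ON sa.sample_id   = sp.sample_id
                        JOIN site     s  ON s.site_id      = sa.site_id
                        GROUP BY s.site_id""").fetchall())
    ```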

  20. Content based image retrieval using local binary pattern operator and data mining techniques.

    PubMed

    Vatamanu, Oana Astrid; Frandeş, Mirela; Lungeanu, Diana; Mihalaş, Gheorghe-Ioan

    2015-01-01

    Content based image retrieval (CBIR) concerns the retrieval of similar images from image databases, using feature vectors extracted from the images. These feature vectors globally define the visual content of an image, characterized by, e.g., texture, colour, shape, and spatial relations between vectors. Herein, we propose the definition of feature vectors using the Local Binary Pattern (LBP) operator. A study was performed to determine the optimum LBP variant for the general definition of image feature vectors. The chosen LBP variant is subsequently used to build an ultrasound image database, and a database with images obtained from Wireless Capsule Endoscopy. The image indexing process is optimized using data clustering techniques for images belonging to the same class. Finally, the proposed indexing method is compared to the classical indexing technique, which is nowadays widely used.
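
    For reference, the classic 8-neighbour LBP operator assigns each pixel a byte encoding which neighbours are at least as bright as the centre; histograms of these codes then form the feature vector. A minimal NumPy sketch of the basic 3x3 variant (no interpolation or rotation-invariant mapping, which the paper's candidate variants would add):

    ```python
    import numpy as np

    def lbp_basic(image):
        """Classic 8-neighbour local binary pattern on a 3x3 neighbourhood."""
        img = np.asarray(image, dtype=np.int32)
        center = img[1:-1, 1:-1]
        # neighbour offsets in a fixed circular order
        offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
        codes = np.zeros_like(center)
        for bit, (dy, dx) in enumerate(offs):
            nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
            codes |= ((nb >= center).astype(np.int32) << bit)
        return codes

    img = np.array([[5, 7, 1], [9, 4, 3], [2, 8, 6]])   # toy 3x3 image
    codes = lbp_basic(img)
    hist = np.bincount(codes.ravel(), minlength=256)    # 256-bin feature vector
    print(codes, hist.argmax())
    ```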

  1. 33 CFR 137.60 - Reviews of Federal, State, tribal, and local government records.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    .... (a) Federal, State, tribal, and local government records or databases of government records of the..., and tribal government records or databases of the government records and local government records and databases of the records should include— (1) Records of reported oil discharges present, including site...

  2. 33 CFR 137.60 - Reviews of Federal, State, tribal, and local government records.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    .... (a) Federal, State, tribal, and local government records or databases of government records of the..., and tribal government records or databases of the government records and local government records and databases of the records should include— (1) Records of reported oil discharges present, including site...

  3. 33 CFR 137.60 - Reviews of Federal, State, tribal, and local government records.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    .... (a) Federal, State, tribal, and local government records or databases of government records of the..., and tribal government records or databases of the government records and local government records and databases of the records should include— (1) Records of reported oil discharges present, including site...

  4. 33 CFR 137.60 - Reviews of Federal, State, tribal, and local government records.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    .... (a) Federal, State, tribal, and local government records or databases of government records of the..., and tribal government records or databases of the government records and local government records and databases of the records should include— (1) Records of reported oil discharges present, including site...

  5. Marfan Database (second edition): software and database for the analysis of mutations in the human FBN1 gene.

    PubMed Central

    Collod-Béroud, G; Béroud, C; Adès, L; Black, C; Boxer, M; Brock, D J; Godfrey, M; Hayward, C; Karttunen, L; Milewicz, D; Peltonen, L; Richards, R I; Wang, M; Junien, C; Boileau, C

    1997-01-01

    Fibrillin is the major component of extracellular microfibrils. Mutations in the fibrillin gene on chromosome 15 (FBN1) were first described in the heritable connective tissue disorder Marfan syndrome (MFS). More recently, FBN1 has also been shown to harbor mutations related to a spectrum of conditions phenotypically related to MFS. These mutations are private, essentially missense, generally non-recurrent and widely distributed throughout the gene. To date, no clear genotype/phenotype relationship has been observed, except for the localization of neonatal mutations in a cluster between exons 24 and 32. The second version of the computerized Marfan database contains 89 entries. The software has been modified to accommodate new functions and routines. PMID:9016526

  6. Using non-local databases for the environmental assessment of industrial activities: The case of Latin America

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osses de Eicker, Margarita, E-mail: Margarita.Osses@empa.c; Hischier, Roland, E-mail: Roland.Hischier@empa.c; Hurni, Hans, E-mail: Hans.Hurni@cde.unibe.c

    2010-04-15

    Nine non-local databases were evaluated with respect to their suitability for the environmental assessment of industrial activities in Latin America. Three assessment methods were considered, namely Life Cycle Assessment (LCA), Environmental Impact Assessment (EIA) and air emission inventories. The analysis focused on data availability in the databases and the applicability of their international data to Latin American industry. The study showed that the European EMEP/EEA Guidebook and the U.S. EPA AP-42 database are the most suitable ones for air emission inventories, whereas the LCI database Ecoinvent is the most suitable one for LCA and EIA. Due to the data coverage in the databases, air emission inventories are easier to develop than LCA or EIA, which require more comprehensive information. One strategy to overcome the limitations of non-local databases for Latin American industry is the combination of validated data from international databases with newly developed local datasets.

  7. Optic disk localization by a robust fusion method

    NASA Astrophysics Data System (ADS)

    Zhang, Jielin; Yin, Fengshou; Wong, Damon W. K.; Liu, Jiang; Baskaran, Mani; Cheng, Ching-Yu; Wong, Tien Yin

    2013-02-01

    The optic disk localization plays an important role in developing computer-aided diagnosis (CAD) systems for ocular diseases such as glaucoma, diabetic retinopathy and age-related macular degeneration. In this paper, we propose an intelligent fusion of methods for the localization of the optic disk in retinal fundus images. Three different approaches are developed to detect the location of the optic disk separately. The first is the maximum vessel crossing method, which finds the region with the largest number of blood vessel crossing points. The second is the multichannel thresholding method, targeting the area with the highest intensity. The final method searches the vertical and horizontal regions-of-interest separately on the basis of blood vessel structure and neighborhood entropy profile. Finally, these three methods are combined using an intelligent fusion method to improve the overall accuracy. The proposed algorithm was tested on the STARE database and the ORIGAlight database, each consisting of images with various pathologies. Preliminary results achieve 81.5% accuracy on the STARE database and 99% on the ORIGAlight database. The proposed method outperforms each individual approach and a state-of-the-art method that utilizes an intensity-based approach. The result demonstrates a high potential for this method to be used in retinal CAD systems.
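
    The abstract does not spell out the fusion rule, so the following is only a generic weighted-consensus sketch of combining three candidate optic-disk locations, with invented coordinates and weights:

    ```python
    import numpy as np

    # Candidate optic-disk centres (pixels) from three hypothetical detectors,
    # with per-method reliability weights.
    candidates = np.array([[312.0, 240.0], [305.0, 236.0], [480.0, 310.0]])
    weights = np.array([0.4, 0.35, 0.25])

    # Crude consensus filter: drop candidates far from the weighted centre,
    # then fuse the survivors by weighted average.
    center = np.average(candidates, axis=0, weights=weights)
    dist = np.linalg.norm(candidates - center, axis=1)
    keep = dist <= 2 * np.median(dist)
    fused = np.average(candidates[keep], axis=0, weights=weights[keep])
    print(fused)   # the outlier third candidate is rejected
    ```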

  8. 47 CFR Appendix to Part 52 - Deployment Schedule for Long-Term Database Methods for Local Number Portability

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 3 2012-10-01 2012-10-01 false Deployment Schedule for Long-Term Database Methods for Local Number Portability Appendix to Part 52 Telecommunication FEDERAL COMMUNICATIONS...—Deployment Schedule for Long-Term Database Methods for Local Number Portability Implementation must be...

  9. 47 CFR Appendix to Part 52 - Deployment Schedule for Long-Term Database Methods for Local Number Portability

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 3 2013-10-01 2013-10-01 false Deployment Schedule for Long-Term Database Methods for Local Number Portability Appendix to Part 52 Telecommunication FEDERAL COMMUNICATIONS...—Deployment Schedule for Long-Term Database Methods for Local Number Portability Implementation must be...

  10. 47 CFR Appendix to Part 52 - Deployment Schedule for Long-Term Database Methods for Local Number Portability

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 3 2011-10-01 2011-10-01 false Deployment Schedule for Long-Term Database Methods for Local Number Portability Appendix to Part 52 Telecommunication FEDERAL COMMUNICATIONS...—Deployment Schedule for Long-Term Database Methods for Local Number Portability Implementation must be...

  11. 47 CFR Appendix to Part 52 - Deployment Schedule for Long-Term Database Methods for Local Number Portability

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 3 2014-10-01 2014-10-01 false Deployment Schedule for Long-Term Database Methods for Local Number Portability Appendix to Part 52 Telecommunication FEDERAL COMMUNICATIONS...—Deployment Schedule for Long-Term Database Methods for Local Number Portability Implementation must be...

  12. 47 CFR Appendix to Part 52 - Deployment Schedule for Long-Term Database Methods for Local Number Portability

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Deployment Schedule for Long-Term Database Methods for Local Number Portability Appendix to Part 52 Telecommunication FEDERAL COMMUNICATIONS...—Deployment Schedule for Long-Term Database Methods for Local Number Portability Implementation must be...

  13. Rapid development of entity-based data models for bioinformatics with persistence object-oriented design and structured interfaces.

    PubMed

    Ezra Tsur, Elishai

    2017-01-01

    Databases are imperative for research in bioinformatics and computational biology. Current challenges in database design include data heterogeneity and context-dependent interconnections between data entities. These challenges drove the development of unified data interfaces and specialized databases. The curation of specialized databases is an ever-growing challenge due to the introduction of new data sources and the emergence of new relational connections between established datasets. Here, an open-source framework for the curation of specialized databases is proposed. The framework supports user-designed models of data encapsulation, object persistency and structured interfaces to local and external data sources such as MalaCards, Biomodels and the National Centre for Biotechnology Information (NCBI) databases. The proposed framework was implemented using Java as the development environment, EclipseLink as the data persistency agent and Apache Derby as the database manager. Syntactic analysis was based on the J3D, jsoup, Apache Commons and w3c.dom open libraries. Finally, the construction of a specialized database for aneurysm-associated vascular diseases is demonstrated. This database contains 3-dimensional geometries of aneurysms, patients' clinical information, articles, biological models, related diseases and our recently published model of aneurysm rupture risk. The framework is available at: http://nbel-lab.com.

  14. EPA GHG Certification of Medium- and Heavy-Duty Vehicles: Development of Road Grade Profiles Representative of US Controlled Access Highways

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wood, Eric; Duran, Adam; Burton, Evan

    This report includes a detailed comparison of the TomTom national road grade database relative to a local road grade dataset generated by Southwest Research Institute and a national elevation dataset publicly available from the U.S. Geological Survey. This analysis concluded that the TomTom national road grade database was a suitable source of road grade data for purposes of this study.

  15. Cloud-Based Distributed Control of Unmanned Systems

    DTIC Science & Technology

    2015-04-01

    during mission execution. At best, the data is saved onto hard-drives and is accessible only by the local team. Data history in a form available and...following open source technologies: GeoServer, OpenLayers, PostgreSQL, and PostGIS are chosen to implement the back-end database and server. A brief...geospatial map data. 3. PostgreSQL: An SQL-compliant object-relational database that easily scales to accommodate large amounts of data - upwards to

  16. Monitoring outcomes with relational databases: does it improve quality of care?

    PubMed

    Clemmer, Terry P

    2004-12-01

    There are 3 key ingredients in improving the quality of medical care: 1) using a scientific process of improvement, 2) executing the process at the lowest possible level in the organization, and 3) measuring the results of any change reliably. Relational databases, when used within these guidelines, are of great value in these efforts if they contain reliable information that is pertinent to the project and is used in a scientific process of quality improvement by a front-line team. Unfortunately, the data are frequently unreliable and/or not pertinent to the local process, and are used by persons at very high levels in the organization without a scientific process and without reliable measurement of the outcome. Under these circumstances the effectiveness of relational databases in improving care is marginal at best, frequently wasteful, and has the potential to be harmful. This article explores examples of these concepts.

  17. Accessing the public MIMIC-II intensive care relational database for clinical research.

    PubMed

    Scott, Daniel J; Lee, Joon; Silva, Ikaro; Park, Shinhyuk; Moody, George B; Celi, Leo A; Mark, Roger G

    2013-01-10

    The Multiparameter Intelligent Monitoring in Intensive Care II (MIMIC-II) database is a free, public resource for intensive care research. The database was officially released in 2006, and has attracted a growing number of researchers in academia and industry. We present the two major software tools that facilitate accessing the relational database: the web-based QueryBuilder and a downloadable virtual machine (VM) image. QueryBuilder and the MIMIC-II VM have been developed successfully and are freely available to MIMIC-II users. Simple example SQL queries and the resulting data are presented. Clinical studies pertaining to acute kidney injury and prediction of fluid requirements in the intensive care unit are shown as typical examples of research performed with MIMIC-II. In addition, MIMIC-II has also provided data for annual PhysioNet/Computing in Cardiology Challenges, including the 2012 Challenge "Predicting mortality of ICU Patients". QueryBuilder is a web-based tool that provides easy access to MIMIC-II. For more computationally intensive queries, one can locally install a complete copy of MIMIC-II in a VM. Both publicly available tools provide the MIMIC-II research community with convenient querying interfaces and complement the value of the MIMIC-II relational database.
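
    A sketch of the kind of simple SQL query the record refers to, run against a locally installed copy; real MIMIC-II table and column names differ by version, so the schema and rows below are illustrative only (sqlite3 stands in for the full local install):

    ```python
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE icustay (subject_id INTEGER, los_hours REAL, expired INTEGER)")
    db.executemany("INSERT INTO icustay VALUES (?, ?, ?)",
                   [(1, 52.0, 0), (2, 310.5, 1), (3, 18.2, 0)])

    # Typical cohort query: count and mortality rate among stays longer than 48 hours.
    row = db.execute("""SELECT COUNT(*), AVG(expired)
                        FROM icustay WHERE los_hours > 48""").fetchone()
    print(f"stays: {row[0]}, mortality: {row[1]:.2f}")
    ```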

  18. Archetype relational mapping - a practical openEHR persistence solution.

    PubMed

    Wang, Li; Min, Lingtong; Wang, Rui; Lu, Xudong; Duan, Huilong

    2015-11-05

    One of the primary obstacles to the widespread adoption of openEHR methodology is the lack of practical persistence solutions for future-proof electronic health record (EHR) systems as described by the openEHR specifications. This paper presents an archetype relational mapping (ARM) persistence solution for the archetype-based EHR systems to support healthcare delivery in the clinical environment. First, the data requirements of the EHR systems are analysed and organized into archetype-friendly concepts. The Clinical Knowledge Manager (CKM) is queried for matching archetypes; when necessary, new archetypes are developed to reflect concepts that are not encompassed by existing archetypes. Next, a template is designed for each archetype to apply constraints related to the local EHR context. Finally, a set of rules is designed to map the archetypes to data tables and provide data persistence based on the relational database. A comparison study was conducted to investigate the differences among the conventional database of an EHR system from a tertiary Class A hospital in China, the generated ARM database, and the Node + Path database. Five data-retrieving tests were designed based on clinical workflow to retrieve exams and laboratory tests. Additionally, two patient-searching tests were designed to identify patients who satisfy certain criteria. The ARM database achieved better performance than the conventional database in three of the five data-retrieving tests, but was less efficient in the remaining two tests. The time difference of query executions conducted by the ARM database and the conventional database is less than 130%. The ARM database was approximately 6-50 times more efficient than the conventional database in the patient-searching tests, while the Node + Path database requires far more time than the other two databases to execute both the data-retrieving and the patient-searching tests. The ARM approach is capable of generating relational databases using archetypes and templates for archetype-based EHR systems, thus successfully adapting to changes in data requirements. ARM performance is similar to that of conventionally-designed EHR systems, and can be applied in a practical clinical environment. System components such as ARM can greatly facilitate the adoption of openEHR architecture within EHR systems.
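
    The essence of ARM is a mapping rule from archetype/template structure to relational DDL. A toy sketch in Python with sqlite3, using an invented, drastically simplified archetype representation (real openEHR archetypes are far richer than a flat field list):

    ```python
    import sqlite3

    # Hypothetical simplified archetype/template: a name plus typed leaf nodes.
    archetype = {"name": "blood_pressure",
                 "fields": {"systolic": "REAL", "diastolic": "REAL", "position": "TEXT"}}

    def archetype_to_ddl(arch):
        """One mapping rule: archetype -> table, leaf node -> typed column."""
        cols = ", ".join(f"{col} {typ}" for col, typ in arch["fields"].items())
        return f"CREATE TABLE {arch['name']} (ehr_id TEXT, at_time TEXT, {cols})"

    db = sqlite3.connect(":memory:")
    db.execute(archetype_to_ddl(archetype))
    db.execute("INSERT INTO blood_pressure VALUES "
               "('p42', '2015-03-01T09:00', 128, 82, 'sitting')")
    print(db.execute("SELECT systolic, diastolic FROM blood_pressure").fetchall())
    ```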

  19. New Dimensions for the Online Catalog: The Dartmouth College Library Experience [and] TOC/DOC at Caltech: Evolution of Citation Access Online [and] Locally Loaded Databases in Arizona State University's Online Catalog Using the CARL System.

    ERIC Educational Resources Information Center

    Klemperer, Katharina; And Others

    1989-01-01

    Each of three articles describes an academic library's online catalog that includes locally created databases. Topics covered include database and software selection; systems design and development; database producer negotiations; problems encountered during implementation; database loading; training and documentation; and future plans. (CLB)

  20. Exploring earthquake databases for the creation of magnitude-homogeneous catalogues: tools for application on a regional and global scale

    NASA Astrophysics Data System (ADS)

    Weatherill, G. A.; Pagani, M.; Garcia, J.

    2016-09-01

    The creation of a magnitude-homogenized catalogue is often one of the most fundamental steps in seismic hazard analysis. The process of homogenizing multiple catalogues of earthquakes into a single unified catalogue typically requires careful appraisal of available bulletins, identification of common events within multiple bulletins and the development and application of empirical models to convert from each catalogue's native scale into the required target. The database of the International Seismological Centre (ISC) provides the most exhaustive compilation of records from local bulletins, in addition to its reviewed global bulletin. New open-source tools are developed that can utilize this, or any other compiled database, to explore the relations between earthquake solutions provided by different recording networks, and to build and apply empirical models in order to harmonize magnitude scales for the purpose of creating magnitude-homogeneous earthquake catalogues. These tools are described and their application illustrated in two different contexts. The first is a simple application in the Sub-Saharan Africa region where the spatial coverage and magnitude scales for different local recording networks are compared, and their relation to global magnitude scales explored. In the second application the tools are used on a global scale for the purpose of creating an extended magnitude-homogeneous global earthquake catalogue. Several existing high-quality earthquake databases, such as the ISC-GEM and the ISC Reviewed Bulletins, are harmonized into moment magnitude to form a catalogue of more than 562 840 events. This extended catalogue, while not an appropriate substitute for a locally calibrated analysis, can help in studying global patterns in seismicity and hazard, and is therefore released with the accompanying software.
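
    The harmonization step usually reduces to fitting an empirical conversion between magnitude scales on events common to two bulletins, then applying it to the native-scale catalogue. A minimal sketch with invented magnitude pairs (the tools described here use more careful regression, e.g. orthogonal fits that account for error in both scales):

    ```python
    import numpy as np

    # Hypothetical pairs of surface-wave magnitude (Ms) and moment magnitude (Mw)
    # for events common to two bulletins; real work uses thousands of pairs.
    ms = np.array([4.8, 5.2, 5.9, 6.3, 7.0, 7.4])
    mw = np.array([5.0, 5.3, 5.9, 6.2, 6.9, 7.3])

    # Empirical conversion model Mw = a*Ms + b via ordinary least squares.
    a, b = np.polyfit(ms, mw, 1)
    print(f"Mw = {a:.2f}*Ms + {b:.2f}")

    # Homogenize: convert a native-Ms catalogue into the target Mw scale.
    catalogue_ms = np.array([5.5, 6.8])
    print(a * catalogue_ms + b)
    ```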

  1. Correcting Inconsistencies and Errors in Bacterial Genome Metadata Using an Automated Curation Tool in Excel (AutoCurE).

    PubMed

    Schmedes, Sarah E; King, Jonathan L; Budowle, Bruce

    2015-01-01

    Whole-genome data are invaluable for large-scale comparative genomic studies. Current sequencing technologies have made it feasible to sequence entire bacterial genomes with relative ease and speed and at a substantially reduced cost per nucleotide, hence cost per genome. More than 3,000 bacterial genomes have been sequenced and are available at the finished status. Publicly available genomes can be readily downloaded; however, there are challenges in verifying the specific supporting data contained within the download and in identifying errors and inconsistencies that may be present within the organizational data content and metadata. AutoCurE, an automated tool for bacterial genome database curation in Excel, was developed to facilitate local database curation of the supporting data that accompany genomes downloaded from the National Center for Biotechnology Information. AutoCurE provides an automated approach to curate local genomic databases by flagging inconsistencies or errors, comparing the downloaded supporting data to the genome reports to verify genome names, RefSeq accession numbers, the presence of archaea, BioProject/UIDs, and sequence file descriptions. Flags are generated for nine metadata fields if there are inconsistencies between the downloaded genomes and the genome reports or if erroneous or missing data are evident. AutoCurE is an easy-to-use tool for local database curation of large-scale genome data prior to downstream analyses.
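
    A sketch of the flagging logic such a curation tool might apply, comparing downloaded metadata against a genome-report entry; the field names and records are illustrative, not AutoCurE's actual Excel implementation:

    ```python
    # Toy records standing in for a downloaded genome's metadata and the
    # corresponding genome-report entry.
    downloaded = {"name": "Escherichia coli K-12", "refseq": "NC_000913",
                  "bioproject": "PRJNA57779"}
    report     = {"name": "Escherichia coli K-12", "refseq": "NC_000913",
                  "bioproject": "PRJNA57780"}

    def curation_flags(dl, rpt, fields=("name", "refseq", "bioproject")):
        """Flag mismatched or missing metadata fields between the two records."""
        flags = []
        for f in fields:
            if dl.get(f) is None or rpt.get(f) is None:
                flags.append((f, "missing"))
            elif dl[f] != rpt[f]:
                flags.append((f, f"mismatch: {dl[f]!r} vs {rpt[f]!r}"))
        return flags

    print(curation_flags(downloaded, report))  # -> flags the bioproject mismatch
    ```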

  2. Risk of cardiac death among cancer survivors in the United States: a SEER database analysis.

    PubMed

    Abdel-Rahman, Omar

    2017-09-01

    Population-based data on the risk of cardiac death among cancer survivors are needed. This scenario was evaluated in cancer survivors (>5 years) registered within the Surveillance, Epidemiology and End Results (SEER) database. The SEER database was queried using SEER*Stat to determine the frequency of cardiac death compared to other causes of death; and to determine heart disease-specific and cancer-specific survival rates in survivors of each of the 10 most common cancers in men and women in the SEER database. For cancer-specific survival rate, the highest rates were related to thyroid cancer survivors; while the lowest rates were related to lung cancer survivors. For heart disease-specific survival rate, the highest rates were related to thyroid cancer survivors; while the lowest rates were related to both lung cancer survivors and urinary bladder cancer survivors. The following factors were associated with a higher likelihood of cardiac death: male gender, old age at diagnosis, black race and local treatment with radiotherapy rather than surgery (P < 0.0001 for all parameters). Among cancer survivors (>5 years), cardiac death is a significant cause of death and there is a wide variability among different cancers in the relative importance of cardiac death vs. cancer-related death.

  3. The comparative effectiveness of conventional and digital image libraries.

    PubMed

    McColl, R I; Johnson, A

    2001-03-01

    Before introducing a hospital-wide image database to improve access, navigation and retrieval speed, a comparative study between a conventional slide library and a matching image database was undertaken to assess its relative benefits. Paired time trials and personal questionnaires revealed faster retrieval rates, higher image quality, and easier viewing for the pilot digital image database. Analysis of confidentiality, copyright and data protection exposed similar issues for both systems, thus concluding that the digital image database is a more effective library system. The authors suggest that in the future, medical images will be stored on large, professionally administered, centrally located file servers, allowing specialist image libraries to be tailored locally for individual users. The further integration of the database with web technology will enable cheap and efficient remote access for a wide range of users.

  4. ProBiS-database: precalculated binding site similarities and local pairwise alignments of PDB structures.

    PubMed

    Konc, Janez; Cesnik, Tomo; Konc, Joanna Trykowska; Penca, Matej; Janežič, Dušanka

    2012-02-27

    ProBiS-Database is a searchable repository of precalculated local structural alignments in proteins detected by the ProBiS algorithm in the Protein Data Bank. Identification of functionally important binding regions of a protein is facilitated by structural similarity scores mapped to the query protein structure. PDB structures that have been aligned with a query protein may be rapidly retrieved from ProBiS-Database, which is thus able to generate hypotheses concerning the roles of uncharacterized proteins. Presented with an uncharacterized protein structure, ProBiS-Database can discern relationships between such a query protein and other better-known proteins in the PDB. Fast access and a user-friendly graphical interface promote easy exploration of this database of over 420 million local structural alignments. ProBiS-Database is updated weekly and is freely available online at http://probis.cmm.ki.si/database.

  5. 47 CFR 52.26 - NANC Recommendations on Local Number Portability Administration.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... perform a database query to determine if the telephone number has been ported to another local exchange carrier, the local exchange carrier may block the unqueried call only if performing the database query is... manage and oversee the local number portability administrators, subject to review by the NANC, but only...

  6. 47 CFR 52.26 - NANC Recommendations on Local Number Portability Administration.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... perform a database query to determine if the telephone number has been ported to another local exchange carrier, the local exchange carrier may block the unqueried call only if performing the database query is... manage and oversee the local number portability administrators, subject to review by the NANC, but only...

  7. Maximizing the use of Special Olympics International's Healthy Athletes database: A call to action.

    PubMed

    Lloyd, Meghann; Foley, John T; Temple, Viviene A

    2018-02-01

    There is a critical need for high-quality population-level data related to the health of individuals with intellectual disabilities. For more than 15 years, Special Olympics International has been conducting free Healthy Athletes screenings at local, national and international events. The Healthy Athletes database is the largest known international database specifically on the health of people with intellectual disabilities; however, it is relatively under-utilized by the research community. A consensus meeting with two dozen North American researchers, stakeholders, clinicians and policymakers took place in Toronto, Canada. The purpose of the meeting was to: 1) establish the perceived utility of the database, and 2) identify and prioritize 3-5 specific priorities related to using the Healthy Athletes database to promote the health of individuals with intellectual disabilities. There was unanimous agreement among the meeting participants that this database represents an immense opportunity, both in the data already collected and in the data that will be collected in the future. The three top priorities for the database were deemed to be: 1) establish the representativeness of data collected on Special Olympics athletes compared to the general population with intellectual disabilities, 2) create a scientific advisory group for Special Olympics International, and 3) use the data to improve Special Olympics programs around the world. The Special Olympics Healthy Athletes database includes data not found in any other source and should be used, in partnership with Special Olympics International, by researchers to significantly increase our knowledge and understanding of the health of individuals with intellectual disabilities. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  8. Accessing the public MIMIC-II intensive care relational database for clinical research

    PubMed Central

    2013-01-01

    Background The Multiparameter Intelligent Monitoring in Intensive Care II (MIMIC-II) database is a free, public resource for intensive care research. The database was officially released in 2006, and has attracted a growing number of researchers in academia and industry. We present the two major software tools that facilitate accessing the relational database: the web-based QueryBuilder and a downloadable virtual machine (VM) image. Results QueryBuilder and the MIMIC-II VM have been developed successfully and are freely available to MIMIC-II users. Simple example SQL queries and the resulting data are presented. Clinical studies pertaining to acute kidney injury and prediction of fluid requirements in the intensive care unit are shown as typical examples of research performed with MIMIC-II. In addition, MIMIC-II has also provided data for annual PhysioNet/Computing in Cardiology Challenges, including the 2012 Challenge “Predicting mortality of ICU Patients”. Conclusions QueryBuilder is a web-based tool that provides easy access to MIMIC-II. For more computationally intensive queries, one can locally install a complete copy of MIMIC-II in a VM. Both publicly available tools provide the MIMIC-II research community with convenient querying interfaces and complement the value of the MIMIC-II relational database. PMID:23302652
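
    For readers who install the MIMIC-II VM locally, access reduces to ordinary SQL against the bundled PostgreSQL instance. The sketch below is illustrative only: the connection parameters and the icustay_detail table/column names are assumptions that should be checked against the MIMIC-II schema documentation.

        import psycopg2  # PostgreSQL driver; the MIMIC-II VM ships a PostgreSQL server

        # Connection parameters are hypothetical; use the credentials configured in your VM.
        conn = psycopg2.connect(dbname="mimic2", user="mimic2", host="localhost")
        cur = conn.cursor()

        # Count ICU stays per subject (table and column names assumed for illustration).
        cur.execute("""
            SELECT subject_id, COUNT(*) AS n_stays
            FROM icustay_detail
            GROUP BY subject_id
            ORDER BY n_stays DESC
            LIMIT 10
        """)
        for subject_id, n_stays in cur.fetchall():
            print(subject_id, n_stays)
        conn.close()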

  9. LocSigDB: a database of protein localization signals

    PubMed Central

    Negi, Simarjeet; Pandey, Sanjit; Srinivasan, Satish M.; Mohammed, Akram; Guda, Chittibabu

    2015-01-01

    LocSigDB (http://genome.unmc.edu/LocSigDB/) is a manually curated database of experimental protein localization signals for eight distinct subcellular locations, primarily in eukaryotic cells, with brief coverage of bacterial proteins. Proteins must be localized to the appropriate subcellular compartment to perform their desired function. Mislocalization of proteins to unintended locations is a causative factor in many human diseases; therefore, a collection of known sorting signals will help support many important areas of biomedical research. By performing an extensive literature study, we compiled a collection of 533 experimentally determined localization signals, along with the proteins that harbor such signals. Each signal in LocSigDB is annotated with its localization, source and PubMed references, and is linked to the proteins in the UniProt database that contain the same amino acid pattern as the given signal, along with organism information. From the LocSigDB webserver, users can download the whole database or browse/search for data using an intuitive query interface. To date, LocSigDB is the most comprehensive compendium of protein localization signals for eight distinct subcellular locations. Database URL: http://genome.unmc.edu/LocSigDB/ PMID:25725059
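
    As a toy illustration of how such curated signals can be applied, the snippet below scans protein sequences for a C-terminal KDEL/HDEL-type ER-retention motif, one classic localization signal; the sequences and the regular-expression encoding are our own assumptions, not LocSigDB's storage format.

        import re

        # KDEL/HDEL-type C-terminal ER-retention motif (a well-known sorting signal).
        er_retention = re.compile(r"[KH]DEL$")

        # Toy sequences, invented for illustration only.
        proteins = {
            "protein_A": "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQKDEL",
            "protein_B": "MSHHWGYGKHNGPEHWHKDFPIAKGERQSPVDIDTHTA",
        }
        for name, seq in proteins.items():
            if er_retention.search(seq):
                print(f"{name}: carries a C-terminal ER-retention signal")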

  10. Managing Automation: A Process, Not a Project.

    ERIC Educational Resources Information Center

    Hoffmann, Ellen

    1988-01-01

    Discussion of issues in management of library automation includes: (1) hardware, including systems growth and contracts; (2) software changes, vendor relations, local systems, and microcomputer software; (3) item and authority databases; (4) automation and library staff, organizational structure, and managing change; and (5) environmental issues,…

  11. Analysis and Exchange of Multimedia Laboratory Data Using the Brain Database

    PubMed Central

    Wertheim, Steven L.

    1990-01-01

    Two principal goals of the Brain Database are: 1) to support laboratory data collection and analysis of multimedia information about the nervous system and 2) to support exchange of these data among researchers and clinicians who may be physically distant. This has been achieved by an implementation of experimental and clinical records within a relational database. An Image Series Editor has been created that provides a graphical interface to these data for the purposes of annotation, quantification and other analyses. Cooperating laboratories each maintain their own copies of the Brain Database to which they may add private data. Although the data in a given experimental or patient record will be distributed among many tables and external image files, the user can treat each record as a unit that can be extracted from the local database and sent to a distant colleague.

  12. Local intensity area descriptor for facial recognition in ideal and noise conditions

    NASA Astrophysics Data System (ADS)

    Tran, Chi-Kien; Tseng, Chin-Dar; Chao, Pei-Ju; Ting, Hui-Min; Chang, Liyun; Huang, Yu-Jie; Lee, Tsair-Fwu

    2017-03-01

    We propose a local texture descriptor, local intensity area descriptor (LIAD), which is applied for human facial recognition in ideal and noisy conditions. Each facial image is divided into small regions from which LIAD histograms are extracted and concatenated into a single feature vector to represent the facial image. The recognition is performed using a nearest neighbor classifier with histogram intersection and chi-square statistics as dissimilarity measures. Experiments were conducted with LIAD using the ORL database of faces (Olivetti Research Laboratory, Cambridge), the Face94 face database, the Georgia Tech face database, and the FERET database. The results demonstrated the improvement in accuracy of our proposed descriptor compared to conventional descriptors [local binary pattern (LBP), uniform LBP, local ternary pattern, histogram of oriented gradients, and local directional pattern]. Moreover, the proposed descriptor was less sensitive to noise and had low histogram dimensionality. Thus, it is expected to be a powerful texture descriptor that can be used for various computer vision problems.
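
    The pipeline the abstract describes (region-wise histograms concatenated into one feature vector, then nearest-neighbor matching under a chi-square dissimilarity) can be sketched generically as below; the plain intensity histogram stands in for the LIAD code computation, whose details are in the paper.

        import numpy as np

        def block_histograms(img, grid=(8, 8), bins=16):
            """Split a grayscale image into grid blocks; return concatenated histograms."""
            h, w = img.shape
            bh, bw = h // grid[0], w // grid[1]
            feats = []
            for i in range(grid[0]):
                for j in range(grid[1]):
                    block = img[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
                    hist, _ = np.histogram(block, bins=bins, range=(0, 256))
                    feats.append(hist / max(hist.sum(), 1))  # normalize per block
            return np.concatenate(feats)

        def chi_square(a, b, eps=1e-10):
            """Chi-square dissimilarity between two histogram feature vectors."""
            return 0.5 * np.sum((a - b) ** 2 / (a + b + eps))

        def nearest_neighbor(query_feat, gallery_feats, labels):
            """1-NN classification: return the label of the closest gallery feature."""
            d = [chi_square(query_feat, f) for f in gallery_feats]
            return labels[int(np.argmin(d))]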

  13. How effective are DNA barcodes in the identification of African rainforest trees?

    PubMed

    Parmentier, Ingrid; Duminil, Jérôme; Kuzmina, Maria; Philippe, Morgane; Thomas, Duncan W; Kenfack, David; Chuyong, George B; Cruaud, Corinne; Hardy, Olivier J

    2013-01-01

    DNA barcoding of rain forest trees could potentially help biologists identify species and discover new ones. However, DNA barcodes cannot always distinguish between closely related species, and the size and completeness of barcode databases are key parameters for their successful application. We test the ability of rbcL, matK and trnH-psbA plastid DNA markers to identify rain forest trees at two sites in Atlantic central Africa under the assumption that a database is exhaustive in terms of species content, but not necessarily in terms of haplotype diversity within species. We assess the accuracy of identification to species or genus using a genetic distance matrix between samples either based on a global multiple sequence alignment (GD) or on a basic local alignment search tool (BLAST). Where a local database is available (within a 50 ha plot), barcoding was generally reliable for genus identification (95-100% success), but less for species identification (71-88%). Using a single marker, best results for species identification were obtained with trnH-psbA. There was a significant decrease of barcoding success in species-rich clades. When the local database was used to identify the genus of trees from another region and did include all genera from the query individuals but not all species, genus identification success decreased to 84-90%. The GD method performed best but a global multiple sequence alignment is not applicable on trnH-psbA. Barcoding is a useful tool to assign unidentified African rain forest trees to a genus, but identification to a species is less reliable, especially in species-rich clades, even using an exhaustive local database. Combining two markers improves the accuracy of species identification but it would only marginally improve genus identification. Finally, we highlight some limitations of the BLAST algorithm as currently implemented and suggest possible improvements for barcoding applications.

  14. How Effective Are DNA Barcodes in the Identification of African Rainforest Trees?

    PubMed Central

    Parmentier, Ingrid; Duminil, Jérôme; Kuzmina, Maria; Philippe, Morgane; Thomas, Duncan W.; Kenfack, David; Chuyong, George B.; Cruaud, Corinne; Hardy, Olivier J.

    2013-01-01

    Background DNA barcoding of rain forest trees could potentially help biologists identify species and discover new ones. However, DNA barcodes cannot always distinguish between closely related species, and the size and completeness of barcode databases are key parameters for their successful application. We test the ability of rbcL, matK and trnH-psbA plastid DNA markers to identify rain forest trees at two sites in Atlantic central Africa under the assumption that a database is exhaustive in terms of species content, but not necessarily in terms of haplotype diversity within species. Methodology/Principal Findings We assess the accuracy of identification to species or genus using a genetic distance matrix between samples either based on a global multiple sequence alignment (GD) or on a basic local alignment search tool (BLAST). Where a local database is available (within a 50 ha plot), barcoding was generally reliable for genus identification (95–100% success), but less for species identification (71–88%). Using a single marker, best results for species identification were obtained with trnH-psbA. There was a significant decrease of barcoding success in species-rich clades. When the local database was used to identify the genus of trees from another region and did include all genera from the query individuals but not all species, genus identification success decreased to 84–90%. The GD method performed best but a global multiple sequence alignment is not applicable on trnH-psbA. Conclusions/Significance Barcoding is a useful tool to assign unidentified African rain forest trees to a genus, but identification to a species is less reliable, especially in species-rich clades, even using an exhaustive local database. Combining two markers improves the accuracy of species identification but it would only marginally improve genus identification. Finally, we highlight some limitations of the BLAST algorithm as currently implemented and suggest possible improvements for barcoding applications. PMID:23565134

  15. sRNAdb: A small non-coding RNA database for gram-positive bacteria

    PubMed Central

    2012-01-01

    Background The class of small non-coding RNA molecules (sRNA) regulates gene expression by different mechanisms and enables bacteria to mount a physiological response due to adaptation to the environment or infection. Over the last decades the number of known sRNAs has been increasing rapidly. Several databases like Rfam or fRNAdb were extended to include sRNAs as a class of their own. Furthermore, new specialized databases like sRNAMap (gram-negative bacteria only) and sRNATarBase (target prediction) were established. To the best of the authors’ knowledge, no database focusing on sRNAs from gram-positive bacteria has been publicly available so far. Description In order to understand the functional and phylogenetic relationships of sRNAs, we have developed sRNAdb and provide tools for data analysis and visualization. The data compiled in our database are assembled from experiments as well as from bioinformatics analyses. The software enables comparison and visualization of the gene loci surrounding the sRNAs of interest. To accomplish this, we use a client–server based approach. Offline versions of the database, including analysis and visualization tools, can easily be installed locally on the user’s computer. This feature facilitates customized local addition of unpublished sRNA candidates and related information, such as promoters or terminators, using tab-delimited files. Conclusion sRNAdb allows a user-friendly and comprehensive comparative analysis of sRNAs from available sequenced gram-positive prokaryotic replicons. Offline versions including analysis and visualization tools facilitate complex user-specific bioinformatics analyses. PMID:22883983
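
    The offline installation accepts user additions as tab-delimited files; a minimal sketch of staging such a local addition is below, with column names invented for illustration (the columns actually expected by sRNAdb should be taken from its documentation).

        import csv
        import sqlite3

        # Hypothetical local staging of sRNA candidates from a tab-delimited file.
        conn = sqlite3.connect("srna_local.sqlite")
        conn.execute("""CREATE TABLE IF NOT EXISTS candidates
                        (name TEXT, replicon TEXT, start INTEGER, stop INTEGER, strand TEXT)""")
        with open("my_candidates.tsv") as fh:  # columns assumed: name, replicon, start, stop, strand
            rows = [(r["name"], r["replicon"], int(r["start"]), int(r["stop"]), r["strand"])
                    for r in csv.DictReader(fh, delimiter="\t")]
        conn.executemany("INSERT INTO candidates VALUES (?, ?, ?, ?, ?)", rows)
        conn.commit()
        conn.close()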

  16. UniGene Tabulator: a full parser for the UniGene format.

    PubMed

    Lenzi, Luca; Frabetti, Flavia; Facchin, Federica; Casadei, Raffaella; Vitale, Lorenza; Canaider, Silvia; Carinci, Paolo; Zannotti, Maria; Strippoli, Pierluigi

    2006-10-15

    UniGene Tabulator 1.0 provides a solution for full parsing of the UniGene flat file format; it implements a structured graphical representation of each data field present in UniGene following import into a common database management system usable on a personal computer. This database includes related tables for sequence, protein similarity, sequence-tagged site (STS) and transcript map interval (TXMAP) data, plus a summary table where each record represents a UniGene cluster. UniGene Tabulator enables full local management of UniGene data, allowing parsing, querying, indexing, retrieving, exporting and analysis of UniGene data in relational database form, usable on Macintosh (OS X 10.3.9 or later) and Windows (2000 with service pack 4, or XP with service pack 2 or later) operating system-based computers. The current release, including both FileMaker runtime applications, is freely available at http://apollo11.isto.unibo.it/software/
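
    A minimal parser for the UniGene flat-file layout might look like the sketch below. It assumes the conventional structure of key/value lines terminated by a "//" record separator, with repeatable fields such as SEQUENCE, STS and PROTSIM collected into lists; the field names are recalled from the public UniGene format and should be verified against a current file.

        def parse_unigene(path):
            """Yield one dict per UniGene cluster record (assumes '//' record separators)."""
            record = {}
            repeatable = {"SEQUENCE", "STS", "PROTSIM"}  # fields that may occur many times
            with open(path) as fh:
                for line in fh:
                    line = line.rstrip("\n")
                    if line.startswith("//"):        # end of one cluster record
                        if record:
                            yield record
                        record = {}
                        continue
                    parts = line.split(None, 1)      # key, then the rest of the line
                    if not parts:
                        continue
                    key = parts[0]
                    value = parts[1].strip() if len(parts) > 1 else ""
                    if key in repeatable:
                        record.setdefault(key, []).append(value)
                    else:
                        record[key] = value

        # Usage sketch:
        # for cluster in parse_unigene("Hs.data"):
        #     print(cluster.get("ID"), cluster.get("TITLE"))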

  17. The Magnetics Information Consortium (MagIC)

    NASA Astrophysics Data System (ADS)

    Johnson, C.; Constable, C.; Tauxe, L.; Koppers, A.; Banerjee, S.; Jackson, M.; Solheid, P.

    2003-12-01

    The Magnetics Information Consortium (MagIC) is a multi-user facility to establish and maintain a state-of-the-art relational database and digital archive for rock and paleomagnetic data. The goal of MagIC is to make such data generally available and to provide an information technology infrastructure for these and other research-oriented databases run by the international community. As its name implies, MagIC will not be restricted to paleomagnetic or rock magnetic data only, although MagIC will focus on these kinds of information during its setup phase. MagIC will be hosted under EarthRef.org at http://earthref.org/MAGIC/ where two "integrated" web portals will be developed, one for paleomagnetism (currently functional as a prototype that can be explored via the http://earthref.org/databases/PMAG/ link) and one for rock magnetism. The MagIC database will store all measurements and their derived properties for studies of paleomagnetic directions (inclination, declination) and their intensities, and for rock magnetic experiments (hysteresis, remanence, susceptibility, anisotropy). Ultimately, this database will allow researchers to study "on the internet" and to download important data sets that display paleo-secular variations in the intensity of the Earth's magnetic field over geological time, or that display magnetic data in typical Zijderveld, hysteresis/FORC and various magnetization/remanence diagrams. The MagIC database is completely integrated in the EarthRef.org relational database structure and thus benefits significantly from already-existing common database components, such as the EarthRef Reference Database (ERR) and Address Book (ERAB). The ERR allows researchers to find complete sets of literature resources as used in GERM (Geochemical Earth Reference Model), REM (Reference Earth Model) and MagIC. The ERAB contains addresses for all contributors to the EarthRef.org databases, and also for those who participated in data collection, archiving and analysis in the magnetic studies. Integration with these existing components will guarantee direct traceability to the original sources of the MagIC data and metadata. The MagIC database design focuses around the general workflow that results in the determination of typical paleomagnetic and rock magnetic analyses. This ensures that individual data points can be traced between the actual measurements and their associated specimen, sample, site, rock formation and locality. This permits a distinction between original and derived data, where the actual measurements are performed at the specimen level, and data at the sample level and higher are then derived products in the database. These relations will also allow recalculation of derived properties, such as site means, when new data becomes available for a specific locality. Data contribution to the MagIC database is critical in achieving a useful research tool. We have developed a standard data and metadata template that can be used to provide all data at the same time as publication. Software tools are provided to facilitate easy population of these templates. The tools allow for the import/export of data files in a delimited text format, and they provide some advanced functionality to validate data and to check internal coherence of the data in the template. During and after publication these standardized MagIC templates will be stored in the ERR database of EarthRef.org from where they can be downloaded at all times. 
Finally, the contents of these template files will be automatically parsed into the online relational database.

  18. Assessment and application of national environmental databases and mapping tools at the local level to two community case studies.

    PubMed

    Hammond, Davyda; Conlon, Kathryn; Barzyk, Timothy; Chahine, Teresa; Zartarian, Valerie; Schultz, Brad

    2011-03-01

    Communities are concerned over pollution levels and seek methods to systematically identify and prioritize the environmental stressors in their communities. Geographic information system (GIS) maps of environmental information can be useful tools for communities in their assessment of environmental-pollution-related risks. Databases and mapping tools that supply community-level estimates of ambient concentrations of hazardous pollutants, risk, and potential health impacts can provide relevant information for communities to understand, identify, and prioritize potential exposures and risk from multiple sources. An assessment of existing databases and mapping tools was conducted as part of this study to explore the utility of publicly available databases, and three of these databases were selected for use in a community-level GIS mapping application. Queried data from the U.S. EPA's National-Scale Air Toxics Assessment, Air Quality System, and National Emissions Inventory were mapped at the appropriate spatial and temporal resolutions for identifying risks of exposure to air pollutants in two communities. The maps combine monitored and model-simulated pollutant and health risk estimates, along with local survey results, to assist communities with the identification of potential exposure sources and pollution hot spots. Findings from this case study analysis will provide information to advance the development of new tools to assist communities with environmental risk assessments and hazard prioritization. © 2010 Society for Risk Analysis.

  19. A blue carbon soil database: Tidal wetland stocks for the US National Greenhouse Gas Inventory

    NASA Astrophysics Data System (ADS)

    Feagin, R. A.; Eriksson, M.; Hinson, A.; Najjar, R. G.; Kroeger, K. D.; Herrmann, M.; Holmquist, J. R.; Windham-Myers, L.; MacDonald, G. M.; Brown, L. N.; Bianchi, T. S.

    2015-12-01

    Coastal wetlands contain large reservoirs of carbon, and in 2015 the US National Greenhouse Gas Inventory began the work of placing blue carbon within the national regulatory context. The potential value of a wetland carbon stock, in relation to its location, could soon be influential in determining governmental policy and management activities, or in stimulating market-based CO2 sequestration projects. To meet the national need for high-resolution maps, a blue carbon stock database was developed linking National Wetlands Inventory datasets with the USDA Soil Survey Geographic Database. Users of the database can identify the economic potential for carbon conservation or restoration projects within specific estuarine basins, states, wetland types, physical parameters, and land management activities. The database is geared towards both national-level assessments and local-level inquiries. Spatial analysis of the stocks shows high variance within individual estuarine basins, largely dependent on geomorphic position on the landscape, though there are continental-scale trends in the carbon distribution as well. Future plans include linking this database with a sedimentary accretion database to predict carbon flux in US tidal wetlands.

  20. Adult Nutrition Education Materials. January 1982-October 1988. Quick Bibliography Series.

    ERIC Educational Resources Information Center

    Irving, Holly Berry

    This annotated bibliography of materials available from the National Agricultural Library through interlibrary loan to local libraries focuses on nutrition and dietetics as they relate to physical health and special health problems. The bibliography was derived from online searches of the AGRICOLA database, and materials include audiovisuals,…

  1. The influence of local spring temperature variance on temperature sensitivity of spring phenology.

    PubMed

    Wang, Tao; Ottlé, Catherine; Peng, Shushi; Janssens, Ivan A; Lin, Xin; Poulter, Benjamin; Yue, Chao; Ciais, Philippe

    2014-05-01

    The impact of climate warming on the advancement of plant spring phenology has been heavily investigated over the last decade, and there exists great variability among plants in their phenological sensitivity to temperature. However, few studies have explicitly linked phenological sensitivity to local climate variance. Here, we set out to test the hypothesis that the strength of phenological sensitivity declines with increased local spring temperature variance, by synthesizing results across ground observations. We assembled a ground-based long-term (20-50 years) spring phenology database (the PEP725 database) and the corresponding climate dataset. We find a prevalent decline in the strength of phenological sensitivity with increasing local spring temperature variance at the species level from ground observations. This suggests that plants might be less likely to track climatic warming at locations with larger local spring temperature variance. This might be related to the possibility that frost risk is higher where local spring temperature variance is larger, and that plants adapt to avoid this risk by relying more on other cues (e.g., high chill requirements, photoperiod) for spring phenology, thus suppressing phenological responses to spring warming. This study shows that local spring temperature variance is an understudied factor in the study of phenological sensitivity and highlights the necessity of incorporating it to improve the predictability of plant responses to anthropogenic climate change in future studies. © 2013 John Wiley & Sons Ltd.
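
    Phenological temperature sensitivity in such studies is typically the slope of a regression of spring event dates on mean spring temperature (days advanced per degree C). A worked toy example on synthetic data (not PEP725 data) is below.

        import numpy as np

        rng = np.random.default_rng(0)
        years = 40
        spring_temp = rng.normal(8.0, 1.5, years)   # mean spring temperature (deg C), 40 years
        # Synthetic leaf-out dates: 3 days earlier per extra degree, plus observation noise.
        leaf_out_doy = 120.0 - 3.0 * (spring_temp - 8.0) + rng.normal(0.0, 2.0, years)

        # The regression slope is the temperature sensitivity (negative = advancing).
        slope, intercept = np.polyfit(spring_temp, leaf_out_doy, 1)
        print(f"temperature sensitivity: {slope:.2f} days per deg C")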

  2. Learning about and Practice of Designing Local Data Bases as an Harmonizing Factor.

    ERIC Educational Resources Information Center

    Neelameghan, A.

    This paper provides information workers with some practical approaches to the design, development, and use of local databases that form components of information storage and retrieval systems (ISR) and of automated library operations. Topics discussed include: (1) course objectives for the design and development of local databases for library and…

  3. The Brainomics/Localizer database.

    PubMed

    Papadopoulos Orfanos, Dimitri; Michel, Vincent; Schwartz, Yannick; Pinel, Philippe; Moreno, Antonio; Le Bihan, Denis; Frouin, Vincent

    2017-01-01

    The Brainomics/Localizer database exposes part of the data collected by the in-house Localizer project, which planned to acquire four types of data from volunteer research subjects: anatomical MRI scans, functional MRI data, behavioral and demographic data, and DNA sampling. Over the years, this local project has been collecting such data from hundreds of subjects. We had selected 94 of these subjects for their complete datasets, including all four types of data, as the basis for a prior publication; the Brainomics/Localizer database publishes the data associated with these 94 subjects. Since regulatory rules prevent us from making genetic data available for download, the database serves only anatomical MRI scans, functional MRI data, behavioral and demographic data. To publish this set of heterogeneous data, we use dedicated software based on the open-source CubicWeb semantic web framework. Through genericity in the data model and flexibility in the display of data (web pages, CSV, JSON, XML), CubicWeb helps us expose these complex datasets in original and efficient ways. Copyright © 2015 Elsevier Inc. All rights reserved.

  4. Evolution of a Patient Information Management System in a Local Area Network Environment at Loyola University of Chicago Medical Center

    PubMed Central

    Price, Ronald N; Chandrasekhar, Arcot J; Tamirisa, Balaji

    1990-01-01

    The Department of Medicine at Loyola University Medical Center (LUMC) of Chicago has implemented a local area network (LAN) based Patient Information Management System (PIMS) as part of its integrated departmental database management system. PIMS consists of related database applications encompassing demographic information, current medications, problem lists, clinical data, prior events, and on-line procedure results. Integration into the existing departmental database system permits PIMS to capture and manipulate data in other departmental applications. Standardization of clinical data is accomplished through three data tables that verify diagnosis codes, procedure codes and a standardized set of clinical data elements. The modularity of the system, coupled with standardized data formats, allowed the development of a Patient Information Protocol System (PIPS). PIPS, a user-definable protocol processor, provides physicians with individualized data entry or review screens customized for their specific research protocols or practice habits. Physician feedback indicates that the PIMS/PIPS combination enhances their ability to collect and review specific patient information by filtering large amounts of clinical data.

  5. LAND-deFeND - An innovative database structure for landslides and floods and their consequences.

    PubMed

    Napolitano, Elisabetta; Marchesini, Ivan; Salvati, Paola; Donnini, Marco; Bianchi, Cinzia; Guzzetti, Fausto

    2018-02-01

    Information on historical landslides and floods - collectively called "geo-hydrological hazards" - is key to understanding the complex dynamics of the events, to estimating the temporal and spatial frequency of damaging events, and to quantifying their impact. A number of databases on geo-hydrological hazards and their consequences have been developed worldwide at different geographical and temporal scales. Of the few available database structures that can handle information on both landslides and floods, some are outdated and others were not designed to store, organize, and manage information on single phenomena or on the type and monetary value of the damages and the remediation actions. Here, we present the LANDslides and Floods National Database (LAND-deFeND), a new database structure able to store, organize, and manage in a single digital structure spatial information collected from various sources with different accuracy. In designing LAND-deFeND, we defined four groups of entities, namely: nature-related, human-related, geospatial-related, and information-source-related entities, which collectively can fully describe the geo-hydrological hazards and their consequences. In LAND-deFeND, the main entities are the nature-related ones, encompassing: (i) the "phenomenon", a single landslide or local inundation; (ii) the "event", which represents the ensemble of the inundations and/or landslides that occurred in a conventional geographical area in a limited period; and (iii) the "trigger", which is the meteo-climatic or seismic cause of the geo-hydrological hazards. LAND-deFeND maintains the relations between the nature-related entities and the human-related entities even where the information is partially missing. The physical model of LAND-deFeND contains 32 tables, including nine input tables, 21 dictionary tables, and two association tables, and ten views, including specific views that make the database structure compliant with the EC INSPIRE and Floods Directives. The LAND-deFeND database structure is open, and freely available from http://geomorphology.irpi.cnr.it/tools. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
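
    To make the entity grouping concrete, here is a heavily simplified relational sketch of the three nature-related entities and one association table; it illustrates the idea only and is not the actual 32-table LAND-deFeND physical model.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            -- 'triggers' holds the meteo-climatic or seismic cause of an event.
            CREATE TABLE triggers  (trigger_id INTEGER PRIMARY KEY, kind TEXT, started TEXT);
            -- 'events' groups phenomena in a conventional area over a limited period.
            CREATE TABLE events    (event_id INTEGER PRIMARY KEY, area TEXT, period TEXT,
                                    trigger_id INTEGER REFERENCES triggers(trigger_id));
            -- 'phenomena' are single landslides or local inundations.
            CREATE TABLE phenomena (phenomenon_id INTEGER PRIMARY KEY, kind TEXT,
                                    lat REAL, lon REAL);
            -- association table: one event gathers many phenomena.
            CREATE TABLE event_phenomenon
                (event_id      INTEGER REFERENCES events(event_id),
                 phenomenon_id INTEGER REFERENCES phenomena(phenomenon_id));
        """)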

  6. Real Time Monitor of Grid job executions

    NASA Astrophysics Data System (ADS)

    Colling, D. J.; Martyniak, J.; McGough, A. S.; Křenek, A.; Sitera, J.; Mulač, M.; Dvořák, F.

    2010-04-01

    In this paper we describe the architecture and operation of the Real Time Monitor (RTM), developed by the Grid team in the HEP group at Imperial College London. This is arguably the most popular dissemination tool within the EGEE [1] Grid, having been used on many occasions, including the GridFest and LHC inauguration events held at CERN in October 2008. The RTM gathers information from EGEE sites hosting Logging and Bookkeeping (LB) services. Information is cached locally at a dedicated server at Imperial College London and made available for clients to use in near real time. The system consists of three main components: the RTM server, the enquirer and an Apache web server which is queried by clients. The RTM server queries the LB servers at fixed time intervals, collecting job-related information and storing it in a local database. Job-related data includes not only job state (i.e. Scheduled, Waiting, Running or Done) along with timing information, but also other attributes such as Virtual Organization and Computing Element (CE) queue, if known. The job data stored in the RTM database is read by the enquirer every minute and converted to an XML format which is stored on a web server. This decouples the RTM server database from the clients, removing the bottleneck caused by many clients simultaneously accessing the database. This information can be visualized through either a 2D or 3D Java-based client, with live job data either overlaid on a two-dimensional map of the world or rendered in three dimensions over a globe map using OpenGL.
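
    The decoupling described (a server polls LB services into a local database, and an enquirer periodically renders the database to XML for web clients) can be sketched as follows; fetch_jobs() is a placeholder for the LB query, and the schema is invented for illustration.

        import sqlite3
        import xml.etree.ElementTree as ET

        def store_jobs(conn, fetch_jobs):
            """Poll step: fetch_jobs() stands in for querying an LB server."""
            conn.execute("""CREATE TABLE IF NOT EXISTS jobs
                            (job_id TEXT PRIMARY KEY, state TEXT, vo TEXT, ce TEXT)""")
            conn.executemany("INSERT OR REPLACE INTO jobs VALUES (?, ?, ?, ?)", fetch_jobs())
            conn.commit()

        def export_xml(conn, path):
            """Enquirer step: snapshot the database to XML so clients never touch it directly."""
            root = ET.Element("jobs")
            for job_id, state, vo, ce in conn.execute("SELECT job_id, state, vo, ce FROM jobs"):
                ET.SubElement(root, "job", id=job_id, state=state, vo=vo, ce=ce)
            ET.ElementTree(root).write(path, encoding="utf-8")

        conn = sqlite3.connect("rtm_local.sqlite")
        store_jobs(conn, lambda: [("job-001", "Running", "cms", "ce01.example.org")])
        export_xml(conn, "jobs.xml")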

  7. Health information and communication system for emergency management in a developing country, Iran.

    PubMed

    Seyedin, Seyed Hesam; Jamali, Hamid R

    2011-08-01

    Disasters are fortunately rare occurrences. However, accurate and timely information and communication are vital to adequately prepare individual health organizations for such events. The current article investigates the health-related communication and information systems for emergency management in Iran. A mixed qualitative and quantitative methodology was used in this study. A sample of 230 health service managers was surveyed using a questionnaire, and 65 semi-structured interviews were also conducted with public health and therapeutic affairs managers who were responsible for emergency management. A range of problems were identified, including fragmentation of information, lack of local databases, lack of a clear information strategy and lack of a formal system for logging disaster-related information at the regional or local level. Recommendations were made for improving the national emergency management information and communication system. The findings have implications for health organizations in developing and developed countries, especially in the Middle East. Creating disaster-related information databases, creating protocols and standards, setting an information strategy, training staff and hosting a center for the information system in the Ministry of Health to centrally manage and share the data could improve the current information system.

  8. Local Table Condensation in Rough Set Approach for Jumping Emerging Pattern Induction

    NASA Astrophysics Data System (ADS)

    Terlecki, Pawel; Walczak, Krzysztof

    This paper extends the rough set approach for JEP induction based on the notion of a condensed decision table. The original transaction database is transformed to a relational form and patterns are induced by means of local reducts. The transformation employs an item aggregation obtained by coloring a graph that reflects conflicts among items. For efficiency reasons we propose to perform this preprocessing locally, i.e. at the transaction level, to achieve a higher dimensionality gain. A special maintenance strategy is also used to avoid graph rebuilds. Both the global and local approaches have been tested and discussed for dense and synthetically generated sparse datasets.
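
    The item aggregation step, coloring a graph whose edges mark conflicting items so that each color class can become one relational attribute, can be sketched with a simple greedy colorer; the conflict graph below is a toy input, not data from the paper.

        def greedy_coloring(items, conflicts):
            """Assign each item the smallest color unused by its conflicting neighbours."""
            color = {}
            for item in items:
                used = {color[n] for n in conflicts.get(item, ()) if n in color}
                c = 0
                while c in used:
                    c += 1
                color[item] = c
            return color

        # Toy conflict graph: items that co-occur in some transaction conflict,
        # so they cannot share an aggregated attribute.
        items = ["a", "b", "c", "d"]
        conflicts = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}, "d": set()}
        print(greedy_coloring(items, conflicts))   # e.g. {'a': 0, 'b': 1, 'c': 0, 'd': 0}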

  9. The FREGAT biobank: a clinico-biological database dedicated to esophageal and gastric cancers.

    PubMed

    Mariette, Christophe; Renaud, Florence; Piessen, Guillaume; Gele, Patrick; Copin, Marie-Christine; Leteurtre, Emmanuelle; Delaeter, Christine; Dib, Malek; Clisant, Stéphanie; Harter, Valentin; Bonnetain, Franck; Duhamel, Alain; Christophe, Véronique; Adenis, Antoine

    2018-02-06

    While the incidence of esophageal and gastric cancers is increasing, the prognosis of these cancers remains bleak. Endoscopy and surgery are the standard treatments for localized tumors, but multimodal treatments combining chemotherapy, targeted therapies, immunotherapy, radiotherapy, and surgery are needed for the vast majority of patients, who present with locally advanced or metastatic disease at diagnosis. Although survival has improved, most patients still present with advanced disease at diagnosis. In addition, most patients exhibit a poor or incomplete response to treatment, experience early recurrence and have an impaired quality of life. Compared with several other cancers, the therapeutic approach is not personalized, and research is much less developed. It is, therefore, urgent to hasten the development of research protocols and, consequently, to develop a large, ambitious and innovative tool through which future scientific questions may be answered. This research must be patient-related so that rapid feedback to the bedside is achieved, and should aim to identify clinical-, biological- and tumor-related factors that are associated with treatment resistance. Finally, this research should also seek to explain epidemiological and social facets of disease behavior. The prospective FREGAT database, established by the French National Cancer Institute, is focused on adult patients with carcinomas of the esophagus and stomach, whatever the tumor stage or therapeutic strategy. The database includes epidemiological, clinical, and tumor characteristics data as well as follow-up, human and social sciences and quality of life data, along with a tumor and serum bank. This innovative research tool will allow the banking of millions of data points for the development of excellent basic, translational and clinical research programs for esophageal and gastric cancer. This will ultimately improve general knowledge of these diseases, therapeutic strategies and patient survival. The database was initially developed in France on a nationwide basis, but it is currently open to worldwide contributions, both for the input of patient data and for requests for data for scientific projects. The FREGAT database has a dedicated website ( www.fregat-database.org ) and has been registered on the ClinicalTrials.gov site, number NCT02526095, since August 8, 2015.

  10. Experimental Investigation on the Mechanical Instability of Superelastic NiTi Shape Memory Alloy

    NASA Astrophysics Data System (ADS)

    Xiao, Yao; Zeng, Pan; Lei, Liping

    2016-09-01

    In this paper, primary attention is paid to the mechanical instability of superelastic NiTi shape memory alloy (SMA) during localized forward transformation at different temperatures. By inhibiting the localized phase transformation, we can obtain the up-down-up mechanical response of NiTi SMA, which is closely related to the intrinsic material softening during localized martensitic transformation. Furthermore, the material parameters of the up-down-up stress-strain curve are extracted, so that this database can be utilized for simulation and for validation of theoretical analyses. It is found that during forward transformation, the upper yield stress, lower yield stress, Maxwell stress, and nucleation stress of NiTi SMA exhibit a linear dependence on temperature. The relation between nucleation stress and temperature can be explained by the famous Clausius-Clapeyron equation, while the relation between upper/lower yield stress and temperature lacks theoretical explanation and needs further investigation.
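
    For reference, the linear stress-temperature dependence invoked here is commonly written as a one-dimensional Clausius-Clapeyron relation for stress-induced martensitic transformation. The form below is one common convention (signs and symbol definitions vary across the literature), where sigma_nuc is the nucleation stress, epsilon_t the transformation strain, Delta S and Delta H the transformation entropy and enthalpy per unit volume, and T_0 the equilibrium transformation temperature:

        \frac{\mathrm{d}\sigma_{\mathrm{nuc}}}{\mathrm{d}T}
            = -\frac{\Delta S}{\varepsilon_{t}}
            = \frac{\Delta H}{\varepsilon_{t}\, T_{0}}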

  11. Nutrient estimation from an FFQ developed for a black Zimbabwean population

    PubMed Central

    Merchant, Anwar T; Dehghan, Mahshid; Chifamba, Jephat; Terera, Getrude; Yusuf, Salim

    2005-01-01

    Background There is little information in the literature on methods of food composition database development to calculate nutrient intake from food frequency questionnaire (FFQ) data. The aim of this study is to describe the development of an FFQ and a food composition table to calculate nutrient intake in a Black Zimbabwean population. Methods Trained interviewers collected 24-hour dietary recalls (24 hr DR) from high- and low-income families in urban and rural Zimbabwe. Based on these data and input from local experts, we developed an FFQ containing a list of frequently consumed foods, standard portion sizes, and categories of consumption frequency. We created a food composition table of the foods found in the FFQ so that we could compute nutrient intake. We used the USDA nutrient database as the main resource because it is relatively complete, updated, and easily accessible. To choose the food item in the USDA nutrient database that most closely matched the nutrient content of the local food, we referred to a local food composition table. Results Almost all the participants ate sadza (maize porridge) at least 5 times a week, and about half had matemba (fish) and caterpillar more than once a month. Nutrient estimates obtained from the FFQ data by using the USDA and Zimbabwean food composition tables were similar for total energy intake (intraclass correlation, ICC = 0.99) and carbohydrate (ICC = 0.99), but different for vitamin A (ICC = 0.53) and total folate (ICC = 0.68). Conclusion We have described a standardized process of FFQ and food composition database development for a Black Zimbabwean population. PMID:16351722

  12. Image-Based Localization for Indoor Environment Using Mobile Phone

    NASA Astrophysics Data System (ADS)

    Huang, Y.; Wang, H.; Zhan, K.; Zhao, J.; Gui, P.; Feng, T.

    2015-05-01

    Real-time indoor localization based on supporting infrastructure like wireless devices and QR codes is usually costly and labor-intensive to implement. In this study, we explored a cheap alternative approach based on images for indoor localization. Users can localize themselves by simply shooting a photo of the surrounding indoor environment with a mobile phone; no other equipment is required. This is achieved by employing image-matching and searching techniques with a dataset of pre-captured indoor images. First, a database of structured images of the indoor environment is constructed using image matching and the bundle adjustment algorithm. Each image's relative pose (its position and orientation) is then estimated and the semantic locations of images are tagged. A user's location can then be determined by comparing a photo taken by the mobile phone to the database. This is done by combining quick image searching, matching and relative orientation estimation. This study also explores image acquisition plans and the processing capacity of off-the-shelf mobile phones. Throughout the pipeline, a collection of indoor images with both rich and poor textures is examined. Several feature detectors are used and compared. Pre-processing of complex indoor photos is also implemented on the mobile phone. The preliminary experimental results prove the feasibility of this method. In future work, we aim to improve the efficiency of matching between indoor images and to exploit fast 4G wireless communication to ensure the speed and accuracy of localization within a client-server framework.
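
    A minimal version of the image-matching step can be assembled from OpenCV's ORB features and brute-force Hamming matching, as below; the paper compares several detectors, so ORB here is our stand-in choice, and the file names are placeholders.

        import cv2

        # Placeholder file names; 'db_image.jpg' stands in for one pre-captured database image.
        query = cv2.imread("phone_photo.jpg", cv2.IMREAD_GRAYSCALE)
        candidate = cv2.imread("db_image.jpg", cv2.IMREAD_GRAYSCALE)

        # Detect keypoints and compute binary descriptors in both images.
        orb = cv2.ORB_create(nfeatures=1000)
        kp_q, des_q = orb.detectAndCompute(query, None)
        kp_c, des_c = orb.detectAndCompute(candidate, None)

        # Brute-force Hamming matching with cross-checking suits binary ORB descriptors.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des_q, des_c), key=lambda m: m.distance)

        # Ranking database images by match count is one crude localization score.
        print(f"{len(matches)} putative correspondences")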

  13. The database of the PREDICTS (Projecting Responses of Ecological Diversity In Changing Terrestrial Systems) project.

    PubMed

    Hudson, Lawrence N; Newbold, Tim; Contu, Sara; Hill, Samantha L L; Lysenko, Igor; De Palma, Adriana; Phillips, Helen R P; Alhusseini, Tamera I; Bedford, Felicity E; Bennett, Dominic J; Booth, Hollie; Burton, Victoria J; Chng, Charlotte W T; Choimes, Argyrios; Correia, David L P; Day, Julie; Echeverría-Londoño, Susy; Emerson, Susan R; Gao, Di; Garon, Morgan; Harrison, Michelle L K; Ingram, Daniel J; Jung, Martin; Kemp, Victoria; Kirkpatrick, Lucinda; Martin, Callum D; Pan, Yuan; Pask-Hale, Gwilym D; Pynegar, Edwin L; Robinson, Alexandra N; Sanchez-Ortiz, Katia; Senior, Rebecca A; Simmons, Benno I; White, Hannah J; Zhang, Hanbin; Aben, Job; Abrahamczyk, Stefan; Adum, Gilbert B; Aguilar-Barquero, Virginia; Aizen, Marcelo A; Albertos, Belén; Alcala, E L; Del Mar Alguacil, Maria; Alignier, Audrey; Ancrenaz, Marc; Andersen, Alan N; Arbeláez-Cortés, Enrique; Armbrecht, Inge; Arroyo-Rodríguez, Víctor; Aumann, Tom; Axmacher, Jan C; Azhar, Badrul; Azpiroz, Adrián B; Baeten, Lander; Bakayoko, Adama; Báldi, András; Banks, John E; Baral, Sharad K; Barlow, Jos; Barratt, Barbara I P; Barrico, Lurdes; Bartolommei, Paola; Barton, Diane M; Basset, Yves; Batáry, Péter; Bates, Adam J; Baur, Bruno; Bayne, Erin M; Beja, Pedro; Benedick, Suzan; Berg, Åke; Bernard, Henry; Berry, Nicholas J; Bhatt, Dinesh; Bicknell, Jake E; Bihn, Jochen H; Blake, Robin J; Bobo, Kadiri S; Bóçon, Roberto; Boekhout, Teun; Böhning-Gaese, Katrin; Bonham, Kevin J; Borges, Paulo A V; Borges, Sérgio H; Boutin, Céline; Bouyer, Jérémy; Bragagnolo, Cibele; Brandt, Jodi S; Brearley, Francis Q; Brito, Isabel; Bros, Vicenç; Brunet, Jörg; Buczkowski, Grzegorz; Buddle, Christopher M; Bugter, Rob; Buscardo, Erika; Buse, Jörn; Cabra-García, Jimmy; Cáceres, Nilton C; Cagle, Nicolette L; Calviño-Cancela, María; Cameron, Sydney A; Cancello, Eliana M; Caparrós, Rut; Cardoso, Pedro; Carpenter, Dan; Carrijo, Tiago F; Carvalho, Anelena L; Cassano, Camila R; Castro, Helena; Castro-Luna, Alejandro A; Rolando, Cerda B; Cerezo, Alexis; Chapman, Kim Alan; Chauvat, Matthieu; Christensen, Morten; Clarke, Francis M; Cleary, Daniel F R; Colombo, Giorgio; Connop, Stuart P; Craig, Michael D; Cruz-López, Leopoldo; Cunningham, Saul A; D'Aniello, Biagio; D'Cruze, Neil; da Silva, Pedro Giovâni; Dallimer, Martin; Danquah, Emmanuel; Darvill, Ben; Dauber, Jens; Davis, Adrian L V; Dawson, Jeff; de Sassi, Claudio; de Thoisy, Benoit; Deheuvels, Olivier; Dejean, Alain; Devineau, Jean-Louis; Diekötter, Tim; Dolia, Jignasu V; Domínguez, Erwin; Dominguez-Haydar, Yamileth; Dorn, Silvia; Draper, Isabel; Dreber, Niels; Dumont, Bertrand; Dures, Simon G; Dynesius, Mats; Edenius, Lars; Eggleton, Paul; Eigenbrod, Felix; Elek, Zoltán; Entling, Martin H; Esler, Karen J; de Lima, Ricardo F; Faruk, Aisyah; Farwig, Nina; Fayle, Tom M; Felicioli, Antonio; Felton, Annika M; Fensham, Roderick J; Fernandez, Ignacio C; Ferreira, Catarina C; Ficetola, Gentile F; Fiera, Cristina; Filgueiras, Bruno K C; Fırıncıoğlu, Hüseyin K; Flaspohler, David; Floren, Andreas; Fonte, Steven J; Fournier, Anne; Fowler, Robert E; Franzén, Markus; Fraser, Lauchlan H; Fredriksson, Gabriella M; Freire, Geraldo B; Frizzo, Tiago L M; Fukuda, Daisuke; Furlani, Dario; Gaigher, René; Ganzhorn, Jörg U; García, Karla P; Garcia-R, Juan C; Garden, Jenni G; Garilleti, Ricardo; Ge, Bao-Ming; Gendreau-Berthiaume, Benoit; Gerard, Philippa J; Gheler-Costa, Carla; Gilbert, Benjamin; Giordani, Paolo; Giordano, Simonetta; Golodets, Carly; Gomes, Laurens G L; Gould, Rachelle K; Goulson, Dave; Gove, Aaron D; Granjon, Laurent; Grass, 
Ingo; Gray, Claudia L; Grogan, James; Gu, Weibin; Guardiola, Moisès; Gunawardene, Nihara R; Gutierrez, Alvaro G; Gutiérrez-Lamus, Doris L; Haarmeyer, Daniela H; Hanley, Mick E; Hanson, Thor; Hashim, Nor R; Hassan, Shombe N; Hatfield, Richard G; Hawes, Joseph E; Hayward, Matt W; Hébert, Christian; Helden, Alvin J; Henden, John-André; Henschel, Philipp; Hernández, Lionel; Herrera, James P; Herrmann, Farina; Herzog, Felix; Higuera-Diaz, Diego; Hilje, Branko; Höfer, Hubert; Hoffmann, Anke; Horgan, Finbarr G; Hornung, Elisabeth; Horváth, Roland; Hylander, Kristoffer; Isaacs-Cubides, Paola; Ishida, Hiroaki; Ishitani, Masahiro; Jacobs, Carmen T; Jaramillo, Víctor J; Jauker, Birgit; Hernández, F Jiménez; Johnson, McKenzie F; Jolli, Virat; Jonsell, Mats; Juliani, S Nur; Jung, Thomas S; Kapoor, Vena; Kappes, Heike; Kati, Vassiliki; Katovai, Eric; Kellner, Klaus; Kessler, Michael; Kirby, Kathryn R; Kittle, Andrew M; Knight, Mairi E; Knop, Eva; Kohler, Florian; Koivula, Matti; Kolb, Annette; Kone, Mouhamadou; Kőrösi, Ádám; Krauss, Jochen; Kumar, Ajith; Kumar, Raman; Kurz, David J; Kutt, Alex S; Lachat, Thibault; Lantschner, Victoria; Lara, Francisco; Lasky, Jesse R; Latta, Steven C; Laurance, William F; Lavelle, Patrick; Le Féon, Violette; LeBuhn, Gretchen; Légaré, Jean-Philippe; Lehouck, Valérie; Lencinas, María V; Lentini, Pia E; Letcher, Susan G; Li, Qi; Litchwark, Simon A; Littlewood, Nick A; Liu, Yunhui; Lo-Man-Hung, Nancy; López-Quintero, Carlos A; Louhaichi, Mounir; Lövei, Gabor L; Lucas-Borja, Manuel Esteban; Luja, Victor H; Luskin, Matthew S; MacSwiney G, M Cristina; Maeto, Kaoru; Magura, Tibor; Mallari, Neil Aldrin; Malone, Louise A; Malonza, Patrick K; Malumbres-Olarte, Jagoba; Mandujano, Salvador; Måren, Inger E; Marin-Spiotta, Erika; Marsh, Charles J; Marshall, E J P; Martínez, Eliana; Martínez Pastur, Guillermo; Moreno Mateos, David; Mayfield, Margaret M; Mazimpaka, Vicente; McCarthy, Jennifer L; McCarthy, Kyle P; McFrederick, Quinn S; McNamara, Sean; Medina, Nagore G; Medina, Rafael; Mena, Jose L; Mico, Estefania; Mikusinski, Grzegorz; Milder, Jeffrey C; Miller, James R; Miranda-Esquivel, Daniel R; Moir, Melinda L; Morales, Carolina L; Muchane, Mary N; Muchane, Muchai; Mudri-Stojnic, Sonja; Munira, A Nur; Muoñz-Alonso, Antonio; Munyekenye, B F; Naidoo, Robin; Naithani, A; Nakagawa, Michiko; Nakamura, Akihiro; Nakashima, Yoshihiro; Naoe, Shoji; Nates-Parra, Guiomar; Navarrete Gutierrez, Dario A; Navarro-Iriarte, Luis; Ndang'ang'a, Paul K; Neuschulz, Eike L; Ngai, Jacqueline T; Nicolas, Violaine; Nilsson, Sven G; Noreika, Norbertas; Norfolk, Olivia; Noriega, Jorge Ari; Norton, David A; Nöske, Nicole M; Nowakowski, A Justin; Numa, Catherine; O'Dea, Niall; O'Farrell, Patrick J; Oduro, William; Oertli, Sabine; Ofori-Boateng, Caleb; Oke, Christopher Omamoke; Oostra, Vicencio; Osgathorpe, Lynne M; Otavo, Samuel Eduardo; Page, Navendu V; Paritsis, Juan; Parra-H, Alejandro; Parry, Luke; Pe'er, Guy; Pearman, Peter B; Pelegrin, Nicolás; Pélissier, Raphaël; Peres, Carlos A; Peri, Pablo L; Persson, Anna S; Petanidou, Theodora; Peters, Marcell K; Pethiyagoda, Rohan S; Phalan, Ben; Philips, T Keith; Pillsbury, Finn C; Pincheira-Ulbrich, Jimmy; Pineda, Eduardo; Pino, Joan; Pizarro-Araya, Jaime; Plumptre, A J; Poggio, Santiago L; Politi, Natalia; Pons, Pere; Poveda, Katja; Power, Eileen F; Presley, Steven J; Proença, Vânia; Quaranta, Marino; Quintero, Carolina; Rader, Romina; Ramesh, B R; Ramirez-Pinilla, Martha P; Ranganathan, Jai; Rasmussen, Claus; Redpath-Downing, Nicola A; Reid, J Leighton; Reis, 
Yana T; Rey Benayas, José M; Rey-Velasco, Juan Carlos; Reynolds, Chevonne; Ribeiro, Danilo Bandini; Richards, Miriam H; Richardson, Barbara A; Richardson, Michael J; Ríos, Rodrigo Macip; Robinson, Richard; Robles, Carolina A; Römbke, Jörg; Romero-Duque, Luz Piedad; Rös, Matthias; Rosselli, Loreta; Rossiter, Stephen J; Roth, Dana S; Roulston, T'ai H; Rousseau, Laurent; Rubio, André V; Ruel, Jean-Claude; Sadler, Jonathan P; Sáfián, Szabolcs; Saldaña-Vázquez, Romeo A; Sam, Katerina; Samnegård, Ulrika; Santana, Joana; Santos, Xavier; Savage, Jade; Schellhorn, Nancy A; Schilthuizen, Menno; Schmiedel, Ute; Schmitt, Christine B; Schon, Nicole L; Schüepp, Christof; Schumann, Katharina; Schweiger, Oliver; Scott, Dawn M; Scott, Kenneth A; Sedlock, Jodi L; Seefeldt, Steven S; Shahabuddin, Ghazala; Shannon, Graeme; Sheil, Douglas; Sheldon, Frederick H; Shochat, Eyal; Siebert, Stefan J; Silva, Fernando A B; Simonetti, Javier A; Slade, Eleanor M; Smith, Jo; Smith-Pardo, Allan H; Sodhi, Navjot S; Somarriba, Eduardo J; Sosa, Ramón A; Soto Quiroga, Grimaldo; St-Laurent, Martin-Hugues; Starzomski, Brian M; Stefanescu, Constanti; Steffan-Dewenter, Ingolf; Stouffer, Philip C; Stout, Jane C; Strauch, Ayron M; Struebig, Matthew J; Su, Zhimin; Suarez-Rubio, Marcela; Sugiura, Shinji; Summerville, Keith S; Sung, Yik-Hei; Sutrisno, Hari; Svenning, Jens-Christian; Teder, Tiit; Threlfall, Caragh G; Tiitsaar, Anu; Todd, Jacqui H; Tonietto, Rebecca K; Torre, Ignasi; Tóthmérész, Béla; Tscharntke, Teja; Turner, Edgar C; Tylianakis, Jason M; Uehara-Prado, Marcio; Urbina-Cardona, Nicolas; Vallan, Denis; Vanbergen, Adam J; Vasconcelos, Heraldo L; Vassilev, Kiril; Verboven, Hans A F; Verdasca, Maria João; Verdú, José R; Vergara, Carlos H; Vergara, Pablo M; Verhulst, Jort; Virgilio, Massimiliano; Vu, Lien Van; Waite, Edward M; Walker, Tony R; Wang, Hua-Feng; Wang, Yanping; Watling, James I; Weller, Britta; Wells, Konstans; Westphal, Catrin; Wiafe, Edward D; Williams, Christopher D; Willig, Michael R; Woinarski, John C Z; Wolf, Jan H D; Wolters, Volkmar; Woodcock, Ben A; Wu, Jihua; Wunderle, Joseph M; Yamaura, Yuichi; Yoshikura, Satoko; Yu, Douglas W; Zaitsev, Andrey S; Zeidler, Juliane; Zou, Fasheng; Collen, Ben; Ewers, Rob M; Mace, Georgina M; Purves, Drew W; Scharlemann, Jörn P W; Purvis, Andy

    2017-01-01

    The PREDICTS project (Projecting Responses of Ecological Diversity In Changing Terrestrial Systems, www.predicts.org.uk) has collated from published studies a large, reasonably representative database of comparable samples of biodiversity from multiple sites that differ in the nature or intensity of human impacts relating to land use. We have used this evidence base to develop global and regional statistical models of how local biodiversity responds to these measures. We describe and make freely available this 2016 release of the database, containing more than 3.2 million records sampled at over 26,000 locations and representing over 47,000 species. We outline how the database can help in answering a range of questions in ecology and conservation biology. To our knowledge, this is the largest and most geographically and taxonomically representative database of spatial comparisons of biodiversity that has been collated to date; it will be useful to researchers and international efforts wishing to model and understand the global status of biodiversity.

  14. Can Perceptions of Environmental and Climate Change in Island Communities Assist in Adaptation Planning Locally?

    PubMed

    Aswani, Shankar; Vaccaro, Ismael; Abernethy, Kirsten; Albert, Simon; de Pablo, Javier Fernández-López

    2015-12-01

    Local perceptions of environmental and climate change, as well as associated adaptations made by local populations, are fundamental for designing comprehensive and inclusive mitigation and adaptation plans both locally and nationally. In this paper, we analyze people's perceptions of environmental and climate-related transformations in communities across the Western Solomon Islands through ethnographic and geospatial methods. Specifically, we documented people's observed changes over the past decades across various environmental domains, and for each change, we asked respondents to identify the causes, timing, and people's adaptive responses. We also incorporated this information into a geographical information system database to produce broad-scale base maps of local perceptions of environmental change. Results suggest that people detected changes that tended to be acute (e.g., water clarity, logging intensity, and agricultural diseases). We infer from these results that most local observations of, and adaptations to, change were related to the parts of the environment/ecosystem most directly or indirectly tied to harvesting strategies. On the other hand, people were less aware of the slower, more insidious chronic changes identified by scientific studies. For the Solomon Islands and similar contexts in the insular tropics, a broader anticipatory adaptation planning strategy for climate change should include a mix of local scientific studies and local observations of ongoing ecological changes.

  15. Linking Data Access to Geospatial Data Models to Applications at Local to National Scales: The Estuary Data Mapper

    EPA Science Inventory

    The U.S. Environmental Protection Agency (US EPA) is developing e-Estuary, a decision-support system for Clean Water Act applications in coastal management. E-Estuary has three elements: an estuarine geo-referenced relational database, watershed GIS coverages, and tools to suppo...

  16. Better Building Blocks

    Treesearch

    Cheryl R. Renner; Terry K. Haines; Margaret A. Reams

    2010-01-01

    As researchers for the National Database of State and Local Wildfire Mitigation Programs, we began cataloging programs to reduce wildfire risk on private land in 2001. Over the years, more than 250 programs in 35 states were described at www.wildfireprograms.usda.gov, and we wondered about their relative success. Was there one type of program that wildfire managers...

  17. Monte Carlo simulation of prompt γ-ray emission in proton therapy using a specific track length estimator

    NASA Astrophysics Data System (ADS)

    El Kanawati, W.; Létang, J. M.; Dauvergne, D.; Pinto, M.; Sarrut, D.; Testa, É.; Freud, N.

    2015-10-01

    A Monte Carlo (MC) variance reduction technique is developed for prompt-γ emitter calculations in proton therapy. Prompt γ-rays emitted through nuclear fragmentation reactions, which exit the patient during proton therapy, could play an important role in helping to monitor the treatment. However, estimating the number and energy of prompt-γ emitted per primary proton with MC simulations is a slow process. In order to estimate the local distribution of prompt-γ emission in a volume of interest for a given proton beam of the treatment plan, an MC variance reduction technique based on a specific track length estimator (TLE) has been developed. First, an elemental database of prompt-γ emission spectra is established in the clinical energy range of incident protons for all elements in the composition of human tissues. This database of prompt-γ spectra is built offline with high statistics. In the prompt-γ TLE MC tally, each proton deposits along its track the expectation of the prompt-γ spectra from the database, according to the proton kinetic energy and the local material composition. A detailed statistical study shows that the relative efficiency mainly depends on the geometrical distribution of the track length. Benchmarking of the proposed prompt-γ TLE MC technique against an analog MC technique is carried out. A large relative efficiency gain is reported, ca. 10^5.
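
    The scoring rule described, each proton step depositing the precomputed expected prompt-γ spectrum weighted by its track length, reduces to a small accumulation loop; the binning and the spectra lookup table below are invented for illustration.

        import numpy as np

        N_GAMMA_BINS = 100  # prompt-gamma energy bins (illustrative choice)

        def tle_tally(steps, spectra):
            """Accumulate expected prompt-gamma yield over proton track steps.

            steps:   iterable of (material, proton_energy_bin, track_length_cm)
            spectra: dict (material, proton_energy_bin) -> per-cm expected spectrum
            """
            tally = np.zeros(N_GAMMA_BINS)
            for material, e_bin, length_cm in steps:
                # Each step scores its track length times the precomputed spectrum.
                tally += length_cm * spectra[(material, e_bin)]
            return tally

        # Toy database: one material, two proton-energy bins.
        spectra = {("water", 0): np.full(N_GAMMA_BINS, 1e-4),
                   ("water", 1): np.full(N_GAMMA_BINS, 2e-4)}
        steps = [("water", 1, 0.5), ("water", 0, 0.3)]
        print(tle_tally(steps, spectra).sum())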

  18. BLAST and FASTA similarity searching for multiple sequence alignment.

    PubMed

    Pearson, William R

    2014-01-01

    BLAST, FASTA, and other similarity searching programs seek to identify homologous proteins and DNA sequences based on excess sequence similarity. If two sequences share much more similarity than expected by chance, the simplest explanation for the excess similarity is common ancestry-homology. The most effective similarity searches compare protein sequences, rather than DNA sequences, for sequences that encode proteins, and use expectation values, rather than percent identity, to infer homology. The BLAST and FASTA packages of sequence comparison programs provide programs for comparing protein and DNA sequences to protein databases (the most sensitive searches). Protein and translated-DNA comparisons to protein databases routinely allow evolutionary look-back times of 1 to 2 billion years; DNA:DNA searches are 5-10-fold less sensitive. BLAST and FASTA can be run on popular web sites, but can also be downloaded and installed on local computers. With local installation, target databases can be customized for the sequence data being characterized. With today's very large protein databases, search sensitivity can also be improved by searching smaller comprehensive databases, for example, the complete protein set from an evolutionarily neighboring model organism. By default, BLAST and FASTA use scoring strategies targeted at distant evolutionary relationships; for comparisons involving short domains or queries, or searches that seek relatively close homologs (e.g. mouse-human), shallower scoring matrices will be more effective. Both BLAST and FASTA provide very accurate statistical estimates, which can be used to reliably identify protein sequences that diverged more than 2 billion years ago.
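
    For readers who want to reproduce the local-installation workflow, the sketch below drives the standard NCBI BLAST+ command-line tools from Python. The FASTA file names and the E-value threshold are placeholders; the sketch assumes only that `makeblastdb` and `blastp`, with these standard options, are installed locally.

    ```python
    import subprocess

    # Build a customized local protein database once, e.g. from the complete
    # protein set of an evolutionarily neighboring model organism.
    subprocess.run(
        ["makeblastdb", "-in", "neighbor_proteome.fasta", "-dbtype", "prot",
         "-out", "neighbor_db"],
        check=True,
    )

    # Search with expectation values (not percent identity) to infer homology;
    # -outfmt 6 emits a tab-separated table that includes the E-value column.
    subprocess.run(
        ["blastp", "-query", "query.fasta", "-db", "neighbor_db",
         "-evalue", "1e-3", "-outfmt", "6", "-out", "hits.tsv"],
        check=True,
    )
    ```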

  19. An intermediary's perspective of online databases for local governments

    NASA Technical Reports Server (NTRS)

    Jack, R. F.

    1984-01-01

    Numerous public administration studies have indicated that local government agencies for a variety of reasons lack access to comprehensive information resources; furthermore, such entities are often unwilling or unable to share information regarding their own problem-solving innovations. The NASA/University of Kentucky Technology Applications Program devotes a considerable effort to providing scientific and technical information and assistance to local agencies, relying on its access to over 500 distinct online databases offered by 20 hosts. The author presents a subjective assessment, based on his own experiences, of several databases which may prove useful in obtaining information for this particular end-user community.

  20. SW#db: GPU-Accelerated Exact Sequence Similarity Database Search.

    PubMed

    Korpar, Matija; Šošić, Martin; Blažeka, Dino; Šikić, Mile

    2015-01-01

    In recent years we have witnessed growth in sequencing yield and in the number of samples sequenced, and, as a result, growth of publicly maintained sequence databases. This increase in available data has placed high demands on protein similarity search algorithms, with two opposing goals: keeping running times acceptable while maintaining a high-enough level of sensitivity. The most time-consuming step of similarity search is the local alignment between query and database sequences. This step is usually performed using exact local alignment algorithms such as Smith-Waterman. Due to its quadratic time complexity, aligning a query to the whole database is usually too slow. Therefore, most protein similarity search methods apply heuristics before the exact local alignment to reduce the number of candidate sequences in the database. However, there is still a need to align a query sequence to a reduced database. In this paper we present the SW#db tool and a library for fast exact similarity search. Although its running times as a standalone tool are comparable to those of BLAST, it is primarily intended for the exact local alignment phase, in which the database of sequences has already been reduced. It uses both GPU and CPU parallelization and, at the time of writing, was 4-5 times faster than SSEARCH, 6-25 times faster than CUDASW++, and more than 20 times faster than SSW, using multiple queries on the Swiss-Prot and UniRef90 databases.
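
    As a concrete reference point for the exact-alignment step the paper accelerates, here is a minimal, unoptimized Smith-Waterman scorer. The match/mismatch/gap values are illustrative; production protein searches use a substitution matrix such as BLOSUM62 plus affine gap penalties.

    ```python
    import numpy as np

    def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
        """Best local alignment score between sequences a and b (linear gaps)."""
        H = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
        for i in range(1, len(a) + 1):
            for j in range(1, len(b) + 1):
                diag = H[i - 1, j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                # Local alignment: scores are clamped at zero.
                H[i, j] = max(0, diag, H[i - 1, j] + gap, H[i, j - 1] + gap)
        return H.max()

    print(smith_waterman("HEAGAWGHEE", "PAWHEAE"))
    ```

    The quadratic loop above is exactly what SW#db parallelizes across GPU and CPU cores for whole reduced databases.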

  1. Integrating stations from the North America Gravity Database into a local GPS-based land gravity survey

    USGS Publications Warehouse

    Shoberg, Thomas G.; Stoddard, Paul R.

    2013-01-01

    The ability to augment local gravity surveys with additional gravity stations from easily accessible national databases can greatly increase the areal coverage and spatial resolution of a survey. It is, however, necessary to integrate such data seamlessly with the local survey. One challenge to overcome in integrating data from national databases is that these data are typically of unknown quality. This study presents a procedure for the evaluation and seamless integration of gravity data of unknown quality from a national database with data from a local Global Positioning System (GPS)-based survey. The starting components include the latitude, longitude, elevation and observed gravity at each station location. Interpolated surfaces of the complete Bouguer anomaly are used as a means of quality control and comparison. The result is an integrated dataset of varying quality with many stations having GPS accuracy and other reliable stations of unknown origin, yielding a wider coverage and greater spatial resolution than either survey alone.
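
    To make the quality-control quantity concrete, the sketch below computes a simple Bouguer anomaly from the listed starting components (latitude, elevation, and observed gravity). It uses the GRS80 normal-gravity formula and the usual free-air and Bouguer-slab gradients, and omits terrain corrections; it is an illustration, not the authors' processing code.

    ```python
    import numpy as np

    def normal_gravity_mgal(lat_deg):
        """GRS80 normal gravity on the ellipsoid, in mGal."""
        s = np.sin(np.radians(lat_deg))
        s2 = np.sin(np.radians(2 * lat_deg))
        return 978032.7 * (1 + 0.0053024 * s**2 - 0.0000058 * s2**2)

    def simple_bouguer_anomaly(obs_gravity_mgal, lat_deg, elev_m, rho=2.67):
        """Simple Bouguer anomaly (mGal); rho is slab density in g/cm^3."""
        free_air = 0.3086 * elev_m          # free-air gradient, mGal/m
        slab = 0.04193 * rho * elev_m       # infinite-slab correction, mGal
        return obs_gravity_mgal - normal_gravity_mgal(lat_deg) + free_air - slab
    ```

    Interpolating this quantity over both datasets and comparing the surfaces is one way to flag national-database stations that disagree with the GPS-surveyed baseline.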

  2. Computational Thermochemistry of Jet Fuels and Rocket Propellants

    NASA Technical Reports Server (NTRS)

    Crawford, T. Daniel

    2002-01-01

    The design of new high-energy density molecules as candidates for jet and rocket fuels is an important goal of modern chemical thermodynamics. The NASA Glenn Research Center is home to a database of thermodynamic data for over 2000 compounds related to this goal, in the form of least-squares fits of heat capacities, enthalpies, and entropies as functions of temperature over the range of 300 - 6000 K. The chemical equilibrium with applications (CEA) program, written and maintained by researchers at NASA Glenn over the last fifty years, makes use of this database for modeling the performance of potential rocket propellants. During its long history, the NASA Glenn database has been developed from experimental results and data published in the scientific literature, such as the standard JANAF tables. The recent development of efficient computational techniques based on quantum chemical methods provides an alternative source of information for expanding such databases. For example, it is now possible to model dissociation or combustion reactions of small molecules to high accuracy using techniques such as coupled cluster theory or density functional theory. Unfortunately, the current applicability of reliable computational models is limited to relatively small molecules containing only around a dozen (non-hydrogen) atoms. We propose to extend the applicability of coupled cluster theory, often referred to as the 'gold standard' of quantum chemical methods, to molecules containing 30-50 non-hydrogen atoms. The centerpiece of this work is the concept of local correlation, in which the description of the electron interactions, known as electron correlation effects, is reduced to only its most important localized components. Such an advance has the potential to greatly expand the current reach of computational thermochemistry and thus to have a significant impact on the theoretical study of jet and rocket propellants.
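
    The database's least-squares fits can be illustrated with the classic 7-coefficient NASA polynomial form for heat capacity, enthalpy, and entropy (CEA's own database uses an extended 9-coefficient variant, but the structure is analogous):

    ```python
    import numpy as np

    R = 8.314462618  # J/(mol K)

    def nasa7(T, a):
        """Cp, H, S from seven NASA polynomial coefficients a[0..6] at T (K)."""
        cp = R * (a[0] + a[1]*T + a[2]*T**2 + a[3]*T**3 + a[4]*T**4)
        h  = R * T * (a[0] + a[1]*T/2 + a[2]*T**2/3 + a[3]*T**3/4
                      + a[4]*T**4/5 + a[5]/T)
        s  = R * (a[0]*np.log(T) + a[1]*T + a[2]*T**2/2 + a[3]*T**3/3
                  + a[4]*T**4/4 + a[6])
        return cp, h, s
    ```

    A computed reaction enthalpy from coupled cluster theory would enter such a database as a new set of fitted coefficients over the relevant temperature range.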

  3. Atlas - a data warehouse for integrative bioinformatics.

    PubMed

    Shah, Sohrab P; Huang, Yong; Xu, Tao; Yuen, Macaire M S; Ling, John; Ouellette, B F Francis

    2005-02-21

    We present a biological data warehouse called Atlas that locally stores and integrates biological sequences, molecular interactions, homology information, functional annotations of genes, and biological ontologies. The goal of the system is to provide data, as well as a software infrastructure for bioinformatics research and development. The Atlas system is based on relational data models that we developed for each of the source data types. Data stored within these relational models are managed through Structured Query Language (SQL) calls that are implemented in a set of Application Programming Interfaces (APIs). The APIs include three languages: C++, Java, and Perl. The methods in these API libraries are used to construct a set of loader applications, which parse and load the source datasets into the Atlas database, and a set of toolbox applications which facilitate data retrieval. Atlas stores and integrates local instances of GenBank, RefSeq, UniProt, Human Protein Reference Database (HPRD), Biomolecular Interaction Network Database (BIND), Database of Interacting Proteins (DIP), Molecular Interactions Database (MINT), IntAct, NCBI Taxonomy, Gene Ontology (GO), Online Mendelian Inheritance in Man (OMIM), LocusLink, Entrez Gene and HomoloGene. The retrieval APIs and toolbox applications are critical components that offer end-users flexible, easy, integrated access to this data. We present use cases that use Atlas to integrate these sources for genome annotation, inference of molecular interactions across species, and gene-disease associations. The Atlas biological data warehouse serves as data infrastructure for bioinformatics research and development. It forms the backbone of the research activities in our laboratory and facilitates the integration of disparate, heterogeneous biological sources of data enabling new scientific inferences. Atlas achieves integration of diverse data sets at two levels. First, Atlas stores data of similar types using common data models, enforcing the relationships between data types. Second, integration is achieved through a combination of APIs, ontology, and tools. The Atlas software is freely available under the GNU General Public License at: http://bioinformatics.ubc.ca/atlas/
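
    A toy version of the Atlas design, in which each data type gets a common relational model and retrieval reduces to SQL behind an API, might look like the following. The table and column names are invented for illustration; the real Atlas schemas are far richer.

    ```python
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
    CREATE TABLE sequence (
        seq_id INTEGER PRIMARY KEY,
        accession TEXT UNIQUE NOT NULL,
        organism TEXT,
        residues TEXT
    );
    CREATE TABLE interaction (
        seq_a INTEGER REFERENCES sequence(seq_id),
        seq_b INTEGER REFERENCES sequence(seq_id),
        source_db TEXT      -- e.g. 'BIND', 'DIP', 'MINT'
    );
    """)

    # A loader would parse source flat files into these tables; a toolbox API
    # then reduces retrieval to plain SQL, e.g. all partners of one protein:
    rows = con.execute("""
        SELECT s2.accession, i.source_db
        FROM interaction i
        JOIN sequence s1 ON s1.seq_id = i.seq_a
        JOIN sequence s2 ON s2.seq_id = i.seq_b
        WHERE s1.accession = ?
    """, ("NP_000537",)).fetchall()
    ```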

  4. Atlas – a data warehouse for integrative bioinformatics

    PubMed Central

    Shah, Sohrab P; Huang, Yong; Xu, Tao; Yuen, Macaire MS; Ling, John; Ouellette, BF Francis

    2005-01-01

    Background We present a biological data warehouse called Atlas that locally stores and integrates biological sequences, molecular interactions, homology information, functional annotations of genes, and biological ontologies. The goal of the system is to provide data, as well as a software infrastructure for bioinformatics research and development. Description The Atlas system is based on relational data models that we developed for each of the source data types. Data stored within these relational models are managed through Structured Query Language (SQL) calls that are implemented in a set of Application Programming Interfaces (APIs). The APIs include three languages: C++, Java, and Perl. The methods in these API libraries are used to construct a set of loader applications, which parse and load the source datasets into the Atlas database, and a set of toolbox applications which facilitate data retrieval. Atlas stores and integrates local instances of GenBank, RefSeq, UniProt, Human Protein Reference Database (HPRD), Biomolecular Interaction Network Database (BIND), Database of Interacting Proteins (DIP), Molecular Interactions Database (MINT), IntAct, NCBI Taxonomy, Gene Ontology (GO), Online Mendelian Inheritance in Man (OMIM), LocusLink, Entrez Gene and HomoloGene. The retrieval APIs and toolbox applications are critical components that offer end-users flexible, easy, integrated access to this data. We present use cases that use Atlas to integrate these sources for genome annotation, inference of molecular interactions across species, and gene-disease associations. Conclusion The Atlas biological data warehouse serves as data infrastructure for bioinformatics research and development. It forms the backbone of the research activities in our laboratory and facilitates the integration of disparate, heterogeneous biological sources of data enabling new scientific inferences. Atlas achieves integration of diverse data sets at two levels. First, Atlas stores data of similar types using common data models, enforcing the relationships between data types. Second, integration is achieved through a combination of APIs, ontology, and tools. The Atlas software is freely available under the GNU General Public License at http://bioinformatics.ubc.ca/atlas/. PMID:15723693

  5. Bar Association Database Continues To Grow. Technical Assistance Bulletin No. 10.

    ERIC Educational Resources Information Center

    Koprowski-Moisant, Jane

    As part of the American Bar Association's Special Committee on Youth Education for Citizenship's efforts to assist in the establishment and maintenance of law related education (LRE) projects in every state and local bar association, surveys were mailed to the associations. The results of the survey showed that 49 state bar associations and 133…

  6. ATtRACT-a database of RNA-binding proteins and associated motifs.

    PubMed

    Giudice, Girolamo; Sánchez-Cabo, Fátima; Torroja, Carlos; Lara-Pezzi, Enrique

    2016-01-01

    RNA-binding proteins (RBPs) play a crucial role in key cellular processes, including RNA transport, splicing, polyadenylation and stability. Understanding the interaction between RBPs and RNA is key to improving our knowledge of RNA processing, localization and regulation in a global manner. Despite advances in recent years, a unified non-redundant resource that includes information on experimentally validated motifs, RBPs and integrated tools to exploit this information has been lacking. Here, we developed a database named ATtRACT (available at http://attract.cnic.es) that compiles information on 370 RBPs and 1583 RBP consensus binding motifs, 192 of which are not present in any other database. To populate ATtRACT we (i) extracted and hand-curated experimentally validated data from the CISBP-RNA, SpliceAid-F and RBPDB databases, (ii) integrated and updated the unavailable ASD database and (iii) extracted information from protein-RNA complexes in the Protein Data Bank through computational analyses. ATtRACT also provides efficient algorithms to search for a specific motif and to scan one or more RNA sequences at a time. It also allows discovering de novo motifs enriched in a set of related sequences and comparing them with the motifs included in the database. Database URL: http://attract.cnic.es. © The Author(s) 2016. Published by Oxford University Press.
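
    The motif-scanning service such a database offers can be approximated by sliding a position weight matrix (PWM) over an RNA sequence, as in the hedged sketch below; the PWM values and threshold are invented for illustration.

    ```python
    import numpy as np

    BASES = {"A": 0, "C": 1, "G": 2, "U": 3}

    def scan(sequence, pwm, threshold):
        """pwm: (motif_length x 4) log-odds matrix; yields (position, score)."""
        L = pwm.shape[0]
        for i in range(len(sequence) - L + 1):
            score = sum(pwm[j, BASES[sequence[i + j]]] for j in range(L))
            if score >= threshold:
                yield i, score

    # Toy 3-nt 'AUA' motif as log-odds against a uniform background.
    pwm = np.log2(np.array([[0.7, 0.1, 0.1, 0.1],
                            [0.1, 0.1, 0.1, 0.7],
                            [0.7, 0.1, 0.1, 0.1]]) / 0.25)
    print(list(scan("GCAUAUAGC", pwm, threshold=3.0)))  # hits at positions 2 and 4
    ```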

  7. GenoMycDB: a database for comparative analysis of mycobacterial genes and genomes.

    PubMed

    Catanho, Marcos; Mascarenhas, Daniel; Degrave, Wim; Miranda, Antonio Basílio de

    2006-03-31

    Several databases and computational tools have been created with the aim of organizing, integrating and analyzing the wealth of information generated by large-scale sequencing projects of mycobacterial genomes and those of other organisms. However, with very few exceptions, these databases and tools do not allow for massive and/or dynamic comparison of these data. GenoMycDB (http://www.dbbm.fiocruz.br/GenoMycDB) is a relational database built for large-scale comparative analyses of completely sequenced mycobacterial genomes, based on their predicted protein content. Its central structure is composed of the results obtained after pair-wise sequence alignments among all the predicted proteins coded by the genomes of six mycobacteria: Mycobacterium tuberculosis (strains H37Rv and CDC1551), M. bovis AF2122/97, M. avium subsp. paratuberculosis K10, M. leprae TN, and M. smegmatis MC2 155. The database stores the computed similarity parameters of every aligned pair, providing for each protein sequence the predicted subcellular localization, the assigned cluster of orthologous groups, the features of the corresponding gene, and links to several important databases. Tables containing pairs or groups of potential homologs between selected species/strains can be produced dynamically by user-defined criteria, based on one or multiple sequence similarity parameters. In addition, searches can be restricted according to the predicted subcellular localization of the protein, the DNA strand of the corresponding gene and/or the description of the protein. Massive data search and/or retrieval are available, and different ways of exporting the result are offered. GenoMycDB provides an on-line resource for the functional classification of mycobacterial proteins as well as for the analysis of genome structure, organization, and evolution.
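
    The dynamic, user-defined filtering described above maps naturally onto a parameterized SQL query over a precomputed pairwise-alignment table. The schema and column names below are illustrative assumptions, not GenoMycDB's actual schema.

    ```python
    import sqlite3

    con = sqlite3.connect("genomyc.db")
    # Potential homolog pairs above a user-chosen identity and below a chosen
    # E-value, restricted by predicted subcellular localization.
    pairs = con.execute("""
        SELECT a.protein_a, a.protein_b, a.identity_pct, a.evalue
        FROM alignment a
        JOIN protein p ON p.id = a.protein_a
        WHERE a.identity_pct >= ? AND a.evalue <= ?
          AND p.subcellular_localization = ?
    """, (40.0, 1e-10, "membrane")).fetchall()
    ```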

  8. Artificially intelligent recognition of Arabic speaker using voice print-based local features

    NASA Astrophysics Data System (ADS)

    Mahmood, Awais; Alsulaiman, Mansour; Muhammad, Ghulam; Akram, Sheeraz

    2016-11-01

    Local features for any pattern recognition system are based on information extracted locally. In this paper, a local feature extraction technique was developed. The feature is extracted in the time-frequency plane by taking a moving average along the diagonal directions of the time-frequency plane. It captures time-frequency events, producing a unique pattern for each speaker that can be viewed as a voice print of the speaker. Hence, we refer to this technique as a voice print-based local feature. The proposed feature was compared to other features, including the mel-frequency cepstral coefficient (MFCC), for speaker recognition using two different databases. One of the databases used in the comparison is a subset of an LDC database consisting of two short sentences uttered by 182 speakers. The proposed feature attained a 98.35% recognition rate, compared to 96.7% for MFCC, on the LDC subset.
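
    A minimal sketch of the diagonal-averaging idea follows: each diagonal of a (frequency x time) array is smoothed with a sliding window. Window length and normalization are assumptions, and the anti-diagonal direction can be obtained by applying the same function to a left-right flipped array.

    ```python
    import numpy as np

    def diagonal_moving_average(tf_plane, win=5):
        """Smooth each diagonal of a 2-D time-frequency array with a sliding window."""
        out = np.zeros_like(tf_plane, dtype=float)
        rows, cols = tf_plane.shape
        kernel = np.ones(win) / win
        for d in range(-(rows - 1), cols):
            diag = np.diagonal(tf_plane, offset=d).astype(float)
            smoothed = np.convolve(diag, kernel, mode="same")
            i = np.arange(len(diag))
            r = i - d if d < 0 else i      # row indices of this diagonal
            c = i if d < 0 else i + d      # column indices of this diagonal
            out[r, c] = smoothed
        return out

    # Anti-diagonal pass: diagonal_moving_average(np.fliplr(spectrogram), win=5)
    ```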

  9. How to explain variations in sea cliff erosion rate?

    NASA Astrophysics Data System (ADS)

    Prémaillon, Melody; Regard, Vincent; Dewez, Thomas

    2017-04-01

    Every rocky coast of the world is eroding at a different rate (cliff retreat rate). Erosion is caused by a complex interaction of multiple sea and weather factors. While numerous local studies exist and explain erosion processes at specific sites, global studies are lacking. We have begun compiling many of those local studies and analysing their results from a global point of view in order to quantify the various parameters influencing erosion rates. In other words: is erosion greater in energetic seas? Are chalk cliffs eroding faster in rainy environments? To do this, we built a database from the literature and national erosion databases. It now contains 80 publications, representing 2,500 cliffs studied and more than 3,500 erosion rate estimates. A statistical analysis was conducted on this database. To a first approximation, cliff lithology is the only clear signal explaining variation in erosion rate: hard lithologies erode at 1 cm/y or less, whereas unconsolidated lithologies commonly erode faster than 10 cm/y. No clear statistical relation was found between erosion rate and external parameters such as sea energy (swell, tide) or weather conditions, even for cliffs of similar lithology.

  10. Application of cloud database in the management of clinical data of patients with skin diseases.

    PubMed

    Mao, Xiao-fei; Liu, Rui; DU, Wei; Fan, Xue; Chen, Dian; Zuo, Ya-gang; Sun, Qiu-ning

    2015-04-01

    To evaluate the needs and applications of a cloud database in the daily practice of a dermatology department, a cloud database was established for systemic scleroderma and localized scleroderma. Paper forms were used to record the original data, including personal information, pictures, specimens, blood biochemical indicators, skin lesions, and scores on self-rating scales. The results were entered into the cloud database. The applications of the cloud database in the dermatology department were summarized and analyzed. The personal and clinical information of 215 systemic scleroderma patients and 522 localized scleroderma patients was included and analyzed using the cloud database. Disease status, quality of life, and prognosis were obtained by statistical calculation. The cloud database can efficiently and rapidly store and manage the data of patients with skin diseases. As a simple, prompt, safe, and convenient tool, it can be used in patient information management, clinical decision-making, and scientific research.

  11. The salinity tolerant poplar database (STPD): a comprehensive database for studying tree salt-tolerant adaption and poplar genomics.

    PubMed

    Ma, Yazhen; Xu, Ting; Wan, Dongshi; Ma, Tao; Shi, Sheng; Liu, Jianquan; Hu, Quanjun

    2015-03-17

    Soil salinity is a significant factor that impairs plant growth and agricultural productivity, and numerous efforts are underway to enhance the salt tolerance of economically important plants. Populus species are widely cultivated for diverse uses. In particular, they grow in habitats ranging from salty soils to mesophytic environments, and are therefore used as a model genus for elucidating the physiological and molecular mechanisms of stress tolerance in woody plants. The Salinity Tolerant Poplar Database (STPD) is an integrative database for salt-tolerant poplar genome biology. Currently the STPD contains the Populus euphratica genome and its related genetic resources. P. euphratica, with its preference for salty habitats, has become a valuable genetic resource for the exploitation of tolerance characteristics in trees. This database contains curated data including genomic sequence, genes and gene functional information, non-coding RNA sequences, transposable elements, simple sequence repeats and single nucleotide polymorphism information for P. euphratica, gene expression data between P. euphratica and Populus tomentosa, and whole-genome alignments between Populus trichocarpa, P. euphratica and Salix suchowensis. The STPD provides useful searching and data mining tools, including the GBrowse genome browser, BLAST servers and a genome alignments viewer, which can be used to browse genome regions, identify similar sequences and visualize genome alignments. Datasets within the STPD can also be downloaded for local searches. This new Salinity Tolerant Poplar Database has been developed to assist studies of salt tolerance in trees and poplar genomics. The database will be continuously updated to incorporate new genome-wide data for related poplar species. It will serve as an infrastructure for research on the molecular function of genes, comparative genomics, and evolution in closely related species, as well as promote advances in molecular breeding within Populus. The STPD can be accessed at http://me.lzu.edu.cn/stpd/.

  12. Severe Weather Forecast Decision Aid

    NASA Technical Reports Server (NTRS)

    Bauman, William H., III; Wheeler, Mark M.; Short, David A.

    2005-01-01

    This report presents a 15-year climatological study of severe weather events and related severe weather atmospheric parameters. Data sources included local forecast rules, archived sounding data, Cloud-to-Ground Lightning Surveillance System (CGLSS) data, surface and upper air maps, and two severe weather event databases covering east-central Florida. The local forecast rules were used to set threat assessment thresholds for stability parameters that were derived from the sounding data. The severe weather events databases were used to identify days with reported severe weather and the CGLSS data was used to differentiate between lightning and non-lightning days. These data sets provided the foundation for analyzing the stability parameters and synoptic patterns that were used to develop an objective tool to aid in forecasting severe weather events. The period of record for the analysis was May - September, 1989 - 2003. The results indicate that there are certain synoptic patterns more prevalent on days with severe weather and some of the stability parameters are better predictors of severe weather days based on locally tuned threat values. The results also revealed the stability parameters that did not display any skill related to severe weather days. An interactive web-based Severe Weather Decision Aid was developed to assist the duty forecaster by providing a level of objective guidance based on the analysis of the stability parameters, CGLSS data, and synoptic-scale dynamics. The tool will be tested and evaluated during the 2005 warm season.

  13. 47 CFR 69.306 - Central office equipment (COE).

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... exchange carrier's signalling transfer point and the database shall be assigned to the Line Information Database subelement at § 69.120(a). All other COE Category 2 shall be assigned to the interexchange... requirement. Non-price cap local exchange carriers may use thirty percent of the interstate Local Switching...

  14. 40 CFR 312.26 - Reviews of Federal, State, Tribal, and local government records.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... use restrictions, applicable to the subject property. (c) With regard to nearby or adjoining properties, the review of federal, tribal, state, and local government records or databases of government... records of reported releases or threatened releases. Such records or databases containing such records and...

  15. Experiences with the Application of Services Oriented Approaches to the Federation of Heterogeneous Geologic Data Resources

    NASA Astrophysics Data System (ADS)

    Cervato, C.; Fils, D.; Bohling, G.; Diver, P.; Greer, D.; Reed, J.; Tang, X.

    2006-12-01

    The federation of databases is not a new endeavor. Great strides have been made, for example, in the health and astrophysics communities. Reviews of those successes indicate that they were able to leverage key cross-community core concepts. In its simplest implementation, a federation of databases with identical base schemas that can be extended to address individual efforts is relatively easy to accomplish. Efforts of groups like the Open Geospatial Consortium have shown methods to geospatially relate data between different sources. We present here a summary of CHRONOS's (http://www.chronos.org) experience with highly heterogeneous data. Our experience with the federation of very diverse databases shows that the wide variety of encoding options for items like locality, time scale, taxon ID, and other key parameters makes it difficult to join data effectively across them. However, the response to this is not to develop one large, monolithic database, which would suffer growing pains due to social, national, and operational issues, but rather to systematically develop an architecture that enables cross-resource (database, repository, tool, interface) interaction. CHRONOS has cleared the major hurdle of federating small IT database efforts with service-oriented and XML-based approaches. The application of easy-to-use procedures that allow groups of all sizes to implement and experiment with searches across various databases and to use externally created tools is vital. We are sharing with the geoinformatics community the difficulties with application frameworks, user authentication, standards compliance, and data storage encountered in setting up web sites and portals for various science initiatives (e.g., ANDRILL, EARTHTIME). The ability to incorporate CHRONOS data, services, and tools into a group's existing framework is crucial to the development of a model that supports and extends the vitality of the small- to medium-sized research effort that is essential for a vibrant scientific community. This presentation will directly address issues of portal development related to JSR-168 and other portal APIs, as well as issues related to both federated and local directory-based authentication. The application of service-oriented architecture in connection with REST-based approaches is vital to facilitate service use by both experienced and less experienced information technology groups. Applying these services with XML-based schemas allows connection to third-party tools such as GIS-based tools and software designed to perform specific scientific analyses. The connection of all these capabilities into a combined framework based on the standard XHTML Document Object Model and CSS 2.0 standards used in traditional web development will be demonstrated. CHRONOS also utilizes newer client techniques such as AJAX and cross-domain scripting along with traditional server-side database, application, and web servers. The combination of the various components of this architecture creates an environment based on open and free standards that allows for the discovery, retrieval, and integration of tools and data.

  16. Hmrbase: a database of hormones and their receptors

    PubMed Central

    Rashid, Mamoon; Singla, Deepak; Sharma, Arun; Kumar, Manish; Raghava, Gajendra PS

    2009-01-01

    Background Hormones are signaling molecules that play vital roles in various life processes, such as growth and differentiation, physiology, and reproduction. These molecules are mostly secreted by endocrine glands and transported to target organs through the bloodstream. Deficient or excessive levels of hormones are associated with several diseases such as cancer, osteoporosis, and diabetes. Thus, it is important to collect and compile information about hormones and their receptors. Description This manuscript describes a database called Hmrbase which has been developed for managing information about hormones and their receptors. It is a highly curated database for which information has been collected from the literature and public databases. The current version of Hmrbase contains comprehensive information about ~2000 hormones, e.g., their function, source organism, receptors, mature sequences, and structures. Hmrbase also contains information about ~3000 hormone receptors, in terms of amino acid sequences, subcellular localizations, ligands, and post-translational modifications. One of the major features of this database is that it provides data about ~4100 hormone-receptor pairs. A number of online tools have been integrated into the database to provide facilities such as keyword search, structure-based search, mapping of given peptide(s) onto hormone/receptor sequences, and sequence similarity search. The database also provides a number of external links to other resources/databases to help in retrieving further related information. Conclusion Owing to the high impact of endocrine research in the biomedical sciences, Hmrbase could become a leading data portal for researchers. The salient features of Hmrbase are hormone-receptor pair-related information, mapping of peptide stretches on the protein sequences of hormones and receptors, Pfam domain annotations, categorical browsing options, online data submission, DrugPedia linkage, etc. Hmrbase is available online to the public. PMID:19589147

  17. Using GIS databases for simulated nightlight imagery

    NASA Astrophysics Data System (ADS)

    Zollweg, Joshua D.; Gartley, Michael; Roskovensky, John; Mercier, Jeffery

    2012-06-01

    Proposed is a new technique for simulating nighttime scenes with realistically-modelled urban radiance. While nightlight imagery is commonly used to measure urban sprawl [1], it is uncommon to use urbanization as a metric to develop synthetic nighttime scenes. In the developed methodology, the open-source Open Street Map (OSM) Geographic Information System (GIS) database is used. The database comprises many nodes, which are used to define the position of different types of streets, buildings, and other features. These nodes are the driver used to model urban nightlights, given several assumptions. The first assumption is that the spatial distribution of nodes is closely related to the spatial distribution of nightlights. Work by Roychowdhury et al. has demonstrated the relationship between urban lights and development [2]. So, the real assumption being made is that the density of nodes corresponds to development, which is reasonable. Secondly, the local density of nodes must relate directly to the upwelled radiance within the given locality. Testing these assumptions using Albuquerque and Indianapolis as example cities revealed that different types of nodes produce more realistic results than others. Residential street nodes offered the best performance of any single node type among the types tested in this investigation. Other node types, however, still provide useful supplementary data. Using streets and buildings defined in the OSM database allowed automated generation of simulated nighttime scenes of Albuquerque and Indianapolis in the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model. The simulation was compared to real data from the recently deployed National Polar-orbiting Operational Environmental Satellite System (NPOESS) Visible Infrared Imager Radiometer Suite (VIIRS) platform. As a result of the comparison, correction functions were used to correct for discrepancies between simulated and observed radiance. Future work will include investigating more advanced approaches for mapping the spatial extent of nightlights, based on the distribution of different node types in local neighbourhoods. This will allow the spectral profile of each region to be dynamically adjusted, in addition to simply modifying the magnitude of a single source type.

  18. ClassLess: A Comprehensive Database of Young Stellar Objects

    NASA Astrophysics Data System (ADS)

    Hillenbrand, Lynne A.; Baliber, Nairn

    2015-08-01

    We have designed and constructed a database intended to house catalog and literature-published measurements of Young Stellar Objects (YSOs) within ~1 kpc of the Sun. ClassLess, so called because it includes YSOs in all stages of evolution, is a relational database in which user interaction is conducted via HTML web browsers, queries are performed in scientific language, and all data are linked to their sources of publication. Each star is associated with a cluster (or clusters), and both spatially resolved and unresolved measurements are stored, allowing proper use of data from multiple-star systems. With this fully searchable tool, myriad ground- and space-based instruments and surveys across wavelength regimes can be exploited. In addition to primary measurements, the database self-consistently calculates and serves higher-level data products such as extinction, luminosity, and mass. As a result, searches for young stars with specific physical characteristics can be completed with just a few mouse clicks. We are in the database population phase now, and are eager to engage with interested experts worldwide on local galactic star formation and young stellar populations.

  19. Adverse drug reactions associated with off-label use of ketorolac, with particular focus on elderly patients. An analysis of the Italian pharmacovigilance database and a population based study.

    PubMed

    Viola, E; Trifirò, G; Ingrasciotta, Y; Sottosanti, L; Tari, M; Giorgianni, F; Moretti, U; Leone, R

    2016-12-01

    This study aims to evaluate the frequency of off-label use of ketorolac in Italy and the related suspected adverse drug reactions (ADRs) reported. All suspected cases associated with ketorolac recorded in the Italian Pharmacovigilance database were retrieved. Case evaluations were carried out in order to identify off-label use of ketorolac. Moreover, an analysis of the inappropriate use of ketorolac was conducted using the 'Arianna' database of the Caserta local health unit. Up to December 2014, 822 reports of suspected ADRs related to ketorolac were retrieved from the database. The use of ketorolac was classified as off-label for 553 reports and on-label for 269. Among the off-label cases, 58.6% were serious, compared to 39.0% of on-label cases. Gastrointestinal events were more frequently reported with off-label use. The analysis of the Arianna database showed that 37,729 out of 61,910 patients were treated off-label. The off-label use of ketorolac is widespread in Italy. This use increases the risk of serious ADRs, especially with prolonged treatment and in elderly patients. The Italian Medicines Agency has decided to closely monitor the appropriate use of the drug in Italy and, if necessary, take measures to minimize the risks.

  20. Coordinated Research in Robotics and Integrated Manufacturing.

    DTIC Science & Technology

    1983-07-31

    of three research divisions: Robot Systems, Management Systems, and Integrated Design and Manufacturing, and involves about 40 faculty spanning the... keystone of their program. A relatively smaller level of effort is being supported within the Management Systems Division. This is the first annual...

  1. Crowdsourcing-Assisted Radio Environment Database for V2V Communication.

    PubMed

    Katagiri, Keita; Sato, Koya; Fujii, Takeo

    2018-04-12

    In order to realize reliable Vehicle-to-Vehicle (V2V) communication systems for autonomous driving, the recognition of radio propagation becomes an important technology. However, in the current wireless distributed network systems, it is difficult to accurately estimate the radio propagation characteristics because of the locality of the radio propagation caused by surrounding buildings and geographical features. In this paper, we propose a measurement-based radio environment database for improving the accuracy of the radio environment estimation in the V2V communication systems. The database first gathers measurement datasets of the received signal strength indicator (RSSI) related to the transmission/reception locations from V2V systems. By using the datasets, the average received power maps linked with transmitter and receiver locations are generated. We have performed measurement campaigns of V2V communications in the real environment to observe RSSI for the database construction. Our results show that the proposed method has higher accuracy of the radio propagation estimation than the conventional path loss model-based estimation.
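
    The database-construction step can be sketched as binning crowdsourced (transmitter, receiver, RSSI) samples into location-pair grid cells and averaging. The cell size and the choice to average in the linear (mW) domain are assumptions for illustration, not the paper's exact procedure.

    ```python
    import numpy as np
    from collections import defaultdict

    CELL = 10.0  # meters per grid cell (assumed)

    def cell(xy):
        return (int(xy[0] // CELL), int(xy[1] // CELL))

    def build_power_map(samples):
        """samples: iterable of (tx_xy, rx_xy, rssi_dbm).

        Returns {(tx_cell, rx_cell): average received power in dBm}.
        """
        acc = defaultdict(list)
        for tx, rx, rssi in samples:
            # Accumulate in the linear (mW) domain, convert back to dBm below.
            acc[(cell(tx), cell(rx))].append(10 ** (rssi / 10))
        return {key: 10 * np.log10(np.mean(vals)) for key, vals in acc.items()}
    ```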

  2. Crowdsourcing-Assisted Radio Environment Database for V2V Communication †

    PubMed Central

    Katagiri, Keita; Fujii, Takeo

    2018-01-01

    In order to realize reliable Vehicle-to-Vehicle (V2V) communication systems for autonomous driving, the recognition of radio propagation becomes an important technology. However, in the current wireless distributed network systems, it is difficult to accurately estimate the radio propagation characteristics because of the locality of the radio propagation caused by surrounding buildings and geographical features. In this paper, we propose a measurement-based radio environment database for improving the accuracy of the radio environment estimation in the V2V communication systems. The database first gathers measurement datasets of the received signal strength indicator (RSSI) related to the transmission/reception locations from V2V systems. By using the datasets, the average received power maps linked with transmitter and receiver locations are generated. We have performed measurement campaigns of V2V communications in the real environment to observe RSSI for the database construction. Our results show that the proposed method has higher accuracy of the radio propagation estimation than the conventional path loss model-based estimation. PMID:29649174

  3. Evaluation and validity of a LORETA normative EEG database.

    PubMed

    Thatcher, R W; North, D; Biver, C

    2005-04-01

    To evaluate the reliability and validity of a Z-score normative EEG database for Low Resolution Electromagnetic Tomography (LORETA), EEG digital samples (2-second intervals sampled at 128 Hz, 1 to 2 minutes eyes closed) were acquired from 106 normal subjects; the cross-spectrum was computed and multiplied by the Key Institute's LORETA 2,394 gray-matter-pixel T matrix. After a log10 or Box-Cox transform, the mean and standard deviation of the *.lor files were computed for each of the 2,394 gray matter pixels, from 1 to 30 Hz, for each subject. Tests of Gaussianity were computed in order to best approximate a normal distribution for each frequency and gray matter pixel. The relative sensitivity of the Z-score database was computed by measuring the approximation to a Gaussian distribution. The validity of the LORETA normative database was evaluated by the degree to which confirmed brain pathologies were localized using it. The log10 and Box-Cox transforms approximated a Gaussian distribution with 95.64% to 99.75% accuracy. The percentage of normative Z-score values at 2 standard deviations ranged from 1.21% to 3.54%, and the percentage at 3 standard deviations ranged from 0% to 0.83%. Left temporal lobe epilepsy, a right sensorimotor hematoma, and a right hemisphere stroke exhibited maximum Z-score deviations at the locations of the pathologies. We conclude that: (1) an adequate approximation to a Gaussian distribution can be achieved with LORETA by using a log10 or Box-Cox transform and parametric statistics; (2) a Z-score normative database is valid, with adequate sensitivity, when using LORETA; and (3) the Z-score LORETA normative database consistently localized known pathologies to the expected Brodmann areas, as a hypothesis test based on the surface EEG before computing LORETA.
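
    A minimal sketch of the normative-database construction, assuming one array of normal-subject values per (pixel, frequency) pair: transform toward Gaussianity, store the transform parameter plus mean and standard deviation, then Z-score new observations against them.

    ```python
    import numpy as np
    from scipy import stats

    def build_norms(values):
        """values: positive spectral values from normal subjects, shape (n,)."""
        transformed, lam = stats.boxcox(values)  # np.log10(values) also works
        return transformed.mean(), transformed.std(ddof=1), lam

    def z_score(x, mean, sd, lam):
        """Z-score a new observation x against the stored norms."""
        xt = stats.boxcox(np.asarray([x]), lmbda=lam)[0]
        return (xt - mean) / sd
    ```

    In the full database, one (mean, sd, lambda) triple would be stored for each of the 2,394 pixels at each of the 30 frequencies.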

  4. Postmastectomy radiation therapy for lymph node-negative, locally advanced breast cancer after modified radical mastectomy: analysis of the NCI Surveillance, Epidemiology, and End Results database.

    PubMed

    Yu, James B; Wilson, Lynn D; Dasgupta, Tina; Castrucci, William A; Weidhaas, Joanne B

    2008-07-01

    The role of postmastectomy radiotherapy (PMRT) for lymph node-negative, locally advanced breast carcinoma (T3N0M0) after modified radical mastectomy (MRM) with regard to improvement in survival remains an area of controversy. The 1973-2004 National Cancer Institute (NCI) Surveillance, Epidemiology, and End Results (SEER) database was examined for patients with T3N0M0 ductal, lobular, or mixed ductal and lobular carcinoma of the breast who underwent MRM and were treated from 1988 to 2003. Excluded were patients who were men, had positive lymph nodes, survived ≤6 months, had breast cancer that was not their first malignancy, or received nonbeam, intraoperative, or preoperative radiation. The average treatment effect of PMRT on mortality was estimated with a propensity score case-matched analysis. In all, 1,777 patients were identified; 568 (32%) received PMRT. Median tumor size was 6.3 cm. The median number of lymph nodes examined was 14 (range, 1-49). Propensity score matched case-control analysis showed no improvement in overall survival with the delivery of PMRT in this group. Older patients, patients with ER-negative disease (compared with ER-positive), and patients with high-grade tumors (compared with well differentiated) had increased mortality. The use of PMRT for T3N0M0 breast carcinoma after MRM is not associated with an increase in overall survival. It was not possible to analyze local control in this study given the limitations of the SEER database. The impact of potential improvement in local control as it relates to overall survival should be the subject of further investigation. © 2008 American Cancer Society.
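
    Propensity score case matching of the kind used here can be sketched generically: fit a model of treatment probability given covariates, then match each treated patient to the nearest-scoring control. The numeric covariate encoding and greedy one-to-one matching below are simplifying assumptions, not the study's exact protocol.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import NearestNeighbors

    def match_treated_to_controls(X, treated):
        """X: (n, p) numeric covariates (age, grade, ER status, ...);
        treated: boolean array, True where PMRT was delivered.
        Returns index arrays of matched (treated, control) pairs."""
        ps = LogisticRegression(max_iter=1000).fit(X, treated)
        scores = ps.predict_proba(X)[:, 1]          # estimated propensity
        controls = np.where(~treated)[0]
        nn = NearestNeighbors(n_neighbors=1).fit(scores[controls].reshape(-1, 1))
        _, idx = nn.kneighbors(scores[treated].reshape(-1, 1))
        return np.where(treated)[0], controls[idx.ravel()]
    ```

    Survival in the matched pairs is then compared to estimate the average treatment effect.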

  5. Scalable Indoor Localization via Mobile Crowdsourcing and Gaussian Process

    PubMed Central

    Chang, Qiang; Li, Qun; Shi, Zesen; Chen, Wei; Wang, Weiping

    2016-01-01

    Indoor localization using Received Signal Strength Indication (RSSI) fingerprinting has been extensively studied for decades. The positioning accuracy is highly dependent on the density of the signal database. In areas without calibration data, however, this algorithm breaks down. Building and updating a dense signal database is labor intensive, expensive, and even impossible in some areas. Researchers are continually searching for better algorithms to create and update dense databases more efficiently. In this paper, we propose a scalable indoor positioning algorithm that works both in surveyed and unsurveyed areas. We first propose Minimum Inverse Distance (MID) algorithm to build a virtual database with uniformly distributed virtual Reference Points (RP). The area covered by the virtual RPs can be larger than the surveyed area. A Local Gaussian Process (LGP) is then applied to estimate the virtual RPs’ RSSI values based on the crowdsourced training data. Finally, we improve the Bayesian algorithm to estimate the user’s location using the virtual database. All the parameters are optimized by simulations, and the new algorithm is tested on real-case scenarios. The results show that the new algorithm improves the accuracy by 25.5% in the surveyed area, with an average positioning error below 2.2 m for 80% of the cases. Moreover, the proposed algorithm can localize the users in the neighboring unsurveyed area. PMID:26999139
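
    The virtual-database step can be illustrated with an ordinary Gaussian process regression: fit crowdsourced (position, RSSI) training points, then predict RSSI at uniformly spaced virtual reference points, including points outside the surveyed area. The kernel, grid, and synthetic training data below are assumptions for illustration, not the paper's LGP implementation.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    # Synthetic surveyed data: positions in a 50 m square plus a toy
    # distance-decay RSSI model around a transmitter at (25, 25).
    rng = np.random.default_rng(0)
    train_xy = rng.random((200, 2)) * 50
    train_rssi = -40 - 20 * np.log10(1 + np.linalg.norm(train_xy - 25, axis=1))

    gp = GaussianProcessRegressor(kernel=RBF(10.0) + WhiteKernel(1.0),
                                  normalize_y=True).fit(train_xy, train_rssi)

    # Uniform virtual reference points extending beyond the surveyed 50 m area.
    gx, gy = np.meshgrid(np.arange(0, 80, 5.0), np.arange(0, 80, 5.0))
    virtual_rps = np.column_stack([gx.ravel(), gy.ravel()])
    rssi_mean, rssi_std = gp.predict(virtual_rps, return_std=True)
    ```

    The predicted means populate the virtual database; the predictive standard deviation indicates where the extrapolation beyond the surveyed area is least trustworthy.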

  6. microPIR2: a comprehensive database for human–mouse comparative study of microRNA–promoter interactions

    PubMed Central

    Piriyapongsa, Jittima; Bootchai, Chaiwat; Ngamphiw, Chumpol; Tongsima, Sissades

    2014-01-01

    microRNA (miRNA)–promoter interaction resource (microPIR) is a public database containing over 15 million predicted miRNA target sites located within human promoter sequences. These predicted targets are presented along with their related genomic and experimental data, making the microPIR database the most comprehensive repository of miRNA promoter target sites. Here, we describe major updates of the microPIR database including new target predictions in the mouse genome and revised human target predictions. The updated database (microPIR2) now provides ∼80 million human and 40 million mouse predicted target sites. In addition to being a reference database, microPIR2 is a tool for comparative analysis of target sites on the promoters of human–mouse orthologous genes. In particular, this new feature was designed to identify potential miRNA–promoter interactions conserved between species that could be stronger candidates for further experimental validation. We also incorporated additional supporting information to microPIR2 such as nuclear and cytoplasmic localization of miRNAs and miRNA–disease association. Extra search features were also implemented to enable various investigations of targets of interest. Database URL: http://www4a.biotec.or.th/micropir2 PMID:25425035

  7. 77 FR 67657 - Request for Public Comment: 30-Day Proposed Information Collection: Indian Health Service (IHS...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-13

    ... Practice, and Local Effort (BPPPLE) Form.'' Need and Use of Information Collection: The IHS goal is to.../Disease Prevention, Nursing, and Dental) have developed a centralized program database of best practices, promising Practices and local efforts and resources. This database was previously referred as OSCAR, but the...

  8. Designing an Integrated System of Databases: A Workstation for Information Seekers.

    ERIC Educational Resources Information Center

    Micco, Mary; Smith, Irma

    1987-01-01

    Proposes a framework for the design of a full function workstation for information retrieval based on study of information seeking behavior. A large amount of local storage of the CD-ROM jukebox variety and full networking capability to both local and external databases are identified as requirements of the prototype. (MES)

  9. 47 CFR 69.120 - Line information database.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 3 2011-10-01 2011-10-01 false Line information database. 69.120 Section 69...) ACCESS CHARGES Computation of Charges § 69.120 Line information database. (a) A charge that is expressed... from a local exchange carrier database to recover the costs of: (1) The transmission facilities between...

  10. 47 CFR 69.120 - Line information database.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 3 2013-10-01 2013-10-01 false Line information database. 69.120 Section 69...) ACCESS CHARGES Computation of Charges § 69.120 Line information database. (a) A charge that is expressed... from a local exchange carrier database to recover the costs of: (1) The transmission facilities between...

  11. 47 CFR 69.120 - Line information database.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 3 2014-10-01 2014-10-01 false Line information database. 69.120 Section 69...) ACCESS CHARGES Computation of Charges § 69.120 Line information database. (a) A charge that is expressed... from a local exchange carrier database to recover the costs of: (1) The transmission facilities between...

  12. 47 CFR 69.120 - Line information database.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Line information database. 69.120 Section 69...) ACCESS CHARGES Computation of Charges § 69.120 Line information database. (a) A charge that is expressed... from a local exchange carrier database to recover the costs of: (1) The transmission facilities between...

  13. 47 CFR 69.120 - Line information database.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 3 2012-10-01 2012-10-01 false Line information database. 69.120 Section 69...) ACCESS CHARGES Computation of Charges § 69.120 Line information database. (a) A charge that is expressed... from a local exchange carrier database to recover the costs of: (1) The transmission facilities between...

  14. The Protein Information Management System (PiMS): a generic tool for any structural biology research laboratory

    PubMed Central

    Morris, Chris; Pajon, Anne; Griffiths, Susanne L.; Daniel, Ed; Savitsky, Marc; Lin, Bill; Diprose, Jonathan M.; Wilter da Silva, Alan; Pilicheva, Katya; Troshin, Peter; van Niekerk, Johannes; Isaacs, Neil; Naismith, James; Nave, Colin; Blake, Richard; Wilson, Keith S.; Stuart, David I.; Henrick, Kim; Esnouf, Robert M.

    2011-01-01

    The techniques used in protein production and structural biology have been developing rapidly, but techniques for recording the laboratory information produced have not kept pace. One approach is the development of laboratory information-management systems (LIMS), which typically use a relational database schema to model and store results from a laboratory workflow. The underlying philosophy and implementation of the Protein Information Management System (PiMS), a LIMS development specifically targeted at the flexible and unpredictable workflows of protein-production research laboratories of all scales, is described. PiMS is a web-based Java application that uses either Postgres or Oracle as the underlying relational database-management system. PiMS is available under a free licence to all academic laboratories either for local installation or for use as a managed service. PMID:21460443

  15. The Protein Information Management System (PiMS): a generic tool for any structural biology research laboratory.

    PubMed

    Morris, Chris; Pajon, Anne; Griffiths, Susanne L; Daniel, Ed; Savitsky, Marc; Lin, Bill; Diprose, Jonathan M; da Silva, Alan Wilter; Pilicheva, Katya; Troshin, Peter; van Niekerk, Johannes; Isaacs, Neil; Naismith, James; Nave, Colin; Blake, Richard; Wilson, Keith S; Stuart, David I; Henrick, Kim; Esnouf, Robert M

    2011-04-01

    The techniques used in protein production and structural biology have been developing rapidly, but techniques for recording the laboratory information produced have not kept pace. One approach is the development of laboratory information-management systems (LIMS), which typically use a relational database schema to model and store results from a laboratory workflow. The underlying philosophy and implementation of the Protein Information Management System (PiMS), a LIMS development specifically targeted at the flexible and unpredictable workflows of protein-production research laboratories of all scales, is described. PiMS is a web-based Java application that uses either Postgres or Oracle as the underlying relational database-management system. PiMS is available under a free licence to all academic laboratories either for local installation or for use as a managed service.

  16. Comparison of machine learning and semi-quantification algorithms for (I123)FP-CIT classification: the beginning of the end for semi-quantification?

    PubMed

    Taylor, Jonathan Christopher; Fenner, John Wesley

    2017-11-29

    Semi-quantification methods are well established in the clinic for assisted reporting of (I123) Ioflupane images. Arguably, these are limited diagnostic tools. Recent research has demonstrated the potential for improved classification performance offered by machine learning algorithms. A direct comparison between methods is required to establish whether a move towards widespread clinical adoption of machine learning algorithms is justified. This study compared three machine learning algorithms with a range of semi-quantification methods, using the Parkinson's Progression Markers Initiative (PPMI) research database and a locally derived clinical database for validation. The machine learning algorithms were based on support vector machine classifiers with three different feature sets: (1) voxel intensities; (2) principal components of image voxel intensities; and (3) striatal binding ratios from the putamen and caudate. The semi-quantification methods were based on striatal binding ratios (SBRs) from both putamina, with and without consideration of the caudates. Normal limits for the SBRs were defined by four different methods: (1) the minimum of age-matched controls; (2) the mean minus 1/1.5/2 standard deviations of age-matched controls; (3) linear regression of normal patient data against age (minus 1/1.5/2 standard errors); and (4) selection of the optimum operating point on the receiver operating characteristic curve from normal and abnormal training data. Each machine learning and semi-quantification technique was evaluated with stratified, nested 10-fold cross-validation, repeated 10 times. The mean accuracy of the semi-quantitative methods for classifying local data into Parkinsonian and non-Parkinsonian groups varied from 0.78 to 0.87, contrasting with 0.89 to 0.95 for classifying PPMI data into healthy controls and Parkinson's disease groups. The machine learning algorithms gave mean accuracies between 0.88 and 0.92 for local data and between 0.95 and 0.97 for PPMI data. Classification performance was lower for the local database than for the research database for both semi-quantitative and machine learning algorithms. However, for both databases, the machine learning methods generated equal or higher mean accuracies (with lower variance) than any of the semi-quantification approaches. The gain in performance from using machine learning algorithms over semi-quantification was relatively small and may be insufficient, when considered in isolation, to offer significant advantages in the clinical context.
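
    The third feature set lends itself to a compact sketch: an SVM on striatal binding ratios evaluated with repeated stratified 10-fold cross-validation, mirroring the protocol above. The hyperparameter search implied by the nested protocol is omitted here for brevity.

    ```python
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

    def evaluate(X, y):
        """X: (n_scans, 4) SBRs (left/right putamen and caudate);
        y: Parkinsonian vs non-Parkinsonian labels."""
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
        cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)
        scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
        return scores.mean(), scores.std()
    ```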

  17. Automatic localization of the nipple in mammograms using Gabor filters and the Radon transform

    NASA Astrophysics Data System (ADS)

    Chakraborty, Jayasree; Mukhopadhyay, Sudipta; Rangayyan, Rangaraj M.; Sadhu, Anup; Azevedo-Marques, P. M.

    2013-02-01

    The nipple is an important landmark in mammograms. Detection of the nipple is useful for alignment and registration of mammograms in computer-aided diagnosis of breast cancer. In this paper, a novel approach is proposed for automatic detection of the nipple based on the oriented patterns of the breast tissues present in mammograms. The Radon transform is applied to the oriented patterns obtained by a bank of Gabor filters to detect the linear structures related to the tissue patterns. The detected linear structures are then used to locate the nipple position using the characteristics of convergence of the tissue patterns towards the nipple. The performance of the method was evaluated with 200 scanned-film images from the mini-MIAS database and 150 digital radiography (DR) images from a local database. Average errors of 5.84 mm and 6.36 mm were obtained with respect to the reference nipple location marked by a radiologist for the mini-MIAS and DR images, respectively.

  18. GIDL: a rule based expert system for GenBank Intelligent Data Loading into the Molecular Biodiversity database

    PubMed Central

    2012-01-01

    Background In the scientific biodiversity community, the need to build a bridge between molecular and traditional biodiversity studies is increasingly perceived. We believe that information technology can play a preeminent role in integrating the information generated by these studies with the large amount of molecular data available in public bioinformatics databases. This work is primarily aimed at building a bioinformatic infrastructure for the integration of public and private biodiversity data through the development of GIDL, an Intelligent Data Loader coupled with the Molecular Biodiversity Database. The system presented here organizes in an ontological way, and locally stores, the sequence and annotation data contained in the GenBank primary database. Methods The GIDL architecture consists of a relational database and of intelligent data loader software. The relational database schema is designed to manage biodiversity information (Molecular Biodiversity Database) and is organized in four areas: MolecularData, Experiment, Collection and Taxonomy. The MolecularData area is inspired by an established standard in Generic Model Organism Databases, the Chado relational schema. The peculiarity of Chado, and also its strength, is the adoption of an ontological schema which makes use of the Sequence Ontology. The Intelligent Data Loader (IDL) component of GIDL is an Extract, Transform and Load software able to parse data, to discover hidden information in the GenBank entries and to populate the Molecular Biodiversity Database. The IDL is composed of three main modules: the Parser, able to parse GenBank flat files; the Reasoner, which automatically builds CLIPS facts mapping the biological knowledge expressed by the Sequence Ontology; and the DBFiller, which translates the CLIPS facts into ordered SQL statements used to populate the database. Semantic Web technologies have been adopted in GIDL due to their advantages in data representation, integration and processing. Results and conclusions Entries from the Virus (814,122), Plant (1,365,360) and Invertebrate (959,065) divisions of GenBank release 180 have been loaded into the Molecular Biodiversity Database by GIDL. Our system, combining the Sequence Ontology and the Chado schema, allows more powerful query expressiveness compared with the most commonly used sequence retrieval systems such as Entrez or SRS. PMID:22536971
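
    As a rough sketch of the Parser-to-DBFiller flow (omitting the CLIPS-based Reasoner entirely), the following parses a GenBank flat file with Biopython and loads minimal rows into a relational table; the file name and table layout are hypothetical, not GIDL's actual schema.

        # Extract-Transform-Load sketch: parse GenBank entries, load annotation rows.
        import sqlite3
        from Bio import SeqIO   # Biopython

        conn = sqlite3.connect("biodiversity.db")
        conn.execute("""CREATE TABLE IF NOT EXISTS feature (
            accession TEXT, organism TEXT, feat_type TEXT, location TEXT)""")

        for record in SeqIO.parse("plants.gb", "genbank"):   # hypothetical flat file
            organism = record.annotations.get("organism", "")
            for feat in record.features:
                conn.execute("INSERT INTO feature VALUES (?, ?, ?, ?)",
                             (record.id, organism, feat.type, str(feat.location)))
        conn.commit()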

  19. A Synthetic Comparator Approach to Local Evaluation of School-Based Substance Use Prevention Programming.

    PubMed

    Hansen, William B; Derzon, James H; Reese, Eric L

    2014-06-01

    We propose a method for creating groups against which outcomes of local pretest-posttest evaluations of evidence-based programs can be judged. This involves assessing pretest markers for new and previously conducted evaluations to identify groups with high pretest similarity. A database of 802 prior local evaluations provided six summary measures for analysis. Proximity between all groups on these variables is calculated as standardized proximities with values between 0 and 1. Five methods for creating standardized proximities are demonstrated. The approach allows proximity limits to be adjusted to find sufficient numbers of synthetic comparators. Several index cases are examined to assess the numbers of groups available to serve as comparators. Results show that most local evaluations would have sufficient numbers of comparators available for estimating program effects. This method holds promise as a tool for local evaluations to estimate relative effectiveness.
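
    One plausible reading of the standardized-proximity idea (the paper demonstrates five methods; this sketch shows a single Euclidean variant under assumed z-scoring, with invented data):

        # Standardized proximities in [0, 1] from six pretest summary measures.
        import numpy as np

        def standardized_proximity(measures):
            """measures: (n_groups, 6) array of pretest summary measures."""
            z = (measures - measures.mean(axis=0)) / measures.std(axis=0)
            d = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)  # pairwise distances
            return 1.0 - d / d.max()   # proximity 1 = identical pretest profile

        groups = np.random.default_rng(1).normal(size=(802, 6))  # stand-in for the database
        prox = standardized_proximity(groups)
        comparators = np.where(prox[0] >= 0.95)[0]  # groups within a proximity limit of case 0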

  20. DIMA.Tools: An R package for working with the database for inventory, monitoring, and assessment

    USDA-ARS?s Scientific Manuscript database

    The Database for Inventory, Monitoring, and Assessment (DIMA) is a Microsoft Access database used to collect, store and summarize monitoring data. This database is used by both local and national monitoring efforts within the National Park Service, the Forest Service, the Bureau of Land Management, ...

  1. Introducing local property tax for fiscal decentralization and local authority autonomy

    NASA Astrophysics Data System (ADS)

    Dimopoulos, Thomas; Labropoulos, Tassos; Hadjimitsis, Diafantos G.

    2015-06-01

    Charles Tiebout (1956), in his work "A Pure Theory of Local Expenditures", provides a vision of the workings of the local public sector that acknowledges many similarities to a competitive market, yet omits any reference to local taxation. Contrary to other researchers' claim that the Tiebout model and the theory of fiscal decentralization are by no means synonymous, this paper aims to expand Tiebout's theory by adding local property tax to the context, introducing a fair, ad valorem property taxation system based on automated assessment of the value of real estate properties within the boundaries of local authorities. Computer Assisted Mass Appraisal (CAMA) methodology, integrated with Remote Sensing technology and GIS analysis, is applied to local authorities' property registries and cadastral data, building a spatial relational database and providing data to be statistically processed through Multiple Regression Analysis modeling. The proposed scheme achieves economies of scale through CAMA procedures while also making local authorities self-sufficient through a decentralized, fair, locally calibrated property taxation model that provides rational revenue administration.
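
    A minimal sketch of the CAMA valuation step, assuming a hedonic multiple-regression model over invented registry/GIS attributes; the feature set, coefficients and tax rate below are all hypothetical placeholders.

        # Multiple-regression mass appraisal: fit assessed values, then tax ad valorem.
        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(2)
        X = np.column_stack([
            rng.uniform(50, 300, 500),   # floor area (m^2), from the property registry
            rng.integers(0, 60, 500),    # building age (years)
            rng.uniform(0, 5, 500),      # distance to centre (km), e.g. from GIS analysis
        ])
        price = 1500 * X[:, 0] - 800 * X[:, 1] - 20000 * X[:, 2] + rng.normal(0, 2e4, 500)

        model = LinearRegression().fit(X, price)
        assessed_value = model.predict([[120, 10, 1.5]])[0]
        tax_due = 0.005 * assessed_value   # hypothetical 0.5% local ad valorem rate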

  2. The Metallicity Evolution of Blue Compact Dwarf Galaxies from the Intermediate Redshift to the Local Universe

    NASA Astrophysics Data System (ADS)

    Lian, Jianhui; Hu, Ning; Fang, Guanwen; Ye, Chengyun; Kong, Xu

    2016-03-01

    We present oxygen abundance measurements for 74 blue compact dwarf (BCD) galaxies in the redshift range of [0.2, 0.5] using the strong-line method. The spectra of these objects are taken using Hectospec on the Multiple Mirror Telescope. More than half of these BCDs had dust attenuation corrected using the Balmer decrement method. For comparison, we also selected a sample of 2023 local BCDs from the Sloan Digital Sky Survey (SDSS) database. Based on the local and intermediate-z BCD samples, we investigated the cosmic evolution of the metallicity, star formation rate (SFR), and Dn(4000) index. Compared with local BCDs, the intermediate-z BCDs had a systematically higher R23 ratio but a similar O32 ratio. Interestingly, no significant deviation in the mass-metallicity (MZ) relation was found between the intermediate-z and local BCDs. Besides the metallicity, the intermediate-z BCDs also exhibited an SFR distribution that was consistent with local BCDs, suggesting a weak dependence on redshift. The intermediate-z BCDs seemed to be younger than the local BCDs with lower Dn(4000) index values. The insignificant deviation in the mass-metallicity and mass-SFR relations between intermediate-z and local BCDs indicates that the relations between the global parameters of low-mass compact galaxies may be universal. These results from low-mass compact galaxies could be used to place important observational constraints on galaxy formation and evolution models.
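
    For reference, the strong-line ratios mentioned above can be computed from dereddened emission-line fluxes as in the sketch below; mapping R23 to an oxygen abundance additionally requires an empirical calibration, with the O32 ratio used to break the upper/lower branch degeneracy. The flux values here are illustrative.

        # R23 traces oxygen abundance; O32 traces the ionization parameter.
        def r23_o32(oii_3727, oiii_4959, oiii_5007, hbeta):
            r23 = (oii_3727 + oiii_4959 + oiii_5007) / hbeta
            o32 = (oiii_4959 + oiii_5007) / oii_3727
            return r23, o32

        # Example with invented fluxes (arbitrary units, already dereddened):
        r23, o32 = r23_o32(oii_3727=2.1, oiii_4959=1.0, oiii_5007=3.0, hbeta=1.0)
        print(f"R23 = {r23:.2f}, O32 = {o32:.2f}")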

  3. Uses of tuberculosis mortality surveillance to identify programme errors and improve database reporting.

    PubMed

    Selig, L; Guedes, R; Kritski, A; Spector, N; Lapa E Silva, J R; Braga, J U; Trajman, A

    2009-08-01

    In 2006, 848 persons died from tuberculosis (TB) in Rio de Janeiro, Brazil, corresponding to a mortality rate of 5.4 per 100 000 population. No specific TB death surveillance actions are currently in place in Brazil. The study was conducted in two public general hospitals with large open emergency rooms in Rio de Janeiro City, with the objective of evaluating the contribution of TB death surveillance in detecting gaps in TB control. We conducted a survey of TB deaths from September 2005 to August 2006. Records of TB-related deaths and deaths due to undefined causes were investigated. Complementary data were gathered from the mortality and TB notification databases. Seventy-three TB-related deaths were investigated. Transmission hazards were identified among firefighters, health care workers and in-patients. Management errors included failure to isolate suspected cases, to confirm TB, to correct drug doses in underweight patients and to trace contacts. Following the survey, 36 cases that had not previously been notified were included in the national TB notification database and the outcome of 29 notified cases was corrected. TB mortality surveillance can contribute to TB monitoring and evaluation by detecting correctable and specific programme- and hospital-based care errors, and by improving the accuracy of TB database reporting. Specific local and programmatic interventions can be proposed as a result.

  4. A spatial database for landslides in northern Bavaria: A methodological approach

    NASA Astrophysics Data System (ADS)

    Jäger, Daniel; Kreuzer, Thomas; Wilde, Martina; Bemm, Stefan; Terhorst, Birgit

    2018-04-01

    Landslide databases provide essential information for hazard modeling, assessment of damage to buildings and infrastructure, mitigation, and research needs. This study presents the development of a landslide database system named WISL (Würzburg Information System on Landslides), currently storing detailed landslide data for northern Bavaria, Germany, in order to enable scientific queries as well as comparisons with other regional landslide inventories. WISL is based on free open-source software (PostgreSQL, PostGIS), which assures good interoperability between components and enables further extensions with specific adaptations of self-developed software. WISL was also designed for easy communication with other databases. As a central prerequisite for standardized, homogeneous data acquisition in the field, a customized data sheet for landslide description was compiled. This sheet also serves as an input mask for all data registration procedures in WISL. A variety of "in-database" solutions for landslide analysis provides the necessary scalability for the database, enabling operations at the local server. In its current state, WISL already enables extensive analysis and queries. This paper presents an example analysis of landslides in Oxfordian limestones in the northeastern Franconian Alb, northern Bavaria. The results reveal widely differing landslides in terms of geometry and size. Further queries related to landslide activity classify the majority of the landslides as currently inactive; however, they clearly possess a certain potential for remobilization. Along with some active mass movements, a significant percentage of landslides potentially endangers residential areas or infrastructure. Future enhancements of the WISL database will focus on data extensions to increase research possibilities, as well as on transferring the system to other regions and countries.
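
    A sketch of an "in-database" analysis of the kind described, against a hypothetical WISL-like PostGIS schema (table and column names are invented): the spatial work happens on the server and only aggregated results travel to the client.

        # Server-side spatial aggregation via PostGIS; psycopg2 client.
        import psycopg2

        conn = psycopg2.connect("dbname=wisl user=reader host=localhost")
        with conn.cursor() as cur:
            cur.execute("""
                SELECT lithology, COUNT(*) AS n, AVG(ST_Area(geom)) AS mean_area_m2
                FROM landslide
                WHERE activity = 'inactive'
                GROUP BY lithology
                ORDER BY n DESC;
            """)
            for lithology, n, mean_area in cur.fetchall():
                print(lithology, n, round(mean_area, 1))
        conn.close()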

  5. Joint Experimentation on Scalable Parallel Processors (JESPP)

    DTIC Science & Technology

    2006-04-01

    made use of local embedded relational databases, implemented using sqlite on each node of an SPP, to execute queries and return results via an ad hoc ... Experimentation Directorate (J9) required expansion of its joint semi-automated forces (JSAF) code capabilities, including number of entities and behavior complexity
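
    The embedded-database pattern this fragment describes can be sketched as follows, with sqlite as the per-node store; the file path and schema are hypothetical, and the ad hoc result-forwarding layer is only indicated by a comment.

        # Per-node embedded relational store: queries execute locally on each node.
        import sqlite3, socket

        conn = sqlite3.connect("/tmp/jsaf_node.db")   # hypothetical per-node file
        conn.execute("CREATE TABLE IF NOT EXISTS entity (id INTEGER, kind TEXT, x REAL, y REAL)")
        conn.execute("INSERT INTO entity VALUES (1, 'vehicle', 12.5, -3.2)")

        # A query executed on this node; an ad hoc layer would forward the
        # results (tagged with the node's identity) back to the requester.
        rows = conn.execute("SELECT kind, COUNT(*) FROM entity GROUP BY kind").fetchall()
        print(socket.gethostname(), rows)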

  6. Applying Agile Methods to the Development of a Community-Based Sea Ice Observations Database

    NASA Astrophysics Data System (ADS)

    Pulsifer, P. L.; Collins, J. A.; Kaufman, M.; Eicken, H.; Parsons, M. A.; Gearheard, S.

    2011-12-01

    Local and traditional knowledge and community-based monitoring programs are increasingly being recognized as an important part of establishing an Arctic observing network and understanding Arctic environmental change. The Seasonal Ice Zone Observing Network (SIZONet, http://www.sizonet.org) project has implemented an integrated program for observing seasonal ice in Alaska. Observation and analysis by local sea ice experts helps track seasonal and inter-annual variability of the ice cover and its use by coastal communities. The ELOKA project (http://eloka-arctic.org) is collaborating with SIZONet on the development of a community-accessible, Web-based application for collecting and distributing local observations. The SIZONet project is dealing with complicated qualitative and quantitative data collected from a growing number of observers in different communities while concurrently working to design a system that will serve a wide range of end users, including Arctic residents, scientists, educators, and other stakeholders with a need for sea ice information. The benefits of linking and integrating knowledge from communities and university-based researchers are clear; however, development of an information system in this multidisciplinary, multi-participant context is challenging. Participants are geographically distributed, have different levels of technical expertise, and have varying goals for how the system will be used. As previously reported (Pulsifer et al. 2010), new technologies have been used to deal with some of the challenges presented in this complex development context. In this paper, we report on the challenges and innovations related to working as a multi-disciplinary software development team. Specifically, we discuss how Agile software development methods have been used in defining and refining user needs, developing prototypes, and releasing a production-level application. We provide an overview of the production application, including a hybrid architecture that combines a traditional relational database, a schema-less database, advanced free-text search, and a preliminary framework for Semantic Web support. The current version of the SIZONet web application is discussed in relation to the high-value features defined as part of the Agile approach. Preliminary feedback indicates a system that meets the needs of multiple user groups.

  7. A Java API for working with PubChem datasets.

    PubMed

    Southern, Mark R; Griffin, Patrick R

    2011-03-01

    PubChem is a public repository of chemical structures and associated biological activities. The PubChem BioAssay database contains assay descriptions, conditions and readouts, and biological screening results that have been submitted by the biomedical research community. The PubChem web site and Power User Gateway (PUG) web service allow users to interact with the data, and raw files are available via FTP. These resources are helpful to many, but there can also be great benefit in using a software API to manipulate the data. Here, we describe a Java API with entity objects mapped to the PubChem schema and with wrapper functions for calling the NCBI eUtilities and PubChem PUG web services. PubChem BioAssays and associated chemical compounds can then be queried and manipulated in a local relational database. Features include chemical structure searching and the generation and display of curve fits from stored dose-response experiments, something that is not yet available within PubChem itself. The aim is to provide researchers with a fast, consistent, queryable local resource from which to manipulate PubChem BioAssays in a database-agnostic manner. It is not intended as an end-user tool but as a platform for further automation and tools development. http://code.google.com/p/pubchemdb.

  8. Fifteen hundred guidelines and growing: the UK database of clinical guidelines.

    PubMed

    van Loo, John; Leonard, Niamh

    2006-06-01

    The National Library for Health offers a comprehensive searchable database of nationally approved clinical guidelines, called the Guidelines Finder. This resource, commissioned in 2002, is managed and developed by the University of Sheffield Health Sciences Library. The authors introduce the historical and political dimension of guidelines and the nature of guidelines as a mechanism to ensure clinical effectiveness in practice. The article then outlines the maintenance and organisation of the Guidelines Finder database itself, the criteria for selection, who publishes guidelines and guideline formats, usage of the Guidelines Finder service and finally looks at some lessons learnt from a local library offering a national service. Clinical guidelines are central to effective clinical practice at the national, organisational and individual level. The Guidelines Finder is one of the most visited resources within the National Library for Health and is successful in answering information needs related to specific patient care, clinical research, guideline development and education.

  9. Database assessment of CMIP5 and hydrological models to determine flood risk areas

    NASA Astrophysics Data System (ADS)

    Limlahapun, Ponthip; Fukui, Hiromichi

    2016-11-01

    Water-related disasters cannot be addressed with a single scientific method. Based on this premise, we combined logical design, the sequential passing of results between models, and database applications in an attempt to analyse historical and future scenarios in the context of flooding. The three main models used in this study are (1) the fifth phase of the Coupled Model Intercomparison Project (CMIP5) to derive precipitation; (2) the Integrated Flood Analysis System (IFAS) to extract the amount of discharge; and (3) the Hydrologic Engineering Center (HEC) model to generate inundated areas. This research focused on integrating data regardless of system-design complexity; database approaches proved flexible, manageable, and well supported for transferring data between systems, which makes them suitable for monitoring a flood. The resulting flood maps, together with real-time stream data, can help local communities identify areas at risk of flooding in advance.

  10. [LONI & Co: about the epistemic specificity of digital spaces of knowledge in cognitive neuroscience].

    PubMed

    Huber, Lara

    2011-06-01

    In the neurosciences, digital databases are increasingly becoming important tools for rendering and distributing data. This development is due to the growing impact of imaging-based trial design in cognitive neuroscience, including morphological as much as functional imaging technologies. As the case of the 'Laboratory of Neuro Imaging' (LONI) shows, databases are attributed a specific epistemological power: since the 1990s, databasing has been seen to foster the integration of neuroscientific data, although local regimes of data production, manipulation and interpretation also challenge this development. Databasing in the neurosciences goes along with the introduction of new structures for integrating local data, hence establishing digital spaces of knowledge (epistemic spaces). At this stage, inherent norms of digital databases are affecting regimes of imaging-based trial design, for example in clinical research into Alzheimer's disease.

  11. High-Performance Secure Database Access Technologies for HEP Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matthew Vranicar; John Weicher

    2006-04-17

    The Large Hadron Collider (LHC) at the CERN Laboratory will become the largest scientific instrument in the world when it starts operations in 2007. Large Scale Analysis Computer Systems (computational grids) are required to extract rare signals of new physics from petabytes of LHC detector data. In addition to file-based event data, LHC data processing applications require access to large amounts of data in relational databases: detector conditions, calibrations, etc. U.S. high energy physicists demand efficient performance of grid computing applications in LHC physics research, where world-wide remote participation is vital to their success. To empower physicists with data-intensive analysis capabilities, a whole hyperinfrastructure of distributed databases cross-cuts a multi-tier hierarchy of computational grids. The crosscutting allows separation of concerns across both the global environment of a federation of computational grids and the local environment of a physicist's computer used for analysis. Very few efforts are ongoing in the area of database and grid integration research. Most of these are outside of the U.S. and rely on traditional approaches to secure database access via an extraneous security layer separate from the database system core, preventing efficient data transfers. Our findings are shared by the Database Access and Integration Services Working Group of the Global Grid Forum, which states that "Research and development activities relating to the Grid have generally focused on applications where data is stored in files. However, in many scientific and commercial domains, database management systems have a central role in data storage, access, organization, authorization, etc, for numerous applications." There is a clear opportunity for a technological breakthrough, requiring innovative steps to provide high-performance secure database access technologies for grid computing. We believe that an innovative database architecture in which secure authorization is pushed into the database engine will eliminate inefficient data transfer bottlenecks. Furthermore, traditionally separated database and security layers provide an extra vulnerability, leaving weak clear-text password authorization as the only protection on the database core systems. Due to the legacy limitations of the systems' security models, the allowed passwords often cannot even comply with the DOE password guideline requirements. We see an opportunity for tight integration of the secure authorization layer with the database server engine, resulting in both improved performance and improved security. Phase I focused on the development of a proof-of-concept prototype using Argonne National Laboratory's (ANL) Argonne Tandem-Linac Accelerator System (ATLAS) project as a test scenario. By developing a grid-security enabled version of the ATLAS project's current relational database solution, MySQL, PIOCON Technologies aims to offer a more efficient solution to secure database access.

  12. Bayesian pretest probability estimation for primary malignant bone tumors based on the Surveillance, Epidemiology and End Results Program (SEER) database.

    PubMed

    Benndorf, Matthias; Neubauer, Jakob; Langer, Mathias; Kotter, Elmar

    2017-03-01

    In the diagnostic process of primary bone tumors, patient age, tumor localization and to a lesser extent sex affect the differential diagnosis. We therefore aim to develop a pretest probability calculator for primary malignant bone tumors based on population data taking these variables into account. We access the SEER (Surveillance, Epidemiology and End Results Program of the National Cancer Institute, 2015 release) database and analyze data of all primary malignant bone tumors diagnosed between 1973 and 2012. We record age at diagnosis, tumor localization according to the International Classification of Diseases (ICD-O-3) and sex. We take relative probability of the single tumor entity as a surrogate parameter for unadjusted pretest probability. We build a probabilistic (naïve Bayes) classifier to calculate pretest probabilities adjusted for age, tumor localization and sex. We analyze data from 12,931 patients (647 chondroblastic osteosarcomas, 3659 chondrosarcomas, 1080 chordomas, 185 dedifferentiated chondrosarcomas, 2006 Ewing's sarcomas, 281 fibroblastic osteosarcomas, 129 fibrosarcomas, 291 fibrous malignant histiocytomas, 289 malignant giant cell tumors, 238 myxoid chondrosarcomas, 3730 osteosarcomas, 252 parosteal osteosarcomas, 144 telangiectatic osteosarcomas). We make our probability calculator accessible at http://ebm-radiology.com/bayesbone/index.html . We provide exhaustive tables for age and localization data. Results from tenfold cross-validation show that in 79.8 % of cases the pretest probability is correctly raised. Our approach employs population data to calculate relative pretest probabilities for primary malignant bone tumors. The calculator is not diagnostic in nature. However, resulting probabilities might serve as an initial evaluation of probabilities of tumors on the differential diagnosis list.
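
    A naive Bayes sketch of the adjustment described above, with invented counts standing in for SEER-derived frequencies: the posterior over tumor entities is proportional to the prior (relative frequency) times the per-variable likelihoods for age band, localization and sex.

        # Naive Bayes pretest probability: P(tumor | age, site, sex). Counts invented.
        counts = {  # entity -> (total, age-band counts, site counts, sex counts)
            "osteosarcoma": (3730, {"<25": 2200, "25+": 1530},
                             {"femur": 1600, "other": 2130}, {"m": 2100, "f": 1630}),
            "chondrosarcoma": (3659, {"<25": 300, "25+": 3359},
                               {"femur": 800, "other": 2859}, {"m": 1900, "f": 1759}),
        }

        def pretest_probability(age_band, site, sex):
            total = sum(n for n, *_ in counts.values())
            post = {}
            for tumor, (n, ages, sites, sexes) in counts.items():
                likelihood = (ages.get(age_band, 0) / n) * \
                             (sites.get(site, 0) / n) * (sexes.get(sex, 0) / n)
                post[tumor] = (n / total) * likelihood    # prior x likelihoods
            z = sum(post.values())
            return {t: p / z for t, p in post.items()}    # normalize to sum to 1

        print(pretest_probability("<25", "femur", "m"))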

  13. Thailand mutation and variation database (ThaiMUT).

    PubMed

    Ruangrit, Uttapong; Srikummool, Metawee; Assawamakin, Anunchai; Ngamphiw, Chumpol; Chuechote, Suparat; Thaiprasarnsup, Vilasinee; Agavatpanitch, Gallissara; Pasomsab, Ekawat; Yenchitsomanus, Pa-Thai; Mahasirimongkol, Surakameth; Chantratita, Wasun; Palittapongarnpim, Prasit; Uyyanonvara, Bunyarit; Limwongse, Chanin; Tongsima, Sissades

    2008-08-01

    With the completion of the human genome project, novel sequencing and genotyping technologies have been utilized to detect mutations. Such mutations have continually been produced at an exponential rate by researchers in various communities. Because mutation spectra differ between populations, occurrences of Mendelian diseases differ across ethnic groups, and a proportion of Mendelian diseases can be observed in some countries at higher rates than in others. Recognizing the importance of mutation effects in Thailand, we established a National and Ethnic Mutation Database (NEMDB) for Thai people. This database, named the Thailand Mutation and Variation database (ThaiMUT), offers web-based access to genetic mutation and variation information in the Thai population. This NEMDB initiative is an important informatics tool for both research and clinical purposes to retrieve and deposit human variation data. The mutation data cataloged in the ThaiMUT database were derived from journal articles available in PubMed and local publications. In addition to collected mutation data, ThaiMUT also records genetic polymorphisms located in drug-related genes. ThaiMUT can thus provide useful information for clinical mutation screening services for Mendelian diseases and for pharmacogenomic research. ThaiMUT can be publicly accessed from http://gi.biotec.or.th/thaimut.

  14. Improving retrospective characterization of the food environment for a large region in the United States during a historic time period.

    PubMed

    Auchincloss, Amy H; Moore, Kari A B; Moore, Latetia V; Diez Roux, Ana V

    2012-11-01

    Access to healthy foods has received increasing attention due to the growing prevalence of obesity and diet-related health conditions, yet there are major obstacles in characterizing the local food environment. This study developed a method to retrospectively characterize supermarkets for a single historic year, 2005, in 19 counties in 6 states in the USA using a supermarket chain-name list and two business databases. Data preparation, merging, overlap between sources, the added value of the various approaches, and differences by census-tract socio-demographic characteristics are described. Agreement between the two food store databases was modest: 63%. Only 55% of the final list of supermarkets were identified by a single business database and selection criteria that included industry classification codes and sales revenue ≥$2 million. The added value of using a supermarket chain-name list and a second business database was the identification of an additional 14% and 30% of supermarkets, respectively. These methods are particularly useful for retrospectively characterizing access to supermarkets during a historic period, when field observations are not feasible and business databases must be used.

  15. ALDB: a domestic-animal long noncoding RNA database.

    PubMed

    Li, Aimin; Zhang, Junying; Zhou, Zhongyin; Wang, Lei; Liu, Yujuan; Liu, Yajun

    2015-01-01

    Long noncoding RNAs (lncRNAs) have attracted significant attention in recent years due to their important roles in many biological processes. Domestic animals constitute a unique resource for understanding the genetic basis of phenotypic variation and are ideal models relevant to diverse areas of biomedical research. With improving sequencing technologies, numerous domestic-animal lncRNAs are now available. Thus, there is an immediate need for a database resource that can assist researchers to store, organize, analyze and visualize domestic-animal lncRNAs. The domestic-animal lncRNA database, named ALDB, is the first comprehensive database with a focus on the domestic-animal lncRNAs. It currently archives 12,103 pig intergenic lncRNAs (lincRNAs), 8,923 chicken lincRNAs and 8,250 cow lincRNAs. In addition to the annotations of lincRNAs, it offers related data that is not available yet in existing lncRNA databases (lncRNAdb and NONCODE), such as genome-wide expression profiles and animal quantitative trait loci (QTLs) of domestic animals. Moreover, a collection of interfaces and applications, such as the Basic Local Alignment Search Tool (BLAST), the Generic Genome Browser (GBrowse) and flexible search functionalities, are available to help users effectively explore, analyze and download data related to domestic-animal lncRNAs. ALDB enables the exploration and comparative analysis of lncRNAs in domestic animals. A user-friendly web interface, integrated information and tools make it valuable to researchers in their studies. ALDB is freely available from http://res.xaut.edu.cn/aldb/index.jsp.

  16. Temporal trends in long-term survival and cure rates in esophageal cancer: a SEER database analysis.

    PubMed

    Dubecz, Attila; Gall, Isabell; Solymosi, Norbert; Schweigert, Michael; Peters, Jeffrey H; Feith, Marcus; Stein, Hubert J

    2012-02-01

    To assess long-term temporal trends in population-based survival and cure rates in patients with esophageal cancer and compare them over the last 3 decades in the United States. We identified 62,523 patients with cancer of the esophagus and the gastric cardia diagnosed between 1973 and 2007 from the Surveillance, Epidemiology, and End Results database. Long-term cancer-related survival and cure rates were calculated. Stage-by-stage disease-related survival curves of patients diagnosed in different decades were compared. The influence of available variables on survival and cure was analyzed with logistic regression. Ten-year survival was 14% in all patients. Disease-related survival of esophageal cancer has improved significantly since 1973. Median survival for Surveillance, Epidemiology, and End Results local, regional, and metastatic stages improved from 11, 10, and 4 months in the 1970s to 35, 15, and 6 months after 2000. Early stage, age 45 to 65 years at diagnosis, and undergoing surgical therapy were independent predictors of 10-year survival. Cure rates improved in all stages during the study period and were 73%, 37%, 12%, and 2% in stages 0, 1, 2, and 4, respectively, after the year 2000. The percentage of patients undergoing surgery improved from 55% in the 1970s to 64% between 2000 and 2007. The proportion of patients diagnosed with in situ and local cancer remains below 30%. Long-term survival with esophageal cancer is poor, but survival of local esophageal cancer has improved dramatically over the decades. Complete cure of nonmetastatic esophageal cancer seems possible in a growing number of patients. Early diagnosis and treatment are crucial.

  17. MiCroKit 3.0: an integrated database of midbody, centrosome and kinetochore.

    PubMed

    Ren, Jian; Liu, Zexian; Gao, Xinjiao; Jin, Changjiang; Ye, Mingliang; Zou, Hanfa; Wen, Longping; Zhang, Zhaolei; Xue, Yu; Yao, Xuebiao

    2010-01-01

    During cell division/mitosis, a specific subset of proteins is spatially and temporally assembled into protein super complexes in three distinct regions, i.e. centrosome/spindle pole, kinetochore/centromere and midbody/cleavage furrow/phragmoplast/bud neck, and modulates the cell division process faithfully. Although many experimental efforts have been carried out to investigate the characteristics of these proteins, no integrated database was available. Here, we present the MiCroKit database (http://microkit.biocuckoo.org) of proteins that localize in the midbody, centrosome and/or kinetochore. We collected into the MiCroKit database experimentally verified microkit proteins from the scientific literature that have unambiguous supportive evidence for subcellular localization under the fluorescent microscope. The current version of MiCroKit 3.0 provides detailed information for 1489 microkit proteins from seven model organisms, including Saccharomyces cerevisiae, Schizosaccharomyces pombe, Caenorhabditis elegans, Drosophila melanogaster, Xenopus laevis, Mus musculus and Homo sapiens. Moreover, orthologous information is provided for these microkit proteins, which could be a useful resource for further experimental identification. The online service of the MiCroKit database was implemented in PHP + MySQL + JavaScript, while the local packages were developed in JAVA 1.5 (J2SE 5.0).

  18. A hybrid algorithm for selecting head-related transfer function based on similarity of anthropometric structures

    NASA Astrophysics Data System (ADS)

    Zeng, Xiang-Yang; Wang, Shu-Guang; Gao, Li-Ping

    2010-09-01

    As the basic data for virtual auditory technology, the head-related transfer function (HRTF) has many applications in the areas of room acoustic modeling, spatial hearing and multimedia. How to individualize HRTFs quickly and effectively is currently an open problem. Based on the similarity and correlation of anthropometric structures, this paper presents a hybrid HRTF customization algorithm that combines principal component analysis (PCA), multiple linear regression (MLR) and database matching (DM). HRTFs selected by both the best match and the worst match were used to generate binaurally auralized sounds for subjective listening experiments, and the results were compared. For sources in the horizontal plane, the localization results show that this selection of HRTFs enhances localization accuracy and also mitigates the problem of front-back confusion.
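
    A minimal sketch of the database-matching (DM) step under assumed data: anthropometric vectors are compressed with PCA and the nearest database subject supplies the HRTF, with the farthest subject kept as the worst-match control; all dimensions and values are invented.

        # Database matching on PCA-compressed anthropometric measurements.
        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(3)
        anthro_db = rng.normal(size=(45, 17))  # 45 database subjects x 17 measurements (invented)
        listener = rng.normal(size=17)         # the new listener's measurements

        pca = PCA(n_components=5).fit(anthro_db)
        db_pc = pca.transform(anthro_db)
        target = pca.transform(listener[None, :])[0]

        dist = np.linalg.norm(db_pc - target, axis=1)
        best, worst = np.argmin(dist), np.argmax(dist)
        print(f"use HRTF set of subject {best}; worst-match control: subject {worst}")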

  19. Compilation, quality control, analysis, and summary of discrete suspended-sediment and ancillary data in the United States, 1901-2010

    USGS Publications Warehouse

    Lee, Casey J.; Glysson, G. Douglas

    2013-01-01

    Human-induced and natural changes to the transport of sediment and sediment-associated constituents can degrade aquatic ecosystems and limit human uses of streams and rivers. The lack of a dedicated, easily accessible, quality-controlled database of sediment and ancillary data has made it difficult to identify sediment-related water-quality impairments and has limited understanding of how human actions affect suspended-sediment concentrations and transport. The purpose of this report is to describe the creation of a quality-controlled U.S. Geological Survey suspended-sediment database, provide guidance for its use, and summarize characteristics of suspended-sediment data through 2010. The database is provided as an online application at http://cida.usgs.gov/sediment to allow users to view, filter, and retrieve available suspended-sediment and ancillary data. A data recovery, filtration, and quality-control process was performed to expand the availability, representativeness, and utility of existing suspended-sediment data collected by the U.S. Geological Survey in the United States before January 1, 2011. Information on streamflow condition, sediment grain size, and upstream landscape condition were matched to sediment data and sediment-sampling sites to place data in context with factors that may influence sediment transport. Suspended-sediment and selected ancillary data are presented from across the United States with respect to time, streamflow, and landscape condition. Examples of potential uses of this database for identifying sediment-related impairments, assessing trends, and designing new data collection activities are provided. This report and database can support local and national-level decision making, project planning, and data mining activities related to the transport of suspended-sediment and sediment-associated constituents.

  1. Adopting a corporate perspective on databases. Improving support for research and decision making.

    PubMed

    Meistrell, M; Schlehuber, C

    1996-03-01

    The Veterans Health Administration (VHA) is at the forefront of designing and managing health care information systems that accommodate the needs of clinicians, researchers, and administrators at all levels. Rather than using one single-site, centralized corporate database, VHA has constructed several large databases with different configurations to meet the needs of users with different perspectives. The largest VHA database is the Decentralized Hospital Computer Program (DHCP), a multisite, distributed data system that uses decoupled hospital databases. The centralization of DHCP policy has promoted data coherence, whereas the decentralization of DHCP management has permitted system development to be done with maximum relevance to the users' local practices. A more recently developed VHA data system, the Event Driven Reporting system (EDR), uses multiple, highly coupled databases to provide workload data at facility, regional, and national levels. The EDR automatically posts a subset of DHCP data to local and national VHA management. The development of the EDR illustrates how adoption of a corporate perspective can offer significant database improvements at reasonable cost and with modest impact on the legacy system.

  2. Adaptive weighted local textural features for illumination, expression, and occlusion invariant face recognition

    NASA Astrophysics Data System (ADS)

    Cui, Chen; Asari, Vijayan K.

    2014-03-01

    Biometric features such as fingerprints, iris patterns, and face features help to identify people and restrict access to secure areas by performing advanced pattern analysis and matching. Face recognition is one of the most promising biometric methodologies for human identification in a non-cooperative security environment. However, the recognition results obtained by face recognition systems are affected by several variations that may happen to the patterns in an unrestricted environment. As a result, several algorithms have been developed for extracting different facial features for face recognition. Due to the various possible challenges of data captured at different lighting conditions, viewing angles, facial expressions, and partial occlusions in natural environmental conditions, automatic facial recognition still remains a difficult issue that needs to be resolved. In this paper, we propose a novel approach to tackling some of these issues by analyzing local textural descriptions for facial feature representation. The textural information is extracted by an enhanced local binary pattern (ELBP) description of all the local regions of the face. The relationship of each pixel with respect to its neighborhood is extracted and employed to calculate the new representation. ELBP reconstructs a much better textural feature extraction vector from an original gray-level image in different lighting conditions. The dimensionality of the texture image is reduced by principal component analysis performed on each local face region. Each low-dimensional vector representing a local region is then weighted based on the significance of the sub-region. The weight of each sub-region is determined by the local variance estimate of the respective region, which represents the significance of that region. The final facial textural feature vector is obtained by concatenating the reduced-dimensional weight sets of all the modules (sub-regions) of the face image. Experiments conducted on various popular face databases show promising performance of the proposed algorithm in varying lighting, expression, and partial occlusion conditions. Four databases were used for testing the performance of the proposed system: the Yale Face database, the Extended Yale Face database B, the Japanese Female Facial Expression database, and the CMU AMP Facial Expression database. The experimental results on all four databases show the effectiveness of the proposed system. Also, the computation cost is lower because of the simplified calculation steps. Research work is progressing to investigate the effectiveness of the proposed face recognition method under pose-varying conditions as well. It is envisaged that a multi-lane approach of frameworks trained at different pose bins, together with an appropriate voting strategy, would lead to a good recognition rate in such situations.
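
    The following sketch illustrates the weighted local-texture idea with a plain LBP operator standing in for the enhanced ELBP descriptor (scikit-image, single placeholder image); with a gallery of training faces one would additionally fit PCA per sub-region, which is only noted in a comment here.

        # LBP codes per pixel, sub-region weighting by local variance, concatenation.
        import numpy as np
        from skimage.data import camera
        from skimage.feature import local_binary_pattern

        face = camera()[:256, :256]                      # stand-in for an aligned face image
        codes = local_binary_pattern(face, P=8, R=1.0)   # plain LBP in place of ELBP

        blocks = [codes[r:r + 64, c:c + 64].ravel()      # 4x4 grid of sub-regions
                  for r in range(0, 256, 64) for c in range(0, 256, 64)]
        weights = [b.var() for b in blocks]              # significance = local variance

        # With a gallery of faces one would fit PCA per region before weighting;
        # a single image can only illustrate the weighting and concatenation step.
        feature = np.concatenate([w * (b / np.linalg.norm(b))
                                  for w, b in zip(weights, blocks)])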

  3. Iris indexing based on local intensity order pattern

    NASA Astrophysics Data System (ADS)

    Emerich, Simina; Malutan, Raul; Crisan, Septimiu; Lefkovits, Laszlo

    2017-03-01

    In recent years, iris biometric systems have increased in popularity and have been shown to be capable of handling large-scale databases. The main advantages of these systems are accuracy and reliability. Proper classification of iris patterns is expected to reduce matching time in huge databases. This paper presents an iris indexing technique based on the Local Intensity Order Pattern. The performance of the present approach is evaluated on the UPOL database and compared with other recent systems designed for iris indexing. The results illustrate the potential of the proposed method for large-scale iris identification.

  4. Co-PylotDB - A Python-Based Single-Window User Interface for Transmitting Information to a Database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnette, Daniel W.

    2012-01-05

    Co-PylotDB, written completely in Python, provides a user interface (UI) with which to select user and data file(s), directories, and file content, and provide or capture various other information for sending data collected from running any computer program to a pre-formatted database table for persistent storage. The interface allows the user to select input, output, make, source, executable, and qsub files. It also provides fields for specifying the machine name on which the software was run, capturing compile and execution lines, and listing relevant user comments. Data automatically captured by Co-PylotDB and sent to the database are user, current directory, local hostname, current date, and time of send. The UI provides fields for logging into a local or remote database server, specifying a database and a table, and sending the information to the selected database table. If a server is not available, the UI provides for saving the command that would have saved the information to a database table for either later submission or for sending via email to a collaborator who has access to the desired database.
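
    The automatically captured fields and the final send-to-table step might look like the sketch below, with sqlite standing in for a database server; the table layout is hypothetical, not Co-PylotDB's actual schema.

        # Capture user, cwd, hostname and send time; insert into a table.
        import getpass, os, socket, sqlite3
        from datetime import datetime

        row = (getpass.getuser(),            # user
               os.getcwd(),                  # current directory
               socket.gethostname(),         # local hostname
               datetime.now().isoformat())   # date and time of send

        conn = sqlite3.connect("copylot_demo.db")
        conn.execute("""CREATE TABLE IF NOT EXISTS run_log
                        (user TEXT, cwd TEXT, host TEXT, sent_at TEXT)""")
        conn.execute("INSERT INTO run_log VALUES (?, ?, ?, ?)", row)
        conn.commit()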

  5. Region 9 2010 Census Web Service

    EPA Pesticide Factsheets

    This web service displays data collected during the 2010 U.S. Census. The data are organized into layers representing Tract, Block, and Block Group visualizations. Geography: the TIGER/Line Files are feature classes and related database files that are an extract of selected geographic and cartographic information from the U.S. Census Bureau's Master Address File / Topologically Integrated Geographic Encoding and Referencing (MAF/TIGER) Database (MTDB). The MTDB represents a seamless national file with no overlaps or gaps between parts; however, each TIGER/Line File is designed to stand alone as an independent data set, or they can be combined to cover the entire nation. Census tracts are small, relatively permanent statistical subdivisions of a county or equivalent entity, and were defined by local participants as part of the 2010 Census Participant Statistical Areas Program. The Census Bureau delineated the census tracts in situations where no local participant existed or where all the potential participants declined to participate. The primary purpose of census tracts is to provide a stable set of geographic units for the presentation of census data and comparison back to previous decennial censuses. Census tracts generally have a population size between 1,200 and 8,000 people, with an optimum size of 4,000 people. When first delineated, census tracts were designed to be homogeneous with respect to population characteristics, economic status

  6. The collation of forensic DNA case data into a multi-dimensional intelligence database.

    PubMed

    Walsh, S J; Moss, D S; Kliem, C; Vintiner, G M

    2002-01-01

    The primary aim of any DNA Database is to link individuals to unsolved offenses and unsolved offenses to each other via DNA profiling. This aim has been successfully realised during the operation of the New Zealand (NZ) DNA Databank over the past five years. The DNA Intelligence Project (DIP), a collaborative project involving NZ forensic and law enforcement agencies, interrogated the forensic case data held on the NZ DNA databank and collated it into a functional intelligence database. This database has been used to identify significant trends which direct Police and forensic personnel towards the most appropriate use of DNA technology. Intelligence is being provided in areas such as the level of usage of DNA techniques in criminal investigation, the relative success of crime scene samples and the geographical distribution of crimes. The DIP has broadened the dimensions of the information offered through the NZ DNA Databank and has furthered the understanding and investigative capability of both Police and forensic scientists. The outcomes of this research fit soundly with the current policies of 'intelligence-led policing', which are being adopted by Police jurisdictions locally and overseas.

  7. JBioWH: an open-source Java framework for bioinformatics data integration

    PubMed Central

    Vera, Roberto; Perez-Riverol, Yasset; Perez, Sonia; Ligeti, Balázs; Kertész-Farkas, Attila; Pongor, Sándor

    2013-01-01

    The Java BioWareHouse (JBioWH) project is an open-source platform-independent programming framework that allows a user to build his/her own integrated database from the most popular data sources. JBioWH can be used for intensive querying of multiple data sources and the creation of streamlined task-specific data sets on local PCs. JBioWH is based on a MySQL relational database scheme and includes JAVA API parser functions for retrieving data from 20 public databases (e.g. NCBI, KEGG, etc.). It also includes a client desktop application for (non-programmer) users to query data. In addition, JBioWH can be tailored for use in specific circumstances, including the handling of massive queries for high-throughput analyses or CPU intensive calculations. The framework is provided with complete documentation and application examples and it can be downloaded from the Project Web site at http://code.google.com/p/jbiowh. A MySQL server is available for demonstration purposes at hydrax.icgeb.trieste.it:3307. Database URL: http://code.google.com/p/jbiowh PMID:23846595

  8. Using GenBank.

    PubMed

    Wheeler, David

    2007-01-01

    GenBank(R) is a comprehensive database of publicly available DNA sequences for more than 205,000 named organisms and for more than 60,000 within the embryophyta, obtained through submissions from individual laboratories and batch submissions from large-scale sequencing projects. Daily data exchange with the European Molecular Biology Laboratory (EMBL) in Europe and the DNA Data Bank of Japan ensures worldwide coverage. GenBank is accessible through the National Center for Biotechnology Information (NCBI) retrieval system, Entrez, which integrates data from the major DNA and protein sequence databases with taxonomy, genome, mapping, protein structure, and domain information and the biomedical journal literature through PubMed. BLAST provides sequence similarity searches of GenBank and other sequence databases. Complete bimonthly releases and daily updates of the GenBank database are available through FTP. GenBank usage scenarios ranging from local analyses of the data available through FTP to online analyses supported by the NCBI Web-based tools are discussed. To access GenBank and its related retrieval and analysis services, go to the NCBI Homepage at http://www.ncbi.nlm.nih.gov.
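
    A minimal example of the Entrez retrieval route mentioned above, using Biopython's E-utilities wrapper; the accession number is illustrative, and a network connection (plus a contact e-mail address, as NCBI requests) is assumed.

        # Fetch one GenBank record via NCBI Entrez and parse it locally.
        from Bio import Entrez, SeqIO

        Entrez.email = "you@example.org"    # NCBI asks for a contact address
        handle = Entrez.efetch(db="nucleotide", id="U49845",
                               rettype="gb", retmode="text")
        record = SeqIO.read(handle, "genbank")
        handle.close()
        print(record.id, record.description, len(record.seq))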

  9. Evaluation of a National Call Center and a Local Alerts System for Detection of New Cases of Ebola Virus Disease - Guinea, 2014-2015.

    PubMed

    Lee, Christopher T; Bulterys, Marc; Martel, Lise D; Dahl, Benjamin A

    2016-03-11

    The epidemic of Ebola virus disease (Ebola) in West Africa began in Guinea in late 2013 (1), and on August 8, 2014, the World Health Organization (WHO) declared the epidemic a Public Health Emergency of International Concern (2). Guinea was declared Ebola-free on December 29, 2015, and is under a 90-day period of enhanced surveillance, following 3,351 confirmed and 453 probable cases of Ebola and 2,536 deaths (3). Passive surveillance for Ebola in Guinea has been conducted principally through the use of a telephone alert system. Community members and health facilities report deaths and suspected Ebola cases to local alert numbers operated by prefecture health departments or to a national toll-free call center. The national call center additionally functions as a source of public health information by responding to questions from the public about Ebola. To evaluate the sensitivity of the two systems and compare the sensitivity of the national call center with the local alerts system, the CDC country team performed probabilistic record linkage of the combined prefecture alerts database, as well as the national call center database, with the national viral hemorrhagic fever (VHF) database; the VHF database contains records of all known confirmed Ebola cases. Among 17,309 alert calls analyzed from the national call center, 71 were linked to 1,838 confirmed Ebola cases in the VHF database, yielding a sensitivity of 3.9%. The sensitivity of the national call center was highest in the capital city of Conakry (11.4%) and lower in other prefectures. In comparison, the local alerts system had a sensitivity of 51.1%. Local public health infrastructure plays an important role in surveillance in an epidemic setting.

  10. Designing a Zoo-Based Endangered Species Database.

    ERIC Educational Resources Information Center

    Anderson, Christopher L.

    1989-01-01

    Presented is a class activity that uses the database feature of the Appleworks program to create a database from which students may study endangered species. The use of a local zoo as a base of information about the animals is suggested. Procedures and follow-up activities are included. (CW)

  11. libChEBI: an API for accessing the ChEBI database.

    PubMed

    Swainston, Neil; Hastings, Janna; Dekker, Adriano; Muthukrishnan, Venkatesh; May, John; Steinbeck, Christoph; Mendes, Pedro

    2016-01-01

    ChEBI is a database and ontology of chemical entities of biological interest. It is widely used as a source of identifiers to facilitate unambiguous reference to chemical entities within biological models, databases, ontologies and literature. ChEBI contains a wealth of chemical data, covering over 46,500 distinct chemical entities, and related data such as chemical formula, charge, molecular mass, structure, synonyms and links to external databases. Furthermore, ChEBI is an ontology, and thus provides meaningful links between chemical entities. Unlike many other resources, ChEBI is fully human-curated, providing a reliable, non-redundant collection of chemical entities and related data. While ChEBI is supported by a web service for programmatic access and a number of download files, it does not have an API library to facilitate the use of ChEBI and its data in cheminformatics software. To provide this missing functionality, libChEBI, a comprehensive API library for accessing ChEBI data, is introduced. libChEBI is available in Java, Python and MATLAB versions from http://github.com/libChEBI, and provides full programmatic access to all data held within the ChEBI database through a simple and documented API. libChEBI is reliant upon the (automated) download and regular update of flat files that are held locally. As such, libChEBI can be embedded in both on- and off-line software applications. libChEBI allows better support of ChEBI and its data in the development of new cheminformatics software. Covering three key programming languages, it allows for the entirety of the ChEBI database to be accessed easily and quickly through a simple API. All code is open access and freely available.
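
    Usage of the Python flavour might look like the sketch below; the class and method names follow the project's documented API, but treat the exact signatures as assumptions of this sketch rather than a definitive reference.

        # Look up a ChEBI entity and walk its ontology links with libChEBI (Python).
        from libchebipy import ChebiEntity   # assumed import per project docs

        water = ChebiEntity("CHEBI:15377")   # ChEBI identifier for water
        print(water.get_name(), water.get_charge(), water.get_mass())

        # Ontology relations: outgoing edges from this entity in the ChEBI graph.
        for rel in water.get_outgoings():
            print(rel.get_type(), rel.get_target_chebi_id())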

  12. DECADE web portal: toward the integration of MaGa, EarthChem and VOTW data systems to further the knowledge on Earth degassing

    NASA Astrophysics Data System (ADS)

    Cardellini, Carlo; Frigeri, Alessandro; Lehnert, Kerstin; Ash, Jason; McCormick, Brendan; Chiodini, Giovanni; Fischer, Tobias; Cottrell, Elizabeth

    2015-04-01

    The release of volatiles from the Earth's interior takes place in both volcanic and non-volcanic areas of the planet. The comprehension of such a complex process, and the improvement of current estimates of global carbon emissions, will greatly benefit from the integration of geochemical, petrological and volcanological data. At present, major online data repositories relevant to studies of degassing are not linked or interoperable. In the framework of the Deep Earth Carbon Degassing (DECADE) initiative of the Deep Carbon Observatory (DCO), we are developing interoperability between three data systems that will make their data accessible via the DECADE portal: (1) the Smithsonian Institution's Global Volcanism Program database (VOTW) of volcanic activity data, (2) the EarthChem databases for geochemical and geochronological data of rocks and melt inclusions, and (3) the MaGa database (Mapping Gas emissions), which contains compositional and flux data of gases released at volcanic and non-volcanic degassing sites. The DECADE web portal will create a powerful search engine over these databases from a single entry point and will return comprehensive multi-component datasets. A user will be able, for example, to obtain data relating to the compositions of emitted gases, the compositions and age of the erupted products, and coincident activity for a specific volcano. This level of capability requires complete synergy between the databases, including the availability of standards-based web services (WMS, WFS) at all data systems. Data and metadata can thus be extracted from each system without interfering with each database's local schema or being replicated to achieve integration at the DECADE web portal. The DECADE portal will enable new synoptic perspectives on the Earth degassing process, allowing users to explore Earth-degassing datasets over previously unexplored spatial or temporal ranges.

  14. A Quality-Control-Oriented Database for a Mesoscale Meteorological Observation Network

    NASA Astrophysics Data System (ADS)

    Lussana, C.; Ranci, M.; Uboldi, F.

    2012-04-01

    In the operational context of a local weather service, data accessibility and quality issues must be managed by taking into account a wide range of user needs. This work describes the structure and the choices made for the operational implementation of a database system storing data from highly automated observing stations, metadata, and information on data quality. Lombardy's environmental protection agency, ARPA Lombardia, manages a highly automated mesoscale meteorological network. A Quality Assurance System (QAS) ensures that reliable observational information is collected and disseminated to the users. The weather unit of ARPA Lombardia, at once an important QAS component and an intensive data user, has developed a database specifically aimed at: (1) providing quick access to data for operational activities and (2) ensuring data quality for real-time applications by means of an Automatic Data Quality Control (ADQC) procedure. Quantities stored in the archive include hourly aggregated observations of precipitation amount, temperature, wind, relative humidity, pressure, and global and net solar radiation. The ADQC performs several independent tests on raw data and compares their results in a decision-making procedure. An important ADQC component is the Spatial Consistency Test based on Optimal Interpolation. Interpolated and cross-validation analysis values are also stored in the database, providing further information to human operators and useful estimates in case of missing data. The technical solution adopted is based on a LAMP (Linux, Apache, MySQL and PHP) stack, an open source environment suitable for both development and operational practice. The ADQC procedure itself is performed by R scripts directly interacting with the MySQL database. Users and network managers can access the database through a set of web-based PHP applications.
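
    The ADQC is implemented in R against MySQL; purely as an illustration of the pattern, here is a Python sketch of one such test, a plausibility (gross-error) check flagging rows in a database. SQLite keeps the example self-contained; table and column names are made up:

    ```python
    # Python sketch of one ADQC-style test (the paper uses R against MySQL):
    # a plausibility check that flags observations outside a physical range.
    # SQLite keeps the example self-contained; table and columns are made up.
    import sqlite3

    PLAUSIBLE_T = (-30.0, 45.0)  # plausible hourly mean temperature, deg C

    conn = sqlite3.connect(':memory:')
    conn.execute('CREATE TABLE obs (station TEXT, hour TEXT, t REAL, qc_flag INTEGER)')
    conn.executemany('INSERT INTO obs VALUES (?, ?, ?, NULL)', [
        ('MI01', '2012-01-15T10:00', 3.2),
        ('MI02', '2012-01-15T10:00', 57.0),  # implausible value
    ])

    # Gross-error (plausibility) test: flag = 1 for values outside the range.
    conn.execute('UPDATE obs SET qc_flag = CASE WHEN t BETWEEN ? AND ? '
                 'THEN 0 ELSE 1 END', PLAUSIBLE_T)

    for row in conn.execute('SELECT * FROM obs'):
        print(row)
    ```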

  15. Effect of microstructure on the elasto-viscoplastic deformation of dual phase titanium structures

    NASA Astrophysics Data System (ADS)

    Ozturk, Tugce; Rollett, Anthony D.

    2018-02-01

    The present study is devoted to the creation of a process-structure-property database for dual-phase titanium alloys, through a synthetic microstructure generation method and a mesh-free, fast-Fourier-transform-based micromechanical model that operates on a discretized image of the microstructure. A sensitivity analysis is performed as a precursor to determine the statistically representative volume element size for creating 3D synthetic microstructures based on additively manufactured Ti-6Al-4V characteristics, which are further modified to expand the database for features of interest, e.g., lath thickness. Sets of titanium hardening parameters are extracted from the literature, and the relative effect of the chosen microstructural features is quantified through comparisons of average and local field distributions.

  16. The HARPS-N archive through a Cassandra, NoSQL database suite?

    NASA Astrophysics Data System (ADS)

    Molinari, Emilio; Guerra, Jose; Harutyunyan, Avet; Lodi, Marcello; Martin, Adrian

    2016-07-01

    The TNG-INAF is developing the science archive for the WEAVE instrument. The underlying architecture of the archive is based on a non-relational database, more precisely on an Apache Cassandra cluster, which uses NoSQL technology. In order to test and validate this architecture, we created a local archive populated with all the HARPS-N spectra collected at the TNG since the instrument's start of operations in mid-2012, and developed tools for the analysis of this data set. The HARPS-N data set is two orders of magnitude smaller than WEAVE's, but we want to demonstrate the ability to walk through a complete data set and produce scientific output as valuable as that produced by an ordinary pipeline, though without directly accessing the FITS files. The analytics are performed with Apache Solr and Spark, and on a relational PostgreSQL database. As an example, we produce observables such as metallicity indexes for the targets in the archive and compare the results with those coming from the HARPS-N regular data reduction software. The aim of this experiment is to explore the viability of a high-availability cluster and distributed NoSQL database as a platform for complex scientific analytics on a large data set, which will then be ported to the WEAVE Archive System (WAS) that we are developing for the WEAVE multi-object fiber spectrograph.

  17. SelTarbase, a database of human mononucleotide-microsatellite mutations and their potential impact to tumorigenesis and immunology

    PubMed Central

    Woerner, Stefan M.; Yuan, Yan P.; Benner, Axel; Korff, Sebastian; von Knebel Doeberitz, Magnus; Bork, Peer

    2010-01-01

    About 15% of human colorectal cancers and, to varying degrees, other tumor entities, as well as nearly all tumors related to Lynch syndrome, are hallmarked by microsatellite instability (MSI) as a result of a defective mismatch repair system. The functional impact of the resulting mutations depends on their genomic localization. Alterations within coding mononucleotide repeat tracts (MNRs) can lead to protein truncation and formation of neopeptides, whereas alterations within untranslated MNRs can alter transcription levels or transcript stability. These mutations may provide a selective advantage or disadvantage to affected cells. They may further affect the biology of microsatellite-unstable cells, e.g. by generating immunogenic peptides derived from frameshift mutations. The Selective Targets database (http://www.seltarbase.org) is a curated database of a growing body of public MNR mutation data in microsatellite-unstable human tumors. Regression calculations for various MSI-H tumor entities, indicating statistically deviant mutation frequencies, point to TGFBR2, BAX, ACVR2A and other genes that are shown or strongly suspected to be involved in MSI tumorigenesis. Many useful tools for further analyzing genomic DNA, derived wild-type and mutated cDNAs, and peptides are integrated. A comprehensive database of all human coding, untranslated, non-coding-RNA and intronic MNRs (MNR_ensembl) is also included. SelTarbase thus presents a versatile instrument for MSI-carcinogenesis-related research, diagnostics and therapy. PMID:19820113

  18. Most of the tight positional conservation of transcription factor binding sites near the transcription start site reflects their co-localization within regulatory modules.

    PubMed

    Acevedo-Luna, Natalia; Mariño-Ramírez, Leonardo; Halbert, Armand; Hansen, Ulla; Landsman, David; Spouge, John L

    2016-11-21

    Transcription factors (TFs) form complexes that bind regulatory modules (RMs) within DNA, to control specific sets of genes. Some transcription factor binding sites (TFBSs) near the transcription start site (TSS) display tight positional preferences relative to the TSS. Furthermore, near the TSS, RMs can co-localize TFBSs with each other and the TSS. The proportion of TFBS positional preferences due to TFBS co-localization within RMs is unknown, however. ChIP experiments confirm co-localization of some TFBSs genome-wide, including near the TSS, but they typically examine only a few TFs at a time, using non-physiological conditions that can vary from lab to lab. In contrast, sequence analysis can examine many TFs uniformly and methodically, broadly surveying the co-localization of TFBSs with tight positional preferences relative to the TSS. Our statistical analysis found 43 significant sets of human motifs in the JASPAR TF Database with positional preferences relative to the TSS, 38 of which were tight (±5 bp). Each set of motifs corresponded to a gene group of 135 to 3304 genes, with 42/43 (98%) gene groups independently validated by DAVID, a gene ontology database, with FDR < 0.05. Motifs corresponding to two TFBSs in an RM should co-occur more often than by chance alone, enriching the intersection of the gene groups corresponding to the two TFs. Thus, a gene-group intersection systematically enriched beyond chance alone provides evidence that the two TFs participate in an RM. Of the 903 = 43*42/2 intersections of the 43 significant gene groups, we found 768/903 (85%) pairs of gene groups with significantly enriched intersections, with 564/768 (73%) intersections independently validated by DAVID with FDR < 0.05. A user-friendly web site at http://go.usa.gov/3kjsH permits biologists to explore the interaction network of our TFBSs to identify candidate subunit RMs. Gene duplication and convergent evolution within a genome provide obvious biological mechanisms for replicating an RM near the TSS that binds a particular TF subunit. Of all intersections of our 43 significant gene groups, 85% were significantly enriched, with 73% of the significant enrichments independently validated by gene ontology. The co-localization of TFBSs within RMs therefore likely explains much of the tight TFBS positional preferences near the TSS.
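
    The enrichment argument can be made concrete with a hypergeometric test: given two gene groups drawn from a common background, how surprising is their observed intersection? A sketch with illustrative numbers (not the paper's data):

    ```python
    # Sketch of the enrichment test described above: given two gene groups
    # drawn from a common background, is their intersection larger than chance
    # alone would predict? All numbers below are illustrative placeholders.
    from scipy.stats import hypergeom

    M = 20000   # background: total genes considered
    n = 1500    # size of gene group A (genes with motif A near the TSS)
    N = 900     # size of gene group B
    k = 130     # observed intersection |A ∩ B|

    # P(X >= k) under the hypergeometric null of independent draws.
    p_value = hypergeom.sf(k - 1, M, n, N)
    print(f'enrichment p-value: {p_value:.3g}')
    ```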

  19. Energy-dominated local carbon emissions in Beijing 2007: inventory and input-output analysis.

    PubMed

    Guo, Shan; Liu, J B; Shao, Ling; Li, J S; An, Y R

    2012-01-01

    For greenhouse gas (GHG) emissions by Beijing economy 2007, a concrete emission inventory covering carbon dioxide (CO(2)), methane (CH(4)), and nitrous oxide (N(2)O) is presented and associated with an input-output analysis to reveal the local GHG embodiment in final demand and trade without regard to imported emissions. The total direct GHG emissions amount to 1.06E + 08 t CO(2)-eq, of which energy-related CO(2) emissions comprise 90.49%, non-energy-related CO(2) emissions 6.35%, CH(4) emissions 2.33%, and N(2)O emissions 0.83%, respectively. In terms of energy-related CO(2) emissions, the largest source is coal with a percentage of 53.08%, followed by coke with 10.75% and kerosene with 8.44%. Sector 26 (Construction Industry) holds the top local emissions embodied in final demand of 1.86E + 07 t CO(2)-eq due to its considerable capital, followed by energy-intensive Sectors 27 (Transport and Storage) and 14 (Smelting and Pressing of Ferrous and Nonferrous Metals). The GHG emissions embodied in Beijing's exports are 4.90E + 07 t CO(2)-eq, accounting for 46.01% of the total emissions embodied in final demand. The sound scientific database totally based on local emissions is an important basis to make effective environment and energy policies for local decision makers.

  20. Energy-Dominated Local Carbon Emissions in Beijing 2007: Inventory and Input-Output Analysis

    PubMed Central

    Guo, Shan; Liu, J. B.; Shao, Ling; Li, J. S.; An, Y. R.

    2012-01-01

    For greenhouse gas (GHG) emissions by Beijing economy 2007, a concrete emission inventory covering carbon dioxide (CO2), methane (CH4), and nitrous oxide (N2O) is presented and associated with an input-output analysis to reveal the local GHG embodiment in final demand and trade without regard to imported emissions. The total direct GHG emissions amount to 1.06E + 08 t CO2-eq, of which energy-related CO2 emissions comprise 90.49%, non-energy-related CO2 emissions 6.35%, CH4 emissions 2.33%, and N2O emissions 0.83%, respectively. In terms of energy-related CO2 emissions, the largest source is coal with a percentage of 53.08%, followed by coke with 10.75% and kerosene with 8.44%. Sector 26 (Construction Industry) holds the top local emissions embodied in final demand of 1.86E + 07 t CO2-eq due to its considerable capital, followed by energy-intensive Sectors 27 (Transport and Storage) and 14 (Smelting and Pressing of Ferrous and Nonferrous Metals). The GHG emissions embodied in Beijing's exports are 4.90E + 07 t CO2-eq, accounting for 46.01% of the total emissions embodied in final demand. The sound scientific database totally based on local emissions is an important basis to make effective environment and energy policies for local decision makers. PMID:23193385

  1. A Java API for working with PubChem datasets

    PubMed Central

    Southern, Mark R.; Griffin, Patrick R.

    2011-01-01

    Summary: PubChem is a public repository of chemical structures and associated biological activities. The PubChem BioAssay database contains assay descriptions, conditions, readouts and biological screening results that have been submitted by the biomedical research community. The PubChem web site and Power User Gateway (PUG) web service allow users to interact with the data, and raw files are available via FTP. These resources are helpful to many, but there can also be great benefit in using a software API to manipulate the data. Here, we describe a Java API with entity objects mapped to the PubChem schema and with wrapper functions for calling the NCBI eUtilities and PubChem PUG web services. PubChem BioAssays and associated chemical compounds can then be queried and manipulated in a local relational database. Features include chemical structure searching and the generation and display of curve fits from stored dose–response experiments, something that is not yet available within PubChem itself. The aim is to provide researchers with a fast, consistent, queryable local resource from which to manipulate PubChem BioAssays in a database-agnostic manner. It is not intended as an end-user tool but as a platform for further automation and tools development. Availability: http://code.google.com/p/pubchemdb Contact: southern@scripps.edu PMID:21216779
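
    The library itself is a Java API; as a conceptual Python counterpart illustrating the same local-relational idea, the sketch below pulls compound properties from the PubChem PUG REST service and stores them in a SQLite table. Only the documented property endpoint is used; the table layout is illustrative:

    ```python
    # Conceptual Python counterpart to the Java API described above: fetch
    # compound properties from PubChem PUG REST and store them in a local
    # relational (SQLite) table for querying. The schema is illustrative.
    import sqlite3
    import urllib.request
    import json

    CIDS = [2244, 3672]  # aspirin, ibuprofen
    URL = ('https://pubchem.ncbi.nlm.nih.gov/rest/pug/compound/cid/{}/'
           'property/MolecularFormula,MolecularWeight/JSON')

    conn = sqlite3.connect('pubchem_local.db')
    conn.execute('CREATE TABLE IF NOT EXISTS compound '
                 '(cid INTEGER PRIMARY KEY, formula TEXT, mw REAL)')

    for cid in CIDS:
        with urllib.request.urlopen(URL.format(cid)) as resp:
            props = json.load(resp)['PropertyTable']['Properties'][0]
        conn.execute('INSERT OR REPLACE INTO compound VALUES (?, ?, ?)',
                     (cid, props['MolecularFormula'],
                      float(props['MolecularWeight'])))
    conn.commit()

    print(conn.execute('SELECT * FROM compound ORDER BY mw').fetchall())
    ```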

  2. A hybrid CNN feature model for pulmonary nodule malignancy risk differentiation.

    PubMed

    Wang, Huafeng; Zhao, Tingting; Li, Lihong Connie; Pan, Haixia; Liu, Wanquan; Gao, Haoqi; Han, Fangfang; Wang, Yuehai; Qi, Yifan; Liang, Zhengrong

    2018-01-01

    The malignancy risk differentiation of pulmonary nodules is one of the most challenging tasks in computer-aided diagnosis (CADx). Most recently reported CADx methods or schemes based on texture and shape estimation have shown relatively satisfactory performance in differentiating the malignancy risk level of nodules detected in lung cancer screening. However, existing CADx schemes tend to detect and analyze characteristics of pulmonary nodules from a statistical perspective according to local features only. Motivated by the learning ability of the convolutional neural network (CNN), which simulates the human neural network for target recognition, and by our previous research on texture features, we present a hybrid model that takes both global and local features into consideration for pulmonary nodule differentiation, using the largest public database, founded by the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI). By comparing three types of CNN models, two of which were newly proposed by us, we observed that the multi-channel CNN model yielded the best capacity for discriminating the malignancy risk of the nodules based on the projection of distributions of extracted features. Moreover, the CADx scheme using the new multi-channel CNN model outperformed our previously developed CADx scheme using 3D texture feature analysis, increasing the computed area under the receiver operating characteristic curve (AUC) from 0.9441 to 0.9702.
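
    As a rough illustration of the multi-channel idea (two input branches seeing the same nodule at different scales, merged before classification), here is a hypothetical Keras sketch; it is an example of the architecture family, not the authors' exact model:

    ```python
    # Hypothetical sketch of a multi-channel CNN of the general kind described:
    # two branches see the same nodule at different scales (global context and
    # local texture) and their features are merged before classification.
    import tensorflow as tf
    from tensorflow.keras import layers, Model

    def branch(inp):
        x = layers.Conv2D(32, 3, activation='relu')(inp)
        x = layers.MaxPooling2D()(x)
        x = layers.Conv2D(64, 3, activation='relu')(x)
        x = layers.GlobalAveragePooling2D()(x)
        return x

    global_patch = layers.Input(shape=(64, 64, 1))  # wider context around nodule
    local_patch = layers.Input(shape=(32, 32, 1))   # close-up of nodule texture

    merged = layers.concatenate([branch(global_patch), branch(local_patch)])
    out = layers.Dense(1, activation='sigmoid')(merged)  # malignancy risk score

    model = Model([global_patch, local_patch], out)
    model.compile(optimizer='adam', loss='binary_crossentropy',
                  metrics=[tf.keras.metrics.AUC()])
    model.summary()
    ```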

  3. Food composition database development for between country comparisons.

    PubMed

    Merchant, Anwar T; Dehghan, Mahshid

    2006-01-19

    Nutritional assessment by diet analysis is a two-step process consisting of evaluation of food consumption and conversion of food into nutrient intake using a food composition database, which lists the mean nutritional values for a given food portion. Most reports in the literature focus on minimizing errors in the estimation of food consumption, but the selection of the food composition table used for nutrient estimation is also a source of error. We are conducting a large international prospective study and need to compare diet, assessed by food frequency questionnaires, in a comparable manner across different countries. We have prepared a multi-country food composition database for nutrient estimation in all the countries participating in our study. The nutrient database is primarily based on the USDA food composition database, modified appropriately with reference to local food composition tables and supplemented with recipes of locally eaten mixed dishes. By doing so we have ensured that the units of measurement, the method of selecting foods for testing, and the assays used for nutrient estimation are consistent and as current as possible, while still taking some local variations into account. Using this common metric for nutrient assessment will reduce differential errors in nutrient estimation and improve the validity of between-country comparisons.

  4. Comprehensive Case Analysis on Participatory Approaches, from Nexus Perspectives

    NASA Astrophysics Data System (ADS)

    Masuhara, N.; Baba, K.

    2014-12-01

    According to the messages from the Bonn2011 Conference, involving local communities fully and effectively in the planning and implementation processes related to the water, energy and food nexus is strongly needed for local ownership and commitment. Participatory approaches such as deliberative polling and "joint fact-finding" have been applied to resolve various environmental disputes, but the drivers and barriers in such processes have not been sufficiently analyzed in a comprehensive manner, especially in Japan. Our research aims to explore solutions for conflicts in the context of the water-energy-food nexus in local communities. To achieve this, we clarify the drivers and barriers of each approach applied so far in water, energy and food policy, focusing on how scientific facts are handled. Our primary hypothesis is that multi-issue solutions achieved through policy integration will be more effective for conflicts in the water-energy-food nexus than single-issue solutions within each policy. One of the key factors in formulating effective solutions is to integrate "scientific fact (expert knowledge)" and "local knowledge". Given this primary hypothesis, we further assume that consensus building benefits from early opportunities to resolve disagreements of "framing", in which stakeholders can indicate to experts the points on which scientific facts are needed, and experts can reach a common understanding of the scientific facts early in the process. To verify these hypotheses, we are developing a database of cases in which such participatory approaches have been applied to resolve environmental disputes, based on a literature survey of journal articles and public documents on Japanese cases. At present, the database is under construction, but preliminary indications are that the conditions for framing and for providing scientific information are important driving factors for problem solving and consensus building. It will also be important to refine these driving factors and to evaluate whether the database components represent each process adequately.

  5. Evaluation and Analysis of Regional Best Management Practices in San Diego, California (USA)

    NASA Astrophysics Data System (ADS)

    Flint, K.; Kinoshita, A. M.

    2017-12-01

    In urban areas, surface water quality is often impaired by pollutants transported in stormwater runoff. To maintain and improve surface water quality, the United States Clean Water Act (CWA) requires an evaluation of available water quality information to develop a list of impaired water bodies and establish contaminant restrictions. Structural Best Management Practices (BMPs) are designed to reduce runoff volume and/or pollutant concentrations to comply with CWA requirements. Local policy makers and managers require an improved understanding of the costs and benefits associated with BMP installation, performance, and maintenance. The International Stormwater BMP Database (Database) is an online platform for the submittal of information about existing BMPs, such as cost, design details, and statistical analysis of influent and effluent pollutant concentrations. While the Database provides an aggregation of data that supports analysis of overall BMP performance at international and national scales, the sparse spatial distribution of the data is not suitable for regional and local analysis. This research conducts an extensive review of local inventory and a spatial analysis of existing permanent BMPs throughout the San Diego River watershed in California, USA. Information collected from cities within the San Diego River watershed will include BMP types, locations, dates of installation, costs, expected removal efficiencies, monitoring data, and records of maintenance. Aggregating and mapping this information will facilitate BMP evaluation, specifically the identification of spatial trends, inconsistencies in BMP performance, and gaps in current records. Regression analysis will provide insight into the nature and significance of correlations between BMP performance and physical characteristics such as land use, soil type, and proximity to impaired waters. This analysis will also produce a metric of relative BMP performance and provide a basis for future predictions of BMP effectiveness. Ultimately, results from this work will provide information to local governments and agencies for prioritizing, maintaining and monitoring BMPs, and for improving hydrologic and water quality modeling in urban systems subject to compliance.

  6. Digital Forensics Research: The Next 10 Years

    DTIC Science & Technology

    2010-01-01

    techniques were developed primarily for data recovery. For example, Wood et al. relate a story about two local data recovery experts working for 70 h to...recover the only copy of a highly fragmented database file inadvertently erased by a careless researcher (pp. 123-124, Wood et al., 1987). By the late 1980s... Apple, Blackberry, Windows Mobile, Symbian), more than a dozen "proprietary" systems, and more than 100,000 downloadable applications. There are

  7. Observational database for studies of nearby universe

    NASA Astrophysics Data System (ADS)

    Kaisina, E. I.; Makarov, D. I.; Karachentsev, I. D.; Kaisin, S. S.

    2012-01-01

    We present a description of a database of galaxies of the Local Volume (LVG), located within 10 Mpc of the Milky Way. It contains more than 800 objects. Based on an analysis of functional capabilities, we chose the PostgreSQL DBMS as the management system for our LVG database. Applying semantic modelling methods, we developed a physical ER-model of the database. We describe the architecture of the database table structure and the implemented web access, available at http://www.sao.ru/lv/lvgdb.

  8. Atomic Spectroscopic Databases at NIST

    NASA Technical Reports Server (NTRS)

    Reader, J.; Kramida, A. E.; Ralchenko, Yu.

    2006-01-01

    We describe recent work at NIST to develop and maintain databases for spectra, transition probabilities, and energy levels of atoms that are astrophysically important. Our programs to critically compile these data as well as to develop a new database to compare plasma calculations for atoms that are not in local thermodynamic equilibrium are also summarized.

  9. Facilitating quality control for spectra assignments of small organic molecules: nmrshiftdb2--a free in-house NMR database with integrated LIMS for academic service laboratories.

    PubMed

    Kuhn, Stefan; Schlörer, Nils E

    2015-08-01

    With its laboratory information management system, nmrshiftdb2 supports the integration of electronic lab administration and management into academic NMR facilities. It also offers the setup of a local database, while granting full access to nmrshiftdb2's World Wide Web database. This freely available system allows, on the one hand, the submission of orders for measurement, transfers recorded data automatically or manually, and enables download of spectra via a web interface, as well as integrated access to the prediction, search, and assignment tools of the NMR database for lab users. On the other hand, for the staff and lab administration, the flow of all orders can be supervised; administrative tools also include user and hardware management, a statistics functionality for accounting purposes, and a 'QuickCheck' function for assignment control, to facilitate quality control of assignments submitted to the (local) database. The laboratory information management system and database are based on a web front end and are therefore independent of the operating system in use. Copyright © 2015 John Wiley & Sons, Ltd.

  10. Variability sensitivity of dynamic texture based recognition in clinical CT data

    NASA Astrophysics Data System (ADS)

    Kwitt, Roland; Razzaque, Sharif; Lowell, Jeffrey; Aylward, Stephen

    2014-03-01

    Dynamic texture recognition using a database of template models has recently shown promising results for the task of localizing anatomical structures in Ultrasound video. In order to understand its clinical value, it is imperative to study its sensitivity to inter-patient variability as well as to acquisition parameters such as Ultrasound probe angle. Fully addressing patient and acquisition variability, however, would require a large database of clinical Ultrasound from many patients, acquired under a multitude of controlled conditions, e.g., using a tracked transducer. Since such data is not readily attainable, we advocate an alternative evaluation strategy using abdominal CT data as a surrogate. In this paper, we describe how to replicate Ultrasound variabilities by extracting subvolumes from CT and interpreting the image material as an ordered sequence of video frames. Utilizing this technique, and based on a database of abdominal CT from 45 patients, we report results on an organ (kidney) recognition task, in which we try to discriminate kidney subvolumes/videos from a collection of randomly sampled negative instances. We demonstrate that (1) dynamic texture recognition is relatively insensitive to inter-patient variation, while (2) viewing-angle variability needs to be accounted for in the template database. Since naively extending the template database to counteract variability issues can lead to impractical database sizes, we propose an alternative strategy based on automated identification of a small set of representative models.
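
    The CT-as-video trick is simple to express in code; the sketch below extracts a subvolume from a synthetic 3D array and reinterprets one axis as time:

    ```python
    # Sketch of the evaluation trick described: take a subvolume from a 3D CT
    # array and reinterpret its slices as an ordered sequence of video frames
    # for dynamic-texture recognition. The volume here is synthetic.
    import numpy as np

    ct = np.random.rand(256, 256, 256).astype(np.float32)  # stand-in CT volume

    def subvolume_as_video(vol, corner, size):
        """Extract a cubic subvolume and return it as (n_frames, h, w)."""
        z, y, x = corner
        sub = vol[z:z + size, y:y + size, x:x + size]
        return sub  # axis 0 plays the role of time in the 'video'

    frames = subvolume_as_video(ct, corner=(100, 80, 60), size=32)
    print(frames.shape)  # (32, 32, 32): 32 frames of 32x32 'video'
    ```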

  11. ESTree db: a Tool for Peach Functional Genomics

    PubMed Central

    Lazzari, Barbara; Caprera, Andrea; Vecchietti, Alberto; Stella, Alessandra; Milanesi, Luciano; Pozzi, Carlo

    2005-01-01

    Background The ESTree db represents a collection of Prunus persica expressed sequence tags (ESTs) and is intended as a resource for peach functional genomics. A total of 6,155 successful EST sequences were obtained from four in-house prepared cDNA libraries from Prunus persica mesocarps at different developmental stages. Another 12,475 peach EST sequences were downloaded from public databases and added to the ESTree db. An automated pipeline was prepared to process EST sequences using public software integrated with in-house-developed Perl scripts, and data were collected in a MySQL database. A PHP-based web interface was developed to query the database. Results The ESTree db version as of April 2005 encompasses 18,630 sequences representing eight libraries. Contig assembly was performed with CAP3. Putative single nucleotide polymorphism (SNP) detection was performed with the AutoSNP program, and a search engine was implemented to retrieve results. All the sequences and all the contig consensus sequences were annotated both with blastx against the GenBank nr db and with GOblet against the viridiplantae section of the Gene Ontology db. Links to NiceZyme (Expasy) and to the KEGG metabolic pathways were provided. A local BLAST utility is available. A text search utility allows querying and browsing the database. Statistics were provided on Gene Ontology occurrences to assign sequences to Gene Ontology categories. Conclusion The resulting database is a comprehensive resource of data and links related to peach EST sequences. The Sequence Report and Contig Report pages work as the web interface core structures, giving quick access to data related to each sequence/contig. PMID:16351742

  12. ESTree db: a tool for peach functional genomics.

    PubMed

    Lazzari, Barbara; Caprera, Andrea; Vecchietti, Alberto; Stella, Alessandra; Milanesi, Luciano; Pozzi, Carlo

    2005-12-01

    The ESTree db (http://www.itb.cnr.it/estree/) represents a collection of Prunus persica expressed sequence tags (ESTs) and is intended as a resource for peach functional genomics. A total of 6,155 successful EST sequences were obtained from four in-house prepared cDNA libraries from Prunus persica mesocarps at different developmental stages. Another 12,475 peach EST sequences were downloaded from public databases and added to the ESTree db. An automated pipeline was prepared to process EST sequences using public software integrated with in-house-developed Perl scripts, and data were collected in a MySQL database. A PHP-based web interface was developed to query the database. The ESTree db version as of April 2005 encompasses 18,630 sequences representing eight libraries. Contig assembly was performed with CAP3. Putative single nucleotide polymorphism (SNP) detection was performed with the AutoSNP program, and a search engine was implemented to retrieve results. All the sequences and all the contig consensus sequences were annotated both with blastx against the GenBank nr db and with GOblet against the viridiplantae section of the Gene Ontology db. Links to NiceZyme (Expasy) and to the KEGG metabolic pathways were provided. A local BLAST utility is available. A text search utility allows querying and browsing the database. Statistics were provided on Gene Ontology occurrences to assign sequences to Gene Ontology categories. The resulting database is a comprehensive resource of data and links related to peach EST sequences. The Sequence Report and Contig Report pages work as the web interface core structures, giving quick access to data related to each sequence/contig.

  13. PhosphoregDB: The tissue and sub-cellular distribution of mammalian protein kinases and phosphatases

    PubMed Central

    Forrest, Alistair RR; Taylor, Darrin F; Fink, J Lynn; Gongora, M Milena; Flegg, Cameron; Teasdale, Rohan D; Suzuki, Harukazu; Kanamori, Mutsumi; Kai, Chikatoshi; Hayashizaki, Yoshihide; Grimmond, Sean M

    2006-01-01

    Background Protein kinases and protein phosphatases are the fundamental components of phosphorylation dependent protein regulatory systems. We have created a database for the protein kinase-like and phosphatase-like loci of mouse that integrates protein sequence, interaction, classification and pathway information with the results of a systematic screen of their sub-cellular localization and tissue specific expression data mined from the GNF tissue atlas of mouse. Results The database lets users query where a specific kinase or phosphatase is expressed at both the tissue and sub-cellular levels. Similarly the interface allows the user to query by tissue, pathway or sub-cellular localization, to reveal which components are co-expressed or co-localized. A review of their expression reveals 30% of these components are detected in all tissues tested while 70% show some level of tissue restriction. Hierarchical clustering of the expression data reveals that expression of these genes can be used to separate the samples into tissues of related lineage, including 3 larger clusters of nervous tissue, developing embryo and cells of the immune system. By overlaying the expression, sub-cellular localization and classification data we examine correlations between class, specificity and tissue restriction and show that tyrosine kinases are more generally expressed in fewer tissues than serine/threonine kinases. Conclusion Together these data demonstrate that cell type specific systems exist to regulate protein phosphorylation and that for accurate modelling and for determination of enzyme substrate relationships the co-location of components needs to be considered. PMID:16504016

  14. ATGC database and ATGC-COGs: an updated resource for micro- and macro-evolutionary studies of prokaryotic genomes and protein family annotation

    PubMed Central

    Kristensen, David M.; Wolf, Yuri I.; Koonin, Eugene V.

    2017-01-01

    The Alignable Tight Genomic Clusters (ATGCs) database is a collection of closely related bacterial and archaeal genomes that provides several tools to aid research into evolutionary processes in the microbial world. Each ATGC is a taxonomy-independent cluster of 2 or more completely sequenced genomes that meet the objective criteria of a high degree of local gene order (synteny) and a small number of synonymous substitutions in the protein-coding genes. As such, each ATGC is suited for analysis of microevolutionary variations within a cohesive group of organisms (e.g. species), whereas the entire collection of ATGCs is useful for macroevolutionary studies. The ATGC database includes many forms of pre-computed data, in particular ATGC-COGs (Clusters of Orthologous Genes), multiple sequence alignments, a set of ‘index’ orthologs representing the most well-conserved members of each ATGC-COG, the phylogenetic tree of the organisms within each ATGC, etc. Although the ATGC database contains several million proteins from thousands of genomes organized into hundreds of clusters (roughly a 4-fold increase since the last version of the ATGC database), it is now built with completely automated methods and will be regularly updated following new releases of the NCBI RefSeq database. The ATGC database is hosted jointly at the University of Iowa at dmk-brain.ecn.uiowa.edu/ATGC/ and the NCBI at ftp.ncbi.nlm.nih.gov/pub/kristensen/ATGC/atgc_home.html. PMID:28053163

  15. A comprehensive linear programming tool to optimize formulations of ready-to-use therapeutic foods: an application to Ethiopia.

    PubMed

    Ryan, Kelsey N; Adams, Katherine P; Vosti, Stephen A; Ordiz, M Isabel; Cimo, Elizabeth D; Manary, Mark J

    2014-12-01

    Ready-to-use therapeutic food (RUTF) is the standard of care for children suffering from noncomplicated severe acute malnutrition (SAM). The objective was to develop a comprehensive linear programming (LP) tool to create novel RUTF formulations for Ethiopia. A systematic approach that surveyed international and national crop and animal food databases was used to create a global and local candidate ingredient database. The database included information about each ingredient regarding nutrient composition, ingredient category, regional availability, and food safety, processing, and price. An LP tool was then designed to compose novel RUTF formulations. For the example case of Ethiopia, the objective was to minimize the ingredient cost of RUTF; the decision variables were ingredient weights and the extent of use of locally available ingredients, and the constraints were nutritional and product-quality related. Of the new RUTF formulations found by the LP tool for Ethiopia, 32 were predicted to be feasible for creating a paste, and these were prepared in the laboratory. Palatable final formulations contained a variety of ingredients, including fish, different dairy powders, and various seeds, grains, and legumes. Nearly all of the macronutrient values calculated by the LP tool differed by <10% from results produced by laboratory analyses, but the LP tool consistently underestimated total energy. The LP tool can be used to develop new RUTF formulations that make more use of locally available ingredients. This tool has the potential to lead to production of a variety of low-cost RUTF formulations that meet international standards and thereby potentially allow more children to be treated for SAM. © 2014 American Society for Nutrition.
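
    The LP formulation described (minimize ingredient cost subject to nutritional and batch constraints) can be sketched with scipy.optimize.linprog; the ingredients, prices, nutrient contents and constraint levels below are illustrative placeholders, not the study's data:

    ```python
    # Toy sketch of the LP formulation described: minimize ingredient cost
    # subject to nutritional constraints. All numbers are hypothetical.
    from scipy.optimize import linprog

    # Decision variables: kg of each ingredient in a 100 kg batch.
    ingredients = ['peanut paste', 'milk powder', 'sugar', 'oil']
    cost = [1.8, 3.5, 0.9, 1.4]          # USD per kg (hypothetical)

    protein = [25.0, 26.0, 0.0, 0.0]     # g protein per 100 g ingredient
    fat = [50.0, 26.0, 0.0, 100.0]       # g fat per 100 g ingredient

    # Constraints: product must contain >= 13% protein and >= 26% fat by mass,
    # expressed as -sum(p_i * x_i) <= -13 * 100 for a 100 kg batch.
    A_ub = [[-p for p in protein], [-f for f in fat]]
    b_ub = [-13.0 * 100, -26.0 * 100]

    # Total batch mass must be exactly 100 kg.
    A_eq = [[1.0] * len(ingredients)]
    b_eq = [100.0]

    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * len(ingredients), method='highs')
    for name, kg in zip(ingredients, res.x):
        print(f'{name}: {kg:.1f} kg')
    print(f'batch cost: ${res.fun:.2f}')
    ```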

  16. 1986 Year End Report for Road Following at Carnegie-Mellon

    DTIC Science & Technology

    1987-05-01

    how to make them work efficiently. We designed a hierarchical structure and a monitor module which manages all parts of the hierarchy (see figure 1...database, called the Local Map, is managed by a program known as the Local Map Builder (LMB). Each module stores and retrieves information in the...knowledge-intensive modules, and a database manager that synchronizes the modules - is characteristic of a traditional blackboard system. Such a system is

  17. Systematic Analysis of Arabidopsis Organelles and a Protein Localization Database for Facilitating Fluorescent Tagging of Full-Length Arabidopsis Proteins

    PubMed Central

    Li, Shijun; Ehrhardt, David W.; Rhee, Seung Y.

    2006-01-01

    Cells are organized into a complex network of subcellular compartments that are specialized for various biological functions. Subcellular location is an important attribute of protein function. To facilitate systematic elucidation of protein subcellular location, we analyzed experimentally verified protein localization data of 1,300 Arabidopsis (Arabidopsis thaliana) proteins. The 1,300 experimentally verified proteins are distributed among 40 different compartments, with most of the proteins localized to four compartments: mitochondria (36%), nucleus (28%), plastid (17%), and cytosol (13.3%). About 19% of the proteins are found in multiple compartments, in which a high proportion (36.4%) is localized to both cytosol and nucleus. Characterization of the overrepresented Gene Ontology molecular functions and biological processes suggests that the Golgi apparatus and peroxisome may play more diverse functions but are involved in more specialized processes than other compartments. To support systematic empirical determination of protein subcellular localization using a technology called fluorescent tagging of full-length proteins, we developed a database and Web application to provide preselected green fluorescent protein insertion position and primer sequences for all Arabidopsis proteins to study their subcellular localization and to store experimentally verified protein localization images, videos, and their annotations of proteins generated using the fluorescent tagging of full-length proteins technology. The database can be searched, browsed, and downloaded using a Web browser at http://aztec.stanford.edu/gfp/. The software can also be downloaded from the same Web site for local installation. PMID:16617091

  18. Feline mitochondrial DNA sampling for forensic analysis: when enough is enough!

    PubMed

    Grahn, Robert A; Alhaddad, Hasan; Alves, Paulo C; Randi, Ettore; Waly, Nashwa E; Lyons, Leslie A

    2015-05-01

    Pet hair has demonstrated value in resolving legal issues. Cat hair is chronically shed, and it is difficult to leave a home with cats without some level of secondary transfer. The power of cat hair as an evidentiary resource may be underused because representative genetic databases are not available for exclusionary purposes. Mitochondrial control region databases are highly valuable for hair analyses and have been developed for the cat. In a representative worldwide data set, 83% of domestic cat mitotypes belong to one of twelve major types. Of the remaining 17%, 7.5% are unique within the published 1394-sample database. The current research evaluates the sample size necessary to establish a representative population for forensic comparison of the mitochondrial control region of the domestic cat. For most worldwide populations, randomly sampling 50 unrelated local individuals will achieve 95% saturation. Saturation of 99% is achieved by randomly sampling 60-170 cats, depending on the number of mitotypes available in the population at large. Likely owing to the recent domestication of the cat and minimal localized population substructure, fewer cats are needed to reach practical saturation of a mitochondrial DNA control region database than for humans or dogs. Coupled with the available worldwide feline control region database of nearly 1400 cats, minimal local sampling will be required to establish an appropriate, representative comparative database and achieve significant exclusionary power. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  19. The Ruby UCSC API: accessing the UCSC genome database using Ruby.

    PubMed

    Mishima, Hiroyuki; Aerts, Jan; Katayama, Toshiaki; Bonnal, Raoul J P; Yoshiura, Koh-ichiro

    2012-09-21

    The University of California, Santa Cruz (UCSC) genome database is among the most used sources of genomic annotation in human and other organisms. The database offers an excellent web-based graphical user interface (the UCSC genome browser) and several means for programmatic queries. A simple application programming interface (API) in a scripting language aimed at the biologist was however not yet available. Here, we present the Ruby UCSC API, a library to access the UCSC genome database using Ruby. The API is designed as a BioRuby plug-in and built on the ActiveRecord 3 framework for the object-relational mapping, making writing SQL statements unnecessary. The current version of the API supports databases of all organisms in the UCSC genome database including human, mammals, vertebrates, deuterostomes, insects, nematodes, and yeast. The API uses the bin index, if available, when querying for genomic intervals. The API also supports genomic sequence queries using locally downloaded *.2bit files that are not stored in the official MySQL database. The API is implemented in pure Ruby and is therefore available in different environments and with different Ruby interpreters (including JRuby). Assisted by the straightforward object-oriented design of Ruby and ActiveRecord, the Ruby UCSC API will help biologists query the UCSC genome database programmatically. The API is available through the RubyGem system. Source code and documentation are available at https://github.com/misshie/bioruby-ucsc-api/ under the Ruby license. Feedback and help is provided via the website at http://rubyucscapi.userecho.com/.

  20. The Ruby UCSC API: accessing the UCSC genome database using Ruby

    PubMed Central

    2012-01-01

    Background The University of California, Santa Cruz (UCSC) genome database is among the most used sources of genomic annotation in human and other organisms. The database offers an excellent web-based graphical user interface (the UCSC genome browser) and several means for programmatic queries. A simple application programming interface (API) in a scripting language aimed at the biologist was however not yet available. Here, we present the Ruby UCSC API, a library to access the UCSC genome database using Ruby. Results The API is designed as a BioRuby plug-in and built on the ActiveRecord 3 framework for the object-relational mapping, making writing SQL statements unnecessary. The current version of the API supports databases of all organisms in the UCSC genome database including human, mammals, vertebrates, deuterostomes, insects, nematodes, and yeast. The API uses the bin index—if available—when querying for genomic intervals. The API also supports genomic sequence queries using locally downloaded *.2bit files that are not stored in the official MySQL database. The API is implemented in pure Ruby and is therefore available in different environments and with different Ruby interpreters (including JRuby). Conclusions Assisted by the straightforward object-oriented design of Ruby and ActiveRecord, the Ruby UCSC API will help biologists query the UCSC genome database programmatically. The API is available through the RubyGem system. Source code and documentation are available at https://github.com/misshie/bioruby-ucsc-api/ under the Ruby license. Feedback and help is provided via the website at http://rubyucscapi.userecho.com/. PMID:22994508

  1. Interacting with the National Database for Autism Research (NDAR) via the LONI Pipeline workflow environment.

    PubMed

    Torgerson, Carinna M; Quinn, Catherine; Dinov, Ivo; Liu, Zhizhong; Petrosyan, Petros; Pelphrey, Kevin; Haselgrove, Christian; Kennedy, David N; Toga, Arthur W; Van Horn, John Darrell

    2015-03-01

    Under the umbrella of the National Database for Clinical Trials (NDCT) related to mental illnesses, the National Database for Autism Research (NDAR) seeks to gather, curate, and make openly available neuroimaging data from NIH-funded studies of autism spectrum disorder (ASD). NDAR has recently made its database accessible through the LONI Pipeline workflow design and execution environment to enable large-scale analyses of cortical architecture and function via local, cluster, or "cloud"-based computing resources. This presents a unique opportunity to overcome many of the customary limitations on fostering biomedical neuroimaging as a science of discovery. Providing open access to primary neuroimaging data, workflow methods, and high-performance computing will increase uniformity in data collection protocols, encourage greater reliability of published data and replication of results, and broaden the range of researchers able to perform larger studies than ever before. To illustrate the use of NDAR and the LONI Pipeline for several common neuroimaging processing steps and analyses, this paper presents example workflows useful for ASD neuroimaging researchers seeking to begin using this valuable combination of online data and computational resources. We discuss the utility of such database and workflow processing interactivity as a motivation for the sharing of additional primary data in ASD research and elsewhere.

  2. Techno-politics of genomic nationalism: tracing genomics and its use in drug regulation in Japan and Taiwan.

    PubMed

    Kuo, Wen-Hua

    2011-10-01

    This paper compares the development of genomics as a form of state project in Japan and Taiwan. Broadening the concepts of genomic sovereignty and bionationalism, I argue that the establishment and use of genomic databases vary according to techno-political context. While both Japan and Taiwan hold population-based databases to be necessary for scientific advance and competitiveness, they differ in how they have attempted to transform the information produced by databases into regulatory schemes for drug approval. The effectiveness of Taiwan's biobank is severely limited by the IRB reviewing process. By contrast, while updating its regulations for drug approval, Japan is using pharmacogenomics to deal with matters relating to ethnic identity. By analysing genomic initiatives in the political contexts that nurture them, this paper seeks to capture how global science and local societies interact, and offers insight into the assessment of state-sponsored science in East Asia as it becomes transnational. Copyright © 2011 Elsevier Ltd. All rights reserved.

  3. Improving data management and dissemination in web based information systems by semantic enrichment of descriptive data aspects

    NASA Astrophysics Data System (ADS)

    Gebhardt, Steffen; Wehrmann, Thilo; Klinger, Verena; Schettler, Ingo; Huth, Juliane; Künzer, Claudia; Dech, Stefan

    2010-10-01

    The German-Vietnamese water-related information system for the Mekong Delta (WISDOM) project supports business processes in Integrated Water Resources Management in Vietnam. Multiple disciplines bring together earth- and ground-based observation themes, such as environmental monitoring, water management, demographics, economy, information technology, and infrastructural systems. This paper introduces the components of the web-based WISDOM system, including the data, logic and presentation tiers. It focuses on the data models upon which the database management system is built, including techniques for tagging or linking metadata with the stored information. The model also uses ordered groupings of spatial, thematic and temporal reference objects to semantically tag datasets and enable fast data retrieval, such as finding all data in a specific administrative unit belonging to a specific theme. A spatial extension is employed with the PostgreSQL database. This object-relational database was chosen over a purely relational one to tag spatial objects to tabular data, improving the retrieval of census and observational data at regional, provincial, and local levels. Because the spatial database is less suited to processing raster data, a work-around was built into WISDOM to permit efficient management of both raster and vector data. The data model also incorporates styling aspects of the spatial datasets through Styled Layer Descriptor (SLD) and Web Map Service (WMS) layer specifications, allowing retrieval of rendered maps. Metadata elements of the spatial data are based on the ISO 19115 standard. XML-structured information for the SLDs and metadata is stored in an XML database. The data models and the data management system are robust enough to manage the large quantity of spatial objects, sensor observations, and census and document data. The operational WISDOM information system prototype contains modules for data management, automatic data integration, and web services for data retrieval, analysis, and distribution. Graphical user interfaces facilitate metadata cataloguing, data warehousing, web sensor data analysis and thematic mapping.
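
    The kind of semantically tagged spatial retrieval described (all observations of one theme within one administrative unit) might look like the following PostGIS sketch; the connection details, table and column names are hypothetical:

    ```python
    # Sketch of a thematic-spatial query against a PostGIS-enabled PostgreSQL
    # database: select all observations of a given theme that fall inside one
    # administrative unit. Connection details and schema are hypothetical.
    import psycopg2

    conn = psycopg2.connect(dbname='wisdom', user='reader',
                            password='...', host='localhost')
    cur = conn.cursor()

    cur.execute("""
        SELECT o.id, o.value, o.observed_at
        FROM observation AS o
        JOIN admin_unit AS a ON ST_Within(o.geom, a.geom)
        WHERE a.name = %s AND o.theme = %s
    """, ('Can Tho', 'water_quality'))

    for row in cur.fetchall():
        print(row)
    cur.close()
    conn.close()
    ```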

  4. An Examination of Job Skills Posted on Internet Databases: Implications for Information Systems Degree Programs.

    ERIC Educational Resources Information Center

    Liu, Xia; Liu, Lai C.; Koong, Kai S.; Lu, June

    2003-01-01

    Analysis of 300 information technology job postings in two Internet databases identified the following skill categories: programming languages (Java, C/C++, and Visual Basic were most frequent); website development (57% sought SQL and HTML skills); databases (nearly 50% required Oracle); networks (only Windows NT or wide-area/local-area networks);…

  5. The construction and periodicity analysis of natural disaster database of Alxa area based on Chinese local records

    NASA Astrophysics Data System (ADS)

    Yan, Zheng; Mingzhong, Tian; Hengli, Wang

    2010-05-01

    Chinese hand-written local records date back to the first century. Generally, these local records include the geography, historical evolution, customs, education, products, people, historical sites, and writings of an area. Thanks to such endeavors, the record of natural events in China has had almost no "dark ages" over the 5000 years of its civilization. A compilation of all meaningful historical data on natural disasters that took place in Alxa, Inner Mongolia, home to the second largest desert in China, is used here for the construction of a 500-year high-resolution database. The database is divided into subsets according to the type of natural disaster, such as sand-dust storms, drought events, and cold waves. By applying trend, correlation, wavelet, and spectral analysis to these data, we can estimate the statistical periodicity of the different natural disasters, detect and quantify similarities and patterns among the periodicities of these records, and finally take the results in aggregate to find a strong and coherent cyclicity through the last 500 years that serves as the driving mechanism of these geological hazards. Based on the periodicity obtained from the above analysis, the paper discusses the possibility of forecasting natural disasters from historical records and suitable measures to reduce disaster losses. Keywords: Chinese local records; Alxa; natural disasters; database; periodicity analysis
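
    As a sketch of the periodicity analysis, the snippet below runs an FFT periodogram over a synthetic yearly disaster-count series (a 60-year cycle plus noise standing in for the Alxa records) and reports the dominant period:

    ```python
    # Sketch of the spectral-analysis step described above: an FFT periodogram
    # over a yearly disaster-count series. The series here is synthetic (a
    # 60-year cycle plus noise), standing in for the Alxa records.
    import numpy as np
    from scipy.signal import periodogram

    years = np.arange(1500, 2000)
    counts = 5 + 2 * np.sin(2 * np.pi * years / 60) \
               + np.random.poisson(1, years.size)

    freqs, power = periodogram(counts, fs=1.0)   # fs = 1 sample per year
    dominant = freqs[np.argmax(power[1:]) + 1]   # skip the zero-frequency term
    print(f'dominant period: {1 / dominant:.1f} years')
    ```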

  6. Unsupervised Indoor Localization Based on Smartphone Sensors, iBeacon and Wi-Fi.

    PubMed

    Chen, Jing; Zhang, Yi; Xue, Wei

    2018-04-28

    In this paper, we propose UILoc, an unsupervised indoor localization scheme that uses a combination of smartphone sensors, iBeacons and Wi-Fi fingerprints for reliable and accurate indoor localization with zero labor cost. First, in contrast to conventional fingerprint-based methods, the UILoc system can build a fingerprint database automatically, without any site survey, and this database is applied in the fingerprint localization algorithm. Second, since the initial position is vital to the system, UILoc provides a basic location estimate through pedestrian dead reckoning (PDR). To provide accurate initial localization, this paper proposes an initial localization module: a weighted fusion algorithm combining a k-nearest neighbors (KNN) algorithm and a least squares algorithm. In UILoc, we have also designed a reliable model to reduce the landmark correction error. Experimental results show that UILoc can provide accurate positioning; the average localization error is about 1.1 m in the steady state, and the maximum error is 2.77 m.
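
    The fingerprint-matching step can be illustrated with a small weighted-KNN sketch; the access points, fingerprint coordinates and RSSI values below are made up:

    ```python
    # Sketch of weighted k-nearest-neighbour fingerprint matching: an observed
    # RSSI vector is compared against a fingerprint database and the k closest
    # positions are averaged with inverse-distance weights. Data are made up.
    import numpy as np

    # Fingerprint database: rows of (x, y, RSSI_ap1, RSSI_ap2, RSSI_ap3).
    db = np.array([
        [0.0, 0.0, -40, -70, -80],
        [5.0, 0.0, -55, -52, -75],
        [5.0, 5.0, -70, -45, -60],
        [0.0, 5.0, -60, -65, -50],
    ])

    def knn_locate(rssi, k=3):
        """Weighted KNN: average the k closest fingerprint positions,
        weighting each by inverse RSSI-space distance."""
        dists = np.linalg.norm(db[:, 2:] - rssi, axis=1)
        nearest = np.argsort(dists)[:k]
        weights = 1.0 / (dists[nearest] + 1e-6)
        return np.average(db[nearest, :2], axis=0, weights=weights)

    print(knn_locate(np.array([-50, -55, -70])))  # estimated (x, y)
    ```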

  7. Stellar Abundances for Galactic Archaeology Database. IV. Compilation of stars in dwarf galaxies

    NASA Astrophysics Data System (ADS)

    Suda, Takuma; Hidaka, Jun; Aoki, Wako; Katsuta, Yutaka; Yamada, Shimako; Fujimoto, Masayuki Y.; Ohtani, Yukari; Masuyama, Miyu; Noda, Kazuhiro; Wada, Kentaro

    2017-10-01

    We have constructed a database of stars in Local Group galaxies using the extended version of the SAGA (Stellar Abundances for Galactic Archaeology) database that contains stars in 24 dwarf spheroidal galaxies and ultra-faint dwarfs. The new version of the database includes more than 4500 stars in the Milky Way, by removing the previous metallicity criterion of [Fe/H] ≤ -2.5, and more than 6000 stars in the Local Group galaxies. We examined the validity of using a combined data set for elemental abundances. We also checked the consistency between the derived distances to individual stars and those to galaxies as given in the literature. Using the updated database, the characteristics of stars in dwarf galaxies are discussed. Our statistical analyses of α-element abundances show that the change of the slope of the [α/Fe] relative to [Fe/H] (so-called "knee") occurs at [Fe/H] = -1.0 ± 0.1 for the Milky Way. The knee positions for selected galaxies are derived by applying the same method. The star formation history of individual galaxies is explored using the slope of the cumulative metallicity distribution function. Radial gradients along the four directions are inspected in six galaxies where we find no direction-dependence of metallicity gradients along the major and minor axes. The compilation of all the available data shows a lack of CEMP-s population in dwarf galaxies, while there may be some CEMP-no stars at [Fe/H] ≲ -3 even in the very small sample. The inspection of the relationship between Eu and Ba abundances confirms an anomalously Ba-rich population in Fornax, which indicates a pre-enrichment of interstellar gas with r-process elements. We do not find any evidence of anti-correlations in O-Na and Mg-Al abundances, which characterizes the abundance trends in the Galactic globular clusters.

  8. Cloning and expression of N-glycosylation-related glucosidase from Glaciozyma antarctica

    NASA Astrophysics Data System (ADS)

    Yajit, Noor Liana Mat; Kamaruddin, Shazilah; Hashim, Noor Haza Fazlin; Bakar, Farah Diba Abu; Murad, Abd. Munir Abd.; Mahadi, Nor Muhammad; Mackeen, Mukram Mohamed

    2016-11-01

    The need for functional oligosaccharides in various fields is ever-growing. The enzymatic approach to the synthesis of oligosaccharides is advantageous over traditional chemical synthesis because of the regio- and stereoselectivity that can be achieved without the need for protection chemistry. In this study, the α-glucosidase I protein sequence from Saccharomyces cerevisiae (UniProt database) was compared against the Glaciozyma antarctica genome database using the Basic Local Alignment Search Tool (BLAST). The results showed 33% identity and an E-value of 1 × 10^-125 for α-glucosidase I. The gene was amplified, cloned into the pPICZα C vector and used to transform Pichia pastoris X-33 cells. Soluble expression of α-glucosidase I (~91 kDa) was achieved at 28 °C with 1.0% methanol.

  9. R2 & NE Tract - 2010 Census; Housing and Population Summary

    EPA Pesticide Factsheets

    The TIGER/Line Files are shapefiles and related database files (.dbf) that are an extract of selected geographic and cartographic information from the U.S. Census Bureau's Master Address File / Topologically Integrated Geographic Encoding and Referencing (MAF/TIGER) Database (MTDB). The MTDB represents a seamless national file with no overlaps or gaps between parts, however, each TIGER/Line File is designed to stand alone as an independent data set, or they can be combined to cover the entire nation. Census tracts are small, relatively permanent statistical subdivisions of a county or equivalent entity, and were defined by local participants as part of the 2010 Census Participant Statistical Areas Program. The Census Bureau delineated the census tracts in situations where no local participant existed or where all the potential participants declined to participate. The primary purpose of census tracts is to provide a stable set of geographic units for the presentation of census data and comparison back to previous decennial censuses. Census tracts generally have a population size between 1,200 and 8,000 people, with an optimum size of 4,000 people. When first delineated, census tracts were designed to be homogeneous with respect to population characteristics, economic status, and living conditions. The spatial size of census tracts varies widely depending on the density of settlement. Physical changes in street patterns caused by highway construction, new

  10. LOCATE: a mouse protein subcellular localization database

    PubMed Central

    Fink, J. Lynn; Aturaliya, Rajith N.; Davis, Melissa J.; Zhang, Fasheng; Hanson, Kelly; Teasdale, Melvena S.; Kai, Chikatoshi; Kawai, Jun; Carninci, Piero; Hayashizaki, Yoshihide; Teasdale, Rohan D.

    2006-01-01

    We present here LOCATE, a curated, web-accessible database that houses data describing the membrane organization and subcellular localization of proteins from the FANTOM3 Isoform Protein Sequence set. Membrane organization is predicted by the high-throughput, computational pipeline MemO. The subcellular locations of selected proteins from this set were determined by a high-throughput, immunofluorescence-based assay and by manually reviewing >1700 peer-reviewed publications. LOCATE represents the first effort to catalogue the experimentally verified subcellular location and membrane organization of mammalian proteins using a high-throughput approach and provides localization data for ∼40% of the mouse proteome. It is available online. PMID:16381849

  11. Specific character of citations in historiography (using the example of Polish history).

    PubMed

    Kolasa, Władysław Marek

    2012-03-01

    The first part of the paper deals with the assessment of international databases in relation to the number of historical publications (representation and relevance in comparison with the model database). The second part is focused on providing an answer to the question of whether historiography is governed by similar bibliometric rules as the exact sciences or whether it has its own specific character. The empirical basis for this part of the research was a database prepared ad hoc: The Citation Index of the History of Polish Media (CIHPM). Among numerous typically historical features, the main focus was put on linguistic localism, the specific character of publishing forms, differences in the citing of various sources (contributions and syntheses) and the specific character of authorship (the Lorenz Curve and Lotka's Law). Slightly more attention was devoted to the half-life indicator and its role in a diachronic study of a scientific field; also, a new indicator (HL14), depicting the distribution of citations younger than the half-life, was introduced. Additionally, comparisons and correlations of selected parameters for the body of historical science (citations, HL14, the Hirsch Index, number of publications, volume and others) were also conducted.
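
    The abstract does not define HL14 precisely, so the sketch below is a hedged reconstruction of the underlying half-life arithmetic only: compute the median age of cited references (the half-life), then the share of citations at or below some age threshold. Function names and the toy data are invented for illustration, not taken from the CIHPM.

        # Hedged sketch: citation half-life and the share of "young" citations.
        # cited_ages = age in years of each cited reference at citing time.
        from statistics import median

        def half_life(cited_ages):
            """Median age of cited references: half of all citations are younger."""
            return median(cited_ages)

        def share_younger_than(cited_ages, threshold):
            """Fraction of citations whose age is at or below the threshold."""
            return sum(1 for a in cited_ages if a <= threshold) / len(cited_ages)

        ages = [2, 5, 7, 11, 14, 20, 35, 60]     # toy data, not from CIHPM
        hl = half_life(ages)                     # 12.5 years for this sample
        print(hl, share_younger_than(ages, hl))  # by construction about one half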

  12. HUNT: launch of a full-length cDNA database from the Helix Research Institute.

    PubMed

    Yudate, H T; Suwa, M; Irie, R; Matsui, H; Nishikawa, T; Nakamura, Y; Yamaguchi, D; Peng, Z Z; Yamamoto, T; Nagai, K; Hayashi, K; Otsuki, T; Sugiyama, T; Ota, T; Suzuki, Y; Sugano, S; Isogai, T; Masuho, Y

    2001-01-01

    The Helix Research Institute (HRI) in Japan is releasing 4356 HUman Novel Transcripts and related information in the newly established HUNT database. The institute is a joint research project principally funded by the Japanese Ministry of International Trade and Industry, and the clones were sequenced in the governmental New Energy and Industrial Technology Development Organization (NEDO) Human cDNA Sequencing Project. The HUNT database contains an extensive amount of annotation from advanced analysis and represents an essential bioinformatics contribution towards the understanding of gene function. The HRI human cDNA clones were obtained from full-length enriched cDNA libraries constructed with the oligo-capping method and have yielded novel full-length cDNA sequences. A large fraction has little similarity to any proteins of known function, so to obtain clues about possible function we have developed original analysis procedures. Any putative function deduced here can be validated or refuted by complementary analysis results. The user can also extract information from specific categories such as PROSITE patterns, PFAM domains, PSORT localization, transmembrane helices and clones with GENIUS structure assignments. The HUNT database can be accessed at http://www.hri.co.jp/HUNT.

  13. MPD: a pathogen genome and metagenome database

    PubMed Central

    Zhang, Tingting; Miao, Jiaojiao; Han, Na; Qiang, Yujun; Zhang, Wen

    2018-01-01

    Abstract Advances in high-throughput sequencing have led to unprecedented growth in the amount of available genome sequencing data, especially for bacterial genomes, which has been accompanied by a challenge for the storage and management of such huge datasets. To facilitate bacterial research and related studies, we have developed the Mypathogen database (MPD), which provides access for users to search, download, store and share bacterial genomics data. The MPD represents the first database dedicated to pathogenic microbial genomes and metagenomes, and currently covers pathogenic microbial genomes (6604 genera, 11 071 species, 41 906 strains) and metagenomic data from host, air, water and other sources (28 816 samples). The MPD also functions as a management system for data statistics and storage that can be used by different organizations, thereby facilitating data sharing among different organizations and research groups. A user-friendly local client tool is provided to maintain the steady transmission of big sequencing data. The MPD is a useful tool for analysis and management in genomic research, especially for clinical Centers for Disease Control and epidemiological studies, and is expected to contribute to advancing knowledge on pathogenic bacterial genomes and metagenomes. Database URL: http://data.mypathogen.org PMID:29917040

  14. Scalable global grid catalogue for Run3 and beyond

    NASA Astrophysics Data System (ADS)

    Martinez Pedreira, M.; Grigoras, C.; ALICE Collaboration

    2017-10-01

    The AliEn (ALICE Environment) file catalogue is a global unique namespace providing mapping between a UNIX-like logical name structure and the corresponding physical files distributed over 80 storage elements worldwide. Powerful search tools and hierarchical metadata information are integral parts of the system and are used by the Grid jobs as well as local users to store and access all files on the Grid storage elements. The catalogue has been in production since 2005 and over the past 11 years has grown to more than 2 billion logical file names. The backend is a set of distributed relational databases, ensuring smooth growth and fast access. Due to the anticipated fast future growth, we are looking for ways to enhance the performance and scalability by simplifying the catalogue schema while keeping the functionality intact. We investigated different backend solutions, such as distributed key value stores, as replacement for the relational database. This contribution covers the architectural changes in the system, together with the technology evaluation, benchmark results and conclusions.
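
    The core abstraction — a UNIX-like logical namespace mapped to physical replicas on storage elements — can be illustrated with the kind of key-value layout the authors evaluate as a replacement backend. This is a schematic sketch, not AliEn's actual schema; all names, sizes and checksums are invented.

        # Schematic key-value layout for a file catalogue: a logical file name
        # (LFN) maps to metadata plus physical file names (PFNs) on storage
        # elements. Mirrors the mapping the abstract describes, not AliEn itself.
        catalogue = {
            "/alice/data/2016/run245064/raw.root": {
                "size": 1_073_741_824,
                "md5": "d41d8cd98f00b204e9800998ecf8427e",   # placeholder
                "pfns": [
                    "root://se01.cern.ch//eos/alice/raw.root",
                    "root://se17.gridka.de//pnfs/alice/raw.root",
                ],
            },
        }

        def locate(lfn):
            """Resolve a logical name to its physical replicas."""
            entry = catalogue.get(lfn)
            return entry["pfns"] if entry else []

        print(locate("/alice/data/2016/run245064/raw.root"))

    A relational backend stores the same mapping across directory, file and replica tables; a key-value store trades rich joins for horizontal scalability, which is the trade-off the contribution benchmarks.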

  15. Data mining and visualization of the Alabama accident database

    DOT National Transportation Integrated Search

    2000-08-01

    The Alabama Department of Public Safety has developed and maintains a centralized database that contains traffic accident data collected from crash reports completed by local police officers and state troopers. The Critical Analysis Reporting Environme...

  16. MEDLINE: the options for health professionals.

    PubMed

    Wood, E H

    1994-01-01

    The bibliographic database MEDLINE, produced by the National Library of Medicine (NLM), is a computerized index to the world's biomedical literature. The database can be searched back to 1966 and contains 6.8 million records. The various means of access are divided, for the purposes of this article, into three categories: logging onto a remote host computer by telephone and modem or by the Internet; subscribing to part or all of the database on compact disc (CD-ROM); and leasing the data on a transport medium such as magnetic tape or CDs for loading on a local host computer. Decisions about which method is preferable in a given situation depend on cost, availability of hardware and software, local expertise, and the size of the intended user population. Trends include increased access to the Internet by health professionals, increased network speed, links from MEDLINE records to full-text databases or online journals, and integration of MEDLINE into wider health information systems.

  17. Secure Indoor Localization Based on Extracting Trusted Fingerprint

    PubMed Central

    Yin, Xixi; Zheng, Yanliu; Wang, Chun

    2018-01-01

    Indoor localization based on WiFi has attracted a lot of research effort because of the widespread application of WiFi. Fingerprinting techniques have received much attention due to their simplicity and compatibility with existing hardware. However, existing fingerprinting localization algorithms may not resist abnormal received signal strength indication (RSSI), such as unexpected environmental changes, impaired access points (APs) or the introduction of new APs. Traditional fingerprinting algorithms do not consider the problem of new APs and impaired APs in the environment when using RSSI. In this paper, we propose a secure fingerprinting localization (SFL) method that is robust to variable environments, impaired APs and the introduction of new APs. In the offline phase, a voting mechanism and a fingerprint database update method are proposed. We use the mutual cooperation between reference anchor nodes to update the fingerprint database, which can reduce the interference caused by the user measurement data. We analyze the standard deviation of RSSI, mobilize the reference points in the database to vote on APs and then calculate the trust factors of APs based on the voting results. In the online phase, we first make a judgment about the new APs and the broken APs, then extract the secure fingerprints according to the trusted factors of APs and obtain the localization results by using the trusted fingerprints. In the experiment section, we demonstrate the proposed method and find that the proposed strategy can resist abnormal RSSI and can improve the localization accuracy effectively compared with the existing fingerprinting localization algorithms. PMID:29401755
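
    The abstract outlines the offline voting step without its exact formulas, so the sketch below is one plausible reading, offered only as an illustration: each reference point votes that an AP is trustworthy when its current RSSI stays within a bound tied to the AP's historical standard deviation, and the trust factor is the fraction of favourable votes. All thresholds, names and data are assumptions, not the paper's method.

        # Hedged sketch of an RSSI voting step of the kind the abstract describes.
        # fingerprints[rp][ap] = stored mean RSSI; sigmas[ap] = historical std dev.
        def trust_factors(fingerprints, current, sigmas, k=2.0):
            """Fraction of reference points whose current reading of each AP
            stays within k standard deviations of its stored fingerprint."""
            factors = {}
            for ap in sigmas:
                votes = total = 0
                for rp, stored in fingerprints.items():
                    if ap in stored and ap in current.get(rp, {}):
                        total += 1
                        if abs(current[rp][ap] - stored[ap]) <= k * sigmas[ap]:
                            votes += 1
                factors[ap] = votes / total if total else 0.0
            return factors

        fp  = {"rp1": {"ap1": -40, "ap2": -60}, "rp2": {"ap1": -55, "ap2": -70}}
        now = {"rp1": {"ap1": -42, "ap2": -90}, "rp2": {"ap1": -54, "ap2": -95}}
        print(trust_factors(fp, now, {"ap1": 3.0, "ap2": 3.0}))  # ap2 looks impaired

    In the online phase, fingerprints from APs with low trust factors would then be excluded before matching.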

  18. Secure Indoor Localization Based on Extracting Trusted Fingerprint.

    PubMed

    Luo, Juan; Yin, Xixi; Zheng, Yanliu; Wang, Chun

    2018-02-05

    Indoor localization based on WiFi has attracted a lot of research effort because of the widespread application of WiFi. Fingerprinting techniques have received much attention due to their simplicity and compatibility with existing hardware. However, existing fingerprinting localization algorithms may not resist abnormal received signal strength indication (RSSI), such as unexpected environmental changes, impaired access points (APs) or the introduction of new APs. Traditional fingerprinting algorithms do not consider the problem of new APs and impaired APs in the environment when using RSSI. In this paper, we propose a secure fingerprinting localization (SFL) method that is robust to variable environments, impaired APs and the introduction of new APs. In the offline phase, a voting mechanism and a fingerprint database update method are proposed. We use the mutual cooperation between reference anchor nodes to update the fingerprint database, which can reduce the interference caused by the user measurement data. We analyze the standard deviation of RSSI, mobilize the reference points in the database to vote on APs and then calculate the trust factors of APs based on the voting results. In the online phase, we first make a judgment about the new APs and the broken APs, then extract the secure fingerprints according to the trusted factors of APs and obtain the localization results by using the trusted fingerprints. In the experiment section, we demonstrate the proposed method and find that the proposed strategy can resist abnormal RSSI and can improve the localization accuracy effectively compared with the existing fingerprinting localization algorithms.

  19. Incident reporting in one UK accident and emergency department.

    PubMed

    Tighe, Catherine M; Woloshynowych, Maria; Brown, Ruth; Wears, Bob; Vincent, Charles

    2006-01-01

    Greater focus is needed on improving patient safety in modern healthcare systems, and the first step to achieving this is to reliably identify the safety issues arising in healthcare. Research has shown the accident and emergency (A&E) department to be a particularly problematic environment where safety is a concern due to various factors, such as the range, nature and urgency of presenting conditions and the high turnover of patients. As in all healthcare environments, clinical incident reporting in A&E is an important tool for detecting safety issues, which can result in identifying solutions, learning from error and enhancing patient safety. This tool must be responsive and flexible to local circumstances and work for the department to support the clinical governance agenda. In this paper, we describe the local processes for reporting and reviewing clinical incidents in one A&E department in a London teaching hospital and report recent changes to the system within the department. We used the historical data recorded on the Trust incident database as a representation of the information that would be available to the department in order to identify the high-risk areas. In this paper, we evaluate the internal processes and the information available on the database, and make recommendations to assist the emergency department in their internal processes. These will strengthen the internal review and staff feedback system so that the department can learn from incidents in a consistent manner. The process was reviewed by detailed examination of the centrally held electronic record (Datix database) of all incidents reported in a one-year period. The nature of the incidents and the level and accuracy of information provided in the incident reports were evaluated. There were positive aspects to the established system, including evidence of positive changes made as a result of the reporting process, new initiatives to feed back to staff, and evolution of the programme for reporting and discussing the incidents internally. There appeared to be a mismatch between the recorded events and the category allocated to the incident in the historical record. In addition, the database did not contain complete information for every incident, contributory factors were rarely recorded, and relatively large numbers of incidents were recorded as "other" in the type of incident. Difficulty was also observed in updating the system, as there is at least a month's time lag between the reporting of an incident and the discussion/resolution of issues at the local departmental clinical risk management committee meetings. We used Leape's model for assessing the reporting system as a whole and found the system in the department to be relatively safe, fairly easy to use and moderately effective. Recommendations as a result of this study include the introduction of an electronic reporting system, limiting the number of staff who categorise the incidents--using clear definitions for classifications, including a structured framework for contributory factors--and a process that allows incidents to be updated on the database locally after the discussion. This research may have implications for the incident reporting process in other specialities as well as in other hospitals.

  20. AT_CHLORO, a comprehensive chloroplast proteome database with subplastidial localization and curated information on envelope proteins.

    PubMed

    Ferro, Myriam; Brugière, Sabine; Salvi, Daniel; Seigneurin-Berny, Daphné; Court, Magali; Moyet, Lucas; Ramus, Claire; Miras, Stéphane; Mellal, Mourad; Le Gall, Sophie; Kieffer-Jaquinod, Sylvie; Bruley, Christophe; Garin, Jérôme; Joyard, Jacques; Masselon, Christophe; Rolland, Norbert

    2010-06-01

    Recent advances in the proteomics field have allowed a series of high throughput experiments to be conducted on chloroplast samples, and the data are available in several public databases. However, the accurate localization of many chloroplast proteins often remains hypothetical. This is especially true for envelope proteins. We went a step further into the knowledge of the chloroplast proteome by focusing, in the same set of experiments, on the localization of proteins in the stroma, the thylakoids, and envelope membranes. LC-MS/MS-based analyses first allowed building the AT_CHLORO database (http://www.grenoble.prabi.fr/protehome/grenoble-plant-proteomics/), a comprehensive repertoire of the 1323 proteins, identified by 10,654 unique peptide sequences, present in highly purified chloroplasts and their subfractions prepared from Arabidopsis thaliana leaves. This database also provides extensive proteomics information (peptide sequences and molecular weight, chromatographic retention times, MS/MS spectra, and spectral count) for a unique chloroplast protein accurate mass and time tag database gathering identified peptides with their respective and precise analytical coordinates, molecular weight, and retention time. We assessed the partitioning of each protein in the three chloroplast compartments by using a semiquantitative proteomics approach (spectral count). These data together with an in-depth investigation of the literature were compiled to provide accurate subplastidial localization of previously known and newly identified proteins. A unique knowledge base containing extensive information on the proteins identified in envelope fractions was thus obtained, allowing new insights into this membrane system to be revealed. Altogether, the data we obtained provide unexpected information about plastidial or subplastidial localization of some proteins that were not suspected to be associated with this membrane system. The spectral counting-based strategy was further validated as the compartmentation of well known pathways (for instance, photosynthesis and amino acid, fatty acid, or glycerolipid biosynthesis) within chloroplasts could be dissected. It also allowed revisiting the compartmentation of the chloroplast metabolism and functions.
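
    The spectral-count partitioning can be made concrete with a small calculation: a protein's relative abundance in each compartment is its spectral count there divided by its total count across compartments. The numbers below are invented for illustration; AT_CHLORO holds the real values.

        # Toy spectral-count partitioning across the three chloroplast
        # compartments. Counts are invented, not taken from AT_CHLORO.
        counts = {
            "proteinA": {"envelope": 42, "stroma": 3, "thylakoid": 5},
            "proteinB": {"envelope": 1, "stroma": 80, "thylakoid": 2},
        }

        for protein, by_compartment in counts.items():
            total = sum(by_compartment.values())
            shares = {c: round(n / total, 2) for c, n in by_compartment.items()}
            print(protein, shares)   # proteinA partitions mainly to the envelope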

  1. Slope-based and geometric encoding of a goal location by the terrestrial toad (Rhinella arenarum).

    PubMed

    Sotelo, María Inés; Bingman, Verner P; Muzio, Rubén N

    2017-11-01

    The current study was designed to test the ability of terrestrial toads, Rhinella arenarum, to use slope as a source of spatial information to locate a goal, and to investigate the relative importance of slope and geometric information for goal localization. Toads were trained to locate a single, water-reward goal location in a corner of a rectangular arena placed on an incline. Once the toads learned the task, 3 types of probe trials were carried out to determine the relative use of slope and geometric information for goal localization. The probe trials revealed that the toads were able to independently use slope and, as previously reported, geometry to locate the goal. However, the boundary geometry of the experimental arena was found to be preferentially used by the toads when geometric and slope information were set in conflict. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  2. Examining database persistence of ISO/EN 13606 standardized electronic health record extracts: relational vs. NoSQL approaches.

    PubMed

    Sánchez-de-Madariaga, Ricardo; Muñoz, Adolfo; Lozano-Rubí, Raimundo; Serrano-Balazote, Pablo; Castro, Antonio L; Moreno, Oscar; Pascual, Mario

    2017-08-18

    The objective of this research is to compare the relational and non-relational (NoSQL) database systems approaches in order to store, recover, query and persist standardized medical information in the form of ISO/EN 13606 normalized Electronic Health Record XML extracts, both in isolation and concurrently. NoSQL database systems have recently attracted much attention, but few studies in the literature address their direct comparison with relational databases when applied to build the persistence layer of a standardized medical information system. One relational and two NoSQL databases (one document-based and one native XML database) of three different sizes were created in order to evaluate and compare the response times (algorithmic complexity) of six queries of growing complexity, which were performed on them. Similar appropriate results available in the literature have also been considered. Relational and non-relational NoSQL database systems show almost linear algorithmic complexity in query execution. However, they show very different linear slopes, the former being much steeper than the latter two. Document-based NoSQL databases perform better in concurrency than in isolation, and also better than relational databases in concurrency. Non-relational NoSQL databases seem to be more appropriate than standard relational SQL databases when database size is extremely high (secondary use, research applications). Document-based NoSQL databases perform in general better than native XML NoSQL databases. Visualization and editing of EHR extracts are also document-based tasks more appropriate to NoSQL database systems. However, the appropriate database solution depends greatly on each particular situation and specific problem.
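
    The two persistence styles being compared can be sketched side by side: a relational layer shreds each extract into typed rows, while a document store keeps the extract whole and filters it by field. The sketch below uses sqlite3 and an in-memory list as stand-ins for the engines actually benchmarked, and the field names are invented, not ISO/EN 13606 terms.

        # Side-by-side sketch: relational vs document-style persistence of EHR
        # extracts. Engines and field names are stand-ins, not those benchmarked.
        import sqlite3

        # Relational: the extract is decomposed into typed columns.
        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE extract (patient_id TEXT, archetype TEXT, value REAL)")
        db.execute("INSERT INTO extract VALUES ('p1', 'blood_pressure', 120.0)")
        rows = db.execute(
            "SELECT value FROM extract WHERE patient_id = 'p1' "
            "AND archetype = 'blood_pressure'").fetchall()

        # Document-style: the whole extract is stored and filtered as one unit,
        # the pattern a document NoSQL database applies at scale.
        documents = [{"patient_id": "p1",
                      "archetype": "blood_pressure",
                      "value": 120.0}]
        docs = [d["value"] for d in documents
                if d["patient_id"] == "p1" and d["archetype"] == "blood_pressure"]

        print(rows, docs)   # both return 120.0; cost profiles differ with size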

  3. Enhancing Geoscience Research Discovery Through the Semantic Web

    NASA Astrophysics Data System (ADS)

    Rowan, Linda R.; Gross, M. Benjamin; Mayernik, Matthew; Khan, Huda; Boler, Frances; Maull, Keith; Stott, Don; Williams, Steve; Corson-Rikert, Jon; Johns, Erica M.; Daniels, Michael; Krafft, Dean B.; Meertens, Charles

    2016-04-01

    UNAVCO, UCAR, and Cornell University are working together to leverage semantic web technologies to enable discovery of people, datasets, publications and other research products, as well as the connections between them. The EarthCollab project, a U.S. National Science Foundation EarthCube Building Block, is enhancing an existing open-source semantic web application, VIVO, to enhance connectivity across distributed networks of researchers and resources related to the following two geoscience-based communities: (1) the Bering Sea Project, an interdisciplinary field program whose data archive is hosted by NCAR's Earth Observing Laboratory (EOL), and (2) UNAVCO, a geodetic facility and consortium that supports diverse research projects informed by geodesy. People, publications, datasets and grant information have been mapped to an extended version of the VIVO-ISF ontology and ingested into VIVO's database. Much of the VIVO ontology was built for the life sciences, so we have added some components of existing geoscience-based ontologies and a few terms from a local ontology that we created. The UNAVCO VIVO instance, connect.unavco.org, utilizes persistent identifiers whenever possible; for example using ORCIDs for people, publication DOIs, data DOIs and unique NSF grant numbers. Data is ingested using a custom set of scripts that include the ability to perform basic automated and curated disambiguation. VIVO can display a page for every object ingested, including connections to other objects in the VIVO database. A dataset page, for example, includes the dataset type, time interval, DOI, related publications, and authors. The dataset type field provides a connection to all other datasets of the same type. The author's page shows, among other information, related datasets and co-authors. Information previously spread across several unconnected databases is now stored in a single location. In addition to VIVO's default display, the new database can be queried using SPARQL, a query language for semantic data. EarthCollab is extending the VIVO web application. One such extension is the ability to cross-link separate VIVO instances across institutions, allowing local display of externally curated information. For example, Cornell's VIVO faculty pages will display UNAVCO's dataset information and UNAVCO's VIVO will display Cornell faculty member contact and position information. About half of UNAVCO's membership is international and we hope to connect our data to institutions in other countries with a similar approach. Additional extensions, including enhanced geospatial capabilities, will be developed based on task-centered usability testing.
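
    Since the connected data can be queried with SPARQL, a hedged example of such a query is sketched below. The property names are simplified placeholders rather than actual VIVO-ISF ontology terms, the researcher name is invented, and the endpoint URL in the comment is illustrative only.

        # Hedged SPARQL sketch against a VIVO instance. Property names are
        # simplified placeholders; the real VIVO-ISF ontology terms differ.
        query = """
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        SELECT ?dataset ?label
        WHERE {
          ?person  rdfs:label "Example Researcher" .
          ?person  ?relatedBy ?dataset .
          ?dataset rdfs:label ?label .
        }
        LIMIT 10
        """
        # A library such as SPARQLWrapper could submit this to an endpoint, e.g.:
        # from SPARQLWrapper import SPARQLWrapper, JSON
        # sw = SPARQLWrapper("https://connect.unavco.org/sparql")  # illustrative
        # sw.setQuery(query); sw.setReturnFormat(JSON)
        # print(sw.query().convert())
        print(query)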

  4. The PREDICTS database: a global database of how local terrestrial biodiversity responds to human impacts

    Treesearch

    L.N. Hudson; T. Newbold; S. Contu

    2014-01-01

    Biodiversity continues to decline in the face of increasing anthropogenic pressures such as habitat destruction, exploitation, pollution and introduction of alien species. Existing global databases of species’ threat status or population time series are dominated by charismatic species. The collation of datasets with broad taxonomic and biogeographic extents, and that...

  5. Eye center localization and gaze gesture recognition for human-computer interaction.

    PubMed

    Zhang, Wenhao; Smith, Melvyn L; Smith, Lyndon N; Farooq, Abdul

    2016-03-01

    This paper introduces an unsupervised modular approach for accurate and real-time eye center localization in images and videos, thus allowing a coarse-to-fine, global-to-regional scheme. The trajectories of eye centers in consecutive frames, i.e., gaze gestures, are further analyzed, recognized, and employed to boost the human-computer interaction (HCI) experience. This modular approach makes use of isophote and gradient features to estimate the eye center locations. A selective oriented gradient filter has been specifically designed to remove strong gradients from eyebrows, eye corners, and shadows, which sabotage most eye center localization methods. A real-world implementation utilizing these algorithms has been designed in the form of an interactive advertising billboard to demonstrate the effectiveness of our method for HCI. The eye center localization algorithm has been compared with 10 other algorithms on the BioID database and six other algorithms on the GI4E database. It outperforms all the other algorithms in comparison in terms of localization accuracy. Further tests on the Extended Yale Face Database B and self-collected data have proved this algorithm to be robust against moderate head poses and poor illumination conditions. The interactive advertising billboard has manifested outstanding usability and effectiveness in our tests and shows great potential for benefiting a wide range of real-world HCI applications.

  6. Solving Relational Database Problems with ORDBMS in an Advanced Database Course

    ERIC Educational Resources Information Center

    Wang, Ming

    2011-01-01

    This paper introduces how to use the object-relational database management system (ORDBMS) to solve relational database (RDB) problems in an advanced database course. The purpose of the paper is to provide a guideline for database instructors who desire to incorporate the ORDB technology in their traditional database courses. The paper presents…
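
    As a flavor of the ORDB features such a course might exercise, the sketch below defines a composite (object) type and uses it as a column type, in PostgreSQL-style SQL held as Python strings for a driver such as psycopg2. The schema is invented for illustration and is not taken from the paper.

        # PostgreSQL-flavored object-relational DDL, held as Python strings for
        # a driver such as psycopg2. The schema is invented, not the paper's.
        DDL = """
        CREATE TYPE address AS (          -- composite (object) type
            street  TEXT,
            city    TEXT,
            zip     TEXT
        );

        CREATE TABLE student (
            id      INTEGER PRIMARY KEY,
            name    TEXT,
            home    address,              -- column whose type is an object
            grades  INTEGER[]             -- collection attribute, not 1NF-flat
        );
        """

        # Parentheses are required to reach into a composite-typed column.
        QUERY = "SELECT name, (home).city FROM student WHERE (home).zip = '90032';"
        print(DDL, QUERY)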

  7. Implementation of an interactive database interface utilizing HTML, PHP, JavaScript, and MySQL in support of water quality assessments in the Northeastern North Carolina Pasquotank Watershed

    NASA Astrophysics Data System (ADS)

    Guion, A., Jr.; Hodgkins, H.

    2015-12-01

    The Center of Excellence in Remote Sensing Education and Research (CERSER) has implemented three research projects during the summer Research Experience for Undergraduates (REU) program gathering water quality data for local waterways. The data had been compiled manually using pen and paper and then entered into a spreadsheet. With the spread of electronic devices capable of interacting with databases, the development of an electronic method of entering and manipulating the water quality data was pursued during this project. This project focused on the development of an interactive database to gather, display, and analyze data collected from local waterways. The database and entry form were built in MySQL on a PHP server, allowing participants to enter data from anywhere with Internet access. The project then investigated overlaying these data on Google Maps to provide labeling and information to users. The NIA server at http://nia.ecsu.edu is used to host the application for download and for storage of the databases. Water Quality Database Team members included the authors plus Derek Morris Jr., Kathryne Burton and Mr. Jeff Wood as mentor.
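
    The insert path such an entry form follows can be sketched in a few lines. The project used PHP and MySQL, so the Python/sqlite3 version below is only a stand-in showing the same parameterized-insert pattern, with an invented table layout.

        # Stand-in for the PHP/MySQL entry form's insert path, using sqlite3.
        # The table layout is invented; the real project defines its own schema.
        import sqlite3

        db = sqlite3.connect(":memory:")
        db.execute("""CREATE TABLE water_quality (
                          site TEXT, sampled_at TEXT,
                          ph REAL, dissolved_oxygen REAL)""")

        # Parameterized insert: the same pattern a PHP prepared statement uses,
        # protecting a web form against SQL injection.
        db.execute("INSERT INTO water_quality VALUES (?, ?, ?, ?)",
                   ("Pasquotank River", "2015-07-14", 7.2, 8.5))

        for row in db.execute("SELECT * FROM water_quality"):
            print(row)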

  8. USGS Nonindigenous Aquatic Species database with a focus on the introduced fishes of the lower Tennessee and Cumberland drainages

    USGS Publications Warehouse

    Fuller, Pamela L.; Cannister, Matthew; Johansen, Rebecca; Estes, L. Dwayne; Hamilton, Steven W.; Barrass, Andrew N.

    2013-01-01

    The Nonindigenous Aquatic Species (NAS) database (http://nas.er.usgs.gov) functions as a national repository and clearinghouse for occurrence data for introduced species within the United States. Included is locality information on over 1,100 species of vertebrates, invertebrates, and vascular plants introduced as early as 1850. Taxa include foreign (exotic) species and species native to North America that have been transported outside of their natural range. Locality data are obtained from published and unpublished literature, state, federal and local monitoring programs, museum accessions, on-line databases, websites, professional communications and on-line reporting forms. The NAS web site provides immediate access to new occurrence records through a real-time interface with the NAS database. Visitors to the web site are presented with a set of pre-defined queries that generate lists of species according to state or hydrologic basin of interest. Fact sheets, distribution maps, and information on new occurrences are updated as new records and information become available. The NAS database allows resource managers to learn of new introductions reported in their region or nearby regions, improving response time. Conversely, managers are encouraged to report their observations of new occurrences to the NAS database so information can be disseminated to other managers, researchers, and the public. In May 2004, the NAS database incorporated an Alert System to notify registered users of new introductions as part of a national early detection/rapid response system. Users can register to receive alerts based on geographic or taxonomic criteria. The NAS database was used to identify 23 fish species introduced into the lower Tennessee and Cumberland drainages. Most of these are sport fish stocked to support fisheries, but the list also includes accidental and illegal introductions such as Asian Carps, clupeids, various species popular in the aquarium trade, and Atlantic Needlefish (Strongylura marina) that was introduced via the newly-constructed Tennessee-Tombigbee Canal.

  9. Water impacts and water-climate goal conflicts of local energy choices - notes from a Swedish perspective

    NASA Astrophysics Data System (ADS)

    Ericsdotter Engström, Rebecka; Howells, Mark; Destouni, Georgia

    2018-02-01

    To meet both the Paris Agreement on Climate Change and the UN Sustainable Development Goals (SDGs), nations, sectors, counties and cities need to move towards a sustainable energy system in the next couple of decades. Such energy system transformations will impact water resources to varying extents, depending on the transformation strategy and fuel choices. Sweden is considered to be one of the most advanced countries towards meeting the SDGs. This paper explores the geographical origin of and the current water use associated with the supply of energy in the 21 regional counties of Sweden. These energy-related uses of water represent indirect, but still relevant, impacts for water management and the related SDG on clean water and sanitation (SDG 6). These indirect water impacts are here quantified and compared to reported quantifications of direct local water use, as well as to reported greenhouse gas (GHG) emissions, as one example of other types of environmental impacts of local energy choices in each county. For each county, an accounting model is set up based on data for the local energy use in year 2010, and the specific geographical origins and water use associated with these locally used energy carriers (fuels, heat and electricity) are further estimated and mapped based on data reported in the literature and open databases. Results show that most of the water use associated with the local Swedish energy use occurs outside of Sweden. Counties with large shares of liquid biofuel exhibit the largest associated indirect water use in regions outside of Sweden. This indirect water use for energy supply does not unambiguously correlate with either the local direct water use or the local GHG emissions, although for the latter, there is a tendency towards an inverse relation. Overall, the results imply that actions for mitigation of climate change by local energy choices may significantly affect water resources elsewhere. Swedish counties are thus important examples of localities with large geographic zones of water influence due to their local energy choices, which may compromise water security and the possibility to meet water-related global goals in other world regions.
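
    Per county and energy carrier, the accounting model reduces to multiplying the quantity of each carrier used by a water-intensity factor tied to its geographic origin, then summing. The toy numbers below are invented to show the arithmetic only; the paper takes its factors from the literature and open databases.

        # Toy version of the accounting model: indirect water use equals local
        # energy use times an origin-specific water-intensity factor, summed
        # over carriers. All numbers are invented for illustration.
        energy_use_gwh = {"liquid_biofuel": 120.0, "electricity": 800.0}
        water_intensity_m3_per_gwh = {"liquid_biofuel": 90_000.0,  # crop irrigation
                                      "electricity": 2_000.0}      # hydro/nuclear mix

        indirect_water_m3 = sum(energy_use_gwh[c] * water_intensity_m3_per_gwh[c]
                                for c in energy_use_gwh)
        print(f"{indirect_water_m3:,.0f} m3 of water, much of it used elsewhere")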

  10. DECADE Web Portal: Integrating MaGa, EarthChem and GVP Will Further Our Knowledge on Earth Degassing

    NASA Astrophysics Data System (ADS)

    Cardellini, C.; Frigeri, A.; Lehnert, K. A.; Ash, J.; McCormick, B.; Chiodini, G.; Fischer, T. P.; Cottrell, E.

    2014-12-01

    The release of gases from the Earth's interior to the exosphere takes place in both volcanic and non-volcanic areas of the planet. Fully understanding this complex process requires the integration of geochemical, petrological and volcanological data. At present, major online data repositories relevant to studies of degassing are not linked and interoperable. We are developing interoperability between three of those, which will support more powerful synoptic studies of degassing. The three data systems that will make their data accessible via the DECADE portal are: (1) the Smithsonian Institution's Global Volcanism Program database (GVP) of volcanic activity data, (2) EarthChem databases for geochemical and geochronological data of rocks and melt inclusions, and (3) the MaGa database (Mapping Gas emissions) which contains compositional and flux data of gases released at volcanic and non-volcanic degassing sites. These databases are developed and maintained by institutions or groups of experts in a specific field, and data are archived in formats specific to these databases. In the framework of the Deep Earth Carbon Degassing (DECADE) initiative of the Deep Carbon Observatory (DCO), we are developing a web portal that will create a powerful search engine of these databases from a single entry point. The portal will return comprehensive multi-component datasets, based on the search criteria selected by the user. For example, a single geographic or temporal search will return data relating to compositions of emitted gases and erupted products, the age of the erupted products, and coincident activity at the volcano. The development of this level of capability for the DECADE Portal requires complete synergy between these databases, including availability of standard-based web services (WMS, WFS) at all data systems. Data and metadata can thus be extracted from each system without interfering with each database's local schema or being replicated to achieve integration at the DECADE web portal. The DECADE portal will enable new synoptic perspectives on the Earth degassing process. Other data systems can be easily plugged in using the existing framework. Our vision is to explore Earth degassing related datasets over previously unexplored spatial or temporal ranges.
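
    The standards-based access the portal relies on can be illustrated with the most basic OGC request, GetCapabilities, which asks a WMS or WFS server to describe what it serves. The endpoint URL below is a placeholder, not one of the DECADE data systems.

        # Minimal OGC web-service handshake: ask a WFS endpoint what it serves.
        # The endpoint URL is a placeholder, not a DECADE database.
        import requests

        resp = requests.get("https://example.org/geoserver/wfs",
                            params={"service": "WFS",
                                    "request": "GetCapabilities",
                                    "version": "2.0.0"},
                            timeout=30)
        print(resp.status_code, resp.headers.get("Content-Type"))
        # The XML body lists feature types a portal could fetch with GetFeature.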

  11. Technology and the Modern Library.

    ERIC Educational Resources Information Center

    Boss, Richard W.

    1984-01-01

    Overview of the impact of information technology on libraries highlights turnkey vendors, bibliographic utilities, commercial suppliers of records, state and regional networks, computer-to-computer linkages, remote database searching, terminals and microcomputers, building local databases, delivery of information, digital telefacsimile,…

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    The system is developed to collect, process, store and present the information provided by the radio frequency identification (RFID) devices. The system contains three parts: the application software, the database and the web page. The application software manages multiple RFID devices, such as readers and portals, simultaneously. It communicates with the devices through an application programming interface (API) provided by the device vendor. The application software converts data collected by the RFID readers and portals to readable information. It is capable of encrypting data using the 256-bit Advanced Encryption Standard (AES). The application software has a graphical user interface (GUI). The GUI mimics the configurations of the nuclear material storage sites or transport vehicles. The GUI gives the user and system administrator an intuitive way to read the information and/or configure the devices. The application software is capable of sending the information to a remote, dedicated and secured web and database server. Two captured screen samples, one for storage and one for transport, are attached. The database is constructed to handle a large number of RFID tag readers and portals. A SQL server is employed for this purpose. An XML script is used to update the database once the information is sent from the application software. The design of the web page imitates the design of the application software. The web page retrieves data from the database and presents it in different panels. The user needs a user name combined with a password to access the web page. The web page is capable of sending e-mail and text messages based on preset criteria, such as when alarm thresholds are exceeded. A captured screen sample is attached. The application software is designed to be installed on a local computer. The local computer is directly connected to the RFID devices and can be controlled locally or remotely. There are multiple local computers managing different sites or transport vehicles. Control from remote sites and transmission of information to a central database server take place over a secured Internet connection. The information stored in the central database server is shown on the web page. The users can view the web page on the Internet. A dedicated and secured web and database server (https) is used to provide information security.
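
    The alerting behaviour described, e-mail when preset thresholds are exceeded, reduces to a simple check. The sketch below shows that shape with Python's standard smtplib; all hosts, addresses and threshold values are invented, and this is not the system's actual code.

        # Hedged sketch of a threshold alert path: compare a tag reading against
        # a preset limit and e-mail on breach. Hosts and addresses are invented.
        import smtplib
        from email.message import EmailMessage

        TEMPERATURE_LIMIT_C = 40.0   # invented threshold

        def alert_if_exceeded(tag_id, reading_c):
            if reading_c <= TEMPERATURE_LIMIT_C:
                return
            msg = EmailMessage()
            msg["Subject"] = f"RFID alarm: tag {tag_id} at {reading_c} C"
            msg["From"] = "rfid-monitor@example.org"
            msg["To"] = "operator@example.org"
            msg.set_content("Threshold exceeded; check the storage site console.")
            with smtplib.SMTP("mail.example.org") as smtp:  # invented mail host
                smtp.send_message(msg)

        alert_if_exceeded("TAG-0042", 39.0)   # below the limit: no mail is sent
        # A reading of 41.5 would attempt delivery via the (invented) mail host.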

  13. Global effects of land use on local terrestrial biodiversity.

    PubMed

    Newbold, Tim; Hudson, Lawrence N; Hill, Samantha L L; Contu, Sara; Lysenko, Igor; Senior, Rebecca A; Börger, Luca; Bennett, Dominic J; Choimes, Argyrios; Collen, Ben; Day, Julie; De Palma, Adriana; Díaz, Sandra; Echeverria-Londoño, Susy; Edgar, Melanie J; Feldman, Anat; Garon, Morgan; Harrison, Michelle L K; Alhusseini, Tamera; Ingram, Daniel J; Itescu, Yuval; Kattge, Jens; Kemp, Victoria; Kirkpatrick, Lucinda; Kleyer, Michael; Correia, David Laginha Pinto; Martin, Callum D; Meiri, Shai; Novosolov, Maria; Pan, Yuan; Phillips, Helen R P; Purves, Drew W; Robinson, Alexandra; Simpson, Jake; Tuck, Sean L; Weiher, Evan; White, Hannah J; Ewers, Robert M; Mace, Georgina M; Scharlemann, Jörn P W; Purvis, Andy

    2015-04-02

    Human activities, especially conversion and degradation of habitats, are causing global biodiversity declines. How local ecological assemblages are responding is less clear--a concern given their importance for many ecosystem functions and services. We analysed a terrestrial assemblage database of unprecedented geographic and taxonomic coverage to quantify local biodiversity responses to land use and related changes. Here we show that in the worst-affected habitats, these pressures reduce within-sample species richness by an average of 76.5%, total abundance by 39.5% and rarefaction-based richness by 40.3%. We estimate that, globally, these pressures have already slightly reduced average within-sample richness (by 13.6%), total abundance (10.7%) and rarefaction-based richness (8.1%), with changes showing marked spatial variation. Rapid further losses are predicted under a business-as-usual land-use scenario; within-sample richness is projected to fall by a further 3.4% globally by 2100, with losses concentrated in biodiverse but economically poor countries. Strong mitigation can deliver much more positive biodiversity changes (up to a 1.9% average increase) that are less strongly related to countries' socioeconomic status.

  14. Global effects of land use on local terrestrial biodiversity

    NASA Astrophysics Data System (ADS)

    Newbold, Tim; Hudson, Lawrence N.; Hill, Samantha L. L.; Contu, Sara; Lysenko, Igor; Senior, Rebecca A.; Börger, Luca; Bennett, Dominic J.; Choimes, Argyrios; Collen, Ben; Day, Julie; de Palma, Adriana; Díaz, Sandra; Echeverria-Londoño, Susy; Edgar, Melanie J.; Feldman, Anat; Garon, Morgan; Harrison, Michelle L. K.; Alhusseini, Tamera; Ingram, Daniel J.; Itescu, Yuval; Kattge, Jens; Kemp, Victoria; Kirkpatrick, Lucinda; Kleyer, Michael; Correia, David Laginha Pinto; Martin, Callum D.; Meiri, Shai; Novosolov, Maria; Pan, Yuan; Phillips, Helen R. P.; Purves, Drew W.; Robinson, Alexandra; Simpson, Jake; Tuck, Sean L.; Weiher, Evan; White, Hannah J.; Ewers, Robert M.; Mace, Georgina M.; Scharlemann, Jörn P. W.; Purvis, Andy

    2015-04-01

    Human activities, especially conversion and degradation of habitats, are causing global biodiversity declines. How local ecological assemblages are responding is less clear--a concern given their importance for many ecosystem functions and services. We analysed a terrestrial assemblage database of unprecedented geographic and taxonomic coverage to quantify local biodiversity responses to land use and related changes. Here we show that in the worst-affected habitats, these pressures reduce within-sample species richness by an average of 76.5%, total abundance by 39.5% and rarefaction-based richness by 40.3%. We estimate that, globally, these pressures have already slightly reduced average within-sample richness (by 13.6%), total abundance (10.7%) and rarefaction-based richness (8.1%), with changes showing marked spatial variation. Rapid further losses are predicted under a business-as-usual land-use scenario; within-sample richness is projected to fall by a further 3.4% globally by 2100, with losses concentrated in biodiverse but economically poor countries. Strong mitigation can deliver much more positive biodiversity changes (up to a 1.9% average increase) that are less strongly related to countries' socioeconomic status.

  15. Mass-storage management for distributed image/video archives

    NASA Astrophysics Data System (ADS)

    Franchi, Santina; Guarda, Roberto; Prampolini, Franco

    1993-04-01

    The realization of an image/video database requires a specific design for both database structures and mass storage management. This issue was addressed in the design of the digital image/video database system developed at the IBM SEMEA Scientific & Technical Solution Center. Proper database structures have been defined to catalog image/video coding techniques with their related parameters, and to describe image/video contents. User workstations and servers are distributed along a local area network. Image/video files are not managed directly by the DBMS server. Because of their large size, they are stored outside the database on network devices. The database contains the pointers to the image/video files and the description of the storage devices. The system can use different kinds of storage media, organized in a hierarchical structure. Three levels of functions are available to manage the storage resources. The functions of the lower level provide media management. They allow cataloging of devices and modification of device status and device network location. The middle level manages image/video files on a physical basis, handling file migration between high-capacity media and low-access-time media. The functions of the upper level work on image/video files on a logical basis, as they archive, move and copy image/video data selected by user-defined queries. These functions are used to support the implementation of a storage management strategy. The database information about the characteristics of both storage devices and coding techniques is used by the third-level functions to fit delivery/visualization requirements and to reduce archiving costs.

  16. HAEdb: a novel interactive, locus-specific mutation database for the C1 inhibitor gene.

    PubMed

    Kalmár, Lajos; Hegedüs, Tamás; Farkas, Henriette; Nagy, Melinda; Tordai, Attila

    2005-01-01

    Hereditary angioneurotic edema (HAE) is an autosomal dominant disorder characterized by episodic local subcutaneous and submucosal edema and is caused by the deficiency of the activated C1 esterase inhibitor protein (C1-INH or C1INH; approved gene symbol SERPING1). Published C1-INH mutations are represented in large universal databases (e.g., OMIM, HGMD), but these databases update their data rather infrequently, they are not interactive, and they do not allow searches according to different criteria. The HAEdb, a C1-INH gene mutation database (http://hae.biomembrane.hu), was created to meet the following expectations: 1) help the comprehensive collection of information on genetic alterations of the C1-INH gene; 2) create a database in which data can be searched and compared according to several flexible criteria; and 3) provide additional help in new mutation identification. The website uses MySQL, an open-source, multithreaded, relational database management system. The user-friendly graphical interface was written in the PHP web programming language. The website consists of two main parts, the freely browsable search function and the password-protected data deposition function. Mutations of the C1-INH gene are divided into two classes: gross mutations involving DNA fragments >1 kb, and micro mutations encompassing all non-gross mutations. Several attributes (e.g., affected exon, molecular consequence, family history) are collected for each mutation in a standardized form. This database may facilitate future comprehensive analyses of C1-INH mutations and also provide regular help for molecular diagnostic testing of HAE patients in different centers.
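
    A flexible-criteria search of this kind is, in SQL terms, a WHERE clause composed with logical operators over textual and structural attributes. The sketch below shows the shape with an invented table, not HAEdb's actual schema.

        # Sketch of a combined-criteria mutation search of the kind HAEdb offers.
        # Table and column names are invented, not the site's actual schema.
        import sqlite3

        db = sqlite3.connect(":memory:")
        db.execute("""CREATE TABLE micro_mutation (
                          id INTEGER PRIMARY KEY, exon INTEGER,
                          consequence TEXT, family_history INTEGER)""")
        db.executemany("INSERT INTO micro_mutation VALUES (?, ?, ?, ?)",
                       [(1, 8, "missense", 1), (2, 3, "nonsense", 0),
                        (3, 8, "frameshift", 1)])

        # Logical operators combine numeric and textual criteria.
        rows = db.execute("""SELECT id, consequence FROM micro_mutation
                             WHERE exon = 8 AND family_history = 1
                               AND consequence != 'missense'""").fetchall()
        print(rows)   # -> [(3, 'frameshift')]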

  17. Sociodemographic Disparities in Local Smoke-Free Law Coverage in 10 States.

    PubMed

    Huang, Jidong; King, Brian A; Babb, Stephen D; Xu, Xin; Hallett, Cynthia; Hopkins, Maggie

    2015-09-01

    We assessed sociodemographic disparities in local 100% smoke-free laws prohibiting smoking in all indoor areas of nonhospitality worksites, restaurants, and bars in 10 states. We obtained data on local 100% smoke-free laws (US Tobacco Control Laws Database) and subcounty characteristics (2006-2010 American Community Survey) for Alabama, Alaska, Indiana, Kentucky, Mississippi, Missouri, North Dakota, South Carolina, Texas, and West Virginia. Outcomes included (1) 100% smoke-free law covering restaurants, bars, and workplaces; (2) 100% smoke-free law covering restaurants, bars, or workplaces; and (3) number of venue types covered by 100% smoke-free laws (0-3). Sociodemographics included total population, urban status, percentage racial/ethnic minority, per capita income, percentage with high-school diploma, percentage with blue-collar jobs, and percentage of workers who live and work in the same locality. Across states, localities with less-educated residents, smaller proportions of workers living and working in the same locality, or both generally had lower odds of being covered by 100% smoke-free laws. Coverage varied across states for other sociodemographics. Disparities exist in local smoke-free law coverage. Identifying patterns in coverage can inform state efforts to address related disparities.

  18. Sociodemographic Disparities in Local Smoke-Free Law Coverage in 10 States

    PubMed Central

    Huang, Jidong; Babb, Stephen D.; Xu, Xin; Hallett, Cynthia; Hopkins, Maggie

    2015-01-01

    Objectives. We assessed sociodemographic disparities in local 100% smoke-free laws prohibiting smoking in all indoor areas of nonhospitality worksites, restaurants, and bars in 10 states. Methods. We obtained data on local 100% smoke-free laws (US Tobacco Control Laws Database) and subcounty characteristics (2006–2010 American Community Survey) for Alabama, Alaska, Indiana, Kentucky, Mississippi, Missouri, North Dakota, South Carolina, Texas, and West Virginia. Outcomes included (1) 100% smoke-free law covering restaurants, bars, and workplaces; (2) 100% smoke-free law covering restaurants, bars, or workplaces; and (3) number of venue types covered by 100% smoke-free laws (0–3). Sociodemographics included total population, urban status, percentage racial/ethnic minority, per capita income, percentage with high-school diploma, percentage with blue-collar jobs, and percentage of workers who live and work in the same locality. Results. Across states, localities with less-educated residents, smaller proportions of workers living and working in the same locality, or both generally had lower odds of being covered by 100% smoke-free laws. Coverage varied across states for other sociodemographics. Conclusions. Disparities exist in local smoke-free law coverage. Identifying patterns in coverage can inform state efforts to address related disparities. PMID:26180972

  19. Design and implementation of a database for Brucella melitensis genome annotation.

    PubMed

    De Hertogh, Benoît; Lahlimi, Leïla; Lambert, Christophe; Letesson, Jean-Jacques; Depiereux, Eric

    2008-03-18

    The genome sequences of three Brucella biovars and of some species close to Brucella sp. have become available, leading to new relationship analyses. Moreover, the automatic genome annotation of the pathogenic bacterium Brucella melitensis has been manually corrected by a consortium of experts, leading to 899 modifications of start-site predictions among the 3198 open reading frames (ORFs) examined. This new annotation, coupled with the results of automatic annotation tools applied to the complete B. melitensis genome sequence (including BLASTs against 9 genomes close to Brucella), provides numerous data sets related to predicted functions, biochemical properties and phylogenetic comparisons. To make these results available, alphaPAGe, a functional, auto-updatable database of the corrected genome sequence of B. melitensis, has been built using the entity-relationship (ER) approach and a multi-purpose database structure. A user-friendly graphical interface has been designed, and users can retrieve different kinds of information through three levels of queries: (1) the basic search uses classical keywords or sequence identifiers; (2) the original advanced search engine allows numerous criteria to be combined (using logical operators): (a) keywords (textual comparison) related to the pCDS's function, family domains and cellular localization; (b) physico-chemical characteristics (numerical comparison) such as isoelectric point or molecular weight, and structural criteria such as the nucleic length or the number of transmembrane helices (TMH); (c) similarity scores with Escherichia coli and 10 species phylogenetically close to B. melitensis; (3) complex queries can be performed by using a SQL field, which allows any query respecting the database's structure. The database is publicly available through a Web server at the following URL: http://www.fundp.ac.be/urbm/bioinfo/aPAGe.

  20. Changes in Publication-Based Academic Interest in Local Anesthetics Over the Past 50 Years.

    PubMed

    Vlassakov, Kamen V; Kissin, Igor

    2016-07-01

    To present the history of changes in academic interest in local anesthetics quantitatively. The changes in publication-based academic interest in local anesthetics were assessed using information from the database of PubMed. The assessment was mostly based on the following indices: general popularity index (GPI), representing the proportion of articles on a drug relative to all articles in the field of regional anesthesia, and specific popularity index (SPI), representing the proportion of articles on a drug relative to all articles in one of the four forms of regional anesthesia: local anesthesia, spinal anesthesia, epidural anesthesia, and peripheral nerve blocks. The most important general feature of the changes in publication-based academic interest in local anesthetics for the past 50 years was the concentration of this interest on a very limited number of drugs. By 2010-2014, only three anesthetics demonstrated the GPI value above 4.0: bupivacaine (10.1), lidocaine (10.0), and ropivacaine (4.6). All other local anesthetics had GPI declining mostly to less than 1.0 (2010-2014). The rate of change in publication-based academic interest was very slow in both its increase and decline. The most profound change in publication-based academic interests was caused by the introduction of bupivacaine. During a 20-year period (from 1965-1969 to 1985-1989), bupivacaine's GPI increased from 1.3 to 12.9. A slowly developing concentration of publication-based academic interest on a very limited number of local anesthetics was the dominant feature related to this class of anesthetic agents. Copyright © 2016 Anesthesia History Association. Published by Elsevier Inc. All rights reserved.
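
    Both indices are plain proportions, so a worked example makes them concrete. The counts below are invented (the study drew its numbers from PubMed); the reported values such as a GPI of 10.1 for bupivacaine are consistent with reading the indices as percentages.

        # Worked example of the two popularity indices defined in the abstract.
        # Counts are invented; PubMed queries supplied the real numbers.
        def popularity_index(articles_on_drug, articles_in_field):
            """Percentage of field articles that concern the drug."""
            return 100.0 * articles_on_drug / articles_in_field

        gpi = popularity_index(50, 500)   # vs. all regional-anesthesia articles
        spi = popularity_index(30, 120)   # vs. one form, e.g. epidural anesthesia
        print(gpi, spi)                   # -> 10.0 25.0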

  1. The coupling between flame surface dynamics and species mass conservation in premixed turbulent combustion

    NASA Technical Reports Server (NTRS)

    Trouve, A.; Veynante, D.; Bray, K. N. C.; Mantel, T.

    1994-01-01

    Current flamelet models based on a description of the flame surface dynamics require the closure of two inter-related equations: a transport equation for the mean reaction progress variable, $\tilde{c}$, and a transport equation for the flame surface density, $\Sigma$. The coupling between these two equations is investigated using direct numerical simulations (DNS), with emphasis on the correlation between the turbulent flux of $\tilde{c}$, $\overline{\rho u'' c''}$, and that of $\Sigma$, $\langle u'' \rangle_S \Sigma$. Two different DNS databases are used in the present work: a database developed at CTR by A. Trouve and a database developed by C. J. Rutland using a different code. Both databases correspond to statistically one-dimensional premixed flames in isotropic turbulent flow. The run parameters, however, are significantly different, and the two databases correspond to different combustion regimes. It is found that in all simulated flames, the correlation between $\overline{\rho u'' c''}$ and $\langle u'' \rangle_S \Sigma$ is always strong. The sign, however, of the turbulent flux of $\tilde{c}$ or $\Sigma$ with respect to the mean gradients, $\partial \tilde{c}/\partial x$ or $\partial \Sigma/\partial x$, is case-dependent. The CTR database is found to exhibit gradient turbulent transport of $\tilde{c}$ and $\Sigma$, whereas the Rutland DNS features counter-gradient diffusion. The two databases are analyzed and compared using various tools (a local analysis of the flow field near the flame, a classical analysis of the conservation equation for $\widetilde{u'' c''}$, and a thin flame theoretical analysis). A mechanism is then proposed to explain the discrepancies between the two databases, and a preliminary simple criterion is derived to predict the occurrence of gradient/counter-gradient turbulent diffusion.
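
    For readers unfamiliar with the symbols, the two coupled transport equations referred to can be written in a standard textbook form, shown below. These are generic flamelet-model closures consistent with the quantities named in the abstract, not necessarily the exact forms closed in either DNS study; $\rho_u$, $s_L$, $I_0$ and the surface-averaged stretch $\langle \kappa \rangle_S$ are standard notation introduced here for illustration.

        \[
          \frac{\partial \bar{\rho}\tilde{c}}{\partial t}
            + \nabla \cdot \left( \bar{\rho}\tilde{\mathbf{u}}\tilde{c} \right)
            = -\nabla \cdot \overline{\rho \mathbf{u}'' c''} + \rho_u s_L I_0 \,\Sigma,
          \qquad
          \frac{\partial \Sigma}{\partial t}
            + \nabla \cdot \left[ \left( \tilde{\mathbf{u}} + \langle \mathbf{u}'' \rangle_S \right) \Sigma \right]
            = \langle \kappa \rangle_S \,\Sigma,
        \]

    where $\overline{\rho \mathbf{u}'' c''}$ is the turbulent flux of $\tilde{c}$ and $\langle \mathbf{u}'' \rangle_S \Sigma$ the turbulent flux of $\Sigma$; the correlation between these two fluxes, and the sign of each against the mean gradients, is what the simulations examine.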

  2. Transcriptome analysis in Concholepas concholepas (Gastropoda, Muricidae): mining and characterization of new genomic and molecular markers.

    PubMed

    Cárdenas, Leyla; Sánchez, Roland; Gomez, Daniela; Fuenzalida, Gonzalo; Gallardo-Escárate, Cristián; Tanguy, Arnaud

    2011-09-01

    The marine gastropod Concholepas concholepas, locally known as the "loco", is the main target species of the benthonic Chilean fisheries. Genetic and genomic tools are necessary to study the genome of this species in order to understand the molecular basis of its development, growth, and other key traits to improve the management strategies and to identify local adaptation to prevent loss of biodiversity. Here, we use pyrosequencing technologies to generate the first transcriptomic database from adult specimens of the loco. After trimming, a total of 140,756 Expressed Sequence Tag sequences were achieved. Clustering and assembly analysis identified 19,219 contigs and 105,435 singleton sequences. BlastN analysis showed a significant identity with Expressed Sequence Tags of different gastropod species available in public databases. Similarly, BlastX results showed that only 895 out of the total 124,654 had significant hits and may represent novel genes for marine gastropods. From this database, simple sequence repeat motifs were also identified and a total of 38 primer pairs were designed and tested to assess their potential as informative markers and to investigate their cross-species amplification in different related gastropod species. This dataset represents the first publicly available 454 data for a marine gastropod endemic to the southeastern Pacific coast, providing a valuable transcriptomic resource for future efforts of gene discovery and development of functional markers in other marine gastropods. Copyright © 2011 Elsevier B.V. All rights reserved.

  3. Identifying work-related motor vehicle crashes in multiple databases.

    PubMed

    Thomas, Andrea M; Thygerson, Steven M; Merrill, Ray M; Cook, Lawrence J

    2012-01-01

    To compare and estimate the magnitude of work-related motor vehicle crashes in Utah using 2 probabilistically linked statewide databases. Data from 2006 and 2007 motor vehicle crash and hospital databases were joined through probabilistic linkage. Summary statistics and capture-recapture were used to describe occupants injured in work-related motor vehicle crashes and to estimate the size of this population. There were 1597 occupants in the motor vehicle crash database and 1673 patients in the hospital database identified as being in a work-related motor vehicle crash. We identified 1443 occupants with at least one record from either the motor vehicle crash or hospital database indicating work-relatedness that linked to any record in the opposing database. We found that 38.7 percent of occupants injured in work-related motor vehicle crashes identified in the motor vehicle crash database did not have a primary payer code of workers' compensation in the hospital database, and 40.0 percent of patients injured in work-related motor vehicle crashes identified in the hospital database did not meet our definition of a work-related motor vehicle crash in the motor vehicle crash database. Depending on how occupants injured in work-related motor vehicle crashes are identified, we estimate the population to be between 1852 and 8492 in Utah for the years 2006 and 2007. Research on single databases may lead to biased interpretations of work-related motor vehicle crashes. Combining 2 population-based databases may still result in an underestimate of the magnitude of work-related motor vehicle crashes. Improved coding of work-related incidents is needed in current databases.
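
    The capture-recapture step lends itself to a one-function sketch. Using the Chapman form of the two-source estimator with the counts quoted in the abstract, and assuming for illustration that the 1443 linked occupants play the role of the overlap, the low-end population estimate of roughly 1852 can be reproduced:

    ```python
    # Two-source capture-recapture via the Chapman estimator. Counts are taken
    # from the abstract; treating the 1443 linked occupants as the overlap m is
    # an illustrative reading, not necessarily the authors' exact definition.
    def chapman_estimate(n1, n2, m):
        """Chapman's nearly unbiased estimator of total population size."""
        return (n1 + 1) * (n2 + 1) / (m + 1) - 1

    n_crash = 1597   # work-related occupants in the crash database
    n_hosp = 1673    # work-related patients in the hospital database
    m_linked = 1443  # occupants linked across the two databases

    print(round(chapman_estimate(n_crash, n_hosp, m_linked)))  # ~1852
    ```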

  4. Performance assessment of EMR systems based on post-relational database.

    PubMed

    Yu, Hai-Yan; Li, Jing-Song; Zhang, Xiao-Guang; Tian, Yu; Suzuki, Muneou; Araki, Kenji

    2012-08-01

    Post-relational databases provide high performance and are currently widely used in American hospitals. As few hospital information systems (HIS) in either China or Japan are based on post-relational databases, here we introduce a new-generation electronic medical records (EMR) system called Hygeia, which was developed with the post-relational database Caché and the latest platform Ensemble. Utilizing the benefits of a post-relational database, Hygeia is equipped with an "integration" feature that allows all system users to access data, with a fast response time, anywhere and at any time. Performance tests of databases in EMR systems were implemented in both China and Japan. First, a comparison test was conducted between a post-relational database, Caché, and a relational database, Oracle, embedded in the EMR systems of a medium-sized first-class hospital in China. Second, a user terminal test was done on the EMR system Izanami, which is based on the same Caché database and operates efficiently at the Miyazaki University Hospital in Japan. The results proved that the post-relational database Caché works faster than the relational database Oracle and showed perfect performance in the real-time EMR system.
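
    A comparison test of this kind reduces, in essence, to timing the same query against two ODBC data sources. The sketch below is a generic harness, not the authors' protocol; the DSN names and the table and columns are hypothetical placeholders.

    ```python
    import time
    import pyodbc  # generic ODBC client; both DSNs below are hypothetical

    SQL = "SELECT patient_id, admit_date FROM emr_records WHERE ward = 'ICU'"

    def mean_latency(dsn, sql, runs=100):
        """Average wall-clock seconds per execution of `sql` against `dsn`."""
        conn = pyodbc.connect(dsn)
        cur = conn.cursor()
        start = time.perf_counter()
        for _ in range(runs):
            cur.execute(sql).fetchall()
        elapsed = time.perf_counter() - start
        conn.close()
        return elapsed / runs

    for name, dsn in [("Cache", "DSN=cache_emr"), ("Oracle", "DSN=oracle_emr")]:
        print(f"{name}: {mean_latency(dsn, SQL):.4f} s/query")
    ```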

  5. Comprehensive Routing Security Development and Deployment for the Internet

    DTIC Science & Technology

    2015-02-01

    feature enhancement and bug fixes. • MySQL: MySQL is a widely used and popular open-source database package. It was chosen for database support in the... RPSTIR depends on several other open-source packages. • MySQL: MySQL is used for the local RPKI database cache. • OpenSSL: OpenSSL is used for... cryptographic libraries for X.509 certificates. • ODBC MySQL Connector: ODBC (Open Database Connectivity) is a standard programming interface (API) for

  6. Globally maximizing, locally minimizing: unsupervised discriminant projection with applications to face and palm biometrics.

    PubMed

    Yang, Jian; Zhang, David; Yang, Jing-Yu; Niu, Ben

    2007-04-01

    This paper develops an unsupervised discriminant projection (UDP) technique for dimensionality reduction of high-dimensional data in small-sample-size cases. UDP can be seen as a linear approximation of a multi-manifold learning framework that takes into account both local and nonlocal quantities. UDP characterizes the local scatter as well as the nonlocal scatter, seeking a projection that simultaneously maximizes the nonlocal scatter and minimizes the local scatter. This characteristic makes UDP more intuitive and more powerful than the most up-to-date method, Locality Preserving Projection (LPP), which considers only the local scatter for clustering or classification tasks. The proposed method is applied to face and palm biometrics and is examined using the Yale, FERET, and AR face image databases and the PolyU palmprint database. The experimental results show that UDP consistently outperforms LPP and PCA and outperforms LDA when the training sample size per class is small. This demonstrates that UDP is a good choice for real-world biometrics applications.
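
    The core computation can be sketched in a few lines: build local scatter from a k-nearest-neighbor adjacency, take nonlocal scatter as the remainder of the total, and solve the resulting generalized eigenproblem. The choices of k, weighting, and regularization below are illustrative assumptions, not the paper's exact construction.

    ```python
    import numpy as np
    from scipy.linalg import eigh
    from scipy.spatial.distance import pdist, squareform

    # Sketch of the UDP idea: local scatter over a k-NN graph, nonlocal scatter
    # as the remainder of the total, then a generalized eigenproblem that
    # maximizes the nonlocal-to-local ratio.
    def udp_projection(X, k=5, n_components=2, reg=1e-6):
        n, d = X.shape
        D = squareform(pdist(X))
        H = np.zeros((n, n))                      # k-NN adjacency, symmetrized
        for i in range(n):
            H[i, np.argsort(D[i])[1:k + 1]] = 1.0
        H = np.maximum(H, H.T)

        diffs = X[:, None, :] - X[None, :, :]     # pairwise difference vectors
        pair_outer = np.einsum('ijk,ijl->ijkl', diffs, diffs)
        S_local = np.einsum('ij,ijkl->kl', H, pair_outer) / (2.0 * n * n)
        S_total = pair_outer.sum(axis=(0, 1)) / (2.0 * n * n)
        S_nonlocal = S_total - S_local

        # maximize w^T S_nonlocal w / w^T S_local w
        vals, vecs = eigh(S_nonlocal, S_local + reg * np.eye(d))
        return vecs[:, ::-1][:, :n_components]    # leading eigenvectors

    W = udp_projection(np.random.default_rng(0).normal(size=(60, 4)))
    print(W.shape)  # (4, 2): projects 4-dimensional data to 2 dimensions
    ```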

  7. User's manual for the national water information system of the U.S. Geological Survey: Ground-water site-inventory system

    USGS Publications Warehouse

    ,

    2004-01-01

    The Ground-Water Site-Inventory (GWSI) System is a ground-water data storage and retrieval system that is part of the National Water Information System (NWIS) developed by the U.S. Geological Survey (USGS). The NWIS is a distributed water database in which data can be processed over a network of workstations and file servers at USGS offices throughout the United States. This system comprises the GWSI, the Automated Data Processing System (ADAPS), the Water-Quality System (QWDATA), and the Site-Specific Water-Use Data System (SWUDS). The GWSI System provides for entering new sites and updating existing sites within the local database. In addition, the GWSI provides for retrieving and displaying ground-water and sitefile data stored in the local database. Finally, the GWSI provides for routine maintenance of the local and national data records. This manual contains instructions for users of the GWSI and discusses the general operating procedures for the programs found within the GWSI Main Menu.

  8. User's Manual for the National Water Information System of the U.S. Geological Survey: Ground-water site-inventory system

    USGS Publications Warehouse

    ,

    2005-01-01

    The Ground-Water Site-Inventory (GWSI) System is a ground-water data storage and retrieval system that is part of the National Water Information System (NWIS) developed by the U.S. Geological Survey (USGS). The NWIS is a distributed water database in which data can be processed over a network of workstations and file servers at USGS offices throughout the United States. This system comprises the GWSI, the Automated Data Processing System (ADAPS), the Water-Quality System (QWDATA), and the Site-Specific Water-Use Data System (SWUDS). The GWSI System provides for entering new sites and updating existing sites within the local database. In addition, the GWSI provides for retrieving and displaying groundwater and Sitefile data stored in the local database. Finally, the GWSI provides for routine maintenance of the local and national data records. This manual contains instructions for users of the GWSI and discusses the general operating procedures for the programs found within the GWSI Main Menu.

  9. Integrating the Allen Brain Institute Cell Types Database into Automated Neuroscience Workflow.

    PubMed

    Stockton, David B; Santamaria, Fidel

    2017-10-01

    We developed software tools to download, extract features, and organize the Cell Types Database from the Allen Brain Institute (ABI) in order to integrate its whole cell patch clamp characterization data into the automated modeling/data analysis cycle. To expand the potential user base we employed both Python and MATLAB. The basic set of tools downloads selected raw data and extracts cell, sweep, and spike features, using ABI's feature extraction code. To facilitate data manipulation we added a tool to build a local specialized database of raw data plus extracted features. Finally, to maximize automation, we extended our NeuroManager workflow automation suite to include these tools plus a separate investigation database. The extended suite allows the user to integrate ABI experimental and modeling data into an automated workflow deployed on heterogeneous computer infrastructures, from local servers, to high performance computing environments, to the cloud. Since our approach is focused on workflow procedures our tools can be modified to interact with the increasing number of neuroscience databases being developed to cover all scales and properties of the nervous system.
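
    The "local specialized database" step can be pictured as nothing more exotic than a small relational store of extracted features keyed by cell and sweep. A minimal sketch with sqlite3 follows; the table layout, feature names, and IDs are invented, not the actual NeuroManager schema.

    ```python
    import sqlite3

    # Store per-sweep features extracted from downloaded electrophysiology
    # data in a local SQLite database. Schema and values are illustrative.
    conn = sqlite3.connect("cell_types_local.db")
    conn.execute("""CREATE TABLE IF NOT EXISTS sweep_features (
        cell_id INTEGER, sweep_number INTEGER,
        spike_count INTEGER, mean_rate_hz REAL,
        PRIMARY KEY (cell_id, sweep_number))""")

    def store_sweep(cell_id, sweep_number, spike_times_s, duration_s):
        conn.execute("INSERT OR REPLACE INTO sweep_features VALUES (?, ?, ?, ?)",
                     (cell_id, sweep_number, len(spike_times_s),
                      len(spike_times_s) / duration_s))
        conn.commit()

    store_sweep(123456, 35, [0.12, 0.48, 0.91], 2.0)  # made-up cell/sweep IDs
    for row in conn.execute("SELECT * FROM sweep_features WHERE mean_rate_hz > 1"):
        print(row)
    ```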

  10. The Western New York Health Resources Project: developing access to local health information.

    PubMed

    Gray, S A; O'Shea, R; Petty, M E; Loonsk, J

    1998-07-01

    The Western New York Health Resources Project was created to fill a gap in online access to local health information resources describing the health of a defined geographic area. The project sought to identify and describe information scattered among many institutions, agencies, and individuals, and to create a database that would be widely accessible. The project proceeded in three phases with initial phases supported by grant funding. This paper describes the database development and selection of content, and concludes that a national online network of local health data representing the various geographic regions of the United States would contribute to the quality of health care in general.

  11. The Western New York Health Resources Project: developing access to local health information.

    PubMed Central

    Gray, S A; O'Shea, R; Petty, M E; Loonsk, J

    1998-01-01

    The Western New York Health Resources Project was created to fill a gap in online access to local health information resources describing the health of a defined geographic area. The project sought to identify and describe information scattered among many institutions, agencies, and individuals, and to create a database that would be widely accessible. The project proceeded in three phases with initial phases supported by grant funding. This paper describes the database development and selection of content, and concludes that a national online network of local health data representing the various geographic regions of the United States would contribute to the quality of health care in general. PMID:9681168

  12. Tax levy financing for local public health: fiscal allocation, effort, and capacity.

    PubMed

    Riley, William J; Gearin, Kimberly J; Parrotta, Carmen D; Briggs, Jill; Gyllstrom, M Elizabeth

    2013-12-01

    Local health departments (LHDs) rely on a wide variety of funding sources, and the level of financing is associated with both LHD performance in essential public health services and population health outcomes. Although it has been shown that funding sources vary across LHDs, there is no evidence regarding the relationship between fiscal allocation (local tax levy); fiscal effort (tax capacity); and fiscal capacity (community wealth). The purpose of this study is to analyze local tax levy support for LHD funding. Three research questions are addressed: (1) What are tax levy trends in LHD fiscal allocation? (2) What is the role of tax levy in overall LHD financing? and (3) How do local community fiscal capacity and fiscal effort relate to LHD tax levy fiscal allocation? This study focuses on 74 LHDs eligible for local tax levy funding in Minnesota. Funding and expenditure data for 5 years (2006 to 2010) were compiled from four governmental databases, including the Minnesota Department of Health, the State Auditor, the State Demographer, and the Metropolitan Council. Trends in various funding sources and expenditures are described for the time frame of interest. Data were analyzed in 2012. During the 2006-2010 time period, total average LHD per capita expenditures increased 13%, from $50.98 to $57.63. Although the overall tax levy increase in Minnesota was 25%, the local tax levy for public health increased 5.6% during the same period. There is a direct relationship between fiscal effort and LHD expenditures. Local funding reflects LHD community priorities and the relative importance in comparison to funding other local programs with tax dollars. In Minnesota, local tax levy support for local public health services is not keeping pace with local tax support for other local government services. These results raise important questions about the relationship between tax levy resource effort, resource allocation, and fiscal capacity as they relate to public health spending in local communities. © 2013 Published by American Journal of Preventive Medicine on behalf of American Journal of Preventive Medicine.

  13. Efficient Data Mining for Local Binary Pattern in Texture Image Analysis

    PubMed Central

    Kwak, Jin Tae; Xu, Sheng; Wood, Bradford J.

    2015-01-01

    Local binary pattern (LBP) is a simple gray-scale descriptor that characterizes the local distribution of grey levels in an image. Multi-resolution LBP and/or combinations of LBPs have been shown to be effective in texture image analysis. However, it is unclear which resolutions or combinations to choose for texture analysis. Examining all possible cases is impractical and intractable due to the exponential growth of the feature space. This limits the accuracy and the time- and space-efficiency of LBP. Here, we propose a data mining approach for LBP, which efficiently explores a high-dimensional feature space and finds a relatively small number of discriminative features. The features can be any combinations of LBPs. These may not be achievable with conventional approaches. Hence, our approach not only fully utilizes the capability of LBP but also maintains low computational complexity. We incorporated three different descriptors (LBP, local contrast measure, and local directional derivative measure) with three spatial resolutions and evaluated our approach using two comprehensive texture databases. The results demonstrated the effectiveness and robustness of our approach across different experimental designs and texture images. PMID:25767332
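
    As a reminder of the primitive being mined, the basic 8-neighbor LBP code for each pixel is a thresholded comparison against its neighbors, packed into a byte. A plain-numpy sketch of textbook LBP only, not the paper's mining procedure:

    ```python
    import numpy as np

    # Textbook 8-neighbor local binary pattern: the building block the paper
    # combines across descriptors and spatial resolutions.
    def lbp8(img):
        img = np.asarray(img, dtype=float)
        center = img[1:-1, 1:-1]
        codes = np.zeros(center.shape, dtype=np.int32)
        neighbors = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                     (1, 1), (1, 0), (1, -1), (0, -1)]  # clockwise ring
        h, w = img.shape
        for bit, (dy, dx) in enumerate(neighbors):
            shifted = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
            codes += (shifted >= center).astype(np.int32) << bit
        return codes

    patch = np.random.default_rng(1).integers(0, 256, size=(8, 8))
    hist = np.bincount(lbp8(patch).ravel(), minlength=256)  # 256-bin histogram
    print(hist.sum())  # 36 codes: the 6x6 interior of an 8x8 patch
    ```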

  14. NLSdb-major update for database of nuclear localization signals and nuclear export signals.

    PubMed

    Bernhofer, Michael; Goldberg, Tatyana; Wolf, Silvana; Ahmed, Mohamed; Zaugg, Julian; Boden, Mikael; Rost, Burkhard

    2018-01-04

    NLSdb is a database collecting nuclear export signals (NES) and nuclear localization signals (NLS) along with experimentally annotated nuclear and non-nuclear proteins. NES and NLS are short sequence motifs related to protein transport out of and into the nucleus. The updated NLSdb now contains 2,253 NLS and introduces 398 NES. The potential sets of novel NES and NLS have been generated by a simple 'in silico mutagenesis' protocol. We started with motifs annotated by experiments. In step 1, we increased specificity such that no known non-nuclear protein matched the refined motif. In step 2, we increased sensitivity, trying to match several different families with a motif. We then iterated over steps 1 and 2. The final set of 2,253 NLS motifs matched 35% of 8,421 experimentally verified nuclear proteins (up from 21% for the previous version) and none of 18,278 non-nuclear proteins. We updated the web interface, providing multiple options to search protein sequences for NES and NLS motifs and to evaluate your own signal sequences. NLSdb can be accessed via Rostlab services at: https://rostlab.org/services/nlsdb/. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
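
    Motif matching of this kind is straightforward to sketch: express each candidate signal as a pattern and scan protein sequences for it. The two patterns below are classical-NLS-style illustrations, not actual NLSdb entries.

    ```python
    import re

    # Scan a protein sequence for candidate NLS motifs. Patterns are
    # illustrative classical-NLS-like examples, not NLSdb motifs.
    MOTIFS = {
        "monopartite_classical": r"K[KR].[KR]",            # K-K/R-x-K/R core
        "bipartite_classical": r"[KR]{2}.{10,12}[KR]{3}",  # two basic clusters
    }

    def scan_nls(sequence):
        for name, pattern in MOTIFS.items():
            for m in re.finditer(pattern, sequence):
                yield name, m.start(), m.group(0)

    seq = "MDSKRKVEDPKKKRKVGGAALQQ"  # contains the SV40 large-T NLS core PKKKRKV
    for name, pos, match in scan_nls(seq):
        print(f"{name}\t{pos}\t{match}")
    ```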

  15. NeMedPlant: a database of therapeutic applications and chemical constituents of medicinal plants from north-east region of India

    PubMed Central

    Meetei, Potshangbam Angamba; Singh, Pankaj; Nongdam, Potshangbam; Prabhu, N Prakash; Rathore, RS; Vindal, Vaibhav

    2012-01-01

    The North-East region of India is one of the twelve mega-biodiversity regions, containing many rare and endangered species. A curated database of medicinal and aromatic plants from the region, called NeMedPlant, has been developed. The database contains traditional, scientific, and medicinal information about plants and their active constituents, obtained from scholarly literature and local sources. The database is cross-linked with major biochemical databases and analytical tools. The integrated database provides a resource for investigations into hitherto unexplored medicinal plants and serves to speed up the discovery of natural products-based drugs. Availability: The database is freely available at http://bif.uohyd.ac.in/nemedplant/ or http://202.41.85.11/nemedplant/ PMID:22419844

  16. A reservoir morphology database for the conterminous United States

    USGS Publications Warehouse

    Rodgers, Kirk D.

    2017-09-13

    The U.S. Geological Survey, in cooperation with the Reservoir Fisheries Habitat Partnership, combined multiple national databases to create one comprehensive national reservoir database and to calculate new morphological metrics for 3,828 reservoirs. These new metrics include, but are not limited to, shoreline development index, index of basin permanence, development of volume, and other descriptive metrics based on established morphometric formulas. The new database also contains modeled chemical and physical metrics. Because of the nature of the existing databases used to compile the Reservoir Morphology Database and the inherent missing data, some metrics were not populated. One comprehensive database will assist water-resource managers in their understanding of local reservoir morphology and water chemistry characteristics throughout the continental United States.
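
    Two of the named metrics have compact standard definitions from limnology, shown below with made-up inputs: shoreline development index compares shoreline length to the circumference of a circle of equal area, and development of volume compares lake volume to a cone of equal area and maximum depth.

    ```python
    import math

    # Standard morphometric formulas; the input values are fabricated.
    def shoreline_development_index(shoreline_km, area_km2):
        # SDI = L / (2 * sqrt(pi * A)); 1.0 means a perfectly circular lake
        return shoreline_km / (2.0 * math.sqrt(math.pi * area_km2))

    def development_of_volume(volume_km3, area_km2, max_depth_km):
        # Dv = 3V / (A * z_max): ratio of lake volume to an equal-base cone
        return 3.0 * volume_km3 / (area_km2 * max_depth_km)

    print(round(shoreline_development_index(42.0, 12.5), 2))  # >1: irregular shore
    print(round(development_of_volume(0.25, 12.5, 0.06), 2))
    ```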

  17. Prototype Packaged Databases and Software in Health

    PubMed Central

    Gardenier, Turkan K.

    1980-01-01

    This paper describes the recent demand for packaged databases and software for health applications in light of developments in mini- and micro-computer technology. Specific features for defining prospective user groups are discussed; criticisms generated for large-scale epidemiological data use as a means of replacing clinical trials and associated controls are posed to the reader. The available collaborative efforts for access and analysis of jointly structured health data are stressed, with recommendations for new analytical techniques specifically geared to monitoring data, such as the CTSS (Cumulative Transitional State Score) generated for tracking ongoing patient status over time in clinical trials. Examples of graphic display are given from the Domestic Information Display System (DIDS), a collaborative multi-agency effort to computerize and make accessible user-specified U.S. and local maps relating to health, environment, socio-economic, and energy data.

  18. Enhancing AFLOW Visualization using Jmol

    NASA Astrophysics Data System (ADS)

    Lanasa, Jacob; New, Elizabeth; Stefek, Patrik; Honaker, Brigette; Hanson, Robert; Aflow Collaboration

    The AFLOW library is a database of theoretical solid-state structures and calculated properties created using high-throughput ab initio calculations. Jmol is a Java-based program capable of visualizing and analyzing complex molecular structures and energy landscapes. In collaboration with the AFLOW consortium, our goal is the enhancement of the AFLOWLIB database through the extension of Jmol's capabilities in the area of materials science. Modifications made to Jmol include the ability to read and visualize AFLOW binary alloy data files, the ability to extract information from these files using Jmol scripting macros that can be utilized in the creation of interactive web-based convex hull graphs, the capability to identify and classify local atomic environments by symmetry, and the ability to search one or more related crystal structures for atomic environments using a novel extension of inorganic polyhedron-based SMILES strings.

  19. Mobile object retrieval in server-based image databases

    NASA Astrophysics Data System (ADS)

    Manger, D.; Pagel, F.; Widak, H.

    2013-05-01

    The increasing number of mobile phones equipped with powerful cameras leads to huge collections of user-generated images. To utilize the information in these images on site, image retrieval systems are becoming more and more popular as a way to search for similar objects in one's own image database. As the computational performance and memory capacity of mobile devices are constantly increasing, this search can often be performed on the device itself. This is feasible, for example, if the images are represented with global image features or if the search is done using EXIF or textual metadata. However, for larger image databases, if multiple users are meant to contribute to a growing image database, or if powerful content-based image retrieval methods with local features are required, a server-based image retrieval backend is needed. In this work, we present a content-based image retrieval system with a client-server architecture working with local features. On the server side, scalability to large image databases is addressed with the popular bag-of-words model and state-of-the-art extensions. The client end of the system focuses on a lightweight user interface that presents the most similar images in the database, highlighting the visual information they share with the query image. Additionally, new images can be added to the database, making it a powerful and interactive tool for mobile content-based image retrieval.
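
    On the server side, the bag-of-words pipeline amounts to quantizing local descriptors against a visual vocabulary and comparing normalized histograms. A miniature sketch follows, using ORB features instead of the paper's descriptors, a tiny vocabulary, and hypothetical file names:

    ```python
    import cv2
    import numpy as np
    from sklearn.cluster import KMeans

    # Bag-of-words retrieval in miniature; all parameters are illustrative.
    orb = cv2.ORB_create()

    def descriptors(path):
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, desc = orb.detectAndCompute(img, None)
        return desc.astype(np.float32)

    db_paths = ["db_001.jpg", "db_002.jpg"]  # hypothetical database images
    vocab = KMeans(n_clusters=64, n_init=4).fit(
        np.vstack([descriptors(p) for p in db_paths]))

    def bow_histogram(path):
        words = vocab.predict(descriptors(path))   # quantize to visual words
        h = np.bincount(words, minlength=64).astype(np.float32)
        return h / (np.linalg.norm(h) + 1e-9)

    db_hists = {p: bow_histogram(p) for p in db_paths}
    query = bow_histogram("query.jpg")
    ranked = sorted(db_hists, key=lambda p: -float(db_hists[p] @ query))
    print(ranked[:5])  # most similar database images first
    ```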

  20. Short Fiction on Film: A Relational DataBase.

    ERIC Educational Resources Information Center

    May, Charles

    Short Fiction on Film is a database that was created and will run on DataRelator, a relational database manager created by Bill Finzer for the California State Department of Education in 1986. DataRelator was designed for use in teaching students database management skills and to provide teachers with examples of how a database manager might be…

  1. Development Of New Databases For Tsunami Hazard Analysis In California

    NASA Astrophysics Data System (ADS)

    Wilson, R. I.; Barberopoulou, A.; Borrero, J. C.; Bryant, W. A.; Dengler, L. A.; Goltz, J. D.; Legg, M.; McGuire, T.; Miller, K. M.; Real, C. R.; Synolakis, C.; Uslu, B.

    2009-12-01

    The California Geological Survey (CGS) has partnered with other tsunami specialists to produce two statewide databases to facilitate the evaluation of tsunami hazard products for both emergency response and land-use planning and development. A robust, State-run tsunami deposit database is being developed that complements and expands on existing databases from the National Geophysical Data Center (global) and the USGS (Cascadia). Whereas these existing databases focus on references or individual tsunami layers, the new State-maintained database concentrates on the location and contents of individual borings/trenches that sample tsunami deposits. These data provide an important observational benchmark for evaluating the results of tsunami inundation modeling. CGS is collaborating with other states and sharing the database entry form with them to encourage its continued development beyond California’s coastline so that historic tsunami deposits can be evaluated on a regional basis. CGS is also developing an internet-based tsunami source scenario database and forum where tsunami source experts and hydrodynamic modelers can discuss the validity of tsunami sources and their contribution to hazard assessments for California and other coastal areas bordering the Pacific Ocean. The database includes all distant and local tsunami sources relevant to California, starting with the forty scenarios evaluated during the creation of the recently completed statewide series of tsunami inundation maps for emergency response planning. Factors germane to probabilistic tsunami hazard analyses (PTHA), such as event histories and recurrence intervals, are also addressed in the database and discussed in the forum. Discussions with other tsunami source experts will help CGS determine what additional scenarios should be considered in PTHA for assessing the feasibility of generating products of value to local land-use planning and development.

  2. Method of preliminary localization of the iris in biometric access control systems

    NASA Astrophysics Data System (ADS)

    Minacova, N.; Petrov, I.

    2015-10-01

    This paper presents a method for preliminary localization of the iris based on stable brightness features of the iris in images of the eye. In tests on eye images from publicly available databases, the method showed good accuracy and speed compared to existing preliminary localization methods.

  3. Morphometry Based on Effective and Accurate Correspondences of Localized Patterns (MEACOLP)

    PubMed Central

    Wang, Hu; Ren, Yanshuang; Bai, Lijun; Zhang, Wensheng; Tian, Jie

    2012-01-01

    Local features in volumetric images have been used to identify correspondences of localized anatomical structures for brain morphometry. However, the correspondences are often sparse and thus ineffective in reflecting the underlying structures, making it unreliable to evaluate specific morphological differences. This paper presents a morphometry method (MEACOLP) based on correspondences with improved effectiveness and accuracy. A novel two-level scale-invariant feature transform is used to enhance the detection repeatability of local features and to recall the correspondences that might be missed in previous studies. Template patterns whose correspondences can be commonly identified in each group are constructed to serve as the basis for morphometric analysis. A matching algorithm is developed to reduce identification errors by comparing neighboring local features and rejecting unreliable matches. The two-sample t-test is finally adopted to analyze specific properties of the template patterns. Experiments are performed on the public OASIS database to clinically analyze brain images of Alzheimer's disease (AD) patients and normal controls (NC). MEACOLP automatically identifies known morphological differences between AD and NC brains and characterizes them well in terms of the scaling and translation of the underlying structures. Most of the significant differences are identified in only a single hemisphere, indicating that AD-related structures are characterized by strong anatomical asymmetry. In addition, classification trials to differentiate AD subjects from NC confirm that the morphological differences are reliably related to the groups of interest. PMID:22540000

  4. Class dependency of fuzzy relational database using relational calculus and conditional probability

    NASA Astrophysics Data System (ADS)

    Deni Akbar, Mohammad; Mizoguchi, Yoshihiro; Adiwijaya

    2018-03-01

    In this paper, we propose a design of a fuzzy relational database that deals with conditional probability relations using fuzzy relational calculus. Previous research has investigated equivalence classes in fuzzy databases using similarity or approximate relations, and it is an interesting topic to investigate fuzzy dependency using equivalence classes. Our goal is to introduce a formulation of a fuzzy relational database model using the relational calculus on the category of fuzzy relations. We also introduce general formulas of the relational calculus for database operations such as 'projection', 'selection', 'injection' and 'natural join'. Using the fuzzy relational calculus and conditional probabilities, we introduce notions of equivalence class, redundancy, and dependency in the theory of fuzzy relational databases.
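
    The flavor of the database operations being formalized can be conveyed with a toy implementation in which every tuple carries a membership grade; min is used as the t-norm for the join. This illustrates the general idea only, not the paper's calculus.

    ```python
    # Toy fuzzy relations: tuples carry membership grades in [0, 1]. Selection
    # preserves grades; natural join combines them with min (a common t-norm
    # choice; the paper's relational calculus is more general than this).
    parents = {("ana", "bob"): 0.9, ("ana", "cid"): 0.4}    # (child, parent)
    employs = {("bob", "acme"): 0.8, ("cid", "acme"): 1.0}  # (person, firm)

    def fuzzy_select(rel, pred):
        return {t: g for t, g in rel.items() if pred(t)}

    def fuzzy_join(r, s):
        """Join r's 2nd attribute to s's 1st attribute; grade = min of the two."""
        return {(a, b, c): min(g1, g2)
                for (a, b), g1 in r.items()
                for (b2, c), g2 in s.items() if b == b2}

    print(fuzzy_select(parents, lambda t: t[1] == "bob"))  # {('ana', 'bob'): 0.9}
    print(fuzzy_join(parents, employs))
    # {('ana', 'bob', 'acme'): 0.8, ('ana', 'cid', 'acme'): 0.4}
    ```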

  5. Ologs: a categorical framework for knowledge representation.

    PubMed

    Spivak, David I; Kent, Robert E

    2012-01-01

    In this paper we introduce the olog, or ontology log, a category-theoretic model for knowledge representation (KR). Grounded in formal mathematics, ologs can be rigorously formulated and cross-compared in ways that other KR models (such as semantic networks) cannot. An olog is similar to a relational database schema; in fact an olog can serve as a data repository if desired. Unlike database schemas, which are generally difficult to create or modify, ologs are designed to be user-friendly enough that authoring or reconfiguring an olog is a matter of course rather than a difficult chore. It is hoped that learning to author ologs is much simpler than learning a database definition language, despite their similarity. We describe ologs carefully and illustrate with many examples. As an application we show that any primitive recursive function can be described by an olog. We also show that ologs can be aligned or connected together into a larger network using functors. The various methods of information flow and institutions can then be used to integrate local and global world-views. We finish by providing several different avenues for future research.

  6. Connecting proteins with drug-like compounds: Open source drug discovery workflows with BindingDB and KNIME

    PubMed Central

    Berthold, Michael R.; Hedrick, Michael P.; Gilson, Michael K.

    2015-01-01

    Today’s large, public databases of protein–small molecule interaction data are creating important new opportunities for data mining and integration. At the same time, new graphical user interface-based workflow tools offer facile alternatives to custom scripting for informatics and data analysis. Here, we illustrate how the large protein-ligand database BindingDB may be incorporated into KNIME workflows as a step toward the integration of pharmacological data with broader biomolecular analyses. Thus, we describe a collection of KNIME workflows that access BindingDB data via RESTful webservices and, for more intensive queries, via a local distillation of the full BindingDB dataset. We focus in particular on the KNIME implementation of knowledge-based tools to generate informed hypotheses regarding protein targets of bioactive compounds, based on notions of chemical similarity. A number of variants of this basic approach are tested for seven existing drugs with relatively ill-defined therapeutic targets, leading to replication of some previously confirmed results and discovery of new, high-quality hits. Implications for future development are discussed. Database URL: www.bindingdb.org PMID:26384374

  7. Ologs: A Categorical Framework for Knowledge Representation

    PubMed Central

    Spivak, David I.; Kent, Robert E.

    2012-01-01

    In this paper we introduce the olog, or ontology log, a category-theoretic model for knowledge representation (KR). Grounded in formal mathematics, ologs can be rigorously formulated and cross-compared in ways that other KR models (such as semantic networks) cannot. An olog is similar to a relational database schema; in fact an olog can serve as a data repository if desired. Unlike database schemas, which are generally difficult to create or modify, ologs are designed to be user-friendly enough that authoring or reconfiguring an olog is a matter of course rather than a difficult chore. It is hoped that learning to author ologs is much simpler than learning a database definition language, despite their similarity. We describe ologs carefully and illustrate with many examples. As an application we show that any primitive recursive function can be described by an olog. We also show that ologs can be aligned or connected together into a larger network using functors. The various methods of information flow and institutions can then be used to integrate local and global world-views. We finish by providing several different avenues for future research. PMID:22303434

  8. Effectiveness of local therapy for stage I non-small-cell lung cancer in nonagenarians.

    PubMed

    Arnold, Brian N; Thomas, Daniel C; Rosen, Joshua E; Salazar, Michelle C; Detterbeck, Frank C; Blasberg, Justin D; Boffa, Daniel J; Kim, Anthony W

    2017-09-01

    Stage I non-small-cell lung cancer is potentially curable, yet older patients undergo treatment at lower rates than younger patients. This analysis sought to describe the treatment outcomes of nonagenarians with stage I non-small-cell lung cancer to better guide treatment decisions in this population. The National Cancer DataBase was queried for patients age ≥90 years old with stage I non-small-cell lung cancer (tumors ≤4 cm). Patients were divided into 3 groups: local therapy, other therapy, or no treatment. The primary outcomes were 5-year overall and relative survival. Of the 616 patients identified, 33% (202) were treated with local therapy, 34% (207) were treated with other therapy, and 34% (207) underwent no treatment. Compared with local therapy, overall mortality was significantly higher with no treatment (hazard ratio 2.50, 95% confidence interval, 1.95-3.21) and other therapy (hazard ratio 1.43, 95% confidence interval, 1.11-1.83). The 5-year relative survival was 81% for local therapy, 49% for other therapy, and 32% for no treatment (P < .0001). Nonagenarians managed with local therapy for stage I non-small-cell lung cancer (tumors ≤4 cm) have better overall survival than those receiving other therapy or no treatment and should be considered for treatment with either operation or stereotactic body radiation therapy if able to tolerate treatment. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. 75 FR 56099 - Announcement of the Board of Trustees for the National Environmental Education Foundation

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-15

    ... open and effective partnership among Federal, State, and local government, business, industry, academic... diversified media company that owned newspapers, local television stations, database centers and print and...

  10. State and Local Government Publications.

    ERIC Educational Resources Information Center

    Nakata, Yuri; Kopec, Karen

    1980-01-01

    Reviews trends in library programs for state and local government publications and documents the increased interest in microforms and databases. Discussion focuses on publication distribution and control, and efforts to support interstate networking. There are 28 references. (RAA)

  11. GPCALMA: A Tool For Mammography With A GRID-Connected Distributed Database

    NASA Astrophysics Data System (ADS)

    Bottigli, U.; Cerello, P.; Cheran, S.; Delogu, P.; Fantacci, M. E.; Fauci, F.; Golosio, B.; Lauria, A.; Lopez Torres, E.; Magro, R.; Masala, G. L.; Oliva, P.; Palmiero, R.; Raso, G.; Retico, A.; Stumbo, S.; Tangaro, S.

    2003-09-01

    The GPCALMA (Grid Platform for Computer Assisted Library for MAmmography) collaboration involves several departments of physics, INFN (National Institute of Nuclear Physics) sections, and Italian hospitals. The aim of this collaboration is to develop a tool that can help radiologists in the early detection of breast cancer. GPCALMA has built a large distributed database of digitised mammographic images (about 5500 images corresponding to 1650 patients) and developed CAD (Computer Aided Detection) software which is integrated in a station that can also be used to acquire new images, serve as an archive, and perform statistical analysis. The images (18×24 cm2, digitised by a CCD linear scanner with an 85 μm pitch and 4096 gray levels) are completely described: pathological ones have a consistent characterization with the radiologist's diagnosis and histological data, while non-pathological ones correspond to patients with a follow-up of at least three years. The distributed database is realized through the connection of all the hospitals and research centers using GRID technology. In each hospital, local patients' digital images are stored in the local database. Using the GRID connection, GPCALMA will allow each node to work on distributed database data as well as local database data. Using its database, the GPCALMA tools perform several analyses. A texture analysis, i.e. an automated classification of adipose, dense or glandular texture, can be provided by the system. GPCALMA software also allows classification of pathological features, in particular massive lesions (both opacities and spiculated lesions) and microcalcification cluster analysis. The detection of pathological features is made using neural network software that provides a selection of areas showing a given "suspicion level" of lesion occurrence. The performance of the GPCALMA system will be presented in terms of ROC (Receiver Operating Characteristic) curves. The results of the GPCALMA system as a "second reader" will also be presented.
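
    Performance reporting of the kind mentioned, ROC curves for a CAD "second reader", is conventionally computed from per-image suspicion scores. A sketch with entirely synthetic scores and labels:

    ```python
    import numpy as np
    from sklearn.metrics import roc_curve, auc

    # ROC summary of a binary detector from suspicion scores (synthetic data).
    rng = np.random.default_rng(0)
    labels = rng.integers(0, 2, size=200)               # 1 = pathological image
    scores = labels * 0.5 + rng.normal(0.3, 0.25, 200)  # synthetic CAD output
    fpr, tpr, _ = roc_curve(labels, scores)
    print(f"AUC = {auc(fpr, tpr):.2f}")
    ```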

  12. Comparison of School Food Policies and Food Preparation Practices before and after the Local Wellness Policy among Indiana High Schools

    ERIC Educational Resources Information Center

    Seo, Dong-Chul

    2009-01-01

    Background: Federal legislation requires local education agencies or school districts to develop a local wellness policy. No data-based research using a prospective cohort of a representative sample of secondary schools has been conducted to investigate the impact of the local wellness policy. Purpose: To investigate changes in school food…

  13. Review of State Laws Restricting Local Authority to Impose Alcohol Taxes in the United States

    PubMed Central

    Mosher, James F.; Adler, Sabrina S.; Pamukcu, Aysha M.; Treffers, Ryan D.

    2017-01-01

    Objective: Building on the extensive research literature demonstrating that increasing alcohol prices reduces excessive alcohol consumption and related harms, this article presents the results of a 50-state review of local authority to tax alcohol in the United States. Method: Between 2013 and 2015, legal databases and government websites were reviewed to collect and analyze relevant statutes, ordinances, and case law. Results reflect laws in effect as of January 1, 2015. Results: Nineteen states allow local alcohol taxation, although 15 of those have one or more major restrictions on local authority to tax. The types of major restrictions are (a) restrictions on the type of beverage and alcohol content that can be taxed, (b) caps on local alcohol taxes, (c) restrictions on the type of retailer where taxes can be imposed, (d) restrictions on jurisdictions within the state that can levy taxes, and (e) requirements for how tax revenue can be spent. Conclusions: The number and severity of restrictions on local authority to tax alcohol vary across states. Previous research has shown that increases in alcohol taxes can lead to reduced excessive alcohol consumption, which provides public health and economic benefits. Taxes can also provide funds to support local prevention and treatment services. Local alcohol taxes therefore present an important policy opportunity, both in states that restrict local authority and in states where local authority exists but is underused. PMID:28317504

  14. Review of State Laws Restricting Local Authority to Impose Alcohol Taxes in the United States.

    PubMed

    Mosher, James F; Adler, Sabrina S; Pamukcu, Aysha M; Treffers, Ryan D

    2017-03-01

    Building on the extensive research literature demonstrating that increasing alcohol prices reduces excessive alcohol consumption and related harms, this article presents the results of a 50-state review of local authority to tax alcohol in the United States. Between 2013 and 2015, legal databases and government websites were reviewed to collect and analyze relevant statutes, ordinances, and case law. Results reflect laws in effect as of January 1, 2015. Nineteen states allow local alcohol taxation, although 15 of those have one or more major restrictions on local authority to tax. The types of major restrictions are (a) restrictions on the type of beverage and alcohol content that can be taxed, (b) caps on local alcohol taxes, (c) restrictions on the type of retailer where taxes can be imposed, (d) restrictions on jurisdictions within the state that can levy taxes, and (e) requirements for how tax revenue can be spent. The number and severity of restrictions on local authority to tax alcohol vary across states. Previous research has shown that increases in alcohol taxes can lead to reduced excessive alcohol consumption, which provides public health and economic benefits. Taxes can also provide funds to support local prevention and treatment services. Local alcohol taxes therefore present an important policy opportunity, both in states that restrict local authority and in states where local authority exists but is underused.

  15. Distributed query plan generation using multiobjective genetic algorithm.

    PubMed

    Panicker, Shina; Kumar, T V Vijay

    2014-01-01

    A distributed query processing strategy, which is a key performance determinant in accessing distributed databases, aims to minimize the total query processing cost. One way to achieve this is by generating efficient distributed query plans that involve fewer sites for processing a query. In the case of distributed relational databases, the number of possible query plans increases exponentially with respect to the number of relations accessed by the query and the number of sites where these relations reside. Consequently, computing optimal distributed query plans becomes a complex problem. This distributed query plan generation (DQPG) problem has already been addressed using single objective genetic algorithm, where the objective is to minimize the total query processing cost comprising the local processing cost (LPC) and the site-to-site communication cost (CC). In this paper, this DQPG problem is formulated and solved as a biobjective optimization problem with the two objectives being minimize total LPC and minimize total CC. These objectives are simultaneously optimized using a multiobjective genetic algorithm NSGA-II. Experimental comparison of the proposed NSGA-II based DQPG algorithm with the single objective genetic algorithm shows that the former performs comparatively better and converges quickly towards optimal solutions for an observed crossover and mutation probability.
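
    To make the biobjective formulation concrete, the sketch below encodes a plan as a relation-to-site assignment and extracts the Pareto-optimal plans from a random population. It shows only the non-dominated-sorting idea at NSGA-II's core, with fabricated costs, not the paper's full algorithm.

    ```python
    import random

    # Toy biobjective DQPG: a plan assigns each relation to a site; objectives
    # are total local processing cost (LPC) and communication cost (CC).
    random.seed(7)
    N_RELATIONS, SITES = 4, (0, 1, 2)
    lpc = {(r, s): random.randint(1, 9) for r in range(N_RELATIONS) for s in SITES}

    def objectives(plan):
        total_lpc = sum(lpc[(r, s)] for r, s in enumerate(plan))
        total_cc = 5 * (len(set(plan)) - 1)  # crude: more sites, more shipping
        return total_lpc, total_cc

    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and a != b

    population = {tuple(random.choice(SITES) for _ in range(N_RELATIONS))
                  for _ in range(50)}
    scored = {p: objectives(p) for p in population}
    pareto = sorted(p for p in scored
                    if not any(dominates(scored[q], scored[p]) for q in scored))
    for p in pareto:
        print(p, scored[p])  # the non-dominated (Pareto-optimal) plans
    ```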

  16. Distributed Query Plan Generation Using Multiobjective Genetic Algorithm

    PubMed Central

    Panicker, Shina; Vijay Kumar, T. V.

    2014-01-01

    A distributed query processing strategy, which is a key performance determinant in accessing distributed databases, aims to minimize the total query processing cost. One way to achieve this is by generating efficient distributed query plans that involve fewer sites for processing a query. In the case of distributed relational databases, the number of possible query plans increases exponentially with respect to the number of relations accessed by the query and the number of sites where these relations reside. Consequently, computing optimal distributed query plans becomes a complex problem. This distributed query plan generation (DQPG) problem has already been addressed using single objective genetic algorithm, where the objective is to minimize the total query processing cost comprising the local processing cost (LPC) and the site-to-site communication cost (CC). In this paper, this DQPG problem is formulated and solved as a biobjective optimization problem with the two objectives being minimize total LPC and minimize total CC. These objectives are simultaneously optimized using a multiobjective genetic algorithm NSGA-II. Experimental comparison of the proposed NSGA-II based DQPG algorithm with the single objective genetic algorithm shows that the former performs comparatively better and converges quickly towards optimal solutions for an observed crossover and mutation probability. PMID:24963513

  17. Design and Compilation of a Geodatabase of Existing Salinity Information for the Rio Grande Basin, from the Rio Arriba-Sandoval County Line, New Mexico, to Presidio, Texas, 2010

    USGS Publications Warehouse

    Shah, Sachin D.; Maltby, David R.

    2010-01-01

    The U.S. Geological Survey, in cooperation with the U.S. Army Corps of Engineers, compiled salinity-related water-quality data and information in a geodatabase containing more than 6,000 sampling sites. The geodatabase was designed as a tool for water-resource management and includes readily available digital data sources from the U.S. Geological Survey, U.S. Environmental Protection Agency, New Mexico Interstate Stream Commission, Sustainability of semi-Arid Hydrology and Riparian Areas, Paso del Norte Watershed Council, numerous other State and local databases, and selected databases maintained by the University of Arizona and New Mexico State University. Salinity information was compiled for an approximately 26,000-square-mile area of the Rio Grande Basin from the Rio Arriba-Sandoval County line, New Mexico, to Presidio, Texas. The geodatabase relates the spatial location of sampling sites with salinity-related water-quality data reported by multiple agencies. The sampling sites are stored in a geodatabase feature class; each site is linked by a relationship class to the corresponding sample and results stored in data tables.

  18. pGenN, a gene normalization tool for plant genes and proteins in scientific literature.

    PubMed

    Ding, Ruoyao; Arighi, Cecilia N; Lee, Jung-Youn; Wu, Cathy H; Vijay-Shanker, K

    2015-01-01

    Automatically detecting gene/protein names in the literature and connecting them to database records, also known as gene normalization, provides a means to structure the information buried in free-text literature. Gene normalization is critical for improving the coverage of annotation in the databases and is an essential component of many text mining systems and database curation pipelines. In this manuscript, we describe a gene normalization system specifically tailored for plant species, called pGenN (pivot-based Gene Normalization). The system consists of three steps: dictionary-based gene mention detection, species assignment, and intra-species normalization. We have developed new heuristics to improve each of these phases. We evaluated the performance of pGenN on an in-house expertly annotated corpus consisting of 104 plant-relevant abstracts. Our system achieved an F-value of 88.9% (Precision 90.9% and Recall 87.2%) on this corpus, outperforming state-of-the-art systems presented in BioCreative III. We have processed over 440,000 plant-related Medline abstracts using pGenN. The gene normalization results are stored in a local database for direct query from the pGenN web interface (proteininformationresource.org/pgenn/). The annotated literature corpus is also publicly available through the PIR text mining portal (proteininformationresource.org/iprolink/).
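
    The dictionary-based detection step can be sketched as normalize-then-match, preferring longer lexicon entries. The lexicon, identifiers, and sentence below are invented for illustration and are not pGenN's actual dictionaries.

    ```python
    # Dictionary-based gene mention detection: normalize both lexicon and
    # text, then match longest entries first. All entries are invented.
    LEXICON = {
        "phytochrome b": "UniProt:P14713",
        "phyb": "UniProt:P14713",
        "rubisco activase": "UniProt:P10896",
    }

    def normalize(s):
        return " ".join(s.lower().replace("-", " ").split())

    def find_mentions(text):
        norm = normalize(text)
        hits = []
        for name in sorted(LEXICON, key=len, reverse=True):  # longest first
            start = norm.find(name)                          # offset in the
            if start != -1:                                  # normalized text
                hits.append((name, LEXICON[name], start))
        return hits

    abstract = "Phytochrome-B (PhyB) interacts with Rubisco activase in Arabidopsis."
    for mention, acc, pos in find_mentions(abstract):
        print(f"{mention} -> {acc} @ {pos}")
    ```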

  19. Earthquake-induced ground failures in Italy from a reviewed database

    NASA Astrophysics Data System (ADS)

    Martino, S.; Prestininzi, A.; Romeo, R. W.

    2013-05-01

    A database (Italian acronym CEDIT) of earthquake-induced ground failures in Italy is presented, and the related content is analysed. The catalogue collects data regarding landslides, liquefaction, ground cracks, surface faulting and ground-level changes triggered by earthquakes of Mercalli intensity 8 or greater that occurred in the last millennium in Italy. As of January 2013, the CEDIT database has been available online for public use (URL: http://www.ceri.uniroma1.it/cn/index.do?id=230&page=55) and is presently hosted by the website of the Research Centre for Geological Risks (CERI) of the "Sapienza" University of Rome. Summary statistics of the database content indicate that 14% of the Italian municipalities have experienced at least one earthquake-induced ground failure and that landslides are the most common ground effects (approximately 45%), followed by ground cracks (32%) and liquefaction (18%). The relationships between ground effects and earthquake parameters such as seismic source energy (earthquake magnitude and epicentral intensity), local conditions (site intensity) and source-to-site distances are also analysed. The analysis indicates that liquefaction, surface faulting and ground-level changes are much more dependent on the earthquake source energy (i.e. magnitude) than landslides and ground cracks. In contrast, the latter effects are triggered at lower site intensities and greater epicentral distances than the other environmental effects.

  20. High Latitude Energetic Particle Boundaries: The SAMPEX Database

    NASA Astrophysics Data System (ADS)

    Kanekal, S. G.; Baker, D. N.

    2006-11-01

    The size of the polar cap, or the open field line region, depends upon the difference between the reconnection rates at the dayside between the IMF and the geomagnetic field and those occurring in the magnetotail. Dayside merging adds flux to the open field region, increasing the polar cap size and the magnetic flux in the lobes of the tail, thereby causing energy to be stored in the magnetosphere. Nightside reconnection, geomagnetic storms, and substorms dissipate this energy, removing flux and shrinking the polar cap. The dynamics of the polar cap can therefore be useful in the study of the energy dynamics of the magnetosphere. Energetic particles delineate magnetospheric regions, since their motions are governed by the geomagnetic field. Convection and corotation electric fields control the drift of low-energy particles, whereas magnetic field gradient and curvature are the dominant factors for higher-energy (> ~30 keV) particles. High-latitude energetic particle boundaries are related to the polar cap and are therefore useful in determining the size of the open field line regions. We will provide a long database of energetic particle boundaries in the polar regions using instruments aboard SAMPEX, the first of the Small Explorer (SMEX) spacecraft. It was launched on July 3, 1992 into a low-Earth polar orbit. There are four particle detectors on board, HILT, LICA, PET and MAST, which point toward the zenith over the poles of the Earth. These detectors measure electrons, protons and ions ranging in energy from tens of keV to a few MeV. This database will comprise the latitudinal (geographic, magnetic, and invariant) and longitudinal (geographic and magnetic local time) positions of energetic particle boundaries in the polar regions. The database will cover a time period from launch to about mid-2004. It will therefore cover a significant portion of solar cycles 22 and 23. Together with interplanetary data obtainable from public databases, such as the NASA OMNI database, the SAMPEX energetic particle database can be used to relate Earth's magnetospheric response to interplanetary drivers such as solar wind speed, density and magnetic field.

  1. ATGC database and ATGC-COGs: an updated resource for micro- and macro-evolutionary studies of prokaryotic genomes and protein family annotation.

    PubMed

    Kristensen, David M; Wolf, Yuri I; Koonin, Eugene V

    2017-01-04

    The Alignable Tight Genomic Clusters (ATGCs) database is a collection of closely related bacterial and archaeal genomes that provides several tools to aid research into evolutionary processes in the microbial world. Each ATGC is a taxonomy-independent cluster of 2 or more completely sequenced genomes that meet the objective criteria of a high degree of local gene order (synteny) and a small number of synonymous substitutions in the protein-coding genes. As such, each ATGC is suited for analysis of microevolutionary variations within a cohesive group of organisms (e.g. species), whereas the entire collection of ATGCs is useful for macroevolutionary studies. The ATGC database includes many forms of pre-computed data, in particular ATGC-COGs (Clusters of Orthologous Genes), multiple sequence alignments, a set of 'index' orthologs representing the most well-conserved members of each ATGC-COG, the phylogenetic tree of the organisms within each ATGC, etc. Although the ATGC database contains several million proteins from thousands of genomes organized into hundreds of clusters (roughly a 4-fold increase since the last version of the ATGC database), it is now built with completely automated methods and will be regularly updated following new releases of the NCBI RefSeq database. The ATGC database is hosted jointly at the University of Iowa at dmk-brain.ecn.uiowa.edu/ATGC/ and the NCBI at ftp.ncbi.nlm.nih.gov/pub/kristensen/ATGC/atgc_home.html. Published by Oxford University Press on behalf of Nucleic Acids Research 2016. This work is written by (a) US Government employee(s) and is in the public domain in the US.

  2. "Go local" island food network: using email networking to promote island foods for their health, biodiversity, and other "CHEEF" benefits.

    PubMed

    Englberger, L; Lorens, A; Pretrick, M E; Spegal, R; Falcam, I

    2010-04-01

    Dietary- and lifestyle-related diseases are problems of epidemic proportion in the Federated States of Micronesia (FSM). Public health resources to help prevent nutrition-related problems are limited. There is also concern about biodiversity, neglect of traditional staple foods, and threatened loss of traditional knowledge. A "Go Local" campaign was initiated to increase production and consumption of locally grown foods, for their Culture, Health, Environment, Economics, and Food security ("CHEEF") benefits. To provide updates and discuss local island food topics, the Island Food Community of Pohnpei launched an interagency email network in 2003. Interested members' email addresses were recorded in distribution lists, weekly/bi-weekly emails were sent and from these messages, a database was organized to record email topic details. An analysis of all emails up to July 2009 showed that membership had expanded to over 600 listed people from all FSM states, other Pacific Island countries and beyond. Information was shared on topics ranging from scientific findings of carotenoid content in local island food cultivars, to discussions on how daily habits related to island food use can be improved. Over 200 men and women, aged 22 to 80 years, contributed items, some indicating that they had shared emails to a further network at their workplace or community. In conclusion, this email network is a simple, cost-effective method to share information, create awareness, and mobilize island food promotion efforts with potential for providing health, biodiversity and other benefits of island foods to populations in the FSM and other countries.

  3. searchSCF: Using MongoDB to Enable Richer Searches of Locally Hosted Science Data Repositories

    NASA Astrophysics Data System (ADS)

    Knosp, B.

    2016-12-01

    Science teams today are in the unusual position of almost having too much data available to them. Modern sensors and models are capable of outputting terabytes of data per day, which can make it difficult to find specific subsets of data. The sheer size of files can also make it time consuming to retrieve this big data from national data archive centers. Thus, many science teams choose to store what data they can on their local systems, but they are not always equipped with tools to help them intelligently organize and search their data. In its local data repository, the Aura Microwave Limb Sounder (MLS) science team at NASA's Jet Propulsion Laboratory has collected over 300TB of atmospheric science data from 71 missions/models that aid in validation, algorithm development, and research activities. When the project began, the team developed a MySQL database to aid in data queries, but this database was only designed to keep track of MLS and a few ancillary data sets, leaving much of the data uncatalogued. The team has also seen database query time rise over the life of the mission. Even though the MLS science team's data holdings are not the size of a national data center's, team members still need tools to help them discover and utilize the data that they have on hand. Over the past year, members of the science team have been looking for solutions to (1) store information on all the data sets they have collected in a single database, (2) store more metadata about each data file, (3) develop queries that can find relationships among these disparate data types, and (4) plug any new functions developed around this database into existing analysis, visualization, and web tools, transparently to users. In this presentation, I will discuss the searchSCF package that is currently under development. This package includes a NoSQL database management system (MongoDB) and a set of Python tools that both ingest data into the database and support user queries. I will also highlight case studies of how this system could be used by the MLS science team, and how it could be implemented by other science teams with local data repositories.
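
    The ingest-and-query pattern described maps naturally onto a single MongoDB collection of per-file metadata documents. A pymongo sketch follows; the field names and example path are illustrative, not the actual searchSCF schema.

    ```python
    from pymongo import MongoClient

    # One collection of per-file metadata documents spanning all missions.
    db = MongoClient("mongodb://localhost:27017")["scf_catalog"]

    db.files.insert_one({
        "mission": "MLS", "product": "L2GP-H2O", "version": "v04",
        "start": "2016-07-01", "end": "2016-07-01",
        "path": "/data/mls/v04/2016/example_granule.he5",  # made-up path
    })

    # Cross-dataset discovery: every file, from any mission, covering a date.
    # ISO-format date strings compare correctly as plain strings.
    for doc in db.files.find({"start": {"$lte": "2016-07-01"},
                              "end": {"$gte": "2016-07-01"}}):
        print(doc["mission"], doc["path"])
    ```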

  4. Analysis and Design of a Distributed System for Management and Distribution of Natural Language Assertions

    DTIC Science & Technology

    2010-09-01

    Front-matter extract only: the figures list covers the SCIL (Social-Cultural Content in Language) architecture, and the acronyms defined include LAN (Local Area Network) and ODBC (Open Database Connectivity); no abstract text is recoverable.

  5. Childhood immunization rates in rural Intibucá, Honduras: an analysis of a local database tool and community health center records for assessing and improving vaccine coverage.

    PubMed

    He, Yuan; Zarychta, Alan; Ranz, Joseph B; Carroll, Mary; Singleton, Lori M; Wilson, Paria M; Schlaudecker, Elizabeth P

    2012-12-07

    Vaccines are highly effective at preventing infectious diseases in children, and prevention is especially important in resource-limited countries where treatment is difficult to access. In Honduras, the World Health Organization (WHO) reports very high immunization rates in children. To determine whether or not these estimates accurately depict the immunization coverage in non-urban regions of the country, we compared the WHO data to immunization rates obtained from a local database tool and community health center records in rural Intibucá, Honduras. We used data from two sources to comprehensively evaluate immunization rates in the area: 1) census data from a local database and 2) immunization data collected at health centers. We compared these rates using logistic regression, and we compared them to publicly available WHO-reported estimates using confidence interval inclusion. We found that mean immunization rates for each vaccine were high (range 84.4 to 98.8 percent), but rates recorded at the health centers were significantly higher than those reported from the census data (p ≤ 0.001). Combining the results from both databases, the mean rates of four out of five vaccines were less than WHO-reported rates (p <0.05). Overall immunization rates were significantly different between townships (p=0.03). The rates by individual vaccine were similar across townships (p >0.05), except for diphtheria/tetanus/pertussis vaccine (p=0.02) and oral polio vaccine (p <0.01). Immunization rates in Honduras were high across data sources, though most of the rates recorded in rural Honduras were less than WHO-reported rates. Despite geographical difficulties and barriers to access, the local database and Honduran community health workers have developed a thorough system for ensuring that children receive their immunizations on time. The successful integration of community health workers and a database within the Honduran decentralized health system may serve as a model for other immunization programs in resource-limited countries where health care is less accessible.

  6. Childhood immunization rates in rural Intibucá, Honduras: an analysis of a local database tool and community health center records for assessing and improving vaccine coverage

    PubMed Central

    2012-01-01

    Background Vaccines are highly effective at preventing infectious diseases in children, and prevention is especially important in resource-limited countries where treatment is difficult to access. In Honduras, the World Health Organization (WHO) reports very high immunization rates in children. To determine whether or not these estimates accurately depict the immunization coverage in non-urban regions of the country, we compared the WHO data to immunization rates obtained from a local database tool and community health center records in rural Intibucá, Honduras. Methods We used data from two sources to comprehensively evaluate immunization rates in the area: 1) census data from a local database and 2) immunization data collected at health centers. We compared these rates using logistic regression, and we compared them to publicly available WHO-reported estimates using confidence interval inclusion. Results We found that mean immunization rates for each vaccine were high (range 84.4 to 98.8 percent), but rates recorded at the health centers were significantly higher than those reported from the census data (p≤0.001). Combining the results from both databases, the mean rates of four out of five vaccines were less than WHO-reported rates (p <0.05). Overall immunization rates were significantly different between townships (p=0.03). The rates by individual vaccine were similar across townships (p >0.05), except for diphtheria/tetanus/pertussis vaccine (p=0.02) and oral polio vaccine (p <0.01). Conclusions Immunization rates in Honduras were high across data sources, though most of the rates recorded in rural Honduras were less than WHO-reported rates. Despite geographical difficulties and barriers to access, the local database and Honduran community health workers have developed a thorough system for ensuring that children receive their immunizations on time. The successful integration of community health workers and a database within the Honduran decentralized health system may serve as a model for other immunization programs in resource-limited countries where health care is less accessible. PMID:23216801

  7. Automatic segmentation of solitary pulmonary nodules based on local intensity structure analysis and 3D neighborhood features in 3D chest CT images

    NASA Astrophysics Data System (ADS)

    Chen, Bin; Kitasaka, Takayuki; Honma, Hirotoshi; Takabatake, Hirotsugu; Mori, Masaki; Natori, Hiroshi; Mori, Kensaku

    2012-03-01

    This paper presents a solitary pulmonary nodule (SPN) segmentation method based on local intensity structure analysis and neighborhood feature analysis in chest CT images. Automated segmentation of SPNs is desirable for a chest computer-aided detection/diagnosis (CAD) system, since an SPN may indicate an early stage of lung cancer. Because SPNs and other chest structures such as blood vessels have similar intensities, nodule detection methods generate many false positives (FPs). To reduce such FPs, we introduce two features that analyze the relation between each segmented nodule candidate and its neighborhood region. The proposed method utilizes a blob-like structure enhancement (BSE) filter based on Hessian analysis to augment blob-like structures as initial nodule candidates. A fine segmentation is then performed to delineate the region of each nodule candidate more accurately. FP reduction is mainly addressed by investigating two neighborhood features, based on the volume ratio and the eigenvectors of the Hessian, that are calculated from the neighborhood region of each nodule candidate. We evaluated the proposed method using 40 chest CT images: 20 standard-dose CT images randomly chosen from a local database and 20 low-dose CT images randomly chosen from a public database (LIDC). The experimental results revealed that the average TP rate of the proposed method was 93.6% with 12.3 FPs/case.
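
    A minimal sketch of a Hessian-based blob-likeness score of the kind a BSE filter computes, for an isotropic 3D volume. The paper's exact enhancement measure is not reproduced here; the geometric-mean score below is one common choice, and the smoothing scale is an illustrative assumption.

```python
# Hedged sketch: score each voxel by how blob-like its intensity structure is,
# using the eigenvalues of the Hessian of a Gaussian-smoothed volume.
import numpy as np
from scipy.ndimage import gaussian_filter

def blob_enhance(volume, sigma=2.0):
    g = gaussian_filter(volume.astype(float), sigma)
    grads = np.gradient(g)                     # first derivatives per axis
    H = np.empty(volume.shape + (3, 3))
    for i, gi in enumerate(grads):             # Hessian = derivatives of derivatives
        for j, gij in enumerate(np.gradient(gi)):
            H[..., i, j] = gij
    ev = np.linalg.eigvalsh(H)                 # ascending eigenvalues per voxel
    # Bright blobs: all three eigenvalues negative and of similar magnitude.
    bright = np.all(ev < 0, axis=-1)
    score = np.cbrt(np.abs(ev[..., 0] * ev[..., 1] * ev[..., 2]))
    return np.where(bright, score, 0.0)

# Example: a synthetic bright sphere in noise scores highest near its center.
vol = np.random.default_rng(0).normal(0, 0.05, (32, 32, 32))
zz, yy, xx = np.mgrid[:32, :32, :32]
vol[((zz - 16)**2 + (yy - 16)**2 + (xx - 16)**2) < 16] += 1.0
print(np.unravel_index(blob_enhance(vol).argmax(), vol.shape))
```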

  8. Progressive sparse representation-based classification using local discrete cosine transform evaluation for image recognition

    NASA Astrophysics Data System (ADS)

    Song, Xiaoning; Feng, Zhen-Hua; Hu, Guosheng; Yang, Xibei; Yang, Jingyu; Qi, Yunsong

    2015-09-01

    This paper proposes a progressive sparse representation-based classification algorithm using local discrete cosine transform (DCT) evaluation to perform face recognition. Specifically, the sum of the contributions of all training samples of each subject is first taken as the contribution of that subject; the redundant subject with the smallest contribution to the test sample is then iteratively eliminated. Second, the progressive method aims at representing the test sample as a linear combination of all the remaining training samples, by which the representation capability of each training sample is exploited to determine the optimal "nearest neighbors" for the test sample. Third, the transformed DCT evaluation is constructed to measure the similarity between the test sample and each local training sample using cosine distance metrics in the DCT domain. The final goal of the proposed method is to determine an optimal weighted sum of nearest neighbors that are obtained under the local correlative degree evaluation, which is approximately equal to the test sample, and we can use this weighted linear combination to perform robust classification. Experimental results conducted on the ORL database of faces (created by the Olivetti Research Laboratory in Cambridge), the FERET face database (managed by the Defense Advanced Research Projects Agency and the National Institute of Standards and Technology), the AR face database (created by Aleix Martinez and Robert Benavente at the Computer Vision Center at U.A.B.), and the USPS handwritten digit database (gathered at the Center of Excellence in Document Analysis and Recognition at SUNY Buffalo) demonstrate the effectiveness of the proposed method.
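
    The DCT-domain cosine similarity at the heart of the evaluation step can be sketched in a few lines. The coefficient count k and the vectorized-image representation are illustrative assumptions; the paper's progressive elimination loop is omitted.

```python
# Hedged sketch: cosine similarity between low-frequency DCT coefficients,
# used to rank training samples as candidate "nearest neighbors".
import numpy as np
from scipy.fftpack import dct

def dct_cosine(x, y, k=64):
    """Cosine similarity between the k lowest-frequency DCT coefficients."""
    X, Y = dct(x, norm="ortho")[:k], dct(y, norm="ortho")[:k]
    return float(X @ Y / (np.linalg.norm(X) * np.linalg.norm(Y) + 1e-12))

rng = np.random.default_rng(0)
train = rng.normal(size=(10, 1024))        # 10 vectorized training images
test = rng.normal(size=1024)               # one vectorized test image
order = sorted(range(10), key=lambda i: -dct_cosine(train[i], test))
print("best-matching training indices:", order[:3])
```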

  9. [Informatics support for risk assessment and identification of preventive measures in small and micro-enterprises: occupational hazard datasheets].

    PubMed

    de Merich, D; Forte, Giulia

    2011-01-01

    Risk assessment is the fundamental process of an enterprise's prevention system and is the principal mandatory provision contained in the Health and Safety Law (Legislative Decree 81/2008), amended by Legislative Decree 106/2009. For this obligation to be properly met in small-sized enterprises as well, the appropriate regulatory bodies should provide enterprises with standardized tools and methods for identifying, assessing and managing risks. The aim is to assist small and micro-enterprises (SMEs) in particular with risk assessment, by providing a flexible tool that can also be standardized in the form of a datasheet and updated with more detailed information on the various work contexts in Italy. Official efforts to provide Italian SMEs with information may initially make use of the findings of research conducted by ISPESL over the past 20 years, thanks in part to cooperation with other institutions (Regions, INAIL-National Insurance Institute for Occupational Accidents and Diseases), which have led to the creation of an information system on prevention consisting of numerous databases, both statistical and documental ("National System of Surveillance on fatal and serious accidents", "National System of Surveillance on work-related diseases", "Sector hazard profiles" database, "Solutions and Best Practices" database, "Technical Guidelines" database, "Training packages for prevention professionals in enterprises" database). With regard to evaluation criteria applicable within the enterprise, combining traditional and uniform areas of assessment (by sector or by risk factor) with assessments by job/occupation became possible thanks to the cooperation agreement made in 2009 by ISPESL, the ILO (International Labour Organisation) of Geneva and IIOSH (Israel Institute for Occupational Health and Hygiene) regarding the creation of an international database (HDODB) based on risk datasheets per occupation. The project sets out to assist small and micro-enterprises in particular with risk assessment, providing a flexible and standardized tool in the form of a datasheet that can be updated with more detailed information on the various work contexts in Italy. The model proposed by ISPESL selected the ILO's "Hazard Datasheet on Occupation" as the initial information tool to steer efforts to assess and manage hazards in small and micro-enterprises. In addition to being an internationally validated tool, the occupation datasheet has a very simple structure that is very effective for communicating and updating information in relation to the local context. In keeping with the logic of supporting enterprises through a collaborative network among institutions, local supervisory services and social partners, standardised hazard assessment procedures should be, irrespective of any legal obligations, the preferred tools of an "updatable information system" capable of supporting improvements in the process of assessing and managing hazards in enterprises.

  10. RF-Based Location Using Interpolation Functions to Reduce Fingerprint Mapping

    PubMed Central

    Ezpeleta, Santiago; Claver, José M.; Pérez-Solano, Juan J.; Martí, José V.

    2015-01-01

    Indoor RF-based localization using fingerprint mapping requires an initial training step, which represents a time-consuming process. This localization methodology needs a database composed of RSSI (Received Signal Strength Indicator) measurements from the communication transceivers, taken at specific locations within the localization area. However, real-world localization environments are dynamic, and it is necessary to rebuild the fingerprint database whenever environmental changes are made. This paper explores the use of different interpolation functions to complete the fingerprint mapping needed to achieve the sought accuracy, thereby reducing the effort in the training step. Different distributions of test maps and reference points have also been evaluated, showing the validity of this proposal and the necessary trade-offs. The results show that the same or similar localization accuracy can be achieved even when only 50% of the initial fingerprint reference points are taken. PMID:26516862
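
    A minimal sketch of completing a fingerprint map by interpolation, using SciPy's griddata over a handful of surveyed reference points. The coordinates, RSSI values, and the choice of cubic interpolation are illustrative, not the paper's exact functions.

```python
# Hedged sketch: estimate virtual fingerprints between surveyed points.
import numpy as np
from scipy.interpolate import griddata

# (x, y) survey locations and RSSI (dBm) measured from one transceiver.
pts = np.array([[0, 0], [0, 10], [10, 0], [10, 10], [5, 5]], dtype=float)
rssi = np.array([-40.0, -55.0, -52.0, -70.0, -50.0])

# Dense grid of interpolated fingerprints covering the localization area.
gx, gy = np.mgrid[0:10:21j, 0:10:21j]
grid = griddata(pts, rssi, (gx, gy), method="cubic")
print(grid.shape)   # (21, 21) completed fingerprint map
```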

  11. Using LUCAS topsoil database to estimate soil organic carbon content in local spectral libraries

    NASA Astrophysics Data System (ADS)

    Castaldi, Fabio; van Wesemael, Bas; Chabrillat, Sabine; Chartin, Caroline

    2017-04-01

    The quantification of soil organic carbon (SOC) content over large areas is mandatory for accurate soil characterization and classification, which can improve site-specific management at local or regional scale by exploiting the strong relationship between SOC and crop growth. The estimation of SOC is not only important for agricultural purposes: in recent years, increasing attention to global warming has highlighted the crucial role of the soil in the global carbon cycle. In this context, soil spectroscopy is a well-consolidated and widespread method for estimating soil variables by exploiting the interaction between chromophores and electromagnetic radiation. The importance of spectroscopy in soil science is reflected in the increasing number of large soil spectral libraries collected around the world. These large libraries contain soil samples from a considerable number of pedological regions and thus from different parent materials and soil types; this heterogeneity entails, in turn, a large variability in mineralogical and organic composition. In the light of the huge variability of the spectral responses to SOC content and composition, a rigorous classification process is necessary to subset large spectral libraries and to avoid calibrating global models that fail to predict local variation in SOC content. In this regard, this study proposes a method to subset the European LUCAS topsoil database into soil classes using a clustering analysis based on a large number of soil properties. The LUCAS database was chosen in order to apply a standardized multivariate calibration approach valid for large areas without the need for extensive field and laboratory work to calibrate local models. Seven soil classes were detected by the clustering analysis, and the samples belonging to each class were used to calibrate class-specific partial least squares regression (PLSR) models to estimate the SOC content of three local libraries collected in Belgium (Loam Belt and Wallonia) and Luxembourg. The three local libraries consist only of spectral data (199 samples) acquired using the same protocol as the LUCAS database. SOC was estimated with good accuracy both within each local library (RMSE: 1.2-5.4 g kg-1; RPD: 1.41-2.06) and for the samples of the three libraries together (RMSE: 3.9 g kg-1; RPD: 2.47). The proposed approach could make it possible to estimate SOC anywhere in Europe from spectra alone, without the need for chemical laboratory analyses, by exploiting the potential of the LUCAS database and class-specific PLSR models.
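
    A minimal sketch of the cluster-then-calibrate idea using scikit-learn. Note that the study clustered on a large set of soil properties, whereas this stand-in clusters on the spectra themselves purely to keep the example short; all arrays are synthetic.

```python
# Hedged sketch: subset a large spectral library into classes, then fit
# one PLSR model per class and predict a local sample with its class model.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
lucas_spectra = rng.normal(size=(500, 200))   # stand-in reflectance spectra
lucas_soc = rng.uniform(5, 60, size=500)      # stand-in SOC values, g kg-1

# 1) Subset the large library into seven classes.
km = KMeans(n_clusters=7, n_init=10, random_state=0).fit(lucas_spectra)

# 2) Calibrate one PLSR model per class.
models = {c: PLSRegression(n_components=10).fit(lucas_spectra[km.labels_ == c],
                                                lucas_soc[km.labels_ == c])
          for c in range(7)}

# 3) Assign a local spectrum to a class, then predict SOC with that model.
local = rng.normal(size=(1, 200))
c = int(km.predict(local)[0])
print(models[c].predict(local))               # estimated SOC, g kg-1
```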

  12. Relational Database for the Geology of the Northern Rocky Mountains - Idaho, Montana, and Washington

    USGS Publications Warehouse

    Causey, J. Douglas; Zientek, Michael L.; Bookstrom, Arthur A.; Frost, Thomas P.; Evans, Karl V.; Wilson, Anna B.; Van Gosen, Bradley S.; Boleneus, David E.; Pitts, Rebecca A.

    2008-01-01

    A relational database was created to prepare and organize geologic map-unit and lithologic descriptions for input into a spatial database for the geology of the northern Rocky Mountains, a compilation of forty-three geologic maps for parts of Idaho, Montana, and Washington in U.S. Geological Survey Open File Report 2005-1235. Not all of the information was transferred to and incorporated in the spatial database due to physical file limitations. This report releases that part of the relational database that was completed for that earlier product. In addition to descriptive geologic information for the northern Rocky Mountains region, the relational database contains a substantial bibliography of geologic literature for the area. The relational database nrgeo.mdb (linked below) is available in Microsoft Access version 2000, a proprietary database program. The relational database contains data tables and other tables used to define terms, relationships between the data tables, and hierarchical relationships in the data; forms used to enter data; and queries used to extract data.

  13. Neuroinformatics Database (NiDB) – A Modular, Portable Database for the Storage, Analysis, and Sharing of Neuroimaging Data

    PubMed Central

    Anderson, Beth M.; Stevens, Michael C.; Glahn, David C.; Assaf, Michal; Pearlson, Godfrey D.

    2013-01-01

    We present a modular, high-performance, open-source database system that incorporates popular neuroimaging database features with novel peer-to-peer sharing and a simple installation. An increasing number of imaging centers have created a massive amount of neuroimaging data since fMRI became popular more than 20 years ago, with much of that data unshared. The Neuroinformatics Database (NiDB) provides a stable platform to store and manipulate neuroimaging data and addresses several of the impediments to data sharing presented by the INCF Task Force on Neuroimaging Datasharing, including 1) motivation to share data, 2) technical issues, and 3) standards development. NiDB addresses these problems by 1) minimizing PHI use and providing a cost-effective, simple, locally stored platform, 2) storing and associating all data (including genome data) with a subject and creating a peer-to-peer sharing model, and 3) defining a simple, normalized data storage structure that is used in NiDB. NiDB not only simplifies the local storage and analysis of neuroimaging data, but also enables simple sharing of raw data and analysis methods, which may encourage further sharing. PMID:23912507

  14. Local structure-based image decomposition for feature extraction with applications to face recognition.

    PubMed

    Qian, Jianjun; Yang, Jian; Xu, Yong

    2013-09-01

    This paper presents a robust but simple image feature extraction method, called image decomposition based on local structure (IDLS). It is assumed that in the local window of an image, the macro-pixel (patch) of the central pixel, and those of its neighbors, are locally linear. IDLS captures the local structural information by describing the relationship between the central macro-pixel and its neighbors. This relationship is represented with the linear representation coefficients determined using ridge regression. One image is actually decomposed into a series of sub-images (also called structure images) according to a local structure feature vector. All the structure images, after being down-sampled for dimensionality reduction, are concatenated into one super-vector. Fisher linear discriminant analysis is then used to provide a low-dimensional, compact, and discriminative representation for each super-vector. The proposed method is applied to face recognition and examined using our real-world face image database, NUST-RWFR, and five popular, publicly available, benchmark face image databases (AR, Extended Yale B, PIE, FERET, and LFW). Experimental results show the performance advantages of IDLS over state-of-the-art algorithms.
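
    The ridge-regression step of IDLS can be sketched directly from its closed form; the window and patch sizes here are illustrative assumptions, and the surrounding decomposition pipeline is omitted.

```python
# Hedged sketch: represent the central macro-pixel (patch) of a local window
# as a linear combination of its neighbor patches via ridge regression.
import numpy as np

def local_structure_coefficients(center, neighbors, lam=0.1):
    """center: (d,) flattened patch; neighbors: (k, d) neighbor patches."""
    A = neighbors.T                            # (d, k) design matrix
    # Closed-form ridge regression: (A^T A + lam I)^{-1} A^T b
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ center)

rng = np.random.default_rng(2)
c = rng.normal(size=9)        # 3x3 patch around the central pixel, flattened
N = rng.normal(size=(8, 9))   # patches of its 8 neighbors
# One coefficient per neighbor: the local structure feature vector, whose
# components populate the corresponding "structure images".
print(local_structure_coefficients(c, N))
```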

  15. Migration from relational to NoSQL database

    NASA Astrophysics Data System (ADS)

    Ghotiya, Sunita; Mandal, Juhi; Kandasamy, Saravanakumar

    2017-11-01

    Data generated by real-time applications, social networking sites, and sensor devices is enormous in volume and largely unstructured, which makes it difficult for relational database management systems to handle. Data is a precious component of any application and needs to be analysed after being arranged in some structure. Relational databases can deal only with structured data, so there is a need for NoSQL database management systems, which can also deal with semi-structured data. Relational databases provide the easiest way to manage data, but as the use of NoSQL increases it is becoming necessary to migrate data from relational to NoSQL databases. Various frameworks have been proposed that provide mechanisms for migrating data stored in SQL warehouses, as well as middle-layer solutions that allow data that is not structured to be stored in NoSQL databases. This paper provides a literature review of some recent approaches proposed by various researchers to migrate data from relational to NoSQL databases. Some researchers have proposed mechanisms for the co-existence of NoSQL and relational databases. The paper summarises mechanisms that can be used to map data stored in relational databases to NoSQL databases, together with techniques for data transformation and middle-layer solutions.
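
    One commonly proposed mapping, embedding a one-to-many relation as a nested array inside a document, can be sketched as follows. The tables and fields are hypothetical, and SQLite stands in for the source relational database.

```python
# Hedged sketch: turn each relational row into a document and embed a
# one-to-many join as a nested array, ready for a document store.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
    INSERT INTO customers VALUES (1, 'Asha');
    INSERT INTO orders VALUES (10, 1, 99.5), (11, 1, 12.0);
""")

documents = []
for cid, name in conn.execute("SELECT id, name FROM customers"):
    orders = [{"id": oid, "total": t} for oid, t in
              conn.execute("SELECT id, total FROM orders WHERE customer_id=?", (cid,))]
    documents.append({"_id": cid, "name": name, "orders": orders})

print(documents)   # e.g. pass to collection.insert_many(documents) in MongoDB
```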

  16. Automating Relational Database Design for Microcomputer Users.

    ERIC Educational Resources Information Center

    Pu, Hao-Che

    1991-01-01

    Discusses issues involved in automating the relational database design process for microcomputer users and presents a prototype of a microcomputer-based system (RA, Relation Assistant) that is based on expert systems technology and helps avoid database maintenance problems. Relational database design is explained and the importance of easy input…

  17. Association between local government social expenditures and mortality levels in Korea.

    PubMed

    Ko, Hansoo; Kim, Jinseob; Kim, Donggil; Kim, Saerom; Park, Yukyung; Kim, Chang-yup

    2013-01-01

    We examined the association between social expenditures of local governments and mortality levels in Korea from 2004 to 2010. We used social expenditure data of 230 local governments during 2004 to 2010 from the Social Expenditure Database prepared by the Korea Institute for Health and Social Affairs. Fixed effect panel data regression analysis was adopted to look for associations between social expenditures and age-standardized mortality and the premature death index. Social expenditures of local governments per capita were not significantly associated with standardized mortality but were associated with the premature death index (a decline of 1.0 for males and 0.5 for females per 100,000 Korean won, i.e., approximately 100 US dollars, of expenditure). As an index of the voluntary effort of local governments, the self-managed project ratio was associated with a decline in standardized mortality in females (a decline of 0.4 for each increase of 1%). The share of health care was not significant. There were associations between social expenditures of local governments and mortality levels in Korea. In particular, social expenditures per capita were significantly associated with a decline in premature death. However, the voluntary efforts of local governments were not significantly related to the decline in premature death.
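
    A minimal sketch of a fixed-effect panel regression of this kind, using entity and year dummies in statsmodels. The variable names and toy data are illustrative, not the study's.

```python
# Hedged sketch: fixed effects via dummy variables for local government and year.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "gov":        ["A", "A", "B", "B", "A", "A", "B", "B"],
    "year":       [2004, 2005, 2004, 2005, 2004, 2005, 2004, 2005],
    "mortality":  [5.1, 4.9, 6.0, 5.8, 5.2, 5.0, 6.1, 5.7],
    "social_exp": [1.0, 1.2, 0.8, 0.9, 1.1, 1.3, 0.7, 1.0],
})

# C(gov) and C(year) absorb time-invariant entity effects and common shocks.
fit = smf.ols("mortality ~ social_exp + C(gov) + C(year)", data=df).fit()
print(fit.params["social_exp"])   # within-entity association of interest
```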

  18. The PREDICTS database: a global database of how local terrestrial biodiversity responds to human impacts

    PubMed Central

    Hudson, Lawrence N; Newbold, Tim; Contu, Sara; Hill, Samantha L L; Lysenko, Igor; De Palma, Adriana; Phillips, Helen R P; Senior, Rebecca A; Bennett, Dominic J; Booth, Hollie; Choimes, Argyrios; Correia, David L P; Day, Julie; Echeverría-Londoño, Susy; Garon, Morgan; Harrison, Michelle L K; Ingram, Daniel J; Jung, Martin; Kemp, Victoria; Kirkpatrick, Lucinda; Martin, Callum D; Pan, Yuan; White, Hannah J; Aben, Job; Abrahamczyk, Stefan; Adum, Gilbert B; Aguilar-Barquero, Virginia; Aizen, Marcelo A; Ancrenaz, Marc; Arbeláez-Cortés, Enrique; Armbrecht, Inge; Azhar, Badrul; Azpiroz, Adrián B; Baeten, Lander; Báldi, András; Banks, John E; Barlow, Jos; Batáry, Péter; Bates, Adam J; Bayne, Erin M; Beja, Pedro; Berg, Åke; Berry, Nicholas J; Bicknell, Jake E; Bihn, Jochen H; Böhning-Gaese, Katrin; Boekhout, Teun; Boutin, Céline; Bouyer, Jérémy; Brearley, Francis Q; Brito, Isabel; Brunet, Jörg; Buczkowski, Grzegorz; Buscardo, Erika; Cabra-García, Jimmy; Calviño-Cancela, María; Cameron, Sydney A; Cancello, Eliana M; Carrijo, Tiago F; Carvalho, Anelena L; Castro, Helena; Castro-Luna, Alejandro A; Cerda, Rolando; Cerezo, Alexis; Chauvat, Matthieu; Clarke, Frank M; Cleary, Daniel F R; Connop, Stuart P; D'Aniello, Biagio; da Silva, Pedro Giovâni; Darvill, Ben; Dauber, Jens; Dejean, Alain; Diekötter, Tim; Dominguez-Haydar, Yamileth; Dormann, Carsten F; Dumont, Bertrand; Dures, Simon G; Dynesius, Mats; Edenius, Lars; Elek, Zoltán; Entling, Martin H; Farwig, Nina; Fayle, Tom M; Felicioli, Antonio; Felton, Annika M; Ficetola, Gentile F; Filgueiras, Bruno K C; Fonte, Steven J; Fraser, Lauchlan H; Fukuda, Daisuke; Furlani, Dario; Ganzhorn, Jörg U; Garden, Jenni G; Gheler-Costa, Carla; Giordani, Paolo; Giordano, Simonetta; Gottschalk, Marco S; Goulson, Dave; Gove, Aaron D; Grogan, James; Hanley, Mick E; Hanson, Thor; Hashim, Nor R; Hawes, Joseph E; Hébert, Christian; Helden, Alvin J; Henden, John-André; Hernández, Lionel; Herzog, Felix; Higuera-Diaz, Diego; Hilje, Branko; Horgan, Finbarr G; Horváth, Roland; Hylander, Kristoffer; Isaacs-Cubides, Paola; Ishitani, Masahiro; Jacobs, Carmen T; Jaramillo, Víctor J; Jauker, Birgit; Jonsell, Mats; Jung, Thomas S; Kapoor, Vena; Kati, Vassiliki; Katovai, Eric; Kessler, Michael; Knop, Eva; Kolb, Annette; Kőrösi, Ádám; Lachat, Thibault; Lantschner, Victoria; Le Féon, Violette; LeBuhn, Gretchen; Légaré, Jean-Philippe; Letcher, Susan G; Littlewood, Nick A; López-Quintero, Carlos A; Louhaichi, Mounir; Lövei, Gabor L; Lucas-Borja, Manuel Esteban; Luja, Victor H; Maeto, Kaoru; Magura, Tibor; Mallari, Neil Aldrin; Marin-Spiotta, Erika; Marshall, E J P; Martínez, Eliana; Mayfield, Margaret M; Mikusinski, Grzegorz; Milder, Jeffrey C; Miller, James R; Morales, Carolina L; Muchane, Mary N; Muchane, Muchai; Naidoo, Robin; Nakamura, Akihiro; Naoe, Shoji; Nates-Parra, Guiomar; Navarrete Gutierrez, Dario A; Neuschulz, Eike L; Noreika, Norbertas; Norfolk, Olivia; Noriega, Jorge Ari; Nöske, Nicole M; O'Dea, Niall; Oduro, William; Ofori-Boateng, Caleb; Oke, Chris O; Osgathorpe, Lynne M; Paritsis, Juan; Parra-H, Alejandro; Pelegrin, Nicolás; Peres, Carlos A; Persson, Anna S; Petanidou, Theodora; Phalan, Ben; Philips, T Keith; Poveda, Katja; Power, Eileen F; Presley, Steven J; Proença, Vânia; Quaranta, Marino; Quintero, Carolina; Redpath-Downing, Nicola A; Reid, J Leighton; Reis, Yana T; Ribeiro, Danilo B; Richardson, Barbara A; Richardson, Michael J; Robles, Carolina A; Römbke, Jörg; Romero-Duque, Luz Piedad; Rosselli, Loreta; Rossiter, Stephen J; Roulston, T'ai H; Rousseau, Laurent; 
Sadler, Jonathan P; Sáfián, Szabolcs; Saldaña-Vázquez, Romeo A; Samnegård, Ulrika; Schüepp, Christof; Schweiger, Oliver; Sedlock, Jodi L; Shahabuddin, Ghazala; Sheil, Douglas; Silva, Fernando A B; Slade, Eleanor M; Smith-Pardo, Allan H; Sodhi, Navjot S; Somarriba, Eduardo J; Sosa, Ramón A; Stout, Jane C; Struebig, Matthew J; Sung, Yik-Hei; Threlfall, Caragh G; Tonietto, Rebecca; Tóthmérész, Béla; Tscharntke, Teja; Turner, Edgar C; Tylianakis, Jason M; Vanbergen, Adam J; Vassilev, Kiril; Verboven, Hans A F; Vergara, Carlos H; Vergara, Pablo M; Verhulst, Jort; Walker, Tony R; Wang, Yanping; Watling, James I; Wells, Konstans; Williams, Christopher D; Willig, Michael R; Woinarski, John C Z; Wolf, Jan H D; Woodcock, Ben A; Yu, Douglas W; Zaitsev, Andrey S; Collen, Ben; Ewers, Rob M; Mace, Georgina M; Purves, Drew W; Scharlemann, Jörn P W; Purvis, Andy

    2014-01-01

    Biodiversity continues to decline in the face of increasing anthropogenic pressures such as habitat destruction, exploitation, pollution and introduction of alien species. Existing global databases of species’ threat status or population time series are dominated by charismatic species. The collation of datasets with broad taxonomic and biogeographic extents, and that support computation of a range of biodiversity indicators, is necessary to enable better understanding of historical declines and to project – and avert – future declines. We describe and assess a new database of more than 1.6 million samples from 78 countries representing over 28,000 species, collated from existing spatial comparisons of local-scale biodiversity exposed to different intensities and types of anthropogenic pressures, from terrestrial sites around the world. The database contains measurements taken in 208 (of 814) ecoregions, 13 (of 14) biomes, 25 (of 35) biodiversity hotspots and 16 (of 17) megadiverse countries. The database contains more than 1% of the total number of all species described, and more than 1% of the described species within many taxonomic groups – including flowering plants, gymnosperms, birds, mammals, reptiles, amphibians, beetles, lepidopterans and hymenopterans. The dataset, which is still being added to, is therefore already considerably larger and more representative than those used by previous quantitative models of biodiversity trends and responses. The database is being assembled as part of the PREDICTS project (Projecting Responses of Ecological Diversity In Changing Terrestrial Systems – http://www.predicts.org.uk). We make site-level summary data available alongside this article. The full database will be publicly available in 2015. PMID:25558364

  19. The PREDICTS database: a global database of how local terrestrial biodiversity responds to human impacts.

    PubMed

    Hudson, Lawrence N; Newbold, Tim; Contu, Sara; Hill, Samantha L L; Lysenko, Igor; De Palma, Adriana; Phillips, Helen R P; Senior, Rebecca A; Bennett, Dominic J; Booth, Hollie; Choimes, Argyrios; Correia, David L P; Day, Julie; Echeverría-Londoño, Susy; Garon, Morgan; Harrison, Michelle L K; Ingram, Daniel J; Jung, Martin; Kemp, Victoria; Kirkpatrick, Lucinda; Martin, Callum D; Pan, Yuan; White, Hannah J; Aben, Job; Abrahamczyk, Stefan; Adum, Gilbert B; Aguilar-Barquero, Virginia; Aizen, Marcelo A; Ancrenaz, Marc; Arbeláez-Cortés, Enrique; Armbrecht, Inge; Azhar, Badrul; Azpiroz, Adrián B; Baeten, Lander; Báldi, András; Banks, John E; Barlow, Jos; Batáry, Péter; Bates, Adam J; Bayne, Erin M; Beja, Pedro; Berg, Åke; Berry, Nicholas J; Bicknell, Jake E; Bihn, Jochen H; Böhning-Gaese, Katrin; Boekhout, Teun; Boutin, Céline; Bouyer, Jérémy; Brearley, Francis Q; Brito, Isabel; Brunet, Jörg; Buczkowski, Grzegorz; Buscardo, Erika; Cabra-García, Jimmy; Calviño-Cancela, María; Cameron, Sydney A; Cancello, Eliana M; Carrijo, Tiago F; Carvalho, Anelena L; Castro, Helena; Castro-Luna, Alejandro A; Cerda, Rolando; Cerezo, Alexis; Chauvat, Matthieu; Clarke, Frank M; Cleary, Daniel F R; Connop, Stuart P; D'Aniello, Biagio; da Silva, Pedro Giovâni; Darvill, Ben; Dauber, Jens; Dejean, Alain; Diekötter, Tim; Dominguez-Haydar, Yamileth; Dormann, Carsten F; Dumont, Bertrand; Dures, Simon G; Dynesius, Mats; Edenius, Lars; Elek, Zoltán; Entling, Martin H; Farwig, Nina; Fayle, Tom M; Felicioli, Antonio; Felton, Annika M; Ficetola, Gentile F; Filgueiras, Bruno K C; Fonte, Steven J; Fraser, Lauchlan H; Fukuda, Daisuke; Furlani, Dario; Ganzhorn, Jörg U; Garden, Jenni G; Gheler-Costa, Carla; Giordani, Paolo; Giordano, Simonetta; Gottschalk, Marco S; Goulson, Dave; Gove, Aaron D; Grogan, James; Hanley, Mick E; Hanson, Thor; Hashim, Nor R; Hawes, Joseph E; Hébert, Christian; Helden, Alvin J; Henden, John-André; Hernández, Lionel; Herzog, Felix; Higuera-Diaz, Diego; Hilje, Branko; Horgan, Finbarr G; Horváth, Roland; Hylander, Kristoffer; Isaacs-Cubides, Paola; Ishitani, Masahiro; Jacobs, Carmen T; Jaramillo, Víctor J; Jauker, Birgit; Jonsell, Mats; Jung, Thomas S; Kapoor, Vena; Kati, Vassiliki; Katovai, Eric; Kessler, Michael; Knop, Eva; Kolb, Annette; Kőrösi, Ádám; Lachat, Thibault; Lantschner, Victoria; Le Féon, Violette; LeBuhn, Gretchen; Légaré, Jean-Philippe; Letcher, Susan G; Littlewood, Nick A; López-Quintero, Carlos A; Louhaichi, Mounir; Lövei, Gabor L; Lucas-Borja, Manuel Esteban; Luja, Victor H; Maeto, Kaoru; Magura, Tibor; Mallari, Neil Aldrin; Marin-Spiotta, Erika; Marshall, E J P; Martínez, Eliana; Mayfield, Margaret M; Mikusinski, Grzegorz; Milder, Jeffrey C; Miller, James R; Morales, Carolina L; Muchane, Mary N; Muchane, Muchai; Naidoo, Robin; Nakamura, Akihiro; Naoe, Shoji; Nates-Parra, Guiomar; Navarrete Gutierrez, Dario A; Neuschulz, Eike L; Noreika, Norbertas; Norfolk, Olivia; Noriega, Jorge Ari; Nöske, Nicole M; O'Dea, Niall; Oduro, William; Ofori-Boateng, Caleb; Oke, Chris O; Osgathorpe, Lynne M; Paritsis, Juan; Parra-H, Alejandro; Pelegrin, Nicolás; Peres, Carlos A; Persson, Anna S; Petanidou, Theodora; Phalan, Ben; Philips, T Keith; Poveda, Katja; Power, Eileen F; Presley, Steven J; Proença, Vânia; Quaranta, Marino; Quintero, Carolina; Redpath-Downing, Nicola A; Reid, J Leighton; Reis, Yana T; Ribeiro, Danilo B; Richardson, Barbara A; Richardson, Michael J; Robles, Carolina A; Römbke, Jörg; Romero-Duque, Luz Piedad; Rosselli, Loreta; Rossiter, Stephen J; Roulston, T'ai H; Rousseau, Laurent; 
Sadler, Jonathan P; Sáfián, Szabolcs; Saldaña-Vázquez, Romeo A; Samnegård, Ulrika; Schüepp, Christof; Schweiger, Oliver; Sedlock, Jodi L; Shahabuddin, Ghazala; Sheil, Douglas; Silva, Fernando A B; Slade, Eleanor M; Smith-Pardo, Allan H; Sodhi, Navjot S; Somarriba, Eduardo J; Sosa, Ramón A; Stout, Jane C; Struebig, Matthew J; Sung, Yik-Hei; Threlfall, Caragh G; Tonietto, Rebecca; Tóthmérész, Béla; Tscharntke, Teja; Turner, Edgar C; Tylianakis, Jason M; Vanbergen, Adam J; Vassilev, Kiril; Verboven, Hans A F; Vergara, Carlos H; Vergara, Pablo M; Verhulst, Jort; Walker, Tony R; Wang, Yanping; Watling, James I; Wells, Konstans; Williams, Christopher D; Willig, Michael R; Woinarski, John C Z; Wolf, Jan H D; Woodcock, Ben A; Yu, Douglas W; Zaitsev, Andrey S; Collen, Ben; Ewers, Rob M; Mace, Georgina M; Purves, Drew W; Scharlemann, Jörn P W; Purvis, Andy

    2014-12-01

    Biodiversity continues to decline in the face of increasing anthropogenic pressures such as habitat destruction, exploitation, pollution and introduction of alien species. Existing global databases of species' threat status or population time series are dominated by charismatic species. The collation of datasets with broad taxonomic and biogeographic extents, and that support computation of a range of biodiversity indicators, is necessary to enable better understanding of historical declines and to project - and avert - future declines. We describe and assess a new database of more than 1.6 million samples from 78 countries representing over 28,000 species, collated from existing spatial comparisons of local-scale biodiversity exposed to different intensities and types of anthropogenic pressures, from terrestrial sites around the world. The database contains measurements taken in 208 (of 814) ecoregions, 13 (of 14) biomes, 25 (of 35) biodiversity hotspots and 16 (of 17) megadiverse countries. The database contains more than 1% of the total number of all species described, and more than 1% of the described species within many taxonomic groups - including flowering plants, gymnosperms, birds, mammals, reptiles, amphibians, beetles, lepidopterans and hymenopterans. The dataset, which is still being added to, is therefore already considerably larger and more representative than those used by previous quantitative models of biodiversity trends and responses. The database is being assembled as part of the PREDICTS project (Projecting Responses of Ecological Diversity In Changing Terrestrial Systems - http://www.predicts.org.uk). We make site-level summary data available alongside this article. The full database will be publicly available in 2015.

  20. CRETACEOUS CLIMATE SENSITIVITY STUDY USING DINOSAUR & PLANT PALEOBIOGEOGRAPHY

    NASA Astrophysics Data System (ADS)

    Goswami, A.; Main, D. J.; Noto, C. R.; Moore, T. L.; Scotese, C.

    2009-12-01

    The Early Cretaceous was characterized by cool poles and moderate global temperatures (~16° C). During the mid and late Cretaceous, long-term global warming (~20°-22° C) was driven by increasing levels of CO2, rising sea level (lowering albedo) and the continuing breakup of Pangea. Paleoclimatic reconstructions for four time intervals during the Cretaceous are presented here: Middle Campanian (80 Ma), Cenomanian/Turonian (90 Ma), Early Albian (110 Ma) and Barremian-Hauterivian (130 Ma). These paleoclimate simulations were prepared using the Fast Ocean and Atmosphere Model (FOAM). The simulated results show the pattern of the pole-to-Equator temperature gradients, rainfall, surface run-off, and the location of major rivers and deltas. In order to investigate the effect of potential dispersal routes on paleobiogeographic patterns, a time-slice series of maps from the Early to Late Cretaceous was produced showing plots of dinosaur and plant fossil distributions. These maps were created utilizing: 1) plant fossil localities from the GEON and Paleobiology Database (PBDB) databases; and 2) dinosaur fossil localities from an updated version of the Dinosauria (Weishampel, 2004) database. These results are compared to two different types of datasets: 1) a paleotemperature database for the Cretaceous, and 2) locality data obtained from the GEON, PBDB and Dinosauria databases. Global latitudinal mean temperatures from both the model and the paleotemperature database were plotted on a series of latitudinal graphs along with the distributions of fossil plants and dinosaurs. It was found that most dinosaur localities throughout the Cretaceous tend to cluster within specific climate belts, or envelopes. These Cretaceous maps also show variance in the biogeographic zonation of both plants and dinosaurs that is commensurate with reconstructed climate patterns and geography. These data are particularly useful for understanding the response of late Mesozoic ecosystems to geographic and climatic conditions that differed markedly from the present. Studies of past biotas and their changes may elucidate the role of climatic and geographic factors in driving changes in species distributions, ecosystem organization, and evolutionary dynamics over time.

  1. 78 FR 25095 - Notice of an Extension of an Information Collection (1028-0092)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-29

    ... the development of The National Map and other national geospatial databases. In FY 2010, projects for... including elevation, orthoimagery, hydrography and other layers in the national databases may be possible. We will accept applications from State, local or tribal governments and academic institutions to...

  2. Downscaling climate information for local disease mapping.

    PubMed

    Bernardi, M; Gommes, R; Grieser, J

    2006-06-01

    The study of the impacts of climate on human health requires the interdisciplinary efforts of health professionals, climatologists, biologists, and social scientists to analyze the relationships among physical, biological, ecological, and social systems. Disease dynamics respond to variations in regional and local climate; climate variability affects every region of the world, and diseases are not necessarily limited to specific regions, so vectors may become endemic in new regions. Climate data at the local level are thus essential for evaluating the dynamics of vector-borne disease through health-climate models, yet most of the time the available climatological databases are not adequate. Climate data at high spatial resolution can be derived by statistical downscaling using historical observations, but the method is limited by the availability of historical data at the local level. Since the 1990s, the statistical interpolation of climate data has been an important priority of the Agrometeorology Group of the Food and Agriculture Organization of the United Nations (FAO), as such data are required for agricultural planning and operational activities at the local level. Since 1995, when the first FAO spatial interpolation software for climate data was released, more advanced applications have been developed, such as SEDI (Satellite Enhanced Data Interpolation) for the downscaling of climate data, and LOCCLIM (Local Climate Estimator) and NEW_LOCCLIM, developed in collaboration with the Deutscher Wetterdienst (German Weather Service), to estimate climatic conditions at locations for which no observations are available. In parallel, an important effort has been made to improve the FAO climate database, which at present includes more than 30,000 stations worldwide, and to expand its coverage from developing countries to the entire globe.
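
    A minimal sketch of estimating a climate value at an unobserved location from surrounding stations, using inverse-distance weighting as a simple stand-in for the more elaborate interpolation methods mentioned above; the coordinates and values are illustrative.

```python
# Hedged sketch: inverse-distance-weighted estimate at an ungauged location.
import numpy as np

def idw(station_xy, station_vals, target_xy, power=2.0):
    d = np.linalg.norm(station_xy - target_xy, axis=1)
    if np.any(d == 0):                    # target coincides with a station
        return float(station_vals[d == 0][0])
    w = 1.0 / d**power                    # nearer stations weigh more
    return float(np.sum(w * station_vals) / np.sum(w))

stations = np.array([[38.7, 9.0], [39.3, 8.5], [38.9, 9.6]])  # lon, lat
temps = np.array([21.5, 24.0, 19.8])                          # monthly mean, deg C
print(idw(stations, temps, np.array([39.0, 9.1])))
```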

  3. Local media monitoring in process evaluation. Experiences from the Stockholm Diabetes Prevention Programme.

    PubMed

    Andersson, Camilla Maria; Bjärås, Gunilla; Tillgren, Per; Ostenson, Claes-Göran

    2007-01-01

    We present a rationale and approach for longitudinal analyses of media coverage and content, and illustrate how media monitoring can be used in process evaluations. Within a community-based diabetes prevention project, the Stockholm Diabetes Prevention Program, we analyzed the frequency, prominence, and framing of physical activity in local newspapers of three intervention and two control municipalities. In total, 2,128 stories and advertisements related to physical activity were identified between the years 1997 and 2002. Although stories about physical activity were relatively few (n = 224), they were prominently located in all five local newspapers. Physical activity was framed rather similarly across the municipalities. Health aspects, however, were expressed to a greater extent in stories in two of the intervention municipalities. A limited portion (14%) of the articles could be linked directly to the program. It is not possible to assess to what extent the program had a disseminating effect on the newspapers' health-related content in general, owing to weaknesses of the process tracking system and limitations of the study design. An implication for study design is that an evaluative framework should be preplanned and should include data collection on media relationships, the media's interest in public health, media coverage prior to the program, and coverage in other media for comparison of general trends in reporting. The material and the current database, however, provide a good basis for quantitative content analysis and qualitative discourse analysis to yield information on the type, frequency, and content of health reporting in local newspapers.

  4. Modelling and approaching pragmatic interoperability of distributed geoscience data

    NASA Astrophysics Data System (ADS)

    Ma, Xiaogang

    2010-05-01

    Interoperability of geodata, which is essential for sharing information and discovering insights within a cyberinfrastructure, is receiving increasing attention. A key requirement of interoperability in the context of geodata sharing is that data provided by local sources can be accessed, decoded, understood and appropriately used by external users. Various researchers have discussed that there are four levels of data interoperability issues: system, syntax, schematics and semantics, which relate respectively to the platform, encoding, structure and meaning of geodata. Ontology-driven approaches addressing the schematic and semantic interoperability of geodata have been studied extensively in the last decade. Ontologies come in different types (e.g. top-level ontologies, domain ontologies and application ontologies) and display forms (e.g. glossaries, thesauri, conceptual schemas and logical theories). Many geodata providers maintain their own local application ontologies in order to drive standardization in local databases. However, semantic heterogeneities often exist between these local ontologies, even when they derive from equivalent disciplines. In parallel, common ontologies are being studied in different geoscience disciplines (e.g., NADM, SWEET) as a standardization procedure to coordinate diverse local ontologies. Semantic mediation, e.g. mapping between local ontologies, or mapping local ontologies to common ontologies, has been studied as an effective way of achieving semantic interoperability between local ontologies and thus reconciling semantic heterogeneities in multi-source geodata. Nevertheless, confusion still exists in the research field of semantic interoperability. One problem is caused by eliminating elements of local pragmatic contexts in semantic mediation. Compared with the context-independence of a common domain ontology, local application ontologies are closely related to elements of local pragmatic contexts (e.g., people, time, location, intention, procedure, consequence) and are thus context-dependent. Eliminating these elements inevitably leads to information loss in semantic mediation between local ontologies; correspondingly, the understanding and effect of exchanged data in a new context may differ from those in its original context. Another problem is the dilemma of finding a balance between the flexibility and the standardization of local ontologies, because ontologies are not fixed but continuously evolving. It is commonly recognized that a unified ontology cannot replace all local ontologies, because local ontologies are context-dependent and need flexibility. However, without coordinating standards, freely developed local ontologies and databases would entail an enormous amount of mediation work between them. Finding a balance between standardization and flexibility for evolving ontologies requires, in a practical sense, negotiations (i.e. conversations, agreements and collaborations) between different local pragmatic contexts. The purpose of this work is to set up a computer-friendly model representing local pragmatic contexts (i.e. geodata sources) and to propose a practical semantic negotiation procedure for approaching pragmatic interoperability between local pragmatic contexts. Information agents, objective facts and subjective dimensions are reviewed as elements of a conceptual model for representing pragmatic contexts.
The author uses these elements to outline a practical semantic negotiation procedure for approaching pragmatic interoperability of distributed geodata. The proposed conceptual model and semantic negotiation procedure were encoded in Description Logic and then applied to analyze and manipulate semantic negotiations between different local ontologies within the National Mineral Resources Assessment (NMRA) project of China, which involves multi-source and multi-subject geodata sharing.

  5. Multi-Sensor Scene Synthesis and Analysis

    DTIC Science & Technology

    1981-09-01

    Front-matter extract only: the table of contents covers quad trees for image representation and processing, databases (definitions and basic concepts), the use of databases and relational tables in hierarchical scene analysis, Multisensor Image Database Systems (MIDAS), a relational database system for pictures, and relational pictorial databases; no abstract text is recoverable.

  6. Local Community Verification of Coastal Erosion Risks in the Arctic: Insights from Alaska's North Slope

    NASA Astrophysics Data System (ADS)

    Brady, M.

    2016-12-01

    During his historic trip to Alaska in 2015, U.S. President Barack Obama announced a collaborative effort to update maps of the Arctic region in anticipation of increased maritime access and resource development and to support climate resilience. Included in this effort is the development of an Arctic-wide satellite-based digital elevation model (DEM) to provide a baseline for monitoring landscape change such as coastal erosion. Focusing on Alaska's North Slope, an objective of this study is to transform emerging Arctic environment spatial data products, including the new DEM, into information that can support local-level planning and decision-making in the face of extreme coastal erosion and related environmental threats. In pursuit of this, four workshops were held in 2016 in three North Slope villages highly exposed to coastal erosion. The first workshop, with approximately 10 managers in Barrow, solicited feedback on an erosion risk database developed in a previous research stage and installed on the North Slope's planning Web portal. The database includes a physical risk indicator based on factors such as historical erosion and the effects of sea ice loss, summarized at asset locations. After a demonstration of the database, participants discussed usability aspects such as data reliability. The focus of the mapping workshops in Barrow and the two smaller villages of Wainwright and Kaktovik was to verify and expand the risk database by interactively mapping erosion observations and community asset impacts. Using coded stickers and paper maps of the shoreline showing USGS erosion rates, a total of 50 participants provided feedback on erosion data accuracy. Approximately 25 of the 50 participants were elders and hunters, who also provided in-depth community risk information. The workshop with managers confirmed the physical risk factors used in the risk database, and revealed that the information may be relied upon to support some development decisions and to better engage developers about erosion risks. Results from the three mapping workshops revealed that most participants agree that the USGS data are consistent with their observations. In-depth contributions from elders and hunters also confirmed the need to monitor the loss of specific assets, including hunting grounds and historic places, and associated community impacts.

  7. Texture-based approach to palmprint retrieval for personal identification

    NASA Astrophysics Data System (ADS)

    Li, Wenxin; Zhang, David; Xu, Z.; You, J.

    2000-12-01

    This paper presents a new approach to palmprint retrieval for personal identification. Three key issues in image retrieval are considered - feature selection, similarity measures and dynamic search for the best match of the sample in the image database. We propose a texture-based method for palmprint feature representation. The concept of texture energy is introduced to define a palmprint's global and local features, which are characterized by high convergence of inner-palm similarities and good dispersion of inter-palm discrimination. The search is carried out in a layered fashion: first, global features are used to guide the fast selection of a small set of similar candidates from the database, and then local features are used to decide the final output within the candidate set. The experimental results demonstrate the effectiveness and accuracy of the proposed method.
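
    A minimal sketch of texture-energy features and the layered search, assuming Laws-style separable masks; the paper's actual masks, energy definition, and local-feature stage are not reproduced here.

```python
# Hedged sketch: global texture-energy features shortlist candidates;
# a local-feature stage (omitted) would decide the final output.
import numpy as np
from scipy.signal import convolve2d

L5 = np.array([1, 4, 6, 4, 1], float)     # level mask
E5 = np.array([-1, -2, 0, 2, 1], float)   # edge mask

def texture_energy(img, mask):
    return float(np.mean(np.abs(convolve2d(img, mask, mode="same"))))

def global_features(img):
    # Energies of a few separable masks over the whole palm image.
    return np.array([texture_energy(img, np.outer(a, b))
                     for a in (L5, E5) for b in (L5, E5)])

def retrieve(query, database, n_candidates=5):
    q = global_features(query)
    dists = [np.linalg.norm(q - global_features(img)) for img in database]
    return np.argsort(dists)[:n_candidates]

rng = np.random.default_rng(0)
db = [rng.normal(size=(64, 64)) for _ in range(20)]
print(retrieve(db[3], db))   # index 3 should rank first
```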

  8. Texture-based approach to palmprint retrieval for personal identification

    NASA Astrophysics Data System (ADS)

    Li, Wenxin; Zhang, David; Xu, Z.; You, J.

    2001-01-01

    This paper presents a new approach to palmprint retrieval for personal identification. Three key issues in image retrieval are considered - feature selection, similarity measures and dynamic search for the best match of the sample in the image database. We propose a texture-based method for palmprint feature representation. The concept of texture energy is introduced to define a palmprint's global and local features, which are characterized by high convergence of inner-palm similarities and good dispersion of inter-palm discrimination. The search is carried out in a layered fashion: first, global features are used to guide the fast selection of a small set of similar candidates from the database, and then local features are used to decide the final output within the candidate set. The experimental results demonstrate the effectiveness and accuracy of the proposed method.

  9. Enhanced DIII-D Data Management Through a Relational Database

    NASA Astrophysics Data System (ADS)

    Burruss, J. R.; Peng, Q.; Schachter, J.; Schissel, D. P.; Terpstra, T. B.

    2000-10-01

    A relational database is being used to serve data about DIII-D experiments. The database is optimized for queries across multiple shots, allowing for rapid data mining by SQL-literate researchers. The relational database relates different experiments and datasets, thus providing a big picture of DIII-D operations. Users are encouraged to add their own tables to the database. Summary physics quantities about DIII-D discharges are collected and stored in the database automatically. Meta-data about code runs, MDSplus usage, and visualization tool usage are collected, stored in the database, and later analyzed to improve computing. The database may be accessed through programming languages such as C, Java, and IDL, or through ODBC-compliant applications such as Excel and Access. A database-driven web page also provides a convenient means of viewing database quantities through the World Wide Web. Demonstrations will be given at the poster.
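
    A minimal sketch of the kind of cross-shot query such a database enables, using SQLite for portability. The table and column names are hypothetical, not the actual DIII-D schema.

```python
# Hedged sketch: data mining across shots with plain SQL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE shots (shot INTEGER PRIMARY KEY, date TEXT,
                        ip_max REAL,       -- peak plasma current (MA)
                        beta_n REAL);      -- normalized beta
    INSERT INTO shots VALUES
        (100001, '2000-06-01', 1.4, 2.1),
        (100002, '2000-06-01', 1.6, 2.8),
        (100003, '2000-06-02', 1.2, 1.9);
""")

# Find every discharge above a performance threshold, best first.
for row in conn.execute(
        "SELECT shot, ip_max, beta_n FROM shots WHERE beta_n > 2 "
        "ORDER BY beta_n DESC"):
    print(row)
```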

  10. Toward a Bio-Medical Thesaurus: Building the Foundation of the UMLS

    PubMed Central

    Tuttle, Mark S.; Blois, Marsden S.; Erlbaum, Mark S.; Nelson, Stuart J.; Sherertz, David D.

    1988-01-01

    The Unified Medical Language System (UMLS) is being designed to provide a uniform user interface to heterogeneous machine-readable bio-medical information resources, such as bibliographic databases, genetic databases, expert systems and patient records.1 Such an interface will have to recognize different ways of saying the same thing, and provide links to ways of saying related things. One way to represent the necessary associations is via a domain thesaurus. As no such thesaurus exists, and because, once built, it will be both sizable and in need of continuous maintenance, its design should include a methodology for building and maintaining it. We propose a methodology, utilizing lexically expanded schema inversion, and a design, called T. Lex, which together form one approach to the problem of defining and building a bio-medical thesaurus. We argue that the semantic locality implicit in such a thesaurus will support model-based reasoning in bio-medicine.2

  11. Assistive technology for ultrasound-guided central venous catheter placement.

    PubMed

    Ikhsan, Mohammad; Tan, Kok Kiong; Putra, Andi Sudjana

    2018-01-01

    This study evaluated the existing technology used to improve the safety and ease of ultrasound-guided central venous catheterization. Electronic database searches were conducted in Scopus, IEEE, Google Patents, and relevant conference databases (SPIE, MICCAI, and IEEE conferences) for related articles on assistive technology for ultrasound-guided central venous catheterization. A total of 89 articles were examined and pointed to several fields that are currently the focus of improvements to ultrasound-guided procedures. These include improving needle visualization, needle guides and localization technology, image processing algorithms to enhance and segment important features within the ultrasound image, robotic assistance using probe-mounted manipulators, and improving procedure ergonomics through in situ projections of important information. Probe-mounted robotic manipulators provide a promising avenue for assistive technology developed for freehand ultrasound-guided percutaneous procedures. However, there is currently a lack of clinical trials to validate the effectiveness of these devices.

  12. Database Dictionary for Ethiopian National Ground-Water DAtabase (ENGDA) Data Fields

    USGS Publications Warehouse

    Kuniansky, Eve L.; Litke, David W.; Tucci, Patrick

    2007-01-01

    Introduction This document describes the data fields that are used for both field forms and the Ethiopian National Ground-water Database (ENGDA) tables associated with information stored about production wells, springs, test holes, test wells, and water-level or water-quality observation wells. Several different words are used in this database dictionary and in the ENGDA database to describe a narrow shaft constructed in the ground. The most general term is borehole, which is applicable to any type of hole. A well is a borehole specifically constructed to extract water from the ground; however, for this data dictionary and for the ENGDA database, the words well and borehole are used interchangeably. A production well is defined as any well used for water supply and includes hand-dug wells, small-diameter bored wells equipped with hand pumps, or large-diameter bored wells equipped with large-capacity motorized pumps. Test holes are borings made to collect information about the subsurface with continuous or non-continuous core and/or where geophysical logs are collected. Test holes are not converted into wells. A test well is a well constructed for hydraulic testing of an aquifer in order to plan a larger ground-water production system. A water-level or water-quality observation well is a well that is used to collect information about an aquifer and is not used for water supply. A spring is any naturally flowing, local, ground-water discharge site. The database dictionary is designed to help define all fields both on the field data collection forms (provided in attachment 2 of this report) and on the ENGDA software screen-entry forms (described in Litke, 2007). The data entered into each screen-entry field are stored in relational database tables within the computer database. The database dictionary is organized around field data collection and the field forms, because these are what the majority of people will use. After each field, however, the ENGDA database field name and relational database table are designated, along with the ENGDA screen-entry form(s) and the ENGDA field form (attachment 2). The database dictionary is separated into sections. The first section, Basic Site Data Fields, describes the basic site information that is similar for all of the different types of sites. The remaining sections may be applicable to only one type of site; for example, the Well Drilling and Construction Data Fields and Lithologic Description Data Fields are applicable to boreholes and not to springs. Attachment 1 contains a table for conversion from English to metric units. Attachment 2 contains selected field forms used in conjunction with ENGDA. A separate document, 'Users Reference Manual for the Ethiopian National Ground-Water DAtabase (ENGDA),' by David W. Litke, was developed as a users guide for the computer database and screen entry. This database dictionary serves as a reference for both the field forms and the computer database. Every effort has been made to keep field names identical between the field forms and the screen-entry forms in order to avoid confusion.

  13. RExPrimer: an integrated primer designing tool increases PCR effectiveness by avoiding 3' SNP-in-primer and mis-priming from structural variation

    PubMed Central

    2009-01-01

    Background Polymerase chain reaction (PCR) is very useful in many areas of molecular biology research. It is commonly observed that PCR success is critically dependent on the design of an effective primer pair. Current tools for primer design do not adequately address the problem of PCR failure due to mis-priming on target-related sequences and structural variations in the genome. Methods We have developed an integrated graphical web-based application for primer design, called RExPrimer, written in Python. The software uses Primer3 as its core primer-design algorithm. Locally stored sequence information and genomic variant information were hosted in MySQL v5.0 and incorporated into RExPrimer. Results RExPrimer provides many functionalities for improved PCR primer design. Several databases, namely annotated human SNP databases, an insertion/deletion (indel) polymorphism database, a pseudogene database, and structural genomic variation databases, were integrated into RExPrimer, enabling effective validation of the resulting primers without leaving the website. By incorporating these databases, the primers reported by RExPrimer avoid mis-priming to related sequences (e.g. pseudogenes, segmental duplications) as well as possible PCR failure because of structural polymorphisms (SNP, indel, and copy number variation (CNV)). To prevent mismatching caused by unexpected SNPs in the designed primers, in particular at the 3' end (SNP-in-Primer), several SNP databases covering a broad range of population-specific SNP information are used to report SNPs present in the primer sequences. Population-specific SNP information also helps customize primer design for a specific population. Furthermore, RExPrimer offers a user-friendly graphical interface that uses scalable vector graphics to present the resulting primers intuitively along with the corresponding gene structure. In this study, we demonstrated the program's effectiveness in generating primers for strongly homologous sequences. Conclusion The improvements for primer design incorporated into RExPrimer were demonstrated to be effective in designing primers for challenging PCR experiments. Integration of SNP and structural variation databases allows for robust primer design for a variety of PCR applications, irrespective of the sequence complexity in the region of interest. This software is freely available at http://www4a.biotec.or.th/rexprimer. PMID:19958502
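
    The 3' SNP-in-Primer check lends itself to a compact sketch. The coordinates, SNP positions, and the five-base 3'-end window below are illustrative, not RExPrimer's actual rules:

        def snps_in_primer(primer_start, primer_end, snp_positions,
                           three_prime_window=5, plus_strand=True):
            """Return (position, is_critical) for each known SNP inside the primer,
            flagging SNPs near the 3' end, where a mismatch is most likely to
            cause PCR failure."""
            hits = []
            for pos in snp_positions:
                if primer_start <= pos <= primer_end:
                    dist_3p = (primer_end - pos) if plus_strand else (pos - primer_start)
                    hits.append((pos, dist_3p < three_prime_window))
            return hits

        print(snps_in_primer(1000, 1023, [995, 1010, 1022]))
        # [(1010, False), (1022, True)] -- the SNP one base from the 3' end is flagged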

  14. Mapping the impacts of thermoelectric power generation: a global, spatially explicit database

    NASA Astrophysics Data System (ADS)

    Raptis, Catherine; Pfister, Stephan

    2017-04-01

    Thermoelectric power generation is associated with environmental pressures resulting from emissions to air and water, as well as water consumption. The need to achieve global coverage in related studies has become pressing in view of climate change. At the same time, the ability to quantify impacts from power production at high resolution remains pertinent, given their highly regionalized nature, particularly when it comes to water-related impacts. Efforts towards global coverage have increased in recent years, but most work on the impacts of global electricity production presents a coarse geographical differentiation. Over the past few years we have begun a concerted effort to create and make available a global georeferenced inventory of thermoelectric power plant operational characteristics and emissions, by modelling the relevant processes at the highest possible level: that of the generating unit. Our work extends and enhances a commercially available global power plant database, and so far includes: - Georeferencing the generating units and populating the gaps in their steam properties. - Identifying the cooling system for 92% of the global installed thermoelectric power capacity. - Using the completed steam property data, along with local environmental temperature data, to systematically solve the Rankine cycle for each generating unit, involving: i) distinguishing between simple, reheat, and cogenerative cycles, and accounting for particularities in nuclear power cycles; ii) accounting for the effect of different cooling systems (once-through, recirculating (wet tower), dry cooling) on the thermodynamic cycle. One of the direct outcomes of solving the Rankine cycle is the cycle efficiency, an indispensable parameter in any study related to power production, including the quantification of air emissions and water consumption. Another direct output, for those units employing once-through cooling, is the rate of heat rejection to water, which can lead to thermal pollution. The opportunities afforded by the creation of this comprehensive database are numerous, including its use in integrated studies of electricity production and environmental burden, on local or global scales. Quantification, at the highest possible geographical and technological resolution, of all the current impacts caused by thermoelectric power generation is crucial for a proper assessment of the trade-offs in impacts in future scenario studies involving technological changes, and for avoiding burden-shifting. Here, we present the progress made in building the database so far, as well as the results of its application in a worldwide study of the thermal stress of rivers from the heat rejected by power plants using once-through cooling systems.
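
    The link between cycle efficiency and heat rejection is simple energy bookkeeping, sketched below with illustrative numbers (the study derives efficiencies from solved Rankine cycles; the simplification here charges all waste heat to the cooling water):

        def heat_rejected_mw(net_power_mw, cycle_efficiency):
            """Waste heat discharged by the condenser, assuming all of it goes
            to the cooling water (ignores stack and other losses)."""
            heat_input = net_power_mw / cycle_efficiency
            return heat_input - net_power_mw

        # A 500 MW unit at 38% efficiency rejects roughly 816 MW of heat,
        # which for once-through cooling ends up in the river.
        print(round(heat_rejected_mw(500.0, 0.38)))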

  15. Analysis of rockbolt performance at the Waste Isolation Pilot Plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Terrill, L.J.; Francke, C.T.; Saeb, S.

    Rockbolt failures at the Waste Isolation Pilot Plant have been recorded since 1990 and are categorized in terms of mode of failure. The failures are evaluated in terms of physical location of installation within the mine, local excavation geometry and stratigraphy, proximity to other excavations or shafts, and excavation age. The database of failures has revealed discrete areas of the mine containing relatively large numbers of failures. The results of metallurgical analyses and standard rockbolt load testing have generally been in agreement with the in situ evaluations.

  16. A Decade’s Experience With Quality Improvement in Cardiac Surgery Using the Veterans Affairs and Society of Thoracic Surgeons National Databases

    PubMed Central

    Grover, Frederick L.; Shroyer, A. Laurie W.; Hammermeister, Karl; Edwards, Fred H.; Ferguson, T. Bruce; Dziuban, Stanley W.; Cleveland, Joseph C.; Clark, Richard E.; McDonald, Gerald

    2001-01-01

    Objective To review the Department of Veteran Affairs (VA) and the Society of Thoracic Surgeons (STS) national databases over the past 10 years to evaluate their relative similarities and differences, to appraise their use as quality improvement tools, and to assess their potential to facilitate improvements in quality of cardiac surgical care. Summary Background Data The VA developed a mandatory risk-adjusted database in 1987 to monitor outcomes of cardiac surgery at all VA medical centers. In 1989 the STS developed a voluntary risk-adjusted database to help members assess quality and outcomes in their individual programs and to facilitate improvements in quality of care. Methods A short data form on every veteran operated on at each VA medical center is completed and transmitted electronically for analysis of unadjusted and risk-adjusted death and complications, as well as length of stay. Masked, confidential semiannual reports are then distributed to each program’s clinical team and the associated administrator. These reports are also reviewed by a national quality oversight committee. Thus, VA data are used both locally for quality improvement and at the national level for quality surveillance. The STS dataset (217 core fields and 255 extended fields) is transmitted for each patient semiannually to the Duke Clinical Research Institute (DCRI) for warehousing, analysis, and distribution. Site-specific reports are produced with regional and national aggregate comparisons for unadjusted and adjusted surgical deaths and complications, as well as length of stay for coronary artery bypass grafting (CABG), valvular procedures, and valvular/CABG procedures. Both databases use the logistic regression modeling approach. Data for key processes of care are also captured in both databases. Research projects are frequently carried out using each database. Results More than 74,000 and 1.6 million cardiac surgical patients have been entered into the VA and STS databases, respectively. Risk factors that predict surgical death for CABG are very similar in the two databases, as are the odds ratios for most of the risk factors. One major difference is that the VA population is 99% male, whereas the STS population is 71% male. Both databases have shown a significant reduction in the risk-adjusted surgical death rate during the past decade despite the fact that patients have presented with an increased risk factor profile. The ratio of observed to expected deaths decreased from 1.05 to 0.9 for the VA and from 1.5 to 0.9 for the STS. Conclusion It appears that the routine feedback of risk-adjusted data on local performance provided by these programs heightens awareness and leads to self-examination and self-assessment, which in turn improves quality and outcomes. This general quality improvement template should be considered for application in other settings beyond cardiac surgery. PMID:11573040
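
    The observed-to-expected ratio quoted above follows from the logistic-regression risk adjustment both databases use; a minimal sketch with invented patient data:

        # Per-patient predicted death probabilities from the risk model,
        # and observed outcomes (1 = surgical death); all values are invented.
        predicted_risk = [0.02, 0.10, 0.05, 0.30, 0.08]
        observed_death = [0, 0, 0, 1, 0]

        expected = sum(predicted_risk)       # expected deaths for this case mix
        observed = sum(observed_death)
        print(f"O/E = {observed}/{expected:.2f} = {observed / expected:.2f}")
        # A ratio below 1.0 means fewer deaths than the case mix predicts.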

  17. A survey of commercial object-oriented database management systems

    NASA Technical Reports Server (NTRS)

    Atkins, John

    1992-01-01

    The object-oriented data model is the culmination of over thirty years of database research. Initially, database research focused on the need to provide information in a consistent and efficient manner to the business community. Early data models such as the hierarchical model and the network model met the goal of consistent and efficient access to data and were substantial improvements over simple file mechanisms for storing and accessing data. However, these models required highly skilled programmers to provide access to the data. Consequently, in the early 1970s E.F. Codd, an IBM research computer scientist, proposed a new data model based on the simple mathematical notion of the relation. This model is known as the Relational Model. In the relational model, data is represented in flat tables (or relations) which have no physical or internal links between them. The simplicity of this model fostered the development of powerful but relatively simple query languages that made data directly accessible to the general database user. Except for large, multi-user database systems, a database professional was in general no longer necessary. Database professionals found that traditional data in the form of character data, dates, and numeric data were easily represented and managed via the relational model. Commercial relational database management systems proliferated and performance of relational databases improved dramatically. However, there was a growing community of potential database users whose needs were not met by the relational model. These users needed to store data with data types not available in the relational model and required a far richer modelling environment than the relational model provided. Indeed, the complexity of the objects to be represented in the model mandated a new approach to database technology. The Object-Oriented Model was the result.
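
    The relational idea the survey describes (flat tables with no physical links, related only by matching values and joined declaratively at query time) fits in a few lines, here with SQLite and an invented two-table schema:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
            CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
            INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace');
            INSERT INTO orders VALUES (10, 1, 25.0), (11, 1, 40.0), (12, 2, 15.0);
        """)
        # The join exists only in the query, not in the storage structure.
        for name, spent in conn.execute("""
                SELECT c.name, SUM(o.total) FROM customers c
                JOIN orders o ON o.customer_id = c.id GROUP BY c.name"""):
            print(name, spent)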

  18. CanvasDB: a local database infrastructure for analysis of targeted- and whole genome re-sequencing projects

    PubMed Central

    Ameur, Adam; Bunikis, Ignas; Enroth, Stefan; Gyllensten, Ulf

    2014-01-01

    CanvasDB is an infrastructure for management and analysis of genetic variants from massively parallel sequencing (MPS) projects. The system stores SNP and indel calls in a local database, designed to handle very large datasets, to allow for rapid analysis using simple commands in R. Functional annotations are included in the system, making it suitable for direct identification of disease-causing mutations in human exome- (WES) or whole-genome sequencing (WGS) projects. The system has a built-in filtering function implemented to simultaneously take into account variant calls from all individual samples. This enables advanced comparative analysis of variant distribution between groups of samples, including detection of candidate causative mutations within family structures and genome-wide association by sequencing. In most cases, these analyses are executed within just a matter of seconds, even when there are several hundreds of samples and millions of variants in the database. We demonstrate the scalability of canvasDB by importing the individual variant calls from all 1092 individuals present in the 1000 Genomes Project into the system, over 4.4 billion SNPs and indels in total. Our results show that canvasDB makes it possible to perform advanced analyses of large-scale WGS projects on a local server. Database URL: https://github.com/UppsalaGenomeCenter/CanvasDB PMID:25281234
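
    CanvasDB exposes its filtering through R commands; the sketch below shows the same group-wise idea in a generic SQL engine, with an invented two-column call table: keep variants seen in every affected sample and in no control sample.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE variant_calls (variant_id TEXT, sample TEXT);
            INSERT INTO variant_calls VALUES
              ('chr1:1000A>T', 'aff1'), ('chr1:1000A>T', 'aff2'),
              ('chr2:500G>C', 'aff1'),  ('chr2:500G>C', 'ctl1');
        """)
        rows = conn.execute("""
            SELECT variant_id FROM variant_calls
            GROUP BY variant_id
            HAVING SUM(sample IN ('aff1', 'aff2')) = 2   -- in both affected samples
               AND SUM(sample IN ('ctl1', 'ctl2')) = 0   -- in no control sample
        """).fetchall()
        print(rows)  # [('chr1:1000A>T',)]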

  19. Molecular Genetics Information System (MOLGENIS): alternatives in developing local experimental genomics databases.

    PubMed

    Swertz, Morris A; De Brock, E O; Van Hijum, Sacha A F T; De Jong, Anne; Buist, Girbe; Baerends, Richard J S; Kok, Jan; Kuipers, Oscar P; Jansen, Ritsert C

    2004-09-01

    Genomic research laboratories need adequate infrastructure to support management of their data production and research workflow. But what makes infrastructure adequate? A lack of appropriate criteria makes any decision on buying or developing a system difficult. Here, we report on the decision process for the case of a molecular genetics group establishing a microarray laboratory. Five typical requirements for experimental genomics database systems were identified: (i) the ability to evolve with the fast-developing genomics field; (ii) a suitable data model to deal with local diversity; (iii) suitable storage of data files in the system; (iv) easy exchange with other software; and (v) low maintenance costs. The computer scientists and the researchers of the local microarray laboratory considered alternative solutions for these five requirements and chose the following options: (i) use of automatic code generation; (ii) a customized data model based on standards; (iii) storage of datasets as black boxes instead of decomposing them into database tables; (iv) loose linking to other programs for improved flexibility; and (v) a low-maintenance web-based user interface. Our team evaluated existing microarray databases and then decided to build a new system, the Molecular Genetics Information System (MOLGENIS), implemented using code generation in three months. This case can provide valuable insights and lessons to both software developers and user communities embarking on large-scale genomic projects. http://www.molgenis.nl
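
    Automatic code generation, the first design choice, can be shown in miniature. This toy generator (not MOLGENIS itself; the schema and names are invented) emits a Python class from a data-model description, so a model change simply regenerates the code:

        schema = {"Microarray": {"array_id": "str", "probe_count": "int"}}

        def generate_class(name, fields):
            lines = ["import dataclasses", "", "@dataclasses.dataclass", f"class {name}:"]
            lines += [f"    {field}: {ftype}" for field, ftype in fields.items()]
            return "\n".join(lines)

        source = generate_class("Microarray", schema["Microarray"])
        namespace = {}
        exec(source, namespace)  # compile the generated code, as a build step would
        print(namespace["Microarray"]("A-001", 44000))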

  20. CanvasDB: a local database infrastructure for analysis of targeted- and whole genome re-sequencing projects.

    PubMed

    Ameur, Adam; Bunikis, Ignas; Enroth, Stefan; Gyllensten, Ulf

    2014-01-01

    CanvasDB is an infrastructure for management and analysis of genetic variants from massively parallel sequencing (MPS) projects. The system stores SNP and indel calls in a local database, designed to handle very large datasets, to allow for rapid analysis using simple commands in R. Functional annotations are included in the system, making it suitable for direct identification of disease-causing mutations in human exome- (WES) or whole-genome sequencing (WGS) projects. The system has a built-in filtering function implemented to simultaneously take into account variant calls from all individual samples. This enables advanced comparative analysis of variant distribution between groups of samples, including detection of candidate causative mutations within family structures and genome-wide association by sequencing. In most cases, these analyses are executed within just a matter of seconds, even when there are several hundreds of samples and millions of variants in the database. We demonstrate the scalability of canvasDB by importing the individual variant calls from all 1092 individuals present in the 1000 Genomes Project into the system, over 4.4 billion SNPs and indels in total. Our results show that canvasDB makes it possible to perform advanced analyses of large-scale WGS projects on a local server. Database URL: https://github.com/UppsalaGenomeCenter/CanvasDB. © The Author(s) 2014. Published by Oxford University Press.

  1. The StreamCat Dataset: Accumulated Attributes for NHDPlusV2 (Version 2.1) Catchments Riparian Buffer for the Conterminous United States: 2010 US Census Housing Unit and Population Density

    EPA Pesticide Factsheets

    This dataset represents the population and housing-unit density within individual, local NHDPlusV2 catchments and their upstream, contributing watersheds' riparian buffers, based on 2010 US Census data. Densities are calculated for every block group, and watershed averages are calculated for every local NHDPlusV2 catchment (see Data Sources for links to NHDPlusV2 data and Census data). This dataset is derived from the TIGER/Line files and related database (.dbf) files for the conterminous USA, downloaded as Block Group-Level Census 2010 SF1 Data in File Geodatabase Format (ArcGIS version 10.0). The landscape raster (LR) was produced from data compiled from the questions asked of all people and about every housing unit. The ratios (block-group population / block-group area) and (block-group housing units / block-group area) were summarized by local catchment and by watershed to produce local catchment-level and watershed-level metrics as a continuous data type (see Data Structure and Attribute Information for a description).
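
    The watershed summarization reduces to an area-weighted mean; a sketch with invented block-group numbers:

        # (area of block group inside the catchment in km2, people per km2)
        block_groups = [(2.0, 150.0), (0.5, 900.0), (1.5, 40.0)]

        total_area = sum(area for area, _ in block_groups)
        mean_density = sum(area * density for area, density in block_groups) / total_area
        print(f"{mean_density:.1f} people/km2")  # 202.5, the area-weighted catchment mean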

  2. Clothing resultant thermal insulation determined on a movable thermal manikin. Part I: effects of wind and body movement on total insulation.

    PubMed

    Lu, Yehu; Wang, Faming; Wan, Xianfu; Song, Guowen; Shi, Wen; Zhang, Chengjiao

    2015-10-01

    In this serial study, 486 thermal manikin tests were carried out to examine the effects of air velocity and walking speed on both total and local clothing thermal insulation. Seventeen clothing ensembles with different layers (i.e., one, two, or three layers) were selected for the study. Three wind speeds (0.15, 1.55, 4.0 m/s) and three walking speeds (0, 0.75, 1.2 m/s) were chosen, giving nine testing conditions in total. The total clothing insulation and the local clothing insulation at different body parts were determined under those nine conditions. In part I, empirical equations for estimating total resultant clothing insulation as a function of the static thermal insulation, relative air velocity, and walking speed were developed. In part II, the local thermal insulation of various garments was analyzed and correction equations for the local resultant insulation of each body part were developed. This study provides a critical database for potential applications in thermal comfort research, modeling of human thermal strain, and functional clothing design and engineering.
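
    Part I's equations have the general shape sketched below: resultant insulation equals static insulation times an exponential reduction driven by relative air velocity and walking speed. The coefficients here are placeholders for illustration, not the values fitted in the study:

        import math

        A, B, C, D = -0.28, 0.04, -0.49, 0.18  # placeholder coefficients only

        def resultant_insulation(i_static_clo, v_air, v_walk):
            """Resultant total insulation (clo) from static insulation, relative
            air velocity v_air (m/s), and walking speed v_walk (m/s)."""
            x = v_air - 0.15  # 0.15 m/s is the still-air reference condition
            return i_static_clo * math.exp(A * x + B * x**2 + C * v_walk + D * v_walk**2)

        print(resultant_insulation(1.5, 0.15, 0.0))  # still air, standing: ~1.5 clo
        print(resultant_insulation(1.5, 4.0, 1.2))   # windy, walking: much lower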

  3. Relations between perceptual and conceptual scope: how global versus local processing fits a focus on similarity versus dissimilarity.

    PubMed

    Förster, Jens

    2009-02-01

    Nine studies showed a bidirectional link (a) between a global processing style and generation of similarities and (b) between a local processing style and generation of dissimilarities. In Experiments 1-4, participants were primed with global versus local perception styles and then asked to work on an allegedly unrelated generation task. Across materials, participants generated more similarities than dissimilarities after global priming, whereas for participants with local priming, the opposite was true. Experiments 5-6 demonstrated a bidirectional link whereby participants who were first instructed to search for similarities attended more to the gestalt of a stimulus than to its details, whereas the reverse was true for those who were initially instructed to search for dissimilarities. Because important psychological variables are correlated with processing styles, in Experiments 7-9, temporal distance, a promotion focus, and high power were predicted and shown to enhance the search for similarities, whereas temporal proximity, a prevention focus, and low power enhanced the search for dissimilarities. (PsycINFO Database Record (c) 2009 APA, all rights reserved).

  4. Explorations of Public Participation Approach to the Framing of Resilient Urbanism

    NASA Astrophysics Data System (ADS)

    Liu, Wei-Kuang; Liu, Li-Wei; Shiu, Yi-Shiang; Shen, Yang-Ting; Lin, Feng-Cheng; Hsieh, Hua-Hsuan

    2017-08-01

    Under the framework of developing resilient and livable cities, this study was aimed at engaging local communities to achieve the goal of public participation. Given the prevalence of smart mobile devices, an interactive app called “Citizen Probe” was designed to guide users to participate in building resilient and livable urban spaces by enabling them to report the condition of their living environment. The app collects feedback from users regarding the perceived condition of the urban environment, and this information is used to further develop an open online index system. The index system serves as a guide for the public to actively transform their city into a resilient and livable urban environment. The app was designed for the reporting of flood incidents with the objective of resilient disaster prevention, which can be achieved by enabling users to identify disaster conditions in order to develop a database of basic disaster information. The database can be used in the prevention and mitigation of disasters and to provide a foundation for developing indices for assessing the resilience and livability of urban areas. Three communities in Taichung, Taiwan, participated in the study. Residents of these communities were requested to use the app and identify local environmental conditions to obtain spatial data according to four stages in disaster response: assessment, readiness, response, and recovery. A volunteered geographic information database was developed to display maps providing users with current reports of predisaster risk assessment, disaster response capacity, real-time disaster conditions, and overall disaster recovery. In addition, the database can be used as a useful tool for researchers to conduct GIS analyses and initiate related discussions. The interactive app raises public awareness of disaster prevention and makes disaster prevention a daily norm. Further discussion between the public and experts will be initiated to assist in policy management pertaining to the ongoing development of cities, in addition to improving disaster prevention and response measures.

  5. Scale-up of networked HIV treatment in Nigeria: creation of an integrated electronic medical records system.

    PubMed

    Chaplin, Beth; Meloni, Seema; Eisen, Geoffrey; Jolayemi, Toyin; Banigbe, Bolanle; Adeola, Juliette; Wen, Craig; Reyes Nieva, Harry; Chang, Charlotte; Okonkwo, Prosper; Kanki, Phyllis

    2015-01-01

    The implementation of PEPFAR programs in resource-limited settings was accompanied by the need to document patient care on a scale unprecedented in environments where paper-based records were the norm. We describe the development of an electronic medical records system (EMRS) put in place at the beginning of a large HIV/AIDS care and treatment program in Nigeria. Databases were created to record laboratory results, medications prescribed and dispensed, and clinical assessments, using a relational database program. A collection of stand-alone files recorded different elements of patient care, linked together by utilities that aggregated data on national standard indicators, assessed patient care for quality improvement, tracked patients requiring follow-up, generated counts of ART regimens dispensed, and provided 'snapshots' of a patient's response to treatment. A secure server was used to store patient files for backup and transfer. By February 2012, when the program transitioned to local in-country management by APIN, the EMRS was used in 33 hospitals across the country; by then, 4,947,433 adult, pediatric, and PMTCT records had been created and remained available for use in patient care. Ongoing trainings for data managers, along with an iterative process of implementing changes to the databases and forms based on user feedback, were needed. As the program scaled up and the volume of laboratory tests increased, results were produced, wherever possible, in a digital format that could be automatically transferred to the EMRS. Many larger clinics began to link some or all of the databases to local area networks, making them available to a larger group of staff members, or providing the ability to enter information simultaneously where needed. The EMRS improved patient care, enabled efficient reporting to the Government of Nigeria and to U.S. funding agencies, and allowed program managers and staff to conduct quality control audits. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  6. Landscape features, standards, and semantics in U.S. national topographic mapping databases

    USGS Publications Warehouse

    Varanka, Dalia

    2009-01-01

    The objective of this paper is to examine the contrast between local, field-surveyed topographical representation and feature representation in digital, centralized databases, and to clarify their ontological implications. The semantics of these two approaches are contrasted by examining the categorization of features by subject domains inherent to national topographic mapping. A comparison of five USGS topographic mapping domain and feature lists indicates that multiple semantic meanings and ontology rules were applied to the initial digital database but were lost as databases became more centralized at national scales, and common semantics were replaced by technological terms.

  7. Utilization of a multimedia PACS workstation for surgical planning of epilepsy

    NASA Astrophysics Data System (ADS)

    Soo Hoo, Kent; Wong, Stephen T.; Hawkins, Randall A.; Knowlton, Robert C.; Laxer, Kenneth D.; Rowley, Howard A.

    1997-05-01

    Surgical treatment of temporal lobe epilepsy requires the localization of the epileptogenic zone for surgical resection. Currently, clinicians utilize electroencephalography, various neuroimaging modalities, and psychological tests together to determine the location of this zone. We investigate how a multimedia neuroimaging workstation built on top of the UCSF Picture Archiving and Communication System can be used to aid surgical planning of epilepsy and related brain diseases. This usage demonstrates the ability of the workstation to retrieve image and textual data from PACS and other image sources, register multimodality images, visualize and render 3D data sets, analyze images, generate new image and text data from the analysis, and organize all data in a relational database management system.

  8. Technical Aspects of Interfacing MUMPS to an External SQL Relational Database Management System

    PubMed Central

    Kuzmak, Peter M.; Walters, Richard F.; Penrod, Gail

    1988-01-01

    This paper describes an interface connecting InterSystems MUMPS (M/VX) to an external relational DBMS, the SYBASE Database Management System. The interface enables MUMPS to operate in a relational environment and gives the MUMPS language full access to a complete set of SQL commands. MUMPS generates SQL statements as ASCII text and sends them to the RDBMS. The RDBMS executes the statements and returns ASCII results to MUMPS. The interface suggests that the language features of MUMPS make it an attractive tool for use in the relational database environment. The approach described in this paper separates MUMPS from the relational database. Positioning the relational database outside of MUMPS promotes data sharing and permits a number of different options to be used for working with the data. Other languages like C, FORTRAN, and COBOL can access the RDBMS database. Advanced tools provided by the relational database vendor can also be used. SYBASE is an advanced high-performance transaction-oriented relational database management system for the VAX/VMS and UNIX operating systems. SYBASE is designed using a distributed open-systems architecture, and is relatively easy to interface with MUMPS.
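
    The contract the paper describes (SQL out as plain text, results back as ASCII) is easy to mimic. In the sketch below Python stands in for the MUMPS client and SQLite for SYBASE, so every name is a stand-in:

        import sqlite3

        def execute_ascii(sql_text):
            """Accept an SQL statement as plain text; return rows as
            tab-delimited ASCII lines, as the MUMPS side would receive them."""
            conn = sqlite3.connect(":memory:")
            conn.execute("CREATE TABLE patients (id INTEGER, name TEXT)")
            conn.execute("INSERT INTO patients VALUES (1, 'DOE,JOHN')")
            rows = conn.execute(sql_text).fetchall()
            return "\n".join("\t".join(str(col) for col in row) for row in rows)

        # The client side sees only strings in and strings out.
        print(execute_ascii("SELECT id, name FROM patients"))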

  9. Electronic Resources in a Next-Generation Catalog: The Case of WorldCat Local

    ERIC Educational Resources Information Center

    Shadle, Steve

    2009-01-01

    In April 2007, the University of Washington Libraries debuted WorldCat Local (WCL), a localized version of the WorldCat database that interoperates with a library's integrated library system and fulfillment services to provide a single-search interface for a library's physical and electronic content. This brief will describe how WCL incorporates a…

  10. ClubSub-P: Cluster-Based Subcellular Localization Prediction for Gram-Negative Bacteria and Archaea

    PubMed Central

    Paramasivam, Nagarajan; Linke, Dirk

    2011-01-01

    The subcellular localization (SCL) of proteins provides important clues to their function in a cell. In our efforts to predict useful vaccine targets against Gram-negative bacteria, we noticed that misannotated start codons frequently lead to wrongly assigned SCLs. This and other problems in SCL prediction, such as the relatively high false-positive and false-negative rates of some tools, can be avoided by applying multiple prediction tools to groups of homologous proteins. Here we present ClubSub-P, an online database that combines existing SCL prediction tools into a consensus pipeline from more than 600 proteomes of fully sequenced microorganisms. On top of the consensus prediction at the level of single sequences, the tool uses clusters of homologous proteins from Gram-negative bacteria and from Archaea to eliminate false-positive and false-negative predictions. ClubSub-P can assign the SCL of proteins from Gram-negative bacteria and Archaea with high precision. The database is searchable, and can easily be expanded using either new bacterial genomes or new prediction tools as they become available. This will further improve the performance of the SCL prediction, as well as the detection of misannotated start codons and other annotation errors. ClubSub-P is available online at http://toolkit.tuebingen.mpg.de/clubsubp/ PMID:22073040
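
    The consensus step can be sketched as a vote over the predictions for a cluster of homologs (the proteins, tools, and labels below are invented, and ClubSub-P's actual scoring is more involved):

        from collections import Counter

        cluster_predictions = {               # per-protein calls from three SCL tools
            "protA": ["outer membrane", "outer membrane", "periplasm"],
            "protB": ["outer membrane", "outer membrane", "outer membrane"],
            "protC": ["cytoplasm", "outer membrane", "outer membrane"],
        }
        votes = Counter(label for preds in cluster_predictions.values() for label in preds)
        consensus, count = votes.most_common(1)[0]
        print(consensus, f"({count}/{sum(votes.values())} votes)")  # outer membrane (7/9)
        # A single outlier call (e.g. from a misannotated start codon) is outvoted.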

  11. Evolutionary tools for phytosanitary risk analysis: phylogenetic signal as a predictor of host range of plant pests and pathogens

    PubMed Central

    Gilbert, Gregory S; Magarey, Roger; Suiter, Karl; Webb, Campbell O

    2012-01-01

    Assessing risk from a novel pest or pathogen requires knowing which local plant species are susceptible. Empirical data on the local host range of novel pests are usually lacking, but we know that some pests are more likely to attack closely related plant species than species separated by greater evolutionary distance. We use the Global Pest and Disease Database, an internal database maintained by the United States Department of Agriculture Animal and Plant Health Inspection Service – Plant Protection and Quarantine Division (USDA APHIS-PPQ), to evaluate the strength of the phylogenetic signal in host range for nine major groups of plant pests and pathogens. Eight of nine groups showed significant phylogenetic signal in host range. Additionally, pests and pathogens with more known hosts attacked a phylogenetically broader range of hosts. This suggests that easily obtained data – the number of known hosts and the phylogenetic distance between known hosts and other species of interest – can be used to predict which plant species are likely to be susceptible to a particular pest. This can facilitate rapid assessment of risk from novel pests and pathogens when empirical host range data are not yet available and guide efficient collection of empirical data for risk evaluation. PMID:23346231
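
    The predictive rule is simple enough to sketch: a candidate plant is at higher risk the smaller its phylogenetic distance to the pest's nearest known host (all species and distances below are invented):

        distances = {   # phylogenetic distance between plant pairs (arbitrary units)
            ("oak", "chestnut"): 60.0,   ("oak", "maize"): 300.0,
            ("pine", "chestnut"): 320.0, ("pine", "maize"): 340.0,
        }
        known_hosts = ["chestnut", "maize"]

        def nearest_host_distance(candidate):
            """Smaller distance to the nearest known host = higher predicted risk."""
            return min(distances[(candidate, host)] for host in known_hosts)

        for plant in ("oak", "pine"):
            print(plant, nearest_host_distance(plant))  # oak is the likelier new host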

  12. Preparing and Analyzing Expressed Sequence Tags (ESTs) Library for the Mammary Tissue of Local Turkish Kivircik Sheep

    PubMed Central

    Omeroglu Ulu, Zehra; Ulu, Salih; Un, Cemal; Ozdem Oztabak, Kemal; Altunatmaz, Kemal

    2017-01-01

    The Kivircik is an important local Turkish sheep breed, noted for its meat quality and milk productivity. The aim of this study was to analyze gene expression profiles of the Kivircik sheep at both prenatal and postnatal stages. Two different cDNA libraries were therefore constructed from mammary gland tissue of the same Kivircik sheep at the prenatal and postnatal stages. A total of 3072 colonies randomly selected from the two libraries were sequenced to develop a sheep EST collection. The Phred/Phrap programs were used to analyze the raw ESTs, and readable EST sequences were assembled with the CAP3 software. Putative functions of all unique sequences were determined, and statistical analyses performed, with the Geneious software. A total of 422 ESTs had over 80% similarity to known sequences of other organisms in NCBI and were classified into Gene Ontology (GO) categories with the Panther database. By comparing gene expression profiles, we observed some putative genes that may be related to reproductive performance or play important roles in milk synthesis and secretion. A total of 2414 ESTs have been deposited in the NCBI GenBank database (GW996847–GW999260). The EST data in this study provide a new source of information for functional genome studies of sheep. PMID:28239610

  13. Evolutionary tools for phytosanitary risk analysis: phylogenetic signal as a predictor of host range of plant pests and pathogens.

    PubMed

    Gilbert, Gregory S; Magarey, Roger; Suiter, Karl; Webb, Campbell O

    2012-12-01

    Assessing risk from a novel pest or pathogen requires knowing which local plant species are susceptible. Empirical data on the local host range of novel pests are usually lacking, but we know that some pests are more likely to attack closely related plant species than species separated by greater evolutionary distance. We use the Global Pest and Disease Database, an internal database maintained by the United States Department of Agriculture Animal and Plant Health Inspection Service - Plant Protection and Quarantine Division (USDA APHIS-PPQ), to evaluate the strength of the phylogenetic signal in host range for nine major groups of plant pests and pathogens. Eight of nine groups showed significant phylogenetic signal in host range. Additionally, pests and pathogens with more known hosts attacked a phylogenetically broader range of hosts. This suggests that easily obtained data - the number of known hosts and the phylogenetic distance between known hosts and other species of interest - can be used to predict which plant species are likely to be susceptible to a particular pest. This can facilitate rapid assessment of risk from novel pests and pathogens when empirical host range data are not yet available and guide efficient collection of empirical data for risk evaluation.

  14. Development of structural vulnerability curve associated with high magnitude torrent occurrences in Switzerland

    NASA Astrophysics Data System (ADS)

    Wing-Yuen Chow, Candace; Bründl, Michael; Keiler, Margreth

    2017-04-01

    In mountain regions, economic losses have increased significantly in recent decades due to severe hazard processes, in spite of notable investments in hazard management. Assessing the vulnerability of built structures to high-magnitude torrent events is a part of consequence analysis, in which hazard intensity is related to the degree of loss sustained. While vulnerability curves have been developed for different countries, the presented work contributes new data from Swiss case studies that address a known gap associated with the consequences of high-magnitude events. Data for this stage of the investigation describe the degree of loss sustained by affected structures and were provided by local authorities dealing with natural hazards (e.g. the Amt für Wald des Kantons Bern (KAWA) and cantonal insurance providers). Information used for the empirical quantification of vulnerability to torrent processes is derived from detailed post-event documentation and the loss database, and is verified with field visits. Building the initial database supports data sharing and the systematic inclusion of additional case studies as they become available. The collection of this new data is fundamental to the development of a local vulnerability curve based on observed sediment deposition heights, a proxy for hazard intensity. The result will then be compared to curves derived from Austrian and Italian datasets.
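
    Developing the curve itself amounts to fitting degree of loss against deposition height; a sketch with a bounded sigmoid and invented observations (not the Swiss case-study data):

        import numpy as np
        from scipy.optimize import curve_fit

        heights = np.array([0.2, 0.5, 0.8, 1.2, 1.8, 2.5])       # deposition height, m
        losses = np.array([0.05, 0.15, 0.30, 0.55, 0.80, 0.95])  # degree of loss, 0-1

        def vulnerability(h, k, h0):
            """Sigmoid bounded on [0, 1]: k is steepness, h0 the midpoint height."""
            return 1.0 / (1.0 + np.exp(-k * (h - h0)))

        (k, h0), _ = curve_fit(vulnerability, heights, losses, p0=[2.0, 1.0])
        print(f"k={k:.2f}, h0={h0:.2f} m")
        print(f"predicted loss at 1.5 m deposition: {vulnerability(1.5, k, h0):.2f}")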

  15. Using distant supervised learning to identify protein subcellular localizations from full-text scientific articles.

    PubMed

    Zheng, Wu; Blake, Catherine

    2015-10-01

    Databases of curated biomedical knowledge, such as the protein-locations reflected in the UniProtKB database, provide an accurate and useful resource to researchers and decision makers. Our goal is to augment the manual efforts currently used to curate knowledge bases with automated approaches that leverage the increased availability of full-text scientific articles. This paper describes experiments that use distant supervised learning to identify protein subcellular localizations, which are important to understand protein function and to identify candidate drug targets. Experiments consider Swiss-Prot, the manually annotated subset of the UniProtKB protein knowledge base, and 43,000 full-text articles from the Journal of Biological Chemistry that contain just under 11.5 million sentences. The system achieves 0.81 precision and 0.49 recall at sentence level and an accuracy of 57% on held-out instances in a test set. Moreover, the approach identifies 8210 instances that are not in the UniProtKB knowledge base. Manual inspection of the 50 most likely relations showed that 41 (82%) were valid. These results have immediate benefit to researchers interested in protein function, and suggest that distant supervision should be explored to complement other manual data curation efforts. Copyright © 2015 Elsevier Inc. All rights reserved.
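
    Distant supervision pairs the knowledge base with raw text to label training data automatically; a minimal sketch with invented sentences and a two-entry stand-in for Swiss-Prot:

        knowledge_base = {("p53", "nucleus"), ("actin", "cytoplasm")}

        sentences = [
            "We observed p53 accumulation in the nucleus after treatment.",
            "Levels of p53 increased in the cytoplasm under stress.",
            "Actin filaments in the cytoplasm were stained.",
        ]

        def label(sentence):
            """A sentence mentioning a protein and its curated location becomes
            a positive training example, with no manual annotation."""
            text = sentence.lower()
            for protein, location in knowledge_base:
                if protein in text and location in text:
                    return (protein, location, "positive")
            return None

        for s in sentences:
            print(label(s), "<-", s)
        # The second sentence stays unlabeled: (p53, cytoplasm) is not in the
        # knowledge base, the noise/coverage trade-off distant supervision accepts.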

  16. Database architectures for Space Telescope Science Institute

    NASA Astrophysics Data System (ADS)

    Lubow, Stephen

    1993-08-01

    At STScI nearly all large applications require database support. A general-purpose architecture that relies upon an extended client-server paradigm has been developed and is in use. Processing is distributed across three processes, each of which typically resides on its own processor. Database queries are evaluated on one such process, called the DBMS server. The DBMS server software is provided by a database vendor. The application issues database queries and is called the application client. This client uses a set of generic DBMS application programming calls through our STDB/NET programming interface. Intermediate between the application client and the DBMS server is the STDB/NET server. This server accepts generic query requests from the application and converts them into the specific requirements of the DBMS server. In addition, it accepts query results from the DBMS server and passes them back to the application. Typically the STDB/NET server is local to the DBMS server, while the application client may be remote. The STDB/NET server provides additional capabilities such as database deadlock restart and performance monitoring. This architecture is currently in use for some major STScI applications, including the ground support system. We are currently investigating means of providing ad hoc query support to users through the above architecture. Such support is critical for providing flexible user interface capabilities. The Universal Relation advocated by Ullman, Kernighan, and others appears to be promising. In this approach, the user sees the entire database as a single table, thereby freeing the user from needing to understand the detailed schema. A software layer provides the translation between the user and detailed schema views of the database. However, many subtle issues arise in making this transformation. We are currently exploring this scheme for use in the Hubble Space Telescope user interface to the data archive system (DADS).
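
    The three-process split can be sketched as a translation layer between a generic client call and vendor SQL (the class and method names below are invented, not the actual STDB/NET calls):

        class GenericQuery:
            """What the application client constructs, with no vendor specifics."""
            def __init__(self, table, columns, where=""):
                self.table, self.columns, self.where = table, columns, where

        class IntermediateServer:
            """Sits between the application client and the vendor DBMS server,
            converting generic requests into the DBMS server's dialect."""
            def __init__(self, dialect):
                self.dialect = dialect

            def translate(self, q):
                sql = f"SELECT {', '.join(q.columns)} FROM {q.table}"
                if q.where:
                    sql += f" WHERE {q.where}"
                return sql  # per-dialect tweaks would be applied here

        server = IntermediateServer(dialect="sybase")
        print(server.translate(GenericQuery("observations", ["id", "target"], "ra > 10")))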

  17. A spatio-temporal index for aerial full waveform laser scanning data

    NASA Astrophysics Data System (ADS)

    Laefer, Debra F.; Vo, Anh-Vu; Bertolotto, Michela

    2018-04-01

    Aerial laser scanning is increasingly available in the full waveform version of the raw signal, which can provide greater insight into and control over the data and, thus, richer information about the scanned scenes. However, when compared to conventional discrete point storage, preserving raw waveforms leads to vastly larger and more complex data volumes. To begin addressing these challenges, this paper introduces a novel bi-level approach for storing and indexing full waveform (FWF) laser scanning data in a relational database environment, while considering both the spatial and the temporal dimensions of that data. In the storage scheme's upper level, the full waveform datasets are partitioned into spatially and temporally coherent groups that are indexed by a two-dimensional R*-tree. To further accelerate intra-block data retrieval, at the lower level a three-dimensional local octree is created for each pulse block. The local octrees are implemented in-memory and can be efficiently written to a database for reuse. The indexing solution enables scalable and efficient three-dimensional (3D) spatial and spatio-temporal queries on the actual pulse data - functionalities not available in other systems. The proposed FWF laser scanning data solution is capable of managing multiple FWF datasets derived from large flight missions. The flight structure is embedded into the data storage model and can be used in query predicates. Such functionality is important to FWF data exploration, since aircraft locations and orientations are frequently required for FWF data analyses. Empirical tests on real datasets of up to 1 billion pulses from Dublin, Ireland, demonstrate the near-perfect scalability of the system. The use of the local 3D octree in the indexing structure accelerated pulse clipping by 1.2-3.5 times for non-axis-aligned (NAA) polyhedron-shaped clipping windows, while axis-aligned (AA) polyhedron clipping was better served using only the top indexing layer. The distinct behaviours of the hybrid indexing for AA and NAA clipping windows are attributable to the different proportions of local-index-related overhead relative to the total querying costs. When temporal constraints were added, the number of costly spatial checks was generally reduced, thereby shortening querying times.
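
    The bi-level pruning logic can be sketched compactly. Below, a linear scan over block extents stands in for the R*-tree and a flat point check for the per-block octree; the block contents are invented:

        blocks = {
            # block_id: ((xmin, ymin, xmax, ymax), (tmin, tmax), [(x, y, z, t), ...])
            "b1": ((0, 0, 10, 10), (0.0, 1.0), [(1, 2, 5, 0.2), (9, 9, 50, 0.9)]),
            "b2": ((10, 0, 20, 10), (1.0, 2.0), [(12, 3, 8, 1.5)]),
        }

        def query(xr, yr, zr, tr):
            (x0, x1), (y0, y1), (z0, z1), (t0, t1) = xr, yr, zr, tr
            hits = []
            for (bx0, by0, bx1, by1), (bt0, bt1), pulses in blocks.values():
                # Upper level: discard blocks whose 2D extent or time span misses.
                if bx1 < x0 or bx0 > x1 or by1 < y0 or by0 > y1 or bt1 < t0 or bt0 > t1:
                    continue
                # Lower level: refine on the individual pulses inside the block.
                hits += [p for p in pulses
                         if x0 <= p[0] <= x1 and y0 <= p[1] <= y1
                         and z0 <= p[2] <= z1 and t0 <= p[3] <= t1]
            return hits

        print(query((0, 15), (0, 5), (0, 10), (0.0, 2.0)))
        # [(1, 2, 5, 0.2), (12, 3, 8, 1.5)] -- the time bounds pruned nothing here,
        # but a tighter t-range would skip whole blocks before any spatial check.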

  18. myPhyloDB: a local web-server and database for the storage and analysis of metagenomics data

    USDA-ARS?s Scientific Manuscript database

    The advent of next-generation sequencing has resulted in an explosion of metagenomics data associated with microbial communities from a variety of ecosystems. However, no database and/or analytical software is currently available that allows for archival and cross-study comparison of such data. my...

  19. Listing of Education in Archaeological Programs: The LEAP Clearinghouse, 1987-1989 Summary Report.

    ERIC Educational Resources Information Center

    Knoll, Patricia C., Ed.

    This catalog incorporates information gathered between 1987 and 1989 for inclusion into the National Park Service's Listing of Education in Archaeological Programs (LEAP) computerized database. This database is a listing of federal, state, local and private projects promoting positive public awareness of U.S. archaeology--prehistoric and historic,…

  20. The Cocoa Shop: A Database Management Case

    ERIC Educational Resources Information Center

    Pratt, Renée M. E.; Smatt, Cindi T.

    2015-01-01

    This is an example of a real-world applicable case study, which includes background information on a small local business (i.e., TCS), description of functional business requirements, and sample data. Students are asked to design and develop a database to improve the management of the company's customers, products, and purchases by emphasizing…

  1. Helping Patrons Find Locally Held Electronic Resources: An Interlibrary Loan Perspective

    ERIC Educational Resources Information Center

    Johnston, Pamela

    2016-01-01

    The University of North Texas Libraries provide extensive online access to academic journals through major vendor databases. As illustrated by interlibrary loan borrowing requests for items held in our databases, patrons often have difficulty navigating the available resources. In this study, the Interlibrary Loan staff used data gathered from the…

  2. The CD-ROM Services of SilverPlatter Information, Inc.

    ERIC Educational Resources Information Center

    Allen, Robert J.

    1985-01-01

    The SilverPlatter system is a complete, stand-alone system, consisting of an IBM (or compatible) personal computer, compact disc with read-only memory (CD-ROM) drive, software, and one or more databases. Large databases (e.g., ERIC, PsycLIT) will soon be available on the system for "local" installation in schools, libraries, and…

  3. Applications of GIS and database technologies to manage a Karst Feature Database

    USGS Publications Warehouse

    Gao, Y.; Tipping, R.G.; Alexander, E.C.

    2006-01-01

    This paper describes the management of a Karst Feature Database (KFD) in Minnesota. Two sets of applications, in GIS and in a database management system (DBMS), have been developed for the KFD of Minnesota. These applications were used to manage the KFD and to enhance its usability. Structured Query Language (SQL) was used to manage database transactions and to support the functionality of the user interfaces. The database administrator (DBA) granted users different access permissions to enhance the security of the database. Database consistency and recovery are ensured by creating data logs and maintaining backups on a regular basis. The working database provides guidelines and management tools for future studies of karst features in Minnesota. The methodology used to design this DBMS is applicable to developing GIS-based databases for analyzing and managing geomorphic and hydrologic datasets at both regional and local scales. The short-term goal of this research is to develop a regional KFD for the Upper Mississippi Valley Karst; the long-term goal is to expand this database to manage and study karst features at national and global scales.

  4. StarView: The object oriented design of the ST DADS user interface

    NASA Technical Reports Server (NTRS)

    Williams, J. D.; Pollizzi, J. A.

    1992-01-01

    StarView is the user interface being developed for the Hubble Space Telescope Data Archive and Distribution Service (ST DADS). ST DADS is the data archive for HST observations and a relational database catalog describing the archived data. Users will use StarView to query the catalog and select appropriate datasets for study. StarView sends requests for archived datasets to ST DADS, which processes the requests and returns the requested datasets to the user. StarView is designed to be a powerful and extensible user interface. Unique features include an internal relational database to navigate query results, a form definition language that will work with both CRT and X interfaces, a data definition language that will allow StarView to work with any relational database, and the ability to generate ad hoc queries without requiring the user to understand the structure of the ST DADS catalog. Ultimately, StarView will allow the user to refine queries in the local database for improved performance and merge in data from external sources for correlation with other query results. The user will be able to create a query from single or multiple forms, merging the selected attributes into a single query. Arbitrary selection of attributes for querying is supported. The user will be able to select how query results are viewed. A standard form or table-row format may be used. Navigation capabilities are provided to aid the user in viewing query results. Object-oriented analysis and design techniques were used in the design of StarView to support the mechanisms and concepts required to implement these features. One such mechanism is the Model-View-Controller (MVC) paradigm. The MVC allows the user to have multiple views of the underlying database, while providing a consistent mechanism for interaction regardless of the view. This approach supports both CRT and X interfaces while providing a common mode of user interaction. Another powerful abstraction is the concept of a Query Model. This concept allows a single query to be built from a single form or multiple forms before it is submitted to ST DADS. Supporting this concept is the ad hoc query generator, which allows the user to select and qualify an arbitrary number of attributes from the database. The user does not need any knowledge of how the joins across various tables are to be resolved. The ad hoc generator calculates the joins automatically and generates the correct SQL query.
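
    The join inference at the heart of the ad hoc generator can be sketched as a shortest-path search over the schema's foreign-key graph (the tables and join clauses below are invented, not the actual ST DADS catalog):

        from collections import deque

        FOREIGN_KEYS = {  # schema graph edges: (table_a, table_b) -> join clause
            ("observation", "target"): "observation.target_id = target.id",
            ("target", "proposal"): "target.proposal_id = proposal.id",
        }
        GRAPH = {}
        for (a, b), clause in FOREIGN_KEYS.items():
            GRAPH.setdefault(a, []).append((b, clause))
            GRAPH.setdefault(b, []).append((a, clause))

        def join_path(start, goal):
            """Breadth-first search for the chain of join clauses linking two
            tables, so the user never has to know how the tables relate."""
            queue, seen = deque([(start, [])]), {start}
            while queue:
                table, clauses = queue.popleft()
                if table == goal:
                    return clauses
                for nxt, clause in GRAPH.get(table, []):
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append((nxt, clauses + [clause]))

        print(join_path("observation", "proposal"))
        # ['observation.target_id = target.id', 'target.proposal_id = proposal.id']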

  5. Updated Palaeotsunami Database for Aotearoa/New Zealand

    NASA Astrophysics Data System (ADS)

    Gadsby, M. R.; Goff, J. R.; King, D. N.; Robbins, J.; Duesing, U.; Franz, T.; Borrero, J. C.; Watkins, A.

    2016-12-01

    The updated configuration, design, and implementation of a national palaeotsunami (pre-historic tsunami) database for Aotearoa/New Zealand (A/NZ) is near completion. This tool enables correlation of events along different stretches of the NZ coastline, provides information on frequency and extent of local, regional and distant-source tsunamis, and delivers detailed information on the science and proxies used to identify the deposits. In A/NZ a plethora of data, scientific research and experience surrounds palaeotsunami deposits, but much of this information has been difficult to locate, has variable reporting standards, and lacked quality assurance. The original database was created by Professor James Goff while working at the National Institute of Water & Atmospheric Research in A/NZ, but has subsequently been updated during his tenure at the University of New South Wales. The updating and establishment of the national database was funded by the Ministry of Civil Defence and Emergency Management (MCDEM), led by Environment Canterbury Regional Council, and supported by all 16 regions of A/NZ's local government. Creation of a single database has consolidated a wide range of published and unpublished research contributions from many science providers on palaeotsunamis in A/NZ. The information is now easily accessible and quality assured and allows examination of frequency, extent and correlation of events. This provides authoritative scientific support for coastal-marine planning and risk management. The database will complement the GNS New Zealand Historical Database, and contributes to a heightened public awareness of tsunami by being a "one-stop-shop" for information on past tsunami impacts. There is scope for this to become an international database, enabling the pacific-wide correlation of large events, as well as identifying smaller regional ones. The Australian research community has already expressed an interest, and the database is also compatible with a similar one currently under development in Japan. Expressions of interest in collaborating with the A/NZ team to expand the database are invited from other Pacific nations.

  6. Documenting the global impacts of beach sand mining

    NASA Astrophysics Data System (ADS)

    Young, R.; Griffith, A.

    2009-04-01

    For centuries, beach sand has been mined for use as aggregate in concrete, for heavy minerals, and for construction fill. The global extent and impact of this phenomenon has gone relatively unnoticed by academics, NGOs, and major news sources. Most reports of sand mining activities are found at the very local scale (if the mining is ever documented at all). Yet, sand mining in many localities has resulted in the complete destruction of beach (and related) ecosystems along with severe impacts to coastal protection and tourism. The Program for the Study of Developed Shorelines at Western Carolina University and Beachcare.org have initiated the construction of a global database of beach sand mining activities. The database is being built through a combination of site visits and through the data mining of media resources, peer reviewed papers, and reports from private and governmental entities. Currently, we have documented sand mining in 35 countries on 6 continents representing the removal of millions of cubic meters of sand. Problems extend from Asia where critical infrastructure has been disrupted by sand mining to the Caribbean where policy reform has swiftly followed a highly publicized theft of sand. The Program for the Study of Developed Shorelines recently observed extensive sand mining in Morocco at the regional scale. Tens of kilometers of beach have been stripped of sand and the mining continues southward reducing hope of a thriving tourism-based economy. Problems caused by beach sand mining include: destruction of natural beaches and the ecosystems they protect (e.g. dunes, wetlands), habitat loss for globally important species (e.g. turtles, shorebirds), destruction of nearshore marine ecosystems, increased shoreline erosion rates, reduced protection from storms, tsunamis, and wave events, and economic losses through tourist abandonment and loss of coastal aesthetics. The threats posed by sand mining are made even more critical given the prospect of a significant rise in global sea level over the coming decades. Most governments recognize the local impacts of sand mining and mining activities are illegal in many localities. However, enforcement of these protections has been problematic and there has been little pressure to stop the practice from local or international environmental groups. In many cases, addressing the issue of sand mining requires addressing the local issues that allow it to persist. This includes poverty, corruption, and unregulated development. In areas where beach sand mining significantly supports the local economy, care needs to be given that local workers are given alternative means of income, and builders are provided an affordable substitute for the sand (e.g. crushed rock). Regardless, it is time for both academics and NGOs to address the cumulative environmental impacts of the direct destruction of the world's beaches through mining activities.

  7. Aerodynamic Optimization of Rocket Control Surface Geometry Using Cartesian Methods and CAD Geometry

    NASA Technical Reports Server (NTRS)

    Nelson, Andrea; Aftosmis, Michael J.; Nemec, Marian; Pulliam, Thomas H.

    2004-01-01

    Aerodynamic design is an iterative process involving geometry manipulation and complex computational analysis subject to physical constraints and aerodynamic objectives. A design cycle consists of first establishing the performance of a baseline design, which is usually created with low-fidelity engineering tools, and then progressively optimizing the design to maximize its performance. Optimization techniques have evolved from relying exclusively on designer intuition and insight in traditional trial-and-error methods to sophisticated local and global search methods. Recent attempts at automating the search through a large design space with formal optimization methods include both database-driven and direct evaluation schemes. Databases are being used in conjunction with surrogate and neural network models as a basis on which to run optimization algorithms. Optimization algorithms are also being driven by the direct evaluation of objectives and constraints using high-fidelity simulations. Surrogate methods use data points obtained from simulations, and possibly gradients evaluated at the data points, to create mathematical approximations of a database. Neural network models work in a similar fashion, using a number of high-fidelity database calculations as training iterations to create a database model. Optimal designs are obtained by coupling an optimization algorithm to the database model. Evaluation of the current best design then either yields a new local optimum or increases the fidelity of the approximation model for the next iteration. Surrogate methods have also been developed that iterate on the selection of data points to decrease the uncertainty of the approximation model prior to searching for an optimal design. The database approximation models for each of these cases, however, become computationally expensive as dimensionality increases. Thus the method of using optimization algorithms to search a database model becomes problematic as the number of design variables grows.
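
    The loop described above is easy to make concrete. The sketch below (Python, a minimal illustration under assumed details: a toy two-variable objective standing in for an expensive simulation, an RBF surrogate standing in for the database model) searches the surrogate, evaluates the true objective at the surrogate optimum, and feeds the result back into the database:

        # Database-driven surrogate optimization, schematic only.
        import numpy as np
        from scipy.interpolate import RBFInterpolator
        from scipy.optimize import minimize

        def high_fidelity(x):
            """Stand-in for an expensive high-fidelity simulation (hypothetical)."""
            return (x[0] - 0.3) ** 2 + (x[1] + 0.1) ** 2 + 0.05 * np.sin(8 * x[0])

        # Database of prior design evaluations: design variables -> objective.
        rng = np.random.default_rng(0)
        X = rng.uniform(-1, 1, size=(40, 2))
        y = np.array([high_fidelity(x) for x in X])

        for it in range(5):
            surrogate = RBFInterpolator(X, y, smoothing=1e-9)   # database model
            res = minimize(lambda x: surrogate(x[None, :])[0],  # cheap search
                           x0=X[np.argmin(y)], bounds=[(-1, 1), (-1, 1)])
            X = np.vstack([X, res.x])                 # evaluate true objective and
            y = np.append(y, high_fidelity(res.x))    # grow the database (fidelity)
        print("best design:", X[np.argmin(y)], "objective:", y.min())

    Each pass either discovers a better local optimum or sharpens the surrogate near the current one, which is exactly the trade-off the abstract describes.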

  8. Progress connecting multi-disciplinary geoscience communities through the VIVO semantic web application

    NASA Astrophysics Data System (ADS)

    Gross, M. B.; Mayernik, M. S.; Rowan, L. R.; Khan, H.; Boler, F. M.; Maull, K. E.; Stott, D.; Williams, S.; Corson-Rikert, J.; Johns, E. M.; Daniels, M. D.; Krafft, D. B.

    2015-12-01

    UNAVCO, UCAR, and Cornell University are working together to leverage semantic web technologies to enable discovery of people, datasets, publications and other research products, as well as the connections between them. The EarthCollab project, an EarthCube Building Block, is enhancing an existing open-source semantic web application, VIVO, to address connectivity gaps across distributed networks of researchers and resources related to the following two geoscience-based communities: (1) the Bering Sea Project, an interdisciplinary field program whose data archive is hosted by NCAR's Earth Observing Laboratory (EOL), and (2) UNAVCO, a geodetic facility and consortium that supports diverse research projects informed by geodesy. People, publications, datasets and grant information have been mapped to an extended version of the VIVO-ISF ontology and ingested into VIVO's database. Data is ingested using a custom set of scripts that include the ability to perform basic automated and curated disambiguation. VIVO can display a page for every object ingested, including connections to other objects in the VIVO database. A dataset page, for example, includes the dataset type, time interval, DOI, related publications, and authors. The dataset type field provides a connection to all other datasets of the same type. The author's page will show, among other information, related datasets and co-authors. Information previously spread across several unconnected databases is now stored in a single location. In addition to VIVO's default display, the new database can also be queried using SPARQL, a query language for semantic data. EarthCollab will also extend the VIVO web application. One such extension is the ability to cross-link separate VIVO instances across institutions, allowing local display of externally curated information. For example, Cornell's VIVO faculty pages will display UNAVCO's dataset information and UNAVCO's VIVO will display Cornell faculty member contact and position information. Additional extensions, including enhanced geospatial capabilities, will be developed following task-centered usability testing.
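
    As a concrete illustration of the kind of access this enables, the following Python sketch runs a SPARQL query over a toy in-memory RDF graph with rdflib; the ex: prefix, property names, and data are illustrative stand-ins, not the actual VIVO-ISF ontology or EarthCollab data:

        from rdflib import Graph

        g = Graph()
        g.parse(data="""
        @prefix ex: <http://example.org/> .
        ex:dataset1 a ex:Dataset ;
            ex:title "GPS velocities, Bering Sea" ;
            ex:hasAuthor ex:person1 .
        ex:person1 ex:name "A. Researcher" .
        """, format="turtle")

        # SPARQL: every dataset, its title, and its authors' names.
        q = """
        PREFIX ex: <http://example.org/>
        SELECT ?title ?name WHERE {
            ?d a ex:Dataset ; ex:title ?title ; ex:hasAuthor ?a .
            ?a ex:name ?name .
        }
        """
        for title, name in g.query(q):
            print(title, "-", name)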

  9. Validation of the European Prototype for Integrated Care at Municipal Level in Savona: Updating and Maintenance

    DTIC Science & Technology

    2001-10-25

    within one of the programmes sponsored by the European Commission. The system mainly consists of a shared care database in which each community facility, or group of facilities, is supported by a local area network (LAN). Each of these LANs is connected over ... functions. The software is layered, so that the client application is not affected by how the servers are implemented or which database system they use

  10. Effort Optimization in Minimizing Food Related Greenhouse Gas Emissions, a look at "Organic" and "Local"

    NASA Astrophysics Data System (ADS)

    Bowen, E.; Martin, P. A.; Eshel, G.

    2008-12-01

    The adverse environmental effects, especially energy use and resultant GHG emissions, of food production and consumption are becoming more widely appreciated and increasingly well documented. Our insights into the thorny problem of how to mitigate some of those effects, however, are far less evolved. Two of the most commonly advocated strategies are "organic" and "local", referring, respectively, to growing food without major inputs of fossil-fuel-based synthetic fertilizers and pesticides, and to consuming food near its agricultural origin. Indeed, both agrochemical manufacture and transportation of produce to market account for a significant percentage of energy use in agriculture. Beyond the unique environmental benefits of each strategy, both "organic" and "local" may yield energy and emissions savings relative to conventionally grown produce. Here, we quantify the potential energy and greenhouse gas emissions savings associated with "organic" and "local". We take note of the energy use and actual GHG costs of the major synthetic fertilizers and of transportation by the various modes routinely employed in agricultural distribution chains, and compare them for ~35 frequently consumed nutritional mainstays. We present new, current, lower-bound energy and greenhouse gas efficiency estimates for these items and compare the energy consumption and GHG emissions incurred in producing those food items to the consumption and emissions resulting from transporting them, considering travel distances ranging from local to continental and transportation modes ranging from (most efficient) rail to (least efficient) air. In performing those calculations, we demonstrate the environmental superiority of either local or organic over conventional foods, and illuminate the complexities involved in entertaining the timely yet currently unanswered, and previously unanswerable, question of "Which is Environmentally Superior, Organic or Local?". More broadly, we put forth a database that amounts to a general blueprint for rigorous comparative evaluation of any competing diets.

  11. Constructing a Database from Multiple 2D Images for Camera Pose Estimation and Robot Localization

    NASA Technical Reports Server (NTRS)

    Wolf, Michael; Ansar, Adnan I.; Brennan, Shane; Clouse, Daniel S.; Padgett, Curtis W.

    2012-01-01

    The LMDB (Landmark Database) Builder software identifies persistent image features (landmarks) in a scene viewed multiple times and precisely estimates the landmarks' 3D world positions. The software receives as input multiple 2D images of approximately the same scene, along with an initial guess of the camera poses for each image, and a table of features matched pair-wise in each frame. LMDB Builder aggregates landmarks across an arbitrarily large collection of frames with matched features. Range data from stereo vision processing can also be passed in to improve the initial guess of the 3D point estimates. The LMDB Builder aggregates feature lists across all frames, manages the process of promoting selected features to landmarks, iteratively calculates the 3D landmark positions using the current camera pose estimates (via an optimal ray projection method), and then improves the camera pose estimates using the 3D landmark positions. Finally, it extracts image patches for each landmark from auto-selected key frames and constructs the landmark database. The landmark database can then be used to estimate future camera poses (and therefore localize a robotic vehicle that may be carrying the cameras) by matching current imagery to landmark database image patches and using the known 3D landmark positions to estimate the current pose.
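
    The "optimal ray projection" step lends itself to a compact least-squares formulation: each camera contributes a ray, and the landmark estimate is the 3D point minimizing the squared distances to all rays. A minimal sketch (synthetic cameras and noiseless rays; the real pipeline also iterates on the poses):

        import numpy as np

        def triangulate(centers, directions):
            """Least-squares point closest to all rays (center + t * direction)."""
            A = np.zeros((3, 3))
            b = np.zeros(3)
            for c, d in zip(centers, directions):
                d = d / np.linalg.norm(d)
                P = np.eye(3) - np.outer(d, d)  # projector onto plane normal to the ray
                A += P
                b += P @ c
            return np.linalg.solve(A, b)

        landmark = np.array([1.0, 2.0, 8.0])
        centers = [np.zeros(3), np.array([2.0, 0, 0]), np.array([0, 3.0, 0])]
        rays = [landmark - c for c in centers]   # directions from each camera
        print(triangulate(centers, rays))        # recovers ~[1, 2, 8]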

  12. Influence of high-resolution surface databases on the modeling of local atmospheric circulation systems

    NASA Astrophysics Data System (ADS)

    Paiva, L. M. S.; Bodstein, G. C. R.; Pimentel, L. C. G.

    2014-08-01

    Large-eddy simulations are performed using the Advanced Regional Prediction System (ARPS) code at horizontal grid resolutions as fine as 300 m to assess the influence of detailed and updated surface databases on the modeling of local atmospheric circulation systems of urban areas with complex terrain. Applications to air pollution and wind energy are sought. These databases are comprised of 3 arc-sec topographic data from the Shuttle Radar Topography Mission, 10 arc-sec vegetation-type data from the European Space Agency (ESA) GlobCover project, and 30 arc-sec leaf area index and fraction of absorbed photosynthetically active radiation data from the ESA GlobCarbon project. Simulations are carried out for the metropolitan area of Rio de Janeiro using six one-way nested-grid domains that allow the choice of distinct parametric models and vertical resolutions associated with each grid. ARPS is initialized using the Global Forecasting System with 0.5°-resolution data from the National Center of Environmental Prediction, which is also used every 3 h as the lateral boundary condition. Topographic shading is turned on, and two soil layers are used to compute the soil temperature and moisture budgets in all runs. Results for two simulated runs covering three periods of time are compared to surface and upper-air observational data to explore the dependence of the simulations on initial and boundary conditions, grid resolution, and topographic and land-use databases. Our comparisons show overall good agreement between simulated and observational data, mainly for the potential temperature and wind speed fields, and clearly indicate that the use of high-resolution databases significantly improves our ability to predict the local atmospheric circulation.

  13. The Eruption Forecasting Information System: Volcanic Eruption Forecasting Using Databases

    NASA Astrophysics Data System (ADS)

    Ogburn, S. E.; Harpel, C. J.; Pesicek, J. D.; Wellik, J.

    2016-12-01

    Forecasting eruptions, including their onset, size, duration, location, and impacts, is vital for hazard assessment and risk mitigation. The Eruption Forecasting Information System (EFIS) project is a new initiative of the US Geological Survey-USAID Volcano Disaster Assistance Program (VDAP) and will advance VDAP's ability to forecast the outcome of volcanic unrest. The project supports probability estimation for eruption forecasting by creating databases useful for pattern recognition, for identifying monitoring-data thresholds beyond which eruptive probabilities increase, and for answering common forecasting questions. A major component of the project is a global relational database, which contains multiple modules designed to aid in the construction of probabilistic event trees and to answer common questions that arise during volcanic crises. The primary module contains chronologies of volcanic unrest. This module allows us to query eruption chronologies, monitoring data, descriptive information, operational data, and eruptive phases alongside other global databases, such as WOVOdat and the Global Volcanism Program. The EFIS database is in the early stages of development and population; thus, this contribution is also a request for feedback from the community. Preliminary data are already benefitting several research areas. For example, VDAP provided a forecast of the likely remaining eruption duration for Sinabung volcano, Indonesia, using global data taken from similar volcanoes in the DomeHaz database module, in combination with local monitoring time-series data. In addition, EFIS seismologists used a beta-statistic test and empirically derived thresholds to identify distal volcano-tectonic earthquake anomalies preceding Alaska volcanic eruptions during 1990-2015, in order to retrospectively evaluate Alaska Volcano Observatory eruption precursors. This has identified important considerations for selecting analog volcanoes for global data analysis, such as differences between closed- and open-system volcanoes.

  14. Assessing the quality of life history information in publicly available databases.

    PubMed

    Thorson, James T; Cope, Jason M; Patrick, Wesley S

    2014-01-01

    Single-species life history parameters are central to ecological research and management, including the fields of macro-ecology, fisheries science, and ecosystem modeling. However, there has been little independent evaluation of the precision and accuracy of the life history values in global and publicly available databases. We therefore develop a novel method based on a Bayesian errors-in-variables model that compares database entries with estimates from local experts, and we illustrate this process by assessing the accuracy and precision of entries in FishBase, one of the largest and oldest life history databases. This model distinguishes biases among seven life history parameters, two types of information available in FishBase (i.e., published values and those estimated from other parameters), and two taxa (i.e., bony and cartilaginous fishes) relative to values from regional experts in the United States, while accounting for additional variance caused by sex- and region-specific life history traits. For published values in FishBase, the model identifies a small positive bias in natural mortality and a negative bias in maximum age, perhaps reflecting unacknowledged fishing mortality. For life history values calculated by FishBase, the model identified large and inconsistent biases. The model also demonstrates greatest precision for body size parameters, decreased precision for values derived from geographically distant populations, and greatest between-sex differences in age at maturity. We recommend that our bias and precision estimates be used in future errors-in-variables models as a prior on measurement errors. This approach is broadly applicable to global databases of life history traits and, if used, will encourage further development and improvements in these databases.
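
    Schematically, such an errors-in-variables model can be written (our notation, for illustration; the published model has more structure) as

        \hat{\theta}_{ijk} = \theta_{ij} + \beta_{jk} + \epsilon_{ijk},
        \qquad \epsilon_{ijk} \sim \mathrm{Normal}\!\left(0, \sigma_{jk}^{2}\right),

    where \hat{\theta}_{ijk} is the database entry for life history parameter j of species i from information type k, \theta_{ij} is the latent true value anchored by the expert estimate, \beta_{jk} is the bias for that parameter and information type, and \sigma_{jk} is the measurement-error scale that the authors recommend reusing as a prior in future models.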

  15. [Establishment of a comprehensive database for laryngeal cancer related genes and the miRNAs].

    PubMed

    Li, Mengjiao; E, Qimin; Liu, Jialin; Huang, Tingting; Liang, Chuanyu

    2015-09-01

    Objective: to build a comprehensive database of laryngeal cancer-related genes and miRNAs by collecting and analyzing them; unlike current biological information databases, which are complex and unwieldy, it focuses on the theme of genes and miRNAs and could make research and teaching more convenient and efficient. Based on the B/S architecture, using Apache as the web server, MySQL for database design and PHP for web design, a comprehensive database for laryngeal cancer-related genes was established, providing gene tables, protein tables, miRNA tables and clinical information tables for patients with laryngeal cancer. The established database contains 207 laryngeal cancer-related genes, 243 proteins, 26 miRNAs, and their particular information such as mutations, methylations, differential expression, and the empirical references of laryngeal cancer-relevant molecules. The database can be accessed and operated via the Internet, through which browsing and retrieval of the information are performed. The database is maintained and updated regularly. The database for laryngeal cancer-related genes is resource-integrated and user-friendly, providing a genetic information query tool for the study of laryngeal cancer.
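
    A minimal sketch of this kind of schema, using SQLite in place of MySQL/PHP (table and column names are illustrative, not the database's actual ones):

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
        CREATE TABLE gene (
            id INTEGER PRIMARY KEY,
            symbol TEXT NOT NULL,
            mutation TEXT,
            methylation TEXT,
            reference TEXT          -- empirical literature reference
        );
        CREATE TABLE mirna (
            id INTEGER PRIMARY KEY,
            name TEXT NOT NULL,
            target_gene_id INTEGER REFERENCES gene(id)
        );
        """)
        con.execute("INSERT INTO gene (symbol, reference) VALUES (?, ?)",
                    ("TP53", "placeholder reference"))
        con.execute("INSERT INTO mirna (name, target_gene_id) VALUES (?, 1)",
                    ("miR-21",))
        # Retrieve each miRNA with the gene it targets.
        for row in con.execute("""SELECT m.name, g.symbol FROM mirna m
                                  JOIN gene g ON m.target_gene_id = g.id"""):
            print(row)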

  16. Localization and Classification of Paddy Field Pests using a Saliency Map and Deep Convolutional Neural Network.

    PubMed

    Liu, Ziyi; Gao, Junfeng; Yang, Guoguo; Zhang, Huan; He, Yong

    2016-02-11

    We present a pipeline for the visual localization and classification of agricultural pest insects by computing a saliency map and applying deep convolutional neural network (DCNN) learning. First, we used a global contrast region-based approach to compute a saliency map for localizing pest insect objects. Bounding squares containing targets were then extracted, resized to a fixed size, and used to construct a large standard database called Pest ID. This database was then utilized for self-learning of local image features which were, in turn, used for classification by DCNN. DCNN learning optimized the critical parameters, including size, number and convolutional stride of local receptive fields, dropout ratio and the final loss function. To demonstrate the practical utility of using DCNN, we explored different architectures by shrinking depth and width, and found effective sizes that can act as alternatives for practical applications. On the test set of paddy field images, our architectures achieved a mean Average Precision (mAP) of 0.951, a significant improvement over previous methods.
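
    A hedged sketch of the first stage only: a simple global-contrast saliency map (each pixel's color distance from the mean image color) and a bounding box around the most salient region, which would then be resized and fed to the classifier. The paper's exact saliency method may differ:

        import numpy as np

        def saliency_map(img):
            """img: HxWx3 float array. Returns an HxW saliency map in [0, 1]."""
            mean_color = img.reshape(-1, 3).mean(axis=0)
            sal = np.linalg.norm(img - mean_color, axis=2)
            return sal / (sal.max() + 1e-12)

        def salient_bbox(sal, thresh=0.5):
            """Bounding box of above-threshold pixels, for cropping the CNN input."""
            ys, xs = np.nonzero(sal > thresh)
            return (ys.min(), ys.max(), xs.min(), xs.max()) if len(ys) else None

        rng = np.random.default_rng(1)
        img = rng.uniform(0.4, 0.6, size=(64, 64, 3))  # dull background
        img[20:40, 30:50] = [0.9, 0.2, 0.1]            # synthetic "pest" blob
        print(salient_bbox(saliency_map(img)))         # -> approx (20, 39, 30, 49)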

  17. Localization and Classification of Paddy Field Pests using a Saliency Map and Deep Convolutional Neural Network

    PubMed Central

    Liu, Ziyi; Gao, Junfeng; Yang, Guoguo; Zhang, Huan; He, Yong

    2016-01-01

    We present a pipeline for the visual localization and classification of agricultural pest insects by computing a saliency map and applying deep convolutional neural network (DCNN) learning. First, we used a global contrast region-based approach to compute a saliency map for localizing pest insect objects. Bounding squares containing targets were then extracted, resized to a fixed size, and used to construct a large standard database called Pest ID. This database was then utilized for self-learning of local image features which were, in turn, used for classification by DCNN. DCNN learning optimized the critical parameters, including size, number and convolutional stride of local receptive fields, dropout ratio and the final loss function. To demonstrate the practical utility of using DCNN, we explored different architectures by shrinking depth and width, and found effective sizes that can act as alternatives for practical applications. On the test set of paddy field images, our architectures achieved a mean Average Precision (mAP) of 0.951, a significant improvement over previous methods. PMID:26864172

  18. Impact of Local Resources on Hospitalization Patterns of Medicare Beneficiaries and Propensity to Travel outside Local Markets

    ERIC Educational Resources Information Center

    Basu, Jayasree; Mobley, Lee R.

    2010-01-01

    Purpose: To examine how local health care resources impact travel patterns of patients age 65 and older across the rural urban continuum. Methods: Information on inpatient hospital discharges was drawn from complete 2004 hospital discharge files from the Healthcare Cost and Utilization Project (HCUP) State Inpatient Databases (SID) for New York,…

  19. Subject-specific longitudinal shape analysis by coupling spatiotemporal shape modeling with medial analysis

    NASA Astrophysics Data System (ADS)

    Hong, Sungmin; Fishbaugh, James; Rezanejad, Morteza; Siddiqi, Kaleem; Johnson, Hans; Paulsen, Jane; Kim, Eun Young; Gerig, Guido

    2017-02-01

    Modeling subject-specific shape change is one of the most important challenges in longitudinal shape analysis of disease progression. Whereas anatomical change over time can be a function of normal aging, anatomy can also be impacted by disease related degeneration. Anatomical shape change may also be affected by structural changes from neighboring shapes, which may cause non-linear variations in pose. In this paper, we propose a framework to analyze disease related shape changes by coupling extrinsic modeling of the ambient anatomical space via spatiotemporal deformations with intrinsic shape properties from medial surface analysis. We compare intrinsic shape properties of a subject-specific shape trajectory to a normative 4D shape atlas representing normal aging to isolate shape changes related to disease. The spatiotemporal shape modeling establishes inter/intra subject anatomical correspondence, which in turn enables comparisons between subjects and the 4D shape atlas, and also quantitative analysis of disease related shape change. The medial surface analysis captures intrinsic shape properties related to local patterns of deformation. The proposed framework jointly models extrinsic longitudinal shape changes in the ambient anatomical space, as well as intrinsic shape properties to give localized measurements of degeneration. Six high risk subjects and six controls are randomly sampled from a Huntington's disease image database for qualitative and quantitative comparison.

  20. A Geospatial Database that Supports Derivation of Climatological Features of Severe Weather

    NASA Astrophysics Data System (ADS)

    Phillips, M.; Ansari, S.; Del Greco, S.

    2007-12-01

    The Severe Weather Data Inventory (SWDI) at NOAA's National Climatic Data Center (NCDC) provides user access to archives of several datasets critical to the detection and evaluation of severe weather. These datasets include archives of:
    · NEXRAD Level-III point features describing general storm structure, hail, mesocyclone and tornado signatures
    · National Weather Service Storm Events Database
    · National Weather Service Local Storm Reports collected from storm spotters
    · National Weather Service Warnings
    · Lightning strikes from Vaisala's National Lightning Detection Network (NLDN)
    SWDI archives all of these datasets in a spatial database that allows for convenient searching and subsetting. These data are accessible via the NCDC web site, Web Feature Services (WFS) or automated web services. The results of interactive web page queries may be saved in a variety of formats, including plain text, XML, Google Earth's KMZ, standards-based NetCDF and Shapefile. NCDC's Storm Risk Assessment Project (SRAP) uses data from the SWDI database to derive gridded climatology products that show the spatial distributions of the frequency of various events. SRAP can also relate SWDI events to other spatial data such as roads, population, watersheds, and other geographic, sociological, or economic data to derive products that are useful in municipal planning, emergency management, the insurance industry, and other areas where there is a need to quantify and qualify how severe weather patterns affect people and property.

  1. [A method for identifying people with a high level of frailty by using a population database, Varese, Italy].

    PubMed

    Pisani, Salvatore; Gambino, Maria; Balconi, Lorena; Degli Stefani, Cristina; Speziali, Sabina; Bonarrigo, Domenico

    2016-01-01

    For over 10 years, the Lombardy Region (Italy) has operated a system for classifying all persons registered with the healthcare system (database of persons registered with a general practitioner) according to their use of major healthcare services (hospitalizations, outpatient consultations, pharmaceuticals) and whether they are exempt from copayment fees for disease-specific medications and healthcare services. The present study was conducted by the local health authorities of the province of Varese (Lombardy region, Italy) with 894,039 persons registered in the database, of whom 258,770 (28.9%) had at least one chronic condition, 104,731 (11.7%) had multiple chronic conditions and 195,296 (21.8%) were elderly. The aim was to evaluate death rates in different subgroups of patients entered in the database, including persons with chronic diseases and elderly persons. Standardized mortality rates were calculated for the year 2012. Compared with the general population, the relative risk for mortality was 4.1 (95% confidence interval 4.0-4.2) in the elderly and 1.3 (95% confidence interval 1.3-1.4) in chronic patients. This confirms that elderly persons have a higher level of frailty than patients with chronic conditions. Mortality was found to be 28 times higher in elderly persons over 74 years of age affected by high-cost conditions such as cancer and cardiac disease, with respect to the general population.
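
    For reference, the relative risks above follow the usual ratio-of-rates construction; a worked sketch with the standard log-method confidence interval (the death counts below are hypothetical, since the abstract reports only the resulting RRs):

        import math

        def relative_risk(a, n1, b, n2):
            """a deaths among n1 exposed persons vs. b deaths among n2 reference persons."""
            rr = (a / n1) / (b / n2)
            se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)  # SE of log(RR)
            lo, hi = (rr * math.exp(s * 1.96 * se) for s in (-1, 1))
            return rr, lo, hi

        rr, lo, hi = relative_risk(a=8000, n1=195_296, b=9000, n2=894_039)
        print(f"RR = {rr:.1f} (95% CI {lo:.1f}-{hi:.1f})")  # -> RR = 4.1 ...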

  2. Earthquake-induced ground failures in Italy from a reviewed database

    NASA Astrophysics Data System (ADS)

    Martino, S.; Prestininzi, A.; Romeo, R. W.

    2014-04-01

    A database (Italian acronym CEDIT) of earthquake-induced ground failures in Italy is presented, and the related content is analysed. The catalogue collects data regarding landslides, liquefaction, ground cracks, surface faulting and ground changes triggered by earthquakes of Mercalli epicentral intensity 8 or greater that occurred in the last millennium in Italy. As of January 2013, the CEDIT database has been available online for public use (http://www.ceri.uniroma1.it/cn/gis.jsp) and is presently hosted by the website of the Research Centre for Geological Risks (CERI) of the Sapienza University of Rome. Summary statistics of the database content indicate that 14% of the Italian municipalities have experienced at least one earthquake-induced ground failure and that landslides are the most common ground effects (approximately 45%), followed by ground cracks (32%) and liquefaction (18%). The relationships between ground effects and earthquake parameters such as seismic source energy (earthquake magnitude and epicentral intensity), local conditions (site intensity) and source-to-site distances are also analysed. The analysis indicates that liquefaction, surface faulting and ground changes are much more dependent on the earthquake source energy (i.e. magnitude) than landslides and ground cracks. In contrast, the latter effects are triggered at lower site intensities and greater epicentral distances than the other environmental effects.

  3. pGenN, a Gene Normalization Tool for Plant Genes and Proteins in Scientific Literature

    PubMed Central

    Ding, Ruoyao; Arighi, Cecilia N.; Lee, Jung-Youn; Wu, Cathy H.; Vijay-Shanker, K.

    2015-01-01

    Background: Automatically detecting gene/protein names in the literature and connecting them to database records, also known as gene normalization, provides a means to structure the information buried in free-text literature. Gene normalization is critical for improving the coverage of annotation in the databases, and is an essential component of many text mining systems and database curation pipelines. Methods: In this manuscript, we describe a gene normalization system specifically tailored for plant species, called pGenN (pivot-based Gene Normalization). The system consists of three steps: dictionary-based gene mention detection, species assignment, and intra-species normalization. We have developed new heuristics to improve each of these phases. Results: We evaluated the performance of pGenN on an in-house expertly annotated corpus consisting of 104 plant-relevant abstracts. Our system achieved an F-value of 88.9% (Precision 90.9% and Recall 87.2%) on this corpus, outperforming state-of-the-art systems presented in BioCreative III. We have processed over 440,000 plant-related Medline abstracts using pGenN. The gene normalization results are stored in a local database for direct query from the pGenN web interface (proteininformationresource.org/pgenn/). The annotated literature corpus is also publicly available through the PIR text mining portal (proteininformationresource.org/iprolink/). PMID:26258475
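
    The reported F-value is the harmonic mean of precision (P) and recall (R); as a quick check against the numbers above:

        F = \frac{2PR}{P + R}
          = \frac{2 \times 0.909 \times 0.872}{0.909 + 0.872}
          \approx 0.890,

    which matches the reported 88.9% up to rounding of P and R.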

  4. Data integration and warehousing: coordination between newborn screening and related public health programs.

    PubMed

    Therrell, Bradford L

    2003-01-01

    At birth, patient demographic and health information begin to accumulate in varied databases. There are often multiple sources of the same or similar data. New public health programs are often created without considering data linkages. Recently, newborn hearing screening (NHS) programs and immunization programs have virtually ignored the existence of newborn dried blood spot (DBS) screening databases containing similar demographic data, creating data duplication in their 'new' systems. Some progressive public health departments are developing data warehouses of basic, recurrent patient information and linking these databases to other health program databases where programs and services can benefit from such linkages. Demographic data warehousing saves time (and money) by eliminating duplicative data entry and reducing the chances of data errors. While newborn screening data are usually the first data available, they should not be the only data source considered for early data linkage or for populating a data warehouse. Birth certificate information should also be considered, along with other data sources, for infants who may not have received newborn screening or who may have been born outside of the jurisdiction and not have birth certificate information locally available. The newborn screening serial number provides a convenient identification number for use in the DBS program and for linking with other systems. As a minimum, data linkages should exist between newborn dried blood spot screening, newborn hearing screening, immunizations, birth certificates and birth defect registries.

  5. epiPATH: an information system for the storage and management of molecular epidemiology data from infectious pathogens.

    PubMed

    Amadoz, Alicia; González-Candelas, Fernando

    2007-04-20

    Most research scientists working in the fields of molecular epidemiology, population and evolutionary genetics are confronted with the management of large volumes of data. Moreover, the data used in studies of infectious diseases are complex and usually derive from different institutions such as hospitals or laboratories. Since no public database scheme incorporating clinical and epidemiological information about patients and molecular information about pathogens is currently available, we have developed an information system, composed of a main database and a web-based interface, which integrates both types of data and satisfies requirements of good organization, simple accessibility, data security and multi-user support. From the moment a patient arrives at a hospital or health centre until the processing and analysis of molecular sequences obtained from infectious pathogens in the laboratory, a great deal of information is collected from different sources. We have divided the most relevant data into 12 conceptual modules around which we have organized the database schema. Our schema is very complete, covering many aspects of sample sources, samples, laboratory processes, molecular sequences, phylogenetic results, clinical tests and results, clinical information, treatments, pathogens, transmissions, outbreaks and bibliographic information. Communication between end users and the selected Relational Database Management System (RDBMS) is carried out by default through a command-line window or through a user-friendly, web-based interface which provides access and management tools for the data. epiPATH is an information system for managing clinical and molecular information from infectious diseases. It facilitates daily work related to infectious pathogens and the sequences obtained from them. This software is intended for local installation in order to safeguard private data, and provides advanced SQL users with the flexibility to adapt it to their needs. The database schema, tool scripts and web-based interface are free software, but data stored in our database server are not publicly available. epiPATH is distributed under the terms of the GNU General Public License. More details about epiPATH can be found at http://genevo.uv.es/epipath.

  6. What do data used to develop ground-motion prediction equations tell us about motions near faults?

    USGS Publications Warehouse

    Boore, David M.

    2014-01-01

    A large database of ground motions from shallow earthquakes occurring in active tectonic regions around the world, recently developed in the Pacific Earthquake Engineering Center’s NGA-West2 project, has been used to investigate what such a database can say about the properties and processes of crustal fault zones. There are a relatively small number of near-rupture records, implying that few recordings in the database are within crustal fault zones, but the records that do exist emphasize the complexity of ground-motion amplitudes and polarization close to individual faults. On average over the whole data set, however, the scaling of ground motions with magnitude at a fixed distance, and the distance dependence of the ground motions, seem to be largely consistent with simple seismological models of source scaling, path propagation effects, and local site amplification. The data show that ground motions close to large faults, as measured by elastic response spectra, tend to saturate and become essentially constant for short periods. This saturation seems to be primarily a geometrical effect, due to the increasing size of the rupture surface with magnitude, and not due to a breakdown in self similarity.

  7. RNA Bricks—a database of RNA 3D motifs and their interactions

    PubMed Central

    Chojnowski, Grzegorz; Waleń, Tomasz; Bujnicki, Janusz M.

    2014-01-01

    The RNA Bricks database (http://iimcb.genesilico.pl/rnabricks) stores information about recurrent RNA 3D motifs and their interactions, found in experimentally determined RNA structures and in RNA–protein complexes. In contrast to other similar tools (RNA 3D Motif Atlas, RNA Frabase, Rloom), RNA motifs, i.e. ‘RNA bricks’, are presented in the molecular environment in which they were determined, including RNA, protein, metal ions, water molecules and ligands. All nucleotide residues in RNA bricks are annotated with structural quality scores that describe real-space correlation coefficients with the electron density data (if available), backbone geometry and possible steric conflicts, which can be used to identify poorly modeled residues. The database is also equipped with an algorithm for 3D motif search and comparison. The algorithm compares spatial positions of backbone atoms of the user-provided query structure and of stored RNA motifs, without relying on sequence or secondary structure information. This enables the identification of local structural similarities among evolutionarily related and unrelated RNA molecules. In addition, the search utility enables searching ‘RNA bricks’ according to sequence similarity, and makes it possible to identify motifs with modified ribonucleotide residues at specific positions. PMID:24220091
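
    The usual ingredient of such sequence-independent comparison of atom positions is an optimal-superposition RMSD. A minimal sketch using the Kabsch algorithm (an illustration of the general technique, not RNA Bricks' actual implementation):

        import numpy as np

        def kabsch_rmsd(P, Q):
            """RMSD between two Nx3 coordinate sets after optimal superposition."""
            P, Q = P - P.mean(axis=0), Q - Q.mean(axis=0)   # remove translation
            V, S, Wt = np.linalg.svd(P.T @ Q)
            d = np.sign(np.linalg.det(V @ Wt))
            R = V @ np.diag([1.0, 1.0, d]) @ Wt             # proper rotation only
            return float(np.sqrt(np.mean(np.sum((P @ R - Q) ** 2, axis=1))))

        rng = np.random.default_rng(4)
        P = rng.standard_normal((12, 3))                    # "backbone atoms"
        t = 0.7
        Rz = np.array([[np.cos(t), -np.sin(t), 0],
                       [np.sin(t),  np.cos(t), 0],
                       [0,          0,         1.0]])
        Q = P @ Rz + np.array([1.0, -2.0, 0.5])             # rotated + shifted copy
        print(round(kabsch_rmsd(P, Q), 6))                  # -> 0.0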

  8. What Do Data Used to Develop Ground-Motion Prediction Equations Tell Us About Motions Near Faults?

    NASA Astrophysics Data System (ADS)

    Boore, David M.

    2014-11-01

    A large database of ground motions from shallow earthquakes occurring in active tectonic regions around the world, recently developed in the Pacific Earthquake Engineering Center's NGA-West2 project, has been used to investigate what such a database can say about the properties and processes of crustal fault zones. There are a relatively small number of near-rupture records, implying that few recordings in the database are within crustal fault zones, but the records that do exist emphasize the complexity of ground-motion amplitudes and polarization close to individual faults. On average over the whole data set, however, the scaling of ground motions with magnitude at a fixed distance, and the distance dependence of the ground motions, seem to be largely consistent with simple seismological models of source scaling, path propagation effects, and local site amplification. The data show that ground motions close to large faults, as measured by elastic response spectra, tend to saturate and become essentially constant for short periods. This saturation seems to be primarily a geometrical effect, due to the increasing size of the rupture surface with magnitude, and not due to a breakdown in self similarity.

  9. Designing a data portal for synthesis modeling

    NASA Astrophysics Data System (ADS)

    Holmes, M. A.

    2006-12-01

    Processing of field and model data in multi-disciplinary integrated science studies is a vital part of synthesis modeling. Collection and storage techniques for field data vary greatly between the participating scientific disciplines due to the nature of the data being collected, whether they are in situ, remotely sensed, or recorded by automated data-logging equipment. Spreadsheets, personal databases, text files and binary files are used in the initial storage and processing of the raw data. In order to be useful to scientists, engineers and modelers, the data need to be stored in a format that is easily identifiable, accessible and transparent to a variety of computing environments. The Model Operations and Synthesis (MOAS) database and associated web portal were created to provide such capabilities. The industry-standard relational database is comprised of spatial and temporal data tables, shape files and supporting metadata accessible over the network, through a menu-driven, web-based portal or spatially through ArcSDE connections from the user's local GIS desktop software. A separate server provides public access to spatial data and model output in the form of attributed shape files through an ArcIMS web-based graphical user interface.

  10. A Relational Database System for Student Use.

    ERIC Educational Resources Information Center

    Fertuck, Len

    1982-01-01

    Describes an APL implementation of a relational database system suitable for use in a teaching environment in which database development and database administration are studied, and discusses the functions of the user and the database administrator. An appendix illustrating system operation and an eight-item reference list are attached. (Author/JL)

  11. PDB_TM: selection and membrane localization of transmembrane proteins in the protein data bank.

    PubMed

    Tusnády, Gábor E; Dosztányi, Zsuzsanna; Simon, István

    2005-01-01

    PDB_TM is a database of transmembrane proteins with known structures. It aims to collect all transmembrane proteins that are deposited in the protein structure database (PDB) and to determine their membrane-spanning regions. These assignments are based on the TMDET algorithm, which uses only structural information to locate the most likely position of the lipid bilayer and to distinguish between transmembrane and globular proteins. This algorithm was applied to all PDB entries and the results were collected in the PDB_TM database. By using the TMDET algorithm, the PDB_TM database can be automatically updated every week, keeping it synchronized with the latest PDB updates. The PDB_TM database is available at http://www.enzim.hu/PDB_TM.

  12. Reduced carbon intensity in highly developed countries: environmental kuznets curves for carbon dioxide

    NASA Astrophysics Data System (ADS)

    Kornhuber, Kai; Rybski, Diego; Costa, Luis; Reusser, Dominik E.; Kropp, Jürgen P.

    2014-05-01

    The Environmental Kuznets Curve (EKC) hypothesis postulates that pollution increases with income per capita up to a maximum, above which it decreases with further increases in income per capita, i.e. it follows an inverse U-shape in the pollution vs. income-per-capita plane. It is commonly believed that the EKC holds for "local" pollutants such as nitrogen oxide and sulfur dioxide, but not for CO2 emissions. This is attributed to the fact that while "local" pollutants cause visible environmental damage on the local/regional scale (which authorities/governments seek to avoid), CO2 emissions have no immediate attributable local/regional consequences. We revisit the EKC for CO2 by exploring the relation between CO2 emissions per capita and the Human Development Index (HDI) between 1990 and 2010, obtained from the World Bank database. We find evidence for a reduction in CO2 emissions per capita in highly developed countries. We propose a model according to which the emissions per capita of a country are composed of a component related to the actual state of development and a component related to the change of development. The model leads to four distinct cases, of which two have EKC shape and two imply saturation. This outcome is in line with previously suggested qualitative relations. Our analysis indicates that the EKC-shaped cases better describe the empirical values. We explore the less extreme version, corresponding to the so-called conventional EKC, and study the maximum of the fitted curve, providing a threshold value for the HDI and a typical maximum value for the emissions per capita. We find that approx. 5 countries have crossed the CO2-HDI maximum, corresponding to approx. 1.5% of the world population.
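
    A conventional EKC of the kind referred to above is commonly fitted as a concave quadratic (a generic textbook form, not necessarily this study's exact specification):

        \ln E = \beta_0 + \beta_1 h + \beta_2 h^2, \qquad \beta_2 < 0,

    so that emissions per capita E peak at the development level h^* = -\beta_1 / (2 \beta_2); the HDI threshold and maximum emissions quoted above correspond to the location and height of such a fitted maximum.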

  13. CerebralWeb: a Cytoscape.js plug-in to visualize networks stratified by subcellular localization.

    PubMed

    Frias, Silvia; Bryan, Kenneth; Brinkman, Fiona S L; Lynn, David J

    2015-01-01

    CerebralWeb is a light-weight JavaScript plug-in that extends Cytoscape.js to enable fast and interactive visualization of molecular interaction networks stratified based on subcellular localization or other user-supplied annotation. The application is designed to be easily integrated into any website and is configurable to support customized network visualization. CerebralWeb also supports the automatic retrieval of Cerebral-compatible localizations for human, mouse and bovine genes via a web service and enables the automated parsing of Cytoscape compatible XGMML network files. CerebralWeb currently supports embedded network visualization on the InnateDB (www.innatedb.com) and Allergy and Asthma Portal (allergen.innatedb.com) database and analysis resources. Database tool URL: http://www.innatedb.com/CerebralWeb © The Author(s) 2015. Published by Oxford University Press.

  14. A new phase-correlation-based iris matching for degraded images.

    PubMed

    Krichen, Emine; Garcia-Salicetti, Sonia; Dorizzi, Bernadette

    2009-08-01

    In this paper, we present a new phase-correlation-based iris matching approach in order to deal with degradations in iris images due to unconstrained acquisition procedures. Our matching system is a fusion of global and local Gabor phase-correlation schemes. The main originality of our local approach is that we do not only consider the correlation peak amplitudes but also their locations in different regions of the images. Results on several degraded databases, namely, the CASIA-BIOSECURE and Iris Challenge Evaluation 2005 databases, show the improvement of our method compared to two available reference systems, Masek and Open Source for Iris (OSRIS), in verification mode.
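
    The global building block is standard phase correlation: the normalized cross-power spectrum of two images yields a sharp peak whose location gives their relative translation and whose amplitude can serve as a match score. A minimal sketch of that building block (the paper's local scheme additionally scores peak amplitudes and locations per image region):

        import numpy as np

        def phase_correlation(a, b):
            """Correlation surface and peak location for two equal-size arrays."""
            cross = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
            cross /= np.abs(cross) + 1e-12          # keep phase only
            corr = np.real(np.fft.ifft2(cross))
            return corr, np.unravel_index(np.argmax(corr), corr.shape)

        rng = np.random.default_rng(2)
        a = rng.standard_normal((64, 64))            # stand-in for an iris texture
        b = np.roll(a, shift=(5, 9), axis=(0, 1))    # translated copy
        _, peak = phase_correlation(b, a)
        print("recovered shift:", peak)              # -> (5, 9)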

  15. Relational-database model for improving quality assurance and process control in a composite manufacturing environment

    NASA Astrophysics Data System (ADS)

    Gentry, Jeffery D.

    2000-05-01

    A relational database is a powerful tool for collecting and analyzing the vast amounts of interrelated data associated with the manufacture of composite materials. A relational database contains many individual database tables that store data that are related in some fashion. Manufacturing process variables as well as quality assurance measurements can be collected and stored in database tables indexed according to lot numbers, part type or individual serial numbers. Relationships between manufacturing process and product quality can then be correlated over a wide range of product types and process variations. This paper presents details on how relational databases are used to collect, store, and analyze process variables and quality assurance data associated with the manufacture of advanced composite materials. Important considerations are covered, including how the various types of data are organized and how relationships between the data are defined. Employing relational database techniques to establish correlative relationships between process variables and quality assurance measurements is then explored. Finally, the benefits of database techniques such as data warehousing, data mining and web-based client/server architectures are discussed in the context of composite material manufacturing.
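
    The core idea, tables keyed by lot or serial number and joined to relate process variables to quality measurements, is easy to sketch with SQLite (schema and data are illustrative, not from the paper):

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
        CREATE TABLE process (lot TEXT PRIMARY KEY, cure_temp_c REAL);
        CREATE TABLE qa      (lot TEXT REFERENCES process(lot), void_pct REAL);
        INSERT INTO process VALUES ('L001', 176.5), ('L002', 182.0), ('L003', 179.2);
        INSERT INTO qa VALUES ('L001', 1.2), ('L002', 2.7), ('L003', 1.9);
        """)
        # Join process variables to QA measurements by lot number.
        rows = con.execute("""SELECT p.cure_temp_c, q.void_pct
                              FROM process p JOIN qa q ON p.lot = q.lot
                              ORDER BY p.cure_temp_c""").fetchall()
        for temp, void in rows:
            print(f"cure {temp:.1f} C -> void content {void:.1f}%")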

  16. Methanol poisoning among travellers to Indonesia.

    PubMed

    Giovanetti, Franco

    2013-01-01

    Common travel medicine sources generally do not provide information on the risk of methanol poisoning among travellers who visit Indonesia. The aim of this analysis was to increase knowledge on this topic through reports from bibliographic databases and Internet sources. Case reports and studies on methanol poisoning in Indonesia were retrieved by searching the PubMed, Embase and Google Scholar databases. A Google search was used to retrieve web media articles reporting fatal and non-fatal methanol poisoning in Indonesia in the timeframe from 01.01.2009 to 03.03.2013. Three case reports of methanol poisoning involving four travellers to Indonesia were found in the bibliographic databases. The media search identified 14 articles published online, reporting 22 cases of methanol poisoning among travellers after consumption of local alcoholic beverages. The total number of fatal cases was 18. Some sources also report a large number of cases among the local population. Methanol poisoning is likely to be an emerging public health problem in Indonesia, with associated morbidity and mortality among travellers and local people. Several strategies can be implemented to prevent or reduce harm among travellers. Copyright © 2013 Elsevier Ltd. All rights reserved.

  17. Searching LOGIN, the Local Government Information Network.

    ERIC Educational Resources Information Center

    Jack, Robert F.

    1984-01-01

    Describes a computer-based information retrieval and electronic messaging system produced by Control Data Corporation now being used by government agencies and other organizations. Background of Local Government Information Network (LOGIN), database structure, types of LOGIN units, searching LOGIN (intersect, display, and list commands), and how…

  18. Clinical Perspectives from Randomized Phase 3 Trials on Prostate Cancer: An Analysis of the ClinicalTrials.gov Database.

    PubMed

    Tsikkinis, Alexandros; Cihoric, Nikola; Giannarini, Gianluca; Hinz, Stefan; Briganti, Alberto; Wust, Peter; Ost, Piet; Ploussard, Guillaume; Massard, Christophe; Surcel, Cristian I; Sooriakumaran, Prasanna; Isbarn, Hendrik; De Visschere, Peter J L; Futterer, Jurgen J; van der Bergh, Roderick C N; Dal Pra, Alan; Aebersold, Daniel M; Budach, Volker; Ghadjar, Pirus

    2015-09-01

    It is not easy to maintain an overview of pending phase 3 trials on prostate cancer (PCa), yet awareness of these trials would benefit clinicians. To identify all phase 3 trials on PCa registered in the ClinicalTrials.gov database with pending results. On September 29, 2014, a database was established from the records for 175,538 clinical trials registered on ClinicalTrials.gov. A search of this database for the substring "prostat" identified 2951 prostate trials. Phase 3 trials accounted for 441 studies, of which 333 concerned only PCa. We selected only ongoing or completed trials with pending results, that is, trials for which the primary endpoint had not been published in a peer-reviewed medical journal. We identified 123 phase 3 trials with pending results. Trials were conducted predominantly in North America (n=63; 51%) and Europe (n=47; 38%). The majority were on nonmetastatic disease (n=82; 67%), with 37 (30%) on metastatic disease and four trials (3%) including both. In terms of intervention, systemic treatment was most commonly tested (n=71; 58%), followed by local treatment (n=34; 28%), and both systemic and local treatment (n=11; 9%), with seven (6%) trials not classifiable. The 71 trials on systemic treatment covered androgen deprivation therapy (n=34; 48%), chemotherapy (n=15; 21%), immunotherapy (n=9; 13%), other systemic drugs (n=9; 13%), radiopharmaceuticals (n=2; 3%), and combinations (n=2; 3%). Local treatments tested included radiation therapy (n=27; 79%), surgery (n=5; 15%), and both (n=2; 6%). A limitation is that not every clinical trial is registered on ClinicalTrials.gov. There are many PCa phase 3 trials with pending results, most of which address questions regarding systemic treatments for both nonmetastatic and metastatic disease. Radiation therapy and androgen deprivation therapy are the interventions most commonly tested for local and systemic treatment, respectively. This report describes all phase 3 trials on prostate cancer registered in the ClinicalTrials.gov database with pending results. Most of these trials address questions regarding systemic treatments for both nonmetastatic and metastatic disease. Radiation therapy and androgen deprivation therapy are the interventions most commonly tested for local and systemic treatment, respectively. Copyright © 2015 European Association of Urology. Published by Elsevier B.V. All rights reserved.
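
    The screening step described above amounts to a substring filter over registry records; a hedged sketch with pandas on a hypothetical export (column names are illustrative):

        import pandas as pd

        records = pd.DataFrame({
            "nct_id": ["NCT001", "NCT002", "NCT003"],
            "title":  ["Prostatectomy vs radiotherapy", "Asthma inhaler study",
                       "ADT duration in prostate cancer"],
            "phase":  ["Phase 3", "Phase 2", "Phase 3"],
        })

        # Case-insensitive substring search for "prostat", then phase-3 filter.
        mask = records["title"].str.contains("prostat", case=False)
        print(records[mask & records["phase"].eq("Phase 3")][["nct_id", "title"]])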

  19. A Probabilistic Model of Local Sequence Alignment That Simplifies Statistical Significance Estimation

    PubMed Central

    Eddy, Sean R.

    2008-01-01

    Sequence database searches require accurate estimation of the statistical significance of scores. Optimal local sequence alignment scores follow Gumbel distributions, but determining an important parameter of the distribution (λ) requires time-consuming computational simulation. Moreover, optimal alignment scores are less powerful than probabilistic scores that integrate over alignment uncertainty (“Forward” scores), but the expected distribution of Forward scores remains unknown. Here, I conjecture that both expected score distributions have simple, predictable forms when full probabilistic modeling methods are used. For a probabilistic model of local sequence alignment, optimal alignment bit scores (“Viterbi” scores) are Gumbel-distributed with constant λ = log 2, and the high scoring tail of Forward scores is exponential with the same constant λ. Simulation studies support these conjectures over a wide range of profile/sequence comparisons, using 9,318 profile-hidden Markov models from the Pfam database. This enables efficient and accurate determination of expectation values (E-values) for both Viterbi and Forward scores for probabilistic local alignments. PMID:18516236
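
    In this parameterization, significance estimation reduces to closed-form tail probabilities. For bit scores x, the Gumbel tail with \lambda = \log 2 is

        P(S \ge x) = 1 - \exp\!\left(-e^{-\lambda (x - \mu)}\right)
                   \approx e^{-\lambda (x - \mu)} = 2^{-(x - \mu)},

    and a search against N sequences then has E-value E(x) = N \, P(S \ge x), with the same \lambda describing the exponential tail of Forward scores.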

  20. Relational Databases and Biomedical Big Data.

    PubMed

    de Silva, N H Nisansa D

    2017-01-01

    In various biomedical applications that collect, handle, and manipulate data, the amounts of data tend to build up and venture into the range identified as big data. In such cases, a design decision has to be made as to what type of database will be used to handle the data. According to past research, the default and classical solution in the biomedical domain is, more often than not, the relational database. While this was the norm for a long time, there is an evident trend of moving away from relational databases in favor of other types and paradigms of databases. However, it is still of paramount importance to understand the interrelation that exists between biomedical big data and relational databases. This chapter reviews the pros and cons, discussed and used in previous research, of using relational databases to store biomedical big data.

  1. On Quasi-biennial Oscillations in Chromospheric Macrospicules and Their Potential Relation to the Global Solar Magnetic Field

    NASA Astrophysics Data System (ADS)

    Kiss, T. S.; Erdélyi, R.

    2018-04-01

    This study aims to provide further evidence for the potential influence of the global solar magnetic field on localized chromospheric jets, the macrospicules (MS). To find a connection between the long-term variation of the properties of MS and other solar activity proxies, including, e.g., the temporal variation of the frequency shift of solar global oscillations, sunspot area, etc., a database overarching seven years of observations was compiled. This database contains 362 MS, based on observations in the 30.4 nm channel of the Atmospheric Imaging Assembly on board the Solar Dynamics Observatory. Three of the five investigated physical properties of MS show a clear long-term temporal variation after smoothing the raw data. Wavelet analysis of the temporal variation of maximum length, maximum area, and average velocity is carried out. The results reveal a strong pattern of periodicities at around 2 years (also referred to as quasi-biennial oscillations, QBOs). A comparison with solar activity proxies that also possess the properties of QBOs reveals some interesting features: the minima and maxima of QBOs of MS properties occur at around the same epochs as the minima and maxima of these activity proxies. For most of the time span investigated, the oscillations are out of phase. This out-of-phase behavior was also corroborated by a cross-correlation analysis. These results suggest that the physical processes that generate and drive the long-term evolution of the global solar activity proxies may be coupled to the short-term local physical processes driving the macrospicules, and therefore modulate the properties of local dynamics.
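
    The out-of-phase relationship is the kind of statement a lagged cross-correlation makes precise. A hedged sketch with synthetic ~2-year oscillations (monthly sampling over seven years, standing in for the smoothed MS and proxy series):

        import numpy as np

        def xcorr_at_lag(a, b, lag):
            """Normalized cross-correlation of two series at a lag in samples."""
            a, b = a - a.mean(), b - b.mean()
            if lag > 0:
                a, b = a[lag:], b[:-lag]
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

        t = np.arange(0, 7, 1 / 12)                       # 7 years, monthly
        ms = np.sin(2 * np.pi * t / 2.0)                  # ~2-year MS oscillation
        proxy = np.sin(2 * np.pi * t / 2.0 + np.pi)       # out-of-phase proxy
        print(round(xcorr_at_lag(ms, proxy, 0), 2))       # -> -1.0 (anti-phase)
        print(round(xcorr_at_lag(ms, proxy, 12), 2))      # -> 0.97 (back in phase
                                                          #    at a half-period lag)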

  2. Gridding Global δ18Owater and Interpreting Core Top δ18Oforam

    NASA Astrophysics Data System (ADS)

    Legrande, A. N.; Schmidt, G.

    2004-05-01

    Estimates of the oxygen isotope ratio in seawater (δ18Owater) have traditionally relied on regional δ18Owater-to-salinity relationships to convert seawater salinity into δ18Owater. This indirect method of determining δ18Owater is necessary since δ18Owater measurements are relatively sparse. We improve upon this process by constructing local δ18Owater-to-salinity curves using the Schmidt et al. (1999) global database of δ18Owater and salinity. We calculate the local δ18Owater-to-salinity relationship on a 1x1 grid based on the closest database points to each grid box. Each ocean basin is analyzed separately, and each curve is processed to exclude outliers. These local relationships, in combination with seawater salinity (Levitus, 1994), allow us to construct a global map of δ18Owater on a 1x1 grid. We combine seawater temperature (Levitus, 1994) with this dataset to predict δ18Ocalcite on a 1x1 grid. These predicted values are then compared to previous compilations of core top δ18Oforam data for individual species of foraminifera. This comparison provides insight into the calcification habitats (as inferred from seawater temperature and salinity) of these species. Additionally, we compare the 1x1 grid of δ18Owater to preliminary output from the latest GISS coupled Atmosphere/Ocean GCM that tracks water isotopes through the hydrologic cycle. This comparison provides insight into possible model applications as a tool to aid in interpreting paleo-isotope data.
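
    A hedged sketch of the gridding approach: fit a local δ18O-salinity line from the database points nearest a grid cell, then predict δ18O from gridded salinity (synthetic points stand in for the global database; no basin handling or outlier screening):

        import numpy as np

        def local_fit(grid_lat, grid_lon, obs, k=10):
            """obs rows: (lat, lon, salinity, d18o). Returns slope, intercept."""
            d2 = (obs[:, 0] - grid_lat) ** 2 + (obs[:, 1] - grid_lon) ** 2
            near = obs[np.argsort(d2)[:k]]            # k closest database points
            return np.polyfit(near[:, 2], near[:, 3], deg=1)

        rng = np.random.default_rng(3)
        lat, lon = rng.uniform(-60, 60, 200), rng.uniform(0, 360, 200)
        sal = rng.uniform(33, 37, 200)
        d18o = 0.5 * (sal - 35) + rng.normal(0, 0.05, 200)  # synthetic relation
        obs = np.column_stack([lat, lon, sal, d18o])

        m, c = local_fit(10.0, 140.0, obs)
        print(f"local curve: d18o = {m:.2f} * S + {c:.2f}")  # slope near 0.5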

  3. Relation between burden of disease and randomised evidence in sub-Saharan Africa: survey of research.

    PubMed

    Isaakidis, Petros; Swingler, George H; Pienaar, Elizabeth; Volmink, Jimmy; Ioannidis, John P A

    2002-03-23

    To evaluate whether the amount of randomised clinical research on various medical conditions is related to the burden of disease and health needs of the local populations in sub-Saharan Africa. Construction and analysis of a comprehensive database of randomised controlled trials in sub-Saharan Africa based on Medline, the Cochrane Controlled Trials Register, and several African databases. Sub-Saharan Africa. Number of trials and randomised subjects for each category of disease in the global burden of disease taxonomy; ratios of disability adjusted life years (DALYs) per amount of randomised evidence. 1179 eligible randomised controlled trials were identified. The number of trials published each year increased over time. Almost half of the trials (n=565) had been done in South Africa. There was relatively good correlation between the estimated burden of disease at year 2000 and the number of trials performed (r=0.53, P=0.024) and the number of participants randomised (r=0.68, P=0.002). However, some conditions, for example injuries (over 20 000 DALYs per patient ever randomised), were more neglected than others. Despite recent improvements, few clinical trials are done in sub-Saharan Africa. Clinical research in this part of the world should focus more evenly on the major contributors to burden of disease.

  4. Antibiotic resistance genes across a wide variety of metagenomes.

    PubMed

    Fitzpatrick, David; Walsh, Fiona

    2016-02-01

    The distribution of potentially clinically relevant antibiotic resistance (AR) genes across soil, water, animal, plant and human microbiomes is not well understood. We aimed to investigate whether there were differences in the distribution and relative abundances of resistance genes across a variety of ecological niches. All sequence reads (human, animal, water, soil, plant and insect metagenomes) from the MG-RAST database were downloaded and assembled into a local sequence database. We show that there are many reservoirs of the basic forms of resistance genes, e.g. blaTEM, but that metagenomic analysis identifies the widest diversity of clinically relevant resistance genes in the human and mammalian gut microbiomes. The human microbiomes contained a high relative abundance of resistance genes, while the relative abundances varied greatly in the marine and soil metagenomes, when datasets with greater than one million genes were compared. While these results reflect a bias in the distribution of AR genes across the metagenomes, we note this interpretation with caution. Metagenomic analysis has limits in the detection and identification of AR genes in complex and diverse microbiome populations. Therefore, if we do not detect an AR gene, is it in fact absent, or merely below the detection limits of our techniques? © FEMS 2016. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  5. A Relational Algebra Query Language for Programming Relational Databases

    ERIC Educational Resources Information Center

    McMaster, Kirby; Sambasivam, Samuel; Anderson, Nicole

    2011-01-01

    In this paper, we describe a Relational Algebra Query Language (RAQL) and Relational Algebra Query (RAQ) software product we have developed that allows database instructors to teach relational algebra through programming. Instead of defining query operations using mathematical notation (the approach commonly taken in database textbooks), students…
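    The RAQ product itself is not shown here, but the idea of teaching relational algebra through programming can be sketched in Python by treating relations as lists of dictionaries; the relation names and attributes below are invented for illustration.

        def select(rel, pred):            # sigma: keep rows satisfying a predicate
            return [r for r in rel if pred(r)]

        def project(rel, attrs):          # pi: keep named attributes, drop duplicates
            seen, out = set(), []
            for r in rel:
                t = tuple((a, r[a]) for a in attrs)
                if t not in seen:
                    seen.add(t)
                    out.append(dict(t))
            return out

        def natural_join(r1, r2):         # join on all shared attribute names
            common = set(r1[0]) & set(r2[0]) if r1 and r2 else set()
            return [{**a, **b} for a in r1 for b in r2
                    if all(a[c] == b[c] for c in common)]

        students = [{"sid": 1, "dept": "CS"}, {"sid": 2, "dept": "Math"}]
        depts = [{"dept": "CS", "building": "Olin"}, {"dept": "Math", "building": "Ross"}]
        print(project(select(natural_join(students, depts),
                             lambda r: r["dept"] == "CS"), ["sid", "building"]))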

  6. 77 FR 52748 - 60-Day Proposed Information Collection: Indian Health Service (IHS) Sharing What Works-Best...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-30

    ... Form (OMB Form No. 0917-0034). Need and Use of Information Collection: The IHS goal is to raise the... Prevention (HP/DP), Nursing, and Dental) have developed a centralized program database of Best/Promising Practices and Local Efforts (BPPPLE) and resources. The purpose of this collection is to develop a database...

  7. 75 FR 35305 - Local Number Portability Porting Interval and Validation Requirements; Telephone Number Portability

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-22

    ... database administrators proceed in a clear and orderly fashion so that porting requests can be handled in... communications between service providers and database administrators proceed in a clear and orderly fashion so... Bureau data for 2002 show that there were a total of 371 firms that operated for the entire year. Of this...

  8. Listing of Education in Archaeological Programs: The LEAP Clearinghouse 1990-1991 Summary Report.

    ERIC Educational Resources Information Center

    Knoll, Patricia C., Ed.

    This is the second catalog of the National Park Service's Listing of Education in Archaeological Programs (LEAP). It consists of the information incorporated into the LEAP computerized database between 1990 and 1991. The database is a listing of federal, state, local, and private projects promoting public awareness of U.S. archaeology including…

  9. Cancer-related marketing centrality motifs acting as pivot units in the human signaling network and mediating cross-talk between biological pathways.

    PubMed

    Li, Wan; Chen, Lina; Li, Xia; Jia, Xu; Feng, Chenchen; Zhang, Liangcai; He, Weiming; Lv, Junjie; He, Yuehan; Li, Weiguo; Qu, Xiaoli; Zhou, Yanyan; Shi, Yuchen

    2013-12-01

    Network motifs in central positions are considered not only to have more incoming and outgoing connections but also to be localized in areas through which more paths in the network pass. These central motifs have been extensively investigated to determine their consistent functions or associations with specific function categories. However, their functional potential in the maintenance of cross-talk between different functional communities is unclear. In this paper, we constructed an integrated human signaling network from the Pathway Interaction Database. We identified 39 essential cancer-related motifs in central roles, which we called cancer-related marketing centrality motifs, using combined centrality indices on the system level. Our results demonstrated that these cancer-related marketing centrality motifs were pivotal units in the signaling network and could mediate cross-talk between 61 biological pathways (on average, one motif could mediate 25 of them), most of which were cancer-related pathways. Further analysis showed that the molecules of most marketing centrality motifs were in the same or adjacent subcellular localizations, such as the motif containing PI3K, PDK1 and AKT1 in the plasma membrane, which mediates signal transduction between 32 cancer-related pathways. Finally, we analyzed the pivotal roles of cancer genes in these marketing centrality motifs in the pathogenesis of cancers, and found that the non-cancer genes in these motifs are potential cancer-related genes.

  10. jSPyDB, an open source database-independent tool for data management

    NASA Astrophysics Data System (ADS)

    Pierro, Giuseppe Antonio; Cavallari, Francesca; Di Guida, Salvatore; Innocente, Vincenzo

    2011-12-01

    Nowadays, the number of commercial tools available for accessing databases, built on Java or .Net, is increasing. However, many of these applications have several drawbacks: usually they are not open-source, they provide interfaces only to a specific kind of database, they are platform-dependent, and they consume considerable CPU and memory. jSPyDB is a free web-based tool written in Python and Javascript. It relies on jQuery and Python libraries, and is intended to provide a simple handler to different database technologies inside a local web browser. Such a tool, exploiting fast-access libraries such as SQLAlchemy, is easy to install and configure. The design of this tool envisages three layers. The front-end client side in the local web browser communicates with a back-end server. Only the server is able to connect to the different databases for the purposes of performing data definition and manipulation. The server makes the data available to the client, so that the user can display and handle them safely. Moreover, thanks to jQuery libraries, this tool supports export of data in different formats, such as XML and JSON. Finally, by using a set of pre-defined functions, users are allowed to create their own customized views for better data visualization. In this way, we optimize the performance of database servers by avoiding short connections and concurrent sessions. In addition, security is enforced since we do not give users the ability to execute arbitrary SQL statements directly.
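    As an illustration of the reflect-query-export pattern that such a server-side handler might implement with SQLAlchemy, here is a minimal sketch; the SQLite URL and table name are placeholders, and this is not jSPyDB's actual code.

        import json
        from sqlalchemy import create_engine, MetaData, Table, select

        engine = create_engine("sqlite:///example.db")          # placeholder database URL
        metadata = MetaData()
        users = Table("users", metadata, autoload_with=engine)  # reflect schema from the DB

        with engine.connect() as conn:                          # short-lived connection
            rows = conn.execute(select(users).limit(100)).mappings().all()

        print(json.dumps([dict(r) for r in rows], default=str)) # JSON export, as in jSPyDB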

  11. A dynamic clinical dental relational database.

    PubMed

    Taylor, D; Naguib, R N G; Boulton, S

    2004-09-01

    The traditional approach to relational database design is based on the logical organization of data into a number of related normalized tables. One assumption is that the nature and structure of the data is known at the design stage. In the case of designing a relational database to store historical dental epidemiological data from individual clinical surveys, the structure of the data is not known until the data is presented for inclusion into the database. This paper addresses the issues concerned with the theoretical design of a clinical dynamic database capable of adapting the internal table structure to accommodate clinical survey data, and presents a prototype database application capable of processing, displaying, and querying the dental data.

  12. New insights into the hydrostratigraphy of the High Plains aquifer from three-dimensional visualizations based on well records

    USGS Publications Warehouse

    Macfarlane, P.A.

    2009-01-01

    Regional aquifers in thick sequences of continentally derived heterolithic deposits, such as the High Plains of the North American Great Plains, are difficult to characterize hydrostratigraphically because of their framework complexity and the lack of high-quality subsurface information from drill cores and geophysical logs. However, using a database of carefully evaluated drillers' and sample logs and commercially available visualization software, it is possible to qualitatively characterize these complex frameworks based on the concept of relative permeability. Relative permeability is the permeable fraction of a deposit expressed as a percentage of its total thickness. In this methodology, uncemented coarse and fine sediments are arbitrarily set at relative permeabilities of 100% and 0%, respectively, with allowances made for log entries containing descriptions of mixed lithologies, heterolithic strata, and cementation. To better understand the arrangement of high- and low-permeability domains within the High Plains aquifer, a pilot study was undertaken in southwest Kansas to create three-dimensional visualizations of relative permeability using a database of >3000 logs. Aggregate relative permeability ranges up to 99% with a mean of 51%. Laterally traceable, thick domains of >80% relative permeability embedded within a lower relative permeability matrix strongly suggest that preferred pathways for lateral and vertical water transmission exist within the aquifer. Similarly, domains with relative permeabilities of <45% are traceable laterally over appreciable distances in the subsurface and probably act as leaky confining layers. This study shows that the aquifer does not consist solely of local, randomly distributed, hydrostratigraphic units, as suggested by previous studies. © 2009 Geological Society of America.
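    The thickness-weighted bookkeeping described above can be sketched in a few lines; the intermediate lithology weights are illustrative stand-ins for the paper's allowances for mixed lithologies and cementation.

        # Weights: uncemented coarse = 1.0, fine = 0.0; the intermediate values
        # below are illustrative assumptions, not the paper's exact allowances.
        WEIGHTS = {"coarse": 1.0, "mixed": 0.5, "heterolithic": 0.4, "fine": 0.0}

        def aggregate_relative_permeability(log_entries):
            """log_entries: list of (top_depth, base_depth, lithology) tuples."""
            total = permeable = 0.0
            for top, base, lith in log_entries:
                thickness = base - top
                total += thickness
                permeable += thickness * WEIGHTS[lith]
            return 100.0 * permeable / total    # percent of total thickness

        log = [(0, 10, "fine"), (10, 25, "coarse"), (25, 30, "mixed")]
        print(aggregate_relative_permeability(log))   # ~58.3% for this made-up log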

  13. XML: James Webb Space Telescope Database Issues, Lessons, and Status

    NASA Technical Reports Server (NTRS)

    Detter, Ryan; Mooney, Michael; Fatig, Curtis

    2003-01-01

    This paper will present the current concept of using Extensible Markup Language (XML) as the underlying structure for the James Webb Space Telescope (JWST) database. The purpose of using XML is to provide a JWST database that is independent of any portion of the ground system, yet still compatible with the various systems using a variety of different structures. The testing of the JWST Flight Software (FSW) started in 2002, yet the launch is scheduled for 2011 with a planned 5-year mission and a 5-year follow-on option. The initial database and ground system elements, including the commands, telemetry, and ground system tools, will be used for 19 years, plus post-mission activities. During the Integration and Test (I&T) phases of the JWST development, 24 distinct laboratories, each geographically dispersed, will have local database tools with an XML database. Each of these laboratories' database tools will be used for exporting and importing data both locally and to a central database system, inputting data to the database certification process, and providing various reports. A centralized certified database repository will be maintained by the Space Telescope Science Institute (STScI), in Baltimore, Maryland, USA. One of the challenges for the database is to be flexible enough to allow for the upgrade, addition or changing of individual items without affecting the entire ground system. Also, using XML should allow the import and export formats needed by the various elements to be altered, the verification/validation of each database item to be tracked, many organizations to provide database inputs, and the many existing database processes to be merged into one central database structure throughout the JWST program. Many National Aeronautics and Space Administration (NASA) projects have attempted to take advantage of open source and commercial technology. Often this causes a greater reliance on the use of Commercial-Off-The-Shelf (COTS) software, which is often limiting. In our review of the database requirements and the COTS software available, only very expensive COTS software would meet 90% of the requirements. Even with the high projected initial cost of COTS, the development and support of custom code over the 19-year mission period was forecast to be higher than the total licensing costs. A group did look at reusing existing database tools and formats. Reuse would have made sense had the JWST database already been in a mature state; but with the database still needing to handle the addition of different types of command and telemetry structures, the definition of new spacecraft systems, and input from and export to systems that have not yet been defined, XML provided the desired flexibility. It remains to be determined whether the XML database will reduce the overall cost for the JWST mission.

  14. The Danish Testicular Cancer database.

    PubMed

    Daugaard, Gedske; Kier, Maria Gry Gundgaard; Bandak, Mikkel; Mortensen, Mette Saksø; Larsson, Heidi; Søgaard, Mette; Toft, Birgitte Groenkaer; Engvad, Birte; Agerbæk, Mads; Holm, Niels Vilstrup; Lauritsen, Jakob

    2016-01-01

    The nationwide Danish Testicular Cancer database consists of a retrospective research database (DaTeCa database) and a prospective clinical database (Danish Multidisciplinary Cancer Group [DMCG] DaTeCa database). The aim is to improve the quality of care for patients with testicular cancer (TC) in Denmark, for example by identifying risk factors for relapse and treatment-related toxicity, and by focusing on late effects. All Danish male patients with a histologically verified germ cell cancer diagnosis in the Danish Pathology Registry are included in the DaTeCa databases. Data collection has been performed from 1984 to 2007 and from 2013 onward, respectively. The retrospective DaTeCa database contains detailed information, with more than 300 variables related to histology, stage, treatment, relapses, pathology, tumor markers, kidney function, lung function, etc. A questionnaire related to late effects has been conducted, which includes questions regarding social relationships, life situation, general health status, family background, diseases, symptoms, use of medication, marital status, psychosocial issues, fertility, and sexuality. TC survivors alive in October 2014 were invited to fill in this questionnaire, which includes 160 validated questions. Collection of questionnaires is still ongoing. A biobank including blood/sputum samples for future genetic analyses has been established. Samples related to both the DaTeCa and the DMCG DaTeCa databases are included. The prospective DMCG DaTeCa database includes variables regarding histology, stage, prognostic group, and treatment. The DMCG DaTeCa database has existed since 2013 and is a young clinical database. It is necessary to extend the data collection in the prospective database in order to answer quality-related questions. Data from the retrospective database will be added to the prospective data. This will result in a large and very comprehensive database for future studies on TC patients.

  15. Montreal Archive of Sleep Studies: an open-access resource for instrument benchmarking and exploratory research.

    PubMed

    O'Reilly, Christian; Gosselin, Nadia; Carrier, Julie; Nielsen, Tore

    2014-12-01

    Manual processing of sleep recordings is extremely time-consuming. Efforts to automate this process have shown promising results, but automatic systems are generally evaluated on private databases, not allowing accurate cross-validation with other systems. Lacking a common benchmark, the relative performances of different systems cannot be compared easily, and progress is compromised. To address this fundamental methodological impediment to sleep study, we propose an open-access database of polysomnographic biosignals. To build this database, whole-night recordings from 200 participants [97 males (aged 42.9 ± 19.8 years) and 103 females (aged 38.3 ± 18.9 years); age range: 18-76 years] were pooled from eight different research protocols performed in three different hospital-based sleep laboratories. All recordings feature a sampling frequency of 256 Hz and an electroencephalography (EEG) montage of 4-20 channels plus standard electro-oculography (EOG), electromyography (EMG), electrocardiography (ECG) and respiratory signals. Access to the database can be obtained through the Montreal Archive of Sleep Studies (MASS) website (http://www.ceams-carsm.ca/en/MASS), and requires only affiliation with a research institution and prior approval by the applicant's local ethical review board. Providing the research community with access to this free and open sleep database is expected to facilitate the development and cross-validation of sleep analysis automation systems. It is also expected that such a shared resource will be a catalyst for cross-centre collaborations on difficult topics such as improving inter-rater agreement on sleep stage scoring. © 2014 European Sleep Research Society.

  16. Alternatives to relational database: comparison of NoSQL and XML approaches for clinical data storage.

    PubMed

    Lee, Ken Ka-Yin; Tang, Wai-Choi; Choi, Kup-Sze

    2013-04-01

    Clinical data are dynamic in nature, often arranged hierarchically and stored as free text and numbers. Effective management of clinical data and the transformation of the data into structured format for data analysis are therefore challenging issues in electronic health records development. Despite the popularity of relational databases, the scalability of the NoSQL database model and the document-centric data structure of XML databases appear to be promising features for effective clinical data management. In this paper, three database approaches--NoSQL, XML-enabled and native XML--are investigated to evaluate their suitability for structured clinical data. The database query performance is reported, together with our experience in the databases development. The results show that NoSQL database is the best choice for query speed, whereas XML databases are advantageous in terms of scalability, flexibility and extensibility, which are essential to cope with the characteristics of clinical data. While NoSQL and XML technologies are relatively new compared to the conventional relational database, both of them demonstrate potential to become a key database technology for clinical data management as the technology further advances. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  17. Global and Local Collaborators: A Study of Scientific Collaboration.

    ERIC Educational Resources Information Center

    Pao, Miranda Lee

    1992-01-01

    Describes an empirical study that was conducted to examine the relationship among scientific co-authorship (i.e., collaboration), research funding, and productivity. Bibliographic records from the MEDLINE database that used the subject heading for schistosomiasis are analyzed, global and local collaborators are discussed, and scientific…

  18. The PyCASSO database: spatially resolved stellar population properties for CALIFA galaxies

    NASA Astrophysics Data System (ADS)

    de Amorim, A. L.; García-Benito, R.; Cid Fernandes, R.; Cortijo-Ferrero, C.; González Delgado, R. M.; Lacerda, E. A. D.; López Fernández, R.; Pérez, E.; Vale Asari, N.

    2017-11-01

    The Calar Alto Legacy Integral Field Area (CALIFA) survey, a pioneer in integral field spectroscopy legacy projects, has fostered many studies exploring the information encoded in the spatially resolved data on gaseous and stellar features in the optical range of galaxies. We describe a value-added catalogue of stellar population properties for CALIFA galaxies analysed with the spectral synthesis code starlight and processed with the pycasso platform. Our public database (http://pycasso.ufsc.br/, mirror at http://pycasso.iaa.es/) comprises 445 galaxies from the CALIFA Data Release 3 with COMBO data. The catalogue provides maps for the stellar mass surface density, mean stellar ages and metallicities, stellar dust attenuation, star formation rates, and kinematics. Example applications both for individual galaxies and for statistical studies are presented to illustrate the power of this data set. We revisit and update a few of our own results on mass density radial profiles and on the local mass-metallicity relation. We also show how to employ the catalogue for new investigations, and present a pseudo Schmidt-Kennicutt relation made entirely with information extracted from the stellar continuum. Combinations with other databases are also illustrated. Among other results, we find a very good agreement between star formation rate surface densities derived from the stellar continuum and from the H α emission. This public catalogue joins the scientific community's effort towards transparency and reproducibility, and will be useful for researchers focusing on (or complementing their studies with) the stellar properties of CALIFA galaxies.

  19. EST-PAC a web package for EST annotation and protein sequence prediction

    PubMed Central

    Strahm, Yvan; Powell, David; Lefèvre, Christophe

    2006-01-01

    With the decreasing cost of DNA sequencing technology and the vast diversity of biological resources, researchers increasingly face the basic challenge of annotating a larger number of expressed sequence tags (ESTs) from a variety of species. This typically consists of a series of repetitive tasks, which should be automated and easy to use. The results of these annotation tasks need to be stored and organized in a consistent way. All these operations should be self-installing, platform independent, easy to customize and amenable to using distributed bioinformatics resources available on the Internet. In order to address these issues, we present EST-PAC, a web-oriented multi-platform software package for expressed sequence tag (EST) annotation. EST-PAC provides a solution for the administration of EST and protein sequence annotations accessible through a web interface. Three aspects of EST annotation are automated: 1) searching local or remote biological databases for sequence similarities using Blast services, 2) predicting protein coding sequences from EST data, and 3) annotating predicted protein sequences with functional domain predictions. In practice, EST-PAC integrates the BLASTALL suite, EST-Scan2 and HMMER in a relational database system accessible through a simple web interface. EST-PAC also takes advantage of the relational database to allow consistent storage, powerful queries of results and management of the annotation process. The system allows users to customize annotation strategies and provides an open-source data-management environment for research and education in bioinformatics. PMID:17147782

  20. Role of liver-directed local tumor therapy in the management of hepatocellular carcinoma with extrahepatic metastases: a SEER database analysis.

    PubMed

    Abdel-Rahman, Omar

    2017-02-01

    This study assessed the prognostic impact of liver-directed local tumor therapy in the management of hepatocellular carcinoma (HCC) with extrahepatic metastases. Metastatic HCC patients diagnosed between 2004 and 2013 were identified from the SEER (Surveillance, Epidemiology, and End Results) database. Propensity-matched analysis was performed considering baseline characteristics (age, gender, race, histology, TNM stage, site of metastases, fibrosis score and alpha fetoprotein). A total of 2529 patients were identified. The median age was 65 years, and 151 patients received liver-directed local treatment (either surgical treatment or local destructive treatment). Both before and after propensity score matching, cancer-specific and overall survival (p < 0.0001 for all) were better in the liver-directed local therapy group. When overall survival was stratified by the type of local treatment (surgical resection versus destructive treatment), both types of treatment improved overall survival (p < 0.0001 for both). In multivariate analysis of the matched population, the only factor correlated with better survival was receipt of local therapy (p < 0.0001). This analysis suggests that liver-directed local treatment may play a role, in addition to systemic treatment, in the management of selected patients with metastatic HCC. Further prospective randomized controlled trials are needed to confirm or refute this hypothesis.

  1. SORTEZ: a relational translator for NCBI's ASN.1 database.

    PubMed

    Hart, K W; Searls, D B; Overton, G C

    1994-07-01

    The National Center for Biotechnology Information (NCBI) has created a database collection that includes several protein and nucleic acid sequence databases, a biosequence-specific subset of MEDLINE, as well as value-added information such as links between similar sequences. Information in the NCBI database is modeled in Abstract Syntax Notation 1 (ASN.1), an Open Systems Interconnection protocol designed for exchanging structured data between software applications rather than as a data model for database systems. While the NCBI database is distributed with an easy-to-use information retrieval system, ENTREZ, the ASN.1 data model currently lacks an ad hoc query language for general-purpose data access. For that reason, we have developed a software package, SORTEZ, that transforms the ASN.1 database (or other databases with nested data structures) to a relational data model and subsequently to a relational database management system (Sybase), where information can be accessed through the relational query language SQL. Because the need to transform data from one data model and schema to another arises naturally in several important contexts, including efficient execution of specific applications, access to multiple databases, and adaptation to database evolution, this work also serves as a practical study of the issues involved in the various stages of database transformation. We show that transformation from the ASN.1 data model to a relational data model can be largely automated, but that schema transformation and data conversion require considerable domain expertise and would greatly benefit from additional support tools.
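    The heart of such a transformation, turning nested records into relational rows keyed to their parents, can be sketched as a recursive walk; this illustrates the general idea only and is not SORTEZ's actual schema mapping.

        import itertools
        import sqlite3

        _ids = itertools.count(1)

        def flatten(node, table, parent_id, conn):
            """Store the scalar fields of `node` as one row; recurse into nested records."""
            row_id = next(_ids)
            scalars = {k: v for k, v in node.items() if not isinstance(v, (dict, list))}
            conn.execute(f"CREATE TABLE IF NOT EXISTS {table} "
                         "(id INTEGER, parent_id INTEGER, data TEXT)")
            conn.execute(f"INSERT INTO {table} VALUES (?, ?, ?)",
                         (row_id, parent_id, repr(scalars)))
            for key, val in node.items():
                children = [val] if isinstance(val, dict) else val if isinstance(val, list) else []
                for child in children:
                    if isinstance(child, dict):           # nested record -> child table
                        flatten(child, f"{table}_{key}", row_id, conn)

        conn = sqlite3.connect(":memory:")
        entry = {"accession": "U00001", "title": "demo",
                 "features": [{"type": "CDS", "start": 1}, {"type": "gene", "start": 5}]}
        flatten(entry, "entry", None, conn)
        print(conn.execute("SELECT * FROM entry_features").fetchall())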

  2. Using SQL Databases for Sequence Similarity Searching and Analysis.

    PubMed

    Pearson, William R; Mackey, Aaron J

    2017-09-13

    Relational databases can integrate diverse types of information and manage large sets of similarity search results, greatly simplifying genome-scale analyses. By focusing on taxonomic subsets of sequences, relational databases can reduce the size and redundancy of sequence libraries and improve the statistical significance of homologs. In addition, by loading similarity search results into a relational database, it becomes possible to explore and summarize the relationships between all of the proteins in an organism and those in other biological kingdoms. This unit describes how to use relational databases to improve the efficiency of sequence similarity searching and demonstrates various large-scale genomic analyses of homology-related data. It also describes the installation and use of a simple protein sequence database, seqdb_demo, which is used as a basis for the other protocols. The unit also introduces search_demo, a database that stores sequence similarity search results. The search_demo database is then used to explore the evolutionary relationships between E. coli proteins and proteins in other organisms in a large-scale comparative genomic analysis. Copyright © 2017 John Wiley & Sons, Inc.
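    A minimal sketch of the pattern this unit describes, loading similarity-search hits into relational tables and querying them together with taxonomy, using sqlite3 and made-up tables rather than the seqdb_demo/search_demo schemas:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
        CREATE TABLE proteins (acc TEXT PRIMARY KEY, taxon TEXT);
        CREATE TABLE hits (query_acc TEXT, subject_acc TEXT, evalue REAL, bits REAL);
        """)
        conn.executemany("INSERT INTO proteins VALUES (?, ?)",
                         [("P1", "E. coli"), ("Q1", "H. sapiens"), ("Q2", "S. cerevisiae")])
        conn.executemany("INSERT INTO hits VALUES (?, ?, ?, ?)",
                         [("P1", "Q1", 1e-30, 120.0), ("P1", "Q2", 1e-5, 45.0)])

        # Homologs of each query below an E-value cutoff, summarized per taxon.
        for row in conn.execute("""
            SELECT h.query_acc, p.taxon, COUNT(*) AS n, MIN(h.evalue) AS best
            FROM hits h JOIN proteins p ON p.acc = h.subject_acc
            WHERE h.evalue < 1e-3
            GROUP BY h.query_acc, p.taxon"""):
            print(row)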

  3. The Holocene palaeogeography and relative sea level for two tidal basins of the German North Sea coast

    NASA Astrophysics Data System (ADS)

    Bungenstock, Friederike; Wartenberg, Wolfram; Mauz, Barbara; Freund, Holger; Frechen, Manfred; Weerts, Henk J. T.; Berner, Heinrich

    2014-05-01

    The response of coasts to global sea-level rise is highly variable. Knowledge of the driving coastal parameters alongside the regional sea-level history is therefore indispensable when the response to global sea-level rise is to be assessed. Here, we study the Holocene relative sea level of the southern coast of the North Sea, which is controlled by a number of very local parameters as well as by regional glacio-isostatic adjustments. It is therefore crucial to restrict data acquisition and evaluation to small coastal sections, ideally to single tidal basins, to minimize the sources of uncertainty (Bungenstock & Weerts 2010, 2012). We present data from two tidal basins, Langeoog and Jade Bay. For Langeoog, a database derived from 600 cores, 68 km of Boomer seismic data, 33 radiocarbon ages and 8 OSL dates is available (Bungenstock & Schäfer 2009; Mauz & Bungenstock 2007). For the Jade Bay, the database comprises sedimentary markers, pollen and macro remains derived from 68 cores. The sedimentary chronology is based on 54 radiocarbon ages and pollen constraints (Wartenberg & Freund 2011; Wartenberg et al. 2013). For both tidal basins, the sedimentological record was interpreted in terms of the local palaeogeographical development since about 7000 cal BP and its influence on the local relative sea-level curve. While the trend of the relative sea level is similar for both tidal basins, it differs in altitude. The main marine transgression in the Langeoog area took place at ~3000 cal BP, whereas the sedimentological record of the Jade Bay shows two prominent transgressions, one at ~5000 cal BP and one at ~3000 cal BP. The Langeoog palaeo-environment is continuously characterised by marine influence. Within the Jade Bay, two different palaeo-environments could be identified, documenting that from the west to the centre the landscape development was drainage driven, feeding the associated fen peat with minerogenic water while remaining autonomous from the isochronic relative sea level. This all shows how important it is to understand the differences in local landscape and depositional developments for a reliable interpretation of sea-level data. References Bungenstock, F. & Schäfer, A. (2009): The Holocene relative sea-level curve for the tidal basin of the barrier island Langeoog, German Bight, Southern North Sea. - Global and Planetary Change 66: 34-51. Bungenstock, F. & Weerts, H.J.T. (2010): The high-resolution Holocene sea-level curve for Northwest Germany: global signals, local effects or data artefacts? - International Journal of Earth Sciences 99: 1687-1706. Bungenstock, F. & Weerts, H.J.T. (2012): Holocene relative sea-level curves for the German North Sea coast. - International Journal of Earth Sciences 101: 1083-1099. Mauz, B. & Bungenstock, F. (2007): How to reconstruct trends of late Holocene relative sea level: A new approach using tidal flat clastic sediments and optical dating. - Marine Geology 237: 225-237. Wartenberg, W. & Freund, H. (2011): Late Pleistocene and Holocene sedimentary record within the Jade Bay, Lower Saxony, Northwest Germany - New aspects for the palaeoecological record. - Quaternary International: 1-11. Wartenberg, W., Vött, A., Freund, H., Hadler, H., Frechen, M., Willershäuser, T., Schnaidt, S., Fischer, P. & Obrocki, L. (2013): Evidence of isochronic transgressive surfaces within the Jade Bay tidal flat area, southern German North Sea coast - Holocene event horizons of regional interest. - Zeitschrift für Geomorphologie, Supplementary Issue. DOI: 10.1127/0372-8854/2013/S-00150

  4. Long-term differences in extinction risk among the seven forms of rarity

    PubMed Central

    Harnik, Paul G.; Simpson, Carl; Payne, Jonathan L.

    2012-01-01

    Rarity is widely used to predict the vulnerability of species to extinction. Species can be rare in markedly different ways, but the relative impacts of these different forms of rarity on extinction risk are poorly known and cannot be determined through observations of species that are not yet extinct. The fossil record provides a valuable archive with which we can directly determine which aspects of rarity lead to the greatest risk. Previous palaeontological analyses confirm that rarity is associated with extinction risk, but the relative contributions of different types of rarity to extinction risk remain unknown because their impacts have never been examined simultaneously. Here, we analyse a global database of fossil marine animals spanning the past 500 million years, examining differential extinction with respect to multiple rarity types within each geological stage. We observe systematic differences in extinction risk over time among marine genera classified according to their rarity. Geographic range played a primary role in determining extinction, and habitat breadth a secondary role, whereas local abundance had little effect. These results suggest that current reductions in geographic range size will lead to pronounced increases in long-term extinction risk even if local populations are relatively large at present. PMID:23097507

  5. Long-term differences in extinction risk among the seven forms of rarity.

    PubMed

    Harnik, Paul G; Simpson, Carl; Payne, Jonathan L

    2012-12-22

    Rarity is widely used to predict the vulnerability of species to extinction. Species can be rare in markedly different ways, but the relative impacts of these different forms of rarity on extinction risk are poorly known and cannot be determined through observations of species that are not yet extinct. The fossil record provides a valuable archive with which we can directly determine which aspects of rarity lead to the greatest risk. Previous palaeontological analyses confirm that rarity is associated with extinction risk, but the relative contributions of different types of rarity to extinction risk remain unknown because their impacts have never been examined simultaneously. Here, we analyse a global database of fossil marine animals spanning the past 500 million years, examining differential extinction with respect to multiple rarity types within each geological stage. We observe systematic differences in extinction risk over time among marine genera classified according to their rarity. Geographic range played a primary role in determining extinction, and habitat breadth a secondary role, whereas local abundance had little effect. These results suggest that current reductions in geographic range size will lead to pronounced increases in long-term extinction risk even if local populations are relatively large at present.

  6. Measurement of the local food environment: a comparison of existing data sources.

    PubMed

    Bader, Michael D M; Ailshire, Jennifer A; Morenoff, Jeffrey D; House, James S

    2010-03-01

    Studying the relation between the residential environment and health requires valid, reliable, and cost-effective methods to collect data on residential environments. This 2002 study compared the level of agreement between measures of the presence of neighborhood businesses drawn from 2 common sources of data used for research on the built environment and health: listings of businesses from commercial databases and direct observations of city blocks by raters. Kappa statistics were calculated for 6 types of businesses-drugstores, liquor stores, bars, convenience stores, restaurants, and grocers-located on 1,663 city blocks in Chicago, Illinois. Logistic regressions estimated whether disagreement between measurement methods was systematically correlated with the socioeconomic and demographic characteristics of neighborhoods. Levels of agreement between the 2 sources were relatively high, with significant (P < 0.001) kappa statistics for each business type ranging from 0.32 to 0.70. Most business types were more likely to be reported by direct observations than in the commercial database listings. Disagreement between the 2 sources was not significantly correlated with the socioeconomic and demographic characteristics of neighborhoods. Results suggest that researchers should have reasonable confidence using whichever method (or combination of methods) is most cost-effective and theoretically appropriate for their research design.
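    For reference, the kappa statistic used above measures agreement beyond chance between two raters; a minimal sketch for the present/absent case follows, with invented counts.

        def cohens_kappa(both, only_a, only_b, neither):
            """Kappa for two binary raters from a 2x2 agreement table."""
            n = both + only_a + only_b + neither
            po = (both + neither) / n                       # observed agreement
            p_yes = ((both + only_a) / n) * ((both + only_b) / n)
            p_no = ((only_b + neither) / n) * ((only_a + neither) / n)
            pe = p_yes + p_no                               # agreement expected by chance
            return (po - pe) / (1 - pe)

        # e.g. blocks where a business type was seen by observers and/or listed commercially
        print(cohens_kappa(both=120, only_a=60, only_b=20, neither=1463))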

  7. System, method and apparatus for generating phrases from a database

    NASA Technical Reports Server (NTRS)

    McGreevy, Michael W. (Inventor)

    2004-01-01

    Phrase generation is a method of generating sequences of terms, such as phrases, that may occur within a database of subsets containing sequences of terms, such as text. A database is provided and a relational model of the database is created. A query is then input. The query includes a term, a sequence of terms, multiple individual terms, multiple sequences of terms, or combinations thereof. Next, several sequences of terms that are contextually related to the query are assembled from contextual relations in the model of the database. The sequences of terms are then sorted and output. Phrase generation can also be an iterative process used to produce sequences of terms from a relational model of a database.
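    One simple reading of this method, building contextual relations as term-successor counts and then growing phrases outward from a query term, can be sketched as follows; the corpus and the greedy scoring are toy stand-ins for the patented model.

        from collections import Counter, defaultdict

        def build_model(texts):
            """Contextual relations: counts of which term follows which."""
            succ = defaultdict(Counter)
            for text in texts:
                words = text.lower().split()
                for a, b in zip(words, words[1:]):
                    succ[a][b] += 1
            return succ

        def generate_phrase(model, query, length=4):
            """Greedily extend the query with its most frequent successors."""
            phrase = [query]
            while len(phrase) < length and model[phrase[-1]]:
                phrase.append(model[phrase[-1]].most_common(1)[0][0])
            return " ".join(phrase)

        reports = ["engine fire warning light on", "engine fire during climb",
                   "fire warning light came on"]
        model = build_model(reports)
        print(generate_phrase(model, "engine"))   # e.g. "engine fire warning light"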

  8. Database Selection: One Size Does Not Fit All.

    ERIC Educational Resources Information Center

    Allison, DeeAnn; McNeil, Beth; Swanson, Signe

    2000-01-01

    Describes a strategy for selecting a delivery method for electronic resources based on experiences at the University of Nebraska-Lincoln. Considers local conditions, pricing, feature options, hardware costs, and network availability and presents a model for evaluating the decision based on dollar requirements and local issues. (Author/LRW)

  9. 75 FR 22152 - Information Collection Sent to the Office of Management and Budget (OMB) for Approval; National...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-27

    ... applications from State, local, or tribal governments; nonprofit, nongovernmental organizations; and academic institutions to advance the development of The National Map and other national geospatial databases. This... Respondents: State, local, and tribal governments; private and non-profit firms; and academic institutions...

  10. Students Investigate Local Communities with Geographic Information Systems (GIS).

    ERIC Educational Resources Information Center

    Carlstrom, Dick; Quinlan, Laurie A.

    1997-01-01

    Describes the use of Geographic Information Systems (GIS) in elementary and secondary school classrooms to analyze neighborhoods, cities, and regions. Discusses GIS software, databases, graphing data, and spatial analysis, and includes an example of a project for secondary school students investigating the local economy for summer jobs. (LRW)

  11. Forensic Tools to Track and Connect Physical Samples to Related Data

    NASA Astrophysics Data System (ADS)

    Molineux, A.; Thompson, A. C.; Baumgardner, R. W.

    2016-12-01

    Identifiers, such as local sample numbers, are critical to successfully connecting physical samples and related data. However, identifiers must be globally unique. The International Geo Sample Number (IGSN) generated when registering a sample in the System for Earth Sample Registration (SESAR) provides a globally unique alphanumeric code associated with basic metadata, related samples and their current physical storage location. When registered samples are published, users can link the figured samples to the basic metadata held at SESAR. The use cases we discuss include plant specimens from a Permian core, Holocene corals and derived powders, and thin sections with SEM stubs. Much of this material is now published. The plant taxonomic study from the core is a digital PDF, and samples can be directly linked from the captions to the SESAR record. The study of stable isotopes from the corals is not yet digitally available, but individual samples are accessible. Full data and media records for both studies are located in our database, where higher-quality images, field notes, and section diagrams may exist. Georeferences permit mapping in current and deep-time plate configurations. Several aspects emerged during this study. First, ensure that adequate and consistent details are registered with SESAR. Second, educate and encourage the researcher to obtain IGSNs. Third, publish the archive numbers, assigned prior to publication, alongside the IGSN. This provides access to further data through an Integrated Publishing Toolkit (IPT), aggregators, or online repository databases, thus placing the initial sample in a much richer context for future studies. Fourth, encourage software developers to customize community software to extract data from a database and use it to register samples in bulk. This would improve workflow and provide a path for registration of large legacy collections.

  12. PATIKAweb: a Web interface for analyzing biological pathways through advanced querying and visualization.

    PubMed

    Dogrusoz, U; Erson, E Z; Giral, E; Demir, E; Babur, O; Cetintas, A; Colak, R

    2006-02-01

    PATIKAweb provides a Web interface for retrieving and analyzing biological pathways in the Patika database, which contains data integrated from various prominent public pathway databases. It features a user-friendly interface, dynamic visualization and automated layout, advanced graph-theoretic queries for extracting biologically important phenomena, local persistence capability and exporting facilities to various pathway exchange formats.

  13. Modification Site Localization in Peptides.

    PubMed

    Chalkley, Robert J

    2016-01-01

    There are a large number of search engines designed to take mass spectrometry fragmentation spectra and match them to peptides from proteins in a database. These peptides could be unmodified, but they could also bear modifications that were added biologically or during sample preparation. As a measure of reliability for the peptide identification, software normally calculates how likely a given quality of match could have been achieved at random, most commonly through the use of target-decoy database searching (Elias and Gygi, Nat Methods 4(3): 207-214, 2007). Matching the correct peptide but with the wrong modification localization is not a random match, so results with this error will normally still be assessed as reliable identifications by the search engine. Hence, an extra step is required to determine site localization reliability, and the software approaches to measure this are the subject of this part of the chapter.
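    The target-decoy idea mentioned above has a simple core: search a concatenated target-plus-decoy database and, at a chosen score threshold, estimate the false discovery rate as the ratio of decoy to target matches. A minimal sketch with invented inputs:

        def target_decoy_fdr(matches, threshold):
            """matches: list of (score, is_decoy); FDR ~= decoys / targets above threshold."""
            targets = sum(1 for s, d in matches if s >= threshold and not d)
            decoys = sum(1 for s, d in matches if s >= threshold and d)
            return decoys / targets if targets else 0.0

        psms = [(95, False), (90, False), (88, True), (85, False), (80, True)]
        print(target_decoy_fdr(psms, threshold=84))   # 1 decoy / 3 targets ~= 0.33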

  14. A fragile zero watermarking scheme to detect and characterize malicious modifications in database relations.

    PubMed

    Khan, Aihab; Husain, Syed Afaq

    2013-01-01

    We put forward a fragile zero watermarking scheme to detect and characterize malicious modifications made to a database relation. Most of the existing watermarking schemes for relational databases introduce intentional errors or permanent distortions as marks into the database's original content. These distortions inevitably degrade data quality and data usability, as the integrity of the relational database is violated. Moreover, these fragile schemes can detect malicious data modifications but do not characterize the tampering attack, that is, the nature of the tampering. The proposed fragile scheme is based on a zero watermarking approach to detect malicious modifications made to a database relation. In zero watermarking, the watermark is generated (constructed) from the contents of the original data rather than by introducing permanent distortions as marks into the data. As a result, the proposed scheme is distortion-free; thus, it also resolves the inherent conflict between security and imperceptibility. The proposed scheme also characterizes the malicious data modifications to quantify the nature of tampering attacks. Experimental results show that even minor malicious modifications made to a database relation can be detected and characterized successfully.
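    A minimal sketch of the zero-watermarking idea, deriving the watermark from the data instead of embedding distortions, using per-row SHA-256 digests so that a later comparison both detects and localizes changes; the scheme details are illustrative and not the authors' algorithm.

        import hashlib

        def row_digest(row):
            """Canonical digest of one tuple, independent of column order."""
            payload = "|".join(f"{k}={row[k]}" for k in sorted(row))
            return hashlib.sha256(payload.encode()).hexdigest()

        def make_watermark(relation, key_attr):
            return {row[key_attr]: row_digest(row) for row in relation}

        def characterize(relation, key_attr, watermark):
            """Report which registered tuples were modified or deleted."""
            current = {row[key_attr]: row_digest(row) for row in relation}
            modified = [k for k, h in watermark.items() if current.get(k) not in (None, h)]
            deleted = [k for k in watermark if k not in current]
            return modified, deleted

        emp = [{"id": 1, "salary": 50000}, {"id": 2, "salary": 60000}]
        wm = make_watermark(emp, "id")     # registered, e.g. with a certification authority
        emp[0]["salary"] = 99999           # malicious modification
        print(characterize(emp, "id", wm)) # -> ([1], [])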

  15. Evaluation of relational and NoSQL database architectures to manage genomic annotations.

    PubMed

    Schulz, Wade L; Nelson, Brent G; Felker, Donn K; Durant, Thomas J S; Torres, Richard

    2016-12-01

    While the adoption of next generation sequencing has rapidly expanded, the informatics infrastructure used to manage the data generated by this technology has not kept pace. Historically, relational databases have provided much of the framework for data storage and retrieval. Newer technologies based on NoSQL architectures may provide significant advantages in storage and query efficiency, thereby reducing the cost of data management. But their relative advantage when applied to biomedical data sets, such as genetic data, has not been characterized. To this end, we compared the storage, indexing, and query efficiency of a common relational database (MySQL), a document-oriented NoSQL database (MongoDB), and a relational database with NoSQL support (PostgreSQL). When used to store genomic annotations from the dbSNP database, we found the NoSQL architectures to outperform traditional, relational models for speed of data storage, indexing, and query retrieval in nearly every operation. These findings strongly support the use of novel database technologies to improve the efficiency of data management within the biological sciences. Copyright © 2016 Elsevier Inc. All rights reserved.
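    For illustration, a document-oriented model of a dbSNP-style annotation and the corresponding pymongo calls; this assumes a MongoDB server on localhost and is not the study's benchmark code.

        from pymongo import ASCENDING, MongoClient

        client = MongoClient("mongodb://localhost:27017")   # assumed local server
        coll = client["genomics"]["annotations"]

        # One variant per document; nested fields avoid the join a relational model needs.
        coll.insert_one({
            "rsid": "rs123",
            "chrom": "1",
            "pos": 1014540,
            "alleles": ["G", "A"],
            "clinical": {"significance": "benign", "reviewed": True},
        })
        coll.create_index([("chrom", ASCENDING), ("pos", ASCENDING)])  # range-query index
        hit = coll.find_one({"chrom": "1", "pos": {"$gte": 1_000_000, "$lt": 2_000_000}})
        print(hit["rsid"])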

  16. Use of XML and Java for collaborative petroleum reservoir modeling on the Internet

    NASA Astrophysics Data System (ADS)

    Victorine, John; Watney, W. Lynn; Bhattacharya, Saibal

    2005-11-01

    The GEMINI (Geo-Engineering Modeling through INternet Informatics) is a public-domain, web-based freeware that is made up of an integrated suite of 14 Java-based software tools to accomplish on-line, real-time geologic and engineering reservoir modeling. GEMINI facilitates distant collaborations for small company and academic clients, negotiating analyses of both single and multiple wells. The system operates on a single server and an enterprise database. External data sets must be uploaded into this database. Feedback from GEMINI users provided the impetus to develop Stand Alone Web Start Applications of GEMINI modules that reside in and operate from the user's PC. In this version, the GEMINI modules run as applets, which may reside in local user PCs, on the server, or Java Web Start. In this enhanced version, XML-based data handling procedures are used to access data from remote and local databases and save results for later access and analyses. The XML data handling process also integrates different stand-alone GEMINI modules enabling the user(s) to access multiple databases. It provides flexibility to the user to customize analytical approach, database location, and level of collaboration. An example integrated field-study using GEMINI modules and Stand Alone Web Start Applications is provided to demonstrate the versatile applicability of this freeware for cost-effective reservoir modeling.

  17. Use of XML and Java for collaborative petroleum reservoir modeling on the Internet

    USGS Publications Warehouse

    Victorine, J.; Watney, W.L.; Bhattacharya, S.

    2005-01-01

    The GEMINI (Geo-Engineering Modeling through INternet Informatics) is a public-domain, web-based freeware that is made up of an integrated suite of 14 Java-based software tools to accomplish on-line, real-time geologic and engineering reservoir modeling. GEMINI facilitates distant collaborations for small company and academic clients, negotiating analyses of both single and multiple wells. The system operates on a single server and an enterprise database. External data sets must be uploaded into this database. Feedback from GEMINI users provided the impetus to develop Stand Alone Web Start Applications of GEMINI modules that reside in and operate from the user's PC. In this version, the GEMINI modules run as applets, which may reside in local user PCs, on the server, or Java Web Start. In this enhanced version, XML-based data handling procedures are used to access data from remote and local databases and save results for later access and analyses. The XML data handling process also integrates different stand-alone GEMINI modules enabling the user(s) to access multiple databases. It provides flexibility to the user to customize analytical approach, database location, and level of collaboration. An example integrated field-study using GEMINI modules and Stand Alone Web Start Applications is provided to demonstrate the versatile applicability of this freeware for cost-effective reservoir modeling. ?? 2005 Elsevier Ltd. All rights reserved.

  18. Current management of locally advanced head and neck cancer: the combination of chemotherapy with locoregional treatments.

    PubMed

    Bar-Ad, Voichita; Palmer, Joshua; Yang, Hushan; Cognetti, David; Curry, Joseph; Luginbuhl, Adam; Tuluc, Madalina; Campling, Barbara; Axelrod, Rita

    2014-12-01

    This review will discuss the evolution of the role of chemotherapy in the treatment of locally advanced head and neck cancer (HNC), over the last few decades. Studies were identified by searching PubMed electronic databases. Surgery followed by radiotherapy (RT) or definitive RT are potentially curative approaches for locally advanced HNC. While chemotherapy itself is not curative, it can improve cure rates when given as an adjunct to RT. The benefit of combining chemotherapy with RT is related to the timing of the chemotherapy. Several prospective randomized trials have demonstrated that concurrent delivery of chemotherapy and RT (CRT) is the most promising approach, given that locoregional recurrence is the leading pattern of failure for patients with locally advanced HNC. Induction chemotherapy before CRT has not been shown to be superior to CRT alone and the added toxicity may negatively impact the compliance with CRT. Sequential chemotherapy administration, in the form of induction chemotherapy followed by RT or CRT, has been successful as a strategy for organ preservation in patients with potentially resectable laryngeal and hypopharyngeal cancer. Systemic chemotherapy delivered concurrently with RT is used as a standard treatment for locally advanced HNC. Copyright © 2014 Elsevier Inc. All rights reserved.

  19. A Database Management System Application for the Graduate Programs Office of the School of Systems and Logistics. Volume 2. Technical Reference Manual

    DTIC Science & Technology

    1988-09-01

    [OCR-garbled excerpts from the manual's database dictionary tables and report options. Recoverable content: character-type field definitions for the local city (width 15), the local state as a two-letter abbreviation (width 2), the local ZIP code of five or nine digits (width 10), and the local phone number (width 15); a dictionary header with the columns Field Name, Type, Width, Decimal, and Comments for a dBASE file under C:\BASE; a PASCODE field (C, width 2) described as the third and fourth digits of the PAS code; and report-library settings dated 09/01/88.]

  20. Image Display in Local Database Networks

    NASA Astrophysics Data System (ADS)

    List, James S.; Olson, Frederick R.

    1989-05-01

    Dearchival of image data in the form of x-ray film provides a major challenge for radiology departments. In highly active referral environments such as tertiary care hospitals, patients may be referred to multiple clinical subspecialists within a very short time. Each clinical subspecialist frequently requires diagnostic image data to complete the diagnosis. This need for image access often interferes with the normal process of film handling and interpretation, subsequently reducing the efficiency of the department. The concept of creating a local image database on individual nursing stations utilizing the AT&T CommView Results Viewing Station (RVS) is being evaluated. Initial physician acceptance has been favorable. Objective measurements of operational productivity enhancements are in progress.

  1. Local news media framing of obesity in the context of a sugar-sweetened beverage reduction media campaign.

    PubMed

    Jeong, Michelle; Gilmore, Joelle Sano; Bleakley, Amy; Jordan, Amy

    2014-01-01

    This study examined local news media's framing of obesity preceding and surrounding the Philadelphia sugar-sweetened beverage reduction media campaign. Using key search terms pertaining to obesity and sugary beverages, the authors searched the LexisNexis database and gathered local news stories (n = 167) that were aired or published between October, 2010 and March, 2011. They conducted a content analysis, coding for framing-related outcome measures (underlying factors, action steps, and contextual agents). Overall, the news media employed individual-level framing in the majority of stories when discussing obesity, both before and after the campaign launch. After the campaign launched, however, stories were significantly more likely to mention systemic-level contextual agents such as food companies (P = .008), beverage companies (P = .03), and champions or advocates (P = .001). The researchers observed a shift in the local news media discourse toward more thematic framing of obesity, and suggest that public health officials consider the potential impact of news media frames on garnering public support for future policy implementations. Copyright © 2014 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.

  2. A searching and reporting system for relational databases using a graph-based metadata representation.

    PubMed

    Hewitt, Robin; Gobbi, Alberto; Lee, Man-Ling

    2005-01-01

    Relational databases are the current standard for storing and retrieving data in the pharmaceutical and biotech industries. However, retrieving data from a relational database requires specialized knowledge of the database schema and of the SQL query language. At Anadys, we have developed an easy-to-use system for searching and reporting data in a relational database to support our drug discovery project teams. This system is fast and flexible and allows users to access all data without having to write SQL queries. This paper presents the hierarchical, graph-based metadata representation and SQL-construction methods that, together, are the basis of this system's capabilities.
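    The essence of such a system, finding a join path through a graph of tables and foreign keys and then emitting SQL, can be sketched as a breadth-first search; the schema and naming below are hypothetical, and the actual metadata representation described in the paper is richer.

        from collections import deque

        # Schema graph: edges are foreign keys, annotated with the join condition.
        EDGES = {
            ("compound", "assay_result"): "compound.id = assay_result.compound_id",
            ("assay_result", "assay"): "assay_result.assay_id = assay.id",
        }
        GRAPH = {}
        for (a, b), cond in EDGES.items():
            GRAPH.setdefault(a, []).append((b, cond))
            GRAPH.setdefault(b, []).append((a, cond))

        def join_path(start, goal):
            """BFS over the schema graph; returns the join conditions along the path."""
            queue, seen = deque([(start, [])]), {start}
            while queue:
                table, conds = queue.popleft()
                if table == goal:
                    return conds
                for nxt, cond in GRAPH.get(table, []):
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append((nxt, conds + [cond]))
            return None

        conds = join_path("compound", "assay")
        print("SELECT * FROM compound, assay_result, assay WHERE " + " AND ".join(conds))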

  3. Dynamic texture recognition using local binary patterns with an application to facial expressions.

    PubMed

    Zhao, Guoying; Pietikäinen, Matti

    2007-06-01

    Dynamic texture (DT) is an extension of texture to the temporal domain. Description and recognition of DTs have attracted growing attention. In this paper, a novel approach for recognizing DTs is proposed and its simplifications and extensions to facial image analysis are also considered. First, the textures are modeled with volume local binary patterns (VLBP), which are an extension of the LBP operator widely used in ordinary texture analysis, combining motion and appearance. To make the approach computationally simple and easy to extend, only the co-occurrences of the local binary patterns on three orthogonal planes (LBP-TOP) are then considered. A block-based method is also proposed to deal with specific dynamic events such as facial expressions in which local information and its spatial locations should also be taken into account. In experiments with two DT databases, DynTex and Massachusetts Institute of Technology (MIT), both the VLBP and LBP-TOP clearly outperformed the earlier approaches. The proposed block-based method was evaluated with the Cohn-Kanade facial expression database with excellent results. The advantages of our approach include local processing, robustness to monotonic gray-scale changes, and simple computation.
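    The basic LBP code underlying both VLBP and LBP-TOP thresholds each pixel's neighbours against the centre value, which is what makes the descriptor robust to monotonic gray-scale changes. A minimal 8-neighbour sketch on one plane follows; LBP-TOP then concatenates histograms of such codes computed on the XY, XT and YT planes of the video volume.

        import numpy as np

        def lbp8(img):
            """Basic 8-neighbour LBP codes for the interior pixels of a 2D array."""
            c = img[1:-1, 1:-1]
            # Neighbour offsets in a fixed circular order.
            offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
            code = np.zeros_like(c, dtype=np.uint8)
            for bit, (dy, dx) in enumerate(offsets):
                neigh = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
                code |= ((neigh >= c).astype(np.uint8) << bit)
            return code

        frame = np.random.randint(0, 256, (6, 6))
        hist = np.bincount(lbp8(frame).ravel(), minlength=256)  # texture descriptor
        print(hist.sum())   # 16 interior pixels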

  4. R2 & NE Block Group - 2010 Census; Housing and Population Summary

    EPA Pesticide Factsheets

    The TIGER/Line Files are shapefiles and related database files (.dbf) that are an extract of selected geographic and cartographic information from the U.S. Census Bureau's Master Address File / Topologically Integrated Geographic Encoding and Referencing (MAF/TIGER) Database (MTDB). The MTDB represents a seamless national file with no overlaps or gaps between parts; however, each TIGER/Line File is designed to stand alone as an independent data set, or they can be combined to cover the entire nation. Block Groups (BGs) are defined before tabulation block delineation and numbering, but are clusters of blocks within the same census tract that have the same first digit of their 4-digit census block number from the same decennial census. For example, Census 2000 tabulation blocks 3001, 3002, 3003, ..., 3999 within Census 2000 tract 1210.02 are also within BG 3 within that census tract. Census 2000 BGs generally contained between 600 and 3,000 people, with an optimum size of 1,500 people. Most BGs were delineated by local participants in the Census Bureau's Participant Statistical Areas Program (PSAP). The Census Bureau delineated BGs only where the PSAP participant declined to delineate BGs or where the Census Bureau could not identify any local PSAP participant. A BG usually covers a contiguous area. Each census tract contains at least one BG, and BGs are uniquely numbered within census tract. Within the standard census geographic hierarchy, BGs never cross

  5. Management of pancreatic pseudocysts—A retrospective analysis

    PubMed Central

    Nötzel, Bärbel; Phillip, Veit; Lahmer, Tobias; Schmid, Roland M.; Algül, Hana

    2017-01-01

    Background Pancreatic pseudocysts arise mostly in patients with alcohol-induced chronic pancreatitis, causing various symptoms and complications. However, data on the optimal management are scarce. To address this problem, we retrospectively analysed patients with pancreatic pseudocysts treated at our clinic. Methods We searched our clinical database for the diagnosis pancreatitis from 2004 through 2014, selected patients with pseudocysts larger than 10 mm and entered all relevant information in a database for statistical analysis. Results In total, 129 patients with pancreatic pseudocysts were treated at our institution during the study period. Most patients suffered from alcohol-induced chronic pancreatitis (43.4%; 56/129). Pseudocysts were more frequent in females than in males (2:1) and were mainly located in the pancreatic head (47.3%; 61/129). Local complications like obstructive jaundice were associated with the diameter of the cysts (AUC 0.697 in ROC-curve analysis). However, even cysts up to a diameter of 160 mm can regress spontaneously. Besides a lower re-intervention rate in surgically treated patients, endoscopic, percutaneous and surgical drainage are equally effective. Most treatment-related complications occur in large pseudocysts located in the pancreatic head. Conclusion Conservative management of large pseudocysts is successful in many patients. Therefore, the indication for treatment should be made carefully, considering the presence and risk of local complications. Endoscopic and surgical drainage are equally effective. PMID:28877270

  6. Preliminary Aeromagnetic Map of Joshua Tree National Park and Vicinity, Southern California

    USGS Publications Warehouse

    Langenheim, V.E.; Hill, P.L.

    2010-01-01

    This aeromagnetic map of Joshua Tree National Park and vicinity is intended to promote further understanding of the geology and structure in the region by serving as a basis for geophysical interpretations and by supporting geological mapping, water-resource investigations, and various topical studies. Local spatial variations in the Earth's magnetic field (evident as anomalies on aeromagnetic maps) reflect the distribution of magnetic minerals, primarily magnetite, in the underlying rocks. In many cases the volume content of magnetic minerals can be related to rock type, and abrupt spatial changes in the amount of magnetic minerals commonly mark lithologic or structural boundaries. Bodies of mafic and ultramafic rocks tend to produce the most intense magnetic anomalies, but such generalizations must be applied with caution because rocks with more felsic compositions, or even some sedimentary units, also can cause measurable magnetic anomalies. The database includes two ASCII files containing new aeromagnetic data and two ASCII files with point locations of the local maximum horizontal gradient derived from the aeromagnetic data. This metadata file describes the horizontal gradient locations derived from new and existing aeromagnetic data. This aeromagnetic map identifies magnetic features as a basis for geophysical interpretations; the gradients help define the edges of magnetic sources. This database updates geophysical information originally presented in smaller-scale formats and includes detailed aeromagnetic data collected by EON Geosciences, Inc.

  7. Shift-invariant discrete wavelet transform analysis for retinal image classification.

    PubMed

    Khademi, April; Krishnan, Sridhar

    2007-12-01

    This work involves retinal image classification, for which a novel analysis system was developed. From the compressed domain, the proposed scheme extracts textural features from wavelet coefficients, which describe the relative homogeneity of localized areas of the retinal images. Since the discrete wavelet transform (DWT) is shift-variant, a shift-invariant DWT was explored to ensure that a robust feature set was extracted. To combat the small database size, linear discriminant analysis classification was used with the leave-one-out method. 38 normal and 48 abnormal images (exudates, large drusens, fine drusens, choroidal neovascularization, central vein and artery occlusion, histoplasmosis, arteriosclerotic retinopathy, hemi-central retinal vein occlusion and more) were used, and a specificity of 79% and sensitivity of 85.4% were achieved (the average classification rate is 82.2%). The success of the system can be attributed to the highly robust feature set, which included translation-, scale- and semi-rotation-invariant features. Additionally, this technique is database independent since the features were specifically tuned to the pathologies of the human eye.
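
    As a rough sketch of this pipeline (with an assumed wavelet, decomposition level, and random placeholder data in place of retinal images), shift-invariant wavelet energy features can be fed to leave-one-out LDA as follows:

    ```python
    # Shift-invariant (stationary) wavelet energy features + leave-one-out LDA.
    import numpy as np
    import pywt
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import LeaveOneOut, cross_val_score

    def swt_energy_features(img, wavelet="db2", level=2):
        """Energy of each detail subband of an undecimated (shift-invariant) DWT."""
        coeffs = pywt.swt2(img, wavelet, level=level)
        return np.array([np.sum(band ** 2)
                         for _, details in coeffs for band in details])

    rng = np.random.default_rng(0)
    X = np.array([swt_energy_features(rng.standard_normal((64, 64)))
                  for _ in range(20)])
    y = np.repeat([0, 1], 10)  # normal vs abnormal labels (placeholders)
    acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut()).mean()
    print(f"leave-one-out accuracy: {acc:.2f}")  # ~0.5 on random data
    ```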

  8. An evaluation of medical knowledge contained in Wikipedia and its use in the LOINC database.

    PubMed

    Friedlin, Jeff; McDonald, Clement J

    2010-01-01

    The Logical Observation Identifiers Names and Codes (LOINC) database contains 55 000 terms consisting of more atomic components called parts. LOINC carries more than 18 000 distinct parts. It is necessary to have definitions/descriptions for each of these parts to assist users in mapping local laboratory codes to LOINC. It is believed that much of this information can be obtained from the internet; the first effort was with Wikipedia. This project focused on 1705 laboratory analytes (the first part in the LOINC laboratory name). Of the 1705 parts queried, 1314 matching articles were found in Wikipedia. Of these, 1299 (98.9%) were perfect matches that exactly described the LOINC part, 15 (1.14%) were partial matches (the description in Wikipedia was related to the LOINC part, but did not describe it fully), and 102 (7.76%) were mismatches. The current releases of RELMA and LOINC include Wikipedia descriptions of LOINC parts obtained as a direct result of this project.

  9. Fishing for biodiversity: Novel methanopterin-linked C1 transfer genes deduced from the Sargasso Sea metagenome

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalyuzhnaya, Marina G.; Nercessian, Olivier; Lapidus, Alla

    2004-07-01

    The recently generated database of microbial genes from an oligotrophic environment populated by a calculated 1,800 major phylotypes (the Sargasso Sea metagenome) presents a great source for expanding local databases of genes indicative of a specific function. In this paper we analyze the Sargasso Sea metagenome in terms of the presence of methanopterin-linked C1 transfer genes that are signature for methylotrophy. We conclude that more than 10 phylotypes possessing genes of interest are present in this environment, and a few of these are relatively abundant species. The sequences representative of the major phylotypes do not appear to belong to any known microbial group capable of methanopterin-linked C1 transfer. Instead, they separate from all known sequences on phylogenetic trees, pointing towards their affiliation with a novel microbial phylum. These data imply a broader distribution of methanopterin-linked functions in the microbial world than previously known.

  10. Does language matter? A case study of epidemiological and public health journals, databases and professional education in French, German and Italian

    PubMed Central

    Baussano, Iacopo; Brzoska, Patrick; Fedeli, Ugo; Larouche, Claudia; Razum, Oliver; Fung, Isaac C-H

    2008-01-01

    Epidemiology and public health are usually context-specific. Journals published in different languages and countries play a role both as sources of data and as channels through which evidence is incorporated into local public health practice. Databases in these languages facilitate access to relevant journals, and professional education in these languages facilitates the growth of native expertise in epidemiology and public health. However, as English has become the lingua franca of scientific communication in the era of globalisation, many journals published in non-English languages face the difficult dilemma of either switching to English and competing internationally, or sticking to the native tongue and having a restricted circulation among a local readership. This paper discusses the historical development of epidemiology and the current scene of epidemiological and public health journals, databases and professional education in three Western European languages: French, German and Italian, and examines the dynamics and struggles they have today. PMID:18826570

  11. Why Save Your Course as a Relational Database?

    ERIC Educational Resources Information Center

    Hamilton, Gregory C.; Katz, David L.; Davis, James E.

    2000-01-01

    Describes a system that stores course materials for computer-based training programs in a relational database called Of Course! Outlines the basic structure of the databases; explains distinctions between Of Course! and other authoring languages; and describes how data is retrieved from the database and presented to the student. (Author/LRW)

  12. Simple Logic for Big Problems: An Inside Look at Relational Databases.

    ERIC Educational Resources Information Center

    Seba, Douglas B.; Smith, Pat

    1982-01-01

    Discusses database design concept termed "normalization" (process replacing associations between data with associations in two-dimensional tabular form) which results in formation of relational databases (they are to computers what dictionaries are to spoken languages). Applications of the database in serials control and complex systems…

  13. Prelude and Fugue, predicting local protein structure, early folding regions and structural weaknesses.

    PubMed

    Kwasigroch, Jean Marc; Rooman, Marianne

    2006-07-15

    Prelude&Fugue are bioinformatics tools that aim to predict the local 3D structure of a protein from its amino acid sequence in terms of seven backbone torsion angle domains, using database-derived potentials. Prelude(&Fugue) computes all lowest free energy conformations of a protein or protein region, ranked by increasing energy, and possibly satisfying some interresidue distance constraints specified by the user. (Prelude&)Fugue detects sequence regions whose predicted structure is significantly preferred relative to other conformations in the absence of tertiary interactions. These programs can be used for predicting secondary structure, tertiary structure of short peptides, flickering early folding sequences and peptides that adopt a preferred conformation in solution. They can also be used for detecting structural weaknesses, i.e. sequence regions that are not optimal with respect to the tertiary fold. http://babylone.ulb.ac.be/Prelude_and_Fugue.

  14. Measurement system for 3-D foot coordinates and parameters

    NASA Astrophysics Data System (ADS)

    Liu, Guozhong; Li, Yunhui; Wang, Boxiong; Shi, Hui; Luo, Xiuzhi

    2008-12-01

    A 3-D foot-shape measurement system based on the laser-line-scanning principle, together with the model of the measurement system, is presented. Errors caused by the nonlinearity of the CCD cameras and by installation can be eliminated by using a global calibration method for the CCD cameras, which is based on a nonlinear coordinate-mapping function and an optimization method. A local foot coordinate system is defined with the Pternion and the Acropodion extracted from the boundaries of foot projections. The characteristic points can thus be located, and foot parameters extracted, automatically using the local foot coordinate system and the related sections. Foot measurements for about 200 participants were conducted and the measurement results for male and female participants are presented. 3-D foot coordinate and parameter measurement makes custom shoe-making possible and shows great promise in shoe design, foot orthopaedic treatment, shoe size standardization, and the establishment of a feet database for consumers.

  15. e-Labs and Work Objects: Towards Digital Health Economies

    NASA Astrophysics Data System (ADS)

    Ainsworth, John D.; Buchan, Iain E.

    The optimal provision of healthcare and public health services requires the synthesis of evidence from multiple disciplines. It is necessary to understand the genetic, environmental, behavioural and social determinants of disease and health-related states; to balance the effectiveness of interventions with their costs; to ensure the maximum safety and acceptability of interventions; and to provide fair access to care services for given populations. Ever-expanding databases of knowledge and local health information, and the ability to employ computationally expensive methods, promise much for decisions that are both supported by best evidence and locally relevant. This promise will, however, not be realised without providing health professionals with the tools to make sense of this information-rich environment and to collaborate across disciplines. We propose, as a solution to this problem, the e-Lab and Work Objects model as a sense-making platform for digital health economies - bringing together data, methods and people for timely health intelligence.

  16. A data skimming service for locally resident analysis data

    NASA Astrophysics Data System (ADS)

    Cranshaw, J.; Gardner, R. W.; Gieraltowski, J.; Malon, D.; Mambelli, M.; May, E.

    2008-07-01

    A Data Skimming Service (DSS) is a site-level service for rapid event filtering and selection from locally resident datasets based on metadata queries to associated 'tag' databases. In US ATLAS, we expect most if not all of the AOD-based datasets to be replicated to each of the five Tier 2 regional facilities in the US Tier 1 'cloud' coordinated by Brookhaven National Laboratory. Entire datasets will consist of on the order of several terabytes of data, and providing easy, quick access to skimmed subsets of these data will be vital to physics working groups. Typically, physicists will be interested in portions of the complete datasets, selected according to event-level attributes (number of jets, missing Et, etc) and content (specific analysis objects for subsequent processing). In this paper we describe methods used to classify data (metadata tag generation) and to store these results in a local database. Next we discuss a general framework which includes methods for accessing this information, defining skims, specifying event output content, accessing locally available storage through a variety of interfaces (SRM, dCache/dccp, gridftp), accessing remote storage elements as specified, and user job submission tools through local or grid schedulers. The advantages of the DSS are the ability to quickly 'browse' datasets and design skims, for example, pre-adjusting cuts to get to a desired skim level with minimal use of compute resources, and to encode these analysis operations in a database for re-analysis and archival purposes. Additionally the framework has provisions to operate autonomously in the event that external, central resources are not available, and to provide, as a reduced package, a minimal skimming service tailored to the needs of small Tier 3 centres or individual users.
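
    The tag-database side of such a service reduces to attribute queries over event-level metadata. The miniature below uses an invented schema and cuts, not the US ATLAS implementation:

    ```python
    # Toy tag database: event attributes in a local table; a skim is a query.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("""CREATE TABLE tags (
        event_id INTEGER PRIMARY KEY, n_jets INTEGER,
        missing_et REAL, file_guid TEXT)""")
    con.executemany("INSERT INTO tags VALUES (?, ?, ?, ?)",
                    [(1, 4, 55.0, "f1"), (2, 1, 12.0, "f1"), (3, 3, 80.0, "f2")])

    # Define a skim: at least 3 jets and missing Et above 40 (units notional).
    rows = con.execute("""SELECT event_id, file_guid FROM tags
                          WHERE n_jets >= 3 AND missing_et > 40""").fetchall()
    print(rows)  # [(1, 'f1'), (3, 'f2')] -- events to read from local storage
    ```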

  17. Relational Database Design in Information Science Education.

    ERIC Educational Resources Information Center

    Brooks, Terrence A.

    1985-01-01

    Reports on database management system (dbms) applications designed by library school students for university community at University of Iowa. Three dbms design issues are examined: synthesis of relations, analysis of relations (normalization procedure), and data dictionary usage. Database planning prior to automation using data dictionary approach…

  18. Sharing Water Data to Encourage Sustainable Choices in Areas of the Marcellus Shale

    NASA Astrophysics Data System (ADS)

    Brantley, S. L.; Abad, J. D.; Vastine, J.; Yoxtheimer, D.; Wilderman, C.; Vidic, R.; Hooper, R. P.; Brasier, K.

    2012-12-01

    Natural gas sourced from shales but stored in more permeable formations has long been exploited as an energy resource. Now, however, gas is exploited directly from the low-porosity and low-permeability shale reservoirs through the use of hydrofracturing. Hydrofracturing is not a new technique: it has long been utilized in the energy industry to promote flow of oil and gas from traditional reservoirs. To exploit gas in reservoirs such as the Marcellus shale in PA, hydrofracturing is paired with directional drilling. Such hydrofracturing utilizes large volumes of water to increase porosity in the shale formations at depth. Small concentrations of chemicals are added to the water to improve the formation and maintenance of the fractures. Significant public controversy has developed in response to the use of hydrofracturing especially in the northeastern states underlain by the Marcellus shale where some citizens and scientists question whether shale gas recovery will contaminate local surface and ground waters. Researchers, government agencies, and citizen scientists in Pennsylvania are teaming up to run the ShaleNetwork (www.shalenetwork.org), an NSF-funded research collaboration network that is currently finding, collating, sharing, publishing, and exploring data related to water quality and quantity in areas that are exploiting shale gas. The effort, focussed initially on Pennsylvania, is now developing the ShaleNetwork database that can be accessed through HydroDesktop in the CUAHSI Hydrologic Information System. In the first year since inception, the ShaleNetwork ran a workshop and reached eight conclusions, largely focussed on issues related to the sources, entry, and use of data. First, the group discovered that extensive water data is available in areas of shale gas. Second, participants agreed that the Shale Network team should partner with state agencies and industry to move datasets online. Third, participants discovered that the database allows participants to assess data gaps. Fourth, the team was encouraged to search for data that plug gaps. Fifth, the database should be easily sustained by others long-term if the Shale Network team simplifies the process of uploading data and finds ways to create community buy-in or incentives for data uploads. Sixth, the database itself and the workshops for the database should drive future agreement about analytical protocols. Seventh, the database is already encouraging other groups to publish data online. Finally, a user interface is needed that is easier and more accessible for citizens to use. Overall, it is clear that sharing data is one way to build bridges among decision makers, scientists, and citizens to understand issues related to sustainable development of energy resources in the face of issues related to water quality and quantity.

  19. Mouse IDGenes: a reference database for genetic interactions in the developing mouse brain

    PubMed Central

    Matthes, Michaela; Preusse, Martin; Zhang, Jingzhong; Schechter, Julia; Mayer, Daniela; Lentes, Bernd; Theis, Fabian; Prakash, Nilima; Wurst, Wolfgang; Trümbach, Dietrich

    2014-01-01

    The study of developmental processes in the mouse and other vertebrates includes the understanding of patterning along the anterior–posterior, dorsal–ventral and medial–lateral axes. Specifically, neural development is also of great clinical relevance because several human neuropsychiatric disorders such as schizophrenia, autism disorders or drug addiction and also brain malformations are thought to have neurodevelopmental origins, i.e. pathogenesis initiates during childhood and adolescence. Impacts during early neurodevelopment might also predispose to late-onset neurodegenerative disorders, such as Parkinson’s disease. The neural tube develops from its precursor tissue, the neural plate, in a patterning process that is determined by compartmentalization into morphogenetic units, the action of local signaling centers and a well-defined and locally restricted expression of genes and their interactions. While public databases provide gene expression data with spatio-temporal resolution, they usually neglect the genetic interactions that govern neural development. Here, we introduce Mouse IDGenes, a reference database for genetic interactions in the developing mouse brain. The database is highly curated and offers detailed information about gene expressions and the genetic interactions at the developing mid-/hindbrain boundary. To showcase the predictive power of interaction data, we infer new Wnt/β-catenin target genes by machine learning and validate one of them experimentally. The database is updated regularly. Moreover, it can easily be extended by the research community. Mouse IDGenes will contribute as an important resource to the research on mouse brain development, not exclusively by offering data retrieval, but also by allowing data input. Database URL: http://mouseidgenes.helmholtz-muenchen.de. PMID:25145340

  20. International Database of Volcanic Ash Impacts

    NASA Astrophysics Data System (ADS)

    Wallace, K.; Cameron, C.; Wilson, T. M.; Jenkins, S.; Brown, S.; Leonard, G.; Deligne, N.; Stewart, C.

    2015-12-01

    Volcanic ash creates extensive impacts to people and property, yet we lack a global ash impacts catalog to organize, distribute, and archive this important information. Critical impact information is often stored in ephemeral news articles or other isolated resources, which cannot be queried or located easily. A global ash impacts database would improve 1) warning messages, 2) public and lifeline emergency preparation, and 3) eruption response and recovery. Ashfall can have varying consequences, such as disabling critical lifeline infrastructure (e.g. electrical generation and transmission, water supplies, telecommunications, aircraft and airports) or merely creating limited and expensive inconvenience to local communities. Impacts to the aviation sector can be a far-reaching global issue. The international volcanic ash impacts community formed a committee to develop a database to catalog the impacts of volcanic ash. We identify three user populations for this database: 1) research teams, who would use the database to assist in systematic collection, recording, and storage of ash impact data, and to prioritize impact assessment trips and lab experiments 2) volcanic risk assessment scientists who rely on impact data for assessments (especially vulnerability/fragility assessments); a complete dataset would have utility for global, regional, national and local scale risk assessments, and 3) citizen science volcanic hazard reporting. Publication of an international ash impacts database will encourage standardization and development of best practices for collecting and reporting impact information. Data entered will be highly categorized, searchable, and open source. Systematic cataloging of impact data will allow users to query the data and extract valuable information to aid in the development of improved emergency preparedness, response and recovery measures.

  1. Real-Time Ligand Binding Pocket Database Search Using Local Surface Descriptors

    PubMed Central

    Chikhi, Rayan; Sael, Lee; Kihara, Daisuke

    2010-01-01

    Due to the increasing number of structures of unknown function accumulated by ongoing structural genomics projects, there is an urgent need for computational methods for characterizing protein tertiary structures. As functions of many of these proteins are not easily predicted by conventional sequence database searches, a legitimate strategy is to utilize structure information in function characterization. Of particular interest is prediction of ligand binding to a protein, as ligand molecule recognition is a major part of molecular function of proteins. Predicting whether a ligand molecule binds a protein is a complex problem due to the physical nature of protein-ligand interactions and the flexibility of both binding sites and ligand molecules. However, geometric and physicochemical complementarity is observed between the ligand and its binding site in many cases. Therefore, ligand molecules which bind to a local surface site in a protein can be predicted by finding similar local pockets of known binding ligands in the structure database. Here, we present two representations of ligand binding pockets and utilize them for ligand binding prediction by pocket shape comparison. These representations are based on mapping of surface properties of binding pockets, which are compactly described either by the two-dimensional pseudo-Zernike moments or the 3D Zernike descriptors. These compact representations allow a fast real-time pocket search against a database. Thorough benchmark studies employing two different datasets show that our representations are competitive with the other existing methods. Limitations and potentials of the shape-based methods as well as possible improvements are discussed. PMID:20455259

  2. Real-time ligand binding pocket database search using local surface descriptors.

    PubMed

    Chikhi, Rayan; Sael, Lee; Kihara, Daisuke

    2010-07-01

    Because of the increasing number of structures of unknown function accumulated by ongoing structural genomics projects, there is an urgent need for computational methods for characterizing protein tertiary structures. As functions of many of these proteins are not easily predicted by conventional sequence database searches, a legitimate strategy is to utilize structure information in function characterization. Of particular interest is prediction of ligand binding to a protein, as ligand molecule recognition is a major part of molecular function of proteins. Predicting whether a ligand molecule binds a protein is a complex problem due to the physical nature of protein-ligand interactions and the flexibility of both binding sites and ligand molecules. However, geometric and physicochemical complementarity is observed between the ligand and its binding site in many cases. Therefore, ligand molecules which bind to a local surface site in a protein can be predicted by finding similar local pockets of known binding ligands in the structure database. Here, we present two representations of ligand binding pockets and utilize them for ligand binding prediction by pocket shape comparison. These representations are based on mapping of surface properties of binding pockets, which are compactly described either by the two-dimensional pseudo-Zernike moments or the three-dimensional Zernike descriptors. These compact representations allow a fast real-time pocket searching against a database. Thorough benchmark studies employing two different datasets show that our representations are competitive with the other existing methods. Limitations and potentials of the shape-based methods as well as possible improvements are discussed.
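
    Once pockets are reduced to fixed-length descriptor vectors (computed elsewhere), the real-time search itself is a nearest-neighbour scan. A sketch with placeholder descriptors and hypothetical ligand labels:

    ```python
    # Descriptor-based pocket retrieval: rank database entries by distance.
    import numpy as np

    rng = np.random.default_rng(1)
    db_desc = rng.standard_normal((1000, 121))        # placeholder descriptors
    db_ligand = [f"LIG{i % 50}" for i in range(1000)]  # hypothetical labels

    def search(query, k=5):
        """Rank database pockets by Euclidean distance to the query descriptor."""
        d = np.linalg.norm(db_desc - query, axis=1)
        order = np.argsort(d)[:k]
        return [(db_ligand[i], float(d[i])) for i in order]

    print(search(db_desc[0]))  # top hit is the query pocket itself, distance 0
    ```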

  3. Remote monitoring of patients with implanted devices: data exchange and integration.

    PubMed

    Van der Velde, Enno T; Atsma, Douwe E; Foeken, Hylke; Witteman, Tom A; Hoekstra, Wybo H G J

    2013-06-01

    Remote follow-up of implantable cardioverter-defibrillators (ICDs) may offer a solution to the problem of overcrowded outpatient clinics, and may also be effective in detecting clinical events early. Data obtained from remote follow-up systems, as developed by all major device companies, are stored in a central database system, operated and owned by the device company. A problem now arises that the patient's clinical information is partly stored in the local electronic health record (EHR) system in the hospital, and partly in the remote monitoring database, which may potentially result in patient safety issues. To address the requirement of integrating remote monitoring data in the local EHR, the Integrating the Healthcare Enterprise (IHE) Implantable Device Cardiac Observation (IDCO) profile has been developed. This IHE IDCO profile has been adopted by all major device companies. In our hospital, we have implemented the IHE IDCO profile to import data from the remote databases of two device vendors into the departmental Cardiology Information System (EPD-Vision). Data are exchanged via an HL7/XML communication protocol, as defined in the IHE IDCO profile. By implementing the IHE IDCO profile, we have been able to integrate the data from the remote monitoring databases in our local EHR. It can be expected that remote monitoring systems will develop into dedicated monitoring and therapy platforms. Data retrieved from these systems should form an integral part of the electronic patient record as more and more outpatient clinic care will shift to personalized care provided at a distance, in other words at the patient's home.

  4. Unsupervised symmetrical trademark image retrieval in soccer telecast using wavelet energy and quadtree decomposition

    NASA Astrophysics Data System (ADS)

    Ong, Swee Khai; Lim, Wee Keong; Soo, Wooi King

    2013-04-01

    Trademark, a distinctive symbol, is used to distinguish products or services provided by a particular person, group or organization from those of other similar entities. As a trademark represents the reputation and credit standing of its owner, it is important to differentiate one trademark from another. Many methods have been proposed to identify, classify and retrieve trademarks. However, most methods require a feature database and sample sets for training prior to the recognition and retrieval process. In this paper, a new feature on wavelet coefficients, the localized wavelet energy, is introduced to extract features of trademarks. With this, unsupervised content-based symmetrical trademark image retrieval is proposed without a database and prior training set. The feature analysis is done by an integration of the proposed localized wavelet energy and a quadtree-decomposed regional symmetrical vector. The proposed framework eliminates the dependence on a query database and human participation during the retrieval process. In this paper, trademarks of soccer game sponsors are the intended trademark category. Video frames from soccer telecasts are extracted and processed for this study. Reasonably good localization and retrieval results on certain categories of trademarks are achieved.
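
    One plausible reading of localized wavelet energy is a block-wise sum of squared detail coefficients; the sketch below implements that reading with an illustrative wavelet and cell size, which may differ from the authors' exact definitions:

    ```python
    # Localized wavelet energy: decompose once, sum energy per quadtree cell.
    import numpy as np
    import pywt

    def localized_wavelet_energy(img, wavelet="haar", block=16):
        _, (cH, cV, cD) = pywt.dwt2(img, wavelet)
        energy = cH ** 2 + cV ** 2 + cD ** 2
        h, w = energy.shape
        # Sum energy over non-overlapping block x block cells (quadtree leaves).
        cells = energy[:h - h % block, :w - w % block]
        cells = cells.reshape(h // block, block, w // block, block)
        return cells.sum(axis=(1, 3))

    frame = np.random.rand(128, 128)  # stand-in for a video frame
    print(localized_wavelet_energy(frame).shape)  # (4, 4) grid of local energies
    ```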

  5. Perceptions of the uses of routine general practice data beyond individual care in England: a qualitative study.

    PubMed

    Wyatt, David; Cook, Jenny; McKevitt, Christopher

    2018-01-08

    To investigate how different lay and professional groups perceive and understand the use of routinely collected general practice patient data for research, public health, service evaluation and commissioning. We conducted a multimethod, qualitative study. This entailed participant observation of the design and delivery of a series of deliberative engagement events about a local patient database made of routine primary care data. We also completed semistructured interviews with key professionals involved in the database. Qualitative data were thematically analysed. The research took place in an inner city borough in England. Of the community groups who participated in the six engagement events (111 individual citizens), five were health focused. It was difficult to recruit other types of organisations. Participants supported the uses of the database, but it was unclear how well they understood its scope and purpose. They had concerns about transparency, security and the potential misuse of data. Overall, they were more focused on the need for immediate investment in primary care capacity than data infrastructures to improve future health. The 10 interviewed professionals identified the purpose of the database in different ways, according to their interests. They emphasised the promise of the database as a resource in health research in its own right and in linking it to other datasets. Findings demonstrate positivity to the uses of this local database, but a disconnect between the long-term purposes of the database and participants' short-term priorities for healthcare quality. Varying understandings of the database and the potential for it to be used in multiple different ways in the future cement a need for systematic and routine public engagement to develop and maintain public awareness. Problems recruiting community groups signal a need to consider how we engage wider audiences more effectively. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  6. Perceptions of the uses of routine general practice data beyond individual care in England: a qualitative study

    PubMed Central

    Wyatt, David; Cook, Jenny; McKevitt, Christopher

    2018-01-01

    Objective To investigate how different lay and professional groups perceive and understand the use of routinely collected general practice patient data for research, public health, service evaluation and commissioning. Design, method, participants and setting We conducted a multimethod, qualitative study. This entailed participant observation of the design and delivery of a series of deliberative engagement events about a local patient database made of routine primary care data. We also completed semistructured interviews with key professionals involved in the database. Qualitative data were thematically analysed. The research took place in an inner city borough in England. Results Of the community groups who participated in the six engagement events (111 individual citizens), five were health focused. It was difficult to recruit other types of organisations. Participants supported the uses of the database, but it was unclear how well they understood its scope and purpose. They had concerns about transparency, security and the potential misuse of data. Overall, they were more focused on the need for immediate investment in primary care capacity than data infrastructures to improve future health. The 10 interviewed professionals identified the purpose of the database in different ways, according to their interests. They emphasised the promise of the database as a resource in health research in its own right and in linking it to other datasets. Conclusions Findings demonstrate positivity to the uses of this local database, but a disconnect between the long-term purposes of the database and participants’ short-term priorities for healthcare quality. Varying understandings of the database and the potential for it to be used in multiple different ways in the future cement a need for systematic and routine public engagement to develop and maintain public awareness. Problems recruiting community groups signal a need to consider how we engage wider audiences more effectively. PMID:29317420

  7. Data-Based Locally Directed Evaluation of Vocational Education Programs. Component 3. Assessing Student Career Interests.

    ERIC Educational Resources Information Center

    Florida State Univ., Tallahassee. Program of Vocational Education.

    Part of a system by which local education agency (LEA) personnel may evaluate secondary and postsecondary vocational education programs, this third of eight components focuses on assessment of student career interests. Procedures covered include determination of the student population to be surveyed, selection of the instrument, and discrepancy…

  8. Disaster management and the critical thinking skills of local emergency managers: correlations with age, gender, education, and years in occupation.

    PubMed

    Peerbolte, Stacy L; Collins, Matthew Lloyd

    2013-01-01

    Emergency managers must be able to think critically in order to identify and anticipate situations, solve problems, make judgements and decisions effectively and efficiently, and assume and manage risk. Heretofore, a critical thinking skills assessment of local emergency managers had yet to be conducted that tested for correlations among age, gender, education, and years in occupation. An exploratory descriptive research design, using the Watson-Glaser Critical Thinking Appraisal-Short Form (WGCTA-S), was employed to determine the extent to which a sample of 54 local emergency managers demonstrated the critical thinking skills associated with the ability to assume and manage risk as compared to the critical thinking scores of a group of 4,790 peer-level managers drawn from an archival WGCTA-S database. This exploratory design suggests that the local emergency managers, surveyed in this study, had lower WGCTA-S critical thinking scores than their equivalents in the archival database with the exception of those in the high education and high experience group. © 2013 The Author(s). Journal compilation © Overseas Development Institute, 2013.

  9. The global record of local iron geochemical data from Proterozoic through Paleozoic basins

    NASA Astrophysics Data System (ADS)

    Sperling, E. A.; Wolock, C.; Johnston, D. T.; Knoll, A. H.

    2013-12-01

    Iron-based redox proxies represent one of the most mature tools available to sedimentary geochemists. These techniques, which benefit from decades of refinement, are based on the fact that rocks deposited under anoxic conditions tend to be enriched in highly-reactive iron. However, there are myriad local controls on the development of anoxia, and no local section is an exemplar for the global ocean. The global signal must thus be determined using techniques like those developed to solve an analogous problem in paleobiology: the inference of global diversity patterns through time from faunas seen in local stratigraphic sections. Here we analyze a dataset of over 4000 iron speciation measurements (including over 600 de novo analyses) to better understand redox changes from the Proterozoic through the Paleozoic Era. Preliminary database analyses yield interesting observations. We find that although anoxic water columns in the middle Proterozoic were dominantly ferruginous, there was a statistical tendency towards euxinia not seen in early Neoproterozoic or Ediacaran data. Also, we find that in the Neoproterozoic oceans, oxic depositional environments, the likely home for early animals, have exceptionally low pyrite contents, and by inference low levels of porewater sulfide. This runs contrary to notions of sulfide stress on early metazoans. Finally, the current database of iron speciation data does not support an Ediacaran or Cambrian oxygenation event. This conclusion is of course only as sharp as the ability of the Fe-proxy database to track dissolved oxygen and does not rule out the possibility of a small-magnitude change in oxygen. It does suggest, however, that if changing pO2 facilitated animal diversification it did so by a limited rise past critical ecological thresholds, such as seen in the modern Oxygen Minimum Zone benthos. Oxygen increase to modern levels thus becomes a Paleozoic problem, and one in need of better sampling if a database approach is to be employed.
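
    The proxy logic behind such a database analysis can be stated compactly. The sketch below uses commonly cited iron-speciation thresholds (FeHR/FeT and Fepy/FeHR); published cut-offs vary, so these values are illustrative rather than the authors' exact scheme:

    ```python
    # Illustrative iron-speciation classification; thresholds vary in the
    # literature, so treat the cut-offs below as assumptions.
    def classify_redox(fe_hr, fe_t, fe_py):
        ratio_hr = fe_hr / fe_t
        if ratio_hr < 0.22:
            return "oxic"
        if ratio_hr > 0.38:  # anoxic: distinguish euxinic vs ferruginous
            return "euxinic" if fe_py / fe_hr > 0.8 else "ferruginous"
        return "equivocal"

    print(classify_redox(fe_hr=0.9, fe_t=2.0, fe_py=0.5))  # ferruginous
    ```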

  10. Risk of Infection After Intra-articular Steroid Injection at the Time of Ankle Arthroscopy in a Medicare Population.

    PubMed

    Werner, Brian C; Cancienne, Jourdan M; Burrus, M Tyrrell; Park, Joseph S; Perumal, Venkat; Cooper, M Truitt

    2016-02-01

    To employ a national database to evaluate the association between intraoperative corticosteroid injection at the time of ankle arthroscopy and postoperative infection rates in Medicare patients. A national insurance database was queried for Medicare patients who underwent ankle arthroscopy, including arthroscopic removal of loose body, synovectomy, and limited or extensive debridement. Two groups were created: ankle arthroscopy with concomitant local steroid injection (n = 459) and a control group of patients who underwent ankle arthroscopy without intraoperative local steroid injection (n = 9,327). The demographics and Charlson Comorbidity Index of each group were compared. Infection rates within 6 months postoperatively were assessed using International Classification of Diseases, 9th revision, and Current Procedural Terminology codes and compared between groups using χ² tests. A total of 9,786 unique patients who underwent ankle arthroscopy were included in the study. There were no statistically significant differences between the steroid injection study group and controls for the assessed infection-related variables, including gender, age group, obesity, smoking, and average Charlson Comorbidity Index. The infection rate for patients who had a local steroid injection at the time of surgery was 3.9% (18/459 patients), compared with 1.8% (168/9,327 patients) in the control group (odds ratio, 2.2; 95% confidence interval, 1.4 to 3.7; P = .002). The majority of this difference was noted in the 65 to 79 years age group. The use of intraoperative intra-articular corticosteroid injection at the time of ankle arthroscopy in Medicare patients is associated with significantly increased rates of postoperative infection compared with controls without intraoperative steroid injections. Copyright © 2016 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.
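
    The headline statistics can be reproduced from the counts quoted above; a quick consistency check (assuming a simple 2x2 chi-square, which may differ in detail from the authors' analysis):

    ```python
    # Recompute the odds ratio and p-value from the abstract's 2x2 counts.
    import numpy as np
    from scipy.stats import chi2_contingency

    table = np.array([[18, 459 - 18],      # steroid group: infected / not
                      [168, 9327 - 168]])  # control group: infected / not
    chi2, p, _, _ = chi2_contingency(table)
    odds_ratio = (18 / (459 - 18)) / (168 / (9327 - 168))
    print(f"OR = {odds_ratio:.1f}, p = {p:.3f}")  # ~2.2, consistent with the text
    ```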

  11. A population-based analysis of injury-related deaths and access to trauma care in rural-remote Northwest British Columbia.

    PubMed

    Simons, Richard; Brasher, Penelope; Taulu, Tracey; Lakha, Nasira; Molnar, Nadine; Caron, Nadine; Schuurman, Nadine; Evans, David; Hameed, Morad

    2010-07-01

    Injury rates and injury mortality rates are generally higher in rural and remote communities than in urban jurisdictions, as has been shown to be the case in the rural-remote area of Northwest (NW) British Columbia (BC). The purpose of this study was to identify: (1) the place and timing of death following injury in NW BC, (2) access to and quality of local trauma services, and (3) opportunities to improve trauma outcomes. Quantitative data from demographic and geographic databases, the BC Trauma Registry, the hospital discharge abstract database, and the BC Coroner's Office, along with qualitative data from chart reviews of selected major trauma cases and interviews with front-line trauma care providers, were collated and analyzed for patients sustaining injury in NW BC from April 2001 to March 2006. The majority of trauma deaths (82%) in NW BC occur prehospital. Patients arriving alive to NW hospitals have low hospital mortality (1.0%), and patients transferring from NW BC to tertiary centers have better outcomes than matched patients achieving direct entry into the tertiary center by way of geographic proximity. Access to local trauma services was compromised by: incident discovery, limited phone service (land lines/cell), incomplete 911 emergency medical services system access, geographical and climate challenges compounded by limited transportation options, airport capabilities and paramedic training level, dysfunctional hospital no-refusal policies, lack of hospital destination policies, and lack of system leadership and coordination. Improving trauma outcomes in this rural-remote jurisdiction requires a systems approach to address root causes of delays in access to care, focusing on improved access to emergency medical services, hospital bypass and destination protocols, improved transportation options, advanced life support transfer capability, and designated, coordinated local trauma services.

  12. Electromagnetic Navigational Bronchoscopy Reduces the Time Required for Localization and Resection of Lung Nodules.

    PubMed

    Bolton, William David; Cochran, Thomas; Ben-Or, Sharon; Stephenson, James E; Ellis, William; Hale, Allyson L; Binks, Andrew P

    The aims of the study were to evaluate electromagnetic navigational bronchoscopy (ENB) and computed tomography (CT)-guided placement as localization techniques for minimally invasive resection of small pulmonary nodules, and to determine whether ENB is a safer and more effective method than CT-guided localization. We performed a retrospective review of our thoracic surgery database to identify patients who underwent minimally invasive resection for a pulmonary mass and used either ENB or CT-guided localization techniques between July 2011 and May 2015. Three hundred eighty-three patients had a minimally invasive resection during our study period, 117 of whom underwent ENB or CT localization (ENB = 81; CT = 36). There was no significant difference between the CT and ENB patient groups with regard to age, sex, race, pathology, nodule size, or location. Both CT and ENB were 100% successful at localizing the mass, and there was no difference in the type of definitive surgical resection (wedge, segmentectomy, or lobectomy) (P = 0.320). Postoperative complications occurred in 36% of all patients, but there were no complications related to the localization procedures. In terms of localization time and surgical time, there was no difference between groups. However, the down/wait time between localization and resection differed significantly (CT = 189 minutes; ENB = 27 minutes), which explains why the difference in total time (sum of localization, down, and surgery times) was significant (P < 0.001). We found ENB to be as safe and effective as CT-guided wire placement, while providing a significantly decreased down time between localization and surgical resection.

  13. Opportunities in Education and Public Outreach for Scientists at the School of Ocean and Earth Sciences and Technology

    NASA Astrophysics Data System (ADS)

    Hicks, T.

    2004-12-01

    The School of Ocean and Earth Sciences and Technology (SOEST) at the University of Hawaii at Manoa is home to twelve diverse research institutes, programs and academic departments that focus on a wide range of earth and planetary sciences. SOEST's main outreach goals at the K-12 level are to increase the awareness of Hawaii's schoolchildren regarding earth, ocean, and space science, and to inspire them to consider a career in science. Education and public outreach efforts in SOEST include a variety of programs that engage students and the public in formal as well as informal educational settings, such as our biennial Open House, expedition web sites, Hawaii Ocean Science Bowl, museum exhibits, and programs with local schools. Some of the projects that allow for scientist involvement in E/PO include visiting local classrooms, volunteering in our outreach programs, submitting lessons and media files to our educational database of outreach materials relating to earth and space science research in Hawaii, developing E/PO materials to supplement research grants, and working with local museum staff as science experts.

  14. System, method and apparatus for conducting a phrase search

    NASA Technical Reports Server (NTRS)

    McGreevy, Michael W. (Inventor)

    2004-01-01

    A phrase search is a method of searching a database for subsets of the database that are relevant to an input query. First, a number of relational models of subsets of a database are provided. A query is then input. The query can include one or more sequences of terms. Next, a relational model of the query is created. The relational model of the query is then compared to each one of the relational models of subsets of the database. The identifiers of the relevant subsets are then output.
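
    One way to make the comparison step concrete is to model each text as weighted pairs of nearby terms and score subsets by pair overlap. The sketch below illustrates that general idea; the window size and weighting are arbitrary choices, not the patent's specification:

    ```python
    # Toy relational models: weighted term-pair co-occurrences within a window.
    from collections import Counter

    def relational_model(text, window=3):
        """Weight each ordered term pair by proximity within a sliding window."""
        terms = text.lower().split()
        model = Counter()
        for i, t in enumerate(terms):
            for j in range(i + 1, min(i + window, len(terms))):
                model[(t, terms[j])] += 1.0 / (j - i)
        return model

    def score(query_model, doc_model):
        return sum(w * doc_model[p] for p, w in query_model.items())

    docs = ["engine fire during takeoff", "fire drill in the cabin crew training"]
    q = relational_model("engine fire")
    print(max(docs, key=lambda d: score(q, relational_model(d))))  # first doc
    ```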

  15. Practice Benchmarking in the Age of Targeted Auditing

    PubMed Central

    Langdale, Ryan P.; Holland, Ben F.

    2012-01-01

    The frequency and sophistication of health care reimbursement auditing has progressed rapidly in recent years, leaving many oncologists wondering whether their private practices would survive a full-scale Office of the Inspector General (OIG) investigation. The Medicare Part B claims database provides a rich source of information for physicians seeking to understand how their billing practices measure up to their peers, both locally and nationally. This database was dissected by a team of cancer specialists to uncover important benchmarks related to targeted auditing. All critical Medicare charges, payments, denials, and service ratios in this article were derived from the full 2010 Medicare Part B claims database. Relevant claims were limited by using Medicare provider specialty codes 83 (hematology/oncology) and 90 (medical oncology), with an emphasis on claims filed from the physician office place of service (11). All charges, denials, and payments were summarized at the Current Procedural Terminology code level to drive practice benchmarking standards. A careful analysis of this data set, combined with the published audit priorities of the OIG, produced germane benchmarks from which medical oncologists can monitor, measure and improve on common areas of billing fraud, waste or abuse in their practices. Part II of this series and analysis will focus on information pertinent to radiation oncologists. PMID:23598847

  16. Practice benchmarking in the age of targeted auditing.

    PubMed

    Langdale, Ryan P; Holland, Ben F

    2012-11-01

    The frequency and sophistication of health care reimbursement auditing has progressed rapidly in recent years, leaving many oncologists wondering whether their private practices would survive a full-scale Office of the Inspector General (OIG) investigation. The Medicare Part B claims database provides a rich source of information for physicians seeking to understand how their billing practices measure up to their peers, both locally and nationally. This database was dissected by a team of cancer specialists to uncover important benchmarks related to targeted auditing. All critical Medicare charges, payments, denials, and service ratios in this article were derived from the full 2010 Medicare Part B claims database. Relevant claims were limited by using Medicare provider specialty codes 83 (hematology/oncology) and 90 (medical oncology), with an emphasis on claims filed from the physician office place of service (11). All charges, denials, and payments were summarized at the Current Procedural Terminology code level to drive practice benchmarking standards. A careful analysis of this data set, combined with the published audit priorities of the OIG, produced germane benchmarks from which medical oncologists can monitor, measure and improve on common areas of billing fraud, waste or abuse in their practices. Part II of this series and analysis will focus on information pertinent to radiation oncologists.

  17. Data collection and compilation for a geodatabase of groundwater, surface-water, water-quality, geophysical, and geologic data, Pecos County Region, Texas, 1930-2011

    USGS Publications Warehouse

    Pearson, Daniel K.; Bumgarner, Johnathan R.; Houston, Natalie A.; Stanton, Gregory P.; Teeple, Andrew; Thomas, Jonathan V.

    2012-01-01

    The U.S. Geological Survey, in cooperation with the Middle Pecos Groundwater Conservation District, Pecos County, City of Fort Stockton, Brewster County, and Pecos County Water Control and Improvement District No. 1, compiled groundwater, surface-water, water-quality, geophysical, and geologic data for site locations in the Pecos County region, Texas, and developed a geodatabase to facilitate use of this information. Data were compiled for an approximately 4,700-square-mile area of the Pecos County region, Texas. The geodatabase contains data from 8,242 sampling locations; it was designed to organize and store field-collected geochemical and geophysical data, as well as digital database resources from the U.S. Geological Survey, Middle Pecos Groundwater Conservation District, Texas Water Development Board, Texas Commission on Environmental Quality, and numerous other State and local databases. The geodatabase combines these disparate database resources into a simple data model. Site locations are geospatially enabled and stored in a geodatabase feature class for cartographic visualization and spatial analysis within a Geographic Information System. The sampling locations are related to hydrogeologic information through the use of geodatabase relationship classes. The geodatabase relationship classes provide the ability to perform complex spatial and data-driven queries to explore data stored in the geodatabase.

  18. Kinect Posture Reconstruction Based on a Local Mixture of Gaussian Process Models.

    PubMed

    Liu, Zhiguang; Zhou, Liuyang; Leung, Howard; Shum, Hubert P H

    2016-11-01

    Depth sensor based 3D human motion estimation hardware such as Kinect has made interactive applications more popular recently. However, it is still challenging to accurately recognize postures from a single depth camera due to the inherently noisy data derived from depth images and self-occluding action performed by the user. In this paper, we propose a new real-time probabilistic framework to enhance the accuracy of live captured postures that belong to one of the action classes in the database. We adopt the Gaussian Process model as a prior to leverage the position data obtained from Kinect and marker-based motion capture system. We also incorporate a temporal consistency term into the optimization framework to constrain the velocity variations between successive frames. To ensure that the reconstructed posture resembles the accurate parts of the observed posture, we embed a set of joint reliability measurements into the optimization framework. A major drawback of Gaussian Process is its cubic learning complexity when dealing with a large database due to the inverse of a covariance matrix. To solve the problem, we propose a new method based on a local mixture of Gaussian Processes, in which Gaussian Processes are defined in local regions of the state space. Due to the significantly decreased sample size in each local Gaussian Process, the learning time is greatly reduced. At the same time, the prediction speed is enhanced as the weighted mean prediction for a given sample is determined by the nearby local models only. Our system also allows incrementally updating a specific local Gaussian Process in real time, which enhances the likelihood of adapting to run-time postures that are different from those in the database. Experimental results demonstrate that our system can generate high quality postures even under severe self-occlusion situations, which is beneficial for real-time applications such as motion-based gaming and sport training.
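
    A minimal sketch of the local-mixture idea, assuming a k-means partition of the input space and inverse-distance weighting of the local predictions (the cluster count and weighting scheme are illustrative, not the paper's exact method):

    ```python
    # Local mixture of Gaussian Processes: one GP per region, weighted blend.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.gaussian_process import GaussianProcessRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, (300, 1))
    y = np.sin(X).ravel() + 0.05 * rng.standard_normal(300)

    km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)
    gps = [GaussianProcessRegressor().fit(X[km.labels_ == k], y[km.labels_ == k])
           for k in range(5)]

    def predict(x):
        """Weight each local GP by inverse distance to its cluster centre."""
        d = np.linalg.norm(km.cluster_centers_ - x, axis=1) + 1e-9
        w = (1.0 / d) / (1.0 / d).sum()
        return sum(wk * gp.predict(x.reshape(1, -1))[0] for wk, gp in zip(w, gps))

    print(predict(np.array([1.0])), np.sin(1.0))  # local mixture vs ground truth
    ```

    Because each local model sees only its own region's samples, the cubic cost of GP training applies to much smaller matrices, which is the learning-time advantage the abstract describes.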

  19. SU-G-TeP4-06: An Integrated Application for Radiation Therapy Treatment Plan Directives, Management, and Reporting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matuszak, M; Anderson, C; Lee, C

    Purpose: With electronic medical records, patient information for the treatment planning process has become disseminated across multiple applications with limited quality control and many associated failure modes. We present the development of a single application with a centralized database to manage the planning process. Methods: The system was designed to replace current functionalities of (i) static directives representing the physician intent for the prescription and planning goals, localization information for delivery, and other information, (ii) planning objective reports, (iii) localization and image guidance documents and (iv) the official radiation therapy prescription in the medical record. Using the Eclipse Scripting Application Programming Interface, a plug-in script with an associated domain-specific SQL Server database was created to manage the information in (i)–(iv). The system's user interface and database were designed by a team of physicians, clinical physicists, database experts, and software engineers to ensure usability and robustness for clinical use. Results: The resulting system has been fully integrated within the TPS via a custom script and database. Planning scenario templates, version control, approvals, and logic-based quality control allow this system to fully track and document the planning process as well as physician approval of tradeoffs while improving the consistency of the data. Multiple plans and prescriptions are supported, along with non-traditional dose objectives and evaluation such as biologically corrected models, composite dose limits, and management of localization goals. User-specific custom views were developed for attending physician review, physicist plan checks, treating therapists, and peer review in chart rounds. Conclusion: A method was developed to maintain cohesive information throughout the planning process within one integrated system by using a custom treatment planning management application that interfaces directly with the TPS. Future work includes quantifying the improvements in quality, safety and efficiency that are possible with the routine clinical use of this system. Supported in part by NIH-P01-CA-059827.

  20. A data analysis expert system for large established distributed databases

    NASA Technical Reports Server (NTRS)

    Gnacek, Anne-Marie; An, Y. Kim; Ryan, J. Patrick

    1987-01-01

    A design for a natural language database interface system, called the Deductively Augmented NASA Management Decision support System (DANMDS), is presented. The DANMDS system components have been chosen on the basis of the following considerations: maximal employment of the existing NASA IBM-PC computers and supporting software; local structuring and storing of external data via the entity-relationship model; a natural, easy-to-use, error-free database query language; user ability to alter the query language vocabulary and data analysis heuristics; and significant artificial intelligence data analysis heuristic techniques that allow the system to become progressively and automatically more useful.

  1. Identification of key genes related to high-risk gastrointestinal stromal tumors using bioinformatics analysis.

    PubMed

    Jin, Shuan; Zhu, Wenhua; Li, Jun

    2018-01-01

    The purpose of this study was to identify predictive biomarkers for clinical therapy and prognostic evaluation of high-risk gastrointestinal stromal tumors (GISTs). In this study, microarray data GSE31802 were used to identify differentially expressed genes (DEGs) between high-risk GISTs and low-risk GISTs. Then, enrichment analysis of the DEGs was conducted based on the Gene Ontology and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway databases. In addition, the transcription factors and cancer-related genes among the DEGs were screened according to the TRANSFAC, TSGene, and TAG databases. Finally, a protein-protein interaction (PPI) network was constructed and analyzed to look for critical genes involved in high-risk GISTs. A total of forty DEGs were obtained, and these genes were mainly involved in four pathways: melanogenesis, neuroactive ligand-receptor interaction, malaria, and hematopoietic cell lineage. The enriched biological processes were related to the regulation of insulin secretion, integrin activation, and the neuropeptide signaling pathway. Transcription factor analysis of the DEGs indicated that POU domain, class 2, associating factor 1 (POU2AF1) was significantly downregulated in high-risk GISTs. By constructing the PPI network of the DEGs, ten genes with high degrees formed local networks, such as PNOC, P2RY14, and SELP. Four genes, POU2AF1, PNOC, P2RY14, and SELP, might serve as biomarkers for the prognosis of high-risk GISTs.

  2. Component, Context and Manufacturing Model Library (C2M2L)

    DTIC Science & Technology

    2013-03-01

    Penn State team were stored in a relational database for easy access, storage and maintainability. The relational database consisted of a PostGres ...file into a format that can be imported into the PostGres database. This same custom application was used to generate Microsoft Excel templates...Press Break Forming Equipment 4.14 Manufacturing Model Library Database Structure The data storage mechanism for the ARL PSU MML was a PostGres database

  3. PACSY, a relational database management system for protein structure and chemical shift analysis.

    PubMed

    Lee, Woonghee; Yu, Wookyung; Kim, Suhkmann; Chang, Iksoo; Lee, Weontae; Markley, John L

    2012-10-01

    PACSY (Protein structure And Chemical Shift NMR spectroscopY) is a relational database management system that integrates information from the Protein Data Bank, the Biological Magnetic Resonance Data Bank, and the Structural Classification of Proteins database. PACSY provides three-dimensional coordinates and chemical shifts of atoms along with derived information such as torsion angles, solvent accessible surface areas, and hydrophobicity scales. PACSY consists of six relational table types linked to one another for coherence by key identification numbers. Database queries are enabled by advanced search functions supported by an RDBMS server such as MySQL or PostgreSQL. PACSY enables users to search for combinations of information from different database sources in support of their research. Two software packages, PACSY Maker for database creation and PACSY Analyzer for database analysis, are available from http://pacsy.nmrfam.wisc.edu.
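
    The key-linked table design described above is easy to picture with a toy schema. The sketch below is illustrative only: the table and column names are hypothetical, not PACSY's actual schema, and SQLite stands in for the MySQL/PostgreSQL servers mentioned in the abstract so the example is self-contained.

        # Joining PACSY-style tables by shared key identification numbers.
        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
        CREATE TABLE coordinates (key_id INTEGER, atom TEXT, x REAL, y REAL, z REAL);
        CREATE TABLE chemical_shifts (key_id INTEGER, atom TEXT, shift REAL);
        CREATE TABLE torsion_angles (key_id INTEGER, phi REAL, psi REAL);
        INSERT INTO coordinates VALUES (1, 'CA', 12.1, 3.4, -7.9);
        INSERT INTO chemical_shifts VALUES (1, 'CA', 56.2);
        INSERT INTO torsion_angles VALUES (1, -63.0, -42.0);
        """)

        # Combine coordinates, shifts, and derived torsion angles in one query,
        # linked for coherence by the shared key identification number.
        rows = con.execute("""
            SELECT c.atom, c.x, c.y, c.z, s.shift, t.phi, t.psi
            FROM coordinates c
            JOIN chemical_shifts s ON s.key_id = c.key_id AND s.atom = c.atom
            JOIN torsion_angles t ON t.key_id = c.key_id
        """).fetchall()
        print(rows)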

  4. Using relational databases for improved sequence similarity searching and large-scale genomic analyses.

    PubMed

    Mackey, Aaron J; Pearson, William R

    2004-10-01

    Relational databases are designed to integrate diverse types of information and manage large sets of search results, greatly simplifying genome-scale analyses. Relational databases are essential for the management and analysis of large-scale sequence analyses, and can also be used to improve the statistical significance of similarity searches by focusing on subsets of sequence libraries most likely to contain homologs. This unit describes using relational databases to improve the efficiency of sequence similarity searching and to demonstrate various large-scale genomic analyses of homology-related data. It covers the installation and use of a simple protein sequence database, seqdb_demo, which is used as a basis for the other protocols. These include basic use of the database to generate a novel sequence library subset, how to extend and use seqdb_demo for the storage of sequence similarity search results, and how to make use of various kinds of stored search results to address aspects of comparative genomic analysis.
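
    The library-subset idea is straightforward to sketch. The example below is not seqdb_demo itself (its real schema may differ); it only illustrates, with hypothetical table and column names, how a relational store can emit a FASTA subset restricted to the sequences most likely to contain homologs.

        # Generating a sequence library subset from a relational table.
        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
        CREATE TABLE protein (acc TEXT PRIMARY KEY, taxon TEXT, seq TEXT);
        INSERT INTO protein VALUES ('P1', 'E. coli', 'MKTAYIAKQR'),
                                   ('P2', 'H. sapiens', 'MEEPQSDPSV'),
                                   ('P3', 'E. coli', 'MSLNFLDFEQ');
        """)

        # Restrict the search library to one taxon so similarity statistics
        # are computed against the subset of interest.
        with open("ecoli_subset.fasta", "w") as out:
            for acc, seq in con.execute(
                    "SELECT acc, seq FROM protein WHERE taxon = 'E. coli'"):
                out.write(f">{acc}\n{seq}\n")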

  5. Local concurrent error detection and correction in data structures using virtual backpointers

    NASA Technical Reports Server (NTRS)

    Li, C. C.; Chen, P. P.; Fuchs, W. K.

    1987-01-01

    A new technique, based on virtual backpointers, for local concurrent error detection and correction in linked data structures is presented. Two new data structures, the Virtual Double Linked List and the B-tree with Virtual Backpointers, are described. For these structures, double errors can be detected in O(1) time and errors detected during forward moves can be corrected in O(1) time. The application of a concurrent auditor process to data structure error detection and correction is analyzed, and an implementation is described, to determine the effect on mean time to failure of a multi-user shared database system. The implementation utilizes a Sequent shared memory multiprocessor system operating on a shared database of Virtual Double Linked Lists.

  6. Local concurrent error detection and correction in data structures using virtual backpointers

    NASA Technical Reports Server (NTRS)

    Li, Chung-Chi Jim; Chen, Paul Peichuan; Fuchs, W. Kent

    1989-01-01

    A new technique, based on virtual backpointers, for local concurrent error detection and correction in linked data structures is presented. Two new data structures, the Virtual Double Linked List and the B-tree with Virtual Backpointers, are described. For these structures, double errors can be detected in O(1) time and errors detected during forward moves can be corrected in O(1) time. The application of a concurrent auditor process to data structure error detection and correction is analyzed, and an implementation is described, to determine the effect on mean time to failure of a multi-user shared database system. The implementation utilizes a Sequent shared memory multiprocessor system operating on a shared database of Virtual Double Linked Lists.
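
    To make the O(1) local check concrete, here is one simple encoding in the spirit of the technique; the papers' exact definition of the virtual backpointer may differ. Each node stores v = prev XOR next, so the back link is recoverable as v XOR next and every forward move can be verified in constant time.

        class Node:
            def __init__(self):
                self.next = 0   # index of successor (0 = nil sentinel)
                self.v = 0      # virtual backpointer: prev XOR next

        def build(n):
            nodes = [Node() for _ in range(n + 1)]  # nodes[0] is the nil sentinel
            for i in range(1, n + 1):
                nxt = i + 1 if i < n else 0
                nodes[i].next = nxt
                nodes[i].v = (i - 1) ^ nxt
            return nodes

        def walk_checked(nodes, head=1):
            """Forward traversal with O(1) local error detection per move."""
            prev, cur = 0, head
            while cur != 0:
                node = nodes[cur]
                # Recompute the backpointer from the virtual field; a mismatch
                # means this node's links are corrupted.
                if node.v ^ node.next != prev:
                    raise RuntimeError(f"structural error detected at node {cur}")
                prev, cur = cur, node.next

        nodes = build(5)
        walk_checked(nodes)   # passes
        nodes[3].next = 5     # inject a pointer error
        try:
            walk_checked(nodes)
        except RuntimeError as e:
            print(e)          # structural error detected at node 3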

  7. LBP and SIFT based facial expression recognition

    NASA Astrophysics Data System (ADS)

    Sumer, Omer; Gunes, Ece O.

    2015-02-01

    This study compares the performance of local binary patterns (LBP) and the scale invariant feature transform (SIFT) with support vector machines (SVM) in the automatic classification of discrete facial expressions. Facial expression recognition is a multiclass classification problem in which seven classes (happiness, anger, sadness, disgust, surprise, fear, and contempt) are classified. Using SIFT feature vectors and a linear SVM, 93.1% mean accuracy is achieved on the CK+ database. The performance of the LBP-based classifier with a linear SVM is reported on SFEW using the strictly person independent (SPI) protocol; the seven-class mean accuracy on SFEW is 59.76%. Experiments on both databases showed that LBP features can be fairly descriptive if good localization of facial points and a suitable partitioning strategy are used.
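
    The LBP-with-linear-SVM pipeline is simple enough to sketch end to end. The example below uses scikit-image and scikit-learn; random images stand in for CK+/SFEW, which must be obtained separately, and the 4x4 grid is an illustrative partitioning choice.

        import numpy as np
        from skimage.feature import local_binary_pattern
        from sklearn.svm import LinearSVC

        P, R = 8, 1  # 8 neighbors at radius 1

        def lbp_histogram(img, grid=4):
            """Concatenate per-cell histograms of uniform LBP codes."""
            codes = local_binary_pattern(img, P, R, method="uniform")
            h, w = codes.shape
            feats = []
            for i in range(grid):
                for j in range(grid):
                    cell = codes[i*h//grid:(i+1)*h//grid, j*w//grid:(j+1)*w//grid]
                    hist, _ = np.histogram(cell, bins=P + 2, range=(0, P + 2),
                                           density=True)
                    feats.append(hist)
            return np.concatenate(feats)

        # Toy data: 7 expression classes, 10 random 64x64 "face crops" each.
        rng = np.random.default_rng(0)
        imgs = (rng.random((70, 64, 64)) * 255).astype(np.uint8)
        X = np.stack([lbp_histogram(im) for im in imgs])
        y = np.repeat(np.arange(7), 10)
        clf = LinearSVC().fit(X, y)
        print(clf.score(X, y))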

  8. Relational Databases: A Transparent Framework for Encouraging Biology Students to Think Informatically

    ERIC Educational Resources Information Center

    Rice, Michael; Gladstone, William; Weir, Michael

    2004-01-01

    We discuss how relational databases constitute an ideal framework for representing and analyzing large-scale genomic data sets in biology. As a case study, we describe a Drosophila splice-site database that we recently developed at Wesleyan University for use in research and teaching. The database stores data about splice sites computed by a…

  9. SQL is Dead; Long-live SQL: Relational Database Technology in Science Contexts

    NASA Astrophysics Data System (ADS)

    Howe, B.; Halperin, D.

    2014-12-01

    Relational databases are often perceived as a poor fit in science contexts: rigid schemas, poor support for complex analytics, unpredictable performance, and significant maintenance and tuning requirements often make databases unattractive in settings characterized by heterogeneous data sources, complex analysis tasks, rapidly changing requirements, and limited IT budgets. In this talk, I'll argue that although the value proposition of typical relational database systems is weak in science, the core ideas that power relational databases have become incredibly prolific in open source science software and are emerging as a universal abstraction for both big data and small data. In addition, I'll talk about two open source systems we are building to "jailbreak" the core technology of relational databases and adapt it for use in science. The first is SQLShare, a Database-as-a-Service system supporting collaborative data analysis and exchange by reducing database use to an Upload-Query-Share workflow with no installation, schema design, or configuration required. The second is Myria, a service that supports much larger scale data and complex analytics, and supports multiple back end systems. Finally, I'll describe some of the ways our collaborators in oceanography, astronomy, biology, fisheries science, and more are using these systems to replace script-based workflows for reasons of performance, flexibility, and convenience.

  10. EasyKSORD: A Platform of Keyword Search Over Relational Databases

    NASA Astrophysics Data System (ADS)

    Peng, Zhaohui; Li, Jing; Wang, Shan

    Keyword Search Over Relational Databases (KSORD) enables casual users to use keyword queries (a set of keywords) to search relational databases just like searching the Web, without any knowledge of the database schema or any need of writing SQL queries. Based on our previous work, we design and implement a novel KSORD platform named EasyKSORD for users and system administrators to use and manage different KSORD systems in a novel and simple manner. EasyKSORD supports advanced queries, efficient data-graph-based search engines, multiform result presentations, and system logging and analysis. Through EasyKSORD, users can search relational databases easily and read search results conveniently, and system administrators can easily monitor and analyze the operations of KSORD and manage KSORD systems much better.
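
    The basic contract of KSORD, stripped of the data-graph machinery EasyKSORD actually uses, can be sketched in a few lines: the user supplies keywords only, and the system decides which columns to match. The table and schema below are hypothetical; this is an illustration of the idea, not EasyKSORD itself.

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
        CREATE TABLE papers (id INTEGER, title TEXT, abstract TEXT);
        INSERT INTO papers VALUES
          (1, 'Keyword search over relational databases', 'Casual users ...'),
          (2, 'B-tree concurrency control', 'Latching and logging ...');
        """)

        def keyword_search(keywords):
            # Score each row by how many keywords appear in any text column;
            # the user never sees the schema or writes SQL.
            clauses, params = [], []
            for kw in keywords:
                clauses.append("(title LIKE ? OR abstract LIKE ?)")
                params += [f"%{kw}%"] * 2
            score = " + ".join(f"({c})" for c in clauses)
            sql = (f"SELECT id, title, {score} AS score FROM papers "
                   "ORDER BY score DESC")
            return con.execute(sql, params).fetchall()

        for row in keyword_search(["keyword", "relational"]):
            print(row)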

  11. “NaKnowBase”: A Nanomaterials Relational Database

    EPA Science Inventory

    NaKnowBase is an internal relational database populated with data from peer-reviewed ORD nanomaterials research publications. The database focuses on papers describing the actions of nanomaterials in environmental or biological media including their interactions, transformations...

  12. A Quantitative Analysis of the Extrinsic and Intrinsic Turnover Factors of Relational Database Support Professionals

    ERIC Educational Resources Information Center

    Takusi, Gabriel Samuto

    2010-01-01

    This quantitative analysis explored the intrinsic and extrinsic turnover factors of relational database support specialists. Two hundred and nine relational database support specialists were surveyed for this research. The research was conducted based on Hackman and Oldham's (1980) Job Diagnostic Survey. Regression analysis and a univariate ANOVA…

  13. An Introduction to Database Structure and Database Machines.

    ERIC Educational Resources Information Center

    Detweiler, Karen

    1984-01-01

    Enumerates principal management objectives of database management systems (data independence, quality, security, multiuser access, central control) and criteria for comparison (response time, size, flexibility, other features). Conventional database management systems, relational databases, and database machines used for backend processing are…

  14. Considerations of Online Numeric Databases for Social Science Research,

    DTIC Science & Technology

    1983-09-01

    online user groups profit from them has greatly increased the size of the online market . International Resource Development says the revenues of...information services. Carlos Cuadra, however, feels that the customizers have been beneficial to the online market by educating users at a local level...calculations. Online data can sometimes assume a spurious authority due to the medium itself. "The market for numeric databases and systems is still

  15. Fast vessel segmentation in retinal images using multi-scale enhancement and second-order local entropy

    NASA Astrophysics Data System (ADS)

    Yu, H.; Barriga, S.; Agurto, C.; Zamora, G.; Bauman, W.; Soliz, P.

    2012-03-01

    Retinal vasculature is one of the most important anatomical structures in digital retinal photographs. Accurate segmentation of retinal blood vessels is an essential task in the automated analysis of retinopathy. This paper presents a new and effective vessel segmentation algorithm that features computational simplicity and fast implementation. The method uses morphological pre-processing to decrease the disturbance of bright structures and lesions before vessel extraction. Next, a vessel probability map is generated by computing the eigenvalues of the second derivatives of the Gaussian-filtered image at multiple scales. Then, second-order local entropy thresholding is applied to segment the vessel map. Lastly, a rule-based decision step, which measures the geometric shape difference between vessels and lesions, is applied to reduce false positives. The algorithm is evaluated on the low-resolution DRIVE and STARE databases and on the publicly available high-resolution image database from Friedrich-Alexander University (Erlangen-Nuremberg, Germany). The proposed method achieved performance comparable to state-of-the-art unsupervised vessel segmentation methods, at a faster speed, on the DRIVE and STARE databases. For the high-resolution fundus image database, the proposed algorithm outperforms an existing approach in both performance and speed. The efficiency and robustness make the blood vessel segmentation method described here suitable for broad application in the automated analysis of retinal images.
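
    The multi-scale second-derivative step is the algorithm's core and is easy to sketch. The code below is not the authors' implementation: it computes a Hessian-eigenvalue line response over a few scales with SciPy, and omits the morphological pre-processing, the entropy thresholding, and the rule-based decision step.

        import numpy as np
        from scipy import ndimage as ndi

        def vessel_probability(img, scales=(1, 2, 4)):
            """Max response over scales of a Hessian-eigenvalue line filter."""
            out = np.zeros(img.shape, dtype=float)
            for s in scales:
                # Second derivatives of the Gaussian-filtered image at scale s.
                Hrr = ndi.gaussian_filter(img, s, order=(2, 0))
                Hcc = ndi.gaussian_filter(img, s, order=(0, 2))
                Hrc = ndi.gaussian_filter(img, s, order=(1, 1))
                # Closed-form eigenvalues of the 2x2 Hessian at every pixel.
                half_trace = (Hrr + Hcc) / 2
                root = np.sqrt(((Hrr - Hcc) / 2) ** 2 + Hrc ** 2)
                lam_big = half_trace + root  # most positive eigenvalue
                # Dark vessels on a brighter background give a strongly positive
                # eigenvalue across the vessel; s**2 normalizes across scales.
                out = np.maximum(out, (s ** 2) * np.clip(lam_big, 0, None))
            return out

        # Toy usage: a dark horizontal "vessel" on a bright background.
        img = np.ones((64, 64))
        img[32, :] = 0.0
        vp = vessel_probability(img)
        print(vp[32].mean() > vp[10].mean())  # True: strong response on the line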

  16. Cardiac registers: the adult cardiac surgery register.

    PubMed

    Bridgewater, Ben

    2010-09-01

    AIMS OF THE SCTS ADULT CARDIAC SURGERY DATABASE: To measure the quality of care of adult cardiac surgery in Great Britain and Ireland and provide information for quality improvement and research. Feedback of structured data to hospitals, publication of named hospital and surgeon mortality data, publication of benchmarked activity and risk-adjusted clinical outcomes through intermittent comprehensive database reports, and annual screening of all hospital and individual surgeon risk-adjusted mortality rates by the professional society. All NHS hospitals in England, Scotland and Wales, with input from some private providers and hospitals in Ireland. 1994-ongoing. Consecutive patients, unconsented. Current number of records: 400,000. Adult cardiac surgery operations excluding cardiac transplantation and ventricular assist devices. 129 fields covering demographic factors, pre-operative risk factors, operative details and post-operative in-hospital outcomes. Entry onto local software systems by direct keyboard entry or subsequent transcription from paper records, with subsequent electronic upload to the central cardiac audit database. Non-financial incentives at hospital level. Local validation processes exist in the hospitals. There is currently no external data validation process. All-cause mortality is obtained through linkage with the Office for National Statistics. No other linkages exist at present. Available for research and audit by application to the SCTS database committee at http://www.scts.org.

  17. Face antispoofing based on frame difference and multilevel representation

    NASA Astrophysics Data System (ADS)

    Benlamoudi, Azeddine; Aiadi, Kamal Eddine; Ouafi, Abdelkrim; Samai, Djamel; Oussalah, Mourad

    2017-07-01

    Due to advances in technology, today's biometric systems have become vulnerable to spoof attacks made with fake faces. These attacks occur when an intruder attempts to fool an established face-based recognition system by presenting a fake face (e.g., a print photo or replay attack) in front of the camera instead of the intruder's genuine face. For this reason, face antispoofing has become a hot topic in the face analysis literature, and several applications with an antispoofing task have emerged recently. We propose a solution for distinguishing between real faces and fake ones. Our approach is based on extracting features from the difference between successive frames instead of from individual frames. We also use a multilevel representation that divides the frame difference into multiple blocks. Different texture descriptors (local binary patterns, local phase quantization, and binarized statistical image features) are then applied to each block. After the feature extraction step, a Fisher score is applied to sort the features in ascending order according to the associated weights. Finally, a support vector machine is used to differentiate between real and fake faces. We tested our approach on three publicly available databases: the CASIA Face Antispoofing database, the Replay-Attack database, and the MSU Mobile Face Spoofing database. The proposed approach outperforms other state-of-the-art methods across different media and quality metrics.
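
    Among the steps above, the Fisher-score ranking is the easiest to isolate. The sketch below computes a per-feature Fisher score (between-class over within-class variance) and keeps the highest-weighted features; the random vectors are stand-ins for the multiblock LBP/LPQ/BSIF descriptors, and the cut-off is an illustrative choice.

        import numpy as np

        def fisher_scores(X, y):
            """Per-feature Fisher score: between-class over within-class variance."""
            mu = X.mean(axis=0)
            num = np.zeros(X.shape[1])
            den = np.zeros(X.shape[1])
            for c in np.unique(y):
                Xc = X[y == c]
                num += len(Xc) * (Xc.mean(axis=0) - mu) ** 2
                den += len(Xc) * Xc.var(axis=0)
            return num / (den + 1e-12)

        rng = np.random.default_rng(1)
        X = rng.normal(size=(200, 512))   # real vs. fake face features
        y = np.repeat([0, 1], 100)
        X[y == 1, :32] += 1.0             # make the first 32 features informative
        scores = fisher_scores(X, y)
        keep = np.argsort(scores)[::-1][:64]  # retain the top-weighted features
        print(sorted(keep[:10]))              # mostly indices below 32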

  18. Impacts of European drought events: insights from an international database of text-based reports

    NASA Astrophysics Data System (ADS)

    Stahl, K.; Kohn, I.; Blauhut, V.; Urquijo, J.; De Stefano, L.; Acacio, V.; Dias, S.; Stagge, J. H.; Tallaksen, L. M.; Kampragou, E.; Van Loon, A. F.; Barker, L. J.; Melsen, L. A.; Bifulco, C.; Musolino, D.; de Carli, A.; Massarutto, A.; Assimacopoulos, D.; Van Lanen, H. A. J.

    2015-09-01

    Drought is a natural hazard that can cause a wide range of impacts affecting the environment, society, and the economy. Assessing and reducing vulnerability to these impacts for regions beyond the local scale, spanning political and sectoral boundaries, requires systematic and detailed data regarding impacts. This study presents an assessment of the diversity of drought impacts across Europe based on the European Drought Impact report Inventory (EDII), a unique research database that has collected close to 5000 impact reports from 33 European countries. The reported drought impacts were classified into major impact categories, each of which had a number of subtypes. The distribution of these categories and types was then analyzed over time, by country, across Europe and for particular drought events. The results show that impacts on agriculture and public water supply dominate the collection of drought impact reports for most countries and for all major drought events since the 1970s, while the number and relative fractions of reported impacts in other sectors can vary regionally and from event to event. The data also show that reported impacts have increased over time as more media and website information has become available and environmental awareness has increased. Even though the distribution of impact categories is relatively consistent across Europe, the details of the reports show some differences. They confirm severe impacts in southern regions (particularly on agriculture and public water supply) and sector-specific impacts in central and northern regions (e.g. on forestry or energy production). As a text-based database, the EDII presents a new challenge for quantitative analysis; however, it provides a new and more comprehensive view on drought impacts. Related studies have already developed statistical techniques to evaluate the link between drought indices and impacts using the EDII. The EDII is a living database and is a promising source for further research on drought impacts, vulnerabilities, and risks across Europe. A key result is the documentation of the extensive variety of impacts found across Europe; this data coverage may help drought policy planning at national to international levels.

  19. An Introduction to MAMA (Meta-Analysis of MicroArray data) System.

    PubMed

    Zhang, Zhe; Fenstermacher, David

    2005-01-01

    Analyzing microarray data across multiple experiments has proven advantageous. To support this kind of analysis, we are developing a software system called MAMA (Meta-Analysis of MicroArray data). MAMA utilizes a client-server architecture with a relational database on the server side for the storage of microarray datasets collected from various resources. The client side is an application running on the end user's computer that allows the user to manipulate microarray data and analytical results locally. The MAMA implementation will integrate several analytical methods, including meta-analysis, within an open-source framework, offering other developers the flexibility to plug in additional statistical algorithms.

  20. The biogeochemical distribution of trace elements in the Indian Ocean

    NASA Astrophysics Data System (ADS)

    Saager, Paul M.

    1994-06-01

    The present review deals with the distributions of dissolved trace metals in the Indian Ocean in relation to biological, chemical and hydrographic processes. The literature database is extremely limited, and almost no information is available on particle processes or on the input and output processes of trace metals in the Indian Ocean basin; much research is therefore needed to expand our understanding of the marine chemistry of most trace metals. An area of special interest for future research is the Arabian Sea, where the local conditions (upwelling-induced productivity, restricted bottom water circulation and suboxic intermediate waters) create a natural laboratory for studying trace metal chemistry.

  1. Iliac Crest Bone Graft versus Local Autograft or Allograft for Lumbar Spinal Fusion: A Systematic Review.

    PubMed

    Tuchman, Alexander; Brodke, Darrel S; Youssef, Jim A; Meisel, Hans-Jörg; Dettori, Joseph R; Park, Jong-Beom; Yoon, S Tim; Wang, Jeffrey C

    2016-09-01

    Systematic review. To compare the effectiveness and safety between iliac crest bone graft (ICBG) and local autologous bone and allograft in the lumbar spine. A systematic search of multiple major medical reference databases identified studies evaluating spinal fusion in patients with degenerative joint disease using ICBG, local autograft, or allograft in the thoracolumbar spine. Six comparative studies met our inclusion criteria. A "low" strength of the overall body of evidence suggested no difference in fusion percentages in the lumbar spine between local autograft and ICBG. We found no difference in fusion percentages based on low evidence comparing allograft with ICBG autograft. There were no differences in pain or functional results comparing local autograft or allograft with ICBG autograft. Donor site pain and hematoma/seroma occurred more frequently in ICBG autograft group for lumbar fusion procedures. There was low evidence around the estimate of patients with donor site pain following ICBG harvesting, ranging from 16.7 to 20%. With respect to revision, low evidence demonstrated no difference between allograft and ICBG autograft. There was no evidence comparing patients receiving allograft with local autograft for fusion, pain, functional, and safety outcomes. In the lumbar spine, ICBG, local autograft, and allograft have similar effectiveness in terms of fusion rates, pain scores, and functional outcomes. However, ICBG is associated with an increased risk for donor site-related complications. Significant limitations exist in the available literature when comparing ICBG, local autograft, and allograft for lumbar fusion, and thus ICBG versus other fusion methods necessitates further investigation.

  2. Protein Folding and Structure Prediction from the Ground Up: The Atomistic Associative Memory, Water Mediated, Structure and Energy Model.

    PubMed

    Chen, Mingchen; Lin, Xingcheng; Zheng, Weihua; Onuchic, José N; Wolynes, Peter G

    2016-08-25

    The associative memory, water mediated, structure and energy model (AWSEM) is a coarse-grained force field with transferable tertiary interactions that incorporates local-in-sequence energetic biases using bioinformatically derived structural information about peptide fragments with locally similar sequences, which we call memories. The memory information from the Protein Data Bank (PDB) guides proper protein folding. The structural information about available sequences in the database varies in quality and can sometimes lead to locally frustrated free energy landscapes. One way out of this difficulty is to construct the input fragment memory information from all-atom simulations of portions of the complete polypeptide chain. In this paper, we investigate this approach, first put forward by Kwac and Wolynes, in a more complete way by studying its structure prediction capabilities for six α-helical proteins. This scheme, which we call the atomistic associative memory, water mediated, structure and energy model (AAWSEM), amounts to an ab initio protein structure prediction method that starts from the ground up, without bioinformatic input. The free energy profiles from AAWSEM show that atomistic fragment memories are sufficient to guide correct folding when tertiary forces are included. AAWSEM combines the efficiency of coarse-grained simulations on the full protein level with the local structural accuracy achievable from all-atom simulations of only parts of a large protein. The results suggest that a hybrid use of atomistic fragment memory and database memory in structure prediction may well be optimal for many practical applications.

  3. Modeling, Simulation, and Characterization of Distributed Multi-Agent Systems

    DTIC Science & Technology

    2012-01-01

    capabilities (vision, LIDAR , differential global positioning, ultrasonic proximity sensing, etc.), the agents comprising a MAS tend to have somewhat lesser...on the simultaneous localization and mapping ( SLAM ) problem [19]. SLAM acknowledges that externally-provided localization information is not...continually-updated mapping databases, generates a comprehensive representation of the spatial and spectral environment. Many times though, inherent SLAM

  4. The national database of wildfire mitigation programs: state, county and local efforts reduce wildfire risk

    Treesearch

    Terry Haines; Cheryl Renner; Margaret Reams; James Granskog

    2005-01-01

    The growth of residential communities within forested areas has increased the danger to life and property from uncontrolled wildfire. In response, states, counties and local governments in the United States have dramatically increased their wildfire mitigation efforts. Policymakers and fire officials are employing a wide range of regulatory and voluntary wildfire risk...

  5. Italian Present-day Stress Indicators: IPSI Database

    NASA Astrophysics Data System (ADS)

    Mariucci, M. T.; Montone, P.

    2017-12-01

    In Italy, since the 1990s, research concerning the contemporary stress field has been carried out at the Istituto Nazionale di Geofisica e Vulcanologia (INGV) through local and regional scale studies. Throughout the years many data have been analysed and collected: they are now organized and available online for easy end-use. The IPSI (Italian Present-day Stress Indicators) database is the first geo-referenced repository of information on the crustal present-day stress field maintained at INGV, with the web application database and website developed by Gabriele Tarabusi. Data consist of horizontal stress orientations analysed and compiled in a standardized format and quality-ranked for reliability and comparability on a global scale with other databases. Our first database release includes 855 data records updated to December 2015. Here we present an updated version that will be released in 2018, after new earthquake data entry up to December 2017. The IPSI web site (http://ipsi.rm.ingv.it/) allows users to access data on a standard map viewer and easily choose which data (category and/or quality) to plot. The main information for each element (type, quality, orientation) can be viewed simply by hovering over the related symbol; all the information appears by clicking the element. At the same time, basic information on the different data types, tectonic regime assignment, and the quality ranking method is available in pop-up windows. Data records can be downloaded in several common formats; moreover, it is possible to download a file directly usable with SHINE, a web-based application to interpolate stress orientations (http://shine.rm.ingv.it). IPSI is mainly conceived for those interested in studying the characteristics of the Italian peninsula and its surroundings, although the Italian data are part of the World Stress Map (http://www.world-stress-map.org/), as evidenced by many links that redirect to this database for more details on standard practices in this field.

  6. OGRO: The Overview of functionally characterized Genes in Rice online database.

    PubMed

    Yamamoto, Eiji; Yonemaru, Jun-Ichi; Yamamoto, Toshio; Yano, Masahiro

    2012-12-01

    The high-quality sequence information and rich bioinformatics tools available for rice have contributed to remarkable advances in functional genomics. To facilitate the application of gene function information to the study of natural variation in rice, we comprehensively searched for articles related to rice functional genomics and extracted information on functionally characterized genes. As of 31 March 2012, 702 functionally characterized genes were annotated. This number represents about 1.6% of the predicted loci in the Rice Annotation Project Database. The compiled gene information is organized to facilitate direct comparisons with quantitative trait locus (QTL) information in the Q-TARO database. Comparison of genomic locations between functionally characterized genes and the QTLs revealed that QTL clusters were often co-localized with high-density gene regions, and that the genes associated with the QTLs in these clusters were different genes, suggesting that these QTL clusters are likely to be explained by tightly linked but distinct genes. Information on the functionally characterized genes compiled during this study is now available in the Overview of Functionally Characterized Genes in Rice Online database (OGRO) on the Q-TARO website (http://qtaro.abr.affrc.go.jp/ogro). The database has two interfaces: a table containing gene information, and a genome viewer that allows users to compare the locations of QTLs and functionally characterized genes. OGRO on Q-TARO will facilitate a candidate-gene approach to identifying the genes responsible for QTLs. Because the QTL descriptions in Q-TARO contain information on agronomic traits, such comparisons will also facilitate the annotation of functionally characterized genes in terms of their effects on traits important for rice breeding. The increasing amount of information on rice gene function being generated from mutant panels and other types of studies will make the OGRO database even more valuable in the future.

  7. Data base management system for lymphatic filariasis--a neglected tropical disease.

    PubMed

    Upadhyayula, Suryanaryana Murty; Mutheneni, Srinivasa Rao; Kadiri, Madhusudhan Rao; Kumaraswamy, Sriram; Nelaturu, Sarat Chandra Babu

    2012-01-01

    Researchers working in the area of public health are confronted with large volumes of data on various aspects of entomology and epidemiology. Obtaining the relevant information from these data requires a suitable database management system. In this paper, we describe the usage of the database we developed on lymphatic filariasis. This database application is developed using the Model View Controller (MVC) architecture, with MySQL as the database and a web-based interface. We have collected and incorporated into the database data on filariasis from the Karimnagar, Chittoor, and East and West Godavari districts of Andhra Pradesh, India. The purpose of this database is to store the collected data, retrieve information, and produce various combined reports on filarial aspects, which in turn will help public health officials understand the burden of the disease in a particular locality. This information is likely to play an imperative role in decision making for effective control of filarial disease and integrated vector management operations.

  8. “NaKnowBase”: A Nanomaterials Relational Database

    EPA Science Inventory

    NaKnowBase is a relational database populated with data from peer-reviewed ORD nanomaterials research publications. The database focuses on papers describing the actions of nanomaterials in environmental or biological media including their interactions, transformations and poten...

  9. Patient Race and Outcome Preferences as Predictors of Urologists’ Treatment Recommendations and Referral Patterns in Early-Stage Prostate Cancer

    DTIC Science & Technology

    2005-11-01

    care for localized prostate cancer. To date, we have completed all survey mailings, collected responses, entered these into an Access database, and...vignette, patient socioeconomic status, not race, influenced treatment recommendations for localized prostate cancer. A majority of urologists rate their...in patterns of care for localized prostate cancer. See Introduction (page 14) and Methods (pages 15-17) in Appendix B for details. Key research

  10. Stage-independent, single lead EEG sleep spindle detection using the continuous wavelet transform and local weighted smoothing.

    PubMed

    Tsanas, Athanasios; Clifford, Gari D

    2015-01-01

    Sleep spindles are critical in characterizing sleep and have been associated with cognitive function and pathophysiological assessment. Typically, their detection relies on the subjective and time-consuming visual examination of electroencephalogram (EEG) signal(s) by experts, and has led to large inter-rater variability as a result of poor definition of sleep spindle characteristics. Hitherto, many algorithmic spindle detectors inherently make signal stationarity assumptions (e.g., Fourier transform-based approaches) which are inappropriate for EEG signals, and frequently rely on additional information which may not be readily available in many practical settings (e.g., more than one EEG channels, or prior hypnogram assessment). This study proposes a novel signal processing methodology relying solely on a single EEG channel, and provides objective, accurate means toward probabilistically assessing the presence of sleep spindles in EEG signals. We use the intuitively appealing continuous wavelet transform (CWT) with a Morlet basis function, identifying regions of interest where the power of the CWT coefficients corresponding to the frequencies of spindles (11-16 Hz) is large. The potential for assessing the signal segment as a spindle is refined using local weighted smoothing techniques. We evaluate our findings on two databases: the MASS database comprising 19 healthy controls and the DREAMS sleep spindle database comprising eight participants diagnosed with various sleep pathologies. We demonstrate that we can replicate the experts' sleep spindles assessment accurately in both databases (MASS database: sensitivity: 84%, specificity: 90%, false discovery rate 83%, DREAMS database: sensitivity: 76%, specificity: 92%, false discovery rate: 67%), outperforming six competing automatic sleep spindle detection algorithms in terms of correctly replicating the experts' assessment of detected spindles.
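
    The band-limited CWT step at the heart of the method is compact enough to sketch. The example below uses PyWavelets with a complex Morlet basis to flag regions where spindle-band (11-16 Hz) power is unusually large; the sampling rate, threshold, and toy signal are illustrative assumptions, and the local weighted smoothing and probabilistic refinement stages are not reproduced.

        import numpy as np
        import pywt

        fs = 200.0                    # sampling rate (Hz), assumed
        t = np.arange(0, 30, 1 / fs)  # 30 s of toy single-channel EEG
        eeg = np.random.randn(t.size)
        eeg[2000:2200] += 3 * np.sin(2 * np.pi * 13 * t[2000:2200])  # fake spindle

        wavelet = "cmor1.5-1.0"       # complex Morlet basis
        freqs = np.arange(11.0, 16.5, 0.5)
        fc = pywt.central_frequency(wavelet)
        scales = fc * fs / freqs      # map target frequencies to CWT scales

        coef, _ = pywt.cwt(eeg, scales, wavelet, sampling_period=1 / fs)
        band_power = (np.abs(coef) ** 2).mean(axis=0)

        # Flag regions of interest where spindle-band power is unusually large.
        thresh = band_power.mean() + 3 * band_power.std()
        candidates = np.where(band_power > thresh)[0]
        print(candidates.size, "samples above threshold (expect the 10-11 s region)")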

  11. Integration of environmental simulation models with satellite remote sensing and geographic information systems technologies: case studies

    USGS Publications Warehouse

    Steyaert, Louis T.; Loveland, Thomas R.; Brown, Jesslyn F.; Reed, Bradley C.

    1993-01-01

    Environmental modelers are testing and evaluating a prototype land cover characteristics database for the conterminous United States developed by the EROS Data Center of the U.S. Geological Survey and the University of Nebraska Center for Advanced Land Management Information Technologies. This database was developed from multitemporal, 1-kilometer advanced very high resolution radiometer (AVHRR) data for 1990 and various ancillary data sets such as elevation, ecological regions, and selected climatic normals. Several case studies using this database were analyzed to illustrate the integration of satellite remote sensing and geographic information systems technologies with land-atmosphere interaction models at a variety of spatial and temporal scales. The case studies are representative of contemporary environmental simulation modeling at local to regional levels in global change research, land and water resource management, and environmental risk assessment. The case studies feature land surface parameterizations for atmospheric mesoscale and global climate models; biogenic-hydrocarbon emissions models; distributed parameter watershed and other hydrological models; and various ecological models covering ecosystem dynamics, biogeochemical cycles, ecotone variability, and equilibrium vegetation. The case studies demonstrate the importance of multitemporal AVHRR data for developing and maintaining a flexible, near-realtime land cover characteristics database. Moreover, such a flexible database is needed to derive various vegetation classification schemes, to aggregate data for nested models, to develop remote sensing algorithms, and to provide data on dynamic landscape characteristics. The case studies illustrate how such a database supports research on spatial heterogeneity, land use, sensitivity analysis, and scaling issues involving regional extrapolations and parameterizations of dynamic land processes within simulation models.

  12. Saudi anti-human cancer plants database (SACPD): A collection of plants with anti-human cancer activities

    PubMed Central

    Al-Zahrani, Ateeq Ahmed

    2018-01-01

    Several anticancer drugs have been developed from natural products such as plants. Successful experiments in inhibiting the growth of human cancer cell lines using Saudi plants have been published over the last three decades. To date, there has been no Saudi anticancer plants database serving as a comprehensive source for the interesting data generated from these experiments. Therefore, there was a need to create a database to collect, organize, search and retrieve such data. Accordingly, the current paper describes the generation of the Saudi anti-human cancer plants database (SACPD). The database contains most of the reported information about naturally growing Saudi anticancer plants. SACPD comprises the scientific and local names of 91 plant species that grow naturally in Saudi Arabia. These species belong to 38 different taxonomic families. In addition, 18 species representing 16 families of medicinal plants that are intensively sold in local markets in Saudi Arabia were added to the database. The website provides interesting details, including the plant part containing the anticancer bioactive compounds, plant locations, and the cancer/cell type against which the plants exhibit their anticancer activity. Our survey revealed that breast, liver and leukemia cell lines were the most studied cancer cell lines in Saudi Arabia, with percentages of 27%, 19% and 15%, respectively. The current SACPD represents a nucleus around which further development efforts can expand to accommodate all future submissions about new Saudi plant species with anticancer activities. SACPD will provide an excellent starting point for researchers and pharmaceutical companies interested in developing new anticancer drugs. SACPD is available online at https://teeqrani1.wixsite.com/sapd PMID:29774137

  13. Saudi anti-human cancer plants database (SACPD): A collection of plants with anti-human cancer activities.

    PubMed

    Al-Zahrani, Ateeq Ahmed

    2018-01-30

    Several anticancer drugs have been developed from natural products such as plants. Successful experiments in inhibiting the growth of human cancer cell lines using Saudi plants have been published over the last three decades. To date, there has been no Saudi anticancer plants database serving as a comprehensive source for the interesting data generated from these experiments. Therefore, there was a need to create a database to collect, organize, search and retrieve such data. Accordingly, the current paper describes the generation of the Saudi anti-human cancer plants database (SACPD). The database contains most of the reported information about naturally growing Saudi anticancer plants. SACPD comprises the scientific and local names of 91 plant species that grow naturally in Saudi Arabia. These species belong to 38 different taxonomic families. In addition, 18 species representing 16 families of medicinal plants that are intensively sold in local markets in Saudi Arabia were added to the database. The website provides interesting details, including the plant part containing the anticancer bioactive compounds, plant locations, and the cancer/cell type against which the plants exhibit their anticancer activity. Our survey revealed that breast, liver and leukemia cell lines were the most studied cancer cell lines in Saudi Arabia, with percentages of 27%, 19% and 15%, respectively. The current SACPD represents a nucleus around which further development efforts can expand to accommodate all future submissions about new Saudi plant species with anticancer activities. SACPD will provide an excellent starting point for researchers and pharmaceutical companies interested in developing new anticancer drugs. SACPD is available online at https://teeqrani1.wixsite.com/sapd.

  14. TOPDOM: database of conservatively located domains and motifs in proteins.

    PubMed

    Varga, Julia; Dobson, László; Tusnády, Gábor E

    2016-09-01

    The TOPDOM database, originally created as a collection of domains and motifs located consistently on the same side of the membrane in α-helical transmembrane proteins, has been updated and extended by taking into consideration consistently localized domains and motifs in globular proteins, too. By taking advantage of the recently developed CCTOP algorithm to determine the type of a protein and to predict topology in the case of transmembrane proteins, and by applying a thorough search for domains and motifs as well as utilizing the most up-to-date versions of all source databases, we managed to reach a 6-fold increase in the size of the whole database and a 2-fold increase in the number of transmembrane proteins. The TOPDOM database is available at http://topdom.enzim.hu. The webpage utilizes the common Apache, PHP5 and MySQL software to provide the user interface for accessing and searching the database. The database itself is generated on a high performance computer. Contact: tusnady.gabor@ttk.mta.hu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  15. PACSY, a relational database management system for protein structure and chemical shift analysis

    PubMed Central

    Lee, Woonghee; Yu, Wookyung; Kim, Suhkmann; Chang, Iksoo

    2012-01-01

    PACSY (Protein structure And Chemical Shift NMR spectroscopY) is a relational database management system that integrates information from the Protein Data Bank, the Biological Magnetic Resonance Data Bank, and the Structural Classification of Proteins database. PACSY provides three-dimensional coordinates and chemical shifts of atoms along with derived information such as torsion angles, solvent accessible surface areas, and hydrophobicity scales. PACSY consists of six relational table types linked to one another for coherence by key identification numbers. Database queries are enabled by advanced search functions supported by an RDBMS server such as MySQL or PostgreSQL. PACSY enables users to search for combinations of information from different database sources in support of their research. Two software packages, PACSY Maker for database creation and PACSY Analyzer for database analysis, are available from http://pacsy.nmrfam.wisc.edu. PMID:22903636

  16. The Moroccan Genetic Disease Database (MGDD): a database for DNA variations related to inherited disorders and disease susceptibility.

    PubMed

    Charoute, Hicham; Nahili, Halima; Abidi, Omar; Gabi, Khalid; Rouba, Hassan; Fakiri, Malika; Barakat, Abdelhamid

    2014-03-01

    National and ethnic mutation databases provide comprehensive information about genetic variations reported in a population or an ethnic group. In this paper, we present the Moroccan Genetic Disease Database (MGDD), a catalogue of genetic data related to diseases identified in the Moroccan population. We used the PubMed, Web of Science and Google Scholar databases to identify available articles published until April 2013. The database is designed and implemented on a three-tier model using the MySQL relational database and the PHP programming language. To date, the database contains 425 mutations and 208 polymorphisms found in 301 genes and 259 diseases. Most Mendelian diseases in the Moroccan population follow an autosomal recessive mode of inheritance (74.17%) and affect endocrine, nutritional and metabolic physiology. The MGDD database provides reference information for researchers, clinicians and health professionals through a user-friendly Web interface. Its content should be useful for improving research in human molecular genetics, disease diagnosis and the design of association studies. MGDD can be publicly accessed at http://mgdd.pasteur.ma.

  17. Ridge 2000 Data Management System

    NASA Astrophysics Data System (ADS)

    Goodwillie, A. M.; Carbotte, S. M.; Arko, R. A.; Haxby, W. F.; Ryan, W. B.; Chayes, D. N.; Lehnert, K. A.; Shank, T. M.

    2005-12-01

    Hosted at Lamont by the marine geoscience Data Management group, mgDMS, the NSF-funded Ridge 2000 electronic database, http://www.marine-geo.org/ridge2000/, is a key component of the Ridge 2000 multi-disciplinary program. The database covers each of the three Ridge 2000 Integrated Study Sites: Endeavour Segment, Lau Basin, and 8-11N Segment. It promotes the sharing of information with the broader community, facilitates integration of the suite of information collected at each study site, and enables comparisons between sites. The Ridge 2000 data system provides easy web access to a relational database that is built around a catalogue of cruise metadata. Any web browser can be used to perform a versatile text-based search which returns basic cruise and submersible dive information, sample and data inventories, navigation, and other relevant metadata such as shipboard personnel and links to NSF program awards. In addition, non-proprietary data files, images, and derived products which are hosted locally or in national repositories, as well as science and technical reports, can be freely downloaded. On the Ridge 2000 database page, our Data Link allows users to search the database using a broad range of parameters including data type, cruise ID, chief scientist, and geographical location. The first Ridge 2000 field programs sailed in 2004 and, in addition to numerous data sets collected prior to the Ridge 2000 program, the database currently contains information on fifteen Ridge 2000-funded cruises and almost sixty Alvin dives. Track lines can be viewed using a recently-implemented Web Map Service button labelled Map View. The Ridge 2000 database is fully integrated with databases hosted by the mgDMS group for MARGINS and the Antarctic multibeam and seismic reflection data initiatives. Links are provided to partner databases including PetDB, SIOExplorer, and the ODP Janus system. Interoperability with existing and new partner repositories continues to be strengthened. One major effort involves the gradual unification of the metadata across these partner databases. Standardised electronic metadata forms that can be filled in at sea are available from our web site. Interactive map-based exploration and visualisation of the Ridge 2000 database is provided by GeoMapApp, a freely-available Java(tm) application being developed within the mgDMS group. GeoMapApp includes high-resolution bathymetric grids for the 8-11N EPR segment and allows customised maps and grids for any of the Ridge 2000 ISS to be created. Vent and instrument locations can be plotted and saved as images, and Alvin dive photos are also available.

  18. Fuzzy queries above relational database

    NASA Astrophysics Data System (ADS)

    Smolka, Pavel; Bradac, Vladimir

    2017-11-01

    The aim of this paper is to introduce the possibility of fuzzy queries implemented over relational databases. The issue is described with a model that identifies the part of the problem domain appropriate for a fuzzy approach. The model is demonstrated on a wine database designed for searching. The construction of the database complies with the law of the Czech Republic.
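
    One standard way to realize such fuzzy queries, shown below as an illustration of the general approach rather than the paper's model, is to fetch rows with ordinary SQL and then score them with membership functions; the triangular memberships and the wine attributes are assumptions made for the example.

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
        CREATE TABLE wine (name TEXT, residual_sugar REAL, price REAL);
        INSERT INTO wine VALUES ('A', 2.0, 9.0), ('B', 18.0, 14.0), ('C', 35.0, 30.0);
        """)

        def triangular(x, a, b, c):
            """Triangular membership function peaking at b."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x < b else (c - x) / (c - b)

        def fuzzy_query(min_degree=0.3):
            # 'semi-sweet' and 'cheap' are fuzzy terms; AND = minimum (Zadeh).
            results = []
            for name, sugar, price in con.execute("SELECT * FROM wine"):
                degree = min(triangular(sugar, 5, 20, 40),   # semi-sweet
                             triangular(price, 0, 5, 25))    # cheap
                if degree >= min_degree:
                    results.append((name, round(degree, 2)))
            return sorted(results, key=lambda r: -r[1])

        print(fuzzy_query())  # [('B', 0.55)]: wine B best matches both terms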

  19. An embedded face-classification system for infrared images on an FPGA

    NASA Astrophysics Data System (ADS)

    Soto, Javier E.; Figueroa, Miguel

    2014-10-01

    We present a face-classification architecture for long-wave infrared (IR) images implemented on a Field Programmable Gate Array (FPGA). The circuit is fast, compact and low power; it can recognize faces in real time and be embedded in a larger image-processing and computer vision system operating locally on an IR camera. The algorithm uses Local Binary Patterns (LBP) to perform feature extraction on each IR image. First, each pixel in the image is represented as an LBP pattern that encodes the similarity between the pixel and its neighbors. Uniform LBP codes are then used to reduce the number of patterns to 59 while preserving more than 90% of the information contained in the original LBP representation. Then, the image is divided into 64 non-overlapping regions, and each region is represented as a 59-bin histogram of patterns. Finally, the algorithm concatenates all 64 regions to create a 3,776-bin spatially enhanced histogram. We reduce the dimensionality of this histogram using Linear Discriminant Analysis (LDA), which improves clustering and enables us to store an entire database of 53 subjects on-chip. During classification, the circuit applies LBP and LDA to each incoming IR image in real time and compares the resulting feature vector to each pattern stored in the local database using the Manhattan distance. We implemented the circuit on a Xilinx Artix-7 XC7A100T FPGA and tested it with the UCHThermalFace database, which consists of 28 images (81 x 150 pixels) of each of 53 subjects in indoor and outdoor conditions. The circuit achieves a 98.6% hit ratio, trained with 16 images and tested with 12 images of each subject in the database. Using a 100 MHz clock, the circuit classifies 8,230 images per second and consumes only 309 mW.
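
    A software model of the classification path is a useful reference for the hardware described above. The sketch below maps 8-bit LBP codes to the 59 uniform labels, builds the 64-region spatially enhanced histogram, and matches with the Manhattan distance; the LDA projection is omitted, and the random inputs stand in for LBP images off the camera.

        import numpy as np

        def transitions(code):
            bits = [(code >> i) & 1 for i in range(8)]
            return sum(bits[i] != bits[(i + 1) % 8] for i in range(8))

        # The 58 uniform patterns get labels 0..57; all others share label 58.
        uniform = [c for c in range(256) if transitions(c) <= 2]
        lut = np.full(256, 58, dtype=np.uint8)
        for label, code in enumerate(uniform):
            lut[code] = label

        def spatially_enhanced_histogram(lbp_img, regions=8):
            """64 non-overlapping regions x 59 bins = 3,776-bin descriptor."""
            labels = lut[lbp_img]
            h, w = labels.shape
            feats = []
            for i in range(regions):
                for j in range(regions):
                    cell = labels[i*h//regions:(i+1)*h//regions,
                                  j*w//regions:(j+1)*w//regions]
                    feats.append(np.bincount(cell.ravel(), minlength=59))
            return np.concatenate(feats)

        def classify(query, database):
            # Nearest stored subject under the Manhattan (L1) distance.
            return int(np.argmin([np.abs(query - ref).sum() for ref in database]))

        rng = np.random.default_rng(2)
        db = [spatially_enhanced_histogram(
                  rng.integers(0, 256, (150, 81), dtype=np.uint8))
              for _ in range(3)]
        print(classify(db[1], db))  # 1: the query matches its own stored subject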

  20. An Optical Flow-Based Full Reference Video Quality Assessment Algorithm.

    PubMed

    K, Manasa; Channappayya, Sumohana S

    2016-06-01

    We present a simple yet effective optical flow-based full-reference video quality assessment (FR-VQA) algorithm for assessing the perceptual quality of natural videos. Our algorithm is based on the premise that local optical flow statistics are affected by distortions and that the deviation from pristine flow statistics is proportional to the amount of distortion. We characterize the local flow statistics using the mean, the standard deviation, the coefficient of variation (CV), and the minimum eigenvalue (λ_min) of the local flow patches. Temporal distortion is estimated as the change in the CV of the distorted flow with respect to the reference flow, and the correlation between the λ_min of the reference and of the distorted patches. We rely on the robust multi-scale structural similarity index for spatial quality estimation. The computed temporal and spatial distortions are then pooled using a perceptually motivated heuristic to generate a spatio-temporal quality score. The proposed method is shown to be competitive with the state of the art when evaluated on the LIVE SD database, the EPFL-PoliMI SD database, and the LIVE Mobile HD database. The distortions considered in these databases include those due to compression, packet loss, wireless channel errors, and rate adaptation. Our algorithm is flexible enough to allow for any robust FR spatial distortion metric for spatial distortion estimation. In addition, the proposed method is not only parameter-free but also independent of the choice of the optical flow algorithm. Finally, we show that replacing the optical flow vectors in our method with much coarser block motion vectors also results in an acceptable FR-VQA algorithm. Our algorithm is called the flow similarity index.
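
    The patch-level flow statistics are simple to compute once a flow field is available. The sketch below uses OpenCV's Farneback flow purely as a stand-in (the method is flow-algorithm independent) and takes λ_min from the 2x2 covariance of flow vectors within each patch, which is our reading of the patch statistic; the perceptual pooling heuristic is omitted.

        import numpy as np
        import cv2

        def patch_flow_stats(prev_gray, next_gray, patch=16):
            flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            h, w = prev_gray.shape
            stats = []
            for i in range(0, h - patch + 1, patch):
                for j in range(0, w - patch + 1, patch):
                    p = flow[i:i+patch, j:j+patch].reshape(-1, 2)
                    mag = np.linalg.norm(p, axis=1)
                    mean, std = mag.mean(), mag.std()
                    cv_ = std / (mean + 1e-9)            # coefficient of variation
                    lam_min = np.linalg.eigvalsh(np.cov(p.T)).min()
                    stats.append((mean, std, cv_, lam_min))
            return np.array(stats)

        # Toy frames: a random texture shifted one pixel to the right.
        rng = np.random.default_rng(3)
        f1 = (rng.random((64, 64)) * 255).astype(np.uint8)
        f2 = np.roll(f1, 1, axis=1)
        print(patch_flow_stats(f1, f2).shape)  # (16, 4): 16 patches, 4 statistics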

  1. Cognitive Challenges

    MedlinePlus


  2. Melanoma of the Skin in the Danish Cancer Registry and the Danish Melanoma Database: A Validation Study.

    PubMed

    Pedersen, Sidsel Arnspang; Schmidt, Sigrun Alba Johannesdottir; Klausen, Siri; Pottegård, Anton; Friis, Søren; Hölmich, Lisbet Rosenkrantz; Gaist, David

    2018-05-01

    The nationwide Danish Cancer Registry and the Danish Melanoma Database both record data on melanoma for purposes of monitoring, quality assurance, and research. However, the data quality of the Cancer Registry and the Melanoma Database has not been formally evaluated. We estimated the positive predictive value (PPV) of melanoma diagnosis for random samples of 200 patients from the Cancer Registry (n = 200) and the Melanoma Database (n = 200) during 2004-2014, using the Danish Pathology Registry as "gold standard" reference. We further validated tumor characteristics in the Cancer Registry and the Melanoma Database. Additionally, we estimated the PPV of in situ melanoma diagnoses in the Melanoma Database, and the sensitivity of melanoma diagnoses in 2004-2014. The PPVs of melanoma in the Cancer Registry and the Melanoma Database were 97% (95% CI = 94, 99) and 100%. The sensitivity was 90% in the Cancer Registry and 77% in the Melanoma Database. The PPV of in situ melanomas in the Melanoma Database was 97% and the sensitivity was 56%. In the Melanoma Database, we observed PPVs of ulceration of 75% and Breslow thickness of 96%. The PPV of histologic subtypes varied between 87% and 100% in the Cancer Registry and 93% and 100% in the Melanoma Database. The PPVs for anatomical localization were 83%-95% in the Cancer Registry and 93%-100% in the Melanoma Database. The data quality in both the Cancer Registry and the Melanoma Database is high, supporting their use in epidemiologic studies.

  3. A Vision System For A Mars Rover

    NASA Astrophysics Data System (ADS)

    Wilcox, Brian H.; Gennery, Donald B.; Mishkin, Andrew H.; Cooper, Brian K.; Lawton, Teri B.; Lay, N. Keith; Katzmann, Steven P.

    1987-01-01

    A Mars rover must be able to sense its local environment with sufficient resolution and accuracy to avoid local obstacles and hazards while moving a significant distance each day. Power efficiency and reliability are extremely important considerations, making stereo correlation an attractive method of range sensing compared to laser scanning, if the computational load and correspondence errors can be handled. Techniques for treatment of these problems, including the use of more than two cameras to reduce correspondence errors and possibly to limit the computational burden of stereo processing, have been tested at JPL. Once a reliable range map is obtained, it must be transformed to a plan view and compared to a stored terrain database, in order to refine the estimated position of the rover and to improve the database. The slope and roughness of each terrain region are computed, which form the basis for a traversability map allowing local path planning. Ongoing research and field testing of such a system is described.
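
    The slope/roughness-to-traversability step can be sketched on a plan-view elevation grid; the thresholds and the 3x3 roughness window below are illustrative, not the JPL system's actual parameters:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def traversability(elev, cell_m=1.0, max_slope=0.3, max_rough_m=0.1):
    """Boolean traversability map from a plan-view elevation grid.

    elev: 2-D array of elevations (m) on a regular grid with cell_m spacing.
    A cell is traversable if its local slope and its deviation from the
    3x3 neighbourhood mean are both below the given thresholds.
    """
    gy, gx = np.gradient(elev, cell_m)   # rise over run along each axis
    slope = np.hypot(gx, gy)             # tangent of the slope angle
    rough = np.abs(elev - uniform_filter(elev, size=3))
    return (slope < max_slope) & (rough < max_rough_m)
```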

  4. A vision system for a Mars rover

    NASA Technical Reports Server (NTRS)

    Wilcox, Brian H.; Gennery, Donald B.; Mishkin, Andrew H.; Cooper, Brian K.; Lawton, Teri B.; Lay, N. Keith; Katzmann, Steven P.

    1988-01-01

    A Mars rover must be able to sense its local environment with sufficient resolution and accuracy to avoid local obstacles and hazards while moving a significant distance each day. Power efficiency and reliability are extremely important considerations, making stereo correlation an attractive method of range sensing compared to laser scanning, if the computational load and correspondence errors can be handled. Techniques for treatment of these problems, including the use of more than two cameras to reduce correspondence errors and possibly to limit the computational burden of stereo processing, have been tested at JPL. Once a reliable range map is obtained, it must be transformed to a plan view and compared to a stored terrain database, in order to refine the estimated position of the rover and to improve the database. The slope and roughness of each terrain region are computed, which form the basis for a traversability map allowing local path planning. Ongoing research and field testing of such a system is described.

  5. A concept for routine emergency-care data-based syndromic surveillance in Europe.

    PubMed

    Ziemann, A; Rosenkötter, N; Garcia-Castrillo Riesgo, L; Schrell, S; Kauhl, B; Vergeiner, G; Fischer, M; Lippert, F K; Krämer, A; Brand, H; Krafft, T

    2014-11-01

    We developed a syndromic surveillance (SyS) concept using emergency dispatch, ambulance and emergency-department data from different European countries. Based on an inventory of sub-national emergency data availability in 12 countries, we propose framework definitions for specific syndromes and a SyS system design. We tested the concept by retrospectively applying cumulative sum and spatio-temporal cluster analyses for the detection of local gastrointestinal outbreaks in four countries and comparing the results with notifiable disease reporting. Routine emergency data were available daily and electronically in 11 regions, following a common structure. We identified two gastrointestinal outbreaks in two countries; one was confirmed as a norovirus outbreak. We detected 1/147 notified outbreaks. Emergency-care data-based SyS can supplement local surveillance with near real-time information on gastrointestinal patients, especially in special circumstances, e.g. among foreign tourists. It most likely cannot detect the majority of local gastrointestinal outbreaks with few, mild or dispersed cases.
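
    The cumulative-sum step of such a system can be sketched in a few lines. The baseline handling below (a global mean and SD) is a simplification; production systems typically standardize against a moving historical window:

```python
import numpy as np

def cusum_alarms(counts, k=0.5, h=4.0):
    """One-sided CUSUM over standardized daily syndrome counts.

    counts: daily counts of, e.g., gastrointestinal presentations.
    k: allowance (in SD units); h: decision threshold.
    Returns indices of days on which the statistic exceeds h.
    """
    x = np.asarray(counts, dtype=float)
    z = (x - x.mean()) / x.std()     # naive baseline, for illustration only
    s, alarms = 0.0, []
    for t, zt in enumerate(z):
        s = max(0.0, s + zt - k)
        if s > h:
            alarms.append(t)
            s = 0.0                  # reset after signalling
    return alarms
```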

  6. Food Composition Database Format and Structure: A User Focused Approach

    PubMed Central

    Clancy, Annabel K.; Woods, Kaitlyn; McMahon, Anne; Probst, Yasmine

    2015-01-01

    This study aimed to investigate the needs of Australian food composition database users regarding database format, and to relate these needs to the formats of databases available globally. Three semi-structured synchronous online focus groups (M = 3, F = 11) and n = 6 female key informant interviews were recorded. Beliefs surrounding the use, training, understanding, benefits and limitations of food composition data and databases were explored. Verbatim transcriptions underwent preliminary coding followed by thematic analysis with NVivo qualitative analysis software to extract the final themes. Schematic analysis was applied to the final themes related to database format. Desktop analysis also examined the format of six key globally available databases. Twenty-four dominant themes were established, of which five related to format: database use, food classification, framework, accessibility and availability, and data derivation. Desktop analysis revealed that food classification systems varied considerably between databases. Microsoft Excel was a common file format used in all databases, and available software varied between countries. Users also recognised that food composition database formats should ideally be designed specifically for the intended use, have a user-friendly food classification system, incorporate accurate data with clear explanation of data derivation, and feature user input. However, such databases are limited by data availability and resources. Further exploration of data-sharing options should be considered. Furthermore, users' understanding of the limitations of food composition data and databases is inherent to the correct application of non-specific databases. Therefore, further exploration of user FCDB training should also be considered. PMID:26554836

  7. Issues with medication supply and management in a rural community in Queensland.

    PubMed

    Tan, Amy C W; Emmerton, Lynne M; Hattingh, H Laetitia

    2012-06-01

    To identify the key issues reported by rural health-care providers in their provision of medication supply and related cognitive services, in order to inform health workforce and role development and thus improve the quality use of medicines in rural communities. Exploratory semi-structured interview research. A rural community comprising four towns in a rural health service district in Queensland, Australia. Forty-nine health-care providers (medical practitioners, pharmacists, nurses and others) with medication-related roles who serviced the study community, identified through databases and local contacts. Medication-related roles undertaken by the health-care providers, focusing on medication supply and cognitive services; challenges in undertaking these roles. Medical and nursing providers reported challenges in ensuring continuity in the supply of medications due to their existing medical workload demands. Local pharmacists were largely involved in medication supply, with limited capacity for extended cognitive roles. Participants identified a lack of support for their medication roles and the potential value of clinically focused pharmacists in medication management services. Medication supply may become more efficient with extended roles for certain health-care providers. The need for cognitive medication management services suggests potential for clinical pharmacists' role development in rural areas.

  8. Local descriptive body weight and dietary norms, food availability, and 10-year change in glycosylated haemoglobin in an Australian population-based biomedical cohort.

    PubMed

    Carroll, Suzanne J; Paquet, Catherine; Howard, Natasha J; Coffee, Neil T; Adams, Robert J; Taylor, Anne W; Niyonsenga, Theo; Daniel, Mark

    2017-02-02

    Individual-level health outcomes are shaped by environmental risk conditions. Norms figure prominently in socio-behavioural theories, yet spatial variations in health-related norms have rarely been investigated as environmental risk conditions. This study assessed: 1) the contributions of local descriptive norms for overweight/obesity and dietary behaviour to 10-year change in glycosylated haemoglobin (HbA1c), accounting for food resource availability; and 2) whether associations between local descriptive norms and HbA1c were moderated by food resource availability. HbA1c, representing cardiometabolic risk, was measured three times over 10 years for a population-based biomedical cohort of adults in Adelaide, South Australia. Residential environmental exposures were defined using 1600 m participant-centred road-network buffers. Local descriptive norms for overweight/obesity and insufficient fruit intake (proportion of residents with BMI ≥ 25 kg/m² [n = 1890] or fruit intake of <2 serves/day [n = 1945], respectively) were aggregated from responses to a separate geocoded population survey. Fast-food and healthful food resource availability (counts) were extracted from a retail database. Separate sets of multilevel models included different predictors, one local descriptive norm and either fast-food or healthful food resource availability, with area-level education and individual-level covariates (age, sex, employment status, education, marital status, and smoking status). Interactions between local descriptive norms and food resource availability were tested. HbA1c concentration rose over time. Local descriptive norms for overweight/obesity and insufficient fruit intake predicted greater rates of increase in HbA1c. Neither fast-food nor healthful food resource availability was associated with change in HbA1c. Greater healthful food resource availability reduced the rate of increase in HbA1c concentration attributed to the overweight/obesity norm. Local descriptive health-related norms, not food resource availability, predicted 10-year change in HbA1c. Null findings for food resource availability may reflect a sufficiency or minimum threshold level of resources, such that availability poses no barrier to obtaining healthful or unhealthful foods in this region. However, the influence of local descriptive norms on HbA1c varied according to food resource availability. Local descriptive health-related norms have received little attention thus far but are important influences on individual cardiometabolic risk. Further research is needed to explore how local descriptive norms contribute to chronic disease risk and outcomes.
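
    The modelling setup described above (repeated HbA1c measures nested within areas, one norm and one availability measure per model, with an interaction on the rate of change) could be sketched with statsmodels; the file and column names are hypothetical:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format file: one row per person per measurement
# occasion, persons nested in residential areas.
df = pd.read_csv("cohort_long.csv")

# One norm and one availability measure per model, plus individual-level
# covariates; the time-by-predictor terms capture rates of change in HbA1c.
model = smf.mixedlm(
    "hba1c ~ time * obesity_norm + time * healthful_avail"
    " + age + sex + education + smoking",
    data=df,
    groups=df["area_id"],
)
print(model.fit().summary())
```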

  9. The Network Configuration of an Object Relational Database Management System

    NASA Technical Reports Server (NTRS)

    Diaz, Philip; Harris, W. C.

    2000-01-01

    The networking and implementation of the Oracle Database Management System (ODBMS) requires developers to have knowledge of the UNIX operating system as well as of all the features of the Oracle Server. The server is an object-relational database management system (DBMS). With distributed processing, work is split between the database server and client application programs: the DBMS handles all the responsibilities of the server, while the workstations running the database application concentrate on the interpretation and display of data.
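
    A minimal sketch of that client/server split using the python-oracledb driver; the credentials, DSN, and tables are placeholders, and this illustrates the division of labour rather than the configuration described in the report:

```python
import oracledb  # python-oracledb thin driver; no local Oracle client needed

# Placeholders: real deployments read these from configuration.
conn = oracledb.connect(user="scott", password="tiger",
                        dsn="dbhost.example.com/orclpdb")

with conn.cursor() as cur:
    # The DBMS on the server evaluates the join and filtering...
    cur.execute("SELECT ename, dname FROM emp JOIN dept USING (deptno)")
    # ...the client workstation only interprets and displays the rows.
    for ename, dname in cur:
        print(f"{ename:15s} {dname}")

conn.close()
```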

  10. PIGD: a database for intronless genes in the Poaceae.

    PubMed

    Yan, Hanwei; Jiang, Cuiping; Li, Xiaoyu; Sheng, Lei; Dong, Qing; Peng, Xiaojian; Li, Qian; Zhao, Yang; Jiang, Haiyang; Cheng, Beijiu

    2014-10-01

    Intronless genes are a feature of prokaryotes; however, they are widespread and unequally distributed among eukaryotes and represent an important resource for studying the evolution of gene architecture. Although many databases on exons and introns exist, there is currently no cohesive resource that collects plant intronless genes into a single database. In this study, we present the Poaceae Intronless Genes Database (PIGD), a user-friendly web interface to explore information on intronless genes from different plants. Five Poaceae species, Sorghum bicolor, Zea mays, Setaria italica, Panicum virgatum and Brachypodium distachyon, are included in the current release of PIGD. Gene annotations and sequence data were collected and integrated from different databases. The primary focus of this study was to provide gene descriptions and gene product records. In addition, functional annotations, subcellular localization predictions and taxonomic distributions are reported. PIGD allows users to readily browse, search and download data. BLAST and comparative analyses are also provided through this online database, which is available at http://pigd.ahau.edu.cn/. PIGD provides a solid platform for the collection, integration and analysis of intronless genes in the Poaceae. As such, this database will be useful for subsequent bio-computational analyses in comparative genomics and evolutionary studies.
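
    The defining test behind such a database is straightforward: a gene whose mRNA has exactly one exon carries no introns. A sketch that flags such mRNAs in a GFF3 annotation file (attribute conventions follow the GFF3 spec; this is not PIGD's actual pipeline):

```python
from collections import defaultdict

def intronless_mrnas(gff3_path):
    """Return mRNA IDs with exactly one exon (hence no introns)."""
    exon_counts = defaultdict(int)
    with open(gff3_path) as fh:
        for line in fh:
            if line.startswith("#"):
                continue
            cols = line.rstrip("\n").split("\t")
            if len(cols) == 9 and cols[2] == "exon":
                # GFF3 column 9 is "key=value" pairs separated by ';'
                attrs = dict(kv.split("=", 1)
                             for kv in cols[8].split(";") if "=" in kv)
                parent = attrs.get("Parent")
                if parent:
                    exon_counts[parent] += 1
    return sorted(m for m, n in exon_counts.items() if n == 1)
```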

  11. Integrated Space Asset Management Database and Modeling

    NASA Technical Reports Server (NTRS)

    MacLeod, Todd; Gagliano, Larry; Percy, Thomas; Mason, Shane

    2015-01-01

    Effective Space Asset Management is one key to addressing the ever-growing issue of space congestion. It is imperative that agencies around the world have access to data regarding the numerous active assets and pieces of space junk currently tracked in orbit around the Earth. At the center of this issue is the effective management of data of many types related to orbiting objects. As the population of tracked objects grows, so too should the data management structure used to catalog technical specifications, orbital information, and metadata related to those populations. Marshall Space Flight Center's Space Asset Management Database (SAM-D) was implemented in order to effectively catalog a broad set of data related to known objects in space by ingesting information from a variety of databases and processing that data into useful technical information. Using the universal NORAD number as a unique identifier, SAM-D processes two-line element data into orbital characteristics and cross-references this technical data with metadata related to functional status, country of ownership, and application category. SAM-D began as an Excel spreadsheet and was later upgraded to an Access database. While SAM-D performs its task very well, it is limited by its current platform and is not available outside of the local user base. Further, while modeling and simulation can be powerful tools to exploit the information contained in SAM-D, the current system does not allow proper integration options for combining the data with both legacy and new M&S tools. This paper provides a summary of SAM-D development efforts to date and outlines a proposed data management infrastructure that extends SAM-D to support the larger data sets to be generated. A service-oriented architecture model using an information sharing platform named SIMON will allow it to easily expand to incorporate new capabilities, including advanced analytics, M&S tools, fusion techniques and user interfaces for visualization. In addition, tight control of information-sharing policy will increase confidence in the system, which would encourage industry partners to provide commercial data. Combined with the integration of new and legacy M&S tools, a SIMON-based architecture will provide a robust environment that can be extended and expanded indefinitely.
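
    The TLE-to-orbital-characteristics step can be illustrated directly from the fixed-column element-set format: mean motion (line 2, columns 53-63) yields the semi-major axis via Kepler's third law. A sketch with an illustrative, ISS-like element line (not real catalog data):

```python
import math

MU_EARTH = 398600.4418  # km^3 / s^2
R_EARTH = 6378.137      # km

def tle_orbit(line2):
    """Basic orbital characteristics from line 2 of a two-line element set."""
    incl = float(line2[8:16])                   # inclination, degrees
    ecc = float("0." + line2[26:33])            # eccentricity (implied "0.")
    mean_motion = float(line2[52:63])           # revolutions per day
    n = mean_motion * 2.0 * math.pi / 86400.0   # mean motion, rad/s
    a = (MU_EARTH / n ** 2) ** (1.0 / 3.0)      # semi-major axis, km
    return {
        "inclination_deg": incl,
        "eccentricity": ecc,
        "period_min": 1440.0 / mean_motion,
        "apogee_km": a * (1 + ecc) - R_EARTH,
        "perigee_km": a * (1 - ecc) - R_EARTH,
    }

# Illustrative, ISS-like element line (not real catalog data):
print(tle_orbit(
    "2 25544  51.6443 123.4567 0004567  35.0000 325.0000 15.5000000012345"))
```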

  12. Measurement of the Local Food Environment: A Comparison of Existing Data Sources

    PubMed Central

    Bader, Michael D. M.; Ailshire, Jennifer A.; Morenoff, Jeffrey D.; House, James S.

    2010-01-01

    Studying the relation between the residential environment and health requires valid, reliable, and cost-effective methods to collect data on residential environments. This 2002 study compared the level of agreement between measures of the presence of neighborhood businesses drawn from 2 common sources of data used for research on the built environment and health: listings of businesses from commercial databases and direct observations of city blocks by raters. Kappa statistics were calculated for 6 types of businesses—drugstores, liquor stores, bars, convenience stores, restaurants, and grocers—located on 1,663 city blocks in Chicago, Illinois. Logistic regressions estimated whether disagreement between measurement methods was systematically correlated with the socioeconomic and demographic characteristics of neighborhoods. Levels of agreement between the 2 sources were relatively high, with significant (P < 0.001) kappa statistics for each business type ranging from 0.32 to 0.70. Most business types were more likely to be reported by direct observations than in the commercial database listings. Disagreement between the 2 sources was not significantly correlated with the socioeconomic and demographic characteristics of neighborhoods. Results suggest that researchers should have reasonable confidence using whichever method (or combination of methods) is most cost-effective and theoretically appropriate for their research design. PMID:20123688
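
    Per-business-type agreement of this kind is Cohen's kappa over paired presence/absence codes for the same blocks. A sketch with made-up codes for ten blocks:

```python
from sklearn.metrics import cohen_kappa_score

# Presence (1) / absence (0) of, e.g., grocers on the same ten blocks,
# as coded by direct observation vs. the commercial database (made up).
observed = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
listed   = [1, 0, 0, 1, 0, 1, 0, 1, 1, 1]

print(round(cohen_kappa_score(observed, listed), 2))  # chance-corrected agreement
```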

  13. Lungs in TSC

    MedlinePlus

  14. A Guided Tour of Saada

    NASA Astrophysics Data System (ADS)

    Michel, L.; Motch, C.; Nguyen Ngoc, H.; Pineau, F. X.

    2009-09-01

    Saada (http://amwdb.u-strasbg.fr/saada) is a tool for helping astronomers build local archives without writing any code (Michel et al. 2004). Databases created by Saada can host collections of heterogeneous data files. These data collections can also be published in the VO. An overview of the main Saada features is presented in this demo: creation of a basic database, creation of relationships, data searches using SaadaQL, metadata tagging, and use of VO services.

  15. The StreamCat Dataset: Accumulated Attributes for NHDPlusV2 Catchments (Version 2.1) for the Conterminous United States: 2010 US Census Housing Unit and Population Density

    EPA Pesticide Factsheets

    This dataset represents the population and housing unit density within individual, local NHDPlusV2 catchments and upstream, contributing watersheds based on 2010 US Census data. Densities are calculated for every block group, and watershed averages are calculated for every local NHDPlusV2 catchment (see Data Sources for links to NHDPlusV2 data and Census data). This dataset is derived from the TIGER/Line Files and related database (.dbf) files for the conterminous USA, downloaded as Block Group-Level Census 2010 SF1 Data in File Geodatabase Format (ArcGIS version 10.0). The landscape raster (LR) was produced based on the data compiled from the questions asked of all people and about every housing unit. The (block-group population / block-group area) and (block-group housing units / block-group area) ratios were summarized by local catchment and by watershed to produce local catchment-level and watershed-level metrics as a continuous data type (see Data Structure and Attribute Information for a description). Using a riparian buffer (see Process Steps), statistics were generated for areas within each catchment that are within 100 meters of the stream reach, in order to characterize the riparian zone.
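
    The catchment metric described is an area-weighted roll-up of block-group values. A pandas sketch with hypothetical column names for a block-group-by-catchment intersection table:

```python
import pandas as pd

# Hypothetical intersection table: each row is the piece of a block group
# falling inside a catchment, with its area and the block-group totals.
df = pd.read_csv("blockgroup_by_catchment.csv")

df["pop_density"] = df["bg_population"] / df["bg_area_km2"]

# Area-weighted mean density per local catchment.
catchment = df.groupby("catchment_id").apply(
    lambda g: (g["pop_density"] * g["piece_area_km2"]).sum()
              / g["piece_area_km2"].sum()
)
print(catchment.head())
```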

  16. A spatial analysis of the association between restaurant density and body mass index in Canadian adults.

    PubMed

    Hollands, Simon; Campbell, M Karen; Gilliland, Jason; Sarma, Sisira

    2013-10-01

    To investigate the association between fast-food restaurant density and adult body mass index (BMI) in Canada. Individual-level BMI and confounding variables were obtained from the 2007-2008 Canadian Community Health Survey master file. Locations of fast-food and full-service chain restaurants and other non-chain restaurants were obtained from the 2008 Infogroup Canada business database. Food outlet density (fast-food, full-service and other) per 10,000 population was calculated for each Forward Sortation Area (FSA). Global (Moran's I) and local indicators of spatial autocorrelation of BMI were assessed. Ordinary least squares (OLS) and spatial auto-regressive error (SARE) methods were used to assess the association between the local food environment and adult BMI in Canada. Global and local spatial autocorrelation of BMI were found in our univariate analysis. We found that OLS and SARE estimates were very similar in our multivariate models. An additional fast-food restaurant per 10,000 people at the FSA level is associated with a 0.022 kg/m² increase in BMI. On the other hand, other restaurant density is negatively related to BMI. Fast-food restaurant density is positively associated with BMI in Canada. Results suggest that restricting the availability of fast food in local neighborhoods may play a role in obesity prevention.
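
    Moran's I itself is compact enough to sketch directly; the weights matrix and BMI values below are toy data, and the study's SARE models would additionally require a spatial econometrics package:

```python
import numpy as np

def morans_i(y, w):
    """Global Moran's I for values y under a spatial weights matrix w."""
    z = np.asarray(y, dtype=float)
    z = z - z.mean()
    return len(z) / w.sum() * (w * np.outer(z, z)).sum() / (z ** 2).sum()

# Toy data: four areas on a line (rook contiguity) and their mean BMI.
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(round(morans_i([27.1, 27.5, 24.0, 23.8], w), 2))  # > 0: positive clustering
```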

  17. The HyperLeda project en route to the astronomical virtual observatory

    NASA Astrophysics Data System (ADS)

    Golev, V.; Georgiev, V.; Prugniel, Ph.

    2002-07-01

    HyperLeda (Hyper-Linked Extragalactic Databases and Archives) is aimed at studying the evolution of galaxies, their kinematics and stellar populations, and the structure of the Local Universe. HyperLeda is involved in catalogue and software production, data mining and massive data processing. The products are served to the community through web mirrors. The development of HyperLeda is distributed between different sites and builds on the background experience of the LEDA and Hypercat databases. The HyperLeda project both contributes to the European iAstro collaboration and serves as a unique database for studies of the physics of extragalactic objects.

  18. Levelling and merging of two discrete national-scale geochemical databases: A case study showing the surficial expression of metalliferous black shales

    USGS Publications Warehouse

    Smith, Steven M.; Neilson, Ryan T.; Giles, Stuart A.

    2015-01-01

    Government-sponsored, national-scale, soil and sediment geochemical databases are used to estimate regional and local background concentrations for environmental issues, identify possible anthropogenic contamination, estimate mineral endowment, explore for new mineral deposits, evaluate nutrient levels for agriculture, and establish concentration relationships with human or animal health. Because of these different uses, it is difficult for any single database to accommodate all the needs of each client. Smith et al. (2013, p. 168) reviewed six national-scale soil and sediment geochemical databases for the United States (U.S.) and, for each, evaluated “its appropriateness as a national-scale geochemical database and its usefulness for national-scale geochemical mapping.” Each of the evaluated databases has strengths and weaknesses that were listed in that review. Two of these U.S. national-scale geochemical databases are similar in their sample media and collection protocols but have different strengths, primarily sampling density and analytical consistency. This project was implemented to determine whether those databases could be merged to produce a combined dataset that could be used for mineral resource assessments. The utility of the merged database was tested to see whether mapped distributions could identify metalliferous black shales at a national scale.
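
    One common levelling technique, quantile mapping of one database's element concentrations onto the other's distribution, can be sketched as follows (an illustration only, not necessarily the method used in this study):

```python
import numpy as np

def quantile_level(source, reference):
    """Quantile-map source values onto the reference distribution.

    Each source value is replaced by the reference value at the same
    quantile, removing systematic analytical offsets between the two
    datasets before they are merged.
    """
    source = np.asarray(source, dtype=float)
    ranks = source.argsort().argsort()   # rank of each value, 0 .. n-1
    q = (ranks + 0.5) / source.size      # plotting positions in (0, 1)
    return np.quantile(reference, q)
```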

  19. YAdumper: extracting and translating large information volumes from relational databases to structured flat files.

    PubMed

    Fernández, José M; Valencia, Alfonso

    2004-10-12

    Downloading the information stored in relational databases into XML and other flat formats is a common task in bioinformatics. This periodic dumping of information requires considerable CPU time, disk and memory resources. YAdumper has been developed as a purpose-specific tool for the complete, structured download of information from relational databases. YAdumper is a Java application that organizes database extraction following an XML template based on an external Document Type Declaration. Compared with other non-native alternatives, YAdumper substantially reduces memory requirements and considerably improves writing performance.
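
    The core of such a dump is a cursor-to-XML stream. A Python sketch using sqlite3 as a stand-in relational source (YAdumper itself is Java and template-driven; this shows only the underlying idea):

```python
import sqlite3
import xml.etree.ElementTree as ET

def dump_table_to_xml(db_path, table, out_path):
    """Write one table of a relational database as a flat XML file."""
    conn = sqlite3.connect(db_path)
    cur = conn.execute(f"SELECT * FROM {table}")  # table name assumed trusted
    columns = [d[0] for d in cur.description]
    root = ET.Element(table)
    for row in cur:
        rec = ET.SubElement(root, "record")
        for col, val in zip(columns, row):
            ET.SubElement(rec, col).text = "" if val is None else str(val)
    ET.ElementTree(root).write(out_path, encoding="utf-8",
                               xml_declaration=True)
    conn.close()
```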

  20. MetNetAPI: A flexible method to access and manipulate biological network data from MetNet

    PubMed Central

    2010-01-01

    Background Convenient programmatic access to different biological databases allows automated integration of scientific knowledge. Many databases support a function to download files or data snapshots, or a webservice that offers "live" data. However, the functionality that a database offers cannot be represented in a static data download file, and webservices may consume considerable computational resources from the host server. Results MetNetAPI is a versatile Application Programming Interface (API) to the MetNetDB database. It abstracts, captures and retains operations away from a biological network repository and website. A range of database functions, previously only available online, can be immediately (and independently from the website) applied to a dataset of interest. Data is available in four layers: molecular entities, localized entities (linked to a specific organelle), interactions, and pathways. Navigation between these layers is intuitive (e.g. one can request the molecular entities in a pathway, as well as request in what pathways a specific entity participates). Data retrieval can be customized: Network objects allow the construction of new and integration of existing pathways and interactions, which can be uploaded back to our server. In contrast to webservices, the computational demand on the host server is limited to processing data-related queries only. Conclusions An API provides several advantages to a systems biology software platform. MetNetAPI illustrates an interface with a central repository of data that represents the complex interrelationships of a metabolic and regulatory network. As an alternative to data-dumps and webservices, it allows access to a current and "live" database and exposes analytical functions to application developers. Yet it only requires limited resources on the server-side (thin server/fat client setup). The API is available for Java, Microsoft.NET and R programming environments and offers flexible query and broad data-retrieval methods. Data retrieval can be customized to client needs and the API offers a framework to construct and manipulate user-defined networks. The design principles can be used as a template to build programmable interfaces for other biological databases. The API software and tutorials are available at http://www.metnetonline.org/api. PMID:21083943
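
    The four-layer navigation could look roughly like the following thin-client sketch; the class and method names are hypothetical, not MetNetAPI's actual signatures:

```python
# Hypothetical thin-client sketch of the layered navigation described
# above (entities -> localized entities -> interactions -> pathways).
# Class and method names are illustrative, not MetNetAPI's actual API.
class NetworkClient:
    def __init__(self, repository):
        # repository: dict-like snapshot of the network data, e.g.
        # {"pathways": [{"name": ..., "entities": [...]}, ...]}
        self.repo = repository

    def entities_in(self, pathway_name):
        """Which molecular entities make up a pathway?"""
        for p in self.repo["pathways"]:
            if p["name"] == pathway_name:
                return list(p["entities"])
        return []

    def pathways_of(self, entity_id):
        """In what pathways does a molecular entity participate?"""
        return [p["name"] for p in self.repo["pathways"]
                if entity_id in p["entities"]]
```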
