Sample records for incomplete database information

  1. Research on computer virus database management system

    NASA Astrophysics Data System (ADS)

    Qi, Guoquan

    2011-12-01

    The growing proliferation of computer viruses has become a lethal threat to network information security and a major research focus. New viruses keep emerging, the total number of viruses keeps growing, and virus classification is becoming increasingly complex. Because agencies capture samples at different times, virus naming cannot be unified. Although each agency maintains its own virus database, communication between agencies is lacking, virus information is often incomplete, and sample information may be sparse. This paper introduces the current state of virus database construction at home and abroad, analyzes how to standardize and completely describe virus characteristics, and then presents a computer virus database design scheme that ensures information integrity, storage security, and manageability.

  2. Successful Keyword Searching: Initiating Research on Popular Topics Using Electronic Databases.

    ERIC Educational Resources Information Center

    MacDonald, Randall M.; MacDonald, Susan Priest

    Students are using electronic resources more than ever before to locate information for assignments. Without the proper search terms, results are incomplete, and students are frustrated. Using the keywords, key people, organizations, and Web sites provided in this book and compiled from the most commonly used databases, students will be able to…

  3. An Expert System for Identification of Minerals in Thin Section.

    ERIC Educational Resources Information Center

    Donahoe, James Louis; And Others

    1989-01-01

    Discusses a computer database which includes optical properties of 142 minerals. Uses fuzzy logic to identify minerals from incomplete and imprecise information. Written in Turbo PASCAL for MS-DOS with 128K. (MVL)
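    As a rough illustration of the kind of fuzzy matching such a system performs, the sketch below scores candidate minerals against an incomplete observation. The property names, tolerances, and the tiny mineral table are illustrative assumptions, not data from the 142-mineral database described above.

    ```python
    # Fuzzy scoring of candidate minerals from imprecise, incomplete optical
    # properties. Property names, tolerances, and values are hypothetical.

    def membership(observed, expected, tolerance):
        """Triangular fuzzy membership: 1.0 at an exact match, 0.0 once the
        difference reaches the tolerance. Missing observations give no evidence."""
        if observed is None:
            return None
        return max(0.0, 1.0 - abs(observed - expected) / tolerance)

    MINERALS = {
        "quartz":      {"refractive_index": 1.55, "birefringence": 0.009},
        "calcite":     {"refractive_index": 1.60, "birefringence": 0.172},
        "plagioclase": {"refractive_index": 1.53, "birefringence": 0.010},
    }
    TOLERANCES = {"refractive_index": 0.05, "birefringence": 0.02}

    def score(observation):
        """Average the memberships of only the properties actually observed."""
        results = {}
        for name, props in MINERALS.items():
            grades = [membership(observation.get(p), v, TOLERANCES[p])
                      for p, v in props.items()]
            grades = [g for g in grades if g is not None]
            results[name] = sum(grades) / len(grades) if grades else 0.0
        return sorted(results.items(), key=lambda kv: kv[1], reverse=True)

    # Incomplete observation: only the refractive index was measured.
    print(score({"refractive_index": 1.54}))
    ```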

  4. Validated MicroRNA Target Databases: An Evaluation.

    PubMed

    Lee, Yun Ji Diana; Kim, Veronica; Muth, Dillon C; Witwer, Kenneth W

    2015-11-01

    Positive findings from preclinical and clinical studies involving depletion or supplementation of microRNA (miRNA) engender optimism about miRNA-based therapeutics. However, off-target effects must be considered. Predicting these effects is complicated: each miRNA may target many gene transcripts, and the rules governing imperfectly complementary miRNA:target interactions are incompletely understood. Several databases provide lists of the relatively small number of experimentally confirmed miRNA:target pairs. Although incomplete, this information might allow assessment of at least some of the off-target effects. We evaluated the performance of four databases of experimentally validated miRNA:target interactions (miRWalk 2.0, miRTarBase, miRecords, and TarBase 7.0) using a list of 50 alphabetically consecutive genes. We examined the provided citations to determine the degree to which each interaction was experimentally supported. To assess stability, we tested at the beginning and end of a five-month period. Results varied widely by database. Two of the databases changed significantly over the course of five months. Most reported evidence for miRNA:target interactions was indirect or otherwise weak, and relatively few interactions were supported by more than one publication. Some returned results appear to arise from simplistic text searches that offer no insight into the relationship of the search terms, may not even include the reported gene or miRNA, and may thus be invalid. We conclude that validation databases provide important information, but not all information in all extant databases is up-to-date or accurate. Nevertheless, the more comprehensive validation databases may provide useful starting points for investigation of off-target effects of proposed small RNA therapies. © 2015 Wiley Periodicals, Inc.

  5. Database development of land use characteristics along major U.S. highways

    DOT National Transportation Integrated Search

    2000-06-01

    Information about land use by and adjacent to transportation systems is essential to understanding the environmental impacts of transportation systems. Nevertheless, such data are presently sparse and incomplete, especially at the national scale....

  6. A statistical approach to identify, monitor, and manage incomplete curated data sets.

    PubMed

    Howe, Douglas G

    2018-04-02

    Many biological knowledge bases gather data through expert curation of published literature. High data volume, selective partial curation, delays in access, and publication of data prior to the ability to curate it can result in incomplete curation of published data. Knowing which data sets are incomplete and how incomplete they are remains a challenge. Awareness that a data set may be incomplete is important for proper interpretation, to avoid flawed hypothesis generation, and can justify further exploration of published literature for additional relevant data. Computational methods to assess data set completeness are needed. One such method is presented here. In this work, a multivariate linear regression model was used to identify genes in the Zebrafish Information Network (ZFIN) Database having incompletely curated gene expression data sets. Starting with 36,655 gene records from ZFIN, data aggregation, cleansing, and filtering reduced the set to 9870 gene records suitable for training and testing the model to predict the number of expression experiments per gene. Feature engineering and selection identified the following predictive variables: the number of journal publications; the number of journal publications already attributed for gene expression annotation; the percent of journal publications already attributed for expression data; the gene symbol; and the number of transgenic constructs associated with each gene. Twenty-five percent of the gene records (2483 genes) were used to train the model. The remaining 7387 genes were used to test the model. Of the 7387 tested genes, 122 and 165 were identified as missing expression annotations based on their residuals being outside the model's lower or upper 95% confidence interval, respectively. The model had precision of 0.97 and recall of 0.71 at the negative 95% confidence interval and precision of 0.76 and recall of 0.73 at the positive 95% confidence interval. This method can be used to identify data sets that are incompletely curated, as demonstrated using the gene expression data set from ZFIN. This information can help both database resources and data consumers gauge when it may be useful to look further for published data to augment the existing expertly curated information.
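    A minimal sketch of the residual-based check described in this abstract: fit a linear model predicting annotation counts per gene, then flag genes whose residuals fall outside a 95% interval. The data and feature names below are synthetic, and the percentile-based interval is a simplification of the model confidence interval used in the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000
    # Hypothetical predictors: publications, publications already curated, constructs.
    X = rng.poisson(lam=[20, 8, 3], size=(n, 3)).astype(float)
    true_coef = np.array([0.4, 1.2, 0.7])
    y = X @ true_coef + rng.normal(scale=2.0, size=n)   # expression experiments per gene

    # Ordinary least squares with an intercept term.
    A = np.column_stack([np.ones(n), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    residuals = y - A @ coef

    # 95% interval on the residuals; genes far below it look under-curated.
    lo, hi = np.percentile(residuals, [2.5, 97.5])
    under_curated = np.where(residuals < lo)[0]
    print(f"{len(under_curated)} genes flagged as likely missing annotations")
    ```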

  7. 78 FR 41190 - Notice of Request for Clearance of a new Information Collection: National Census of Ferry Operators

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-09

    ... to produce a descriptive database of existing ferry operations. Recently enacted MAP-21 legislation... Administration (FHWA) Office of Intermodal and Statewide Planning conducted a survey of approximately 250 ferry... designed to target ridership and terminal information that typically produce unreliable and/or incomplete...

  8. The role of insurance claims databases in drug therapy outcomes research.

    PubMed

    Lewis, N J; Patwell, J T; Briesacher, B A

    1993-11-01

    The use of insurance claims databases in drug therapy outcomes research holds great promise as a cost-effective alternative to post-marketing clinical trials. Claims databases uniquely capture information about episodes of care across healthcare services and settings. They also facilitate the examination of drug therapy effects on cohorts of patients and specific patient subpopulations. However, there are limitations to the use of insurance claims databases including incomplete diagnostic and provider identification data. The characteristics of the population included in the insurance plan, the plan benefit design, and the variables of the database itself can influence the research results. Given the current concerns regarding the completeness of insurance claims databases, and the validity of their data, outcomes research usually requires original data to validate claims data or to obtain additional information. Improvements to claims databases such as standardisation of claims information reporting, addition of pertinent clinical and economic variables, and inclusion of information relative to patient severity of illness, quality of life, and satisfaction with provided care will enhance the benefit of such databases for outcomes research.

  9. Medical Image Databases

    PubMed Central

    Tagare, Hemant D.; Jaffe, C. Carl; Duncan, James

    1997-01-01

    Information contained in medical images differs considerably from that residing in alphanumeric format. The difference can be attributed to four characteristics: (1) the semantics of medical knowledge extractable from images is imprecise; (2) image information contains form and spatial data, which are not expressible in conventional language; (3) a large part of image information is geometric; (4) diagnostic inferences derived from images rest on an incomplete, continuously evolving model of normality. This paper explores the differentiating characteristics of text versus images and their impact on design of a medical image database intended to allow content-based indexing and retrieval. One strategy for implementing medical image databases is presented, which employs object-oriented iconic queries, semantics by association with prototypes, and a generic schema. PMID:9147338

  10. CHEMICAL STORAGE: MYTHS VERSUS REALITY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simmons, F

    A large number of resources explaining proper chemical storage are available. These resources include books, databases/tables, and articles that explain various aspects of chemical storage including compatible chemical storage, signage, and regulatory requirements. Another source is the chemical manufacturer or distributor who provides storage information in the form of icons or color coding schemes on container labels. Despite the availability of these resources, chemical accidents stemming from improper storage, according to recent reports (1) (2), make up almost 25% of all chemical accidents. This relatively high percentage of chemical storage accidents suggests that these publications and color coding schemes, although helpful, still provide incomplete information that may not completely mitigate storage risks. This manuscript will explore some ways published storage information may be incomplete, examine the associated risks, and suggest methods to help further eliminate chemical storage risks.

  11. ARTI refrigerant database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Calm, J.M.

    1997-02-01

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants in making comparisons and determining differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The complete documents are not included, though some may be added at a later date. The database identifies sources of specific information on various refrigerants. It addresses lubricants including alkylbenzene, polyalkylene glycol, polyolester, and other synthetics as well as mineral oils. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. Incomplete citations or abstracts are provided for some documents. They are included to accelerate availability of the information and will be completed or replaced in future updates.

  12. Construction and validation of a population-based bone densitometry database.

    PubMed

    Leslie, William D; Caetano, Patricia A; Macwilliam, Leonard R; Finlayson, Gregory S

    2005-01-01

    Utilization of dual-energy X-ray absorptiometry (DXA) for the initial diagnostic assessment of osteoporosis and in monitoring treatment has risen dramatically in recent years. Population-based studies of the impact of DXA and osteoporosis remain challenging because of incomplete and fragmented test data that exist in most regions. Our aim was to create and assess completeness of a database of all clinical DXA services and test results for the province of Manitoba, Canada and to present descriptive data resulting from testing. A regionally based bone density program for the province of Manitoba, Canada was established in 1997. Subsequent DXA services were prospectively captured in a program database. This database was retrospectively populated with earlier DXA results dating back to 1990 (the year that the first DXA scanner was installed) by integrating multiple data sources. A random chart audit was performed to assess completeness and accuracy of this dataset. For comparison, testing rates determined from the DXA database were compared with physician administrative claims data. There was a high level of completeness of this database (>99%) and accurate personal identifier information sufficient for linkage with other health care administrative data (>99%). This contrasted with physician billing data that were found to be markedly incomplete. Descriptive data provide a profile of individuals receiving DXA and their test results. In conclusion, the Manitoba bone density database has great potential as a resource for clinical and health policy research because it is population based with a high level of completeness and accuracy.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Langan, Roisin T.; Archibald, Richard K.; Lamberti, Vincent

    We have applied a new imputation-based method for analyzing incomplete data, called Monte Carlo Bayesian Database Generation (MCBDG), to the Spent Fuel Isotopic Composition (SFCOMPO) database. About 60% of the entries are absent for SFCOMPO. The method estimates missing values of a property from a probability distribution created from the existing data for the property, and then generates multiple instances of the completed database for training a machine learning algorithm. Uncertainty in the data is represented by an empirical or an assumed error distribution. The method makes few assumptions about the underlying data, and compares favorably against results obtained by replacing missing information with constant values.
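    The sketch below illustrates the basic imputation idea behind the approach described above: draw each missing value from the empirical distribution of the observed values for that column, and repeat to generate several completed copies of the table. This is a simplification of the published method, and the column names are hypothetical.

    ```python
    import numpy as np
    import pandas as pd

    def generate_completed_instances(df, n_instances=10, seed=0):
        """Fill missing cells by sampling from each column's observed values."""
        rng = np.random.default_rng(seed)
        instances = []
        for _ in range(n_instances):
            filled = df.copy()
            for col in df.columns:
                observed = df[col].dropna().to_numpy()
                missing = df[col].isna()
                if missing.any() and observed.size:
                    filled.loc[missing, col] = rng.choice(observed, size=missing.sum())
            instances.append(filled)
        return instances

    sfcompo_like = pd.DataFrame({
        "burnup":     [12.1, np.nan, 30.5, np.nan, 22.0],
        "enrichment": [3.2, 3.5, np.nan, 4.1, np.nan],
    })
    copies = generate_completed_instances(sfcompo_like, n_instances=3)
    print(copies[0])
    ```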

  14. Case retrieval in medical databases by fusing heterogeneous information.

    PubMed

    Quellec, Gwénolé; Lamard, Mathieu; Cazuguel, Guy; Roux, Christian; Cochener, Béatrice

    2011-01-01

    A novel content-based heterogeneous information retrieval framework, particularly well suited to browse medical databases and support new generation computer aided diagnosis (CADx) systems, is presented in this paper. It was designed to retrieve possibly incomplete documents, consisting of several images and semantic information, from a database; more complex data types such as videos can also be included in the framework. The proposed retrieval method relies on image processing, in order to characterize each individual image in a document by its digital content, and information fusion. Once the available images in a query document are characterized, a degree of match, between the query document and each reference document stored in the database, is defined for each attribute (an image feature or a metadata). A Bayesian network is used to recover missing information if need be. Finally, two novel information fusion methods are proposed to combine these degrees of match, in order to rank the reference documents by decreasing relevance for the query. In the first method, the degrees of match are fused by the Bayesian network itself. In the second method, they are fused by the Dezert-Smarandache theory: the second approach lets us model our confidence in each source of information (i.e., each attribute) and take it into account in the fusion process for a better retrieval performance. The proposed methods were applied to two heterogeneous medical databases, a diabetic retinopathy database and a mammography screening database, for computer aided diagnosis. Precision at five (P@5) values of 0.809 ± 0.158 and 0.821 ± 0.177, respectively, were obtained for these two databases, which is very promising.

  15. 49 CFR 630.6 - Late and incomplete reports.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 49 Transportation 7 2013-10-01 2013-10-01 false Late and incomplete reports. 630.6 Section 630.6 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL TRANSIT ADMINISTRATION, DEPARTMENT OF TRANSPORTATION NATIONAL TRANSIT DATABASE § 630.6 Late and incomplete reports. (a) Late reports...

  16. 49 CFR 630.6 - Late and incomplete reports.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 7 2010-10-01 2010-10-01 false Late and incomplete reports. 630.6 Section 630.6 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL TRANSIT ADMINISTRATION, DEPARTMENT OF TRANSPORTATION NATIONAL TRANSIT DATABASE § 630.6 Late and incomplete reports. (a) Late reports...

  17. 49 CFR 630.6 - Late and incomplete reports.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 7 2011-10-01 2011-10-01 false Late and incomplete reports. 630.6 Section 630.6 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL TRANSIT ADMINISTRATION, DEPARTMENT OF TRANSPORTATION NATIONAL TRANSIT DATABASE § 630.6 Late and incomplete reports. (a) Late reports...

  18. 49 CFR 630.6 - Late and incomplete reports.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 49 Transportation 7 2014-10-01 2014-10-01 false Late and incomplete reports. 630.6 Section 630.6 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL TRANSIT ADMINISTRATION, DEPARTMENT OF TRANSPORTATION NATIONAL TRANSIT DATABASE § 630.6 Late and incomplete reports. (a) Late reports...

  19. 49 CFR 630.6 - Late and incomplete reports.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 49 Transportation 7 2012-10-01 2012-10-01 false Late and incomplete reports. 630.6 Section 630.6 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL TRANSIT ADMINISTRATION, DEPARTMENT OF TRANSPORTATION NATIONAL TRANSIT DATABASE § 630.6 Late and incomplete reports. (a) Late reports...

  20. Nuclear Forensics Analysis with Missing and Uncertain Data

    DOE PAGES

    Langan, Roisin T.; Archibald, Richard K.; Lamberti, Vincent

    2015-10-05

    We have applied a new imputation-based method for analyzing incomplete data, called Monte Carlo Bayesian Database Generation (MCBDG), to the Spent Fuel Isotopic Composition (SFCOMPO) database. About 60% of the entries are absent for SFCOMPO. The method estimates missing values of a property from a probability distribution created from the existing data for the property, and then generates multiple instances of the completed database for training a machine learning algorithm. Uncertainty in the data is represented by an empirical or an assumed error distribution. The method makes few assumptions about the underlying data, and compares favorably against results obtained by replacing missing information with constant values.

  1. Treatment of Intravenous Leiomyomatosis with Cardiac Extension following Incomplete Resection.

    PubMed

    Doyle, Mathew P; Li, Annette; Villanueva, Claudia I; Peeceeyen, Sheen C S; Cooper, Michael G; Hanel, Kevin C; Fermanis, Gary G; Robertson, Greg

    2015-01-01

    Aim. Intravenous leiomyomatosis (IVL) with cardiac extension (CE) is a rare variant of benign uterine leiomyoma. Incomplete resection has a recurrence rate of over 30%. Different hormonal treatments have been described following incomplete resection; however no standard therapy currently exists. We review the literature for medical treatment options following incomplete resection of IVL with CE. Methods. Electronic databases were searched for all studies reporting IVL with CE. These studies were then searched for reports of patients with inoperable or incomplete resection and any further medical treatments. Our database was searched for patients with medical therapy following incomplete resection of IVL with CE and their results were included. Results. All studies were either case reports or case series. Five literature reviews confirm that surgery is the only treatment to achieve cure. The uses of progesterone, estrogen modulation, gonadotropin-releasing hormone antagonism, and aromatase inhibition have been described following incomplete resection. Currently no studies have reviewed the outcomes of these treatments. Conclusions. Complete surgical resection is the only means of cure for IVL with CE, while multiple hormonal therapies have been used with varying results following incomplete resection. Aromatase inhibitors are the only reported treatment to prevent tumor progression or recurrence in patients with incompletely resected IVL with CE.

  2. Treatment of Intravenous Leiomyomatosis with Cardiac Extension following Incomplete Resection

    PubMed Central

    Doyle, Mathew P.; Li, Annette; Villanueva, Claudia I.; Peeceeyen, Sheen C. S.; Cooper, Michael G.; Hanel, Kevin C.; Fermanis, Gary G.; Robertson, Greg

    2015-01-01

    Aim. Intravenous leiomyomatosis (IVL) with cardiac extension (CE) is a rare variant of benign uterine leiomyoma. Incomplete resection has a recurrence rate of over 30%. Different hormonal treatments have been described following incomplete resection; however no standard therapy currently exists. We review the literature for medical treatment options following incomplete resection of IVL with CE. Methods. Electronic databases were searched for all studies reporting IVL with CE. These studies were then searched for reports of patients with inoperable or incomplete resection and any further medical treatments. Our database was searched for patients with medical therapy following incomplete resection of IVL with CE and their results were included. Results. All studies were either case reports or case series. Five literature reviews confirm that surgery is the only treatment to achieve cure. The uses of progesterone, estrogen modulation, gonadotropin-releasing hormone antagonism, and aromatase inhibition have been described following incomplete resection. Currently no studies have reviewed the outcomes of these treatments. Conclusions. Complete surgical resection is the only means of cure for IVL with CE, while multiple hormonal therapies have been used with varying results following incomplete resection. Aromatase inhibitors are the only reported treatment to prevent tumor progression or recurrence in patients with incompletely resected IVL with CE. PMID:26783463

  3. Empirical study of fuzzy compatibility measures and aggregation operators

    NASA Astrophysics Data System (ADS)

    Cross, Valerie V.; Sudkamp, Thomas A.

    1992-02-01

    Two fundamental requirements for the generation of support using incomplete and imprecise information are the ability to measure the compatibility of discriminatory information with domain knowledge and the ability to fuse information obtained from disparate sources. A generic architecture utilizing the generalized fuzzy relational database model has been developed to empirically investigate the support generation capabilities of various compatibility measures and aggregation operators. This paper examines the effectiveness of combinations of compatibility measures from the set-theoretic, geometric distance, and logic-based classes paired with t-norm and generalized mean families of aggregation operators.
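    For orientation, the sketch below pairs one compatibility measure with two aggregation operators from the families named in this abstract: a set-theoretic overlap between discrete fuzzy sets, combined with the min t-norm and a generalized mean. The fuzzy sets and the exponent are illustrative, not taken from the study.

    ```python
    import numpy as np

    def overlap(a, b):
        """Set-theoretic compatibility: height of the intersection (max of min)."""
        return float(np.max(np.minimum(a, b)))

    def t_norm_min(values):
        return float(np.min(values))

    def generalized_mean(values, p=0.5):
        v = np.asarray(values, dtype=float)
        return float((np.mean(v ** p)) ** (1.0 / p))

    # Memberships of two attributes over a small discrete universe.
    query_attr1, stored_attr1 = np.array([0.2, 0.8, 1.0]), np.array([0.1, 0.9, 0.6])
    query_attr2, stored_attr2 = np.array([1.0, 0.4, 0.0]), np.array([0.7, 0.5, 0.2])

    scores = [overlap(query_attr1, stored_attr1), overlap(query_attr2, stored_attr2)]
    print("min t-norm:      ", t_norm_min(scores))
    print("generalized mean:", generalized_mean(scores))
    ```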

  4. ARTI Refrigerant Database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cain, J.M.

    1993-04-30

    The Refrigerant Database consolidates and facilitates access to information to assist industry in developing equipment using alternative refrigerants. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The complete documents are not included. The database identifies sources of specific information on R-32, R-123, R-124, R-125, R-134, R-134a, R-141b, R-142b, R-143a, R-152a, R-245ca, R-290 (propane), R-717 (ammonia), ethers, and others as well as azeotropic and zeotropic blends of these fluids. It addresses lubricants including alkylbenzene, polyalkylene glycol, ester, and other synthetics as well as mineral oils. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. Incomplete citations or abstracts are provided for some documents to accelerate availability of the information and will be completed or replaced in future updates.

  5. Search for biological specimens from midwestern parks: pitfalls and solutions

    USGS Publications Warehouse

    Bennett, J.P.

    2001-01-01

    This paper describes the results of searches of herbarium and museum collections and databases for records of vertebrate and vascular plant specimens that had been collected in 15 midwestern National Park System units. The records of these specimens were previously unknown to the National Park Service (NPS). In the course of our searches, numerous obstacles were encountered that prevented us from fully completing our task. These ranged from difficulties with the way databases are structured, to poor record-keeping, to incomplete or incorrect information on the actual location of specimens within collections. Despite these problems, we are convinced that the information to be gained from such searches is invaluable, and we believe that our experience, and the recommendations we offer, may well prove instructive to others undertaking this kind of work.

  6. Non-redundant patent sequence databases with value-added annotations at two levels

    PubMed Central

    Li, Weizhong; McWilliam, Hamish; de la Torre, Ana Richart; Grodowski, Adam; Benediktovich, Irina; Goujon, Mickael; Nauche, Stephane; Lopez, Rodrigo

    2010-01-01

    The European Bioinformatics Institute (EMBL-EBI) provides public access to patent data, including abstracts, chemical compounds and sequences. Sequences can appear multiple times due to the filing of the same invention with multiple patent offices, or the use of the same sequence by different inventors in different contexts. Information relating to the source invention may be incomplete, and biological information available in patent documents elsewhere may not be reflected in the annotation of the sequence. Search and analysis of these data have become increasingly challenging for both the scientific and intellectual-property communities. Here, we report a collection of non-redundant patent sequence databases, which cover the EMBL-Bank nucleotides patent class and the patent protein databases and contain value-added annotations from patent documents. The databases were created at two levels by the use of sequence MD5 checksums. Sequences within a level-1 cluster are 100% identical over their whole length. Level-2 clusters were defined by sub-grouping level-1 clusters based on patent family information. Value-added annotations, such as publication number corrections, earliest publication dates and feature collations, significantly enhance the quality of the data, allowing for better tracking and cross-referencing. The databases are available at: http://www.ebi.ac.uk/patentdata/nr/. PMID:19884134
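    A minimal sketch of the level-1 clustering step described above: sequences that share an MD5 digest of their normalized residue string are grouped as 100% identical. The sequence records below are made-up examples.

    ```python
    import hashlib
    from collections import defaultdict

    records = [
        ("EP0001_seq1", "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"),
        ("US0042_seq7", "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"),   # same sequence refiled elsewhere
        ("JP0099_seq2", "MASNDYTQQATQSYGAYPTQPGQGYSQQ"),
    ]

    clusters = defaultdict(list)
    for accession, seq in records:
        digest = hashlib.md5(seq.upper().encode("ascii")).hexdigest()
        clusters[digest].append(accession)

    for digest, members in clusters.items():
        print(digest[:8], members)
    ```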

  7. Non-redundant patent sequence databases with value-added annotations at two levels.

    PubMed

    Li, Weizhong; McWilliam, Hamish; de la Torre, Ana Richart; Grodowski, Adam; Benediktovich, Irina; Goujon, Mickael; Nauche, Stephane; Lopez, Rodrigo

    2010-01-01

    The European Bioinformatics Institute (EMBL-EBI) provides public access to patent data, including abstracts, chemical compounds and sequences. Sequences can appear multiple times due to the filing of the same invention with multiple patent offices, or the use of the same sequence by different inventors in different contexts. Information relating to the source invention may be incomplete, and biological information available in patent documents elsewhere may not be reflected in the annotation of the sequence. Search and analysis of these data have become increasingly challenging for both the scientific and intellectual-property communities. Here, we report a collection of non-redundant patent sequence databases, which cover the EMBL-Bank nucleotides patent class and the patent protein databases and contain value-added annotations from patent documents. The databases were created at two levels by the use of sequence MD5 checksums. Sequences within a level-1 cluster are 100% identical over their whole length. Level-2 clusters were defined by sub-grouping level-1 clusters based on patent family information. Value-added annotations, such as publication number corrections, earliest publication dates and feature collations, significantly enhance the quality of the data, allowing for better tracking and cross-referencing. The databases are available at: http://www.ebi.ac.uk/patentdata/nr/.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Calm, J.M.

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants in making comparisons and determining differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The complete documents are not included, though some may be added at a later date. The database identifies sources of specific information on many refrigerants including propane, ammonia, water, carbon dioxide, propylene, ethers, and others as well as azeotropic and zeotropic blends of these fluids. It addresses lubricants including alkylbenzene, polyalkylene glycol, polyolester, and other synthetics as well as mineral oils. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. Incomplete citations or abstracts are provided for some documents. They are included to accelerate availability of the information and will be completed or replaced in future updates.

  9. Evaluation of techniques for increasing recall in a dictionary approach to gene and protein name identification.

    PubMed

    Schuemie, Martijn J; Mons, Barend; Weeber, Marc; Kors, Jan A

    2007-06-01

    Gene and protein name identification in text requires a dictionary approach to relate synonyms to the same gene or protein, and to link names to external databases. However, existing dictionaries are incomplete. We investigate two complementary methods for automatic generation of a comprehensive dictionary: combination of information from existing gene and protein databases and rule-based generation of spelling variations. Both methods have been reported in literature before, but have hitherto not been combined and evaluated systematically. We combined gene and protein names from several existing databases of four different organisms. The combined dictionaries showed a substantial increase in recall on three different test sets, as compared to any single database. Application of 23 spelling variation rules to the combined dictionaries further increased recall. However, many rules appeared to have no effect and some appear to have a detrimental effect on precision.
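    The sketch below illustrates the two recall-raising steps described in this abstract: merging synonym lists from several source databases and generating simple spelling variants. The variation rules shown are generic examples, not the 23 rules evaluated in the paper, and the source dictionaries are invented.

    ```python
    from collections import defaultdict

    source_databases = {
        "db_a": {"TP53": ["p53", "tumor protein p53"]},
        "db_b": {"TP53": ["TP-53"], "BRCA1": ["breast cancer 1"]},
    }

    def spelling_variants(name):
        """Generate a few rule-based spelling variations of a gene/protein name."""
        variants = {name}
        variants.add(name.replace("-", ""))     # drop hyphens: "TP-53" -> "TP53"
        variants.add(name.replace("-", " "))    # hyphen to space
        variants.add(name.lower())
        return variants

    combined = defaultdict(set)
    for synonyms_by_gene in source_databases.values():
        for gene, names in synonyms_by_gene.items():
            for name in [gene, *names]:
                combined[gene].update(spelling_variants(name))

    print(sorted(combined["TP53"]))
    ```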

  10. Identification of Hospitalizations for Intentional Self-Harm when E-Codes are Incompletely Recorded

    PubMed Central

    Patrick, Amanda R.; Miller, Matthew; Barber, Catherine W.; Wang, Philip S.; Canning, Claire F.; Schneeweiss, Sebastian

    2010-01-01

    Context: Suicidal behavior has gained attention as an adverse outcome of prescription drug use. Hospitalizations for intentional self-harm, including suicide, can be identified in administrative claims databases using external cause of injury codes (E-codes). However, rates of E-code completeness in US government and commercial claims databases are low due to issues with hospital billing software. Objective: To develop an algorithm to identify intentional self-harm hospitalizations using recorded injury and psychiatric diagnosis codes in the absence of E-code reporting. Methods: We sampled hospitalizations with an injury diagnosis (ICD-9 800–995) from 2 databases with high rates of E-coding completeness: 1999–2001 British Columbia, Canada data and the 2004 U.S. Nationwide Inpatient Sample. Our gold standard for intentional self-harm was a diagnosis of E950-E958. We constructed algorithms to identify these hospitalizations using information on type of injury and presence of specific psychiatric diagnoses. Results: The algorithm that identified intentional self-harm hospitalizations with high sensitivity and specificity was a diagnosis of poisoning; toxic effects; open wound to elbow, wrist, or forearm; or asphyxiation; plus a diagnosis of depression, mania, personality disorder, psychotic disorder, or adjustment reaction. This had a sensitivity of 63%, specificity of 99%, and positive predictive value (PPV) of 86% in the Canadian database. Values in the US data were 74%, 98%, and 73%. PPV was highest (80%) in patients under 25 and lowest in those over 65 (44%). Conclusions: The proposed algorithm may be useful for researchers attempting to study intentional self-harm in claims databases with incomplete E-code reporting, especially among younger populations. PMID:20922709
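    A sketch of the algorithm's structure follows: a qualifying injury diagnosis plus a qualifying psychiatric diagnosis. The ICD-9 code prefixes below are illustrative stand-ins for the named categories, not the exact code lists used in the study.

    ```python
    # Assumed illustrative code prefixes (not the published code lists).
    QUALIFYING_INJURY = (
        [str(c) for c in range(960, 990)] +   # poisoning / toxic effects (assumed range)
        ["881"] +                             # open wound of elbow, forearm, wrist (assumed)
        ["994.7"]                             # asphyxiation / strangulation (assumed)
    )
    QUALIFYING_PSYCH = ["296", "295", "301", "309", "311"]   # assumed mood, psychotic,
                                                             # personality, adjustment codes

    def has_code(codes, prefixes):
        return any(code.startswith(p) for code in codes for p in prefixes)

    def probable_self_harm(diagnosis_codes):
        """Flag a hospitalization when both an injury and a psychiatric code qualify."""
        return (has_code(diagnosis_codes, QUALIFYING_INJURY)
                and has_code(diagnosis_codes, QUALIFYING_PSYCH))

    print(probable_self_harm(["965.4", "311"]))    # poisoning + depressive disorder -> True
    print(probable_self_harm(["881.0", "401.9"]))  # wound without psychiatric code -> False
    ```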

  11. Locating relevant patient information in electronic health record data using representations of clinical concepts and database structures.

    PubMed

    Pan, Xuequn; Cimino, James J

    2014-01-01

    Clinicians and clinical researchers often seek information in electronic health records (EHRs) that are relevant to some concept of interest, such as a disease or finding. The heterogeneous nature of EHRs can complicate retrieval, risking incomplete results. We frame this problem as the presence of two gaps: 1) a gap between clinical concepts and their representations in EHR data and 2) a gap between data representations and their locations within EHR data structures. We bridge these gaps with a knowledge structure that comprises relationships among clinical concepts (including concepts of interest and concepts that may be instantiated in EHR data) and relationships between clinical concepts and the database structures. We make use of available knowledge resources to develop a reproducible, scalable process for creating a knowledge base that can support automated query expansion from a clinical concept to all relevant EHR data.
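    As a rough illustration of the query-expansion idea described above, the sketch below follows relationships from a clinical concept of interest to related concepts and then to the schema locations where those concepts are instantiated. The relations and locations are invented placeholders, not the knowledge base built in the paper.

    ```python
    from collections import deque

    CONCEPT_GRAPH = {          # concept -> narrower/related concepts (hypothetical)
        "diabetes mellitus": ["type 2 diabetes", "hemoglobin A1c"],
        "type 2 diabetes": ["metformin therapy"],
    }
    DATA_LOCATIONS = {         # concept -> where it appears in the EHR schema (hypothetical)
        "type 2 diabetes": [("diagnoses", "icd_code")],
        "hemoglobin A1c": [("lab_results", "loinc_code")],
        "metformin therapy": [("medication_orders", "rxnorm_code")],
    }

    def expand(concept):
        """Breadth-first expansion from a concept to all linked data locations."""
        seen, queue, locations = {concept}, deque([concept]), []
        while queue:
            current = queue.popleft()
            locations.extend(DATA_LOCATIONS.get(current, []))
            for related in CONCEPT_GRAPH.get(current, []):
                if related not in seen:
                    seen.add(related)
                    queue.append(related)
        return locations

    print(expand("diabetes mellitus"))
    ```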

  12. A Firefly Algorithm-based Approach for Pseudo-Relevance Feedback: Application to Medical Database.

    PubMed

    Khennak, Ilyes; Drias, Habiba

    2016-11-01

    The difficulty of disambiguating the sense of the incomplete and imprecise keywords that are extensively used in search queries has caused the failure of search systems to retrieve the desired information. One of the most powerful and promising methods to overcome this shortcoming and improve the performance of search engines is query expansion, whereby the user's original query is augmented by new keywords that best characterize the user's information needs and produce a more useful query. In this paper, a new Firefly Algorithm-based approach is proposed to enhance the retrieval effectiveness of query expansion while maintaining low computational complexity. In contrast to the existing literature, the proposed approach uses a Firefly Algorithm to find the best expanded query among a set of expanded query candidates. Moreover, this new approach allows the determination of the length of the expanded query empirically. Experimental results on MEDLINE, the on-line medical information database, show that our proposed approach is more effective and efficient compared to the state-of-the-art.
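    A generic firefly-algorithm sketch applied to picking expansion terms follows: each firefly is a weight vector over candidate terms, and brightness is a made-up fitness that rewards assumed per-term relevance. This illustrates only the optimization loop, not the paper's exact formulation or fitness function.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    candidate_terms = ["myocardial", "infarction", "cardiac", "enzyme", "aspirin"]
    relevance = np.array([0.9, 0.8, 0.7, 0.3, 0.2])   # assumed per-term relevance

    def fitness(x):
        chosen = x > 0.5
        if not chosen.any():
            return 0.0
        # Reward relevant terms, lightly penalize overly long expansions.
        return relevance[chosen].sum() - 0.15 * chosen.sum()

    n_fireflies, dim, beta0, gamma, alpha = 12, len(candidate_terms), 1.0, 1.0, 0.1
    X = rng.random((n_fireflies, dim))

    for _ in range(50):
        bright = np.array([fitness(x) for x in X])
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if bright[j] > bright[i]:                       # move i toward brighter j
                    r2 = np.sum((X[i] - X[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    X[i] += beta * (X[j] - X[i]) + alpha * (rng.random(dim) - 0.5)
                    X[i] = np.clip(X[i], 0.0, 1.0)

    best = X[np.argmax([fitness(x) for x in X])]
    print("expanded query terms:", [t for t, w in zip(candidate_terms, best) if w > 0.5])
    ```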

  13. Global Data Toolset (GDT)

    USGS Publications Warehouse

    Cress, Jill J.; Riegle, Jodi L.

    2007-01-01

    According to the United Nations Environment Programme World Conservation Monitoring Centre (UNEP-WCMC) approximately 60 percent of the data contained in the World Database on Protected Areas (WDPA) has missing or incomplete boundary information. As a result, global analyses based on the WDPA can be inaccurate, and professionals responsible for natural resource planning and priority setting must rely on incomplete geospatial data sets. To begin to address this problem the World Data Center for Biodiversity and Ecology, in cooperation with the U. S. Geological Survey (USGS) Rocky Mountain Geographic Science Center (RMGSC), the National Biological Information Infrastructure (NBII), the Global Earth Observation System, and the Inter-American Biodiversity Information Network (IABIN) sponsored a Protected Area (PA) workshop in Asuncion, Paraguay, in November 2007. The primary goal of this workshop was to train representatives from eight South American countries on the use of the Global Data Toolset (GDT) for reviewing and editing PA data. Use of the GDT will allow PA experts to compare their national data to other data sets, including non-governmental organization (NGO) and WCMC data, in order to highlight inaccuracies or gaps in the data, and then to apply any needed edits, especially in the delineation of the PA boundaries. In addition, familiarizing the participants with the web-enabled GDT will allow them to maintain and improve their data after the workshop. Once data edits have been completed the GDT will also allow the country authorities to perform any required review and validation processing. Once validated, the data can be used to update the global WDPA and IABIN databases, which will enhance analysis on global and regional levels.

  14. LitVar: a semantic search engine for linking genomic variant data in PubMed and PMC.

    PubMed

    Allot, Alexis; Peng, Yifan; Wei, Chih-Hsuan; Lee, Kyubum; Phan, Lon; Lu, Zhiyong

    2018-05-14

    The identification and interpretation of genomic variants play a key role in the diagnosis of genetic diseases and related research. These tasks increasingly rely on accessing relevant manually curated information from domain databases (e.g. SwissProt or ClinVar). However, due to the sheer volume of medical literature and high cost of expert curation, curated variant information in existing databases are often incomplete and out-of-date. In addition, the same genetic variant can be mentioned in publications with various names (e.g. 'A146T' versus 'c.436G>A' versus 'rs121913527'). A search in PubMed using only one name usually cannot retrieve all relevant articles for the variant of interest. Hence, to help scientists, healthcare professionals, and database curators find the most up-to-date published variant research, we have developed LitVar for the search and retrieval of standardized variant information. In addition, LitVar uses advanced text mining techniques to compute and extract relationships between variants and other associated entities such as diseases and chemicals/drugs. LitVar is publicly available at https://www.ncbi.nlm.nih.gov/CBBresearch/Lu/Demo/LitVar.
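    The sketch below shows one way to recognize the three variant name styles mentioned above (protein-level "A146T", coding HGVS "c.436G>A", dbSNP "rs121913527") so that mentions can be mapped to a canonical identifier. The regular expressions are simplified and the lookup table is a made-up example of such a mapping, not LitVar's pipeline.

    ```python
    import re

    PATTERNS = {
        "rsid":    re.compile(r"\brs\d+\b"),
        "hgvs_c":  re.compile(r"\bc\.\d+[ACGT]>[ACGT]\b"),
        "protein": re.compile(r"\b[A-Z]\d+[A-Z]\b"),
    }

    # Hypothetical synonym table: every known spelling points to the same rsID.
    CANONICAL = {"A146T": "rs121913527", "c.436G>A": "rs121913527",
                 "rs121913527": "rs121913527"}

    def extract_variants(text):
        """Return (style, mention, canonical id) for every variant-like mention."""
        found = []
        for label, pattern in PATTERNS.items():
            for mention in pattern.findall(text):
                found.append((label, mention, CANONICAL.get(mention)))
        return found

    print(extract_variants("The KRAS A146T (c.436G>A, rs121913527) variant ..."))
    ```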

  15. An evaluation of case completeness for New Zealand Coronial case files held on the Australasian National Coronial Information System (NCIS).

    PubMed

    Lilley, Rebbecca; Davie, Gabrielle; Wilson, Suzanne

    2016-10-01

    Large administrative databases provide powerful opportunities for examining the epidemiology of injury. The National Coronial Information System (NCIS) contains Coronial data from Australia and New Zealand (NZ); however, only closed cases are stored for NZ. This paper examines the completeness of NZ data within the NCIS and its impact upon the validity and utility of this database. A retrospective review of the capture of NZ cases of quad-related fatalities held in the NCIS was undertaken by identifying outstanding Coronial cases held on the NZ Coronial Management System (primary source of NZ Coronial data). NZ data held on the NCIS database were incomplete due to the non-capture of closed cases and the unavailability of open cases. Improvements to the information provided on the NCIS about the completeness of NZ data are needed to improve the validity of NCIS-derived findings and the overall utility of the NCIS for research. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  16. Beyond PubMed: Searching the "Grey Literature" for Clinical Trial Results.

    PubMed

    Citrome, Leslie

    2014-07-01

    Clinical trial results have been traditionally communicated through the publication of scholarly reports and reviews in biomedical journals. However, this dissemination of information can be delayed or incomplete, making it difficult to appraise new treatments, or in the case of missing data, evaluate older interventions. Going beyond the routine search of PubMed, it is possible to discover additional information in the "grey literature." Examples of the grey literature include clinical trial registries, patent databases, company and industrywide repositories, regulatory agency digital archives, abstracts of paper and poster presentations on meeting/congress websites, industry investor reports and press releases, and institutional and personal websites.

  17. Specification and Enforcement of Semantic Integrity Constraints in Microsoft Access

    ERIC Educational Resources Information Center

    Dadashzadeh, Mohammad

    2007-01-01

    Semantic integrity constraints are business-specific rules that limit the permissible values in a database. For example, a university rule dictating that an "incomplete" grade cannot be changed to an A constrains the possible states of the database. To maintain database integrity, business rules should be identified in the course of database…

  18. Pseudo-set framing.

    PubMed

    Barasz, Kate; John, Leslie K; Keenan, Elizabeth A; Norton, Michael I

    2017-10-01

    Pseudo-set framing-arbitrarily grouping items or tasks together as part of an apparent "set"-motivates people to reach perceived completion points. Pseudo-set framing changes gambling choices (Study 1), effort (Studies 2 and 3), giving behavior (Field Data and Study 4), and purchase decisions (Study 5). These effects persist in the absence of any reward, when a cost must be incurred, and after participants are explicitly informed of the arbitrariness of the set. Drawing on Gestalt psychology, we develop a conceptual account that predicts what will-and will not-act as a pseudo-set, and defines the psychological process through which these pseudo-sets affect behavior: over and above typical reference points, pseudo-set framing alters perceptions of (in)completeness, making intermediate progress seem less complete. In turn, these feelings of incompleteness motivate people to persist until the pseudo-set has been fulfilled. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  19. The MAR databases: development and implementation of databases specific for marine metagenomics

    PubMed Central

    Klemetsen, Terje; Raknes, Inge A; Fu, Juan; Agafonov, Alexander; Balasundaram, Sudhagar V; Tartari, Giacomo; Robertsen, Espen

    2018-01-01

    We introduce the marine databases MarRef, MarDB and MarCat (https://mmp.sfb.uit.no/databases/), which are publicly available resources that promote marine research and innovation. These data resources, which have been implemented in the Marine Metagenomics Portal (MMP) (https://mmp.sfb.uit.no/), are collections of richly annotated and manually curated contextual (metadata) and sequence databases representing three tiers of accuracy. While MarRef is a database for completely sequenced marine prokaryotic genomes, which represent a marine prokaryote reference genome database, MarDB includes all incompletely sequenced prokaryotic genomes regardless of their level of completeness. The last database, MarCat, represents a gene (protein) catalog of uncultivable (and cultivable) marine genes and proteins derived from marine metagenomics samples. The first versions of MarRef and MarDB contain 612 and 3726 records, respectively. Each record is built up of 106 metadata fields including attributes for sampling, sequencing, assembly and annotation in addition to the organism and taxonomic information. Currently, MarCat contains 1227 records with 55 metadata fields. Ontologies and controlled vocabularies are used in the contextual databases to enhance consistency. The user-friendly web interface lets visitors browse, filter and search in the contextual databases and perform BLAST searches against the corresponding sequence databases. All contextual and sequence databases are freely accessible and downloadable from https://s1.sfb.uit.no/public/mar/. PMID:29106641

  20. Argonne Geothermal Geochemical Database v2.0

    DOE Data Explorer

    Harto, Christopher

    2013-05-22

    A database of geochemical data from potential geothermal sources aggregated from multiple sources as of March 2010. The database contains fields for the location, depth, temperature, pH, total dissolved solids concentration, chemical composition, and date of sampling. A separate tab contains data on non-condensible gas compositions. The database contains records for over 50,000 wells, although many entries are incomplete. Current versions of source documentation are listed in the dataset.
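    For illustration, a record structure mirroring the fields listed above is sketched below; the field names are paraphrased from the description and the example values are invented.

    ```python
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class GeothermalWellRecord:
        latitude: Optional[float]
        longitude: Optional[float]
        depth_m: Optional[float]
        temperature_c: Optional[float]
        ph: Optional[float]
        tds_mg_per_l: Optional[float]     # total dissolved solids concentration
        chemistry: Optional[dict]         # major ion composition
        sample_date: Optional[str]

        def is_complete(self):
            return all(getattr(self, f) is not None for f in self.__dataclass_fields__)

    record = GeothermalWellRecord(44.6, -110.5, 152.0, 98.0, None, 1350.0, None, "1998-07-14")
    print(record.is_complete())   # False: many entries in the database are incomplete
    ```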

  1. PatGen--a consolidated resource for searching genetic patent sequences.

    PubMed

    Rouse, Richard J D; Castagnetto, Jesus; Niedner, Roland H

    2005-04-15

    Compared to the wealth of online resources covering genomic, proteomic and derived data, the Bioinformatics community is rather underserved when it comes to patent information related to biological sequences. The current online resources are either incomplete or rather expensive. This paper describes PatGen, an integrated database containing data from bioinformatic and patent resources. This effort addresses the inconsistency of publicly available genetic patent data coverage by providing access to a consolidated dataset. PatGen can be searched at http://www.patgendb.com. Contact: rjdrouse@patentinformatics.com.

  2. Beyond PubMed: Searching the “Grey Literature” for Clinical Trial Results

    PubMed Central

    2014-01-01

    Clinical trial results have been traditionally communicated through the publication of scholarly reports and reviews in biomedical journals. However, this dissemination of information can be delayed or incomplete, making it difficult to appraise new treatments, or in the case of missing data, evaluate older interventions. Going beyond the routine search of PubMed, it is possible to discover additional information in the “grey literature.” Examples of the grey literature include clinical trial registries, patent databases, company and industrywide repositories, regulatory agency digital archives, abstracts of paper and poster presentations on meeting/congress websites, industry investor reports and press releases, and institutional and personal websites. PMID:25337445

  3. The Impact of Environment and Occupation on the Health and Safety of Active Duty Air Force Members: Database Development and De-Identification.

    PubMed

    Erich, Roger; Eaton, Melinda; Mayes, Ryan; Pierce, Lamar; Knight, Andrew; Genovesi, Paul; Escobar, James; Mychalczuk, George; Selent, Monica

    2016-08-01

    Preparing data for medical research can be challenging, detail oriented, and time consuming. Transcription errors, missing or nonsensical data, and records not applicable to the study population may hamper progress and, if unaddressed, can lead to erroneous conclusions. In addition, study data may be housed in multiple disparate databases and complex formats. Merging methods may be incomplete to obtain temporally synchronized data elements. We created a comprehensive database to explore the general hypothesis that environmental and occupational factors influence health outcomes and risk-taking behavior among active duty Air Force personnel. Several databases containing demographics, medical records, health survey responses, and safety incident reports were cleaned, validated, and linked to form a comprehensive, relational database. The final step involved removing and transforming personally identifiable information to form a Health Insurance Portability and Accountability Act compliant limited database. Initial data consisted of over 62.8 million records containing 221 variables. When completed, approximately 23.9 million clean and valid records with 214 variables remained. With a clean, robust database, future analysis aims to identify high-risk career fields for targeted interventions or uncover potential protective factors in low-risk career fields. Reprint & Copyright © 2016 Association of Military Surgeons of the U.S.

  4. Database for Parkinson Disease Mutations and Rare Variants

    DTIC Science & Technology

    2016-09-01

    Award Number: W81XWH-14-1-0097. Title: "Database for Parkinson Disease Mutations and Rare Variants." Principal Investigator: Jeffery M. Vance. Report Date: September 2016. Report Type: Final. Dates Covered: 1 Jul 2014 – 30 Jun 2016. For Parkinson Disease (PD) specifically, the variant databases currently available are incomplete, don't assess impact, and/or are not equipped to…

  5. The MAR databases: development and implementation of databases specific for marine metagenomics.

    PubMed

    Klemetsen, Terje; Raknes, Inge A; Fu, Juan; Agafonov, Alexander; Balasundaram, Sudhagar V; Tartari, Giacomo; Robertsen, Espen; Willassen, Nils P

    2018-01-04

    We introduce the marine databases MarRef, MarDB and MarCat (https://mmp.sfb.uit.no/databases/), which are publicly available resources that promote marine research and innovation. These data resources, which have been implemented in the Marine Metagenomics Portal (MMP) (https://mmp.sfb.uit.no/), are collections of richly annotated and manually curated contextual (metadata) and sequence databases representing three tiers of accuracy. While MarRef is a database for completely sequenced marine prokaryotic genomes, which represent a marine prokaryote reference genome database, MarDB includes all incompletely sequenced prokaryotic genomes regardless of their level of completeness. The last database, MarCat, represents a gene (protein) catalog of uncultivable (and cultivable) marine genes and proteins derived from marine metagenomics samples. The first versions of MarRef and MarDB contain 612 and 3726 records, respectively. Each record is built up of 106 metadata fields including attributes for sampling, sequencing, assembly and annotation in addition to the organism and taxonomic information. Currently, MarCat contains 1227 records with 55 metadata fields. Ontologies and controlled vocabularies are used in the contextual databases to enhance consistency. The user-friendly web interface lets visitors browse, filter and search in the contextual databases and perform BLAST searches against the corresponding sequence databases. All contextual and sequence databases are freely accessible and downloadable from https://s1.sfb.uit.no/public/mar/. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  6. 43 CFR 46.125 - Incomplete or unavailable information.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 43 Public Lands: Interior 1 2011-10-01 2011-10-01 false Incomplete or unavailable information. 46... THE NATIONAL ENVIRONMENTAL POLICY ACT OF 1969 Protection and Enhancement of Environmental Quality § 46.125 Incomplete or unavailable information. In circumstances where the provisions of 40 CFR 1502.22...

  7. 43 CFR 46.125 - Incomplete or unavailable information.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 43 Public Lands: Interior 1 2013-10-01 2013-10-01 false Incomplete or unavailable information. 46... THE NATIONAL ENVIRONMENTAL POLICY ACT OF 1969 Protection and Enhancement of Environmental Quality § 46.125 Incomplete or unavailable information. In circumstances where the provisions of 40 CFR 1502.22...

  8. A Machine Reading System for Assembling Synthetic Paleontological Databases

    PubMed Central

    Peters, Shanan E.; Zhang, Ce; Livny, Miron; Ré, Christopher

    2014-01-01

    Many aspects of macroevolutionary theory and our understanding of biotic responses to global environmental change derive from literature-based compilations of paleontological data. Existing manually assembled databases are, however, incomplete and difficult to assess and enhance with new data types. Here, we develop and validate the quality of a machine reading system, PaleoDeepDive, that automatically locates and extracts data from heterogeneous text, tables, and figures in publications. PaleoDeepDive performs comparably to humans in several complex data extraction and inference tasks and generates congruent synthetic results that describe the geological history of taxonomic diversity and genus-level rates of origination and extinction. Unlike traditional databases, PaleoDeepDive produces a probabilistic database that systematically improves as information is added. We show that the system can readily accommodate sophisticated data types, such as morphological data in biological illustrations and associated textual descriptions. Our machine reading approach to scientific data integration and synthesis brings within reach many questions that are currently underdetermined and does so in ways that may stimulate entirely new modes of inquiry. PMID:25436610

  9. Use of speech-to-text technology for documentation by healthcare providers.

    PubMed

    Ajami, Sima

    2016-01-01

    Medical records are a critical component of a patient's treatment. However, documentation of patient-related information is considered a secondary activity in the provision of healthcare services, often leading to incomplete medical records and patient data of low quality. Advances in information technology (IT) in the health system and registration of information in electronic health records (EHR) using speech-to-text conversion software have facilitated service delivery. This narrative review is a literature search with the help of libraries, books, conference proceedings, the Science Direct, PubMed, Proquest, Springer, and SID (Scientific Information Database) databases, and search engines such as Yahoo and Google. I used the following keywords and their combinations: speech recognition, automatic report documentation, voice to text software, healthcare, information, and voice recognition. Due to lack of knowledge of other languages, I searched all texts in English or Persian with no time limits. Of a total of 70, only 42 articles were selected. Speech-to-text conversion technology offers opportunities to improve the documentation process of medical records, reduce cost and time of recording information, enhance the quality of documentation, improve the quality of services provided to patients, and support healthcare providers in legal matters. Healthcare providers should recognize the impact of this technology on service delivery.

  10. Use of linkage to improve the completeness of the SIM and SINASC in the Brazilian capitals

    PubMed Central

    Maia, Lívia Teixeira de Souza; de Souza, Wayner Vieira; Mendes, Antonio da Cruz Gouveia; da Silva, Aline Galdino Soares

    2017-01-01

    ABSTRACT OBJECTIVE To analyze the contribution of linkage between databases of live births and infant mortality to improve the completeness of the variables common to the Mortality Information System (SIM) and the Live Birth Information System (SINASC) in Brazilian capitals in 2012. METHODS We studied 9,001 deaths of children under one year registered in the SIM in 2012 and 1,424,691 live births present in the SINASC in 2011 and 2012. The databases were related with linkage in two steps – deterministic and probabilistic. We calculated the percentage of incompleteness of the variables common to the SIM and SINASC before and after using the technique. RESULTS We could relate 90.8% of the deaths to their respective declarations of live birth, most of them paired deterministically. We found a higher percentage of pairs in Porto Alegre, Curitiba, and Campo Grande. In the capitals of the North region, the average of pairs was 84.2%; in the South region, this result reached 97.9%. The 11 variables common to the SIM and SINASC had 11,278 incomplete fields cumulatively, and we could recover 91.4% of the data after linkage. Before linkage, five variables presented excellent completeness in the SINASC in all Brazilian capitals, but only one variable had the same status in the SIM. After applying this technique, all 11 variables of the SINASC became excellent, while this occurred in seven variables of the SIM. The city of birth was significantly associated with the death component in the quality of the information. CONCLUSIONS Despite advances in the coverage and quality of the SIM and SINASC, problems in the completeness of the variables can still be identified, especially in the SIM. In this perspective, linkage can be used to qualify important information for the analysis of infant mortality. PMID:29211201
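
    The two-step recovery described above can be illustrated with a toy example. The sketch below is not the authors' code: the field names, the deterministic key, and the data are hypothetical, and the probabilistic second step is omitted. It only shows how a deterministic pairing lets missing fields in a death record be filled from the matched live-birth record, and how completeness can be measured before and after.

```python
import pandas as pd

def incompleteness(df, columns):
    """Percent of missing fields per variable."""
    return (df[columns].isna().mean() * 100).round(1)

# Hypothetical minimal extracts; all field names are placeholders.
sim = pd.DataFrame({                      # infant death records
    "dn_number": ["001", "002", None],    # declaration-of-live-birth number
    "mother_name": ["ANA SILVA", None, "RITA LIMA"],
    "birth_weight": [None, 2900, None],
})
sinasc = pd.DataFrame({                   # live-birth records
    "dn_number": ["001", "002", "003"],
    "mother_name": ["ANA SILVA", "BIA SOUZA", "RITA LIMA"],
    "birth_weight": [3100, 2900, 2450],
})

shared = ["mother_name", "birth_weight"]
print("SIM incompleteness before linkage (%):\n", incompleteness(sim, shared))

# Step 1: deterministic linkage on an exact key (here the DN number).
linked = sim.merge(sinasc, on="dn_number", how="left", suffixes=("", "_sinasc"))

# Step 2, probabilistic matching on names and dates, is omitted in this sketch.
# Recover missing SIM fields from the paired SINASC record.
for col in shared:
    linked[col] = linked[col].fillna(linked[f"{col}_sinasc"])

print("SIM incompleteness after recovery (%):\n", incompleteness(linked, shared))
```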

  11. Development of Integrated Information System for Travel Bureau Company

    NASA Astrophysics Data System (ADS)

    Karma, I. G. M.; Susanti, J.

    2018-01-01

    In travel bureau companies, the information that management, and managers in particular, need for effective decision-making is frequently delayed or incomplete. Although operations are already computer-assisted, each existing application handles only one particular activity and is not integrated with the others. This research is intended to produce an integrated information system that handles the overall operational activities of the company. By applying an object-oriented system development approach, the system was built with the Visual Basic .NET programming language and the MySQL database package. The result is a system that consists of four separate program packages: a Reservation System, an AR System, an AP System, and an Accounting System. Based on the output, we conclude that this system is able to produce integrated, up-to-date reservation, operational, and financial information that supports operational activities and decision-making by the related parties.

  12. A novel deep learning algorithm for incomplete face recognition: Low-rank-recovery network.

    PubMed

    Zhao, Jianwei; Lv, Yongbiao; Zhou, Zhenghua; Cao, Feilong

    2017-10-01

    There have been many methods to address the recognition of complete face images. However, in real applications, the images to be recognized are usually incomplete, and such recognition is more difficult. In this paper, a novel convolutional neural network framework, named the low-rank-recovery network (LRRNet), is proposed to address this difficulty effectively, inspired by matrix completion and deep learning techniques. The proposed LRRNet first recovers the incomplete face images via a matrix completion approach with the truncated nuclear norm regularization solution, and then extracts some low-rank parts of the recovered images as the filters. With these filters, important features are obtained by means of binarization and histogram algorithms. Finally, these features are classified with classical support vector machines (SVMs). The proposed LRRNet achieves high face recognition rates for heavily corrupted images, and it performs well and efficiently, especially in the case of large databases. Extensive experiments on several benchmark databases demonstrate that the proposed LRRNet performs better than other excellent robust face recognition methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
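
    As a rough illustration of the recovery step, the sketch below fills missing entries of a low-rank matrix by iterating a truncated SVD. This is a deliberately simpler stand-in for the truncated-nuclear-norm-regularized solver used by LRRNet, and the data are a synthetic low-rank matrix rather than face images.

```python
import numpy as np

def complete_low_rank(M, mask, rank=5, iters=100):
    """M: matrix with arbitrary values where mask==0; mask: 1=observed, 0=missing."""
    X = np.where(mask, M, M[mask == 1].mean())   # initialise missing entries with the observed mean
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s[rank:] = 0.0                           # keep only the leading singular values
        X_low = (U * s) @ Vt
        X = np.where(mask, M, X_low)             # re-impose the observed entries
    return X

rng = np.random.default_rng(0)
truth = rng.standard_normal((40, 5)) @ rng.standard_normal((5, 30))  # rank-5 "image"
mask = rng.random(truth.shape) > 0.4                                  # ~40% of entries missing
recovered = complete_low_rank(np.where(mask, truth, 0.0), mask.astype(int), rank=5)
print("Relative error on missing entries:",
      np.linalg.norm((recovered - truth)[~mask]) / np.linalg.norm(truth[~mask]))
```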

  13. Identification and correction of abnormal, incomplete and mispredicted proteins in public databases.

    PubMed

    Nagy, Alinda; Hegyi, Hédi; Farkas, Krisztina; Tordai, Hedvig; Kozma, Evelin; Bányai, László; Patthy, László

    2008-08-27

    Despite significant improvements in computational annotation of genomes, sequences of abnormal, incomplete or incorrectly predicted genes and proteins remain abundant in public databases. Since the majority of incomplete, abnormal or mispredicted entries are not annotated as such, these errors seriously affect the reliability of these databases. Here we describe the MisPred approach that may provide an efficient means for the quality control of databases. The current version of the MisPred approach uses five distinct routines for identifying abnormal, incomplete or mispredicted entries based on the principle that a sequence is likely to be incorrect if some of its features conflict with our current knowledge about protein-coding genes and proteins: (i) conflict between the predicted subcellular localization of proteins and the absence of the corresponding sequence signals; (ii) presence of extracellular and cytoplasmic domains and the absence of transmembrane segments; (iii) co-occurrence of extracellular and nuclear domains; (iv) violation of domain integrity; (v) chimeras encoded by two or more genes located on different chromosomes. Analyses of predicted EnsEMBL protein sequences of nine deuterostome (Homo sapiens, Mus musculus, Rattus norvegicus, Monodelphis domestica, Gallus gallus, Xenopus tropicalis, Fugu rubripes, Danio rerio and Ciona intestinalis) and two protostome species (Caenorhabditis elegans and Drosophila melanogaster) have revealed that the absence of expected signal peptides and violation of domain integrity account for the majority of mispredictions. Analyses of sequences predicted by NCBI's GNOMON annotation pipeline show that the rates of mispredictions are comparable to those of EnsEMBL. Interestingly, even the manually curated UniProtKB/Swiss-Prot dataset is contaminated with mispredicted or abnormal proteins, although to a much lesser extent than UniProtKB/TrEMBL or the EnsEMBL or GNOMON-predicted entries. MisPred works efficiently in identifying errors in predictions generated by the most reliable gene prediction tools such as the EnsEMBL and NCBI's GNOMON pipelines and also guides the correction of errors. We suggest that application of the MisPred approach will significantly improve the quality of gene predictions and the associated databases.
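
    Two of the five routines lend themselves to a simple illustration. The sketch below checks rule (i) and rule (iii) on a hand-made annotation record; the boolean flags stand in for real signal-peptide and domain predictions, which in practice come from dedicated sequence-analysis tools rather than this toy code.

```python
# Minimal sketch of two MisPred-style consistency rules, using assumed
# feature flags rather than real sequence analysis.
def mispred_flags(entry):
    """Return rule violations for a protein annotation record (dict of flags/lists)."""
    flags = []
    # Rule (i): a predicted secreted/extracellular protein should carry a signal peptide.
    if entry.get("predicted_extracellular") and not entry.get("has_signal_peptide"):
        flags.append("extracellular localization without signal peptide")
    # Rule (iii): extracellular and nuclear domains should not co-occur.
    domains = set(entry.get("domain_locations", []))
    if {"extracellular", "nuclear"} <= domains:
        flags.append("co-occurrence of extracellular and nuclear domains")
    return flags

example = {
    "predicted_extracellular": True,
    "has_signal_peptide": False,
    "domain_locations": ["extracellular", "nuclear"],
}
print(mispred_flags(example))
```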

  14. Barriers and facilitators to exchanging health information: a systematic review.

    PubMed

    Eden, Karen B; Totten, Annette M; Kassakian, Steven Z; Gorman, Paul N; McDonagh, Marian S; Devine, Beth; Pappas, Miranda; Daeges, Monica; Woods, Susan; Hersh, William R

    2016-04-01

    We conducted a systematic review of studies assessing facilitators and barriers to use of health information exchange (HIE). We searched MEDLINE, PsycINFO, CINAHL, and the Cochrane Library databases between January 1990 and February 2015 using terms related to HIE. English-language studies that identified barriers and facilitators of actual HIE were included. Data on study design, risk of bias, setting, geographic location, characteristics of the HIE, perceived barriers and facilitators to use were extracted and confirmed. Ten cross-sectional, seven multiple-site case studies, and two before-after studies that included data from several sources (surveys, interviews, focus groups, and observations of users) evaluated perceived barriers and facilitators to HIE use. The most commonly cited barriers to HIE use were incomplete information, inefficient workflow, and reports that the exchanged information that did not meet the needs of users. The review identified several facilitators to use. Incomplete patient information was consistently mentioned in the studies conducted in the US but not mentioned in the few studies conducted outside of the US that take a collective approach toward healthcare. Individual patients and practices in the US may exercise the right to participate (or not) in HIE which effects the completeness of patient information available to be exchanged. Workflow structure and user roles are key but understudied. We identified several facilitators in the studies that showed promise in promoting electronic health data exchange: obtaining more complete patient information; thoughtful workflow that folds in HIE; and inclusion of users early in implementation. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  15. The potential application of European market research data in dietary exposure modelling of food additives.

    PubMed

    Tennant, David Robin; Bruyninckx, Chris

    2018-03-01

    Consumer exposure assessments for food additives are incomplete without information about the proportions of foods in each authorised category that contain the additive. Such information has been difficult to obtain but the Mintel Global New Products Database (GNPD) provides information about product launches across Europe over the past 20 years. These data can be searched to identify products with specific additives listed on product labels and the numbers compared with total product launches for food and drink categories in the same database to determine the frequency of occurrence. There are uncertainties associated with the data but these can be managed by adopting a cautious and conservative approach. GNPD data can be mapped with authorised food categories and with food descriptions used in the EFSA Comprehensive European Food Consumption Surveys Database for exposure modelling. The data, when presented as percent occurrence, could be incorporated into the EFSA ANS Panel's 'brand-loyal/non-brand-loyal' exposure model in a quantitative way. Case studies of preservative, antioxidant, colour and sweetener additives showed that the impact of including occurrence data is greatest in the non-brand-loyal scenario. Recommendations for future research include identifying occurrence data for alcoholic beverages, linking regulatory food codes, FoodEx and GNPD product descriptions, developing the use of occurrence data for carry-over foods and improving understanding of brand loyalty in consumer exposure models.
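
    The core quantity is simple: the occurrence frequency is the number of GNPD product launches listing the additive divided by all launches in the matching category. The numbers below are invented, and the final multiplication is only a schematic of how such a fraction could scale a non-brand-loyal exposure estimate; it does not reproduce the EFSA ANS Panel model.

```python
# Back-of-the-envelope occurrence calculation with made-up figures.
launches_with_additive = 180      # category products listing the additive on the label
total_launches = 1200             # all products launched in that category

occurrence = launches_with_additive / total_launches
print(f"Occurrence: {occurrence:.1%}")

# Non-brand-loyal scenario (schematic): consumption is spread across many brands,
# so a mean exposure computed as if every product contained the additive can be
# scaled by the occurrence fraction.
exposure_if_all_contain = 2.4     # mg/kg bw/day, hypothetical
adjusted_exposure = exposure_if_all_contain * occurrence
print(f"Occurrence-adjusted exposure: {adjusted_exposure:.2f} mg/kg bw/day")
```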

  16. USGS Mineral Resources Program; national maps and datasets for research and land planning

    USGS Publications Warehouse

    Nicholson, S.W.; Stoeser, D.B.; Ludington, S.D.; Wilson, Frederic H.

    2001-01-01

    The U.S. Geological Survey, the Nation's leader in producing and maintaining earth science data, serves as an advisor to Congress, the Department of the Interior, and many other Federal and State agencies. Nationwide datasets that are easily available and of high quality are critical for addressing a wide range of land-planning, resource, and environmental issues. Four types of digital databases (geological, geophysical, geochemical, and mineral occurrence) are being compiled and upgraded by the Mineral Resources Program on regional and national scales to meet these needs. Where existing data are incomplete, new data are being collected to ensure national coverage. Maps and analyses produced from these databases provide basic information essential for mineral resource assessments and environmental studies, as well as fundamental information for regional and national land-use studies. Maps and analyses produced from the databases are instrumental to ongoing basic research, such as the identification of mineral deposit origins, determination of regional background values of chemical elements with known environmental impact, and study of the relationships between toxic elements or mining practices and human health. As datasets are completed or revised, the information is made available through a variety of media, including the Internet. Much of the available information is the result of cooperative activities with State and other Federal agencies. The upgraded Mineral Resources Program datasets make geologic, geophysical, geochemical, and mineral occurrence information at the state, regional, and national scales available to members of Congress, State and Federal government agencies, researchers in academia, and the general public. The status of the Mineral Resources Program datasets is outlined below.

  17. Estimating inbreeding rates in natural populations: addressing the problem of incomplete pedigrees

    Treesearch

    Mark P. Miller; Susan M. Haig; Jonathan D. Ballou; Ashley Steel

    2017-01-01

    Understanding and estimating inbreeding is essential for managing threatened and endangered wildlife populations. However, determination of inbreeding rates in natural populations is confounded by incomplete parentage information. We present an approach for quantifying inbreeding rates for populations with incomplete parentage information. The approach exploits...

  18. Validation sampling can reduce bias in health care database studies: an illustration using influenza vaccination effectiveness.

    PubMed

    Nelson, Jennifer Clark; Marsh, Tracey; Lumley, Thomas; Larson, Eric B; Jackson, Lisa A; Jackson, Michael L

    2013-08-01

    Estimates of treatment effectiveness in epidemiologic studies using large observational health care databases may be biased owing to inaccurate or incomplete information on important confounders. Study methods that collect and incorporate more comprehensive confounder data on a validation cohort may reduce confounding bias. We applied two such methods, namely imputation and reweighting, to Group Health administrative data (full sample) supplemented by more detailed confounder data from the Adult Changes in Thought study (validation sample). We used influenza vaccination effectiveness (with an unexposed comparator group) as an example and evaluated each method's ability to reduce bias using the control time period before influenza circulation. Both methods reduced, but did not completely eliminate, the bias compared with traditional effectiveness estimates that do not use the validation sample confounders. Although these results support the use of validation sampling methods to improve the accuracy of comparative effectiveness findings from health care database studies, they also illustrate that the success of such methods depends on many factors, including the ability to measure important confounders in a representative and large enough validation sample, the comparability of the full sample and validation sample, and the accuracy with which the data can be imputed or reweighted using the additional validation sample information. Copyright © 2013 Elsevier Inc. All rights reserved.
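
    Of the two methods, reweighting is the easier one to sketch. The toy example below is not the study's code; the strata, variable names, and values are invented. It shows the general idea of weighting validation-sample members by the inverse of their stratum-specific selection probability so that the richer subsample stands in for the full administrative cohort.

```python
import pandas as pd

# Hypothetical data: the full sample has crude confounders; only the validation
# subset also carries a detailed confounder (here a made-up "frailty_score").
full = pd.DataFrame({
    "age_group": ["65-74", "65-74", "75+", "75+", "75+", "65-74"],
    "in_validation": [True, False, True, False, False, True],
    "frailty_score": [1.0, None, 3.0, None, None, 2.0],
})

# Selection probability into the validation sample within each stratum.
p_sel = full.groupby("age_group")["in_validation"].transform("mean")

# Validation members carry weight 1/p so their stratum totals match the full sample.
full["weight"] = 0.0
full.loc[full["in_validation"], "weight"] = 1.0 / p_sel[full["in_validation"]]

validation = full[full["in_validation"]]
weighted_mean = (validation["frailty_score"] * validation["weight"]).sum() / validation["weight"].sum()
print(f"Weighted mean of the detailed confounder: {weighted_mean:.2f}")
```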

  19. Validation sampling can reduce bias in healthcare database studies: an illustration using influenza vaccination effectiveness

    PubMed Central

    Nelson, Jennifer C.; Marsh, Tracey; Lumley, Thomas; Larson, Eric B.; Jackson, Lisa A.; Jackson, Michael

    2014-01-01

    Objective Estimates of treatment effectiveness in epidemiologic studies using large observational health care databases may be biased due to inaccurate or incomplete information on important confounders. Study methods that collect and incorporate more comprehensive confounder data on a validation cohort may reduce confounding bias. Study Design and Setting We applied two such methods, imputation and reweighting, to Group Health administrative data (full sample) supplemented by more detailed confounder data from the Adult Changes in Thought study (validation sample). We used influenza vaccination effectiveness (with an unexposed comparator group) as an example and evaluated each method’s ability to reduce bias using the control time period prior to influenza circulation. Results Both methods reduced, but did not completely eliminate, the bias compared with traditional effectiveness estimates that do not utilize the validation sample confounders. Conclusion Although these results support the use of validation sampling methods to improve the accuracy of comparative effectiveness findings from healthcare database studies, they also illustrate that the success of such methods depends on many factors, including the ability to measure important confounders in a representative and large enough validation sample, the comparability of the full sample and validation sample, and the accuracy with which data can be imputed or reweighted using the additional validation sample information. PMID:23849144

  20. Detecting Unknown Artificial Urban Surface Materials Based on Spectral Dissimilarity Analysis.

    PubMed

    Jilge, Marianne; Heiden, Uta; Habermeyer, Martin; Mende, André; Juergens, Carsten

    2017-08-08

    High resolution imaging spectroscopy data have been recognised as a valuable data resource for augmenting detailed material inventories that serve as input for various urban applications. Image-specific urban spectral libraries are successfully used in urban imaging spectroscopy studies. However, the regional- and sensor-specific transferability of such libraries is limited due to the wide range of different surface materials. With the developed methodology, incomplete urban spectral libraries can be utilised by assuming that unknown surface material spectra are dissimilar to the known spectra in a basic spectral library (BSL). The similarity measure SID-SCA (Spectral Information Divergence-Spectral Correlation Angle) is applied to detect image-specific unknown urban surfaces while avoiding spectral mixtures. These detected unknown materials are categorised into distinct and identifiable material classes based on their spectral and spatial metrics. Experimental results demonstrate a successful redetection of material classes that had been previously erased in order to simulate an incomplete BSL. Additionally, completely new materials, e.g., solar panels, were identified in the data. It is further shown that the level of incompleteness of the BSL and the defined dissimilarity threshold are decisive for the detection of unknown material classes and the degree of spectral intra-class variability. A detailed accuracy assessment of the pre-classification results, aiming to separate natural and artificial materials, demonstrates spectral confusions between spectrally similar materials utilizing SID-SCA. However, most spectral confusions occur among natural materials or among artificial materials, which does not affect the overall aim. The dissimilarity analysis overcomes the limitations of working with incomplete urban spectral libraries and enables the generation of image-specific training databases.
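
    A minimal numerical sketch of the dissimilarity measure is given below. It uses one formulation found in the imaging-spectroscopy literature (SID combined with a correlation-based spectral angle through a tangent); the exact variant, preprocessing, and thresholds used by the authors may differ, and the two "spectra" are synthetic curves rather than real library entries.

```python
import numpy as np

def sid(x, y, eps=1e-12):
    """Spectral Information Divergence between two positive spectra."""
    p = x / x.sum() + eps
    q = y / y.sum() + eps
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

def sca(x, y):
    """Spectral Correlation Angle derived from the Pearson correlation (assumed formulation)."""
    r = np.corrcoef(x, y)[0, 1]
    return float(np.arccos((r + 1.0) / 2.0))

def sid_sca(x, y):
    """Combined dissimilarity: small for similar spectra, large for dissimilar ones."""
    return sid(x, y) * np.tan(sca(x, y))

wavelengths = np.linspace(400, 2500, 200)
roof_tile = 0.20 + 0.10 * np.sin(wavelengths / 300.0)     # synthetic reflectance curves
solar_panel = 0.05 + 0.02 * np.cos(wavelengths / 500.0)
perturbed = roof_tile + 0.005 * np.sin(wavelengths / 100.0)

print(f"SID-SCA(roof, perturbed roof): {sid_sca(roof_tile, perturbed):.4f}")
print(f"SID-SCA(roof, solar panel):    {sid_sca(roof_tile, solar_panel):.4f}")
```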

  1. Detecting Unknown Artificial Urban Surface Materials Based on Spectral Dissimilarity Analysis

    PubMed Central

    Jilge, Marianne; Heiden, Uta; Habermeyer, Martin; Mende, André; Juergens, Carsten

    2017-01-01

    High resolution imaging spectroscopy data have been recognised as a valuable data resource for augmenting detailed material inventories that serve as input for various urban applications. Image-specific urban spectral libraries are successfully used in urban imaging spectroscopy studies. However, the regional- and sensor-specific transferability of such libraries is limited due to the wide range of different surface materials. With the developed methodology, incomplete urban spectral libraries can be utilised by assuming that unknown surface material spectra are dissimilar to the known spectra in a basic spectral library (BSL). The similarity measure SID-SCA (Spectral Information Divergence-Spectral Correlation Angle) is applied to detect image-specific unknown urban surfaces while avoiding spectral mixtures. These detected unknown materials are categorised into distinct and identifiable material classes based on their spectral and spatial metrics. Experimental results demonstrate a successful redetection of material classes that had been previously erased in order to simulate an incomplete BSL. Additionally, completely new materials, e.g., solar panels, were identified in the data. It is further shown that the level of incompleteness of the BSL and the defined dissimilarity threshold are decisive for the detection of unknown material classes and the degree of spectral intra-class variability. A detailed accuracy assessment of the pre-classification results, aiming to separate natural and artificial materials, demonstrates spectral confusions between spectrally similar materials utilizing SID-SCA. However, most spectral confusions occur among natural materials or among artificial materials, which does not affect the overall aim. The dissimilarity analysis overcomes the limitations of working with incomplete urban spectral libraries and enables the generation of image-specific training databases. PMID:28786947

  2. Incomplete evidence: the inadequacy of databases in tracing published adverse drug reactions in clinical trials

    PubMed Central

    Derry, Sheena; Kong Loke, Yoon; Aronson, Jeffrey K

    2001-01-01

    Background We would expect information on adverse drug reactions in randomised clinical trials to be easily retrievable from specific searches of electronic databases. However, complete retrieval of such information may not be straightforward, for two reasons. First, not all clinical drug trials provide data on the frequency of adverse effects. Secondly, not all electronic records of trials include terms in the abstract or indexing fields that enable us to select those with adverse effects data. We have determined how often automated search methods, using indexing terms and/or textwords in the title or abstract, would fail to retrieve trials with adverse effects data. Methods We used a sample set of 107 trials known to report frequencies of adverse drug effects, and measured the proportion that (i) were not assigned the appropriate adverse effects indexing terms in the electronic databases, and (ii) did not contain identifiable adverse effects textwords in the title or abstract. Results Of the 81 trials with records on both MEDLINE and EMBASE, 25 were not indexed for adverse effects in either database. Twenty-six trials were indexed in one database but not the other. Only 66 of the 107 trials reporting adverse effects data mentioned this in the abstract or title of the paper. Simultaneous use of textword and indexing terms retrieved only 82/107 (77%) papers. Conclusions Specific search strategies based on adverse effects textwords and indexing terms will fail to identify nearly a quarter of trials that report on the rate of drug adverse effects. PMID:11591220

  3. Nuclear astrophysics in the laboratory and in the universe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Champagne, A. E., E-mail: artc@physics.unc.edu; Iliadis, C.; Longland, R.

    Nuclear processes drive stellar evolution, and so nuclear physics, stellar models, and observations together allow us to describe the inner workings of stars and their life stories. Information on nuclear reaction rates and nuclear properties is a critical ingredient in addressing most questions in astrophysics, and often the nuclear database is incomplete or lacks the needed precision. Direct measurements of astrophysically-interesting reactions are necessary, and the experimental focus is on improving both sensitivity and precision. In the following, we review recent results and approaches taken at the Laboratory for Experimental Nuclear Astrophysics (LENA, http://research.physics.unc.edu/project/nuclearastro/Welcome.html)

  4. Thermal and Chemical Characterization of Composite Materials. MSFC Center Director's Discretionary Fund Final Report, Project No. ED36-18

    NASA Technical Reports Server (NTRS)

    Stanley, D. C.; Huff, T. L.

    2003-01-01

    The purpose of this research effort was to: (1) provide a concise and well-defined property profile of current and developing composite materials using thermal and chemical characterization techniques and (2) optimize analytical testing requirements of materials. This effort applied a diverse array of methodologies to ascertain composite material properties. Often, a single method or technique will provide useful, but nonetheless incomplete, information on material composition and/or behavior. To more completely understand and predict material properties, a broad-based analytical approach is required. By developing a database of information comprised of both thermal and chemical properties, material behavior under varying conditions may be better understood. This is even more important in the aerospace community, where new composite materials and those in the development stage have little reference data. For example, Fourier transform infrared (FTIR) spectroscopy spectral databases available for identification of vapor phase spectra, such as those generated during experiments, generally refer to well-defined chemical compounds. Because this method renders a unique thermal decomposition spectral pattern, even larger, more diverse databases, such as those found in solid and liquid phase FTIR spectroscopy libraries, cannot be used. By combining this and other available methodologies, a database specifically for new materials and materials being developed at Marshall Space Flight Center can be generated. In addition, characterizing materials using this approach will be extremely useful in the verification of materials and identification of anomalies in NASA-wide investigations.

  5. Contact Tracing during an Outbreak of Ebola Virus Disease in the Western Area Districts of Sierra Leone: Lessons for Future Ebola Outbreak Response.

    PubMed

    Olu, Olushayo Oluseun; Lamunu, Margaret; Nanyunja, Miriam; Dafae, Foday; Samba, Thomas; Sempiira, Noah; Kuti-George, Fredson; Abebe, Fikru Zeleke; Sensasi, Benjamin; Chimbaru, Alexander; Ganda, Louisa; Gausi, Khoti; Gilroy, Sonia; Mugume, James

    2016-01-01

    Contact tracing is a critical strategy required for timely prevention and control of Ebola virus disease (EVD) outbreaks. Available evidence suggests that poor contact tracing was a driver of the EVD outbreak in West Africa, including Sierra Leone. In this article, we answered the question as to whether EVD contact tracing, as practiced in Western Area (WA) districts of Sierra Leone from 2014 to 2015, was effective. The goal is to describe contact tracing and identify obstacles to its effective implementation. Mixed methods comprising secondary data analysis of the EVD case and contact tracing data sets collected from WA during the period from 2014 to 2015, key informant interviews of contact tracers and their supervisors, and a review of available reports on contact tracing were implemented to obtain data for this study. During the study period, 3,838 confirmed cases and 32,706 contacts were listed in the viral hemorrhagic fever and contact databases for the district (mean 8.5 contacts per case). Only 22.1% (852) of the confirmed cases in the study area were listed as contacts at the onset of their illness, which indicates incomplete identification and tracing of contacts. Challenges associated with effective contact tracing included lack of community trust, concealing of exposure information, political interference with recruitment of tracers, inadequate training of contact tracers, and incomplete EVD case and contact database. While the tracers noted the usefulness of community quarantine in facilitating their work, they also reported delayed or irregular supply of basic needs, such as food and water, which created resistance from the communities. Multiple gaps in contact tracing attributed to a variety of factors associated with implementers, and communities were identified as obstacles that impeded timely control of the EVD outbreak in the WA of Sierra Leone. In future outbreaks, early community engagement and participation in contact tracing, establishment of appropriate mechanisms for selection, adequate training and supervision of qualified contact tracers, establishment of a well-managed and complete contact tracing database, and provision of basic needs to quarantined contacts are recommended as measures to enhance effective contact tracing.

  6. The roles of nearest neighbor methods in imputing missing data in forest inventory and monitoring databases

    Treesearch

    Bianca N. I. Eskelson; Hailemariam Temesgen; Valerie Lemay; Tara M. Barrett; Nicholas L. Crookston; Andrew T. Hudak

    2009-01-01

    Almost universally, forest inventory and monitoring databases are incomplete, ranging from missing data for only a few records and a few variables, common for small land areas, to missing data for many observations and many variables, common for large land areas. For a wide variety of applications, nearest neighbor (NN) imputation methods have been developed to fill in...
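
    A bare-bones version of the idea, assuming a small plot table with fully observed auxiliary variables and one variable with gaps: each missing value is copied from the nearest donor plot in auxiliary-variable space (k = 1, unscaled Euclidean distance). Operational NN imputation methods for inventory data are considerably more elaborate; this is only a sketch.

```python
import numpy as np

def nn_impute(X, aux_cols, target_col):
    """Fill missing target values with that of the nearest plot in auxiliary-variable space."""
    X = X.copy()
    missing = np.isnan(X[:, target_col])
    donors = X[~missing]
    for i in np.where(missing)[0]:
        d = np.linalg.norm(donors[:, aux_cols] - X[i, aux_cols], axis=1)
        X[i, target_col] = donors[np.argmin(d), target_col]
    return X

# Columns: elevation (m), canopy cover (%), basal area (m^2/ha, partly missing).
plots = np.array([
    [300.0, 70.0, 28.0],
    [320.0, 65.0, 25.0],
    [900.0, 40.0, 12.0],
    [310.0, 68.0, np.nan],
    [880.0, 42.0, np.nan],
])
print(nn_impute(plots, aux_cols=[0, 1], target_col=2))
```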

  7. Correcting ligands, metabolites, and pathways

    PubMed Central

    Ott, Martin A; Vriend, Gert

    2006-01-01

    Background A wide range of research areas in bioinformatics, molecular biology and medicinal chemistry require precise chemical structure information about molecules and reactions, e.g. drug design, ligand docking, metabolic network reconstruction, and systems biology. Most available databases, however, treat chemical structures more as illustrations than as a datafield in its own right. Lack of chemical accuracy impedes progress in the areas mentioned above. We present a database of metabolites called BioMeta that augments the existing pathway databases by explicitly assessing the validity, correctness, and completeness of chemical structure and reaction information. Description The main bulk of the data in BioMeta were obtained from the KEGG Ligand database. We developed a tool for chemical structure validation which assesses the chemical validity and stereochemical completeness of a molecule description. The validation tool was used to examine the compounds in BioMeta, showing that a relatively small number of compounds had an incorrect constitution (connectivity only, not considering stereochemistry) and that a considerable number (about one third) had incomplete or even incorrect stereochemistry. We made a large effort to correct the errors and to complete the structural descriptions. A total of 1468 structures were corrected and/or completed. We also established the reaction balance of the reactions in BioMeta and corrected 55% of the unbalanced (stoichiometrically incorrect) reactions in an automatic procedure. The BioMeta database was implemented in PostgreSQL and provided with a web-based interface. Conclusion We demonstrate that the validation of metabolite structures and reactions is a feasible and worthwhile undertaking, and that the validation results can be used to trigger corrections and improvements to BioMeta, our metabolite database. BioMeta provides some tools for rational drug design, reaction searches, and visualization. It is freely available at provided that the copyright notice of all original data is cited. The database will be useful for querying and browsing biochemical pathways, and to obtain reference information for identifying compounds. However, these applications require that the underlying data be correct, and that is the focus of BioMeta. PMID:17132165
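
    The reaction-balance part of such a quality check reduces to bookkeeping over element counts. The toy check below works from pre-parsed element-count dictionaries and made-up stoichiometric coefficients; BioMeta itself validates full structure records, not bare formulas.

```python
from collections import Counter

def side_elements(compounds):
    """Sum element counts over (coefficient, element-count dict) pairs for one reaction side."""
    total = Counter()
    for coeff, formula in compounds:
        for element, n in formula.items():
            total[element] += coeff * n
    return total

# 2 H2 + O2 -> 2 H2O  (balanced)
left = [(2, {"H": 2}), (1, {"O": 2})]
right = [(2, {"H": 2, "O": 1})]
print("Balanced:", side_elements(left) == side_elements(right))
```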

  8. Intelligent dental identification system (IDIS) in forensic medicine.

    PubMed

    Chomdej, T; Pankaow, W; Choychumroon, S

    2006-04-20

    This study reports the design and development of the intelligent dental identification system (IDIS), including its efficiency and reliability. Five hundred patients were randomly selected from the Dental Department at Police General Hospital in Thailand to create a population of 3000 known subjects. From the original 500 patients, 100 were randomly selected to create a sample of 1000 unidentifiable subjects (400 subjects with complete dental information and possible alterations corresponding to natural occurrences and general dental treatments after the last clinical examination, such as missing teeth, dental caries, dental restorations, and dental prosthetics; 100 subjects with complete and unaltered dental information; 500 subjects with incomplete and unaltered dental information). Attempts were made to identify the unknown subjects utilizing IDIS. The use of the IDIS advanced method resulted in consistently outstanding identification in the range of 82.61-100%, with minimal error (0-1.19%). The results of this study indicate that IDIS can be used to support dental identification. It supports not only all types of dentition (primary, mixed, and permanent) but also incomplete and altered dental information. IDIS is particularly useful in managing the huge quantity and redundancy of related documentation associated with forensic odontology. As a computerized system, IDIS can reduce the time required for identification and store dental digital images with many processing features. Furthermore, IDIS enhances documentary dental records with odontograms and identification codes, electronic dental records with a dental database system, and identification methods and algorithms. IDIS was conceptualized based on the guidelines and standards of the American Board of Forensic Odontology (ABFO) and the International Criminal Police Organization (INTERPOL).

  9. Epidemiology, quality, and reporting characteristics of systematic reviews and meta-analyses of nursing interventions published in Chinese journals.

    PubMed

    Zhang, Juxia; Wang, Jiancheng; Han, Lin; Zhang, Fengwa; Cao, Jianxun; Ma, Yuxia

    2015-01-01

    Systematic reviews (SRs) and meta-analyses (MAs) of nursing interventions have become increasingly popular in China. This review provides the first examination of epidemiological characteristics of these SRs as well as compliance with the Preferred Reporting Items for Systematic Reviews and Meta-analyses and Assessment of Multiple Systematic Reviews guidelines. The purpose of this study was to examine epidemiologic and reporting characteristics as well as the methodologic quality of SRs and MAs of nursing interventions published in Chinese journals. Four Chinese databases were searched (the Chinese Biomedicine Literature Database, Chinese Scientific Journal Full-text Database, Chinese Journal Full-text Database, and Wanfang Database) for SRs and MAs of nursing intervention from inception through June 2013. Data were extracted into Excel (Microsoft, Redmond, WA). The Assessment of Multiple Systematic Reviews and Preferred Reporting Items for Systematic Reviews and Meta-analyses checklists were used to assess methodologic quality and reporting characteristics, respectively. A total of 144 SRs were identified, most (97.2%) of which used "systematic review" or "meta-analyses" in the titles. None of the reviews had been updated. Nearly half (41%) were written by nurses, and more than half (61%) were reported in specialist journals. The most common conditions studied were endocrine, nutritional and metabolic diseases, and neoplasms. Most (70.8%) reported information about quality assessment, whereas less than half (25%) reported assessing for publication bias. None of the reviews reported a conflict of interest. Although many SRs of nursing interventions have been published in Chinese journals, the quality of these reviews is of concern. As a potential key source of information for nurses and nursing administrators, not only were many of these reviews incomplete in the information they provided, but also some results were misleading. Improving the quality of SRs of nursing interventions conducted and published by nurses in China is urgently needed in order to increase the value of these studies. Copyright © 2015 Elsevier Inc. All rights reserved.

  10. A database to manage flood risk in Catalonia

    NASA Astrophysics Data System (ADS)

    Echeverria, S.; Toldrà, R.; Verdaguer, I.

    2009-09-01

    We call priority action spots those local sites where heavy rain, increased river flow, sea storms and other flooding phenomena can cause human casualties or severe damage to property. Some examples are campsites, car parks, roads, chemical factories… In order to keep to a minimum the risk of these spots, both a prevention programme and an emergency response programme are required. The flood emergency plan of Catalonia (INUNCAT) prepared in 2005 included already a listing of priority action spots compiled by the Catalan Water Agency (ACA), which was elaborated taking into account past experience, hydraulic studies and information available by several knowledgeable sources. However, since land use evolves with time this listing of priority action spots has become outdated and incomplete. A new database is being built. Not only does this new database update and expand the previous listing, but adds to each entry information regarding prevention measures and emergency response: which spots are the most hazardous, under which weather conditions problems arise, which ones should have their access closed as soon as these conditions are forecast or actually given, which ones should be evacuated, who is in charge of the preventive actions or emergency response and so on. Carrying out this programme has to be done with the help and collaboration of all the organizations involved, foremost with the local authorities in the areas at risk. In order to achieve this goal a suitable geographical information system is necessary which can be easily used by all actors involved in this project. The best option has turned out to be the Spatial Data Infrastructure of Catalonia (IDEC), a platform to share spatial data on the Internet involving the Generalitat de Catalunya, Localret (a consortium of local authorities that promotes information technology) and other institutions.

  11. Frequency and risk factors for donor reactions in an anonymous blood donor survey.

    PubMed

    Goldman, Mindy; Osmond, Lori; Yi, Qi-Long; Cameron-Choi, Keltie; O'Brien, Sheila F

    2013-09-01

    Adverse donor reactions can result in injury and decrease the likelihood of donor return. Reaction reports captured in the blood center's database provide an incomplete picture of reaction rates and risk factors. We performed an anonymous survey, mailed to 40,000 donors in 2008, including questions about symptoms, height, weight, sex, and donation status. Reaction rates were compared to those recorded in our database. Possible risk factors were assessed for various reactions. The response rate was 45.5%. A total of 32% of first-time and 14% of repeat donors reported having any adverse symptom, most frequently bruising (84.9 per 1000 donors) or feeling faint or weak (66.2 per 1000). Faint reactions were two to eight times higher than reported in our database, although direct comparison was difficult. Younger age, female sex, and first-time donation status were risk factors for systemic and arm symptoms. In females, low estimated blood volume (EBV) was a risk factor for systemic symptoms. Only 51% of donors who consulted an outside physician also called Canadian Blood Services. A total of 10% of first-time donors with reactions found adverse effects information inadequate. This study allowed us to collect more information about adverse reactions, including minor symptoms and delayed reactions. Based on our findings of the risk factors and frequency of adverse reactions, we are implementing more stringent EBV criteria for younger donors and providing more detailed information to donors about possible adverse effects and their management. © 2012 American Association of Blood Banks.

  12. Comparison of reporting phase I trial results in ClinicalTrials.gov and matched publications.

    PubMed

    Shepshelovich, D; Goldvaser, H; Wang, L; Abdul Razak, A R; Bedard, P L

    2017-12-01

    Background Data on completeness of reporting of phase I cancer clinical trials in publications are lacking. Methods The ClinicalTrials.gov database was searched for completed adult phase I cancer trials with reported results. PubMed was searched for matching primary publications published prior to November 1, 2016. Reporting in primary publications was compared with the ClinicalTrials.gov database using a 28-point score (2=complete; 1=partial; 0=no reporting) for 14 items related to study design, outcome measures and safety profile. Inconsistencies between primary publications and ClinicalTrials.gov were recorded. Linear regression was used to identify factors associated with incomplete reporting. Results After a review of 583 trials in ClinicalTrials.gov , 163 matching primary publications were identified. Publications reported outcomes that did not appear in ClinicalTrials.gov in 25% of trials. Outcomes were upgraded, downgraded or omitted in publications in 47% of trials. The overall median reporting score was 23/28 (interquartile range 21-25). Incompletely reported items in >25% publications were: inclusion criteria (29%), primary outcome definition (26%), secondary outcome definitions (53%), adverse events (71%), serious adverse events (80%) and dates of study start and database lock (91%). Higher reporting scores were associated with phase I (vs phase I/II) trials (p<0.001), multicenter trials (p<0.001) and publication in journals with lower impact factor (p=0.004). Conclusions Reported results in primary publications for early phase cancer trials are frequently inconsistent or incomplete compared with ClinicalTrials.gov entries. ClinicalTrials.gov may provide more comprehensive data from new cancer drug trials.

  13. 32 CFR 651.44 - Incomplete information.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... approaches or research methods generally accepted in the scientific community. ... 32 National Defense 4 2011-07-01 2011-07-01 false Incomplete information. 651.44 Section 651.44 National Defense Department of Defense (Continued) DEPARTMENT OF THE ARMY (CONTINUED) ENVIRONMENTAL QUALITY...

  14. 32 CFR 651.44 - Incomplete information.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... approaches or research methods generally accepted in the scientific community. ... 32 National Defense 4 2010-07-01 2010-07-01 true Incomplete information. 651.44 Section 651.44 National Defense Department of Defense (Continued) DEPARTMENT OF THE ARMY (CONTINUED) ENVIRONMENTAL QUALITY...

  15. Assessing the Robustness of Graph Statistics for Network Analysis Under Incomplete Information

    DTIC Science & Technology

    strategy for dismantling these networks based on their network structure. However, these strategies typically assume complete information about the...combat them with missing information. This thesis analyzes the performance of a variety of network statistics in the context of incomplete information by...leveraging simulation to remove nodes and edges from networks and evaluating the effect this missing information has on our ability to accurately
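
    The simulation idea can be illustrated with a small experiment of the kind hinted at above. This is not the thesis code: the graph model, the 20% edge-removal fraction, and the choice of betweenness centrality are arbitrary. The sketch degrades the observed graph repeatedly and counts how often the apparent top target matches the one computed from complete information.

```python
import random
import networkx as nx

def top_node(G):
    """Node with the highest betweenness centrality."""
    bc = nx.betweenness_centrality(G)
    return max(bc, key=bc.get)

random.seed(1)
G = nx.barabasi_albert_graph(200, 2, seed=1)   # stand-in for a covert network
true_target = top_node(G)

hits = 0
trials = 50
for _ in range(trials):
    observed = G.copy()
    # Hide 20% of the edges to mimic incomplete intelligence.
    hidden = random.sample(list(observed.edges()), int(0.2 * observed.number_of_edges()))
    observed.remove_edges_from(hidden)
    if top_node(observed) == true_target:
        hits += 1

print(f"Top-centrality node recovered in {hits}/{trials} degraded graphs")
```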

  16. Rule Extracting based on MCG with its Application in Helicopter Power Train Fault Diagnosis

    NASA Astrophysics Data System (ADS)

    Wang, M.; Hu, N. Q.; Qin, G. J.

    2011-07-01

    In order to extract decision rules for fault diagnosis from incomplete historical test records for knowledge-based damage assessment of helicopter power train structures, a method that can directly extract the optimal generalized decision rules from incomplete information based on granular computing (GrC) was proposed. Based on a semantic analysis of unknown attribute values, the granule was extended to handle incomplete information. The maximum characteristic granule (MCG) was defined based on the characteristic relation, and the MCG was used to construct the resolution function matrix. The optimal general decision rule was introduced and, using the basic equivalent forms of propositional logic, the rules were extracted and reduced from the incomplete information table. Combined with a fault diagnosis example of a power train, the application approach of the method was presented, and the validity of this method in knowledge acquisition was proved.
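
    The granules involved can be illustrated with a toy incomplete decision table in which '*' marks an unknown attribute value (the table and attribute names below are invented). The sketch computes, for each record, the set of records it cannot be distinguished from under a simple tolerance-style relation; the paper's characteristic relation and maximum characteristic granules refine this basic idea.

```python
def tolerant(x, y, attrs):
    """Two objects are indistinguishable if every attribute matches or is unknown ('*')."""
    return all(x[a] == y[a] or x[a] == "*" or y[a] == "*" for a in attrs)

def granule(obj, table, attrs):
    """Indices of objects indistinguishable from obj under the tolerance relation."""
    return [i for i, row in enumerate(table) if tolerant(obj, row, attrs)]

# Hypothetical fault-record table: vibration level, temperature, decision attribute.
table = [
    {"vib": "high", "temp": "*",    "fault": "gear"},
    {"vib": "high", "temp": "hot",  "fault": "gear"},
    {"vib": "low",  "temp": "cool", "fault": "none"},
    {"vib": "*",    "temp": "cool", "fault": "none"},
]
for i, row in enumerate(table):
    print(i, granule(row, table, attrs=["vib", "temp"]))
```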

  17. Crime event 3D reconstruction based on incomplete or fragmentary evidence material--case report.

    PubMed

    Maksymowicz, Krzysztof; Tunikowski, Wojciech; Kościuk, Jacek

    2014-09-01

    Using our own experience in 3D analysis, the authors will demonstrate the possibilities of 3D crime scene and event reconstruction in cases where originally collected material evidence is largely insufficient. The necessity to repeat forensic evaluation is often down to the emergence of new facts in the course of case proceedings. Even in cases when a crime scene and its surroundings have undergone partial or complete transformation, with regard to elements significant to the course of the case, or when the scene was not satisfactorily secured, it is still possible to reconstruct it in a 3D environment based on the originally-collected, even incomplete, material evidence. In particular cases when no image of the crime scene is available, its partial or even full reconstruction is still potentially feasible. Credibility of evidence for such reconstruction can still satisfy the evidence requirements in court. Reconstruction of the missing elements of the crime scene is still possible with the use of information obtained from current publicly available databases. In the study, we demonstrate that these can include Google Maps®, Google Street View® and available construction and architecture archives. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  18. Evaluation of cross-cultural adaptation and measurement properties of breast cancer-specific quality-of-life questionnaires: a systematic review.

    PubMed

    Oliveira, Indiara Soares; da Cunha Menezes Costa, Lucíola; Fagundes, Felipe Ribeiro Cabral; Cabral, Cristina Maria Nunes

    2015-05-01

    To assess the procedures of translation, cross-cultural adaptation, and measurement properties of breast cancer-specific quality-of-life questionnaires. Searches were conducted in the databases MEDLINE, EMBASE, CINAHL, and SciELO using the keywords: "Questionnaires," "Quality of life," and "Breast cancer." The studies were analyzed in terms of methodological quality according to the guidelines for the procedure of cross-cultural adaptation and the quality criteria for measurement properties of questionnaires. We found 24 eligible studies. Most of the articles assessed the translation and measurement properties of the instrument EORTC QLQ-BR23. Descriptions of translation and cross-cultural adaptation were incomplete in 11 studies. Translation and back translation were the most tested phases, and synthesis of the translation was the most omitted phase in the articles. Information on the assessment of measurement properties was incompletely provided in 23 articles. Internal consistency was the most tested property in all of the eligible articles, but none of them provided information on agreement. Construct validity was adequately tested in only three studies that used the FACT-B and QLQ-BR23. Eight articles provided information on reliability; however, only four found positive classification. Responsiveness was tested in four articles, and ceiling and floor effects were tested in only three articles. None of the instruments showed fully adequate quality. There is limited evidence on cross-cultural adaptations and measurement properties; therefore, it is recommended that caution be exercised when using breast cancer-specific quality-of-life questionnaires that have been translated, adapted, and tested.

  19. A functional-dependencies-based Bayesian networks learning method and its application in a mobile commerce system.

    PubMed

    Liao, Stephen Shaoyi; Wang, Huai Qing; Li, Qiu Dan; Liu, Wei Yi

    2006-06-01

    This paper presents a new method for learning Bayesian networks from functional dependencies (FD) and third normal form (3NF) tables in relational databases. The method sets up a linkage between the theory of relational databases and probabilistic reasoning models, which is interesting and useful especially when data are incomplete and inaccurate. The effectiveness and practicability of the proposed method is demonstrated by its implementation in a mobile commerce system.
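
    At its simplest, the linkage between the two theories starts from the observation that a functional dependency X → Y suggests directed dependencies from the attributes in X to Y. The sketch below only performs that mechanical translation on invented dependencies from a mobile-commerce-like schema; learning the conditional probability tables and handling 3NF table structure, as the paper does, is a separate matter.

```python
from itertools import chain

def fd_edges(fds):
    """Turn functional dependencies ({determinant attrs}, dependent attr) into candidate BN edges."""
    edges = []
    for determinants, dependent in fds:
        edges.extend((d, dependent) for d in determinants)
    return edges

# Hypothetical schema dependencies.
fds = [
    (("customer_id",), "city"),
    (("city", "product_type"), "preferred_store"),
    (("customer_id",), "age_band"),
]
edges = fd_edges(fds)
nodes = sorted(set(chain.from_iterable(edges)))
print("Nodes:", nodes)
print("Edges:", edges)
```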

  20. Guidebooks for estimating total transit usage through extrapolating incomplete counts : final report.

    DOT National Transportation Integrated Search

    2016-09-01

    This report provides guidance for transit agencies to estimate transit usage for reporting to the National Transit Database (NTD) when their counting procedure, designed to perform full counts, misses some trips. Transit usage refers to unl...

  1. A comparison of accuracy and computational feasibility of two record linkage algorithms in retrieving vital status information from HIV/AIDS patients registered in Brazilian public databases.

    PubMed

    de Paula, Adelzon Assis; Pires, Denise Franqueira; Filho, Pedro Alves; de Lemos, Kátia Regina Valente; Barçante, Eduardo; Pacheco, Antonio Guilherme

    2018-06-01

    While cross-referencing information from people living with HIV/AIDS (PLWHA) to the official mortality database is a critical step in monitoring the HIV/AIDS epidemic in Brazil, the accuracy of the linkage routine may compromise the validity of the final database, yielding biased epidemiological estimates. We compared the accuracy and the total runtime of two linkage algorithms applied to retrieve vital status information from PLWHA in Brazilian public databases. Nominally identified records from PLWHA were obtained from three distinct government databases. Linkage routines included an algorithm in the Python language (PLA) and Reclink software (RlS), a probabilistic software package widely used in Brazil. Records from PLWHA known to be alive were added to those from patients reported as deceased. The data were then searched against the mortality system. Scenarios where 5% and 50% of patients had actually died were simulated, considering both complete cases and 20% missing maternal names. When complete information was available, both algorithms had comparable accuracies. In the scenario of 20% missing maternal names, PLA and RlS had sensitivities of 94.5% and 94.6% (p > 0.5), respectively; after manual reviewing, PLA sensitivity increased to 98.4% (96.6-100.0), exceeding that of RlS (p < 0.01). PLA had a higher positive predictive value in the 5% death-proportion scenario. Manual reviewing was intrinsically required by RlS for up to 14% of the registers of people actually dead, whereas the corresponding proportion ranged from 1.5% to 2% for PLA. The lack of manual inspection did not alter PLA sensitivity when complete information was available. When incomplete data were available, PLA sensitivity increased from 94.5% to 98.4%, thus exceeding that presented by RlS (94.6%, p < 0.05). RlS required considerably less processing time than PLA. Both linkage algorithms presented interchangeable accuracies in retrieving vital status data from PLWHA. RlS had a considerably shorter runtime but intrinsically required manually reviewing a sizeable proportion of the matched registries. On the other hand, PLA required more runtime but spared manual reviewing at no expense of accuracy. Copyright © 2018 Elsevier B.V. All rights reserved.
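
    The accuracy figures compared above reduce to standard formulas once linkage outcomes are tabulated against true vital status. The counts in the sketch below are invented and serve only to show the arithmetic.

```python
def linkage_accuracy(tp, fp, fn):
    """Sensitivity and positive predictive value of a linkage routine."""
    sensitivity = tp / (tp + fn)
    ppv = tp / (tp + fp)
    return sensitivity, ppv

# Hypothetical 5%-death scenario: 500 truly deceased patients among 10,000.
sens, ppv = linkage_accuracy(tp=473, fp=12, fn=27)
print(f"Sensitivity: {sens:.1%}   PPV: {ppv:.1%}")
```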

  2. 16 CFR 1702.4 - Petitions with insufficient or incomplete information.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 16 Commercial Practices 2 2014-01-01 2014-01-01 false Petitions with insufficient or incomplete information. 1702.4 Section 1702.4 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION POISON PREVENTION PACKAGING ACT OF 1970 REGULATIONS PETITIONS FOR EXEMPTIONS FROM POISON PREVENTION PACKAGING ACT REQUIREMENTS...

  3. 16 CFR 1702.4 - Petitions with insufficient or incomplete information.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 16 Commercial Practices 2 2010-01-01 2010-01-01 false Petitions with insufficient or incomplete information. 1702.4 Section 1702.4 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION POISON PREVENTION PACKAGING ACT OF 1970 REGULATIONS PETITIONS FOR EXEMPTIONS FROM POISON PREVENTION PACKAGING ACT REQUIREMENTS...

  4. 16 CFR 1702.4 - Petitions with insufficient or incomplete information.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 16 Commercial Practices 2 2011-01-01 2011-01-01 false Petitions with insufficient or incomplete information. 1702.4 Section 1702.4 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION POISON PREVENTION PACKAGING ACT OF 1970 REGULATIONS PETITIONS FOR EXEMPTIONS FROM POISON PREVENTION PACKAGING ACT REQUIREMENTS...

  5. 16 CFR 1702.4 - Petitions with insufficient or incomplete information.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 16 Commercial Practices 2 2012-01-01 2012-01-01 false Petitions with insufficient or incomplete information. 1702.4 Section 1702.4 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION POISON PREVENTION PACKAGING ACT OF 1970 REGULATIONS PETITIONS FOR EXEMPTIONS FROM POISON PREVENTION PACKAGING ACT REQUIREMENTS...

  6. A spatial database of wildfires in the United States, 1992-2011

    NASA Astrophysics Data System (ADS)

    Short, K. C.

    2013-07-01

    The statistical analysis of wildfire activity is a critical component of national wildfire planning, operations, and research in the United States (US). However, there are multiple federal, state, and local entities with wildfire protection and reporting responsibilities in the US, and no single, unified system of wildfire record-keeping exists. To conduct even the most rudimentary interagency analyses of wildfire numbers and area burned from the authoritative systems of record, one must harvest records from dozens of disparate databases with inconsistent information content. The onus is then on the user to check for and purge redundant records of the same fire (i.e. multijurisdictional incidents with responses reported by several agencies or departments) after pooling data from different sources. Here we describe our efforts to acquire, standardize, error-check, compile, scrub, and evaluate the completeness of US federal, state, and local wildfire records from 1992-2011 for the national, interagency Fire Program Analysis (FPA) application. The resulting FPA Fire-occurrence Database (FPA FOD) includes nearly 1.6 million records from the 20 yr period, with values for at least the following core data elements: location at least as precise as a Public Land Survey System section (2.6 km2 grid), discovery date, and final fire size. The FPA FOD is publicly available from the Research Data Archive of the US Department of Agriculture, Forest Service (doi:10.2737/RDS-2013-0009). While necessarily incomplete in some aspects, the database is intended to facilitate fairly high-resolution geospatial analysis of US wildfire activity over the past two decades, based on available information from the authoritative systems of record.
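
    One of the scrubbing steps, purging redundant reports of the same fire, can be caricatured with a simple rule: records from different agencies that share a PLSS section and discovery date and have similar final sizes are flagged as likely duplicates. The field names, threshold, and rows below are invented, and the actual FPA FOD compilation uses a far more careful procedure.

```python
import pandas as pd

records = pd.DataFrame({
    "source":         ["FS", "state", "BLM", "state"],
    "plss_section":   ["T1N R2E S14", "T1N R2E S14", "T3S R5W S02", "T3S R5W S02"],
    "discovery_date": ["2005-07-03", "2005-07-03", "2008-08-11", "2008-08-12"],
    "fire_size_ac":   [120.0, 118.0, 0.5, 40.0],
})

def flag_duplicates(df, size_tol=0.1):
    dup_index = set()
    for _, grp in df.groupby(["plss_section", "discovery_date"]):
        if len(grp) < 2:
            continue
        sizes = grp["fire_size_ac"]
        # Same section and date, and sizes within 10% of each other -> likely one fire.
        if sizes.max() <= (1 + size_tol) * sizes.min():
            dup_index.update(grp.index[1:])
    return df.assign(likely_duplicate=df.index.isin(dup_index))

print(flag_duplicates(records))
```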

  7. A spatial database of wildfires in the United States, 1992-2011

    NASA Astrophysics Data System (ADS)

    Short, K. C.

    2014-01-01

    The statistical analysis of wildfire activity is a critical component of national wildfire planning, operations, and research in the United States (US). However, there are multiple federal, state, and local entities with wildfire protection and reporting responsibilities in the US, and no single, unified system of wildfire record keeping exists. To conduct even the most rudimentary interagency analyses of wildfire numbers and area burned from the authoritative systems of record, one must harvest records from dozens of disparate databases with inconsistent information content. The onus is then on the user to check for and purge redundant records of the same fire (i.e., multijurisdictional incidents with responses reported by several agencies or departments) after pooling data from different sources. Here we describe our efforts to acquire, standardize, error-check, compile, scrub, and evaluate the completeness of US federal, state, and local wildfire records from 1992-2011 for the national, interagency Fire Program Analysis (FPA) application. The resulting FPA Fire-Occurrence Database (FPA FOD) includes nearly 1.6 million records from the 20 yr period, with values for at least the following core data elements: location, at least as precise as a Public Land Survey System section (2.6 km2 grid), discovery date, and final fire size. The FPA FOD is publicly available from the Research Data Archive of the US Department of Agriculture, Forest Service (doi:10.2737/RDS-2013-0009). While necessarily incomplete in some aspects, the database is intended to facilitate fairly high-resolution geospatial analysis of US wildfire activity over the past two decades, based on available information from the authoritative systems of record.

  8. 43 CFR 46.125 - Incomplete or unavailable information.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 43 Public Lands: Interior 1 2014-10-01 2014-10-01 false Incomplete or unavailable information. 46.125 Section 46.125 Public Lands: Interior Office of the Secretary of the Interior IMPLEMENTATION OF THE NATIONAL ENVIRONMENTAL POLICY ACT OF 1969 Protection and Enhancement of Environmental Quality § 46...

  9. 43 CFR 46.125 - Incomplete or unavailable information.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 43 Public Lands: Interior 1 2012-10-01 2011-10-01 true Incomplete or unavailable information. 46.125 Section 46.125 Public Lands: Interior Office of the Secretary of the Interior IMPLEMENTATION OF THE NATIONAL ENVIRONMENTAL POLICY ACT OF 1969 Protection and Enhancement of Environmental Quality § 46...

  10. 16 CFR § 1702.4 - Petitions with insufficient or incomplete information.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 16 Commercial Practices 2 2013-01-01 2013-01-01 false Petitions with insufficient or incomplete information. § 1702.4 Section § 1702.4 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION POISON PREVENTION PACKAGING ACT OF 1970 REGULATIONS PETITIONS FOR EXEMPTIONS FROM POISON PREVENTION PACKAGING ACT...

  11. CDSbank: taxonomy-aware extraction, selection, renaming and formatting of protein-coding DNA or amino acid sequences.

    PubMed

    Hazes, Bart

    2014-02-28

    Protein-coding DNA sequences and their corresponding amino acid sequences are routinely used to study relationships between sequence, structure, function, and evolution. The rapidly growing size of sequence databases increases the power of such comparative analyses but it makes it more challenging to prepare high quality sequence data sets with control over redundancy, quality, completeness, formatting, and labeling. Software tools for some individual steps in this process exist but manual intervention remains a common and time consuming necessity. CDSbank is a database that stores both the protein-coding DNA sequence (CDS) and amino acid sequence for each protein annotated in Genbank. CDSbank also stores Genbank feature annotation, a flag to indicate incomplete 5' and 3' ends, full taxonomic data, and a heuristic to rank the scientific interest of each species. This rich information allows fully automated data set preparation with a level of sophistication that aims to meet or exceed manual processing. Defaults ensure ease of use for typical scenarios while allowing great flexibility when needed. Access is via a free web server at http://hazeslab.med.ualberta.ca/CDSbank/. CDSbank presents a user-friendly web server to download, filter, format, and name large sequence data sets. Common usage scenarios can be accessed via pre-programmed default choices, while optional sections give full control over the processing pipeline. Particular strengths are: extract protein-coding DNA sequences just as easily as amino acid sequences, full access to taxonomy for labeling and filtering, awareness of incomplete sequences, and the ability to take one protein sequence and extract all synonymous CDS or identical protein sequences in other species. Finally, CDSbank can also create labeled property files to, for instance, annotate or re-label phylogenetic trees.

  12. Top-k similar graph matching using TraM in biological networks.

    PubMed

    Amin, Mohammad Shafkat; Finley, Russell L; Jamil, Hasan M

    2012-01-01

    Many emerging database applications entail sophisticated graph-based query manipulation, predominantly evident in large-scale scientific applications. To access the information embedded in graphs, efficient graph matching tools and algorithms have become of prime importance. Although the prohibitively expensive time complexity associated with exact subgraph isomorphism techniques has limited their efficacy in the application domain, approximate yet efficient graph matching techniques have received much attention due to their pragmatic applicability. Since public domain databases are noisy and incomplete in nature, inexact graph matching techniques have proven to be more promising in terms of inferring knowledge from numerous structural data repositories. In this paper, we propose a novel technique called TraM for approximate graph matching that off-loads a significant amount of its processing onto the database, making the approach viable for large graphs. Moreover, the vector space embedding of the graphs and efficient filtration of the search space enable computation of approximate graph similarity at a throw-away cost. We annotate nodes of the query graphs by means of their global topological properties and compare them with neighborhood-biased segments of the datagraph for proper matches. We have conducted experiments on several real data sets and have demonstrated the effectiveness and efficiency of the proposed method.

  13. Maximum demand charge rates for commercial and industrial electricity tariffs in the United States

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McLaren, Joyce; Gagnon, Pieter; Zimny-Schmitt, Daniel

    NREL has assembled a list of U.S. retail electricity tariffs and their associated demand charge rates for the Commercial and Industrial sectors. The data were obtained from the Utility Rate Database. Keep the following information in mind when interpreting the data: (1) These data were interpreted and transcribed manually from utility tariff sheets, which are often complex. It is a certainty that these data contain errors, and they should therefore only be used as a reference. Actual utility tariff sheets should be consulted if an action requires this type of data. (2) These data only contain tariffs that were entered into the Utility Rate Database. Since not all tariffs are designed in a format that can be entered into the Database, this list is incomplete - it does not contain all tariffs in the United States. (3) These data may have changed since this list was developed. (4) Many of the underlying tariffs have additional restrictions or requirements that are not represented here. For example, they may only be available to the agricultural sector or closed to new customers. (5) If there are multiple demand charge elements in a given tariff, the maximum demand charge is the sum of each of the elements at any point in time. Where tiers were present, the highest rate tier was assumed. The value is a maximum for the year, and may be significantly different from demand charge rates at other times in the year. Utility Rate Database: https://openei.org/wiki/Utility_Rate_Database
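    Point (5) above reduces to simple arithmetic: sum the demand charge elements of a tariff and, within a tiered element, take the highest-rate tier. The snippet below sketches that calculation on a hypothetical tariff structure; the element names and rates are illustrative assumptions, not values from the Utility Rate Database.

```python
# Hypothetical tariff: each demand charge element has one or more tiers of $/kW rates.
# The maximum demand charge rate is the sum, over elements, of each element's highest tier.
tariff = {
    "facility_demand": [{"max_kw": 50, "rate": 8.00}, {"max_kw": None, "rate": 11.50}],
    "on_peak_demand":  [{"max_kw": None, "rate": 6.25}],
}

def max_demand_charge_rate(tariff_elements):
    """Sum, across demand charge elements, the highest-rate tier of each element."""
    return sum(max(tier["rate"] for tier in tiers) for tiers in tariff_elements.values())

print(max_demand_charge_rate(tariff))  # 17.75 $/kW -- an annual maximum, not a year-round rate
```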

  14. Estimation of Anonymous Email Network Characteristics through Statistical Disclosure Attacks

    PubMed Central

    Portela, Javier; García Villalba, Luis Javier; Silva Trujillo, Alejandra Guadalupe; Sandoval Orozco, Ana Lucila; Kim, Tai-Hoon

    2016-01-01

    Social network analysis aims to obtain relational data from social systems to identify leaders, roles, and communities in order to model profiles or predict specific behavior in users' networks. Preserving anonymity in social networks is a subject of major concern. Anonymity can be compromised by disclosing senders' or receivers' identity, message content, or sender-receiver relationships. Under strongly incomplete information, a statistical disclosure attack is used to estimate the network and node characteristics such as centrality and clustering measures, degree distribution, and small-world-ness. A database of email networks in 29 university faculties is used to study the method. An analysis of the small-world-ness and power-law characteristics of these email networks is also presented, helping to explain the behavior of small email networks. PMID:27809275

  15. Estimation of Anonymous Email Network Characteristics through Statistical Disclosure Attacks.

    PubMed

    Portela, Javier; García Villalba, Luis Javier; Silva Trujillo, Alejandra Guadalupe; Sandoval Orozco, Ana Lucila; Kim, Tai-Hoon

    2016-11-01

    Social network analysis aims to obtain relational data from social systems to identify leaders, roles, and communities in order to model profiles or predict specific behavior in users' networks. Preserving anonymity in social networks is a subject of major concern. Anonymity can be compromised by disclosing senders' or receivers' identity, message content, or sender-receiver relationships. Under strongly incomplete information, a statistical disclosure attack is used to estimate the network and node characteristics such as centrality and clustering measures, degree distribution, and small-world-ness. A database of email networks in 29 university faculties is used to study the method. An analysis of the small-world-ness and power-law characteristics of these email networks is also presented, helping to explain the behavior of small email networks.
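    Once a network estimate is available, the characteristics named in the abstract (clustering, average path length, degree distribution, small-world-ness) can be computed with standard graph tooling. The sketch below uses networkx on a synthetic graph and a naive small-world coefficient sigma = (C/C_rand)/(L/L_rand); it illustrates only the metrics, not the statistical disclosure attack itself.

```python
import networkx as nx

# Toy stand-in for an estimated email network (nodes = accounts, edges = inferred exchanges).
G = nx.connected_watts_strogatz_graph(n=200, k=6, p=0.1, seed=42)

C = nx.average_clustering(G)
L = nx.average_shortest_path_length(G)

# Compare against a random graph with the same number of nodes and edges.
R = nx.gnm_random_graph(G.number_of_nodes(), G.number_of_edges(), seed=42)
if not nx.is_connected(R):
    R = R.subgraph(max(nx.connected_components(R), key=len)).copy()
C_rand = nx.average_clustering(R)
L_rand = nx.average_shortest_path_length(R)

sigma = (C / C_rand) / (L / L_rand)        # sigma >> 1 suggests small-world structure
degree_distribution = nx.degree_histogram(G)

print(f"C={C:.3f} L={L:.2f} sigma={sigma:.2f}")
```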

  16. Limit Pricing with Incomplete Information: Answers to Frequently Asked Questions

    ERIC Educational Resources Information Center

    Sorenson, Timothy L.

    2004-01-01

    Strategic pricing is an important and exciting topic in industrial organization and the economics of strategy. A wide range of texts use what has become a standard version of the Milgrom and Roberts (1982a) limit-pricing model to convey the essential ideas of strategic pricing under incomplete information. In addition to providing a formal, but…

  17. Gluten content of medications.

    PubMed

    Cruz, Joseph E; Cocchio, Craig; Lai, Pak Tsun; Hermes-DeSantis, Evelyn

    2015-01-01

    The establishment of a database for the identification of the presence of gluten in excipients of prescription medications is described. While resources are available to ascertain the gluten content of a given medication, these resources are incomplete and often do not contain a source and date of contact. The drug information service (DIS) at Robert Wood Johnson University Hospital (RWJUH) determined that directly contacting the manufacturer of a product is the best method to determine the gluten content of medications. The DIS sought to establish a resource for use within the institution and create directions for obtaining this information from manufacturers to ensure uniformity of the data collected. To determine the gluten content of a medication, the DIS analyzed the manufacturer's package insert to identify any statement indicating that the product contained gluten or inactive ingredients from known sources of gluten. If there was any question about the source of an inactive ingredient or if no information about gluten content appeared in the package insert, the manufacturer of the particular formulation of the queried medication was contacted to provide clarification. Manufacturers' responses were collected, and medications were categorized as "gluten free," "contains gluten," or "possibly contains gluten." To date, the DIS at RWJUH has received queries about 84 medications and has cataloged their gluten content. The DIS at RWJUH developed a database that categorizes the gluten status of medications, allowing clinicians to easily identify drugs that are safe for patients with celiac disease. Copyright © 2015 by the American Society of Health-System Pharmacists, Inc. All rights reserved.

  18. MAGA, a new database of gas natural emissions: a collaborative web environment for collecting data.

    NASA Astrophysics Data System (ADS)

    Cardellini, Carlo; Chiodini, Giovanni; Frigeri, Alessandro; Bagnato, Emanuela; Frondini, Francesco; Aiuppa, Alessandro

    2014-05-01

    The data on volcanic and non-volcanic gas emissions available online are, as of today, incomplete and, most importantly, fragmentary. Hence, there is a need for common frameworks to aggregate available data in order to characterize and quantify the phenomena at various scales. A new and detailed web database (MAGA: MApping GAs emissions) has been developed, and recently improved, to collect data on carbon degassing from volcanic and non-volcanic environments. The MAGA database allows researchers to insert data interactively and dynamically into a spatially referenced relational database management system, as well as to extract data. MAGA kicked off with the database set-up and the ingestion of data from: i) a literature survey of publications on volcanic gas fluxes, including data on active crater degassing, diffuse soil degassing, and fumaroles, both from dormant closed-conduit volcanoes (e.g., Vulcano, Phlegrean Fields, Santorini, Nisyros, Teide) and open-vent volcanoes (e.g., Etna, Stromboli) in the Mediterranean area and the Azores, and ii) the revision and update of the Googas database on non-volcanic emissions of the Italian territory (Chiodini et al., 2008), in the framework of the Deep Earth Carbon Degassing (DECADE) research initiative of the Deep Carbon Observatory (DCO). For each geo-located gas emission site, the database holds images and descriptions of the site and of the emission type (e.g., diffuse emission, plume, fumarole), the gas chemical-isotopic composition (when available), the gas temperature, and the magnitude of gas fluxes. Gas sampling, analysis, and flux measurement methods are also reported, together with references and contacts for researchers expert on each site. In this phase data can be accessed over the network from a web interface; data-driven web services, through which software clients can request data directly from the database, are planned for the near future. In this way Geographical Information Systems (GIS) and Virtual Globes (e.g., Google Earth) could easily access the database, and data could be exchanged with other databases. At the moment the database includes: i) more than 1000 flux measurements of volcanic plume degassing from the Etna and Stromboli volcanoes, ii) data from ~30 sites of diffuse soil degassing from the Neapolitan volcanoes, the Azores, the Canary Islands, Etna, Stromboli, and Vulcano Island, plus data on fumarolic emissions (~7 sites) with CO2 fluxes, and iii) data from ~270 non-volcanic gas emission sites in Italy. We believe the MAGA database is an important starting point for developing a large-scale, expandable database aimed to excite, inspire, and encourage participation among researchers. In addition, the possibility of archiving location and qualitative information for gas emission sites not yet investigated could stimulate future research and will provide an indication of the current uncertainty in global estimates of deep carbon fluxes.

  19. Long-range (fractal) correlations in the LEDA database.

    NASA Astrophysics Data System (ADS)

    di Nella, H.; Montuori, M.; Paturel, G.; Pietronero, L.; Sylos Labini, F.

    1996-04-01

    All the recent redshift surveys show highly irregular patterns of galaxies on scales of hundreds of megaparsecs, such as chains, walls, and cells. One of the most extensive galaxy catalogs is the LEDA database, which contains more than 36,000 galaxies with measured redshift. We study the correlation properties of this sample and find that the galaxy distribution shows a well-defined fractal nature up to R_S_~150h^-1^Mpc with fractal dimension D~2. We test the consistency of these results against the incompleteness of the sample.
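    In correlation analyses of this kind, the fractal dimension D is typically estimated from the scaling of the average number of neighbours within a radius r, N(<r) ∝ r^D, by fitting a slope in log-log space. The snippet below demonstrates that estimator on synthetic points with a known planar (D ≈ 2) distribution; it is a generic illustration, not the LEDA analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "galaxy" positions confined to a plane embedded in 3-D: true fractal dimension ~ 2.
pts = np.column_stack([rng.uniform(0, 100, 5000), rng.uniform(0, 100, 5000), np.zeros(5000)])

radii = np.logspace(0.3, 1.3, 10)                     # radii over which the scaling is measured
centers = rng.choice(len(pts), 200, replace=False)    # sample of points used as centres
d = np.linalg.norm(pts[centers, None, :] - pts[None, :, :], axis=2)   # (200, 5000) distances

# Average neighbour count within r, excluding the centre point itself.
counts = [(d < r).sum(axis=1).mean() - 1.0 for r in radii]

# Slope of log N(<r) vs log r estimates the correlation dimension D.
D, _ = np.polyfit(np.log(radii), np.log(counts), 1)
print(f"Estimated fractal dimension D ~ {D:.2f}")     # close to 2 for a planar distribution
```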

  20. Processing medical data: a systematic review

    PubMed Central

    2013-01-01

    Background Medical data recording is one of the basic clinical tools. The Electronic Health Record (EHR) is important for data processing, communication, efficiency and effectiveness of access to patients' information, confidentiality, and ethical and/or legal issues. Clinical records promote and support communication among service providers and hence improve the quality of healthcare. The quality of records is a reflection of the quality of care offered to patients. Methods Qualitative analysis was undertaken for this systematic review. We reviewed 40 materials published from 1999 to 2013. We searched for these materials in databases including ovidMEDLINE and ovidEMBASE. Two reviewers independently screened materials on medical data recording, documentation, and information processing and communication. Finally, all selected references were summarized, reconciled, and compiled into one document. Result Patients have died and/or suffered as a result of poor-quality medical records. Electronic health records minimize errors and save unnecessary time and money wasted on processing medical data. Conclusion Many countries have complained about the incompleteness, inappropriateness, and illegibility of records. Therefore, creating awareness of the magnitude of the problem is of paramount importance. Correct, available patient information has great potential to reduce errors and support clinical roles. PMID:24107106

  1. Experiments and improvements of ear recognition based on local texture descriptors

    NASA Astrophysics Data System (ADS)

    Benzaoui, Amir; Adjabi, Insaf; Boukrouche, Abdelhani

    2017-04-01

    The morphology of the human ear presents rich and stable information embedded on the curved 3-D surface and has as a result attracted considerable attention from forensic scientists and engineers as a biometric recognition modality. However, recognizing a person's identity from the morphology of the human ear in unconstrained environments, with insufficient and incomplete training data, strong person-specificity, and high within-range variance, can be very challenging. Following our previous work on ear recognition based on local texture descriptors, we propose to use anatomical and embryological information about the human ear in order to find the autonomous components and the locations where large interindividual variations can be detected. Embryology is particularly relevant to our approach as it provides information on the possible changes that can be observed in the external structure of the ear. We experimented with three publicly available databases, namely: IIT Delhi-1, IIT Delhi-2, and USTB-1, consisting of several ear benchmarks acquired under varying conditions and imaging qualities. The experiments show excellent results, beyond the state of the art.
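    Local texture descriptors of the kind referenced above can be illustrated with a basic local binary pattern (LBP): each pixel is encoded by thresholding its eight neighbours against the centre value, and the codes are histogrammed. The sketch below is a generic LBP histogram computed on a random patch; it is not the authors' descriptor or their anatomically informed component selection.

```python
import numpy as np

def lbp_histogram(img):
    """Basic 8-neighbour local binary pattern codes, returned as a normalized 256-bin histogram."""
    c = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (neighbour >= c).astype(np.uint8) << bit
    hist, _ = np.histogram(code, bins=256, range=(0, 256))
    return hist / hist.sum()   # normalized so histograms from different-sized regions are comparable

ear_patch = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
descriptor = lbp_histogram(ear_patch)
print(descriptor.shape, descriptor.sum())  # (256,) and ~1.0
```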

  2. Stochastic Online Learning in Dynamic Networks under Unknown Models

    DTIC Science & Technology

    2016-08-02

    Repeated Game with Incomplete Information, IEEE International Conference on Acoustics, Speech, and Signal Processing. 20-MAR-16, Shanghai, China...in a game theoretic framework for the application of multi-seller dynamic pricing with unknown demand models. We formulated the problem as an...infinitely repeated game with incomplete information and developed a dynamic pricing strategy referred to as Competitive and Cooperative Demand Learning

  3. On Belief State Representation and Its Application in Planning with Incomplete Information, Nondeterministic Actions, and Sensing Actions

    ERIC Educational Resources Information Center

    To, Son Thanh

    2012-01-01

    "Belief state" refers to the set of possible world states satisfying the agent's (usually imperfect) knowledge. The use of belief state allows the agent to reason about the world with incomplete information, by considering each possible state in the belief state individually, in the same way as if it had perfect knowledge. However, the…

  4. Youth self-report of child maltreatment in representative surveys: a systematic review

    PubMed Central

    Jessica, Laurin*; Caroline, Wallace*; Jasminka, Draca; Sarah, Aterman; Lil, Tonmyr

    2018-01-01

    Abstract Introduction: This systematic review identified population-representative youth surveys containing questions on self-reported child maltreatment. Data quality and ethical issues pertinent to maltreatment data collection were also examined. Methods: A search was conducted of relevant online databases for articles published from January 2000 through March 2016 reporting on population-representative data measuring child maltreatment. Inclusion criteria were established a priori; two reviewers independently assessed articles to ensure that the criteria were met and to verify the accuracy of extracted information. Results: A total of 73 articles reporting on 71 surveys met the inclusion criteria. A variety of strategies to ensure accurate information and to mitigate survey participants’ distress were reported. Conclusion: The extent to which efforts have been undertaken to measure the prevalence of child maltreatment reflects its perceived importance across the world. Data on child maltreatment can be effectively collected from youth, although our knowledge of best practices related to ethics and data quality is incomplete. PMID:29443484

  5. For 481 biomedical open access journals, articles are not searchable in the Directory of Open Access Journals nor in conventional biomedical databases.

    PubMed

    Liljekvist, Mads Svane; Andresen, Kristoffer; Pommergaard, Hans-Christian; Rosenberg, Jacob

    2015-01-01

    Background. Open access (OA) journals allow access to research papers free of charge to the reader. Traditionally, biomedical researchers use databases like MEDLINE and EMBASE to discover new advances. However, biomedical OA journals might not fulfill such databases' criteria, hindering dissemination. The Directory of Open Access Journals (DOAJ) is a database exclusively listing OA journals. The aim of this study was to investigate DOAJ's coverage of biomedical OA journals compared with the conventional biomedical databases. Methods. Information on all journals listed in four conventional biomedical databases (MEDLINE, PubMed Central, EMBASE and SCOPUS) and DOAJ was gathered. Journals were included if they were (1) actively publishing, (2) full OA, (3) prospectively indexed in one or more databases, and (4) of biomedical subject. Impact factor and journal language were also collected. DOAJ was compared with conventional databases regarding the proportion of journals covered, along with their impact factor and publishing language. The proportion of journals with articles indexed by DOAJ was determined. Results. In total, 3,236 biomedical OA journals were included in the study. Of the included journals, 86.7% were listed in DOAJ. Combined, the conventional biomedical databases listed 75.0% of the journals; 18.7% in MEDLINE; 36.5% in PubMed Central; 51.5% in SCOPUS and 50.6% in EMBASE. Of the journals in DOAJ, 88.7% published in English and 20.6% had received an impact factor for 2012 compared with 93.5% and 26.0%, respectively, for journals in the conventional biomedical databases. A subset of 51.1% and 48.5% of the journals in DOAJ had articles indexed from 2012 and 2013, respectively. Of journals exclusively listed in DOAJ, one journal had received an impact factor for 2012, and 59.6% of the journals had no content from 2013 indexed in DOAJ. Conclusions. DOAJ is the most complete registry of biomedical OA journals compared with five conventional biomedical databases. However, DOAJ only indexes articles for half of the biomedical journals listed, making it an incomplete source for biomedical research papers in general.

  6. Testing, Requirements, and Metrics

    NASA Technical Reports Server (NTRS)

    Rosenberg, Linda; Hyatt, Larry; Hammer, Theodore F.; Huffman, Lenore; Wilson, William

    1998-01-01

    The criticality of correct, complete, testable requirements is a fundamental tenet of software engineering. Also critical is complete requirements-based testing of the final product. Modern tools for managing requirements allow new metrics to be used in support of both of these critical processes. Using these tools, potential problems with the quality of the requirements and the test plan can be identified early in the life cycle. Some of these quality factors include: ambiguous or incomplete requirements, poorly designed requirements databases, excessive or insufficient test cases, and incomplete linkage of tests to requirements. This paper discusses how metrics can be used to evaluate the quality of the requirements and tests to avoid problems later. Requirements management and requirements-based testing have always been critical in the implementation of high-quality software systems. Recently, automated tools have become available to support requirements management. At NASA's Goddard Space Flight Center (GSFC), automated requirements management tools are being used on several large projects. The use of these tools opens the door to innovative uses of metrics in characterizing test plan quality and assessing overall testing risks. In support of these projects, the Software Assurance Technology Center (SATC) is working to develop and apply a metrics program that utilizes the information now available through the application of requirements management tools. Metrics based on this information provide real-time insight into the testing of requirements and assist the Project Quality Office in its testing oversight role. This paper discusses three facets of the SATC's efforts to evaluate the quality of the requirements and test plan early in the life cycle, thus preventing costly errors and time delays later.
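    One of the metrics mentioned, linkage of tests to requirements, reduces to set arithmetic over a traceability matrix, as sketched below. The requirement and test identifiers are made up for illustration; this is not the SATC's tooling.

```python
# Hypothetical traceability data: which requirements each test case covers.
requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}
test_links = {
    "TC-01": {"REQ-1"},
    "TC-02": {"REQ-1", "REQ-2"},
    "TC-03": set(),            # a test case not linked to any requirement
}

covered = set().union(*test_links.values())
unlinked_reqs = requirements - covered                              # requirements with no test -> testing gap
unlinked_tests = [t for t, reqs in test_links.items() if not reqs]  # tests that trace to nothing
coverage = len(covered & requirements) / len(requirements)

print(f"requirement coverage: {coverage:.0%}")        # 50%
print("untested requirements:", sorted(unlinked_reqs))
print("tests without requirements:", unlinked_tests)
```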

  7. An information propagation model considering incomplete reading behavior in microblog

    NASA Astrophysics Data System (ADS)

    Su, Qiang; Huang, Jiajia; Zhao, Xiande

    2015-02-01

    Microblog is one of the most popular communication channels on the Internet and has already become the third largest source of news and public opinion in China. Although researchers have studied information propagation in microblogs using epidemic models, previous studies have not considered the incomplete reading behavior of microblog users, so these models cannot fit real situations well. In this paper, we propose an improved model, entitled Microblog-Susceptible-Infected-Removed (Mb-SIR), for information propagation that explicitly considers users' incomplete reading behavior. We also test the effectiveness of the model using real data from Sina Microblog. We demonstrate that the new model is more accurate in describing information propagation in microblogs. In addition, we investigate the effects of the critical model parameters, e.g., reading rate, spreading rate, and removed rate, through numerical simulations. The simulation results show that, compared with the other parameters, the reading rate plays the most influential role in information propagation performance in microblogs.
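    The abstract names three parameters: a reading rate, a spreading rate, and a removed rate. A minimal discrete-time sketch of an SIR-style compartment model extended with a reading probability is shown below; the update rules and parameter values are illustrative assumptions, not the authors' exact Mb-SIR formulation.

```python
# Minimal SIR-like sketch with an extra "reading" step: a susceptible user must first read
# a post carrying the message (prob. reading_rate) before it can infect them (prob. spreading_rate).
def simulate(n=10_000, i0=10, reading_rate=0.5, spreading_rate=0.6, removed_rate=0.1, steps=80):
    S, I, R = n - i0, i0, 0
    history = []
    for _ in range(steps):
        exposure = I / n                                   # fraction of the feed carrying the message
        new_infected = S * exposure * reading_rate * spreading_rate
        new_removed = I * removed_rate
        S, I, R = S - new_infected, I + new_infected - new_removed, R + new_removed
        history.append((S, I, R))
    return history

history = simulate()
peak_step = max(range(len(history)), key=lambda t: history[t][1])
print(f"peak of ~{history[peak_step][1]:.0f} concurrently infected users at step {peak_step}")
```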

  8. Improving imperfect data from health management information systems in Africa using space-time geostatistics.

    PubMed

    Gething, Peter W; Noor, Abdisalan M; Gikandi, Priscilla W; Ogara, Esther A A; Hay, Simon I; Nixon, Mark S; Snow, Robert W; Atkinson, Peter M

    2006-06-01

    Reliable and timely information on disease-specific treatment burdens within a health system is critical for the planning and monitoring of service provision. Health management information systems (HMIS) exist to address this need at national scales across Africa but are failing to deliver adequate data because of widespread underreporting by health facilities. Faced with this inadequacy, vital public health decisions often rely on crudely adjusted regional and national estimates of treatment burdens. This study has taken the example of presumed malaria in outpatients within the largely incomplete Kenyan HMIS database and has defined a geostatistical modelling framework that can predict values for all data that are missing through space and time. The resulting complete set can then be used to define treatment burdens for presumed malaria at any level of spatial and temporal aggregation. Validation of the model has shown that these burdens are quantified to an acceptable level of accuracy at the district, provincial, and national scale. The modelling framework presented here provides, to our knowledge for the first time, reliable information from imperfect HMIS data to support evidence-based decision-making at national and sub-national levels.
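    As a rough analogue of the space-time geostatistical prediction described above, missing facility-month values can be interpolated with a Gaussian-process regressor over spatial and temporal coordinates. The sketch below uses scikit-learn on synthetic data; the coordinates, kernel choice, and length scales are assumptions for illustration, not the authors' model.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)

# Hypothetical facility records: (x_km, y_km, month_index) -> reported presumed-malaria treatments.
X_obs = rng.uniform([0, 0, 0], [100, 100, 24], size=(300, 3))
y_obs = (50 + 0.3 * X_obs[:, 0] + 10 * np.sin(2 * np.pi * X_obs[:, 2] / 12)
         + rng.normal(0, 3, 300))

# Kernel with separate length scales for space (km) and time (months), plus a noise term.
kernel = RBF(length_scale=[20.0, 20.0, 3.0]) + WhiteKernel(noise_level=5.0)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_obs, y_obs)

# Predict the burden for a facility-month that never reported, with an uncertainty estimate.
X_missing = np.array([[55.0, 40.0, 13.0]])
mean, std = gp.predict(X_missing, return_std=True)
print(f"predicted treatment burden: {mean[0]:.1f} +/- {std[0]:.1f}")
```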

  9. Improving Imperfect Data from Health Management Information Systems in Africa Using Space–Time Geostatistics

    PubMed Central

    Gething, Peter W; Noor, Abdisalan M; Gikandi, Priscilla W; Ogara, Esther A. A; Hay, Simon I; Nixon, Mark S; Snow, Robert W; Atkinson, Peter M

    2006-01-01

    Background Reliable and timely information on disease-specific treatment burdens within a health system is critical for the planning and monitoring of service provision. Health management information systems (HMIS) exist to address this need at national scales across Africa but are failing to deliver adequate data because of widespread underreporting by health facilities. Faced with this inadequacy, vital public health decisions often rely on crudely adjusted regional and national estimates of treatment burdens. Methods and Findings This study has taken the example of presumed malaria in outpatients within the largely incomplete Kenyan HMIS database and has defined a geostatistical modelling framework that can predict values for all data that are missing through space and time. The resulting complete set can then be used to define treatment burdens for presumed malaria at any level of spatial and temporal aggregation. Validation of the model has shown that these burdens are quantified to an acceptable level of accuracy at the district, provincial, and national scale. Conclusions The modelling framework presented here provides, to our knowledge for the first time, reliable information from imperfect HMIS data to support evidence-based decision-making at national and sub-national levels. PMID:16719557

  10. Digital contract approach for consistent and predictable multimedia information delivery in electronic commerce

    NASA Astrophysics Data System (ADS)

    Konana, Prabhudev; Gupta, Alok; Whinston, Andrew B.

    1997-01-01

    A pure 'technological' solution to network quality problems is incomplete, since any benefits from new technologies are offset by the demand from exponentially growing electronic commerce and data-intensive applications. Since an economic paradigm is implicit in electronic commerce, we propose a 'market-system' approach to improve quality of service. Quality of service for digital products takes on a different meaning, since users view quality of service differently and value information differently. We propose a framework for electronic commerce that is based on an economic paradigm and mass customization, and that works as a wide-area distributed management system. In our framework, surrogate servers act as intermediaries between information providers and end-users, and arrange for consistent and predictable information delivery through 'digital contracts.' These contracts are negotiated and priced based on economic principles. Surrogate servers pre-fetch, through replication, information from many different servers and consolidate it based on demand expectations. In order to recognize users' requirements and process requests accordingly, real-time databases are central to our framework. We also propose that multimedia information be separated into slowly changing and rapidly changing data streams to improve response time requirements. Surrogate servers perform the task of integrating these data streams in a way that is transparent to end-users.

  11. For 481 biomedical open access journals, articles are not searchable in the Directory of Open Access Journals nor in conventional biomedical databases

    PubMed Central

    Andresen, Kristoffer; Pommergaard, Hans-Christian; Rosenberg, Jacob

    2015-01-01

    Background. Open access (OA) journals allow access to research papers free of charge to the reader. Traditionally, biomedical researchers use databases like MEDLINE and EMBASE to discover new advances. However, biomedical OA journals might not fulfill such databases’ criteria, hindering dissemination. The Directory of Open Access Journals (DOAJ) is a database exclusively listing OA journals. The aim of this study was to investigate DOAJ’s coverage of biomedical OA journals compared with the conventional biomedical databases. Methods. Information on all journals listed in four conventional biomedical databases (MEDLINE, PubMed Central, EMBASE and SCOPUS) and DOAJ was gathered. Journals were included if they were (1) actively publishing, (2) full OA, (3) prospectively indexed in one or more databases, and (4) of biomedical subject. Impact factor and journal language were also collected. DOAJ was compared with conventional databases regarding the proportion of journals covered, along with their impact factor and publishing language. The proportion of journals with articles indexed by DOAJ was determined. Results. In total, 3,236 biomedical OA journals were included in the study. Of the included journals, 86.7% were listed in DOAJ. Combined, the conventional biomedical databases listed 75.0% of the journals; 18.7% in MEDLINE; 36.5% in PubMed Central; 51.5% in SCOPUS and 50.6% in EMBASE. Of the journals in DOAJ, 88.7% published in English and 20.6% had received an impact factor for 2012 compared with 93.5% and 26.0%, respectively, for journals in the conventional biomedical databases. A subset of 51.1% and 48.5% of the journals in DOAJ had articles indexed from 2012 and 2013, respectively. Of journals exclusively listed in DOAJ, one journal had received an impact factor for 2012, and 59.6% of the journals had no content from 2013 indexed in DOAJ. Conclusions. DOAJ is the most complete registry of biomedical OA journals compared with five conventional biomedical databases. However, DOAJ only indexes articles for half of the biomedical journals listed, making it an incomplete source for biomedical research papers in general. PMID:26038727

  12. Distributed control systems with incomplete and uncertain information

    NASA Astrophysics Data System (ADS)

    Tang, Jingpeng

    Scientific and engineering advances in wireless communication, sensors, propulsion, and other areas are rapidly making it possible to develop unmanned air vehicles (UAVs) with sophisticated capabilities. UAVs have come to the forefront as tools for airborne reconnaissance to search for, detect, and destroy enemy targets in relatively complex environments. They potentially reduce risk to human life, are cost effective, and are superior to manned aircraft for certain types of missions. It is desirable for UAVs to have a high level of intelligent autonomy to carry out mission tasks with little external supervision and control. This raises important issues involving tradeoffs between centralized control and the associated potential to optimize mission plans, and decentralized control with great robustness and the potential to adapt to changing conditions. UAV capabilities have been extended several ways through armament (e.g., Hellfire missiles on Predator UAVs), increased endurance and altitude (e.g., Global Hawk), and greater autonomy. Some known barriers to full-scale implementation of UAVs are increased communication and control requirements as well as increased platform and system complexity. One of the key problems is how UAV systems can handle incomplete and uncertain information in dynamic environments. Especially when the system is composed of heterogeneous and distributed UAVs, the overall system complexity is increased under such conditions. Presented through the use of published papers, this dissertation lays the groundwork for the study of methodologies for handling incomplete and uncertain information for distributed control systems. An agent-based simulation framework is built to investigate mathematical approaches (optimization) and emergent intelligence approaches. The first paper provides a mathematical approach for systems of UAVs to handle incomplete and uncertain information. The second paper describes an emergent intelligence approach for UAVs, again in handling incomplete and uncertain information. The third paper combines mathematical and emergent intelligence approaches.

  13. The Effectiveness and Safety of Exoskeletons as Assistive and Rehabilitation Devices in the Treatment of Neurologic Gait Disorders in Patients with Spinal Cord Injury: A Systematic Review

    PubMed Central

    Fisahn, Christian; Aach, Mirko; Jansen, Oliver; Moisi, Marc; Mayadev, Angeli; Pagarigan, Krystle T.; Dettori, Joseph R.; Schildhauer, Thomas A.

    2016-01-01

    Study Design Systematic review. Clinical Questions (1) When used as an assistive device, do wearable exoskeletons improve lower extremity function or gait compared with knee-ankle-foot orthoses (KAFOs) in patients with complete or incomplete spinal cord injury? (2) When used as a rehabilitation device, do wearable exoskeletons improve lower extremity function or gait compared with other rehabilitation strategies in patients with complete or incomplete spinal cord injury? (3) When used as an assistive or rehabilitation device, are wearable exoskeletons safe compared with KAFO for assistance or other rehabilitation strategies for rehabilitation in patients with complete or incomplete spinal cord injury? Methods PubMed, Cochrane, and Embase databases and reference lists of key articles were searched from database inception to May 2, 2016, to identify studies evaluating the effectiveness of wearable exoskeletons used as assistive or rehabilitative devices in patients with incomplete or complete spinal cord injury. Results No comparison studies were found evaluating exoskeletons as an assistive device. Nine comparison studies (11 publications) evaluated the use of exoskeletons as a rehabilitative device. The 10-meter walk test velocity and Spinal Cord Independence Measure scores showed no difference in change from baseline among patients undergoing exoskeleton training compared with various comparator therapies. The remaining primary outcome measures of 6-minute walk test distance and Walking Index for Spinal Cord Injury I and II and Functional Independence Measure–Locomotor scores showed mixed results, with some studies indicating no difference in change from baseline between exoskeleton training and comparator therapies, some indicating benefit of exoskeleton over comparator therapies, and some indicating benefit of comparator therapies over exoskeleton. Conclusion There is no data to compare locomotion assistance with exoskeleton versus conventional KAFOs. There is no consistent benefit from rehabilitation using an exoskeleton versus a variety of conventional methods in patients with chronic spinal cord injury. Trials comparing later-generation exoskeletons are needed. PMID:27853668

  14. Data Sources for Trait Databases: Comparing the Phenomic Content of Monographs and Evolutionary Matrices.

    PubMed

    Dececchi, T Alex; Mabee, Paula M; Blackburn, David C

    2016-01-01

    Databases of organismal traits that aggregate information from one or multiple sources can be leveraged for large-scale analyses in biology. Yet the differences among these data streams and how well they capture trait diversity have never been explored. We present the first analysis of the differences between phenotypes captured in free text of descriptive publications ('monographs') and those used in phylogenetic analyses ('matrices'). We focus our analysis on osteological phenotypes of the limbs of four extinct vertebrate taxa critical to our understanding of the fin-to-limb transition. We find that there is low overlap between the anatomical entities used in these two sources of phenotype data, indicating that phenotypes represented in matrices are not simply a subset of those found in monographic descriptions. Perhaps as expected, compared to characters found in matrices, phenotypes in monographs tend to emphasize descriptive and positional morphology, be somewhat more complex, and relate to fewer additional taxa. While based on a small set of focal taxa, these qualitative and quantitative data suggest that either source of phenotypes alone will result in incomplete knowledge of variation for a given taxon. As a broader community develops to use and expand databases characterizing organismal trait diversity, it is important to recognize the limitations of the data sources and develop strategies to more fully characterize variation both within species and across the tree of life.

  15. Data Sources for Trait Databases: Comparing the Phenomic Content of Monographs and Evolutionary Matrices

    PubMed Central

    Dececchi, T. Alex; Mabee, Paula M.; Blackburn, David C.

    2016-01-01

    Databases of organismal traits that aggregate information from one or multiple sources can be leveraged for large-scale analyses in biology. Yet the differences among these data streams and how well they capture trait diversity have never been explored. We present the first analysis of the differences between phenotypes captured in free text of descriptive publications (‘monographs’) and those used in phylogenetic analyses (‘matrices’). We focus our analysis on osteological phenotypes of the limbs of four extinct vertebrate taxa critical to our understanding of the fin-to-limb transition. We find that there is low overlap between the anatomical entities used in these two sources of phenotype data, indicating that phenotypes represented in matrices are not simply a subset of those found in monographic descriptions. Perhaps as expected, compared to characters found in matrices, phenotypes in monographs tend to emphasize descriptive and positional morphology, be somewhat more complex, and relate to fewer additional taxa. While based on a small set of focal taxa, these qualitative and quantitative data suggest that either source of phenotypes alone will result in incomplete knowledge of variation for a given taxon. As a broader community develops to use and expand databases characterizing organismal trait diversity, it is important to recognize the limitations of the data sources and develop strategies to more fully characterize variation both within species and across the tree of life. PMID:27191170

  16. Negotiating identity and self-image: perceptions of falls in ambulatory individuals with spinal cord injury - a qualitative study.

    PubMed

    Jørgensen, Vivien; Roaldsen, Kirsti Skavberg

    2017-04-01

    Explore and describe experiences and perceptions of falls, risk of falling, and fall-related consequences in individuals with incomplete spinal cord injury (SCI) who are still walking. A qualitative interview study applying interpretive content analysis with an inductive approach. Specialized rehabilitation hospital. A purposeful sample of 15 individuals (10 men), 23 to 78 years old, 2-34 years post injury with chronic incomplete traumatic SCI, and walking ⩾75% of time for mobility needs. Individual, semi-structured face-to-face interviews were recorded, condensed, and coded to find themes and subthemes. One overarching theme was revealed: "Falling challenges identity and self-image as normal" which comprised two main themes "Walking with incomplete SCI involves minimizing fall risk and fall-related concerns without compromising identity as normal" and "Walking with incomplete SCI implies willingness to increase fall risk in order to maintain identity as normal". Informants were aware of their increased fall risk and took precautions, but willingly exposed themselves to risky situations when important to self-identity. All informants expressed some conditional fall-related concerns, and a few experienced concerns limiting activity and participation. Ambulatory individuals with incomplete SCI considered falls to be a part of life. However, falls interfered with the informants' identities and self-images as normal, healthy, and well-functioning. A few expressed dysfunctional concerns about falling, and interventions should target these.

  17. Compilation of Disruptions to Airports by Volcanic Activity (Version 1.0, 1944-2006)

    USGS Publications Warehouse

    Guffanti, Marianne; Mayberry, Gari C.; Casadevall, Thomas J.; Wunderman, Richard

    2008-01-01

    Volcanic activity has caused significant hazards to numerous airports worldwide, with local to far-ranging effects on travelers and commerce. To more fully characterize the nature and scope of volcanic hazards to airports, we collected data on incidents of airports throughout the world that have been affected by volcanic activity, beginning in 1944 with the first documented instance of damage to modern aircraft and facilities in Naples, Italy, and extending through 2006. Information was gleaned from various sources, including news outlets, volcanological reports (particularly the Smithsonian Institution's Bulletin of the Global Volcanism Network), and previous publications on the topic. This report presents the full compilation of the data collected. For each incident, information about the affected airport and the volcanic source has been compiled as a record in a Microsoft Access database. The database is incomplete insofar as incidents may not have been reported or documented, but it does present a good sample from diverse parts of the world. Not included are en-route diversions to avoid airborne ash clouds at cruise altitudes. The database has been converted to a Microsoft Excel spreadsheet. To make the PDF version of table 1 in this open-file report resemble the spreadsheet, order the PDF pages as 12, 17, 22; 13, 18, 23; 14, 19, 24; 15, 20, 25; and 16, 21, 26. Analysis of the database reveals that, at a minimum, 101 airports in 28 countries were impacted on 171 occasions from 1944 through 2006 by eruptions at 46 volcanoes. The number of affected airports (101) probably is better constrained than the number of incidents (171) because recurring disruptions at a given airport may have been lumped together or not reported by news agencies, whereas the initial disruption likely is noticed and reported and thus the airport correctly counted.

  18. Automated identification and geometrical features extraction of individual trees from Mobile Laser Scanning data in Budapest

    NASA Astrophysics Data System (ADS)

    Koma, Zsófia; Székely, Balázs; Folly-Ritvay, Zoltán; Skobrák, Ferenc; Koenig, Kristina; Höfle, Bernhard

    2016-04-01

    Mobile Laser Scanning (MLS) is an evolving operational measurement technique for urban environments, providing large amounts of high-resolution information about trees, street features, and pole-like objects on street sides or near motorways. In this study we investigate a robust segmentation method to extract individual trees automatically in order to build an object-based tree database system. We focused on the large urban parks in Budapest (Margitsziget and Városliget; KARESZ project), which contain a large diversity of tree species. The MLS data consisted of high-density point clouds with 1-8 cm mean absolute accuracy at 80-100 m distance from the streets. The segmentation method comprised the following steps: the ground points are determined first; as a second step, cylinders are fitted in a vertical slice 1-1.5 m above ground, which is used to determine the potential location of each single tree trunk and cylinder-like object; finally, residual values are calculated as the deviation of each point from a vertically expanded fitted cylinder, and these residual values are used to separate cylinder-like objects from individual trees. After successful parameterization, the model parameters and the corresponding residual values of the fitted object are extracted and imported into the tree database. Additionally, geometric features are calculated for each segmented individual tree, such as crown base, crown width, crown length, trunk diameter, and volume of the individual tree. In the case of incompletely scanned trees, the extraction of geometric features is based on fitted circles. The result of the study is a tree database containing detailed information about urban trees, which can be a valuable dataset for ecologists, city planners, and planting and mapping purposes. Furthermore, the established database will be the starting point for classifying trees into single species. The MLS data used in this project were measured in the framework of the KARESZ project for the whole of Budapest. BSz contributed as an Alexander von Humboldt Research Fellow.
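    The trunk-detection step described above (fitting a cylinder in a thin vertical slice and examining residuals) amounts, in two dimensions, to a circle fit. The sketch below applies an algebraic (Kasa) least-squares circle fit to a synthetic slice of trunk points and reports per-point residuals; it is a simplified stand-in for the study's segmentation, with made-up coordinates.

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic slice of trunk points 1-1.5 m above ground: a noisy circle of radius 0.25 m.
theta = rng.uniform(0, 2 * np.pi, 200)
xy = np.column_stack([5.0 + 0.25 * np.cos(theta), 12.0 + 0.25 * np.sin(theta)])
xy += rng.normal(0, 0.01, xy.shape)

# Algebraic (Kasa) circle fit: x^2 + y^2 = 2*a*x + 2*b*y + c, centre (a, b), radius sqrt(c + a^2 + b^2).
A = np.column_stack([2 * xy[:, 0], 2 * xy[:, 1], np.ones(len(xy))])
b_vec = (xy ** 2).sum(axis=1)
(a, b, c), *_ = np.linalg.lstsq(A, b_vec, rcond=None)
radius = np.sqrt(c + a ** 2 + b ** 2)

# Residuals: deviation of each point from the fitted circle; large residuals flag non-trunk points.
residuals = np.abs(np.linalg.norm(xy - [a, b], axis=1) - radius)
print(f"centre=({a:.2f}, {b:.2f}) m, radius={radius:.3f} m, max residual={residuals.max():.3f} m")
```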

  19. Indigenous species barcode database improves the identification of zooplankton

    PubMed Central

    Yang, Jianghua; Zhang, Wanwan; Sun, Jingying; Xie, Yuwei; Zhang, Yimin; Burton, G. Allen; Yu, Hongxia

    2017-01-01

    Incompleteness and inaccuracy of DNA barcode databases are considered an important hindrance to the use of metabarcoding in species-level biodiversity analysis of zooplankton. Species barcoding by Sanger sequencing is inefficient for organisms with small body sizes, such as zooplankton. Here, mitochondrial cytochrome c oxidase I (COI) fragment barcodes from 910 freshwater zooplankton specimens (87 morphospecies) were recovered by a high-throughput sequencing platform, Ion Torrent PGM. Intraspecific divergence of most zooplankton was < 5%, except for Branchionus leydign (Rotifer, 14.3%), Trichocerca elongate (Rotifer, 11.5%), Lecane bulla (Rotifer, 15.9%), Synchaeta oblonga (Rotifer, 5.95%) and Schmackeria forbesi (Copepod, 6.5%). Metabarcoding data from 28 environmental samples from Lake Tai were annotated with both the indigenous database and the NCBI GenBank database. The indigenous database improved the taxonomic assignment in the metabarcoding of zooplankton. Most zooplankton (81%) with barcode sequences in the indigenous database were identified by the metabarcoding monitoring. Furthermore, the frequency and distribution of zooplankton were consistent between metabarcoding and morphological identification. Overall, the indigenous database improved the taxonomic assignment of zooplankton. PMID:28977035
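    The intraspecific divergence values quoted above are typically uncorrected pairwise distances (p-distances) between aligned COI sequences. The snippet below computes that quantity for toy aligned fragments; it is illustrative only and not the pipeline used in the study.

```python
from itertools import combinations

# Toy aligned COI fragments for one morphospecies (equal length, '-' marks a gap).
seqs = {
    "specimen_1": "ATGCTTACGGATCCA",
    "specimen_2": "ATGCTTACGGATCCT",
    "specimen_3": "ATGCATACGGATCCA",
}

def p_distance(a, b):
    """Proportion of differing sites among positions where neither sequence has a gap."""
    pairs = [(x, y) for x, y in zip(a, b) if x != "-" and y != "-"]
    return sum(x != y for x, y in pairs) / len(pairs)

for (n1, s1), (n2, s2) in combinations(seqs.items(), 2):
    print(f"{n1} vs {n2}: {100 * p_distance(s1, s2):.1f}% divergence")
```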

  20. Security and matching of partial fingerprint recognition systems

    NASA Astrophysics Data System (ADS)

    Jea, Tsai-Yang; Chavan, Viraj S.; Govindaraju, Venu; Schneider, John K.

    2004-08-01

    Despite advances in fingerprint identification techniques, matching incomplete or partial fingerprints still poses a difficult challenge. While the introduction of compact silicon chip-based sensors that capture only a part of the fingerprint area has made this problem important from a commercial perspective, there is also considerable interest in the topic for processing partial and latent fingerprints obtained at crime scenes. Attempts to match partial fingerprints using alignment techniques based on singular ridge structures fail when the partial print does not include such structures (e.g., core or delta). We present a multi-path fingerprint matching approach that utilizes localized secondary features derived using only the relative information of minutiae. Since the minutia-based fingerprint representation is an ANSI-NIST standard, our approach has the advantage of being directly applicable to already existing databases. We also analyze the vulnerability of partial fingerprint identification systems to brute force attacks. The described matching approach has been tested on FVC2002's DB1 database. The experimental results show that our approach achieves an equal error rate of 1.25% and a total error rate of 1.8% (with FAR at 0.2% and FRR at 1.6%).
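    The equal error rate reported above is the operating point at which the false accept rate equals the false reject rate. A generic way to locate it from genuine and impostor match-score distributions is sketched below on synthetic scores; this is not the matcher or dataset described in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
genuine = rng.normal(0.75, 0.08, 2000)    # synthetic match scores for same-finger comparisons
impostor = rng.normal(0.45, 0.08, 2000)   # synthetic match scores for different-finger comparisons

thresholds = np.linspace(0, 1, 1001)
far = np.array([(impostor >= t).mean() for t in thresholds])  # false accept rate at each threshold
frr = np.array([(genuine < t).mean() for t in thresholds])    # false reject rate at each threshold

i = np.argmin(np.abs(far - frr))          # threshold where the two error curves cross
eer = (far[i] + frr[i]) / 2
print(f"threshold={thresholds[i]:.3f}  FAR={far[i]:.3%}  FRR={frr[i]:.3%}  EER~{eer:.3%}")
```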

  1. Long-term results after treatment of extensive odontogenic cysts of the jaws: a review.

    PubMed

    Wakolbinger, Robert; Beck-Mannagetta, Johann

    2016-01-01

    The aim was to perform a literature review concerning long-term results after treatment of extensive cysts of the jaws. The following databases were searched: MEDLINE, Cochrane CENTRAL, Cochrane Database of Systematic Reviews, The Cochrane Library, and EMBASE. Case reports and abstracts were excluded. Three hundred fifty-six articles were found, of which 30 were included. Only a minority of the studies reported long-term results. Most authors did not distinguish between temporary complications and permanent deficiencies (incomplete bone healing, permanent sensory deficits). Based on this review, it is recommended to consider primary decompression or marsupialization ± later definitive surgery for the treatment of extensive jaw cysts in order to achieve satisfactory clinical results. Complications (occurring within the first 6 months postoperatively, e.g., infection) and remaining deficits (after a minimum of 6 months and not changing over time, e.g., bony or sensory deficit) should be clearly separated from each other. Knowledge of permanent deficits and bone healing after different therapeutic approaches is important for decision making. Patients should be informed not only about complications but also about the risk of permanent deficits for each method.

  2. Using Proxy Records to Document Gulf of Mexico Tropical Cyclones from 1820-1915

    PubMed Central

    Rohli, Robert V.; DeLong, Kristine L.; Harley, Grant L.; Trepanier, Jill C.

    2016-01-01

    Observations of pre-1950 tropical cyclones are sparse due to observational limitations; therefore, the hurricane database HURDAT2 (1851–present) maintained by the National Oceanic and Atmospheric Administration may be incomplete. Here we provide additional documentation for HURDAT2 from historical United States Army fort records (1820–1915) and other archived documents for 28 landfalling tropical cyclones, 20 of which are included in HURDAT2, along the northern Gulf of Mexico coast. One event that occurred in May 1863 is not currently documented in the HURDAT2 database but has been noted in other studies. We identify seven tropical cyclones that occurred before 1851, three of which are potential tropical cyclones. We corroborate the pre-HURDAT2 storms with a tree-ring reconstruction of hurricane impacts from the Florida Keys (1707–2009). Using this information, we suggest landfall locations for the July 1822 hurricane just west of Mobile, Alabama and 1831 hurricane near Last Island, Louisiana on 18 August. Furthermore, we model the probable track of the August 1831 hurricane using the weighted average distance grid method that incorporates historical tropical cyclone tracks to supplement report locations. PMID:27898726

  3. Review of the Methods to Obtain Paediatric Drug Safety Information: Spontaneous Reporting and Healthcare Databases, Active Surveillance Programmes, Systematic Reviews and Meta-analyses

    PubMed

    Gentili, Marta; Pozzi, Marco; Peeters, Gabrielle; Radice, Sonia; Carnovale, Carla

    2018-02-06

    Knowledge of drug safety collected during the pre-marketing phase is inevitably limited because randomized clinical trials (RCTs) are rarely designed to evaluate safety. The small and selective groups of enrolled individuals and the limited duration of trials may hamper the ability to fully characterize the safety profiles of drugs. Additionally, information about rare adverse drug reactions (ADRs) in special groups is often incomplete or not available for most of the drugs commonly used in daily clinical practice. In the paediatric setting, several high-impact safety issues have emerged. Hence, in recent years, there has been a call for improved post-marketing pharmacoepidemiological studies, in which cohorts of patients are monitored for sufficient time to determine the precise risk-benefit ratio. In this review, we discuss the currently available strategies for enhancing post-marketing monitoring of drugs in the paediatric setting and define criteria whereby they can provide valuable information to improve the management of therapy in daily clinical practice, including both safety and efficacy aspects. The strategies covered include signal detection using international pharmacovigilance and/or healthcare databases, the promotion of active surveillance initiatives that can generate complete, informative data sets for signal detection, and systematic reviews/meta-analyses. Together, these methods provide a comprehensive picture of causality and risk, improving the management of therapy in a paediatric setting, and they should be considered as a unique tool to be integrated with post-marketing activities. Copyright © Bentham Science Publishers; for any queries, please email epub@benthamscience.org.

  4. 16 CFR 1061.11 - Incomplete or insufficient applications.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 16 Commercial Practices 2 2010-01-01 2010-01-01 false Incomplete or insufficient applications. 1061.11 Section 1061.11 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION GENERAL APPLICATIONS... staff believes that additional information is necessary or useful for a proper evaluation of the...

  5. Use of Remote Sensing Data to Enhance NWS Storm Damage Toolkit

    NASA Technical Reports Server (NTRS)

    Jedlovec, Gary J.; Molthan, Andrew L.; White, Kris; Burks, Jason; Stellman, Keith; Smith, Mathew

    2012-01-01

    In the wake of a natural disaster such as a tornado, the National Weather Service (NWS) is required to provide a very detailed and timely storm damage assessment to local, state and federal homeland security officials. The Post-Storm Data Acquisition (PSDA) procedure involves the acquisition and assembly of highly perishable data necessary for accurate post-event analysis and potential integration into a geographic information system (GIS) available to its end users and associated decision makers. Information gained from the process also enables the NWS to increase its knowledge of extreme events, learn how to better use existing equipment, improve NWS warning programs, and provide accurate storm intensity and damage information to the news media and academia. To help collect and manage all of this information, forecasters in NWS Southern Region are currently developing a Storm Damage Assessment Toolkit (SDAT), which incorporates GIS-capable phones and laptops into the PSDA process by tagging damage photography, location, and storm damage details with GPS coordinates for aggregation within the GIS database. However, this tool alone does not fully integrate radar and ground based storm damage reports nor does it help to identify undetected storm damage regions. In many cases, information on storm damage location (beginning and ending points, swath width, etc.) from ground surveys is incomplete or difficult to obtain. Geographic factors (terrain and limited roads in rural areas), manpower limitations, and other logistical constraints often prevent the gathering of a comprehensive picture of tornado or hail damage, and may allow damage regions to go undetected. Molthan et al. (2011) have shown that high resolution satellite data can provide additional valuable information on storm damage tracks to augment this database. This paper presents initial development to integrate satellite-derived damage track information into the SDAT for near real-time use by forecasters and decision makers.

  6. Use of Remote Sensing Data to Enhance NWS Storm Damage Toolkit

    NASA Astrophysics Data System (ADS)

    Jedlovec, G.; Molthan, A.; White, K.; Burks, J.; Stellman, K.; Smith, M. R.

    2012-12-01

    In the wake of a natural disaster such as a tornado, the National Weather Service (NWS) is required to provide a very detailed and timely storm damage assessment to local, state and federal homeland security officials. The Post-Storm Data Acquisition (PSDA) procedure involves the acquisition and assembly of highly perishable data necessary for accurate post-event analysis and potential integration into a geographic information system (GIS) available to its end users and associated decision makers. Information gained from the process also enables the NWS to increase its knowledge of extreme events, learn how to better use existing equipment, improve NWS warning programs, and provide accurate storm intensity and damage information to the news media and academia. To help collect and manage all of this information, forecasters in NWS Southern Region are currently developing a Storm Damage Assessment Toolkit (SDAT), which incorporates GIS-capable phones and laptops into the PSDA process by tagging damage photography, location, and storm damage details with GPS coordinates for aggregation within the GIS database. However, this tool alone does not fully integrate radar and ground based storm damage reports nor does it help to identify undetected storm damage regions. In many cases, information on storm damage location (beginning and ending points, swath width, etc.) from ground surveys is incomplete or difficult to obtain. Geographic factors (terrain and limited roads in rural areas), manpower limitations, and other logistical constraints often prevent the gathering of a comprehensive picture of tornado or hail damage, and may allow damage regions to go undetected. Molthan et al. (2011) have shown that high resolution satellite data can provide additional valuable information on storm damage tracks to augment this database. This paper presents initial development to integrate satellite-derived damage track information into the SDAT for near real-time use by forecasters and decision makers.

  7. Information Fusion of Conflicting Input Data.

    PubMed

    Mönks, Uwe; Dörksen, Helene; Lohweg, Volker; Hübner, Michael

    2016-10-29

    Sensors, and also actuators or external sources such as databases, serve as data sources in order to realise condition monitoring of industrial applications or the acquisition of characteristic parameters like production speed or reject rate. Modern facilities create such a large amount of complex data that a machine operator is unable to comprehend and process the information contained in the data. Thus, information fusion mechanisms gain increasing importance. Besides the management of large amounts of data, further challenges towards the fusion algorithms arise from epistemic uncertainties (incomplete knowledge) in the input signals as well as conflicts between them. These aspects must be considered during information processing to obtain reliable results, which are in accordance with the real world. The analysis of the scientific state of the art shows that current solutions fulfil said requirements at most only partly. This article proposes the multilayered information fusion system MACRO (multilayer attribute-based conflict-reducing observation) employing the μBalTLCS (fuzzified balanced two-layer conflict solving) fusion algorithm to reduce the impact of conflicts on the fusion result. The performance of the contribution is shown by its evaluation in the scope of a machine condition monitoring application under laboratory conditions. Here, the MACRO system yields the best results compared to state-of-the-art fusion mechanisms. The utilised data is published and freely accessible.
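    The abstract does not spell out the μBalTLCS algorithm itself, so no attempt is made to reproduce it here. As a generic, hedged illustration of why conflict handling matters in evidence fusion, the sketch below (plain Python, with invented sensor mass functions) applies the classical Dempster combination rule and reports the conflict mass K that such rules simply renormalise away.

    from itertools import product

    def dempster_combine(m1, m2):
        """Combine two Dempster-Shafer mass functions defined over frozensets.

        Returns the combined mass function and the conflict mass K.
        Generic textbook rule, shown only to illustrate how conflict between
        sources is quantified; it is NOT the muBalTLCS algorithm of the paper.
        """
        combined = {}
        conflict = 0.0
        for (a, ma), (b, mb) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb          # mass assigned to the empty set
        if conflict >= 1.0:
            raise ValueError("total conflict: sources cannot be combined")
        # Dempster's rule renormalises by 1 - K, which hides the conflict;
        # conflict-reducing schemes treat K explicitly instead.
        return {s: v / (1.0 - conflict) for s, v in combined.items()}, conflict

    # two sensors judging the condition {ok, worn, faulty} of a machine part
    m_sensor1 = {frozenset({"ok"}): 0.7, frozenset({"ok", "worn"}): 0.3}
    m_sensor2 = {frozenset({"faulty"}): 0.6, frozenset({"ok", "worn", "faulty"}): 0.4}
    fused, K = dempster_combine(m_sensor1, m_sensor2)
    print(f"conflict K = {K:.2f}")   # K = 0.60: strongly conflicting evidence
    print(fused)

    A large K, as in this toy example, is exactly the situation in which conflict-reducing schemes such as the one evaluated in the paper are intended to behave more gracefully.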

  8. Information Fusion of Conflicting Input Data

    PubMed Central

    Mönks, Uwe; Dörksen, Helene; Lohweg, Volker; Hübner, Michael

    2016-01-01

    Sensors, and also actuators or external sources such as databases, serve as data sources in order to realise condition monitoring of industrial applications or the acquisition of characteristic parameters like production speed or reject rate. Modern facilities create such a large amount of complex data that a machine operator is unable to comprehend and process the information contained in the data. Thus, information fusion mechanisms gain increasing importance. Besides the management of large amounts of data, further challenges towards the fusion algorithms arise from epistemic uncertainties (incomplete knowledge) in the input signals as well as conflicts between them. These aspects must be considered during information processing to obtain reliable results, which are in accordance with the real world. The analysis of the scientific state of the art shows that current solutions fulfil said requirements at most only partly. This article proposes the multilayered information fusion system MACRO (multilayer attribute-based conflict-reducing observation) employing the μBalTLCS (fuzzified balanced two-layer conflict solving) fusion algorithm to reduce the impact of conflicts on the fusion result. The performance of the contribution is shown by its evaluation in the scope of a machine condition monitoring application under laboratory conditions. Here, the MACRO system yields the best results compared to state-of-the-art fusion mechanisms. The utilised data is published and freely accessible. PMID:27801874

  9. The Restricted Isometry Property for Time-Frequency Structured Random Matrices

    DTIC Science & Technology

    2011-06-16

    tests illustrating the use of Ψg for compressive sensing are presented in [41]. They illustrate that empirically Ψg performs very similarly to a... Cited references include: Candès, E.J., Romberg, J., Tao, T.: Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inform. Theory 52(2), 489–509 (2006); Candès, E.J., Romberg, J., Tao, T.: Stable signal recovery from incomplete and inaccurate measurements. Comm...

  10. Search for Expectancy-Inconsistent Information Reduces Uncertainty Better: The Role of Cognitive Capacity

    PubMed Central

    Strojny, Paweł; Kossowska, Małgorzata; Strojny, Agnieszka

    2016-01-01

    Motivation and cognitive capacity are key factors in people’s everyday struggle with uncertainty. However, the exact nature of their interplay in various contexts still needs to be revealed. The presented paper reports on two experimental studies which aimed to examine the joint consequences of motivational and cognitive factors for preferences regarding incomplete information expansion. In Study 1 we demonstrate the interactional effect of motivation and cognitive capacity on information preference. High need for closure resulted in a stronger relative preference for expectancy-inconsistent information among non-depleted individuals, but the opposite among cognitively depleted ones. This effect was explained by the different informative value of questions in comparison to affirmative sentences and the potential possibility of assimilation of new information if it contradicts prior knowledge. In Study 2 we further investigated the obtained effect, showing that not only questions but also other kinds of incomplete information are subject to the same dependency. Our results support the expectation that, in face of incomplete information, motivation toward closure may be fulfilled efficiently by focusing on expectancy-inconsistent pieces of data. We discuss the obtained effect in the context of previous assumptions that high need for closure results in a simple processing style, advocating a more complex approach based on the character of the provided information. PMID:27047422

  11. Analysis of the accuracy and readability of herbal supplement information on Wikipedia.

    PubMed

    Phillips, Jennifer; Lam, Connie; Palmisano, Lisa

    2014-01-01

    To determine the completeness and readability of information found in Wikipedia for leading dietary supplements and assess the accuracy of this information with regard to safety (including use during pregnancy/lactation), contraindications, drug interactions, therapeutic uses, and dosing. Cross-sectional analysis of Wikipedia articles. The contents of Wikipedia articles for the 19 top-selling herbal supplements were retrieved on July 24, 2012, and evaluated for organization, content, accuracy (as compared with information in two leading dietary supplement references) and readability. Accuracy of Wikipedia articles. No consistency was noted in how much information was included in each Wikipedia article, how the information was organized, what major categories were used, and where safety and therapeutic information was located in the article. All articles in Wikipedia contained information on therapeutic uses and adverse effects but several lacked information on drug interactions, pregnancy, and contraindications. Wikipedia articles had 26%-75% of therapeutic uses and 76%-100% of adverse effects listed in the Natural Medicines Comprehensive Database and/or Natural Standard. Overall, articles were written at a 13.5-grade level, and all were at a ninth-grade level or above. Articles in Wikipedia in mid-2012 for the 19 top-selling herbal supplements were frequently incomplete, of variable quality, and sometimes inconsistent with reputable sources of information on these products. Safety information was particularly inconsistent among the articles. Patients and health professionals should not rely solely on Wikipedia for information on these herbal supplements when treatment decisions are being made.

  12. A Study of Incomplete Abortion Following Medical Method of Abortion (MMA).

    PubMed

    Pawde, Anuya A; Ambadkar, Arun; Chauhan, Anahita R

    2016-08-01

    Medical method of abortion (MMA) is a safe, efficient, and affordable method of abortion. However, incomplete abortion is a known side effect. The aims were to study incomplete abortion following MMA, compare it with spontaneous incomplete abortion, and study referral practices and prescriptions in cases of incomplete abortion following MMA. In this prospective observational study, 100 women with first-trimester incomplete abortion, divided into two groups (spontaneous or following MMA), were administered a questionnaire which included information regarding onset of bleeding, treatment received, use of medications for abortion, their prescription, and administration. Comparison of the two groups was done using the Fisher exact test (SPSS 21.0 software). Thirty percent of incomplete abortions were seen following MMA; possible reasons being self-administration or prescription by unregistered practitioners, lack of examination, incorrect dosage and drugs, and lack of follow-up. Complications such as collapse, blood requirement, and fever were significantly higher in these patients compared to the spontaneous abortion group. The side effects of incomplete abortion following MMA can be avoided by following standard guidelines. Self-medication, over-the-counter use, and prescription by unregistered doctors should be discouraged and reported, and the need for follow-up should be emphasized.

  13. When enough is not enough: Information overload and metacognitive decisions to stop studying information.

    PubMed

    Murayama, Kou; Blake, Adam B; Kerr, Tyson; Castel, Alan D

    2016-06-01

    People are often exposed to more information than they can actually remember. Despite this frequent form of information overload, little is known about how much information people choose to remember. Using a novel "stop" paradigm, the current research examined whether and how people choose to stop receiving new-possibly overwhelming-information with the intent to maximize memory performance. Participants were presented with a long list of items and were rewarded for the number of correctly remembered words in a following free recall test. Critically, participants in a stop condition were provided with the option to stop the presentation of the remaining words at any time during the list, whereas participants in a control condition were presented with all items. Across 5 experiments, the authors found that participants tended to stop the presentation of the items to maximize the number of recalled items, but this decision ironically led to decreased memory performance relative to the control group. This pattern was consistent even after controlling for possible confounding factors (e.g., task demands). The results indicated a general, false belief that we can remember a larger number of items if we restrict the quantity of learning materials. These findings suggest people have an incomplete understanding of how we remember excessive amounts of information. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  14. Nature Disaster Risk Evaluation with a Group Decision Making Method Based on Incomplete Hesitant Fuzzy Linguistic Preference Relations.

    PubMed

    Tang, Ming; Liao, Huchang; Li, Zongmin; Xu, Zeshui

    2018-04-13

    Because the natural disaster system is a very comprehensive and large system, the disaster reduction scheme must rely on risk analysis. Experts' knowledge and experiences play a critical role in disaster risk assessment. The hesitant fuzzy linguistic preference relation is an effective tool to express experts' preference information when comparing pairwise alternatives. Owing to the lack of knowledge or a heavy workload, information may be missed in the hesitant fuzzy linguistic preference relation. Thus, an incomplete hesitant fuzzy linguistic preference relation is constructed. In this paper, we first discuss some properties of the additive consistent hesitant fuzzy linguistic preference relation. Next, the incomplete hesitant fuzzy linguistic preference relation, the normalized hesitant fuzzy linguistic preference relation, and the acceptable hesitant fuzzy linguistic preference relation are defined. Afterwards, three procedures to estimate the missing information are proposed. The first one deals with the situation in which there are only n-1 known judgments involving all the alternatives; the second one is used to estimate the missing information of the hesitant fuzzy linguistic preference relation with more known judgments; while the third procedure is used to deal with ignorance situations in which there is at least one alternative with totally missing information. Furthermore, an algorithm for group decision making with incomplete hesitant fuzzy linguistic preference relations is given. Finally, we illustrate our model with a case study about flood disaster risk evaluation. A comparative analysis is presented to demonstrate the advantage of our method.
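    As a simplified numeric sketch of the consistency-based estimation idea (ordinary fuzzy preference values in [0, 1] rather than hesitant fuzzy linguistic terms, and a plain averaging rule, both of which are assumptions rather than the paper's construction), missing entries can be filled using the additive-transitivity identity p_ik = p_ij + p_jk - 0.5, averaging over every intermediate alternative with both values known:

    import numpy as np

    def complete_preference_relation(P):
        """Fill missing entries (np.nan) of a fuzzy preference relation.

        Simplified numeric analogue of additive-consistency estimation:
        p_ik is estimated as the average of p_ij + p_jk - 0.5 over every
        intermediate j with both values known. Hesitant fuzzy *linguistic*
        terms are not modelled here.
        """
        P = P.astype(float).copy()
        n = P.shape[0]
        changed = True
        while changed:
            changed = False
            for i in range(n):
                for k in range(n):
                    if i != k and np.isnan(P[i, k]):
                        estimates = [P[i, j] + P[j, k] - 0.5
                                     for j in range(n)
                                     if j not in (i, k)
                                     and not np.isnan(P[i, j])
                                     and not np.isnan(P[j, k])]
                        if estimates:
                            P[i, k] = np.clip(np.mean(estimates), 0.0, 1.0)
                            changed = True
        return P

    nan = np.nan
    P = np.array([[0.5, 0.7, nan],
                  [0.3, 0.5, 0.6],
                  [nan, 0.4, 0.5]])
    print(complete_preference_relation(P))   # P[0,2] estimated as 0.7 + 0.6 - 0.5 = 0.8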

  15. Issues in Designing Tutors for Games of Incomplete Information: a Bridge Case Study

    NASA Astrophysics Data System (ADS)

    Kemp, Ray; McKenzie, Ben; Kemp, Elizabeth

    There are a number of commercial packages for playing the game of bridge, and even more papers on possible techniques for improving the quality of such systems. We examine some of the AI techniques that have proved successful for implementing bridge playing systems and discuss how they might be adapted for teaching the game. We pay particular attention to the issue of incomplete information and include some of our own research into the subject.

  16. Systematic reviews need systematic searchers

    PubMed Central

    McGowan, Jessie; Sampson, Margaret

    2005-01-01

    Purpose: This paper will provide a description of the methods, skills, and knowledge of expert searchers working on systematic review teams. Brief Description: Systematic reviews and meta-analyses are very important to health care practitioners, who need to keep abreast of the medical literature and make informed decisions. Searching is a critical part of conducting these systematic reviews, as errors made in the search process potentially result in a biased or otherwise incomplete evidence base for the review. Searches for systematic reviews need to be constructed to maximize recall and deal effectively with a number of potentially biasing factors. Librarians who conduct the searches for systematic reviews must be experts. Discussion/Conclusion: Expert searchers need to understand the specifics about data structure and functions of bibliographic and specialized databases, as well as the technical and methodological issues of searching. Search methodology must be based on research about retrieval practices, and it is vital that expert searchers keep informed about, advocate for, and, moreover, conduct research in information retrieval. Expert searchers are an important part of the systematic review team, crucial throughout the review process—from the development of the proposal and research question to publication. PMID:15685278

  17. Diagnosis of systemic-onset juvenile idiopathic arthritis after treatment for presumed Kawasaki disease.

    PubMed

    Dong, Siwen; Bout-Tabaku, Sharon; Texter, Karen; Jaggi, Preeti

    2015-05-01

    To estimate the incidence of systemic-onset juvenile idiopathic arthritis (SoJIA) within 6 months after treatment for presumed Kawasaki disease (KD) (presumed patients with KD with subsequent diagnosis of SoJIA [pKD/SoJIA]) and describe presentation differences from KD alone. We identified patients treated for KD at Nationwide Children's Hospital and from the Pediatric Health Information System from 2009-2013. We then identified the subset of children, pKD/SoJIA, who received an International Classification of Diseases, Ninth Revision code for SoJIA and had it listed at least once 3 months after and within 6 months after KD diagnosis. Demographic characteristics, readmission rates, treatments, and complications were noted. A literature review was also performed to identify clinical, laboratory, and echocardiographic data of previously documented patients with KD later diagnosed with SoJIA. There were 6745 total treated patients with KD in the Pediatric Health Information System database during the study period; 10 patients were identified to have pKD/SoJIA (0.2% of cohort). Those with pKD/SoJIA were predominantly Caucasian compared with patients with KD (90% and 46.8%, respectively; P=.003). Macrophage activation syndrome was more common in patients with pKD/SoJIA than in patients with KD alone (30% and 0.30%, respectively; P<.001). Fifteen cases of pKD/SoJIA were identified by literature and chart review, 12 of whom were initially diagnosed with incomplete KD. We reported a 0.2% incidence of pKD/SoJIA, which was associated with Caucasian race, macrophage activation syndrome, and an incomplete KD phenotype. Copyright © 2015 Elsevier Inc. All rights reserved.

  18. Death certificate completion skills of hospital physicians in a developing country.

    PubMed

    Haque, Ahmed Suleman; Shamim, Kanza; Siddiqui, Najm Hasan; Irfan, Muhammad; Khan, Javaid Ahmed

    2013-06-06

    Death certificates (DC) can provide valuable health status data regarding disease incidence, prevalence and mortality in a community. They can guide local health policy and help in setting priorities. Incomplete and inaccurate DC data, on the other hand, can significantly impair the precision of a national health information database. In this study we evaluated the accuracy of death certificates at a tertiary care teaching hospital in Karachi, Pakistan. A retrospective study was conducted at Aga Khan University Hospital, Karachi, Pakistan, for a period of six months. Medical records and death certificates of all patients who died under the adult medical service were studied. The demographic characteristics, administrative details, co-morbidities and cause of death from death certificates were collected using an approved standardized form. Accuracy of this information was validated using their medical records. Errors in the death certificates were classified into six categories, from 0 to 5 according to increasing severity; a grade 0 was assigned if no errors were identified, and 5, if an incorrect cause of death was attributed or placed in an improper sequence. 223 deaths occurred during the study period. 9 certificates were not accessible and 12 patients had incomplete medical records. 202 certificates were finally analyzed. The most frequent errors pertained to patients' demographics (92%) and cause(s) of death (87%). 156 (77%) certificates had 3 or more errors and 124 (62%) certificates had a combination of errors that significantly changed the death certificate interpretation. Only 1% of certificates were error free. A very high rate of errors was identified in death certificates completed at our academic institution. There is a pressing need for appropriate intervention/s to resolve this important issue.

  19. Diagnosis support system based on clinical guidelines: comparison between case-based fuzzy cognitive maps and Bayesian networks.

    PubMed

    Douali, Nassim; Csaba, Huszka; De Roo, Jos; Papageorgiou, Elpiniki I; Jaulent, Marie-Christine

    2014-01-01

    Several studies have described the prevalence and severity of diagnostic errors. Diagnostic errors can arise from cognitive, training, educational and other issues. Examples of cognitive issues include flawed reasoning, incomplete knowledge, faulty information gathering or interpretation, and inappropriate use of decision-making heuristics. We describe a new approach, case-based fuzzy cognitive maps, for medical diagnosis and evaluate it by comparison with Bayesian belief networks. We created a semantic web framework that supports the two reasoning methods. We used a database of 174 anonymous patients from several European hospitals: 80 of the patients were female and 94 male, with an average age of 45±16 years (mean±SD). Thirty of the 80 female patients were pregnant. For each patient, signs/symptoms/observables/age/sex were taken into account by the system. We used a statistical approach to compare the two methods. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  20. The international phosphate resource data base; development and maintenance

    USGS Publications Warehouse

    Bridges, Nancy J.

    1983-01-01

    The IPRDB (International Phosphate Resource Data Base) was developed to provide a single computerized source of geologic information about phosphate deposits worldwide. It is expected that this data base will encourage more thorough scientific analyses of phosphate deposits and assessments of undiscovered phosphate resources, and that methods of data collection and storage will be streamlined. Because the database was intended to serve as a repository for diverse and detailed data, a large amount of the early research effort was devoted to the design and development of the system. To date (1982), the file remains incomplete. All development work and file maintenance work on IPRDB was suspended as of October 1, 1982; this paper is intended to document the steps taken up to that date. The computer programs listed in the appendices were written specifically for the IPRDB phosbib file and are of limited future use.

  1. Modeling loosely annotated images using both given and imagined annotations

    NASA Astrophysics Data System (ADS)

    Tang, Hong; Boujemaa, Nozha; Chen, Yunhao; Deng, Lei

    2011-12-01

    In this paper, we present an approach to learn latent semantic analysis models from loosely annotated images for automatic image annotation and indexing. The given annotation in training images is loose due to: 1. ambiguous correspondences between visual features and annotated keywords; 2. incomplete lists of annotated keywords. The second reason motivates us to enrich the incomplete annotation in a simple way before learning a topic model. In particular, some "imagined" keywords are poured into the incomplete annotation through measuring similarity between keywords in terms of their co-occurrence. Then, both given and imagined annotations are employed to learn probabilistic topic models for automatically annotating new images. We conduct experiments on two image databases (i.e., Corel and ESP) coupled with their loose annotations, and compare the proposed method with state-of-the-art discrete annotation methods. The proposed method improves word-driven probability latent semantic analysis (PLSA-words) up to a comparable performance with the best discrete annotation method, while a merit of PLSA-words is still kept, i.e., a wider semantic range.
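    A minimal sketch of the enrichment step follows (the toy annotation lists, raw co-occurrence counts as the similarity measure, and a fixed number of added keywords are all assumptions; the paper's exact similarity definition may differ):

    from collections import Counter
    from itertools import combinations

    def enrich_annotations(annotations, top_k=1):
        """Add 'imagined' keywords to each incomplete annotation.

        Similarity between two keywords is measured by how often they co-occur
        across the training annotations (a simple count). For each image the
        most strongly co-occurring keywords not already present are appended.
        """
        cooc = Counter()
        for words in annotations:
            for a, b in combinations(sorted(set(words)), 2):
                cooc[(a, b)] += 1

        def sim(a, b):
            return cooc[tuple(sorted((a, b)))]

        enriched = []
        for words in annotations:
            present = set(words)
            candidates = {w for ws in annotations for w in ws} - present
            scored = sorted(candidates,
                            key=lambda c: sum(sim(c, w) for w in present),
                            reverse=True)
            enriched.append(list(words) + scored[:top_k])
        return enriched

    train = [["sky", "plane", "clouds"],
             ["sky", "clouds", "sun"],
             ["sky", "plane"],            # incomplete: 'clouds' is a likely candidate
             ["grass", "tiger"]]
    for original, new in zip(train, enrich_annotations(train)):
        print(original, "->", new)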

  2. FDA-approved drugs that are spermatotoxic in animals and the utility of animal testing for human risk prediction.

    PubMed

    Rayburn, Elizabeth R; Gao, Liang; Ding, Jiayi; Ding, Hongxia; Shao, Jun; Li, Haibo

    2018-02-01

    This study reviews FDA-approved drugs that negatively impact spermatozoa in animals, as well as how these findings reflect on observations in human male gametes. The FDA drug warning labels included in the DailyMed database and the peer-reviewed literature in the PubMed database were searched for information to identify single-ingredient, FDA-approved prescription drugs with spermatotoxic effects. A total of 235 unique, single-ingredient, FDA-approved drugs reported to be spermatotoxic in animals were identified in the drug labels. Forty-nine of these had documented negative effects on humans in either the drug label or literature, while 31 had no effect or a positive impact on human sperm. For the other 155 drugs that were spermatotoxic in animals, no human data were available. The current animal models are not very effective for predicting human spermatotoxicity, and there is limited information available about the impact of many drugs on human spermatozoa. New approaches should be designed that more accurately reflect the findings in men, including more studies on human sperm in vitro and studies using other systems (ex vivo tissue culture, xenograft models, in silico studies, etc.). In addition, the present data are often incomplete or reported in a manner that prevents interpretation of their clinical relevance. Changes should be made to the requirements for pre-clinical testing, drug surveillance, and the warning labels of drugs to ensure that the potential risks to human fertility are clearly indicated.

  3. Evaluating the quality of Marfan genotype-phenotype correlations in existing FBN1 databases.

    PubMed

    Groth, Kristian A; Von Kodolitsch, Yskert; Kutsche, Kerstin; Gaustadnes, Mette; Thorsen, Kasper; Andersen, Niels H; Gravholt, Claus H

    2017-07-01

    Genetic FBN1 testing is pivotal for confirming the clinical diagnosis of Marfan syndrome. In an effort to evaluate variant causality, FBN1 databases are often used. We evaluated the current databases regarding FBN1 variants and validated associated phenotype records with a new Marfan syndrome geno-phenotyping tool called the Marfan score. We evaluated four databases (UMD-FBN1, ClinVar, the Human Gene Mutation Database (HGMD), and Uniprot) containing 2,250 FBN1 variants supported by 4,904 records presented in 307 references. The Marfan score calculated for phenotype data from the records quantified variant associations with Marfan syndrome phenotype. We calculated a Marfan score for 1,283 variants, of which we confirmed the database diagnosis of Marfan syndrome in 77.1%. This represented only 35.8% of the total registered variants; 18.5-33.3% (UMD-FBN1 versus HGMD) of variants associated with Marfan syndrome in the databases could not be confirmed by the recorded phenotype. FBN1 databases can be imprecise and incomplete. Data should be used with caution when evaluating FBN1 variants. At present, the UMD-FBN1 database seems to be the biggest and best curated; therefore, it is the most comprehensive database. However, the need for better genotype-phenotype curated databases is evident, and we hereby present such a database.Genet Med advance online publication 01 December 2016.

  4. Rough Set Approach to Incomplete Multiscale Information System

    PubMed Central

    Yang, Xibei; Qi, Yong; Yu, Dongjun; Yu, Hualong; Song, Xiaoning; Yang, Jingyu

    2014-01-01

    Multiscale information system is a new knowledge representation system for expressing the knowledge with different levels of granulations. In this paper, by considering the unknown values, which can be seen everywhere in real world applications, the incomplete multiscale information system is firstly investigated. The descriptor technique is employed to construct rough sets at different scales for analyzing the hierarchically structured data. The problem of unravelling decision rules at different scales is also addressed. Finally, the reduct descriptors are formulated to simplify decision rules, which can be derived from different scales. Some numerical examples are employed to substantiate the conceptual arguments. PMID:25276852
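    The multiscale descriptor technique itself is not reproduced here. As a generic single-scale illustration of rough approximations over an incomplete information table, the sketch below uses the common tolerance-relation treatment of unknown values, which is an assumption rather than the paper's construction:

    def tolerance_class(table, x, attrs):
        """Objects indiscernible from x under a tolerance relation: two objects
        are tolerant if their known values agree on every attribute (a missing
        value, None, is treated as compatible with anything)."""
        def tolerant(u, v):
            return all(u[a] is None or v[a] is None or u[a] == v[a] for a in attrs)
        return {i for i, obj in enumerate(table) if tolerant(table[x], obj)}

    def approximations(table, attrs, target):
        """Lower/upper approximations of a target set of object indices."""
        lower = {x for x in range(len(table)) if tolerance_class(table, x, attrs) <= target}
        upper = {x for x in range(len(table)) if tolerance_class(table, x, attrs) & target}
        return lower, upper

    # toy incomplete information table: None marks an unknown value
    table = [
        {"temp": "high", "humidity": "low"},
        {"temp": "high", "humidity": None},
        {"temp": "low",  "humidity": "low"},
        {"temp": "low",  "humidity": "high"},
    ]
    target = {0, 1}            # objects carrying some decision label of interest
    print(approximations(table, ["temp", "humidity"], target))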

  5. Variational Quantum Tomography with Incomplete Information by Means of Semidefinite Programs

    NASA Astrophysics Data System (ADS)

    Maciel, Thiago O.; Cesário, André T.; Vianna, Reinaldo O.

    We introduce a new method to reconstruct unknown quantum states out of incomplete and noisy information. The method is a linear convex optimization problem, therefore with a unique minimum, which can be efficiently solved with Semidefinite Programs. Numerical simulations indicate that the estimated state does not overestimate purity, nor the expectation values of optimal entanglement witnesses. The convergence properties of the method are similar to compressed sensing approaches, in the sense that, in order to reconstruct low rank states, it needs just a fraction of the effort corresponding to an informationally complete measurement.
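    A hedged sketch of the general idea follows, using a least-squares fit under physicality constraints rather than the authors' exact variational objective (the one-qubit observables, the data values, and the use of the cvxpy modelling library are all assumptions):

    import numpy as np
    import cvxpy as cp

    # Pauli operators used as an (incomplete) measurement set for one qubit
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)

    observables = [X, Z]          # deliberately informationally incomplete (no Y)
    measured = [0.0, 1.0]         # expectation values, e.g. for the state |0><0|

    rho = cp.Variable((2, 2), hermitian=True)
    residuals = [cp.real(cp.trace(O @ rho)) - m for O, m in zip(observables, measured)]
    problem = cp.Problem(
        cp.Minimize(cp.sum_squares(cp.hstack(residuals))),    # fit the available data
        [rho >> 0, cp.real(cp.trace(rho)) == 1],               # physical state constraints
    )
    problem.solve()
    print(np.round(rho.value, 3))

    In this toy case positivity alone pins down the unmeasured Y component, so the reconstruction does not invent coherence that the data cannot support, which is the spirit of the "no overestimated purity" observation in the abstract.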

  6. Patient-centered consumer health social network websites: a pilot study of quality of user-generated health information.

    PubMed

    Tsai, Christopher C; Tsai, Sarai H; Zeng-Treitler, Qing; Liang, Bryan A

    2007-10-11

    The quality of user-generated health information on consumer health social networking websites has not been studied. We collected a set of postings related to Diabetes Mellitus Type I from three such sites and classified them based on accuracy, error type, and clinical significance of error. We found 48% of postings contained medical content, and 54% of these were either incomplete or contained errors. About 85% of the incomplete and erroneous messages were potentially clinically significant.

  7. State of reporting of primary biomedical research: a scoping review protocol

    PubMed Central

    Mbuagbaw, Lawrence; Samaan, Zainab; Jin, Yanling; Nwosu, Ikunna; Levine, Mitchell A H; Adachi, Jonathan D; Thabane, Lehana

    2017-01-01

    Introduction Incomplete or inconsistent reporting remains a major concern in the biomedical literature. Incomplete or inconsistent reporting may render the published findings unreliable, irreproducible or sometimes misleading. In this study, based on evidence from systematic reviews and surveys that have evaluated reporting issues in primary biomedical studies, we aim to conduct a scoping review focusing on (1) the current extent of adherence to the emerging reporting guidelines in primary biomedical research, (2) the inconsistency between protocols or registrations and full reports and (3) the disagreement between abstracts and full-text articles. Methods and analyses We will use a comprehensive search strategy to retrieve all available and eligible systematic reviews and surveys in the literature. We will search the following electronic databases: Web of Science, Excerpta Medica Database (EMBASE), MEDLINE and Cumulative Index to Nursing and Allied Health Literature (CINAHL). Our outcomes are levels of adherence to reporting guidelines, levels of consistency between protocols or registrations and full reports and the agreement between abstracts and full reports, all of which will be expressed as percentages, quality scores or categorised ratings (such as high, medium and low). No pooled analyses will be performed quantitatively given the heterogeneity of the included systematic reviews and surveys. Likewise, factors associated with improved completeness and consistency of reporting will be summarised qualitatively. The quality of the included systematic reviews will be evaluated using AMSTAR (a measurement tool to assess systematic reviews). Ethics and dissemination All findings will be published in peer-reviewed journals and relevant conferences. These results may advance our understanding of the extent of incomplete and inconsistent reporting, factors related to improved completeness and consistency of reporting and potential recommendations for various stakeholders in the biomedical community. PMID:28360252

  8. Seven Near-Earth Asteroids at Asteroids Observers (OBAS) - MMPD: 2017 Jan-May

    NASA Astrophysics Data System (ADS)

    Fornas, Gonzalo; Carreño, Alfonso; Arce, Enrique; Flores, Angel; Mas, Vincente; Rodrigo, Onofre; Brines, Pedro; Fornas, Alvaro; Herrero, David; Lozano, Juan

    2018-01-01

    We report on the results of photometric analysis of seven near-Earth asteroids (NEA) by Asteroids Observers (OBAS). This work is part of the Minor Planet Photometric Database effort that was initiated by a group of Spanish amateur astronomers. We have managed to obtain a number of accurate and complete lightcurves as well as some additional incomplete lightcurves to help analysis at future oppositions.

  9. Gap analysis of the European Earth Observation Networks

    NASA Astrophysics Data System (ADS)

    Closa, Guillem; Serral, Ivette; Maso, Joan

    2016-04-01

    Earth Observations (EO) are fundamental to enhance the scientific understanding of the current status of the Earth. Nowadays, there are a lot of EO services that provide large volumes of data, and the number of datasets available for different geosciences areas is increasing by the day. Despite this coverage, a glance at the European EO networks reveals that there are still some issues that are not being addressed; some gaps in specific themes or some thematic overlaps between different networks. This situation requires a clarification process of the actual status of the European EO networks in order to set priorities and propose future actions that will improve the European EO networks. The aim of this work is to detect the existing gaps and overlapping problems among the European EO networks. The analytical process has been done by studying the availability and the completeness of the Essential Variables (EV) data captured by the European EO networks. The concept of EVs considers that there are a number of parameters that are essential to characterize the state and trends of a system without losing significant information. This work generated a database of the existing gaps in the European EO network based on the initial GAIA-CLIM project data structure. For each theme the missing or incomplete data about each EV was identified. Then, if incomplete, the gap was described by adding its type (geographical extent, vertical extent, temporal extent, spatial resolution, etc.), the cost, the remedy, the feasibility, the impact and the priority, among others; a sketch of such a record structure is given below. Gaps in EO are identified following the ConnectinGEO methodology structured in 5 threads: identification of observation requirements, incorporation of international research programs material, consultation process within the current EO actors, GEOSS Discovery and Access Broker analysis, and industry-driven challenges implementation. Concretely, the presented work focuses on the second thread, which is based on international research programs screening, extraction of conclusions from research papers, research in collective roadmaps that contain valuable information about problems due to lack of data, and EU research calls that consider moving forward in known uncovered areas. This provides a set of results that will be later validated by an iterative process that will enhance the database content until an agreement in the community is reached and a list of priorities is ready to be delivered. This work is done thanks to the EU ConnectinGEO H2020 (Project Nr: 641538).
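    A minimal sketch of how such gap records might be structured (field names mirror the attributes listed above; the types and the example values are purely illustrative assumptions, not entries from the actual ConnectinGEO database):

    from dataclasses import dataclass, asdict

    @dataclass
    class GapRecord:
        """One entry of a gap database for an Essential Variable (EV)."""
        theme: str
        essential_variable: str
        gap_type: str        # e.g. geographical / vertical / temporal extent, resolution
        description: str
        cost: str
        remedy: str
        feasibility: str
        impact: str
        priority: str

    example = GapRecord(
        theme="Atmosphere",
        essential_variable="Water vapour profile",
        gap_type="vertical extent",
        description="Profiles above 20 km sparsely sampled over Europe",
        cost="medium", remedy="extend lidar network", feasibility="high",
        impact="high", priority="1",
    )
    print(asdict(example))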

  10. International Adoptees as Teens and Young Adults: Family and Child Function

    ERIC Educational Resources Information Center

    Matthews, Jessica A. K.; Tirella, Linda G.; Germann, Emma S.; Miller, Laurie C.

    2016-01-01

    Many of the >339,000 international adoptees arriving in the USA during the last 25 years are now teenagers and young adults (YA). Information about their long-term social integration, school performance, and self-esteem is incomplete. Moreover, the relation of these outcomes to facets of family function is incompletely understood. We…

  11. Methods for Estimating Annual Wastewater Nutrient Loads in the Southeastern United States

    USGS Publications Warehouse

    McMahon, Gerard; Tervelt, Larinda; Donehoo, William

    2007-01-01

    This report describes an approach for estimating annual total nitrogen and total phosphorus loads from point-source dischargers in the southeastern United States. Nutrient load estimates for 2002 were used in the calibration and application of a regional nutrient model, referred to as the SPARROW (SPAtially Referenced Regression On Watershed attributes) watershed model. Loads from dischargers permitted under the National Pollutant Discharge Elimination System were calculated using data from the U.S. Environmental Protection Agency Permit Compliance System database and individual state databases. Site information from both state and U.S. Environmental Protection Agency databases, including latitude and longitude and monitored effluent data, was compiled into a project database. For sites with a complete effluent-monitoring record, effluent-flow and nutrient-concentration data were used to develop estimates of annual point-source nitrogen and phosphorus loads. When flow data were available but nutrient-concentration data were missing or incomplete, typical pollutant-concentration values of total nitrogen and total phosphorus were used to estimate load. In developing typical pollutant-concentration values, the major factors assumed to influence wastewater nutrient-concentration variability were the size of the discharger (the amount of flow), the season during which discharge occurred, and the Standard Industrial Classification code of the discharger. One insight gained from this study is that in order to gain access to flow, concentration, and location data, close communication and collaboration are required with the agencies that collect and manage the data. In addition, the accuracy and usefulness of the load estimates depend on the willingness of the states and the U.S. Environmental Protection Agency to provide guidance and review for at least a subset of the load estimates that may be problematic.
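    The underlying load arithmetic can be sketched briefly. In the example below the unit-conversion factor is the standard million-gallons-to-litres constant, while the typical-pollutant-concentration table and the monthly records are illustrative placeholders, not values from the report:

    # Annual point-source load: load (kg) = flow (Mgal) x concentration (mg/L) x 3.785
    TYPICAL_CONCENTRATION = {           # mg/L; placeholder values, not the report's TPCs
        ("minor", "total_nitrogen"): 18.0,
        ("minor", "total_phosphorus"): 4.0,
        ("major", "total_nitrogen"): 10.0,
        ("major", "total_phosphorus"): 2.0,
    }
    LITRES_PER_MILLION_GALLONS = 3.785e6

    def annual_load_kg(monthly_records, facility_size, nutrient):
        """Sum monthly loads; fall back to a typical pollutant concentration
        when a month's concentration was not monitored (None)."""
        total_mg = 0.0
        for flow_mgal, conc_mg_per_l in monthly_records:
            if conc_mg_per_l is None:
                conc_mg_per_l = TYPICAL_CONCENTRATION[(facility_size, nutrient)]
            total_mg += flow_mgal * LITRES_PER_MILLION_GALLONS * conc_mg_per_l
        return total_mg / 1e6   # mg -> kg

    records = [(30.0, 9.5), (28.0, None), (31.0, 10.2)]   # (flow in Mgal, TN in mg/L)
    print(round(annual_load_kg(records, "major", "total_nitrogen")))   # about 3335 kg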

  12. Geochemical databases: minding the pitfalls to avoid the pratfalls

    NASA Astrophysics Data System (ADS)

    Goldstein, S. L.; Hofmann, A. W.

    2011-12-01

    The field of geochemistry has been revolutionized in recent years by the advent of databases (PetDB, GEOROC, NAVDAT, etc.). A decade ago, a geochemical synthesis required major time investments in order to compile relatively small amounts of fragmented data from large numbers of publications. Now virtually all of the published data on nearly any solid Earth topic can be downloaded to nearly any desktop computer with a few mouse clicks. Most solid Earth talks at international meetings show data compilations from these databases. Applications of the data are playing an increasingly important role in shaping our thinking about the Earth. They have changed some fundamental ideas about the compositional structure of the Earth (for example, showing that the Earth's "trace element depleted upper mantle" is not so depleted in trace elements). This abundance of riches also poses new risks. Until recently, important details associated with data publication (adequate metadata and quality control information) were given low priority, even in major journals. The online databases preserve whatever has been published, irrespective of quality. "Bad data" arises from many causes; here are a few. Some are associated with sample processing, including incomplete dissolution of refractory trace minerals, or inhomogeneous powders, or contamination of key elements during preparation (for example, this was a problem for lead when gasoline was leaded, and for niobium when tungsten-carbide mills were used to powder samples). Poor analytical quality is a continual problem (for example, when elemental abundances are at near background levels for an analytical method). Errors in published data tables (more common than you think) become bad data in the databases. The accepted values of interlaboratory standards change with time, while the published data based on old values stay the same. Thus the pitfalls associated with the new data accessibility are dangerous in the hands of inexperienced users (for example, a student of mine took the initiative to write a paper showing very creative insights, based on some neodymium isotope data on oceanic volcanics; unfortunately the uniqueness of the data reflected the normalization procedures used by different labs). Many syntheses assume random sampling even though we know that oversampled regions are over-represented. We will show examples where raw downloads of data from databases without extensive screening can yield data collections where the garbage swamps the useful information. We will also show impressive but meaningless correlations (e.g. upper-mantle temperature versus atmospheric temperature). In order to avoid the pratfalls, screening of database output is necessary. In order to generate better data consistency, new standards for reporting geochemical data are necessary.

  13. On the Accuracy of Language Trees

    PubMed Central

    Pompei, Simone; Loreto, Vittorio; Tria, Francesca

    2011-01-01

    Historical linguistics aims at inferring the most likely language phylogenetic tree starting from information concerning the evolutionary relatedness of languages. The available information typically consists of lists of homologous (lexical, phonological, syntactic) features or characters for many different languages: a set of parallel corpora whose compilation represents a paramount achievement in linguistics. From this perspective the reconstruction of language trees is an example of inverse problems: starting from present, incomplete, and often noisy information, one aims at inferring the most likely past evolutionary history. A fundamental issue in inverse problems is the evaluation of the inference made. A standard way of dealing with this question is to generate data with artificial models in order to have full access to the evolutionary process one is going to infer. This procedure presents an intrinsic limitation: when dealing with real data sets, one typically does not know which model of evolution is the most suitable for them. A possible way out is to compare algorithmic inference with expert classifications. This is the point of view we take here by conducting a thorough survey of the accuracy of reconstruction methods as compared with the Ethnologue expert classifications. We focus in particular on state-of-the-art distance-based methods for phylogeny reconstruction using worldwide linguistic databases. In order to assess the accuracy of the inferred trees we introduce and characterize two generalizations of standard definitions of distances between trees. Based on these scores we quantify the relative performances of the distance-based algorithms considered. Further we quantify how the completeness and the coverage of the available databases affect the accuracy of the reconstruction. Finally we draw some conclusions about where the accuracy of the reconstructions in historical linguistics stands and about the leading directions to improve it. PMID:21674034

  14. Improving Global Building Exposure Data for Disaster Forecasting, Mitigation, and Response

    NASA Astrophysics Data System (ADS)

    Chen, R. S.; Huyck, C.; Lewis, G.; Becker, M.; Vinay, S.; Tralli, D.; Eguchi, R.

    2013-12-01

    This paper describes an exploratory study being performed under the NASA Applied Sciences Program where the goal is to integrate Earth science data and information for disaster forecasting, mitigation and response. Specifically, we are delivering EO-derived built environment data and information for use in catastrophe (CAT) models and loss estimation tools. CAT models and loss estimation tools typically use GIS exposure databases to characterize the real-world environment. These datasets are often a source of great uncertainty in the loss estimates, particularly in international events, because the data are incomplete, and sometimes inaccurate and disparate in quality from one region to another. Preliminary research by project team members as part of the Global Earthquake Model (GEM) consortium suggests that a strong relationship exists between the height and volume of built-up areas and NASA data products from the Suomi National Polar-Orbiting Partnership (NPP) Visible Infrared Imaging Radiometer Suite (VIIRS), the Moderate Resolution Imaging Spectroradiometer (MODIS), and the NASA Socioeconomic Data and Applications Center (SEDAC). Applying this knowledge within the framework of the GEM Global Exposure Database (GED) is significantly enhancing our ability to quantify building exposure, particularly in developing countries and emerging insurance markets. Global insurance products that have a more comprehensive basis for assessing risk and exposure - as from EO-derived data and information assimilated into CAT models and loss estimation tools - will a) help to transform the way in which we measure, monitor and assess the vulnerability of our communities globally and, in turn, b) help encourage the investments needed - especially in the developing world - stimulating economic growth and actions that would lead to a more disaster-resilient world. Improved building exposure data will also be valuable for near-real time applications such as emergency response planning and post-disaster damage and needs assessment.

  15. Reporting rates of yellow fever vaccine 17D or 17DD-associated serious adverse events in pharmacovigilance data bases: systematic review.

    PubMed

    Thomas, Roger E; Lorenzetti, Diane L; Spragins, Wendy; Jackson, Dave; Williamson, Tyler

    2011-07-01

    To assess the reporting rates of serious adverse events attributable to yellow fever vaccination with 17D and 17DD strains as reported in pharmacovigilance databases, and assess reasons for differences in reporting rates. We searched 9 electronic databases for peer reviewed and grey literature (government reports, conferences), in all languages. Reference lists of key studies were also reviewed to identify additional studies. We identified 2,415 abstracts, of which 472 were selected for full text review. We identified 15 pharmacovigilance databases which reported adverse events attributed to yellow fever vaccination, of which 10 contributed data to this review with about 107,600,000 patients (allowing for overlapping time periods for the studies of the US VAERS database), and the data are very heavily weighted (94%) by the Brazilian database. The estimates of serious adverse events form three groups. The estimates for Australia were low at 0/210,656 for "severe neurological disease" and 1/210,656 for YEL-AVD, and also low for Brazil with 9 hypersensitivity events, 0.23 anaphylactic shock events, 0.84 neurologic syndrome events and 0.19 viscerotropic event cases/million doses. The five analyses of partly overlapping periods for the US VAERS database provide an estimate of 3.6 cases per million of YEL-AND in one analysis and 7.8 in another, and 3.1 of YEL-AVD in one analysis and 3.9 in another. The estimates for the UK used only the inclusive term of "serious adverse events" not further classified into YEL-AVD or YEL-AND and reported 34 "serious adverse events." The Swiss database used the term "serious adverse events" and reported 7 such events (including 4 "neurologic reactions") for a reporting rate of 25 "serious adverse events"/million doses. Reporting rates for serious adverse events following yellow fever vaccination are low. Differences in reporting rates may be due to differences in definitions, surveillance system organisation, methods of reporting cases, administration of YFV with other vaccines, incomplete information about denominators, time intervals for reporting events, the degree of passive reporting, access to diagnostic resources, and differences in time periods of reporting.

  16. Orientation and mobility training for partially-sighted older adults using an identification cane: a systematic review

    PubMed Central

    Ballemans, Judith; Kempen, Gertrudis IJM; Zijlstra, GA Rixt

    2011-01-01

    Objective: This study aimed to provide an overview of the development, content, feasibility, and effectiveness of existing orientation and mobility training programmes in the use of the identification cane. Data sources: A systematic bibliographic database search in PubMed, PsychInfo, ERIC, CINAHL and the Cochrane Library was performed, in combination with the expert consultation (n = 42; orientation and mobility experts), and hand-searching of reference lists. Review methods: Selection criteria included a description of the development, the content, the feasibility, or the effectiveness of orientation and mobility training in the use of the identification cane. Two reviewers independently agreed on eligibility and methodological quality. A narrative/qualitative data analysis method was applied to extract data from obtained documents. Results: The sensitive database search and hand-searching of reference lists revealed 248 potentially relevant abstracts. None met the eligibility criteria. Expert consultation resulted in the inclusion of six documents in which the information presented on the orientation and mobility training in the use of the identification cane was incomplete and of low methodological quality. Conclusion: Our review of the literature showed a lack of well-described protocols and studies on orientation and mobility training in identification cane use. PMID:21795405

  17. Borderline tumors of the ovary: A clinicopathological study

    PubMed Central

    Yasmeen, Samia; Hannan, Abdul; Sheikh, Fareeha; Syed, Amir Ali; Siddiqui, Neelam

    2017-01-01

    Objective: To report experience with borderline ovarian tumors (BOTs) in a developing country like Pakistan, with limited resources and a weak health-system database. Methods: Patients with BOTs managed at Shaukat Khanum Cancer Hospital, Lahore, Pakistan from 2004 to 2014 were included and reviewed retrospectively. Data were recorded on histopathological types, age, CA-125, stage of disease, treatment modalities and outcomes. Results: Eighty-six patients with BOT were included with a median age of 35 years. Forty-two (49%) patients had serous BOTs and 43 (50%) had mucinous BOTs, while one (1%) had mixed type. Using FIGO staging, 80 patients had stage I disease; two patients each had stage IIA, IIB, and III disease. Median follow-up time was 31.5 months. All patients had primary surgery. Seventy (81%) patients underwent complete surgical resection of tumor. Forty-three (50%) patients had fertility preserving surgery. Seventy-three (85%) patients remained in remission. Recurrent disease was observed in 13 (15%) patients. Median time to recurrence was 22 months. On further analysis, age above forty years, late stage at diagnosis and incomplete surgery were significantly associated with invasive recurrence. Conclusion: Despite a low malignant potential, relapses may occur, particularly in patients above forty years of age, in those with incomplete surgery or staging information, and in those with advanced stage at presentation. Fertility sparing surgery should be considered in young patients. Complete excision of tumor and prolonged follow-up are advised because recurrence and transformation to invasive carcinoma may occur. PMID:28523039

  18. 3122 Florence Lightcurve Analysis at Asteroids Observers (OBAS)- MPPD: 2017 Sep

    NASA Astrophysics Data System (ADS)

    Rodrigo, Onofre; Fornas, Gonzalo; Arce, Enrique; Mas, Vicente; Carreño, Alfonso; Brines, Pedro; Fornas, Alvaro; Herrero, David; Lozano, Juan; Garcia, Faustino

    2018-04-01

    We report on the results of photometric analysis of 3122 Florence, a near-Earth asteroid (NEA) by Asteroids Observers (OBAS). This work is part of the Minor Planet Photometric Database effort that was initiated by a group of Spanish amateur astronomers. We have managed to obtain a number of accurate and complete lightcurves as well as some additional incomplete lightcurves to help analysis at future oppositions.

  19. Twenty-one Asteroid Lightcurves at Asteroids Observers (OBAS) - MPPD: Nov 2016 - May 2017

    NASA Astrophysics Data System (ADS)

    Mas, Vicente; Fornas, G.; Lozano, Juan; Rodrigo, Onofre; Fornas, A.; Carreño, A.; Arce, Enrique; Brines, Pedro; Herrero, David

    2018-01-01

    We report on the analysis of photometric observations of 21 main-belt asteroids (MBA) done by Asteroids Observers (OBAS). This work is part of the Minor Planet Photometric Database task that was initiated by a group of Spanish amateur astronomers. We have managed to obtain a number of accurate and complete lightcurves as well as some additional incomplete lightcurves to help analysis at future oppositions.

  20. Social Interactions under Incomplete Information: Games, Equilibria, and Expectations

    NASA Astrophysics Data System (ADS)

    Yang, Chao

    My dissertation research investigates interactions of agents' behaviors through social networks when some information is not shared publicly, focusing on solutions to a series of challenging problems in empirical research, including heterogeneous expectations and multiple equilibria. The first chapter, "Social Interactions under Incomplete Information with Heterogeneous Expectations", extends the current literature in social interactions by devising econometric models and estimation tools with private information in not only the idiosyncratic shocks but also some exogenous covariates. For example, when analyzing peer effects in class performances, it was previously assumed that all control variables, including individual IQ and SAT scores, are known to the whole class, which is unrealistic. This chapter allows such exogenous variables to be private information and models agents' behaviors as outcomes of a Bayesian Nash Equilibrium in an incomplete information game. The distribution of equilibrium outcomes can be described by the equilibrium conditional expectations, which is unique when the parameters are within a reasonable range according to the contraction mapping theorem in function spaces. The equilibrium conditional expectations are heterogeneous in both exogenous characteristics and the private information, which makes estimation in this model more demanding than in previous ones. This problem is solved in a computationally efficient way by combining the quadrature method and the nested fixed point maximum likelihood estimation. In Monte Carlo experiments, if some exogenous characteristics are private information and the model is estimated under the mis-specified hypothesis that they are known to the public, estimates will be biased. Applying this model to municipal public spending in North Carolina, significant negative correlations between contiguous municipalities are found, showing free-riding effects. The Second chapter "A Tobit Model with Social Interactions under Incomplete Information", is an application of the first chapter to censored outcomes, corresponding to the situation when agents" behaviors are subjected to some binding restrictions. In an interesting empirical analysis for property tax rates set by North Carolina municipal governments, it is found that there is a significant positive correlation among near-by municipalities. Additionally, some private information about its own residents is used by a municipal government to predict others' tax rates, which enriches current empirical work about tax competition. The third chapter, "Social Interactions under Incomplete Information with Multiple Equilibria", extends the first chapter by investigating effective estimation methods when the condition for a unique equilibrium may not be satisfied. With multiple equilibria, the previous model is incomplete due to the unobservable equilibrium selection. Neither conventional likelihoods nor moment conditions can be used to estimate parameters without further specifications. Although there are some solutions to this issue in the current literature, they are based on strong assumptions such as agents with the same observable characteristics play the same strategy. 
This paper relaxes those assumptions and extends the all-solution method used to estimate discrete choice games to a setting with both discrete and continuous choices, bounded and unbounded outcomes, and a general form of incomplete information, where the existence of a pure strategy equilibrium has long been an open question. Using differential topology and functional analysis, it is shown that when all exogenous characteristics are public information, there is a finite number of equilibria. With privately known exogenous characteristics, the equilibria can be represented by a compact set in a Banach space and approximated by a finite set. As a result, a finite-state probability mass function can be used to specify a probability measure for equilibrium selection, which completes the model. Monte Carlo experiments on two types of binary choice models show that assuming equilibrium uniqueness can introduce estimation biases when the true interaction intensity is large and multiple equilibria exist in the data-generating process.
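
    As a rough illustration of the fixed-point logic invoked above, the sketch below iterates the best-response map of a simple linear-in-means game until the vector of expected outcomes converges. The network W, the coefficients, and the public covariate x are illustrative assumptions, not the dissertation's actual specification; the contraction condition |beta| * ||W|| < 1 plays the role of the "reasonable parameter range".

```python
import numpy as np

def equilibrium_expectations(W, x, alpha, beta, gamma, tol=1e-10, max_iter=1000):
    """Fixed-point iteration for equilibrium expected outcomes in a
    linear-in-means game: m = alpha + beta * W @ m + gamma * x.
    The map is a contraction when |beta| * ||W||_inf < 1."""
    m = np.zeros(len(x))
    for _ in range(max_iter):
        m_new = alpha + beta * W @ m + gamma * x
        if np.max(np.abs(m_new - m)) < tol:
            return m_new
        m = m_new
    raise RuntimeError("fixed-point iteration did not converge")

# toy example: 3 agents on a row-normalized network
W = np.array([[0.0, 0.5, 0.5], [0.5, 0.0, 0.5], [0.5, 0.5, 0.0]])
x = np.array([1.0, -0.5, 0.2])
print(equilibrium_expectations(W, x, alpha=0.1, beta=0.4, gamma=1.0))
```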

  1. Image-Based Airborne LiDAR Point Cloud Encoding for 3d Building Model Retrieval

    NASA Astrophysics Data System (ADS)

    Chen, Yi-Chen; Lin, Chao-Hung

    2016-06-01

    With the development of Web 2.0 and cyber city modeling, an increasing number of 3D models have become available on web-based model-sharing platforms, supporting applications such as navigation, urban planning, and virtual reality. Based on the concept of data reuse, a 3D model retrieval system is proposed to retrieve building models similar to a user-specified query. The basic idea behind this system is to reuse existing 3D building models instead of reconstructing them from point clouds. For efficient retrieval, the models in a database are generally encoded compactly using a shape descriptor. However, most geometric descriptors in related work are applied to polygonal models. In this study, the input query of the model retrieval system is a point cloud acquired by Light Detection and Ranging (LiDAR) systems, which provide efficient scene scanning and spatial information collection. Using point clouds, with their sparse, noisy, and incomplete sampling, as input queries is more difficult than using 3D models. Because the building roof is more informative than other parts of an airborne LiDAR point cloud, an image-based approach is proposed to encode both the point clouds from input queries and the 3D models in databases. The main goal of data encoding is to encode the database models and the input point clouds consistently. Firstly, top-view depth images of buildings are generated to represent the geometric surface of a building roof. Secondly, geometric features are extracted from the depth images based on the height, edges, and planes of the building. Finally, descriptors are built from spatial histograms and used in the 3D model retrieval system. For data retrieval, models are retrieved by matching the encoding coefficients of point clouds and building models. In experiments, a database of about 900,000 3D models collected from the Internet is used to evaluate data retrieval. The results of the proposed method show a clear superiority over related methods.
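
    A minimal sketch of the first encoding step described above: rasterizing a point cloud into a top-view depth (height) image. The cell size and the max-height rule are illustrative assumptions rather than the paper's exact parameters.

```python
import numpy as np

def topview_depth_image(points, cell_size=0.5):
    """Rasterize an (N, 3) airborne LiDAR point cloud into a top-view depth
    image, keeping the highest return (roof surface) per grid cell."""
    xy_min = points[:, :2].min(axis=0)
    extent = points[:, :2].max(axis=0) - xy_min
    cols, rows = np.floor(extent / cell_size).astype(int) + 1
    img = np.full((rows, cols), np.nan)
    cidx, ridx = np.floor((points[:, :2] - xy_min) / cell_size).astype(int).T
    for c, r, z in zip(cidx, ridx, points[:, 2]):
        if np.isnan(img[r, c]) or z > img[r, c]:
            img[r, c] = z
    return img

# toy cloud: three returns, two of which fall in the same cell
pts = np.array([[0.1, 0.2, 5.0], [0.3, 0.1, 7.0], [1.2, 0.9, 3.0]])
print(topview_depth_image(pts, cell_size=1.0))
```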

  2. The extraction of drug-disease correlations based on module distance in incomplete human interactome.

    PubMed

    Yu, Liang; Wang, Bingbo; Ma, Xiaoke; Gao, Lin

    2016-12-23

    Extracting drug-disease correlations is crucial for unveiling disease mechanisms, as well as for discovering new indications of available drugs, or drug repositioning. Both the interactome and the knowledge of disease-associated and drug-associated genes remain incomplete. We present a new method to predict associations between drugs and diseases. Our method is based on a module distance, which was originally proposed for calculating distances between modules in the incomplete human interactome. We first map all the disease genes and drug genes to a combined protein interaction network. Then, based on the module distance, we calculate the distances between drug gene sets and disease gene sets and take these distances as the relationships of drug-disease pairs. We also filter possible false-positive drug-disease correlations by p-value. Finally, we validate the top-100 drug-disease associations related to six drugs in the predicted results. The overlap between our predicted correlations and those reported in the Comparative Toxicogenomics Database (CTD) and the literature, together with their enriched Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways, demonstrates that our approach can not only effectively identify new drug indications but also provide new insight into drug-disease discovery.
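
    A toy sketch of a network-based distance between two gene sets, in the spirit of the module distance described above: each gene in one set is scored by its shortest-path distance to the closest gene in the other set, and the scores are averaged. The exact module distance used by the authors may differ; the graph and gene names here are purely illustrative.

```python
import networkx as nx

def set_distance(G, genes_a, genes_b):
    """Mean over genes in A of the shortest-path distance to the closest gene in B,
    restricted to genes that are present in the network."""
    a = [g for g in genes_a if g in G]
    b = [g for g in genes_b if g in G]
    dists = []
    for g in a:
        lengths = nx.single_source_shortest_path_length(G, g)
        d = min((lengths[t] for t in b if t in lengths), default=None)
        if d is not None:
            dists.append(d)
    return sum(dists) / len(dists) if dists else float("inf")

# toy interactome: a chain of five proteins
G = nx.Graph([("A", "B"), ("B", "C"), ("C", "D"), ("D", "E")])
print(set_distance(G, {"A", "B"}, {"D", "E"}))  # -> 2.5
```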

  3. The comparative recall of Google Scholar versus PubMed in identical searches for biomedical systematic reviews: a review of searches used in systematic reviews.

    PubMed

    Bramer, Wichor M; Giustini, Dean; Kramer, Bianca Mr; Anderson, Pf

    2013-12-23

    The usefulness of Google Scholar (GS) as a bibliographic database for biomedical systematic review (SR) searching is a subject of current interest and debate in research circles. Recent research has suggested GS might even be used alone in SR searching. This assertion is challenged here by testing whether GS can locate all studies included in 21 previously published SRs. Second, it examines the recall of GS, taking into account the maximum number of items that can be viewed, and tests whether more complete searches created by an information specialist will improve recall compared to the searches used in the 21 published SRs. The authors identified 21 biomedical SRs that had used GS and PubMed as information sources and reported their use of identical, reproducible search strategies in both databases. These search strategies were rerun in GS and PubMed and analyzed as to their coverage and recall. Efforts were made to improve searches that underperformed in each database. GS' overall coverage was higher than PubMed's (98% versus 91%), and overall recall was higher in GS: 80% of the references included in the 21 SRs were returned by the original searches in GS versus 68% in PubMed. Only 72% of the included references could be used, as they were listed among the first 1,000 hits (the maximum number shown). Practical precision (the number of included references retrieved in the first 1,000, divided by 1,000) was on average 1.9%, which is only slightly lower than in other published SRs. Improving the searches with the lowest recall resulted in an increase in recall from 48% to 66% in GS and, in PubMed, from 60% to 85%. Although its coverage and precision are acceptable, GS, because of its incomplete recall, should not be used as a single source in SR searching. A specialized, curated medical database such as PubMed provides experienced searchers with tools and functionality that help improve recall, and numerous options to optimize precision. Searches for SRs should be performed by experienced searchers creating searches that maximize recall for as many databases as deemed necessary by the search expert.
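
    The recall and "practical precision" figures quoted above reduce to simple ratios; a small sketch, with illustrative variable names, makes the arithmetic explicit.

```python
def search_metrics(included_refs, retrieved, viewable_limit=1000):
    """Recall over the SR's included references, plus 'practical precision' as defined
    in the study: included references found within the first `viewable_limit` hits,
    divided by the limit."""
    included = set(included_refs)
    found = [r for r in retrieved if r in included]
    found_viewable = [r for r in retrieved[:viewable_limit] if r in included]
    recall = len(found) / len(included)
    practical_precision = len(found_viewable) / viewable_limit
    return recall, practical_precision

# usage: search_metrics(refs_included_in_the_SR, ranked_hits_from_one_database)
```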

  4. The comparative recall of Google Scholar versus PubMed in identical searches for biomedical systematic reviews: a review of searches used in systematic reviews

    PubMed Central

    2013-01-01

    Background The usefulness of Google Scholar (GS) as a bibliographic database for biomedical systematic review (SR) searching is a subject of current interest and debate in research circles. Recent research has suggested GS might even be used alone in SR searching. This assertion is challenged here by testing whether GS can locate all studies included in 21 previously published SRs. Second, it examines the recall of GS, taking into account the maximum number of items that can be viewed, and tests whether more complete searches created by an information specialist will improve recall compared to the searches used in the 21 published SRs. Methods The authors identified 21 biomedical SRs that had used GS and PubMed as information sources and reported their use of identical, reproducible search strategies in both databases. These search strategies were rerun in GS and PubMed, and analyzed as to their coverage and recall. Efforts were made to improve searches that underperformed in each database. Results GS’ overall coverage was higher than PubMed (98% versus 91%) and overall recall is higher in GS: 80% of the references included in the 21 SRs were returned by the original searches in GS versus 68% in PubMed. Only 72% of the included references could be used as they were listed among the first 1,000 hits (the maximum number shown). Practical precision (the number of included references retrieved in the first 1,000, divided by 1,000) was on average 1.9%, which is only slightly lower than in other published SRs. Improving searches with the lowest recall resulted in an increase in recall from 48% to 66% in GS and, in PubMed, from 60% to 85%. Conclusions Although its coverage and precision are acceptable, GS, because of its incomplete recall, should not be used as a single source in SR searching. A specialized, curated medical database such as PubMed provides experienced searchers with tools and functionality that help improve recall, and numerous options in order to optimize precision. Searches for SRs should be performed by experienced searchers creating searches that maximize recall for as many databases as deemed necessary by the search expert. PMID:24360284

  5. Entamoeba histolytica: construction and applications of subgenomic databases.

    PubMed

    Hofer, Margit; Duchêne, Michael

    2005-07-01

    Knowledge about the influence of environmental stress such as the action of chemotherapeutic agents on gene expression in Entamoeba histolytica is limited. We plan to use oligonucleotide microarray hybridization to approach these questions. As the basis for our array, sequence data from the genome project carried out by the Institute for Genomic Research (TIGR) and the Sanger Institute were used to annotate parts of the parasite genome. Three subgenomic databases containing enzymes, cytoskeleton genes, and stress genes were compiled with the help of the ExPASy proteomics website and the BLAST servers at the two genome project sites. The known sequences from reference species, mostly human and Escherichia coli, were searched against TIGR and Sanger E. histolytica sequence contigs and the homologs were copied into a Microsoft Access database. In a similar way, two additional databases of cytoskeletal genes and stress genes were generated. Metabolic pathways could be assembled from our enzyme database, but sometimes they were incomplete as is the case for the sterol biosynthesis pathway. The raw databases contained a significant number of duplicate entries which were merged to obtain curated non-redundant databases. This procedure revealed that some E. histolytica genes may have several putative functions. Representative examples such as the case of the delta-aminolevulinate synthase/serine palmitoyltransferase are discussed.

  6. Content-based video indexing and searching with wavelet transformation

    NASA Astrophysics Data System (ADS)

    Stumpf, Florian; Al-Jawad, Naseer; Du, Hongbo; Jassim, Sabah

    2006-05-01

    Biometric databases form an essential tool in the fight against international terrorism, organised crime and fraud. Various government and law enforcement agencies have their own biometric databases consisting of combinations of fingerprints, iris codes, face images/videos and speech records for an increasing number of persons. In many cases, personal data linked to biometric records are incomplete and/or inaccurate. Besides, biometric data in different databases for the same individual may be recorded with different personal details. Following the recent terrorist atrocities, law enforcement agencies collaborate more than before and rely more heavily on database sharing. In such an environment, reliable biometric-based identification must determine not only who you are but also who else you are. In this paper we propose a compact content-based video signature and indexing scheme that can facilitate retrieval of multiple records in face biometric databases that belong to the same person even if their associated personal data are inconsistent. We assess the performance of our system using a benchmark audio-visual face biometric database that has multiple videos for each subject but with different identity claims. We demonstrate that retrieving a relatively small number of videos that are nearest, in terms of the proposed index, to any video in the database recovers a significant proportion of that individual's biometric data.
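
    The abstract does not spell out the signature construction, so the sketch below only illustrates the general idea of a wavelet-based frame signature: a few levels of a 2-D Haar decomposition, keeping the coarse approximation as a compact, normalised descriptor. The number of levels and the normalisation are assumptions, and frame dimensions are assumed divisible by 2 at every level.

```python
import numpy as np

def haar2d(img):
    """One level of a 2-D Haar wavelet transform (image dimensions assumed even)."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # vertical average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # vertical detail
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def frame_signature(frame, levels=3):
    """Compact signature: repeat the Haar decomposition on the low-pass band and
    keep the final coarse approximation, flattened and normalised."""
    band = frame.astype(float)
    for _ in range(levels):
        band, *_ = haar2d(band)
    sig = band.ravel()
    return sig / (np.linalg.norm(sig) + 1e-12)

sig = frame_signature(np.random.rand(64, 64))   # 8x8 coarse band -> 64-dimensional signature
```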

  7. Operative hysteroscopy versus vacuum aspiration for incomplete spontaneous abortion (HY-PER): study protocol for a randomized controlled trial.

    PubMed

    Huchon, Cyrille; Koskas, Martin; Agostini, Aubert; Akladios, Cherif; Alouini, Souhail; Bauville, Estelle; Bourdel, Nicolas; Fernandez, Hervé; Fritel, Xavier; Graesslin, Olivier; Legendre, Guillaume; Lucot, Jean-Philippe; Matheron, Isabelle; Panel, Pierre; Raiffort, Cyril; Fauconnier, Arnaud

    2015-08-19

    Incomplete spontaneous abortions are defined by the intrauterine retention of the products of conception after their incomplete or partial expulsion. This condition may be managed by expectant care, medical treatment or surgery. Vacuum aspiration is currently the standard surgical treatment in most centers. However, operative hysteroscopy has the advantage over vacuum aspiration of allowing the direct visualization of the retained conception product, facilitating its elective removal while limiting surgical complications. Inadequately powered retrospective studies reported subsequent fertility to be higher in patients treated by operative hysteroscopy than in those treated by vacuum aspiration. These data require confirmation in a randomized controlled trial comparing fertility rates between women undergoing hysteroscopy and those undergoing vacuum aspiration for incomplete spontaneous abortion. After providing written informed consent, 572 women with incomplete spontaneous abortion recruited from 15 centers across France will undergo randomization by a centralized computer system for treatment by either vacuum aspiration or operative hysteroscopy. Patients will not be informed of the type of treatment that they receive and will be cared for during their hospital stay in accordance with standard practices at each center. The patients will be monitored for pregnancy or adverse effects by a telephone conversation or questionnaire sent by e-mail or post over a period of two years. In cases of complications, failure of the intervention or diagnosis of uterine cavity disease, patient care will be left to the discretion of the medical center team. If our hypothesis is confirmed, this study will provide evidence that the use of operative hysteroscopy can increase the number of pregnancies continuing beyond 22 weeks of gestation in the two-year period following incomplete spontaneous abortion without increasing the incidence of morbidity and peri- and postoperative complications. The standard surgical treatment of this condition would thus be modified. This study would therefore have a large effect on the surgical management of incomplete spontaneous abortion. ClinicalTrials.gov Identifier: NCT02201732 ; registered on 17 July 2014.

  8. Semantic Segmentation and Difference Extraction via Time Series Aerial Video Camera and its Application

    NASA Astrophysics Data System (ADS)

    Amit, S. N. K.; Saito, S.; Sasaki, S.; Kiyoki, Y.; Aoki, Y.

    2015-04-01

    Google Earth with high-resolution imagery typically takes months to process new images before they appear online. This is a slow process, especially for post-disaster applications. The objective of this research is to develop a fast and effective method of updating maps by detecting local differences that occur between time series, so that only regions with differences are updated. In our system, aerial images from the Massachusetts roads and buildings open datasets and the Saitama district datasets are used as input images. Semantic segmentation is then applied to the input images. Semantic segmentation performs pixel-wise classification of images using a deep neural network, which is not only efficient in learning highly discriminative image features such as roads and buildings but also partially robust to incomplete and poorly registered target maps. Aerial images that contain semantic information are then stored in the 5D World Map database and serve as ground-truth images. This system visualises multimedia data in five dimensions: three spatial dimensions, one temporal dimension, and one degenerated dimension combining semantic and colour information. Next, ground-truth images chosen from the 5D World Map database and a new aerial image with the same spatial extent but a different acquisition time are compared via the difference extraction method. The map is only updated where local changes have occurred. Hence, map updating becomes cheaper, faster and more effective, especially for post-disaster applications, by leaving unchanged regions untouched and updating only the changed regions.
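
    A minimal sketch of the difference-extraction step described above, assuming two co-registered semantic label maps from different acquisition times; the class codes and the option to ignore certain classes are illustrative.

```python
import numpy as np

def changed_regions(labels_t0, labels_t1, ignore=()):
    """Boolean mask of pixels whose semantic class changed between two
    co-registered label maps; classes listed in `ignore` are left out."""
    changed = labels_t0 != labels_t1
    for c in ignore:
        changed &= (labels_t0 != c) & (labels_t1 != c)
    return changed

old = np.array([[1, 1], [0, 2]])
new = np.array([[1, 2], [0, 2]])
print(changed_regions(old, new))   # only the pixel that switched class is flagged
```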

  9. An empirical review of antimalarial quality field surveys: the importance of characterising outcomes.

    PubMed

    Grech, James; Robertson, James; Thomas, Jackson; Cooper, Gabrielle; Naunton, Mark; Kelly, Tamsin

    2018-01-05

    For decades, thousands of people have been dying from malaria infections because of poor-quality medicines (PQMs). While numerous efforts have been initiated to reduce their presence, PQMs are still risking the lives of those seeking treatment. This review addresses the importance of characterising results of antimalarial medicine field surveys based upon the agreement of clearly defined definitions. Medicines found to be of poor quality can be falsified or counterfeit, substandard or degraded. The distinction between these categories is important as each category requires a different countermeasure. To observe the current trends in the reporting of field surveys, a systematic literature search of six academic databases resulted in the quantitative analysis of 61 full-text journal articles. Information including sample size, sampling method, geographical regions, analytical techniques, and characterisation conclusions was observed for each. The lack of an accepted uniform reporting system has resulted in varying, incomplete reports, which may not include important information that helps form effective countermeasures. The programmes influencing medicine quality such as prequalification, procurement services, awareness and education can be supported with the information derived from characterised results. The implementation of checklists such as the Medicine Quality Assessment Reporting Guidelines will further strengthen the battle against poor-quality antimalarials. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. The development of a new database of gas emissions: MAGA, a collaborative web environment for collecting data

    NASA Astrophysics Data System (ADS)

    Cardellini, C.; Chiodini, G.; Frigeri, A.; Bagnato, E.; Aiuppa, A.; McCormick, B.

    2013-12-01

    The data on volcanic and non-volcanic gas emissions available online are, as of today, incomplete and, most importantly, fragmentary. Hence, there is a need for common frameworks to aggregate the available data, in order to characterize and quantify the phenomena at various spatial and temporal scales. Building on the Googas experience, we are now extending its capability, particularly on the user side, by developing a new web environment for collecting and publishing data. We have started to create a new and detailed web database (MAGA: MApping GAs emissions) for deep carbon degassing in the Mediterranean area. This project is part of the Deep Earth Carbon Degassing (DECADE) research initiative, launched in 2012 by the Deep Carbon Observatory (DCO) to improve the global budget of endogenous carbon from volcanoes. The MAGA database is planned to complement and integrate the work in progress within DECADE on developing the CARD (Carbon Degassing) database. The MAGA database will allow researchers to insert data interactively and dynamically into a spatially referenced relational database management system, as well as to extract data. MAGA kicked off with the database set-up and a complete literature survey of publications on volcanic gas fluxes, including data on active crater degassing, diffuse soil degassing and fumaroles, both from dormant closed-conduit volcanoes (e.g., Vulcano, Phlegrean Fields, Santorini, Nisyros, Teide, etc.) and open-vent volcanoes (e.g., Etna, Stromboli, etc.) in the Mediterranean area and the Azores. For each geo-located gas emission site, the database holds images and descriptions of the site and of the emission type (e.g., diffuse emission, plume, fumarole, etc.), gas chemical-isotopic composition (when available), gas temperature and gas flux magnitude. Gas sampling, analysis and flux measurement methods are also reported, together with references and contacts for researchers with expertise on the site. Data can be accessed over the network from a web interface or as a data-driven web service, where software clients can request data directly from the database. This way, Geographical Information Systems (GIS) and Virtual Globes (e.g., Google Earth) can easily access the database, and data can be exchanged with other databases. In detail, the database now includes: i) more than 1000 flux data points on volcanic plume degassing from the Etna (4 summit craters and bulk degassing) and Stromboli volcanoes, with time-averaged CO2 fluxes of ~ 18000 and 766 t/d, respectively; ii) data from ~ 30 sites of diffuse soil degassing from the Neapolitan volcanoes, Azores, Canary Islands, Etna, Stromboli, and Vulcano Island, with a wide range of CO2 fluxes (from less than 1 to 1500 t/d); and iii) several data on fumarolic emissions (~ 7 sites) with CO2 fluxes up to 1340 t/day (i.e., Stromboli). When available, time series of compositional data have been archived in the database (e.g., for the Campi Flegrei fumaroles). We believe the MAGA database is an important starting point for developing a large-scale, expandable database aimed at exciting, inspiring, and encouraging participation among researchers. In addition, the possibility of archiving location and qualitative information for gas emission sites not yet investigated could stimulate the scientific community toward future research and will provide an indication of the current uncertainty in global estimates of deep carbon fluxes.

  11. Explosive Growth and Advancement of the NASA/IPAC Extragalactic Database (NED)

    NASA Astrophysics Data System (ADS)

    Mazzarella, Joseph M.; Ogle, P. M.; Fadda, D.; Madore, B. F.; Ebert, R.; Baker, K.; Chan, H.; Chen, X.; Frayer, C.; Helou, G.; Jacobson, J. D.; LaGue, C.; Lo, T. M.; Pevunova, O.; Schmitz, M.; Terek, S.; Steer, I.

    2014-01-01

    The NASA/IPAC Extragalactic Database (NED) is continuing to evolve in lock-step with the explosive growth of astronomical data and advancements in information technology. A new methodology is being used to fuse data from very large surveys. Selected parameters are first loaded into a new database layer and made available in areal searches before they are cross-matched with prior NED objects. Then a programmed, rule-based statistical approach is used to identify new objects and compute cross-identifications with existing objects where possible; otherwise associations between objects are derived based on positional uncertainties or spatial resolution differences. Approximately 62 million UV sources from the GALEX All-Sky Survey and Medium Imaging Survey catalogs have been integrated into NED using this new process. The December 2013 release also contains nearly half a billion sources from the 2MASS Point Source Catalog accessible in cone searches, while the large scale cross-matching is in progress. Forthcoming updates will fuse data from All-WISE, SDSS DR12, and other very large catalogs. This work is progressing in parallel with the equally important integration of data from the literature, which is also growing rapidly. Recent updates have also included H I and CO channel maps (data cubes), as well as substantial growth in redshifts, classifications, photometry, spectra and redshift-independent distances. The By Parameters search engine now incorporates a simplified form for entry of constraints, and support for long-running queries with machine-readable output. A new tool for exploring the environments of galaxies with measured radial velocities includes informative graphics and a method to assess the incompleteness of redshift measurements. The NED user interface is also undergoing a major transformation, providing more streamlined navigation and searching, and a modern development framework for future enhancements. For further information, please visit our poster (Fadda et al. 2014) and stop by the NED exhibit for a demo. NED is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
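
    The cross-identification step described above ultimately comes down to comparing sky positions against combined positional uncertainties. The sketch below shows that comparison in its simplest form; the n-sigma rule and the dictionary-based catalog are illustrative assumptions and do not reproduce NED's rule-based pipeline.

```python
import numpy as np

def angular_sep_deg(ra1, dec1, ra2, dec2):
    """Great-circle separation in degrees between two sky positions (all inputs in degrees)."""
    ra1, dec1, ra2, dec2 = map(np.radians, (ra1, dec1, ra2, dec2))
    cos_sep = (np.sin(dec1) * np.sin(dec2)
               + np.cos(dec1) * np.cos(dec2) * np.cos(ra1 - ra2))
    return np.degrees(np.arccos(np.clip(cos_sep, -1.0, 1.0)))

def cross_match(new_src, catalog, n_sigma=3.0):
    """Return indices of catalog objects whose separation from the new source is
    within n_sigma of the combined positional uncertainty (uncertainties in arcsec)."""
    matches = []
    for i, obj in enumerate(catalog):
        sep_arcsec = 3600.0 * angular_sep_deg(new_src["ra"], new_src["dec"], obj["ra"], obj["dec"])
        combined = np.hypot(new_src["err"], obj["err"])
        if sep_arcsec <= n_sigma * combined:
            matches.append(i)
    return matches

catalog = [{"ra": 150.0, "dec": 2.2, "err": 0.5}, {"ra": 150.1, "dec": 2.2, "err": 0.5}]
print(cross_match({"ra": 150.0001, "dec": 2.2001, "err": 0.3}, catalog))   # -> [0]
```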

  12. Population-based risk for complications after transthoracic needle lung biopsy of a pulmonary nodule: an analysis of discharge records.

    PubMed

    Wiener, Renda Soylemez; Schwartz, Lisa M; Woloshin, Steven; Welch, H Gilbert

    2011-08-02

    Because pulmonary nodules are found in up to 25% of patients undergoing computed tomography of the chest, the question of whether to perform biopsy is becoming increasingly common. Data on complications after transthoracic needle lung biopsy are limited to case series from selected institutions. To determine population-based estimates of risks for complications after transthoracic needle biopsy of a pulmonary nodule. Cross-sectional analysis. The 2006 State Ambulatory Surgery Databases and State Inpatient Databases for California, Florida, Michigan, and New York from the Healthcare Cost and Utilization Project. 15 865 adults who had transthoracic needle biopsy of a pulmonary nodule. Percentage of biopsies complicated by hemorrhage, any pneumothorax, or pneumothorax requiring a chest tube, and adjusted odds ratios for these complications associated with various biopsy characteristics, calculated by using multivariate, population-averaged generalized estimating equations. Although hemorrhage was rare, complicating 1.0% (95% CI, 0.9% to 1.2%) of biopsies, 17.8% (CI, 11.8% to 23.8%) of patients with hemorrhage required a blood transfusion. In contrast, the risk for any pneumothorax was 15.0% (CI, 14.0% to 16.0%), and 6.6% (CI, 6.0% to 7.2%) of all biopsies resulted in pneumothorax requiring a chest tube. Compared with patients without complications, those who experienced hemorrhage or pneumothorax requiring a chest tube had longer lengths of stay (P < 0.001) and were more likely to develop respiratory failure requiring mechanical ventilation (P = 0.020). Patients aged 60 to 69 years (as opposed to younger or older patients), smokers, and those with chronic obstructive pulmonary disease had higher risk for complications. Estimated risks may be inaccurate if coding of complications is incomplete. The analyzed databases contain little clinical detail (such as information on nodule characteristics or biopsy pathology) and cannot indicate whether performing the biopsy produced useful information. Whereas hemorrhage is an infrequent complication of transthoracic needle lung biopsy, pneumothorax is common and often necessitates chest tube placement. These population-based data should help patients and physicians make more informed choices about whether to perform biopsy of a pulmonary nodule. Department of Veterans Affairs and National Cancer Institute.

  13. Developing a generalized allometric equation for aboveground biomass estimation

    NASA Astrophysics Data System (ADS)

    Xu, Q.; Balamuta, J. J.; Greenberg, J. A.; Li, B.; Man, A.; Xu, Z.

    2015-12-01

    A key potential uncertainty in estimating carbon stocks across multiple scales stems from the use of empirically calibrated allometric equations, which estimate aboveground biomass (AGB) from plant characteristics such as diameter at breast height (DBH) and/or height (H). The equations themselves contain significant and, at times, poorly characterized errors. Species-specific equations may be missing. Plant responses to their local biophysical environment may lead to spatially varying allometric relationships. The structural predictor may be difficult or impossible to measure accurately, particularly when derived from remote sensing data. All of these issues may lead to significant and spatially varying uncertainties in the estimation of AGB that are unexplored in the literature. We sought to quantify the errors in predicting AGB at the tree and plot level for vegetation plots in California. To accomplish this, we derived a generalized allometric equation (GAE), which we used to model AGB from a full set of tree information such as DBH, H, taxonomy, and biophysical environment. The GAE was derived using published allometric equations in the GlobAllomeTree database. These equations are sparse in detail about their errors, since authors typically provide only the coefficient of determination (R2) and the sample size. A more realistic simulation of tree AGB should also contain the noise that was not captured by the allometric equation, so we derived an empirically corrected variance estimate for the amount of noise to represent the errors in the real biomass. We also accounted for the hierarchical relationship between different species by treating each taxonomic level as a covariate nested within a higher taxonomic level (e.g., species within genus). This approach allows estimation under incomplete tree information (e.g., missing species) or uncertain information (e.g., a conjectured species), together with the biophysical environment. The GAE allowed us to quantify the contribution of each covariate to the estimated AGB of trees. Lastly, we applied the GAE to an existing vegetation plot database - the Forest Inventory and Analysis database - to derive per-tree and per-plot AGB estimates, their errors, and how much of the error could be attributed to the original equations, the plant's taxonomy, and the biophysical environment.
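
    A stripped-down sketch of the kind of allometric fit discussed above: a log-log least-squares regression of AGB on DBH and H, with the residual variance retained as the noise not captured by the equation and used for a log-normal bias correction at prediction time. The taxonomy and biophysical-environment covariates of the actual GAE are omitted, and all names and numbers are illustrative.

```python
import numpy as np

def fit_allometry(dbh, height, agb):
    """Least-squares fit of ln(AGB) = b0 + b1*ln(DBH) + b2*ln(H)."""
    X = np.column_stack([np.ones_like(dbh), np.log(dbh), np.log(height)])
    y = np.log(agb)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid_var = np.var(y - X @ beta, ddof=X.shape[1])  # noise not captured by the equation
    return beta, resid_var

def predict_agb(beta, resid_var, dbh, height):
    """Back-transform with the usual log-normal bias correction."""
    return np.exp(beta[0] + beta[1] * np.log(dbh) + beta[2] * np.log(height) + resid_var / 2.0)

# synthetic demonstration
rng = np.random.default_rng(1)
dbh = rng.uniform(5, 50, 200)
h = rng.uniform(2, 30, 200)
agb = np.exp(-2.0 + 2.4 * np.log(dbh) + 0.6 * np.log(h) + rng.normal(0, 0.3, 200))
beta, s2 = fit_allometry(dbh, h, agb)
print(beta, predict_agb(beta, s2, 25.0, 18.0))
```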

  14. Barriers to childhood immunisation: Findings from the Longitudinal Study of Australian Children.

    PubMed

    Pearce, Anna; Marshall, Helen; Bedford, Helen; Lynch, John

    2015-06-26

    To examine barriers to childhood immunisation experienced by parents in Australia. Cross-sectional analysis of secondary data. Nationally representative Longitudinal Study of Australian Children (LSAC). Five thousand one hundred seven infants aged 3-19 months in 2004. Maternal report of immunisation status: incompletely or fully immunised. Overall, 9.3% (473) of infants were incompletely immunised; of these just 16% had mothers who disagreed with immunisation. Remaining analyses focussed on infants whose mother did not disagree with immunisation (N=4994) (of whom 8% [398] were incompletely immunised). Fifteen variables representing potential immunisation barriers and facilitators were available in LSAC; these were entered into a latent class model to identify distinct clusters (or 'classes') of barriers experienced by families. Five classes were identified: (1) 'minimal barriers', (2) 'lone parent, mobile families with good support', (3) 'low social contact and service information; psychological distress', (4) 'larger families, not using formal childcare', (5) 'child health issues/concerns'. Compared to infants from families experiencing minimal barriers, all other barrier classes had a higher risk of incomplete immunisation. For example, the adjusted risk ratio (RR) for incomplete immunisation was 1.51 (95% confidence interval: 1.08-2.10) among those characterised by 'low social contact and service information; psychological distress', and 2.47 (1.87-3.25) among 'larger families, not using formal childcare'. Using the most recent data available for examining these issues in Australia, we found that the majority of incompletely immunised infants (in 2004) did not have a mother who disagreed with immunisation. Barriers to immunisation are heterogeneous, suggesting a need for tailored interventions. Copyright © 2015 The Authors. Published by Elsevier Ltd.. All rights reserved.

  15. Allergic reactions to peanuts, tree nuts, and seeds aboard commercial airliners.

    PubMed

    Comstock, Sarah S; DeMera, Rich; Vega, Laura C; Boren, Eric J; Deane, Sean; Haapanen, Lori A D; Teuber, Suzanne S

    2008-07-01

    Minimal data exist on the prevalence and characteristics of in-flight reactions to foods. To characterize reactions to foods experienced by passengers aboard commercial airplanes and to examine information about flying with a food allergy available from airlines. Telephone questionnaires were administered to individuals in a peanut, tree nut, and seed allergy database who self-reported reactions aboard aircraft. Airlines were contacted to obtain information on food allergy policies. Forty-one of 471 individuals reported allergic reactions to food while on airplanes, including 4 reporting more than 1 reaction. Peanuts accounted for most of the reactions. Twenty-one individuals (51%) treated their reactions during flight. Only 12 individuals (29%) reported the reaction to a flight attendant. Six individuals went to an emergency department after landing, including 1 after a flight diversion. Airline personnel were notified of only 3 of these severe reactions. Comparison of information given to 3 different investigators by airline customer service representatives showed that inconsistencies regarding important information occurred, such as whether the airline regularly serves peanuts. In this group of mainly adults with severe nut/seed allergy, approximately 9% reported experiencing an allergic reaction to food while on board an airplane. Some reactions were serious and potentially life-threatening. Individuals commonly did not inform airline personnel about their experiences. In addition, the quality of information about flying with food allergies available from customer service departments is highly variable and, in some cases, incomplete or inaccurate.

  16. Websites on Bladder Cancer: an Appropriate Source of Patient Information?

    PubMed

    Salem, Johannes; Paffenholz, Pia; Bolenz, Christian; von Brandenstein, Melanie; Cebulla, Angelika; Haferkamp, Axel; Kuru, Timur; Lee, Cheryl T; Pfister, David; Tsaur, Igor; Borgmann, Hendrik; Heidenreich, Axel

    2018-01-08

    A growing number of patients search for health information online. An early investigation of websites about bladder cancer (BCa) revealed mostly incomplete and particularly inaccurate information. We analyzed the quality, readability, and popularity of the most frequented websites on BCa. An Internet search on www.google.com was performed for the term "bladder cancer." After selecting the most frequented websites for patient information, HONcode quality certification, Alexa popularity rank, and readability scores (according to US grade levels) were investigated. A 36-point checklist was used to assess the content according to the EAU guidelines on BCa, which was categorized into seven topics. The popularity of the 49 websites analyzed was average, with a median Alexa popularity rank of 41,698 (interquartile range [IQR] 7-4,671,246). The readability was rated difficult with 11 years of school education needed to understand the information. Thirteen (27%) websites were HONcode certified. Out of 343 topics (seven EAU guideline topics each on 49 websites), 79% were mentioned on the websites. Of these, 10% contained incorrect information, mostly outdated or biased, and 34% contained incomplete information. Publically provided websites mentioned more topics per website (median [IQR] 7 [5.5-7] vs. 5.5 [3.3-7]; p = 0.022) and showed less incorrect information (median [IQR] 0 [0-1] vs. 1 [0-1]; p = 0.039) than physician-provided websites. Our study revealed mostly correct but partially incomplete information on BCa websites for patients. Physicians and public organizations should strive to keep their website information up-to-date and unbiased to optimize patients' health literacy.

  17. The Mass Function of Abell Clusters

    NASA Astrophysics Data System (ADS)

    Chen, J.; Huchra, J. P.; McNamara, B. R.; Mader, J.

    1998-12-01

    The velocity dispersion and mass functions for rich clusters of galaxies provide important constraints on models of the formation of Large-Scale Structure (e.g., Frenk et al. 1990). However, prior estimates of the velocity dispersion or mass function for galaxy clusters have been based on either very small samples of clusters (Bahcall and Cen 1993; Zabludoff et al. 1994) or large but incomplete samples (e.g., the Girardi et al. (1998) determination from a sample of clusters with more than 30 measured galaxy redshifts). In contrast, we approach the problem by constructing a volume-limited sample of Abell clusters. We collected individual galaxy redshifts for our sample from two major galaxy velocity databases, the NASA Extragalactic Database, NED, maintained at IPAC, and ZCAT, maintained at SAO. We assembled a database with velocity information for possible cluster members and then selected cluster members based on both spatial and velocity data. Cluster velocity dispersions and masses were calculated following the procedures of Danese, De Zotti, and di Tullio (1980) and Heisler, Tremaine, and Bahcall (1985), respectively. The final velocity dispersion and mass functions were analyzed in order to constrain cosmological parameters by comparison to the results of N-body simulations. Our data for the cluster sample as a whole and for the individual clusters (spatial maps and velocity histograms) in our sample is available on-line at http://cfa-www.harvard.edu/ huchra/clusters. This website will be updated as more data becomes available in the master redshift compilations, and will be expanded to include more clusters and large groups of galaxies.
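
    The abstract computes cluster velocity dispersions and masses following Danese, De Zotti, and di Tullio (1980) and Heisler, Tremaine, and Bahcall (1985); the sketch below shows only a bare-bones version, a rest-frame line-of-sight dispersion and a simple virial scaling with an illustrative prefactor, not those authors' estimators.

```python
import numpy as np

G = 4.301e-9  # gravitational constant in Mpc * (km/s)^2 / Msun
C = 299792.458  # speed of light in km/s

def velocity_dispersion(v_members):
    """Rest-frame line-of-sight velocity dispersion (km/s) of cluster member velocities."""
    v = np.asarray(v_members, dtype=float)
    return np.std(v, ddof=1) / (1.0 + np.mean(v) / C)

def virial_mass(sigma_los, r_mpc):
    """Simple virial estimate M ~ 5 * sigma^2 * R / G, in solar masses."""
    return 5.0 * sigma_los**2 * r_mpc / G

sigma = velocity_dispersion([11200.0, 12050.0, 11800.0, 12400.0, 11650.0])
print(sigma, virial_mass(sigma, r_mpc=1.5))
```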

  18. Improved Characterization of Combat Injury

    DTIC Science & Technology

    2010-05-01

    cause and mechanism of injury, wound descriptions, injuries, outcomes, and patient management from point of wounding onward. Because of the high numbers... [table excerpt: example injury recoding, fracture of the fourth cervical vertebra (C4) with cord contusion and incomplete cord syndrome, AIS code 640214.4 to 640214.5, MAIS from 4 to 5, from severe to critical] ...entered into the SWM database, and analyzed for entrance site and wounding path. Results: When data on 1,151 patients, who had a total of 3,500 surface

  19. Experiment and Modelling of Itb Phenomena with Eccd on Tore Supra

    NASA Astrophysics Data System (ADS)

    Turco, F.; Giruzzi, G.; Artaud, J.-F.; Huysmans, G.; Imbeaux, F.; Maget, P.; Mazon, D.; Segui, J.-L.

    2009-04-01

    An extensive database of Tore Supra discharges with Internal Transport Barriers (ITBs) has been analysed. A tight correlation has been found linking the central value of q and the creation of an ITB, while no correspondence with magnetic shear or qmin values can be inferred. In the case of incomplete transition to an ITB (O-regime), modelling in the presence of ECCD confirms the experimental observations about the triggering/stopping and amplification of the oscillations.

  20. Predictions interact with missing sensory evidence in semantic processing areas.

    PubMed

    Scharinger, Mathias; Bendixen, Alexandra; Herrmann, Björn; Henry, Molly J; Mildner, Toralf; Obleser, Jonas

    2016-02-01

    Human brain function draws on predictive mechanisms that exploit higher-level context during lower-level perception. These mechanisms are particularly relevant for situations in which sensory information is compromised or incomplete, as for example in natural speech where speech segments may be omitted due to sluggish articulation. Here, we investigate which brain areas support the processing of incomplete words that were predictable from semantic context, compared with incomplete words that were unpredictable. During functional magnetic resonance imaging (fMRI), participants heard sentences that orthogonally varied in predictability (semantically predictable vs. unpredictable) and completeness (complete vs. incomplete, i.e. missing their final consonant cluster). The effects of predictability and completeness interacted in heteromodal semantic processing areas, including left angular gyrus and left precuneus, where activity did not differ between complete and incomplete words when they were predictable. The same regions showed stronger activity for incomplete than for complete words when they were unpredictable. The interaction pattern suggests that for highly predictable words, the speech signal does not need to be complete for neural processing in semantic processing areas. Hum Brain Mapp 37:704-716, 2016. © 2015 Wiley Periodicals, Inc.

  1. European option pricing under the Student's t noise with jumps

    NASA Astrophysics Data System (ADS)

    Wang, Xiao-Tian; Li, Zhe; Zhuang, Le

    2017-03-01

    In this paper we present a new approach to pricing European options under Student's t noise with jumps. Through a conditional delta hedging strategy and minimal mean-square-error hedging, a closed-form solution for the European option value is obtained in the incomplete information case. In particular, we propose a Value-at-Risk-type procedure to estimate the volatility parameter σ such that the pricing error is in accord with the risk preferences of investors. In addition, our numerical results show that, in some cases, options are not priced in an incomplete information market.
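
    The closed-form result summarized above is not reproduced here; as a purely illustrative companion, the sketch below simulates terminal prices under a Student's t-plus-compound-Poisson-jump return model and evaluates a crude discounted payoff expectation. All parameter names and values are assumptions, and this is not the paper's hedging-based pricing formula.

```python
import numpy as np

def simulate_terminal_prices(s0, mu, sigma, nu, lam, jump_mu, jump_sigma, T, n_paths, seed=0):
    """Terminal prices when the log-return over horizon T has a Student's t body
    (variance sigma^2 * T, requires nu > 2) plus a compound Poisson jump component."""
    rng = np.random.default_rng(seed)
    t_part = rng.standard_t(nu, n_paths) * sigma * np.sqrt(T * (nu - 2) / nu)
    n_jumps = rng.poisson(lam * T, n_paths)
    jump_part = jump_mu * n_jumps + jump_sigma * np.sqrt(n_jumps) * rng.standard_normal(n_paths)
    return s0 * np.exp(mu * T + t_part + jump_part)

# crude discounted payoff expectation for a call with strike 100 (illustrative numbers only)
ST = simulate_terminal_prices(s0=100.0, mu=0.0, sigma=0.2, nu=5.0, lam=0.5,
                              jump_mu=-0.05, jump_sigma=0.1, T=1.0, n_paths=100_000)
print(np.exp(-0.01 * 1.0) * np.mean(np.maximum(ST - 100.0, 0.0)))
```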

  2. Dynamic pattern matcher using incomplete data

    NASA Technical Reports Server (NTRS)

    Johnson, Gordon G. (Inventor); Wang, Lui (Inventor)

    1993-01-01

    This invention relates generally to pattern matching systems, and more particularly to a method for dynamically adapting the system to enhance the effectiveness of a pattern match. Apparatus and methods for calculating the similarity between patterns are known. There is considerable interest, however, in the storage and retrieval of data, particularly when a search is initiated with incomplete information. For many search algorithms, a query initiating a data search requires exact information, and the data file is searched for an exact match. Inability to find an exact match thus results in a failure of the system or method.
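
    A minimal sketch of matching against a stored pattern set when the query is incomplete: similarity is computed only over the fields the query actually supplies, with NaN marking the missing ones. The RMS distance and the array layout are illustrative choices, not the patented method.

```python
import numpy as np

def match_incomplete(query, database):
    """Rank stored patterns by similarity computed only over the fields
    present in the query (NaN marks missing values)."""
    query = np.asarray(query, dtype=float)
    known = ~np.isnan(query)
    if not known.any():
        raise ValueError("query contains no usable information")
    diffs = database[:, known] - query[known]
    scores = np.sqrt(np.mean(diffs**2, axis=1))   # RMS distance over the known fields
    return np.argsort(scores)

db = np.array([[1.0, 2.0, 3.0], [1.0, 0.0, 0.0], [9.0, 9.0, 9.0]])
print(match_incomplete([1.0, np.nan, 3.0], db))   # best match first
```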

  3. Negotiating identity and self-image: perceptions of falls in ambulatory individuals with spinal cord injury – a qualitative study

    PubMed Central

    Jørgensen, Vivien; Roaldsen, Kirsti Skavberg

    2016-01-01

    Objective: Explore and describe experiences and perceptions of falls, risk of falling, and fall-related consequences in individuals with incomplete spinal cord injury (SCI) who are still walking. Design: A qualitative interview study applying interpretive content analysis with an inductive approach. Setting: Specialized rehabilitation hospital. Subjects: A purposeful sample of 15 individuals (10 men), 23 to 78 years old, 2-34 years post injury with chronic incomplete traumatic SCI, and walking ⩾75% of time for mobility needs. Methods: Individual, semi-structured face-to-face interviews were recorded, condensed, and coded to find themes and subthemes. Results: One overarching theme was revealed: “Falling challenges identity and self-image as normal” which comprised two main themes “Walking with incomplete SCI involves minimizing fall risk and fall-related concerns without compromising identity as normal” and “Walking with incomplete SCI implies willingness to increase fall risk in order to maintain identity as normal”. Informants were aware of their increased fall risk and took precautions, but willingly exposed themselves to risky situations when important to self-identity. All informants expressed some conditional fall-related concerns, and a few experienced concerns limiting activity and participation. Conclusion: Ambulatory individuals with incomplete SCI considered falls to be a part of life. However, falls interfered with the informants’ identities and self-images as normal, healthy, and well-functioning. A few expressed dysfunctional concerns about falling, and interventions should target these. PMID:27170274

  4. Classifying with confidence from incomplete information.

    DOE PAGES

    Parrish, Nathan; Anderson, Hyrum S.; Gupta, Maya R.; ...

    2013-12-01

    For this paper, we consider the problem of classifying a test sample given incomplete information. This problem arises naturally when data about a test sample is collected over time, or when costs must be incurred to compute the classification features. For example, in a distributed sensor network only a fraction of the sensors may have reported measurements at a certain time, and additional time, power, and bandwidth is needed to collect the complete data to classify. A practical goal is to assign a class label as soon as enough data is available to make a good decision. We formalize this goal through the notion of reliability: the probability that a label assigned given incomplete data would be the same as the label assigned given the complete data, and we propose a method to classify incomplete data only if some reliability threshold is met. Our approach models the complete data as a random variable whose distribution is dependent on the current incomplete data and the (complete) training data. The method differs from standard imputation strategies in that our focus is on determining the reliability of the classification decision, rather than just the class label. We show that the method provides useful reliability estimates of the correctness of the imputed class labels on a set of experiments on time-series data sets, where the goal is to classify the time-series as early as possible while still guaranteeing that the reliability threshold is met.
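
    A small sketch of the reliability idea described above, under simplifying assumptions: the missing features are drawn from a user-supplied conditional sampler, a provisional label is computed with the missing block set to its conditional mean, and reliability is estimated as the fraction of sampled completions that reproduce that label. `classify` and `sampler` are placeholders for the user's classifier and conditional model; the paper's actual estimator may differ.

```python
import numpy as np

def reliability(classify, x_obs, missing_idx, sampler, n_samples=500, rng=None):
    """Estimate the probability that the label assigned now (missing features imputed
    by their conditional mean) matches the label assigned once the data are complete."""
    rng = rng or np.random.default_rng(0)
    draws = sampler(n_samples, rng)                # (n_samples, k) samples of the missing block
    x_mean = x_obs.copy()
    x_mean[missing_idx] = draws.mean(axis=0)
    provisional = classify(x_mean)
    agree = 0
    for d in draws:
        x_full = x_obs.copy()
        x_full[missing_idx] = d
        agree += classify(x_full) == provisional
    return provisional, agree / n_samples

# usage sketch: act only when reliability clears a threshold
# label, rel = reliability(my_classifier, x, [2, 3], my_conditional_sampler)
# if rel >= 0.95: accept `label`; otherwise wait for more data
```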

  5. A Systematic Review of Unmet Information and Psychosocial Support Needs of Adults Diagnosed with Thyroid Cancer.

    PubMed

    Hyun, Yong Gyu; Alhashemi, Ahmad; Fazelzad, Rouhi; Goldberg, Alyse S; Goldstein, David P; Sawka, Anna M

    2016-09-01

    Patient education and psychosocial support to patients are important elements of comprehensive cancer care, but the needs of thyroid cancer survivors are not well understood. The published English-language quantitative literature on (i) unmet medical information and (ii) psychosocial support needs of thyroid cancer survivors was systematically reviewed. A librarian information specialist searched seven electronic databases and a hand search was conducted. Two reviewers independently screened citations from the electronic search and reviewed relevant full-text papers. There was consensus between reviewers on the included papers, and duplicate independent abstraction was performed. The results were summarized descriptively. A total of 1984 unique electronic citations were screened, and 51 full-text studies were reviewed (three from the hand search). Seven cross-sectional, single-arm, survey studies were included, containing data from 6215 thyroid cancer survivor respondents. The respective study sizes ranged from 57 to 2398 subjects. All of the studies had some methodological limitations. Unmet information needs were variable relating to the disease, diagnostic tests, treatments, and co-ordination of medical care. There were relatively high unmet information needs related to aftercare (especially long-term effects of the disease or its treatment and its management) and psychosocial concerns (including practical and financial matters). Psychosocial support needs were incompletely met. Patient information on complementary and alternative medicine was very limited. In conclusion, thyroid cancer survivors perceive many unmet information needs, and these needs extend to aftercare. Psychosocial information and supportive care needs may be insufficiently met in this population. More work is needed to improve knowledge translation and psychosocial support for thyroid cancer survivors.

  6. Disregarding familiarity during recollection attempts: content-specific recapitulation as a retrieval orientation strategy.

    PubMed

    Gray, Stephen J; Gallo, David A

    2015-01-01

    People can use a content-specific recapitulation strategy to trigger memories (i.e., mentally reinstating encoding conditions), but how people deploy this strategy is unclear. Is recapitulation naturally used to guide all recollection attempts, or is it only used selectively, after retrieving incomplete information that requires additional monitoring? According to a retrieval orientation model, people use recapitulation whenever they search memory for specific information, regardless of what information might come to mind. In contrast, according to a postretrieval monitoring model, people selectively engage recapitulation only after retrieving ambiguous information in order to evaluate this information and guide additional retrieval attempts. We tested between these models using a criterial recollection task, and by manipulating the strength of ambiguous information associated with to-be-rejected foils (i.e., familiarity or noncriterial information). Replicating prior work, foil rejections were greater when people attempted to recollect targets studied at a semantic level (deep test) compared to an orthographic level (shallow test), implicating more accurate retrieval monitoring. To investigate the role of a recapitulation strategy in this monitoring process, a final test assessed memory for the foils that were earlier processed on these recollection tests. Performance on this foil recognition test suggested that people had engaged in more elaborative content-specific recapitulation when initially tested for deep compared to shallow recollections, and critically, this elaboration effect did not interact with the experimental manipulation of foil strength. These results support the retrieval orientation model, whereby a recapitulation strategy was used to orient retrieval toward specific information during every recollection attempt. PsycINFO Database Record (c) 2015 APA, all rights reserved.

  7. A comparison of Boolean-based retrieval to the WAIS system for retrieval of aeronautical information

    NASA Technical Reports Server (NTRS)

    Marchionini, Gary; Barlow, Diane

    1994-01-01

    An evaluation of an information retrieval system using a Boolean-based retrieval engine and inverted file architecture and WAIS, which uses a vector-based engine, was conducted. Four research questions in aeronautical engineering were used to retrieve sets of citations from the NASA Aerospace Database which was mounted on a WAIS server and available through Dialog File 108 which served as the Boolean-based system (BBS). High recall and high precision searches were done in the BBS and terse and verbose queries were used in the WAIS condition. Precision values for the WAIS searches were consistently above the precision values for high recall BBS searches and consistently below the precision values for high precision BBS searches. Terse WAIS queries gave somewhat better precision performance than verbose WAIS queries. In every case, a small number of relevant documents retrieved by one system were not retrieved by the other, indicating the incomplete nature of the results from either retrieval system. Relevant documents in the WAIS searches were found to be randomly distributed in the retrieved sets rather than distributed by ranks. Advantages and limitations of both types of systems are discussed.

  8. Interventions to Improve the Quality of Outpatient Specialty Referral Requests: A Systematic Review.

    PubMed

    Hendrickson, Chase D; Lacourciere, Stacy L; Zanetti, Cole A; Donaldson, Patrick C; Larson, Robin J

    2016-09-01

    Requests for outpatient specialty consultations occur frequently but often are of poor quality because of incompleteness. The authors searched bibliographic databases, trial registries, and references during October 2014 for studies evaluating interventions to improve the quality of outpatient specialty referral requests compared to usual practice. Two reviewers independently extracted data and assessed quality. Findings were qualitatively summarized for completeness of information relayed in a referral request within naturally emerging intervention categories. Of 3495 articles screened, 11 were eligible. All 3 studies evaluating software-based interventions found statistically significant improvements. Among 4 studies evaluating template/pro forma interventions, completeness was uniformly improved but with variable or unreported statistical significance. Of 4 studies evaluating educational interventions, 2 favored the intervention and 2 found no difference. One study evaluating referral management was negative. Current evidence for improving referral request quality is strongest for software-based interventions and templates, although methodological quality varied and findings may be setting specific. © The Author(s) 2015.

  9. PathFinder: reconstruction and dynamic visualization of metabolic pathways.

    PubMed

    Goesmann, Alexander; Haubrock, Martin; Meyer, Folker; Kalinowski, Jörn; Giegerich, Robert

    2002-01-01

    Beyond methods for gene-wise annotation and analysis of sequenced genomes, new automated methods for functional analysis at a higher level are needed. The identification of realized metabolic pathways provides valuable information on gene expression and regulation. Detection of incomplete pathways helps to improve a constantly evolving genome annotation or to discover alternative biochemical pathways. To use automated genome analysis at the level of metabolic pathways, new methods for the dynamic representation and visualization of pathways are needed. PathFinder is a tool for the dynamic visualization of metabolic pathways based on annotation data. Pathways are represented as directed acyclic graphs, and graph layout algorithms accomplish the dynamic drawing and visualization of the metabolic maps. A more detailed analysis of the input data at the level of biochemical pathways helps to identify genes and detect improper parts of annotations. As a Relational Database Management System (RDBMS)-based internet application, PathFinder reads a list of EC numbers or a given annotation in EMBL or GenBank format and dynamically generates pathway graphs.
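
    A toy sketch of the underlying idea: build a directed pathway graph from reaction triples keyed by EC number and flag reactions whose enzyme is absent from the annotation, i.e. candidate gaps in an incomplete pathway. The reaction list and the networkx representation are illustrative, not PathFinder's internal data model.

```python
import networkx as nx

def pathway_graph(reactions, annotated_ecs):
    """Build a directed pathway graph from (ec_number, substrate, product) triples
    and flag reactions whose enzyme is missing from the genome annotation."""
    g = nx.DiGraph()
    missing = []
    for ec, substrate, product in reactions:
        present = ec in annotated_ecs
        g.add_edge(substrate, product, ec=ec, present=present)
        if not present:
            missing.append(ec)
    return g, missing

# two reactions at the end of glycolysis; only the first enzyme is annotated
glycolysis_tail = [("2.7.1.40", "PEP", "pyruvate"), ("1.2.4.1", "pyruvate", "acetyl-CoA")]
g, missing = pathway_graph(glycolysis_tail, annotated_ecs={"2.7.1.40"})
print(missing)   # EC numbers pointing at gaps in the reconstructed pathway
```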

  10. Ancient and modern environmental DNA

    PubMed Central

    Pedersen, Mikkel Winther; Overballe-Petersen, Søren; Ermini, Luca; Sarkissian, Clio Der; Haile, James; Hellstrom, Micaela; Spens, Johan; Thomsen, Philip Francis; Bohmann, Kristine; Cappellini, Enrico; Schnell, Ida Bærholm; Wales, Nathan A.; Carøe, Christian; Campos, Paula F.; Schmidt, Astrid M. Z.; Gilbert, M. Thomas P.; Hansen, Anders J.; Orlando, Ludovic; Willerslev, Eske

    2015-01-01

    DNA obtained from environmental samples such as sediments, ice or water (environmental DNA, eDNA), represents an important source of information on past and present biodiversity. It has revealed an ancient forest in Greenland, extended by several thousand years the survival dates for mainland woolly mammoth in Alaska, and pushed back the dates for spruce survival in Scandinavian ice-free refugia during the last glaciation. More recently, eDNA was used to uncover the past 50 000 years of vegetation history in the Arctic, revealing massive vegetation turnover at the Pleistocene/Holocene transition, with implications for the extinction of megafauna. Furthermore, eDNA can reflect the biodiversity of extant flora and fauna, both qualitatively and quantitatively, allowing detection of rare species. As such, trace studies of plant and vertebrate DNA in the environment have revolutionized our knowledge of biogeography. However, the approach remains marred by biases related to DNA behaviour in environmental settings, incomplete reference databases and false positive results due to contamination. We provide a review of the field. PMID:25487334

  11. Characterisation of Asian Snakehead Murrel Channa striata (Channidae) in Malaysia: An Insight into Molecular Data and Morphological Approach

    PubMed Central

    Song, Li Min; Munian, Kaviarasu; Abd Rashid, Zulkafli; Bhassu, Subha

    2013-01-01

    Conservation is imperative for the Asian snakeheads Channa striata, as the species has been overfished due to its high market demand. Using maternal markers (mitochondrial cytochrome c oxidase subunit 1 gene (COI)), we discovered that evolutionary forces that drove population divergence did not show any match between the genetic and morphological divergence pattern. However, there is evidence of incomplete divergence patterns between the Borneo population and the populations from Peninsular Malaysia. This supports the claim of historical coalescence of C. striata during Pleistocene glaciations. Ecological heterogeneity caused high phenotypic variance and was not correlated with genetic variance among the populations. Spatial conservation assessments are required to manage different stock units. Results on DNA barcoding show no evidence of cryptic species in C. striata in Malaysia. The newly obtained sequences add to the database of freshwater fish DNA barcodes and in future will provide information relevant to identification of species. PMID:24396312

  12. Current Development at the Southern California Earthquake Data Center (SCEDC)

    NASA Astrophysics Data System (ADS)

    Appel, V. L.; Clayton, R. W.

    2005-12-01

    Over the past year, the SCEDC completed or is near completion of three featured projects: Station Information System (SIS) Development: The SIS will provide users with an interface into complete and accurate station metadata for all current and historic data at the SCEDC. The goal of this project is to develop a system that can interact with a single database source to enter, update and retrieve station metadata easily and efficiently. The system will provide accurate station/channel information for active stations to the SCSN real-time processing system, as well as station/channel information for stations that have parametric data at the SCEDC, i.e., for users retrieving data via STP. Additionally, the SIS will supply information required to generate dataless SEED and COSMOS V0 volumes and allow stations to be added to the system with a minimal but incomplete set of information, using predefined defaults that can be easily updated as more information becomes available. Finally, the system will facilitate statewide metadata exchange for both real-time processing and provide a common approach to CISN historic station metadata. Moment Tensor Solutions: The SCEDC is currently archiving and delivering Moment Magnitudes and Moment Tensor Solutions (MTS) produced by the SCSN in real time, as well as post-processing solutions for events spanning back to 1999. The automatic MTS runs on all local events with magnitudes > 3.0 and all regional events > 3.5. The distributed solution automatically creates links from all USGS Simpson Maps to a text e-mail summary solution, creates a .gif image of the solution, and updates the moment tensor database tables at the SCEDC. Searchable Scanned Waveforms Site: The Caltech Seismological Lab has made available 12,223 scanned images of pre-digital analog recordings of major earthquakes recorded in Southern California between 1962 and 1992 at http://www.data.scec.org/research/scans/. The SCEDC has developed a searchable web interface that allows users to search the available files, select multiple files for download and then retrieve a zipped file containing the results. Scanned images of paper records for M>3.5 southern California earthquakes and several significant teleseisms are available for download via the SCEDC through this search tool.

  13. An Assessment of Database-Validated microRNA Target Genes in Normal Colonic Mucosa: Implications for Pathway Analysis.

    PubMed

    Slattery, Martha L; Herrick, Jennifer S; Stevens, John R; Wolff, Roger K; Mullany, Lila E

    2017-01-01

    Determination of functional pathways regulated by microRNAs (miRNAs), while an essential step in developing therapeutics, is challenging. Some miRNAs have been studied extensively; others have limited information. In this study, we focus on 254 miRNAs previously identified as being associated with colorectal cancer and their database-identified validated target genes. We use RNA-Seq data to evaluate messenger RNA (mRNA) expression for 157 subjects who also had miRNA expression data. In the replication phase of the study, we replicated associations between 254 miRNAs associated with colorectal cancer and mRNA expression of database-identified target genes in normal colonic mucosa. In the discovery phase of the study, we evaluated expression of 18 miRNAs (those with 20 or fewer database-identified target genes along with miR-21-5p, miR-215-5p, and miR-124-3p, which have more than 500 database-identified target genes) with expression of 17 434 mRNAs to identify new targets in colon tissue. Seed region matches between miRNA and newly identified targeted mRNA were used to help determine direct miRNA-mRNA associations. From the replication of the 121 miRNAs that had at least 1 database-identified target gene using mRNA expression methods, 97.9% were expressed in normal colonic mucosa. Of the 8622 target miRNA-mRNA associations identified in the database, 2658 (30.2%) were associated with gene expression in normal colonic mucosa after adjusting for multiple comparisons. Of the 133 miRNAs with database-identified target genes by non-mRNA expression methods, 97.2% were expressed in normal colonic mucosa. After adjustment for multiple comparisons, 2416 miRNA-mRNA associations remained significant (19.8%). Results from the discovery phase based on detailed examination of 18 miRNAs identified more than 80 000 miRNA-mRNA associations that had not previously been linked to the miRNA. Of these miRNA-mRNA associations, 15.6% and 14.8% had seed matches for GRCh38 and GRCh37, respectively. Our data suggest that miRNA target gene databases are incomplete; pathways derived from these databases have similar deficiencies. Although we know a lot about several miRNAs, little is known about other miRNAs in terms of their targeted genes. We encourage others to use their data to further identify and validate miRNA-targeted genes.
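
    The seed-matching step mentioned above can be illustrated with a minimal sketch (not the authors' pipeline; the sequences and the simple 7mer rule are assumptions for illustration): a canonical site is the reverse complement of miRNA nucleotides 2-8 occurring in the target mRNA sequence.

        # Toy sketch of canonical 7mer seed matching (illustrative only)
        def revcomp(seq: str) -> str:
            return seq.translate(str.maketrans("ACGU", "UGCA"))[::-1]

        def has_seed_match(mirna: str, utr: str) -> bool:
            seed = mirna[1:8]          # positions 2-8 of the mature miRNA
            site = revcomp(seed)       # complementary site expected in the target
            return site in utr

        mir21_5p = "UAGCUUAUCAGACUGAUGUUGA"   # example mature sequence; verify against miRBase
        utr = "AUAAGCUA"                       # hypothetical 3'UTR fragment (RNA alphabet)
        print(has_seed_match(mir21_5p, utr))   # True for this invented fragment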

  14. Validity of data in the Danish Colorectal Cancer Screening Database.

    PubMed

    Thomsen, Mette Kielsholm; Njor, Sisse Helle; Rasmussen, Morten; Linnemann, Dorte; Andersen, Berit; Baatrup, Gunnar; Friis-Hansen, Lennart Jan; Jørgensen, Jens Christian Riis; Mikkelsen, Ellen Margrethe

    2017-01-01

    In Denmark, a nationwide screening program for colorectal cancer was implemented in March 2014. Along with this, a clinical database for program monitoring and research purposes was established. The aim of this study was to estimate the agreement and validity of diagnosis and procedure codes in the Danish Colorectal Cancer Screening Database (DCCSD). All individuals with a positive immunochemical fecal occult blood test (iFOBT) result who were invited to screening in the first 3 months since program initiation were identified. From these, a sample of 150 individuals was selected using stratified random sampling by age, gender and region of residence. Data from the DCCSD were compared with data from hospital records, which were used as the reference. Agreement, sensitivity, specificity and positive and negative predictive values were estimated for categories of codes "clean colon", "colonoscopy performed", "overall completeness of colonoscopy", "incomplete colonoscopy", "polypectomy", "tumor tissue left behind", "number of polyps", "lost polyps", "risk group of polyps" and "colorectal cancer and polyps/benign tumor". Hospital records were available for 136 individuals. Agreement was highest for "colorectal cancer" (97.1%) and lowest for "lost polyps" (88.2%). Sensitivity varied between moderate and high, with 60.0% for "incomplete colonoscopy" and 98.5% for "colonoscopy performed". Specificity was 92.7% or above, except for the categories "colonoscopy performed" and "overall completeness of colonoscopy", where the specificity was low; however, the estimates were imprecise. A high level of agreement between categories of codes in DCCSD and hospital records indicates that DCCSD reflects the hospital records well. Further, the validity of the categories of codes varied from moderate to high. Thus, the DCCSD may be a valuable data source for future research on colorectal cancer screening.

  15. Biomedical journals and databases in Russia and Russian language in the former Soviet Union and beyond

    PubMed Central

    Vlassov, Vasiliy V; Danishevskiy, Kirill D

    2008-01-01

    In the 20th century, Russian biomedical science experienced a decline from the blossom of the early years to a drastic state. Through the first decades of the USSR, it was transformed to suit the ideological requirements of a totalitarian state and biased directives of communist leaders. Later, depressing economic conditions and isolation from the international research community further impeded its development. Contemporary Russia has inherited a system of medical education quite different from the west as well as counterproductive regulations for the allocation of research funding. The methodology of medical and epidemiological research in Russia is largely outdated. Epidemiology continues to focus on infectious disease and results of the best studies tend to be published in international periodicals. MEDLINE continues to be the best database to search for Russian biomedical publications, despite only a small proportion being indexed. The database of the Moscow Central Medical Library is the largest national database of medical periodicals, but does not provide abstracts and full subject heading codes, and it does not cover even the entire collection of the Library. New databases and catalogs (e.g. Panteleimon) that have appeared recently are incomplete and do not enable effective searching. PMID:18826569

  16. Biomedical journals and databases in Russia and Russian language in the former Soviet Union and beyond.

    PubMed

    Vlassov, Vasiliy V; Danishevskiy, Kirill D

    2008-09-30

    In the 20th century, Russian biomedical science experienced a decline from the blossom of the early years to a drastic state. Through the first decades of the USSR, it was transformed to suit the ideological requirements of a totalitarian state and biased directives of communist leaders. Later, depressing economic conditions and isolation from the international research community further impeded its development. Contemporary Russia has inherited a system of medical education quite different from the west as well as counterproductive regulations for the allocation of research funding. The methodology of medical and epidemiological research in Russia is largely outdated. Epidemiology continues to focus on infectious disease and results of the best studies tend to be published in international periodicals. MEDLINE continues to be the best database to search for Russian biomedical publications, despite only a small proportion being indexed. The database of the Moscow Central Medical Library is the largest national database of medical periodicals, but does not provide abstracts and full subject heading codes, and it does not cover even the entire collection of the Library. New databases and catalogs (e.g. Panteleimon) that have appeared recently are incomplete and do not enable effective searching.

  17. Comprehensive analysis of the N-glycan biosynthetic pathway using bioinformatics to generate UniCorn: A theoretical N-glycan structure database.

    PubMed

    Akune, Yukie; Lin, Chi-Hung; Abrahams, Jodie L; Zhang, Jingyu; Packer, Nicolle H; Aoki-Kinoshita, Kiyoko F; Campbell, Matthew P

    2016-08-05

    Glycan structures attached to proteins are composed of diverse monosaccharide sequences and linkages that are produced from precursor nucleotide-sugars by a series of glycosyltransferases. Databases of these structures are an essential resource for the interpretation of analytical data and the development of bioinformatics tools. However, with no template to predict which structures are possible, the human glycan structure databases are incomplete and rely heavily on the curation of published, experimentally determined glycan structure data. In this work, a library of 45 human glycosyltransferases was used to generate a theoretical database of N-glycan structures comprising 15 or fewer monosaccharide residues. Enzyme specificities were sourced from major online databases including Kyoto Encyclopedia of Genes and Genomes (KEGG) Glycan, Consortium for Functional Glycomics (CFG), Carbohydrate-Active enZymes (CAZy), GlycoGene DataBase (GGDB) and BRENDA. Based on the known activities, more than 1.1 million theoretical structures and 4.7 million synthetic reactions were generated and stored in our database called UniCorn. Furthermore, we analyzed the differences between the predicted glycan structures in UniCorn and those contained in UniCarbKB (www.unicarbkb.org), a database which stores experimentally described glycan structures reported in the literature, and demonstrate that UniCorn can be used to aid in the assignment of ambiguous structures whilst also serving as a discovery database. Copyright © 2016 Elsevier Ltd. All rights reserved.
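
    The generative idea behind such a theoretical database can be sketched very roughly as follows (a toy enumeration under invented rules; the real UniCorn rules come from curated enzyme specificities and branched structures, not the linear strings used here):

        # Toy sketch: enumerate products of simplified "enzyme" extension rules
        MAX_RESIDUES = 6

        # Hypothetical rules: (enzyme name, residue that must be present, residue appended)
        rules = [
            ("GnT-I", "Man", "GlcNAc"),
            ("GalT", "GlcNAc", "Gal"),
            ("SiaT", "Gal", "Neu5Ac"),
        ]

        core = ("Man", "Man", "Man")              # simplified trimannosyl core
        seen, frontier = {core}, [core]
        while frontier:
            structure = frontier.pop()
            for _, required, added in rules:
                if required in structure and len(structure) < MAX_RESIDUES:
                    product = structure + (added,)
                    if product not in seen:
                        seen.add(product)
                        frontier.append(product)

        print(len(seen), "theoretical structures enumerated")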

  18. A dedicated database system for handling multi-level data in systems biology.

    PubMed

    Pornputtapong, Natapol; Wanichthanarak, Kwanjeera; Nilsson, Avlant; Nookaew, Intawat; Nielsen, Jens

    2014-01-01

    Advances in high-throughput technologies have enabled extensive generation of multi-level omics data. These data are crucial for systems biology research, though they are complex, heterogeneous, highly dynamic, incomplete and distributed among public databases. This leads to difficulties in data accessibility and often results in errors when data are merged and integrated from varied resources. Therefore, integration and management of systems biological data remain very challenging. To overcome this, we designed and developed a dedicated database system that can serve and solve the vital issues in data management and thereby facilitate data integration, modeling and analysis in systems biology within a single database. In addition, a yeast data repository was implemented as an integrated database environment which is operated by the database system. Two applications were implemented to demonstrate extensibility and utilization of the system. Both illustrate how the user can access the database via the web query function and implemented scripts. These scripts are specific for two sample cases: 1) detecting the pheromone pathway in protein interaction networks; and 2) finding metabolic reactions regulated by Snf1 kinase. In this study we present the design of a database system which offers an extensible environment to efficiently capture the majority of biological entities and relations encountered in systems biology. Critical functions and control processes were designed and implemented to ensure consistent, efficient, secure and reliable transactions. The two sample cases on the yeast integrated data clearly demonstrate the value of a single database environment for systems biology research.

  19. High-Density Signal Interface Electromagnetic Radiation Prediction for Electromagnetic Compatibility Evaluation.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Halligan, Matthew

    Radiated power calculation approaches for practical scenarios of incomplete high-density interface characterization information and incomplete incident power information are presented. The suggested approaches build upon a method that characterizes power losses through the definition of power loss constant matrices. Potential radiated power estimates include using total power loss information, partial radiated power loss information, worst case analysis, and statistical bounding analysis. A method is also proposed to calculate radiated power when incident power information is not fully known for non-periodic signals at the interface. Incident data signals are modeled from a two-state Markov chain from which bit state probabilities are derived. The total spectrum for windowed signals is postulated as the superposition of spectra from individual pulses in a data sequence. Statistical bounding methods are proposed as a basis for the radiated power calculation because of the computational complexity of finding a radiated power probability density function.
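
    The two-state Markov model of data bits can be illustrated with a minimal sketch (the transition probabilities below are assumed values, not the report's): it gives the stationary bit probabilities in closed form and a sampled bit sequence that could feed a per-pulse spectral superposition.

        # Toy sketch: two-state Markov chain for data bits (invented parameters)
        import numpy as np

        p01, p10 = 0.3, 0.4            # hypothetical transition probabilities 0->1 and 1->0
        P = np.array([[1 - p01, p01],
                      [p10, 1 - p10]])

        # Closed-form stationary distribution for a two-state chain
        pi1 = p01 / (p01 + p10)        # long-run probability of a '1' bit
        pi0 = 1.0 - pi1

        rng = np.random.default_rng(0)
        bits, state = [], 0
        for _ in range(1000):
            state = rng.choice(2, p=P[state])   # sample the next bit
            bits.append(state)

        print(pi0, pi1, np.mean(bits))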

  20. Leap Before You Look: Information Gathering In the PUCCINI Planner

    NASA Technical Reports Server (NTRS)

    Golden, Keith; Lau, Sonie (Technical Monitor)

    1998-01-01

    Most of the work in planning with incomplete information takes a "look before you leap" perspective: Actions must be guaranteed to have their intended effects before they can be executed. We argue that this approach is impossible to follow in many real-world domains. The agent may not have enough information to ensure that an action will have a given effect in advance of executing it. This paper describes PUCCINI, a partial order planner used to control the Internet Softbot (Etzioni & Weld 1994). PUCCINI takes a different approach to coping with incomplete information: "Leap before you look!" PUCCINI doesn't require actions to be known to have the desired effects before execution. However, it still maintains soundness, by requiring the effects to be verified eventually. We discuss how this is achieved using a simple generalization of causal links.

  1. Systematic review of the methodological and reporting quality of case series in surgery.

    PubMed

    Agha, R A; Fowler, A J; Lee, S-Y; Gundogan, B; Whitehurst, K; Sagoo, H K; Jeong, K J L; Altman, D G; Orgill, D P

    2016-09-01

    Case series are an important and common study type. No guideline exists for reporting case series and there is evidence of key data being missed from such reports. The first step in the process of developing a methodologically sound reporting guideline is a systematic review of literature relevant to the reporting deficiencies of case series. A systematic review of methodological and reporting quality in surgical case series was performed. The electronic search strategy was developed by an information specialist and included MEDLINE, Embase, Cochrane Methods Register, Science Citation Index and Conference Proceedings Citation index, from the start of indexing to 5 November 2014. Independent screening, eligibility assessments and data extraction were performed. Included articles were then analysed for five areas of deficiency: failure to use standardized definitions, missing or selective data (including the omission of whole cases or important variables), transparency or incomplete reporting, whether alternative study designs were considered, and other issues. Database searching identified 2205 records. Through the process of screening and eligibility assessments, 92 articles met inclusion criteria. Frequencies of methodological and reporting issues identified were: failure to use standardized definitions (57 per cent), missing or selective data (66 per cent), transparency or incomplete reporting (70 per cent), whether alternative study designs were considered (11 per cent) and other issues (52 per cent). The methodological and reporting quality of surgical case series needs improvement. The data indicate that evidence-based guidelines for the conduct and reporting of case series may be useful. © 2016 BJS Society Ltd Published by John Wiley & Sons Ltd.

  2. Auditory compensation for head rotation is incomplete.

    PubMed

    Freeman, Tom C A; Culling, John F; Akeroyd, Michael A; Brimijoin, W Owen

    2017-02-01

    Hearing is confronted by a similar problem to vision when the observer moves. The image motion that is created remains ambiguous until the observer knows the velocity of eye and/or head. One way the visual system solves this problem is to use motor commands, proprioception, and vestibular information. These "extraretinal signals" compensate for self-movement, converting image motion into head-centered coordinates, although not always perfectly. We investigated whether the auditory system also transforms coordinates by examining the degree of compensation for head rotation when judging a moving sound. Real-time recordings of head motion were used to change the "movement gain" relating head movement to source movement across a loudspeaker array. We then determined psychophysically the gain that corresponded to a perceptually stationary source. Experiment 1 showed that the gain was small and positive for a wide range of trained head speeds. Hence, listeners perceived a stationary source as moving slightly opposite to the head rotation, in much the same way that observers see stationary visual objects move against a smooth pursuit eye movement. Experiment 2 showed the degree of compensation remained the same for sounds presented at different azimuths, although the precision of performance declined when the sound was eccentric. We discuss two possible explanations for incomplete compensation, one based on differences in the accuracy of signals encoding image motion and self-movement and one concerning statistical optimization that sacrifices accuracy for precision. We then consider the degree to which such explanations can be applied to auditory motion perception in moving listeners. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  3. Application of geostatistical simulation to compile seismotectonic provinces based on earthquake databases (case study: Iran)

    NASA Astrophysics Data System (ADS)

    Jalali, Mohammad; Ramazi, Hamidreza

    2018-04-01

    This article is devoted to the application of a simulation algorithm based on geostatistical methods to compile and update seismotectonic provinces, with Iran chosen as a case study. Traditionally, tectonic maps together with seismological data and information (e.g., earthquake catalogues, earthquake mechanisms, and microseismic data) have been used to update seismotectonic provinces. In many cases, incomplete earthquake catalogues are one of the important challenges in this procedure. To overcome this problem, a geostatistical simulation algorithm, turning bands simulation (TBSIM), was applied to generate synthetic data to improve incomplete earthquake catalogues. Then, the synthetic data were added to the traditional information to study seismicity homogeneity and classify the areas according to tectonic and seismic properties to update seismotectonic provinces. In this paper, (i) different magnitude types in the studied catalogues were homogenized to moment magnitude (Mw), and earthquake declustering was then carried out to remove aftershocks and foreshocks; (ii) a time normalization method was introduced to decrease the uncertainty in the temporal domain prior to starting the simulation procedure; (iii) variography was carried out in each subregion to study spatial regressions (e.g., the west-southwestern area showed a spatial regression from 0.4 to 1.4 decimal degrees; the maximum range was identified in the azimuth of 135 ± 10); (iv) the TBSIM algorithm was then applied, generating 68,800 synthetic events according to the spatial regressions found in several directions; (v) the simulated events (i.e., magnitudes) were classified based on their intensity in ArcGIS packages and homogeneous seismic zones were determined. Finally, according to the synthetic data, tectonic features, and actual earthquake catalogues, 17 seismotectonic provinces were introduced in four major classes: very high, high, moderate, and low seismic potential provinces. Seismotectonic properties of the very high seismic potential provinces are also presented.
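
    The variography step can be sketched with an isotropic experimental semivariogram (synthetic coordinates and magnitudes below, not the Iranian catalogue); this is the kind of spatial-continuity summary a turning-bands simulation would be conditioned on.

        # Toy sketch: experimental semivariogram of event magnitudes (invented data)
        import numpy as np

        rng = np.random.default_rng(1)
        coords = rng.uniform(0, 10, size=(200, 2))   # hypothetical epicentres (decimal degrees)
        mags = rng.normal(4.5, 0.6, size=200)        # hypothetical Mw values

        lags = np.linspace(0.2, 4.0, 10)
        gamma = []
        for h in lags:
            pairs_sum, count = 0.0, 0
            for i in range(len(coords)):
                d = np.linalg.norm(coords[i + 1:] - coords[i], axis=1)
                mask = np.abs(d - h) < 0.2           # pairs falling in this lag bin
                pairs_sum += np.sum((mags[i + 1:][mask] - mags[i]) ** 2)
                count += mask.sum()
            gamma.append(pairs_sum / (2 * count) if count else np.nan)

        print(list(zip(lags.round(2), np.round(gamma, 3))))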

  4. Nutrient loadings to streams of the continental United States from municipal and industrial effluent?

    USGS Publications Warehouse

    Maupin, Molly A.; Ivahnenko, Tamara

    2011-01-01

    Data from the United States Environmental Protection Agency Permit Compliance System national database were used to calculate annual total nitrogen (TN) and total phosphorus (TP) loads to surface waters from municipal and industrial facilities in six major regions of the United States for 1992, 1997, and 2002. Concentration and effluent flow data were examined for approximately 118,250 facilities in 45 states and the District of Columbia. Inconsistent and incomplete discharge locations, effluent flows, and effluent nutrient concentrations limited the use of these data for calculating nutrient loads. More concentrations were reported for major facilities, those discharging more than 1 million gallons per day, than for minor facilities, and more concentrations were reported for TP than for TN. Analytical methods to check and improve the quality of the Permit Compliance System data were used. Annual loads were calculated using "typical pollutant concentrations" to supplement missing concentrations based on the type and size of facilities. Annual nutrient loads for over 26,600 facilities were calculated for at least one of the three years. Sewage systems represented 74% of all TN loads and 58% of all TP loads. This work represents an initial set of data to develop a comprehensive and consistent national database of point-source nutrient loads. These loads can be used to inform a wide range of water-quality management, watershed modeling, and research efforts at multiple scales.

  5. Mining functionally relevant gene sets for analyzing physiologically novel clinical expression data.

    PubMed

    Turcan, Sevin; Vetter, Douglas E; Maron, Jill L; Wei, Xintao; Slonim, Donna K

    2011-01-01

    Gene set analyses have become a standard approach for increasing the sensitivity of transcriptomic studies. However, analytical methods incorporating gene sets require the availability of pre-defined gene sets relevant to the underlying physiology being studied. For novel physiological problems, relevant gene sets may be unavailable or existing gene set databases may bias the results towards only the best-studied of the relevant biological processes. We describe a successful attempt to mine novel functional gene sets for translational projects where the underlying physiology is not necessarily well characterized in existing annotation databases. We choose targeted training data from public expression data repositories and define new criteria for selecting biclusters to serve as candidate gene sets. Many of the discovered gene sets show little or no enrichment for informative Gene Ontology terms or other functional annotation. However, we observe that such gene sets show coherent differential expression in new clinical test data sets, even if derived from different species, tissues, and disease states. We demonstrate the efficacy of this method on a human metabolic data set, where we discover novel, uncharacterized gene sets that are diagnostic of diabetes, and on additional data sets related to neuronal processes and human development. Our results suggest that our approach may be an efficient way to generate a collection of gene sets relevant to the analysis of data for novel clinical applications where existing functional annotation is relatively incomplete.

  6. Tests for the Assessment of Sport-Specific Performance in Olympic Combat Sports: A Systematic Review With Practical Recommendations.

    PubMed

    Chaabene, Helmi; Negra, Yassine; Bouguezzi, Raja; Capranica, Laura; Franchini, Emerson; Prieske, Olaf; Hbacha, Hamdi; Granacher, Urs

    2018-01-01

    The regular monitoring of physical fitness and sport-specific performance is important in elite sports to increase the likelihood of success in competition. This study aimed to systematically review and critically appraise the methodological quality, validation data, and feasibility of sport-specific performance assessments in Olympic combat sports such as amateur boxing, fencing, judo, karate, taekwondo, and wrestling. A systematic search was conducted in the electronic databases PubMed, Google-Scholar, and Science-Direct up to October 2017. Studies in combat sports were included that reported validation data (e.g., reliability, validity, sensitivity) of sport-specific tests. Overall, 39 studies were eligible for inclusion in this review. The majority of studies (74%) contained sample sizes <30 subjects. Nearly one-third of the reviewed studies lacked a sufficient description (e.g., anthropometrics, age, expertise level) of the included participants. Seventy-two percent of studies did not sufficiently report inclusion/exclusion criteria for their participants. In 62% of the included studies, the description and/or inclusion of familiarization session(s) was either incomplete or missing. Sixty percent of studies did not report any details about the stability of testing conditions. Approximately half of the studies examined reliability measures of the included sport-specific tests (intraclass correlation coefficient [ICC] = 0.43-1.00). Content validity was addressed in all included studies, and criterion validity (only its concurrent aspect) in approximately half of the studies, with correlation coefficients ranging from r = -0.41 to 0.90. Construct validity was reported in 31% of the included studies and predictive validity in only one. Test sensitivity was addressed in 13% of the included studies. The majority of studies (64%) ignored and/or provided incomplete information on test feasibility and the methodological limitations of the sport-specific test. In 28% of the included studies, insufficient information or a complete lack of information was provided on the respective field of test application. Several methodological gaps exist in studies that used sport-specific performance tests in Olympic combat sports. Additional research should adopt more rigorous validation procedures in the application and description of sport-specific performance tests in Olympic combat sports.

  7. Adaptive Mechanisms for Treating Missing Information: A Simulation Study

    ERIC Educational Resources Information Center

    Garcia-Retamero, Rocio; Rieskamp, Jorg

    2008-01-01

    People often make inferences with incomplete information. Previous research has led to a mixed picture of how people treat missing information. To explain these results, the authors follow the Brunswikian perspective on human inference and hypothesize that the mechanism's accuracy for treating missing information depends on how it is distributed…

  8. Managers' Informal Learning: A Trait Activation Theory Perspective

    ERIC Educational Resources Information Center

    Noe, Raymond A.; Tews, Michael J.; Michel, John W.

    2017-01-01

    Research focusing on how individual differences and the work context influence informal learning is growing but incomplete. This study contributes to our understanding of the antecedents of informal learning by examining the relationships of goal orientation, job autonomy and training climate with informal learning. Based on trait activation…

  9. Discordance of conflict of interest self-disclosure and the Centers of Medicare and Medicaid Services.

    PubMed

    Cherla, Deepa V; Olavarria, Oscar A; Holihan, Julie L; Viso, Cristina Perez; Hannon, Craig; Kao, Lillian S; Ko, Tien C; Liang, Mike K

    2017-10-01

    The Open Payments Database (OPD) discloses financial transactions between manufacturers and physicians. The concordance of OPD versus self-reported conflicts of interest (COI) is unknown. Our objectives were to compare (1) industry and self-disclosed COI in clinical literature, (2) payments within each disclosure level, and (3) industry- and self-disclosed COI and payments by specialty. This was an observational study. PubMed was searched for clinical studies accepted for publication from January 2014 to June 2016. Author and OPD-disclosed COIs were compared. Articles and authors were divided into full disclosure, incomplete industry disclosure, incomplete self-disclosure, and no COI. Primary outcome (differences in reported COI per article) was assessed using McNemar's test. Payment differences were compared using Kruskal-Wallis test. OPD- and self-disclosed COI differed (65.0% discordance rate by article, P < 0.001). Percentages of authors within each disclosure category differed between specialties (P < 0.001). Hematology articles exhibited the highest discordance rate (79.0%) and received the highest median payment for incomplete self-disclosure ($30,812). Significant discordance exists between self- and OPD-reported COI. Additional research is needed to determine reasons for these differences. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. Estimation of Blood Flow Rates in Large Microvascular Networks

    PubMed Central

    Fry, Brendan C.; Lee, Jack; Smith, Nicolas P.; Secomb, Timothy W.

    2012-01-01

    Objective Recent methods for imaging microvascular structures provide geometrical data on networks containing thousands of segments. Prediction of functional properties, such as solute transport, requires information on blood flow rates also, but experimental measurement of many individual flows is difficult. Here, a method is presented for estimating flow rates in a microvascular network based on incomplete information on the flows in the boundary segments that feed and drain the network. Methods With incomplete boundary data, the equations governing blood flow form an underdetermined linear system. An algorithm was developed that uses independent information about the distribution of wall shear stresses and pressures in microvessels to resolve this indeterminacy, by minimizing the deviation of pressures and wall shear stresses from target values. Results The algorithm was tested using previously obtained experimental flow data from four microvascular networks in the rat mesentery. With two or three prescribed boundary conditions, predicted flows showed relatively small errors in most segments and fewer than 10% incorrect flow directions on average. Conclusions The proposed method can be used to estimate flow rates in microvascular networks, based on incomplete boundary data and provides a basis for deducing functional properties of microvessel networks. PMID:22506980
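
    The estimation idea described above can be sketched as a regularized least-squares problem (a toy three-segment junction with invented numbers, not the authors' solver): flow conservation gives an underdetermined system, and the indeterminacy is resolved by also pulling the solution toward target values standing in for the target shear stresses and pressures.

        # Toy sketch: underdetermined flow system resolved by target regularization
        import numpy as np

        # Hypothetical junction: q1 - q2 - q3 = 0 (one conservation equation)
        A = np.array([[1.0, -1.0, -1.0]])
        b = np.array([0.0])

        q_target = np.array([2.0, 1.2, 0.8])   # assumed target flows for the three segments
        w = 0.1                                # weight on the target-deviation term

        # Minimize ||A q - b||^2 + w ||q - q_target||^2 by stacking the two blocks
        A_aug = np.vstack([A, np.sqrt(w) * np.eye(3)])
        b_aug = np.concatenate([b, np.sqrt(w) * q_target])
        q, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
        print(q)    # flows close to the targets while respecting conservation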

  11. How Students Evaluate Information and Sources when Searching the World Wide Web for Information

    ERIC Educational Resources Information Center

    Walraven, Amber; Brand-Gruwel, Saskia; Boshuizen, Henny P. A.

    2009-01-01

    The World Wide Web (WWW) has become the biggest information source for students while solving information problems for school projects. Since anyone can post anything on the WWW, information is often unreliable or incomplete, and it is important to evaluate sources and information before using them. Earlier research has shown that students have…

  12. 31 CFR 1.28 - Training, rules of conduct, penalties for non-compliance.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... be maintained under the provisions of the Privacy Act of 1974, including information on First Amendment Activities, information that is inaccurate, irrelevant or so incomplete as to risk unfairness to...

  13. 31 CFR 1.28 - Training, rules of conduct, penalties for non-compliance.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... be maintained under the provisions of the Privacy Act of 1974, including information on First Amendment Activities, information that is inaccurate, irrelevant or so incomplete as to risk unfairness to...

  14. 31 CFR 1.28 - Training, rules of conduct, penalties for non-compliance.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... be maintained under the provisions of the Privacy Act of 1974, including information on First Amendment Activities, information that is inaccurate, irrelevant or so incomplete as to risk unfairness to...

  15. 31 CFR 1.28 - Training, rules of conduct, penalties for non-compliance.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... be maintained under the provisions of the Privacy Act of 1974, including information on First Amendment Activities, information that is inaccurate, irrelevant or so incomplete as to risk unfairness to...

  16. 49 CFR 604.18 - Request for an advisory opinion.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... B. An affirmation that the undersigned swears, to the best of his/her knowledge and belief, this... opinion may be denied if: (1) The request contains incomplete information on which to base an informed...

  17. Blood N-terminal Pro-brain Natriuretic Peptide and Interleukin-17 for Distinguishing Incomplete Kawasaki Disease from Infectious Diseases.

    PubMed

    Wu, Ling; Chen, Yuanling; Zhong, Shiling; Li, Yunyan; Dai, Xiahua; Di, Yazhen

    2015-06-01

    To explore the diagnostic value of blood N-terminal pro-brain natriuretic peptide (NT-proBNP) and interleukin-17 (IL-17) for incomplete Kawasaki disease. Patients with Kawasaki disease, incomplete Kawasaki disease and unclear infectious fever were included in this retrospective study. Their clinical features and laboratory test results of blood NT-proBNP and IL-17 were collected and compared. 766 patients with complete clinical information were recruited, consisting of 291 cases of Kawasaki disease, 74 cases of incomplete Kawasaki disease, and 401 cases of unclear infectious diseases. When the consistency with indicators 2 and 3 in the Kawasaki disease diagnosis criteria was assessed with blood IL-17 ≥11.55 pg/mL and blood NT-proBNP ≥225.5 pg/dL as the criteria, the sensitivity and specificity for distinguishing incomplete Kawasaki disease and infectious diseases reached 86.5% and 94.8%, respectively. When we chose the consistency with indicators 1 and 2 in the Kawasaki disease diagnosis criteria, the appearance of decrustation and/or the BCG erythema, blood IL-17 ≥11.55 pg/mL and blood NT-proBNP ≥225.5 pg/dL as the criteria, the sensitivity and specificity for distinguishing incomplete Kawasaki disease and infectious diseases were 43.2% and 100%, respectively. Blood NT-proBNP and IL-17 are useful laboratory indicators for distinguishing incomplete Kawasaki disease and infectious diseases at the early stage.

  18. Information Leakage from Logically Equivalent Frames

    ERIC Educational Resources Information Center

    Sher, Shlomi; McKenzie, Craig R. M.

    2006-01-01

    Framing effects are said to occur when equivalent frames lead to different choices. However, the equivalence in question has been incompletely conceptualized. In a new normative analysis of framing effects, we complete the conceptualization by introducing the notion of information equivalence. Information equivalence obtains when no…

  19. A Knowledge-Based Approach to Information Fusion for the Support of Military Intelligence

    DTIC Science & Technology

    2004-03-01

    and most reliable an appropriate picture of the battlespace. The presented approach of knowledge-based information fusion focuses on the ... incomplete and imperfect information of military reports and background knowledge can be supported substantially in an automated system.

  20. A consensus reaching model for 2-tuple linguistic multiple attribute group decision making with incomplete weight information

    NASA Astrophysics Data System (ADS)

    Zhang, Wancheng; Xu, Yejun; Wang, Huimin

    2016-01-01

    The aim of this paper is to put forward a consensus reaching method for multi-attribute group decision-making (MAGDM) problems with linguistic information, in which the weight information of experts and attributes is unknown. First, some basic concepts and operational laws of 2-tuple linguistic labels are introduced. Then, a grey relational analysis method and a maximising deviation method are proposed to calculate the incomplete weight information of experts and attributes, respectively. To eliminate conflict in the group, a weight-updating model is employed to derive the weights of experts based on their contribution to the consensus reaching process. After conflict elimination, the final group preference can be obtained, which gives the ranking of the alternatives. The model can effectively avoid the information distortion which occurs regularly in linguistic information processing. Finally, an illustrative example is given to illustrate the application of the proposed method, and a comparative analysis with existing methods is offered to show the advantages of the proposed method.
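
    The maximising-deviation idea for unknown attribute weights can be sketched on plain numeric ratings (a toy matrix, not the paper's 2-tuple linguistic model): an attribute that separates the alternatives more strongly receives a larger weight.

        # Toy sketch of the maximising-deviation weighting idea (invented ratings)
        import numpy as np

        # Hypothetical normalised ratings: rows = alternatives, columns = attributes
        R = np.array([[0.7, 0.4, 0.9],
                      [0.6, 0.8, 0.3],
                      [0.9, 0.5, 0.4]])

        # Total absolute deviation of each attribute across all pairs of alternatives
        dev = np.array([
            np.abs(R[:, j][:, None] - R[:, j][None, :]).sum()
            for j in range(R.shape[1])
        ])
        weights = dev / dev.sum()
        print(weights)    # larger weight for more discriminating attributes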

  1. Using Crowdsourced Trajectories for Automated OSM Data Entry Approach

    PubMed Central

    Basiri, Anahid; Amirian, Pouria; Mooney, Peter

    2016-01-01

    The concept of crowdsourcing is nowadays extensively used to refer to the collection of data and the generation of information by large groups of users/contributors. OpenStreetMap (OSM) is a very successful example of a crowd-sourced geospatial data project. Unfortunately, it is often the case that OSM contributor inputs (including geometry and attribute data inserts, deletions and updates) have been found to be inaccurate, incomplete, inconsistent or vague. This is due to several reasons which include: (1) many contributors with little experience or training in mapping and Geographic Information Systems (GIS); (2) not enough contributors familiar with the areas being mapped; (3) contributors having different interpretations of the attributes (tags) for specific features; (4) different levels of enthusiasm between mappers resulting in different number of tags for similar features and (5) the user-friendliness of the online user-interface where the underlying map can be viewed and edited. This paper suggests an automatic mechanism, which uses raw spatial data (trajectories of movements contributed by contributors to OSM) to minimise the uncertainty and impact of the above-mentioned issues. This approach takes the raw trajectory datasets as input and analyses them using data mining techniques. In addition, we extract some patterns and rules about the geometry and attributes of the recognised features for the purpose of insertion or editing of features in the OSM database. The underlying idea is that certain characteristics of user trajectories are directly linked to the geometry and the attributes of geographic features. Using these rules successfully results in the generation of new features with higher spatial quality which are subsequently automatically inserted into the OSM database. PMID:27649192
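
    As a very rough illustration of deriving feature attributes from raw trajectories (the thresholds, tags and speed rule below are invented for illustration and are not the paper's learned rules), a travelled way's likely OSM highway tag could be suggested from trajectory speeds:

        # Toy sketch: suggest an OSM highway tag from a GPS trajectory's average speed
        import math

        def avg_speed_kmh(points):
            """points: list of (lat, lon, unix_time); crude equirectangular distance."""
            dist_km, dt_s = 0.0, 0.0
            for (la1, lo1, t1), (la2, lo2, t2) in zip(points, points[1:]):
                x = math.radians(lo2 - lo1) * math.cos(math.radians((la1 + la2) / 2))
                y = math.radians(la2 - la1)
                dist_km += 6371.0 * math.hypot(x, y)
                dt_s += t2 - t1
            return 3600.0 * dist_km / dt_s if dt_s else 0.0

        def suggest_highway_tag(points):
            v = avg_speed_kmh(points)
            if v > 80:
                return "motorway"        # assumed cut-offs for illustration only
            if v > 40:
                return "secondary"
            if v > 10:
                return "residential"
            return "footway"

        track = [(52.0, 13.0, 0), (52.0, 13.01, 30), (52.0, 13.02, 60)]
        print(suggest_highway_tag(track))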

  2. The GTN-P Data Management System: A central database for permafrost monitoring parameters of the Global Terrestrial Network for Permafrost (GTN-P) and beyond

    NASA Astrophysics Data System (ADS)

    Lanckman, Jean-Pierre; Elger, Kirsten; Karlsson, Ævar Karl; Johannsson, Halldór; Lantuit, Hugues

    2013-04-01

    Permafrost is a direct indicator of climate change and has been identified as an Essential Climate Variable (ECV) by the global observing community. The monitoring of permafrost temperatures, active-layer thicknesses and other parameters has been performed for several decades already, but it was brought together within the Global Terrestrial Network for Permafrost (GTN-P) only in the 1990s, including the development of measurement protocols to provide standardized data. GTN-P is the primary international observing network for permafrost, sponsored by the Global Climate Observing System (GCOS) and the Global Terrestrial Observing System (GTOS), and managed by the International Permafrost Association (IPA). All GTN-P data are subject to an "open data policy" with free data access via the World Wide Web. The existing data, however, are far from homogeneous: they are not yet optimized for databases, there is no framework for data reporting or archival, and data documentation is incomplete. As a result, and despite the utmost relevance of permafrost in the Earth's climate system, the data have not been used by as many researchers as intended by the initiators of the programs. While the monitoring of many other ECVs has been tackled by organized international networks (e.g. FLUXNET), there is still no central database for all permafrost-related parameters. The European Union project PAGE21 created opportunities to develop this central database for permafrost monitoring parameters of GTN-P during the duration of the project and beyond. The database aims to be the one location where researchers can find data, metadata, and information on all relevant parameters for a specific site. Each component of the Data Management System (DMS), including parameters, data levels and metadata formats, was developed in cooperation with the GTN-P and the IPA. The general framework of the GTN-P DMS is based on an object-oriented model (OOM), open for as many parameters as possible, and implemented in a spatial database. To ensure interoperability and enable potential inter-database searches, field names follow international metadata standards and are based on a controlled vocabulary registry. Tools are being developed to provide data processing, analysis capability, and quality control. Our system aims to be a reference model, improvable and reusable. It allows maximum top-down and bottom-up data flow, giving scientists one globally searchable data and metadata repository, the public full access to scientific data, and policy makers a powerful cartographic and statistical tool. To engage the international community in GTN-P, it was essential to develop an online interface for data upload that is easy to use and allows data input with a minimum of technical and personal effort. In addition, large efforts will be needed to query, visualize and retrieve information across many platforms and types of measurements. Ultimately, it is not the individual layers themselves that matter, but rather the relationships that these information layers maintain with each other.

  3. Levels and trends of child and adult mortality rates in the Islamic Republic of Iran, 1990-2013; protocol of the NASBOD study.

    PubMed

    Mohammadi, Younes; Parsaeian, Mahboubeh; Farzadfar, Farshad; Kasaeian, Amir; Mehdipour, Parinaz; Sheidaei, Ali; Mansouri, Anita; Saeedi Moghaddam, Sahar; Djalalinia, Shirin; Mahmoudi, Mahmood; Khosravi, Ardeshir; Yazdani, Kamran

    2014-03-01

    Calculation of the burden of diseases and risk factors is crucial to set priorities in health care systems. Nevertheless, the reliable measurement of mortality rates is the main barrier to reaching this goal. Unfortunately, in many developing countries the vital registration system (VRS) is either defective or does not exist at all. Consequently, alternative methods have been developed to measure mortality. This study is a subcomponent of the NASBOD project, which is currently being conducted in Iran. In this study, we aim to calculate the incompleteness of the Death Registration System (DRS) and then to estimate levels and trends of child and adult mortality using reliable methods. In order to estimate mortality rates, first, we identify all possible data sources. Then, we calculate the incompleteness of child and adult mortality separately. For incompleteness of child mortality, we analyze summary birth history data using maternal age cohort and maternal age period methods. Then, we combine these two methods using LOESS regression. However, these estimates are not plausible for some provinces. We use additional information from covariates such as wealth index and years of schooling to make predictions for these provinces using a spatio-temporal model. We generate yearly estimates of mortality using Gaussian process regression that covers both sampling and non-sampling errors within uncertainty intervals. By comparing the resulting estimates with mortality rates from the DRS, we calculate child mortality incompleteness. For incompleteness of adult mortality, the Generalized Growth Balance and Synthetic Extinct Generation methods and a hybrid of the two are used. Afterwards, we combine the incompleteness estimates from the three methods using GPR and apply them to correct and adjust the number of deaths. In this study, we develop a conceptual framework to overcome the existing challenges for accurate measurement of mortality rates. The resulting estimates can be used to inform policy-makers about past, current and future mortality rates as a major indicator of the health status of a population.
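
    The final smoothing step can be sketched with a short Gaussian process regression example (synthetic series and kernel choices below are assumptions, not NASBOD data or settings): yearly estimates are produced together with uncertainty intervals that absorb sampling and non-sampling error.

        # Toy sketch: GPR smoothing of a noisy yearly mortality series (invented data)
        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        years = np.arange(1990, 2014)[:, None]
        # Hypothetical noisy under-five mortality rates (per 1000 live births)
        obs = (60 * np.exp(-0.04 * (years.ravel() - 1990))
               + np.random.default_rng(2).normal(0, 2, len(years)))

        kernel = 1.0 * RBF(length_scale=5.0) + WhiteKernel(noise_level=4.0)
        gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(years, obs)

        mean, sd = gpr.predict(years, return_std=True)
        print(np.c_[years.ravel(), mean.round(1), (1.96 * sd).round(1)][:5])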

  4. Automated analysis of high-throughput B-cell sequencing data reveals a high frequency of novel immunoglobulin V gene segment alleles.

    PubMed

    Gadala-Maria, Daniel; Yaari, Gur; Uduman, Mohamed; Kleinstein, Steven H

    2015-02-24

    Individual variation in germline and expressed B-cell immunoglobulin (Ig) repertoires has been associated with aging, disease susceptibility, and differential response to infection and vaccination. Repertoire properties can now be studied at large-scale through next-generation sequencing of rearranged Ig genes. Accurate analysis of these repertoire-sequencing (Rep-Seq) data requires identifying the germline variable (V), diversity (D), and joining (J) gene segments used by each Ig sequence. Current V(D)J assignment methods work by aligning sequences to a database of known germline V(D)J segment alleles. However, existing databases are likely to be incomplete and novel polymorphisms are hard to differentiate from the frequent occurrence of somatic hypermutations in Ig sequences. Here we develop a Tool for Ig Genotype Elucidation via Rep-Seq (TIgGER). TIgGER analyzes mutation patterns in Rep-Seq data to identify novel V segment alleles, and also constructs a personalized germline database containing the specific set of alleles carried by a subject. This information is then used to improve the initial V segment assignments from existing tools, like IMGT/HighV-QUEST. The application of TIgGER to Rep-Seq data from seven subjects identified 11 novel V segment alleles, including at least one in every subject examined. These novel alleles constituted 13% of the total number of unique alleles in these subjects, and impacted 3% of V(D)J segment assignments. These results reinforce the highly polymorphic nature of human Ig V genes, and suggest that many novel alleles remain to be discovered. The integration of TIgGER into Rep-Seq processing pipelines will increase the accuracy of V segment assignments, thus improving B-cell repertoire analyses.
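
    A heavily simplified sketch of the intuition (not the published TIgGER algorithm): a true somatic hypermutation at a given position appears in only a fraction of a subject's sequences, whereas a germline polymorphism missing from the database shows up as an apparent "mutation" in essentially all sequences assigned to that allele.

        # Toy sketch: flag positions that mismatch the database allele in nearly all reads
        import numpy as np

        def flag_candidate_polymorphisms(seqs, germline, min_freq=0.95):
            """seqs: equal-length reads aligned to the database germline allele."""
            arr = np.array([list(s) for s in seqs])
            ref = np.array(list(germline))
            mismatch_freq = (arr != ref).mean(axis=0)
            return np.where(mismatch_freq >= min_freq)[0]   # positions to inspect

        germline = "ACGTACGTAC"                              # hypothetical database allele
        reads = ["ACGTACGTAC", "ACATACGTAC", "ACATACGTAC",   # index 2 differs in all but one read
                 "ACATACGTAC", "ACATACGTAC"]
        print(flag_candidate_polymorphisms(reads, germline, min_freq=0.8))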

  5. Quality of reporting randomized controlled trials (RCTs) in diabetes in Iran; a systematic review.

    PubMed

    Gohari, Faeze; Baradaran, Hamid Reza; Tabatabaee, Morteza; Anijidani, Shabnam; Mohammadpour Touserkani, Fatemeh; Atlasi, Rasha; Razmgir, Maryam

    2015-01-01

    To determine the quality of randomized controlled clinical trial (RCT) reports in diabetes research in Iran. Systematized review. We included RCTs conducted on diabetes mellitus in Iran. Animal studies, educational interventions, and non-randomized trials were excluded. We excluded duplicated publications reporting the same groups of participants and interventions. Two independent reviewers identified all eligible articles using a specifically designed data extraction form. We searched international databases (Scopus, ProQuest, EBSCO, Science Direct, Web of Science, Cochrane Library, PubMed) and national databases in Persian (such as Magiran, Scientific Information Database (SID) and IranMedex) from January 1995 to January 2013. Two investigators assessed the quality of reporting using the CONSORT 2010 (Consolidated Standards of Reporting Trials) checklist statement; discrepancies were resolved by consulting a third reviewer. One hundred and eighty-five (185) studies were included and appraised. Just over half of them (55.7%) were published in Iranian journals. Most were parallel RCTs (89.7%) and were performed on type 2 diabetic patients (77.8%). Overall, less than half of the CONSORT items (43.2%) were reported in the studies. The reporting of randomization and blinding was poor. Only 15.1% of studies mentioned the method of random sequence generation and the strategy of allocation concealment, and only 34.8% of trials reported how blinding was applied. The findings of this study show that the quality of RCTs conducted in Iran in diabetes research seems suboptimal and the reporting is incomplete; however, an increasing trend of improvement can be seen over time. Therefore, it is suggested that Iranian researchers pay much more attention to design and methodological quality when conducting and reporting diabetes RCTs.

  6. Quantum process tomography with informational incomplete data of two J-coupled heterogeneous spins relaxation in a time window much greater than T1

    NASA Astrophysics Data System (ADS)

    Maciel, Thiago O.; Vianna, Reinaldo O.; Sarthour, Roberto S.; Oliveira, Ivan S.

    2015-11-01

    We reconstruct the time dependent quantum map corresponding to the relaxation process of a two-spin system in liquid-state NMR at room temperature. By means of quantum tomography techniques that handle informationally incomplete data, we show how to properly post-process and normalize the measurement data for the simulation of quantum information processing, overcoming the unknown number of molecules prepared in a non-equilibrium magnetization state (Nj) by an initial sequence of radiofrequency pulses. From the reconstructed quantum map, we infer both longitudinal (T1) and transversal (T2) relaxation times, and introduce the J-coupling relaxation times (T1J, T2J), which are relevant for quantum information processing simulations. We show that the map associated with the relaxation process cannot be assumed approximately unital and trace-preserving for times greater than T2J.

  7. "Parisara": Developing a Collaborative, Free, Public Domain Knowledge Resource on Indian Environment

    ERIC Educational Resources Information Center

    Gadgil, Madhav

    2012-01-01

    To address the important challenge of taking good care of India's environment, substantial, good quality information is crucial. Unfortunately, pertinent information is in very short supply. Much of the nationally collected information lacks quality and is incomplete. Modern science has demonstrated that good information flows from an open,…

  8. Correlation between compliance and brace treatment in juvenile and adolescent idiopathic scoliosis: SOSORT 2014 award winner

    PubMed Central

    2014-01-01

    Background Over the last years, evidence has accumulated in support of bracing as an effective treatment option in patients with idiopathic scoliosis. Yet, little information is available on the impact of compliance on the outcome of conservative treatment in scoliotic subjects. The aim of the present study was to prospectively evaluate the association between compliance with brace treatment and the progression of the scoliotic curve in patients with adolescent (AIS) or juvenile idiopathic scoliosis (JIS). Methods Among 1,424 patients treated for idiopathic scoliosis, 645 met the inclusion criteria. Three outcomes were distinguished in agreement with the SRS criteria: curve correction, curve stabilization and curve progression. Brace wearing was assessed by one orthopaedic surgeon (LA) and scored on a standardized form. Compliance with treatment was categorized as complete (brace worn as prescribed), incomplete A (brace removed for 1 month), incomplete B (brace removed for 2 months), incomplete C (brace removed during school hours), and incomplete D (brace worn overnight only). Chi-square test, t test or ANOVA, and repeated-measures ANOVA were used as statistical tests. Results The results from our study showed that at follow-up the compliance was: Complete 61.1%; Incomplete A 5.2%; Incomplete B 10.7%; Incomplete C 14.2%; Incomplete D 8.8%. Curve correction was accomplished in 301/319 of Complete, 19/27 Incomplete A, 25/56 Incomplete B, 52/74 Incomplete C, and 27/46 Incomplete D patients. Mean Cobb angle was 29.8 ± 7.5 SD at the beginning and 17.1 ± 10.9 SD at follow-up. Improvement in both Cobb and Perdriolle degrees was significantly higher in patients with complete compliance than in all other groups, in both juvenile and adolescent scoliosis. In the intention-to-treat analysis, the rate of surgical treatment was 2.1% among patients with a definite outcome and 12.1% among those who dropped out. Treatment compliance showed significant interactions with time. Conclusion Curve progression and referral to surgery are lower in patients with high brace compliance. Bracing discontinuation of up to 1 month does not impact the treatment outcome. Conversely, wearing the brace only overnight is associated with a high rate of curve progression. PMID:24995038

  9. Binomial probability distribution model-based protein identification algorithm for tandem mass spectrometry utilizing peak intensity information.

    PubMed

    Xiao, Chuan-Le; Chen, Xiao-Zhou; Du, Yang-Li; Sun, Xuesong; Zhang, Gong; He, Qing-Yu

    2013-01-04

    Mass spectrometry has become one of the most important technologies in proteomic analysis. Tandem mass spectrometry (LC-MS/MS) is a major tool for the analysis of peptide mixtures from protein samples. The key step of MS data processing is the identification of peptides from experimental spectra by searching public sequence databases. Although a number of algorithms to identify peptides from MS/MS data have already been proposed, e.g. Sequest, OMSSA, X!Tandem, Mascot, etc., they are mainly based on statistical models that consider only peak matches between experimental and theoretical spectra, not peak intensity information. Moreover, different algorithms give different results for the same MS data, implying their probable incompleteness and questionable reproducibility. We developed a novel peptide identification algorithm, ProVerB, based on a binomial probability distribution model of protein tandem mass spectrometry combined with a new scoring function, making full use of peak intensity information and, thus, enhancing the ability of identification. Compared with Mascot, Sequest, and SQID, ProVerB identified significantly more peptides from LC-MS/MS data sets at a 1% False Discovery Rate (FDR) and provided more confident peptide identifications. ProVerB is also compatible with various platforms and experimental data sets, showing its robustness and versatility. The open-source program ProVerB is available at http://bioinformatics.jnu.edu.cn/software/proverb/ .
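
    A minimal sketch of the general idea behind binomial peak-match scoring (not ProVerB's actual scoring function) is to score the probability of observing at least k matched peaks out of n theoretical fragments by chance, optionally letting intensity decide which matches count. The per-peak match probability and the intensity-weighting rule below are assumptions made for illustration.

```python
# Minimal sketch of a binomial peak-match score (illustrative only; not the
# actual ProVerB scoring function). A peptide-spectrum match with k of n
# theoretical fragment peaks found by chance with per-peak probability p is
# scored by -log10 of the binomial tail probability P(X >= k).
import numpy as np
from scipy.stats import binom

def binomial_match_score(n_theoretical, n_matched, p_random):
    tail = binom.sf(n_matched - 1, n_theoretical, p_random)  # P(X >= k)
    return -np.log10(max(tail, 1e-300))

def intensity_weighted_matches(matched_intensities, all_intensities):
    # Count a match only if it falls in the upper half of the spectrum's
    # intensity distribution (hypothetical weighting rule).
    threshold = np.median(all_intensities)
    return int(np.sum(np.asarray(matched_intensities) >= threshold))

spectrum_intensities = np.array([120, 850, 40, 300, 990, 75, 60, 410])
matched = [850, 300, 990, 410]
k = intensity_weighted_matches(matched, spectrum_intensities)
print(binomial_match_score(n_theoretical=20, n_matched=k, p_random=0.05))
```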

  10. The use of intelligent database systems in acute pancreatitis--a systematic review.

    PubMed

    van den Heever, Marc; Mittal, Anubhav; Haydock, Matthew; Windsor, John

    2014-01-01

    Acute pancreatitis (AP) is a complex disease with multiple aetiological factors, wide-ranging severity, and multiple challenges to effective triage and management. Databases, data mining and machine learning algorithms (MLAs), including artificial neural networks (ANNs), may assist by storing and interpreting data from multiple sources, potentially improving clinical decision-making. The aims were to 1) identify database technologies used to store AP data, 2) collate and categorise variables stored in AP databases, 3) identify the MLA technologies, including ANNs, used to analyse AP data, and 4) identify clinical and non-clinical benefits and obstacles in establishing a national or international AP database. A comprehensive systematic search of online reference databases was performed. The predetermined inclusion criteria covered all papers discussing 1) databases, 2) data mining or 3) MLAs pertaining to AP; papers were independently assessed by two reviewers, with conflicts resolved by a third author. Forty-three papers were included. Three data mining technologies and five ANN methodologies were reported in the literature. A total of 187 collected variables were identified. ANNs increase the accuracy of severity prediction: one study showed ANNs had a sensitivity of 0.89 and specificity of 0.96 six hours after admission, compared with 0.80 and 0.85, respectively, for APACHE II (cutoff score ≥8). Reported problems with the databases were incomplete data, lack of clinical data and questionable diagnostic reliability. This is the first systematic review examining the use of databases, MLAs and ANNs in the management of AP. The clinical benefits these technologies have over current systems and other advantages to adopting them are identified. Copyright © 2013 IAP and EPC. Published by Elsevier B.V. All rights reserved.

  11. Review of Reliability-Based Design Optimization Approach and Its Integration with Bayesian Method

    NASA Astrophysics Data System (ADS)

    Zhang, Xiangnan

    2018-03-01

    Many uncertain factors arise in practical engineering, such as the external load environment, material properties, geometrical shape, initial conditions, boundary conditions, etc. Reliability methods measure the structural safety condition and determine the optimal combination of design parameters based on probabilistic theory. Reliability-based design optimization (RBDO), which combines reliability theory with optimization, is the most commonly used approach to minimize structural cost or other performance measures under uncertain variables. However, it cannot handle various kinds of incomplete information. The Bayesian approach is utilized to incorporate this kind of incomplete information into the uncertainty quantification. In this paper, the RBDO approach and its integration with the Bayesian method are introduced.

  12. Efficient Algorithms for Bayesian Network Parameter Learning from Incomplete Data

    DTIC Science & Technology

    2015-07-01

    Efficient Algorithms for Bayesian Network Parameter Learning from Incomplete Data. Guy Van den Broeck, Karthika Mohan, Arthur Choi and Adnan …

  13. 17 CFR 240.6a-2 - Amendments to application.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... and effective date of the action taken and shall provide any new information and correct any... taken that renders inaccurate, or that causes to be incomplete, any of the following: (1) Information filed on the Execution Page of Form 1, or amendment thereto; or (2) Information filed as part of...

  14. 17 CFR 240.6a-2 - Amendments to application.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... and effective date of the action taken and shall provide any new information and correct any... taken that renders inaccurate, or that causes to be incomplete, any of the following: (1) Information filed on the Execution Page of Form 1, or amendment thereto; or (2) Information filed as part of...

  15. Updates to the Virtual Atomic and Molecular Data Centre

    NASA Astrophysics Data System (ADS)

    Hill, Christian; Tennyson, Jonathan; Gordon, Iouli E.; Rothman, Laurence S.; Dubernet, Marie-Lise

    2014-06-01

    The Virtual Atomic and Molecular Data Centre (VAMDC) has established a set of standards for the storage and transmission of atomic and molecular data and an SQL-based query language (VSS2) for searching online databases, known as nodes. The project has also created an online service, the VAMDC Portal, through which all of these databases may be searched and their results compared and aggregated. Since its inception four years ago, the VAMDC e-infrastructure has grown to encompass over 40 databases, including HITRAN, in more than 20 countries and engages actively with scientists on six continents. Associated with the portal is a growing suite of software tools for the transformation of data from its native, XML-based, XSAMS format to a range of more convenient human-readable (such as HTML) and machine-readable (such as CSV) formats. The relational database for HITRAN1, created as part of the VAMDC project, has a flexible and extensible data model which is able to represent a wider range of parameters than the current fixed-format, text-based one. Over the next year, a new online interface to this database will be tested, released and fully documented - this web application, HITRANonline2, will fully replace the ageing and incomplete JavaHAWKS software suite.

  16. Language model: Extension to solve inconsistency, incompleteness, and short query in cultural heritage collection

    NASA Astrophysics Data System (ADS)

    Tan, Kian Lam; Lim, Chen Kim

    2017-10-01

    With the explosive growth of online information such as email messages, news articles, and scientific literature, many institutions and museums are converting their cultural collections from physical data to digital format. However, this conversion has resulted in issues of inconsistency and incompleteness. In addition, the use of inaccurate keywords results in the short-query problem. Most of the time, the inconsistency and incompleteness are caused by aggregation faults in annotating the documents themselves, while the short-query problem is caused by naive users who lack prior knowledge and experience in the cultural heritage domain. In this paper, we present an approach to solve the problems of inconsistency, incompleteness and short queries by incorporating a Term Similarity Matrix into the Language Model. Our approach is tested on the Cultural Heritage in CLEF (CHiC) collection, which consists of short queries and documents. The results show that the proposed approach is effective and improves retrieval accuracy.
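
    As an illustration of the general idea of folding a term-similarity matrix into a query-likelihood language model (a sketch of the generic technique, not the authors' exact formulation), one can smooth the document language model so that a query term absent from a sparsely annotated document still receives probability mass from similar terms. The vocabulary, similarity values and mixing weight below are invented.

```python
# Sketch: query-likelihood scoring where a term-similarity matrix lets similar
# terms contribute probability mass. Generic illustration, not the paper's model.
import numpy as np

vocab = ["vase", "urn", "pottery", "painting"]          # toy vocabulary
idx = {t: i for i, t in enumerate(vocab)}

# Hypothetical symmetric term-similarity matrix (rows/cols follow `vocab`).
S = np.array([[1.0, 0.8, 0.6, 0.1],
              [0.8, 1.0, 0.5, 0.1],
              [0.6, 0.5, 1.0, 0.2],
              [0.1, 0.1, 0.2, 1.0]])

def doc_lm(term_counts):
    counts = np.array([term_counts.get(t, 0) for t in vocab], dtype=float)
    return counts / counts.sum()

def smoothed_prob(term, p_doc, alpha=0.7):
    """Mix the raw document probability with similarity-weighted mass."""
    i = idx[term]
    sim_mass = S[i] @ p_doc / S[i].sum()
    return alpha * p_doc[i] + (1 - alpha) * sim_mass

def score(query_terms, term_counts):
    p_doc = doc_lm(term_counts)
    return float(np.sum([np.log(smoothed_prob(t, p_doc) + 1e-12)
                         for t in query_terms]))

doc = {"urn": 3, "pottery": 2}          # short, incomplete annotation
print(score(["vase"], doc))             # nonzero despite "vase" missing from doc
```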

  17. Decryption with incomplete cyphertext and multiple-information encryption in phase space.

    PubMed

    Xu, Xiaobin; Wu, Quanying; Liu, Jun; Situ, Guohai

    2016-01-25

    Recently, we have demonstrated that information encryption in phase space offers security enhancement over traditional encryption schemes operating in real space. However, there is also an important issue with this technique: it increases the cost of data transmission and storage. To address this issue, here we investigate the problem of decryption using incomplete cyphertext. We show that the analytic solution under the traditional framework sets the lower limit of decryption performance. More importantly, we demonstrate that only a small amount of cyphertext is needed to recover the plaintext signal faithfully using compressive sensing, meaning that the amount of data that needs to be transmitted and stored can be significantly reduced. This enables multiple-information encryption, so that the system bandwidth can be used more effectively. We also provide an optical experimental result demonstrating plaintext recovery in phase space.
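
    Compressive-sensing recovery from a reduced set of measurements is typically posed as a sparse reconstruction problem. The sketch below uses a basic iterative soft-thresholding (ISTA) loop on a random sensing matrix as a generic stand-in for the paper's optical decryption pipeline, so all dimensions and parameters are assumptions.

```python
# Generic compressive-sensing sketch: recover a sparse signal x from m < n
# incomplete linear measurements y = A @ x using iterative soft-thresholding
# (ISTA). Stand-in for the idea of decrypting from incomplete cyphertext; not
# the paper's optical phase-space system.
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 256, 80, 8                       # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)

A = rng.normal(size=(m, n)) / np.sqrt(m)   # sensing matrix
y = A @ x_true                             # incomplete "cyphertext"

def ista(A, y, lam=0.01, iters=500):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

x_hat = ista(A, y)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```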

  18. Electromagnetic inverse scattering

    NASA Technical Reports Server (NTRS)

    Bojarski, N. N.

    1972-01-01

    A three-dimensional electromagnetic inverse scattering identity, based on the physical optics approximation, is developed for the monostatic scattered far field cross section of perfect conductors. Uniqueness of this inverse identity is proven. This identity requires complete scattering information for all frequencies and aspect angles. A nonsingular integral equation is developed for the arbitrary case of incomplete frequency and/or aspect angle scattering information. A general closed-form solution to this integral equation is developed, which yields the shape of the scatterer from such incomplete information. A specific practical radar solution is presented. The resolution of this solution is developed, yielding short-pulse target resolution radar system parameter equations. The special cases of two- and one-dimensional inverse scattering and the special case of a priori knowledge of scatterer symmetry are treated in some detail. The merits of this solution over the conventional radar imaging technique are discussed.

  19. Reconciliation of Decision-Making Heuristics Based on Decision Trees Topologies and Incomplete Fuzzy Probabilities Sets

    PubMed Central

    Doubravsky, Karel; Dohnal, Mirko

    2015-01-01

    Complex decision-making tasks of different natures, e.g. economics, safety engineering, ecology and biology, are based on vague, sparse, partially inconsistent and subjective knowledge. Moreover, decision-making economists and engineers are usually not willing to invest too much time in the study of complex formal theories. They require decisions which can be (re)checked by human-like common sense reasoning. One important problem related to realistic decision-making tasks is incomplete data sets required by the chosen decision-making algorithm. This paper presents a relatively simple algorithm by which some missing input information items (III) can be generated, using mainly decision tree topologies, and integrated into incomplete data sets. The algorithm is based on easy-to-understand heuristics, e.g. that a longer decision tree sub-path is less probable. This heuristic can solve decision problems under total ignorance, i.e. when the decision tree topology is the only information available. In practice, however, isolated information items, e.g. some vaguely known probabilities (fuzzy probabilities), are usually available. This means that a realistic problem is analysed under partial ignorance. The proposed algorithm reconciles topology-related heuristics and additional fuzzy sets using fuzzy linear programming. A case study, represented by a tree with six lotteries and one fuzzy probability, is presented in detail. PMID:26158662

  20. Reconciliation of Decision-Making Heuristics Based on Decision Trees Topologies and Incomplete Fuzzy Probabilities Sets.

    PubMed

    Doubravsky, Karel; Dohnal, Mirko

    2015-01-01

    Complex decision-making tasks of different natures, e.g. economics, safety engineering, ecology and biology, are based on vague, sparse, partially inconsistent and subjective knowledge. Moreover, decision-making economists and engineers are usually not willing to invest too much time in the study of complex formal theories. They require decisions which can be (re)checked by human-like common sense reasoning. One important problem related to realistic decision-making tasks is incomplete data sets required by the chosen decision-making algorithm. This paper presents a relatively simple algorithm by which some missing input information items (III) can be generated, using mainly decision tree topologies, and integrated into incomplete data sets. The algorithm is based on easy-to-understand heuristics, e.g. that a longer decision tree sub-path is less probable. This heuristic can solve decision problems under total ignorance, i.e. when the decision tree topology is the only information available. In practice, however, isolated information items, e.g. some vaguely known probabilities (fuzzy probabilities), are usually available. This means that a realistic problem is analysed under partial ignorance. The proposed algorithm reconciles topology-related heuristics and additional fuzzy sets using fuzzy linear programming. A case study, represented by a tree with six lotteries and one fuzzy probability, is presented in detail.
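
    The heuristic the abstract mentions ("a longer decision tree sub-path is less probable") can be made concrete in a toy way: give each unknown branch a prior weight that decays with the depth of its subtree, then renormalize at each node while leaving known probabilities untouched. The decay rule and the tree below are invented for illustration; the paper itself reconciles such heuristics with fuzzy probabilities via fuzzy linear programming.

```python
# Toy illustration of the "longer sub-path is less probable" heuristic: unknown
# branch probabilities at a chance node are weighted by 1/(1 + subtree depth)
# and renormalized, while branches with known probabilities keep them. This
# only shows the heuristic part, not the fuzzy-linear-programming reconciliation.
def subtree_depth(node):
    children = node.get("children", [])
    return 0 if not children else 1 + max(subtree_depth(c) for c in children)

def fill_missing_probabilities(node):
    children = node.get("children", [])
    if not children:
        return
    known = sum(c["p"] for c in children if c.get("p") is not None)
    unknown = [c for c in children if c.get("p") is None]
    if unknown:
        weights = [1.0 / (1.0 + subtree_depth(c)) for c in unknown]
        total = sum(weights)
        for c, w in zip(unknown, weights):
            c["p"] = (1.0 - known) * w / total
    for c in children:
        fill_missing_probabilities(c)

tree = {"children": [
    {"p": 0.3, "children": []},                               # known probability
    {"p": None, "children": [{"p": None, "children": []}]},   # longer sub-path
    {"p": None, "children": []},                               # shorter sub-path
]}
fill_missing_probabilities(tree)
print([round(c["p"], 3) for c in tree["children"]])  # shorter branch gets more mass
```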

  1. Sequential estimation and satellite data assimilation in meteorology and oceanography

    NASA Technical Reports Server (NTRS)

    Ghil, M.

    1986-01-01

    The role of dynamics in estimating the state of the atmosphere and ocean from incomplete and noisy data is discussed and the classical applications of four-dimensional data assimilation to large-scale atmospheric dynamics are presented. It is concluded that sequential updating of a forecast model with continuously incoming conventional and remote-sensing data is the most natural way of extracting the maximum amount of information from the imperfectly known dynamics, on the one hand, and the inaccurate and incomplete observations, on the other.
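
    Sequential updating of a forecast with incoming noisy, incomplete observations is the setting of the Kalman filter, which is one standard realization of the sequential estimation the abstract discusses. The sketch below shows a generic Kalman analysis step; all state values, covariances and the observation operator are invented.

```python
# Generic Kalman filter analysis step: blend a model forecast x_f (with error
# covariance P_f) with an incomplete, noisy observation y = H @ x_true + noise.
# Purely illustrative; values and dimensions are made up.
import numpy as np

x_f = np.array([1.0, 0.5, -0.2])            # forecast state (3 variables)
P_f = np.diag([0.5, 0.5, 0.5])              # forecast error covariance
H = np.array([[1.0, 0.0, 0.0],              # only two of three variables observed
              [0.0, 1.0, 0.0]])
R = np.diag([0.1, 0.1])                     # observation error covariance
y = np.array([1.3, 0.2])                    # incoming observations

S = H @ P_f @ H.T + R                       # innovation covariance
K = P_f @ H.T @ np.linalg.inv(S)            # Kalman gain
x_a = x_f + K @ (y - H @ x_f)               # analysis (updated state)
P_a = (np.eye(3) - K @ H) @ P_f             # analysis error covariance

print("analysis state:", np.round(x_a, 3))
```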

  2. Identity method to study chemical fluctuations in relativistic heavy-ion collisions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gazdzicki, Marek; Grebieszkow, Katarzyna; Mackowiak, Maja

    Event-by-event fluctuations of the chemical composition of the hadronic final state of relativistic heavy-ion collisions carry valuable information on the properties of strongly interacting matter produced in the collisions. However, in experiments incomplete particle identification distorts the observed fluctuation signals. The effect is quantitatively studied and a new technique for measuring chemical fluctuations, the identity method, is proposed. The method fully eliminates the effect of incomplete particle identification. The application of the identity method to experimental data is explained.
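
    The core quantity of the identity method is an "identity" weight for each particle, built from the probability densities of the identification observable (e.g. dE/dx) for the different species; an ambiguous measurement is shared between species instead of being assigned to one of them. The sketch below computes such weights with invented Gaussian response functions and yields, purely for illustration.

```python
# Minimal sketch of identity-method weights: for a measured identification
# observable m (e.g. dE/dx), the weight of species j is
#   w_j(m) = rho_j(m) / sum_k rho_k(m),
# where rho_j are the species response densities (here: invented Gaussians
# scaled by assumed relative yields). Illustrative only.
import numpy as np
from scipy.stats import norm

species = {          # name: (mean, sigma, relative yield) -- all hypothetical
    "pion":   (1.00, 0.05, 0.70),
    "kaon":   (1.10, 0.05, 0.20),
    "proton": (1.25, 0.06, 0.10),
}

def identity_weights(m):
    dens = {name: yld * norm.pdf(m, mu, sig)
            for name, (mu, sig, yld) in species.items()}
    total = sum(dens.values())
    return {name: d / total for name, d in dens.items()}

print(identity_weights(1.08))   # ambiguous measurement shared between species
```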

  3. Facility Volume and Survival in Nasopharyngeal Carcinoma.

    PubMed

    Yoshida, Emi J; Luu, Michael; David, John M; Kim, Sungjin; Mita, Alain; Scher, Kevin; Shiao, Stephen L; Tighiouart, Mourad; Lee, Nancy Y; Ho, Allen S; Zumsteg, Zachary S

    2018-02-01

    Definitive treatment of nasopharyngeal carcinoma (NPC) is challenging owing to its rarity, complicated regional anatomy, and the intensity of therapy. In contrast to other head and neck cancers, the effect of facility volume has not been well described for NPC. The National Cancer Database was queried for patients with stage II-IVB NPC diagnosed from 2004 to 2014 and treated with definitive radiation. Patients with incomplete staging, unknown receipt or timing of treatment, unknown follow-up duration, incomplete socioeconomic information, or treatment outside the reporting facility were excluded. High-volume facilities (HVFs) were defined as the top 5% of facilities according to the annual facility volume. The present analysis included 3941 NPC patients treated at 804 facilities with a median follow-up duration of 59.4 months, including 1025 patients (26.0%) treated at HVFs. Treatment at HVFs was associated with significantly improved overall survival (OS) on multivariable analysis (hazard ratio 0.79, 95% confidence interval 0.69-0.90; P=.001). In propensity score-matched cohorts, 5-year OS was 69.1% versus 63.3% at HVFs versus lower volume facilities (LVFs), respectively (P=.003). Similar results were seen when facility volume was analyzed as a continuous variable. The effect of facility volume on survival varied by academic status (P=.002 for interaction). At academic centers, the propensity score-matched cohorts had 5-year OS of 71.4% compared with 62.4% (P<.001) at HVFs and LVFs, respectively. In contrast, the 5-year OS was 63.5% versus 67.9% (P=.68) in propensity score-matched patients at nonacademic HVFs and LVFs. Treatment at HVFs was associated with improved OS for patients with NPC, with the effect exclusively seen at academic centers. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Population-based risk of complications following transthoracic needle lung biopsy of a pulmonary nodule

    PubMed Central

    Wiener, Renda Soylemez; Schwartz, Lisa M.; Woloshin, Steven; Welch, H. Gilbert

    2011-01-01

    Background Because pulmonary nodules are found in up to 25% of patients undergoing chest computed tomography, the question of whether to biopsy is becoming increasingly common. Data on complications following transthoracic needle lung biopsy are limited to case series from selected institutions. Objective To determine population-based estimates of risks of complications following transthoracic needle biopsy of a pulmonary nodule. Design Cross-sectional analysis. Setting The 2006 Healthcare Cost and Utilization Project’s State Ambulatory Surgery Databases and State Inpatient Databases for California, Florida, Michigan, and New York. Patients 15,865 adults who underwent transthoracic needle biopsy of a pulmonary nodule. Measurements Percent of biopsies complicated by hemorrhage, any pneumothorax, and pneumothorax requiring chest tube, and adjusted odds ratios for these complications associated with various biopsy characteristics, calculated using multivariable population-averaged generalized estimating equations. Results Although hemorrhage was rare, complicating 1.0% (95% CI 0.9-1.2%) of biopsies, 17.8% (95% CI 11.8-23.8%) of patients with hemorrhage required a blood transfusion. By contrast, the risk of any pneumothorax was 15.0% (95% CI 14.0-16.0%), and 6.6% (95% CI 6.0-7.2%) of all biopsies resulted in a pneumothorax requiring chest tube. Compared to patients without complications, those who experienced hemorrhage or pneumothorax requiring chest tube had longer lengths of stay (p<0.001) and were more likely to develop respiratory failure requiring mechanical ventilation (p=0.02). Patients aged 60-69 years (as opposed to younger or older patients), smokers, and those with chronic obstructive pulmonary disease had higher risk of complications. Limitations Estimated risks may be inaccurate if coding of complications is incomplete. The databases analyzed contain little clinical detail (e.g., nodule characteristics, biopsy pathology) and cannot determine whether biopsies produced useful information. Conclusion While hemorrhage is an infrequent complication of transthoracic needle lung biopsy, pneumothorax is common and often necessitates chest tube placement. These population-based data should help patients and doctors make a more informed choice on whether to biopsy a pulmonary nodule. Primary Funding Source Department of Veterans Affairs and National Cancer Institute K07 CA 138772 PMID:21810706

  5. 76 FR 43143 - Approval and Promulgation of Air Quality Implementation Plan; Kansas; Final Disapproval of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-20

    ..., some information is not publicly available, i.e., Confidential Business Information (CBI) or other... business are Monday through Friday, 8 to 4:30, excluding Federal holidays. FOR FURTHER INFORMATION CONTACT... unfair disadvantage by not finding the submittal incomplete instead of issuing a proposed disapproval is...

  6. Expected versus Observed Information in SEM with Incomplete Normal and Nonnormal Data

    ERIC Educational Resources Information Center

    Savalei, Victoria

    2010-01-01

    Maximum likelihood is the most common estimation method in structural equation modeling. Standard errors for maximum likelihood estimates are obtained from the associated information matrix, which can be estimated from the sample using either expected or observed information. It is known that, with complete data, estimates based on observed or…

  7. 7 CFR 1709.108 - Supporting data for determining community eligibility.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... areas. A comparison of the historical residential energy cost or expenditure information for the local... lieu of historical residential energy costs or expenditure information under the following circumstances: (1) Where historical community energy cost data are unavailable (unserved areas), incomplete or...

  8. Measuring treatment compliance of men with non-gonococcal urethritis receiving oxytetracycline combined with low dose phenobarbitone.

    PubMed Central

    Bignell, C J; Mulcahy, F M; Peaker, S; Pullar, T; Feely, M P

    1988-01-01

    Of 62 men with non-gonococcal urethritis who entered a study to assess compliance with treatment with oxytetracycline, only 33 could be evaluated. Traditional methods (interview and the absence of oxytetracycline in the urine) showed incomplete compliance in nine. Use of low dose phenobarbitone as a pharmacological marker showed incomplete compliance in a further five patients. In addition, phenobarbitone concentrations gave information on the extent to which individual patients had omitted treatment and provided direct, as opposed to circumstantial, evidence of good compliance by most (18) of those studied. Only three of the 33 patients whose compliance was assessed had evidence of continuing infection at follow up, and there was evidence of incomplete compliance in only one of these patients. PMID:3203931

  9. 16 CFR 1102.6 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE (Eff. Jan. 10, 2011) Background and Definitions... Product Safety Information Database. (2) Commission or CPSC means the Consumer Product Safety Commission... Information Database, also referred to as the Database, means the database on the safety of consumer products...

  10. Acupuncture and vitamin B12 injection for Bell's palsy: no high-quality evidence exists.

    PubMed

    Wang, Li-Li; Guan, Ling; Hao, Peng-Liang; Du, Jin-Long; Zhang, Meng-Xue

    2015-05-01

    To assess the efficacy of acupuncture combined with vitamin B12 acupoint injection versus acupuncture alone in reducing incomplete recovery in patients with Bell's palsy. A computer-based online search of the Medline, Web of Science, CNKI and CBM databases up to April 2014 was performed for relevant trials, using the key words "Bell's palsy or idiopathic facial palsy or facial palsy" and "acupuncture or vitamin B12 or methylcobalamin". All randomized controlled trials that compared acupuncture with acupuncture combined with vitamin B12 in patients with Bell's palsy were included in the meta-analysis. The initial treatment lasted for at least 4 weeks. The outcome of incomplete facial recovery was monitored. The scoring indices varied, but the definition of healing was consistent. The combined effect size was calculated as the relative risk (RR) with 95% confidence interval (CI) using the fixed-effect model of Review Manager. Incomplete recovery rates were chosen as the primary outcome. Five studies involving 344 patients were included in the final analysis. The incomplete recovery rate of Bell's palsy patients was 44.50% in the acupuncture combined with vitamin B12 group but 62.57% in the acupuncture-alone group. The major acupoints were Taiyang (EX-HN5), Jiache (ST6), Dicang (ST4) and Sibai (ST2). The combined effect size showed that acupuncture combined with vitamin B12 was better than acupuncture alone for the treatment of Bell's palsy (RR = 0.71, 95%CI: 0.58-0.87; P = 0.001); this result held true when 8 patients lost to follow-up in one study were included in the analyses (RR = 0.70, 95%CI: 0.58-0.86; P = 0.0005). In the subgroup analyses, the therapeutic effect in the electroacupuncture subgroup was better than in the non-electroacupuncture subgroup (P = 0.024). There was no significant difference in the incomplete recovery rate in subgroup analyses by drug type and treatment period. Most of the included studies were of moderate or low quality, and bias existed. In patients with Bell's palsy, acupuncture combined with vitamin B12 can reduce the risk of incomplete recovery compared with acupuncture alone according to our meta-analysis. Because of study bias and methodological limitations, this conclusion is uncertain and the clinical application of acupuncture combined with vitamin B12 requires further exploration.
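
    The pooled relative risk in a fixed-effect meta-analysis such as this one is commonly obtained by inverse-variance weighting of the per-study log relative risks (Review Manager also offers Mantel-Haenszel weighting). The sketch below shows the generic inverse-variance calculation on made-up 2x2 counts, not the trials actually included in this meta-analysis.

```python
# Generic fixed-effect (inverse-variance) pooling of relative risks from 2x2
# tables (events/total in treatment and control arms). The counts below are
# invented; they are not the trials included in the meta-analysis.
import numpy as np

# (events_treat, n_treat, events_ctrl, n_ctrl) per study -- hypothetical
studies = [(20, 50, 30, 50), (15, 40, 25, 42), (18, 45, 28, 44)]

log_rr, weights = [], []
for a, n1, c, n2 in studies:
    rr = (a / n1) / (c / n2)
    var = 1 / a - 1 / n1 + 1 / c - 1 / n2       # variance of log(RR)
    log_rr.append(np.log(rr))
    weights.append(1 / var)

log_rr, weights = np.array(log_rr), np.array(weights)
pooled = np.sum(weights * log_rr) / np.sum(weights)
se = 1 / np.sqrt(np.sum(weights))
ci = np.exp([pooled - 1.96 * se, pooled + 1.96 * se])
print(f"pooled RR = {np.exp(pooled):.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")
```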

  11. Paperless anesthesia: uses and abuses of these data.

    PubMed

    Anderson, Brian J; Merry, Alan F

    2015-12-01

    Demonstrably accurate records facilitate clinical decision making, improve patient safety, provide better defense against frivolous lawsuits, and enable better medical policy decisions. Anesthesia Information Management Systems (AIMS) have the potential to improve on the accuracy and reliability of handwritten records. Interfaces with electronic recording systems within the hospital or wider community allow correlation of anesthesia relevant data with biochemistry laboratory results, billing sections, radiological units, pharmacy, earlier patient records, and other systems. Electronic storage of large and accurate datasets has lent itself to quality assurance, enhancement of patient safety, research, cost containment, scheduling, anesthesia training initiatives, and has even stimulated organizational change. The time for record making may be increased by AIMS, but in some cases has been reduced. The question of impact on vigilance is not entirely settled, but substantial negative effects seem to be unlikely. The usefulness of these large databases depends on the accuracy of data and they may be incorrect or incomplete. Consequent biases are threats to the validity of research results. Data mining of biomedical databases makes it easier for individuals with political, social, or economic agendas to generate misleading research findings for the purpose of manipulating public opinion and swaying policymakers. There remains a fear that accessibility of data may have undesirable regulatory or legal consequences. Increasing regulation of treatment options during the perioperative period through regulated policies could reduce autonomy for clinicians. These fears are as yet unsubstantiated. © 2015 John Wiley & Sons Ltd.

  12. The Estimation of Gestational Age at Birth in Database Studies.

    PubMed

    Eberg, Maria; Platt, Robert W; Filion, Kristian B

    2017-11-01

    Studies on the safety of prenatal medication use require valid estimation of the pregnancy duration. However, gestational age is often incompletely recorded in administrative and clinical databases. Our objective was to compare different approaches to estimating the pregnancy duration. Using data from the Clinical Practice Research Datalink and Hospital Episode Statistics, we examined the following four approaches to estimating missing gestational age: (1) generalized estimating equations for longitudinal data; (2) multiple imputation; (3) estimation based on fetal birth weight and sex; and (4) conventional approaches that assigned a fixed value (39 weeks for all or 39 weeks for full term and 35 weeks for preterm). The gestational age recorded in Hospital Episode Statistics was considered the gold standard. We conducted a simulation study comparing the described approaches in terms of estimated bias and mean square error. A total of 25,929 infants from 22,774 mothers were included in our "gold standard" cohort. The smallest average absolute bias was observed for the generalized estimating equation that included birth weight, while the largest absolute bias occurred when assigning 39-week gestation to all those with missing values. The smallest mean square errors were detected with generalized estimating equations while multiple imputation had the highest mean square errors. The use of generalized estimating equations resulted in the most accurate estimation of missing gestational age when birth weight information was available. In the absence of birth weight, assignment of fixed gestational age based on term/preterm status may be the optimal approach.
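
    A generic illustration of the contrast the study draws is to compare a fixed 39-week assignment with a prediction of gestational age from birth weight and infant sex. The sketch below does this on synthetic data with a plain linear regression; it is a stand-in only, not the study's GEE models or its CPRD/HES cohort, and all simulated parameters are invented.

```python
# Synthetic comparison of two simple ways to fill in missing gestational age:
# assigning a fixed 39 weeks versus predicting it from birth weight and infant
# sex with a linear regression. Illustrative only; not the study's approach
# or data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
n = 2000
ga = rng.normal(39.0, 2.0, n).clip(24, 42)                        # true gestational age
sex = rng.integers(0, 2, n)                                        # 0 = female, 1 = male
bw = 3400 + 180 * (ga - 39) + 120 * sex + rng.normal(0, 300, n)    # birth weight (g)

missing = rng.random(n) < 0.3                                      # 30% missing GA
X, y = np.column_stack([bw, sex]), ga

model = LinearRegression().fit(X[~missing], y[~missing])           # fit on observed GA
pred = model.predict(X[missing])

mae_fixed = np.mean(np.abs(ga[missing] - 39.0))
mae_model = np.mean(np.abs(ga[missing] - pred))
print(f"MAE fixed 39 weeks: {mae_fixed:.2f}, MAE regression: {mae_model:.2f}")
```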

  13. A major allergen in rainbow trout (Oncorhynchus mykiss): complete sequences of parvalbumin by MALDI tandem mass spectrometry.

    PubMed

    Aiello, Donatella; Materazzi, Stefano; Risoluti, Roberta; Thangavel, Hariprasad; Di Donna, Leonardo; Mazzotti, Fabio; Casadonte, Francesca; Siciliano, Carlo; Sindona, Giovanni; Napoli, Anna

    2015-08-01

    Fish parvalbumin (PRVB) is an abundant and stable protein in fish meat. The variation in cross-reactivity among individuals is well known and is explained by a broad repertoire of molecular forms and differences between IgE-binding epitopes in fish species. PRVB has "sequential" epitopes, which retain their IgE-binding capacity and allergenicity even after heating and digestion by proteolytic enzymes. From the allergonomics perspective, PRVB is still a challenging target due to its multiple isoforms present at different degrees of distribution. Little information is available in the databases about PRVBs from Oncorhynchus mykiss. At present, only two validated, incomplete isoforms of this species are included in the protein databases: parvalbumin beta 1 (P86431) and parvalbumin beta 2 (P86432). A simple and rapid protocol has been developed for selective solubilization of PRVB from the muscle of farmed rainbow trout (Oncorhynchus mykiss), followed by calcium depletion, proteolytic digestion, MALDI MS, and MS/MS analysis. With this strategy, thermal allergen release was assessed and the PRVB1 (P86431), PRVB1.1, PRVB2 (P86432) and PRVB2.1 variants from the rainbow trout were sequenced. The correct ordering of peptide sequences was aided by mapping the overlapping enzymatic digests. The deduced peptide sequences were arranged and the theoretical molecular masses (Mr) of the resulting sequences were calculated. Experimental masses (Mr) of each PRVB variant were measured by linear MALDI-TOF.

  14. Re-cataloging Joint Astronomy Centre (JAC) Library Book Collection

    NASA Astrophysics Data System (ADS)

    Lucas, A.; Zhang, X.

    2007-10-01

    The Joint Astronomy Centre operates two telescopes: the James Clerk Maxwell Telescope and the United Kingdom Infrared Telescope. In the JAC's 25-year history, their library was maintained by a number of staff ranging from scientists to student assistants. This resulted in an inconsistent and incomplete catalog as well as a mixture of typed, hand written, and inaccurate call number labels. Further complicating the situation was a backlog of un-cataloged books. In the process of improving the library system, it became obvious that the entire book collection needed to be re-cataloged and re-labeled. Readerware proved to be an inexpensive and efficient tool for this project. The software allows for the scanning of barcodes or the manual input of ISBNs, LCCNs and UPCs. It then retrieves the cataloging records from a number of pre-selected websites. The merged information is then stored in a database that can be manipulated to perform tasks such as printing call number labels. Readerware is also ideal for copy cataloging and has become an indispensable tool in maintaining the JAC's collection of books.

  15. Parents' difficulties with decisions about childhood immunisation.

    PubMed

    Austin, Helen; Campion-Smith, Charles; Thomas, Sarah; Ward, William

    2008-10-01

    Uptake of childhood immunisation fluctuates in the UK. Convenience, access and parents' relationships with professionals influence uptake. This study explores the decision-making by parents about their children's immunisation through focus groups with analysis to identify categories of concern. Issues raised in focus groups included fear, risk, anger, worry and guilt, confusion, difficulty of decision-making and trust of professionals. The parents of completely and incompletely immunised children shared areas of concern, but there were also significant differences. There was a subset of parents of incompletely immunised children who had decided that their children would not have full immunisation, and this group had little trust in information provided by healthcare professionals. Simply providing more information is unlikely to change their decision.

  16. Examining the Use of Internal Defect Information for Information-Augmented Hardwood Log Breakdown

    Treesearch

    Luis G. Occeña; Daniel L. Schmoldt; Suraphan Thawornwong

    1997-01-01

    In present-day hardwood sawmills, log breakdown is hampered by incomplete information about log geometry and internal features. When internal log scanning becomes operational, it will remove this roadblock and provide a complete view of each log's interior. It is not currently obvious, however, how dramatically this increased level of information will improve log...

  17. Gap-filling a spatially explicit plant trait database: comparing imputation methods and different levels of environmental information

    NASA Astrophysics Data System (ADS)

    Poyatos, Rafael; Sus, Oliver; Badiella, Llorenç; Mencuccini, Maurizio; Martínez-Vilalta, Jordi

    2018-05-01

    The ubiquity of missing data in plant trait databases may hinder trait-based analyses of ecological patterns and processes. Spatially explicit datasets with information on intraspecific trait variability are rare but offer great promise in improving our understanding of functional biogeography. At the same time, they offer specific challenges in terms of data imputation. Here we compare statistical imputation approaches, using varying levels of environmental information, for five plant traits (leaf biomass to sapwood area ratio, leaf nitrogen content, maximum tree height, leaf mass per area and wood density) in a spatially explicit plant trait dataset of temperate and Mediterranean tree species (Ecological and Forest Inventory of Catalonia, IEFC, dataset for Catalonia, north-east Iberian Peninsula, 31 900 km2). We simulated gaps at different missingness levels (10-80 %) in a complete trait matrix, and we used overall trait means, species means, k nearest neighbours (kNN), ordinary and regression kriging, and multivariate imputation using chained equations (MICE) to impute missing trait values. We assessed these methods in terms of their accuracy and of their ability to preserve trait distributions, multi-trait correlation structure and bivariate trait relationships. The relatively good performance of mean and species mean imputations in terms of accuracy masked a poor representation of trait distributions and multivariate trait structure. Species identity improved MICE imputations for all traits, whereas forest structure and topography improved imputations for some traits. No method performed best consistently for the five studied traits, but, considering all traits and performance metrics, MICE informed by relevant ecological variables gave the best results. However, at higher missingness (> 30 %), species mean imputations and regression kriging tended to outperform MICE for some traits. MICE informed by relevant ecological variables allowed us to fill the gaps in the IEFC incomplete dataset (5495 plots) and quantify imputation uncertainty. Resulting spatial patterns of the studied traits in Catalan forests were broadly similar when using species means, regression kriging or the best-performing MICE application, but some important discrepancies were observed at the local level. Our results highlight the need to assess imputation quality beyond just imputation accuracy and show that including environmental information in statistical imputation approaches yields more plausible imputations in spatially explicit plant trait datasets.
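
    scikit-learn provides convenient stand-ins for two of the compared approaches: KNNImputer for kNN gap-filling and IterativeImputer, a MICE-style chained-equations imputer. The sketch below contrasts them with a column-mean fill on a synthetic trait matrix, since the IEFC data themselves are not reproduced here and the simulated structure is invented.

```python
# Gap-filling a synthetic "trait matrix" three ways: column means, kNN, and a
# MICE-style iterative imputer. Illustrative stand-ins for the approaches
# compared in the paper; the data are simulated, not the IEFC plots.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import SimpleImputer, KNNImputer, IterativeImputer

rng = np.random.default_rng(3)
n_plots, n_traits = 500, 5
latent = rng.normal(size=(n_plots, 2))                      # shared latent structure
X_true = latent @ rng.normal(size=(2, n_traits)) + 0.3 * rng.normal(size=(n_plots, n_traits))

X = X_true.copy()
mask = rng.random(X.shape) < 0.3                            # 30% missingness
X[mask] = np.nan

for name, imp in [("mean", SimpleImputer(strategy="mean")),
                  ("kNN", KNNImputer(n_neighbors=10)),
                  ("MICE-like", IterativeImputer(max_iter=10, random_state=0))]:
    X_hat = imp.fit_transform(X)
    rmse = np.sqrt(np.mean((X_hat[mask] - X_true[mask]) ** 2))
    print(f"{name:10s} RMSE on imputed cells: {rmse:.3f}")
```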

  18. Validity of data in the Danish Colorectal Cancer Screening Database

    PubMed Central

    Thomsen, Mette Kielsholm; Njor, Sisse Helle; Rasmussen, Morten; Linnemann, Dorte; Andersen, Berit; Baatrup, Gunnar; Friis-Hansen, Lennart Jan; Jørgensen, Jens Christian Riis; Mikkelsen, Ellen Margrethe

    2017-01-01

    Background In Denmark, a nationwide screening program for colorectal cancer was implemented in March 2014. Along with this, a clinical database for program monitoring and research purposes was established. Objective The aim of this study was to estimate the agreement and validity of diagnosis and procedure codes in the Danish Colorectal Cancer Screening Database (DCCSD). Methods All individuals with a positive immunochemical fecal occult blood test (iFOBT) result who were invited to screening in the first 3 months since program initiation were identified. From these, a sample of 150 individuals was selected using stratified random sampling by age, gender and region of residence. Data from the DCCSD were compared with data from hospital records, which were used as the reference. Agreement, sensitivity, specificity and positive and negative predictive values were estimated for categories of codes “clean colon”, “colonoscopy performed”, “overall completeness of colonoscopy”, “incomplete colonoscopy”, “polypectomy”, “tumor tissue left behind”, “number of polyps”, “lost polyps”, “risk group of polyps” and “colorectal cancer and polyps/benign tumor”. Results Hospital records were available for 136 individuals. Agreement was highest for “colorectal cancer” (97.1%) and lowest for “lost polyps” (88.2%). Sensitivity varied between moderate and high, with 60.0% for “incomplete colonoscopy” and 98.5% for “colonoscopy performed”. Specificity was 92.7% or above, except for the categories “colonoscopy performed” and “overall completeness of colonoscopy”, where the specificity was low; however, the estimates were imprecise. Conclusion A high level of agreement between categories of codes in DCCSD and hospital records indicates that DCCSD reflects the hospital records well. Further, the validity of the categories of codes varied from moderate to high. Thus, the DCCSD may be a valuable data source for future research on colorectal cancer screening. PMID:28255255
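
    Validation metrics of this kind come from a 2x2 table comparing the registry code against the hospital-record reference. The helper below computes agreement, sensitivity, specificity, PPV and NPV from such counts; the counts in the example are invented, not the DCCSD figures.

```python
# Compute validity measures of a registry code against a reference standard
# (hospital records) from 2x2 counts. The example counts are hypothetical.
def validity(tp, fp, fn, tn):
    total = tp + fp + fn + tn
    return {
        "agreement":   (tp + tn) / total,
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv":         tp / (tp + fp),
        "npv":         tn / (tn + fn),
    }

# e.g. registry code "polypectomy" vs. what the hospital record documents
print(validity(tp=60, fp=4, fn=6, tn=66))
```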

  19. Reinforcement Learning for Constrained Energy Trading Games With Incomplete Information.

    PubMed

    Wang, Huiwei; Huang, Tingwen; Liao, Xiaofeng; Abu-Rub, Haitham; Chen, Guo

    2017-10-01

    This paper considers the problem of designing adaptive learning algorithms to seek the Nash equilibrium (NE) of the constrained energy trading game among individually strategic players with incomplete information. In this game, each player uses a learning automaton scheme to generate an action probability distribution, based on his/her private information, for maximizing his/her own averaged utility. It is shown that if the admissible mixed strategies converge to the NE with probability one, then the averaged utility and trading quantity almost surely converge to their expected values, respectively. For the given discontinuous pricing function, the utility function has already been proved to be upper semicontinuous and payoff secure, which guarantees the existence of the mixed-strategy NE. By the strict diagonal concavity of the regularized Lagrange function, the uniqueness of the NE is also guaranteed. Finally, an adaptive learning algorithm is provided to generate the strategy probability distribution for seeking the mixed-strategy NE.
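
    A learning automaton of the kind the abstract refers to maintains a probability distribution over discrete actions and nudges it toward actions that earn high reward. The classic linear reward-inaction (L_R-I) update is one standard such scheme, sketched here with an invented reward function standing in for a player's energy-trading utility; this is not the paper's specific algorithm.

```python
# Linear reward-inaction (L_R-I) learning automaton: a standard scheme for
# updating an action probability distribution from bandit-style feedback.
# The reward function is a made-up stand-in for a player's averaged trading
# utility; illustrative only.
import numpy as np

rng = np.random.default_rng(5)
actions = np.array([1.0, 2.0, 3.0])        # e.g. candidate trading quantities
p = np.ones(len(actions)) / len(actions)   # action probability distribution
lr = 0.05                                   # learning rate

def reward(q):
    # Hypothetical normalized utility in [0, 1], peaking at q = 2.
    return np.exp(-(q - 2.0) ** 2)

for _ in range(2000):
    i = rng.choice(len(actions), p=p)
    r = reward(actions[i])                 # a stochastic environment would add noise
    # L_R-I: move probability mass toward the chosen action in proportion to r.
    p = p - lr * r * p
    p[i] += lr * r
    p = p / p.sum()                        # guard against numerical drift

print(np.round(p, 3))                      # mass concentrates on the best action
```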

  20. Event extraction of bacteria biotopes: a knowledge-intensive NLP-based approach

    PubMed Central

    2012-01-01

    Background Bacteria biotopes cover a wide range of diverse habitats including animal and plant hosts, natural, medical and industrial environments. The high volume of publications in the microbiology domain provides a rich source of up-to-date information on bacteria biotopes. This information, as found in scientific articles, is expressed in natural language and is rarely available in a structured format, such as a database. This information is of great importance for fundamental research and microbiology applications (e.g., medicine, agronomy, food, bioenergy). The automatic extraction of this information from texts will provide a great benefit to the field. Methods We present a new method for extracting relationships between bacteria and their locations using the Alvis framework. Recognition of bacteria and their locations was achieved using a pattern-based approach and domain lexical resources. For the detection of environment locations, we propose a new approach that combines lexical information and the syntactic-semantic analysis of corpus terms to overcome the incompleteness of lexical resources. Bacteria location relations extend over sentence borders, and we developed domain-specific rules for dealing with bacteria anaphors. Results We participated in the BioNLP 2011 Bacteria Biotope (BB) task with the Alvis system. Official evaluation results show that it achieves the best performance of participating systems. New developments since then have increased the F-score by 4.1 points. Conclusions We have shown that the combination of semantic analysis and domain-adapted resources is both effective and efficient for event information extraction in the bacteria biotope domain. We plan to adapt the method to deal with a larger set of location types and a large-scale scientific article corpus to enable microbiologists to integrate and use the extracted knowledge in combination with experimental data. PMID:22759462

  1. Event extraction of bacteria biotopes: a knowledge-intensive NLP-based approach.

    PubMed

    Ratkovic, Zorana; Golik, Wiktoria; Warnier, Pierre

    2012-06-26

    Bacteria biotopes cover a wide range of diverse habitats including animal and plant hosts, natural, medical and industrial environments. The high volume of publications in the microbiology domain provides a rich source of up-to-date information on bacteria biotopes. This information, as found in scientific articles, is expressed in natural language and is rarely available in a structured format, such as a database. This information is of great importance for fundamental research and microbiology applications (e.g., medicine, agronomy, food, bioenergy). The automatic extraction of this information from texts will provide a great benefit to the field. We present a new method for extracting relationships between bacteria and their locations using the Alvis framework. Recognition of bacteria and their locations was achieved using a pattern-based approach and domain lexical resources. For the detection of environment locations, we propose a new approach that combines lexical information and the syntactic-semantic analysis of corpus terms to overcome the incompleteness of lexical resources. Bacteria location relations extend over sentence borders, and we developed domain-specific rules for dealing with bacteria anaphors. We participated in the BioNLP 2011 Bacteria Biotope (BB) task with the Alvis system. Official evaluation results show that it achieves the best performance of participating systems. New developments since then have increased the F-score by 4.1 points. We have shown that the combination of semantic analysis and domain-adapted resources is both effective and efficient for event information extraction in the bacteria biotope domain. We plan to adapt the method to deal with a larger set of location types and a large-scale scientific article corpus to enable microbiologists to integrate and use the extracted knowledge in combination with experimental data.
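
    The pattern-based recognition step can be pictured with a toy regular-expression extractor that links a bacterium mention to a location mention within a sentence. The lexicon and patterns below are invented and far simpler than the Alvis framework's syntactic-semantic analysis; they only illustrate the general pattern-based idea.

```python
# Toy pattern-based extraction of (bacterium, location) pairs from a sentence,
# using a small invented lexicon and simple regular expressions. Schematic
# illustration only; not the Alvis system described in the paper.
import re

bacteria_lexicon = ["Bifidobacterium longum", "Escherichia coli"]
location_lexicon = ["human gut", "dairy products", "soil"]

bact_re = "|".join(map(re.escape, bacteria_lexicon))
loc_re = "|".join(map(re.escape, location_lexicon))
pattern = re.compile(
    rf"(?P<bacterium>{bact_re}).{{0,80}}?\b(?:found|isolated|lives|present)\b"
    rf".{{0,80}}?(?P<location>{loc_re})", re.IGNORECASE)

sentence = ("Bifidobacterium longum is commonly found in the human gut "
            "and in some dairy products.")
for m in pattern.finditer(sentence):
    print((m.group("bacterium"), m.group("location")))
```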

  2. Estimating inbreeding rates in natural populations: Addressing the problem of incomplete pedigrees

    USGS Publications Warehouse

    Miller, Mark P.; Haig, Susan M.; Ballou, Jonathan D.; Steel, E. Ashley

    2017-01-01

    Understanding and estimating inbreeding is essential for managing threatened and endangered wildlife populations. However, determination of inbreeding rates in natural populations is confounded by incomplete parentage information. We present an approach for quantifying inbreeding rates for populations with incomplete parentage information. The approach exploits knowledge of pedigree configurations that lead to inbreeding coefficients of F = 0.25 and F = 0.125, allowing for quantification of Pr(I|k): the probability of observing pedigree I given the fraction of known parents (k). We developed analytical expressions under simplifying assumptions that define properties and behavior of inbreeding rate estimators for varying values of k. We demonstrated that inbreeding is overestimated if Pr(I|k) is not taken into consideration and that bias is primarily influenced by k. By contrast, our new estimator, incorporating Pr(I|k), is unbiased over a wide range of values of kthat may be observed in empirical studies. Stochastic computer simulations that allowed complex inter- and intragenerational inbreeding produced similar results. We illustrate the effects that accounting for Pr(I|k) can have in empirical data by revisiting published analyses of Arabian oryx (Oryx leucoryx) and Red deer (Cervus elaphus). Our results demonstrate that incomplete pedigrees are not barriers for quantifying inbreeding in wild populations. Application of our approach will permit a better understanding of the role that inbreeding plays in the dynamics of populations of threatened and endangered species and may help refine our understanding of inbreeding avoidance mechanisms in the wild.

  3. Handling incomplete correlated continuous and binary outcomes in meta-analysis of individual participant data.

    PubMed

    Gomes, Manuel; Hatfield, Laura; Normand, Sharon-Lise

    2016-09-20

    Meta-analysis of individual participant data (IPD) is increasingly utilised to improve the estimation of treatment effects, particularly among different participant subgroups. An important concern in IPD meta-analysis relates to partially or completely missing outcomes for some studies, a problem exacerbated when interest is on multiple discrete and continuous outcomes. When leveraging information from incomplete correlated outcomes across studies, the fully observed outcomes may provide important information about the incompleteness of the other outcomes. In this paper, we compare two models for handling incomplete continuous and binary outcomes in IPD meta-analysis: a joint hierarchical model and a sequence of full conditional mixed models. We illustrate how these approaches incorporate the correlation across the multiple outcomes and the between-study heterogeneity when addressing the missing data. Simulations characterise the performance of the methods across a range of scenarios which differ according to the proportion and type of missingness, strength of correlation between outcomes and the number of studies. The joint model provided confidence interval coverage consistently closer to nominal levels and lower mean squared error compared with the fully conditional approach across the scenarios considered. Methods are illustrated in a meta-analysis of randomised controlled trials comparing the effectiveness of implantable cardioverter-defibrillator devices alone to implantable cardioverter-defibrillator combined with cardiac resynchronisation therapy for treating patients with chronic heart failure. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  4. Optimal (R, Q) policy and pricing for two-echelon supply chain with lead time and retailer's service-level incomplete information

    NASA Astrophysics Data System (ADS)

    Esmaeili, M.; Naghavi, M. S.; Ghahghaei, A.

    2018-03-01

    Many studies focus on inventory systems to analyze different real-world situations. This paper considers a two-echelon supply chain that includes one warehouse and one retailer with stochastic demand and an up-to-level policy. The retailer's lead time includes the transportation time from the warehouse to the retailer, which is unknown to the retailer. On the other hand, the warehouse is unaware of the retailer's service level. The relationship between the retailer and the warehouse is modeled as a Stackelberg game with incomplete information. Moreover, their relationship is also presented for the case where the warehouse and the retailer reveal their private information using incentive strategies. The optimal inventory and pricing policies are obtained using an algorithm based on bi-level programming. Numerical examples, including sensitivity analysis of some key parameters, compare the results of the Stackelberg models. The results show that information sharing is more beneficial to the warehouse than to the retailer.

  5. Avoidable waste related to inadequate methods and incomplete reporting of interventions: a systematic review of randomized trials performed in Sub-Saharan Africa.

    PubMed

    Ndounga Diakou, Lee Aymar; Ntoumi, Francine; Ravaud, Philippe; Boutron, Isabelle

    2017-07-05

    Randomized controlled trials (RCTs) are needed to improve health care in Sub-Saharan Africa (SSA). However, inadequate methods and incomplete reporting of interventions can prevent the translation of research into practice, which leads to waste of research. The aim of this systematic review was to assess the avoidable waste in research related to inadequate methods and incomplete reporting of interventions in RCTs performed in SSA. We performed a methodological systematic review of RCTs performed in SSA and published between 1 January 2014 and 31 March 2015. We searched PubMed, the Cochrane Library and the African Index Medicus to identify reports. We assessed the risk of bias using the Cochrane Risk of Bias tool and, for each risk of bias item, determined whether easy adjustments with no or minor cost could change the domain to low risk of bias. The reporting of interventions was assessed by using standardized checklists based on the Consolidated Standards for Reporting Trials and core items of the Template for Intervention Description and Replication. Corresponding authors of reports with incomplete reporting of interventions were contacted to obtain additional information. Data were analyzed descriptively. Among the 121 RCTs selected, 74 (61%) evaluated pharmacological treatments (PTs), including drugs and nutritional supplements, and 47 (39%) evaluated nonpharmacological treatments (NPTs) (40 participative interventions, 1 surgical procedure, 3 medical devices and 3 therapeutic strategies). Overall, the randomization sequence was adequately generated in 76 reports (62%) and the intervention allocation concealed in 48 (39%). The primary outcome was described as blinded in 46 reports (38%), and incomplete outcome data were adequately addressed in 78 (64%). Applying easy methodological adjustments with no or minor additional cost to trials with at least one domain at high risk of bias could have reduced the number of high-risk domains for 24 RCTs (19%). Interventions were completely reported in 73/121 (60%) RCTs: 51/74 (68%) of PTs and 22/47 (46%) of NPTs. Additional information was obtained from corresponding authors for 11/48 reports (22%). Inadequate methods and incomplete reporting in published SSA RCTs could be improved by easy and inexpensive methodological adjustments and adherence to reporting guidelines.

  6. Empirical analysis shows reduced cost data collection may be an efficient method in economic clinical trials

    PubMed Central

    2012-01-01

    Background Data collection for economic evaluation alongside clinical trials is burdensome and cost-intensive. Limiting both the frequency of data collection and the recall periods can solve the problem. As a consequence, gaps in survey periods arise and must be filled appropriately. The aims of our study are to assess the validity of incomplete cost data collection and to define suitable resource categories. Methods In the randomised KORINNA study, cost data from 234 elderly patients were collected quarterly over a 1-year period. Different strategies for incomplete data collection were compared with complete data collection. The sample size calculation was modified in response to the elasticity of variance. Results Resource categories suitable for incomplete data collection were physiotherapy, ambulatory clinic in hospital, medication, consultations, outpatient nursing service and paid household help. Cost estimation from complete and incomplete data collection showed no difference when omitting information from one quarter. When omitting information from two quarters, costs were underestimated by 3.9% to 4.6%. Because of the increased standard deviation observed, a larger sample size would be required, increased by 3%. Nevertheless, more time was saved than would be required for the additional patients. Conclusion Cost data can be collected efficiently by reducing the frequency of data collection. This can be achieved either by incomplete data collection for shortened periods or by complete data collection with extended recall windows. In our analysis, cost estimates per year for ambulatory healthcare and non-healthcare services based on three data collections were as valid and accurate as those based on four complete data collections. In contrast, data on hospitalisation, rehabilitation stays and care insurance benefits should be collected for the entire target period, using extended recall windows. When applying the method of incomplete data collection, the sample size calculation has to be modified because of the increased standard deviation. This approach makes it possible to conduct economic evaluations with lower costs to both study participants and investigators. Trial registration The trial registration number is ISRCTN02893746 PMID:22978572

  7. Effects of increased nurses' workload on quality documentation of patient information at selected Primary Health Care facilities in Vhembe District, Limpopo Province.

    PubMed

    Shihundla, Rhulani C; Lebese, Rachel T; Maputle, Maria S

    2016-05-13

    Recording of information on multiple documents increases professional nurses' responsibilities and workload during working hours. There are multiple registers and books at Primary Health Care (PHC) facilities in which a patient's information is to be recorded for different services during a visit to a health professional. Antenatal patients coming for the first visit must be recorded in the following documents: the tick register; the Prevention of Mother-To-Child Transmission (PMTCT) register; the consent form for HIV and AIDS testing; the HIV Counselling and Testing (HCT) register (if the patient tests positive for HIV and AIDS, this must also be recorded in the Antiretroviral Therapy (ART) wellness register); the ART file with an accompanying single file, completion of which is time-consuming; the tuberculosis (TB) suspects register; the blood specimen register; the maternity case record book and the Basic Antenatal Care (BANC) checklist. Nurses forget to record information in some documents, which leads to the omission of important data. Omitting information might lead to mismanagement of patients. Some of the documents have incomplete and inaccurate information. As PHC facilities in Vhembe District render twenty-four-hour services through a call system, the same nurses are expected to resume duty at 07:00 the following morning. They are expected to work effectively, and when tired a nurse may record illegible information, which may cause problems when the document is retrieved by the next person for continuity of care. The objective of this study was to investigate and describe the effects of increased nurses' workload on the quality of documentation of patient information at PHC facilities in Vhembe District, Limpopo Province. The study was conducted in Vhembe District, Limpopo Province, where the effects of increased nurses' workload on the quality of documentation are currently experienced. The research design was explorative, descriptive and contextual in nature. The population consisted of all nurses who work at PHC facilities in Vhembe District. Purposive sampling was used to select nurses, and three professional nurses were sampled from each PHC facility. In-depth face-to-face interviews were used to collect data with an interview guide. PHC facilities experienced several effects of increased nurses' workload, with incomplete patient information being documented. Unavailability of patient information was observed, while some documented information was found to be illegible, inaccurate and incomplete. Documentation of information at PHC facilities is evidence of effective communication amongst professional nurses. There should always be active follow-up and mentoring of the nurses' documentation to ensure that information is accurately and fully documented in their respective facilities. Nurses find it difficult to cope with the increased workload associated with documenting patient information on the multiple records that are utilized at PHC facilities, leading to incomplete information. The number of nurses at facilities should be increased to reduce the workload.

  8. Improvement of medication event interventions through use of an electronic database.

    PubMed

    Merandi, Jenna; Morvay, Shelly; Lewe, Dorcas; Stewart, Barb; Catt, Char; Chanthasene, Phillip P; McClead, Richard; Kappeler, Karl; Mirtallo, Jay M

    2013-10-01

    Patient safety enhancements achieved through the use of an electronic Web-based system for responding to adverse drug events (ADEs) are described. A two-phase initiative was carried out at an academic pediatric hospital to improve processes related to "medication event huddles" (interdisciplinary meetings focused on ADE interventions). Phase 1 of the initiative entailed a review of huddles and interventions over a 16-month baseline period during which multiple databases were used to manage the huddle process and staff interventions were assigned via manually generated e-mail reminders. Phase 1 data collection included ADE details (e.g., medications and staff involved, location and date of event) and the types and frequencies of interventions. Based on the phase 1 analysis, an electronic database was created to eliminate the use of multiple systems for huddle scheduling and documentation and to automatically generate e-mail reminders on assigned interventions. In phase 2 of the initiative, the impact of the database during a 5-month period was evaluated; the primary outcome was the percentage of interventions documented as completed after database implementation. During the postimplementation period, 44.7% of assigned interventions were completed, compared with a completion rate of 21% during the preimplementation period, and interventions documented as incomplete decreased from 77% to 43.7% (p < 0.0001). Process changes, education, and medication order improvements were the most frequently documented categories of interventions. Implementation of a user-friendly electronic database improved intervention completion and documentation after medication event huddles.

  9. Evaluation of personal digital assistant drug information databases for the managed care pharmacist.

    PubMed

    Lowry, Colleen M; Kostka-Rokosz, Maria D; McCloskey, William W

    2003-01-01

    Personal digital assistants (PDAs) are becoming a necessity for practicing pharmacists. They offer a time-saving and convenient way to obtain current drug information. Several software companies now offer general drug information databases for use on handheld computers. PDAs priced at less than 200 US dollars often have limited memory capacity; therefore, the user must choose from a growing list of general drug information database options in order to maximize utility without exceeding memory capacity. This paper reviews the attributes of available general drug information software databases for the PDA. It provides information on the content, advantages, limitations, pricing, memory requirements, and accessibility of drug information software databases. Ten drug information databases were subjectively analyzed and evaluated based on information from the product's Web site, vendor Web sites, and our own experience. Some of these databases have attractive auxiliary features such as kinetics calculators, disease references, drug-drug and drug-herb interaction tools, and clinical guidelines, which may make them more useful to the PDA user. Not all drug information databases are equal with regard to content, author credentials, frequency of updates, and memory requirements. The user must therefore evaluate databases for completeness, currency, and cost effectiveness before purchase. In addition, consideration should be given to the ease of use and flexibility of individual programs.

  10. 49 CFR 555.12 - Petition for exemption.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... under the laws of which it is organized, and the name of each alterer, incomplete vehicle manufacturer... sought; (g) Specify any part of the information and data submitted that the petitioner requests be...). (1) The information and data which petitioner requests be withheld from public disclosure must be...

  11. 49 CFR 555.12 - Petition for exemption.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... under the laws of which it is organized, and the name of each alterer, incomplete vehicle manufacturer... sought; (g) Specify any part of the information and data submitted that the petitioner requests be...). (1) The information and data which petitioner requests be withheld from public disclosure must be...

  12. 49 CFR 555.12 - Petition for exemption.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... under the laws of which it is organized, and the name of each alterer, incomplete vehicle manufacturer... sought; (g) Specify any part of the information and data submitted that the petitioner requests be...). (1) The information and data which petitioner requests be withheld from public disclosure must be...

  13. 49 CFR 555.12 - Petition for exemption.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... under the laws of which it is organized, and the name of each alterer, incomplete vehicle manufacturer... sought; (g) Specify any part of the information and data submitted that the petitioner requests be...). (1) The information and data which petitioner requests be withheld from public disclosure must be...

  14. 49 CFR 555.12 - Petition for exemption.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... under the laws of which it is organized, and the name of each alterer, incomplete vehicle manufacturer... sought; (g) Specify any part of the information and data submitted that the petitioner requests be...). (1) The information and data which petitioner requests be withheld from public disclosure must be...

  15. 43 CFR 46.125 - Incomplete or unavailable information.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ....125 Section 46.125 Public Lands: Interior Office of the Secretary of the Interior IMPLEMENTATION OF... apply, bureaus must consider all costs to obtain information. These costs include monetary costs as well as other non-monetized costs when appropriate, such as social costs, delays, opportunity costs, and...

  16. Fast angular synchronization for phase retrieval via incomplete information

    NASA Astrophysics Data System (ADS)

    Viswanathan, Aditya; Iwen, Mark

    2015-08-01

    We consider the problem of recovering the phase of an unknown vector, x ∈ ℂ^d, given (normalized) phase difference measurements of the form x_j x_k^* / |x_j x_k^*|, j, k ∈ {1, ..., d}, where x_j^* denotes the complex conjugate of x_j. This problem is sometimes referred to as the angular synchronization problem. This paper analyzes a linear-time-in-d eigenvector-based angular synchronization algorithm and studies its theoretical and numerical performance when applied to a particular class of highly incomplete and possibly noisy phase difference measurements. Theoretical results are provided for perfect (noiseless) measurements, while numerical simulations demonstrate the robustness of the method to measurement noise. Finally, we show that this angular synchronization problem and the specific form of incomplete phase difference measurements considered arise in the phase retrieval problem, where we recover an unknown complex vector from phaseless (or magnitude) measurements.
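
    As a rough illustration of the eigenvector-based approach described above, the sketch below builds a (possibly incomplete) Hermitian matrix of normalized phase differences and recovers the phases, up to a global phase, from its leading eigenvector. It is a minimal numpy example under an assumed random measurement mask, not the authors' implementation.

        import numpy as np

        # Minimal sketch of eigenvector-based angular synchronization:
        # recover the phases x_j (up to a global phase) from normalized
        # phase differences x_j * conj(x_k) / |x_j * conj(x_k)| observed
        # on a subset of index pairs. Unobserved entries are left at zero.

        rng = np.random.default_rng(0)
        d = 8
        x = np.exp(1j * rng.uniform(0, 2 * np.pi, d))   # true unit-modulus phases

        mask = rng.random((d, d)) < 0.6                 # hypothetical incomplete mask
        mask = mask | mask.T                            # keep the observed set symmetric
        np.fill_diagonal(mask, True)

        pairs = np.outer(x, np.conj(x))
        M = np.zeros((d, d), dtype=complex)
        M[mask] = pairs[mask] / np.abs(pairs[mask])     # normalized phase differences

        # Leading eigenvector of the Hermitian measurement matrix.
        vals, vecs = np.linalg.eigh(M)
        v = vecs[:, -1]
        estimate = v / np.abs(v)                        # project back onto unit modulus

        # Align the global phase before comparing with the ground truth.
        align = np.vdot(estimate, x)
        align /= np.abs(align)
        print("max phase error:", np.max(np.abs(estimate * align - x)))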

  17. Measuring the impact of Health Trainers Services on health and health inequalities: does the service's data collection and reporting system provide reliable information?

    PubMed

    Mathers, Jonathan; Taylor, Rebecca; Parry, Jayne

    2017-03-01

    The Health Trainers Service is one of the few public health policies for which a bespoke database, the Data Collection and Reporting System (DCRS), was developed to monitor performance. We seek to understand the context within which local services and staff have used the DCRS and to consider how this might influence interpretation of the collected data. In-depth case studies of six local services purposively sampled to represent the range of service provider arrangements, including detailed interviews with key stakeholders (n = 118). Capturing detailed information on activity with clients was alien to many health trainers' work practices. This related partly to technical challenges, but it also ran counter to beliefs as to how a 'lay' service should operate. Interviewees noted the inadequacy of the dataset for capturing all client impacts; that is, it did not enable them to input information about issues a client living in a deprived neighbourhood might experience and seek help to address. The utility of the DCRS may be compromised both by incomplete ascertainment of activity and by incorrect data inputted by some Health Trainers. The DCRS may also underestimate the effectiveness of the work health trainers have undertaken to address 'upstream' factors affecting client health. © The Author 2016. Published by Oxford University Press on behalf of Faculty of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  18. SigmoID: a user-friendly tool for improving bacterial genome annotation through analysis of transcription control signals

    PubMed Central

    Damienikan, Aliaksandr U.

    2016-01-01

    The majority of bacterial genome annotations are currently automated and based on a ‘gene by gene’ approach. Regulatory signals and operon structures are rarely taken into account which often results in incomplete and even incorrect gene function assignments. Here we present SigmoID, a cross-platform (OS X, Linux and Windows) open-source application aiming at simplifying the identification of transcription regulatory sites (promoters, transcription factor binding sites and terminators) in bacterial genomes and providing assistance in correcting annotations in accordance with regulatory information. SigmoID combines a user-friendly graphical interface to well known command line tools with a genome browser for visualising regulatory elements in genomic context. Integrated access to online databases with regulatory information (RegPrecise and RegulonDB) and web-based search engines speeds up genome analysis and simplifies correction of genome annotation. We demonstrate some features of SigmoID by constructing a series of regulatory protein binding site profiles for two groups of bacteria: Soft Rot Enterobacteriaceae (Pectobacterium and Dickeya spp.) and Pseudomonas spp. Furthermore, we inferred over 900 transcription factor binding sites and alternative sigma factor promoters in the annotated genome of Pectobacterium atrosepticum. These regulatory signals control putative transcription units covering about 40% of the P. atrosepticum chromosome. Reviewing the annotation in cases where it didn’t fit with regulatory information allowed us to correct product and gene names for over 300 loci. PMID:27257541

  19. When good is stickier than bad: Understanding gain/loss asymmetries in sequential framing effects.

    PubMed

    Sparks, Jehan; Ledgerwood, Alison

    2017-08-01

    Considerable research has demonstrated the power of the current positive or negative frame to shape people's current judgments. But humans must often learn about positive and negative information as they encounter that information sequentially over time. It is therefore crucial to consider the potential importance of sequencing when developing an understanding of how humans think about valenced information. Indeed, recent work looking at sequentially encountered frames suggests that some frames can linger outside the context in which they are first encountered, sticking in the mind so that subsequent frames have a muted effect. The present research builds a comprehensive account of sequential framing effects in both the loss and the gain domains. After seeing information about a potential gain or loss framed in positive terms or negative terms, participants saw the same issue reframed in the opposing way. Across 5 studies and 1566 participants, we find accumulating evidence for the notion that in the gain domain, positive frames are stickier than negative frames for novel but not familiar scenarios, whereas in the loss domain, negative frames are always stickier than positive frames. Integrating regulatory focus theory with the literatures on negativity dominance and positivity offset, we develop a new and comprehensive account of sequential framing effects that emphasizes the adaptive value of positivity and negativity biases in specific contexts. Our findings highlight the fact that research conducted solely in the loss domain risks painting an incomplete and oversimplified picture of human bias and suggest new directions for future research. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  20. Developing a fluid intelligence scale through a combination of Rasch modeling and cognitive psychology.

    PubMed

    Primi, Ricardo

    2014-09-01

    Ability testing has been criticized because understanding of the construct being assessed is incomplete and because the testing has not yet been satisfactorily improved in accordance with new knowledge from cognitive psychology. This article contributes to the solution of this problem through the application of item response theory and Susan Embretson's cognitive design system for test development in the development of a fluid intelligence scale. This study is based on findings from cognitive psychology; instead of focusing on the development of a test, it focuses on the definition of a variable for the creation of a criterion-referenced measure for fluid intelligence. A geometric matrix item bank with 26 items was analyzed with data from 2,797 undergraduate students. The main result was a criterion-referenced scale that was based on information from item features that were linked to cognitive components, such as storage capacity, goal management, and abstraction; this information was used to create the descriptions of selected levels of a fluid intelligence scale. The scale proposed that the levels of fluid intelligence range from the ability to solve problems containing a limited number of bits of information with obvious relationships through the ability to solve problems that involve abstract relationships under conditions that are confounded with an information overload and distraction by mixed noise. This scale can be employed in future research to provide interpretations for the measurements of the cognitive processes mastered and the types of difficulty experienced by examinees. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  1. Prediction of glutathionylation sites in proteins using minimal sequence information and their experimental validation.

    PubMed

    Pal, Debojyoti; Sharma, Deepak; Kumar, Mukesh; Sandur, Santosh K

    2016-09-01

    S-glutathionylation of proteins plays an important role in various biological processes and is known to be a protective modification during oxidative stress. Since experimental detection of S-glutathionylation is labor-intensive and time-consuming, a bioinformatics-based approach is a viable alternative. Available methods require longer sequence information, which may prevent prediction if sequence information is incomplete. Here, we present a model to predict glutathionylation sites from pentapeptide sequences. It is based upon the differential association of amino acids with glutathionylated and non-glutathionylated cysteines in a database of experimentally verified sequences. These data were used to calculate position-dependent F-scores, which measure how a particular amino acid at a particular position may affect the likelihood of a glutathionylation event. A glutathionylation score (G-score), indicating the propensity of a sequence to undergo glutathionylation, was calculated using the position-dependent F-scores for each amino acid. Cut-off values were used for prediction. Our model returned an accuracy of 58% with a Matthews correlation coefficient (MCC) of 0.165. On an independent dataset, our model outperformed the currently available model, in spite of needing much less sequence information. Pentapeptide motifs with high abundance among glutathionylated proteins were identified. A list of potential glutathionylation hotspot sequences was obtained by assigning G-scores, and subsequent Protein-BLAST analysis revealed a total of 254 putative glutathionylatable proteins, a number of which were already known to be glutathionylated. Our model predicted glutathionylation sites in 93.93% of experimentally verified glutathionylated proteins. The outcome of this study may assist in discovering novel glutathionylation sites and finding candidate proteins for glutathionylation.
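
    One natural reading of the G-score described above is a sum of position-dependent F-scores over the five residues of a candidate pentapeptide, compared against a cut-off. The sketch below illustrates that scoring scheme in Python; the F-score table, the cut-off value, and the example peptides are hypothetical placeholders, not the values derived in the study.

        # Minimal sketch of pentapeptide scoring for glutathionylation prediction.
        # F_SCORES maps (position, amino_acid) -> position-dependent F-score; the
        # numbers and the cut-off below are illustrative placeholders only.

        F_SCORES = {
            (0, "K"): 0.8, (0, "A"): -0.2,
            (1, "S"): 0.4, (1, "L"): -0.1,
            (2, "C"): 0.0,                  # central cysteine, fixed position
            (3, "E"): 0.5, (3, "G"): 0.1,
            (4, "R"): 0.6, (4, "V"): -0.3,
        }
        CUTOFF = 1.0

        def g_score(pentapeptide):
            """Sum position-dependent F-scores over a 5-residue window."""
            if len(pentapeptide) != 5:
                raise ValueError("expected a pentapeptide centred on cysteine")
            return sum(F_SCORES.get((i, aa), 0.0) for i, aa in enumerate(pentapeptide))

        def predict_glutathionylation(pentapeptide, cutoff=CUTOFF):
            return g_score(pentapeptide) >= cutoff

        for seq in ("KSCER", "ALCGV"):      # hypothetical example peptides
            print(seq, round(g_score(seq), 2), predict_glutathionylation(seq))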

  2. An evaluation of emergency guidelines issued by the World Health Organization in response to four infectious disease outbreaks.

    PubMed

    Norris, Susan L; Sawin, Veronica Ivey; Ferri, Mauricio; Raques Sastre, Laura; Porgo, Teegwendé V

    2018-01-01

    The production of high-quality guidelines in response to public health emergencies poses challenges for the World Health Organization (WHO). The urgent need for guidance and the paucity of structured scientific data on emerging diseases hinder the formulation of evidence-informed recommendations using standard methods and procedures. In the context of the response to recent public health emergencies, this project aimed to describe the information products produced by WHO and to assess the quality and trustworthiness of the subset of these products classified as guidelines. We selected four recent infectious disease emergencies: the outbreaks of influenza A (H1N1) virus (2009), avian influenza A (H7N9) virus (2013), Middle East respiratory syndrome coronavirus (MERS-CoV) (2013), and Ebola virus disease (EVD) (2014 to 2016). We analyzed the development and publication processes and evaluated the quality of emergency guidelines using AGREE II. We included 175 information products, of which 87 were guidelines. These products demonstrated variable adherence to WHO publication requirements, including the listing of external contributors, management of declarations of interest, and entry into WHO's public database of publications. For the guidelines, the methods for development were incompletely reported; WHO's quality assurance process was rarely used; systematic or other evidence reviews were infrequently referenced; external peer review was not performed; and they scored poorly with AGREE II, particularly for rigour of development and editorial independence. Our study suggests that WHO guidelines produced in the context of a public health emergency can be improved upon, helping to assure the trustworthiness and utility of WHO information products in future emergencies.

  3. [Comparative analysis of quality labels of health websites].

    PubMed

    Padilla-Garrido, N; Aguado-Correa, F; Huelva-López, L; Ortega-Moreno, M

    2016-01-01

    The search for health-related information on the Internet is a growing phenomenon, but its main drawback is the lack of reliability of the information consulted. The aim of this study was to analyse and compare existing quality labels for health websites. A cross-sectional study was performed by searching Medline, IBECS, Google, and Yahoo, in both English and Spanish, between 8 and 9 March 2015. Different keywords were used depending on whether the search was conducted in medical databases or generic search engines. The quality labels were classified according to their origin, analysing their character, year of implementation, the existence of an accreditation process, number of categories, criteria and standards, possibility of self-assessment, number of levels of certification, certification scope, validity, analytical quality of content, fee, results of the accreditation process, application and number of websites granted the seal, and quality labels obtained by the accrediting organisation. Seven quality labels, five of Spanish origin (WMA, PAWS, WIS, SEAFORMEC and M21) and two international ones (HONcode and Health Web Site Accreditation), were analysed. There was disparity in how the accreditation process was carried out, with some labels not detailing key aspects of the process, or providing incomplete, outdated, or even inaccurate information. The most rigorous labels guaranteed a level of confidence in the websites' information content, but none checked the quality of that content. Although rigorous quality labels may become useful, the deficiencies in some of them cast doubt on their current usefulness. Copyright © 2015 SECA. Published by Elsevier España, S.L.U. All rights reserved.

  4. [A web-based integrated clinical database for laryngeal cancer].

    PubMed

    E, Qimin; Liu, Jialin; Li, Yong; Liang, Chuanyu

    2014-08-01

    To establish an integrated database for laryngeal cancer and to provide an information platform for clinical and fundamental research on laryngeal cancer that meets the needs of both clinical and scientific use. Under the guidance of clinical experts, we constructed a web-based integrated clinical database for laryngeal carcinoma on the basis of clinical data standards and Apache+PHP+MySQL technology, incorporating laryngeal cancer specialist characteristics and tumor genetic information. A web-based integrated clinical database for laryngeal carcinoma was developed. This database has a user-friendly interface, and data can be entered and queried conveniently. In addition, the system utilizes clinical data standards and exchanges information with the existing electronic medical record system to avoid information silos. Furthermore, the database forms are integrated with laryngeal cancer specialist characteristics and tumor genetic information. The web-based integrated clinical database for laryngeal carcinoma contains comprehensive specialist information, is readily expandable, is technically feasible and conforms to the clinical characteristics of the laryngeal cancer specialty. By using clinical data standards and structured handling of clinical data, the database can better meet the needs of scientific research and facilitate information exchange, and the information collected about tumor patients is highly informative. In addition, users can access and manipulate the database conveniently and quickly over the Internet.

  5. [Prenatal patient cards and quality of prenatal care in public health services in Greater Metropolitan Vitória, Espírito Santo State, Brazil].

    PubMed

    Santos Neto, Edson Theodoro dos; Oliveira, Adauto Emmerich; Zandonade, Eliana; Gama, Silvana Granado Nogueira da; Leal, Maria do Carmo

    2012-09-01

    This study aimed to assess the completeness of prenatal care information on the patients' prenatal care cards, according to coverage by various public health services: Family Health Strategy (FHS), Community-Based Health Workers' Program (CBHWP), and traditional Primary Care Units (PCU) in Greater Metropolitan Vitória, Espírito Santo State, Brazil. In a cross-sectional study, 1,006 prenatal cards were randomly selected from postpartum women at maternity hospitals in the metropolitan area. Completeness of the cards was assessed according to the criteria proposed by Romero & Cunha, which measure the quality on a scale from excellent (< 5% incomplete cards) to very bad (> 50% incomplete cards). In general, completion of information on the cards was bad (> 20% incomplete), but cards were filled out better in the FHS than in the CBHWP and PCU, especially for tetanus vaccination (p = 0.016) and gestational weight (p = 0.039). In conclusion, the quality of prenatal care in the public health system in Greater Metropolitan Vitória fails to meet the Brazilian national guidelines for maternal and child health.
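
    The Romero & Cunha criteria referenced above grade record completeness by the proportion of cards with missing information, from excellent (< 5% incomplete) to very bad (> 50% incomplete). The sketch below shows how such a grading could be applied; only the < 5%, > 20% ("bad") and > 50% cut-offs appear in the abstract, so the intermediate category boundaries used here are illustrative assumptions, as are the example percentages.

        # Minimal sketch: grading completeness of prenatal card fields by the share
        # of cards with the field left incomplete. Only the <5%, >20% and >50%
        # thresholds come from the abstract; the other boundaries are assumptions.

        def completeness_grade(pct_incomplete):
            if pct_incomplete < 5:
                return "excellent"
            if pct_incomplete < 10:      # assumed boundary
                return "good"
            if pct_incomplete < 20:      # assumed boundary
                return "regular"
            if pct_incomplete <= 50:
                return "bad"
            return "very bad"

        fields = {"tetanus vaccination": 18.0, "gestational weight": 27.5, "blood pressure": 3.2}
        for field, pct in fields.items():   # hypothetical percentages
            print(f"{field}: {pct:.1f}% incomplete -> {completeness_grade(pct)}")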

  6. 76 FR 53912 - FDA's Public Database of Products With Orphan-Drug Designation: Replacing Non-Informative Code...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-30

    ...] FDA's Public Database of Products With Orphan-Drug Designation: Replacing Non-Informative Code Names... replaced non- informative code names with descriptive identifiers on its public database of products that... on our public database with non-informative code names. After careful consideration of this matter...

  7. Tests for the Assessment of Sport-Specific Performance in Olympic Combat Sports: A Systematic Review With Practical Recommendations

    PubMed Central

    Chaabene, Helmi; Negra, Yassine; Bouguezzi, Raja; Capranica, Laura; Franchini, Emerson; Prieske, Olaf; Hbacha, Hamdi; Granacher, Urs

    2018-01-01

    The regular monitoring of physical fitness and sport-specific performance is important in elite sports to increase the likelihood of success in competition. This study aimed to systematically review and critically appraise the methodological quality, validation data, and feasibility of sport-specific performance assessments in Olympic combat sports such as amateur boxing, fencing, judo, karate, taekwondo, and wrestling. A systematic search was conducted in the electronic databases PubMed, Google Scholar, and ScienceDirect up to October 2017. Studies in combat sports were included if they reported validation data (e.g., reliability, validity, sensitivity) of sport-specific tests. Overall, 39 studies were eligible for inclusion in this review. The majority of studies (74%) had sample sizes of fewer than 30 subjects. Nearly one-third of the reviewed studies lacked a sufficient description (e.g., anthropometrics, age, expertise level) of the included participants. Seventy-two percent of studies did not sufficiently report inclusion/exclusion criteria for their participants. In 62% of the included studies, the description and/or inclusion of familiarization session(s) was either incomplete or absent. Sixty percent of studies did not report any details about the stability of testing conditions. Approximately half of the studies examined reliability measures of the included sport-specific tests (intraclass correlation coefficient [ICC] = 0.43–1.00). Content validity was addressed in all included studies, and criterion validity (only its concurrent aspect) in approximately half of the studies, with correlation coefficients ranging from r = −0.41 to 0.90. Construct validity was reported in 31% of the included studies and predictive validity in only one. Test sensitivity was addressed in 13% of the included studies. The majority of studies (64%) ignored or provided incomplete information on test feasibility and the methodological limitations of the sport-specific test. In 28% of the included studies, insufficient or no information was provided about the test's field of application. Several methodological gaps exist in studies that used sport-specific performance tests in Olympic combat sports. Additional research should adopt more rigorous validation procedures in the application and description of sport-specific performance tests in Olympic combat sports. PMID:29692739

  8. Multirate control with incomplete information over Profibus-DP network

    NASA Astrophysics Data System (ADS)

    Salt, J.; Casanova, V.; Cuenca, A.; Pizá, R.

    2014-07-01

    When a process field bus-decentralized peripherals (Profibus-DP) network is used in an industrial environment, deterministic behaviour is usually expected. However, due to concerns such as bandwidth limitations, lack of synchronisation among different clocks and the existence of time-varying delays, a more complex problem must be faced. This problem implies the transmission of irregular, and even random, sequences of incomplete information. The main consequence of this issue is the appearance of different sampling periods at different network devices. In this paper, this aspect is examined by means of a detailed Profibus-DP timescale study. In addition, in order to deal with the different periods, a delay-dependent dual-rate proportional-integral-derivative control scheme is introduced. Stability of the proposed control system is analysed in terms of linear matrix inequalities.
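
    The dual-rate idea described above, where measurements arrive at a slower (network-limited) rate than the rate at which the control signal is applied, can be sketched with a controller that recomputes its PID terms whenever a new measurement arrives and holds its output between measurements. The Python sketch below is a generic illustration with hypothetical gains and a toy plant, not the delay-dependent design analysed in the paper.

        # Minimal sketch of a dual-rate PID loop: the plant output is sampled every
        # N fast ticks (the slow, network-limited rate), while the control signal is
        # updated and applied at every fast tick via a zero-order hold.

        class DualRatePID:
            def __init__(self, kp, ki, kd, slow_period):
                self.kp, self.ki, self.kd = kp, ki, kd
                self.slow_period = slow_period      # seconds between measurements
                self.integral = 0.0
                self.prev_error = 0.0
                self.output = 0.0                   # held between measurements

            def on_measurement(self, setpoint, measurement):
                """Recompute the control signal when a new slow-rate sample arrives."""
                error = setpoint - measurement
                self.integral += error * self.slow_period
                derivative = (error - self.prev_error) / self.slow_period
                self.prev_error = error
                self.output = self.kp * error + self.ki * self.integral + self.kd * derivative
                return self.output

            def on_fast_tick(self):
                """Fast-rate update: apply the held output (zero-order hold)."""
                return self.output

        # Hypothetical first-order plant, fast period 10 ms, measurements every 50 ms.
        pid = DualRatePID(kp=2.0, ki=1.0, kd=0.05, slow_period=0.05)
        y, fast_dt, n_ratio = 0.0, 0.01, 5
        for k in range(200):
            if k % n_ratio == 0:                    # new measurement over the network
                pid.on_measurement(setpoint=1.0, measurement=y)
            u = pid.on_fast_tick()
            y += fast_dt * (-y + u)                 # simple plant: dy/dt = -y + u
        print(f"output after {200 * fast_dt:.1f} s: {y:.3f}")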

  9. Assessing the risk of bias in randomized controlled trials in the field of dentistry indexed in the Lilacs (Literatura Latino-Americana e do Caribe em Ciências da Saúde) database.

    PubMed

    Ferreira, Christiane Alves; Loureiro, Carlos Alfredo Salles; Saconato, Humberto; Atallah, Alvaro Nagib

    2011-03-01

    Well-conducted randomized controlled trials (RCTs) represent the highest level of evidence when the research question relates to the effect of therapeutic or preventive interventions. However, the degree of control over bias varies greatly between RCTs. For this reason, with the increasing interest in and production of systematic reviews and meta-analyses, it has been necessary to develop methodology supported by empirical evidence, so as to encourage and enhance the production of valid RCTs with a low risk of bias. The aim here was to conduct a methodological analysis within the field of dentistry regarding the risk of bias in open-access RCTs available in the Lilacs (Literatura Latino-Americana e do Caribe em Ciências da Saúde) database. This was a methodology study conducted at Universidade Federal de São Paulo (Unifesp) that assessed the risk of bias in RCTs using the following dimensions: allocation sequence generation, allocation concealment, blinding, and incomplete outcome data. Out of the 4,503 articles classified, only 10 studies (0.22%) were considered to be true RCTs and, of these, only a single study was classified as presenting a low risk of bias. The items that the authors of these RCTs most frequently controlled for were blinding and incomplete outcome data. The presence of bias seriously weakened the reliability of the results from the dental studies evaluated, such that they would be of little use to clinicians and administrators as support for decision-making processes.

  10. 75 FR 39618 - Proposed Information Collection (Request for Identifying Information Re: Veteran's Loan Records...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-09

    ..., benefits will not be paid or furnished by reason of an incomplete application. Affected Public: Individuals... Benefits Administration, Department of Veterans Affairs. ACTION: Notice. SUMMARY: The Veterans Benefits....Regulations.gov or to Nancy J. Kessinger, Veterans Benefits Administration (20M35), Department of Veterans...

  11. Three Essays on the Economics of Search

    ERIC Educational Resources Information Center

    Koulayev, Sergei

    2010-01-01

    This dissertation studies consumer search behavior in markets where buyers have incomplete information about available goods, such as markets with many sellers or frequently changing prices. In these markets, consumers engage in costly search in order to collect information necessary for making a purchase. Our method of investigation combines…

  12. The values jury to aid natural resource decisions

    Treesearch

    Thomas C. Brown; George L. Peterson; Bruce E. Tonn

    1995-01-01

    Congressional legislation emphasizes that public resource allocation should reflect the values citizens assign to those resources. Yet, information about assigned values and preferences of members of the public, including economic measures of value, required by decision makers is often incomplete or unavailable. Existing sources of information about the public's...

  13. On the Efficient Allocation of Resources for Hypothesis Evaluation in Machine Learning: A Statistical Approach

    NASA Technical Reports Server (NTRS)

    Chien, S.; Gratch, J.; Burl, M.

    1994-01-01

    In this report we consider a decision-making problem of selecting a strategy from a set of alternatives on the basis of incomplete information (e.g., a finite number of observations): the system can, however, gather additional information at some cost.

  14. Efficiently Ranking Hypotheses in Machine Learning

    NASA Technical Reports Server (NTRS)

    Chien, Steve

    1997-01-01

    This paper considers the problem of learning the ranking of a set of alternatives based upon incomplete information (e.g. a limited number of observations). At each decision cycle, the system can output a complete ordering on the hypotheses or decide to gather additional information (e.g. observation) at some cost.

  15. Deaths in natural hazards in the solomon islands.

    PubMed

    Blong, R J; Radford, D A

    1993-03-01

    Archival and library search techniques have been used to establish extensive databases on deaths and damage resulting from natural hazards in the Solomon Islands. Although the records of fatalities are certainly incomplete, volcanic eruptions, tropical cyclones, landslides, tsunami and earthquakes appear to have been the most important. Only 22 per cent of the recorded deaths have resulted from meteorological hazards but a single event could change this proportion significantly. Five events in the fatality database account for 88 per cent of the recorded deaths. Future death tolls are also likely to be dominated by a small number of events. While the expected number of deaths in a given period is dependent upon the length of record considered, it is clear that a disaster which kills one hundred or more people in the Solomons can be expected more frequently than once in a hundred years.

  16. Surgical versus expectant management in women with an incomplete evacuation of the uterus after treatment with misoprostol for miscarriage: the MisoREST trial

    PubMed Central

    2013-01-01

    Background Medical treatment with misoprostol is a non-invasive and inexpensive treatment option in first trimester miscarriage. However, about 30% of women treated with misoprostol have incomplete evacuation of the uterus. Although this finding is relatively asymptomatic in most cases, it often leads to additional surgical treatment (curettage). A comparison of the effectiveness and cost-effectiveness of surgical versus expectant management in women with incomplete miscarriage after misoprostol is lacking. Methods/Design The proposed study is a multicentre randomized controlled trial that assesses the costs and effects of curettage versus expectant management in women with incomplete evacuation of the uterus after misoprostol treatment for first trimester miscarriage. Eligible women will be randomized, after informed consent, within 24 hours after identification of incomplete evacuation of the uterus by ultrasound scanning. Women are randomly allocated to surgical or expectant management. Curettage is performed within three days after randomization. The primary outcome is the sonographic finding of an empty uterus (maximal diameter of any contents of the uterine cavity < 10 millimeters) six weeks after study entry. Secondary outcomes are patients' quality of life, surgical outcome parameters, the type and number of re-interventions during the first three months, and pregnancy rates and outcomes 12 months after study entry. Discussion This trial will provide evidence on the (cost-)effectiveness of surgical versus expectant management in women with incomplete evacuation of the uterus after misoprostol treatment for first trimester miscarriage. Trial registration Dutch Trial Register: NTR3110 PMID:23638956

  17. Management of incomplete abortion in South African public hospitals.

    PubMed

    Brown, H C; Jewkes, R; Levin, J; Dickson-Tetteh, K; Rees, H

    2003-04-01

    To describe the current management of incomplete abortion in South African public hospitals and to discuss the extent to which management is clinically appropriate. A multicentre, prospective, descriptive study. South African public hospitals that manage gynaecological emergencies. Hospitals were selected using a stratified random sampling method. All women who presented to the sampled hospitals with incomplete abortion during the three-week data collection period in 2000 were included. A data collection sheet was completed at the time of discharge for each woman admitted with a diagnosis of incomplete, complete, missed or inevitable abortion during the study period. Information gathered included demographic data, clinical signs and symptoms at admission, medical management, surgical management, anaesthetic management, use of blood products and antibiotics, and complications. Three clinical severity categories were used for the purpose of data analysis and interpretation. Detail of medical management, detail of surgical management, use of blood products and antibiotics, methods of analgesia and anaesthesia used, and use of abortifacients. There is a trend towards low-cost technology such as the use of manual vacuum aspiration and sedation anaesthesia; however, this is mainly limited to the better-resourced tertiary hospitals linked to academic units. The use of antibiotics and blood products has decreased, but much of this use is inappropriate. The use of abortifacients does include some use of misoprostol, but merely as an adjunct to surgical evacuation. The management of incomplete abortion remains a problem in South Africa, a low-income country that is still managing a common clinical problem with costly interventions. The evidence of a trend towards low-cost technology is promising, albeit limited to tertiary centres. This study has given us information on how best to address this problem. More training in low-cost methods is needed, targeting in particular the district and regional hospitals, and reinforced by skills training focused mainly on undergraduates and midwife post-abortion care programmes.

  18. Comparison of Online Agricultural Information Services.

    ERIC Educational Resources Information Center

    Reneau, Fred; Patterson, Richard

    1984-01-01

    Outlines major online agricultural information services--agricultural databases, databases with agricultural services, educational databases in agriculture--noting services provided, access to the database, and costs. Benefits of online agricultural database sources (availability of agricultural marketing, weather, commodity prices, management…

  19. WMC Database Evaluation. Case Study Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palounek, Andrea P. T

    The WMC Database is ultimately envisioned to hold a collection of experimental data, design information, and information from computational models. This project was a first attempt at using the Database to access experimental data and extract information from it. This evaluation shows that the Database concept is sound and robust, and that the Database, once fully populated, should remain eminently usable for future researchers.

  20. MIPS: analysis and annotation of proteins from whole genomes

    PubMed Central

    Mewes, H. W.; Amid, C.; Arnold, R.; Frishman, D.; Güldener, U.; Mannhaupt, G.; Münsterkötter, M.; Pagel, P.; Strack, N.; Stümpflen, V.; Warfsmann, J.; Ruepp, A.

    2004-01-01

    The Munich Information Center for Protein Sequences (MIPS-GSF), Neuherberg, Germany, provides protein sequence-related information based on whole-genome analysis. The main focus of the work is directed toward the systematic organization of sequence-related attributes as gathered by a variety of algorithms, primary information from experimental data together with information compiled from the scientific literature. MIPS maintains automatically generated and manually annotated genome-specific databases, develops systematic classification schemes for the functional annotation of protein sequences and provides tools for the comprehensive analysis of protein sequences. This report updates the information on the yeast genome (CYGD), the Neurospora crassa genome (MNCDB), the database of complete cDNAs (German Human Genome Project, NGFN), the database of mammalian protein–protein interactions (MPPI), the database of FASTA homologies (SIMAP), and the interface for the fast retrieval of protein-associated information (QUIPOS). The Arabidopsis thaliana database, the rice database, the plant EST databases (MATDB, MOsDB, SPUTNIK), as well as the databases for the comprehensive set of genomes (PEDANT genomes) are described elsewhere in the 2003 and 2004 NAR database issues, respectively. All databases described, and the detailed descriptions of our projects can be accessed through the MIPS web server (http://mips.gsf.de). PMID:14681354

  1. MIPS: analysis and annotation of proteins from whole genomes.

    PubMed

    Mewes, H W; Amid, C; Arnold, R; Frishman, D; Güldener, U; Mannhaupt, G; Münsterkötter, M; Pagel, P; Strack, N; Stümpflen, V; Warfsmann, J; Ruepp, A

    2004-01-01

    The Munich Information Center for Protein Sequences (MIPS-GSF), Neuherberg, Germany, provides protein sequence-related information based on whole-genome analysis. The main focus of the work is directed toward the systematic organization of sequence-related attributes as gathered by a variety of algorithms, primary information from experimental data together with information compiled from the scientific literature. MIPS maintains automatically generated and manually annotated genome-specific databases, develops systematic classification schemes for the functional annotation of protein sequences and provides tools for the comprehensive analysis of protein sequences. This report updates the information on the yeast genome (CYGD), the Neurospora crassa genome (MNCDB), the database of complete cDNAs (German Human Genome Project, NGFN), the database of mammalian protein-protein interactions (MPPI), the database of FASTA homologies (SIMAP), and the interface for the fast retrieval of protein-associated information (QUIPOS). The Arabidopsis thaliana database, the rice database, the plant EST databases (MATDB, MOsDB, SPUTNIK), as well as the databases for the comprehensive set of genomes (PEDANT genomes) are described elsewhere in the 2003 and 2004 NAR database issues, respectively. All databases described, and the detailed descriptions of our projects can be accessed through the MIPS web server (http://mips.gsf.de).

  2. The Inadequacy of Pediatric Fracture Care Information in Emergency Medicine and Pediatric Literature and Online Resources.

    PubMed

    Tileston, Kali; Bishop, Julius A

    2015-01-01

    Emergency medicine and pediatric physicians often provide initial pediatric fracture care; therefore, basic knowledge of the various treatment options is essential. The purpose of this study was to determine the accuracy of the information commonly available to these physicians in textbooks and online regarding the management of pediatric supracondylar humerus and femoral shaft fractures. The American Academy of Orthopaedic Surgeons Clinical Practice Guidelines for pediatric supracondylar humerus and femoral shaft fractures were used to assess the content of top-selling emergency medicine and pediatric textbooks as well as the top Web sites returned by a Google search. Only guidelines that addressed initial patient management were included. Information provided in the texts was graded as consistent, inconsistent, or omitted. Five emergency medicine textbooks, 4 pediatric textbooks, and 5 Web sites were assessed. Overall, these resources contained a mean of 31.6% (SD=32.5) complete and correct information, whereas 3.6% of the information was incorrect or inconsistent, and 64.8% was omitted. Emergency medicine textbooks had a mean of 34.3% (SD=28.3) correct and complete recommendations, 5.7% incorrect or incomplete recommendations, and 60% omissions. Pediatric textbooks addressed the American Academy of Orthopaedic Surgeons guidelines poorly, with an overall mean of 7.14% (SD=18.9) complete and correct recommendations, a single incorrect/incomplete recommendation, and 91.1% omissions. Online resources had a mean of 48.6% (SD=33.1) complete and correct recommendations, 5.72% incomplete or incorrect recommendations, and 45.7% omissions. This study highlights important deficiencies in the resources available to pediatric and emergency medicine physicians seeking information on pediatric fracture management. Information in emergency medicine and pediatric textbooks as well as online is variable, with both inaccuracies and omissions being common. This lack of high-quality information could compromise patient care. Resources should be committed to ensuring that accurate and complete information is readily available to all physicians providing pediatric fracture care. In addition, orthopaedic surgeons should take an active role in ensuring that non-orthopaedic textbooks and online resources contain complete and accurate information.

  3. Three Library and Information Science Databases Revisited: Currency, Coverage and Overlap, Interindexing Consistency.

    ERIC Educational Resources Information Center

    Blackwell, Michael Lind

    This study evaluates the "Education Resources Information Center" (ERIC), "Library and Information Science Abstracts" (LISA), and "Library Literature" (LL) databases, determining how long the databases take to enter records (indexing delay), how much duplication of effort exists among the three databases (indexing…

  4. An Integrated Molecular Database on Indian Insects.

    PubMed

    Pratheepa, Maria; Venkatesan, Thiruvengadam; Gracy, Gandhi; Jalali, Sushil Kumar; Rangheswaran, Rajagopal; Antony, Jomin Cruz; Rai, Anil

    2018-01-01

    The MOlecular Database on Indian Insects (MODII) is an online database linking several resources: Insect Pest Info, the Insect Barcode Information System (IBIn), insect whole-genome sequences, other genomic resources of the National Bureau of Agricultural Insect Resources (NBAIR), whole-genome sequencing of honey bee viruses, an insecticide resistance gene database, and genomic tools. The database was developed with a holistic approach to collecting phenomic and genomic information on agriculturally important insects. This insect resource database is freely available online at http://cib.res.in. http://cib.res.in/.

  5. 47 CFR 69.120 - Line information database.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 3 2011-10-01 2011-10-01 false Line information database. 69.120 Section 69...) ACCESS CHARGES Computation of Charges § 69.120 Line information database. (a) A charge that is expressed... from a local exchange carrier database to recover the costs of: (1) The transmission facilities between...

  6. 47 CFR 69.120 - Line information database.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 3 2013-10-01 2013-10-01 false Line information database. 69.120 Section 69...) ACCESS CHARGES Computation of Charges § 69.120 Line information database. (a) A charge that is expressed... from a local exchange carrier database to recover the costs of: (1) The transmission facilities between...

  7. 47 CFR 69.120 - Line information database.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 3 2014-10-01 2014-10-01 false Line information database. 69.120 Section 69...) ACCESS CHARGES Computation of Charges § 69.120 Line information database. (a) A charge that is expressed... from a local exchange carrier database to recover the costs of: (1) The transmission facilities between...

  8. 47 CFR 69.120 - Line information database.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Line information database. 69.120 Section 69...) ACCESS CHARGES Computation of Charges § 69.120 Line information database. (a) A charge that is expressed... from a local exchange carrier database to recover the costs of: (1) The transmission facilities between...

  9. 47 CFR 69.120 - Line information database.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 3 2012-10-01 2012-10-01 false Line information database. 69.120 Section 69...) ACCESS CHARGES Computation of Charges § 69.120 Line information database. (a) A charge that is expressed... from a local exchange carrier database to recover the costs of: (1) The transmission facilities between...

  10. Recalls of Vehicles and Engines

    EPA Pesticide Factsheets

    In cases where EPA determines that manufacturers have provided inaccurate, incomplete or falsified certification information, or have failed to keep required records, the Clean Air Act gives the EPA the authority to void certificates.

  11. A Multilevel Comprehensive Assessment of International Accreditation for Business Programmes-Based on AMBA Accreditation of GDUFS

    ERIC Educational Resources Information Center

    Jiang, Yong

    2017-01-01

    Traditional mathematical methods built around exactitude have limitations when applied to the processing of educational information, owing to the uncertainty and imperfection of that information. Alternative mathematical methods, such as grey system theory, have been widely applied in processing incomplete information systems and have proven effective in a number of…

  12. Ideology and Critical Self-Reflection in Information Literacy Instruction

    ERIC Educational Resources Information Center

    Critten, Jessica

    2015-01-01

    Information literacy instruction traditionally focuses on evaluating a source for bias, relevance, and timeliness, and rightfully so; this critical perspective is vital to a well-formed research process. However, this process is incomplete without a similar focus on the potential biases that the student brings to his or her interactions with…

  13. The Protein Information Resource: an integrated public resource of functional annotation of proteins

    PubMed Central

    Wu, Cathy H.; Huang, Hongzhan; Arminski, Leslie; Castro-Alvear, Jorge; Chen, Yongxing; Hu, Zhang-Zhi; Ledley, Robert S.; Lewis, Kali C.; Mewes, Hans-Werner; Orcutt, Bruce C.; Suzek, Baris E.; Tsugita, Akira; Vinayaka, C. R.; Yeh, Lai-Su L.; Zhang, Jian; Barker, Winona C.

    2002-01-01

    The Protein Information Resource (PIR) serves as an integrated public resource of functional annotation of protein data to support genomic/proteomic research and scientific discovery. The PIR, in collaboration with the Munich Information Center for Protein Sequences (MIPS) and the Japan International Protein Information Database (JIPID), produces the PIR-International Protein Sequence Database (PSD), the major annotated protein sequence database in the public domain, containing about 250 000 proteins. To improve protein annotation and the coverage of experimentally validated data, a bibliography submission system is developed for scientists to submit, categorize and retrieve literature information. Comprehensive protein information is available from iProClass, which includes family classification at the superfamily, domain and motif levels, structural and functional features of proteins, as well as cross-references to over 40 biological databases. To provide timely and comprehensive protein data with source attribution, we have introduced a non-redundant reference protein database, PIR-NREF. The database consists of about 800 000 proteins collected from PIR-PSD, SWISS-PROT, TrEMBL, GenPept, RefSeq and PDB, with composite protein names and literature data. To promote database interoperability, we provide XML data distribution and open database schema, and adopt common ontologies. The PIR web site (http://pir.georgetown.edu/) features data mining and sequence analysis tools for information retrieval and functional identification of proteins based on both sequence and annotation information. The PIR databases and other files are also available by FTP (ftp://nbrfa.georgetown.edu/pir_databases). PMID:11752247

  14. Acupuncture and vitamin B12 injection for Bell’s palsy: no high-quality evidence exists

    PubMed Central

    Wang, Li-li; Guan, Ling; Hao, Peng-liang; Du, Jin-long; Zhang, Meng-xue

    2015-01-01

    OBJECTIVE: To assess the efficacy of acupuncture combined with vitamin B12 acupoint injection versus acupuncture alone for reducing incomplete recovery in patients with Bell's palsy. DATA RETRIEVAL: A computer-based online search of the Medline, Web of Science, CNKI, and CBM databases up to April 2014 was performed for relevant trials, using the key words "Bell's palsy or idiopathic facial palsy or facial palsy" and "acupuncture or vitamin B12 or methylcobalamin". STUDY SELECTION: All randomized controlled trials that compared acupuncture alone with acupuncture combined with vitamin B12 in patients with Bell's palsy were included in the meta-analysis. The initial treatment lasted for at least 4 weeks. The outcome of incomplete facial recovery was monitored. The scoring index varied, but the definition of healing was consistent. The combined effect size was calculated as relative risk (RR) with a 95% confidence interval (CI) using the fixed-effect model of Review Manager. MAIN OUTCOME MEASURES: Incomplete recovery rates were chosen as the primary outcome. RESULTS: Five studies involving 344 patients were included in the final analysis. Results showed that the incomplete recovery rate in patients with Bell's palsy was 44.50% in the acupuncture combined with vitamin B12 group but 62.57% in the acupuncture alone group. The major acupoints were Taiyang (EX-HN5), Jiache (ST6), Dicang (ST4) and Sibai (ST2). The combined effect size showed that acupuncture combined with vitamin B12 was better than acupuncture alone for the treatment of Bell's palsy (RR = 0.71, 95% CI: 0.58–0.87; P = 0.001); this result held true when 8 patients lost to follow-up in one study were included in the analysis (RR = 0.70, 95% CI: 0.58–0.86; P = 0.0005). In the subgroup analyses, the therapeutic effect in the electroacupuncture subgroup was better than in the non-electroacupuncture subgroup (P = 0.024). There was no significant difference in the incomplete recovery rate in subgroup analyses by drug type or treatment period. Most of the included studies were of moderate or low quality, and bias existed. CONCLUSION: In patients with Bell's palsy, acupuncture combined with vitamin B12 reduced the risk of incomplete recovery compared with acupuncture alone in our meta-analysis. Because of study bias and methodological limitations, this conclusion is uncertain, and the clinical application of acupuncture combined with vitamin B12 requires further exploration. PMID:26109959
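
    The pooled relative risk reported above comes from a fixed-effect meta-analysis in Review Manager. As a rough illustration of that type of calculation, the sketch below pools hypothetical per-study event counts with the inverse-variance method on the log relative risk scale; the counts are placeholders, not the data from the five included trials, and Review Manager's default fixed-effect weighting (e.g., Mantel-Haenszel) may differ in detail.

        import math

        # Minimal sketch of a fixed-effect meta-analysis of relative risks using
        # inverse-variance weighting on the log-RR scale. The per-study counts are
        # hypothetical placeholders. Each tuple is
        # (events_treatment, n_treatment, events_control, n_control),
        # where an "event" is incomplete recovery.

        studies = [(14, 40, 22, 40), (10, 35, 18, 36), (16, 30, 21, 32)]

        log_rrs, weights = [], []
        for a, n1, c, n2 in studies:
            rr = (a / n1) / (c / n2)
            se = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n2)   # SE of log(RR)
            log_rrs.append(math.log(rr))
            weights.append(1 / se ** 2)

        pooled = sum(w * lr for w, lr in zip(weights, log_rrs)) / sum(weights)
        se_pooled = math.sqrt(1 / sum(weights))
        lo = math.exp(pooled - 1.96 * se_pooled)
        hi = math.exp(pooled + 1.96 * se_pooled)

        print(f"pooled RR = {math.exp(pooled):.2f} (95% CI {lo:.2f} to {hi:.2f})")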

  15. The integrated web service and genome database for agricultural plants with biotechnology information.

    PubMed

    Kim, Changkug; Park, Dongsuk; Seol, Youngjoo; Hahn, Jangho

    2011-01-01

    The National Agricultural Biotechnology Information Center (NABIC) has constructed an agricultural biology-based infrastructure and developed a Web-based relational database for agricultural plants with biotechnology information. The NABIC has concentrated on the functional genomics of major agricultural plants, building an integrated biotechnology database for agro-biotech information that focuses on the genomics of major agricultural resources. This genome database provides annotated genome information from 1,039,823 records mapped to rice, Arabidopsis, and Chinese cabbage.

  16. MimoSA: a system for minimotif annotation

    PubMed Central

    2010-01-01

    Background Minimotifs are short peptide sequences within one protein that are recognized by other proteins or molecules. While there are now several minimotif databases, they are incomplete. There are reports of many minimotifs in the primary literature which have yet to be annotated, while entirely novel minimotifs continue to be published on a weekly basis. Our recently proposed function and sequence syntax for minimotifs enables us to build a general tool that will facilitate structured annotation and management of minimotif data from the biomedical literature. Results We have built the MimoSA application for minimotif annotation. The application supports management of the Minimotif Miner database, literature tracking, and annotation of new minimotifs. MimoSA enables the visualization, organization, selection and editing of minimotifs and their attributes in the MnM database. For the literature components, MimoSA provides paper status tracking and scoring of papers for annotation through a freely available machine learning approach based on word correlation. The paper-scoring algorithm is also available as a separate program, TextMine. Form-driven annotation of minimotif attributes enables entry of new minimotifs into the MnM database. Several supporting features increase the efficiency of annotation. The layered architecture of MimoSA allows for extensibility by separating the functions of paper scoring, minimotif visualization, and database management. MimoSA is readily adaptable to other annotation efforts that manually curate literature into a MySQL database. Conclusions MimoSA is an extensible application that facilitates minimotif annotation and integrates with the Minimotif Miner database. We have built MimoSA as an application that integrates dynamic abstract scoring with a high-performance relational model of minimotif syntax. MimoSA's TextMine, an efficient paper-scoring algorithm, can be used to dynamically rank papers with respect to context. PMID:20565705
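
    The abstract describes TextMine only as a freely available, word-correlation-based scorer for ranking papers. A very rough illustration of that idea, ranking candidate abstracts by cosine similarity of their word-count vectors against the aggregate of abstracts already judged relevant, is sketched below; it is not the actual TextMine algorithm, and the example texts are invented.

        import math
        import re
        from collections import Counter

        # Minimal sketch of word-correlation scoring for paper triage: rank candidate
        # abstracts by cosine similarity of their word-count vectors against the
        # aggregate vector of abstracts already annotated as relevant. Not the
        # actual TextMine algorithm; the texts below are invented examples.

        def word_vector(text):
            return Counter(re.findall(r"[a-z]+", text.lower()))

        def cosine(u, v):
            dot = sum(u[w] * v[w] for w in set(u) & set(v))
            norm = math.sqrt(sum(c * c for c in u.values())) * math.sqrt(sum(c * c for c in v.values()))
            return dot / norm if norm else 0.0

        relevant = [
            "a short peptide minimotif mediates binding to a protein domain",
            "we identify a novel minimotif recognized by an sh3 domain",
        ]
        candidates = {
            "paper A": "binding of a proline rich minimotif peptide to its target domain",
            "paper B": "crop yields respond to irrigation schedules in dry seasons",
        }

        centroid = Counter()
        for text in relevant:
            centroid.update(word_vector(text))

        ranked = sorted(candidates.items(),
                        key=lambda kv: cosine(word_vector(kv[1]), centroid),
                        reverse=True)
        for name, text in ranked:
            print(f"{name}: score = {cosine(word_vector(text), centroid):.3f}")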

  17. Quantification of missing prescriptions in commercial claims databases: results of a cohort study.

    PubMed

    Cepeda, Maria Soledad; Fife, Daniel; Denarié, Michel; Bradford, Dan; Roy, Stephanie; Yuan, Yingli

    2017-04-01

    This study aims to quantify the magnitude of missed dispensings in commercial claims databases. A retrospective cohort study was conducted, linking PharMetrics, a commercial claims database, to a prescription database (LRx) that captures pharmacy dispensings independently of payment method, including cash transactions. We included adults with dispensings for opioids, diuretics, antiplatelet medications, or anticoagulants. To determine the degree of capture of dispensings, we calculated the number of subjects with the following: (1) same number of dispensings in both databases; (2) at least one dispensing, but not all dispensings, missed in PharMetrics; and (3) all dispensings missing in PharMetrics. Similar analyses were conducted using dispensings as the unit of analysis. To count an LRx dispensing as captured in PharMetrics, the PharMetrics dispensing had to be for the same medication class and dated within ±7 days of the LRx dispensing. A total of 1 426 498 subjects were included. Overall, 68% of subjects had the same number of dispensings in both databases. In 13% of subjects, PharMetrics identified ≥1 dispensing but also missed ≥1 dispensing. In 19% of the subjects, PharMetrics missed all the dispensings. Taking dispensings as the unit of analysis, 25% of the dispensings present in LRx were not captured in PharMetrics. These patterns were similar across all four classes of medications. Of the dispensings missing in PharMetrics, 48% involved a subject who had >1 health insurance plan. Commercial claims databases provide an incomplete picture of all prescriptions dispensed to patients. The lack of capture goes beyond cash transactions and potentially introduces substantial misclassification bias. © 2017 The Authors. Pharmacoepidemiology & Drug Safety Published by John Wiley & Sons Ltd.
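
    The capture check described above hinges on a simple matching rule: an LRx dispensing counts as captured if PharMetrics contains a dispensing of the same medication class within ±7 days. A minimal sketch of that rule, with invented field names, might look like this:

```python
from datetime import date, timedelta

WINDOW = timedelta(days=7)

def is_captured(lrx_disp, pharmetrics_disps):
    """Return True if an LRx dispensing has a PharMetrics match:
    same medication class and a fill date within +/- 7 days."""
    for p in pharmetrics_disps:
        if (p["med_class"] == lrx_disp["med_class"]
                and abs(p["fill_date"] - lrx_disp["fill_date"]) <= WINDOW):
            return True
    return False

# Hypothetical records for one subject
lrx = {"med_class": "opioid", "fill_date": date(2015, 3, 10)}
claims = [
    {"med_class": "opioid", "fill_date": date(2015, 3, 14)},
    {"med_class": "diuretic", "fill_date": date(2015, 3, 10)},
]
print(is_captured(lrx, claims))  # True: an opioid fill within 7 days exists
```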

  18. The use and misuse of biomedical data: is bigger really better?

    PubMed

    Hoffman, Sharona; Podgurski, Andy

    2013-01-01

    Very large biomedical research databases, containing electronic health records (EHR) and genomic data from millions of patients, have been heralded recently for their potential to accelerate scientific discovery and produce dramatic improvements in medical treatments. Research enabled by these databases may also lead to profound changes in law, regulation, social policy, and even litigation strategies. Yet, is "big data" necessarily better data? This paper makes an original contribution to the legal literature by focusing on what can go wrong in the process of biomedical database research and what precautions are necessary to avoid critical mistakes. We address three main reasons for approaching such research with care and being cautious in relying on its outcomes for purposes of public policy or litigation. First, the data contained in biomedical databases is surprisingly likely to be incorrect or incomplete. Second, systematic biases, arising from both the nature of the data and the preconceptions of investigators, are serious threats to the validity of research results, especially in answering causal questions. Third, data mining of biomedical databases makes it easier for individuals with political, social, or economic agendas to generate ostensibly scientific but misleading research findings for the purpose of manipulating public opinion and swaying policymakers. In short, this paper sheds much-needed light on the problems of credulous and uninformed acceptance of research results derived from biomedical databases. An understanding of the pitfalls of big data analysis is of critical importance to anyone who will rely on or dispute its outcomes, including lawyers, policymakers, and the public at large. The Article also recommends technical, methodological, and educational interventions to combat the dangers of database errors and abuses.

  19. Problem of two-level hierarchical minimax program control the final state of regional social and economic system with incomplete information

    NASA Astrophysics Data System (ADS)

    Shorikov, A. F.

    2016-12-01

    In this article we consider a discrete-time dynamical system consisting of a set of controllable objects (a region and the municipalities forming it). The dynamics of each object are described by corresponding linear or nonlinear discrete-time recurrent vector relations, and its control system consists of two levels: a basic level (control level I), which is the dominating level, and an auxiliary level (control level II), which is the subordinate level. The two levels have different criteria of functioning and are united by information and control connections defined in advance. In this article we study the problem of optimizing the guaranteed result of program control of the final state of a regional social and economic system in the presence of risk vectors. For this problem we propose a mathematical model in the form of a two-level hierarchical minimax program control problem for the final states of this system with incomplete information, together with a general scheme for its solution.

  20. Converging on the optimal attainment of requirements

    NASA Technical Reports Server (NTRS)

    Feather, M. S.; Menzies, T.

    2002-01-01

    Planning for the optimal attainment of requirements is an important early lifecycle activity. However, such planning is difficult when dealing with competing requirements, limited resources, and the incompleteness of information available at requirements time.

  1. 40 CFR 725.33 - Incomplete submissions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... which the submitter believes show that the microorganism will not present an unreasonable risk of injury... Director, or a designee, may inform the submitter that the running of the review period will resume on the...

  2. The Effect of Augmenting OPTN Data With External Death Data on Calculating Patient Survival Rates After Organ Transplantation.

    PubMed

    Wilk, Amber R; Edwards, Leah B; Edwards, Erick B

    2017-04-01

    Although the Organ Procurement and Transplantation Network (OPTN) database contains a rich set of data on United States transplant recipients, follow-up data may be incomplete. It was of interest to determine if augmenting OPTN data with external death data altered patient survival estimates. Solitary kidney, liver, heart, and lung transplants performed between January 1, 2011, and January 31, 2013, were queried from the OPTN database. Unadjusted Kaplan-Meier 3-year patient survival rates were computed using four non-mutually exclusive augmented datasets: OPTN only, OPTN + verified external deaths, OPTN + verified + unverified external deaths (OPTN + all), and an additional source extending recipient survival time if no death was found in OPTN + all (OPTN + all [Assumed Alive]). Pairwise comparisons were made using unadjusted Cox proportional hazards analyses applying Bonferroni adjustments. Although differences in patient survival rates across data sources were small (≤1 percentage point), OPTN only data often yielded slightly higher patient survival rates than sources including external death data. No significant differences were found after Bonferroni adjustment, including comparing OPTN + verified (hazard ratio [HR], 1.05; 95% confidence interval [95% CI], 1.00-1.10; P = 0.0356), OPTN + all (HR, 1.06; 95% CI, 1.01-1.11; P = 0.0243), and OPTN + all (Assumed Alive) (HR, 1.00; 95% CI, 0.96-1.05; P = 0.8587) versus OPTN only, or OPTN + verified (HR, 1.05; 95% CI, 1.00-1.10; P = 0.0511), and OPTN + all (HR, 1.05; 95% CI, 1.00-1.10; P = 0.0353) versus OPTN + all (Assumed Alive). Patient survival rates varied minimally with augmented data sources, although using external death data without extending the survival time of recipients not identified in these sources results in a biased estimate. It remains important for transplant centers to maintain contact with transplant recipients and obtain necessary follow-up information, because this information can improve the transplantation process for future recipients.
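
    The comparison above rests on unadjusted Kaplan-Meier estimates computed under each augmented data source. As a rough illustration of the estimator itself (not of the OPTN analysis), the following sketch computes a product-limit survival probability from hypothetical follow-up times in days:

```python
def kaplan_meier(times, events, horizon):
    """Product-limit survival estimate at `horizon`.

    times  : follow-up time for each recipient
    events : 1 if a death was observed, 0 if censored
    """
    at_risk = len(times)
    surv = 1.0
    # process deaths before censorings at tied times
    for t, d in sorted(zip(times, events), key=lambda x: (x[0], -x[1])):
        if t > horizon:
            break
        if d == 1:
            surv *= (at_risk - 1) / at_risk
        at_risk -= 1
    return surv

# Hypothetical follow-up times (days) and event indicators
times  = [200, 450, 800, 1100, 1100, 365, 950, 700]
events = [1,   0,   1,   0,    0,    1,   0,   0]
print(f"3-year survival ~ {kaplan_meier(times, events, 3 * 365):.2f}")
```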

  3. Quantum-like Probabilistic Models Outside Physics

    NASA Astrophysics Data System (ADS)

    Khrennikov, Andrei

    We present a quantum-like (QL) model in which contexts (complexes of e.g. mental, social, biological, economic or even political conditions) are represented by complex probability amplitudes. This approach gives the possibility to apply the mathematical quantum formalism to probabilities induced in any domain of science. In our model quantum randomness appears not as irreducible randomness (as it is commonly accepted in conventional quantum mechanics, e.g. by von Neumann and Dirac), but as a consequence of obtaining incomplete information about a system. We pay main attention to the QL description of processing of incomplete information. Our QL model can be useful in cognitive, social and political sciences as well as economics and artificial intelligence. In this paper we consider in more detail one special application: QL modeling of the brain's functioning. The brain is modeled as a QL-computer.

  4. Nutrient Loadings to Streams of the Continental United States from Municipal and Industrial Effluent

    USGS Publications Warehouse

    Maupin, M.A.; Ivahnenko, T.

    2011-01-01

    Data from the United States Environmental Protection Agency Permit Compliance System national database were used to calculate annual total nitrogen (TN) and total phosphorus (TP) loads to surface waters from municipal and industrial facilities in six major regions of the United States for 1992, 1997, and 2002. Concentration and effluent flow data were examined for approximately 118,250 facilities in 45 states and the District of Columbia. Inconsistent and incomplete discharge locations, effluent flows, and effluent nutrient concentrations limited the use of these data for calculating nutrient loads. More concentrations were reported for major facilities, those discharging more than 1 million gallons per day, than for minor facilities, and more concentrations were reported for TP than for TN. Analytical methods to check and improve the quality of the Permit Compliance System data were used. Annual loads were calculated using "typical pollutant concentrations" to supplement missing concentrations based on the type and size of facilities. Annual nutrient loads for over 26,600 facilities were calculated for at least one of the three years. Sewage systems represented 74% of all TN loads and 58% of all TP loads. This work represents an initial set of data to develop a comprehensive and consistent national database of point-source nutrient loads. These loads can be used to inform a wide range of water-quality management, watershed modeling, and research efforts at multiple scales. © 2011 American Water Resources Association. This article is a U.S. Government work and is in the public domain in the USA.
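
    Annual facility loads of the kind described above come down to multiplying an effluent concentration by a flow and converting units. A minimal, hypothetical example follows; the conversion constants are the standard ones for mg/L and million gallons per day, but the facility values are invented.

```python
LITERS_PER_MILLION_GALLONS = 3.785e6

def annual_load_kg(conc_mg_per_l, flow_mgd, days=365):
    """Annual load in kilograms from a mean concentration (mg/L)
    and a mean effluent flow (million gallons per day)."""
    mg_per_day = conc_mg_per_l * flow_mgd * LITERS_PER_MILLION_GALLONS
    return mg_per_day * days / 1e6   # mg -> kg

# Hypothetical facility: 18 mg/L total nitrogen at 2.5 MGD
print(f"TN load ~ {annual_load_kg(18, 2.5):,.0f} kg/yr")
```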

  5. Transmembrane proteins in the Protein Data Bank: identification and classification.

    PubMed

    Tusnády, Gábor E; Dosztányi, Zsuzsanna; Simon, István

    2004-11-22

    Integral membrane proteins play important roles in living cells. Although these proteins are estimated to constitute 25% of proteins at a genomic scale, the Protein Data Bank (PDB) contains only a few hundred membrane proteins due to the difficulties with experimental techniques. Transmembrane proteins are, however, nearly invisible in the structure data bank, as the annotation of these entries is rather poor. Even if a protein is identified as a transmembrane one, the possible location of the lipid bilayer is not indicated in the PDB because these proteins are crystallized without their natural lipid bilayer, and currently no method is publicly available to detect the possible membrane plane using the atomic coordinates of membrane proteins. Here, we present a new geometrical approach to distinguish between transmembrane and globular proteins using structural information only and to locate the most likely position of the lipid bilayer. An automated algorithm (TMDET) is given to determine the membrane planes relative to the position of atomic coordinates, together with a discrimination function which is able to separate transmembrane and globular proteins even in cases of low resolution or incomplete structures such as fragments or parts of large multi-chain complexes. This method can be used for the proper annotation of protein structures containing transmembrane segments and paves the way to an up-to-date database containing the structure of all known transmembrane proteins and fragments (PDB_TM) which can be automatically updated. The algorithm is equally important for the purpose of constructing databases purely of globular proteins.

  6. Connections between survey calibration estimators and semiparametric models for incomplete data

    PubMed Central

    Lumley, Thomas; Shaw, Pamela A.; Dai, James Y.

    2012-01-01

    Survey calibration (or generalized raking) estimators are a standard approach to the use of auxiliary information in survey sampling, improving on the simple Horvitz–Thompson estimator. In this paper we relate the survey calibration estimators to the semiparametric incomplete-data estimators of Robins and coworkers, and to adjustment for baseline variables in a randomized trial. The development based on calibration estimators explains the ‘estimated weights’ paradox and provides useful heuristics for constructing practical estimators. We present some examples of using calibration to gain precision without making additional modelling assumptions in a variety of regression models. PMID:23833390
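
    For readers unfamiliar with the baseline estimator being improved upon, the Horvitz-Thompson estimate weights each sampled unit by the inverse of its inclusion probability; calibration then adjusts those weights so that weighted auxiliary totals match known population totals. The toy sketch below uses made-up data and a simple ratio calibration to a single auxiliary total rather than full generalized raking.

```python
def horvitz_thompson(y, pi):
    """Estimate the population total of y from inclusion probabilities pi."""
    return sum(yi / p for yi, p in zip(y, pi))

def ratio_calibrated(y, x, pi, x_total):
    """Calibrate the design weights 1/pi so the weighted total of the
    auxiliary variable x equals its known population total."""
    w = [1 / p for p in pi]
    scale = x_total / sum(wi * xi for wi, xi in zip(w, x))
    return sum(scale * wi * yi for wi, yi in zip(w, y))

# Hypothetical sample: outcome y, auxiliary x, inclusion probabilities pi
y  = [12.0, 7.5, 9.0, 14.2]
x  = [1.0, 0.8, 0.9, 1.3]
pi = [0.02, 0.05, 0.04, 0.02]
print(horvitz_thompson(y, pi))           # plain HT estimate of the y-total
print(ratio_calibrated(y, x, pi, 95.0))  # calibrated to a known x-total of 95.0
```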

  7. 21 CFR 830.350 - Correction of information submitted to the Global Unique Device Identification Database.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... Unique Device Identification Database. 830.350 Section 830.350 Food and Drugs FOOD AND DRUG... Global Unique Device Identification Database § 830.350 Correction of information submitted to the Global Unique Device Identification Database. (a) If FDA becomes aware that any information submitted to the...

  8. Design and Establishment of Quality Model of Fundamental Geographic Information Database

    NASA Astrophysics Data System (ADS)

    Ma, W.; Zhang, J.; Zhao, Y.; Zhang, P.; Dang, Y.; Zhao, T.

    2018-04-01

    In order to make the quality evaluation of Fundamental Geographic Information Databases (FGIDB) more comprehensive, objective and accurate, this paper develops a quality model of FGIDB formed by the standardization of database construction and quality control, the conformity of data set quality, and the functionality of the database management system, and designs the overall principles, contents and methods of quality evaluation for FGIDB, providing a basis and reference for carrying out quality control and quality evaluation. Based on this framework, the paper designs the quality elements, evaluation items and properties of the Fundamental Geographic Information Database step by step. Connected organically, these quality elements and evaluation items constitute the quality model of the Fundamental Geographic Information Database. This model is the foundation for stipulating quality requirements and evaluating the quality of the Fundamental Geographic Information Database, and is of great significance for quality assurance in the design and development stage, requirement formulation in the testing and evaluation stage, and the construction of a standard system for quality evaluation technology.

  9. Unenhanced MR Angiography of Uterine and Ovarian Arteries after Uterine Artery Embolization: Differences between Patients with Incomplete and Complete Fibroid Infarction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mori, Kensaku, E-mail: moriken@md.tsukuba.ac.jp; Saida, Tsukasa; Shibuya, Yoko

    Purpose: To compare the status of uterine and ovarian arteries after uterine artery embolization (UAE) in patients with incomplete and complete fibroid infarction via unenhanced 3D time-of-flight magnetic resonance (MR) angiography. Materials and Methods: Thirty-five consecutive women (mean age 43 years; range 26-52 years) with symptomatic uterine fibroids underwent UAE and MR imaging before and within 2 months after UAE. The patients were divided into incomplete and complete fibroid infarction groups on the basis of the postprocedural gadolinium-enhanced MR imaging findings. Two independent observers reviewed unenhanced MR angiography before and after UAE to determine bilateral uterine and ovarian arterial flow scores. The total arterial flow scores were calculated by summing the scores of the 4 arteries. All scores were compared with the Mann-Whitney test. Results: Fourteen and 21 patients were assigned to the incomplete and complete fibroid infarction groups, respectively. The total arterial flow score in the incomplete fibroid infarction group was significantly greater than that in the complete fibroid infarction group (P = 0.019 and P = 0.038 for observers 1 and 2, respectively). In 3 patients, additional therapy was recommended for insufficient fibroid infarction. In 1 of the 3 patients, bilateral ovarian arteries were invisible before UAE but seemed enlarged after UAE. Conclusion: The total arterial flow from bilateral uterine and ovarian arteries is reduced less in patients with incomplete fibroid infarction than in those with complete fibroid infarction. Postprocedural MR angiography provides useful information to estimate the cause of insufficient fibroid infarction in individual cases.

  10. An Innovative Approach to Improve Completeness of Treatment and Other Key Data Elements in a Population-Based Cancer Registry: A 15-Month Data Submission.

    PubMed

    Hsieh, Mei-Chin; Mumphrey, Brent; Pareti, Lisa; Yi, Yong; Wu, Xiao-Cheng

    2017-01-01

    BACKGROUND: In order to comply with the Louisiana legislative obligation and meet funding agencies’ requirement of case completeness for 12-month data submission, hospital cancer registries are mandated to submit cancer incidence data to the Louisiana Tumor Registry (LTR) within 6 months of diagnosis. However, enforcing compliance with timely reporting may result in incomplete data on adjuvant treatment received by the LTR. Although additional treatment information can be obtained via retransmission of the North American Association of Central Cancer Registries (NAACCR)-modified abstracts, consolidating multiple NAACCR-modified abstracts for the same case is extremely time-consuming. To avoid a huge amount of work while obtaining timely and complete data, the LTR has requested hospital cancer registries resubmit their data 15 months after the close of the diagnosis year. The purpose of this report is to assess the improvement in the completeness of data items related to treatment, staging and site-specific factors. METHODS: The LTR requested that hospital cancer registries resubmit 15-month data between April 1, 2016 and April 15, 2016 for cases diagnosed in 2014. Microsoft Visual Studio Visual Basic script was used to link and compare resubmitted data with existing data in the LTR database. Data elements used for matching the same patient/tumor were name, Social Security number, date of birth, primary site, laterality, and hospital identifier number. Treatment data items were compared as known vs none/unknown and known vs known with a different code. Matched records with updated information were imported into the LTR database and flagged as modified abstract records for manual consolidation. Nonmatched records were also loaded into the LTR database as potential new cases for further investigation. RESULTS: A total of 25,207 resubmitted NAACCR abstracts were received from 38 hospitals and freestanding radiation centers. About 11.1% had at least 1 update related to treatment and/or another data item, an average of 3.3 updates per updated abstract. The majority of the treatment updates (45.7%) were changes from a none/unknown to a known value, and 42.6% of the updates were related to radiation treatment fields. In addition, 172 potential new cases were identified. Approximately 10.5% (18 cases) of these new cases were confirmed to be truly missed cases after investigation. CONCLUSION: The 15-month data resubmission is a cost-effective approach to obtaining complete information on treatment and other key data items from reporting facilities and can also be used to identify potential missed cases.
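
    The linkage step described above matches resubmitted abstracts to existing registry records on a fixed set of identifiers and then flags treatment fields that changed from none/unknown to a known value. A simplified sketch, with hypothetical field names, follows:

```python
MATCH_KEYS = ("name", "ssn", "dob", "primary_site", "laterality", "hospital_id")
UNKNOWN = {None, "", "unknown", "none"}

def match_key(record):
    return tuple(record[k] for k in MATCH_KEYS)

def treatment_updates(existing, resubmitted, fields=("radiation", "chemo")):
    """Pair records on the match keys and report treatment fields that
    changed from none/unknown to a known value."""
    index = {match_key(r): r for r in existing}
    updates = []
    for new in resubmitted:
        old = index.get(match_key(new))
        if old is None:
            updates.append(("potential_new_case", new))
            continue
        for f in fields:
            if str(old.get(f)).lower() in UNKNOWN and new.get(f) not in UNKNOWN:
                updates.append((f, old, new))
    return updates

# Hypothetical records
existing = [{"name": "DOE,JANE", "ssn": "123", "dob": "1950-01-01",
             "primary_site": "C50.9", "laterality": "1", "hospital_id": "H01",
             "radiation": "none"}]
resubmit = [dict(existing[0], radiation="beam")]
print(treatment_updates(existing, resubmit))
```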

  11. Evaluation of consumer drug information databases.

    PubMed

    Choi, J A; Sullivan, J; Pankaskie, M; Brufsky, J

    1999-01-01

    To evaluate prescription drug information contained in six consumer drug information databases available on CD-ROM, and to make health care professionals aware of the information provided, so that they may appropriately recommend these databases for use by their patients. Observational study of six consumer drug information databases: The Corner Drug Store, Home Medical Advisor, Mayo Clinic Family Pharmacist, Medical Drug Reference, Mosby's Medical Encyclopedia, and PharmAssist. Not applicable. Not applicable. Information on 20 frequently prescribed drugs was evaluated in each database. The databases were ranked using a point-scale system based on primary and secondary assessment criteria. For the primary assessment, 20 categories of information based on those included in the 1998 edition of the USP DI Volume II, Advice for the Patient: Drug Information in Lay Language were evaluated for each of the 20 drugs, and each database could earn up to 400 points (for example, 1 point was awarded if the database mentioned a drug's mechanism of action). For the secondary assessment, the inclusion of 8 additional features that could enhance the utility of the databases was evaluated (for example, 1 point was awarded if the database contained a picture of the drug), and each database could earn up to 8 points. The results of the primary and secondary assessments, listed in order of highest to lowest number of points earned, are as follows: Primary assessment--Mayo Clinic Family Pharmacist (379), Medical Drug Reference (251), PharmAssist (176), Home Medical Advisor (113.5), The Corner Drug Store (98), and Mosby's Medical Encyclopedia (18.5); secondary assessment--The Mayo Clinic Family Pharmacist (8), The Corner Drug Store (5), Mosby's Medical Encyclopedia (5), Home Medical Advisor (4), Medical Drug Reference (4), and PharmAssist (3). The Mayo Clinic Family Pharmacist was the most accurate and complete source of prescription drug information based on the USP DI Volume II and would be an appropriate database for health care professionals to recommend to patients.

  12. The integrated web service and genome database for agricultural plants with biotechnology information

    PubMed Central

    Kim, ChangKug; Park, DongSuk; Seol, YoungJoo; Hahn, JangHo

    2011-01-01

    The National Agricultural Biotechnology Information Center (NABIC) constructed an agricultural biology-based infrastructure and developed a Web based relational database for agricultural plants with biotechnology information. The NABIC has concentrated on functional genomics of major agricultural plants, building an integrated biotechnology database for agro-biotech information that focuses on genomics of major agricultural resources. This genome database provides annotated genome information from 1,039,823 records mapped to rice, Arabidopsis, and Chinese cabbage. PMID:21887015

  13. The development of digital library system for drug research information.

    PubMed

    Kim, H J; Kim, S R; Yoo, D S; Lee, S H; Suh, O K; Cho, J H; Shin, H T; Yoon, J P

    1998-01-01

    The sophistication of computer technology and information transmission on the internet has made various cyber information repositories available to information consumers. In the era of the information super-highway, the digital library, which can be accessed from remote sites at any time, is considered the prototype of the information repository. Using an object-oriented DBMS, the very first model of a digital library for pharmaceutical researchers and related professionals in Korea has been developed. Published research papers and researchers' personal information were included in the database. For the research paper database, 13 domestic journals were abstracted and scanned to produce full-text image files that can be viewed by Internet web browsers. The database of researchers' personal information was also developed and interlinked to the research paper database. These databases will be continuously updated and will be combined with world-wide information as a unique digital library in the field of pharmacy.

  14. 75 FR 49489 - Establishment of a New System of Records for Personal Information Collected by the Environmental...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-13

    ... information. Access to any such database system is limited to system administrators, individuals responsible... during the certification process. The above information will be contained in one or more databases (such as Lotus Notes) that reside on servers in EPA offices. The database(s) may be specific to one...

  15. Is Darwinism Dead?

    ERIC Educational Resources Information Center

    Futuyma, Douglas J.

    1985-01-01

    Outlines principles of evolutionary theory, including such recent changes as punctuated equilibria. Indicates that the incompleteness of Darwin's theory has been replaced with a conceptual framework and empirical information. Controversial issues remain, but the basic ideas still stand strong. (DH)

  16. Actions to Void Certificates for Vehicle and Engines

    EPA Pesticide Factsheets

    In cases where EPA determines that manufacturers have provided inaccurate, incomplete or falsified certification information or failed to keep required records, the Clean Air Act gives the EPA the authority to void certificates.

  17. 77 FR 4545 - Notice of Proposed Information Collection Requests

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-30

    ... activities, prior to the usage of the TTT survey, missing or incomplete data made it difficult to aggregate... the ``Browse Pending Collections'' link and by clicking on link number 4794. When you access the...

  18. NABIC marker database: A molecular markers information network of agricultural crops.

    PubMed

    Kim, Chang-Kug; Seol, Young-Joo; Lee, Dong-Jun; Jeong, In-Seon; Yoon, Ung-Han; Lee, Gang-Seob; Hahn, Jang-Ho; Park, Dong-Suk

    2013-01-01

    In 2013, the National Agricultural Biotechnology Information Center (NABIC) reconstructed a molecular marker database for useful genetic resources. The web-based marker database consists of three major functional categories: map viewer, RSN marker and gene annotation. It provides 7,250 marker locations, 3,301 RSN marker properties, and 3,280 molecular marker annotation records for agricultural plants. Each molecular marker record provides information such as marker name, expressed sequence tag number, gene definition and general marker information. This updated marker database provides useful information through a user-friendly web interface that assists in tracing new chromosome structures and gene positional functions using specific molecular markers. The database is available for free at http://nabic.rda.go.kr/gere/rice/molecularMarkers/

  19. 77 FR 24925 - Privacy Act of 1974; System of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-26

    ... CES Personnel Information System database of NIFA. This database is updated annually from data provided by 1862 and 1890 land-grant universities. This database is maintained by the Agricultural Research... reviewer. NIFA maintains a database of potential reviewers. Information in the database is used to match...

  20. 78 FR 65293 - Collection of Information; Proposed Extension of Approval; Comment Request-Publicly Available...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-31

    ... Extension of Approval; Comment Request--Publicly Available Consumer Product Safety Information Database... Publicly Available Consumer Product Safety Information Database. The Commission will consider all comments... intention to seek extension of approval of a collection of information for a database on the safety of...

  1. 78 FR 18232 - Amendment of VOR Federal Airway V-233, Springfield, IL

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-26

    ... it matches the information contained in the FAA's aeronautical database, matches the depiction on the... description did not match the airway information contained in the FAA's aeronautical database or the charted... information that should have been used. The FAA aeronautical database contains the correct radial information...

  2. Database Management: Building, Changing and Using Databases. Collected Papers and Abstracts of the Mid-Year Meeting of the American Society for Information Science (15th, Portland, Oregon, May 1986).

    ERIC Educational Resources Information Center

    American Society for Information Science, Washington, DC.

    This document contains abstracts of papers on database design and management which were presented at the 1986 mid-year meeting of the American Society for Information Science (ASIS). Topics considered include: knowledge representation in a bilingual art history database; proprietary database design; relational database design; in-house databases;…

  3. Raising orphans from a metadata morass: A researcher's guide to re-use of public 'omics data.

    PubMed

    Bhandary, Priyanka; Seetharam, Arun S; Arendsee, Zebulun W; Hur, Manhoi; Wurtele, Eve Syrkin

    2018-02-01

    More than 15 petabases of raw RNA-seq data are now accessible through public repositories. Acquisition of other 'omics data types is expanding, though most lack a centralized archival repository. Data reuse provides tremendous opportunity to extract new knowledge from existing experiments, and offers a unique opportunity for robust, multi-'omics analyses by merging metadata (information about experimental design, biological samples, protocols) and data from multiple experiments. We illustrate how predictive research can be accelerated by meta-analysis with a study of orphan (species-specific) genes. Computational predictions are critical to infer orphan function because their coding sequences provide very few clues. The metadata in public databases is often confusing; a test case with Zea mays mRNA-seq data reveals a high proportion of missing, misleading or incomplete metadata. This metadata morass significantly diminishes the insight that can be extracted from these data. We provide tips for data submitters and users, including specific recommendations to improve metadata quality by more use of controlled vocabulary and by metadata reviews. Finally, we advocate for a unified, straightforward metadata submission and retrieval system. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Recommendations of the 2006 Human Variome Project meeting.

    PubMed

    Cotton, Richard G H; Appelbe, William; Auerbach, Arleen D; Becker, Kevin; Bodmer, Walter; Boone, D Joe; Boulyjenkov, Victor; Brahmachari, Samir; Brody, Lawrence; Brookes, Anthony; Brown, Alastair F; Byers, Peter; Cantu, Jose Maria; Cassiman, Jean-Jacques; Claustres, Mireille; Concannon, Patrick; Cotton, Richard G H; den Dunnen, Johan T; Flicek, Paul; Gibbs, Richard; Hall, Judith; Hasler, Julia; Katz, Michael; Kwok, Pui-Yan; Laradi, Sandrine; Lindblom, Annika; Maglott, Donna; Marsh, Steven; Masimirembwa, Collen Muto; Minoshima, Shinsei; de Ramirez, Ana Maria Oller; Pagon, Roberta; Ramesar, Raj; Ravine, David; Richards, Sue; Rimoin, David; Ring, Huijun Z; Scriver, Charles R; Sherry, Stephen; Shimizu, Nobuyoshi; Stein, Lincoln; Tadmouri, Ghazi Omar; Taylor, Graham; Watson, Michael

    2007-04-01

    Lists of variations in genomic DNA and their effects have been kept for some time and have been used in diagnostics and research. Although these lists have been carefully gathered and curated, there has been little standardization and coordination, complicating their use. Given the myriad possible variations in the estimated 24,000 genes in the human genome, it would be useful to have standard criteria for databases of variation. Incomplete collection and ascertainment of variants demonstrates a need for a universally accessible system. These and other problems led to the World Health Organization-cosponsored meeting on June 20-23, 2006 in Melbourne, Australia, which launched the Human Variome Project. This meeting addressed all areas of human genetics relevant to collection of information on variation and its effects. Members of each of eight sessions (the clinic and phenotype, the diagnostic laboratory, the research laboratory, curation and collection, informatics, relevance to the emerging world, integration and federation, and funding and sustainability) developed a number of recommendations that were then organized into a total of 96 recommendations to act as a foundation for future work worldwide. Here we summarize the background of the project, the meeting and its recommendations.

  5. Measuring Self-Care in Persons With Type 2 Diabetes: A Systematic Review

    PubMed Central

    Lu, Yan; Xu, Jiayun; Zhao, Weigang; Han, Hae-Ra

    2015-01-01

    This systematic review examines the characteristics and psychometric properties of the instruments used to assess self-care behaviors among persons with type 2 diabetes. Electronic databases were searched for relevant studies published in English within the past 20 years. Thirty different instruments were identified in 75 articles: 18 original instruments on type 2 diabetes mellitus self-care, 8 translated or revised version, and 4 not specific but relevant to diabetes. Twenty-one instruments were multidimensional and addressed multiple dimensions of self-care behavior. Nine were unidimensional: three focusing exclusively on medication taking, three on diet, one on physical activity, one on self-monitoring of blood glucose, and one on oral care. Most instruments (22 of 30) were developed during the last decade. Only 10 were repeated more than once. Nineteen of the 30 instruments reported both reliability and validity information but with varying degrees of rigor. In conclusion, most instruments used to measure self-care were relatively new and had been applied to only a limited number of studies with incomplete psychometric profiles. Rigorous psychometric testing, operational definition of self-care, and sufficient explanation of scoring need to be considered for further instrument development. PMID:26130465

  6. The use of haemostatic gelatin sponges in veterinary surgery.

    PubMed

    Charlesworth, T M; Agthe, P; Moores, A; Anderson, D M

    2012-01-01

    To describe the use of absorbable gelatin sponges as haemostatic implants in clinical veterinary surgical cases and to document any related postoperative complications. Practice databases were searched for the product names "Gelfoam" and "Spongostan". Patient records were retrieved and data regarding patient signalment, surgical procedure, National Resource Council (NRC) wound classification, source of haemorrhage, pre- and postoperative body temperature, postoperative complications, time to discharge and details of any postoperative imaging were recorded and reviewed. Follow-up information was obtained by repeat clinical examination or telephone interview with either the owner or referring veterinary surgeon. Cases with incomplete surgical records or those which were not recovered from anaesthesia were excluded from the analysis. Fifty cases (44 dogs and 6 cats) satisfied the inclusion criteria. Satisfactory haemostasis was achieved in 49 cases with one case requiring reoperation during which a second gelatin sponge was used. There were no detected hypersensitivity responses or confirmed postoperative complications relating to the use of gelatin sponges during the follow-up period (median 13 months). This is the first review of the use of gelatin sponges in clinical veterinary surgery and suggests that gelatin sponges are safe to use in cats and dogs. © 2011 British Small Animal Veterinary Association.

  7. Twenty five years long survival analysis of individual shortleaf pine trees

    Treesearch

    Pradip Saud; Thomas B. Lynch; James M. Guldin

    2016-01-01

    A semiparametric Cox proportional hazards model is preferred when censored data and survival time information are available (Kleinbaum and Klein 1996; Alison 2010). Censored data are observations that have incomplete information about the survival time or event time of interest. In repeated forest measurements, observations are usually either right censored or...

  8. Prevalence and Associated Risk Factors of Anemia in Children and Adolescents with Intellectual Disabilities

    ERIC Educational Resources Information Center

    Lin, Jin-Ding; Lin, Pei-Ying; Lin, Lan-Ping; Hsu, Shang-Wei; Loh, Ching-Hui; Yen, Chia-Feng; Fang, Wen-Hui; Chien, Wu-Chien; Tang, Chi-Chieh; Wu, Chia-Ling

    2010-01-01

    Anemia is known to be a significant public health problem in many countries. Most of the available information is incomplete or limited to special groups such as people with intellectual disability. The present study aims to provide the information of anemia prevalence and associated risk factors of children and adolescents with intellectual…

  9. Factors Associated with Incomplete Reporting of HIV and AIDS by Uganda's Surveillance System

    ERIC Educational Resources Information Center

    Akankunda, Denis B.

    2014-01-01

    Background: Over the last 20 years, Uganda has piloted and implemented various management information systems (MIS) for better surveillance of HIV/AIDS. With support from the United States Government, Uganda introduced the District Health Information Software 2 (DHIS2) in 2012. However, districts have yet to fully adapt to this system given a…

  10. Sex-oriented stable matchings of the marriage problem with correlated and incomplete information

    NASA Astrophysics Data System (ADS)

    Caldarelli, Guido; Capocci, Andrea; Laureti, Paolo

    2001-10-01

    In the stable marriage problem two sets of agents must be paired according to mutual preferences, which may happen to conflict. We present two generalizations of its sex-oriented version, aiming to take into account correlations between the preferences of agents and costly information. Their effects are investigated both numerically and analytically.
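
    The underlying matching problem is usually solved with the Gale-Shapley deferred-acceptance algorithm, which yields a stable matching that is optimal for the proposing side (the "sex-oriented" asymmetry the abstract refers to). The compact sketch below implements the classic algorithm with invented preference lists; it does not cover the correlated-preference or costly-information generalizations studied in the paper.

```python
def gale_shapley(men_prefs, women_prefs):
    """Proposer-optimal stable matching (men propose)."""
    free = list(men_prefs)                      # men still unmatched
    next_choice = {m: 0 for m in men_prefs}     # index of next woman to propose to
    rank = {w: {m: i for i, m in enumerate(p)} for w, p in women_prefs.items()}
    engaged = {}                                # woman -> man

    while free:
        m = free.pop(0)
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:  # w prefers m to her current partner
            free.append(engaged[w])
            engaged[w] = m
        else:
            free.append(m)
    return engaged

men = {"a": ["x", "y"], "b": ["y", "x"]}
women = {"x": ["b", "a"], "y": ["a", "b"]}
print(gale_shapley(men, women))   # {'x': 'a', 'y': 'b'}
```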

  11. Impact of Violation of the Missing-at-Random Assumption on Full-Information Maximum Likelihood Method in Multidimensional Adaptive Testing

    ERIC Educational Resources Information Center

    Han, Kyung T.; Guo, Fanmin

    2014-01-01

    The full-information maximum likelihood (FIML) method makes it possible to estimate and analyze structural equation models (SEM) even when data are partially missing, enabling incomplete data to contribute to model estimation. The cornerstone of FIML is the missing-at-random (MAR) assumption. In (unidimensional) computerized adaptive testing…

  12. Sensitivity Analysis of Multiple Informant Models When Data Are Not Missing at Random

    ERIC Educational Resources Information Center

    Blozis, Shelley A.; Ge, Xiaojia; Xu, Shu; Natsuaki, Misaki N.; Shaw, Daniel S.; Neiderhiser, Jenae M.; Scaramella, Laura V.; Leve, Leslie D.; Reiss, David

    2013-01-01

    Missing data are common in studies that rely on multiple informant data to evaluate relationships among variables for distinguishable individuals clustered within groups. Estimation of structural equation models using raw data allows for incomplete data, and so all groups can be retained for analysis even if only 1 member of a group contributes…

  13. Management and land use implications of continuous nitrogen and phosphorus monitoring in a small non-karst catchment in southeastern PA

    USDA-ARS?s Scientific Manuscript database

    Long-term climate and water quality monitoring data provide some of the most essential and informative information to the scientific community. These datasets however, are often incomplete and do not have frequent enough sampling to provide full explanations of trends. With the advent of continuous ...

  14. Role of GLI2 in hypopituitarism phenotype.

    PubMed

    Arnhold, Ivo J P; França, Marcela M; Carvalho, Luciani R; Mendonca, Berenice B; Jorge, Alexander A L

    2015-06-01

    GLI2 is a zinc-finger transcription factor involved in the Sonic Hedgehog pathway. Gli2 mutant mice have hypoplastic anterior and absent posterior pituitary glands. We reviewed the literature for patients with hypopituitarism and alterations in GLI2. Twenty-five patients (16 families) had heterozygous truncating mutations, and the phenotype frequently included GH deficiency, a small anterior pituitary lobe and an ectopic/undescended posterior pituitary lobe on magnetic resonance imaging, and postaxial polydactyly. The inheritance pattern was autosomal dominant with incomplete penetrance and variable expressivity. The mutation was frequently inherited from an asymptomatic parent. Eleven patients had heterozygous non-synonymous GLI2 variants that were classified as variants of unknown significance, because they were either absent from or had a frequency lower than 0.001 in the databases. In these patients, the posterior pituitary was also ectopic, but none had polydactyly. A third group of variants found in patients with hypopituitarism were considered benign because their frequency was ≥ 0.001 in the databases. GLI2 is a large and polymorphic gene, and sequencing may identify variants whose interpretation is difficult. Incomplete penetrance implies the participation of other genetic and/or environmental factors. An interaction between Gli2 mutations and prenatal ethanol exposure has been demonstrated in mouse dysmorphology. In conclusion, a relatively high frequency of GLI2 mutations and variants was identified in patients with congenital GH deficiency without other brain defects, and most of these patients presented with combined pituitary hormone deficiency and an ectopic posterior pituitary lobe. Future studies may clarify the relative role and frequency of GLI2 alterations in the aetiology of hypopituitarism. © 2015 Society for Endocrinology.

  15. Annual Review of Database Developments: 1993.

    ERIC Educational Resources Information Center

    Basch, Reva

    1993-01-01

    Reviews developments in the database industry for 1993. Topics addressed include scientific and technical information; environmental issues; social sciences; legal information; business and marketing; news services; documentation; databases and document delivery; electronic bulletin boards and the Internet; and information industry organizational…

  16. 16 CFR 1102.24 - Designation of confidential information.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... ACT REGULATIONS PUBLICLY AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE Procedural... allegedly confidential information is not placed in the database, a request for designation of confidential... publication in the Database until it makes a determination regarding confidential treatment. (e) Assistance...

  17. 16 CFR 1102.24 - Designation of confidential information.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... ACT REGULATIONS PUBLICLY AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE Procedural... allegedly confidential information is not placed in the database, a request for designation of confidential... publication in the Database until it makes a determination regarding confidential treatment. (e) Assistance...

  18. Interventions targeted at women to encourage the uptake of cervical screening

    PubMed Central

    Everett, Thomas; Bryant, Andrew; Griffin, Michelle F; Martin-Hirsch, Pierre PL; Forbes, Carol A; Jepson, Ruth G

    2014-01-01

    Background World-wide, cervical cancer is the second most common cancer in women. Increasing the uptake of screening, alongside increasing informed choice is of great importance in controlling this disease through prevention and early detection. Objectives To assess the effectiveness of interventions aimed at women, to increase the uptake, including informed uptake, of cervical cancer screening. Search methods We searched the Cochrane Gynaecological Cancer Group Trials Register, Cochrane Central Register of Controlled Trials (CENTRAL), Issue 1, 2009. MEDLINE, EMBASE and LILACS databases up to March 2009. We also searched registers of clinical trials, abstracts of scientific meetings, reference lists of included studies and contacted experts in the field. Selection criteria Randomised controlled trials (RCTs) of interventions to increase uptake/informed uptake of cervical cancer screening. Data collection and analysis Two review authors independently abstracted data and assessed risk of bias. Where possible the data were synthesised in a meta-analysis. Main results Thirty-eight trials met our inclusion criteria. These trials assessed the effectiveness of invitational and educational interventions, counselling, risk factor assessment and procedural interventions. Heterogeneity between trials limited statistical pooling of data. Overall, however, invitations appear to be effective methods of increasing uptake. In addition, there is limited evidence to support the use of educational materials. Secondary outcomes including cost data were incompletely documented so evidence was limited. Most trials were at moderate risk of bias. Informed uptake of cervical screening was not reported in any trials. Authors’ conclusions There is evidence to support the use of invitation letters to increase the uptake of cervical screening. There is limited evidence to support educational interventions but it is unclear what format is most effective. The majority of the studies are from developed countries and so the relevance to developing countries is unclear. PMID:21563135

  19. LAILAPS-QSM: A RESTful API and JAVA library for semantic query suggestions.

    PubMed

    Chen, Jinbo; Scholz, Uwe; Zhou, Ruonan; Lange, Matthias

    2018-03-01

    In order to access and filter the content of life-science databases, full-text search is a widely applied query interface. But its high flexibility and intuitiveness are paid for with potentially imprecise and incomplete query results. To reduce this drawback, query assistance systems suggest those combinations of keywords with the highest potential to match most of the relevant data records. Widespread approaches are syntactic query corrections that avoid misspelling and support expansion of words by suffixes and prefixes. Synonym expansion approaches apply thesauri, ontologies, and query logs. All need laborious curation and maintenance. Furthermore, access to query logs is in general restricted. Approaches that infer related queries from a query profile such as research field, geographic location, co-authorship, or affiliation require user registration and its public accessibility, which contradicts privacy concerns. To overcome these drawbacks, we implemented LAILAPS-QSM, a machine learning approach that reconstructs possible linguistic contexts of a given keyword query. The context is inferred from the text records that are stored in the databases that are going to be queried, or extracted from PubMed abstracts and UniProt data for general-purpose query suggestion. The supplied tool suite enables the pre-processing of these text records and the further computation of customized distributed word vectors. The latter are used to suggest alternative keyword queries. An evaluation of the query suggestion quality was done for plant science use cases. Locally available experts enabled a cost-efficient quality assessment in the categories trait, biological entity, taxonomy, affiliation, and metabolic function, performed using ontology term similarities. The LAILAPS-QSM mean information content similarity for 15 representative queries is 0.70, whereas 34% have a score above 0.80. In comparison, the information content similarity for query suggestions made by human experts is 0.90. The software is available either as a tool set to build and train dedicated query suggestion services or as an already trained general-purpose RESTful web service. The service uses open interfaces to be seamlessly embeddable into database frontends. The JAVA implementation uses highly optimized data structures and streamlined code to provide fast and scalable responses to web service calls. The source code of LAILAPS-QSM is available under GNU General Public License version 2 in the Bitbucket GIT repository: https://bitbucket.org/ipk_bit_team/bioescorte-suggestion.
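
    The query suggestion step boils down to ranking candidate keywords by their similarity to the user's query terms in a distributed word-vector space. The fragment below sketches that ranking with cosine similarity over a tiny hand-made embedding table; it stands in for the customized word vectors LAILAPS-QSM trains from database text records, and all vectors and terms here are invented.

```python
import numpy as np

# Hypothetical word vectors; in LAILAPS-QSM these would be trained
# from the text records of the databases being queried.
vectors = {
    "drought":   np.array([0.9, 0.1, 0.0]),
    "tolerance": np.array([0.7, 0.3, 0.1]),
    "yield":     np.array([0.2, 0.9, 0.1]),
    "barley":    np.array([0.1, 0.2, 0.9]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def suggest(query_term, k=2):
    """Rank vocabulary terms by cosine similarity to the query term."""
    q = vectors[query_term]
    scored = [(cosine(q, v), w) for w, v in vectors.items() if w != query_term]
    return sorted(scored, reverse=True)[:k]

print(suggest("drought"))   # most similar candidate keywords first
```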

  20. National Geochemical Database reformatted data from the National Uranium Resource Evaluation (NURE) Hydrogeochemical and Stream Sediment Reconnaissance (HSSR) program

    USGS Publications Warehouse

    Smith, Steven M.

    1997-01-01

    The National Uranium Resource Evaluation (NURE) Hydrogeochemical and Stream Sediment Reconnaissance (HSSR) program produced a large amount of geochemical data. To fully understand how these data were generated, it is recommended that you read the History of the NURE HSSR Program for a summary of the entire program. By the time the NURE program had ended, the HSSR data consisted of 894 separate data files stored in 47 different formats. Many files contained duplication of data found in other files. The University of Oklahoma's Information Systems Programs of the Energy Resources Institute (ISP) was contracted by the Department of Energy to enhance the accessibility and usefulness of the NURE HSSR data. ISP created a single standard-format master file to replace the 894 original files. ISP converted 817 of the 894 original files before its funding apparently ran out. The ISP-reformatted NURE data files have been released by the USGS on CD-ROM (Lower 48 States, Hoffman and Buttleman, 1994; Alaska, Hoffman and Buttleman, 1996). A description of each NURE database field, derived from a draft NURE HSSR data format manual (unpubl. commun., Stan Moll, ISP, Oct 7, 1988), was included in a readme file on each CD-ROM. That original manual was incomplete and assumed that the reformatting process had gone to completion. Much vital information was not included. Efforts to correct that manual and the NURE data revealed a large number of problems and missing data. As a result of the frustrating process of cleaning and re-cleaning data from the ISP-reformatted NURE files, a new NURE HSSR data format was developed. This work represents a totally new attempt to reformat the original NURE files into two consistent database structures: one for water samples and a second for sediment samples, on a quadrangle-by-quadrangle basis. Although this USGS-reformatted NURE HSSR data format is different from that created by the ISP, many of their ideas were incorporated and expanded in this effort. All of the data from each quadrangle are being examined thoroughly in an attempt to eliminate problems, to combine partial or duplicate records, to convert all coding to a common scheme, and to identify problems even if they cannot be solved at this time.

  1. Information management strategies within conversations about cigarette smoking: parenting correlates and longitudinal associations with teen smoking.

    PubMed

    Metzger, Aaron; Wakschlag, Lauren S; Anderson, Ryan; Darfler, Anne; Price, Juliette; Flores, Zujeil; Mermelstein, Robin

    2013-08-01

    The present study examined smoking-specific and general parenting predictors of in vivo observed patterns of parent-adolescent discussion concerning adolescents' cigarette smoking experiences and associations between these observed patterns and 24-month longitudinal trajectories of teen cigarette smoking behavior (nonsmokers, current experimenters, escalators). Parental solicitation, adolescent disclosure, and adolescent information management were coded from direct observations of 528 video-recorded parent-adolescent discussions about cigarette smoking with 344 teens (M age = 15.62 years) with a history of smoking experimentation (321 interactions with mothers, 207 interactions with fathers). Adolescent initiation of discussions concerning their own smoking behavior (21% of interactions) was predicted by lower levels of maternal observed disapproval of cigarette smoking and fewer teen-reported communication problems with mothers. Maternal initiation in discussions (35% of interactions) was associated with higher levels of family rules about illicit substance use. Three categories of adolescent information management (full disclosure, active secrecy, incomplete strategies) were coded by matching adolescents' confidential self-reported smoking status with their observed spontaneous disclosures and responses to parental solicitations. Fully disclosing teens reported higher quality communication with their mothers (more open, less problematic). Teens engaged in active secrecy with their mothers when families had high levels of parental rules about illicit substance use and when mothers expressed lower levels of expectancies that their teen would smoke in the future. Adolescents were more likely to escalate their smoking over 2 years if their parents initiated the discussion of adolescent smoking behavior (solicited) and if adolescents engaged in active secrecy. (PsycINFO Database Record (c) 2013 APA, all rights reserved).

  2. 16 CFR 1102.42 - Disclaimers.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE Notice and Disclosure Requirements § 1102.42... Consumer Product Safety Information Database, particularly with respect to the accuracy, completeness, or adequacy of information submitted by persons outside of the CPSC. The Database will contain a notice to...

  3. 16 CFR § 1102.24 - Designation of confidential information.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... SAFETY ACT REGULATIONS PUBLICLY AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE Procedural... allegedly confidential information is not placed in the database, a request for designation of confidential... publication in the Database until it makes a determination regarding confidential treatment. (e) Assistance...

  4. 16 CFR 1102.42 - Disclaimers.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE Notice and Disclosure Requirements § 1102.42... Consumer Product Safety Information Database, particularly with respect to the accuracy, completeness, or adequacy of information submitted by persons outside of the CPSC. The Database will contain a notice to...

  5. 16 CFR 1102.42 - Disclaimers.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE (Eff. Jan. 10, 2011) Notice and Disclosure... of the contents of the Consumer Product Safety Information Database, particularly with respect to the accuracy, completeness, or adequacy of information submitted by persons outside of the CPSC. The Database...

  6. [Establishment of a comprehensive database for laryngeal cancer related genes and the miRNAs].

    PubMed

    Li, Mengjiao; E, Qimin; Liu, Jialin; Huang, Tingting; Liang, Chuanyu

    2015-09-01

    By collecting and analyzing laryngeal cancer-related genes and miRNAs, we built a comprehensive laryngeal cancer-related gene database. Unlike existing biological information databases with complex and unwieldy structures, it focuses on the themes of genes and miRNAs, making research and teaching more convenient and efficient. Based on the B/S architecture, with Apache as the web server, MySQL for the database design and PHP for the web application, a comprehensive database for laryngeal cancer-related genes was established, providing gene tables, protein tables, miRNA tables and clinical information tables for patients with laryngeal cancer. The established database contains 207 laryngeal cancer-related genes, 243 proteins, 26 miRNAs, and their detailed information such as mutations, methylation, differential expression, and the supporting references for laryngeal cancer-relevant molecules. The database can be accessed and operated via the Internet, through which browsing and retrieval of the information are performed. The database is maintained and updated regularly. The database for laryngeal cancer-related genes is resource-integrated and user-friendly, providing a genetic information query tool for the study of laryngeal cancer.
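
    The record describes a relational schema with separate tables for genes, proteins, miRNAs, and patient clinical information. The sketch below mirrors that layout in SQLite purely for illustration; the original system uses MySQL and PHP, and all table and column names here are assumptions.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE gene (
    gene_id     INTEGER PRIMARY KEY,
    symbol      TEXT NOT NULL,
    mutation    TEXT,
    methylation TEXT,
    expression  TEXT,          -- differential expression in laryngeal cancer
    reference   TEXT           -- supporting publication
);
CREATE TABLE mirna (
    mirna_id       INTEGER PRIMARY KEY,
    name           TEXT NOT NULL,
    target_gene_id INTEGER REFERENCES gene(gene_id)
);
CREATE TABLE clinical_info (
    patient_id INTEGER PRIMARY KEY,
    stage      TEXT,
    site       TEXT
);
""")
conn.execute("INSERT INTO gene (symbol, expression) VALUES ('TP53', 'down')")
for row in conn.execute("SELECT symbol, expression FROM gene"):
    print(row)
```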

  7. A functional comparison of basic restraint systems.

    DOT National Transportation Integrated Search

    1967-06-01

    The availability of information necessary to provide realistic solutions for personal safety problems in public and private transportation systems is found to be inadequate and incomplete. The problem of body restraint during the accident event is pu...

  8. 43 CFR 45.42 - When must a party supplement or amend information it has previously provided?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... the Interior CONDITIONS AND PRESCRIPTIONS IN FERC HYDROPOWER LICENSES Hearing Process Prehearing... it learns that the response: (1) Was incomplete or incorrect when made; or (2) Though complete and...

  9. Fuzzy portfolio model with fuzzy-input return rates and fuzzy-output proportions

    NASA Astrophysics Data System (ADS)

    Tsaur, Ruey-Chyn

    2015-02-01

    In financial markets, a short-term investment strategy is usually applied in portfolio selection in order to reduce investment risk; however, the economy is uncertain and the investment period is short. Further, an investor has incomplete information for selecting a portfolio with crisp proportions for each chosen security. In this paper we present a new method of constructing a fuzzy portfolio model for the parameters of fuzzy-input return rates and fuzzy-output proportions, based on possibilistic mean-standard deviation models. Furthermore, we consider both excess and shortage of investment in different economic periods by using a fuzzy constraint for the sum of the fuzzy proportions, and we also account for the risks of securities investment and the vagueness of incomplete information during periods of economic depression in the portfolio selection. Finally, we present a numerical example of a portfolio selection problem to illustrate the proposed model, and a sensitivity analysis is performed on the results.
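
    Possibilistic mean-standard deviation models of this kind are usually built on the Carlsson-Fuller possibilistic mean and variance of triangular fuzzy numbers. The sketch below shows those building blocks and a simple portfolio evaluation with crisp weights; the weights, return triples and objective are assumptions for illustration, not the exact model of the cited paper.

    # Possibilistic mean/variance of a triangular fuzzy number (peak a, left/right spreads),
    # plus a portfolio evaluation with illustrative crisp weights.
    def possibilistic_mean(a, left, right):
        return a + (right - left) / 6.0

    def possibilistic_var(left, right):
        return (left + right) ** 2 / 24.0

    # fuzzy return rates for three securities: (peak, left spread, right spread)
    returns = [(0.05, 0.02, 0.03), (0.08, 0.04, 0.05), (0.03, 0.01, 0.01)]
    weights = [0.4, 0.4, 0.2]

    port_mean = sum(w * possibilistic_mean(*r) for w, r in zip(weights, returns))
    # with non-negative weights, the spreads of the portfolio return combine linearly
    port_var = possibilistic_var(
        sum(w * r[1] for w, r in zip(weights, returns)),
        sum(w * r[2] for w, r in zip(weights, returns)),
    )
    print(f"possibilistic mean {port_mean:.4f}, std {port_var ** 0.5:.4f}")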

  10. On fairness, full cooperation, and quantum game with incomplete information

    NASA Astrophysics Data System (ADS)

    Lei, Zhen-Zhou; Liu, Bo-Yang; Yi, Ying; Dai, Hong-Yi; Zhang, Ming

    2018-03-01

    Quantum entanglement has emerged as a new resource to enhance cooperation and remove dilemmas. This paper aims to explore conditions under which full cooperation is achievable even when payoff information is incomplete. Based on the quantum version of the extended classical cash-in-a-hat game, we demonstrate that quantum entanglement may be used for achieving full cooperation or avoiding moral hazards with reasonable profit distribution policies even when the profit is uncertain to a certain degree. This research further suggests that the fairness of profit distribution should play an important role in promoting full cooperation. It is hoped that quantum entanglement and fairness will promote full cooperation among distant people from various interest groups when quantum networks and quantum entanglement are accessible to the public. Project supported by the National Natural Science Foundation of China (Grant Nos. 61673389, 61273202, and 61134008).

  11. Clinical trial transparency: many gains but access to evidence for new medicines remains imperfect.

    PubMed

    Mintzes, Barbara; Lexchin, Joel; Quintano, Ancella Santos

    2015-01-01

    Although selective and incomplete publication is widely acknowledged to be a problem, full access to clinical trial data remains elusive. Evidence was drawn from the authors' personal files, key documents from the Food and Drug Administration and the European Medicines Agency, and focused searches of PubMed. Existing sources of information provide an incomplete overview of scientific research. Persistent arguments about commercial confidentiality and the potential difficulties in de-identifying raw data can block important progress. Current industry efforts are voluntary and only partially satisfy the need for complete data. Requirements for trial registration are increasing. Important regulatory changes, in particular in Europe, have the potential to result in the release of more information. Areas for further research include documenting the effects of prospective trial registration and requirements for proactive clinical trial publication on healthcare decisions, public health and rational resource allocation. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  12. Field Validation of Food Service Listings: A Comparison of Commercial and Online Geographic Information System Databases

    PubMed Central

    Seliske, Laura; Pickett, William; Bates, Rebecca; Janssen, Ian

    2012-01-01

    Many studies examining the food retail environment rely on geographic information system (GIS) databases for location information. The purpose of this study was to validate information provided by two GIS databases, comparing the positional accuracy of food service places within a 1 km circular buffer surrounding 34 schools in Ontario, Canada. A commercial database (InfoCanada) and an online database (Yellow Pages) provided the addresses of food service places. Actual locations were measured using a global positioning system (GPS) device. The InfoCanada and Yellow Pages GIS databases provided the locations for 973 and 675 food service places, respectively. Overall, 749 (77.1%) and 595 (88.2%) of these were located in the field. The online database had a higher proportion of food service places found in the field. The GIS locations of 25% of the food service places were located within approximately 15 m of their actual location, 50% were within 25 m, and 75% were within 50 m. This validation study provided a detailed assessment of errors in the measurement of the location of food service places in the two databases. The location information was more accurate for the online database; however, when matching criteria were more conservative, there were no observed differences in error between the databases. PMID:23066385
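
    The positional-error measure used here (great-circle distance between a database-listed coordinate and the GPS-measured location, summarized by percentiles) can be sketched as follows; the coordinates below are invented for illustration.

    # Sketch of the positional-error computation: distance between a GIS-database
    # coordinate and a GPS fix, summarized by percentiles. Coordinates are invented.
    import math

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance in metres between two WGS84 points."""
        r = 6371000.0
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    # (database lat, database lon, GPS lat, GPS lon) for a few food service places
    pairs = [
        (44.2312, -76.4860, 44.2313, -76.4858),
        (44.2290, -76.4812, 44.2294, -76.4805),
        (44.2335, -76.4901, 44.2331, -76.4899),
    ]
    errors = sorted(haversine_m(*p) for p in pairs)
    for q in (0.25, 0.50, 0.75):
        print(f"{int(q * 100)}th percentile error: {errors[int(q * (len(errors) - 1))]:.1f} m")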

  13. Field validation of food service listings: a comparison of commercial and online geographic information system databases.

    PubMed

    Seliske, Laura; Pickett, William; Bates, Rebecca; Janssen, Ian

    2012-08-01

    Many studies examining the food retail environment rely on geographic information system (GIS) databases for location information. The purpose of this study was to validate information provided by two GIS databases, comparing the positional accuracy of food service places within a 1 km circular buffer surrounding 34 schools in Ontario, Canada. A commercial database (InfoCanada) and an online database (Yellow Pages) provided the addresses of food service places. Actual locations were measured using a global positioning system (GPS) device. The InfoCanada and Yellow Pages GIS databases provided the locations for 973 and 675 food service places, respectively. Overall, 749 (77.1%) and 595 (88.2%) of these were located in the field. The online database had a higher proportion of food service places found in the field. The GIS locations of 25% of the food service places were located within approximately 15 m of their actual location, 50% were within 25 m, and 75% were within 50 m. This validation study provided a detailed assessment of errors in the measurement of the location of food service places in the two databases. The location information was more accurate for the online database; however, when matching criteria were more conservative, there were no observed differences in error between the databases.

  14. 16 CFR 1102.24 - Designation of confidential information.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... ACT REGULATIONS PUBLICLY AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE (Eff. Jan. 10, 2011... allegedly confidential information is not placed in the database, a request for designation of confidential... publication in the Database until it makes a determination regarding confidential treatment. (e) Assistance...

  15. 16 CFR § 1102.42 - Disclaimers.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE Notice and Disclosure Requirements § 1102.42... Consumer Product Safety Information Database, particularly with respect to the accuracy, completeness, or adequacy of information submitted by persons outside of the CPSC. The Database will contain a notice to...

  16. 49 CFR 535.8 - Reporting requirements.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... information. (2) Manufacturers must submit information electronically through the EPA database system as the... year 2012 the agencies are not prepared to receive information through the EPA database system... applications for certificates of conformity in accordance through the EPA database including both GHG emissions...

  17. MIPS: analysis and annotation of proteins from whole genomes in 2005

    PubMed Central

    Mewes, H. W.; Frishman, D.; Mayer, K. F. X.; Münsterkötter, M.; Noubibou, O.; Pagel, P.; Rattei, T.; Oesterheld, M.; Ruepp, A.; Stümpflen, V.

    2006-01-01

    The Munich Information Center for Protein Sequences (MIPS at the GSF), Neuherberg, Germany, provides resources related to genome information. Manually curated databases for several reference organisms are maintained. Several of these databases are described elsewhere in this and other recent NAR database issues. In a complementary effort, a comprehensive set of >400 genomes automatically annotated with the PEDANT system is maintained. The main goal of our current work on creating and maintaining genome databases is to extend gene-centered information to information on interactions within a generic comprehensive framework. We have concentrated our efforts along three lines: (i) the development of suitable comprehensive data structures and database technology, communication and query tools to include a wide range of different types of information, enabling the representation of complex information such as functional modules or networks (the Genome Research Environment System); (ii) the development of databases covering computable information such as the basic evolutionary relations among all genes, namely SIMAP, the sequence similarity matrix, and the CABiNet network analysis framework; and (iii) the compilation and manual annotation of information related to interactions such as protein–protein interactions or other types of relations (e.g. MPCDB, MPPI, CYGD). All databases described and the detailed descriptions of our projects can be accessed through the MIPS WWW server (). PMID:16381839

  18. MIPS: analysis and annotation of proteins from whole genomes in 2005.

    PubMed

    Mewes, H W; Frishman, D; Mayer, K F X; Münsterkötter, M; Noubibou, O; Pagel, P; Rattei, T; Oesterheld, M; Ruepp, A; Stümpflen, V

    2006-01-01

    The Munich Information Center for Protein Sequences (MIPS at the GSF), Neuherberg, Germany, provides resources related to genome information. Manually curated databases for several reference organisms are maintained. Several of these databases are described elsewhere in this and other recent NAR database issues. In a complementary effort, a comprehensive set of >400 genomes automatically annotated with the PEDANT system is maintained. The main goal of our current work on creating and maintaining genome databases is to extend gene-centered information to information on interactions within a generic comprehensive framework. We have concentrated our efforts along three lines: (i) the development of suitable comprehensive data structures and database technology, communication and query tools to include a wide range of different types of information, enabling the representation of complex information such as functional modules or networks (the Genome Research Environment System); (ii) the development of databases covering computable information such as the basic evolutionary relations among all genes, namely SIMAP, the sequence similarity matrix, and the CABiNet network analysis framework; and (iii) the compilation and manual annotation of information related to interactions such as protein-protein interactions or other types of relations (e.g. MPCDB, MPPI, CYGD). All databases described and the detailed descriptions of our projects can be accessed through the MIPS WWW server (http://mips.gsf.de).

  19. SORTEZ: a relational translator for NCBI's ASN.1 database.

    PubMed

    Hart, K W; Searls, D B; Overton, G C

    1994-07-01

    The National Center for Biotechnology Information (NCBI) has created a database collection that includes several protein and nucleic acid sequence databases, a biosequence-specific subset of MEDLINE, as well as value-added information such as links between similar sequences. Information in the NCBI database is modeled in Abstract Syntax Notation 1 (ASN.1), an Open Systems Interconnection protocol designed for the purpose of exchanging structured data between software applications rather than as a data model for database systems. While the NCBI database is distributed with an easy-to-use information retrieval system, ENTREZ, the ASN.1 data model currently lacks an ad hoc query language for general-purpose data access. For that reason, we have developed a software package, SORTEZ, that transforms the ASN.1 database (or other databases with nested data structures) to a relational data model and subsequently to a relational database management system (Sybase) where information can be accessed through the relational query language, SQL. Because the need to transform data from one data model and schema to another arises naturally in several important contexts, including efficient execution of specific applications, access to multiple databases and adaptation to database evolution, this work also serves as a practical study of the issues involved in the various stages of database transformation. We show that transformation from the ASN.1 data model to a relational data model can be largely automated, but that schema transformation and data conversion require considerable domain expertise and would greatly benefit from additional support tools.
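
    A generic illustration of the nested-to-relational flattening step the abstract describes: one nested (ASN.1-like) record becomes a parent row and child rows joined by a foreign key. The record and table layout are invented; this is not NCBI's schema or the SORTEZ implementation.

    # Generic flattening of a nested record into parent/child relational rows.
    record = {
        "accession": "U00001",
        "title": "Example sequence entry",
        "references": [
            {"authors": "Smith J", "medline_uid": 12345},
            {"authors": "Jones K", "medline_uid": 67890},
        ],
    }

    entries, references = [], []
    entry_id = 1
    entries.append({"entry_id": entry_id,
                    "accession": record["accession"],
                    "title": record["title"]})
    for ref in record["references"]:
        # child rows carry a foreign key back to the parent entry
        references.append({"entry_id": entry_id, **ref})

    print(entries)
    print(references)
    # The flat tables can then be bulk-loaded into a relational DBMS and queried
    # with SQL joins (e.g. entry JOIN reference ON entry_id).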

  20. Development and Implementation of Kumamoto Technopolis Regional Database T-KIND

    NASA Astrophysics Data System (ADS)

    Onoue, Noriaki

    T-KIND (Techno-Kumamoto Information Network for Data-Base) is a system for effectively searching information on technology, human resources and industries that is necessary to realize Kumamoto Technopolis. It is composed of a coded database, an image database and a LAN inside the techno-research park which is the center of R & D in the Technopolis. It forms an on-line system by networking general-purpose computers, minicomputers, optical disk file systems and so on, and provides the service through public telephone lines. Two databases are now available, on enterprise information and human resource information; the former covers about 4,000 enterprises and the latter about 2,000 persons.

  1. The methodology of database design in organization management systems

    NASA Astrophysics Data System (ADS)

    Chudinov, I. L.; Osipova, V. V.; Bobrova, Y. V.

    2017-01-01

    The paper describes a unified methodology of database design for management information systems. Designing the conceptual information model for the domain area is the most important and labor-intensive stage in database design. Based on the proposed integrated approach to designing the conceptual information model, the main principles of developing relational databases are provided and users' information needs are considered. According to the methodology, the process of designing the conceptual information model includes three basic stages, which are defined in detail. Finally, the article describes how the results of analyzing users' information needs are applied and gives the rationale for the use of classifiers.

  2. Rhode Island Water Supply System Management Plan Database (WSSMP-Version 1.0)

    USGS Publications Warehouse

    Granato, Gregory E.

    2004-01-01

    In Rhode Island, the availability of water of sufficient quality and quantity to meet current and future environmental and economic needs is vital to life and the State's economy. Water suppliers, the Rhode Island Water Resources Board (RIWRB), and other State agencies responsible for water resources in Rhode Island need information about available resources, the water-supply infrastructure, and water use patterns. These decision makers need historical, current, and future water-resource information. In 1997, the State of Rhode Island formalized a system of Water Supply System Management Plans (WSSMPs) to characterize and document relevant water-supply information. All major water suppliers (those that obtain, transport, purchase, or sell more than 50 million gallons of water per year) are required to prepare, maintain, and carry out WSSMPs. An electronic database for this WSSMP information has been deemed necessary by the RIWRB for water suppliers and State agencies to consistently document, maintain, and interpret the information in these plans. Availability of WSSMP data in standard formats will allow water suppliers and State agencies to improve the understanding of water-supply systems and to plan for future needs or water-supply emergencies. In 2002, however, the Rhode Island General Assembly passed a law that classifies some of the WSSMP information as confidential to protect the water-supply infrastructure from potential terrorist threats. Therefore the WSSMP database was designed for an implementation method that will balance security concerns with the information needs of the RIWRB, suppliers, other State agencies, and the public. A WSSMP database was developed by the U.S. Geological Survey in cooperation with the RIWRB. The database was designed to catalog WSSMP information in a format that would accommodate synthesis of current and future information about Rhode Island's water-supply infrastructure. This report documents the design and implementation of the WSSMP database. All WSSMP information in the database is, ultimately, linked to the individual water suppliers and to a WSSMP 'cycle' (which is currently a 5-year planning cycle for compiling WSSMP information). The database file contains 172 tables - 47 data tables, 61 association tables, 61 domain tables, and 3 example import-link tables. This database is currently implemented in the Microsoft Access database software because it is widely used within and outside of government and is familiar to many existing and potential customers. Design documentation facilitates current use and potential modification for future use of the database. Information within the structure of the WSSMP database file (WSSMPv01.mdb), a data dictionary file (WSSMPDD1.pdf), a detailed database-design diagram (WSSMPPL1.pdf), and this database-design report (OFR2004-1231.pdf) documents the design of the database. This report includes a discussion of each WSSMP data structure with an accompanying database-design diagram. Appendix 1 of this report is an index of the diagrams in the report and on the plate; this index is organized by table name in alphabetical order. Each of these products is included in digital format on the enclosed CD-ROM to facilitate use or modification of the database.

  3. User assumptions about information retrieval systems: Ethical concerns

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Froehlich, T.J.

    Information professionals, whether designers, intermediaries, database producers or vendors, bear some responsibility for the information that they make available to users of information systems. The users of such systems may tend to make many assumptions about the information that a system provides, such as believing: that the data are comprehensive, current and accurate; that the information resources or databases have the same degree of quality and consistency of indexing; that the abstracts, if they exist, correctly and adequately reflect the content of the article; that there is consistency in forms of author names or journal titles or indexing within and across databases; that there is standardization in and across databases; that once errors are detected, they are corrected; that appropriate choices of databases or information resources are a relatively easy matter, etc. The truth is that few of these assumptions are valid in commercial or corporate or organizational databases. However, given these beliefs and assumptions by many users, often promoted by information providers, information professionals should, where possible, intervene to warn users about the limitations and constraints of the databases they are using. With the growth of the Internet and end-user products (e.g., CD-ROMs), such interventions have significantly declined. In such cases, information should be provided on start-up or through interface screens, indicating to users the constraints and orientation of the system they are using. The principle of "caveat emptor" is naive and socially irresponsible: information professionals or systems have an obligation to provide some framework or context for the information that users are accessing.

  4. A New Approach To Secure Federated Information Bases Using Agent Technology.

    ERIC Educational Resources Information Center

    Weippi, Edgar; Klug, Ludwig; Essmayr, Wolfgang

    2003-01-01

    Discusses database agents which can be used to establish federated information bases by integrating heterogeneous databases. Highlights include characteristics of federated information bases, including incompatible database management systems, schemata, and frequently changing context; software agent technology; Java agents; system architecture;…

  5. NeuroTessMesh: A Tool for the Generation and Visualization of Neuron Meshes and Adaptive On-the-Fly Refinement.

    PubMed

    Garcia-Cantero, Juan J; Brito, Juan P; Mata, Susana; Bayona, Sofia; Pastor, Luis

    2017-01-01

    Gaining a better understanding of the human brain continues to be one of the greatest challenges for science, largely because of the overwhelming complexity of the brain and the difficulty of analyzing the features and behavior of dense neural networks. Regarding analysis, 3D visualization has proven to be a useful tool for the evaluation of complex systems. However, the large number of neurons in non-trivial circuits, together with their intricate geometry, makes the visualization of a neuronal scenario an extremely challenging computational problem. Previous work in this area dealt with the generation of 3D polygonal meshes that approximated the cells' overall anatomy but did not attempt to deal with the extremely high storage and computational cost required to manage a complex scene. This paper presents NeuroTessMesh, a tool specifically designed to cope with many of the problems associated with the visualization of neural circuits that are comprised of large numbers of cells. In addition, this method facilitates the recovery and visualization of the 3D geometry of cells included in databases, such as NeuroMorpho, and provides the tools needed to approximate missing information such as the soma's morphology. This method takes as its only input the available compact, yet incomplete, morphological tracings of the cells as acquired by neuroscientists. It uses a multiresolution approach that combines an initial, coarse mesh generation with subsequent on-the-fly adaptive mesh refinement stages using tessellation shaders. For the coarse mesh generation, a novel approach, based on the Finite Element Method, allows approximation of the 3D shape of the soma from its incomplete description. Subsequently, the adaptive refinement process performed in the graphic card generates meshes that provide good visual quality geometries at a reasonable computational cost, both in terms of memory and rendering time. All the described techniques have been integrated into NeuroTessMesh, available to the scientific community, to generate, visualize, and save the adaptive resolution meshes.

  6. A Database of Historical Information on Landslides and Floods in Italy

    NASA Astrophysics Data System (ADS)

    Guzzetti, F.; Tonelli, G.

    2003-04-01

    For the past 12 years we have maintained and updated a database of historical information on landslides and floods in Italy, known as the National Research Council's AVI (Damaged Urban Areas) Project archive. The database was originally designed to respond to a specific request of the Minister of Civil Protection, and was aimed at helping the regional assessment of landslide and flood risk in Italy. The database was first constructed in 1991-92 to cover the period 1917 to 1990. Information of damaging landslide and flood event was collected by searching archives, by screening thousands of newspaper issues, by reviewing the existing technical and scientific literature on landslides and floods in Italy, and by interviewing landslide and flood experts. The database was then updated chiefly through the analysis of hundreds of newspaper articles, and it now covers systematically the period 1900 to 1998, and non-systematically the periods 1900 to 1916 and 1999 to 2002. Non systematic information on landslide and flood events older than 20th century is also present in the database. The database currently contains information on more than 32,000 landslide events occurred at more than 25,700 sites, and on more than 28,800 flood events occurred at more than 15,600 sites. After a brief outline of the history and evolution of the AVI Project archive, we present and discuss: (a) the present structure of the database, including the hardware and software solutions adopted to maintain, manage, use and disseminate the information stored in the database, (b) the type and amount of information stored in the database, including an estimate of its completeness, and (c) examples of recent applications of the database, including a web-based GIS systems to show the location of sites historically affected by landslides and floods, and an estimate of geo-hydrological (i.e., landslide and flood) risk in Italy based on the available historical information.

  7. 77 FR 66617 - HIT Policy and Standards Committees; Workgroup Application Database

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-06

    ... Database AGENCY: Office of the National Coordinator for Health Information Technology, HHS. ACTION: Notice of New ONC HIT FACA Workgroup Application Database. The Office of the National Coordinator (ONC) has launched a new Health Information Technology Federal Advisory Committee Workgroup Application Database...

  8. Life Cycle Assessment for desalination: a review on methodology feasibility and reliability.

    PubMed

    Zhou, Jin; Chang, Victor W-C; Fane, Anthony G

    2014-09-15

    As concerns of natural resource depletion and environmental degradation caused by desalination increase, research studies of the environmental sustainability of desalination are growing in importance. Life Cycle Assessment (LCA) is an ISO standardized method and is widely applied to evaluate the environmental performance of desalination. This study reviews more than 30 desalination LCA studies since 2000s and identifies two major issues in need of improvement. The first is feasibility, covering three elements that support the implementation of the LCA to desalination, including accounting methods, supporting databases, and life cycle impact assessment approaches. The second is reliability, addressing three essential aspects that drive uncertainty in results, including the incompleteness of the system boundary, the unrepresentativeness of the database, and the omission of uncertainty analysis. This work can serve as a preliminary LCA reference for desalination specialists, but will also strengthen LCA as an effective method to evaluate the environment footprint of desalination alternatives. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. The Adolescent Who Does Not Finish Anything.

    ERIC Educational Resources Information Center

    Breiner, Sander J.

    1985-01-01

    Practical information for therapists who deal with adolescents who do not finish tasks is presented. The relationship of task incompletion to neurosis, psychosis, depression, homosexuality, and drug abuse is described, and techniques and guidelines for treatment are provided. (Author)

  10. 40 CFR 1060.255 - What decisions may EPA make regarding my certificate of conformity?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... our presenting a warrant or court order (see 40 CFR 1068.20). This includes a failure to provide... may void your certificate if we find that you intentionally submitted false or incomplete information...

  11. Hello, Goodby.

    ERIC Educational Resources Information Center

    Sheridan, Dixie

    1986-01-01

    A resignation or the announcement of a new college president is one of the most difficult but most important news stories to control. There is a lot to lose when inaccurate, incomplete, or speculative information is reported. Some suggestions on how to control the news are provided. (MLW)

  12. Maintaining vigilance on a simulated ATC monitoring task across repeated sessions.

    DOT National Transportation Integrated Search

    1994-03-01

    Maintaining alertness to information provided visually is an important aspect of air traffic controllers' work. Improper or incomplete scanning and monitoring behavior is often referred to as one of the causal factors associated with operational erro...

  13. Problems of Policing Plagiarism and Cheating in University Institutions Due to Incomplete or Inconsistent Definitions

    ERIC Educational Resources Information Center

    Soiferman, L. Karen

    2016-01-01

    The purpose of this article was to gain an understanding of the definitions of plagiarism, and cheating that are used in the literature, in institutions, and by students. The information was gathered from a literature review, from university and college websites, and from an informal sampling of students from five different first-year classes. The…

  14. Representations of African Americans in the Grief and Mourning Literature from 1998 to 2014: A Systematic Review.

    PubMed

    Granek, Leeat; Peleg-Sagy, Tal

    2015-01-01

    The authors examined representations of African Americans in the grief literature to assess (a) frequencies; (b) content; and (c) use of a universalist or a contextualized framework. They conducted searches in 3 databases that target the grief literature published in the last 15 years. Fifty-nine articles met the criteria. Only a small number of studies have been published on African Americans, and these tend to focus on homicide. Many studies had incomplete methods. Comparison studies were common, and pathological grief outcomes that were validated on White populations were used as outcome variables with African American participants.

  15. Improving the estimation of flavonoid intake for study of health outcomes

    PubMed Central

    Dwyer, Johanna T.; Jacques, Paul F.; McCullough, Marjorie L.

    2015-01-01

    Imprecision in estimating intakes of non-nutrient bioactive compounds such as flavonoids is a challenge in epidemiologic studies of health outcomes. The sources of this imprecision, using flavonoids as an example, include the variability of bioactive compounds in foods due to differences in growing conditions and processing, the challenges in laboratory quantification of flavonoids in foods, the incompleteness of flavonoid food composition tables, and the lack of adequate dietary assessment instruments. Steps to improve databases of bioactive compounds and to increase the accuracy and precision of the estimation of bioactive compound intakes in studies of health benefits and outcomes are suggested. PMID:26084477

  16. Environmental Impact Overview

    NASA Technical Reports Server (NTRS)

    Peddie, Catherine

    2001-01-01

    Aircraft emissions are deposited throughout the atmosphere, and in the lower stratosphere and upper troposphere they have greater potential to change ozone abundance and affect climate. There are significant uncertainties arising from the incomplete knowledge of the composition and evolution of the exhaust emissions, particularly regarding reactive trace species, particles, and their gaseous precursors. NASA Glenn Research Center at Lewis Field has considered its role in answering these challenges and has been committed to strengthening its aerosol/particulate research capabilities with initial emphasis on establishing advanced measurement systems and a particulate database. Activities currently supported by the NASA Ultra-Efficient Engine Technology (UEET) Program and accomplishments to date will be described.

  17. Efficient network disintegration under incomplete information: the comic effect of link prediction

    NASA Astrophysics Data System (ADS)

    Tan, Suo-Yi; Wu, Jun; Lü, Linyuan; Li, Meng-Jun; Lu, Xin

    2016-03-01

    The study of network disintegration has attracted much attention due to its wide applications, including suppressing epidemic spreading, destabilizing terrorist networks, preventing financial contagion, controlling rumor diffusion and perturbing cancer networks. The crux of this matter is to find the critical nodes whose removal will lead to network collapse. This paper studies the disintegration of networks with incomplete link information. An effective method is proposed to find the critical nodes with the assistance of link prediction techniques. Extensive experiments in both synthetic and real networks suggest that, by using a link prediction method to recover partial missing links in advance, the method can largely improve the network disintegration performance. Besides, to our surprise, we find that when the size of missing information is relatively small, our method even outperforms the results based on complete information. We refer to this phenomenon as the “comic effect” of link prediction, which means that the network is reshaped through the addition of some links identified by link prediction algorithms, and the reshaped network is like an exaggerated but characteristic comic of the original one, where the important parts are emphasized.
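
    A simplified sketch of the idea: hide part of the links, recover likely missing links with a link-prediction score, and compare an attack planned on the observed network with one planned on the reconstructed network. The network model, the resource-allocation index, and the degree-based attack are illustrative choices, not the exact procedure of the paper.

    # Sketch of link-prediction-assisted network disintegration (illustrative choices only).
    import random
    import networkx as nx

    random.seed(0)
    true_net = nx.barabasi_albert_graph(200, 3, seed=0)

    # observed network: 20% of the true links are missing
    observed = true_net.copy()
    hidden = random.sample(list(observed.edges()), int(0.2 * observed.number_of_edges()))
    observed.remove_edges_from(hidden)

    # link prediction: score all unobserved pairs with the resource-allocation index
    # and restore the top-scoring ones before planning the attack
    scores = sorted(nx.resource_allocation_index(observed), key=lambda t: t[2], reverse=True)
    reconstructed = observed.copy()
    reconstructed.add_edges_from((u, v) for u, v, _ in scores[:len(hidden)])

    def giant_after_attack(base, ranking_graph, n_remove=10):
        """Remove the n_remove highest-degree nodes of ranking_graph from base
        and return the size of the remaining giant component."""
        targets = sorted(ranking_graph.degree(), key=lambda t: t[1], reverse=True)[:n_remove]
        damaged = base.copy()
        damaged.remove_nodes_from(n for n, _ in targets)
        return max(len(c) for c in nx.connected_components(damaged))

    print("attack planned on observed network:     ", giant_after_attack(true_net, observed))
    print("attack planned on reconstructed network:", giant_after_attack(true_net, reconstructed))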

  18. Efficient network disintegration under incomplete information: the comic effect of link prediction.

    PubMed

    Tan, Suo-Yi; Wu, Jun; Lü, Linyuan; Li, Meng-Jun; Lu, Xin

    2016-03-10

    The study of network disintegration has attracted much attention due to its wide applications, including suppressing epidemic spreading, destabilizing terrorist networks, preventing financial contagion, controlling rumor diffusion and perturbing cancer networks. The crux of this matter is to find the critical nodes whose removal will lead to network collapse. This paper studies the disintegration of networks with incomplete link information. An effective method is proposed to find the critical nodes with the assistance of link prediction techniques. Extensive experiments in both synthetic and real networks suggest that, by using a link prediction method to recover partial missing links in advance, the method can largely improve the network disintegration performance. Besides, to our surprise, we find that when the size of missing information is relatively small, our method even outperforms the results based on complete information. We refer to this phenomenon as the "comic effect" of link prediction, which means that the network is reshaped through the addition of some links identified by link prediction algorithms, and the reshaped network is like an exaggerated but characteristic comic of the original one, where the important parts are emphasized.

  19. Efficient network disintegration under incomplete information: the comic effect of link prediction

    PubMed Central

    Tan, Suo-Yi; Wu, Jun; Lü, Linyuan; Li, Meng-Jun; Lu, Xin

    2016-01-01

    The study of network disintegration has attracted much attention due to its wide applications, including suppressing epidemic spreading, destabilizing terrorist networks, preventing financial contagion, controlling rumor diffusion and perturbing cancer networks. The crux of this matter is to find the critical nodes whose removal will lead to network collapse. This paper studies the disintegration of networks with incomplete link information. An effective method is proposed to find the critical nodes with the assistance of link prediction techniques. Extensive experiments in both synthetic and real networks suggest that, by using a link prediction method to recover partial missing links in advance, the method can largely improve the network disintegration performance. Besides, to our surprise, we find that when the size of missing information is relatively small, our method even outperforms the results based on complete information. We refer to this phenomenon as the “comic effect” of link prediction, which means that the network is reshaped through the addition of some links identified by link prediction algorithms, and the reshaped network is like an exaggerated but characteristic comic of the original one, where the important parts are emphasized. PMID:26960247

  20. Incomplete Hippocampal Inversion: A Comprehensive MRI Study of Over 2000 Subjects.

    PubMed

    Cury, Claire; Toro, Roberto; Cohen, Fanny; Fischer, Clara; Mhaya, Amel; Samper-González, Jorge; Hasboun, Dominique; Mangin, Jean-François; Banaschewski, Tobias; Bokde, Arun L W; Bromberg, Uli; Buechel, Christian; Cattrell, Anna; Conrod, Patricia; Flor, Herta; Gallinat, Juergen; Garavan, Hugh; Gowland, Penny; Heinz, Andreas; Ittermann, Bernd; Lemaitre, Hervé; Martinot, Jean-Luc; Nees, Frauke; Paillère Martinot, Marie-Laure; Orfanos, Dimitri P; Paus, Tomas; Poustka, Luise; Smolka, Michael N; Walter, Henrik; Whelan, Robert; Frouin, Vincent; Schumann, Gunter; Glaunès, Joan A; Colliot, Olivier

    2015-01-01

    The incomplete-hippocampal-inversion (IHI), also known as malrotation, is an atypical anatomical pattern of the hippocampus, which has been reported in healthy subjects in different studies. However, extensive characterization of IHI in a large sample has not yet been performed. Furthermore, it is unclear whether IHI are restricted to the medial-temporal lobe or are associated with more extensive anatomical changes. Here, we studied the characteristics of IHI in a community-based sample of 2008 subjects of the IMAGEN database and their association with extra-hippocampal anatomical variations. The presence of IHI was assessed on T1-weighted anatomical magnetic resonance imaging (MRI) using visual criteria. We assessed the association of IHI with other anatomical changes throughout the brain using automatic morphometry of cortical sulci. We found that IHI were much more frequent in the left hippocampus (left: 17%, right: 6%, χ² test, p < 10⁻²⁸). Compared to subjects without IHI, subjects with IHI displayed morphological changes in several sulci located mainly in the limbic lobe. Our results demonstrate that IHI are a common left-sided phenomenon in normal subjects and that they are associated with morphological changes outside the medial temporal lobe.
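
    The left/right frequency comparison reported above can be illustrated with a chi-square test on counts approximated from the stated percentages (17% and 6% of 2008 subjects); these are rounded approximations, not the study's exact counts.

    # Chi-square test on approximate IHI counts (illustration only).
    from scipy.stats import chi2_contingency

    n = 2008
    left_ihi, right_ihi = round(0.17 * n), round(0.06 * n)
    table = [[left_ihi, n - left_ihi],
             [right_ihi, n - right_ihi]]
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.1f}, p = {p:.2e}")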

  1. Incomplete Hippocampal Inversion: A Comprehensive MRI Study of Over 2000 Subjects

    PubMed Central

    Cury, Claire; Toro, Roberto; Cohen, Fanny; Fischer, Clara; Mhaya, Amel; Samper-González, Jorge; Hasboun, Dominique; Mangin, Jean-François; Banaschewski, Tobias; Bokde, Arun L. W.; Bromberg, Uli; Buechel, Christian; Cattrell, Anna; Conrod, Patricia; Flor, Herta; Gallinat, Juergen; Garavan, Hugh; Gowland, Penny; Heinz, Andreas; Ittermann, Bernd; Lemaitre, Hervé; Martinot, Jean-Luc; Nees, Frauke; Paillère Martinot, Marie-Laure; Orfanos, Dimitri P.; Paus, Tomas; Poustka, Luise; Smolka, Michael N.; Walter, Henrik; Whelan, Robert; Frouin, Vincent; Schumann, Gunter; Glaunès, Joan A.; Colliot, Olivier

    2015-01-01

    The incomplete-hippocampal-inversion (IHI), also known as malrotation, is an atypical anatomical pattern of the hippocampus, which has been reported in healthy subjects in different studies. However, extensive characterization of IHI in a large sample has not yet been performed. Furthermore, it is unclear whether IHI are restricted to the medial-temporal lobe or are associated with more extensive anatomical changes. Here, we studied the characteristics of IHI in a community-based sample of 2008 subjects of the IMAGEN database and their association with extra-hippocampal anatomical variations. The presence of IHI was assessed on T1-weighted anatomical magnetic resonance imaging (MRI) using visual criteria. We assessed the association of IHI with other anatomical changes throughout the brain using automatic morphometry of cortical sulci. We found that IHI were much more frequent in the left hippocampus (left: 17%, right: 6%, χ² test, p < 10⁻²⁸). Compared to subjects without IHI, subjects with IHI displayed morphological changes in several sulci located mainly in the limbic lobe. Our results demonstrate that IHI are a common left-sided phenomenon in normal subjects and that they are associated with morphological changes outside the medial temporal lobe. PMID:26733822

  2. E-MSD: an integrated data resource for bioinformatics.

    PubMed

    Velankar, S; McNeil, P; Mittard-Runte, V; Suarez, A; Barrell, D; Apweiler, R; Henrick, K

    2005-01-01

    The Macromolecular Structure Database (MSD) group (http://www.ebi.ac.uk/msd/) continues to enhance the quality and consistency of macromolecular structure data in the worldwide Protein Data Bank (wwPDB) and to work towards the integration of various bioinformatics data resources. One of the major obstacles to the improved integration of structural databases such as MSD and sequence databases like UniProt is the absence of up to date and well-maintained mapping between corresponding entries. We have worked closely with the UniProt group at the EBI to clean up the taxonomy and sequence cross-reference information in the MSD and UniProt databases. This information is vital for the reliable integration of the sequence family databases such as Pfam and Interpro with the structure-oriented databases of SCOP and CATH. This information has been made available to the eFamily group (http://www.efamily.org.uk/) and now forms the basis of the regular interchange of information between the member databases (MSD, UniProt, Pfam, Interpro, SCOP and CATH). This exchange of annotation information has enriched the structural information in the MSD database with annotation from wider sequence-oriented resources. This work was carried out under the 'Structure Integration with Function, Taxonomy and Sequences (SIFTS)' initiative (http://www.ebi.ac.uk/msd-srv/docs/sifts) in the MSD group.

  3. Initiation of a Database of CEUS Ground Motions for NGA East

    NASA Astrophysics Data System (ADS)

    Cramer, C. H.

    2007-12-01

    The Nuclear Regulatory Commission has funded the first stage of development of a database of central and eastern US (CEUS) broadband and accelerograph records, along the lines of the existing Next Generation Attenuation (NGA) database for active tectonic areas. This database will form the foundation of an NGA East project for the development of CEUS ground-motion prediction equations that include the effects of soils. This initial effort covers the development of a database design and the beginning of data collection to populate the database. It also includes some processing for important source parameters (Brune corner frequency and stress drop) and site parameters (kappa, Vs30). Besides collecting appropriate earthquake recordings and information, existing information about site conditions at recording sites will also be gathered, including geology and geotechnical information. The long-range goal of the database development is to complete the database and make it available in 2010. The database design is centered on CEUS ground motion information needs but is built on the Pacific Earthquake Engineering Research Center's (PEER) NGA experience. Documentation from the PEER NGA website was reviewed and relevant fields incorporated into the CEUS database design. CEUS database tables include ones for earthquake, station, component, record, and references. As was done for NGA, a CEUS ground- motion flat file of key information will be extracted from the CEUS database for use in attenuation relation development. A short report on the CEUS database and several initial design-definition files are available at https://umdrive.memphis.edu:443/xythoswfs/webui/_xy-7843974_docstore1. Comments and suggestions on the database design can be sent to the author. More details will be presented in a poster at the meeting.

  4. 75 FR 29155 - Publicly Available Consumer Product Safety Information Database

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-24

    ...The Consumer Product Safety Commission (``Commission,'' ``CPSC,'' or ``we'') is issuing a notice of proposed rulemaking that would establish a publicly available consumer product safety information database (``database''). Section 212 of the Consumer Product Safety Improvement Act of 2008 (``CPSIA'') amended the Consumer Product Safety Act (``CPSA'') to require the Commission to establish and maintain a publicly available, searchable database on the safety of consumer products, and other products or substances regulated by the Commission. The proposed rule would interpret various statutory requirements pertaining to the information to be included in the database and also would establish provisions regarding submitting reports of harm; providing notice of reports of harm to manufacturers; publishing reports of harm and manufacturer comments in the database; and dealing with confidential and materially inaccurate information.

  5. The CIS Database: Occupational Health and Safety Information Online.

    ERIC Educational Resources Information Center

    Siegel, Herbert; Scurr, Erica

    1985-01-01

    Describes document acquisition, selection, indexing, and abstracting and discusses online searching of the CIS database, an online system produced by the International Occupational Safety and Health Information Centre. This database comprehensively covers information in the field of occupational health and safety. Sample searches and search…

  6. Vaccine coverage and determinants of incomplete vaccination in children aged 12-23 months in Dschang, West Region, Cameroon: a cross-sectional survey during a polio outbreak.

    PubMed

    Russo, Gianluca; Miglietta, Alessandro; Pezzotti, Patrizio; Biguioh, Rodrigue Mabvouna; Bouting Mayaka, Georges; Sobze, Martin Sanou; Stefanelli, Paola; Vullo, Vincenzo; Rezza, Giovanni

    2015-07-10

    Inadequate immunization coverage with increased risk of vaccine-preventable disease outbreaks remains a problem in Africa. Moreover, different factors contribute to incomplete vaccination status. This study was performed in Dschang (West Region, Cameroon), during the polio outbreak that occurred in October 2013, in order to estimate the immunization coverage among children aged 12-23 months, to identify determinants of incomplete vaccination status and to assess the risk of poliovirus spread in the study population. A cross-sectional household survey was conducted in November-December 2013, using the WHO two-stage sampling design. An interviewer-administered questionnaire was used to obtain information from consenting parents of children aged 12-23 months. Vaccination coverage was assessed by vaccination card and parents' recall. A chi-square test and a multilevel logistic regression model were used to identify the determinants of incomplete immunization status. Statistical significance was set at p < 0.05. Overall, 3248 households were visited and 502 children were enrolled. Complete immunization coverage was 85.9% and 84.5%, according to card plus parents' recall and card only, respectively. All children had received at least one routine vaccination, the OPV-3 (Oral Polio Vaccine) coverage was >90%, and 73.4% of children completed the recommended vaccinations before 1 year of age. In the final multilevel logistic regression model, factors significantly associated with incomplete immunization status were: retention of the immunization card (AOR: 7.89; 95% CI: 1.08-57.37), lower mothers' utilization of antenatal care (ANC) services (AOR: 1.25; 95% CI: 1.07-63.75), being the ≥3rd-born child in the family (AOR: 425.4; 95% CI: 9.6-18,808), younger mothers' age (AOR: 49.55; 95% CI: 1.59-1544), parents' negative attitude towards immunization (AOR: 20.2; 95% CI: 1.46-278.9), and poorer parents' exposure to information on vaccination (AOR: 28.07; 95% CI: 2.26-348.1). Longer distance from the vaccination centers was marginally significant (p = 0.05). Vaccination coverage was high; however, 1 out of 7 children was partially vaccinated, and 1 out of 4 did not complete the recommended vaccinations on time. In order to improve immunization coverage, it is necessary to strengthen ANC services and to improve parents' information and attitude towards immunization, targeting younger parents and families living far away from vaccination centers, using appropriate communication strategies. Finally, the estimated OPV-3 coverage is reassuring in relation to the ongoing polio outbreak.
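
    The determinant analysis described above is, in essence, a logistic regression of incomplete-vaccination status on household and parental covariates (the study used a multilevel model; the clustering structure is omitted here for brevity). The sketch below uses simulated data and invented variable names purely to show the general approach, not the study's dataset or exact model.

    # Single-level logistic regression sketch with simulated data (illustration only).
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 200
    df = pd.DataFrame({
        "mother_age":  rng.integers(18, 40, n),
        "anc_visits":  rng.integers(0, 6, n),
        "birth_order": rng.integers(1, 6, n),
    })
    # simulate incomplete-vaccination status with higher risk for younger mothers,
    # fewer ANC visits and higher birth order (purely illustrative coefficients)
    lin = -1.0 - 0.05 * (df.mother_age - 28) - 0.4 * df.anc_visits + 0.3 * df.birth_order
    df["incomplete"] = rng.binomial(1, 1 / (1 + np.exp(-lin)))

    model = smf.logit("incomplete ~ mother_age + anc_visits + birth_order", data=df).fit(disp=0)
    print(np.exp(model.params).round(2))   # exponentiated coefficients as odds ratios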

  7. Completeness, accuracy, and readability of Wikipedia as a reference for patient medication information.

    PubMed

    Candelario, Danielle M; Vazquez, Victoria; Jackson, William; Reilly, Timothy

    This study determined the completeness, accuracy, and reading level of Wikipedia patient drug information compared with the corresponding United States product insert medication guides. From the Top 200 Drugs of 2012, the top 33 medications with medication guides were analyzed. Medication guides and Wikipedia pages were downloaded on a single date to ensure continuity of Wikipedia content. To quantify the completeness and accuracy of the Wikipedia medication information, a scoring system was adapted from previously published work and compared with the 7 core domains of medication guides. Wikipedia did not provide patient information that was as complete or accurate as the information within the medication guides (mean score 14.73 out of 42, SD 5.75). Wikipedia medication pages were written at a significantly higher reading level compared with medication guides (Flesch reading ease score 52.93 vs. 33.24 [P < 0.001]; Flesch-Kincaid grade level 10.26 vs. 6.86 [P < 0.001]). Wikipedia medication pages include incomplete and inaccurate patient information compared with the corresponding product medication guides. Wikipedia patient drug information was also written at reading levels above that of medication guides and substantially above the average United States consumer health literacy level. As the public use of Wikipedia increases, the need for educating patients about the quality of information on Wikipedia and the availability of adequate patient education resources is ever more important to minimize inaccuracies and incomplete information sharing. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
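
    The two readability measures used in the study are computed from word, sentence and syllable counts with the standard published constants; in the sketch below the counts are supplied directly rather than derived from text, which a text-analysis library would normally do.

    # Flesch reading ease and Flesch-Kincaid grade level from raw counts.
    def flesch_reading_ease(words, sentences, syllables):
        return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

    def flesch_kincaid_grade(words, sentences, syllables):
        return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

    # e.g. a 250-word passage in 15 sentences with 400 syllables
    print(round(flesch_reading_ease(250, 15, 400), 1))
    print(round(flesch_kincaid_grade(250, 15, 400), 1))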

  8. 48 CFR 804.1102 - Vendor Information Pages (VIP) Database.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... (VIP) Database. 804.1102 Section 804.1102 Federal Acquisition Regulations System DEPARTMENT OF VETERANS AFFAIRS GENERAL ADMINISTRATIVE MATTERS Contract Execution 804.1102 Vendor Information Pages (VIP) Database. Prior to January 1, 2012, all VOSBs and SDVOSBs must be listed in the VIP database, available at http...

  9. 48 CFR 804.1102 - Vendor Information Pages (VIP) Database.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... (VIP) Database. 804.1102 Section 804.1102 Federal Acquisition Regulations System DEPARTMENT OF VETERANS AFFAIRS GENERAL ADMINISTRATIVE MATTERS Contract Execution 804.1102 Vendor Information Pages (VIP) Database. Prior to January 1, 2012, all VOSBs and SDVOSBs must be listed in the VIP database, available at http...

  10. 48 CFR 804.1102 - Vendor Information Pages (VIP) Database.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... (VIP) Database. 804.1102 Section 804.1102 Federal Acquisition Regulations System DEPARTMENT OF VETERANS AFFAIRS GENERAL ADMINISTRATIVE MATTERS Contract Execution 804.1102 Vendor Information Pages (VIP) Database. Prior to January 1, 2012, all VOSBs and SDVOSBs must be listed in the VIP database, available at http...

  11. 48 CFR 804.1102 - Vendor Information Pages (VIP) Database.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... (VIP) Database. 804.1102 Section 804.1102 Federal Acquisition Regulations System DEPARTMENT OF VETERANS AFFAIRS GENERAL ADMINISTRATIVE MATTERS Contract Execution 804.1102 Vendor Information Pages (VIP) Database. Prior to January 1, 2012, all VOSBs and SDVOSBs must be listed in the VIP database, available at http...

  12. 48 CFR 804.1102 - Vendor Information Pages (VIP) Database.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... (VIP) Database. 804.1102 Section 804.1102 Federal Acquisition Regulations System DEPARTMENT OF VETERANS AFFAIRS GENERAL ADMINISTRATIVE MATTERS Contract Execution 804.1102 Vendor Information Pages (VIP) Database. Prior to January 1, 2012, all VOSBs and SDVOSBs must be listed in the VIP database, available at http...

  13. Alternative treatment technology information center computer database system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sullivan, D.

    1995-10-01

    The Alternative Treatment Technology Information Center (ATTIC) computer database system was developed pursuant to the 1986 Superfund law amendments. It provides up-to-date information on innovative treatment technologies to clean up hazardous waste sites. ATTIC v2.0 provides access to several independent databases as well as a mechanism for retrieving full-text documents of key literature. It can be accessed with a personal computer and modem 24 hours a day, and there are no user fees. ATTIC provides "one-stop shopping" for information on alternative treatment options by accessing several databases: (1) treatment technology database; this contains abstracts from the literature on all types of treatment technologies, including biological, chemical, physical, and thermal methods. The best literature as viewed by experts is highlighted. (2) treatability study database; this provides performance information on technologies to remove contaminants from wastewaters and soils. It is derived from treatability studies. This database is available through ATTIC or separately as a disk that can be mailed to you. (3) underground storage tank database; this presents information on underground storage tank corrective actions, surface spills, emergency response, and remedial actions. (4) oil/chemical spill database; this provides abstracts on treatment and disposal of spilled oil and chemicals. In addition to these separate databases, ATTIC allows immediate access to other disk-based systems such as the Vendor Information System for Innovative Treatment Technologies (VISITT) and the Bioremediation in the Field Search System (BFSS). The user may download these programs to their own PC via a high-speed modem. Also via modem, users are able to download entire documents through the ATTIC system. Currently, about fifty publications are available, including Superfund Innovative Technology Evaluation (SITE) program documents.

  14. The consequences of hospital autonomization in Colombia: a transaction cost economics analysis.

    PubMed

    Castano, Ramon; Mills, Anne

    2013-03-01

    Granting autonomy to public hospitals in developing countries has been common over recent decades, and implies a shift from hierarchical to contract-based relationships with health authorities. Theory on transactions costs in contractual relationships suggests they stem from relationship-specific investments and contract incompleteness. Transaction cost economics argues that the parties involved in exchanges seek to reduce transaction costs. The objective of this research was to analyse the relationships observed between purchasers and the 22 public hospitals of the city of Bogota, Colombia, in order to understand the role of relationship-specific investments and contract incompleteness as sources of transaction costs, through a largely qualitative study. We found that contract-based relationships showed relevant transaction costs associated mainly with contract incompleteness, not with relationship-specific investments. Regarding relationships between insurers and local hospitals for primary care services, compulsory contracting regulations locked-in the parties to the contracts. For high-complexity services (e.g. inpatient care), no restrictions applied and relationships suggested transaction-cost minimizing behaviour. Contract incompleteness was found to be a source of transaction costs on its own. We conclude that transaction costs seemed to play a key role in contract-based relationships, and contract incompleteness by itself appeared to be a source of transaction costs. The same findings are likely in other contexts because of difficulties in defining, observing and verifying the contracted products and the underlying information asymmetries. The role of compulsory contracting might be context-specific, although it is likely to emerge in other settings due to the safety-net role of public hospitals.

  15. Risk factors for massive postpartum bleeding in pregnancies in which incomplete placenta previa are located on the posterior uterine wall

    PubMed Central

    Lee, Hyun Jung; Lee, Young Jai; Ahn, Eun Hee; Kim, Hyeon Chul; Jung, Sang Hee; Chang, Sung Woon

    2017-01-01

    Objective To identify factors associated with massive postpartum bleeding in pregnancies complicated by incomplete placenta previa located on the posterior uterine wall. Methods A retrospective case-control study was performed. We identified 210 healthy singleton pregnancies with incomplete placenta previa located on the posterior uterine wall, who underwent elective or emergency cesarean section after 24 weeks of gestation between January 2006 and April 2016. The cases with intraoperative blood loss (≥2,000 mL) or transfusion of packed red blood cells (≥4) or uterine artery embolization or hysterectomy were defined as massive bleeding. Results Twenty-three women experienced postpartum profuse bleeding (11.0%). After multivariable analysis, 4 variables were associated with massive postpartum hemorrhage (PPH): experience of 2 or more prior uterine curettage (adjusted odds ratio [aOR], 4.47; 95% confidence interval [CI], 1.29 to 15.48; P=0.018), short cervical length before delivery (<2.0 cm) (aOR, 7.13; 95% CI, 1.01 to 50.25; P=0.049), fetal non-cephalic presentation (aOR, 12.48; 95% CI, 1.29 to 121.24; P=0.030), and uteroplacental hypervascularity (aOR, 6.23; 95% CI, 2.30 to 8.83; P=0.001). Conclusion This is the first study of cases with incomplete placenta previa located on the posterior uterine wall, which were complicated by massive PPH. Our findings might be helpful to guide obstetric management and provide useful information for prediction of massive PPH in pregnancies with incomplete placenta previa located on the posterior uterine wall. PMID:29184859

  16. Risk factors for massive postpartum bleeding in pregnancies in which incomplete placenta previa are located on the posterior uterine wall.

    PubMed

    Lee, Hyun Jung; Lee, Young Jai; Ahn, Eun Hee; Kim, Hyeon Chul; Jung, Sang Hee; Chang, Sung Woon; Lee, Ji Yeon

    2017-11-01

    To identify factors associated with massive postpartum bleeding in pregnancies complicated by incomplete placenta previa located on the posterior uterine wall. A retrospective case-control study was performed. We identified 210 healthy singleton pregnancies with incomplete placenta previa located on the posterior uterine wall, who underwent elective or emergency cesarean section after 24 weeks of gestation between January 2006 and April 2016. The cases with intraoperative blood loss (≥2,000 mL) or transfusion of packed red blood cells (≥4) or uterine artery embolization or hysterectomy were defined as massive bleeding. Twenty-three women experienced postpartum profuse bleeding (11.0%). After multivariable analysis, 4 variables were associated with massive postpartum hemorrhage (PPH): experience of 2 or more prior uterine curettage (adjusted odds ratio [aOR], 4.47; 95% confidence interval [CI], 1.29 to 15.48; P =0.018), short cervical length before delivery (<2.0 cm) (aOR, 7.13; 95% CI, 1.01 to 50.25; P =0.049), fetal non-cephalic presentation (aOR, 12.48; 95% CI, 1.29 to 121.24; P =0.030), and uteroplacental hypervascularity (aOR, 6.23; 95% CI, 2.30 to 8.83; P =0.001). This is the first study of cases with incomplete placenta previa located on the posterior uterine wall, which were complicated by massive PPH. Our findings might be helpful to guide obstetric management and provide useful information for prediction of massive PPH in pregnancies with incomplete placenta previa located on the posterior uterine wall.

  17. Potentials of Advanced Database Technology for Military Information Systems

    DTIC Science & Technology

    2001-04-01

    Defense Technical Information Center Compilation Part Notice ADP010866. Title: Potentials of Advanced Database Technology for Military Information Systems. Authors: Sunil Choenni and Ben Bruggeman, National Aerospace Laboratory (NLR), P.O. Box 90502, 1006 BM Amsterdam. The notice concerns the application of advanced information technology, including database technology, as underpinning for military information systems.

  18. [Medical prescription and informed consent for the use of physical restraints in nursing homes in the Canary Islands (Spain)].

    PubMed

    Estévez-Guerra, Gabriel J; Fariña-López, Emilio; Penelo, Eva

    To identify the frequency of completion of informed consent and medical prescription in the clinical records of older patients subject to physical restraint, and to analyse the association between patient characteristics and the absence of such documentation. A cross-sectional and descriptive multicentre study with direct observation and review of clinical records was conducted in nine public nursing homes, comprising 1,058 beds. A total of 274 residents were physically restrained. Informed consent was not included in 82.5% of cases and was incomplete in a further 13.9%. There was no medical prescription in 68.3% of cases, and it was incomplete in a further 12.0%. The only statistical association found was between the lack of prescription and the patients' advanced age (PR=1.03; p<0.005). Failure to produce this documentation contravenes the law. Organisational characteristics, ignorance of the legal requirements or the fact that some professionals may consider physical restraint to be a risk-free procedure may explain these results. Copyright © 2016 SESPAS. Published by Elsevier España, S.L.U. All rights reserved.

  19. Effectiveness of Using Mobile Phone Image Capture for Collecting Secondary Data: A Case Study on Immunization History Data Among Children in Remote Areas of Thailand.

    PubMed

    Jandee, Kasemsak; Kaewkungwal, Jaranit; Khamsiriwatchara, Amnat; Lawpoolsri, Saranath; Wongwit, Waranya; Wansatid, Peerawat

    2015-07-20

    Entering data onto paper-based forms, then digitizing them, is a traditional data-management method that might result in poor data quality, especially when the secondary data are incomplete, illegible, or missing. Transcription errors from source documents to case report forms (CRFs) are common, and subsequently the errors pass from the CRFs to the electronic database. This study aimed to demonstrate the usefulness and to evaluate the effectiveness of mobile phone camera applications in capturing health-related data, aiming for data quality and completeness as compared to current routine practices exercised by government officials. In this study, the concept of "data entry via phone image capture" (DEPIC) was introduced and developed to capture data directly from source documents. This case study was based on immunization history data recorded in a mother and child health (MCH) logbook. The MCH logbooks (kept by parents) were updated whenever parents brought their children to health care facilities for immunization. Traditionally, health providers are supposed to key in duplicate information of the immunization history of each child; both on the MCH logbook, which is returned to the parents, and on the individual immunization history card, which is kept at the health care unit to be subsequently entered into the electronic health care information system (HCIS). In this study, DEPIC utilized the photographic functionality of mobile phones to capture images of all immunization-history records on logbook pages and to transcribe these records directly into the database using a data-entry screen corresponding to logbook data records. DEPIC data were then compared with HCIS data-points for quality, completeness, and consistency. As a proof-of-concept, DEPIC captured immunization history records of 363 ethnic children living in remote areas from their MCH logbooks. Comparison of the 2 databases, DEPIC versus HCIS, revealed differences in the percentage of completeness and consistency of immunization history records. Comparing the records of each logbook in the DEPIC and HCIS databases, 17.3% (63/363) of children had complete immunization history records in the DEPIC database, but no complete records were reported in the HCIS database. Regarding the individual's actual vaccination dates, comparison of records taken from MCH logbook and those in the HCIS found that 24.2% (88/363) of the children's records were absolutely inconsistent. In addition, statistics derived from the DEPIC records showed a higher immunization coverage and much more compliance to immunization schedule by age group when compared to records derived from the HCIS database. DEPIC, or the concept of collecting data via image capture directly from their primary sources, has proven to be a useful data collection method in terms of completeness and consistency. In this study, DEPIC was implemented in data collection of a single survey. The DEPIC concept, however, can be easily applied in other types of survey research, for example, collecting data on changes or trends based on image evidence over time. With its image evidence and audit trail features, DEPIC has the potential for being used even in clinical studies since it could generate improved data integrity and more reliable statistics for use in both health care and research settings.
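
    A minimal sketch of the kind of completeness/consistency check described above, comparing records captured via image-based entry against a routine health information system export; the field names and data below are hypothetical, not the study's actual schema:

```python
# Hypothetical comparison of two exports of child immunization records.
# Field names (child_id, vaccine, date) are illustrative, not the study's schema.
from typing import Dict, Tuple

Record = Dict[Tuple[str, str], str]  # (child_id, vaccine) -> ISO date string

def compare_records(depic: Record, hcis: Record):
    """Count agreeing, date-inconsistent, and HCIS-missing entries."""
    complete = inconsistent = missing_in_hcis = 0
    for key, depic_date in depic.items():
        hcis_date = hcis.get(key)
        if hcis_date is None:
            missing_in_hcis += 1       # captured by image entry, absent in HCIS
        elif hcis_date != depic_date:
            inconsistent += 1          # both present but vaccination dates disagree
        else:
            complete += 1              # records agree
    return complete, inconsistent, missing_in_hcis

depic = {("C001", "BCG"): "2014-03-02", ("C001", "DTP1"): "2014-05-07"}
hcis = {("C001", "BCG"): "2014-03-02", ("C001", "DTP1"): "2014-06-01"}
print(compare_records(depic, hcis))    # -> (1, 1, 0)
```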

  20. 48 CFR 52.204-10 - Reporting Executive Compensation and First-Tier Subcontract Awards.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... System for Award Management (SAM) database (FAR provision 52.204-7), the Contractor shall report the... information from SAM and FPDS databases. If FPDS information is incorrect, the contractor should notify the contracting officer. If the SAM database information is incorrect, the contractor is responsible for...

  1. 48 CFR 52.204-10 - Reporting Executive Compensation and First-Tier Subcontract Awards.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... System for Award Management (SAM) database (FAR provision 52.204-7), the Contractor shall report the... information from SAM and FPDS databases. If FPDS information is incorrect, the contractor should notify the contracting officer. If the SAM database information is incorrect, the contractor is responsible for...

  2. A Parallel Relational Database Management System Approach to Relevance Feedback in Information Retrieval.

    ERIC Educational Resources Information Center

    Lundquist, Carol; Frieder, Ophir; Holmes, David O.; Grossman, David

    1999-01-01

    Describes a scalable, parallel, relational database-driven information retrieval engine. To support portability across a wide range of execution environments, all algorithms adhere to the SQL-92 standard. By incorporating relevance feedback algorithms, accuracy is enhanced over prior database-driven information retrieval efforts. Presents…

  3. 77 FR 47690 - 30-Day Notice of Proposed Information Collection: Civilian Response Corps Database In-Processing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-09

    ... DEPARTMENT OF STATE [Public Notice 7976] 30-Day Notice of Proposed Information Collection: Civilian Response Corps Database In-Processing Electronic Form, OMB Control Number 1405-0168, Form DS-4096.... Title of Information Collection: Civilian Response Corps Database In-Processing Electronic Form. OMB...

  4. Integrated Primary Care Information Database (IPCI)

    Cancer.gov

    The Integrated Primary Care Information Database is a longitudinal observational database that was created specifically for pharmacoepidemiological and pharmacoeconomic studies, including data from computer-based patient records supplied voluntarily by general practitioners.

  5. YAdumper: extracting and translating large information volumes from relational databases to structured flat files.

    PubMed

    Fernández, José M; Valencia, Alfonso

    2004-10-12

    Downloading the information stored in relational databases into XML and other flat formats is a common task in bioinformatics. This periodical dumping of information requires considerable CPU time, disk and memory resources. YAdumper has been developed as a purpose-specific tool to deal with the integral structured information download of relational databases. YAdumper is a Java application that organizes database extraction following an XML template based on an external Document Type Declaration. Compared with other non-native alternatives, YAdumper substantially reduces memory requirements and considerably improves writing performance.
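
    YAdumper itself is a Java tool driven by an XML template and an external Document Type Declaration; purely as an illustration of the underlying task (serialising relational rows into a structured flat file), a rough sketch using Python's standard sqlite3 and xml.etree modules is shown below. Table and column names are invented and do not reflect YAdumper's mechanism:

```python
# Illustrative only: dump rows of a relational table into a simple XML file.
# Table and column names are hypothetical; YAdumper's template mechanism differs.
import sqlite3
import xml.etree.ElementTree as ET

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE protein (id TEXT, name TEXT, length INTEGER)")
con.executemany("INSERT INTO protein VALUES (?, ?, ?)",
                [("P1", "kinase A", 312), ("P2", "kinase B", 287)])

root = ET.Element("proteins")
for pid, name, length in con.execute("SELECT id, name, length FROM protein"):
    entry = ET.SubElement(root, "protein", id=pid)
    ET.SubElement(entry, "name").text = name
    ET.SubElement(entry, "length").text = str(length)

ET.ElementTree(root).write("proteins.xml", encoding="utf-8", xml_declaration=True)
```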

  6. Online drug databases: a new method to assess and compare inclusion of clinically relevant information.

    PubMed

    Silva, Cristina; Fresco, Paula; Monteiro, Joaquim; Rama, Ana Cristina Ribeiro

    2013-08-01

    Evidence-Based Practice requires health care decisions to be based on the best available evidence. The model "Information Mastery" proposes that clinicians should use sources of information that have previously evaluated relevance and validity, provided at the point of care. Drug databases (DB) allow easy and fast access to information and have the benefit of more frequent content updates. Relevant information, in the context of drug therapy, is that which supports safe and effective use of medicines. Accordingly, the European Guideline on the Summary of Product Characteristics (EG-SmPC) was used as a standard to evaluate the inclusion of relevant information contents in DB. The objective was to develop and test a method to evaluate the relevancy of DB contents by assessing the inclusion of information items deemed relevant for effective and safe drug use. The method comprised: hierarchical organisation and selection of the principles defined in the EG-SmPC; definition of criteria to assess inclusion of selected information items; creation of a categorisation and quantification system that allows score calculation; calculation of relative differences (RD) of scores for comparison with an "ideal" database, defined as the one that achieves the best quantification possible for each of the information items; and a pilot test on a sample of 9 drug databases, using 10 drugs frequently associated in the literature with morbidity-mortality and also widely consumed in Portugal. Main outcome measure: individual and global scores for clinically relevant information items of drug monographs in databases, calculated using the categorisation and quantification system created. A--Method development: selection of sections, subsections, relevant information items and corresponding requisites; system to categorise and quantify their inclusion; score and RD calculation procedure. B--Pilot test: scores were calculated for the 9 databases; globally, all databases evaluated differed significantly from the "ideal" database; some DB performed better, but performance was inconsistent at subsection level within the same DB. The method developed allows quantification of the inclusion of relevant information items in DB and comparison with an "ideal" database. It is necessary to consult diverse DB in order to find all the relevant information needed to support clinical drug use.
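
    The scoring instrument itself is not reproduced in the abstract; as a loose illustration of the general idea (categorising and quantifying the inclusion of information items, then expressing each database's score as a relative difference from an "ideal" database), a sketch with invented items and weights:

```python
# Hypothetical scoring of drug databases against an "ideal" reference.
# Information items, categories and weights are invented for illustration only.
ITEMS = ["contraindications", "interactions", "dose adjustment", "use in pregnancy"]
WEIGHTS = {"absent": 0, "partial": 1, "complete": 2}     # categorisation scheme
IDEAL = {item: "complete" for item in ITEMS}             # best quantification possible

def score(ratings):
    return sum(WEIGHTS[ratings.get(item, "absent")] for item in ITEMS)

def relative_difference(ratings):
    ideal = score(IDEAL)
    return (ideal - score(ratings)) / ideal              # 0.0 means equal to the ideal

db_a = {"contraindications": "complete", "interactions": "partial",
        "dose adjustment": "partial", "use in pregnancy": "absent"}
print(score(db_a), relative_difference(db_a))            # -> 4 0.5
```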

  7. Database Systems. Course Three. Information Systems Curriculum.

    ERIC Educational Resources Information Center

    O'Neil, Sharon Lund; Everett, Donna R.

    This course is the third of seven in the Information Systems curriculum. The purpose of the course is to familiarize students with database management concepts and standard database management software. Databases and their roles, advantages, and limitations are explained. An overview of the course sets forth the condition and performance standard…

  8. 40 CFR 1400.13 - Read-only database.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 34 2012-07-01 2012-07-01 false Read-only database. 1400.13 Section... INFORMATION Other Provisions § 1400.13 Read-only database. The Administrator is authorized to establish... public off-site consequence analysis information by means of a central database under the control of the...

  9. Tourism through Travel Club: A Database Project

    ERIC Educational Resources Information Center

    Pratt, Renée M. E.; Smatt, Cindi T.; Wynn, Donald E.

    2017-01-01

    This applied database exercise utilizes a scenario-based case study to teach the basics of Microsoft Access and database management in introduction to information systems and introduction to database course. The case includes background information on a start-up business (i.e., Carol's Travel Club), description of functional business requirements,…

  10. 40 CFR 1400.13 - Read-only database.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 33 2014-07-01 2014-07-01 false Read-only database. 1400.13 Section... INFORMATION Other Provisions § 1400.13 Read-only database. The Administrator is authorized to establish... public off-site consequence analysis information by means of a central database under the control of the...

  11. 40 CFR 1400.13 - Read-only database.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 33 2011-07-01 2011-07-01 false Read-only database. 1400.13 Section... INFORMATION Other Provisions § 1400.13 Read-only database. The Administrator is authorized to establish... public off-site consequence analysis information by means of a central database under the control of the...

  12. 75 FR 4827 - Submission for OMB Review; Comment Request Clinical Trials Reporting Program (CTRP) Database (NCI)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-29

    ...; Comment Request Clinical Trials Reporting Program (CTRP) Database (NCI) Summary: Under the provisions of... Collection: Title: Clinical Trials Reporting Program (CTRP) Database. Type of Information Collection Request... Program (CTRP) Database, to serve as a single, definitive source of information about all NCI-supported...

  13. 40 CFR 1400.13 - Read-only database.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 34 2013-07-01 2013-07-01 false Read-only database. 1400.13 Section... INFORMATION Other Provisions § 1400.13 Read-only database. The Administrator is authorized to establish... public off-site consequence analysis information by means of a central database under the control of the...

  14. Marine and Hydrokinetic Data | Geospatial Data Science | NREL

    Science.gov Websites

    The U.S. Department of Energy's Marine and Hydrokinetic Technology Database provides information on marine and hydrokinetic energy projects; the database includes wave, tidal, current, and ocean thermal energy information. A wave energy resource assessment uses a 51-month Wavewatch III hindcast database.

  15. 19 CFR 351.304 - Establishing business proprietary treatment of information.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... information. 351.304 Section 351.304 Customs Duties INTERNATIONAL TRADE ADMINISTRATION, DEPARTMENT OF COMMERCE...) Electronic databases. In accordance with § 351.303(c)(3), an electronic database need not contain brackets... in the database. The public version of the database must be publicly summarized and ranged in...

  16. 19 CFR 351.304 - Establishing business proprietary treatment of information.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... information. 351.304 Section 351.304 Customs Duties INTERNATIONAL TRADE ADMINISTRATION, DEPARTMENT OF COMMERCE...) Electronic databases. In accordance with § 351.303(c)(3), an electronic database need not contain brackets... in the database. The public version of the database must be publicly summarized and ranged in...

  17. 19 CFR 351.304 - Establishing business proprietary treatment of information.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... information. 351.304 Section 351.304 Customs Duties INTERNATIONAL TRADE ADMINISTRATION, DEPARTMENT OF COMMERCE...) Electronic databases. In accordance with § 351.303(c)(3), an electronic database need not contain brackets... in the database. The public version of the database must be publicly summarized and ranged in...

  18. Mission and Assets Database

    NASA Technical Reports Server (NTRS)

    Baldwin, John; Zendejas, Silvino; Gutheinz, Sandy; Borden, Chester; Wang, Yeou-Fang

    2009-01-01

    Mission and Assets Database (MADB) Version 1.0 is an SQL database system with a Web user interface to centralize information. The database stores flight project support resource requirements, view periods, antenna information, schedule, and forecast results for use in mid-range and long-term planning of Deep Space Network (DSN) assets.

  19. Dynamic Financial Constraints: Distinguishing Mechanism Design from Exogenously Incomplete Regimes*

    PubMed Central

    Karaivanov, Alexander; Townsend, Robert M.

    2014-01-01

    We formulate and solve a range of dynamic models of constrained credit/insurance that allow for moral hazard and limited commitment. We compare them to full insurance and exogenously incomplete financial regimes (autarky, saving only, borrowing and lending in a single asset). We develop computational methods based on mechanism design, linear programming, and maximum likelihood to estimate, compare, and statistically test these alternative dynamic models with financial/information constraints. Our methods can use both cross-sectional and panel data and allow for measurement error and unobserved heterogeneity. We estimate the models using data on Thai households running small businesses from two separate samples. We find that in the rural sample, the exogenously incomplete saving only and borrowing regimes provide the best fit using data on consumption, business assets, investment, and income. Family and other networks help consumption smoothing there, as in a moral hazard constrained regime. In contrast, in urban areas, we find mechanism design financial/information regimes that are decidedly less constrained, with the moral hazard model fitting best combined business and consumption data. We perform numerous robustness checks in both the Thai data and in Monte Carlo simulations and compare our maximum likelihood criterion with results from other metrics and data not used in the estimation. A prototypical counterfactual policy evaluation exercise using the estimation results is also featured. PMID:25246710

  20. Improving Student Question Classification

    ERIC Educational Resources Information Center

    Heiner, Cecily; Zachary, Joseph L.

    2009-01-01

    Students in introductory programming classes often articulate their questions and information needs incompletely. Consequently, the automatic classification of student questions to provide automated tutorial responses is a challenging problem. This paper analyzes 411 questions from an introductory Java programming course by reducing the natural…

  1. 50 CFR 679.6 - Exempted fisheries.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    .... (iv) Experimental design (e.g., sampling procedures, the data and samples to be collected, and... Exempted fisheries. (a) General. For limited experimental purposes, the Regional Administrator may... basis of incomplete information or design flaws, the applicant will be provided an opportunity to...

  2. 50 CFR 679.6 - Exempted fisheries.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    .... (iv) Experimental design (e.g., sampling procedures, the data and samples to be collected, and... Exempted fisheries. (a) General. For limited experimental purposes, the Regional Administrator may... basis of incomplete information or design flaws, the applicant will be provided an opportunity to...

  3. 50 CFR 679.6 - Exempted fisheries.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    .... (iv) Experimental design (e.g., sampling procedures, the data and samples to be collected, and... Exempted fisheries. (a) General. For limited experimental purposes, the Regional Administrator may... basis of incomplete information or design flaws, the applicant will be provided an opportunity to...

  4. 50 CFR 679.6 - Exempted fisheries.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    .... (iv) Experimental design (e.g., sampling procedures, the data and samples to be collected, and... Exempted fisheries. (a) General. For limited experimental purposes, the Regional Administrator may... basis of incomplete information or design flaws, the applicant will be provided an opportunity to...

  5. 50 CFR 679.6 - Exempted fisheries.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    .... (iv) Experimental design (e.g., sampling procedures, the data and samples to be collected, and... Exempted fisheries. (a) General. For limited experimental purposes, the Regional Administrator may... basis of incomplete information or design flaws, the applicant will be provided an opportunity to...

  6. Technology Assessment for Powertrain Components Final Report CRADA No. TC-1124-95

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tokarz, F.; Gough, C.

    LLNL utilized its defense technology assessment methodologies in combination with its capabilities in the energy, manufacturing, and transportation technologies to demonstrate a methodology that synthesized available but incomplete information on advanced automotive technologies into a comprehensive framework.

  7. Household Products Database: Personal Care

    MedlinePlus

    Information is extracted from the Consumer Product Information Database ©2001-2018 by DeLima Associates. All rights reserved.

  8. Household Products Database: Pesticides

    MedlinePlus

    Information is extracted from the Consumer Product Information Database ©2001-2018 by DeLima Associates. All rights reserved.

  9. MRNIDX - Marine Data Index: Database Description, Operation, Retrieval, and Display

    USGS Publications Warehouse

    Paskevich, Valerie F.

    1982-01-01

    A database referencing the location and content of data stored on magnetic medium was designed to assist in the indexing of time-series and spatially dependent marine geophysical data collected or processed by the U. S. Geological Survey. The database was designed and created for input to the Geologic Retrieval and Synopsis Program (GRASP) to allow selective retrievals of information pertaining to location of data, data format, cruise, geographical bounds and collection dates of data. This information is then used to locate the stored data for administrative purposes or further processing. Database utilization is divided into three distinct operations. The first is the inventorying of the data and the updating of the database, the second is the retrieval of information from the database, and the third is the graphic display of the geographical boundaries to which the retrieved information pertains.

  10. An online database for informing ecological network models: http://kelpforest.ucsc.edu.

    PubMed

    Beas-Luna, Rodrigo; Novak, Mark; Carr, Mark H; Tinker, Martin T; Black, August; Caselle, Jennifer E; Hoban, Michael; Malone, Dan; Iles, Alison

    2014-01-01

    Ecological network models and analyses are recognized as valuable tools for understanding the dynamics and resiliency of ecosystems, and for informing ecosystem-based approaches to management. However, few databases exist that can provide the life history, demographic and species interaction information necessary to parameterize ecological network models. Faced with the difficulty of synthesizing the information required to construct models for kelp forest ecosystems along the West Coast of North America, we developed an online database (http://kelpforest.ucsc.edu/) to facilitate the collation and dissemination of such information. Many of the database's attributes are novel, yet the structure is applicable and adaptable to other ecosystem modeling efforts. Information for each taxonomic unit includes stage-specific life history, demography, and body-size allometries. Species interactions include trophic, competitive, facilitative, and parasitic forms. Each data entry is temporally and spatially explicit. The online data entry interface allows researchers anywhere to contribute and access information. Quality control is facilitated by attributing each entry to unique contributor identities and source citations. The database has proven useful as an archive of species and ecosystem-specific information in the development of several ecological network models, for informing management actions, and for education purposes (e.g., undergraduate and graduate training). To facilitate adaptation of the database by other researchers for other ecosystems, the code and technical details on how to customize this database and apply it to other ecosystems are freely available at the following link (https://github.com/kelpforest-cameo/databaseui).
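
    As a rough sketch of the kind of relational structure the abstract describes (stage-specific taxon attributes, typed species interactions, and contributor/citation fields for quality control), the following hypothetical, much-simplified schema is illustrative only and is not the actual kelpforest.ucsc.edu schema:

```python
# Hypothetical, much-simplified schema in the spirit of the database described above:
# stage-specific taxon attributes, typed species interactions, and contributor/citation
# fields for quality control. Not the actual kelpforest.ucsc.edu schema.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE taxon (
    taxon_id INTEGER PRIMARY KEY,
    name     TEXT NOT NULL,
    stage    TEXT                       -- e.g. larva, juvenile, adult
);
CREATE TABLE interaction (
    interaction_id INTEGER PRIMARY KEY,
    subject_id     INTEGER REFERENCES taxon(taxon_id),
    object_id      INTEGER REFERENCES taxon(taxon_id),
    kind           TEXT,                -- trophic, competitive, facilitative, parasitic
    site           TEXT,                -- spatially explicit
    observed_year  INTEGER,             -- temporally explicit
    contributor    TEXT,                -- who entered the record (quality control)
    citation       TEXT                 -- source of the record (quality control)
);
""")
con.execute("INSERT INTO taxon VALUES (1, 'Macrocystis pyrifera', 'adult')")
con.execute("INSERT INTO taxon VALUES (2, 'Strongylocentrotus purpuratus', 'adult')")
con.execute("INSERT INTO interaction VALUES "
            "(1, 2, 1, 'trophic', 'hypothetical site', 2012, 'contributor-001', 'hypothetical citation')")
```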

  11. An Online Database for Informing Ecological Network Models: http://kelpforest.ucsc.edu

    PubMed Central

    Beas-Luna, Rodrigo; Novak, Mark; Carr, Mark H.; Tinker, Martin T.; Black, August; Caselle, Jennifer E.; Hoban, Michael; Malone, Dan; Iles, Alison

    2014-01-01

    Ecological network models and analyses are recognized as valuable tools for understanding the dynamics and resiliency of ecosystems, and for informing ecosystem-based approaches to management. However, few databases exist that can provide the life history, demographic and species interaction information necessary to parameterize ecological network models. Faced with the difficulty of synthesizing the information required to construct models for kelp forest ecosystems along the West Coast of North America, we developed an online database (http://kelpforest.ucsc.edu/) to facilitate the collation and dissemination of such information. Many of the database's attributes are novel, yet the structure is applicable and adaptable to other ecosystem modeling efforts. Information for each taxonomic unit includes stage-specific life history, demography, and body-size allometries. Species interactions include trophic, competitive, facilitative, and parasitic forms. Each data entry is temporally and spatially explicit. The online data entry interface allows researchers anywhere to contribute and access information. Quality control is facilitated by attributing each entry to unique contributor identities and source citations. The database has proven useful as an archive of species and ecosystem-specific information in the development of several ecological network models, for informing management actions, and for education purposes (e.g., undergraduate and graduate training). To facilitate adaptation of the database by other researchers for other ecosystems, the code and technical details on how to customize this database and apply it to other ecosystems are freely available at the following link (https://github.com/kelpforest-cameo/databaseui). PMID:25343723

  12. An online database for informing ecological network models: http://kelpforest.ucsc.edu

    USGS Publications Warehouse

    Beas-Luna, Rodrigo; Tinker, M. Tim; Novak, Mark; Carr, Mark H.; Black, August; Caselle, Jennifer E.; Hoban, Michael; Malone, Dan; Iles, Alison C.

    2014-01-01

    Ecological network models and analyses are recognized as valuable tools for understanding the dynamics and resiliency of ecosystems, and for informing ecosystem-based approaches to management. However, few databases exist that can provide the life history, demographic and species interaction information necessary to parameterize ecological network models. Faced with the difficulty of synthesizing the information required to construct models for kelp forest ecosystems along the West Coast of North America, we developed an online database (http://kelpforest.ucsc.edu/) to facilitate the collation and dissemination of such information. Many of the database's attributes are novel, yet the structure is applicable and adaptable to other ecosystem modeling efforts. Information for each taxonomic unit includes stage-specific life history, demography, and body-size allometries. Species interactions include trophic, competitive, facilitative, and parasitic forms. Each data entry is temporally and spatially explicit. The online data entry interface allows researchers anywhere to contribute and access information. Quality control is facilitated by attributing each entry to unique contributor identities and source citations. The database has proven useful as an archive of species and ecosystem-specific information in the development of several ecological network models, for informing management actions, and for education purposes (e.g., undergraduate and graduate training). To facilitate adaptation of the database by other researchers for other ecosystems, the code and technical details on how to customize this database and apply it to other ecosystems are freely available at the following link (https://github.com/kelpforest-cameo/databaseui).

  13. Compliance of systematic reviews in veterinary journals with Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) literature search reporting guidelines.

    PubMed

    Toews, Lorraine C

    2017-07-01

    Complete, accurate reporting of systematic reviews facilitates assessment of how well reviews have been conducted. The primary objective of this study was to examine compliance of systematic reviews in veterinary journals with Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) guidelines for literature search reporting and to examine the completeness, bias, and reproducibility of the searches in these reviews from what was reported. The second objective was to examine reporting of the credentials and contributions of those involved in the search process. A sample of systematic reviews or meta-analyses published in veterinary journals between 2011 and 2015 was obtained by searching PubMed. Reporting in the full text of each review was checked against certain PRISMA checklist items. Over one-third of reviews (37%) did not search the CAB Abstracts database, and 9% of reviews searched only 1 database. Over two-thirds of reviews (65%) did not report any search for grey literature or stated that they excluded grey literature. The majority of reviews (95%) did not report a reproducible search strategy. Most reviews had significant deficiencies in reporting the search process that raise questions about how these searches were conducted and ultimately cast serious doubts on the validity and reliability of reviews based on a potentially biased and incomplete body of literature. These deficiencies also highlight the need for veterinary journal editors and publishers to be more rigorous in requiring adherence to PRISMA guidelines and to encourage veterinary researchers to include librarians or information specialists on systematic review teams to improve the quality and reporting of searches.

  14. The Ocean Observatories Initiative: Data pre-Processing: Diagnostic Tools to Prepare Data for QA/QC Processing.

    NASA Astrophysics Data System (ADS)

    Belabbassi, L.; Garzio, L. M.; Smith, M. J.; Knuth, F.; Vardaro, M.; Kerfoot, J.

    2016-02-01

    The Ocean Observatories Initiative (OOI), funded by the National Science Foundation, provides users with access to long-term datasets from a variety of deployed oceanographic sensors. The Pioneer Array in the Atlantic Ocean off the Coast of New England hosts 10 moorings and 6 gliders. Each mooring is outfitted with 6 to 19 different instruments telemetering more than 1000 data streams. These data are available to science users to collaborate on common scientific goals such as water quality monitoring and scale variability measures of continental shelf processes and coastal open ocean exchanges. To serve this purpose, the acquired datasets undergo an iterative multi-step quality assurance and quality control procedure automated to work with all types of data. Data processing involves several stages, including a fundamental pre-processing step when the data are prepared for processing. This takes a considerable amount of processing time and is often not given enough thought in development initiatives. The volume and complexity of OOI data necessitates the development of a systematic diagnostic tool to enable the management of a comprehensive data information system for the OOI arrays. We present two examples to demonstrate the current OOI pre-processing diagnostic tool. First, Data Filtering is used to identify incomplete, incorrect, or irrelevant parts of the data and then replaces, modifies or deletes the coarse data. This provides data consistency with similar datasets in the system. Second, Data Normalization occurs when the database is organized in fields and tables to minimize redundancy and dependency. At the end of this step, the data are stored in one place to reduce the risk of data inconsistency and promote easy and efficient mapping to the database.
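
    A toy sketch of the two pre-processing steps described above, data filtering (dropping incomplete or irrelevant records) followed by data normalization (organising the retained records into non-redundant tables); field names, values and the required-field rule are invented:

```python
# Illustrative pre-processing: filter incomplete records, then normalize what remains.
# Field names, values and the required-field rule are hypothetical, not OOI's.
raw_records = [
    {"instrument": "CTD-01", "time": "2015-06-01T00:00Z", "temperature": 8.4},
    {"instrument": "CTD-01", "time": None,                "temperature": 8.6},   # incomplete
    {"instrument": "CTD-02", "time": "2015-06-01T00:00Z", "temperature": None},  # incomplete
]

# Step 1: data filtering -- drop records missing any required field.
required = ("instrument", "time", "temperature")
clean = [r for r in raw_records if all(r.get(k) is not None for k in required)]

# Step 2: data normalization -- store each instrument once and reference it by id,
# reducing redundancy and the risk of inconsistent copies.
instruments = {name: i for i, name in enumerate(sorted({r["instrument"] for r in clean}))}
measurements = [(instruments[r["instrument"]], r["time"], r["temperature"]) for r in clean]

print(instruments)    # {'CTD-01': 0}
print(measurements)   # [(0, '2015-06-01T00:00Z', 8.4)]
```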

  15. Botany, ethnomedicines, phytochemistry and pharmacology of Himalayan paeony (Paeonia emodi Royle.).

    PubMed

    Ahmad, Mushtaq; Malik, Khafsa; Tariq, Akash; Zhang, Guolin; Yaseen, Ghulam; Rashid, Neelam; Sultana, Shazia; Zafar, Muhammad; Ullah, Kifayat; Khan, Muhammad Pukhtoon Zada

    2018-06-28

    Himalayan paeony (Paeonia emodi Royle.) is an important species used to treat various diseases. This study aimed to compile the detailed traditional medicinal uses, phytochemistry, pharmacology and toxicological investigations on P. emodi. This study also highlights taxonomic validity, quality of experimental designs and shortcomings in previously reported information on Himalayan paeony. The data were extracted from unpublished theses (Pakistan, China, India and Nepal) and from published research articles on pharmacology, phytochemistry and antimicrobial activities, retrieved from different databases through specific keywords. The information recorded included medicinal uses, taxonomic/common names, part used, collection and identification source, authentication, voucher specimen number, plant extracts and their characterization, isolation and identification of phytochemicals, method of study (in silico, in vivo or in vitro), model organism used, dose and duration, minimal active concentration, zone of inhibition (antimicrobial studies), bioactive compound(s), mechanism of action on single or multiple targets, and toxicological information. P. emodi is reported for diverse medicinal uses with pharmacological properties like antioxidant, nephroprotective, lipoxygenase inhibitory, cognition and oxidative stress release, cytotoxic, anti-inflammatory, antiepileptic, anticonvulsant, haemagglutination, alpha-chymotrypsin inhibitory, hepatoprotective, hepatic chromes and pharmacokinetics of carbamazepine expression, β-glucuronidase inhibitory, spasmolytic and spasmogenic, and airway relaxant. Regarding taxonomic validity, only 10% of studies used the correct taxonomic name, while 90% used incorrect taxonomic, pharmacopeial or common names. The literature reviewed shows a lack of collection source (11 reports), lack of a proper source of identification (15 reports), 33 studies without a voucher specimen number, 26 reports lacking information on authentic herbarium submission, and most studies (90%) without validation of taxonomic names using recognized databases. In the reported methods, 67% of studies lacked characterization of extracts, 25% lacked a proper dose, 40% lacked duration, and 31% lacked information on proper controls. Similarly, only 18% of studies reported the active compound(s) responsible for pharmacological activities, 14% reported a minimal active concentration, only 2.5% reported a mechanism of action on the target, and none of the reports used an in silico approach. P. emodi is endemic to the Himalayan region (Pakistan, China, India and Nepal) with diverse traditional therapeutic uses. The majority of reviewed studies showed confusion about its taxonomic validity, incomplete methodologies and ambiguous findings. Keeping in view the immense uses of P. emodi in various traditional medicinal systems, holistic pharmacological approaches in combination with reverse pharmacology, systems biology, and "omics" technologies are recommended to improve the quality of research, leading to natural drug discovery and development from a global perspective. Copyright © 2018 Elsevier B.V. All rights reserved.

  16. Creating databases for biological information: an introduction.

    PubMed

    Stein, Lincoln

    2013-06-01

    The essence of bioinformatics is dealing with large quantities of information. Whether it be sequencing data, microarray data files, mass spectrometric data (e.g., fingerprints), the catalog of strains arising from an insertional mutagenesis project, or even large numbers of PDF files, there inevitably comes a time when the information can simply no longer be managed with files and directories. This is where databases come into play. This unit briefly reviews the characteristics of several database management systems, including flat file, indexed file, relational databases, and NoSQL databases. It compares their strengths and weaknesses and offers some general guidelines for selecting an appropriate database management system. Copyright 2013 by John Wiley & Sons, Inc.
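
    As a quick illustration of the shift the unit describes, from managing information in files and directories to querying a database, a minimal example using Python's built-in sqlite3 module; the schema and values are invented:

```python
# Minimal example of moving from flat files to a queryable relational store.
# The schema and values are invented for illustration.
import sqlite3

con = sqlite3.connect("strains.db")   # creates the database file if absent
con.execute("""CREATE TABLE IF NOT EXISTS strain (
                   strain_id TEXT PRIMARY KEY,
                   gene      TEXT,
                   phenotype TEXT)""")
con.execute("INSERT OR REPLACE INTO strain VALUES (?, ?, ?)",
            ("mut-17", "unc-22", "twitcher"))
con.commit()

# Instead of grepping through files and directories, retrieval becomes a query:
for row in con.execute("SELECT strain_id, phenotype FROM strain WHERE gene = ?", ("unc-22",)):
    print(row)
con.close()
```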

  17. 48 CFR 52.232-33 - Payment by Electronic Funds Transfer-System for Award Management.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... contained in the System for Award Management (SAM) database. In the event that the EFT information changes, the Contractor shall be responsible for providing the updated information to the SAM database. (c... 210. (d) Suspension of payment. If the Contractor's EFT information in the SAM database is incorrect...

  18. 48 CFR 52.232-33 - Payment by Electronic Funds Transfer-System for Award Management.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... contained in the System for Award Management (SAM) database. In the event that the EFT information changes, the Contractor shall be responsible for providing the updated information to the SAM database. (c... 210. (d) Suspension of payment. If the Contractor's EFT information in the SAM database is incorrect...

  19. 48 CFR 52.232-33 - Payment by Electronic Funds Transfer-Central Contractor Registration.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... contained in the Central Contractor Registration (CCR) database. In the event that the EFT information changes, the Contractor shall be responsible for providing the updated information to the CCR database. (c... 210. (d) Suspension of payment. If the Contractor's EFT information in the CCR database is incorrect...

  20. The California Oak Disease and Arthropod (CODA) Database

    Treesearch

    Tedmund J. Swiecki; Elizabeth A. Bernhardt; Richard A. Arnold

    1997-01-01

    The California Oak Disease and Arthropod (CODA) host index database is a compilation of information on agents that colonize or feed on oaks in California. Agents in the database include plant-feeding insects and mites, nematodes, microorganisms, viruses, and abiotic disease agents. CODA contains summarized information on hosts, agents, information sources, and the...

  1. RDIS: The Rabies Disease Information System.

    PubMed

    Dharmalingam, Baskeran; Jothi, Lydia

    2015-01-01

    Rabies is a deadly viral disease causing acute inflammation or encephalitis of the brain in human beings and other mammals. Therefore, it is of interest to collect information related to the disease from several sources, including known literature databases, for further analysis and interpretation. Hence, we describe the development of a database called the Rabies Disease Information System (RDIS) for this purpose. The online database describes the etiology, epidemiology, pathogenesis and pathology of the disease using diagrammatic representations. It provides information on several carriers of the rabies virus, such as dogs, bats, foxes and civets, and their distributions around the world. Information related to the urban and sylvatic cycles of transmission of the virus is also made available. The database also contains information related to available diagnostic methods and vaccines for humans and other animals. This information is of use to medical, veterinary and paramedical practitioners, students, researchers, pet owners, animal lovers, livestock handlers, travelers and many others. The database is available for free at http://rabies.mscwbif.org/home.html.

  2. Databases and Associated Tools for Glycomics and Glycoproteomics.

    PubMed

    Lisacek, Frederique; Mariethoz, Julien; Alocci, Davide; Rudd, Pauline M; Abrahams, Jodie L; Campbell, Matthew P; Packer, Nicolle H; Ståhle, Jonas; Widmalm, Göran; Mullen, Elaine; Adamczyk, Barbara; Rojas-Macias, Miguel A; Jin, Chunsheng; Karlsson, Niclas G

    2017-01-01

    The access to biodatabases for glycomics and glycoproteomics has proven to be essential for current glycobiological research. This chapter presents available databases that are devoted to different aspects of glycobioinformatics. This includes oligosaccharide sequence databases, experimental databases, 3D structure databases (of both glycans and glycorelated proteins) and association of glycans with tissue, disease, and proteins. Specific search protocols are also provided using tools associated with experimental databases for converting primary glycoanalytical data to glycan structural information. In particular, researchers using glycoanalysis methods by U/HPLC (GlycoBase), MS (GlycoWorkbench, UniCarb-DB, GlycoDigest), and NMR (CASPER) will benefit from this chapter. In addition we also include information on how to utilize glycan structural information to query databases that associate glycans with proteins (UniCarbKB) and with interactions with pathogens (SugarBind).

  3. Practice databases and their uses in clinical research.

    PubMed

    Tierney, W M; McDonald, C J

    1991-04-01

    A few large clinical information databases have been established within larger medical information systems. Although they are smaller than claims databases, these clinical databases offer several advantages: accurate and timely data, rich clinical detail, and continuous parameters (for example, vital signs and laboratory results). However, the nature of the data vary considerably, which affects the kinds of secondary analyses that can be performed. These databases have been used to investigate clinical epidemiology, risk assessment, post-marketing surveillance of drugs, practice variation, resource use, quality assurance, and decision analysis. In addition, practice databases can be used to identify subjects for prospective studies. Further methodologic developments are necessary to deal with the prevalent problems of missing data and various forms of bias if such databases are to grow and contribute valuable clinical information.

  4. 16 CFR § 1102.4 - Scope.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE Background and Definitions § 1102.4 Scope. This part... Product Safety Information Database, including all information published therein. ...

  5. E-MSD: an integrated data resource for bioinformatics

    PubMed Central

    Velankar, S.; McNeil, P.; Mittard-Runte, V.; Suarez, A.; Barrell, D.; Apweiler, R.; Henrick, K.

    2005-01-01

    The Macromolecular Structure Database (MSD) group (http://www.ebi.ac.uk/msd/) continues to enhance the quality and consistency of macromolecular structure data in the worldwide Protein Data Bank (wwPDB) and to work towards the integration of various bioinformatics data resources. One of the major obstacles to the improved integration of structural databases such as MSD and sequence databases like UniProt is the absence of up to date and well-maintained mapping between corresponding entries. We have worked closely with the UniProt group at the EBI to clean up the taxonomy and sequence cross-reference information in the MSD and UniProt databases. This information is vital for the reliable integration of the sequence family databases such as Pfam and Interpro with the structure-oriented databases of SCOP and CATH. This information has been made available to the eFamily group (http://www.efamily.org.uk/) and now forms the basis of the regular interchange of information between the member databases (MSD, UniProt, Pfam, Interpro, SCOP and CATH). This exchange of annotation information has enriched the structural information in the MSD database with annotation from wider sequence-oriented resources. This work was carried out under the ‘Structure Integration with Function, Taxonomy and Sequences (SIFTS)’ initiative (http://www.ebi.ac.uk/msd-srv/docs/sifts) in the MSD group. PMID:15608192

  6. Applying World Wide Web technology to the study of patients with rare diseases.

    PubMed

    de Groen, P C; Barry, J A; Schaller, W J

    1998-07-15

    Randomized, controlled trials of sporadic diseases are rarely conducted. Recent developments in communication technology, particularly the World Wide Web, allow efficient dissemination and exchange of information. However, software for the identification of patients with a rare disease and subsequent data entry and analysis in a secure Web database are currently not available. To study cholangiocarcinoma, a rare cancer of the bile ducts, we developed a computerized disease tracing system coupled with a database accessible on the Web. The tracing system scans computerized information systems on a daily basis and forwards demographic information on patients with bile duct abnormalities to an electronic mailbox. If informed consent is given, the patient's demographic and preexisting medical information available in medical database servers are electronically forwarded to a UNIX research database. Information from further patient-physician interactions and procedures is also entered into this database. The database is equipped with a Web user interface that allows data entry from various platforms (PC-compatible, Macintosh, and UNIX workstations) anywhere inside or outside our institution. To ensure patient confidentiality and data security, the database includes all security measures required for electronic medical records. The combination of a Web-based disease tracing system and a database has broad applications, particularly for the integration of clinical research within clinical practice and for the coordination of multicenter trials.

  7. Joint analysis of epistemic and aleatory uncertainty in stability analysis for geo-hazard assessments

    NASA Astrophysics Data System (ADS)

    Rohmer, Jeremy; Verdel, Thierry

    2017-04-01

    Uncertainty analysis is an unavoidable task in the stability analysis of any geotechnical system. Such analysis usually relies on the safety factor SF: if SF is below some specified threshold, failure is possible. The objective of the stability analysis is then to estimate the failure probability P that SF falls below the specified threshold. When dealing with uncertainties, two facets should be considered, as outlined by several authors in the domain of geotechnics, namely "aleatoric uncertainty" (also named "randomness" or "intrinsic variability") and "epistemic uncertainty" (i.e. when facing "vague, incomplete or imprecise information" such as limited databases and observations or "imperfect" modelling). The benefits of separating both facets of uncertainty can be seen from a risk management perspective because: - Aleatoric uncertainty, being a property of the system under study, cannot be reduced. However, practical actions can be taken to circumvent the potentially dangerous effects of such variability; - Epistemic uncertainty, being due to the incomplete/imprecise nature of available information, can be reduced by, e.g., increasing the number of tests (lab or in situ survey), improving the measurement methods, evaluating calculation procedures with model tests, or confronting more information sources (expert opinions, data from literature, etc.). Uncertainty treatment in stability analysis usually restricts itself to the probabilistic framework to represent both facets of uncertainty. Yet, in the domain of geo-hazard assessments (like landslides, mine pillar collapse, rockfalls, etc.), the validity of this approach can be debatable. In the present communication, we propose to review the major criticisms available in the literature against the systematic use of probability in situations of a high degree of uncertainty. On this basis, the feasibility of using a more flexible uncertainty representation tool is then investigated, namely possibility distributions (e.g., Baudrit et al., 2007), for geo-hazard assessments. A graphical tool is then developed to explore: 1. the contribution of both types of uncertainty, aleatoric and epistemic; 2. the regions of the imprecise or random parameters which contribute the most to the imprecision on the failure probability P. The method is applied to two case studies (a mine pillar and a steep slope stability analysis; Rohmer and Verdel, 2014) to investigate the necessity of extra data acquisition on parameters whose imprecision can hardly be modelled by probabilities due to the scarcity of the available information (respectively the extraction ratio and the cliff geometry). References: Baudrit, C., Couso, I., & Dubois, D. (2007). Joint propagation of probability and possibility in risk analysis: Towards a formal framework. International Journal of Approximate Reasoning, 45(1), 82-105. Rohmer, J., & Verdel, T. (2014). Joint exploration of regional importance of possibilistic and probabilistic uncertainty in stability analysis. Computers and Geotechnics, 61, 308-315.
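
    The abstract gives no formulas; as a loose, invented illustration of why separating the two facets matters, the sketch below propagates an aleatory variable probabilistically while sweeping an imprecisely known (epistemic) parameter over an interval, so the failure probability is reported as a bound rather than a single number. The toy safety-factor model, distributions and interval are not from the study:

```python
# Toy separation of aleatory and epistemic uncertainty for a safety factor SF.
# The SF model, the cohesion distribution and the interval bounds are invented.
import random

random.seed(0)

def failure_probability(extraction_ratio, n=20_000):
    """Aleatory part: cohesion varies randomly; count how often SF < 1."""
    failures = 0
    for _ in range(n):
        cohesion = random.gauss(1.2, 0.2)          # aleatory (intrinsic) variability
        sf = cohesion * (1.0 - extraction_ratio)   # toy safety-factor model
        if sf < 1.0:
            failures += 1
    return failures / n

# Epistemic part: the extraction ratio is only known to lie in an interval,
# so P(SF < 1) is reported as a bound rather than a single value.
p_low = failure_probability(extraction_ratio=0.05)
p_high = failure_probability(extraction_ratio=0.15)
print(f"P(SF < 1) lies roughly in [{p_low:.3f}, {p_high:.3f}]")
```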

  8. Understanding youthful risk taking and driving : database report

    DOT National Transportation Integrated Search

    1995-11-01

    This report catalogs national databases that contain information about adolescents and risk taking behaviors. It contains descriptions of the major areas, unique characteristics, and risk-related aspects of each database. Detailed information is prov...

  9. Understanding Youthful Risk Taking and Driving: Database Report

    DOT National Transportation Integrated Search

    1995-11-01

    This report catalogs national databases that contain information about adolescents and risk taking behaviors. It contains descriptions of the major areas, unique characteristics, and risk-related aspects of each database. Detailed information is prov...

  10. Distribution Grid Integration Unit Cost Database | Solar Research | NREL

    Science.gov Websites

    NREL's Distribution Grid Integration Unit Cost Database contains unit cost information for different components that may be associated with integrating PV onto the distribution grid. It includes information from the California utility unit cost guides.

  11. Information Literacy Skills: Comparing and Evaluating Databases

    ERIC Educational Resources Information Center

    Grismore, Brian A.

    2012-01-01

    The purpose of this database comparison is to express the importance of teaching information literacy skills and to apply those skills to commonly used Internet-based research tools. This paper includes a comparison and evaluation of three databases (ProQuest, ERIC, and Google Scholar). It includes strengths and weaknesses of each database based…

  12. 24 CFR 81.72 - Public-use database and public information.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 1 2012-04-01 2012-04-01 false Public-use database and public... Public-use database and public information. (a) General. Except as provided in paragraph (c) of this section, the Secretary shall establish and make available for public use, a public-use database containing...

  13. 24 CFR 81.72 - Public-use database and public information.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 24 Housing and Urban Development 1 2013-04-01 2013-04-01 false Public-use database and public... Public-use database and public information. (a) General. Except as provided in paragraph (c) of this section, the Secretary shall establish and make available for public use, a public-use database containing...

  14. 24 CFR 81.72 - Public-use database and public information.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 1 2010-04-01 2010-04-01 false Public-use database and public... Public-use database and public information. (a) General. Except as provided in paragraph (c) of this section, the Secretary shall establish and make available for public use, a public-use database containing...

  15. 24 CFR 81.72 - Public-use database and public information.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 24 Housing and Urban Development 1 2014-04-01 2014-04-01 false Public-use database and public... Public-use database and public information. (a) General. Except as provided in paragraph (c) of this section, the Secretary shall establish and make available for public use, a public-use database containing...

  16. 24 CFR 81.72 - Public-use database and public information.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 24 Housing and Urban Development 1 2011-04-01 2011-04-01 false Public-use database and public... Public-use database and public information. (a) General. Except as provided in paragraph (c) of this section, the Secretary shall establish and make available for public use, a public-use database containing...

  17. Fish Karyome: A karyological information network database of Indian Fishes.

    PubMed

    Nagpure, Naresh Sahebrao; Pathak, Ajey Kumar; Pati, Rameshwar; Singh, Shri Prakash; Singh, Mahender; Sarkar, Uttam Kumar; Kushwaha, Basdeo; Kumar, Ravindra

    2012-01-01

    'Fish Karyome', a database of karyological information on Indian fishes, has been developed to serve as a central source of karyotype data about Indian fishes compiled from the published literature. Fish Karyome is intended to serve as a liaison tool for researchers; it contains karyological information on 171 of the 2438 finfish species reported in India and is publicly available via the World Wide Web. The database provides information on chromosome number, morphology, sex chromosomes, karyotype formula, cytogenetic markers, etc. Additionally, it provides phenotypic information that includes the species name, its classification, locality of sample collection, common name, local name, sex, geographical distribution, and IUCN Red List status. Fish and karyotype images and references for the 171 finfish species have also been included in the database. Fish Karyome was developed using SQL Server 2008, a relational database management system, Microsoft's ASP.NET 2008, and Macromedia's FLASH technology under the Windows 7 operating environment. The system also enables users to input new information and images into the database and to search and view the information and images of interest using various search options. Fish Karyome has a wide range of applications in species characterization and identification, sex determination, chromosomal mapping, karyo-evolution, and the systematics of fishes.

  18. Protein Information Resource: a community resource for expert annotation of protein data

    PubMed Central

    Barker, Winona C.; Garavelli, John S.; Hou, Zhenglin; Huang, Hongzhan; Ledley, Robert S.; McGarvey, Peter B.; Mewes, Hans-Werner; Orcutt, Bruce C.; Pfeiffer, Friedhelm; Tsugita, Akira; Vinayaka, C. R.; Xiao, Chunlin; Yeh, Lai-Su L.; Wu, Cathy

    2001-01-01

    The Protein Information Resource, in collaboration with the Munich Information Center for Protein Sequences (MIPS) and the Japan International Protein Information Database (JIPID), produces the most comprehensive and expertly annotated protein sequence database in the public domain, the PIR-International Protein Sequence Database. To provide timely and high quality annotation and promote database interoperability, the PIR-International employs rule-based and classification-driven procedures based on controlled vocabulary and standard nomenclature and includes status tags to distinguish experimentally determined from predicted protein features. The database contains about 200 000 non-redundant protein sequences, which are classified into families and superfamilies and their domains and motifs identified. Entries are extensively cross-referenced to other sequence, classification, genome, structure and activity databases. The PIR web site features search engines that use sequence similarity and database annotation to facilitate the analysis and functional identification of proteins. The PIR-International databases and search tools are accessible on the PIR web site at http://pir.georgetown.edu/ and at the MIPS web site at http://www.mips.biochem.mpg.de. The PIR-International Protein Sequence Database and other files are also available by FTP. PMID:11125041

  19. Establishment of an international database for genetic variants in esophageal cancer.

    PubMed

    Vihinen, Mauno

    2016-10-01

    The establishment of a database has been suggested in order to collect, organize, and distribute genetic information about esophageal cancer. The World Organization for Specialized Studies on Diseases of the Esophagus and the Human Variome Project will be in charge of a central database of information about esophageal cancer-related variations from publications, databases, and laboratories; in addition to genetic details, clinical parameters will also be included. The aim will be to get all the central players in research, clinical, and commercial laboratories to contribute. The database will follow established recommendations and guidelines. The database will require a team of dedicated curators with different backgrounds. Numerous layers of systematics will be applied to facilitate computational analyses. The data items will be extensively integrated with other information sources. The database will be distributed as open access to ensure exchange of the data with other databases. Variations will be reported in relation to reference sequences on three levels (DNA, RNA, and protein) whenever applicable. In the first phase, the database will concentrate on genetic variations including both somatic and germline variations for susceptibility genes. Additional types of information can be integrated at a later stage. © 2016 New York Academy of Sciences.

  20. 16 CFR 1102.4 - Scope.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... CONSUMER PRODUCT SAFETY INFORMATION DATABASE (Eff. Jan. 10, 2011) Background and Definitions § 1102.4 Scope... Available Consumer Product Safety Information Database, including all information published therein. ...

  1. Chinese herbal medicines for hypercholesterolemia

    PubMed Central

    Liu, Zhao Lan; Liu, Jian Ping; Zhang, Anthony Lin; Wu, Qiong; Ruan, Yao; Lewith, George; Visconte, Denise

    2011-01-01

    Background Hypercholesterolemia is an important contributory factor for ischemic heart disease and is associated with age, high blood pressure, a family history of hypercholesterolemia, and diabetes. Chinese herbal medicines have been used for a long time as lipid-lowering agents. Objectives To assess the effects of Chinese herbal medicines on hypercholesterolemia. Search strategy We searched the following databases: The Cochrane Library (issue 8, 2010), MEDLINE (until July 2010), EMBASE (until July 2010), Chinese BioMedical Database (until July 2010), Traditional Chinese Medical Literature Analysis and Retrieval System (until July 2010), China National Knowledge Infrastructure (until July 2010), Chinese VIP Information (until July 2010), Chinese Academic Conference Papers Database and Chinese Dissertation Database (until July 2010), and Allied and Complementary Medicine Database (until July 2010). Selection criteria We considered randomized controlled clinical trials in hypercholesterolemic participants comparing Chinese herbal medicines with placebo, no treatment, and pharmacological or non-pharmacological interventions. Data collection and analysis Two review authors independently extracted data and assessed the risk of bias. We resolved any disagreements with this assessment through discussion, and decisions were reached by consensus. We assessed trials for the risk of bias against key criteria: random sequence generation, allocation concealment, blinding of participants, incomplete outcome data, selective outcome reporting, and other sources of bias. Main results We included 22 randomized trials (2130 participants). The mean treatment duration was 2.3 ± 1.3 months (ranging from one to six months). Twenty trials were conducted in China and 18 trials were published in Chinese. Overall, the risk of bias of the included trials was high or unclear. Five different herbal medicines were evaluated in the included trials, which compared herbs with conventional medicine in six comparisons (20 trials) or with placebo (two trials). There were no outcome data in any of the trials on cardiovascular events or death from any cause. One trial each reported well-being (no significant differences) and economic costs. No serious adverse events were observed. Xuezhikang was the most commonly used herbal formula investigated. A significant effect on total cholesterol (two trials, 254 participants) was shown in favor of Xuezhikang when compared with inositol nicotinate (mean difference (MD) −0.90 mmol/L, 95% confidence interval (CI) −1.13 to −0.68). Authors' conclusions Some herbal medicines may have cholesterol-lowering effects. Our findings have to be interpreted with caution due to the high or unclear risk of bias of the included trials. PMID:21735427

  2. The Genomes On Line Database (GOLD) v.2: a monitor of genome projects worldwide

    PubMed Central

    Liolios, Konstantinos; Tavernarakis, Nektarios; Hugenholtz, Philip; Kyrpides, Nikos C.

    2006-01-01

    The Genomes On Line Database (GOLD) is a web resource for comprehensive access to information regarding complete and ongoing genome sequencing projects worldwide. The database currently incorporates information on over 1500 sequencing projects, of which 294 have been completed and the data deposited in the public databases. GOLD v.2 has been expanded to provide information related to organism properties such as phenotype, ecotype and disease. Furthermore, project relevance and availability information is now included. GOLD is available at . It is also mirrored at the Institute of Molecular Biology and Biotechnology, Crete, Greece. PMID:16381880

  3. Geometrical analysis of Cys-Cys bridges in proteins and their prediction from incomplete structural information

    NASA Technical Reports Server (NTRS)

    Goldblum, A.; Rein, R.

    1987-01-01

    Analysis of C-alpha atom positions from cysteines involved in disulphide bridges in protein crystals shows that their geometric characteristics are unique with respect to other, non-bridging Cys-Cys pairs. They may be used for predicting disulphide connections in incompletely determined protein structures, such as low-resolution crystallography or theoretical folding experiments. The basic unit for analysis and prediction is the 3 x 3 distance matrix of C-alpha positions of residues (i - 1), Cys(i), (i + 1) with (j - 1), Cys(j), (j + 1). In each of its column, row, and diagonal vectors, the outer distances are larger than the central distance. This analysis is compared with some analytical models.
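
    A minimal sketch of one reading of the geometric rule above (not the original method): build the 3 x 3 C-alpha distance matrix between the two residue triplets and check that, in every row, column, and diagonal, the middle entry is smaller than the two outer entries. The coordinates are invented for illustration.

```python
# Minimal sketch: 3 x 3 C-alpha distance matrix between residues
# (i-1, i, i+1) and (j-1, j, j+1), plus a check of the "central distance
# smaller than outer distances" rule in each row, column, and diagonal.
import numpy as np

def distance_matrix(ca_i, ca_j):
    """ca_i, ca_j: (3, 3) arrays of C-alpha coordinates for the two triplets."""
    return np.linalg.norm(ca_i[:, None, :] - ca_j[None, :, :], axis=-1)

def bridge_like(d):
    """True if every row, column, and diagonal has its central entry smallest."""
    lines = [d[k, :] for k in range(3)] + [d[:, k] for k in range(3)]
    lines += [np.diag(d), np.diag(np.fliplr(d))]
    return all(v[1] < v[0] and v[1] < v[2] for v in lines)

# Hypothetical coordinates (in angstroms), purely for illustration.
ca_i = np.array([[0.0, 0.0, 0.0], [3.8, 0.0, 0.0], [7.6, 0.0, 0.0]])
ca_j = np.array([[3.8, 5.5, 3.0], [3.8, 5.5, 0.0], [3.8, 5.5, -3.0]])

d = distance_matrix(ca_i, ca_j)
print(d.round(2))
print("bridge-like geometry:", bridge_like(d))
```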

  4. Adversarial risk analysis with incomplete information: a level-k approach.

    PubMed

    Rothschild, Casey; McLay, Laura; Guikema, Seth

    2012-07-01

    This article proposes, develops, and illustrates the application of level-k game theory to adversarial risk analysis. Level-k reasoning, which assumes that players play strategically but have bounded rationality, is useful for operationalizing a Bayesian approach to adversarial risk analysis. It can be applied in a broad class of settings, including settings with asynchronous play and partial but incomplete revelation of early moves. Its computational and elicitation requirements are modest. We illustrate the approach with an application to a simple defend-attack model in which the defender's countermeasures are revealed with a probability less than one to the attacker before he decides on how or whether to attack. © 2011 Society for Risk Analysis.
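
    A stylized toy illustrating the level-k idea above (not the article's model): a level-0 player acts non-strategically, a level-k player best-responds to a level-(k-1) opponent, and the defender's choice is revealed to the attacker with some probability before the attack. All payoff numbers and the revelation probability are invented for illustration.

```python
# Minimal level-k sketch for a defend-attack game with partial revelation.
import numpy as np

loss = np.array([[1.0, 8.0],   # loss[d, a]: defender's loss (= attacker's gain)
                 [6.0, 2.0]])
p_reveal = 0.6                 # chance the attacker observes the chosen defense
n_def, n_att = loss.shape

# Level-1 attacker (when the defense is NOT observed): best response to a
# presumed level-0 defender who defends uniformly at random.
unseen_gain = np.full(n_def, 1.0 / n_def) @ loss
att_level1_unseen = int(np.argmax(unseen_gain))

# Level-2 defender: minimize expected loss against the level-1 attacker,
# accounting for the chance the attacker observes the defense and best-responds.
expected_loss = np.array([
    p_reveal * loss[d].max()                       # attacker saw d, picks worst case
    + (1 - p_reveal) * loss[d, att_level1_unseen]  # attacker did not see d
    for d in range(n_def)
])
print("defender's level-2 choice:", int(np.argmin(expected_loss)), expected_loss)
```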

  5. Review of the genus Tenuipalpus (Acari: Tenuipalpidae)

    USDA-ARS?s Scientific Manuscript database

    Tenuipalpus Donnadieu is the most speciose genus of the family Tenuipalpidae, with over 300 described species. The descriptions of many of these species are incomplete, and lack important information necessary for accurate species identification. The objective of this study was to re-describe specie...

  6. Let the Buyer Beware.

    ERIC Educational Resources Information Center

    Hamilton, Jack A.; Wheeler, Jeanette D.

    1979-01-01

    A literature search indicates that many adult consumers of postsecondary education need protection from abusive practices by some schools, such as misleading advertising, incomplete information, inferior facilities, false promises, etc. The article provides a checklist for potential adult students to use in making decisions about choosing a…

  7. Algebraic Methods to Design Signals

    DTIC Science & Technology

    2015-08-27

    sequence pairs with optimal correlation values. 5. K.T. Arasu, Pradeep Bansal, Cody Watson, Partially balanced incomplete block designs with two... IEEE Transactions on Information Theory, Volume 58, Issue 11, Nov 2012, pp. 6968-6978.

  8. Liverome: a curated database of liver cancer-related gene signatures with self-contained context information.

    PubMed

    Lee, Langho; Wang, Kai; Li, Gang; Xie, Zhi; Wang, Yuli; Xu, Jiangchun; Sun, Shaoxian; Pocalyko, David; Bhak, Jong; Kim, Chulhong; Lee, Kee-Ho; Jang, Ye Jin; Yeom, Young Il; Yoo, Hyang-Sook; Hwang, Seungwoo

    2011-11-30

    Hepatocellular carcinoma (HCC) is the fifth most common cancer worldwide. A number of molecular profiling studies have investigated the changes in gene and protein expression that are associated with various clinicopathological characteristics of HCC and generated a wealth of scattered information, usually in the form of gene signature tables. A database of the published HCC gene signatures would be useful to liver cancer researchers seeking to retrieve existing differential expression information on a candidate gene and to make comparisons between signatures for prioritization of common genes. A challenge in constructing such a database is that a direct import of the signatures as they appeared in articles would lead to a loss or ambiguity of their context information, which is essential for a correct biological interpretation of a gene's expression change. This challenge arises because the designation of compared sample groups is most often abbreviated, ad hoc, or even missing from published signature tables. Without manual curation, the context information becomes lost, leading to uninformative database contents. Although several databases of gene signatures are available, none of them contains an informative form of the signatures or shows comprehensive coverage of liver cancer. Thus, we constructed Liverome, a curated database of liver cancer-related gene signatures with self-contained context information. Liverome's data coverage is more than three times larger than any other signature database, consisting of 143 signatures taken from 98 HCC studies, mostly microarray and proteome, and involving 6,927 genes. The signatures were post-processed into an informative and uniform representation and annotated with an itemized summary so that all context information is unambiguously self-contained within the database. The signatures were further informatively named and meaningfully organized according to ten functional categories for guided browsing. Its web interface enables a straightforward retrieval of known differential expression information on a query gene and a comparison of signatures to prioritize common genes. The utility of Liverome-collected data is shown by case studies in which useful biological insights on HCC are produced. The Liverome database provides a comprehensive collection of well-curated HCC gene signatures as well as straightforward interfaces for gene search and signature comparison. Liverome is available at http://liverome.kobic.re.kr.

  9. Concentrations of indoor pollutants database: User's manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1992-05-01

    This manual describes the computer-based database on indoor air pollutants. This comprehensive database helps utility personnel perform rapid searches on literature related to indoor air pollutants. Besides general information, it provides guidance for finding specific information on concentrations of indoor air pollutants. The manual includes information on installing and using the database as well as a tutorial to assist the user in becoming familiar with the procedures involved in doing bibliographic and summary section searches. The manual demonstrates how to search for information by going through a series of questions that provide search parameters such as pollutant type, year, building type, keywords (from a specific list), country, geographic region, author's last name, and title. As more and more parameters are specified, the list of references found in the data search becomes smaller and more specific to the user's needs. Appendixes list types of information that can be input into the database when making a request. The CIP database allows individual utilities to obtain information on indoor air quality based on building types and other factors in their own service territory. This information is useful for utilities with concerns about indoor air quality and the control of indoor air pollutants. The CIP database itself is distributed by the Electric Power Software Center and runs on IBM PC-compatible computers.
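
    A minimal sketch of the narrowing behaviour described above (illustrative only, not the CIP software): each additional search parameter filters the set of bibliographic records further. Field names and records are invented.

```python
# Minimal sketch: narrowing a record list by adding search parameters.
records = [
    {"pollutant": "radon", "year": 1990, "building": "residence", "country": "USA"},
    {"pollutant": "radon", "year": 1988, "building": "office", "country": "USA"},
    {"pollutant": "CO", "year": 1990, "building": "residence", "country": "Canada"},
]

def search(records, **criteria):
    """Keep only records matching every specified parameter."""
    return [r for r in records if all(r.get(k) == v for k, v in criteria.items())]

print(len(search(records, pollutant="radon")))                         # 2 matches
print(len(search(records, pollutant="radon", building="residence")))   # narrows to 1
```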

  11. Physiological Information Database (PID)

    EPA Science Inventory

    EPA has developed a physiological information database (created using Microsoft ACCESS) intended to be used in PBPK modeling. The database contains physiological parameter values for humans from early childhood through senescence as well as similar data for laboratory animal spec...

  12. Network Configuration of Oracle and Database Programming Using SQL

    NASA Technical Reports Server (NTRS)

    Davis, Melton; Abdurrashid, Jibril; Diaz, Philip; Harris, W. C.

    2000-01-01

    A database can be defined as a collection of information organized in such a way that it can be retrieved and used. A database management system (DBMS) can further be defined as the tool that enables us to manage and interact with the database. The Oracle 8 Server is a state-of-the-art information management environment. It is a repository for very large amounts of data, and gives users rapid access to that data. The Oracle 8 Server allows for sharing of data between applications; the information is stored in one place and used by many systems. My research will focus primarily on SQL (Structured Query Language) programming. SQL is the way you define and manipulate data in Oracle's relational database. SQL is the industry standard adopted by all database vendors. When programming with SQL, you work on sets of data (i.e., information is not processed one record at a time).
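
    A minimal sketch of the set-based nature of SQL described above. Here sqlite3 (from the Python standard library) stands in for an Oracle server, and the table and column names are invented for illustration; the point is that one declarative statement operates on whole sets of rows rather than on one record at a time.

```python
# Minimal sketch: defining and querying data with SQL, set-at-a-time.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE experiments (id INTEGER PRIMARY KEY, station TEXT, reading REAL)")
cur.executemany(
    "INSERT INTO experiments (station, reading) VALUES (?, ?)",
    [("A", 12.5), ("A", 13.1), ("B", 9.8), ("B", 10.4), ("C", 15.0)],
)

# One declarative statement summarizes the whole set of rows at once --
# no explicit record-by-record loop in the application code.
cur.execute(
    "SELECT station, COUNT(*), AVG(reading) FROM experiments "
    "GROUP BY station HAVING AVG(reading) > 10 ORDER BY station"
)
for station, n, avg_reading in cur.fetchall():
    print(station, n, round(avg_reading, 2))
conn.close()
```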

  13. Construction of In-house Databases in a Corporation

    NASA Astrophysics Data System (ADS)

    Dezaki, Kyoko; Saeki, Makoto

    Rapid progress in information technology has increased the need to strengthen documentation activities in industry. In response, Tokin Corporation has been engaged in constructing databases for patent information, technical reports, and other documents accumulated within the company. Two results have been obtained: one is TOPICS, an in-house patent information management system; the other is TOMATIS, a management and technical information system built with personal computers and general-purpose relational database software. These systems aim to compile databases of patent and technological management information generated internally and externally with little labor and at low cost, and to provide comprehensive information company-wide. This paper outlines these systems and how they are actually used.

  14. An original imputation technique of missing data for assessing exposure of newborns to perchlorate in drinking water.

    PubMed

    Caron, Alexandre; Clement, Guillaume; Heyman, Christophe; Aernout, Eva; Chazard, Emmanuel; Le Tertre, Alain

    2015-01-01

    Incompleteness of epidemiological databases is a major drawback when it comes to analyzing data. We conceived an epidemiological study to assess the association between newborn thyroid function and the exposure to perchlorates found in the tap water of the mother's home. Perchlorate exposure was known for only 9% of newborns. The aim of our study was to design, test, and evaluate an original method for imputing the perchlorate exposure of newborns based on their maternity of birth. In a first database, an exhaustive collection of newborns' thyroid function measurements from a systematic neonatal screening was assembled. In this database the municipality of residence of the newborn's mother was only available for 2012. Between 2004 and 2011, the closest data available was the municipality of the maternity of birth. Exposure was assessed using a second database which contained the perchlorate levels for each municipality. We computed the catchment area of every maternity ward based on the French nationwide exhaustive database of inpatient stays. Municipality, and consequently perchlorate exposure, was imputed by a weighted draw in the catchment area. Missing values for the remaining covariates were imputed by chained equations. A linear mixture model was computed on each imputed dataset. We compared odds ratios (ORs) and 95% confidence intervals (95% CI) estimated on real versus imputed 2012 data. The same model was then carried out for the whole imputed database. The ORs estimated on 36,695 observations by our multiple imputation method are comparable to the real 2012 data. On the 394,979 observations of the whole database, the ORs remain stable but the 95% CI tighten considerably. The model estimates computed on imputed data are similar to those calculated on real data. The main advantage of multiple imputation is to provide unbiased estimates of the ORs while maintaining their variances. Thus, our method will be used to increase the statistical power of future studies by including all 394,979 newborns.
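
    A minimal sketch of the weighted-draw imputation described above (an illustrative re-creation, not the authors' code): when a newborn's municipality is unknown, it is drawn from the catchment area of the maternity of birth, with weights proportional to how often that maternity serves each municipality. The catchment weights and perchlorate levels below are invented for illustration.

```python
# Minimal sketch: weighted-draw imputation of exposure from a catchment area,
# repeated M times in the spirit of multiple imputation.
import numpy as np

rng = np.random.default_rng(0)

# Catchment area of each maternity: municipality -> share of its patients.
catchment = {
    "maternity_1": {"town_A": 0.7, "town_B": 0.3},
    "maternity_2": {"town_B": 0.5, "town_C": 0.5},
}
perchlorate_ugL = {"town_A": 2.0, "town_B": 6.5, "town_C": 15.0}  # tap-water levels

def impute_exposure(maternity):
    """Draw a municipality from the maternity's catchment area, return its exposure."""
    towns, weights = zip(*catchment[maternity].items())
    town = rng.choice(towns, p=weights)
    return perchlorate_ugL[town]

# Multiple imputation: repeat the draw M times to propagate the uncertainty.
newborn_maternities = ["maternity_1", "maternity_2", "maternity_1"]
M = 5
imputed = np.array([[impute_exposure(m) for m in newborn_maternities] for _ in range(M)])
print(imputed)  # each row is one completed dataset; fit the model on each and pool
```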

  15. RESIS-II: An Updated Version of the Original Reservoir Sedimentation Survey Information System (RESIS) Database

    USGS Publications Warehouse

    Ackerman, Katherine V.; Mixon, David M.; Sundquist, Eric T.; Stallard, Robert F.; Schwarz, Gregory E.; Stewart, David W.

    2009-01-01

    The Reservoir Sedimentation Survey Information System (RESIS) database, originally compiled by the Soil Conservation Service (now the Natural Resources Conservation Service) in collaboration with the Texas Agricultural Experiment Station, is the most comprehensive compilation of data from reservoir sedimentation surveys throughout the conterminous United States (U.S.). The database is a cumulative historical archive that includes data from as early as 1755 and as late as 1993. The 1,823 reservoirs included in the database range in size from farm ponds to the largest U.S. reservoirs (such as Lake Mead). Results from 6,617 bathymetric surveys are available in the database. This Data Series provides an improved version of the original RESIS database, termed RESIS-II, and a report describing RESIS-II. The RESIS-II relational database is stored in Microsoft Access and includes more precise location coordinates for most of the reservoirs than the original database but excludes information on reservoir ownership. RESIS-II is anticipated to be a template for further improvements in the database.

  16. Consumer Product Category Database

    EPA Pesticide Factsheets

    The Chemical and Product Categories database (CPCat) catalogs the use of over 40,000 chemicals and their presence in different consumer products. The chemical use information is compiled from multiple sources while product information is gathered from publicly available Material Safety Data Sheets (MSDS). EPA researchers are evaluating the possibility of expanding the database with additional product and use information.

  17. 16 CFR 1102.16 - Additional information.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... PUBLICLY AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE Content Requirements § 1102.16 Additional... in the Database any additional information it determines to be in the public interest, consistent...

  18. 16 CFR 1102.16 - Additional information.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... PUBLICLY AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE Content Requirements § 1102.16 Additional... in the Database any additional information it determines to be in the public interest, consistent...

  19. The impact of automating laboratory request forms on the quality of healthcare services.

    PubMed

    Dogether, Majed Al; Muallem, Yahya Al; Househ, Mowafa; Saddik, Basema; Khalifa, Mohamed

    In recent decades, healthcare organizations have undergone a significant transformation with the integration of Information and Communication Technologies into healthcare operations to improve healthcare services. Various technologies such as Hospital Information Systems (HIS), Electronic Health Records (EHR), and Laboratory Information Systems (LIS) have been incorporated into healthcare services. The aim of this study is to evaluate the completeness of outpatients' paper-based laboratory request forms in comparison with an electronic laboratory request system. This study was carried out in the laboratory department at King Abdulaziz Medical City (KAMC), National Guard Health Affairs, Riyadh, Saudi Arabia. We used a sample size calculator for comparing two proportions and estimated the sample size to be 228 for each group. All laboratory requests, both paper and electronic forms, were included. We categorized the clarity of the forms as understandable, readable, or unclear. A total of 57 paper forms (25%) were identified as being incomplete. For electronic forms, there were no incomplete fields, as all fields were mandatory, therefore rendering them complete. Of the paper-based laboratory forms, 11.4% were understandable, 33.8% were readable, and 54.8% were unclear, while for electronic-based forms there were no unclear forms. Electronic-based laboratory forms provide a more complete, accurate, clear, and understandable format than paper-based laboratory records. Based on these findings, KAMC should move toward the implementation of electronic-based laboratory request forms for the outpatient laboratory department. Copyright © 2016 King Saud Bin Abdulaziz University for Health Sciences. Published by Elsevier Ltd. All rights reserved.
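
    A minimal sketch of the standard sample-size formula for comparing two proportions, the kind of calculation the study cites as giving 228 per group. The proportions, alpha, and power below are illustrative assumptions, not the values the authors used.

```python
# Minimal sketch: sample size per group for a two-sided test of two proportions.
from math import sqrt
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Required sample size per group to detect a difference between p1 and p2."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value for the test
    z_b = NormalDist().inv_cdf(power)           # quantile for the desired power
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return num / (p1 - p2) ** 2

# Hypothetical completeness rates, purely for illustration.
print(round(n_per_group(0.25, 0.13)))
```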

  20. SGP-1: Prediction and Validation of Homologous Genes Based on Sequence Alignments

    PubMed Central

    Wiehe, Thomas; Gebauer-Jung, Steffi; Mitchell-Olds, Thomas; Guigó, Roderic

    2001-01-01

    Conventional methods of gene prediction rely on the recognition of DNA-sequence signals, the coding potential or the comparison of a genomic sequence with a cDNA, EST, or protein database. Reasons for limited accuracy in many circumstances are species-specific training and the incompleteness of reference databases. Lately, comparative genome analysis has attracted increasing attention. Several analysis tools that are based on human/mouse comparisons are already available. Here, we present a program for the prediction of protein-coding genes, termed SGP-1 (Syntenic Gene Prediction), which is based on the similarity of homologous genomic sequences. In contrast to most existing tools, the accuracy of SGP-1 depends little on species-specific properties such as codon usage or the nucleotide distribution. SGP-1 may therefore be applied to nonstandard model organisms in vertebrates as well as in plants, without the need for extensive parameter training. In addition to predicting genes in large-scale genomic sequences, the program may be useful to validate gene structure annotations from databases. To this end, SGP-1 output also contains comparisons between predicted and annotated gene structures in HTML format. The program can be accessed via a Web server at http://soft.ice.mpg.de/sgp-1. The source code, written in ANSI C, is available on request from the authors. PMID:11544202
