Archetype relational mapping - a practical openEHR persistence solution.
Wang, Li; Min, Lingtong; Wang, Rui; Lu, Xudong; Duan, Huilong
2015-11-05
One of the primary obstacles to the widespread adoption of openEHR methodology is the lack of practical persistence solutions for future-proof electronic health record (EHR) systems as described by the openEHR specifications. This paper presents an archetype relational mapping (ARM) persistence solution for the archetype-based EHR systems to support healthcare delivery in the clinical environment. First, the data requirements of the EHR systems are analysed and organized into archetype-friendly concepts. The Clinical Knowledge Manager (CKM) is queried for matching archetypes; when necessary, new archetypes are developed to reflect concepts that are not encompassed by existing archetypes. Next, a template is designed for each archetype to apply constraints related to the local EHR context. Finally, a set of rules is designed to map the archetypes to data tables and provide data persistence based on the relational database. A comparison study was conducted to investigate the differences among the conventional database of an EHR system from a tertiary Class A hospital in China, the generated ARM database, and the Node + Path database. Five data-retrieving tests were designed based on clinical workflow to retrieve exams and laboratory tests. Additionally, two patient-searching tests were designed to identify patients who satisfy certain criteria. The ARM database achieved better performance than the conventional database in three of the five data-retrieving tests, but was less efficient in the remaining two tests. The time difference of query executions conducted by the ARM database and the conventional database is less than 130 %. The ARM database was approximately 6-50 times more efficient than the conventional database in the patient-searching tests, while the Node + Path database requires far more time than the other two databases to execute both the data-retrieving and the patient-searching tests. The ARM approach is capable of generating relational databases using archetypes and templates for archetype-based EHR systems, thus successfully adapting to changes in data requirements. ARM performance is similar to that of conventionally-designed EHR systems, and can be applied in a practical clinical environment. System components such as ARM can greatly facilitate the adoption of openEHR architecture within EHR systems.
Corwin, John; Silberschatz, Avi; Miller, Perry L; Marenco, Luis
2007-01-01
Data sparsity and schema evolution issues affecting clinical informatics and bioinformatics communities have led to the adoption of vertical or object-attribute-value-based database schemas to overcome limitations posed when using conventional relational database technology. This paper explores these issues and discusses why biomedical data are difficult to model using conventional relational techniques. The authors propose a solution to these obstacles based on a relational database engine using a sparse, column-store architecture. The authors provide benchmarks comparing the performance of queries and schema-modification operations using three different strategies: (1) the standard conventional relational design; (2) past approaches used by biomedical informatics researchers; and (3) their sparse, column-store architecture. The performance results show that their architecture is a promising technique for storing and processing many types of data that are not handled well by the other two semantic data models.
An Introduction to Database Structure and Database Machines.
ERIC Educational Resources Information Center
Detweiler, Karen
1984-01-01
Enumerates principal management objectives of database management systems (data independence, quality, security, multiuser access, central control) and criteria for comparison (response time, size, flexibility, other features). Conventional database management systems, relational databases, and database machines used for backend processing are…
Kaduk, James A.
1996-01-01
The crystallographic databases are powerful and cost-effective tools for solving materials identification problems, both individually and in combination. Examples of the conventional and unconventional use of the databases in solving practical problems involving organic, coordination, and inorganic compounds are provided. The creation and use of fully-relational versions of the Powder Diffraction File and NIST Crystal Data are described. PMID:27805165
The comparative effectiveness of conventional and digital image libraries.
McColl, R I; Johnson, A
2001-03-01
Before introducing a hospital-wide image database to improve access, navigation and retrieval speed, a comparative study between a conventional slide library and a matching image database was undertaken to assess its relative benefits. Paired time trials and personal questionnaires revealed faster retrieval rates, higher image quality, and easier viewing for the pilot digital image database. Analysis of confidentiality, copyright and data protection exposed similar issues for both systems, thus concluding that the digital image database is a more effective library system. The authors suggest that in the future, medical images will be stored on large, professionally administered, centrally located file servers, allowing specialist image libraries to be tailored locally for individual users. The further integration of the database with web technology will enable cheap and efficient remote access for a wide range of users.
Guidelines for the Effective Use of Entity-Attribute-Value Modeling for Biomedical Databases
Dinu, Valentin; Nadkarni, Prakash
2007-01-01
Purpose To introduce the goals of EAV database modeling, to describe the situations where Entity-Attribute-Value (EAV) modeling is a useful alternative to conventional relational methods of database modeling, and to describe the fine points of implementation in production systems. Methods We analyze the following circumstances: 1) data are sparse and have a large number of applicable attributes, but only a small fraction will apply to a given entity; 2) numerous classes of data need to be represented, each class has a limited number of attributes, but the number of instances of each class is very small. We also consider situations calling for a mixed approach where both conventional and EAV design are used for appropriate data classes. Results and Conclusions In robust production systems, EAV-modeled databases trade a modest data sub-schema for a complex metadata sub-schema. The need to design the metadata effectively makes EAV design potentially more challenging than conventional design. PMID:17098467
Chen, R S; Nadkarni, P; Marenco, L; Levin, F; Erdos, J; Miller, P L
2000-01-01
The entity-attribute-value representation with classes and relationships (EAV/CR) provides a flexible and simple database schema to store heterogeneous biomedical data. In certain circumstances, however, the EAV/CR model is known to retrieve data less efficiently than conventionally based database schemas. To perform a pilot study that systematically quantifies performance differences for database queries directed at real-world microbiology data modeled with EAV/CR and conventional representations, and to explore the relative merits of different EAV/CR query implementation strategies. Clinical microbiology data obtained over a ten-year period were stored using both database models. Query execution times were compared for four clinically oriented attribute-centered and entity-centered queries operating under varying conditions of database size and system memory. The performance characteristics of three different EAV/CR query strategies were also examined. Performance was similar for entity-centered queries in the two database models. Performance in the EAV/CR model was approximately three to five times less efficient than its conventional counterpart for attribute-centered queries. The differences in query efficiency became slightly greater as database size increased, although they were reduced with the addition of system memory. The authors found that EAV/CR queries formulated using multiple, simple SQL statements executed in batch were more efficient than single, large SQL statements. This paper describes a pilot project to explore issues in and compare query performance for EAV/CR and conventional database representations. Although attribute-centered queries were less efficient in the EAV/CR model, these inefficiencies may be addressable, at least in part, by the use of more powerful hardware or more memory, or both.
2012-09-01
relative performance of several conventional SQL and NoSQL databases with a set of one billion file block hashes. Digital Forensics, Sector Hashing, Full... NoSQL databases with a set of one billion file block hashes. v THIS PAGE INTENTIONALLY LEFT BLANK vi Table of Contents List of Acronyms and...Operating System NOOP No Operation assembly instruction NoSQL “Not only SQL” model for non-relational database management NSRL National Software
Zhang, S S; Zhang, Y; Di, P; Lin, Y
2017-05-09
Objective: To evaluate the effect of implant related treatment on the oral health related quality of life (OHRQoL) of edentulous patients. Methods: The CNKI, Wanfang database and Medline, EMBASE, Cochrane Library databases that include randomized clinical trials comparing implant supported overdentures with conventional complete denture for edentulous patients were retrived. Nine studies involving 769 cases were included and meta-analysis was conducted. Results: The standardized mean difference (SMD) of oral health impact profile (OHIP) score was 1.63 (95% CI: 1.25-2.02) and improved after implant related treatment, which was significantly better than the conventional complete denture (0.87, 95% CI: 0.54-1.20). Conclusions: Implant supported overdentures improved patient's OHRQoL and showed better performance compared to the overdentures complete dentures.
Data structures and organisation: Special problems in scientific applications
NASA Astrophysics Data System (ADS)
Read, Brian J.
1989-12-01
In this paper we discuss and offer answers to the following questions: What, really, are the benifits of databases in physics? Are scientific databases essentially different from conventional ones? What are the drawbacks of a commercial database management system for use with scientific data? Do they outweigh the advantages? Do databases systems have adequate graphics facilities, or is a separate graphics package necessary? SQL as a standard language has deficiencies, but what are they for scientific data in particular? Indeed, is the relational model appropriate anyway? Or, should we turn to object oriented databases?
Lee, Ken Ka-Yin; Tang, Wai-Choi; Choi, Kup-Sze
2013-04-01
Clinical data are dynamic in nature, often arranged hierarchically and stored as free text and numbers. Effective management of clinical data and the transformation of the data into structured format for data analysis are therefore challenging issues in electronic health records development. Despite the popularity of relational databases, the scalability of the NoSQL database model and the document-centric data structure of XML databases appear to be promising features for effective clinical data management. In this paper, three database approaches--NoSQL, XML-enabled and native XML--are investigated to evaluate their suitability for structured clinical data. The database query performance is reported, together with our experience in the databases development. The results show that NoSQL database is the best choice for query speed, whereas XML databases are advantageous in terms of scalability, flexibility and extensibility, which are essential to cope with the characteristics of clinical data. While NoSQL and XML technologies are relatively new compared to the conventional relational database, both of them demonstrate potential to become a key database technology for clinical data management as the technology further advances. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Ishihara, Masaru; Onoguchi, Masahisa; Taniguchi, Yasuyo; Shibutani, Takayuki
2017-12-01
The aim of this study was to clarify the differences in thallium-201-chloride (thallium-201) myocardial perfusion imaging (MPI) scans evaluated by conventional anger-type single-photon emission computed tomography (conventional SPECT) versus cadmium-zinc-telluride SPECT (CZT SPECT) imaging in normal databases for different ethnic groups. MPI scans from 81 consecutive Japanese patients were examined using conventional SPECT and CZT SPECT and analyzed with the pre-installed quantitative perfusion SPECT (QPS) software. We compared the summed stress score (SSS), summed rest score (SRS), and summed difference score (SDS) for the two SPECT devices. For a normal MPI reference, we usually use Japanese databases for MPI created by the Japanese Society of Nuclear Medicine, which can be used with conventional SPECT but not with CZT SPECT. In this study, we used new Japanese normal databases constructed in our institution to compare conventional and CZT SPECT. Compared with conventional SPECT, CZT SPECT showed lower SSS (p < 0.001), SRS (p = 0.001), and SDS (p = 0.189) using the pre-installed SPECT database. In contrast, CZT SPECT showed no significant difference from conventional SPECT in QPS analysis using the normal databases from our institution. Myocardial perfusion analyses by CZT SPECT should be evaluated using normal databases based on the ethnic group being evaluated.
Object-oriented structures supporting remote sensing databases
NASA Technical Reports Server (NTRS)
Wichmann, Keith; Cromp, Robert F.
1995-01-01
Object-oriented databases show promise for modeling the complex interrelationships pervasive in scientific domains. To examine the utility of this approach, we have developed an Intelligent Information Fusion System based on this technology, and applied it to the problem of managing an active repository of remotely-sensed satellite scenes. The design and implementation of the system is compared and contrasted with conventional relational database techniques, followed by a presentation of the underlying object-oriented data structures used to enable fast indexing into the data holdings.
A comprehensive clinical research database based on CDISC ODM and i2b2.
Meineke, Frank A; Stäubert, Sebastian; Löbe, Matthias; Winter, Alfred
2014-01-01
We present a working approach for a clinical research database as part of an archival information system. The CDISC ODM standard is target for clinical study and research relevant routine data, thus decoupling the data ingest process from the access layer. The presented research database is comprehensive as it covers annotating, mapping and curation of poorly annotated source data. Besides a conventional relational database the medical data warehouse i2b2 serves as main frontend for end-users. The system we developed is suitable to support patient recruitment, cohort identification and quality assurance in daily routine.
Liljekvist, Mads Svane; Andresen, Kristoffer; Pommergaard, Hans-Christian; Rosenberg, Jacob
2015-01-01
Background. Open access (OA) journals allows access to research papers free of charge to the reader. Traditionally, biomedical researchers use databases like MEDLINE and EMBASE to discover new advances. However, biomedical OA journals might not fulfill such databases' criteria, hindering dissemination. The Directory of Open Access Journals (DOAJ) is a database exclusively listing OA journals. The aim of this study was to investigate DOAJ's coverage of biomedical OA journals compared with the conventional biomedical databases. Methods. Information on all journals listed in four conventional biomedical databases (MEDLINE, PubMed Central, EMBASE and SCOPUS) and DOAJ were gathered. Journals were included if they were (1) actively publishing, (2) full OA, (3) prospectively indexed in one or more database, and (4) of biomedical subject. Impact factor and journal language were also collected. DOAJ was compared with conventional databases regarding the proportion of journals covered, along with their impact factor and publishing language. The proportion of journals with articles indexed by DOAJ was determined. Results. In total, 3,236 biomedical OA journals were included in the study. Of the included journals, 86.7% were listed in DOAJ. Combined, the conventional biomedical databases listed 75.0% of the journals; 18.7% in MEDLINE; 36.5% in PubMed Central; 51.5% in SCOPUS and 50.6% in EMBASE. Of the journals in DOAJ, 88.7% published in English and 20.6% had received impact factor for 2012 compared with 93.5% and 26.0%, respectively, for journals in the conventional biomedical databases. A subset of 51.1% and 48.5% of the journals in DOAJ had articles indexed from 2012 and 2013, respectively. Of journals exclusively listed in DOAJ, one journal had received an impact factor for 2012, and 59.6% of the journals had no content from 2013 indexed in DOAJ. Conclusions. DOAJ is the most complete registry of biomedical OA journals compared with five conventional biomedical databases. However, DOAJ only indexes articles for half of the biomedical journals listed, making it an incomplete source for biomedical research papers in general.
Andresen, Kristoffer; Pommergaard, Hans-Christian; Rosenberg, Jacob
2015-01-01
Background. Open access (OA) journals allows access to research papers free of charge to the reader. Traditionally, biomedical researchers use databases like MEDLINE and EMBASE to discover new advances. However, biomedical OA journals might not fulfill such databases’ criteria, hindering dissemination. The Directory of Open Access Journals (DOAJ) is a database exclusively listing OA journals. The aim of this study was to investigate DOAJ’s coverage of biomedical OA journals compared with the conventional biomedical databases. Methods. Information on all journals listed in four conventional biomedical databases (MEDLINE, PubMed Central, EMBASE and SCOPUS) and DOAJ were gathered. Journals were included if they were (1) actively publishing, (2) full OA, (3) prospectively indexed in one or more database, and (4) of biomedical subject. Impact factor and journal language were also collected. DOAJ was compared with conventional databases regarding the proportion of journals covered, along with their impact factor and publishing language. The proportion of journals with articles indexed by DOAJ was determined. Results. In total, 3,236 biomedical OA journals were included in the study. Of the included journals, 86.7% were listed in DOAJ. Combined, the conventional biomedical databases listed 75.0% of the journals; 18.7% in MEDLINE; 36.5% in PubMed Central; 51.5% in SCOPUS and 50.6% in EMBASE. Of the journals in DOAJ, 88.7% published in English and 20.6% had received impact factor for 2012 compared with 93.5% and 26.0%, respectively, for journals in the conventional biomedical databases. A subset of 51.1% and 48.5% of the journals in DOAJ had articles indexed from 2012 and 2013, respectively. Of journals exclusively listed in DOAJ, one journal had received an impact factor for 2012, and 59.6% of the journals had no content from 2013 indexed in DOAJ. Conclusions. DOAJ is the most complete registry of biomedical OA journals compared with five conventional biomedical databases. However, DOAJ only indexes articles for half of the biomedical journals listed, making it an incomplete source for biomedical research papers in general. PMID:26038727
1994-01-01
databases and identifying new data entities, data elements, and relationships . - Standard data naming conventions, schema, and definition processes...management system. The use of such a tool could offer: (1) structured support for representation of objects and their relationships to each other (and...their relationships to related multimedia objects such as an engineering drawing of the tank object or a satellite image that contains the installation
Efficient hemodynamic event detection utilizing relational databases and wavelet analysis
NASA Technical Reports Server (NTRS)
Saeed, M.; Mark, R. G.
2001-01-01
Development of a temporal query framework for time-oriented medical databases has hitherto been a challenging problem. We describe a novel method for the detection of hemodynamic events in multiparameter trends utilizing wavelet coefficients in a MySQL relational database. Storage of the wavelet coefficients allowed for a compact representation of the trends, and provided robust descriptors for the dynamics of the parameter time series. A data model was developed to allow for simplified queries along several dimensions and time scales. Of particular importance, the data model and wavelet framework allowed for queries to be processed with minimal table-join operations. A web-based search engine was developed to allow for user-defined queries. Typical queries required between 0.01 and 0.02 seconds, with at least two orders of magnitude improvement in speed over conventional queries. This powerful and innovative structure will facilitate research on large-scale time-oriented medical databases.
Jones, Benjamin M.; Arp, Christopher D.; Whitman, Matthew S.; Nigro, Debora A.; Nitze, Ingmar; Beaver, John; Gadeke, Anne; Zuck, Callie; Liljedahl, Anna K.; Daanen, Ronald; Torvinen, Eric; Fritz, Stacey; Grosse, Guido
2017-01-01
Lakes are dominant and diverse landscape features in the Arctic, but conventional land cover classification schemes typically map them as a single uniform class. Here, we present a detailed lake-centric geospatial database for an Arctic watershed in northern Alaska. We developed a GIS dataset consisting of 4362 lakes that provides information on lake morphometry, hydrologic connectivity, surface area dynamics, surrounding terrestrial ecotypes, and other important conditions describing Arctic lakes. Analyzing the geospatial database relative to fish and bird survey data shows relations to lake depth and hydrologic connectivity, which are being used to guide research and aid in the management of aquatic resources in the National Petroleum Reserve in Alaska. Further development of similar geospatial databases is needed to better understand and plan for the impacts of ongoing climate and land-use changes occurring across lake-rich landscapes in the Arctic.
A high performance, ad-hoc, fuzzy query processing system for relational databases
NASA Technical Reports Server (NTRS)
Mansfield, William H., Jr.; Fleischman, Robert M.
1992-01-01
Database queries involving imprecise or fuzzy predicates are currently an evolving area of academic and industrial research. Such queries place severe stress on the indexing and I/O subsystems of conventional database environments since they involve the search of large numbers of records. The Datacycle architecture and research prototype is a database environment that uses filtering technology to perform an efficient, exhaustive search of an entire database. It has recently been modified to include fuzzy predicates in its query processing. The approach obviates the need for complex index structures, provides unlimited query throughput, permits the use of ad-hoc fuzzy membership functions, and provides a deterministic response time largely independent of query complexity and load. This paper describes the Datacycle prototype implementation of fuzzy queries and some recent performance results.
High Performance Semantic Factoring of Giga-Scale Semantic Graph Databases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joslyn, Cliff A.; Adolf, Robert D.; Al-Saffar, Sinan
2010-10-04
As semantic graph database technology grows to address components ranging from extant large triple stores to SPARQL endpoints over SQL-structured relational databases, it will become increasingly important to be able to bring high performance computational resources to bear on their analysis, interpretation, and visualization, especially with respect to their innate semantic structure. Our research group built a novel high performance hybrid system comprising computational capability for semantic graph database processing utilizing the large multithreaded architecture of the Cray XMT platform, conventional clusters, and large data stores. In this paper we describe that architecture, and present the results of our deployingmore » that for the analysis of the Billion Triple dataset with respect to its semantic factors.« less
Ontology based heterogeneous materials database integration and semantic query
NASA Astrophysics Data System (ADS)
Zhao, Shuai; Qian, Quan
2017-10-01
Materials digital data, high throughput experiments and high throughput computations are regarded as three key pillars of materials genome initiatives. With the fast growth of materials data, the integration and sharing of data is very urgent, that has gradually become a hot topic of materials informatics. Due to the lack of semantic description, it is difficult to integrate data deeply in semantic level when adopting the conventional heterogeneous database integration approaches such as federal database or data warehouse. In this paper, a semantic integration method is proposed to create the semantic ontology by extracting the database schema semi-automatically. Other heterogeneous databases are integrated to the ontology by means of relational algebra and the rooted graph. Based on integrated ontology, semantic query can be done using SPARQL. During the experiments, two world famous First Principle Computational databases, OQMD and Materials Project are used as the integration targets, which show the availability and effectiveness of our method.
New generic indexing technology
NASA Technical Reports Server (NTRS)
Freeston, Michael
1996-01-01
There has been no fundamental change in the dynamic indexing methods supporting database systems since the invention of the B-tree twenty-five years ago. And yet the whole classical approach to dynamic database indexing has long since become inappropriate and increasingly inadequate. We are moving rapidly from the conventional one-dimensional world of fixed-structure text and numbers to a multi-dimensional world of variable structures, objects and images, in space and time. But, even before leaving the confines of conventional database indexing, the situation is highly unsatisfactory. In fact, our research has led us to question the basic assumptions of conventional database indexing. We have spent the past ten years studying the properties of multi-dimensional indexing methods, and in this paper we draw the strands of a number of developments together - some quite old, some very new, to show how we now have the basis for a new generic indexing technology for the next generation of database systems.
NASA Technical Reports Server (NTRS)
McMillin, Naomi; Allen, Jerry; Erickson, Gary; Campbell, Jim; Mann, Mike; Kubiatko, Paul; Yingling, David; Mason, Charlie
1999-01-01
The objective was to experimentally evaluate the longitudinal and lateral-directional stability and control characteristics of the Reference H configuration at supersonic and transonic speeds. A series of conventional and alternate control devices were also evaluated at supersonic and transonic speeds. A database on the conventional and alternate control devices was to be created for use in the HSR program.
High performance semantic factoring of giga-scale semantic graph databases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
al-Saffar, Sinan; Adolf, Bob; Haglin, David
2010-10-01
As semantic graph database technology grows to address components ranging from extant large triple stores to SPARQL endpoints over SQL-structured relational databases, it will become increasingly important to be able to bring high performance computational resources to bear on their analysis, interpretation, and visualization, especially with respect to their innate semantic structure. Our research group built a novel high performance hybrid system comprising computational capability for semantic graph database processing utilizing the large multithreaded architecture of the Cray XMT platform, conventional clusters, and large data stores. In this paper we describe that architecture, and present the results of our deployingmore » that for the analysis of the Billion Triple dataset with respect to its semantic factors, including basic properties, connected components, namespace interaction, and typed paths.« less
Applying cognitive load theory to the redesign of a conventional database systems course
NASA Astrophysics Data System (ADS)
Mason, Raina; Seton, Carolyn; Cooper, Graham
2016-01-01
Cognitive load theory (CLT) was used to redesign a Database Systems course for Information Technology students. The redesign was intended to address poor student performance and low satisfaction, and to provide a more relevant foundation in database design and use for subsequent studies and industry. The original course followed the conventional structure for a database course, covering database design first, then database development. Analysis showed the conventional course content was appropriate but the instructional materials used were too complex, especially for novice students. The redesign of instructional materials applied CLT to remove split attention and redundancy effects, to provide suitable worked examples and sub-goals, and included an extensive re-sequencing of content. The approach was primarily directed towards mid- to lower performing students and results showed a significant improvement for this cohort with the exam failure rate reducing by 34% after the redesign on identical final exams. Student satisfaction also increased and feedback from subsequent study was very positive. The application of CLT to the design of instructional materials is discussed for delivery of technical courses.
Ali, Zulfiqar; Alsulaiman, Mansour; Muhammad, Ghulam; Elamvazuthi, Irraivan; Al-Nasheri, Ahmed; Mesallam, Tamer A; Farahat, Mohamed; Malki, Khalid H
2017-05-01
A large population around the world has voice complications. Various approaches for subjective and objective evaluations have been suggested in the literature. The subjective approach strongly depends on the experience and area of expertise of a clinician, and human error cannot be neglected. On the other hand, the objective or automatic approach is noninvasive. Automatic developed systems can provide complementary information that may be helpful for a clinician in the early screening of a voice disorder. At the same time, automatic systems can be deployed in remote areas where a general practitioner can use them and may refer the patient to a specialist to avoid complications that may be life threatening. Many automatic systems for disorder detection have been developed by applying different types of conventional speech features such as the linear prediction coefficients, linear prediction cepstral coefficients, and Mel-frequency cepstral coefficients (MFCCs). This study aims to ascertain whether conventional speech features detect voice pathology reliably, and whether they can be correlated with voice quality. To investigate this, an automatic detection system based on MFCC was developed, and three different voice disorder databases were used in this study. The experimental results suggest that the accuracy of the MFCC-based system varies from database to database. The detection rate for the intra-database ranges from 72% to 95%, and that for the inter-database is from 47% to 82%. The results conclude that conventional speech features are not correlated with voice, and hence are not reliable in pathology detection. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
Marklin, Richard W; Saginus, Kyle A; Seeley, Patricia; Freier, Stephen H
2010-12-01
The primary purpose of this study was to determine whether conventional anthropometric databases of the U.S. general population are applicable to the population of U.S. electric utility field-workers. On the basis of anecdotal observations, field-workers for electric power utilities were thought to be generally taller and larger than the general population. However, there were no anthropometric data available on this population, and it was not known whether the conventional anthropometric databases could be used to design for this population. For this study, 3 standing and II sitting anthropometric measurements were taken from 187 male field-workers from three electric power utilities located in the upper Midwest of the United States and Southern California. The mean and percentile anthropometric data from field-workers were compared with seven well-known conventional anthropometric databases for North American males (United States, Canada, and Mexico). In general, the male field-workers were taller and heavier than the people in the reference databases for U.S. males. The field-workers were up to 2.3 cm taller and 10 kg to 18 kg heavier than the averages of the reference databases. This study was justified, as it showed that the conventional anthropometric databases of the general population underestimated the size of electric utility field-workers, particularly with respect to weight. When designing vehicles and tools for electric utility field-workers, designers and ergonomists should consider the population being designed for and the data from this study to maximize safety, minimize risk of injuries, and optimize performance.
Development and Operation of a Database Machine for Online Access and Update of a Large Database.
ERIC Educational Resources Information Center
Rush, James E.
1980-01-01
Reviews the development of a fault tolerant database processor system which replaced OCLC's conventional file system. A general introduction to database management systems and the operating environment is followed by a description of the hardware selection, software processes, and system characteristics. (SW)
HBVPathDB: a database of HBV infection-related molecular interaction network.
Zhang, Yi; Bo, Xiao-Chen; Yang, Jing; Wang, Sheng-Qi
2005-03-21
To describe molecules or genes interaction between hepatitis B viruses (HBV) and host, for understanding how virus' and host's genes and molecules are networked to form a biological system and for perceiving mechanism of HBV infection. The knowledge of HBV infection-related reactions was organized into various kinds of pathways with carefully drawn graphs in HBVPathDB. Pathway information is stored with relational database management system (DBMS), which is currently the most efficient way to manage large amounts of data and query is implemented with powerful Structured Query Language (SQL). The search engine is written using Personal Home Page (PHP) with SQL embedded and web retrieval interface is developed for searching with Hypertext Markup Language (HTML). We present the first version of HBVPathDB, which is a HBV infection-related molecular interaction network database composed of 306 pathways with 1 050 molecules involved. With carefully drawn graphs, pathway information stored in HBVPathDB can be browsed in an intuitive way. We develop an easy-to-use interface for flexible accesses to the details of database. Convenient software is implemented to query and browse the pathway information of HBVPathDB. Four search page layout options-category search, gene search, description search, unitized search-are supported by the search engine of the database. The database is freely available at http://www.bio-inf.net/HBVPathDB/HBV/. The conventional perspective HBVPathDB have already contained a considerable amount of pathway information with HBV infection related, which is suitable for in-depth analysis of molecular interaction network of virus and host. HBVPathDB integrates pathway data-sets with convenient software for query, browsing, visualization, that provides users more opportunity to identify regulatory key molecules as potential drug targets and to explore the possible mechanism of HBV infection based on gene expression datasets.
ERIC Educational Resources Information Center
Costa, Joana M.; Miranda, Guilhermina L.
2017-01-01
This paper presents the results of a systematic review of the literature, including a meta-analysis, about the effectiveness of the use of Alice software in programming learning when compared to the use of a conventional programming language. Our research included studies published between the years 2000 and 2014 in the main databases. We gathered…
Avalos, Marta; Adroher, Nuria Duran; Lagarde, Emmanuel; Thiessard, Frantz; Grandvalet, Yves; Contrand, Benjamin; Orriols, Ludivine
2012-09-01
Large data sets with many variables provide particular challenges when constructing analytic models. Lasso-related methods provide a useful tool, although one that remains unfamiliar to most epidemiologists. We illustrate the application of lasso methods in an analysis of the impact of prescribed drugs on the risk of a road traffic crash, using a large French nationwide database (PLoS Med 2010;7:e1000366). In the original case-control study, the authors analyzed each exposure separately. We use the lasso method, which can simultaneously perform estimation and variable selection in a single model. We compare point estimates and confidence intervals using (1) a separate logistic regression model for each drug with a Bonferroni correction and (2) lasso shrinkage logistic regression analysis. Shrinkage regression had little effect on (bias corrected) point estimates, but led to less conservative results, noticeably for drugs with moderate levels of exposure. Carbamates, carboxamide derivative and fatty acid derivative antiepileptics, drugs used in opioid dependence, and mineral supplements of potassium showed stronger associations. Lasso is a relevant method in the analysis of databases with large number of exposures and can be recommended as an alternative to conventional strategies.
A framework for interval-valued information system
NASA Astrophysics Data System (ADS)
Yin, Yunfei; Gong, Guanghong; Han, Liang
2012-09-01
Interval-valued information system is used to transform the conventional dataset into the interval-valued form. To conduct the interval-valued data mining, we conduct two investigations: (1) construct the interval-valued information system, and (2) conduct the interval-valued knowledge discovery. In constructing the interval-valued information system, we first make the paired attributes in the database discovered, and then, make them stored in the neighbour locations in a common database and regard them as 'one' new field. In conducting the interval-valued knowledge discovery, we utilise some related priori knowledge and regard the priori knowledge as the control objectives; and design an approximate closed-loop control mining system. On the implemented experimental platform (prototype), we conduct the corresponding experiments and compare the proposed algorithms with several typical algorithms, such as the Apriori algorithm, the FP-growth algorithm and the CLOSE+ algorithm. The experimental results show that the interval-valued information system method is more effective than the conventional algorithms in discovering interval-valued patterns.
Incremental Query Rewriting with Resolution
NASA Astrophysics Data System (ADS)
Riazanov, Alexandre; Aragão, Marcelo A. T.
We address the problem of semantic querying of relational databases (RDB) modulo knowledge bases using very expressive knowledge representation formalisms, such as full first-order logic or its various fragments. We propose to use a resolution-based first-order logic (FOL) reasoner for computing schematic answers to deductive queries, with the subsequent translation of these schematic answers to SQL queries which are evaluated using a conventional relational DBMS. We call our method incremental query rewriting, because an original semantic query is rewritten into a (potentially infinite) series of SQL queries. In this chapter, we outline the main idea of our technique - using abstractions of databases and constrained clauses for deriving schematic answers, and provide completeness and soundness proofs to justify the applicability of this technique to the case of resolution for FOL without equality. The proposed method can be directly used with regular RDBs, including legacy databases. Moreover, we propose it as a potential basis for an efficient Web-scale semantic search technology.
A Conventional Liner Acoustic/Drag Interaction Benchmark Database
NASA Technical Reports Server (NTRS)
Howerton, Brian M.; Jones, Michael G.
2017-01-01
The aerodynamic drag of acoustic liners has become a significant topic in the design of such for aircraft noise applications. In order to evaluate the benefits of concepts designed to reduce liner drag, it is necessary to establish the baseline performance of liners employing the typical design features of conventional configurations. This paper details a set of experiments in the NASA Langley Grazing Flow Impedance Tube to quantify the relative drag of a number of perforate-over-honeycomb liner configurations at flow speeds of M=0.3 and 0.5. These conventional liners are investigated to determine their resistance factors using a static pressure drop approach. Comparison of the resistance factors gives a relative measurement of liner drag. For these same flow conditions, acoustic measurements are performed with tonal excitation from 400 to 3000 Hz at source sound pressure levels of 140 and 150 dB. Educed impedance and attenuation spectra are used to determine the interaction between acoustic performance and drag.
TOPSAN: a dynamic web database for structural genomics.
Ellrott, Kyle; Zmasek, Christian M; Weekes, Dana; Sri Krishna, S; Bakolitsa, Constantina; Godzik, Adam; Wooley, John
2011-01-01
The Open Protein Structure Annotation Network (TOPSAN) is a web-based collaboration platform for exploring and annotating structures determined by structural genomics efforts. Characterization of those structures presents a challenge since the majority of the proteins themselves have not yet been characterized. Responding to this challenge, the TOPSAN platform facilitates collaborative annotation and investigation via a user-friendly web-based interface pre-populated with automatically generated information. Semantic web technologies expand and enrich TOPSAN's content through links to larger sets of related databases, and thus, enable data integration from disparate sources and data mining via conventional query languages. TOPSAN can be found at http://www.topsan.org.
Diagnostic Assessment of Troubleshooting Skill in an Intelligent Tutoring System
1994-03-01
the information that can be provided from studying gauges and indicators and conventional test equipment procedures. Experts are particularly adept at...uses the results of the strategy and action evaluator to update the student profile, represented as a network, using the ERGO ( Noetic Systems, 1993...1990). Individualized tutoring using an intelligent fuzzy temporal relational database. International Tournal of Man-Machine Studies . & 409-429. . 34
LAND-deFeND - An innovative database structure for landslides and floods and their consequences.
Napolitano, Elisabetta; Marchesini, Ivan; Salvati, Paola; Donnini, Marco; Bianchi, Cinzia; Guzzetti, Fausto
2018-02-01
Information on historical landslides and floods - collectively called "geo-hydrological hazards - is key to understand the complex dynamics of the events, to estimate the temporal and spatial frequency of damaging events, and to quantify their impact. A number of databases on geo-hydrological hazards and their consequences have been developed worldwide at different geographical and temporal scales. Of the few available database structures that can handle information on both landslides and floods some are outdated and others were not designed to store, organize, and manage information on single phenomena or on the type and monetary value of the damages and the remediation actions. Here, we present the LANDslides and Floods National Database (LAND-deFeND), a new database structure able to store, organize, and manage in a single digital structure spatial information collected from various sources with different accuracy. In designing LAND-deFeND, we defined four groups of entities, namely: nature-related, human-related, geospatial-related, and information-source-related entities that collectively can describe fully the geo-hydrological hazards and their consequences. In LAND-deFeND, the main entities are the nature-related entities, encompassing: (i) the "phenomenon", a single landslide or local inundation, (ii) the "event", which represent the ensemble of the inundations and/or landslides occurred in a conventional geographical area in a limited period, and (iii) the "trigger", which is the meteo-climatic or seismic cause (trigger) of the geo-hydrological hazards. LAND-deFeND maintains the relations between the nature-related entities and the human-related entities even where the information is missing partially. The physical model of the LAND-deFeND contains 32 tables, including nine input tables, 21 dictionary tables, and two association tables, and ten views, including specific views that make the database structure compliant with the EC INSPIRE and the Floods Directives. The LAND-deFeND database structure is open, and freely available from http://geomorphology.irpi.cnr.it/tools. Copyright © 2017 The Authors. Published by Elsevier Ltd.. All rights reserved.
Convolutional Neural Network-Based Finger-Vein Recognition Using NIR Image Sensors
Hong, Hyung Gil; Lee, Min Beom; Park, Kang Ryoung
2017-01-01
Conventional finger-vein recognition systems perform recognition based on the finger-vein lines extracted from the input images or image enhancement, and texture feature extraction from the finger-vein images. In these cases, however, the inaccurate detection of finger-vein lines lowers the recognition accuracy. In the case of texture feature extraction, the developer must experimentally decide on a form of the optimal filter for extraction considering the characteristics of the image database. To address this problem, this research proposes a finger-vein recognition method that is robust to various database types and environmental changes based on the convolutional neural network (CNN). In the experiments using the two finger-vein databases constructed in this research and the SDUMLA-HMT finger-vein database, which is an open database, the method proposed in this research showed a better performance compared to the conventional methods. PMID:28587269
Convolutional Neural Network-Based Finger-Vein Recognition Using NIR Image Sensors.
Hong, Hyung Gil; Lee, Min Beom; Park, Kang Ryoung
2017-06-06
Conventional finger-vein recognition systems perform recognition based on the finger-vein lines extracted from the input images or image enhancement, and texture feature extraction from the finger-vein images. In these cases, however, the inaccurate detection of finger-vein lines lowers the recognition accuracy. In the case of texture feature extraction, the developer must experimentally decide on a form of the optimal filter for extraction considering the characteristics of the image database. To address this problem, this research proposes a finger-vein recognition method that is robust to various database types and environmental changes based on the convolutional neural network (CNN). In the experiments using the two finger-vein databases constructed in this research and the SDUMLA-HMT finger-vein database, which is an open database, the method proposed in this research showed a better performance compared to the conventional methods.
Puerarin injection for treatment of unstable angina pectoris: a meta-analysis and systematic review
Gao, Zhisheng; Wei, Baozhu; Qian, Cheng
2015-01-01
Background: Puerarin is an effective ingredient isolated from Radix Puerariae, a leguminous plant. In China, a large number of early studies suggest that puerarin may be used in the treatment of coronary heart disease. In recent years, puerarin injection has been widely used to treat coronary heart disease and angina pectoris. Objective: To systematically evaluate the clinical efficacy and safety of puerarin injection in the treatment of unstable angina pectoris (UAP). Methods: Data were retrieved from digital databases, including PubMed, Excerpt Medica Database (EMBASE), China Biology Medicine (CBM), the Cochrane Library, and Chinese databases. Results: Compared with patients who were treated with conventional Western medicines alone, the patients who were treated with conventional Western medicines in combination with puerarin injection exhibited significant improvements in the incidence of angina pectoris, electrocardiogram findings, nitroglycerin consumption and plasma endothelin levels. Conclusions: Strong evidence suggests that, the use of puerarin in combination with conventional Western medicines is a better treatment option for treating UAP, compared with the use of conventional Western medicines alone. PMID:26628941
LARCRIM user's guide, version 1.0
NASA Technical Reports Server (NTRS)
Davis, John S.; Heaphy, William J.
1993-01-01
LARCRIM is a relational database management system (RDBMS) which performs the conventional duties of an RDBMS with the added feature that it can store attributes which consist of arrays or matrices. This makes it particularly valuable for scientific data management. It is accessible as a stand-alone system and through an application program interface. The stand-alone system may be executed in two modes: menu or command. The menu mode prompts the user for the input required to create, update, and/or query the database. The command mode requires the direct input of LARCRIM commands. Although LARCRIM is an update of an old database family, its performance on modern computers is quite satisfactory. LARCRIM is written in FORTRAN 77 and runs under the UNIX operating system. Versions have been released for the following computers: SUN (3 & 4), Convex, IRIS, Hewlett-Packard, CRAY 2 & Y-MP.
High Performance Descriptive Semantic Analysis of Semantic Graph Databases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joslyn, Cliff A.; Adolf, Robert D.; al-Saffar, Sinan
As semantic graph database technology grows to address components ranging from extant large triple stores to SPARQL endpoints over SQL-structured relational databases, it will become increasingly important to be able to understand their inherent semantic structure, whether codified in explicit ontologies or not. Our group is researching novel methods for what we call descriptive semantic analysis of RDF triplestores, to serve purposes of analysis, interpretation, visualization, and optimization. But data size and computational complexity makes it increasingly necessary to bring high performance computational resources to bear on this task. Our research group built a novel high performance hybrid system comprisingmore » computational capability for semantic graph database processing utilizing the large multi-threaded architecture of the Cray XMT platform, conventional servers, and large data stores. In this paper we describe that architecture and our methods, and present the results of our analyses of basic properties, connected components, namespace interaction, and typed paths such for the Billion Triple Challenge 2010 dataset.« less
Crowdsourcing-Assisted Radio Environment Database for V2V Communication.
Katagiri, Keita; Sato, Koya; Fujii, Takeo
2018-04-12
In order to realize reliable Vehicle-to-Vehicle (V2V) communication systems for autonomous driving, the recognition of radio propagation becomes an important technology. However, in the current wireless distributed network systems, it is difficult to accurately estimate the radio propagation characteristics because of the locality of the radio propagation caused by surrounding buildings and geographical features. In this paper, we propose a measurement-based radio environment database for improving the accuracy of the radio environment estimation in the V2V communication systems. The database first gathers measurement datasets of the received signal strength indicator (RSSI) related to the transmission/reception locations from V2V systems. By using the datasets, the average received power maps linked with transmitter and receiver locations are generated. We have performed measurement campaigns of V2V communications in the real environment to observe RSSI for the database construction. Our results show that the proposed method has higher accuracy of the radio propagation estimation than the conventional path loss model-based estimation.
Crowdsourcing-Assisted Radio Environment Database for V2V Communication †
Katagiri, Keita; Fujii, Takeo
2018-01-01
In order to realize reliable Vehicle-to-Vehicle (V2V) communication systems for autonomous driving, the recognition of radio propagation becomes an important technology. However, in the current wireless distributed network systems, it is difficult to accurately estimate the radio propagation characteristics because of the locality of the radio propagation caused by surrounding buildings and geographical features. In this paper, we propose a measurement-based radio environment database for improving the accuracy of the radio environment estimation in the V2V communication systems. The database first gathers measurement datasets of the received signal strength indicator (RSSI) related to the transmission/reception locations from V2V systems. By using the datasets, the average received power maps linked with transmitter and receiver locations are generated. We have performed measurement campaigns of V2V communications in the real environment to observe RSSI for the database construction. Our results show that the proposed method has higher accuracy of the radio propagation estimation than the conventional path loss model-based estimation. PMID:29649174
Batista Rodríguez, Gabriela; Balla, Andrea; Fernández-Ananín, Sonia; Balagué, Carmen; Targarona, Eduard M
2018-05-01
The term big data refers to databases that include large amounts of information used in various areas of knowledge. Currently, there are large databases that allow the evaluation of postoperative evolution, such as the American College of Surgeons National Surgical Quality Improvement Program (ACS-NSQIP), the Healthcare Cost and Utilization Project (HCUP) National Inpatient Sample (NIS), and the National Cancer Database (NCDB). The aim of this review was to evaluate the clinical impact of information obtained from these registries regarding gastroesophageal surgery. A systematic review using the Meta-analysis of Observational Studies in Epidemiology guidelines was performed. The research was carried out using the PubMed database identifying 251 articles. All outcomes related to gastroesophageal surgery were analyzed. A total of 34 articles published between January 2007 and July 2017 were included, for a total of 345 697 patients. Studies were analyzed and divided according to the type of surgery and main theme in (1) esophageal surgery and (2) gastric surgery. The information provided by these databases is an effective way to obtain levels of evidence not obtainable by conventional methods. Furthermore, this information is useful for the external validation of previous studies, to establish benchmarks that allow comparisons between centers and have a positive impact on the quality of care.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chun, K.C.; Chiu, S.Y.; Ditmars, J.D.
1994-05-01
The MIDAS (Munition Items Disposition Action System) database system is an electronic data management system capable of storage and retrieval of information on the detailed structures and material compositions of munitions items designated for demilitarization. The types of such munitions range from bulk propellants and small arms to projectiles and cluster bombs. The database system is also capable of processing data on the quantities of inert, PEP (propellant, explosives and pyrotechnics) and packaging materials associated with munitions, components, or parts, and the quantities of chemical compounds associated with parts made of PEP materials. Development of the MIDAS database system hasmore » been undertaken by the US Army to support disposition of unwanted ammunition stockpiles. The inventory of such stockpiles currently includes several thousand items, which total tens of thousands of tons, and is still growing. Providing systematic procedures for disposing of all unwanted conventional munitions is the mission of the MIDAS Demilitarization Program. To carry out this mission, all munitions listed in the Single Manager for Conventional Ammunition inventory must be characterized, and alternatives for resource recovery and recycling and/or disposal of munitions in the demilitarization inventory must be identified.« less
Hierarchical Data Distribution Scheme for Peer-to-Peer Networks
NASA Astrophysics Data System (ADS)
Bhushan, Shashi; Dave, M.; Patel, R. B.
2010-11-01
In the past few years, peer-to-peer (P2P) networks have become an extremely popular mechanism for large-scale content sharing. P2P systems have focused on specific application domains (e.g. music files, video files) or on providing file system like capabilities. P2P is a powerful paradigm, which provides a large-scale and cost-effective mechanism for data sharing. P2P system may be used for storing data globally. Can we implement a conventional database on P2P system? But successful implementation of conventional databases on the P2P systems is yet to be reported. In this paper we have presented the mathematical model for the replication of the partitions and presented a hierarchical based data distribution scheme for the P2P networks. We have also analyzed the resource utilization and throughput of the P2P system with respect to the availability, when a conventional database is implemented over the P2P system with variable query rate. Simulation results show that database partitions placed on the peers with higher availability factor perform better. Degradation index, throughput, resource utilization are the parameters evaluated with respect to the availability factor.
High-throughput STR analysis for DNA database using direct PCR.
Sim, Jeong Eun; Park, Su Jeong; Lee, Han Chul; Kim, Se-Yong; Kim, Jong Yeol; Lee, Seung Hwan
2013-07-01
Since the Korean criminal DNA database was launched in 2010, we have focused on establishing an automated DNA database profiling system that analyzes short tandem repeat loci in a high-throughput and cost-effective manner. We established a DNA database profiling system without DNA purification using a direct PCR buffer system. The quality of direct PCR procedures was compared with that of conventional PCR system under their respective optimized conditions. The results revealed not only perfect concordance but also an excellent PCR success rate, good electropherogram quality, and an optimal intra/inter-loci peak height ratio. In particular, the proportion of DNA extraction required due to direct PCR failure could be minimized to <3%. In conclusion, the newly developed direct PCR system can be adopted for automated DNA database profiling systems to replace or supplement conventional PCR system in a time- and cost-saving manner. © 2013 American Academy of Forensic Sciences Published 2013. This article is a U.S. Government work and is in the public domain in the U.S.A.
Jiang, Yuebo; Shi, Xian; Tang, Yan
2015-01-01
Acupuncture is one of the important parts of therapeutic methods in traditional Chinese medicine, and has been widely used for the treatment of nerve deafness in recent years. The current study was to evaluate the efficacy and safety of acupuncture therapy for nerve deafness compared with conventional medicine therapy. PubMed, the Chinese National Knowledge Infrastructure Database, the Chinese Science and Technology Periodical Database, the Chinese Biomedical Database, the Wanfang Database were searched for articles published to identify randomized controlled trials evaluating efficacy and side effects between acupuncture and conventional medicine therapies up to 2013/06. A total of 12 studies, including 527 patients assessed the efficacy and safety of acupuncture therapy for nerve deafness. Overall, the efficacy of acupuncture was significantly better than that of the conventional western medication (RR: 1.54, 95% CI: 1.36-1.74) or traditional Chinese medicines (RR: 1.51, 95% CI: 1.24-1.84), and the efficacy of acupuncture in combination with conventional western medication or traditional Chinese medicine was better than that of the conventional western medication alone (RR: 1.51, 95% CI: 1.29-1.77) or traditional Chinese medicine alone (RR: 1.59, 95% CI: 1.30-1.95). Based on the comparison of number of deafness patients who were completely cured, the efficacy of acupuncture in combination with traditional Chinese medicines was better than that of traditional Chinese medicine alone (RR: 4.62, 95% CI: 1.38-15.47). Acupuncture therapy can significantly improve the hearing of patients with nerve deafness, and the efficacy of acupuncture in combination with medication is superior to medication alone.
[Effect of 3D printing technology on pelvic fractures:a Meta-analysis].
Zhang, Yu-Dong; Wu, Ren-Yuan; Xie, Ding-Ding; Zhang, Lei; He, Yi; Zhang, Hong
2018-05-25
To evaluate the effect of 3D printing technology applied in the surgical treatment of pelvic fractures through the published literatures by Meta-analysis. The PubMed database, EMCC database, CBM database, CNKI database, VIP database and Wanfang database were searched from the date of database foundation to August 2017 to collect the controlled clinical trials in wich 3D printing technology was applied in preoperative planning of pelvic fracture surgery. The retrieved literatures were screened according to predefined inclusion and exclusion criteria, and quality evaluation were performed. Then, the available data were extracted and analyzed with the RevMan5.3 software. Totally 9 controlled clinical trials including 638 cases were chosen. Among them, 279 cases were assigned to the 3D printing technology group and 359 cases to the conventional group. The Meta-analysis results showed that the operative time[SMD=-2.81, 95%CI(-3.76, -1.85)], intraoperative blood loss[SMD=-3.28, 95%CI(-4.72, -1.85)] and the rate of complication [OR=0.47, 95%CI(0.25, 0.87)] in the 3D printing technology were all lower than those in the conventional group;the excellent and good rate of pelvic fracture reduction[OR=2.09, 95%CI(1.32, 3.30)] and postoperative pelvic functional restoration [OR=1.94, 95%CI(1.15, 3.28) in the 3D printing technology were all superior to those in the conventional group. 3D printing technology applied in the surgical treatment of pelvic fractures has the advantage of shorter operative time, less intraoperative blood loss and lower rate of complication, and can improve the quality of pelvic fracture reduction and the recovery of postoperative pelvic function. Copyright© 2018 by the China Journal of Orthopaedics and Traumatology Press.
Towards the ophthalmology patentome: a comprehensive patent database of ocular drugs and biomarkers.
Mucke, Hermann A M; Mucke, Eva; Mucke, Peter M
2013-01-01
We are currently building a database of all patent documents that contain substantial information related to pharmacology, drug delivery, tissue technology, and molecular diagnostics in ophthalmology. The goal is to establish a 'patentome', a body of cleaned and annotated data where all text-based, chemistry and pharmacology information can be accessed and mined in its context. We provide metrics on patent convention treaty documents, which demonstrate that ocular-related patenting has shown stronger growth than general patent cooperation treaty patenting during the past 25 years, and, while the majority of applications of this type have always provided substantial biological data, both data support and objections by patent examiners have been increasing since 2006-2007. Separately, we present a case study of chemistry information extraction from patents published during the 1950s and 1970s, which reveal compounds with corneal anesthesia potential that were never published in the peer-reviewed literature.
Applying Cognitive Load Theory to the Redesign of a Conventional Database Systems Course
ERIC Educational Resources Information Center
Mason, Raina; Seton, Carolyn; Cooper, Graham
2016-01-01
Cognitive load theory (CLT) was used to redesign a Database Systems course for Information Technology students. The redesign was intended to address poor student performance and low satisfaction, and to provide a more relevant foundation in database design and use for subsequent studies and industry. The original course followed the conventional…
Supporting user-defined granularities in a spatiotemporal conceptual model
Khatri, V.; Ram, S.; Snodgrass, R.T.; O'Brien, G. M.
2002-01-01
Granularities are integral to spatial and temporal data. A large number of applications require storage of facts along with their temporal and spatial context, which needs to be expressed in terms of appropriate granularities. For many real-world applications, a single granularity in the database is insufficient. In order to support any type of spatial or temporal reasoning, the semantics related to granularities needs to be embedded in the database. Specifying granularities related to facts is an important part of conceptual database design because under-specifying the granularity can restrict an application, affect the relative ordering of events and impact the topological relationships. Closely related to granularities is indeterminacy, i.e., an occurrence time or location associated with a fact that is not known exactly. In this paper, we present an ontology for spatial granularities that is a natural analog of temporal granularities. We propose an upward-compatible, annotation-based spatiotemporal conceptual model that can comprehensively capture the semantics related to spatial and temporal granularities, and indeterminacy without requiring new spatiotemporal constructs. We specify the formal semantics of this spatiotemporal conceptual model via translation to a conventional conceptual model. To underscore the practical focus of our approach, we describe an on-going case study. We apply our approach to a hydrogeologic application at the United States Geologic Survey and demonstrate that our proposed granularity-based spatiotemporal conceptual model is straightforward to use and is comprehensive.
Geodemographic segmentation systems for screening health data.
Openshaw, S; Blake, M
1995-01-01
AIM--To describe how geodemographic segmentation systems might be useful as a quick and easy way of exploring postcoded health databases for potential interesting patterns related to deprivation and other socioeconomic characteristics. DESIGN AND SETTING--This is demonstrated using GB Profiles, a freely available geodemographic classification system developed at Leeds University. It is used here to screen a database of colorectal cancer registrations as a first step in the analysis of that data. RESULTS AND CONCLUSION--Conventional geodemographics is a fairly simple technology and a number of outstanding methodological problems are identified. A solution to some problems is illustrated by using neural net based classifiers and then by reference to a more sophisticated geodemographic approach via a data optimal segmentation technique. Images PMID:8594132
NASA Astrophysics Data System (ADS)
Piasecki, M.; Beran, B.
2007-12-01
Search engines have changed the way we see the Internet. The ability to find the information by just typing in keywords was a big contribution to the overall web experience. While the conventional search engine methodology worked well for textual documents, locating scientific data remains a problem since they are stored in databases not readily accessible by search engine bots. Considering different temporal, spatial and thematic coverage of different databases, especially for interdisciplinary research it is typically necessary to work with multiple data sources. These sources can be federal agencies which generally offer national coverage or regional sources which cover a smaller area with higher detail. However for a given geographic area of interest there often exists more than one database with relevant data. Thus being able to query multiple databases simultaneously is a desirable feature that would be tremendously useful for scientists. Development of such a search engine requires dealing with various heterogeneity issues. In scientific databases, systems often impose controlled vocabularies which ensure that they are generally homogeneous within themselves but are semantically heterogeneous when moving between different databases. This defines the boundaries of possible semantic related problems making it easier to solve than with the conventional search engines that deal with free text. We have developed a search engine that enables querying multiple data sources simultaneously and returns data in a standardized output despite the aforementioned heterogeneity issues between the underlying systems. This application relies mainly on metadata catalogs or indexing databases, ontologies and webservices with virtual globe and AJAX technologies for the graphical user interface. Users can trigger a search of dozens of different parameters over hundreds of thousands of stations from multiple agencies by providing a keyword, a spatial extent, i.e. a bounding box, and a temporal bracket. As part of this development we have also added an environment that allows users to do some of the semantic tagging, i.e. the linkage of a variable name (which can be anything they desire) to defined concepts in the ontology structure which in turn provides the backbone of the search engine.
Franklin, Stanley S; Thijs, Lutgarde; Hansen, Tine W; Li, Yan; Boggia, José; Kikuya, Masahiro; Björklund-Bodegård, Kristina; Ohkubo, Takayoshi; Jeppesen, Jørgen; Torp-Pedersen, Christian; Dolan, Eamon; Kuznetsova, Tatiana; Stolarz-Skrzypek, Katarzyna; Tikhonoff, Valérie; Malyutina, Sofia; Casiglia, Edoardo; Nikitin, Yuri; Lind, Lars; Sandoya, Edgardo; Kawecka-Jaszcz, Kalina; Imai, Yutaka; Wang, Jiguang; Ibsen, Hans; O'Brien, Eoin; Staessen, Jan A
2012-03-01
The significance of white-coat hypertension in older persons with isolated systolic hypertension remains poorly understood. We analyzed subjects from the population-based 11-country International Database on Ambulatory Blood Pressure Monitoring in Relation to Cardiovascular Outcomes database who had daytime ambulatory blood pressure (BP; ABP) and conventional BP (CBP) measurements. After excluding persons with diastolic hypertension by CBP (≥90 mm Hg) or by daytime ABP (≥85 mm Hg), a history of cardiovascular disease, and persons <18 years of age, the present analysis totaled 7295 persons, of whom 1593 had isolated systolic hypertension. During a median follow-up of 10.6 years, there was a total of 655 fatal and nonfatal cardiovascular events. The analyses were stratified by treatment status. In untreated subjects, those with white-coat hypertension (CBP ≥140/<90 mm Hg and ABP <135/<85 mm Hg) and subjects with normal BP (CBP <140/<90 mm Hg and ABP <135/<85 mm Hg) were at similar risk (adjusted hazard rate: 1.17 [95% CI: 0.87-1.57]; P=0.29). Furthermore, in treated subjects with isolated systolic hypertension, the cardiovascular risk was similar in elevated conventional and normal daytime systolic BP as compared with those with normal conventional and normal daytime BPs (adjusted hazard rate: 1.10 [95% CI: 0.79-1.53]; P=0.57). However, both treated isolated systolic hypertension subjects with white-coat hypertension (adjusted hazard rate: 2.00; [95% CI: 1.43-2.79]; P<0.0001) and treated subjects with normal BP (adjusted hazard rate: 1.98 [95% CI: 1.49-2.62]; P<0.0001) were at higher risk as compared with untreated normotensive subjects. In conclusion, subjects with sustained hypertension who have their ABP normalized on antihypertensive therapy but with residual white-coat effect by CBP measurement have an entity that we have termed, "treated normalized hypertension." Therefore, one should be cautious in applying the term "white-coat hypertension" to persons receiving antihypertensive treatment.
Unbiased quantitative testing of conventional orthodontic beliefs.
Baumrind, S
1998-03-01
This study used a preexisting database to test in hypothesis from the appropriateness of some common orthodontic beliefs concerning upper first molar displacement and changes in facial morphology associated with conventional full bonded/banded treatment in growing subjects. In an initial pass, the author used data from a stratified random sample of 48 subjects drawn retrospectively from the practice of a single, experienced orthodontist. This sample consisted of 4 subgroups of 12 subjects each: Class I nonextraction, Class I extraction, Class II nonextraction, and Class II extraction. The findings indicate that, relative to the facial profile, chin point did not, on average, displace anteriorly during treatment, either overall or in any subgroup. Relative to the facial profile, Point A became significantly less prominent during treatment, both overall and in each subgroup. The best estimate of the mean displacement of the upper molar cusp relative to superimposition on Anterior Cranial Base was in the mesial direction in each of the four subgroups. In only one extraction subject out of 24 did the cusp appear to be displaced distally. Mesial molar cusp displacement was significantly greater in the Class II extraction subgroup than in the Class II nonextraction subgroup. Relative to superimposition on anatomical "best fit" of maxillary structures, the findings for molar cusp displacement were similar, but even more dramatic. Mean mesial migration was highly significant in both the Class II nonextraction and Class II extraction subgroups. In no subject in the entire sample was distal displacement noted relative to this superimposition. Mean increase in anterior Total Face Height was significantly greater in the Class II extraction subgroup than in the Class II nonextraction subgroup. (This finding was contrary to the author's original expectation.) The generalizability of the findings from the initial pass to other treated growing subjects was then assessed by retesting modified hypotheses against a second database stored sample that earlier had been drawn randomly from two other orthodontic practices. The implications of the author's study strategy to the design of future shared digital databases is discussed briefly.
Code of Federal Regulations, 2010 CFR
2010-01-01
... database resulting from the transformation of the ENC by ECDIS for appropriate use, updates to the ENC by... of the 1974 SOLAS Convention. Electronic Navigational Chart (ENC) means a database, standardized as to content, structure, and format, issued for use with ECDIS on the authority of government...
Mandatory and Location-Aware Access Control for Relational Databases
NASA Astrophysics Data System (ADS)
Decker, Michael
Access control is concerned with determining which operations a particular user is allowed to perform on a particular electronic resource. For example, an access control decision could say that user Alice is allowed to perform the operation read (but not write) on the resource research report. With conventional access control this decision is based on the user's identity whereas the basic idea of Location-Aware Access Control (LAAC) is to evaluate also a user's current location when making the decision if a particular request should be granted or denied. LAAC is an interesting approach for mobile information systems because these systems are exposed to specific security threads like the loss of a device. Some data models for LAAC can be found in literature, but almost all of them are based on RBAC and none of them is designed especially for Database Management Systems (DBMS). In this paper we therefore propose a LAAC-approach for DMBS and describe a prototypical implementation of that approach that is based on database triggers.
Yoo, Danny; Xu, Iris; Berardini, Tanya Z; Rhee, Seung Yon; Narayanasamy, Vijay; Twigger, Simon
2006-03-01
For most systems in biology, a large body of literature exists that describes the complexity of the system based on experimental results. Manual review of this literature to extract targeted information into biological databases is difficult and time consuming. To address this problem, we developed PubSearch and PubFetch, which store literature, keyword, and gene information in a relational database, index the literature with keywords and gene names, and provide a Web user interface for annotating the genes from experimental data found in the associated literature. A set of protocols is provided in this unit for installing, populating, running, and using PubSearch and PubFetch. In addition, we provide support protocols for performing controlled vocabulary annotations. Intended users of PubSearch and PubFetch are database curators and biology researchers interested in tracking the literature and capturing information about genes of interest in a more effective way than with conventional spreadsheets and lab notebooks.
Design sensitivity analysis using EAL. Part 1: Conventional design parameters
NASA Technical Reports Server (NTRS)
Dopker, B.; Choi, Kyung K.; Lee, J.
1986-01-01
A numerical implementation of design sensitivity analysis of builtup structures is presented, using the versatility and convenience of an existing finite element structural analysis code and its database management system. The finite element code used in the implemenatation presented is the Engineering Analysis Language (EAL), which is based on a hybrid method of analysis. It was shown that design sensitivity computations can be carried out using the database management system of EAL, without writing a separate program and a separate database. Conventional (sizing) design parameters such as cross-sectional area of beams or thickness of plates and plane elastic solid components are considered. Compliance, displacement, and stress functionals are considered as performance criteria. The method presented is being extended to implement shape design sensitivity analysis using a domain method and a design component method.
Burnett, Leslie; Barlow-Stewart, Kris; Proos, Anné L; Aizenberg, Harry
2003-05-01
This article describes a generic model for access to samples and information in human genetic databases. The model utilises a "GeneTrustee", a third-party intermediary independent of the subjects and of the investigators or database custodians. The GeneTrustee model has been implemented successfully in various community genetics screening programs and has facilitated research access to genetic databases while protecting the privacy and confidentiality of research subjects. The GeneTrustee model could also be applied to various types of non-conventional genetic databases, including neonatal screening Guthrie card collections, and to forensic DNA samples.
Mercury and halogens in coal: Chapter 2
Kolker, Allan; Quick, Jeffrey C.; Granite, Evan J.; Pennline, Henry W.; Senior, Constance L.
2014-01-01
Apart from mercury itself, coal rank and halogen content are among the most important factors inherent in coal that determine the proportion of mercury captured by conventional controls during coal combustion. This chapter reviews how mercury in coal occurs, gives available concentration data for mercury in U.S. and international commercial coals, and provides an overview of the natural variation in halogens that influence mercury capture. Three databases, the U.S. Geological Survey coal quality (USGS COALQUAL) database for in-ground coals, and the 1999 and 2010 U.S. Environmental Protection Agency (EPA) Information Collection Request (ICR) databases for coals delivered to power stations, provide extensive results for mercury and other parameters that are compared in this chapter. In addition to the United States, detailed characterization of mercury is available on a nationwide basis for China, whose mean values in recent compilations are very similar to the United States in-ground mean of 0.17 ppm mercury. Available data for the next five largest producers (India, Australia, South Africa, the Russian Federation, and Indonesia) are more limited and with the possible exceptions of Australia and the Russian Federation, do not allow nationwide means for mercury in coal to be calculated. Chlorine in coal varies as a function of rank and correspondingly, depth of burial. As discussed elsewhere in this volume, on a proportional basis, bromine is more effective than chlorine in promoting mercury oxidation in flue gas and capture by conventional controls. The ratio of bromine to chlorine in coal is indicative of the proportion of halogens present in formation waters within a coal basin. This ratio is relatively constant except in coals that have interacted with deep-basin brines that have reached halite saturation, enriching residual fluids in bromine. Results presented here help optimize mercury capture by conventional controls and provide a starting point for implementation of mercury-specific controls discussed elsewhere in this volume.
SAM: String-based sequence search algorithm for mitochondrial DNA database queries
Röck, Alexander; Irwin, Jodi; Dür, Arne; Parsons, Thomas; Parson, Walther
2011-01-01
The analysis of the haploid mitochondrial (mt) genome has numerous applications in forensic and population genetics, as well as in disease studies. Although mtDNA haplotypes are usually determined by sequencing, they are rarely reported as a nucleotide string. Traditionally they are presented in a difference-coded position-based format relative to the corrected version of the first sequenced mtDNA. This convention requires recommendations for standardized sequence alignment that is known to vary between scientific disciplines, even between laboratories. As a consequence, database searches that are vital for the interpretation of mtDNA data can suffer from biased results when query and database haplotypes are annotated differently. In the forensic context that would usually lead to underestimation of the absolute and relative frequencies. To address this issue we introduce SAM, a string-based search algorithm that converts query and database sequences to position-free nucleotide strings and thus eliminates the possibility that identical sequences will be missed in a database query. The mere application of a BLAST algorithm would not be a sufficient remedy as it uses a heuristic approach and does not address properties specific to mtDNA, such as phylogenetically stable but also rapidly evolving insertion and deletion events. The software presented here provides additional flexibility to incorporate phylogenetic data, site-specific mutation rates, and other biologically relevant information that would refine the interpretation of mitochondrial DNA data. The manuscript is accompanied by freeware and example data sets that can be used to evaluate the new software (http://stringvalidation.org). PMID:21056022
Hirano, Yoko; Asami, Yuko; Kuribayashi, Kazuhiko; Kitazaki, Shigeru; Yamamoto, Yuji; Fujimoto, Yoko
2018-05-01
Many pharmacoepidemiologic studies using large-scale databases have recently been utilized to evaluate the safety and effectiveness of drugs in Western countries. In Japan, however, conventional methodology has been applied to postmarketing surveillance (PMS) to collect safety and effectiveness information on new drugs to meet regulatory requirements. Conventional PMS entails enormous costs and resources despite being an uncontrolled observational study method. This study is aimed at examining the possibility of database research as a more efficient pharmacovigilance approach by comparing a health care claims database and PMS with regard to the characteristics and safety profiles of sertraline-prescribed patients. The characteristics of sertraline-prescribed patients recorded in a large-scale Japanese health insurance claims database developed by MinaCare Co. Ltd. were scanned and compared with the PMS results. We also explored the possibility of detecting signals indicative of adverse reactions based on the claims database by using sequence symmetry analysis. Diabetes mellitus, hyperlipidemia, and hyperthyroidism served as exploratory events, and their detection criteria for the claims database were reported by the Pharmaceuticals and Medical Devices Agency in Japan. Most of the characteristics of sertraline-prescribed patients in the claims database did not differ markedly from those in the PMS. There was no tendency for higher risks of the exploratory events after exposure to sertraline, and this was consistent with sertraline's known safety profile. Our results support the concept of using database research as a cost-effective pharmacovigilance tool that is free of selection bias . Further investigation using database research is required to confirm our preliminary observations. Copyright © 2018. Published by Elsevier Inc.
SSME environment database development
NASA Technical Reports Server (NTRS)
Reardon, John
1987-01-01
The internal environment of the Space Shuttle Main Engine (SSME) is being determined from hot firings of the prototype engines and from model tests using either air or water as the test fluid. The objectives are to develop a database system to facilitate management and analysis of test measurements and results, to enter available data into the the database, and to analyze available data to establish conventions and procedures to provide consistency in data normalization and configuration geometry references.
New standards for reducing gravity data: The North American gravity database
Hinze, W. J.; Aiken, C.; Brozena, J.; Coakley, B.; Dater, D.; Flanagan, G.; Forsberg, R.; Hildenbrand, T.; Keller, Gordon R.; Kellogg, J.; Kucks, R.; Li, X.; Mainville, A.; Morin, R.; Pilkington, M.; Plouff, D.; Ravat, D.; Roman, D.; Urrutia-Fucugauchi, J.; Veronneau, M.; Webring, M.; Winester, D.
2005-01-01
The North American gravity database as well as databases from Canada, Mexico, and the United States are being revised to improve their coverage, versatility, and accuracy. An important part of this effort is revising procedures for calculating gravity anomalies, taking into account our enhanced computational power, improved terrain databases and datums, and increased interest in more accurately defining long-wavelength anomaly components. Users of the databases may note minor differences between previous and revised database values as a result of these procedures. Generally, the differences do not impact the interpretation of local anomalies but do improve regional anomaly studies. The most striking revision is the use of the internationally accepted terrestrial ellipsoid for the height datum of gravity stations rather than the conventionally used geoid or sea level. Principal facts of gravity observations and anomalies based on both revised and previous procedures together with germane metadata will be available on an interactive Web-based data system as well as from national agencies and data centers. The use of the revised procedures is encouraged for gravity data reduction because of the widespread use of the global positioning system in gravity fieldwork and the need for increased accuracy and precision of anomalies and consistency with North American and national databases. Anomalies based on the revised standards should be preceded by the adjective "ellipsoidal" to differentiate anomalies calculated using heights with respect to the ellipsoid from those based on conventional elevations referenced to the geoid. ?? 2005 Society of Exploration Geophysicists. All rights reserved.
Hoe, Victor C W; Urquhart, Donna M; Kelsall, Helen L; Sim, Malcolm R
2012-08-15
Work-related upper limb and neck musculoskeletal disorders (MSDs) are one of the most common occupational disorders around the world. Although ergonomic design and training are likely to reduce the risk of workers developing work-related upper limb and neck MSDs, the evidence is unclear. To assess the effects of workplace ergonomic design or training interventions, or both, for the prevention of work-related upper limb and neck MSDs in adults. We searched MEDLINE, EMBASE, the Cochrane Central Register of Controlled Trials (CENTRAL), CINAHL, AMED, Web of Science (Science Citation Index), SPORTDiscus, Cochrane Occupational Safety and Health Review Group Database and Cochrane Bone, Joint and Muscle Trauma Group Specialised Register to July 2010, and Physiotherapy Evidence Database, US Centers for Disease Control and Prevention, the National Institute for Occupational Safety and Health database, and International Occupational Safety and Health Information Centre database to November 2010. We included randomised controlled trials (RCTs) of ergonomic workplace interventions for preventing work-related upper limb and neck MSDs. We included only studies with a baseline prevalence of MSDs of the upper limb or neck, or both, of less than 25%. Two review authors independently extracted data and assessed risk of bias. We included studies with relevant data that we judged to be sufficiently homogeneous regarding the intervention and outcome in the meta-analysis. We assessed the overall quality of the evidence for each comparison using the GRADE approach. We included 13 RCTs (2397 workers). Eleven studies were conducted in an office environment and two in a healthcare setting. We judged one study to have a low risk of bias. The 13 studies evaluated effectiveness of ergonomic equipment, supplementary breaks or reduced work hours, ergonomic training, a combination of ergonomic training and equipment, and patient lifting interventions for preventing work-related MSDs of the upper limb and neck in adults.Overall, there was moderate-quality evidence that arm support with alternative mouse reduced the incidence of neck/shoulder disorders (risk ratio (RR) 0.52; 95% confidence interval (CI) 0.27 to 0.99) but not the incidence of right upper limb MSDs (RR 0.73; 95% CI 0.32 to 1.66); and low-quality evidence that this intervention reduced neck/shoulder discomfort (standardised mean difference (SMD) -0.41; 95% CI -0.69 to -0.12) and right upper limb discomfort (SMD -0.34; 95% CI -0.63 to -0.06).There was also moderate-quality evidence that the incidence of neck/shoulder and right upper limb disorders were not reduced when comparing alternative mouse and conventional mouse (neck/shoulder RR 0.62; 95% CI 0.19 to 2.00; right upper limb RR 0.91; 95% CI 0.48 to 1.72), arm support and no arm support with conventional mouse (neck/shoulder RR 0.67; 95% CI 0.36 to 1.24; right upper limb RR 1.09; 95% CI 0.51 to 2.29), and alternative mouse with arm support and conventional mouse with arm support (neck/shoulder RR 0.58; 95% CI 0.30 to 1.12; right upper limb RR 0.92; 95% CI 0.36 to 2.36).There was low-quality evidence that using an alternative mouse with arm support compared to conventional mouse with arm support reduced neck/shoulder discomfort (SMD -0.39; 95% CI -0.67 to -0.10). There was low- to very low-quality evidence that other interventions were not effective in reducing work-related upper limb and neck MSDs in adults. We found moderate-quality evidence to suggest that the use of arm support with alternative mouse may reduce the incidence of neck/shoulder MSDs, but not right upper limb MSDs. Moreover, we found moderate-quality evidence to suggest that the incidence of neck/shoulder and right upper limb MSDs is not reduced when comparing alternative and conventional mouse with and without arm support. However, given there were multiple comparisons made involving a number of interventions and outcomes, high-quality evidence is needed to determine the effectiveness of these interventions clearly. While we found very-low- to low-quality evidence to suggest that other ergonomic interventions do not prevent work-related MSDs of the upper limb and neck, this was limited by the paucity and heterogeneity of available studies. This review highlights the need for high-quality RCTs examining the prevention of MSDs of the upper limb and neck.
Longoni, Juliano N; Lopes, Beatriz M; Freires, Irlan A; Dutra, Kamile L; Franco, Ademir; Paranhos, Luiz R
2017-01-01
The present study aimed to review the literature systematically and assess comparatively whether self-ligating metallic brackets accumulate less Streptococcus mutans biofilm than conventional metallic brackets. The systematic search was performed following PRISMA guidelines and registration in PROSPERO. Seven electronic databases (Google Scholar, LILACS, Open Grey, PubMed, SciELO, ScienceDirect, and Scopus) were consulted until April 2016, with no restriction of language and time of publication. Only randomized clinical studies verifying S. mutans colonization in metallic brackets (self-ligating and conventional) were included. All steps were performed independently by two operators. The search resulted in 546 records obtained from the electronic databases. Additionally, 216 references obtained from the manual search of eligible articles were assessed. Finally, a total of 5 studies were included in the qualitative synthesis. In 1 study, the total bacterial count was not different among self-ligating and conventional brackets, whereas in 2 studies the amount was lower for self-ligating brackets. Regarding the specific count of S. mutans , 2 studies showed less accumulation in self-ligating than in conventional brackets. Based on the limited evidence, self-ligating metallic brackets accumulate less S. mutans than conventional ones. However, these findings must be interpreted in conjunction with particularities individual for each patient - such as hygiene and dietary habits, which are components of the multifactorial environment that enables S. Mutans to proliferate and keep retained in the oral cavity.
Garg, Pankaj; Thakur, Jai Deep; Garg, Mahak; Menon, Geetha R
2012-08-01
We analyzed different morbidity parameters between single-incision laparoscopic cholecystectomy (SILC) and conventional laparoscopic cholecystectomy (CLC). Pubmed, Ovid, Embase, SCI database, Cochrane, and Google Scholar were searched. The primary endpoints analyzed were cosmetic result and the postoperative pain (at 6 and 24 h) and the secondary endpoints were operating time, hospital stay, incidence of overall postoperative complications, wound-related complications, and port-site hernia. Six hundred fifty-nine patients (SILC-349, CLC-310) were analyzed from nine randomized controlled trials. The objective postoperative pain scores at 6 and 24 h and the hospital stay were similar in both groups. The total postoperative complications, wound-related problems, and port-site hernia formation, though higher in SILC, were also comparable in both groups. SILC had significantly favorable cosmetic scoring compared to CLC [weighted mean difference = 1.0, p = 0.0001]. The operating time was significantly longer in SILC [weighted mean difference = 15.63, p = 0.0001]. Single-incision laparoscopic cholecystectomy does not confer any benefit in postoperative pain (6 and 24 h) and hospital stay as compared to conventional laparoscopic cholecystectomy while having significantly better cosmetic results at the same time. Postoperative complications, though higher in SILC, were statistically similar in both the groups.
Ying, Xiao-Ming; Jiang, Yong-Liang; Xu, Peng; Wang, Peng; Zhu, Bo; Guo, Shao-Qing
2016-08-25
To conduct a meta analysis of studies comparing theapeutic effect and safety of microendoscopic discectomy to conventional open discectomy in the treatment of lumbar disc herniation in China. A systematic literature retrieval was conducted in the Chinese Bio medicine Database, CNKI database, Chongqin VIP database and Wangfang database. The statistical analysis was performed using a RevMan 4.2 software. The comparison included excellent rate, operation times, blood loss, periods of bed rest and resuming daily activities, hospital stay or hospital stay after surgery, and complications of microendoscopic discectomy versus conventional open discectomy. The search yielded 20 reports, which included 2 957 cases treated by microendoscopic discectomy and 2 130 cases treated by conventional open discectomy. There were 12, 11, 7, 5, 4 and 4 reports which had comparison of operation times, blood loss, period of bed rest, periods of resuming daily activities, hospital stay and hospital stay after surgery respectively. Complications were mentioned in 10 reports. Compared to patients treated by open discectomy, patients treated by microendoscopic discectomy had a higher excellent rates [OR=1.29, 95%CI (1.03, 1.62)], less blood loss[OR=-63.67, 95%CI (-86.78, -40.55)], less period of bed rest[OR=-15.33, 95%CI (-17.76, -12.90)], less period of resumption of daily activities [OR=-24.41, 95%CI (-36.86, -11.96)], less hospital stay [OR=-5.00, 95%CI (-6.94, -3.06)] or hospital stay after surgery [OR=-7.47, 95%CI (-9.17, -5.77) respectively. However, incidence of complications and operation times were proved no significant different between microendoscopic discectomy and open discectomy. Microendoscopic discectomy and conventional open discectomy in treatment of lumbar disc herniation are both safe, effective; incidence of complications are nearly. Patients with lumbar disc herniation treated by microendoscopic discectomy have fewer blood loss, shorter periods of bed rest and hospital stay, and resume daily activities faster. Techniques are selected according to indications, microendoscopic discectomy should be carried out when conjunct indications occur.
Integrating Databases with Maps: The Delivery of Cultural Data through TimeMap.
ERIC Educational Resources Information Center
Johnson, Ian
TimeMap is a unique integration of database management, metadata and interactive maps, designed to contextualise and deliver cultural data through maps. TimeMap extends conventional maps with the time dimension, creating and animating maps "on-the-fly"; delivers them as a kiosk application or embedded in Web pages; links flexibly to…
ERIC Educational Resources Information Center
Marcus, Nicole; Adger, Carolyn Temple; Arteagoitia, Igone
2007-01-01
This report seeks to alert administrators, school staff, and database managers to variations in the naming systems of other cultures; to help these groups accommodate other cultures and identify students consistently in school databases; and to provide knowledge of other cultures' naming conventions and forms of address to assist in interacting…
ERIC Educational Resources Information Center
Darrah, Brenda
Researchers for small businesses, which may have no access to expensive databases or market research reports, must often rely on information found on the Internet, which can be difficult to find. Although current conventional Internet search engines are now able to index over on billion documents, there are many more documents existing in…
ERIC Educational Resources Information Center
Asher, Andrew D.; Duke, Lynda M.; Wilson, Suzanne
2013-01-01
In 2011, researchers at Bucknell University and Illinois Wesleyan University compared the search efficacy of Serial Solutions Summon, EBSCO Discovery Service, Google Scholar, and conventional library databases. Using a mixed-methods approach, qualitative and quantitative data were gathered on students' usage of these tools. Regardless of the…
Architecture for biomedical multimedia information delivery on the World Wide Web
NASA Astrophysics Data System (ADS)
Long, L. Rodney; Goh, Gin-Hua; Neve, Leif; Thoma, George R.
1997-10-01
Research engineers at the National Library of Medicine are building a prototype system for the delivery of multimedia biomedical information on the World Wide Web. This paper discuses the architecture and design considerations for the system, which will be used initially to make images and text from the third National Health and Nutrition Examination Survey (NHANES) publicly available. We categorized our analysis as follows: (1) fundamental software tools: we analyzed trade-offs among use of conventional HTML/CGI, X Window Broadway, and Java; (2) image delivery: we examined the use of unconventional TCP transmission methods; (3) database manager and database design: we discuss the capabilities and planned use of the Informix object-relational database manager and the planned schema for the HNANES database; (4) storage requirements for our Sun server; (5) user interface considerations; (6) the compatibility of the system with other standard research and analysis tools; (7) image display: we discuss considerations for consistent image display for end users. Finally, we discuss the scalability of the system in terms of incorporating larger or more databases of similar data, and the extendibility of the system for supporting content-based retrieval of biomedical images. The system prototype is called the Web-based Medical Information Retrieval System. An early version was built as a Java applet and tested on Unix, PC, and Macintosh platforms. This prototype used the MiniSQL database manager to do text queries on a small database of records of participants in the second NHANES survey. The full records and associated x-ray images were retrievable and displayable on a standard Web browser. A second version has now been built, also a Java applet, using the MySQL database manager.
Chinese Herbal Medicine for Symptom Management in Cancer Palliative Care
Chung, Vincent C.H.; Wu, Xinyin; Lu, Ping; Hui, Edwin P.; Zhang, Yan; Zhang, Anthony L.; Lau, Alexander Y.L.; Zhao, Junkai; Fan, Min; Ziea, Eric T.C.; Ng, Bacon F.L.; Wong, Samuel Y.S.; Wu, Justin C.Y.
2016-01-01
Abstract Use of Chinese herbal medicines (CHM) in symptom management for cancer palliative care is very common in Chinese populations but clinical evidence on their effectiveness is yet to be synthesized. To conduct a systematic review with meta-analysis to summarize results from CHM randomized controlled trials (RCTs) focusing on symptoms that are undertreated in conventional cancer palliative care. Five international and 3 Chinese databases were searched. RCTs evaluating CHM, either in combination with conventional treatments or used alone, in managing cancer-related symptoms were considered eligible. Effectiveness was quantified by using weighted mean difference (WMD) using random effect model meta-analysis. Fourteen RCTs were included. Compared with conventional intervention alone, meta-analysis showed that combined CHM and conventional treatment significantly reduced pain (3 studies, pooled WMD: −0.90, 95% CI: −1.69 to −0.11). Six trials comparing CHM with conventional medications demonstrated similar effect in reducing constipation. One RCT showed significant positive effect of CHM plus chemotherapy for managing fatigue, but not in the remaining 3 RCTs. The additional use of CHM to chemotherapy does not improve anorexia when compared to chemotherapy alone, but the result was concluded from 2 small trials only. Adverse events were infrequent and mild. CHM may be considered as an add-on to conventional care in the management of pain in cancer patients. CHM could also be considered as an alternative to conventional care for reducing constipation. Evidence on the use of CHM for treating anorexia and fatigue in cancer patients is uncertain, warranting further research. PMID:26886628
Utah Virtual Lab: JAVA interactivity for teaching science and statistics on line.
Malloy, T E; Jensen, G C
2001-05-01
The Utah on-line Virtual Lab is a JAVA program run dynamically off a database. It is embedded in StatCenter (www.psych.utah.edu/learn/statsampler.html), an on-line collection of tools and text for teaching and learning statistics. Instructors author a statistical virtual reality that simulates theories and data in a specific research focus area by defining independent, predictor, and dependent variables and the relations among them. Students work in an on-line virtual environment to discover the principles of this simulated reality: They go to a library, read theoretical overviews and scientific puzzles, and then go to a lab, design a study, collect and analyze data, and write a report. Each student's design and data analysis decisions are computer-graded and recorded in a database; the written research report can be read by the instructor or by other students in peer groups simulating scientific conventions.
Voice recognition products-an occupational risk for users with ULDs?
Williams, N R
2003-10-01
Voice recognition systems (VRS) allow speech to be converted both directly into text-which appears on the screen of a computer-and to direct equipment to perform specific functions. Suggested applications are many and varied, including increasing efficiency in the reporting of radiographs, allowing directed surgery and enabling individuals with upper limb disorders (ULDs) who cannot use other input devices, such as keyboards and mice, to carry out word processing and other activities. Aim This paper describes four cases of vocal dysfunction related to the use of such software, which have been identified from the database of the Voice and Speech Laboratory of the Massachusetts Eye and Ear infirmary (MEEI). The database was searched using key words 'voice recognition' and four cases were identified from a total of 4800. In all cases, the VRS was supplied to assist individuals with ULDs who could not use conventional input devices. Case reports illustrate time of onset and symptoms experienced. The cases illustrate the need for risk assessment and consideration of the ergonomic aspects of voice use prior to such adaptations being used, particularly in those who already experience work-related ULDs.
Korucu, M Kemal; Gedik, Kadir; Weber, Roland; Karademir, Aykan; Kurt-Karakus, Perihan Binnur
2015-10-01
Perfluorooctane sulfonic acid (PFOS) and related substances have been listed as persistent organic pollutants (POPs) in the Stockholm Convention. Countries which have ratified the Convention need to take appropriate actions to control PFOS use and release. This study compiles and enhances the findings of the first inventory of PFOS and related substances use in Turkey conducted within the frame of the Stockholm Convention National Implementation Plan (NIP) update. The specific Harmonized Commodity Description and Coding System (Harmonized System (HS)) codes of imported and exported goods that possibly contain PFOS and 165 of Chemical Abstracts Service (CAS) numbers of PFOS-related substances were assessed for acquiring information from customs and other authorities. However, with the current approaches available, no useful information could be compiled since HS codes are not specific enough and CAS numbers are not used by customs. Furthermore, the cut-off volume in chemical databases in Turkey and the reporting limit in the HS system (0.1 %) are too high for controlling PFOS. The attempt of modeling imported volumes by a Monte Carlo simulation did not also result in a satisfactory estimate, giving an upper-bound estimate above the global production volumes. The replies to questionnaires were not satisfactory, highlighting that an elaborated approach is needed in the communication with potentially PFOS-using stakeholders. The experience of the challenges of gathering information on PFOS in articles and products revealed the gaps of controlling highly hazardous substances in products and articles and the need of improvements.
Development of adolescents' peer crowd identification in relation to changes in problem behaviors.
Doornwaard, Suzan M; Branje, Susan; Meeus, Wim H J; ter Bogt, Tom F M
2012-09-01
This 5-wave longitudinal study, which included 1,313 Dutch adolescents, examined the development of peer crowd identification in relation to changes in problem behaviors. Adolescents from 2 age cohorts annually reported their identification with 7 peer crowds and their levels of internalizing and externalizing problem behaviors. Univariate latent growth curve analyses revealed declines (i.e., "Hip Hoppers" and "Metal Heads") or declines followed by stabilization (i.e., "Nonconformists") in identification with nonconventional crowds and increases (i.e., "Elites" and "Brains") or declines followed by stabilization (i.e., "Normals" and "Jocks") in identification with conventional crowds. Multivariate latent growth curve analyses indicated that stronger and more persistent identifications with nonconventional crowds were generally associated with more problem behaviors throughout adolescence. In contrast, stronger and more persistent identifications with conventional crowds were generally associated with fewer problem behaviors throughout adolescence with the notable exception of Brains, who showed a mixed pattern. Though characterized by fewer externalizing problems, this group did report more anxiety problems. These findings and their implications are discussed. PsycINFO Database Record (c) 2012 APA, all rights reserved.
Longoni, Juliano N.; Lopes, Beatriz M.; Freires, Irlan A.; Dutra, Kamile L.; Franco, Ademir; Paranhos, Luiz R.
2017-01-01
Objective: The present study aimed to review the literature systematically and assess comparatively whether self-ligating metallic brackets accumulate less Streptococcus mutans biofilm than conventional metallic brackets. Material and methods: The systematic search was performed following PRISMA guidelines and registration in PROSPERO. Seven electronic databases (Google Scholar, LILACS, Open Grey, PubMed, SciELO, ScienceDirect, and Scopus) were consulted until April 2016, with no restriction of language and time of publication. Only randomized clinical studies verifying S. mutans colonization in metallic brackets (self-ligating and conventional) were included. All steps were performed independently by two operators. Results: The search resulted in 546 records obtained from the electronic databases. Additionally, 216 references obtained from the manual search of eligible articles were assessed. Finally, a total of 5 studies were included in the qualitative synthesis. In 1 study, the total bacterial count was not different among self-ligating and conventional brackets, whereas in 2 studies the amount was lower for self-ligating brackets. Regarding the specific count of S. mutans, 2 studies showed less accumulation in self-ligating than in conventional brackets. Conclusion: Based on the limited evidence, self-ligating metallic brackets accumulate less S. mutans than conventional ones. However, these findings must be interpreted in conjunction with particularities individual for each patient – such as hygiene and dietary habits, which are components of the multifactorial environment that enables S. Mutans to proliferate and keep retained in the oral cavity. PMID:29279684
Adams, Denise; Wu, Taixiang; Yang, Xunzhe; Tai, Shusheng; Vohra, Sunita
2009-10-07
Chronic fatigue is increasingly common. Conventional medical care is limited in treating chronic fatigue, leading some patients to use traditional Chinese medicine therapies, including herbal medicine. To assess the effectiveness of traditional Chinese medicine herbal products in treating idiopathic chronic fatigue and chronic fatigue syndrome. The following databases were searched for terms related to traditional Chinese medicine, chronic fatigue, and clinical trials: CCDAN Controlled Trials Register (July 2009), MEDLINE (1966-2008), EMBASE (1980-2008), AMED (1985-2008), CINAHL (1982-2008), PSYCHINFO (1985-2008), CENTRAL (Issue 2 2008), the Chalmers Research Group PedCAM Database (2004), VIP Information (1989-2008), CNKI (1976-2008), OCLC Proceedings First (1992-2008), Conference Papers Index (1982-2008), and Dissertation Abstracts (1980-2008). Reference lists of included studies and review articles were examined and experts in the field were contacted for knowledge of additional studies. Selection criteria included published or unpublished randomized controlled trials (RCTs) of participants diagnosed with idiopathic chronic fatigue or chronic fatigue syndrome comparing traditional Chinese medicinal herbs with placebo, conventional standard of care (SOC), or no treatment/wait lists. The outcome of interest was fatigue. 13 databases were searched for RCTs investigating TCM herbal products for the treatment of chronic fatigue. Over 2400 references were located. Studies were screened and assessed for inclusion criteria by two authors. No studies that met all inclusion criteria were identified. Although studies examining the use of TCM herbal products for chronic fatigue were located, methodologic limitations resulted in the exclusion of all studies. Of note, many of the studies labelled as RCTs and conducted in China did not utilize rigorous randomization procedures. Improvements in methodology in future studies is required for meaningful synthesis of data.
Arnold, Sina; Koletsi, Despina; Patcas, Raphael; Eliades, Theodore
2016-11-01
This systematic review aimed to critically appraise the evidence regarding the effect of bracket ligation type on the periodontal conditions of adolescents undergoing orthodontic treatment. Search terms included randomized controlled trial (RCTs), controlled clinical trials, ligation, bracket, periodontal, inflammation. Risk of bias assessment was made using the Cochrane risk of bias tool and the quality of evidence was assessed with GRADE. Electronic Database search of published and unpublished literature was performed without language restriction in May 25, 2016 (MEDLINE via Pubmed, Cochrane Central Register of Controlled Trials, Cochrane Database of Systematic Reviews, Clinical Trials.gov and National Research Register). Of 140 articles initially retrieved, 8 were eligible for inclusion in the systematic review, while 4 RCTs with unclear risk of bias were included in the quantitative synthesis, all comparing self-ligating to conventional steel ligated brackets. Random effects meta-analyses were implemented. At 4-6 weeks after bracket placement there was no evidence to support the use of either type of bracket for achieving improved plaque- (PI) and gingival index (GI). At 3-6 months, there was scarce evidence of greater PI increase for conventional brackets. GI and pocket depth pooled estimates did not reveal significant differences between the two systems. The quality of the evidence was moderate according to GRADE for all outcomes. Overall, non-significant differences on the periodontal status of adolescents undergoing orthodontic treatment with either conventional or self-ligating brackets were detected. The periodontal status of adolescents undergoing orthodontic treatment is of considerable importance. The synthesis of the available evidence on oral hygiene related factors will provide insights to best clinical practice during the course of orthodontic treatment. Copyright © 2016 Elsevier Ltd. All rights reserved.
Programmed database system at the Chang Gung Craniofacial Center: part II--digitizing photographs.
Chuang, Shiow-Shuh; Hung, Kai-Fong; de Villa, Glenda H; Chen, Philip K T; Lo, Lun-Jou; Chang, Sophia C N; Yu, Chung-Chih; Chen, Yu-Ray
2003-07-01
The archival tools used for digital images in advertising are not to fulfill the clinic requisition and are just beginning to develop. The storage of a large amount of conventional photographic slides needs a lot of space and special conditions. In spite of special precautions, degradation of the slides still occurs. The most common degradation is the appearance of fungus flecks. With the recent advances in digital technology, it is now possible to store voluminous numbers of photographs on a computer hard drive and keep them for a long time. A self-programmed interface has been developed to integrate database and image browser system that can build and locate needed files archive in a matter of seconds with the click of a button. This system requires hardware and software were market provided. There are 25,200 patients recorded in the database that involve 24,331 procedures. In the image files, there are 6,384 patients with 88,366 digital pictures files. From 1999 through 2002, NT400,000 dollars have been saved using the new system. Photographs can be managed with the integrating Database and Browse software for database archiving. This allows labeling of the individual photographs with demographic information and browsing. Digitized images are not only more efficient and economical than the conventional slide images, but they also facilitate clinical studies.
Imaging of Dentoalveolar and Jaw Trauma.
Alimohammadi, Reyhaneh
2018-01-01
Prior to the invention of cone beam CT, use of 2-D plain film imaging for trauma involving the mandible was common practice, with CT imaging opted for in cases of more complex situations, especially in the maxilla and related structures. Cone beam CT has emerged as a reasonable and reliable alternative considering radiation dosage, image quality, and comfort for the patient. This article presents an overview of the patterns of dental and maxillofacial fractures using conventional and advanced imaging techniques illustrated with multiple clinical examples selected from the author's oral and maxillofacial radiology practice database. Published by Elsevier Inc.
Zhao, Lei; Guo, Yi; Wang, Wei; Yan, Li-juan
2011-08-01
To evaluate the effectiveness of acupuncture as a treatment for neurovascular headache and to analyze the current situation related to acupuncture treatment. PubMed database (1966-2010), EMBASE database (1986-2010), Cochrane Library (Issue 1, 2010), Chinese Biomedical Literature Database (1979-2010), China HowNet Knowledge Database (1979-2010), VIP Journals Database (1989-2010), and Wanfang database (1998-2010) were retrieved. Randomized or quasi-randomized controlled studies were included. The priority was given to high-quality randomized, controlled trials. Statistical outcome indicators were measured using RevMan 5.0.20 software. A total of 16 articles and 1 535 cases were included. Meta-analysis showed a significant difference between the acupuncture therapy and Western medicine therapy [combined RR (random efficacy model)=1.46, 95% CI (1.21, 1.75), Z=3.96, P<0.0001], indicating an obvious superior effect of the acupuncture therapy; significant difference also existed between the comprehensive acupuncture therapy and acupuncture therapy alone [combined RR (fixed efficacy model)=3.35, 95% CI (1.92, 5.82), Z=4.28, P<0.0001], indicating that acupuncture combined with other therapies, such as points injection, scalp acupuncture, auricular acupuncture, etc., were superior to the conventional body acupuncture therapy alone. The inclusion of limited clinical studies had verified the efficacy of acupuncture in the treatment of neurovascular headache. Although acupuncture or its combined therapies provides certain advantages, most clinical studies are of small sample sizes. Large sample size, randomized, controlled trials are needed in the future for more definitive results.
Embracing the quantum limit in silicon computing.
Morton, John J L; McCamey, Dane R; Eriksson, Mark A; Lyon, Stephen A
2011-11-16
Quantum computers hold the promise of massive performance enhancements across a range of applications, from cryptography and databases to revolutionary scientific simulation tools. Such computers would make use of the same quantum mechanical phenomena that pose limitations on the continued shrinking of conventional information processing devices. Many of the key requirements for quantum computing differ markedly from those of conventional computers. However, silicon, which plays a central part in conventional information processing, has many properties that make it a superb platform around which to build a quantum computer. © 2011 Macmillan Publishers Limited. All rights reserved
SIDS-toADF File Mapping Manual
NASA Technical Reports Server (NTRS)
McCarthy, Douglas; Smith, Matthew; Poirier, Diane; Smith, Charles A. (Technical Monitor)
2002-01-01
The "CFD General Notation System" (CGNS) consists of a collection of conventions, and conforming software, for the storage and retrieval of Computational Fluid Dynamics (CFD) data. It facilitates the exchange of data between sites and applications, and helps stabilize the archiving of aerodynamic data. This effort was initiated in order to streamline the procedures in exchanging data and software between NASA and its customers, but the goal is to develop CGNS into a National Standard for the exchange of aerodynamic data. The CGNS development team is comprised of members from Boeing Commercial Airplane Group, NASA-Ames, NASA-Langley, NASA-Lewis, McDonnell-Douglas Corporation (now Boeing-St. Louis), Air Force-Wright Lab., and ICEM-CFD Engineering. The elements of CGNS address all activities associated with the storage of data on external media and its movement to and from application programs. These elements include: 1) The Advanced Data Format (ADF) Database manager, consisting of both a file format specification and its I/O software, which handles the actual reading and writing of data from and to external storage media; 2) The Standard Interface Data Structures (SIDS), which specify the intellectual content of CFD data and the conventions governing naming and terminology; 3) The SIDS-to-ADF File Mapping conventions, which specify the exact location where the CFD data defined by the SIDS is to be stored within the ADF file(s); and 4) The CGNS Mid-level Library, which provides CFD-knowledgeable routines suitable for direct installation into application codes. The SIDS-toADF File Mapping Manual specifies the exact manner in which, under CGNS conventions, CFD data structures (the SIDS) are to be stored in (i.e., mapped onto) the file structure provided by the database manager (ADF). The result is a conforming CGNS database. Adherence to the mapping conventions guarantees uniform meaning and location of CFD data within ADF files, and thereby allows the construction of universal software to read and write the data.
Li, Yang; Yang, Jianyi
2017-04-24
The prediction of protein-ligand binding affinity has recently been improved remarkably by machine-learning-based scoring functions. For example, using a set of simple descriptors representing the atomic distance counts, the RF-Score improves the Pearson correlation coefficient to about 0.8 on the core set of the PDBbind 2007 database, which is significantly higher than the performance of any conventional scoring function on the same benchmark. A few studies have been made to discuss the performance of machine-learning-based methods, but the reason for this improvement remains unclear. In this study, by systemically controlling the structural and sequence similarity between the training and test proteins of the PDBbind benchmark, we demonstrate that protein structural and sequence similarity makes a significant impact on machine-learning-based methods. After removal of training proteins that are highly similar to the test proteins identified by structure alignment and sequence alignment, machine-learning-based methods trained on the new training sets do not outperform the conventional scoring functions any more. On the contrary, the performance of conventional functions like X-Score is relatively stable no matter what training data are used to fit the weights of its energy terms.
Bonini-Rocha, Ana Clara; de Andrade, Anderson Lúcio Souza; Moraes, André Marques; Gomide Matheus, Liana Barbaresco; Diniz, Leonardo Rios; Martins, Wagner Rodrigues
2018-04-01
Several interventions have been proposed to rehabilitate patients with neurologic dysfunctions due to stroke. However, the effectiveness of circuit-based exercises according to its actual definition, ie, an overall program to improve strength, stamina, balance or functioning, was not provided. To examine the effectiveness of circuit-based exercise in the treatment of people affected by stroke. A search through PubMed, Embase, Cochrane Library, and Physiotherapy Evidence Database databases was performed to identify controlled clinical trials without language or date restriction. The overall mean difference with 95% confidence interval was calculated for all outcomes. Two independent reviewers assessed the risk of bias. Eleven studies met the inclusion criteria, and 8 presented suitable data to perform a meta-analysis. Quantitative analysis showed that circuit-based exercise was more effective than conventional intervention on gait speed (mean difference of 0.11 m/s) and circuit-based exercise was not significantly more effective than conventional intervention on balance and functional mobility. Our results demonstrated that circuit-based exercise presents better effects on gait when compared with conventional intervention and that its effects on balance and functional mobility were not better than conventional interventions. I. Copyright © 2018 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.
Scientometrics of drug discovery efforts: pain-related molecular targets.
Kissin, Igor
2015-01-01
The aim of this study was to make a scientometric assessment of drug discovery efforts centered on pain-related molecular targets. The following scientometric indices were used: the popularity index, representing the share of articles (or patents) on a specific topic among all articles (or patents) on pain over the same 5-year period; the index of change, representing the change in the number of articles (or patents) on a topic from one 5-year period to the next; the index of expectations, representing the ratio of the number of all types of articles on a topic in the top 20 journals relative to the number of articles in all (>5,000) biomedical journals covered by PubMed over a 5-year period; the total number of articles representing Phase I-III trials of investigational drugs over a 5-year period; and the trial balance index, a ratio of Phase I-II publications to Phase III publications. Articles (PubMed database) and patents (US Patent and Trademark Office database) on 17 topics related to pain mechanisms were assessed during six 5-year periods from 1984 to 2013. During the most recent 5-year period (2009-2013), seven of 17 topics have demonstrated high research activity (purinergic receptors, serotonin, transient receptor potential channels, cytokines, gamma aminobutyric acid, glutamate, and protein kinases). However, even with these seven topics, the index of expectations decreased or did not change compared with the 2004-2008 period. In addition, publications representing Phase I-III trials of investigational drugs (2009-2013) did not indicate great enthusiasm on the part of the pharmaceutical industry regarding drugs specifically designed for treatment of pain. A promising development related to the new tool of molecular targeting, ie, monoclonal antibodies, for pain treatment has not yet resulted in real success. This approach has not yet demonstrated clinical effectiveness (at least with nerve growth factor) much beyond conventional analgesics, when its potential cost is more than an order of magnitude higher than that of conventional treatments. This scientometric assessment demonstrated a lack of real breakthrough developments.
Scientometrics of drug discovery efforts: pain-related molecular targets
Kissin, Igor
2015-01-01
The aim of this study was to make a scientometric assessment of drug discovery efforts centered on pain-related molecular targets. The following scientometric indices were used: the popularity index, representing the share of articles (or patents) on a specific topic among all articles (or patents) on pain over the same 5-year period; the index of change, representing the change in the number of articles (or patents) on a topic from one 5-year period to the next; the index of expectations, representing the ratio of the number of all types of articles on a topic in the top 20 journals relative to the number of articles in all (>5,000) biomedical journals covered by PubMed over a 5-year period; the total number of articles representing Phase I–III trials of investigational drugs over a 5-year period; and the trial balance index, a ratio of Phase I–II publications to Phase III publications. Articles (PubMed database) and patents (US Patent and Trademark Office database) on 17 topics related to pain mechanisms were assessed during six 5-year periods from 1984 to 2013. During the most recent 5-year period (2009–2013), seven of 17 topics have demonstrated high research activity (purinergic receptors, serotonin, transient receptor potential channels, cytokines, gamma aminobutyric acid, glutamate, and protein kinases). However, even with these seven topics, the index of expectations decreased or did not change compared with the 2004–2008 period. In addition, publications representing Phase I–III trials of investigational drugs (2009–2013) did not indicate great enthusiasm on the part of the pharmaceutical industry regarding drugs specifically designed for treatment of pain. A promising development related to the new tool of molecular targeting, ie, monoclonal antibodies, for pain treatment has not yet resulted in real success. This approach has not yet demonstrated clinical effectiveness (at least with nerve growth factor) much beyond conventional analgesics, when its potential cost is more than an order of magnitude higher than that of conventional treatments. This scientometric assessment demonstrated a lack of real breakthrough developments. PMID:26170624
Bogdán, István A.; Rivers, Jenny; Beynon, Robert J.; Coca, Daniel
2008-01-01
Motivation: Peptide mass fingerprinting (PMF) is a method for protein identification in which a protein is fragmented by a defined cleavage protocol (usually proteolysis with trypsin), and the masses of these products constitute a ‘fingerprint’ that can be searched against theoretical fingerprints of all known proteins. In the first stage of PMF, the raw mass spectrometric data are processed to generate a peptide mass list. In the second stage this protein fingerprint is used to search a database of known proteins for the best protein match. Although current software solutions can typically deliver a match in a relatively short time, a system that can find a match in real time could change the way in which PMF is deployed and presented. In a paper published earlier we presented a hardware design of a raw mass spectra processor that, when implemented in Field Programmable Gate Array (FPGA) hardware, achieves almost 170-fold speed gain relative to a conventional software implementation running on a dual processor server. In this article we present a complementary hardware realization of a parallel database search engine that, when running on a Xilinx Virtex 2 FPGA at 100 MHz, delivers 1800-fold speed-up compared with an equivalent C software routine, running on a 3.06 GHz Xeon workstation. The inherent scalability of the design means that processing speed can be multiplied by deploying the design on multiple FPGAs. The database search processor and the mass spectra processor, running on a reconfigurable computing platform, provide a complete real-time PMF protein identification solution. Contact: d.coca@sheffield.ac.uk PMID:18453553
Yang, Wei; Xie, Yanming; Zhuang, Yan
2011-10-01
There are many kinds of Chinese traditional patent medicine used in clinical practice and many adverse events have been reported by clinical professionals. Chinese patent medicine's safety problems are the most concerned by patients and physicians. At present, many researchers have studied re-evaluation methods about post marketing Chinese medicine safety inside and outside China. However, it is rare that using data from hospital information system (HIS) to re-evaluating post marketing Chinese traditional patent medicine safety problems. HIS database in real world is a good resource with rich information to research medicine safety. This study planed to analyze HIS data selected from ten top general hospitals in Beijing, formed a large HIS database in real world with a capacity of 1 000 000 cases in total after a series of data cleaning and integrating procedures. This study could be a new project that using information to evaluate traditional Chinese medicine safety based on HIS database. A clear protocol has been completed as for the first step for the whole study. The protocol is as follows. First of all, separate each of the Chinese traditional patent medicines existing in the total HIS database as a single database. Secondly, select some related laboratory tests indexes as the safety evaluating outcomes, such as routine blood, routine urine, feces routine, conventional coagulation, liver function, kidney function and other tests. Thirdly, use the data mining method to analyze those selected safety outcomes which had abnormal change before and after using Chinese patent medicines. Finally, judge the relationship between those abnormal changing and Chinese patent medicine. We hope this method could imply useful information to Chinese medicine researchers interested in safety evaluation of traditional Chinese medicine.
An evaluation of computer-aided disproportionality analysis for post-marketing signal detection.
Lehman, H P; Chen, J; Gould, A L; Kassekert, R; Beninger, P R; Carney, R; Goldberg, M; Goss, M A; Kidos, K; Sharrar, R G; Shields, K; Sweet, A; Wiholm, B E; Honig, P K
2007-08-01
To understand the value of computer-aided disproportionality analysis (DA) in relation to current pharmacovigilance signal detection methods, four products were retrospectively evaluated by applying an empirical Bayes method to Merck's post-marketing safety database. Findings were compared with the prior detection of labeled post-marketing adverse events. Disproportionality ratios (empirical Bayes geometric mean lower 95% bounds for the posterior distribution (EBGM05)) were generated for product-event pairs. Overall (1993-2004 data, EBGM05> or =2, individual terms) results of signal detection using DA compared to standard methods were sensitivity, 31.1%; specificity, 95.3%; and positive predictive value, 19.9%. Using groupings of synonymous labeled terms, sensitivity improved (40.9%). More of the adverse events detected by both methods were detected earlier using DA and grouped (versus individual) terms. With 1939-2004 data, diagnostic properties were similar to those from 1993 to 2004. DA methods using Merck's safety database demonstrate sufficient sensitivity and specificity to be considered for use as an adjunct to conventional signal detection methods.
NASA Astrophysics Data System (ADS)
Benedict, K. K.; Scott, S.
2013-12-01
While there has been a convergence towards a limited number of standards for representing knowledge (metadata) about geospatial (and other) data objects and collections, there exist a variety of community conventions around the specific use of those standards and within specific data discovery and access systems. This combination of limited (but multiple) standards and conventions creates a challenge for system developers that aspire to participate in multiple data infrastrucutres, each of which may use a different combination of standards and conventions. While Extensible Markup Language (XML) is a shared standard for encoding most metadata, traditional direct XML transformations (XSLT) from one standard to another often result in an imperfect transfer of information due to incomplete mapping from one standard's content model to another. This paper presents the work at the University of New Mexico's Earth Data Analysis Center (EDAC) in which a unified data and metadata management system has been developed in support of the storage, discovery and access of heterogeneous data products. This system, the Geographic Storage, Transformation and Retrieval Engine (GSTORE) platform has adopted a polyglot database model in which a combination of relational and document-based databases are used to store both data and metadata, with some metadata stored in a custom XML schema designed as a superset of the requirements for multiple target metadata standards: ISO 19115-2/19139/19110/19119, FGCD CSDGM (both with and without remote sensing extensions) and Dublin Core. Metadata stored within this schema is complemented by additional service, format and publisher information that is dynamically "injected" into produced metadata documents when they are requested from the system. While mapping from the underlying common metadata schema is relatively straightforward, the generation of valid metadata within each target standard is necessary but not sufficient for integration into multiple data infrastructures, as has been demonstrated through EDAC's testing and deployment of metadata into multiple external systems: Data.Gov, the GEOSS Registry, the DataONE network, the DSpace based institutional repository at UNM and semantic mediation systems developed as part of the NASA ACCESS ELSeWEB project. Each of these systems requires valid metadata as a first step, but to make most effective use of the delivered metadata each also has a set of conventions that are specific to the system. This presentation will provide an overview of the underlying metadata management model, the processes and web services that have been developed to automatically generate metadata in a variety of standard formats and highlight some of the specific modifications made to the output metadata content to support the different conventions used by the multiple metadata integration endpoints.
Cuevas, Francisco Julián; Moreno-Rojas, José Manuel; Ruiz-Moreno, María José
2017-04-15
A targeted approach using HS-SPME-GC-MS was performed to compare flavour compounds of 'Navelina' and 'Salustiana' orange cultivars from organic and conventional management systems. Both varieties of conventional oranges showed higher content of ester compounds. On the other hand, higher content of some compounds related with the geranyl-diphosphate pathway (neryl and geranyl acetates) and some terpenoids were found in the organic samples. Furthermore, the partial least square discriminant analysis (PLS-DA) achieved an effective classification for oranges based on the farming system using their volatile profiles (90 and 100% correct classification). To our knowledge, it is the first time that a comparative study dealing with farming systems and orange aroma profile has been performed. These new insights, taking into account local databases, cultivars and advanced analytical tools, highlight the potential of volatile composition for organic orange discrimination. Copyright © 2016 Elsevier Ltd. All rights reserved.
Point spread function engineering for iris recognition system design.
Ashok, Amit; Neifeld, Mark A
2010-04-01
Undersampling in the detector array degrades the performance of iris-recognition imaging systems. We find that an undersampling of 8 x 8 reduces the iris-recognition performance by nearly a factor of 4 (on CASIA iris database), as measured by the false rejection ratio (FRR) metric. We employ optical point spread function (PSF) engineering via a Zernike phase mask in conjunction with multiple subpixel shifted image measurements (frames) to mitigate the effect of undersampling. A task-specific optimization framework is used to engineer the optical PSF and optimize the postprocessing parameters to minimize the FRR. The optimized Zernike phase enhanced lens (ZPEL) imager design with one frame yields an improvement of nearly 33% relative to a thin observation module by bounded optics (TOMBO) imager with one frame. With four frames the optimized ZPEL imager achieves a FRR equal to that of the conventional imager without undersampling. Further, the ZPEL imager design using 16 frames yields a FRR that is actually 15% lower than that obtained with the conventional imager without undersampling.
Uddin, Md Jamal; Groenwold, Rolf H H; de Boer, Anthonius; Gardarsdottir, Helga; Martin, Elisa; Candore, Gianmario; Belitser, Svetlana V; Hoes, Arno W; Roes, Kit C B; Klungel, Olaf H
2016-03-01
Instrumental variable (IV) analysis can control for unmeasured confounding, yet it has not been widely used in pharmacoepidemiology. We aimed to assess the performance of IV analysis using different IVs in multiple databases in a study of antidepressant use and hip fracture. Information on adults with at least one prescription of a selective serotonin reuptake inhibitor (SSRI) or tricyclic antidepressant (TCA) during 2001-2009 was extracted from the THIN (UK), BIFAP (Spain), and Mondriaan (Netherlands) databases. IVs were created using the proportion of SSRI prescriptions per practice or using the one, five, or ten previous prescriptions by a physician. Data were analysed using conventional Cox regression and two-stage IV models. In the conventional analysis, SSRI (vs. TCA) was associated with an increased risk of hip fracture, which was consistently found across databases: the adjusted hazard ratio (HR) was approximately 1.35 for time-fixed and 1.50 to 2.49 for time-varying SSRI use, while the IV analysis based on the IVs that appeared to satisfy the IV assumptions showed conflicting results, e.g. the adjusted HRs ranged from 0.55 to 2.75 for time-fixed exposure. IVs for time-varying exposure violated at least one IV assumption and were therefore invalid. This multiple database study shows that the performance of IV analysis varied across the databases for time-fixed and time-varying exposures and strongly depends on the definition of IVs. It remains challenging to obtain valid IVs in pharmacoepidemiological studies, particularly for time-varying exposure, and IV analysis should therefore be interpreted cautiously. Copyright © 2016 John Wiley & Sons, Ltd.
Ng, Jeremy Y; Boon, Heather S; Thompson, Alison K; Whitehead, Cynthia R
2016-05-20
Medical pluralism has flourished throughout the Western world in spite of efforts to legitimize Western biomedical healthcare as "conventional medicine" and thereby relegate all non-physician-related forms of healthcare to an "other" category. These "other" practitioners have been referred to as "unconventional", "alternative" and "complementary", among other terms throughout the past half century. This study investigates the discourses surrounding the changes in the terms, and their meanings, used to describe unconventional medicine in North America. Terms identified by the literature as synonymous to unconventional medicine were searched using the Scopus database. A textual analysis following the method described by Kripendorff 2013 was subsequently performed on the five most highly-cited unconventional medicine-related peer-reviewed literature published between 1970 and 2013. Five commonly-used, unconventional medicine-related terms were identified. Authors using "complementary and alternative", "complementary", "alternative", or "unconventional" tended to define them by what they are not (e.g., therapies not taught/used in conventional medicine, therapy demands not met by conventional medicine, and therapies that lack research on safety, efficacy and effectiveness). Authors defined "integrated/integrative" medicine by what it is (e.g., a new model of healthcare, the combining of both conventional and unconventional therapies, accounting for the whole person, and preventative maintenance of health). Authors who defined terms by "what is not" stressed that the purpose of conducting research in this area was solely to create knowledge. Comparatively, authors who defined terms by "what is" sought to advocate for the evidence-based combination of unconventional and conventional medicines. Both author groups used scientific rhetoric to define unconventional medical practices. This emergence of two groups of authors who used two different sets of terms to refer to the concept of "unconventional medicine" may explain why some journals, practitioner associations and research/practice centres may choose to use both "what is not" and "what is" terms in their discourse to attract interest from both groups. Since each of the two groups of terms (and authors who use them) has different meanings and goals, the evolution of this discourse will continue to be an interesting phenomenon to explore in the future.
Wang, Shi-Jun; Xu, Juan; Gong, Dan-Dan; Man, Chang-Feng; Fan, Yu
2013-10-14
To assess the effectiveness of oral Chinese herbal medicine (CHM) in relieving pain secondary to bone metastases in patients. The searched electronic literature databases included both English and Chinese articles published in the MEDLINE, EMBASE, Wanfang database and China National Knowledge Infrastructure (up to December 2012). The studies included randomized controlled trials (RCTs) comparing CHM plus conventional treatment with conventional treatment alone for patients with pain secondary to bone metastases. The outcomes were the odds ratio (OR) with 95% confidence intervals (CI) for the pain-relief rate and adverse events. A total of 16 RCTs involving 1,008 patients were identified and analyzed. All of the included RCTs were associated with a moderate to high risk of bias. In the metaanalysis, CHM plus conventional treatment increased the pain-relief rate compared with the conventional treatment alone (OR, 2.59; 95% CI 1.95 to 3.45). In subgroup analysis, the pooled OR of the pain-relief rate of CHM plus conventional treatment compared with conventional treatment was 3.11 (95% CI 2.01 to 4.79) for CHM plus bisphosphonates, 2.24 (95% CI 1.33 to 3.78) for CHM plus analgesics, 2.28 (95% CI 1.09 to 4.79) for CHM plus radiotherapy, and 2.22 (95% CI 0.95 to 5.15) for CHM plus analgesics and bisphosphonates. The adverse events included nausea, vomiting, dizziness, fever, and constipation. No serious adverse events were reported in any of the included studies. CHM interventions appear to have beneficial effects on pain secondary to bone metastases in patients. However, published efficacy trials are small in size to draw any firm conclusions.
Genealogical databases as a tool for extending follow-up in clinical reviews.
Ho, Thuy-Van; Chowdhury, Naweed; Kandl, Christopher; Hoover, Cindy; Robinson, Ann; Hoover, Larry
2016-08-01
Long-term follow-up in clinical reviews often presents significant difficulty with conventional medical records alone. Publicly accessible genealogical databases such as Ancestry.com provide another avenue for obtaining extended follow-up and added outcome information. No previous studies have described the use of genealogical databases in the follow-up of individual patients. Ancestry.com, the largest genealogical database in the United States, houses extensive demographic data on an increasing number of Americans. In a recent retrospective review of esthesioneuroblastoma patients treated at our institution, we used this resource to ascertain the outcomes of patients otherwise lost to follow-up. Additional information such as quality of life and supplemental treatments the patient may have received at home was obtained through direct contact with living relatives. The use of Ancestry.com resulted in a 25% increase (20 months) in follow-up duration as well as incorporation of an additional 7 patients in our study (18%) who would otherwise not have had adequate hospital chart data for inclusion. Many patients within this subset had more advanced disease or were remotely located from our institution. As such, exclusion of these outliers can impact the quality of subsequent outcome analysis. Online genealogical databases provide a unique resource of public information that is acceptable to institutional review boards for patient follow-up in clinical reviews. Utilization of Ancestry.com data led to significant improvement in follow-up duration and increased the number of patients with sufficient data that could be included in our retrospective study. © 2016 ARS-AAOA, LLC.
Meta-Analysis of Massage Therapy on Cancer Pain.
Lee, Sook-Hyun; Kim, Jong-Yeop; Yeo, Sujung; Kim, Sung-Hoon; Lim, Sabina
2015-07-01
Cancer pain is the most common complaint among patients with cancer. Conventional treatment does not always relieve cancer pain satisfactorily. Therefore, many patients with cancer have turned to complementary therapies to help them with their physical, emotional, and spiritual well-being. Massage therapy is increasingly used for symptom relief in patients with cancer. The current study aimed to investigate by meta-analysis the effects of massage therapy for cancer patients experiencing pain. Nine electronic databases were systematically searched for studies published through August 2013 in English, Chinese, and Korean. Methodological quality was assessed using the Physiotherapy Evidence Database (PEDro) and Cochrane risk-of-bias scales. Twelve studies, including 559 participants, were used in the meta-analysis. In 9 high-quality studies based on the PEDro scale (standardized mean difference, -1.24; 95% confidence interval, -1.72 to -0.75), we observed reduction in cancer pain after massage. Massage therapy significantly reduced cancer pain compared with no massage treatment or conventional care (standardized mean difference, -1.25; 95% confidence interval, -1.63 to -0.87). Our results indicate that massage is effective for the relief of cancer pain, especially for surgery-related pain. Among the various types of massage, foot reflexology appeared to be more effective than body or aroma massage. Our meta-analysis indicated a beneficial effect of massage for relief of cancer pain. Further well-designed, large studies with longer follow-up periods are needed to be able to draw firmer conclusions regarding the effectiveness. © The Author(s) 2015.
Cowan, Nelson
2015-07-01
Miller's (1956) article about storage capacity limits, "The Magical Number Seven Plus or Minus Two . . .," is one of the best-known articles in psychology. Though influential in several ways, for about 40 years it was oddly followed by rather little research on the numerical limit of capacity in working memory, or on the relation between 3 potentially related phenomena that Miller described. Given that the article was written in a humorous tone and was framed around a tongue-in-cheek premise (persecution by an integer), I argue that it may have inadvertently stymied progress on these topics as researchers attempted to avoid ridicule. This commentary relates some correspondence with Miller on his article and concludes with a call to avoid self-censorship of our less conventional ideas. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
Dietary assessment and self-monitoring with nutrition applications for mobile devices.
Lieffers, Jessica R L; Hanning, Rhona M
2012-01-01
Nutrition applications for mobile devices (e.g., personal digital assistants, smartphones) are becoming increasingly accessible and can assist with the difficult task of intake recording for dietary assessment and self-monitoring. This review is a compilation and discussion of research on this tool for dietary intake documentation in healthy populations and those trying to lose weight. The purpose is to compare this tool with conventional methods (e.g., 24-hour recall interviews, paper-based food records). Research databases were searched from January 2000 to April 2011, with the following criteria: healthy or weight loss populations, use of a mobile device nutrition application, and inclusion of at least one of three measures, which were the ability to capture dietary intake in comparison with conventional methods, dietary self-monitoring adherence, and changes in anthropometrics and/or dietary intake. Eighteen studies are discussed. Two application categories were identified: those with which users select food and portion size from databases and those with which users photograph their food. Overall, positive feedback was reported with applications. Both application types had moderate to good correlations for assessing energy and nutrient intakes in comparison with conventional methods. For self-monitoring, applications versus conventional techniques (often paper records) frequently resulted in better self-monitoring adherence, and changes in dietary intake and/or anthropometrics. Nutrition applications for mobile devices have an exciting potential for use in dietetic practice.
Jo, Junyoung; Leem, Jungtae; Lee, Jin Moo; Park, Kyoung Sun
2017-06-15
Primary dysmenorrhoea is menstrual pain without pelvic pathology and is the most common gynaecological condition in women. Xuefu Zhuyudecoction (XZD) or Hyeolbuchukeo-tang, a traditional herbal formula, has been used as a treatment for primary dysmenorrhoea. The purpose of this study is to assess the current published evidence regarding XZD as treatment for primary dysmenorrhoea. The following databases will be searched from their inception until April 2017: MEDLINE (via PubMed), Allied and Complementary Medicine Database (AMED), EMBASE, The Cochrane Library, six Korean medical databases (Korean Studies Information Service System, DBPia, Oriental Medicine Advanced Searching Integrated System, Research Information Service System, Korea Med and the Korean Traditional Knowledge Portal), three Chinese medical databases (China National Knowledge Infrastructure (CNKI), Wan Fang Database and Chinese Scientific Journals Database (VIP)) and one Japanese medical database (CiNii). Randomised clinical trials (RCTs) that will be included in this systematic review comprise those that used XZD or modified XZD. The control groups in the RCTs include no treatment, placebo, conventional medication or other treatments. Trials testing XZD as an adjunct to other treatments and studies where the control group received the same treatment as the intervention group will be also included. Data extraction and risk of bias assessments will be performed by two independent reviewers. The risk of bias will be assessed with the Cochrane risk of bias tool. All statistical analyses will be conducted using Review Manager software (RevMan V.5.3.0). This systematic review will be published in a peer-reviewed journal. The review will also be disseminated electronically and in print. The review will benefit patients and practitioners in the fields of traditional and conventional medicine. CRD42016050447. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Takashima, Kayoko; Mizukawa, Yumiko; Morishita, Katsumi; Okuyama, Manabu; Kasahara, Toshihiko; Toritsuka, Naoki; Miyagishima, Toshikazu; Nagao, Taku; Urushidani, Tetsuro
2006-05-08
The Toxicogenomics Project is a 5-year collaborative project by the Japanese government and pharmaceutical companies in 2002. Its aim is to construct a large-scale toxicology database of 150 compounds orally administered to rats. The test consists of a single administration test (3, 6, 9 and 24 h) and a repeated administration test (3, 7, 14 and 28 days), and the conventional toxicology data together with the gene expression data in liver as analyzed by using Affymetrix GeneChip are being accumulated. In the project, either methylcellulose or corn oil is employed as vehicle. We examined whether the vehicle itself affects the analysis of gene expression and found that corn oil alone affected the food consumption and biochemical parameters mainly related to lipid metabolism, and this accompanied typical changes in the gene expression. Most of the genes modulated by corn oil were related to cholesterol or fatty acid metabolism (e.g., CYP7A1, CYP8B1, 3-hydroxy-3-methylglutaryl-Coenzyme A reductase, squalene epoxidase, angiopoietin-like protein 4, fatty acid synthase, fatty acid binding proteins), suggesting that the response was physiologic to the oil intake. Many of the lipid-related genes showed circadian rhythm within a day, but the expression pattern of general clock genes (e.g., period 2, arylhydrocarbon nuclear receptor translocator-like, D site albumin promoter binding protein) were unaffected by corn oil, suggesting that the effects are specific for lipid metabolism. These results would be useful for usage of the database especially when drugs with different vehicle control are compared.
A DATABASE FOR TRACKING TOXICOGENOMIC SAMPLES AND PROCEDURES
Reproductive toxicogenomic studies generate large amounts of toxicological and genomic data. On the toxicology side, a substantial quantity of data accumulates from conventional endpoints such as histology, reproductive physiology and biochemistry. The largest source of genomics...
Code of Federal Regulations, 2011 CFR
2011-01-01
... AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE QUALITY ASSURANCE AND CERTIFICATION... of the 1974 SOLAS Convention. Electronic Navigational Chart (ENC) means a database, standardized as... National Oceanic and Atmospheric Administration. NOAA ENC files comply with the IHO S-57 standard, Edition...
Code of Federal Regulations, 2012 CFR
2012-01-01
... AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE QUALITY ASSURANCE AND CERTIFICATION... of the 1974 SOLAS Convention. Electronic Navigational Chart (ENC) means a database, standardized as... National Oceanic and Atmospheric Administration. NOAA ENC files comply with the IHO S-57 standard, Edition...
Code of Federal Regulations, 2014 CFR
2014-01-01
... AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE QUALITY ASSURANCE AND CERTIFICATION... of the 1974 SOLAS Convention. Electronic Navigational Chart (ENC) means a database, standardized as... National Oceanic and Atmospheric Administration. NOAA ENC files comply with the IHO S-57 standard, Edition...
Code of Federal Regulations, 2013 CFR
2013-01-01
... AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE QUALITY ASSURANCE AND CERTIFICATION... of the 1974 SOLAS Convention. Electronic Navigational Chart (ENC) means a database, standardized as... National Oceanic and Atmospheric Administration. NOAA ENC files comply with the IHO S-57 standard, Edition...
Vocational interests in the United States: Sex, age, ethnicity, and year effects.
Morris, Michael L
2016-10-01
Vocational interests predict educational and career choices, job performance, and career success (Rounds & Su, 2014). Although sex differences in vocational interests have long been observed (Thorndike, 1911), an appropriate overall measure has been lacking from the literature. Using a cross-sectional sample of United States residents aged 14 to 63 who completed the Strong Interest Inventory assessment between 2005 and 2014 (N = 1,283,110), I examined sex, age, ethnicity, and year effects on work related interest levels using both multivariate and univariate effect size estimates of individual dimensions (Holland's Realistic, Investigative, Artistic, Social, Enterprising, and Conventional). Men scored higher on Realistic (d = -1.14), Investigative (d = -.32), Enterprising (d = -.22), and Conventional (d = -.23), while women scored higher on Artistic (d = .19) and Social (d = .38), mostly replicating previous univariate findings. Multivariate, overall sex differences were very large (disattenuated Mahalanobis' D = 1.61; 27% overlap). Interest levels were slightly lower and overall sex differences larger in younger samples. Overall sex differences have narrowed slightly for 18-22 year-olds in more recent samples. Generally very small ethnicity effects included relatively higher Investigative and Enterprising scores for Asians, Indians, and Middle Easterners, lower Realistic scores for Blacks and Native Americans, higher Realistic, Artistic, and Social scores for Pacific Islanders, and lower Conventional scores for Whites. Using Prediger's (1982) model, women were more interested in people (d = 1.01) and ideas (d = .18), while men were more interested in things and data. These results, consistent with previous reviews showing large sex differences and small year effects, suggest that large sex differences in work related interests will continue to be observed for decades. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Soranno, Patricia A; Bissell, Edward G; Cheruvelil, Kendra S; Christel, Samuel T; Collins, Sarah M; Fergus, C Emi; Filstrup, Christopher T; Lapierre, Jean-Francois; Lottig, Noah R; Oliver, Samantha K; Scott, Caren E; Smith, Nicole J; Stopyak, Scott; Yuan, Shuai; Bremigan, Mary Tate; Downing, John A; Gries, Corinna; Henry, Emily N; Skaff, Nick K; Stanley, Emily H; Stow, Craig A; Tan, Pang-Ning; Wagner, Tyler; Webster, Katherine E
2015-01-01
Although there are considerable site-based data for individual or groups of ecosystems, these datasets are widely scattered, have different data formats and conventions, and often have limited accessibility. At the broader scale, national datasets exist for a large number of geospatial features of land, water, and air that are needed to fully understand variation among these ecosystems. However, such datasets originate from different sources and have different spatial and temporal resolutions. By taking an open-science perspective and by combining site-based ecosystem datasets and national geospatial datasets, science gains the ability to ask important research questions related to grand environmental challenges that operate at broad scales. Documentation of such complicated database integration efforts, through peer-reviewed papers, is recommended to foster reproducibility and future use of the integrated database. Here, we describe the major steps, challenges, and considerations in building an integrated database of lake ecosystems, called LAGOS (LAke multi-scaled GeOSpatial and temporal database), that was developed at the sub-continental study extent of 17 US states (1,800,000 km(2)). LAGOS includes two modules: LAGOSGEO, with geospatial data on every lake with surface area larger than 4 ha in the study extent (~50,000 lakes), including climate, atmospheric deposition, land use/cover, hydrology, geology, and topography measured across a range of spatial and temporal extents; and LAGOSLIMNO, with lake water quality data compiled from ~100 individual datasets for a subset of lakes in the study extent (~10,000 lakes). Procedures for the integration of datasets included: creating a flexible database design; authoring and integrating metadata; documenting data provenance; quantifying spatial measures of geographic data; quality-controlling integrated and derived data; and extensively documenting the database. Our procedures make a large, complex, and integrated database reproducible and extensible, allowing users to ask new research questions with the existing database or through the addition of new data. The largest challenge of this task was the heterogeneity of the data, formats, and metadata. Many steps of data integration need manual input from experts in diverse fields, requiring close collaboration.
Soranno, Patricia A.; Bissell, E.G.; Cheruvelil, Kendra S.; Christel, Samuel T.; Collins, Sarah M.; Fergus, C. Emi; Filstrup, Christopher T.; Lapierre, Jean-Francois; Lotting, Noah R.; Oliver, Samantha K.; Scott, Caren E.; Smith, Nicole J.; Stopyak, Scott; Yuan, Shuai; Bremigan, Mary Tate; Downing, John A.; Gries, Corinna; Henry, Emily N.; Skaff, Nick K.; Stanley, Emily H.; Stow, Craig A.; Tan, Pang-Ning; Wagner, Tyler; Webster, Katherine E.
2015-01-01
Although there are considerable site-based data for individual or groups of ecosystems, these datasets are widely scattered, have different data formats and conventions, and often have limited accessibility. At the broader scale, national datasets exist for a large number of geospatial features of land, water, and air that are needed to fully understand variation among these ecosystems. However, such datasets originate from different sources and have different spatial and temporal resolutions. By taking an open-science perspective and by combining site-based ecosystem datasets and national geospatial datasets, science gains the ability to ask important research questions related to grand environmental challenges that operate at broad scales. Documentation of such complicated database integration efforts, through peer-reviewed papers, is recommended to foster reproducibility and future use of the integrated database. Here, we describe the major steps, challenges, and considerations in building an integrated database of lake ecosystems, called LAGOS (LAke multi-scaled GeOSpatial and temporal database), that was developed at the sub-continental study extent of 17 US states (1,800,000 km2). LAGOS includes two modules: LAGOSGEO, with geospatial data on every lake with surface area larger than 4 ha in the study extent (~50,000 lakes), including climate, atmospheric deposition, land use/cover, hydrology, geology, and topography measured across a range of spatial and temporal extents; and LAGOSLIMNO, with lake water quality data compiled from ~100 individual datasets for a subset of lakes in the study extent (~10,000 lakes). Procedures for the integration of datasets included: creating a flexible database design; authoring and integrating metadata; documenting data provenance; quantifying spatial measures of geographic data; quality-controlling integrated and derived data; and extensively documenting the database. Our procedures make a large, complex, and integrated database reproducible and extensible, allowing users to ask new research questions with the existing database or through the addition of new data. The largest challenge of this task was the heterogeneity of the data, formats, and metadata. Many steps of data integration need manual input from experts in diverse fields, requiring close collaboration.
Gapped Spectral Dictionaries and Their Applications for Database Searches of Tandem Mass Spectra*
Jeong, Kyowon; Kim, Sangtae; Bandeira, Nuno; Pevzner, Pavel A.
2011-01-01
Generating all plausible de novo interpretations of a peptide tandem mass (MS/MS) spectrum (Spectral Dictionary) and quickly matching them against the database represent a recently emerged alternative approach to peptide identification. However, the sizes of the Spectral Dictionaries quickly grow with the peptide length making their generation impractical for long peptides. We introduce Gapped Spectral Dictionaries (all plausible de novo interpretations with gaps) that can be easily generated for any peptide length thus addressing the limitation of the Spectral Dictionary approach. We show that Gapped Spectral Dictionaries are small thus opening a possibility of using them to speed-up MS/MS searches. Our MS-GappedDictionary algorithm (based on Gapped Spectral Dictionaries) enables proteogenomics applications (such as searches in the six-frame translation of the human genome) that are prohibitively time consuming with existing approaches. MS-GappedDictionary generates gapped peptides that occupy a niche between accurate but short peptide sequence tags and long but inaccurate full length peptide reconstructions. We show that, contrary to conventional wisdom, some high-quality spectra do not have good peptide sequence tags and introduce gapped tags that have advantages over the conventional peptide sequence tags in MS/MS database searches. PMID:21444829
Lee, Wonmok; Kim, Myungsook; Yong, Dongeun; Jeong, Seok Hoon; Lee, Kyungwon; Chong, Yunsop
2015-01-01
By conventional methods, the identification of anaerobic bacteria is more time consuming and requires more expertise than the identification of aerobic bacteria. Although the matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF MS) systems are relatively less studied, they have been reported to be a promising method for the identification of anaerobes. We evaluated the performance of the VITEK MS in vitro diagnostic (IVD; 1.1 database; bioMérieux, France) in the identification of anaerobes. We used 274 anaerobic bacteria isolated from various clinical specimens. The results for the identification of the bacteria by VITEK MS were compared to those obtained by phenotypic methods and 16S rRNA gene sequencing. Among the 249 isolates included in the IVD database, the VITEK MS correctly identified 209 (83.9%) isolates to the species level and an additional 18 (7.2%) at the genus level. In particular, the VITEK MS correctly identified clinically relevant and frequently isolated anaerobic bacteria to the species level. The remaining 22 isolates (8.8%) were either not identified or misidentified. The VITEK MS could not identify the 25 isolates absent from the IVD database to the species level. The VITEK MS showed reliable identifications for clinically relevant anaerobic bacteria.
Miura, Naoki; Kucho, Ken-Ichi; Noguchi, Michiko; Miyoshi, Noriaki; Uchiumi, Toshiki; Kawaguchi, Hiroaki; Tanimoto, Akihide
2014-01-01
The microminipig, which weighs less than 10 kg at an early stage of maturity, has been reported as a potential experimental model animal. Its extremely small size and other distinct characteristics suggest the possibility of a number of differences between the genome of the microminipig and that of conventional pigs. In this study, we analyzed the genomes of two healthy microminipigs using a next-generation sequencer SOLiD™ system. We then compared the obtained genomic sequences with a genomic database for the domestic pig (Sus scrofa). The mapping coverage of sequenced tag from the microminipig to conventional pig genomic sequences was greater than 96% and we detected no clear, substantial genomic variance from these data. The results may indicate that the distinct characteristics of the microminipig derive from small-scale alterations in the genome, such as Single Nucleotide Polymorphisms or translational modifications, rather than large-scale deletion or insertion polymorphisms. Further investigation of the entire genomic sequence of the microminipig with methods enabling deeper coverage is required to elucidate the genetic basis of its distinct phenotypic traits. Copyright © 2014 International Institute of Anticancer Research (Dr. John G. Delinassios), All rights reserved.
Treating refractory obsessive-compulsive disorder: what to do when conventional treatment fails?
Franz, Adelar Pedro; Paim, Mariana; Araújo, Rafael Moreno de; Rosa, Virgínia de Oliveira; Barbosa, Ísis Mendes; Blaya, Carolina; Ferrão, Ygor Arzeno
2013-01-01
Obsessive-compulsive disorder (OCD) is a chronic and impairing condition. A very small percentage of patients become asymptomatic after treatment. The purpose of this paper was to review the alternative therapies available for OCD when conventional treatment fails. Data were extracted from controlled clinical studies (evidence-based medicine) published on the MEDLINE and Science Citation Index/Web of Science databases between 1975 and 2012. Findings are discussed and suggest that clinicians dealing with refractory OCD patients should: 1) review intrinsic phenomenological aspects of OCD, which could lead to different interpretations and treatment choices; 2) review extrinsic phenomenological aspects of OCD, especially family accommodation, which may be a risk factor for non-response; 3) consider non-conventional pharmacological approaches; 4) consider non-conventional psychotherapeutic approaches; and 5) consider neurobiological approaches.
Cloud, Joann L; Conville, Patricia S; Croft, Ann; Harmsen, Dag; Witebsky, Frank G; Carroll, Karen C
2004-02-01
Identification of clinically significant nocardiae to the species level is important in patient diagnosis and treatment. A study was performed to evaluate Nocardia species identification obtained by partial 16S ribosomal DNA (rDNA) sequencing by the MicroSeq 500 system with an expanded database. The expanded portion of the database was developed from partial 5' 16S rDNA sequences derived from 28 reference strains (from the American Type Culture Collection and the Japanese Collection of Microorganisms). The expanded MicroSeq 500 system was compared to (i). conventional identification obtained from a combination of growth characteristics with biochemical and drug susceptibility tests; (ii). molecular techniques involving restriction enzyme analysis (REA) of portions of the 16S rRNA and 65-kDa heat shock protein genes; and (iii). when necessary, sequencing of a 999-bp fragment of the 16S rRNA gene. An unknown isolate was identified as a particular species if the sequence obtained by partial 16S rDNA sequencing by the expanded MicroSeq 500 system was 99.0% similar to that of the reference strain. Ninety-four nocardiae representing 10 separate species were isolated from patient specimens and examined by using the three different methods. Sequencing of partial 16S rDNA by the expanded MicroSeq 500 system resulted in only 72% agreement with conventional methods for species identification and 90% agreement with the alternative molecular methods. Molecular methods for identification of Nocardia species provide more accurate and rapid results than the conventional methods using biochemical and susceptibility testing. With an expanded database, the MicroSeq 500 system for partial 16S rDNA was able to correctly identify the human pathogens N. brasiliensis, N. cyriacigeorgica, N. farcinica, N. nova, N. otitidiscaviarum, and N. veterana.
The current structure of key actors involved in research on land and soil degradation
NASA Astrophysics Data System (ADS)
Escadafal, Richard; Barbero, Celia; Exbrayat, Williams; Marques, Maria Jose; Ruiz, Manuel; El Haddadi, Anass; Akhtar-Schuster, Mariam
2013-04-01
Land and soil conservation topics, the final mandate of the United Convention to Combat desertification in drylands, have been diagnosed as still suffering from a lack of guidance. On the contrary, climate change and biodiversity issues -the other two big subjects of the Rio Conventions- seem to progress and may benefit from the advice of international panels. Arguably the weakness of policy measures and hence the application of scientific knowledge by land users and stakeholders could be the expression of an inadequate research organization and a lack of ability to channel their findings. In order to better understand the size, breadth and depth of the scientific communities involved in providing advice to this convention and to other bodies, this study explores the corpus of international publications dealing with land and/or with soils. A database of several thousands records including a significant part of the literature published so far was performed using the Web of Science and other socio-economic databases such as FRANCIS and CAIRN. We extracted hidden information using bibliometric methods and data mining applied to these scientific publications to map the key actors (laboratories, teams, institutions) involved in research on land and on soils. Several filters were applied to the databases in combination with the word "desertification". The further use of Tetralogie software merges databases, analyses similarities and differences between keywords, disciplines, authors and regions and identifies obvious clusters. Assessing their commonalities and differences, the visualisation of links and gaps between scientists, organisations, policymakers and other stakeholders is possible. The interpretation of the 'clouds' of disciplines, keywords, and techniques will enhance the understanding of interconnections between them; ultimately this will allow diagnosing some of their strengths and weaknesses. This may help explain why land and soil degradation remains a serious global problem that lacks sufficient attention. We hope that this study will contribute to clarify the scientific landscape at stake to remediate possible weaknesses in the future.
Rowan, Courtney M; Loomis, Ashley; McArthur, Jennifer; Smith, Lincoln S; Gertz, Shira J; Fitzgerald, Julie C; Nitu, Mara E; Moser, Elizabeth As; Hsing, Deyin D; Duncan, Christine N; Mahadeo, Kris M; Moffet, Jerelyn; Hall, Mark W; Pinos, Emily L; Tamburro, Robert F; Cheifetz, Ira M
2018-04-01
The effectiveness of high-frequency oscillatory ventilation (HFOV) in the pediatric hematopoietic cell transplant patient has not been established. We sought to identify current practice patterns of HFOV, investigate parameters during HFOV and their association with mortality, and compare the use of HFOV to conventional mechanical ventilation in severe pediatric ARDS. This is a retrospective analysis of a multi-center database of pediatric and young adult allogeneic hematopoietic cell transplant subjects requiring invasive mechanical ventilation for critical illness from 2009 through 2014. Twelve United States pediatric centers contributed data. Continuous variables were compared using a Wilcoxon rank-sum test or a Kruskal-Wallis analysis. For categorical variables, univariate analysis with logistic regression was performed. The database contains 222 patients, of which 85 subjects were managed with HFOV. Of this HFOV cohort, the overall pediatric ICU survival was 23.5% ( n = 20). HFOV survivors were transitioned to HFOV at a lower oxygenation index than nonsurvivors (25.6, interquartile range 21.1-36.8, vs 37.2, interquartile range 26.5-52.2, P = .046). Survivors were transitioned to HFOV earlier in the course of mechanical ventilation, (day 0 vs day 2, P = .002). No subject survived who was transitioned to HFOV after 1 week of invasive mechanical ventilation. We compared subjects with severe pediatric ARDS treated only with conventional mechanical ventilation versus early HFOV (within 2 d of invasive mechanical ventilation) versus late HFOV. There was a trend toward difference in survival (conventional mechanical ventilation 24%, early HFOV 30%, and late HFOV 9%, P = .08). In this large database of pediatric allogeneic hematopoietic cell transplant subjects who had acute respiratory failure requiring invasive mechanical ventilation for critical illness with severe pediatric ARDS, early use of HFOV was associated with improved survival compared to late implementation of HFOV, and the subjects had outcomes similar to those treated only with conventional mechanical ventilation. Copyright © 2018 by Daedalus Enterprises.
Xiao, Di; You, Yuanhai; Bi, Zhenwang; Wang, Haibin; Zhang, Yongchan; Hu, Bin; Song, Yanyan; Zhang, Huifang; Kou, Zengqiang; Yan, Xiaomei; Zhang, Menghan; Jin, Lianmei; Jiang, Xihong; Su, Peng; Bi, Zhenqiang; Luo, Fengji; Zhang, Jianzhong
2013-03-01
There was a dramatic increase in scarlet fever cases in China from March to July 2011. Group A Streptococcus (GAS) is the only pathogen known to cause scarlet fever. Matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) coupled to Biotyper system was used for GAS identification in 2011. A local reference database (LRD) was constructed, evaluated and used to identify GAS isolates. The 75 GAS strains used to evaluate the LRD were all identified correctly. Of the 157 suspected β-hemolytic strains isolated from 298 throat swab samples, 127 (100%) and 120 (94.5%) of the isolates were identified as GAS by the MALDI-TOF MS system and the conventional bacitracin sensitivity test method, respectively. All 202 (100%) isolates were identified at the species level by searching the LRD, while 182 (90.1%) were identified by searching the original reference database (ORD). There were statistically significant differences with a high degree of credibility at species level (χ(2)=6.052, P<0.05 between the LRD and ORD). The test turnaround time was shortened 36-48h, and the cost of each sample is one-tenth of the cost of conventional methods. Establishing a domestic database is the most effective way to improve the identification efficiency using a MALDI-TOF MS system. MALDI-TOF MS is a viable alternative to conventional methods and may aid in the diagnosis and surveillance of GAS. Copyright © 2013 Elsevier B.V. All rights reserved.
Wang, Kang-Feng; Zhang, Li-Juan; Lu, Feng; Lu, Yong-Hui; Yang, Chuan-Hua
2016-06-01
To provide an evidence-based overview regarding the efficacy of Ashi points stimulation for the treatment of shoulder pain. A comprehensive search [PubMed, Chinese Biomedical Literature Database, China National Knowledge Infrastructure (CNKI), Chongqing Weipu Database for Chinese Technical Periodicals (VIP) and Wanfang Database] was conducted to identify randomized or quasi-randomized controlled trials that evaluated the effectiveness of Ashi points stimulation for shoulder pain compared with conventional treatment. The methodological quality of the included studies was assessed using the Cochrane risk of bias tool. RevMan 5.0 was used for data synthesis. Nine trials were included. Seven studies assessed the effectiveness of Ashi points stimulation on response rate compared with conventional acupuncture. Their results suggested significant effect in favour of Ashi points stimulation [odds ratio (OR): 5.89, 95% confidence interval (CI): 2.97 to 11.67, P<0.01, heterogeneity: χ(2) =3.81, P=0.70, I (2) =0% ]. One trial compared Ashi points stimulation with drug therapy. The result showed there was a significantly greater recovery rate in group of Ashi points stimulation (OR: 9.58, 95% CI: 2.69 to 34.12). One trial compared comprehensive treatment on the myofascial trigger points (MTrPs) with no treatment and the result was in favor of MTrPs. Ashi points stimulation might be superior to conventional acupuncture, drug therapy and no treatment for shoulder pain. However, due to the low methodological quality of included studies, a firm conclusion could not be reached until further studies of high quality are available.
Analysis of human serum phosphopeptidome by a focused database searching strategy.
Zhu, Jun; Wang, Fangjun; Cheng, Kai; Song, Chunxia; Qin, Hongqiang; Hu, Lianghai; Figeys, Daniel; Ye, Mingliang; Zou, Hanfa
2013-01-14
As human serum is an important source for early diagnosis of many serious diseases, analysis of serum proteome and peptidome has been extensively performed. However, the serum phosphopeptidome was less explored probably because the effective method for database searching is lacking. Conventional database searching strategy always uses the whole proteome database, which is very time-consuming for phosphopeptidome search due to the huge searching space resulted from the high redundancy of the database and the setting of dynamic modifications during searching. In this work, a focused database searching strategy using an in-house collected human serum pro-peptidome target/decoy database (HuSPep) was established. It was found that the searching time was significantly decreased without compromising the identification sensitivity. By combining size-selective Ti (IV)-MCM-41 enrichment, RP-RP off-line separation, and complementary CID and ETD fragmentation with the new searching strategy, 143 unique endogenous phosphopeptides and 133 phosphorylation sites (109 novel sites) were identified from human serum with high reliability. Copyright © 2012 Elsevier B.V. All rights reserved.
Reynolds, Caleb J; Conway, Paul
2018-02-01
Moral dilemmas typically entail directly causing harm (said to violate deontological ethics) to maximize overall outcomes (said to uphold utilitarian ethics). The dual process model suggests harm-rejection judgments derive from affective reactions to harm, whereas harm-acceptance judgments derive from cognitive evaluations of outcomes. Recently, Miller, Hannikainen, and Cushman (2014) argued that harm-rejection judgments primarily reflect self-focused-rather than other-focused-emotional responses, because only action aversion (self-focused reactions to the thought of causing harm), not outcome aversion (other-focused reactions to witnessing suffering), consistently predicted dilemma responses. However, they assessed only conventional relative dilemma judgments that treat harm-rejection and outcome-maximization responses as diametric opposites. Instead, we employed process dissociation to assess these response inclinations independently. In two studies (N = 558), we replicated Miller and colleagues' findings for conventional relative judgments, but process dissociation revealed that outcome aversion positively predicted both deontological and utilitarian inclinations-which canceled out for relative judgments. Additionally, individual differences associated with affective processing-psychopathy and empathic concern-correlated with the deontology but not utilitarian parameter. Together, these findings suggest that genuine other-oriented moralized concern for others' well-being contribute to both utilitarian and deontological response tendencies, but these tendencies nonetheless draw upon different psychological processes. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Digital Versus Conventional Impressions in Fixed Prosthodontics: A Review.
Ahlholm, Pekka; Sipilä, Kirsi; Vallittu, Pekka; Jakonen, Minna; Kotiranta, Ulla
2018-01-01
To conduct a systematic review to evaluate the evidence of possible benefits and accuracy of digital impression techniques vs. conventional impression techniques. Reports of digital impression techniques versus conventional impression techniques were systematically searched for in the following databases: Cochrane Central Register of Controlled Trials, PubMed, and Web of Science. A combination of controlled vocabulary, free-text words, and well-defined inclusion and exclusion criteria guided the search. Digital impression accuracy is at the same level as conventional impression methods in fabrication of crowns and short fixed dental prostheses (FDPs). For fabrication of implant-supported crowns and FDPs, digital impression accuracy is clinically acceptable. In full-arch impressions, conventional impression methods resulted in better accuracy compared to digital impressions. Digital impression techniques are a clinically acceptable alternative to conventional impression methods in fabrication of crowns and short FDPs. For fabrication of implant-supported crowns and FDPs, digital impression systems also result in clinically acceptable fit. Digital impression techniques are faster and can shorten the operation time. Based on this study, the conventional impression technique is still recommended for full-arch impressions. © 2016 by the American College of Prosthodontists.
Saeidian, Hamdollah; Babri, Mehran; Abdoli, Morteza; Sarabadani, Mansour; Ashrafi, Davood; Naseri, Mohammad Taghi
2012-12-15
The availability of mass spectra and interpretation skills are essential for unambiguous identification of the Chemical Weapons Convention (CWC)-related chemicals. The O(S)-alkyl N,N-dimethyl alkylphosphono(thiolo)thionoamidates are included in the list of scheduled CWC-related compounds, but there are very few spectra from these compounds in the literature. This paper examines these spectra and their mass spectral fragmentation routes. The title chemicals were prepared through microsynthetic protocols and were analyzed using electron ionization mass spectrometry with gas chromatography as a MS-inlet system. Structures of fragments were confirmed using analysis of fragment ions of deuterated analogs, tandem mass spectrometry and density functional theory (DFT) calculations. Mass spectrometric studies revealed some interesting fragmentation pathways during the ionization process, such as alkene and amine elimination and McLafferty-type rearrangements. The most important fragmentation route of the chemicals is the thiono-thiolo rearrangement. DFT calculations are used to support MS results and to reveal relative preference formation of fragment ions. The retention indices (RIs) of all the studied compounds are also reported. Mass spectra of the synthesized compounds were investigated with the aim to enrich the Organization for the Prohibition of Chemical Weapons (OPCW) Central Analytical Database (OCAD) which may be used for detection and identification of CWC-related chemicals during on-site inspection and/or off-site analysis such as OPCW proficiency tests. Copyright © 2012 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Sherwood, Owen A.; Schwietzke, Stefan; Arling, Victoria A.; Etiope, Giuseppe
2017-08-01
The concentration of atmospheric methane (CH4) has more than doubled over the industrial era. To help constrain global and regional CH4 budgets, inverse (top-down) models incorporate data on the concentration and stable carbon (δ13C) and hydrogen (δ2H) isotopic ratios of atmospheric CH4. These models depend on accurate δ13C and δ2H end-member source signatures for each of the main emissions categories. Compared with meticulous measurement and calibration of isotopic CH4 in the atmosphere, there has been relatively less effort to characterize globally representative isotopic source signatures, particularly for fossil fuel sources. Most global CH4 budget models have so far relied on outdated source signature values derived from globally nonrepresentative data. To correct this deficiency, we present a comprehensive, globally representative end-member database of the δ13C and δ2H of CH4 from fossil fuel (conventional natural gas, shale gas, and coal), modern microbial (wetlands, rice paddies, ruminants, termites, and landfills and/or waste) and biomass burning sources. Gas molecular compositional data for fossil fuel categories are also included with the database. The database comprises 10 706 samples (8734 fossil fuel, 1972 non-fossil) from 190 published references. Mean (unweighted) δ13C signatures for fossil fuel CH4 are significantly lighter than values commonly used in CH4 budget models, thus highlighting potential underestimation of fossil fuel CH4 emissions in previous CH4 budget models. This living database will be updated every 2-3 years to provide the atmospheric modeling community with the most complete CH4 source signature data possible. Database digital object identifier (DOI): https://doi.org/10.15138/G3201T.
Robino, C; Ralf, A; Pasino, S; De Marchi, M R; Ballantyne, K N; Barbaro, A; Bini, C; Carnevali, E; Casarino, L; Di Gaetano, C; Fabbri, M; Ferri, G; Giardina, E; Gonzalez, A; Matullo, G; Nutini, A L; Onofri, V; Piccinini, A; Piglionica, M; Ponzano, E; Previderè, C; Resta, N; Scarnicci, F; Seidita, G; Sorçaburu-Cigliero, S; Turrina, S; Verzeletti, A; Kayser, M
2015-03-01
Recently introduced rapidly mutating Y-chromosomal short tandem repeat (RM Y-STR) loci, displaying a multiple-fold higher mutation rate relative to any other Y-STRs, including those conventionally used in forensic casework, have been demonstrated to improve the resolution of male lineage differentiation and to allow male relative separation usually impossible with standard Y-STRs. However, large and geographically-detailed frequency haplotype databases are required to estimate the statistical weight of RM Y-STR haplotype matches if observed in forensic casework. With this in mind, the Italian Working Group (GEFI) of the International Society for Forensic Genetics launched a collaborative exercise aimed at generating an Italian quality controlled forensic RM Y-STR haplotype database. Overall 1509 male individuals from 13 regional populations covering northern, central and southern areas of the Italian peninsula plus Sicily were collected, including both "rural" and "urban" samples classified according to population density in the sampling area. A subset of individuals was additionally genotyped for Y-STR loci included in the Yfiler and PowerPlex Y23 (PPY23) systems (75% and 62%, respectively), allowing the comparison of RM and conventional Y-STRs. Considering the whole set of 13 RM Y-STRs, 1501 unique haplotypes were observed among the 1509 sampled Italian men with a haplotype diversity of 0.999996, largely superior to Yfiler and PPY23 with 0.999914 and 0.999950, respectively. AMOVA indicated that 99.996% of the haplotype variation was within populations, confirming that genetic-geographic structure is almost undetected by RM Y-STRs. Haplotype sharing among regional Italian populations was not observed at all with the complete set of 13 RM Y-STRs. Haplotype sharing within Italian populations was very rare (0.27% non-unique haplotypes), and lower in urban (0.22%) than rural (0.29%) areas. Additionally, 422 father-son pairs were investigated, and 20.1% of them could be discriminated by the whole set of 13 RM Y-STRs, which was very close to the theoretically expected estimate of 19.5% given the mutation rates of the markers used. Results obtained from a high-coverage Italian haplotype dataset confirm on the regional scale the exceptional ability of RM Y-STRs to resolve male lineages previously observed globally, and attest the unsurpassed value of RM Y-STRs for male-relative differentiation purposes. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Sánchez-de-Madariaga, Ricardo; Muñoz, Adolfo; Lozano-Rubí, Raimundo; Serrano-Balazote, Pablo; Castro, Antonio L; Moreno, Oscar; Pascual, Mario
2017-08-18
The objective of this research is to compare the relational and non-relational (NoSQL) database systems approaches in order to store, recover, query and persist standardized medical information in the form of ISO/EN 13606 normalized Electronic Health Record XML extracts, both in isolation and concurrently. NoSQL database systems have recently attracted much attention, but few studies in the literature address their direct comparison with relational databases when applied to build the persistence layer of a standardized medical information system. One relational and two NoSQL databases (one document-based and one native XML database) of three different sizes have been created in order to evaluate and compare the response times (algorithmic complexity) of six different complexity growing queries, which have been performed on them. Similar appropriate results available in the literature have also been considered. Relational and non-relational NoSQL database systems show almost linear algorithmic complexity query execution. However, they show very different linear slopes, the former being much steeper than the two latter. Document-based NoSQL databases perform better in concurrency than in isolation, and also better than relational databases in concurrency. Non-relational NoSQL databases seem to be more appropriate than standard relational SQL databases when database size is extremely high (secondary use, research applications). Document-based NoSQL databases perform in general better than native XML NoSQL databases. EHR extracts visualization and edition are also document-based tasks more appropriate to NoSQL database systems. However, the appropriate database solution much depends on each particular situation and specific problem.
Shao, Huikai; Zhao, Lingguo; Chen, Fuchao; Zeng, Shengbo; Liu, Shengquan; Li, Jiajia
2015-11-29
BACKGROUND In the past decades, a large number of randomized controlled trials (RCTs) on the efficacy of ligustrazine injection combined with conventional antianginal drugs for angina pectoris have been reported. However, these RCTs have not been evaluated in accordance with PRISMA systematic review standards. The aim of this study was to evaluate the efficacy of ligustrazine injection as adjunctive therapy for angina pectoris. MATERIAL AND METHODS The databases PubMed, Medline, Cochrane Library, Embase, Sino-Med, Wanfang Databases, Chinese Scientific Journal Database, Google Scholar, Chinese Biomedical Literature Database, China National Knowledge Infrastructure, and the Chinese Science Citation Database were searched for published RCTs. Meta-analysis was performed on the primary outcome measures, including the improvements of electrocardiography (ECG) and the reductions in angina symptoms. Sensitivity and subgroup analysis based on the M score (the refined Jadad scores) were also used to evaluate the effect of quality, sample size, and publication year of the included RCTs on the overall effect of ligustrazine injection. RESULTS Eleven RCTs involving 870 patients with angina pectoris were selected in this study. Compared with conventional antianginal drugs alone, ligustrazine injection combined with antianginal drugs significantly increased the efficacy in symptom improvement (odds ratio [OR], 3.59; 95% confidence interval [CI]: 2.39 to 5.40) and in ECG improvement (OR, 3.42; 95% CI: 2.33 to 5.01). Sensitivity and subgroup analysis also confirmed that ligustrazine injection had better effect in the treatment of angina pectoris as adjunctive therapy. CONCLUSIONS The 11 eligible RCTs indicated that ligustrazine injection as adjunctive therapy was more effective than antianginal drugs alone. However, due to the low quality of included RCTs, more rigorously designed RCTs were still needed to verify the effects of ligustrazine injection as adjunctive therapy for angina pectoris.
Shao, Huikai; Zhao, Lingguo; Chen, Fuchao; Zeng, Shengbo; Liu, Shengquan; Li, Jiajia
2015-01-01
Background In the past decades, a large number of randomized controlled trials (RCTs) on the efficacy of ligustrazine injection combined with conventional antianginal drugs for angina pectoris have been reported. However, these RCTs have not been evaluated in accordance with PRISMA systematic review standards. The aim of this study was to evaluate the efficacy of ligustrazine injection as adjunctive therapy for angina pectoris. Material/Methods The databases PubMed, Medline, Cochrane Library, Embase, Sino-Med, Wanfang Databases, Chinese Scientific Journal Database, Google Scholar, Chinese Biomedical Literature Database, China National Knowledge Infrastructure, and the Chinese Science Citation Database were searched for published RCTs. Meta-analysis was performed on the primary outcome measures, including the improvements of electrocardiography (ECG) and the reductions in angina symptoms. Sensitivity and subgroup analysis based on the M score (the refined Jadad scores) were also used to evaluate the effect of quality, sample size, and publication year of the included RCTs on the overall effect of ligustrazine injection. Results Eleven RCTs involving 870 patients with angina pectoris were selected in this study. Compared with conventional antianginal drugs alone, ligustrazine injection combined with antianginal drugs significantly increased the efficacy in symptom improvement (odds ratio [OR], 3.59; 95% confidence interval [CI]: 2.39 to 5.40) and in ECG improvement (OR, 3.42; 95% CI: 2.33 to 5.01). Sensitivity and subgroup analysis also confirmed that ligustrazine injection had better effect in the treatment of angina pectoris as adjunctive therapy. Conclusions The 11 eligible RCTs indicated that ligustrazine injection as adjunctive therapy was more effective than antianginal drugs alone. However, due to the low quality of included RCTs, more rigorously designed RCTs were still needed to verify the effects of ligustrazine injection as adjunctive therapy for angina pectoris. PMID:26615387
Casuso-Holgado, María Jesús; Martín-Valero, Rocío; Carazo, Ana F; Medrano-Sánchez, Esther M; Cortés-Vega, M Dolores; Montero-Bancalero, Francisco José
2018-04-01
To evaluate the evidence for the use of virtual reality to treat balance and gait impairments in multiple sclerosis rehabilitation. Systematic review and meta-analysis of randomized controlled trials and quasi-randomized clinical trials. An electronic search was conducted using the following databases: MEDLINE (PubMed), Physiotherapy Evidence Database (PEDro), Cochrane Database of Systematic Reviews (CDSR) and (CINHAL). A quality assessment was performed using the PEDro scale. The data were pooled and a meta-analysis was completed. This systematic review was conducted in accordance with the (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) PRISMA guideline statement. It was registered in the PROSPERO database (CRD42016049360). A total of 11 studies were included. The data were pooled, allowing meta-analysis of seven outcomes of interest. A total of 466 participants clinically diagnosed with multiple sclerosis were analysed. Results showed that virtual reality balance training is more effective than no intervention for postural control improvement (standard mean difference (SMD) = -0.64; 95% confidence interval (CI) = -1.05, -0.24; P = 0.002). However, significant overall effect was not showed when compared with conventional training (SMD = -0.04; 95% CI = -0.70, 0.62; P = 0.90). Inconclusive results were also observed for gait rehabilitation. Virtual reality training could be considered at least as effective as conventional training and more effective than no intervention to treat balance and gait impairments in multiple sclerosis rehabilitation.
da Costa, Márcia Gisele Santos; Santos, Marisa da Silva; Sarti, Flávia Mori; Senna, Kátia Marie Simões e.; Tura, Bernardo Rangel; Goulart, Marcelo Correia
2014-01-01
Objectives The study performs a cost-effectiveness analysis of procedures for atrial septal defects occlusion, comparing conventional surgery to septal percutaneous implant. Methods A model of analytical decision was structured with symmetric branches to estimate cost-effectiveness ratio between the procedures. The decision tree model was based on evidences gathered through meta-analysis of literature, and validated by a panel of specialists. The lower number of surgical procedures performed for atrial septal defects occlusion at each branch was considered as the effectiveness outcome. Direct medical costs and probabilities for each event were inserted in the model using data available from Brazilian public sector database system and information extracted from the literature review, using micro-costing technique. Sensitivity analysis included price variations of percutaneous implant. Results The results obtained from the decision model demonstrated that the percutaneous implant was more cost effective in cost-effectiveness analysis at a cost of US$8,936.34 with a reduction in the probability of surgery occurrence in 93% of the cases. Probability of atrial septal communication occlusion and cost of the implant are the determinant factors of cost-effectiveness ratio. Conclusions The proposal of a decision model seeks to fill a void in the academic literature. The decision model proposed includes the outcomes that present major impact in relation to the overall costs of the procedure. The atrial septal defects occlusion using percutaneous implant reduces the physical and psychological distress to the patients in relation to the conventional surgery, which represent intangible costs in the context of economic evaluation. PMID:25302806
da Costa, Márcia Gisele Santos; Santos, Marisa da Silva; Sarti, Flávia Mori; Simões e Senna, Kátia Marie; Tura, Bernardo Rangel; Correia, Marcelo Goulart; Goulart, Marcelo Correia
2014-01-01
The study performs a cost-effectiveness analysis of procedures for atrial septal defects occlusion, comparing conventional surgery to septal percutaneous implant. A model of analytical decision was structured with symmetric branches to estimate cost-effectiveness ratio between the procedures. The decision tree model was based on evidences gathered through meta-analysis of literature, and validated by a panel of specialists. The lower number of surgical procedures performed for atrial septal defects occlusion at each branch was considered as the effectiveness outcome. Direct medical costs and probabilities for each event were inserted in the model using data available from Brazilian public sector database system and information extracted from the literature review, using micro-costing technique. Sensitivity analysis included price variations of percutaneous implant. The results obtained from the decision model demonstrated that the percutaneous implant was more cost effective in cost-effectiveness analysis at a cost of US$8,936.34 with a reduction in the probability of surgery occurrence in 93% of the cases. Probability of atrial septal communication occlusion and cost of the implant are the determinant factors of cost-effectiveness ratio. The proposal of a decision model seeks to fill a void in the academic literature. The decision model proposed includes the outcomes that present major impact in relation to the overall costs of the procedure. The atrial septal defects occlusion using percutaneous implant reduces the physical and psychological distress to the patients in relation to the conventional surgery, which represent intangible costs in the context of economic evaluation.
Proteomic analysis of Rhodotorula mucilaginosa: dealing with the issues of a non-conventional yeast.
Addis, Maria Filippa; Tanca, Alessandro; Landolfo, Sara; Abbondio, Marcello; Cutzu, Raffaela; Biosa, Grazia; Pagnozzi, Daniela; Uzzau, Sergio; Mannazzu, Ilaria
2016-08-01
Red yeasts ascribed to the species Rhodotorula mucilaginosa are gaining increasing attention, due to their numerous biotechnological applications, spanning carotenoid production, liquid bioremediation, heavy metal biotransformation and antifungal and plant growth-promoting actions, but also for their role as opportunistic pathogens. Nevertheless, their characterization at the 'omic' level is still scarce. Here, we applied different proteomic workflows to R. mucilaginosa with the aim of assessing their potential in generating information on proteins and functions of biotechnological interest, with a particular focus on the carotenogenic pathway. After optimization of protein extraction, we tested several gel-based (including 2D-DIGE) and gel-free sample preparation techniques, followed by tandem mass spectrometry analysis. Contextually, we evaluated different bioinformatic strategies for protein identification and interpretation of the biological significance of the dataset. When 2D-DIGE analysis was applied, not all spots returned a unambiguous identification and no carotenogenic enzymes were identified, even upon the application of different database search strategies. Then, the application of shotgun proteomic workflows with varying levels of sensitivity provided a picture of the information depth that can be reached with different analytical resources, and resulted in a plethora of information on R. mucilaginosa metabolism. However, also in these cases no proteins related to the carotenogenic pathway were identified, thus indicating that further improvements in sequence databases and functional annotations are strictly needed for increasing the outcome of proteomic analysis of this and other non-conventional yeasts. Copyright © 2016 John Wiley & Sons, Ltd. Copyright © 2016 John Wiley & Sons, Ltd.
Solving Relational Database Problems with ORDBMS in an Advanced Database Course
ERIC Educational Resources Information Center
Wang, Ming
2011-01-01
This paper introduces how to use the object-relational database management system (ORDBMS) to solve relational database (RDB) problems in an advanced database course. The purpose of the paper is to provide a guideline for database instructors who desire to incorporate the ORDB technology in their traditional database courses. The paper presents…
Borg, Johan; Lindström, Anna; Larsson, Stig
2011-03-01
The 'Convention on the Rights of Persons with Disabilities' (CRPD) requires governments to meet the assistive technology needs of citizens. However, the access to assistive technology in developing countries is severely limited, which is aggravated by a lack of related services. To summarize current knowledge on assistive technology for low- and lower-middle-income countries published in 1995 or later, and to provide recommendations that facilitate implementation of the CRPD. Literature review. Literature was searched in web-based databases and reference lists. Studies carried out in low- and lower-middle-income countries, or addressing assistive technology for such countries, were included. The 52 included articles are dominated by product oriented research on leg prostheses and manual wheelchairs. Less has been published on hearing aids and virtually nothing on the broad range of other types of assistive technology. To support effective implementation of the CRPD in these countries, there is a need for actions and research related particularly to policies, service delivery, outcomes and international cooperation, but also to product development and production. The article has a potential to contribute to CRPD compliant developments in the provision of assistive technology in developing countries by providing practitioners with an overview of published knowledge and researchers with identified research needs.
NASA Astrophysics Data System (ADS)
Verma, Surendra P.; Rivera-Gómez, M. Abdelaly; Díaz-González, Lorena; Pandarinath, Kailasa; Amezcua-Valdez, Alejandra; Rosales-Rivera, Mauricio; Verma, Sanjeet K.; Quiroz-Ruiz, Alfredo; Armstrong-Altrin, John S.
2017-05-01
A new multidimensional scheme consistent with the International Union of Geological Sciences (IUGS) is proposed for the classification of igneous rocks in terms of four magma types: ultrabasic, basic, intermediate, and acid. Our procedure is based on an extensive database of major element composition of a total of 33,868 relatively fresh rock samples having a multinormal distribution (initial database with 37,215 samples). Multinormally distributed database in terms of log-ratios of samples was ascertained by a new computer program DOMuDaF, in which the discordancy test was applied at the 99.9% confidence level. Isometric log-ratio (ilr) transformation was used to provide overall percent correct classification of 88.7%, 75.8%, 88.0%, and 80.9% for ultrabasic, basic, intermediate, and acid rocks, respectively. Given the known mathematical and uncertainty propagation properties, this transformation could be adopted for routine applications. The incorrect classification was mainly for the "neighbour" magma types, e.g., basic for ultrabasic and vice versa. Some of these misclassifications do not have any effect on multidimensional tectonic discrimination. For an efficient application of this multidimensional scheme, a new computer program MagClaMSys_ilr (MagClaMSys-Magma Classification Major-element based System) was written, which is available for on-line processing on http://tlaloc.ier.unam.mx/index.html. This classification scheme was tested from newly compiled data for relatively fresh Neogene igneous rocks and was found to be consistent with the conventional IUGS procedure. The new scheme was successfully applied to inter-laboratory data for three geochemical reference materials (basalts JB-1 and JB-1a, and andesite JA-3) from Japan and showed that the inferred magma types are consistent with the rock name (basic for basalts JB-1 and JB-1a and intermediate for andesite JA-3). The scheme was also successfully applied to five case studies of older Archaean to Mesozoic igneous rocks. Similar or more reliable results were obtained from existing tectonomagmatic discrimination diagrams when used in conjunction with the new computer program as compared to the IUGS scheme. The application to three case studies of igneous provenance of sedimentary rocks was demonstrated as a novel approach. Finally, we show that the new scheme is more robust for post-emplacement compositional changes than the conventional IUGS procedure.
Matityahu, Amir; Kahler, David; Krettek, Christian; Stöckle, Ulrich; Grutzner, Paul Alfred; Messmer, Peter; Ljungqvist, Jan; Gebhard, Florian
2014-12-01
To evaluate the accuracy of computer-assisted sacral screw fixation compared with conventional techniques in the dysmorphic versus normal sacrum. Review of a previous study database. Database of a multinational study with 9 participating trauma centers. The reviewed group included 130 patients, 72 from the navigated group and 58 from the conventional group. Of these, 109 were in the nondysmorphic group and 21 in the dysmorphic group. Placement of sacroiliac (SI) screws was performed using standard fluoroscopy for the conventional group and BrainLAB navigation software with either 2-dimensional or 3-dimensional (3D) navigation for the navigated group. Accuracy of SI screw placement by 2-dimensional and 3D navigation versus conventional fluoroscopy in dysmorphic and nondysmorphic patients, as evaluated by 6 observers using postoperative computerized tomography imaging at least 1 year after initial surgery. Intraobserver agreement was also evaluated. There were 11.9% (13/109) of patients with misplaced screws in the nondysmorphic group and 28.6% (6/21) of patients with misplaced screws in the dysmorphic group, none of which were in the 3D navigation group. Raw agreement between the 6 observers regarding misplaced screws was 32%. However, the percent overall agreement was 69.0% (kappa = 0.38, P < 0.05). The use of 3D navigation to improve intraoperative imaging for accurate insertion of SI screws is magnified in the dysmorphic proximal sacral segment. We recommend the use of 3D navigation, where available, for insertion of SI screws in patients with normal and dysmorphic proximal sacral segments. Therapeutic level I.
NASA Astrophysics Data System (ADS)
Seufert, V.; Wood, S.; Reid, A.; Gonzalez, A.; Rhemtulla, J.; Ramankutty, N.
2014-12-01
The most important current driver of biodiversity loss is the conversion of natural habitats for human land uses, mostly for the purpose of food production. However, by causing this biodiversity loss, food production is eroding the very same ecosystem services (e.g. pollination and soil fertility) that it depends on. We therefore need to adopt more wildlife-friendly agricultural practices that can contribute to preserving biodiversity. Organic farming has been shown to typically host higher biodiversity than conventional farming. But how is the biodiversity benefit of organic management dependent on the landscape context farms are situated in? To implement organic farming as an effective means for protecting biodiversity and enhancing ecosystem services we need to understand better under what conditions organic management is most beneficial for species. We conducted a meta-analysis of the literature to answer this question, compiling the most comprehensive database to date of studies that monitored biodiversity in organic vs. conventional fields. We also collected information about the landscape surrounding these fields from remote sensing products. Our database consists of 348 study sites across North America and Europe. Our analysis shows that organic management can improve biodiversity in agricultural fields substantially. It is especially effective at preserving biodiversity in homogeneous landscapes that are structurally simplified and dominated by either cropland or pasture. In heterogeneous landscapes conventional agriculture might instead already hold high biodiversity, and organic management does not appear to provide as much of a benefit for species richness as in simplified landscapes. Our results suggest that strategies to maintain biodiversity-dependent ecosystem services should include a combination of pristine natural habitats, wildlife-friendly farming systems like organic farming, and high-yielding conventional systems, interspersed in structurally diverse, heterogeneous landscapes.
Space debris mitigation - engineering strategies
NASA Astrophysics Data System (ADS)
Taylor, E.; Hammond, M.
The problem of space debris pollution is acknowledged to be of growing concern by space agencies, leading to recent activities in the field of space debris mitigation. A review of the current (and near-future) mitigation guidelines, handbooks, standards and licensing procedures has identified a number of areas where further work is required. In order for space debris mitigation to be implemented in spacecraft manufacture and operation, the authors suggest that debris-related criteria need to become design parameters (following the same process as applied to reliability and radiation). To meet these parameters, spacecraft manufacturers and operators will need processes (supported by design tools and databases and implementation standards). A particular aspect of debris mitigation, as compared with conventional requirements (e.g. radiation and reliability) is the current and near-future national and international regulatory framework and associated liability aspects. A framework for these implementation standards is presented, in addition to results of in-house research and development on design tools and databases (including collision avoidance in GTO and SSTO and evaluation of failure criteria on composite and aluminium structures).
Search strategies on the Internet: general and specific.
Bottrill, Krys
2004-06-01
Some of the most up-to-date information on scientific activity is to be found on the Internet; for example, on the websites of academic and other research institutions and in databases of currently funded research studies provided on the websites of funding bodies. Such information can be valuable in suggesting new approaches and techniques that could be applicable in a Three Rs context. However, the Internet is a chaotic medium, not subject to the meticulous classification and organisation of classical information resources. At the same time, Internet search engines do not match the sophistication of search systems used by database hosts. Also, although some offer relatively advanced features, user awareness of these tends to be low. Furthermore, much of the information on the Internet is not accessible to conventional search engines, giving rise to the concept of the "Invisible Web". General strategies and techniques for Internet searching are presented, together with a comparative survey of selected search engines. The question of how the Invisible Web can be accessed is discussed, as well as how to keep up-to-date with Internet content and improve searching skills.
In-house access to PACS images and related data through World Wide Web
NASA Astrophysics Data System (ADS)
Mascarini, Christian; Ratib, Osman M.; Trayser, Gerhard; Ligier, Yves; Appel, R. D.
1996-05-01
The development of a hospital wide PACS is in progress at the University Hospital of Geneva and several archive modules are operational since 1992. This PACS is intended for wide distribution of images to clinical wards. As the PACS project and the number of archived images grow rapidly in the hospital, it was necessary to provide an easy, more widely accessible and convenient access to the PACS database for the clinicians in the different wards and clinical units of the hospital. An innovative solution has been developed using tools such as Netscape navigator and NCSA World Wide Web server as an alternative to conventional database query and retrieval software. These tools present the advantages of providing an user interface which is the same independently of the platform being used (Mac, Windows, UNIX, ...), and an easy integration of different types of documents (text, images, ...). A strict access control has been added to this interface. It allows user identification and access rights checking, as defined by the in-house hospital information system, before allowing the navigation through patient data records.
Konishi, Takahiro; Nakajima, Kenichi; Okuda, Koichi; Yoneyama, Hiroto; Matsuo, Shinro; Shibutani, Takayuki; Onoguchi, Masahisa; Kinuya, Seigo
2017-07-01
Although IQ-single-photon emission computed tomography (SPECT) provides rapid acquisition and attenuation-corrected images, the unique technology may create characteristic distribution different from the conventional imaging. This study aimed to compare the diagnostic performance of IQ-SPECT using Japanese normal databases (NDBs) with that of the conventional SPECT for thallium-201 ( 201 Tl) myocardial perfusion imaging (MPI). A total of 36 patients underwent 1-day 201 Tl adenosine stress-rest MPI. Images were acquired with IQ-SPECT at approximately one-quarter of the standard time of conventional SPECT. Projection data acquired with the IQ-SPECT system were reconstructed via an ordered subset conjugate gradient minimizer method with or without scatter and attenuation correction (SCAC). Projection data obtained using the conventional SPECT were reconstructed via a filtered back projection method without SCAC. The summed stress score (SSS) was calculated using NDBs created by the Japanese Society of Nuclear Medicine working group, and scores were compared between IQ-SPECT and conventional SPECT using the acquisition condition-matched NDBs. The diagnostic performance of the methods for the detection of coronary artery disease was also compared. SSSs were 6.6 ± 8.2 for the conventional SPECT, 6.6 ± 9.4 for IQ-SPECT without SCAC, and 6.5 ± 9.7 for IQ-SPECT with SCAC (p = n.s. for each comparison). The SSS showed a strong positive correlation between conventional SPECT and IQ-SPECT (r = 0.921 and p < 0.0001), and the correlation between IQ-SPECT with and without SCAC was also good (r = 0.907 and p < 0.0001). Regarding diagnostic performance, the sensitivity, specificity, and accuracy were 80.8, 78.9, and 79.4%, respectively, for the conventional SPECT; 80.8, 80.3, and 82.0%, respectively, for IQ-SPECT without SCAC; and 88.5, 86.8, and 87.3%, respectively, for IQ-SPECT with SCAC, respectively. The area under the curve obtained via receiver operating characteristic analysis were 0.77, 0.80, and 0.86 for conventional SPECT, IQ-SPECT without SCAC, and IQ-SPECT with SCAC, respectively (p = n.s. for each comparison). When appropriate NDBs were used, the diagnostic performance of 201 Tl IQ-SPECT was comparable with that of the conventional system regardless of different characteristics of myocardial accumulation in the conventional system.
Identifying work-related motor vehicle crashes in multiple databases.
Thomas, Andrea M; Thygerson, Steven M; Merrill, Ray M; Cook, Lawrence J
2012-01-01
To compare and estimate the magnitude of work-related motor vehicle crashes in Utah using 2 probabilistically linked statewide databases. Data from 2006 and 2007 motor vehicle crash and hospital databases were joined through probabilistic linkage. Summary statistics and capture-recapture were used to describe occupants injured in work-related motor vehicle crashes and estimate the size of this population. There were 1597 occupants in the motor vehicle crash database and 1673 patients in the hospital database identified as being in a work-related motor vehicle crash. We identified 1443 occupants with at least one record from either the motor vehicle crash or hospital database indicating work-relatedness that linked to any record in the opposing database. We found that 38.7 percent of occupants injured in work-related motor vehicle crashes identified in the motor vehicle crash database did not have a primary payer code of workers' compensation in the hospital database and 40.0 percent of patients injured in work-related motor vehicle crashes identified in the hospital database did not meet our definition of a work-related motor vehicle crash in the motor vehicle crash database. Depending on how occupants injured in work-related motor crashes are identified, we estimate the population to be between 1852 and 8492 in Utah for the years 2006 and 2007. Research on single databases may lead to biased interpretations of work-related motor vehicle crashes. Combining 2 population based databases may still result in an underestimate of the magnitude of work-related motor vehicle crashes. Improved coding of work-related incidents is needed in current databases.
Ultrasound and thyroiditis in patient candidates for thyroidectomy.
Del Rio, P; De Simone, B; Fumagalli, M; Viani, L; Totaro, A; Sianesi, M
2015-03-01
Thyroiditis is often associated with nodules based on the Bethesda classification system, and the presence of thyroiditis can make thyroid surgery difficult using both conventional techniques and minimally invasive videoassisted approaches (MIVAT). We analyzed 326 patients who underwent total thyroidectomy in 2012. We collected all data in dedicated database. The patients were divided in 4 groups: group 1 no affected by thyroiditis, group 2 affected by thyroiditis, group 3 only histological diagnosis of thyroiditis, group 4all patients affected by thyroiditis. Group 1 included 201 cases, group 2 included 64 patients, group 3 included 61 patients. No statistically significant difference between group 2 and 3 about Ultrasound (US) examination. Statistically significant difference in incidence of "THYR 3-4" between group 1 and group 4. No differences in MIVAT vs. Conventional group. US examination of the thyroid is essential for the diagnostic study of the gland also in the selection of a surgical approach. Thyroiditis is a relative contraindication to MIVAT but the experience of the endocrine surgeon is the most important factor to reduce intra and postoperative complications together a correct collaboration in multidisciplinart endocrinological team.
Performance assessment of EMR systems based on post-relational database.
Yu, Hai-Yan; Li, Jing-Song; Zhang, Xiao-Guang; Tian, Yu; Suzuki, Muneou; Araki, Kenji
2012-08-01
Post-relational databases provide high performance and are currently widely used in American hospitals. As few hospital information systems (HIS) in either China or Japan are based on post-relational databases, here we introduce a new-generation electronic medical records (EMR) system called Hygeia, which was developed with the post-relational database Caché and the latest platform Ensemble. Utilizing the benefits of a post-relational database, Hygeia is equipped with an "integration" feature that allows all the system users to access data-with a fast response time-anywhere and at anytime. Performance tests of databases in EMR systems were implemented in both China and Japan. First, a comparison test was conducted between a post-relational database, Caché, and a relational database, Oracle, embedded in the EMR systems of a medium-sized first-class hospital in China. Second, a user terminal test was done on the EMR system Izanami, which is based on the identical database Caché and operates efficiently at the Miyazaki University Hospital in Japan. The results proved that the post-relational database Caché works faster than the relational database Oracle and showed perfect performance in the real-time EMR system.
NASA Astrophysics Data System (ADS)
Lee, Sangho; Suh, Jangwon; Park, Hyeong-Dong
2015-03-01
Boring logs are widely used in geological field studies since the data describes various attributes of underground and surface environments. However, it is difficult to manage multiple boring logs in the field as the conventional management and visualization methods are not suitable for integrating and combining large data sets. We developed an iPad application to enable its user to search the boring log rapidly and visualize them using the augmented reality (AR) technique. For the development of the application, a standard borehole database appropriate for a mobile-based borehole database management system was designed. The application consists of three modules: an AR module, a map module, and a database module. The AR module superimposes borehole data on camera imagery as viewed by the user and provides intuitive visualization of borehole locations. The map module shows the locations of corresponding borehole data on a 2D map with additional map layers. The database module provides data management functions for large borehole databases for other modules. Field survey was also carried out using more than 100,000 borehole data.
NASA Astrophysics Data System (ADS)
Petpairote, Chayanut; Madarasmi, Suthep; Chamnongthai, Kosin
2018-01-01
The practical identification of individuals using facial recognition techniques requires the matching of faces with specific expressions to faces from a neutral face database. A method for facial recognition under varied expressions against neutral face samples of individuals via recognition of expression warping and the use of a virtual expression-face database is proposed. In this method, facial expressions are recognized and the input expression faces are classified into facial expression groups. To aid facial recognition, the virtual expression-face database is sorted into average facial-expression shapes and by coarse- and fine-featured facial textures. Wrinkle information is also employed in classification by using a process of masking to adjust input faces to match the expression-face database. We evaluate the performance of the proposed method using the CMU multi-PIE, Cohn-Kanade, and AR expression-face databases, and we find that it provides significantly improved results in terms of face recognition accuracy compared to conventional methods and is acceptable for facial recognition under expression variation.
Moran, John L; Solomon, Patricia J
2011-02-01
Time series analysis has seen limited application in the biomedical Literature. The utility of conventional and advanced time series estimators was explored for intensive care unit (ICU) outcome series. Monthly mean time series, 1993-2006, for hospital mortality, severity-of-illness score (APACHE III), ventilation fraction and patient type (medical and surgical), were generated from the Australia and New Zealand Intensive Care Society adult patient database. Analyses encompassed geographical seasonal mortality patterns, series structural time changes, mortality series volatility using autoregressive moving average and Generalized Autoregressive Conditional Heteroscedasticity models in which predicted variances are updated adaptively, and bivariate and multivariate (vector error correction models) cointegrating relationships between series. The mortality series exhibited marked seasonality, declining mortality trend and substantial autocorrelation beyond 24 lags. Mortality increased in winter months (July-August); the medical series featured annual cycling, whereas the surgical demonstrated long and short (3-4 months) cycling. Series structural breaks were apparent in January 1995 and December 2002. The covariance stationary first-differenced mortality series was consistent with a seasonal autoregressive moving average process; the observed conditional-variance volatility (1993-1995) and residual Autoregressive Conditional Heteroscedasticity effects entailed a Generalized Autoregressive Conditional Heteroscedasticity model, preferred by information criterion and mean model forecast performance. Bivariate cointegration, indicating long-term equilibrium relationships, was established between mortality and severity-of-illness scores at the database level and for categories of ICUs. Multivariate cointegration was demonstrated for {log APACHE III score, log ICU length of stay, ICU mortality and ventilation fraction}. A system approach to understanding series time-dependence may be established using conventional and advanced econometric time series estimators. © 2010 Blackwell Publishing Ltd.
Analysis of large system black box verification test data
NASA Technical Reports Server (NTRS)
Clapp, Kenneth C.; Iyer, Ravishankar Krishnan
1993-01-01
Issues regarding black box, large systems verification are explored. It begins by collecting data from several testing teams. An integrated database containing test, fault, repair, and source file information is generated. Intuitive effectiveness measures are generated using conventional black box testing results analysis methods. Conventional analysts methods indicate that the testing was effective in the sense that as more tests were run, more faults were found. Average behavior and individual data points are analyzed. The data is categorized and average behavior shows a very wide variation in number of tests run and in pass rates (pass rates ranged from 71 percent to 98 percent). The 'white box' data contained in the integrated database is studied in detail. Conservative measures of effectiveness are discussed. Testing efficiency (ratio of repairs to number of tests) is measured at 3 percent, fault record effectiveness (ratio of repairs to fault records) is measured at 55 percent, and test script redundancy (ratio of number of failed tests to minimum number of tests needed to find the faults) ranges from 4.2 to 15.8. Error prone source files and subsystems are identified. A correlational mapping of test functional area to product subsystem is completed. A new adaptive testing process based on real-time generation of the integrated database is proposed.
Hurst, Dominic
2012-06-01
The Medline, Cochrane CENTRAL, Biomed Central, Database of Open Access Journals (DOAJ), OpenJ-Gate, Bibliografia Brasileira de Odontologia (BBO), LILACS, IndMed, Sabinet, Scielo, Scirus (Medicine), OpenSIGLE and Google Scholar databases were searched. Hand searching was performed for journals not indexed in the databases. References of included trials were checked. Prospective clinical trials with test and control groups with a follow up of at least one year were included. Data abstraction was conducted independently and clinical and methodologically homogeneous data were pooled using a fixed-effects model. Eighteen trials were included. From these 32 individual dichotomous datasets were extracted and analysed. The majority of the results show no differences between both types of intervention. A high risk of selection-, performance-, detection- and attrition bias was identified. Existing research gaps are mainly due to lack of trials and small sample size. The current evidence indicates that the failure rate of high-viscosity GIC/ART restorations is not higher than, but similar to that of conventional amalgam fillings after periods longer than one year. These results are in line with the conclusions drawn during the original systematic review. There is a high risk that these results are affected by bias, and thus confirmation by further trials with suitably high numbers of participants is needed.
Scabbio, Camilla; Zoccarato, Orazio; Malaspina, Simona; Lucignani, Giovanni; Del Sole, Angelo; Lecchi, Michela
2017-10-17
To evaluate the impact of non-specific normal databases on the percent summed rest score (SR%) and stress score (SS%) from simulated low-dose SPECT studies by shortening the acquisition time/projection. Forty normal-weight and 40 overweight/obese patients underwent myocardial studies with a conventional gamma-camera (BrightView, Philips) using three different acquisition times/projection: 30, 15, and 8 s (100%-counts, 50%-counts, and 25%-counts scan, respectively) and reconstructed using the iterative algorithm with resolution recovery (IRR) Astonish TM (Philips). Three sets of normal databases were used: (1) full-counts IRR; (2) half-counts IRR; and (3) full-counts traditional reconstruction algorithm database (TRAD). The impact of these databases and the acquired count statistics on the SR% and SS% was assessed by ANOVA analysis and Tukey test (P < 0.05). Significantly higher SR% and SS% values (> 40%) were found for the full-counts TRAD databases respect to the IRR databases. For overweight/obese patients, significantly higher SS% values for 25%-counts scans (+19%) are confirmed compared to those of 50%-counts scan, independently of using the half-counts or the full-counts IRR databases. Astonish TM requires the adoption of the own specific normal databases in order to prevent very high overestimation of both stress and rest perfusion scores. Conversely, the count statistics of the normal databases seems not to influence the quantification scores.
Ellison, Jonathan S; Montgomery, Jeffrey S; Wolf, J Stuart; Hafez, Khaled S; Miller, David C; Weizer, Alon Z
2012-07-01
Minimally invasive nephron sparing surgery is gaining popularity for small renal masses. Few groups have evaluated robot-assisted partial nephrectomy compared to other approaches using comparable patient populations. We present a matched pair analysis of a heterogeneous group of surgeons who performed robot-assisted partial nephrectomy and a single experienced laparoscopic surgeon who performed conventional laparoscopic partial nephrectomy. Perioperative outcomes and complications were compared. All 249 conventional laparoscopic and robot-assisted partial nephrectomy cases from January 2007 to June 2010 were reviewed from our prospectively maintained institutional database. Groups were matched 1:1 (108 matched pairs) by R.E.N.A.L. (radius, exophytic/endophytic properties, nearness of tumor to collecting system or sinus, anterior/posterior, location relative to polar lines) nephrometry score, transperitoneal vs retroperitoneal approach, patient age and hilar nature of the tumor. Statistical analysis was done to compare operative outcomes and complications. Matched analysis revealed that nephrometry score, age, gender, tumor side and American Society of Anesthesia physical status classification were similar. Operative time favored conventional laparoscopic partial nephrectomy. During the study period robot-assisted partial nephrectomy showed significant improvements in estimated blood loss and warm ischemia time compared to those of the experienced conventional laparoscopic group. Postoperative complication rates, and complication distributions by Clavien classification and type were similar for conventional laparoscopic and robot-assisted partial nephrectomy (41.7% and 35.0%, respectively). Robot-assisted partial nephrectomy has a noticeable but rapid learning curve. After it is overcome the robotic procedure results in perioperative outcomes similar to those achieved with conventional laparoscopic partial nephrectomy done by an experienced surgeon. Robot-assisted partial nephrectomy likely improves surgeon and patient accessibility to minimally invasive nephron sparing surgery. Copyright © 2012 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
Kane, Christopher
2013-02-01
Minimally invasive nephron sparing surgery is gaining popularity for small renal masses. Few groups have evaluated robot-assisted partial nephrectomy compared to other approaches using comparable patient populations. We present a matched pair analysis of a heterogeneous group of surgeons who performed robot-assisted partial nephrectomy and a single experienced laparoscopic surgeon who performed conventional laparoscopic partial nephrectomy. Perioperative outcomes and complications were compared. All 249 conventional laparoscopic and robot-assisted partial nephrectomy cases from January 2007 to June 2010 were reviewed from our prospectively maintained institutional database. Groups were matched 1:1 (108 matched pairs) by R.E.N.A.L. (radius, exophytic/endophytic properties, nearness of tumor to collecting system or sinus, anterior/posterior, location relative to polar lines) nephrometry score, transperitoneal vs retroperitoneal approach, patient age and hilar nature of the tumor. Statistical analysis was done to compare operative outcomes and complications. Matched analysis revealed that nephrometry score, age, gender, tumor side and American Society of Anesthesia physical status classification were similar. Operative time favored conventional laparoscopic partial nephrectomy. During the study period robot-assisted partial nephrectomy showed significant improvements in estimated blood loss and warm ischemia time compared to those of the experienced conventional laparoscopic group. Postoperative complication rates, and complication distributions by Clavien classification and type were similar for conventional laparoscopic and robot-assisted partial nephrectomy (41.7% and 35.0%, respectively). Robot-assisted partial nephrectomy has a noticeable but rapid learning curve. After it is overcome the robotic procedure results in perioperative outcomes similar to those achieved with conventional laparoscopic partial nephrectomy done by an experienced surgeon. Robot-assisted partial nephrectomy likely improves surgeon and patient accessibility to minimally invasive nephron sparing surgery. Copyright © 2013 Elsevier Inc. All rights reserved.
Tagare, Hemant D.; Jaffe, C. Carl; Duncan, James
1997-01-01
Abstract Information contained in medical images differs considerably from that residing in alphanumeric format. The difference can be attributed to four characteristics: (1) the semantics of medical knowledge extractable from images is imprecise; (2) image information contains form and spatial data, which are not expressible in conventional language; (3) a large part of image information is geometric; (4) diagnostic inferences derived from images rest on an incomplete, continuously evolving model of normality. This paper explores the differentiating characteristics of text versus images and their impact on design of a medical image database intended to allow content-based indexing and retrieval. One strategy for implementing medical image databases is presented, which employs object-oriented iconic queries, semantics by association with prototypes, and a generic schema. PMID:9147338
FIELD PERFORMANCE OF WOODBURNING STOVES IN CRESTED BUTTE, COLORADO
The paper discusses field emissions from woodstoves measured in Crested Butte, Colorado, during the winters of 1988-89 and 1989-90. Both particulate matter and carbon monoxide emissions were measured. The database from this work is large, including conventional stoves and EPA-cer...
Submission of nucleotide sequence eimeria acervulina profilin to genbank database
USDA-ARS?s Scientific Manuscript database
Poultry coccidiosis, caused by intestinal protozoa Eimeria, is a severe problem for the poultry industry, leading to a substantial economic burden of over three billion dollars worldwide. Conventional vaccines including live vaccines and attenuated vaccines could cause mild to severe reactions Numer...
Smartphone-Based VOC Sensor Using Colorimetric Polydiacetylenes.
Park, Dong-Hoon; Heo, Jung-Moo; Jeong, Woomin; Yoo, Young Hyuk; Park, Bum Jun; Kim, Jong-Man
2018-02-07
Owing to a unique colorimetric (typically blue-to-red) feature upon environmental stimulation, polydiacetylenes (PDAs) have been actively employed in chemosensor systems. We developed a highly accurate and simple volatile organic compound (VOC) sensor system that can be operated using a conventional smartphone. The procedure begins with forming an array of four different PDAs on conventional paper using inkjet printing of four corresponding diacetylenes followed by photopolymerization. A database of color changes (i.e., red and hue values) is then constructed on the basis of different solvatochromic responses of the 4 PDAs to 11 organic solvents. Exposure of the PDA array to an unknown solvent promotes color changes, which are imaged using a smartphone camera and analyzed using the app. A comparison of the color changes to the database promoted by the 11 solvents enables the smartphone app to identify the unknown solvent with 100% accuracy. Additionally, it was demonstrated that the PDA array sensor was sufficiently sensitive to accurately detect the 11 VOC gases.
Method and system for data clustering for very large databases
NASA Technical Reports Server (NTRS)
Livny, Miron (Inventor); Zhang, Tian (Inventor); Ramakrishnan, Raghu (Inventor)
1998-01-01
Multi-dimensional data contained in very large databases is efficiently and accurately clustered to determine patterns therein and extract useful information from such patterns. Conventional computer processors may be used which have limited memory capacity and conventional operating speed, allowing massive data sets to be processed in a reasonable time and with reasonable computer resources. The clustering process is organized using a clustering feature tree structure wherein each clustering feature comprises the number of data points in the cluster, the linear sum of the data points in the cluster, and the square sum of the data points in the cluster. A dense region of data points is treated collectively as a single cluster, and points in sparsely occupied regions can be treated as outliers and removed from the clustering feature tree. The clustering can be carried out continuously with new data points being received and processed, and with the clustering feature tree being restructured as necessary to accommodate the information from the newly received data points.
Autonomous vehicle motion control, approximate maps, and fuzzy logic
NASA Technical Reports Server (NTRS)
Ruspini, Enrique H.
1993-01-01
Progress on research on the control of actions of autonomous mobile agents using fuzzy logic is presented. The innovations described encompass theoretical and applied developments. At the theoretical level, results of research leading to the combined utilization of conventional artificial planning techniques with fuzzy logic approaches for the control of local motion and perception actions are presented. Also formulations of dynamic programming approaches to optimal control in the context of the analysis of approximate models of the real world are examined. Also a new approach to goal conflict resolution that does not require specification of numerical values representing relative goal importance is reviewed. Applied developments include the introduction of the notion of approximate map. A fuzzy relational database structure for the representation of vague and imprecise information about the robot's environment is proposed. Also the central notions of control point and control structure are discussed.
Rahi, Praveen; Prakash, Om; Shouche, Yogesh S.
2016-01-01
Matrix-assisted laser desorption/ionization time-of-flight mass-spectrometry (MALDI-TOF MS) based biotyping is an emerging technique for high-throughput and rapid microbial identification. Due to its relatively higher accuracy, comprehensive database of clinically important microorganisms and low-cost compared to other microbial identification methods, MALDI-TOF MS has started replacing existing practices prevalent in clinical diagnosis. However, applicability of MALDI-TOF MS in the area of microbial ecology research is still limited mainly due to the lack of data on non-clinical microorganisms. Intense research activities on cultivation of microbial diversity by conventional as well as by innovative and high-throughput methods has substantially increased the number of microbial species known today. This important area of research is in urgent need of rapid and reliable method(s) for characterization and de-replication of microorganisms from various ecosystems. MALDI-TOF MS based characterization, in our opinion, appears to be the most suitable technique for such studies. Reliability of MALDI-TOF MS based identification method depends mainly on accuracy and width of reference databases, which need continuous expansion and improvement. In this review, we propose a common strategy to generate MALDI-TOF MS spectral database and advocated its sharing, and also discuss the role of MALDI-TOF MS based high-throughput microbial identification in microbial ecology studies. PMID:27625644
Optimized volume models of earthquake-triggered landslides
Xu, Chong; Xu, Xiwei; Shen, Lingling; Yao, Qi; Tan, Xibin; Kang, Wenjun; Ma, Siyuan; Wu, Xiyan; Cai, Juntao; Gao, Mingxing; Li, Kang
2016-01-01
In this study, we proposed three optimized models for calculating the total volume of landslides triggered by the 2008 Wenchuan, China Mw 7.9 earthquake. First, we calculated the volume of each deposit of 1,415 landslides triggered by the quake based on pre- and post-quake DEMs in 20 m resolution. The samples were used to fit the conventional landslide “volume-area” power law relationship and the 3 optimized models we proposed, respectively. Two data fitting methods, i.e. log-transformed-based linear and original data-based nonlinear least square, were employed to the 4 models. Results show that original data-based nonlinear least square combining with an optimized model considering length, width, height, lithology, slope, peak ground acceleration, and slope aspect shows the best performance. This model was subsequently applied to the database of landslides triggered by the quake except for two largest ones with known volumes. It indicates that the total volume of the 196,007 landslides is about 1.2 × 1010 m3 in deposit materials and 1 × 1010 m3 in source areas, respectively. The result from the relationship of quake magnitude and entire landslide volume related to individual earthquake is much less than that from this study, which reminds us the necessity to update the power-law relationship. PMID:27404212
Optimized volume models of earthquake-triggered landslides.
Xu, Chong; Xu, Xiwei; Shen, Lingling; Yao, Qi; Tan, Xibin; Kang, Wenjun; Ma, Siyuan; Wu, Xiyan; Cai, Juntao; Gao, Mingxing; Li, Kang
2016-07-12
In this study, we proposed three optimized models for calculating the total volume of landslides triggered by the 2008 Wenchuan, China Mw 7.9 earthquake. First, we calculated the volume of each deposit of 1,415 landslides triggered by the quake based on pre- and post-quake DEMs in 20 m resolution. The samples were used to fit the conventional landslide "volume-area" power law relationship and the 3 optimized models we proposed, respectively. Two data fitting methods, i.e. log-transformed-based linear and original data-based nonlinear least square, were employed to the 4 models. Results show that original data-based nonlinear least square combining with an optimized model considering length, width, height, lithology, slope, peak ground acceleration, and slope aspect shows the best performance. This model was subsequently applied to the database of landslides triggered by the quake except for two largest ones with known volumes. It indicates that the total volume of the 196,007 landslides is about 1.2 × 10(10) m(3) in deposit materials and 1 × 10(10) m(3) in source areas, respectively. The result from the relationship of quake magnitude and entire landslide volume related to individual earthquake is much less than that from this study, which reminds us the necessity to update the power-law relationship.
Performance evaluation of wavelet-based face verification on a PDA recorded database
NASA Astrophysics Data System (ADS)
Sellahewa, Harin; Jassim, Sabah A.
2006-05-01
The rise of international terrorism and the rapid increase in fraud and identity theft has added urgency to the task of developing biometric-based person identification as a reliable alternative to conventional authentication methods. Human Identification based on face images is a tough challenge in comparison to identification based on fingerprints or Iris recognition. Yet, due to its unobtrusive nature, face recognition is the preferred method of identification for security related applications. The success of such systems will depend on the support of massive infrastructures. Current mobile communication devices (3G smart phones) and PDA's are equipped with a camera which can capture both still and streaming video clips and a touch sensitive display panel. Beside convenience, such devices provide an adequate secure infrastructure for sensitive & financial transactions, by protecting against fraud and repudiation while ensuring accountability. Biometric authentication systems for mobile devices would have obvious advantages in conflict scenarios when communication from beyond enemy lines is essential to save soldier and civilian life. In areas of conflict or disaster the luxury of fixed infrastructure is not available or destroyed. In this paper, we present a wavelet-based face verification scheme that have been specifically designed and implemented on a currently available PDA. We shall report on its performance on the benchmark audio-visual BANCA database and on a newly developed PDA recorded audio-visual database that take include indoor and outdoor recordings.
NASA Astrophysics Data System (ADS)
Wolfgramm, Bettina; Hurni, Hans; Liniger, Hanspeter; Ruppen, Sebastian; Milne, Eleanor; Bader, Hans-Peter; Scheidegger, Ruth; Amare, Tadele; Yitaferu, Birru; Nazarmavloev, Farrukh; Conder, Malgorzata; Ebneter, Laura; Qadamov, Aslam; Shokirov, Qobiljon; Hergarten, Christian; Schwilch, Gudrun
2013-04-01
There is a fundamental mutual interest between enhancing soil organic carbon (SOC) in the world's soils and the objectives of the major global environmental conventions (UNFCCC, UNCBD, UNCCD). While there is evidence at the case study level that sustainable land management (SLM) technologies increase SOC stocks and SOC related benefits, there is no quantitative data available on the potential for increasing SOC benefits from different SLM technologies and especially from case studies in the developing countries, and a clear understanding of the trade-offs related to SLM up-scaling is missing. This study aims at assessing the potential increase of SOC under SLM technologies worldwide, evaluating tradeoffs and gains in up-scaling SLM for case studies in Tajikistan, Ethiopia and Switzerland. It makes use of the SLM technologies documented in the online database of the World Overview of Conservation Approaches and Technologies (WOCAT). The study consists of three components: 1) Identifying SOC benefits contributing to the major global environmental issues for SLM technologies worldwide as documented in the WOCAT global database 2) Validation of SOC storage potentials and SOC benefit predictions for SLM technologies from the WOCAT database using results from existing comparative case studies at the plot level, using soil spectral libraries and standardized documentations of ecosystem service from the WOCAT database. 3) Understanding trade-offs and win-win scenarios of up-scaling SLM technologies from the plot to the household and landscape level using material flow analysis. This study builds on the premise that the most promising way to increase benefits from land management is to consider already existing sustainable strategies. Such SLM technologies from all over the world documented are accessible in a standardized way in the WOCAT online database. The study thus evaluates SLM technologies from the WOCAT database by calculating the potential SOC storage increase and related benefits by comparing SOC estimates before-and-after establishment of the SLM technology. These results are validated using comparative case studies of plots with-and-without SLM technologies (existing SLM systems versus surrounding, degrading systems). In view of upscaling SLM technologies, it is crucial to understand tradeoffs and gains supporting or hindering the further spread. Systemic biomass management analysis using material flow analysis allows quantifying organic carbon flows and storages for different land management options at the household, but also at landscape level. The study shows results relevant for science, policy and practice for accounting, monitoring and evaluating SOC related ecosystem services: - A comprehensive methodology for SLM impact assessments allowing quantification of SOC storage and SOC related benefits under different SLM technologies, and - Improved understanding of upscaling options for SLM technologies and tradeoffs as well as win-win opportunities for biomass management, SOC content increase, and ecosystem services improvement at the plot and household level.
Anorexia: Highlights in Traditional Persian medicine and conventional medicine
Nimrouzi, Majid; Zarshenas, Mohammad Mehdi
2018-01-01
Objective: Anorexia and impaired appetite (Dysorexia) are common symptoms with varying causes, and often need no serious medical intervention. Anorexia nervosa (AN) is a chronic psychiatric disease with a high mortality rate. In Traditional Persian Medicine (TPM), anorexia is a condition in which anorexic patients lose appetite due to dystemperament. This review aims to discuss the common points of traditional and conventional approaches rather than introducing Persian medical recommendations suitable for nowadays use. Materials and Methods: For this purpose, Avicenna's Canon of Medicine, main TPM resources and important databases were reviewed using the related keywords. Results: Despite complex hormonal explanation, etiology of AN in conventional approach is not completely understood. In TPM approach, the etiology and recommended interventions are thoroughly defined based on humoral pathophysiology. In TPM approach, disease states are regarded as the result of imbalances in organs’ temperament and humors. In anorexia with simple dystemperament, the physician should attempt to balance the temperament using foods and medicaments which have opposite quality of temperament. Lifestyle, spiritual diseases (neuro – psychological) and gastrointestinal worms are the other causes for reducing appetite. Also, medicines and foods with warm temperaments (such as Pea soup and Mustard) are useful for these patients (cold temperament). Conclusion: Although the pathophysiology of AN in TPM is different in comparison with conventional views, TPM criteria for treatment this disorder is similar to those of current medicine. Recommending to have spiritual support and a healthy lifestyle are common in both views. Simple safe interventions recommended by TPM may be considered as alternative medical modalities after being confirmed by well-designed clinical trials. PMID:29387569
Zygogiannis, Kostas; Wismeijer, Daniel; Aartman, Irene Ha; Osman, Reham B
2016-01-01
Different treatment protocols in terms of number, diameter, and suprastructure design have been proposed for immediately loaded implants that are used to support mandibular overdentures opposed by maxillary conventional dentures. The aim of this study was to investigate the influence of these protocols on survival rates as well as clinical and prosthodontic outcomes. Several electronic databases were searched for all relevant articles published from 1966 to June 2014. Only randomized controlled trials and prospective studies with a minimum follow-up of 12 months were selected. The primary outcomes of interest were the success and survival rates of the implants. Prosthodontic complications were also evaluated. Fourteen studies fulfilled the inclusion criteria. Of the studies identified, nine were randomized controlled trials and five were prospective studies. The mean follow-up period was 3 years or less for the vast majority of the studies. The reported survival and success rates were comparable to that of conventional loading for most of the included studies. No specific immediate loading protocol seemed to perform better in terms of clinical and prosthodontic outcomes. Immediate loading protocols of mandibular overdentures seem to be a viable alternative to conventional loading. It was not possible to recommend a specific treatment protocol related to the number, diameter of the implants, and attachment system used. Long-term, well-designed studies comparing different immediate loading modalities could help to establish a protocol that delivers the most clinically predictable, efficient, and cost-effective outcome for edentulous patients in need of implant overdentures.
Short Fiction on Film: A Relational DataBase.
ERIC Educational Resources Information Center
May, Charles
Short Fiction on Film is a database that was created and will run on DataRelator, a relational database manager created by Bill Finzer for the California State Department of Education in 1986. DataRelator was designed for use in teaching students database management skills and to provide teachers with examples of how a database manager might be…
An efficient temporal database design method based on EER
NASA Astrophysics Data System (ADS)
Liu, Zhi; Huang, Jiping; Miao, Hua
2007-12-01
Many existing methods of modeling temporal information are based on logical model, which makes relational schema optimization more difficult and more complicated. In this paper, based on the conventional EER model, the author attempts to analyse and abstract temporal information in the phase of conceptual modelling according to the concrete requirement to history information. Then a temporal data model named BTEER is presented. BTEER not only retains all designing ideas and methods of EER which makes BTEER have good upward compatibility, but also supports the modelling of valid time and transaction time effectively at the same time. In addition, BTEER can be transformed to EER easily and automatically. It proves in practice, this method can model the temporal information well.
Class dependency of fuzzy relational database using relational calculus and conditional probability
NASA Astrophysics Data System (ADS)
Deni Akbar, Mohammad; Mizoguchi, Yoshihiro; Adiwijaya
2018-03-01
In this paper, we propose a design of fuzzy relational database to deal with a conditional probability relation using fuzzy relational calculus. In the previous, there are several researches about equivalence class in fuzzy database using similarity or approximate relation. It is an interesting topic to investigate the fuzzy dependency using equivalence classes. Our goal is to introduce a formulation of a fuzzy relational database model using the relational calculus on the category of fuzzy relations. We also introduce general formulas of the relational calculus for the notion of database operations such as ’projection’, ’selection’, ’injection’ and ’natural join’. Using the fuzzy relational calculus and conditional probabilities, we introduce notions of equivalence class, redundant, and dependency in the theory fuzzy relational database.
Critical evaluation and thermodynamic optimization of the Iron-Rare-Earth systems
NASA Astrophysics Data System (ADS)
Konar, Bikram
Rare-Earth elements by virtue of its typical magnetic, electronic and chemical properties are gaining importance in power, electronics, telecommunications and sustainable green technology related industries. The Magnets from RE-alloys are more powerful than conventional magnets which have more longevity and high temperature workability. The dis-equilibrium in the Rare-Earth element supply and demand has increased the importance of recycling and extraction of REE's from used permanent Magnets. However, lack of the thermodynamic data on RE alloys has made it difficult to design an effective extraction and recycling process. In this regard, Computational Thermodynamic calculations can serve as a cost effective and less time consuming tool to design a waste magnet recycling process. The most common RE permanent magnet is Nd magnet (Nd 2Fe14B). Various elements such as Dy, Tb, Pr, Cu, Co, Ni, etc. are also added to increase its magnetic and mechanical properties. In order to perform reliable thermodynamic calculations for the RE recycling process, accurate thermodynamic database for RE and related alloys are required. The thermodynamic database can be developed using the so-called CALPHAD method. The database development based on the CALPHAD method is essentially the critical evaluation and optimization of all available thermodynamic and phase diagram data. As a results, one set of self-consistent thermodynamic functions for all phases in the given system can be obtained, which can reproduce all reliable thermodynamic and phase diagram data. The database containing the optimized Gibbs energy functions can be used to calculate complex chemical reactions for any high temperature processes. Typically a Gibbs energy minimization routine, such as in FactSage software, can be used to obtain the accurate thermodynamic equilibrium in multicomponent systems. As part of a large thermodynamic database development for permanent magnet recycling and Mg alloy design, all thermodynamic and phase diagram data in the literature for the fourteen Fe-RE binary systems: Fe-La, Fe-Ce, Fe-Pr, Fe-Nd, Fe-Sm, Fe-Gd, Fe-Tb, Fe-Dy, Fe-Ho, Fe-Er, Fe-Tm, Fe-Lu, Fe-Sc and Fe-Y are critically evaluated and optimized to obtain thermodynamic model parameters. The model parameters can be used to calculate phase diagrams and Gibbs energies of all phases as functions of temperature and composition. This database can be incorporated with the present thermodynamic database in FactSage software to perform complex chemical reactions and phase diagram calculations for RE magnet recycling process.
Jagtap, Pratik; Goslinga, Jill; Kooren, Joel A; McGowan, Thomas; Wroblewski, Matthew S; Seymour, Sean L; Griffin, Timothy J
2013-04-01
Large databases (>10(6) sequences) used in metaproteomic and proteogenomic studies present challenges in matching peptide sequences to MS/MS data using database-search programs. Most notably, strict filtering to avoid false-positive matches leads to more false negatives, thus constraining the number of peptide matches. To address this challenge, we developed a two-step method wherein matches derived from a primary search against a large database were used to create a smaller subset database. The second search was performed against a target-decoy version of this subset database merged with a host database. High confidence peptide sequence matches were then used to infer protein identities. Applying our two-step method for both metaproteomic and proteogenomic analysis resulted in twice the number of high confidence peptide sequence matches in each case, as compared to the conventional one-step method. The two-step method captured almost all of the same peptides matched by the one-step method, with a majority of the additional matches being false negatives from the one-step method. Furthermore, the two-step method improved results regardless of the database search program used. Our results show that our two-step method maximizes the peptide matching sensitivity for applications requiring large databases, especially valuable for proteogenomics and metaproteomics studies. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Hayashi, Takanori; Matsuzaki, Yuri; Yanagisawa, Keisuke; Ohue, Masahito; Akiyama, Yutaka
2018-05-08
Protein-protein interactions (PPIs) play several roles in living cells, and computational PPI prediction is a major focus of many researchers. The three-dimensional (3D) structure and binding surface are important for the design of PPI inhibitors. Therefore, rigid body protein-protein docking calculations for two protein structures are expected to allow elucidation of PPIs different from known complexes in terms of 3D structures because known PPI information is not explicitly required. We have developed rapid PPI prediction software based on protein-protein docking, called MEGADOCK. In order to fully utilize the benefits of computational PPI predictions, it is necessary to construct a comprehensive database to gather prediction results and their predicted 3D complex structures and to make them easily accessible. Although several databases exist that provide predicted PPIs, the previous databases do not contain a sufficient number of entries for the purpose of discovering novel PPIs. In this study, we constructed an integrated database of MEGADOCK PPI predictions, named MEGADOCK-Web. MEGADOCK-Web provides more than 10 times the number of PPI predictions than previous databases and enables users to conduct PPI predictions that cannot be found in conventional PPI prediction databases. In MEGADOCK-Web, there are 7528 protein chains and 28,331,628 predicted PPIs from all possible combinations of those proteins. Each protein structure is annotated with PDB ID, chain ID, UniProt AC, related KEGG pathway IDs, and known PPI pairs. Additionally, MEGADOCK-Web provides four powerful functions: 1) searching precalculated PPI predictions, 2) providing annotations for each predicted protein pair with an experimentally known PPI, 3) visualizing candidates that may interact with the query protein on biochemical pathways, and 4) visualizing predicted complex structures through a 3D molecular viewer. MEGADOCK-Web provides a huge amount of comprehensive PPI predictions based on docking calculations with biochemical pathways and enables users to easily and quickly assess PPI feasibilities by archiving PPI predictions. MEGADOCK-Web also promotes the discovery of new PPIs and protein functions and is freely available for use at http://www.bi.cs.titech.ac.jp/megadock-web/ .
Analysis of DIRAC's behavior using model checking with process algebra
NASA Astrophysics Data System (ADS)
Remenska, Daniela; Templon, Jeff; Willemse, Tim; Bal, Henri; Verstoep, Kees; Fokkink, Wan; Charpentier, Philippe; Graciani Diaz, Ricardo; Lanciotti, Elisa; Roiser, Stefan; Ciba, Krzysztof
2012-12-01
DIRAC is the grid solution developed to support LHCb production activities as well as user data analysis. It consists of distributed services and agents delivering the workload to the grid resources. Services maintain database back-ends to store dynamic state information of entities such as jobs, queues, staging requests, etc. Agents use polling to check and possibly react to changes in the system state. Each agent's logic is relatively simple; the main complexity lies in their cooperation. Agents run concurrently, and collaborate using the databases as shared memory. The databases can be accessed directly by the agents if running locally or through a DIRAC service interface if necessary. This shared-memory model causes entities to occasionally get into inconsistent states. Tracing and fixing such problems becomes formidable due to the inherent parallelism present. We propose more rigorous methods to cope with this. Model checking is one such technique for analysis of an abstract model of a system. Unlike conventional testing, it allows full control over the parallel processes execution, and supports exhaustive state-space exploration. We used the mCRL2 language and toolset to model the behavior of two related DIRAC subsystems: the workload and storage management system. Based on process algebra, mCRL2 allows defining custom data types as well as functions over these. This makes it suitable for modeling the data manipulations made by DIRAC's agents. By visualizing the state space and replaying scenarios with the toolkit's simulator, we have detected race-conditions and deadlocks in these systems, which, in several cases, were confirmed to occur in the reality. Several properties of interest were formulated and verified with the tool. Our future direction is automating the translation from DIRAC to a formal model.
A Current Approach to Halitosis and Oral Malodor- A Mini Review.
Bicak, Damla Aksit
2018-01-01
Halitosis, in other words, oral malodor is an important multifactorial health problem affecting the psychological and social life of individuals and is the most common reason for referral to dentists after dental caries and periodontal diseases. The objective of this review was to present and discuss conventional and recently introduced information about the types, causes, detection and treatment methods of halitosis. An expanded literature review was conducted which targeted all articles published in peer-reviewed journals relating to the topic of halitosis. Only articles written in Turkish and English languages were considered. The review itself began with a search of relevant subject headings such as 'halitosis, oral malodor, volatile sulfur compounds in PubMed/Medline, Scopus, Google Scholar and Tubitak Ulakbim databases. A hand search of references was also performed. When search results are combined, the total number of relevant literature was found to be 4646 abstracts and 978 full-text articles. Abstracts, editorial letters were not included and about half of full-text articles were not related to dental practice. Among the remaining 124 full-text articles, duplicated articles and articles written other than Turkish and English languages were removed and 54 full-text articles were used for this review. According to the reviewed articles, both conventional and new methods were introduced in the management of halitosis. However, conventional methods seem to be more effective and widely used in the diagnosis and treatment of halitosis. As being first line professionals, dentists must analyze and treat oral problems which may be responsible for the patient's malodor, and should inform the patient about halitosis causes and oral hygiene procedures (tooth flossing, tongue cleaning, appropriate mouthwash and toothpaste selection and use) and if the problem persists, they should consult to a medical specialist.
Relational Database for the Geology of the Northern Rocky Mountains - Idaho, Montana, and Washington
Causey, J. Douglas; Zientek, Michael L.; Bookstrom, Arthur A.; Frost, Thomas P.; Evans, Karl V.; Wilson, Anna B.; Van Gosen, Bradley S.; Boleneus, David E.; Pitts, Rebecca A.
2008-01-01
A relational database was created to prepare and organize geologic map-unit and lithologic descriptions for input into a spatial database for the geology of the northern Rocky Mountains, a compilation of forty-three geologic maps for parts of Idaho, Montana, and Washington in U.S. Geological Survey Open File Report 2005-1235. Not all of the information was transferred to and incorporated in the spatial database due to physical file limitations. This report releases that part of the relational database that was completed for that earlier product. In addition to descriptive geologic information for the northern Rocky Mountains region, the relational database contains a substantial bibliography of geologic literature for the area. The relational database nrgeo.mdb (linked below) is available in Microsoft Access version 2000, a proprietary database program. The relational database contains data tables and other tables used to define terms, relationships between the data tables, and hierarchical relationships in the data; forms used to enter data; and queries used to extract data.
Matching shapes with self-intersections: application to leaf classification.
Mokhtarian, Farzin; Abbasi, Sadegh
2004-05-01
We address the problem of two-dimensional (2-D) shape representation and matching in presence of self-intersection for large image databases. This may occur when part of an object is hidden behind another part and results in a darker section in the gray level image of the object. The boundary contour of the object must include the boundary of this part which is entirely inside the outline of the object. The Curvature Scale Space (CSS) image of a shape is a multiscale organization of its inflection points as it is smoothed. The CSS-based shape representation method has been selected for MPEG-7 standardization. We study the effects of contour self-intersection on the Curvature Scale Space image. When there is no self-intersection, the CSS image contains several arch shape contours, each related to a concavity or a convexity of the shape. Self intersections create contours with minima as well as maxima in the CSS image. An efficient shape representation method has been introduced in this paper which describes a shape using the maxima as well as the minima of its CSS contours. This is a natural generalization of the conventional method which only includes the maxima of the CSS image contours. The conventional matching algorithm has also been modified to accommodate the new information about the minima. The method has been successfully used in a real world application to find, for an unknown leaf, similar classes from a database of classified leaf images representing different varieties of chrysanthemum. For many classes of leaves, self-intersection is inevitable during the scanning of the image. Therefore the original contributions of this paper is the generalization of the Curvature Scale Space representation to the class of 2-D contours with self-intersection, and its application to the classification of Chrysanthemum leaves.
Migration from relational to NoSQL database
NASA Astrophysics Data System (ADS)
Ghotiya, Sunita; Mandal, Juhi; Kandasamy, Saravanakumar
2017-11-01
Data generated by various real time applications, social networking sites and sensor devices is of very huge amount and unstructured, which makes it difficult for Relational database management systems to handle the data. Data is very precious component of any application and needs to be analysed after arranging it in some structure. Relational databases are only able to deal with structured data, so there is need of NoSQL Database management System which can deal with semi -structured data also. Relational database provides the easiest way to manage the data but as the use of NoSQL is increasing it is becoming necessary to migrate the data from Relational to NoSQL databases. Various frameworks has been proposed previously which provides mechanisms for migration of data stored at warehouses in SQL, middle layer solutions which can provide facility of data to be stored in NoSQL databases to handle data which is not structured. This paper provides a literature review of some of the recent approaches proposed by various researchers to migrate data from relational to NoSQL databases. Some researchers proposed mechanisms for the co-existence of NoSQL and Relational databases together. This paper provides a summary of mechanisms which can be used for mapping data stored in Relational databases to NoSQL databases. Various techniques for data transformation and middle layer solutions are summarised in the paper.
Teaching About the Constitution.
ERIC Educational Resources Information Center
White, Charles S.
1988-01-01
Reviews "The U.S. Constitution Then and Now," a two-unit program using the integrated database and word processing capabilities of AppleWorks. For grades 7-12, the units simulate the constitutional convention and the principles of free speech and privacy. Concludes that with adequate time, the program can provide a potentially powerful…
Automating Relational Database Design for Microcomputer Users.
ERIC Educational Resources Information Center
Pu, Hao-Che
1991-01-01
Discusses issues involved in automating the relational database design process for microcomputer users and presents a prototype of a microcomputer-based system (RA, Relation Assistant) that is based on expert systems technology and helps avoid database maintenance problems. Relational database design is explained and the importance of easy input…
A probabilistic approach to information retrieval in heterogeneous databases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chatterjee, A.; Segev, A.
During the post decade, organizations have increased their scope and operations beyond their traditional geographic boundaries. At the same time, they have adopted heterogeneous and incompatible information systems independent of each other without a careful consideration that one day they may need to be integrated. As a result of this diversity, many important business applications today require access to data stored in multiple autonomous databases. This paper examines a problem of inter-database information retrieval in a heterogeneous environment, where conventional techniques are no longer efficient. To solve the problem, broader definitions for join, union, intersection and selection operators are proposed.more » Also, a probabilistic method to specify the selectivity of these operators is discussed. An algorithm to compute these probabilities is provided in pseudocode.« less
Associative memory model for searching an image database by image snippet
NASA Astrophysics Data System (ADS)
Khan, Javed I.; Yun, David Y.
1994-09-01
This paper presents an associative memory called an multidimensional holographic associative computing (MHAC), which can be potentially used to perform feature based image database query using image snippet. MHAC has the unique capability to selectively focus on specific segments of a query frame during associative retrieval. As a result, this model can perform search on the basis of featural significance described by a subset of the snippet pixels. This capability is critical for visual query in image database because quite often the cognitive index features in the snippet are statistically weak. Unlike, the conventional artificial associative memories, MHAC uses a two level representation and incorporates additional meta-knowledge about the reliability status of segments of information it receives and forwards. In this paper we present the analysis of focus characteristics of MHAC.
A proposal of fuzzy connective with learning function and its application to fuzzy retrieval system
NASA Technical Reports Server (NTRS)
Hayashi, Isao; Naito, Eiichi; Ozawa, Jun; Wakami, Noboru
1993-01-01
A new fuzzy connective and a structure of network constructed by fuzzy connectives are proposed to overcome a drawback of conventional fuzzy retrieval systems. This network represents a retrieval query and the fuzzy connectives in networks have a learning function to adjust its parameters by data from a database and outputs of a user. The fuzzy retrieval systems employing this network are also constructed. Users can retrieve results even with a query whose attributes do not exist in a database schema and can get satisfactory results for variety of thinkings by learning function.
Significance of White-Coat Hypertension in Older Persons With Isolated Systolic Hypertension
Franklin, Stanley S.; Thijs, Lutgarde; Hansen, Tine W.; Li, Yan; Boggia, José; Kikuya, Masahiro; Björklund-Bodegård, Kristina; Ohkubo, Takayoshi; Jeppesen, Jørgen; Torp-Pedersen, Christian; Dolan, Eamon; Kuznetsova, Tatiana; Stolarz-Skrzypek, Katarzyna; Tikhonoff, Valérie; Malyutina, Sofia; Casiglia, Edoardo; Nikitin, Yuri; Lind, Lars; Sandoya, Edgardo; Kawecka-Jaszcz, Kalina; Imai, Yutaka; Wang, Jiguang; Ibsen, Hans; O’Brien, Eoin; Staessen, Jan A.
2013-01-01
The significance of white-coat hypertension in older persons with isolated systolic hypertension remains poorly understood. We analyzed subjects from the population-based 11-country International Database on Ambulatory Blood Pressure Monitoring in Relation to Cardiovascular Outcomes database who had daytime ambulatory blood pressure (BP; ABP) and conventional BP (CBP) measurements. After excluding persons with diastolic hypertension by CBP (≥90 mm Hg) or by daytime ABP (≥85 mm Hg), a history of cardiovascular disease, and persons <18 years of age, the present analysis totaled 7295 persons, of whom 1593 had isolated systolic hypertension. During a median follow-up of 10.6 years, there was a total of 655 fatal and nonfatal cardiovascular events. The analyses were stratified by treatment status. In untreated subjects, those with white-coat hypertension (CBP ≥140/<90 mm Hg and ABP <135/<85 mm Hg) and subjects with normal BP (CBP <140/<90 mm Hg and ABP <135/<85 mm Hg) were at similar risk (adjusted hazard rate: 1.17 [95% CI: 0.87–1.57]; P=0.29). Furthermore, in treated subjects with isolated systolic hypertension, the cardiovascular risk was similar in elevated conventional and normal daytime systolic BP as compared with those with normal conventional and normal daytime BPs (adjusted hazard rate: 1.10 [95% CI: 0.79–1.53]; P=0.57). However, both treated isolated systolic hypertension subjects with white-coat hypertension (adjusted hazard rate: 2.00; [95% CI: 1.43–2.79]; P<0.0001) and treated subjects with normal BP (adjusted hazard rate: 1.98 [95% CI: 1.49–2.62]; P<0.0001) were at higher risk as compared with untreated normotensive subjects. In conclusion, subjects with sustained hypertension who have their ABP normalized on antihypertensive therapy but with residual white-coat effect by CBP measurement have an entity that we have termed, “treated normalized hypertension.” Therefore, one should be cautious in applying the term “white-coat hypertension” to persons receiving antihypertensive treatment. PMID:22252396
Multi-Sensor Scene Synthesis and Analysis
1981-09-01
Quad Trees for Image Representation and Processing ...... ... 126 2.6.2 Databases ..... ..... ... ..... ... ..... ..... 138 2.6.2.1 Definitions and...Basic Concepts ....... 138 2.6.3 Use of Databases in Hierarchical Scene Analysis ...... ... ..................... 147 2.6.4 Use of Relational Tables...Multisensor Image Database Systems (MIDAS) . 161 2.7.2 Relational Database System for Pictures .... ..... 168 2.7.3 Relational Pictorial Database
Tang, Judith A Lam; Rieger, Jana M; Wolfaardt, Johan F
2008-01-01
This review examined literature that reported functional outcomes across 3 categories of prosthetic treatment after microvascular reconstruction of the maxilla and mandible: (1) conventional dental/tissue-supported prosthesis, (2) implant-retained prosthesis, and (3) no prosthesis. Library databases were searched for articles related to reconstruction of the maxilla and mandible, and references of selected articles were hand searched. Relevant literature was identified and reviewed with criteria specified a priori. Forty-nine articles met the inclusion criteria. Twelve articles reported on function after maxillary reconstruction, with the majority of articles reporting on outcomes for 1 to 6 subjects. Thirty-nine articles reported on function after mandibular reconstruction. Speech outcomes were satisfactory across all groups. Swallowing reports indicated that many patients who received either type of prosthetic rehabilitation resumed a normal diet, whereas those without prosthetic rehabilitation were often restricted to liquid diets or feeding tubes. Patients without prosthetic rehabilitation reportedly had poor masticatory ability, whereas conventional prosthetic treatment allowed some recovery of mastication and implant-retained prosthetic treatment resulted in the most favorable masticatory outcomes. Quality-of-life outcomes were similar across all patients. Several limitations of the current literature prevented definitive conclusions from being reached within this review, especially regarding maxillary reconstruction. However, recognition of these limitations can direct functional assessment for the future.
Basin centered gas systems of the U.S.
Popov, Marin A.; Nuccio, Vito F.; Dyman, Thaddeus S.; Gognat, Timothy A.; Johnson, Ronald C.; Schmoker, James W.; Wilson, Michael S.; Bartberger, Charles E.
2001-01-01
Basin-center accumulations, a type of continuous accumulation, have spatial dimensions equal to or exceeding those of conventional oil and gas accumulations, but unlike conventional fields, cannot be represented in terms of discrete, countable units delineated by downdip hydrocarbon-water contacts. Common geologic and production characteristics of continuous accumulations include their occurrence downdip from water-saturated rocks, lack of traditional trap or seal, relatively low matrix permeability, abnormal pressures (high or low), local interbedded source rocks, large in-place hydrocarbon volumes, and low recovery factors. The U.S. Geological Survey, in cooperation with the U.S. Department of Energy, National Energy Technology Laboratory, Morgantown, West Virginia, is currently re-evaluating the resource potential of basin-center gas accumulations in the U.S. in light of changing geologic perceptions about these accumulations (such as the role of subtle structures to produce sweet spots), and the availability of new data. Better geologic understanding of basin-center gas accumulations could result in new plays or revised plays relative to those of the U.S. Geological Survey 1995 National Assessment (Gautier and others, 1995). For this study, 33 potential basin-center gas accumulations throughout the U.S. were identified and characterized based on data from the published literature and from well and reservoir databases (Figure 1). However, well-known or established basin-center accumulations such as the Green River Basin, the Uinta Basin, and the Piceance Basin are not addressed in this study.
De Croon, Einar M; Sluiter, Judith K; Kuijer, P Paul F M; Frings-Dresen, Monique H W
2005-02-01
Conventional and innovative office concepts can be described according to three dimensions: (1) the office location (e.g. telework office versus conventional office); (2) the office lay-out (e.g. open lay-out versus cellular office); and (3) the office use (e.g. fixed versus shared workplaces). This review examined how these three office dimensions affect the office worker's job demands, job resources, short- and long-term reactions. Using search terms related to the office concept (dimensions), a systematic literature search starting from 1972 was conducted in seven databases. Subsequently, based on the quality of the studies and the consistency of the findings, the level of evidence for the observed findings was assessed. Out of 1091 hits 49 relevant studies were identified. Results provide strong evidence that working in open workplaces reduces privacy and job satisfaction. Limited evidence is available that working in open workplaces intensifies cognitive workload and worsens interpersonal relations; close distance between workstations intensifies cognitive workload and reduces privacy; and desk-sharing improves communication. Due to a lack of studies no evidence was obtained for an effect of the three office dimensions on long-term reactions. The results suggest that ergonomists involved in office innovation could play a meaningful role in safeguarding the worker's job demands, job resources and well-being. Attention should be paid, in particular, to effects of workplace openness by providing acoustic and visual protection.
Enhanced DIII-D Data Management Through a Relational Database
NASA Astrophysics Data System (ADS)
Burruss, J. R.; Peng, Q.; Schachter, J.; Schissel, D. P.; Terpstra, T. B.
2000-10-01
A relational database is being used to serve data about DIII-D experiments. The database is optimized for queries across multiple shots, allowing for rapid data mining by SQL-literate researchers. The relational database relates different experiments and datasets, thus providing a big picture of DIII-D operations. Users are encouraged to add their own tables to the database. Summary physics quantities about DIII-D discharges are collected and stored in the database automatically. Meta-data about code runs, MDSplus usage, and visualization tool usage are collected, stored in the database, and later analyzed to improve computing. Documentation on the database may be accessed through programming languages such as C, Java, and IDL, or through ODBC compliant applications such as Excel and Access. A database-driven web page also provides a convenient means for viewing database quantities through the World Wide Web. Demonstrations will be given at the poster.
A survey of commercial object-oriented database management systems
NASA Technical Reports Server (NTRS)
Atkins, John
1992-01-01
The object-oriented data model is the culmination of over thirty years of database research. Initially, database research focused on the need to provide information in a consistent and efficient manner to the business community. Early data models such as the hierarchical model and the network model met the goal of consistent and efficient access to data and were substantial improvements over simple file mechanisms for storing and accessing data. However, these models required highly skilled programmers to provide access to the data. Consequently, in the early 70's E.F. Codd, an IBM research computer scientists, proposed a new data model based on the simple mathematical notion of the relation. This model is known as the Relational Model. In the relational model, data is represented in flat tables (or relations) which have no physical or internal links between them. The simplicity of this model fostered the development of powerful but relatively simple query languages that now made data directly accessible to the general database user. Except for large, multi-user database systems, a database professional was in general no longer necessary. Database professionals found that traditional data in the form of character data, dates, and numeric data were easily represented and managed via the relational model. Commercial relational database management systems proliferated and performance of relational databases improved dramatically. However, there was a growing community of potential database users whose needs were not met by the relational model. These users needed to store data with data types not available in the relational model and who required a far richer modelling environment than that provided by the relational model. Indeed, the complexity of the objects to be represented in the model mandated a new approach to database technology. The Object-Oriented Model was the result.
NASA Astrophysics Data System (ADS)
Prata, F.; Stebel, K.
2013-12-01
Over the last few years there has been a recognition of the utility of satellite measurements to identify and track volcanic emissions that present a natural hazard to human populations. Mitigation of the volcanic hazard to life and the environment requires understanding of the properties of volcanic emissions, identifying the hazard in near real-time and being able to provide timely and accurate forecasts to affected areas. Amongst the many ways to measure volcanic emissions, satellite remote sensing is capable of providing global quantitative retrievals of important microphysical parameters such as ash mass loading, ash particle effective radius, infrared optical depth, SO2 partial and total column abundance, plume altitude, aerosol optical depth and aerosol absorbing index. The eruption of Eyjafjallajokull in April-May, 2010 led to increased research and measurement programs to better characterize properties of volcanic ash and the need to establish a data-base in which to store and access these data was confirmed. The European Space Agency (ESA) has recognized the importance of having a quality controlled data-base of satellite retrievals and has funded an activity (VAST) to develop novel remote sensing retrieval schemes and a data-base, initially focused on several recent hazardous volcanic eruptions. As a first step, satellite retrievals for the eruptions of Eyjafjallajokull, Grimsvotn, Puyhue-Cordon Caulle, Nabro, Merapi, Okmok, Kasatochi and Sarychev Peak are being considered. Here we describe the data, retrievals and methods being developed for the data-base. Three important applications of the data-base are illustrated related to the ash/aviation problem, to the impact of the Merapi volcanic eruption on the local population, and to estimate SO2 fluxes from active volcanoes-as a means to diagnose future unrest. Dispersion model simulations are also being included in the data-base. In time, data from conventional in situ sampling instruments, airborne and ground-based remote sensing platforms and other meta-data (bulk ash and gas properties, volcanic setting, volcanic eruption chronologies, hazards and impacts etc.) will be added. The data-base has the potential to provide the natural hazards community with the first dynamic atmospheric volcanic hazards map and will be a valuable tool particularly for global transport.
Technical Aspects of Interfacing MUMPS to an External SQL Relational Database Management System
Kuzmak, Peter M.; Walters, Richard F.; Penrod, Gail
1988-01-01
This paper describes an interface connecting InterSystems MUMPS (M/VX) to an external relational DBMS, the SYBASE Database Management System. The interface enables MUMPS to operate in a relational environment and gives the MUMPS language full access to a complete set of SQL commands. MUMPS generates SQL statements as ASCII text and sends them to the RDBMS. The RDBMS executes the statements and returns ASCII results to MUMPS. The interface suggests that the language features of MUMPS make it an attractive tool for use in the relational database environment. The approach described in this paper separates MUMPS from the relational database. Positioning the relational database outside of MUMPS promotes data sharing and permits a number of different options to be used for working with the data. Other languages like C, FORTRAN, and COBOL can access the RDBMS database. Advanced tools provided by the relational database vendor can also be used. SYBASE is an advanced high-performance transaction-oriented relational database management system for the VAX/VMS and UNIX operating systems. SYBASE is designed using a distributed open-systems architecture, and is relatively easy to interface with MUMPS.
An Algorithm of Association Rule Mining for Microbial Energy Prospection
Shaheen, Muhammad; Shahbaz, Muhammad
2017-01-01
The presence of hydrocarbons beneath earth’s surface produces some microbiological anomalies in soils and sediments. The detection of such microbial populations involves pure bio chemical processes which are specialized, expensive and time consuming. This paper proposes a new algorithm of context based association rule mining on non spatial data. The algorithm is a modified form of already developed algorithm which was for spatial database only. The algorithm is applied to mine context based association rules on microbial database to extract interesting and useful associations of microbial attributes with existence of hydrocarbon reserve. The surface and soil manifestations caused by the presence of hydrocarbon oxidizing microbes are selected from existing literature and stored in a shared database. The algorithm is applied on the said database to generate direct and indirect associations among the stored microbial indicators. These associations are then correlated with the probability of hydrocarbon’s existence. The numerical evaluation shows better accuracy for non-spatial data as compared to conventional algorithms at generating reliable and robust rules. PMID:28393846
Local intensity area descriptor for facial recognition in ideal and noise conditions
NASA Astrophysics Data System (ADS)
Tran, Chi-Kien; Tseng, Chin-Dar; Chao, Pei-Ju; Ting, Hui-Min; Chang, Liyun; Huang, Yu-Jie; Lee, Tsair-Fwu
2017-03-01
We propose a local texture descriptor, local intensity area descriptor (LIAD), which is applied for human facial recognition in ideal and noisy conditions. Each facial image is divided into small regions from which LIAD histograms are extracted and concatenated into a single feature vector to represent the facial image. The recognition is performed using a nearest neighbor classifier with histogram intersection and chi-square statistics as dissimilarity measures. Experiments were conducted with LIAD using the ORL database of faces (Olivetti Research Laboratory, Cambridge), the Face94 face database, the Georgia Tech face database, and the FERET database. The results demonstrated the improvement in accuracy of our proposed descriptor compared to conventional descriptors [local binary pattern (LBP), uniform LBP, local ternary pattern, histogram of oriented gradients, and local directional pattern]. Moreover, the proposed descriptor was less sensitive to noise and had low histogram dimensionality. Thus, it is expected to be a powerful texture descriptor that can be used for various computer vision problems.
Therapeutic efficacy of self-ligating brackets: A systematic review.
Dehbi, Hasnaa; Azaroual, Mohamed Faouzi; Zaoui, Fatima; Halimi, Abdelali; Benyahia, Hicham
2017-09-01
Over the last few years, the use of self-ligating brackets in orthodontics has progressed considerably. These systems have been the subject of numerous studies with good levels of evidence making it possible to evaluate their efficacy and efficiency compared to conventional brackets. The aim of this study was to evaluate the therapeutic efficacy of self-ligating brackets by means of a systematic review of the scientific literature. A systematic study was undertaken in the form of a recent search of the electronic Pubmed database, oriented by the use of several keywords combined by Boolean operators relating to the therapeutic efficacy of self-ligating brackets through the study of tooth alignment, space closure, expansion, treatment duration and degree of discomfort. The search was limited to randomized controlled studies, and two independent readers identified studies corresponding to the selection criteria. The chosen articles comprised 20 randomized controlled trials. The studies analyzed revealed the absence of significant differences between the two types of system on the basis of the clinical criteria adopted, thereby refuting the hypothesis of the superiority of self-ligating brackets over conventional systems. Copyright © 2017 CEO. Published by Elsevier Masson SAS. All rights reserved.
Lohse, Keith R.; Hilderman, Courtney G. E.; Cheung, Katharine L.; Tatla, Sandy; Van der Loos, H. F. Machiel
2014-01-01
Background The objective of this analysis was to systematically review the evidence for virtual reality (VR) therapy in an adult post-stroke population in both custom built virtual environments (VE) and commercially available gaming systems (CG). Methods MEDLINE, CINAHL, EMBASE, ERIC, PSYCInfo, DARE, PEDro, Cochrane Central Register of Controlled Trials, and Cochrane Database of Systematic Reviews were systematically searched from the earliest available date until April 4, 2013. Controlled trials that compared VR to conventional therapy were included. Population criteria included adults (>18) post-stroke, excluding children, cerebral palsy, and other neurological disorders. Included studies were reported in English. Quality of studies was assessed with the Physiotherapy Evidence Database Scale (PEDro). Results Twenty-six studies met the inclusion criteria. For body function outcomes, there was a significant benefit of VR therapy compared to conventional therapy controls, G = 0.48, 95% CI = [0.27, 0.70], and no significant difference between VE and CG interventions (P = 0.38). For activity outcomes, there was a significant benefit of VR therapy, G = 0.58, 95% CI = [0.32, 0.85], and no significant difference between VE and CG interventions (P = 0.66). For participation outcomes, the overall effect size was G = 0.56, 95% CI = [0.02, 1.10]. All participation outcomes came from VE studies. Discussion VR rehabilitation moderately improves outcomes compared to conventional therapy in adults post-stroke. Current CG interventions have been too few and too small to assess potential benefits of CG. Future research in this area should aim to clearly define conventional therapy, report on participation measures, consider motivational components of therapy, and investigate commercially available systems in larger RCTs. Trial Registration Prospero CRD42013004338 PMID:24681826
Lohse, Keith R; Hilderman, Courtney G E; Cheung, Katharine L; Tatla, Sandy; Van der Loos, H F Machiel
2014-01-01
The objective of this analysis was to systematically review the evidence for virtual reality (VR) therapy in an adult post-stroke population in both custom built virtual environments (VE) and commercially available gaming systems (CG). MEDLINE, CINAHL, EMBASE, ERIC, PSYCInfo, DARE, PEDro, Cochrane Central Register of Controlled Trials, and Cochrane Database of Systematic Reviews were systematically searched from the earliest available date until April 4, 2013. Controlled trials that compared VR to conventional therapy were included. Population criteria included adults (>18) post-stroke, excluding children, cerebral palsy, and other neurological disorders. Included studies were reported in English. Quality of studies was assessed with the Physiotherapy Evidence Database Scale (PEDro). Twenty-six studies met the inclusion criteria. For body function outcomes, there was a significant benefit of VR therapy compared to conventional therapy controls, G = 0.48, 95% CI = [0.27, 0.70], and no significant difference between VE and CG interventions (P = 0.38). For activity outcomes, there was a significant benefit of VR therapy, G = 0.58, 95% CI = [0.32, 0.85], and no significant difference between VE and CG interventions (P = 0.66). For participation outcomes, the overall effect size was G = 0.56, 95% CI = [0.02, 1.10]. All participation outcomes came from VE studies. VR rehabilitation moderately improves outcomes compared to conventional therapy in adults post-stroke. Current CG interventions have been too few and too small to assess potential benefits of CG. Future research in this area should aim to clearly define conventional therapy, report on participation measures, consider motivational components of therapy, and investigate commercially available systems in larger RCTs. Prospero CRD42013004338.
Zhong, Yunqing; Mao, Bing; Wang, Gang; Fan, Tao; Liu, Xuemei; Diao, Xiang; Fu, Juanjuan
2010-12-01
In China, most patients with acute exacerbations of chronic obstructive pulmonary disease (COPD) are usually treated with Tanreqing injection plus conventional Western medicine. However, the value of its use remains uncertain. The objective of this systematic review is to compare the efficacy of Tanreqing injection plus conventional Western medicine with that of conventional Western medicine alone (therapy A versus therapy B, respectively) in the management of acute exacerbations of COPD. Literature retrieval was conducted using the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE,(®) EMBASE, Chinese Biomedical Database (CBM), and other electronically available databases from respective inception to August 2009. In addition, manual search of some traditional Chinese journals was performed to identify potential studies. Review authors independently extracted the trial data and assessed the quality of each trial. The following outcomes were evaluated: (1) forced expiratory volume in 1 second as a percentage of the predicted value; (2) arterial partial pressure of oxygen (Po(2)); (3) arterial partial pressure of carbon dioxide (Pco(2)); (4) length of hospital stay; (5) marked efficacy rate; (6) interleukin-8; and (7) adverse events. Based on the search strategy, 14 trials involving 954 patients were finally included. Our results showed that compared with therapy B, therapy A improved Po(2), clinical efficacy, and lung function, reduced Pco(2), shortened the length of hospital stay, and was thus more therapeutically beneficial. No serious adverse events were reported. Within the limitations of this systematic review, we can conclude that compared with therapy B, therapy A may provide more benefits for patients with acute exacerbations of COPD. Further large-scale high-quality trials are warranted.
Data Intensive Systems (DIS) Benchmark Performance Summary
2003-08-01
models assumed by today’s conventional architectures. Such applications include model- based Automatic Target Recognition (ATR), synthetic aperture...radar (SAR) codes, large scale dynamic databases/battlefield integration, dynamic sensor- based processing, high-speed cryptanalysis, high speed...distributed interactive and data intensive simulations, data-oriented problems characterized by pointer- based and other highly irregular data structures
Williams, Brad J; Ciavarini, Steve J; Devlin, Curt; Cohn, Steven M; Xie, Rong; Vissers, Johannes P C; Martin, LeRoy B; Caswell, Allen; Langridge, James I; Geromanos, Scott J
2016-08-01
In proteomics studies, it is generally accepted that depth of coverage and dynamic range is limited in data-directed acquisitions. The serial nature of the method limits both sensitivity and the number of precursor ions that can be sampled. To that end, a number of data-independent acquisition (DIA) strategies have been introduced with these methods, for the most part, immune to the sampling issue; nevertheless, some do have other limitations with respect to sensitivity. The major limitation with DIA approaches is interference, i.e., MS/MS spectra are highly chimeric and often incapable of being identified using conventional database search engines. Utilizing each available dimension of separation prior to ion detection, we present a new multi-mode acquisition (MMA) strategy multiplexing both narrowband and wideband DIA acquisitions in a single analytical workflow. The iterative nature of the MMA workflow limits the adverse effects of interference with minimal loss in sensitivity. Qualitative identification can be performed by selected ion chromatograms or conventional database search strategies. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
A Simple Deep Learning Method for Neuronal Spike Sorting
NASA Astrophysics Data System (ADS)
Yang, Kai; Wu, Haifeng; Zeng, Yu
2017-10-01
Spike sorting is one of key technique to understand brain activity. With the development of modern electrophysiology technology, some recent multi-electrode technologies have been able to record the activity of thousands of neuronal spikes simultaneously. The spike sorting in this case will increase the computational complexity of conventional sorting algorithms. In this paper, we will focus spike sorting on how to reduce the complexity, and introduce a deep learning algorithm, principal component analysis network (PCANet) to spike sorting. The introduced method starts from a conventional model and establish a Toeplitz matrix. Through the column vectors in the matrix, we trains a PCANet, where some eigenvalue vectors of spikes could be extracted. Finally, support vector machine (SVM) is used to sort spikes. In experiments, we choose two groups of simulated data from public databases availably and compare this introduced method with conventional methods. The results indicate that the introduced method indeed has lower complexity with the same sorting errors as the conventional methods.
A practical implementation for a data dictionary in an environment of diverse data sets
Sprenger, Karla K.; Larsen, Dana M.
1993-01-01
The need for a data dictionary database at the U.S. Geological Survey's EROS Data Center (EDC) was reinforced with the Earth Observing System Data and Information System (EOSDIS) requirement for consistent field definitions of data sets residing at more than one archive center. The EDC requirement addresses the existence of multiple sets with identical field definitions using various naming conventions. The EDC is developing a data dictionary database to accomplish the following foals: to standardize field names for ease in software development; to facilitate querying and updating of the date; and to generate ad hoc reports. The structure of the EDC electronic data dictionary database supports different metadata systems as well as many different data sets. A series of reports is used to keep consistency among data sets and various metadata systems.
Peixoto, Sara; Abreu, Pedro
2016-11-01
Clinically isolated syndrome may be the first manifestation of multiple sclerosis, a chronic demyelinating disease of the central nervous system, and it is defined by a single clinical episode suggestive of demyelination. However, patients with this syndrome, even with long term follow up, may not develop new symptoms or demyelinating lesions that fulfils multiple sclerosis diagnostic criteria. We reviewed, in clinically isolated syndrome, what are the best magnetic resonance imaging findings that may predict its conversion to multiple sclerosis. A search was made in the PubMed database for papers published between January 2010 and June 2015 using the following terms: 'clinically isolated syndrome', 'cis', 'multiple sclerosis', 'magnetic resonance imaging', 'magnetic resonance' and 'mri'. In this review, the following conventional magnetic resonance imaging abnormalities found in literature were included: lesion load, lesion location, Barkhof's criteria and brain atrophy related features. The non conventional magnetic resonance imaging techniques studied were double inversion recovery, magnetization transfer imaging, spectroscopy and diffusion tensor imaging. The number and location of demyelinating lesions have a clear role in predicting clinically isolated syndrome conversion to multiple sclerosis. On the other hand, more data are needed to confirm the ability to predict this disease development of non conventional techniques and remaining neuroimaging abnormalities. In forthcoming years, in addition to the established predictive value of the above mentioned neuroimaging abnormalities, different clinically isolated syndrome neuroradiological findings may be considered in multiple sclerosis diagnostic criteria and/or change its treatment recommendations.
[Establishment of a comprehensive database for laryngeal cancer related genes and the miRNAs].
Li, Mengjiao; E, Qimin; Liu, Jialin; Huang, Tingting; Liang, Chuanyu
2015-09-01
By collecting and analyzing the laryngeal cancer related genes and the miRNAs, to build a comprehensive laryngeal cancer-related gene database, which differs from the current biological information database with complex and clumsy structure and focuses on the theme of gene and miRNA, and it could make the research and teaching more convenient and efficient. Based on the B/S architecture, using Apache as a Web server, MySQL as coding language of database design and PHP as coding language of web design, a comprehensive database for laryngeal cancer-related genes was established, providing with the gene tables, protein tables, miRNA tables and clinical information tables of the patients with laryngeal cancer. The established database containsed 207 laryngeal cancer related genes, 243 proteins, 26 miRNAs, and their particular information such as mutations, methylations, diversified expressions, and the empirical references of laryngeal cancer relevant molecules. The database could be accessed and operated via the Internet, by which browsing and retrieval of the information were performed. The database were maintained and updated regularly. The database for laryngeal cancer related genes is resource-integrated and user-friendly, providing a genetic information query tool for the study of laryngeal cancer.
A manufacturing database of advanced materials used in spacecraft structures
NASA Technical Reports Server (NTRS)
Bao, Han P.
1994-01-01
Cost savings opportunities over the life cycle of a product are highest in the early exploratory phase when different design alternatives are evaluated not only for their performance characteristics but also their methods of fabrication which really control the ultimate manufacturing costs of the product. In the past, Design-To-Cost methodologies for spacecraft design concentrated on the sizing and weight issues more than anything else at the early so-called 'Vehicle Level' (Ref: DOD/NASA Advanced Composites Design Guide). Given the impact of manufacturing cost, the objective of this study is to identify the principal cost drivers for each materials technology and propose a quantitative approach to incorporating these cost drivers into the family of optimization tools used by the Vehicle Analysis Branch of NASA LaRC to assess various conceptual vehicle designs. The advanced materials being considered include aluminum-lithium alloys, thermoplastic graphite-polyether etherketone composites, graphite-bismaleimide composites, graphite- polyimide composites, and carbon-carbon composites. Two conventional materials are added to the study to serve as baseline materials against which the other materials are compared. These two conventional materials are aircraft aluminum alloys series 2000 and series 7000, and graphite-epoxy composites T-300/934. The following information is available in the database. For each material type, the mechanical, physical, thermal, and environmental properties are first listed. Next the principal manufacturing processes are described. Whenever possible, guidelines for optimum processing conditions for specific applications are provided. Finally, six categories of cost drivers are discussed. They include, design features affecting processing, tooling, materials, fabrication, joining/assembly, and quality assurance issues. It should be emphasized that this database is not an exhaustive database. Its primary use is to make the vehicle designer aware of some of the most important aspects of manufacturing associated with his/her choice of the structural materials. The other objective of this study is to propose a quantitative method to determine a Manufacturing Complexity Factor (MCF) for each material being contemplated. This MCF is derived on the basis of the six cost drivers mentioned above plus a Technology Readiness Factor which is very closely related to the Technology Readiness Level (TRL) as defined in the Access To Space final report. Short of any manufacturing information, our MCF is equivalent to the inverse of TRL. As more manufacturing information is available, our MCF is a better representation (than TRL) of the fabrication processes involved. The most likely application for MCF is in cost modeling for trade studies. On-going work is being pursued to expand the potential applications of MCF.
A manufacturing database of advanced materials used in spacecraft structures
NASA Astrophysics Data System (ADS)
Bao, Han P.
1994-12-01
Cost savings opportunities over the life cycle of a product are highest in the early exploratory phase when different design alternatives are evaluated not only for their performance characteristics but also their methods of fabrication which really control the ultimate manufacturing costs of the product. In the past, Design-To-Cost methodologies for spacecraft design concentrated on the sizing and weight issues more than anything else at the early so-called 'Vehicle Level' (Ref: DOD/NASA Advanced Composites Design Guide). Given the impact of manufacturing cost, the objective of this study is to identify the principal cost drivers for each materials technology and propose a quantitative approach to incorporating these cost drivers into the family of optimization tools used by the Vehicle Analysis Branch of NASA LaRC to assess various conceptual vehicle designs. The advanced materials being considered include aluminum-lithium alloys, thermoplastic graphite-polyether etherketone composites, graphite-bismaleimide composites, graphite- polyimide composites, and carbon-carbon composites. Two conventional materials are added to the study to serve as baseline materials against which the other materials are compared. These two conventional materials are aircraft aluminum alloys series 2000 and series 7000, and graphite-epoxy composites T-300/934. The following information is available in the database. For each material type, the mechanical, physical, thermal, and environmental properties are first listed. Next the principal manufacturing processes are described. Whenever possible, guidelines for optimum processing conditions for specific applications are provided. Finally, six categories of cost drivers are discussed. They include, design features affecting processing, tooling, materials, fabrication, joining/assembly, and quality assurance issues. It should be emphasized that this database is not an exhaustive database. Its primary use is to make the vehicle designer aware of some of the most important aspects of manufacturing associated with his/her choice of the structural materials. The other objective of this study is to propose a quantitative method to determine a Manufacturing Complexity Factor (MCF) for each material being contemplated. This MCF is derived on the basis of the six cost drivers mentioned above plus a Technology Readiness Factor which is very closely related to the Technology Readiness Level (TRL) as defined in the Access To Space final report. Short of any manufacturing information, our MCF is equivalent to the inverse of TRL. As more manufacturing information is available, our MCF is a better representation (than TRL) of the fabrication processes involved.
Nagaraj, Shivashankar H; Gasser, Robin B; Nisbet, Alasdair J; Ranganathan, Shoba
2008-01-01
The analysis of expressed sequence tags (EST) offers a rapid and cost effective approach to elucidate the transcriptome of an organism, but requires several computational methods for assembly and annotation. Researchers frequently analyse each step manually, which is laborious and time consuming. We have recently developed ESTExplorer, a semi-automated computational workflow system, in order to achieve the rapid analysis of EST datasets. In this study, we evaluated EST data analysis for the parasitic nematode Trichostrongylus vitrinus (order Strongylida) using ESTExplorer, compared with database matching alone. We functionally annotated 1776 ESTs obtained via suppressive-subtractive hybridisation from T. vitrinus, an important parasitic trichostrongylid of small ruminants. Cluster and comparative genomic analyses of the transcripts using ESTExplorer indicated that 290 (41%) sequences had homologues in Caenorhabditis elegans, 329 (42%) in parasitic nematodes, 202 (28%) in organisms other than nematodes, and 218 (31%) had no significant match to any sequence in the current databases. Of the C. elegans homologues, 90 were associated with 'non-wildtype' double-stranded RNA interference (RNAi) phenotypes, including embryonic lethality, maternal sterility, sterile progeny, larval arrest and slow growth. We could functionally classify 267 (38%) sequences using the Gene Ontologies (GO) and establish pathway associations for 230 (33%) sequences using the Kyoto Encyclopedia of Genes and Genomes (KEGG). Further examination of this EST dataset revealed a number of signalling molecules, proteases, protease inhibitors, enzymes, ion channels and immune-related genes. In addition, we identified 40 putative secreted proteins that could represent potential candidates for developing novel anthelmintics or vaccines. We further compared the automated EST sequence annotations, using ESTExplorer, with database search results for individual T. vitrinus ESTs. ESTExplorer reliably and rapidly annotated 301 ESTs, with pathway and GO information, eliminating 60 low quality hits from database searches. We evaluated the efficacy of ESTExplorer in analysing EST data, and demonstrate that computational tools can be used to accelerate the process of gene discovery in EST sequencing projects. The present study has elucidated sets of relatively conserved and potentially novel genes for biological investigation, and the annotated EST set provides further insight into the molecular biology of T. vitrinus, towards the identification of novel drug targets.
A Relational Database System for Student Use.
ERIC Educational Resources Information Center
Fertuck, Len
1982-01-01
Describes an APL implementation of a relational database system suitable for use in a teaching environment in which database development and database administration are studied, and discusses the functions of the user and the database administrator. An appendix illustrating system operation and an eight-item reference list are attached. (Author/JL)
Dao, Tien Tuan; Hoang, Tuan Nha; Ta, Xuan Hien; Tho, Marie Christine Ho Ba
2013-02-01
Human musculoskeletal system resources of the human body are valuable for the learning and medical purposes. Internet-based information from conventional search engines such as Google or Yahoo cannot response to the need of useful, accurate, reliable and good-quality human musculoskeletal resources related to medical processes, pathological knowledge and practical expertise. In this present work, an advanced knowledge-based personalized search engine was developed. Our search engine was based on a client-server multi-layer multi-agent architecture and the principle of semantic web services to acquire dynamically accurate and reliable HMSR information by a semantic processing and visualization approach. A security-enhanced mechanism was applied to protect the medical information. A multi-agent crawler was implemented to develop a content-based database of HMSR information. A new semantic-based PageRank score with related mathematical formulas were also defined and implemented. As the results, semantic web service descriptions were presented in OWL, WSDL and OWL-S formats. Operational scenarios with related web-based interfaces for personal computers and mobile devices were presented and analyzed. Functional comparison between our knowledge-based search engine, a conventional search engine and a semantic search engine showed the originality and the robustness of our knowledge-based personalized search engine. In fact, our knowledge-based personalized search engine allows different users such as orthopedic patient and experts or healthcare system managers or medical students to access remotely into useful, accurate, reliable and good-quality HMSR information for their learning and medical purposes. Copyright © 2012 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Gentry, Jeffery D.
2000-05-01
A relational database is a powerful tool for collecting and analyzing the vast amounts of inner-related data associated with the manufacture of composite materials. A relational database contains many individual database tables that store data that are related in some fashion. Manufacturing process variables as well as quality assurance measurements can be collected and stored in database tables indexed according to lot numbers, part type or individual serial numbers. Relationships between manufacturing process and product quality can then be correlated over a wide range of product types and process variations. This paper presents details on how relational databases are used to collect, store, and analyze process variables and quality assurance data associated with the manufacture of advanced composite materials. Important considerations are covered including how the various types of data are organized and how relationships between the data are defined. Employing relational database techniques to establish correlative relationships between process variables and quality assurance measurements is then explored. Finally, the benefits of database techniques such as data warehousing, data mining and web based client/server architectures are discussed in the context of composite material manufacturing.
ERIC Educational Resources Information Center
Umeasiegbu, Veronica I.; Bishop, Malachy; Mpofu, Elias
2013-01-01
This article presents an analysis of the United Nations Convention on the Rights of Persons with Disabilities (CRPD) in relation to prior United Nations conventions on disability and U.S. disability policy law with a view to identifying the conventional and also the incremental advances of the CRPD. Previous United Nations conventions related to…
Relational Databases and Biomedical Big Data.
de Silva, N H Nisansa D
2017-01-01
In various biomedical applications that collect, handle, and manipulate data, the amounts of data tend to build up and venture into the range identified as bigdata. In such occurrences, a design decision has to be taken as to what type of database would be used to handle this data. More often than not, the default and classical solution to this in the biomedical domain according to past research is relational databases. While this used to be the norm for a long while, it is evident that there is a trend to move away from relational databases in favor of other types and paradigms of databases. However, it still has paramount importance to understand the interrelation that exists between biomedical big data and relational databases. This chapter will review the pros and cons of using relational databases to store biomedical big data that previous researches have discussed and used.
A Relational Algebra Query Language for Programming Relational Databases
ERIC Educational Resources Information Center
McMaster, Kirby; Sambasivam, Samuel; Anderson, Nicole
2011-01-01
In this paper, we describe a Relational Algebra Query Language (RAQL) and Relational Algebra Query (RAQ) software product we have developed that allows database instructors to teach relational algebra through programming. Instead of defining query operations using mathematical notation (the approach commonly taken in database textbooks), students…
Harnett, James; Curtis, Jeffrey R; Gerber, Robert; Gruben, David; Koenig, Andrew
2016-06-01
Tofacitinib is an oral Janus kinase inhibitor indicated for the treatment of rheumatoid arthritis (RA). Tofacitinib can be administered as a monotherapy or in combination with conventional synthetic disease-modifying antirheumatic drugs (DMARDs). This study describes RA patients' characteristics, treatment patterns, and costs for those initiating tofacitinib treatment as monotherapy or combination therapy, using US claims data from clinical practice. A retrospective cohort analysis of patients aged ≥18 years with RA (International Classification of Diseases, Ninth Revision code 714.xx) and with ≥1 tofacitinib claim in the Truven Marketscan (TM) or the Optum Clinformatics (OC) database. Index was defined as the first tofacitinib fill date (November 2012-June 2014). Patients were continuously enrolled for ≥12 months before and after index. Adherence was assessed using the proportion of days covered (PDC) and medication possession ratio (MPR). Persistence was evaluated using a 1.5× days' supply gap or switch. All-cause and RA-related costs in the 12-month pre- and post-index periods were evaluated. Unadjusted and adjusted analyses were conducted on data on treatment patterns and costs stratified by monotherapy status. A total of 337 (TM) and 118 (OC) tofacitinib patients met the selection criteria; 52.2% (TM) and 50.8% (OC) received monotherapy and 83.7% (TM) and 76.3% (OC) had pre-index biologic DMARD experience. Twelve-month mean PDC values were 0.56 (TM) and 0.53 (OC), and 12-month mean MPR was 0.84 (TM) and 0.80 (OC), with persistence of 140.0 (TM) and 124.6 (OC) days. Between 12-month pre- and post-index periods, mean (SD) 12-month RA-related medical costs decreased by $5784 ($31,832) in TM and $6103 ($25,897) in OC (both, P < 0.05), whereas total costs increased by $3996 ($30,397) in TM (P < 0.05) and $1390 ($26,603) in OC. There were no significant differences in adherence, persistence, or all-cause/RA-related costs between monotherapy and combination therapy in unadjusted/adjusted analyses. This analysis adds to the existing tofacitinib knowledge base and will enable informed clinical and policy decision making based on valuable datasets independent of randomized controlled trials. Copyright © 2016 Elsevier HS Journals, Inc. All rights reserved.
A dynamic clinical dental relational database.
Taylor, D; Naguib, R N G; Boulton, S
2004-09-01
The traditional approach to relational database design is based on the logical organization of data into a number of related normalized tables. One assumption is that the nature and structure of the data is known at the design stage. In the case of designing a relational database to store historical dental epidemiological data from individual clinical surveys, the structure of the data is not known until the data is presented for inclusion into the database. This paper addresses the issues concerned with the theoretical design of a clinical dynamic database capable of adapting the internal table structure to accommodate clinical survey data, and presents a prototype database application capable of processing, displaying, and querying the dental data.
Suárez-Castillo, Edna C; Medina-Ortíz, Wanda E; Roig-López, José L; García-Arrarás, José E
2004-06-09
We report the characterization of an ependymin-related gene (EpenHg) from a regenerating intestine cDNA library of the sea cucumber Holothuria glaberrima. This finding is remarkable because no ependymin sequence has ever been reported from invertebrates. Database comparisons of the conceptual translation of the EpenHg gene reveal 63% similarity (47% identity) with mammalian ependymin-related proteins (MERPs) and close relationship with the frog and piscine ependymins. We also report the partial sequences of ependymin representatives from another species of sea cucumber and from a sea urchin species. Conventional and real-time reverse transcriptase polymerase chain reaction (RT-PCRs) show that the gene is expressed in several echinoderm tissues, including esophagus, mesenteries, gonads, respiratory trees, hemal system, tentacles and body wall. Moreover, the ependymin product in the intestine is overexpressed during sea cucumber intestinal regeneration. The discovery of ependymins in echinoderms, a group well known for their regenerative capacities, can give us an insight on the evolution and roles of ependymin molecules.
The Danish Testicular Cancer database.
Daugaard, Gedske; Kier, Maria Gry Gundgaard; Bandak, Mikkel; Mortensen, Mette Saksø; Larsson, Heidi; Søgaard, Mette; Toft, Birgitte Groenkaer; Engvad, Birte; Agerbæk, Mads; Holm, Niels Vilstrup; Lauritsen, Jakob
2016-01-01
The nationwide Danish Testicular Cancer database consists of a retrospective research database (DaTeCa database) and a prospective clinical database (Danish Multidisciplinary Cancer Group [DMCG] DaTeCa database). The aim is to improve the quality of care for patients with testicular cancer (TC) in Denmark, that is, by identifying risk factors for relapse, toxicity related to treatment, and focusing on late effects. All Danish male patients with a histologically verified germ cell cancer diagnosis in the Danish Pathology Registry are included in the DaTeCa databases. Data collection has been performed from 1984 to 2007 and from 2013 onward, respectively. The retrospective DaTeCa database contains detailed information with more than 300 variables related to histology, stage, treatment, relapses, pathology, tumor markers, kidney function, lung function, etc. A questionnaire related to late effects has been conducted, which includes questions regarding social relationships, life situation, general health status, family background, diseases, symptoms, use of medication, marital status, psychosocial issues, fertility, and sexuality. TC survivors alive on October 2014 were invited to fill in this questionnaire including 160 validated questions. Collection of questionnaires is still ongoing. A biobank including blood/sputum samples for future genetic analyses has been established. Both samples related to DaTeCa and DMCG DaTeCa database are included. The prospective DMCG DaTeCa database includes variables regarding histology, stage, prognostic group, and treatment. The DMCG DaTeCa database has existed since 2013 and is a young clinical database. It is necessary to extend the data collection in the prospective database in order to answer quality-related questions. Data from the retrospective database will be added to the prospective data. This will result in a large and very comprehensive database for future studies on TC patients.
Compressing DNA sequence databases with coil.
White, W Timothy J; Hendy, Michael D
2008-05-20
Publicly available DNA sequence databases such as GenBank are large, and are growing at an exponential rate. The sheer volume of data being dealt with presents serious storage and data communications problems. Currently, sequence data is usually kept in large "flat files," which are then compressed using standard Lempel-Ziv (gzip) compression - an approach which rarely achieves good compression ratios. While much research has been done on compressing individual DNA sequences, surprisingly little has focused on the compression of entire databases of such sequences. In this study we introduce the sequence database compression software coil. We have designed and implemented a portable software package, coil, for compressing and decompressing DNA sequence databases based on the idea of edit-tree coding. coil is geared towards achieving high compression ratios at the expense of execution time and memory usage during compression - the compression time represents a "one-off investment" whose cost is quickly amortised if the resulting compressed file is transmitted many times. Decompression requires little memory and is extremely fast. We demonstrate a 5% improvement in compression ratio over state-of-the-art general-purpose compression tools for a large GenBank database file containing Expressed Sequence Tag (EST) data. Finally, coil can efficiently encode incremental additions to a sequence database. coil presents a compelling alternative to conventional compression of flat files for the storage and distribution of DNA sequence databases having a narrow distribution of sequence lengths, such as EST data. Increasing compression levels for databases having a wide distribution of sequence lengths is a direction for future work.
Compressing DNA sequence databases with coil
White, W Timothy J; Hendy, Michael D
2008-01-01
Background Publicly available DNA sequence databases such as GenBank are large, and are growing at an exponential rate. The sheer volume of data being dealt with presents serious storage and data communications problems. Currently, sequence data is usually kept in large "flat files," which are then compressed using standard Lempel-Ziv (gzip) compression – an approach which rarely achieves good compression ratios. While much research has been done on compressing individual DNA sequences, surprisingly little has focused on the compression of entire databases of such sequences. In this study we introduce the sequence database compression software coil. Results We have designed and implemented a portable software package, coil, for compressing and decompressing DNA sequence databases based on the idea of edit-tree coding. coil is geared towards achieving high compression ratios at the expense of execution time and memory usage during compression – the compression time represents a "one-off investment" whose cost is quickly amortised if the resulting compressed file is transmitted many times. Decompression requires little memory and is extremely fast. We demonstrate a 5% improvement in compression ratio over state-of-the-art general-purpose compression tools for a large GenBank database file containing Expressed Sequence Tag (EST) data. Finally, coil can efficiently encode incremental additions to a sequence database. Conclusion coil presents a compelling alternative to conventional compression of flat files for the storage and distribution of DNA sequence databases having a narrow distribution of sequence lengths, such as EST data. Increasing compression levels for databases having a wide distribution of sequence lengths is a direction for future work. PMID:18489794
Sridhar, Vishnu B; Tian, Peifang; Dale, Anders M; Devor, Anna; Saisan, Payam A
2014-01-01
We present a database client software-Neurovascular Network Explorer 1.0 (NNE 1.0)-that uses MATLAB(®) based Graphical User Interface (GUI) for interaction with a database of 2-photon single-vessel diameter measurements from our previous publication (Tian et al., 2010). These data are of particular interest for modeling the hemodynamic response. NNE 1.0 is downloaded by the user and then runs either as a MATLAB script or as a standalone program on a Windows platform. The GUI allows browsing the database according to parameters specified by the user, simple manipulation and visualization of the retrieved records (such as averaging and peak-normalization), and export of the results. Further, we provide NNE 1.0 source code. With this source code, the user can database their own experimental results, given the appropriate data structure and naming conventions, and thus share their data in a user-friendly format with other investigators. NNE 1.0 provides an example of seamless and low-cost solution for sharing of experimental data by a regular size neuroscience laboratory and may serve as a general template, facilitating dissemination of biological results and accelerating data-driven modeling approaches.
Parallel database search and prime factorization with magnonic holographic memory devices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khitun, Alexander
In this work, we describe the capabilities of Magnonic Holographic Memory (MHM) for parallel database search and prime factorization. MHM is a type of holographic device, which utilizes spin waves for data transfer and processing. Its operation is based on the correlation between the phases and the amplitudes of the input spin waves and the output inductive voltage. The input of MHM is provided by the phased array of spin wave generating elements allowing the producing of phase patterns of an arbitrary form. The latter makes it possible to code logic states into the phases of propagating waves and exploitmore » wave superposition for parallel data processing. We present the results of numerical modeling illustrating parallel database search and prime factorization. The results of numerical simulations on the database search are in agreement with the available experimental data. The use of classical wave interference may results in a significant speedup over the conventional digital logic circuits in special task data processing (e.g., √n in database search). Potentially, magnonic holographic devices can be implemented as complementary logic units to digital processors. Physical limitations and technological constrains of the spin wave approach are also discussed.« less
Parallel database search and prime factorization with magnonic holographic memory devices
NASA Astrophysics Data System (ADS)
Khitun, Alexander
2015-12-01
In this work, we describe the capabilities of Magnonic Holographic Memory (MHM) for parallel database search and prime factorization. MHM is a type of holographic device, which utilizes spin waves for data transfer and processing. Its operation is based on the correlation between the phases and the amplitudes of the input spin waves and the output inductive voltage. The input of MHM is provided by the phased array of spin wave generating elements allowing the producing of phase patterns of an arbitrary form. The latter makes it possible to code logic states into the phases of propagating waves and exploit wave superposition for parallel data processing. We present the results of numerical modeling illustrating parallel database search and prime factorization. The results of numerical simulations on the database search are in agreement with the available experimental data. The use of classical wave interference may results in a significant speedup over the conventional digital logic circuits in special task data processing (e.g., √n in database search). Potentially, magnonic holographic devices can be implemented as complementary logic units to digital processors. Physical limitations and technological constrains of the spin wave approach are also discussed.
ERIC Educational Resources Information Center
Berger, Mary C.; Bourne, Charles P.
1988-01-01
The first paper discusses the factors involved in a decision to provide document delivery services, including user needs, competitive climate, business potential, fit with current business, and logistics of providing the service. The second reviews the kinds of additional products that can be developed as a byproduct of conventional database…
SORTEZ: a relational translator for NCBI's ASN.1 database.
Hart, K W; Searls, D B; Overton, G C
1994-07-01
The National Center for Biotechnology Information (NCBI) has created a database collection that includes several protein and nucleic acid sequence databases, a biosequence-specific subset of MEDLINE, as well as value-added information such as links between similar sequences. Information in the NCBI database is modeled in Abstract Syntax Notation 1 (ASN.1) an Open Systems Interconnection protocol designed for the purpose of exchanging structured data between software applications rather than as a data model for database systems. While the NCBI database is distributed with an easy-to-use information retrieval system, ENTREZ, the ASN.1 data model currently lacks an ad hoc query language for general-purpose data access. For that reason, we have developed a software package, SORTEZ, that transforms the ASN.1 database (or other databases with nested data structures) to a relational data model and subsequently to a relational database management system (Sybase) where information can be accessed through the relational query language, SQL. Because the need to transform data from one data model and schema to another arises naturally in several important contexts, including efficient execution of specific applications, access to multiple databases and adaptation to database evolution this work also serves as a practical study of the issues involved in the various stages of database transformation. We show that transformation from the ASN.1 data model to a relational data model can be largely automated, but that schema transformation and data conversion require considerable domain expertise and would greatly benefit from additional support tools.
Weeks, Clinton S; Humphreys, Michael S; Cornwell, T Bettina
2018-02-01
Brands engaged in sponsorship of events commonly have objectives that depend on consumer memory for the sponsor-event relationship (e.g., sponsorship awareness). Consumers however, often misattribute sponsorships to nonsponsor competitor brands, indicating erroneous memory for these relationships. The current research uses an item and relational memory framework to reveal sponsor brands may inadvertently foster this misattribution when they communicate relational linkages to events. Effects can be explained via differential roles of communicating item information (information that supports processing item distinctiveness) versus relational information (information that supports processing relationships among items) in contributing to memory outcomes. Experiment 1 uses event-cued brand recall to show that correct memory retrieval is best supported by communicating relational information when sponsorship relationships are not obvious (low congruence). In contrast, correct retrieval is best supported by communicating item information when relationships are obvious (high congruence). Experiment 2 uses brand-cued event recall to show that, against conventional marketing recommendations, relational information increases misattribution, whereas item information guards against misattribution. Results suggest sponsor brands must distinguish between item and relational communications to enhance correct retrieval and limit misattribution. Methodologically, the work shows that choice of cueing direction is critical in differentially revealing patterns of correct and incorrect retrieval with pair relationships. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Using SQL Databases for Sequence Similarity Searching and Analysis.
Pearson, William R; Mackey, Aaron J
2017-09-13
Relational databases can integrate diverse types of information and manage large sets of similarity search results, greatly simplifying genome-scale analyses. By focusing on taxonomic subsets of sequences, relational databases can reduce the size and redundancy of sequence libraries and improve the statistical significance of homologs. In addition, by loading similarity search results into a relational database, it becomes possible to explore and summarize the relationships between all of the proteins in an organism and those in other biological kingdoms. This unit describes how to use relational databases to improve the efficiency of sequence similarity searching and demonstrates various large-scale genomic analyses of homology-related data. It also describes the installation and use of a simple protein sequence database, seqdb_demo, which is used as a basis for the other protocols. The unit also introduces search_demo, a database that stores sequence similarity search results. The search_demo database is then used to explore the evolutionary relationships between E. coli proteins and proteins in other organisms in a large-scale comparative genomic analysis. © 2017 by John Wiley & Sons, Inc. Copyright © 2017 John Wiley & Sons, Inc.
Development of a pseudo/anonymised primary care research database: Proof-of-concept study.
MacRury, Sandra; Finlayson, Jim; Hussey-Wilson, Susan; Holden, Samantha
2016-06-01
General practice records present a comprehensive source of data that could form a variety of anonymised or pseudonymised research databases to aid identification of potential research participants regardless of location. A proof-of-concept study was undertaken to extract data from general practice systems in 15 practices across the region to form pseudo and anonymised research data sets. Two feasibility studies and a disease surveillance study compared numbers of potential study participants and accuracy of disease prevalence, respectively. There was a marked reduction in screening time and increase in numbers of potential study participants identified with the research repository compared with conventional methods. Accurate disease prevalence was established and enhanced with the addition of selective text mining. This study confirms the potential for development of national anonymised research database from general practice records in addition to improving data collection for local or national audits and epidemiological projects. © The Author(s) 2014.
Location-Driven Image Retrieval for Images Collected by a Mobile Robot
NASA Astrophysics Data System (ADS)
Tanaka, Kanji; Hirayama, Mitsuru; Okada, Nobuhiro; Kondo, Eiji
Mobile robot teleoperation is a method for a human user to interact with a mobile robot over time and distance. Successful teleoperation depends on how well images taken by the mobile robot are visualized to the user. To enhance the efficiency and flexibility of the visualization, an image retrieval system on such a robot’s image database would be very useful. The main difference of the robot’s image database from standard image databases is that various relevant images exist due to variety of viewing conditions. The main contribution of this paper is to propose an efficient retrieval approach, named location-driven approach, utilizing correlation between visual features and real world locations of images. Combining the location-driven approach with the conventional feature-driven approach, our goal can be viewed as finding an optimal classifier between relevant and irrelevant feature-location pairs. An active learning technique based on support vector machine is extended for this aim.
System, method and apparatus for generating phrases from a database
NASA Technical Reports Server (NTRS)
McGreevy, Michael W. (Inventor)
2004-01-01
A phrase generation is a method of generating sequences of terms, such as phrases, that may occur within a database of subsets containing sequences of terms, such as text. A database is provided and a relational model of the database is created. A query is then input. The query includes a term or a sequence of terms or multiple individual terms or multiple sequences of terms or combinations thereof. Next, several sequences of terms that are contextually related to the query are assembled from contextual relations in the model of the database. The sequences of terms are then sorted and output. Phrase generation can also be an iterative process used to produce sequences of terms from a relational model of a database.
Braidy, C; Nazac, A; Legendre, G; Capmas, P; Fernandez, H
2014-09-01
Fertiloscopy is a recent technique designed to explore the tubo-ovarian axis in unexplained infertility. It is a simple outpatient technique, allowing to perform operative procedures, but its position relative to laparoscopy is yet to be defined. A thorough and extensive bibliographical search was undertaken to fully embrace the question, challenging Medline at the National Library of Medicine, Cochrane Library, National Guideline Clearinghouse, Health Technology Assessment Database. All the retrieved articles were classified as either descriptive or comparative studies and evaluated on a set of criteria. Most of the papers described case series coming from a few teams, focusing mainly on the technical aspect of the procedure, like the access rate to the posterior cul-de-sac, the success rate in visualizing the pelvis, the complications rate (mainly rectal perforation), and its operative performance in drilling ovaries for resistant polycystic ovarian syndrome. Comparative studies numbered six trials. They all followed the same design, fertiloscopy preceding conventional laparoscopy in patients taken as their own control. The concordance rate between the two modalities reaches 80% in terms of tubal pathology, adherences and endometriosis, with an estimated reduction of laparoscopies varying from 40% to 93%. The current literature shows a concordance between fertiloscopy and conventional laparoscopic findings for certain parameters in cases of tubal pathology, adherences and endometriosis. The relative positions of these two modalities in unexplained infertility still remain elusive. Copyright © 2014. Published by Elsevier Masson SAS.
Decision-level fusion of SAR and IR sensor information for automatic target detection
NASA Astrophysics Data System (ADS)
Cho, Young-Rae; Yim, Sung-Hyuk; Cho, Hyun-Woong; Won, Jin-Ju; Song, Woo-Jin; Kim, So-Hyeon
2017-05-01
We propose a decision-level architecture that combines synthetic aperture radar (SAR) and an infrared (IR) sensor for automatic target detection. We present a new size-based feature, called target-silhouette to reduce the number of false alarms produced by the conventional target-detection algorithm. Boolean Map Visual Theory is used to combine a pair of SAR and IR images to generate the target-enhanced map. Then basic belief assignment is used to transform this map into a belief map. The detection results of sensors are combined to build the target-silhouette map. We integrate the fusion mass and the target-silhouette map on the decision level to exclude false alarms. The proposed algorithm is evaluated using a SAR and IR synthetic database generated by SE-WORKBENCH simulator, and compared with conventional algorithms. The proposed fusion scheme achieves higher detection rate and lower false alarm rate than the conventional algorithms.
Competitive region orientation code for palmprint verification and identification
NASA Astrophysics Data System (ADS)
Tang, Wenliang
2015-11-01
Orientation features of the palmprint have been widely investigated in coding-based palmprint-recognition methods. Conventional orientation-based coding methods usually used discrete filters to extract the orientation feature of palmprint. However, in real operations, the orientations of the filter usually are not consistent with the lines of the palmprint. We thus propose a competitive region orientation-based coding method. Furthermore, an effective weighted balance scheme is proposed to improve the accuracy of the extracted region orientation. Compared with conventional methods, the region orientation of the palmprint extracted using the proposed method can precisely and robustly describe the orientation feature of the palmprint. Extensive experiments on the baseline PolyU and multispectral palmprint databases are performed and the results show that the proposed method achieves a promising performance in comparison to conventional state-of-the-art orientation-based coding methods in both palmprint verification and identification.
Khan, Aihab; Husain, Syed Afaq
2013-01-01
We put forward a fragile zero watermarking scheme to detect and characterize malicious modifications made to a database relation. Most of the existing watermarking schemes for relational databases introduce intentional errors or permanent distortions as marks into the database original content. These distortions inevitably degrade the data quality and data usability as the integrity of a relational database is violated. Moreover, these fragile schemes can detect malicious data modifications but do not characterize the tempering attack, that is, the nature of tempering. The proposed fragile scheme is based on zero watermarking approach to detect malicious modifications made to a database relation. In zero watermarking, the watermark is generated (constructed) from the contents of the original data rather than introduction of permanent distortions as marks into the data. As a result, the proposed scheme is distortion-free; thus, it also resolves the inherent conflict between security and imperceptibility. The proposed scheme also characterizes the malicious data modifications to quantify the nature of tempering attacks. Experimental results show that even minor malicious modifications made to a database relation can be detected and characterized successfully.
Evaluation of relational and NoSQL database architectures to manage genomic annotations.
Schulz, Wade L; Nelson, Brent G; Felker, Donn K; Durant, Thomas J S; Torres, Richard
2016-12-01
While the adoption of next generation sequencing has rapidly expanded, the informatics infrastructure used to manage the data generated by this technology has not kept pace. Historically, relational databases have provided much of the framework for data storage and retrieval. Newer technologies based on NoSQL architectures may provide significant advantages in storage and query efficiency, thereby reducing the cost of data management. But their relative advantage when applied to biomedical data sets, such as genetic data, has not been characterized. To this end, we compared the storage, indexing, and query efficiency of a common relational database (MySQL), a document-oriented NoSQL database (MongoDB), and a relational database with NoSQL support (PostgreSQL). When used to store genomic annotations from the dbSNP database, we found the NoSQL architectures to outperform traditional, relational models for speed of data storage, indexing, and query retrieval in nearly every operation. These findings strongly support the use of novel database technologies to improve the efficiency of data management within the biological sciences. Copyright © 2016 Elsevier Inc. All rights reserved.
Caro-Llopis, Alfonso; Rosello, Monica; Orellana, Carmen; Oltra, Silvestre; Monfort, Sandra; Mayo, Sonia; Martinez, Francisco
2016-12-01
Mutations in the X-linked gene MED12 cause at least three different, but closely related, entities of syndromic intellectual disability. Recently, a new syndrome caused by MED13L deleterious variants has been described, which shows similar clinical manifestations including intellectual disability, hypotonia, and other congenital anomalies. Genotyping of 1,256 genes related with neurodevelopment was performed by next-generation sequencing in three unrelated patients and their healthy parents. Clinically relevant findings were confirmed by conventional sequencing. Each patient showed one de novo variant not previously reported in the literature or databases. Two different missense variants were found in the MED12 or MED13L genes and one nonsense mutation was found in the MED13L gene. The phenotypic consequences of these mutations are closely related and/or have been previously reported in one or other gene. Additionally, MED12 and MED13L code for two closely related partners of the mediator kinase module. Consequently, we propose the concept of a common MED12/MED13L clinical spectrum, encompassing Opitz-Kaveggia syndrome, Lujan-Fryns syndrome, Ohdo syndrome, MED13L haploinsufficiency syndrome, and others.
Hewitt, Robin; Gobbi, Alberto; Lee, Man-Ling
2005-01-01
Relational databases are the current standard for storing and retrieving data in the pharmaceutical and biotech industries. However, retrieving data from a relational database requires specialized knowledge of the database schema and of the SQL query language. At Anadys, we have developed an easy-to-use system for searching and reporting data in a relational database to support our drug discovery project teams. This system is fast and flexible and allows users to access all data without having to write SQL queries. This paper presents the hierarchical, graph-based metadata representation and SQL-construction methods that, together, are the basis of this system's capabilities.
BioCarian: search engine for exploratory searches in heterogeneous biological databases.
Zaki, Nazar; Tennakoon, Chandana
2017-10-02
There are a large number of biological databases publicly available for scientists in the web. Also, there are many private databases generated in the course of research projects. These databases are in a wide variety of formats. Web standards have evolved in the recent times and semantic web technologies are now available to interconnect diverse and heterogeneous sources of data. Therefore, integration and querying of biological databases can be facilitated by techniques used in semantic web. Heterogeneous databases can be converted into Resource Description Format (RDF) and queried using SPARQL language. Searching for exact queries in these databases is trivial. However, exploratory searches need customized solutions, especially when multiple databases are involved. This process is cumbersome and time consuming for those without a sufficient background in computer science. In this context, a search engine facilitating exploratory searches of databases would be of great help to the scientific community. We present BioCarian, an efficient and user-friendly search engine for performing exploratory searches on biological databases. The search engine is an interface for SPARQL queries over RDF databases. We note that many of the databases can be converted to tabular form. We first convert the tabular databases to RDF. The search engine provides a graphical interface based on facets to explore the converted databases. The facet interface is more advanced than conventional facets. It allows complex queries to be constructed, and have additional features like ranking of facet values based on several criteria, visually indicating the relevance of a facet value and presenting the most important facet values when a large number of choices are available. For the advanced users, SPARQL queries can be run directly on the databases. Using this feature, users will be able to incorporate federated searches of SPARQL endpoints. We used the search engine to do an exploratory search on previously published viral integration data and were able to deduce the main conclusions of the original publication. BioCarian is accessible via http://www.biocarian.com . We have developed a search engine to explore RDF databases that can be used by both novice and advanced users.
Computer-Aided Systems Engineering for Flight Research Projects Using a Workgroup Database
NASA Technical Reports Server (NTRS)
Mizukami, Masahi
2004-01-01
An online systems engineering tool for flight research projects has been developed through the use of a workgroup database. Capabilities are implemented for typical flight research systems engineering needs in document library, configuration control, hazard analysis, hardware database, requirements management, action item tracking, project team information, and technical performance metrics. Repetitive tasks are automated to reduce workload and errors. Current data and documents are instantly available online and can be worked on collaboratively. Existing forms and conventional processes are used, rather than inventing or changing processes to fit the tool. An integrated tool set offers advantages by automatically cross-referencing data, minimizing redundant data entry, and reducing the number of programs that must be learned. With a simplified approach, significant improvements are attained over existing capabilities for minimal cost. By using a workgroup-level database platform, personnel most directly involved in the project can develop, modify, and maintain the system, thereby saving time and money. As a pilot project, the system has been used to support an in-house flight experiment. Options are proposed for developing and deploying this type of tool on a more extensive basis.
Chemical Informatics and the Drug Discovery Knowledge Pyramid
Lushington, Gerald H.; Dong, Yinghua; Theertham, Bhargav
2012-01-01
The magnitude of the challenges in preclinical drug discovery is evident in the large amount of capital invested in such efforts in pursuit of a small static number of eventually successful marketable therapeutics. An explosion in the availability of potentially drug-like compounds and chemical biology data on these molecules can provide us with the means to improve the eventual success rates for compounds being considered at the preclinical level, but only if the community is able to access available information in an efficient and meaningful way. Thus, chemical database resources are critical to any serious drug discovery effort. This paper explores the basic principles underlying the development and implementation of chemical databases, and examines key issues of how molecular information may be encoded within these databases so as to enhance the likelihood that users will be able to extract meaningful information from data queries. In addition to a broad survey of conventional data representation and query strategies, key enabling technologies such as new context-sensitive chemical similarity measures and chemical cartridges are examined, with recommendations on how such resources may be integrated into a practical database environment. PMID:23782037
NASA Astrophysics Data System (ADS)
Boichard, Jean-Luc; Brissebrat, Guillaume; Cloche, Sophie; Eymard, Laurence; Fleury, Laurence; Mastrorillo, Laurence; Moulaye, Oumarou; Ramage, Karim
2010-05-01
The AMMA project includes aircraft, ground-based and ocean measurements, an intensive use of satellite data and diverse modelling studies. Therefore, the AMMA database aims at storing a great amount and a large variety of data, and at providing the data as rapidly and safely as possible to the AMMA research community. In order to stimulate the exchange of information and collaboration between researchers from different disciplines or using different tools, the database provides a detailed description of the products and uses standardized formats. The AMMA database contains: - AMMA field campaigns datasets; - historical data in West Africa from 1850 (operational networks and previous scientific programs); - satellite products from past and future satellites, (re-)mapped on a regular latitude/longitude grid and stored in NetCDF format (CF Convention); - model outputs from atmosphere or ocean operational (re-)analysis and forecasts, and from research simulations. The outputs are processed as the satellite products are. Before accessing the data, any user has to sign the AMMA data and publication policy. This chart only covers the use of data in the framework of scientific objectives and categorically excludes the redistribution of data to third parties and the usage for commercial applications. Some collaboration between data producers and users, and the mention of the AMMA project in any publication is also required. The AMMA database and the associated on-line tools have been fully developed and are managed by two teams in France (IPSL Database Centre, Paris and OMP, Toulouse). Users can access data of both data centres using an unique web portal. This website is composed of different modules : - Registration: forms to register, read and sign the data use chart when an user visits for the first time - Data access interface: friendly tool allowing to build a data extraction request by selecting various criteria like location, time, parameters... The request can concern local, satellite and model data. - Documentation: catalogue of all the available data and their metadata. These tools have been developed using standard and free languages and softwares: - Linux system with an Apache web server and a Tomcat application server; - J2EE tools : JSF and Struts frameworks, hibernate; - relational database management systems: PostgreSQL and MySQL; - OpenLDAP directory. In order to facilitate the access to the data by African scientists, the complete system has been mirrored at AGHRYMET Regional Centre in Niamey and is operational there since January 2009. Users can now access metadata and request data through one or the other of two equivalent portals: http://database.amma-international.org or http://amma.agrhymet.ne/amma-data.
Games, Patrícia Dias; daSilva, Elói Quintas Gonçalves; Barbosa, Meire de Oliveira; Almeida-Souza, Hebréia Oliveira; Fontes, Patrícia Pereira; deMagalhães, Marcos Jorge; Pereira, Paulo Roberto Gomes; Prates, Maura Vianna; Franco, Gloria Regina; Faria-Campos, Alessandra; Campos, Sérgio Vale Aguiar; Baracat-Pereira, Maria Cristina
2016-12-15
Antimicrobial peptides from plants present mechanisms of action that are different from those of conventional defense agents. They are under-explored but have a potential as commercial antimicrobials. Bell pepper leaves ('Magali R') are discarded after harvesting the fruit and are sources of bioactive peptides. This work reports the isolation by peptidomics tools, and the identification and partially characterization by computational tools of an antimicrobial peptide from bell pepper leaves, and evidences the usefulness of records and the in silico analysis for the study of plant peptides aiming biotechnological uses. Aqueous extracts from leaves were enriched in peptide by salt fractionation and ultrafiltration. An antimicrobial peptide was isolated by tandem chromatographic procedures. Mass spectrometry, automated peptide sequencing and bioinformatics tools were used alternately for identification and partial characterization of the Hevein-like peptide, named HEV-CANN. The computational tools that assisted to the identification of the peptide included BlastP, PSI-Blast, ClustalOmega, PeptideCutter, and ProtParam; conventional protein databases (DB) as Mascot, Protein-DB, GenBank-DB, RefSeq, Swiss-Prot, and UniProtKB; specific for peptides DB as Amper, APD2, CAMP, LAMPs, and PhytAMP; other tools included in ExPASy for Proteomics; The Bioactive Peptide Databases, and The Pepper Genome Database. The HEV-CANN sequence presented 40 amino acid residues, 4258.8 Da, theoretical pI-value of 8.78, and four disulfide bonds. It was stable, and it has inhibited the growth of phytopathogenic bacteria and a fungus. HEV-CANN presented a chitin-binding domain in their sequence. There was a high identity and a positive alignment of HEV-CANN sequence in various databases, but there was not a complete identity, suggesting that HEV-CANN may be produced by ribosomal synthesis, which is in accordance with its constitutive nature. Computational tools for proteomics and databases are not adjusted for short sequences, which hampered HEV-CANN identification. The adjustment of statistical tests in large databases for proteins is an alternative to promote the significant identification of peptides. The development of specific DB for plant antimicrobial peptides, with information about peptide sequences, functional genomic data, structural motifs and domains of molecules, functional domains, and peptide-biomolecule interactions are valuable and necessary.
Kaltiainen, Janne; Lipponen, Jukka; Holtz, Brian C
2017-04-01
This study examines two fundamental concerns in the context of organizational change: employees' perceptions of merger process justice and cognitive trust in the top management team. Our main purpose is to better understand the nature of reciprocal relations between these important constructs through a significant change event. Previous research, building mainly on social exchange theory, has framed trust as a consequence of justice perceptions. More recently, scholars have suggested that this view may be overly simplistic and that trust-related cognitions may also represent an important antecedent of justice perceptions. Using 3-wave longitudinal survey data (N = 622) gathered during a merger process, we tested reciprocal relations over time between cognitive trust in the top management team and perceptions of the merger process justice. In contrast to the conventional unidirectional notion of trust or trust-related cognitions as outcomes of perceived justice, our results show positive reciprocal relations over time between cognitive trust and justice. Our findings also revealed that the positive influence of cognitive trust on subsequent justice perceptions was slightly more robust than the opposite direction. By examining cross-lagged longitudinal relations between these critical psychological reactions, this study contributes across multiple domains of the management literature including trust, justice, and organizational mergers. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
High fold computer disk storage DATABASE for fast extended analysis of γ-rays events
NASA Astrophysics Data System (ADS)
Stézowski, O.; Finck, Ch.; Prévost, D.
1999-03-01
Recently spectacular technical developments have been achieved to increase the resolving power of large γ-ray spectrometers. With these new eyes, physicists are able to study the intricate nature of atomic nuclei. Concurrently more and more complex multidimensional analyses are needed to investigate very weak phenomena. In this article, we first present a software (DATABASE) allowing high fold coincidences γ-rays events to be stored on hard disk. Then, a non-conventional method of analysis, anti-gating procedure, is described. Two physical examples are given to explain how it can be used and Monte Carlo simulations have been performed to test the validity of this method.
Preliminary assessment of the robustness of dynamic inversion based flight control laws
NASA Technical Reports Server (NTRS)
Snell, S. A.
1992-01-01
Dynamic-inversion-based flight control laws present an attractive alternative to conventional gain-scheduled designs for high angle-of-attack maneuvering, where nonlinearities dominate the dynamics. Dynamic inversion is easily applied to the aircraft dynamics requiring a knowledge of the nonlinear equations of motion alone, rather than an extensive set of linearizations. However, the robustness properties of the dynamic inversion are questionable especially when considering the uncertainties involved with the aerodynamic database during post-stall flight. This paper presents a simple analysis and some preliminary results of simulations with a perturbed database. It is shown that incorporating integrators into the control loops helps to improve the performance in the presence of these perturbations.
Why Save Your Course as a Relational Database?
ERIC Educational Resources Information Center
Hamilton, Gregory C.; Katz, David L.; Davis, James E.
2000-01-01
Describes a system that stores course materials for computer-based training programs in a relational database called Of Course! Outlines the basic structure of the databases; explains distinctions between Of Course! and other authoring languages; and describes how data is retrieved from the database and presented to the student. (Author/LRW)
Simple Logic for Big Problems: An Inside Look at Relational Databases.
ERIC Educational Resources Information Center
Seba, Douglas B.; Smith, Pat
1982-01-01
Discusses database design concept termed "normalization" (process replacing associations between data with associations in two-dimensional tabular form) which results in formation of relational databases (they are to computers what dictionaries are to spoken languages). Applications of the database in serials control and complex systems…
Relational Database Design in Information Science Education.
ERIC Educational Resources Information Center
Brooks, Terrence A.
1985-01-01
Reports on database management system (dbms) applications designed by library school students for university community at University of Iowa. Three dbms design issues are examined: synthesis of relations, analysis of relations (normalization procedure), and data dictionary usage. Database planning prior to automation using data dictionary approach…
NED and SIMBAD Conventions for Bibliographic Reference Coding
NASA Technical Reports Server (NTRS)
Schmitz, M.; Helou, G.; Dubois, P.; LaGue, C.; Madore, B.; Jr., H. G. Corwin; Lesteven, S.
1995-01-01
The primary purpose of the 'reference code' is to provide a unique and traceable representation of a bibliographic reference within the structure of each database. The code is used frequently in the interfaces as a succinct abbreviation of a full bibliographic reference. Since its inception, it has become a standard code not only for NED and SIMBAD, but also for other bibliographic services.
1993-01-01
H. Wegner for developing the tactical air and ground force databases and producing the campaign results. Thanks are also due to Group Captain Michael ... Jackson , RAF, for developing the evaluation criteria for NATO’s tactical air force reductions during his stay at RAND. -xi. CONTENTS PREFACE
The Evolution of Well-Being in Spain (1980?2001): A Regional Analysis
ERIC Educational Resources Information Center
Marchante, Andres J.; Sanchez, Jose; Ortega, Bienvenido
2006-01-01
Evaluations of regional welfare conventionally rely on Gross Value Added (GVA) per capital as an indicator of well-being. This paper attempts to re-address the regional welfare issue using alternative indicators to per capital income. With this aim, a database for the Spanish regions (NUTS II) has been constructed for the period 1980-2001 and an…
Ismail, Amin; Cheah, Sook Fun
2003-03-01
As consumer interest in organically grown vegetables is increasing in Malaysia, there is a need to answer whether the vegetables are more nutritious than those conventionally grown. This study investigates commercially available vegetables grown organically and conventionally, purchased from retailers to analyse β-carotene, vitamin C and riboflavin contents. Five types of green vegetables were selected, namely Chinese mustard (sawi) (Brassica juncea), Chinese kale (kai-lan) (Brassica alboglabra), lettuce (daun salad) (Lactuca sativa), spinach (bayam putih) (Amaranthus viridis) and swamp cabbage (kangkung) (Ipomoea aquatica). For vitamin analysis, a reverse-phase high performance liquid chromatography was used to identify and quantify β -carotene, vitamin C and riboflavin. The findings showed that not all of the organically grown vegetables were higher in vitamins than that conventionally grown. This study found that only swamp cabbage grown organically was highest in β -carotene, vitamin C and riboflavin contents among the entire samples studied. The various nutrients in organically grown vegetables need to be analysed for the generation of a database on nutritional value which is important for future research.
Utilization of design data on conventional system to building information modeling (BIM)
NASA Astrophysics Data System (ADS)
Akbar, Boyke M.; Z. R., Dewi Larasati
2017-11-01
Nowadays infrastructure development becomes one of the main priorities in the developed country such as Indonesia. The use of conventional design system is considered no longer effectively support the infrastructure projects, especially for the high complexity building design, due to its fragmented system issues. BIM comes as one of the solutions in managing projects in an integrated manner. Despite of the all known BIM benefits, there are some obstacles on the migration process to BIM. The two main of the obstacles are; the BIM implementation unpreparedness of some project parties and a concerns to leave behind the existing database and create a new one on the BIM system. This paper discusses the utilization probabilities of the existing CAD data from the conventional design system for BIM system. The existing conventional CAD data's and BIM design system output was studied to examine compatibility issues between two subject and followed by an utilization scheme-strategy probabilities. The goal of this study is to add project parties' eagerness in migrating to BIM by maximizing the existing data utilization and hopefully could also increase BIM based project workflow quality.
Conversion of KEGG metabolic pathways to SBGN maps including automatic layout
2013-01-01
Background Biologists make frequent use of databases containing large and complex biological networks. One popular database is the Kyoto Encyclopedia of Genes and Genomes (KEGG) which uses its own graphical representation and manual layout for pathways. While some general drawing conventions exist for biological networks, arbitrary graphical representations are very common. Recently, a new standard has been established for displaying biological processes, the Systems Biology Graphical Notation (SBGN), which aims to unify the look of such maps. Ideally, online repositories such as KEGG would automatically provide networks in a variety of notations including SBGN. Unfortunately, this is non‐trivial, since converting between notations may add, remove or otherwise alter map elements so that the existing layout cannot be simply reused. Results Here we describe a methodology for automatic translation of KEGG metabolic pathways into the SBGN format. We infer important properties of the KEGG layout and treat these as layout constraints that are maintained during the conversion to SBGN maps. Conclusions This allows for the drawing and layout conventions of SBGN to be followed while creating maps that are still recognizably the original KEGG pathways. This article details the steps in this process and provides examples of the final result. PMID:23953132
Chen, X; Zhou, H; Liu, Y B; Wang, J F; Li, H; Ung, C Y; Han, L Y; Cao, Z W; Chen, Y Z
2006-12-01
Traditional Chinese Medicine (TCM) is widely practised and is viewed as an attractive alternative to conventional medicine. Quantitative information about TCM prescriptions, constituent herbs and herbal ingredients is necessary for studying and exploring TCM. We manually collected information on TCM in books and other printed sources in Medline. The Traditional Chinese Medicine Information Database TCM-ID, at http://tcm.cz3.nus.edu.sg/group/tcm-id/tcmid.asp, was introduced for providing comprehensive information about all aspects of TCM including prescriptions, constituent herbs, herbal ingredients, molecular structure and functional properties of active ingredients, therapeutic and side effects, clinical indication and application and related matters. TCM-ID currently contains information for 1,588 prescriptions, 1,313 herbs, 5,669 herbal ingredients, and the 3D structure of 3,725 herbal ingredients. The value of the data in TCM-ID was illustrated by using some of the data for an in-silico study of molecular mechanism of the therapeutic effects of herbal ingredients and for developing a computer program to validate TCM multi-herb preparations. The development of systems biology has led to a new design principle for therapeutic intervention strategy, the concept of 'magic shrapnel' (rather than the 'magic bullet'), involving many drugs against multiple targets, administered in a single treatment. TCM offers an extensive source of examples of this concept in which several active ingredients in one prescription are aimed at numerous targets and work together to provide therapeutic benefit. The database and its mining applications described here represent early efforts toward exploring TCM for new theories in drug discovery.
The Cerrado (Brazil) plant cytogenetics database.
Roa, Fernando; Telles, Mariana Pires de Campos
2017-01-01
Cerrado is a biodiversity hotspot that has lost ca. 50% of its original vegetation cover and hosts ca. 11,000 species belonging to 1,423 genera of phanerogams. For a fraction of those species some cytogenetic characteristics like chromosome numbers and C-value were available in databases, while other valuable information such as karyotype formula and banding patterns are missing. In order to integrate and share all cytogenetic information published for Cerrado species, including frequency of cytogenetic attributes and scientometrics aspects, Cerrado plant species were searched in bibliographic sources, including the 50 richest genera (with more than 45 taxa) and 273 genera with only one species in Cerrado. Determination of frequencies and the database website (http://cyto.shinyapps.io/cerrado) were developed in R. Studies were pooled by employed technique and decade, showing a rise in non-conventional cytogenetics since 2000. However, C-value estimation, heterochromatin staining and molecular cytogenetics are still not common for any family. For the richest and best sampled families, the following modal 2n counts were observed: Oxalidaceae 2n = 12, Lythraceae 2n = 30, Sapindaceae 2n = 24, Solanaceae 2n = 24, Cyperaceae 2n = 10, Poaceae 2n = 20, Asteraceae 2n = 18 and Fabaceae 2n = 26. Chromosome number information is available for only 16.1% of species, while there are genome size data for only 1.25%, being lower than the global percentages. In general, genome sizes were small, ranging from 2C = ca. 1.5 to ca. 3.5 pg. Intra-specific 2n number variation and higher 2n counts were mainly related to polyploidy, which relates to the prevalence of even haploid numbers above the mode of 2n in most major plant clades. Several orphan genera with almost no cytogenetic studies for Cerrado were identified. This effort represents a complete diagnosis for cytogenetic attributes of plants of Cerrado.
The Cerrado (Brazil) plant cytogenetics database
Roa, Fernando; Telles, Mariana Pires de Campos
2017-01-01
Abstract Cerrado is a biodiversity hotspot that has lost ca. 50% of its original vegetation cover and hosts ca. 11,000 species belonging to 1,423 genera of phanerogams. For a fraction of those species some cytogenetic characteristics like chromosome numbers and C-value were available in databases, while other valuable information such as karyotype formula and banding patterns are missing. In order to integrate and share all cytogenetic information published for Cerrado species, including frequency of cytogenetic attributes and scientometrics aspects, Cerrado plant species were searched in bibliographic sources, including the 50 richest genera (with more than 45 taxa) and 273 genera with only one species in Cerrado. Determination of frequencies and the database website (http://cyto.shinyapps.io/cerrado) were developed in R. Studies were pooled by employed technique and decade, showing a rise in non-conventional cytogenetics since 2000. However, C-value estimation, heterochromatin staining and molecular cytogenetics are still not common for any family. For the richest and best sampled families, the following modal 2n counts were observed: Oxalidaceae 2n = 12, Lythraceae 2n = 30, Sapindaceae 2n = 24, Solanaceae 2n = 24, Cyperaceae 2n = 10, Poaceae 2n = 20, Asteraceae 2n = 18 and Fabaceae 2n = 26. Chromosome number information is available for only 16.1% of species, while there are genome size data for only 1.25%, being lower than the global percentages. In general, genome sizes were small, ranging from 2C = ca. 1.5 to ca. 3.5 pg. Intra-specific 2n number variation and higher 2n counts were mainly related to polyploidy, which relates to the prevalence of even haploid numbers above the mode of 2n in most major plant clades. Several orphan genera with almost no cytogenetic studies for Cerrado were identified. This effort represents a complete diagnosis for cytogenetic attributes of plants of Cerrado. PMID:28919965
Normand, A C; Packeu, A; Cassagne, C; Hendrickx, M; Ranque, S; Piarroux, R
2018-05-01
Conventional dermatophyte identification is based on morphological features. However, recent studies have proposed to use the nucleotide sequences of the rRNA internal transcribed spacer (ITS) region as an identification barcode of all fungi, including dermatophytes. Several nucleotide databases are available to compare sequences and thus identify isolates; however, these databases often contain mislabeled sequences that impair sequence-based identification. We evaluated five of these databases on a clinical isolate panel. We selected 292 clinical dermatophyte strains that were prospectively subjected to an ITS2 nucleotide sequence analysis. Sequences were analyzed against the databases, and the results were compared to clusters obtained via DNA alignment of sequence segments. The DNA tree served as the identification standard throughout the study. According to the ITS2 sequence identification, the majority of strains (255/292) belonged to the genus Trichophyton , mainly T. rubrum complex ( n = 184), T. interdigitale ( n = 40), T. tonsurans ( n = 26), and T. benhamiae ( n = 5). Other genera included Microsporum (e.g., M. canis [ n = 21], M. audouinii [ n = 10], Nannizzia gypsea [ n = 3], and Epidermophyton [ n = 3]). Species-level identification of T. rubrum complex isolates was an issue. Overall, ITS DNA sequencing is a reliable tool to identify dermatophyte species given that a comprehensive and correctly labeled database is consulted. Since many inaccurate identification results exist in the DNA databases used for this study, reference databases must be verified frequently and amended in line with the current revisions of fungal taxonomy. Before describing a new species or adding a new DNA reference to the available databases, its position in the phylogenetic tree must be verified. Copyright © 2018 American Society for Microbiology.
Yang, Yingxin; Ma, Qiu-Yan; Yang, Yue; He, Yu-Peng; Ma, Chao-Ting; Li, Qiang; Jin, Ming; Chen, Wei
2018-03-01
Primary open angle glaucoma (POAG) is a chronic, progressive optic neuropathy. The aim was to develop an evidence-based clinical practice guideline of Chinese herbal medicine (CHM) for POAG with focus on Chinese medicine pattern differentiation and treatment as well as approved herbal proprietary medicine. The guideline development group involved in various pieces of expertise in contents and methods. Authors searched electronic databases include CNKI, VIP, Sino-Med, Wanfang data, PubMed, the Cochrane Library, EMBASE, as well as checked China State Food and Drug Administration (SFDA) from the inception of these databases to June 30, 2015. Systematic reviews and randomized controlled trials of Chinese herbal medicine treating adults with POAG were evaluated. Risk of bias tool in the Cochrane Handbook and evidence strength developed by the GRADE group were applied for the evaluation, and recommendations were based on the findings incorporating evidence strength. After several rounds of Expert consensus, the final guideline was endorsed by relevant professional committees. CHM treatment principle and formulae based on pattern differentiation together with approved patent herbal medicines are the main treatments for POAG, and the diagnosis and treatment focusing on blood related patterns is the major domain. CHM therapy alone or combined with other conventional treatment reported in clinical studies together with Expert consensus were recommended for clinical practice.
BIOSPIDA: A Relational Database Translator for NCBI.
Hagen, Matthew S; Lee, Eva K
2010-11-13
As the volume and availability of biological databases continue widespread growth, it has become increasingly difficult for research scientists to identify all relevant information for biological entities of interest. Details of nucleotide sequences, gene expression, molecular interactions, and three-dimensional structures are maintained across many different databases. To retrieve all necessary information requires an integrated system that can query multiple databases with minimized overhead. This paper introduces a universal parser and relational schema translator that can be utilized for all NCBI databases in Abstract Syntax Notation (ASN.1). The data models for OMIM, Entrez-Gene, Pubmed, MMDB and GenBank have been successfully converted into relational databases and all are easily linkable helping to answer complex biological questions. These tools facilitate research scientists to locally integrate databases from NCBI without significant workload or development time.
Seo, Eun-Young; An, Sook Hee; Cho, Jang-Hee; Suh, Hae Sun; Park, Sun-Hee; Gwak, Hyesun; Kim, Yong-Lim; Ha, Hunjoo
2014-01-01
♦ Introduction: Residual renal function (RRF) plays an important role in outcome of peritoneal dialysis (PD) including mortality. It is, therefore, important to provide a strategy for the preservation of RRF. The objective of this study was to evaluate relative protective effects of new glucose-based multicompartmental PD solution (PDS), which is well known to be more biocompatible than glucose-based conventional PDS, on RRF compared to conventional PDS by performing a systematic review (SR) of randomized controlled trials. ♦ Methods: We searched studies presented up to January 2014 in MEDLINE, EMBASE, the COCHRANE library, and local databases. Three independent reviewers reviewed and extracted prespecified data from each study. The random effects model, a more conservative analysis model, was used to combine trials and to perform stratified analyses based on the duration of follow-up. Study quality was assessed using the Cochrane Handbook for risk of bias. Eleven articles with 1,034 patients were identified for the SR. ♦ Results: The heterogeneity of the studies under 12 months was very high, and the heterogeneity decreased substantially when we stratified studies by the duration of follow-up. The mean difference of the studies after 12 months was 0.46 mL/min/1.73 m2 (95% confidence interval = 0.25 to + 0.67). ♦ Conclusion: New PDS showed the effect to preserve and improve RRF for long-term use compared to conventional PDS, even though it did not show a significant difference to preserve RRF for short-term use. PMID:25185015
Regis, R R; Alves, C C S; Rocha, S S M; Negreiros, W A; Freitas-Pontes, K M
2016-10-01
The literature has questioned the real need for some clinical and laboratory procedures considered essential for achieving better results for complete denture fabrication. The aim of this study was to review the current literature concerning the relevance of a two-step impression procedure to achieve better clinical results in fabricating conventional complete dentures. Through an electronic search strategy of the PubMed/MEDLINE database, randomised controlled clinical trials which compared complete denture fabrication in adults in which one or two steps of impressions occurred were identified. The selections were made by three independent reviewers. Among the 540 titles initially identified, four studies (seven published papers) reporting on 257 patients evaluating aspects such as oral health-related quality of life, patient satisfaction with dentures in use, masticatory performance and chewing ability, denture quality, direct and indirect costs were considered eligible. The quality of included studies was assessed according to the Cochrane guidelines. The clinical studies considered for this review suggest that a two-step impression procedure may not be mandatory for the success of conventional complete denture fabrication regarding a variety of clinical aspects of denture quality and patients' perceptions of the treatment. © 2016 John Wiley & Sons Ltd.
Advances in Maize Genomics and Their Value for Enhancing Genetic Gains from Breeding
Xu, Yunbi; Skinner, Debra J.; Wu, Huixia; Palacios-Rojas, Natalia; Araus, Jose Luis; Yan, Jianbing; Gao, Shibin; Warburton, Marilyn L.; Crouch, Jonathan H.
2009-01-01
Maize is an important crop for food, feed, forage, and fuel across tropical and temperate areas of the world. Diversity studies at genetic, molecular, and functional levels have revealed that, tropical maize germplasm, landraces, and wild relatives harbor a significantly wider range of genetic variation. Among all types of markers, SNP markers are increasingly the marker-of-choice for all genomics applications in maize breeding. Genetic mapping has been developed through conventional linkage mapping and more recently through linkage disequilibrium-based association analyses. Maize genome sequencing, initially focused on gene-rich regions, now aims for the availability of complete genome sequence. Conventional insertion mutation-based cloning has been complemented recently by EST- and map-based cloning. Transgenics and nutritional genomics are rapidly advancing fields targeting important agronomic traits including pest resistance and grain quality. Substantial advances have been made in methodologies for genomics-assisted breeding, enhancing progress in yield as well as abiotic and biotic stress resistances. Various genomic databases and informatics tools have been developed, among which MaizeGDB is the most developed and widely used by the maize research community. In the future, more emphasis should be given to the development of tools and strategic germplasm resources for more effective molecular breeding of tropical maize products. PMID:19688107
NASA Astrophysics Data System (ADS)
Tang, Zhongqian; Zhang, Hua; Yi, Shanzhen; Xiao, Yangfan
2018-03-01
GIS-based multi-criteria decision analysis (MCDA) is increasingly used to support flood risk assessment. However, conventional GIS-MCDA methods fail to adequately represent spatial variability and are accompanied with considerable uncertainty. It is, thus, important to incorporate spatial variability and uncertainty into GIS-based decision analysis procedures. This research develops a spatially explicit, probabilistic GIS-MCDA approach for the delineation of potentially flood susceptible areas. The approach integrates the probabilistic and the local ordered weighted averaging (OWA) methods via Monte Carlo simulation, to take into account the uncertainty related to criteria weights, spatial heterogeneity of preferences and the risk attitude of the analyst. The approach is applied to a pilot study for the Gucheng County, central China, heavily affected by the hazardous 2012 flood. A GIS database of six geomorphological and hydrometeorological factors for the evaluation of susceptibility was created. Moreover, uncertainty and sensitivity analysis were performed to investigate the robustness of the model. The results indicate that the ensemble method improves the robustness of the model outcomes with respect to variation in criteria weights and identifies which criteria weights are most responsible for the variability of model outcomes. Therefore, the proposed approach is an improvement over the conventional deterministic method and can provides a more rational, objective and unbiased tool for flood susceptibility evaluation.
System, method and apparatus for conducting a phrase search
NASA Technical Reports Server (NTRS)
McGreevy, Michael W. (Inventor)
2004-01-01
A phrase search is a method of searching a database for subsets of the database that are relevant to an input query. First, a number of relational models of subsets of a database are provided. A query is then input. The query can include one or more sequences of terms. Next, a relational model of the query is created. The relational model of the query is then compared to each one of the relational models of subsets of the database. The identifiers of the relevant subsets are then output.
NASA Astrophysics Data System (ADS)
Nakagawa, Y.; Kawahara, S.; Araki, F.; Matsuoka, D.; Ishikawa, Y.; Fujita, M.; Sugimoto, S.; Okada, Y.; Kawazoe, S.; Watanabe, S.; Ishii, M.; Mizuta, R.; Murata, A.; Kawase, H.
2017-12-01
Analyses of large ensemble data are quite useful in order to produce probabilistic effect projection of climate change. Ensemble data of "+2K future climate simulations" are currently produced by Japanese national project "Social Implementation Program on Climate Change Adaptation Technology (SI-CAT)" as a part of a database for Policy Decision making for Future climate change (d4PDF; Mizuta et al. 2016) produced by Program for Risk Information on Climate Change. Those data consist of global warming simulations and regional downscaling simulations. Considering that those data volumes are too large (a few petabyte) to download to a local computer of users, a user-friendly system is required to search and download data which satisfy requests of the users. We develop "a database system for near-future climate change projections" for providing functions to find necessary data for the users under SI-CAT. The database system for near-future climate change projections mainly consists of a relational database, a data download function and user interface. The relational database using PostgreSQL is a key function among them. Temporally and spatially compressed data are registered on the relational database. As a first step, we develop the relational database for precipitation, temperature and track data of typhoon according to requests by SI-CAT members. The data download function using Open-source Project for a Network Data Access Protocol (OPeNDAP) provides a function to download temporally and spatially extracted data based on search results obtained by the relational database. We also develop the web-based user interface for using the relational database and the data download function. A prototype of the database system for near-future climate change projections are currently in operational test on our local server. The database system for near-future climate change projections will be released on Data Integration and Analysis System Program (DIAS) in fiscal year 2017. Techniques of the database system for near-future climate change projections might be quite useful for simulation and observational data in other research fields. We report current status of development and some case studies of the database system for near-future climate change projections.
Yi, Jianru; Li, Meile; Li, Yu; Li, Xiaobing; Zhao, Zhihe
2016-11-21
The aim of this study was to compare the external apical root resorption (EARR) in patients receiving fixed orthodontic treatment with self-ligating or conventional brackets. Studies comparing the EARR between orthodontic patients using self-ligating or conventional brackets were identified through electronic search in databases including CENTRAL, PubMed, EMBASE, China National Knowledge Infrastructure (CNKI) and SIGLE, and manual search in relevant journals and reference lists of the included studies until Apr 2016. The extraction of data and risk of bias evaluation were conducted by two investigators independently. The original outcome underwent statistical pooling by using Review Manager 5. Seven studies were included in the systematic review, out of which, five studies were statistically pooled in meta-analysis. The value of EARR of maxillary central incisors in the self-ligating bracket group was significantly lower than that in the conventional bracket group (SMD -0.31; 95% CI: -0.60--0.01). No significant differences in other incisors were observed between self-ligating and conventional brackets. Current evidences suggest self-ligating brackets do not outperform conventional brackets in reducing the EARR in maxillary lateral incisors, mandible central incisors and mandible lateral incisors. However, self-ligating brackets appear to have an advantage in protecting maxillary central incisor from EARR, which still needs to be confirmed by more high-quality studies.
Zhou, Zheng; Dai, Cong; Liu, Wei-Xin
2015-01-01
TNF-α has an important role in the pathogenesis of ulcerative colitis (UC). It seems that anti-TNF-α therapy is beneficial in the treatment of UC. The aim was to assess the effectiveness of Infliximab and Adalimamab with UC compared with conventional therapy. The Pubmed and Embase databases were searched for studies investigating the efficacy of infliximab and adalimumab on UC. Infliximab had a statistically significant effects in induction of clinical response (RR = 1.67; 95% CI 1.12 to 2.50) of UC compared with conventional therapy, but those had not a statistically significant effects in clinical remission (RR = 1.63; 95% CI 0.84 to 3.18) and reduction of colectomy rate (RR = 0.54; 95% CI 0.26 to 1.12) of UC. And adalimumab had a statistically significant effects in induction of clinical remission (RR = 1.82; 95% CI 1.24 to 2.67) and clinical response (RR = 1.36; 95% CI 1.13 to 1.64) of UC compared with conventional therapy. Our meta-analyses suggested that Infliximab had a statistically significant effects in induction of clinical response of UC compared with conventional therapy and adalimumab had a statistically significant effects in induction of clinical remission and clinical response of UC compared with conventional therapy.
Shilov, Ignat V; Seymour, Sean L; Patel, Alpesh A; Loboda, Alex; Tang, Wilfred H; Keating, Sean P; Hunter, Christie L; Nuwaysir, Lydia M; Schaeffer, Daniel A
2007-09-01
The Paragon Algorithm, a novel database search engine for the identification of peptides from tandem mass spectrometry data, is presented. Sequence Temperature Values are computed using a sequence tag algorithm, allowing the degree of implication by an MS/MS spectrum of each region of a database to be determined on a continuum. Counter to conventional approaches, features such as modifications, substitutions, and cleavage events are modeled with probabilities rather than by discrete user-controlled settings to consider or not consider a feature. The use of feature probabilities in conjunction with Sequence Temperature Values allows for a very large increase in the effective search space with only a very small increase in the actual number of hypotheses that must be scored. The algorithm has a new kind of user interface that removes the user expertise requirement, presenting control settings in the language of the laboratory that are translated to optimal algorithmic settings. To validate this new algorithm, a comparison with Mascot is presented for a series of analogous searches to explore the relative impact of increasing search space probed with Mascot by relaxing the tryptic digestion conformance requirements from trypsin to semitrypsin to no enzyme and with the Paragon Algorithm using its Rapid mode and Thorough mode with and without tryptic specificity. Although they performed similarly for small search space, dramatic differences were observed in large search space. With the Paragon Algorithm, hundreds of biological and artifact modifications, all possible substitutions, and all levels of conformance to the expected digestion pattern can be searched in a single search step, yet the typical cost in search time is only 2-5 times that of conventional small search space. Despite this large increase in effective search space, there is no drastic loss of discrimination that typically accompanies the exploration of large search space.
NGSmethDB 2017: enhanced methylomes and differential methylation
Lebrón, Ricardo; Gómez-Martín, Cristina; Carpena, Pedro; Bernaola-Galván, Pedro; Barturen, Guillermo; Hackenberg, Michael; Oliver, José L.
2017-01-01
The 2017 update of NGSmethDB stores whole genome methylomes generated from short-read data sets obtained by bisulfite sequencing (WGBS) technology. To generate high-quality methylomes, stringent quality controls were integrated with third-part software, adding also a two-step mapping process to exploit the advantages of the new genome assembly models. The samples were all profiled under constant parameter settings, thus enabling comparative downstream analyses. Besides a significant increase in the number of samples, NGSmethDB now includes two additional data-types, which are a valuable resource for the discovery of methylation epigenetic biomarkers: (i) differentially methylated single-cytosines; and (ii) methylation segments (i.e. genome regions of homogeneous methylation). The NGSmethDB back-end is now based on MongoDB, a NoSQL hierarchical database using JSON-formatted documents and dynamic schemas, thus accelerating sample comparative analyses. Besides conventional database dumps, track hubs were implemented, which improved database access, visualization in genome browsers and comparative analyses to third-part annotations. In addition, the database can be also accessed through a RESTful API. Lastly, a Python client and a multiplatform virtual machine allow for program-driven access from user desktop. This way, private methylation data can be compared to NGSmethDB without the need to upload them to public servers. Database website: http://bioinfo2.ugr.es/NGSmethDB. PMID:27794041
Component, Context and Manufacturing Model Library (C2M2L)
2013-03-01
Penn State team were stored in a relational database for easy access, storage and maintainability. The relational database consisted of a PostGres ...file into a format that can be imported into the PostGres database. This same custom application was used to generate Microsoft Excel templates...Press Break Forming Equipment 4.14 Manufacturing Model Library Database Structure The data storage mechanism for the ARL PSU MML was a PostGres database
PACSY, a relational database management system for protein structure and chemical shift analysis.
Lee, Woonghee; Yu, Wookyung; Kim, Suhkmann; Chang, Iksoo; Lee, Weontae; Markley, John L
2012-10-01
PACSY (Protein structure And Chemical Shift NMR spectroscopY) is a relational database management system that integrates information from the Protein Data Bank, the Biological Magnetic Resonance Data Bank, and the Structural Classification of Proteins database. PACSY provides three-dimensional coordinates and chemical shifts of atoms along with derived information such as torsion angles, solvent accessible surface areas, and hydrophobicity scales. PACSY consists of six relational table types linked to one another for coherence by key identification numbers. Database queries are enabled by advanced search functions supported by an RDBMS server such as MySQL or PostgreSQL. PACSY enables users to search for combinations of information from different database sources in support of their research. Two software packages, PACSY Maker for database creation and PACSY Analyzer for database analysis, are available from http://pacsy.nmrfam.wisc.edu.
BIOSPIDA: A Relational Database Translator for NCBI
Hagen, Matthew S.; Lee, Eva K.
2010-01-01
As the volume and availability of biological databases continue widespread growth, it has become increasingly difficult for research scientists to identify all relevant information for biological entities of interest. Details of nucleotide sequences, gene expression, molecular interactions, and three-dimensional structures are maintained across many different databases. To retrieve all necessary information requires an integrated system that can query multiple databases with minimized overhead. This paper introduces a universal parser and relational schema translator that can be utilized for all NCBI databases in Abstract Syntax Notation (ASN.1). The data models for OMIM, Entrez-Gene, Pubmed, MMDB and GenBank have been successfully converted into relational databases and all are easily linkable helping to answer complex biological questions. These tools facilitate research scientists to locally integrate databases from NCBI without significant workload or development time. PMID:21347013
Tansaz, Mojgan; Nazemiyeh, Hossein; Fazljou, Seyed Mohammad Bagher
2018-01-01
Introduction Menstrual bleeding cessation is one of the most frequent gynecologic disorders among women in reproductive age. The treatment is based on hormone therapy. Due to the increasing request for alternative medicine remedies in the field of women's diseases, in present study, it was tried to overview medicinal plants used to treat oligomenorrhea and amenorrhea according to the pharmaceutical textbooks of traditional Persian medicine (TPM) and review the evidence in the conventional medicine. Methods This systematic review was designed and performed in 2017 in order to gather information regarding herbal medications of oligomenorrhea and amenorrhea in TPM and conventional medicine. This study had several steps as searching Iranian traditional medicine literature and extracting the emmenagogue plants, classifying the plants, searching the electronic databases, and finding evidences. To search traditional Persian medicine references, Noor digital library was used, which includes several ancient traditional medical references. The classification of plants was done based on the repetition and potency of the plants in the ancient literatures. The required data was gathered using databases such as PubMed, Scopus, Google Scholar, Cochrane Library, Science Direct, and web of knowledge. Results In present study of all 198 emmenagogue medicinal plants found in TPM, 87 cases were specified to be more effective in treating oligomenorrhea and amenorrhea. In second part of present study, where a search of conventional medicine was performed, 12 studies were found, which had 8 plants investigated: Vitex agnus-castus, Trigonella foenum-graecum, Foeniculum vulgare, Cinnamomum verum, Paeonia lactiflora, Sesamum indicum, Mentha longifolia, and Urtica dioica. Conclusion. Traditional Persian medicine has proposed many different medicinal plants for treatment of oligomenorrhea and amenorrhea. Although just few plants have been proven to be effective for treatment of menstrual irregularities, the results and the classification in present study can be used as an outline for future studies and treatment. PMID:29744355
Moini Jazani, Arezoo; Hamdi, Kobra; Tansaz, Mojgan; Nazemiyeh, Hossein; Sadeghi Bazargani, Homayoun; Fazljou, Seyed Mohammad Bagher; Nasimi Doost Azgomi, Ramin
2018-01-01
Menstrual bleeding cessation is one of the most frequent gynecologic disorders among women in reproductive age. The treatment is based on hormone therapy. Due to the increasing request for alternative medicine remedies in the field of women's diseases, in present study, it was tried to overview medicinal plants used to treat oligomenorrhea and amenorrhea according to the pharmaceutical textbooks of traditional Persian medicine (TPM) and review the evidence in the conventional medicine. This systematic review was designed and performed in 2017 in order to gather information regarding herbal medications of oligomenorrhea and amenorrhea in TPM and conventional medicine. This study had several steps as searching Iranian traditional medicine literature and extracting the emmenagogue plants, classifying the plants, searching the electronic databases, and finding evidences. To search traditional Persian medicine references, Noor digital library was used, which includes several ancient traditional medical references. The classification of plants was done based on the repetition and potency of the plants in the ancient literatures. The required data was gathered using databases such as PubMed, Scopus, Google Scholar, Cochrane Library, Science Direct, and web of knowledge. In present study of all 198 emmenagogue medicinal plants found in TPM, 87 cases were specified to be more effective in treating oligomenorrhea and amenorrhea. In second part of present study, where a search of conventional medicine was performed, 12 studies were found, which had 8 plants investigated: Vitex agnus-castus, Trigonella foenum-graecum, Foeniculum vulgare, Cinnamomum verum, Paeonia lactiflora, Sesamum indicum, Mentha longifolia, and Urtica dioica. Conclusion . Traditional Persian medicine has proposed many different medicinal plants for treatment of oligomenorrhea and amenorrhea. Although just few plants have been proven to be effective for treatment of menstrual irregularities, the results and the classification in present study can be used as an outline for future studies and treatment.
Mackey, Aaron J; Pearson, William R
2004-10-01
Relational databases are designed to integrate diverse types of information and manage large sets of search results, greatly simplifying genome-scale analyses. Relational databases are essential for management and analysis of large-scale sequence analyses, and can also be used to improve the statistical significance of similarity searches by focusing on subsets of sequence libraries most likely to contain homologs. This unit describes using relational databases to improve the efficiency of sequence similarity searching and to demonstrate various large-scale genomic analyses of homology-related data. This unit describes the installation and use of a simple protein sequence database, seqdb_demo, which is used as a basis for the other protocols. These include basic use of the database to generate a novel sequence library subset, how to extend and use seqdb_demo for the storage of sequence similarity search results and making use of various kinds of stored search results to address aspects of comparative genomic analysis.
ERIC Educational Resources Information Center
Rice, Michael; Gladstone, William; Weir, Michael
2004-01-01
We discuss how relational databases constitute an ideal framework for representing and analyzing large-scale genomic data sets in biology. As a case study, we describe a Drosophila splice-site database that we recently developed at Wesleyan University for use in research and teaching. The database stores data about splice sites computed by a…
SQL is Dead; Long-live SQL: Relational Database Technology in Science Contexts
NASA Astrophysics Data System (ADS)
Howe, B.; Halperin, D.
2014-12-01
Relational databases are often perceived as a poor fit in science contexts: Rigid schemas, poor support for complex analytics, unpredictable performance, significant maintenance and tuning requirements --- these idiosyncrasies often make databases unattractive in science contexts characterized by heterogeneous data sources, complex analysis tasks, rapidly changing requirements, and limited IT budgets. In this talk, I'll argue that although the value proposition of typical relational database systems are weak in science, the core ideas that power relational databases have become incredibly prolific in open source science software, and are emerging as a universal abstraction for both big data and small data. In addition, I'll talk about two open source systems we are building to "jailbreak" the core technology of relational databases and adapt them for use in science. The first is SQLShare, a Database-as-a-Service system supporting collaborative data analysis and exchange by reducing database use to an Upload-Query-Share workflow with no installation, schema design, or configuration required. The second is Myria, a service that supports much larger scale data, complex analytics, and supports multiple back end systems. Finally, I'll describe some of the ways our collaborators in oceanography, astronomy, biology, fisheries science, and more are using these systems to replace script-based workflows for reasons of performance, flexibility, and convenience.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Definitions. 99.1 Section 99.1 Foreign Relations DEPARTMENT OF STATE LEGAL AND RELATED SERVICES REPORTING ON CONVENTION AND NON-CONVENTION ADOPTIONS OF EMIGRATING CHILDREN § 99.1 Definitions. As used in this part, the term: (a) Convention means...
EasyKSORD: A Platform of Keyword Search Over Relational Databases
NASA Astrophysics Data System (ADS)
Peng, Zhaohui; Li, Jing; Wang, Shan
Keyword Search Over Relational Databases (KSORD) enables casual users to use keyword queries (a set of keywords) to search relational databases just like searching the Web, without any knowledge of the database schema or any need of writing SQL queries. Based on our previous work, we design and implement a novel KSORD platform named EasyKSORD for users and system administrators to use and manage different KSORD systems in a novel and simple manner. EasyKSORD supports advanced queries, efficient data-graph-based search engines, multiform result presentations, and system logging and analysis. Through EasyKSORD, users can search relational databases easily and read search results conveniently, and system administrators can easily monitor and analyze the operations of KSORD and manage KSORD systems much better.
“NaKnowBase”: A Nanomaterials Relational Database
NaKnowBase is an internal relational database populated with data from peer-reviewed ORD nanomaterials research publications. The database focuses on papers describing the actions of nanomaterials in environmental or biological media including their interactions, transformations...
ERIC Educational Resources Information Center
Takusi, Gabriel Samuto
2010-01-01
This quantitative analysis explored the intrinsic and extrinsic turnover factors of relational database support specialists. Two hundred and nine relational database support specialists were surveyed for this research. The research was conducted based on Hackman and Oldham's (1980) Job Diagnostic Survey. Regression analysis and a univariate ANOVA…
Morales-Gomez, S; Elizagaray-Garcia, I; Yepes-Rojas, O; de la Puente-Ranea, L; Gil-Martinez, A
2018-02-01
Parkinson disease (PD) is the second most common neurodegenerative disease. Virtual reality (VR) is being used in rehabilitation of neurological patients. To analyze the VR systems' therapeutically effectiveness through PD diagnosed subjects with variables of motor, quality of life and cognition. Electronics database were used to look for articles: Medline, EMBASE, PEDro, CINAHL and Cochrane. The inclusion criteria were: randomized control trial (RCT) performed in PD with at least one VR variable included in the therapeutically treatment and diagnosed PD subjects. Four RCT were chosen showing all good methodology quality. Concordance between evaluators was moderate-high. VR was the main treatment in all of them. VR was more effective in balance improvement in PD subjects than conventional physiotherapy in two RCT. VR was not more effective in balance improvement in PD subjects than conventional physiotherapy in two RCT. Contradictory evidences where showed between the effectiveness of the VR programs versus conventional programs in the effectiveness of balance treatment with PD subjects. Non-motor variables improvement was not greater in subjects with VR treatments versus the ones with conventional physiotherapy in the four RCT. The treatments with VR cannot be assumed as more effectives than conventional physiotherapy through PD subjects in motor and psychosocial variables.
Xu, Aili; Du, Hongbo
2017-01-01
Objective This aim is to evaluate the effect of Sijunzi decoction (SJZD) treating chronic atrophic gastritis (CAG). Methods We performed searches in seven databases. The randomized controlled trials (RCTs) comparing SJZD with standard medical care or inactive intervention for CAG were enrolled. Combined therapy of SJZD plus conventional therapies compared with conventional therapies alone was also retrieved. The primary outcome included the incidence of gastric cancer and the improvement of atrophy, intestinal metaplasia, and dysplasia based on the gastroscopy and pathology. The secondary outcomes were Helicobacter pylori clearance rate, quality of life, and adverse event/adverse drug reaction. Results Six RCTs met the inclusion criteria. The research quality was low in the trials. For the overall effect rate, pooled analysis from 4 trials showed that modified SJZD plus conventional medications exhibited a significant improvement (OR = 4.86; 95% CI: 2.80 to 8.44; P < 0.00001) and without significant heterogeneity compared with the conventional medications alone. None reported the adverse effect. Conclusions Modified SJZD combined with conventional western medicines appears to have benefits for CAG. Due to the limited number and methodological flaw, the beneficial and harmful effects of SJZD for CAG could not be identified. More high-quality clinical trials are needed to confirm the results. PMID:29138645
NASA Astrophysics Data System (ADS)
East, J. A., II
2016-12-01
The U.S. Geological Survey's (USGS) Eastern Energy Resources Science Center (EERSC) has an ongoing project which has mapped coal chemistry and stratigraphy since 1977. Over the years, the USGS has collected various forms of coal data and archived that data into the National Coal Resources Data System (NCRDS) database. NCRDS is a repository that houses data from the major coal basins in the United States and includes information on location, seam thickness, coal rank, geologic age, geographic region, geologic province, coalfield, and characteristics of the coal or lithology for that data point. These data points can be linked to the US Coal Quality Database (COALQUAL) to include ultimate, proximate, major, minor and trace-element data. Although coal is an inexpensive energy provider, the United States has shifted away from coal usage recently and branched out into other forms of non-renewable and renewable energy because of environmental concerns. NCRDS's primary method of data capture has been USGS field work coupled with cooperative agreements with state geological agencies and universities doing coal-related research. These agreements are on competitive five-year cycles that have evolved into larger scope research efforts including solid fuel resources such as coal-bed methane, shale gas and oil. Recently these efforts have expanded to include environmental impacts of the use of fossil fuels, which has allowed the USGS to enter into agreements with states for the Geologic CO2 Storage Resources Assessment as required by the Energy Independence and Security Act. In 2016 they expanded into research areas to include geothermal, conventional and unconventional oil and gas. The NCRDS and COALQUAL databases are now online for the public to use, and are in the process of being updated to include new data for other energy resources. Along with this expansion of scope, the database name will change to the National Energy Resources Data System (NERDS) in FY 2017.
Lohse, Keith R; Pathania, Anupriya; Wegman, Rebecca; Boyd, Lara A; Lang, Catherine E
2018-03-01
To use the Centralized Open-Access Rehabilitation database for Stroke to explore reporting of both experimental and control interventions in randomized controlled trials for stroke rehabilitation (including upper and lower extremity therapies). The Centralized Open-Access Rehabilitation database for Stroke was created from a search of MEDLINE, Embase, Cochrane Central Register of Controlled Trials, Cochrane Database of Systematic Reviews, and Cumulative Index of Nursing and Allied Health from the earliest available date to May 31, 2014. A total of 2892 titles were reduced to 514 that were screened by full text. This screening left 215 randomized controlled trials in the database (489 independent groups representing 12,847 patients). Using a mixture of qualitative and quantitative methods, we performed a text-based analysis of how the procedures of experimental and control therapies were described. Experimental and control groups were rated by 2 independent coders according to the Template for Intervention Description and Replication criteria. Linear mixed-effects regression with a random effect of study (groups nested within studies) showed that experimental groups had statistically more words in their procedures (mean, 271.8 words) than did control groups (mean, 154.8 words) (P<.001). Experimental groups had statistically more references in their procedures (mean, 1.60 references) than did control groups (mean, .82 references) (P<.001). Experimental groups also scored significantly higher on the total Template for Intervention Description and Replication checklist (mean score, 7.43 points) than did control groups (mean score, 5.23 points) (P<.001). Control treatments in stroke motor rehabilitation trials are underdescribed relative to experimental treatments. These poor descriptions are especially problematic for "conventional" therapy control groups. Poor reporting is a threat to the internal validity and generalizability of clinical trial results. We recommend authors use preregistered protocols and established reporting criteria to improve transparency. Copyright © 2018 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
Li, Guo-Zhong; Vissers, Johannes P C; Silva, Jeffrey C; Golick, Dan; Gorenstein, Marc V; Geromanos, Scott J
2009-03-01
A novel database search algorithm is presented for the qualitative identification of proteins over a wide dynamic range, both in simple and complex biological samples. The algorithm has been designed for the analysis of data originating from data independent acquisitions, whereby multiple precursor ions are fragmented simultaneously. Measurements used by the algorithm include retention time, ion intensities, charge state, and accurate masses on both precursor and product ions from LC-MS data. The search algorithm uses an iterative process whereby each iteration incrementally increases the selectivity, specificity, and sensitivity of the overall strategy. Increased specificity is obtained by utilizing a subset database search approach, whereby for each subsequent stage of the search, only those peptides from securely identified proteins are queried. Tentative peptide and protein identifications are ranked and scored by their relative correlation to a number of models of known and empirically derived physicochemical attributes of proteins and peptides. In addition, the algorithm utilizes decoy database techniques for automatically determining the false positive identification rates. The search algorithm has been tested by comparing the search results from a four-protein mixture, the same four-protein mixture spiked into a complex biological background, and a variety of other "system" type protein digest mixtures. The method was validated independently by data dependent methods, while concurrently relying on replication and selectivity. Comparisons were also performed with other commercially and publicly available peptide fragmentation search algorithms. The presented results demonstrate the ability to correctly identify peptides and proteins from data independent acquisition strategies with high sensitivity and specificity. They also illustrate a more comprehensive analysis of the samples studied; providing approximately 20% more protein identifications, compared to a more conventional data directed approach using the same identification criteria, with a concurrent increase in both sequence coverage and the number of modified peptides.
NASA Astrophysics Data System (ADS)
Ferroud, Anouck; Chesnaux, Romain; Rafini, Silvain
2018-01-01
The flow dimension parameter n, derived from the Generalized Radial Flow model, is a valuable tool to investigate the actual flow regimes that really occur during a pumping test rather than suppose them to be radial, as postulated by the Theis-derived models. A numerical approach has shown that, when the flow dimension is not radial, using the derivative analysis rather than the conventional Theis and Cooper-Jacob methods helps to estimate much more accurately the hydraulic conductivity of the aquifer. Although n has been analysed in numerous studies including field-based studies, there is a striking lack of knowledge about its occurrence in nature and how it may be related to the hydrogeological setting. This study provides an overview of the occurrence of n in natural aquifers located in various geological contexts including crystalline rock, carbonate rock and granular aquifers. A comprehensive database is compiled from governmental and industrial sources, based on 69 constant-rate pumping tests. By means of a sequential analysis approach, we systematically performed a flow dimension analysis in which straight segments on drawdown-log derivative time series are interpreted as successive, specific and independent flow regimes. To reduce the uncertainties inherent in the identification of n sequences, we used the proprietary SIREN code to execute a dual simultaneous fit on both the drawdown and the drawdown-log derivative signals. Using the stated database, we investigate the frequency with which the radial and non-radial flow regimes occur in fractured rock and granular aquifers, and also provide outcomes that indicate the lack of applicability of Theis-derived models in representing nature. The results also emphasize the complexity of hydraulic signatures observed in nature by pointing out n sequential signals and non-integer n values that are frequently observed in the database.
Flynn, A N; Lyndon, C A; Church, D L
2013-08-01
A case of Actinomyces hongkongensis pelvic actinomycosis in an adult woman is described. Conventional phenotypic tests failed to identify the Gram-positive bacillus isolated from a fluid aspirate of a pelvic abscess. The bacterium was identified by 16S rRNA gene sequencing and analysis using the SmartGene Integrated Database Network System software.
3D automatic Cartesian grid generation for Euler flows
NASA Technical Reports Server (NTRS)
Melton, John E.; Enomoto, Francis Y.; Berger, Marsha J.
1993-01-01
We describe a Cartesian grid strategy for the study of three dimensional inviscid flows about arbitrary geometries that uses both conventional and CAD/CAM surface geometry databases. Initial applications of the technique are presented. The elimination of the body-fitted constraint allows the grid generation process to be automated, significantly reducing the time and effort required to develop suitable computational grids for inviscid flowfield simulations.
Advanced Data Format (ADF) Software Library and Users Guide
NASA Technical Reports Server (NTRS)
Smith, Matthew; Smith, Charles A. (Technical Monitor)
1998-01-01
The "CFD General Notation System" (CGNS) consists of a collection of conventions, and conforming software, for the storage and retrieval of Computational Fluid Dynamics (CFD) data. It facilitates the exchange of data between sites and applications, and helps stabilize the archiving of aerodynamic data. This effort was initiated in order to streamline the procedures in exchanging data and software between NASA and its customers, but the goal is to develop CGNS into a National Standard for the exchange of aerodynamic data. The CGNS development team is comprised of members from Boeing Commercial. Airplane Group, NASA-Ames, NASA-Langley, NASA-Lewis, McDonnell-Douglas Corporation (now Boeing-St. Louis), Air Force-Wright Lab., and ICEM-CFD Engineering. The elements of CGNS address all activities associated with the storage of data on external media and its movement to and from application programs. These elements include: 1) The Advanced Data Format (ADF) Database manager, consisting of both a file format specification and its 1/0 software, which handles the actual reading and writing of data from and to external storage media; 2) The Standard Interface Data Structures (SIDS), which specify the intellectual content of CFD data and the conventions governing naming and terminology; 3) The SIDS-to-ADF File Mapping conventions, which specify the exact location where the CFD data defined by the SIDS is to be stored within the ADF file(s); and 4) The CGNS Mid-level Library, which provides CFD-knowledgeable routines suitable for direct installation into application codes. The ADF is a generic database manager with minimal intrinsic capability. It was written for the purpose of storing large numerical datasets in an efficient, platform independent manner. To be effective, it must be used in conjunction with external agreements on how the data will be organized within the ADF database such defined by the SIDS. There are currently 34 user callable functions that comprise the ADF Core library and are described in the Users Guide. The library is written in C, but each function has a FORTRAN counterpart.
“NaKnowBase”: A Nanomaterials Relational Database
NaKnowBase is a relational database populated with data from peer-reviewed ORD nanomaterials research publications. The database focuses on papers describing the actions of nanomaterials in environmental or biological media including their interactions, transformations and poten...
Centrifuge: rapid and sensitive classification of metagenomic sequences
Song, Li; Breitwieser, Florian P.
2016-01-01
Centrifuge is a novel microbial classification engine that enables rapid, accurate, and sensitive labeling of reads and quantification of species on desktop computers. The system uses an indexing scheme based on the Burrows-Wheeler transform (BWT) and the Ferragina-Manzini (FM) index, optimized specifically for the metagenomic classification problem. Centrifuge requires a relatively small index (4.2 GB for 4078 bacterial and 200 archaeal genomes) and classifies sequences at very high speed, allowing it to process the millions of reads from a typical high-throughput DNA sequencing run within a few minutes. Together, these advances enable timely and accurate analysis of large metagenomics data sets on conventional desktop computers. Because of its space-optimized indexing schemes, Centrifuge also makes it possible to index the entire NCBI nonredundant nucleotide sequence database (a total of 109 billion bases) with an index size of 69 GB, in contrast to k-mer-based indexing schemes, which require far more extensive space. PMID:27852649
Payne, Philip R O; Kwok, Alan; Dhaval, Rakesh; Borlawsky, Tara B
2009-03-01
The conduct of large-scale translational studies presents significant challenges related to the storage, management and analysis of integrative data sets. Ideally, the application of methodologies such as conceptual knowledge discovery in databases (CKDD) provides a means for moving beyond intuitive hypothesis discovery and testing in such data sets, and towards the high-throughput generation and evaluation of knowledge-anchored relationships between complex bio-molecular and phenotypic variables. However, the induction of such high-throughput hypotheses is non-trivial, and requires correspondingly high-throughput validation methodologies. In this manuscript, we describe an evaluation of the efficacy of a natural language processing-based approach to validating such hypotheses. As part of this evaluation, we will examine a phenomenon that we have labeled as "Conceptual Dissonance" in which conceptual knowledge derived from two or more sources of comparable scope and granularity cannot be readily integrated or compared using conventional methods and automated tools.
Multi-sectorial convergence in greenhouse gas emissions.
Oliveira, Guilherme de; Bourscheidt, Deise Maria
2017-07-01
This paper uses the World Input-Output Database (WIOD) to test the hypothesis of per capita convergence in greenhouse gas (GHG) emissions for a multi-sectorial panel of countries. The empirical strategy applies conventional estimators of random and fixed effects and Arellano and Bond's (1991) GMM to the main pollutants related to the greenhouse effect. For reasonable empirical specifications, the model revealed robust evidence of per capita convergence in CH 4 emissions in the agriculture, food, and services sectors. The evidence of convergence in CO 2 emissions was moderate in the following sectors: agriculture, food, non-durable goods manufacturing, and services. In all cases, the time for convergence was less than 15 years. Regarding emissions by energy use, the largest source of global warming, there was only moderate evidence in the extractive industry sector-all other pollutants presented little or no evidence. Copyright © 2017 Elsevier Ltd. All rights reserved.
Geological mapping goes 3-D in response to societal needs
Thorleifson, H.; Berg, R.C.; Russell, H.A.J.
2010-01-01
The transition to 3-D mapping has been made possible by technological advances in digital cartography, GIS, data storage, analysis, and visualization. Despite various challenges, technological advancements facilitated a gradual transition from 2-D maps to 2.5-D draped maps to 3-D geological mapping, supported by digital spatial and relational databases that can be interrogated horizontally or vertically and viewed interactively. Challenges associated with data collection, human resources, and information management are daunting due to their resource and training requirements. The exchange of strategies at the workshops has highlighted the use of basin analysis to develop a process-based predictive knowledge framework that facilitates data integration. Three-dimensional geological information meets a public demand that fills in the blanks left by conventional 2-D mapping. Two-dimensional mapping will, however, remain the standard method for extensive areas of complex geology, particularly where deformed igneous and metamorphic rocks defy attempts at 3-D depiction.
Studying Turbulence Using Numerical Simulation Databases, 8. Proceedings of the 2000 Summer Program
NASA Technical Reports Server (NTRS)
2000-01-01
The eighth Summer Program of the Center for Turbulence Research took place in the four-week period, July 2 to July 27, 2000. This was the largest CTR Summer Program to date, involving forty participants from the U. S. and nine other countries. Twenty-five Stanford and NASA-Ames staff members facilitated and contributed to most of the Summer projects. Several new topical groups were formed, which reflects a broadening of CTR's interests from conventional studies of turbulence to the use of turbulence analysis tools in applications such as optimization, nanofluidics, biology, astrophysical and geophysical flows. CTR's main role continues to be in providing a forum for the study of turbulence and other multi-scale phenomena for engineering analysis. The impact of the summer program in facilitating intellectual exchange among leading researchers in turbulence and closely related flow physics fields is clearly reflected in the proceedings.
Coca, Steven G.; Ismail-Beigi, Faramarz; Haq, Nowreen; Krumholz, Harlan M.; Parikh, Chirag R.
2013-01-01
Background Aggressive glycemic control has been hypothesized to prevent renal disease in type 2 diabetics. A systematic review was conducted to summarize the benefits of intensive versus conventional glucose control on kidney-related outcomes for adults with type 2 diabetes. Methods Three databases were systematically searched (January 1950 to December 2010) with no language restrictions to identify randomized trials that compared surrogate renal endpoints (micro and macroalbuminuria) and clinical renal endpoints (doubling of serum creatinine, End Stage Renal Disease [ESRD] and death from renal disease) in patients with type 2 diabetes receiving intensive glucose control versus receiving conventional glucose control. Results Seven trials involving 28,065 adults who were followed-up for 2 to 15 years. Compared with conventional control, intensive glucose control reduced the risk for microalbuminuria (risk ratio [RR], 0.86 [95% CI, 0.76 to 0.96]) and macroalbuminuria (RR 0.74 [95% CI, 0.65–0.85]), but not doubling of serum creatinine (RR 1.06 [95% CI, 0.92 to 1.22]), ESRD (RR 0.69 [95% CI, 0.46–1.05]), or death from renal disease (RR 0.99 [95% CI 0.55–1.79]). Meta-regression revealed that larger differences in HbA1C between intensive and conventional therapy at the study level were associated with greater benefit for both micro- and macroalbuminuria. The pooled cumulative incidence of doubling of creatinine, ESRD, and death from renal disease was low (< 4%, <1.5%, and <0.5%, respectively) compared with the surrogate renal endpoints of micro- (23%) and macroalbuminuria (5%). Conclusion Intensive glucose control reduces the risk for microalbuminuria and macroalbuminuria but evidence is lacking that intensive glycemic control reduces the risk for significant clinical renal outcomes such as doubling of creatinine, ESRD or death from renal disease during the years of follow-up of the trials. PMID:22636820
PACSY, a relational database management system for protein structure and chemical shift analysis
Lee, Woonghee; Yu, Wookyung; Kim, Suhkmann; Chang, Iksoo
2012-01-01
PACSY (Protein structure And Chemical Shift NMR spectroscopY) is a relational database management system that integrates information from the Protein Data Bank, the Biological Magnetic Resonance Data Bank, and the Structural Classification of Proteins database. PACSY provides three-dimensional coordinates and chemical shifts of atoms along with derived information such as torsion angles, solvent accessible surface areas, and hydrophobicity scales. PACSY consists of six relational table types linked to one another for coherence by key identification numbers. Database queries are enabled by advanced search functions supported by an RDBMS server such as MySQL or PostgreSQL. PACSY enables users to search for combinations of information from different database sources in support of their research. Two software packages, PACSY Maker for database creation and PACSY Analyzer for database analysis, are available from http://pacsy.nmrfam.wisc.edu. PMID:22903636
Charoute, Hicham; Nahili, Halima; Abidi, Omar; Gabi, Khalid; Rouba, Hassan; Fakiri, Malika; Barakat, Abdelhamid
2014-03-01
National and ethnic mutation databases provide comprehensive information about genetic variations reported in a population or an ethnic group. In this paper, we present the Moroccan Genetic Disease Database (MGDD), a catalogue of genetic data related to diseases identified in the Moroccan population. We used the PubMed, Web of Science and Google Scholar databases to identify available articles published until April 2013. The Database is designed and implemented on a three-tier model using Mysql relational database and the PHP programming language. To date, the database contains 425 mutations and 208 polymorphisms found in 301 genes and 259 diseases. Most Mendelian diseases in the Moroccan population follow autosomal recessive mode of inheritance (74.17%) and affect endocrine, nutritional and metabolic physiology. The MGDD database provides reference information for researchers, clinicians and health professionals through a user-friendly Web interface. Its content should be useful to improve researches in human molecular genetics, disease diagnoses and design of association studies. MGDD can be publicly accessed at http://mgdd.pasteur.ma.
Fuzzy queries above relational database
NASA Astrophysics Data System (ADS)
Smolka, Pavel; Bradac, Vladimir
2017-11-01
The aim of the theme is to introduce a possibility of fuzzy queries implemented in relational databases. The issue is described on a model which identifies the appropriate part of the problem domain for fuzzy approach. The model is demonstrated on a database of wines focused on searching in it. The construction of the database complies with the Law of the Czech Republic.
Kılıç, Sefa; Sagitova, Dinara M; Wolfish, Shoshannah; Bely, Benoit; Courtot, Mélanie; Ciufo, Stacy; Tatusova, Tatiana; O'Donovan, Claire; Chibucos, Marcus C; Martin, Maria J; Erill, Ivan
2016-01-01
Domain-specific databases are essential resources for the biomedical community, leveraging expert knowledge to curate published literature and provide access to referenced data and knowledge. The limited scope of these databases, however, poses important challenges on their infrastructure, visibility, funding and usefulness to the broader scientific community. CollecTF is a community-oriented database documenting experimentally validated transcription factor (TF)-binding sites in the Bacteria domain. In its quest to become a community resource for the annotation of transcriptional regulatory elements in bacterial genomes, CollecTF aims to move away from the conventional data-repository paradigm of domain-specific databases. Through the adoption of well-established ontologies, identifiers and collaborations, CollecTF has progressively become also a portal for the annotation and submission of information on transcriptional regulatory elements to major biological sequence resources (RefSeq, UniProtKB and the Gene Ontology Consortium). This fundamental change in database conception capitalizes on the domain-specific knowledge of contributing communities to provide high-quality annotations, while leveraging the availability of stable information hubs to promote long-term access and provide high-visibility to the data. As a submission portal, CollecTF generates TF-binding site information through direct annotation of RefSeq genome records, definition of TF-based regulatory networks in UniProtKB entries and submission of functional annotations to the Gene Ontology. As a database, CollecTF provides enhanced search and browsing, targeted data exports, binding motif analysis tools and integration with motif discovery and search platforms. This innovative approach will allow CollecTF to focus its limited resources on the generation of high-quality information and the provision of specialized access to the data.Database URL: http://www.collectf.org/. © The Author(s) 2016. Published by Oxford University Press.
DNA microarray-based PCR ribotyping of Clostridium difficile.
Schneeberg, Alexander; Ehricht, Ralf; Slickers, Peter; Baier, Vico; Neubauer, Heinrich; Zimmermann, Stefan; Rabold, Denise; Lübke-Becker, Antina; Seyboldt, Christian
2015-02-01
This study presents a DNA microarray-based assay for fast and simple PCR ribotyping of Clostridium difficile strains. Hybridization probes were designed to query the modularly structured intergenic spacer region (ISR), which is also the template for conventional and PCR ribotyping with subsequent capillary gel electrophoresis (seq-PCR) ribotyping. The probes were derived from sequences available in GenBank as well as from theoretical ISR module combinations. A database of reference hybridization patterns was set up from a collection of 142 well-characterized C. difficile isolates representing 48 seq-PCR ribotypes. The reference hybridization patterns calculated by the arithmetic mean were compared using a similarity matrix analysis. The 48 investigated seq-PCR ribotypes revealed 27 array profiles that were clearly distinguishable. The most frequent human-pathogenic ribotypes 001, 014/020, 027, and 078/126 were discriminated by the microarray. C. difficile strains related to 078/126 (033, 045/FLI01, 078, 126, 126/FLI01, 413, 413/FLI01, 598, 620, 652, and 660) and 014/020 (014, 020, and 449) showed similar hybridization patterns, confirming their genetic relatedness, which was previously reported. A panel of 50 C. difficile field isolates was tested by seq-PCR ribotyping and the DNA microarray-based assay in parallel. Taking into account that the current version of the microarray does not discriminate some closely related seq-PCR ribotypes, all isolates were typed correctly. Moreover, seq-PCR ribotypes without reference profiles available in the database (ribotype 009 and 5 new types) were correctly recognized as new ribotypes, confirming the performance and expansion potential of the microarray. Copyright © 2015, American Society for Microbiology. All Rights Reserved.
Ntineri, Angeliki; Stergiou, George S; Thijs, Lutgarde; Asayama, Kei; Boggia, José; Boubouchairopoulou, Nadia; Hozawa, Atsushi; Imai, Yutaka; Johansson, Jouni K; Jula, Antti M; Kollias, Anastasios; Luzardo, Leonella; Niiranen, Teemu J; Nomura, Kyoko; Ohkubo, Takayoshi; Tsuji, Ichiro; Tzourio, Christophe; Wei, Fang-Fei; Staessen, Jan A
2016-08-01
Home blood pressure (HBP) measurements are known to be lower than conventional office blood pressure (OBP) measurements. However, this difference might not be consistent across the entire age range and has not been adequately investigated. We assessed the relationship between OBP and HBP with increasing age using the International Database of HOme blood pressure in relation to Cardiovascular Outcome (IDHOCO). OBP, HBP and their difference were assessed across different decades of age. A total of 5689 untreated subjects aged 18-97 years, who had at least two OBP and HBP measurements, were included. Systolic OBP and HBP increased across older age categories (from 112 to 142 mm Hg and from 109 to 136 mm Hg, respectively), with OBP being higher than HBP by ∼7 mm Hg in subjects aged >30 years and lesser in younger subjects (P=0.001). Both diastolic OBP and HBP increased until the age of ∼50 years (from 71 to 79 mm Hg and from 66 to 76 mm Hg, respectively), with OBP being consistently higher than HBP and a trend toward a decreased OBP-HBP difference with aging (P<0.001). Determinants of a larger OBP-HBP difference were younger age, sustained hypertension, nonsmoking and negative cardiovascular disease history. These data suggest that in the general adult population, HBP is consistently lower than OBP across all the decades, but their difference might vary between age groups. Further research is needed to confirm these findings in younger and older subjects and in hypertensive individuals.
Is minimal access spine surgery more cost-effective than conventional spine surgery?
Lubelski, Daniel; Mihalovich, Kathryn E; Skelly, Andrea C; Fehlings, Michael G; Harrop, James S; Mummaneni, Praveen V; Wang, Michael Y; Steinmetz, Michael P
2014-10-15
Systematic review. To summarize and critically review the economic literature evaluating the cost-effectiveness of minimal access surgery (MAS) compared with conventional open procedures for the cervical and lumbar spine. MAS techniques may improve perioperative parameters (length of hospital stay and extent of blood loss) compared with conventional open approaches. However, some have questioned the clinical efficacy of these differences and the associated cost-effectiveness implications. When considering the long-term outcomes, there seem to be no significant differences between MAS and open surgery. PubMed, EMBASE, the Cochrane Collaboration database, University of York, Centre for Reviews and Dissemination (NHS-EED and HTA), and the Tufts CEA Registry were reviewed to identify full economic studies comparing MAS with open techniques prior to December 24, 2013, based on the key questions established a priori. Only economic studies that evaluated and synthesized the costs and consequences of MAS compared with conventional open procedures (i.e., cost-minimization, cost-benefit, cost-effectiveness, or cost-utility) were considered for inclusion. Full text of the articles meeting inclusion criteria were reviewed by 2 independent investigators to obtain the final collection of included studies. The Quality of Health Economic Studies instrument was scored by 2 independent reviewers to provide an initial basis for critical appraisal of included economic studies. The search strategy yielded 198 potentially relevant citations, and 6 studies met the inclusion criteria, evaluating the costs and consequences of MAS versus conventional open procedures performed for the lumbar spine; no studies for the cervical spine met the inclusion criteria. Studies compared MAS tubular discectomy with conventional microdiscectomy, minimal access transforaminal lumbar interbody fusion versus open transforaminal lumbar interbody fusion, and multilevel hemilaminectomy via MAS versus open approach. Overall, the included cost-effectiveness studies generally supported no significant differences between open surgery and MAS lumbar approaches. However, these conclusions are preliminary because there was a paucity of high-quality evidence. Much of the evidence lacked details on methodology for modeling, related assumptions, justification of economic model chosen, and sources and types of included costs and consequences. The follow-up periods were highly variable, indirect costs were not frequently analyzed or reported, and many of the studies were conducted by a single group, thereby limiting generalizability. Prospective studies are needed to define differences and optimal treatment algorithms. 3.
Food Composition Database Format and Structure: A User Focused Approach
Clancy, Annabel K.; Woods, Kaitlyn; McMahon, Anne; Probst, Yasmine
2015-01-01
This study aimed to investigate the needs of Australian food composition database user’s regarding database format and relate this to the format of databases available globally. Three semi structured synchronous online focus groups (M = 3, F = 11) and n = 6 female key informant interviews were recorded. Beliefs surrounding the use, training, understanding, benefits and limitations of food composition data and databases were explored. Verbatim transcriptions underwent preliminary coding followed by thematic analysis with NVivo qualitative analysis software to extract the final themes. Schematic analysis was applied to the final themes related to database format. Desktop analysis also examined the format of six key globally available databases. 24 dominant themes were established, of which five related to format; database use, food classification, framework, accessibility and availability, and data derivation. Desktop analysis revealed that food classification systems varied considerably between databases. Microsoft Excel was a common file format used in all databases, and available software varied between countries. User’s also recognised that food composition databases format should ideally be designed specifically for the intended use, have a user-friendly food classification system, incorporate accurate data with clear explanation of data derivation and feature user input. However, such databases are limited by data availability and resources. Further exploration of data sharing options should be considered. Furthermore, user’s understanding of food composition data and databases limitations is inherent to the correct application of non-specific databases. Therefore, further exploration of user FCDB training should also be considered. PMID:26554836
The Network Configuration of an Object Relational Database Management System
NASA Technical Reports Server (NTRS)
Diaz, Philip; Harris, W. C.
2000-01-01
The networking and implementation of the Oracle Database Management System (ODBMS) requires developers to have knowledge of the UNIX operating system as well as all the features of the Oracle Server. The server is an object relational database management system (DBMS). By using distributed processing, processes are split up between the database server and client application programs. The DBMS handles all the responsibilities of the server. The workstations running the database application concentrate on the interpretation and display of data.
A cross-cultural comparison of children's imitative flexibility.
Clegg, Jennifer M; Legare, Cristine H
2016-09-01
Recent research with Western populations has demonstrated that children use imitation flexibly to engage in both instrumental and conventional learning. Evidence for children's imitative flexibility in non-Western populations is limited, however, and has only assessed imitation of instrumental tasks. This study (N = 142, 6- to 8-year-olds) demonstrates both cultural continuity and cultural variation in imitative flexibility. Children engage in higher imitative fidelity for conventional tasks than for instrumental tasks in both an industrialized, Western culture (United States), and a subsistence-based, non-Western culture (Vanuatu). Children in Vanuatu engage in higher imitative fidelity of instrumental tasks than in the United States, a potential consequence of cultural variation in child socialization for conformity. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Informed consent comprehension and recollection in adult dental patients: A systematic review.
Moreira, Narjara Conduru Fernandes; Pachêco-Pereira, Camila; Keenan, Louanne; Cummings, Greta; Flores-Mir, Carlos
2016-08-01
Patients' ability to recollect and comprehend treatment information plays a fundamental role in their decision making. The authors considered original studies assessing recollection or comprehension of dental informed consent in adults. The authors searched 6 electronic databases and partial gray literature and hand searched and cross-checked reference lists published through April 2015. The authors assessed the risk of bias in the included studies via different validated tools according to the study design. Nineteen studies were included: 5 randomized clinical trials, 8 cross-sectional studies, 3 qualitative studies, 2 mixed-methods studies, and 1 case series. Conventional informed consent processes yielded comprehension results of 27% to 85% and recollection of 20% to 86%, whereas informed consent processes enhanced by additional media ranged from 44% to 93% for comprehension and from 30% to 94% for recollection. Patient self-reported understanding ranged positively, with most patients feeling that they understood all or almost all the information presented. Results of qualitative data analyses indicated that patients did not always understand explanations, although dentists thought they did. Some patients firmly stated that they did not receive any related information. Only a few patients were able to remember complications related to their treatment options. Results of this systematic review should alert dentists that although patients in general report that they understand information given to them, they may have limited comprehension. Additional media may improve conventional informed consent processes in dentistry in a meaningful way. Copyright © 2016 American Dental Association. Published by Elsevier Inc. All rights reserved.
The bias in current measures of gestational weight gain
Hutcheon, Jennifer A; Bodnar, Lisa M; Joseph, KS; Abrams, Barbara; Simhan, Hyagriv N; Platt, Robert W
2014-01-01
Summary Conventional measures of gestational weight gain (GWG), such as average rate of weight gain, are likely correlated with gestational duration. Such correlation could introduce bias to epidemiologic studies of GWG and adverse perinatal outcomes because many perinatal outcomes are also correlated with gestational duration. This study aimed to quantify the extent to which currently-used GWG measures may bias the apparent relation between maternal weight gain and risk of preterm birth. For each woman in a provincial perinatal database registry (British Columbia, Canada, 2000–2009), a total GWG was simulated such that it was uncorrelated with risk of preterm birth. The simulation was based on serial antenatal GWG measurements from a sample of term pregnancies. Simulated GWGs were classified using 3 approaches: total weight gain (kg), average rate of weight gain (kg/week) or adequacy of gestational weight gain in relation to Institute of Medicine recommendations, and their association with preterm birth ≤ 32 weeks was explored using logistic regression. All measures of GWG induced an apparent association between GWG and preterm birth ≤32 weeks even when, by design, none existed. Odds ratios in the lowest fifths of each GWG measure compared with the middle fifths ranged from 4.4 [95% CI 3.6, 5.4] (total weight gain) to 1.6 [95% CI 1.3, 2.0] (Institute of Medicine adequacy ratio). Conventional measures of GWG introduce serious bias to the study of maternal weight gain and preterm birth. A new measure of GWG that is uncorrelated with gestational duration is needed. PMID:22324496
NASA Astrophysics Data System (ADS)
Poulon, Fanny; Ibrahim, Ali; Zanello, Marc; Pallud, Johan; Varlet, Pascale; Malouki, Fatima; Abi Lahoud, Georges; Devaux, Bertrand; Abi Haidar, Darine
2017-02-01
Eliminating time-consuming process of conventional biopsy is a practical improvement, as well as increasing the accuracy of tissue diagnoses and patient comfort. We addressed these needs by developing a multimodal nonlinear endomicroscope that allows real-time optical biopsies during surgical procedure. It will provide immediate information for diagnostic use without removal of tissue and will assist the choice of the optimal surgical strategy. This instrument will combine several means of contrast: non-linear fluorescence, second harmonic generation signal, reflectance, fluorescence lifetime and spectral analysis. Multimodality is crucial for reliable and comprehensive analysis of tissue. Parallel to the instrumental development, we currently improve our understanding of the endogeneous fluorescence signal with the different modalities that will be implemented in the stated. This endeavor will allow to create a database on the optical signature of the diseased and control brain tissues. This proceeding will present the preliminary results of this database on three types of tissues: cortex, metastasis and glioblastoma.
Gasc, Cyrielle; Constantin, Antony; Jaziri, Faouzi; Peyret, Pierre
2017-01-01
The detection and identification of bacterial pathogens involved in acts of bio- and agroterrorism are essential to avoid pathogen dispersal in the environment and propagation within the population. Conventional molecular methods, such as PCR amplification, DNA microarrays or shotgun sequencing, are subject to various limitations when assessing environmental samples, which can lead to inaccurate findings. We developed a hybridization capture strategy that uses a set of oligonucleotide probes to target and enrich biomarkers of interest in environmental samples. Here, we present Oligonucleotide Capture Probes for Pathogen Identification Database (OCaPPI-Db), an online capture probe database containing a set of 1,685 oligonucleotide probes allowing for the detection and identification of 30 biothreat agents up to the species level. This probe set can be used in its entirety as a comprehensive diagnostic tool or can be restricted to a set of probes targeting a specific pathogen or virulence factor according to the user's needs. : http://ocappidb.uca.works. © The Author(s) 2017. Published by Oxford University Press.
Operative record using intraoperative digital data in neurosurgery.
Houkin, K; Kuroda, S; Abe, H
2000-01-01
The purpose of this study was to develop a new method for more efficient and accurate operative records using intra-operative digital data in neurosurgery, including macroscopic procedures and microscopic procedures under an operating microscope. Macroscopic procedures were recorded using a digital camera and microscopic procedures were also recorded using a microdigital camera attached to an operating microscope. Operative records were then recorded digitally and filed in a computer using image retouch software and database base software. The time necessary for editing of the digital data and completing the record was less than 30 minutes. Once these operative records are digitally filed, they are easily transferred and used as database. Using digital operative records along with digital photography, neurosurgeons can document their procedures more accurately and efficiently than by the conventional method (handwriting). A complete digital operative record is not only accurate but also time saving. Construction of a database, data transfer and desktop publishing can be achieved using the intra-operative data, including intra-operative photographs.
Eidietis, N. W.; Gerhardt, S. P.; Granetz, R. S.; ...
2015-05-22
A multi-device database of disruption characteristics has been developed under the auspices of the International Tokamak Physics Activity magneto hydrodynamics topical group. The purpose of this ITPA Disruption Database (IDDB) is to find the commonalities between the disruption and disruption mitigation characteristics in a wide variety of tokamaks in order to elucidate the physics underlying tokamak disruptions and to extrapolate toward much larger devices, such as ITER and future burning plasma devices. Conversely, in order to previous smaller disruption data collation efforts, the IDDB aims to provide significant context for each shot provided, allowing exploration of a wide array ofmore » relationships between pre-disruption and disruption parameters. Furthermore, the IDDB presently includes contributions from nine tokamaks, including both conventional aspect ratio and spherical tokamaks. An initial parametric analysis of the available data is presented. Our analysis includes current quench rates, halo current fraction and peaking, and the effectiveness of massive impurity injection. The IDDB is publicly available, with instruction for access provided herein.« less
GPU-based cloud service for Smith-Waterman algorithm using frequency distance filtration scheme.
Lee, Sheng-Ta; Lin, Chun-Yuan; Hung, Che Lun
2013-01-01
As the conventional means of analyzing the similarity between a query sequence and database sequences, the Smith-Waterman algorithm is feasible for a database search owing to its high sensitivity. However, this algorithm is still quite time consuming. CUDA programming can improve computations efficiently by using the computational power of massive computing hardware as graphics processing units (GPUs). This work presents a novel Smith-Waterman algorithm with a frequency-based filtration method on GPUs rather than merely accelerating the comparisons yet expending computational resources to handle such unnecessary comparisons. A user friendly interface is also designed for potential cloud server applications with GPUs. Additionally, two data sets, H1N1 protein sequences (query sequence set) and human protein database (database set), are selected, followed by a comparison of CUDA-SW and CUDA-SW with the filtration method, referred to herein as CUDA-SWf. Experimental results indicate that reducing unnecessary sequence alignments can improve the computational time by up to 41%. Importantly, by using CUDA-SWf as a cloud service, this application can be accessed from any computing environment of a device with an Internet connection without time constraints.
Automating the training development process for mission flight operations
NASA Technical Reports Server (NTRS)
Scott, Carol J.
1994-01-01
Traditional methods of developing training do not effectively support the changing needs of operational users in a multimission environment. The Automated Training Development System (ATDS) provides advantages over conventional methods in quality, quantity, turnaround, database maintenance, and focus on individualized instruction. The Operations System Training Group at the JPL performed a six-month study to assess the potential of ATDS to automate curriculum development and to generate and maintain course materials. To begin the study, the group acquired readily available hardware and participated in a two-week training session to introduce the process. ATDS is a building activity that combines training's traditional information-gathering with a hierarchical method for interleaving the elements. The program can be described fairly simply. A comprehensive list of candidate tasks determines the content of the database; from that database, selected critical tasks dictate which competencies of skill and knowledge to include in course material for the target audience. The training developer adds pertinent planning information about each task to the database, then ATDS generates a tailored set of instructional material, based on the specific set of selection criteria. Course material consistently leads students to a prescribed level of competency.
Meyer, Michael J; Geske, Philip; Yu, Haiyuan
2016-05-15
Biological sequence databases are integral to efforts to characterize and understand biological molecules and share biological data. However, when analyzing these data, scientists are often left holding disparate biological currency-molecular identifiers from different databases. For downstream applications that require converting the identifiers themselves, there are many resources available, but analyzing associated loci and variants can be cumbersome if data is not given in a form amenable to particular analyses. Here we present BISQUE, a web server and customizable command-line tool for converting molecular identifiers and their contained loci and variants between different database conventions. BISQUE uses a graph traversal algorithm to generalize the conversion process for residues in the human genome, genes, transcripts and proteins, allowing for conversion across classes of molecules and in all directions through an intuitive web interface and a URL-based web service. BISQUE is freely available via the web using any major web browser (http://bisque.yulab.org/). Source code is available in a public GitHub repository (https://github.com/hyulab/BISQUE). haiyuan.yu@cornell.edu Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Terminological aspects of data elements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strehlow, R.A.; Kenworthey, W.H. Jr.; Schuldt, R.E.
1991-01-01
The creation and display of data comprise a process that involves a sequence of steps requiring both semantic and systems analysis. An essential early step in this process is the choice, definition, and naming of data element concepts and is followed by the specification of other needed data element concept attributes. The attributes and the values of data element concept remain associated with them from their birth as a concept to a generic data element that serves as a template for final application. Terminology is, therefore, centrally important to the entire data creation process. Smooth mapping from natural language tomore » a database is a critical aspect of database, and consequently, it requires terminology standardization from the outset of database work. In this paper the semantic aspects of data elements are analyzed and discussed. Seven kinds of data element concept information are considered and those that require terminological development and standardization are identified. The four terminological components of a data element are the hierarchical type of a concept, functional dependencies, schematas showing conceptual structures, and definition statements. These constitute the conventional role of terminology in database design. 12 refs., 8 figs., 1 tab.« less
NASA Astrophysics Data System (ADS)
Gournay, Pierre; Rolland, Benjamin; Mortara, Alessandro; Jeanneret, Blaise
2018-01-01
An on-site comparison of the quantum Hall effect (QHE) resistance standards of the Federal Institute of Metrology METAS (Switzerland) and of the Bureau International des Poids et Mesures (BIPM) was made in December 2017. Measurements of a 100 Ω standard in terms of the conventional value of the von Klitzing constant, RK-90, agreed to 5 parts in 1010 with a relative combined standard uncertainty uc = 23 × 10‑10. Scaling from 100 Ω to 10 kΩ has also been addressed through the measurement of a 10000 Ω/100 Ω ratio. The measurements carried out agreed to 2 parts in 1010 with a relative combined standard uncertainty uc = 19 × 10‑10. Main text To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCEM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
The evolution of water balance in Glossina (Diptera: Glossinidae): correlations with climate.
Kleynhans, Elsje; Terblanche, John S
2009-02-23
The water balance of tsetse flies (Diptera: Glossinidae) has significant implications for understanding biogeography and climate change responses in these African disease vectors. Although moisture is important for tsetse population dynamics, evolutionary responses of Glossina water balance to climate have been relatively poorly explored and earlier studies may have been confounded by several factors. Here, using a physiological and GIS climate database, we investigate potential interspecific relationships between traits of water balance and climate. We do so in conventional and phylogenetically independent approaches for both adults and pupae. Results showed that water loss rates (WLR) were significantly positively related to precipitation in pupae even after phylogenetic adjustment. Adults showed no physiology-climate correlations. Ancestral trait reconstruction suggests that a reduction in WLR and increased size probably evolved from an intermediate ancestral state and may have facilitated survival in xeric environments. The results of this study therefore suggest an important role for water balance physiology of pupae in determining interspecific variation and lend support to conclusions reached by early studies of tsetse physiology.
Lent, Robert W; Sheu, Hung-Bin; Brown, Steven D
2010-04-01
Armstrong and Vogel (2009) proposed that the differences between self-efficacy and interests are a matter of measurement artifact rather than substance. In tests of this hypothesis, they conceived of self-efficacy and interest as observed indicators of larger RIASEC (Realistic, Investigative, Artistic, Social, Enterprising, and Conventional) types and as response method factors. We revisit the authors' theoretical assumptions, measurement procedures, analyses, and interpretation of findings. When viewing this study in the context of the larger literature, we find ample support for the construal of self-efficacy and interests as distinct but related constructs. In addition, we examine the authors' reanalysis of earlier longitudinal findings, reaching different conclusions than they did about the nature of the temporal relations among the social cognitive variables. Ultimately, whether one wishes to highlight or minimize the differences between interest and self-efficacy may largely depend on whether one's purpose is explanation (e.g., how do people make career-relevant choices?) or classification (e.g., which RIASEC type does a person most resemble?). PsycINFO Database Record (c) 2010 APA, all rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alonso, S.; Castro, A.; Fernandez-Fernandez, I.
1997-02-01
Short VNTR alleles that go undetected after conventional Southern blot hybridization may constitute an alternative explanation for the heterozygosity deficiency observed at some minisatellite loci. To examine this hypothesis, we have employed a screening procedure based on PCR amplification of those individuals classified as homozygotes in our databases for the loci D1S7, D7S21, and D12S11. The results obtained indicate that the frequency of these short alleles is related to the heterozygosity deficiency observed. For the most polymorphic locus, D1S7, {approximately}60% of those individuals previously classified as homozygotes were in fact heterozygotes for a short allele. After the inclusion of thesemore » new alleles, the agreement between observed and expected heterozygosity, along with other statistical tests employed, provide additional evidence for lack of population substructuring. Comparisons of allele frequency distributions reveal greater differences between racial groups than between closely related populations. 45 refs., 3 figs., 6 tabs.« less
Janardhanan, Jeshina; Prakash, John Antony Jude; Abraham, Ooriapadickal C; Varghese, George M
2014-05-01
A nested polymerase chain reaction (PCR) targeting the 56-kDa antigen gene is currently the most commonly used molecular technique for confirmation of scrub typhus and genotyping of Orientia tsutsugamushi. In this study, we have compared the commonly used nested PCR (N-PCR) with a single-step conventional PCR (C-PCR) for amplification and genotyping. Eschar samples collected from 24 patients with scrub typhus confirmed by IgM enzyme-linked immunosorbent assay were used for DNA extraction following which amplifications were carried out using nested and C-PCR methods. The amplicons were sequenced and compared to other sequences in the database using BLAST. Conventional PCR showed a high positivity rate of 95.8% compared to the 75% observed using N-PCR. On sequence analysis, the N-PCR amplified region showed more variation among strains than the C-PCR amplified region. The C-PCR, which is more economical, provided faster and better results compared to N-PCR. Copyright © 2014 Elsevier Inc. All rights reserved.
Fernández, José M; Valencia, Alfonso
2004-10-12
Downloading the information stored in relational databases into XML and other flat formats is a common task in bioinformatics. This periodical dumping of information requires considerable CPU time, disk and memory resources. YAdumper has been developed as a purpose-specific tool to deal with the integral structured information download of relational databases. YAdumper is a Java application that organizes database extraction following an XML template based on an external Document Type Declaration. Compared with other non-native alternatives, YAdumper substantially reduces memory requirements and considerably improves writing performance.
Benigni, Romualdo; Bossa, Cecilia; Richard, Ann M; Yang, Chihae
2008-01-01
Mutagenicity and carcinogenicity databases are crucial resources for toxicologists and regulators involved in chemicals risk assessment. Until recently, existing public toxicity databases have been constructed primarily as "look-up-tables" of existing data, and most often did not contain chemical structures. Concepts and technologies originated from the structure-activity relationships science have provided powerful tools to create new types of databases, where the effective linkage of chemical toxicity with chemical structure can facilitate and greatly enhance data gathering and hypothesis generation, by permitting: a) exploration across both chemical and biological domains; and b) structure-searchability through the data. This paper reviews the main public databases, together with the progress in the field of chemical relational databases, and presents the ISSCAN database on experimental chemical carcinogens.
Bilal, Muhammad; Iqbal, Muhammad Sarfaraz; Shah, Syed Bilal; Rasheed, Tahir; Iqbal, Hafiz M N
2018-02-21
The naturally inspired treatment options for several disease conditions and human-health related disorders such as diabetes mellitus have gained considerable research interest. In this context, naturally occurring plants and herbs with medicinal functionalities have gained special place than ever before in the current medicinal world. The objective of this review is to extend the current knowledge in the clinical field related to the diabetic complications. A special focus has also been given to the anti-diabetic potentialities of ethnomedicinal plants. Herein, we reviewed and compiled salient information from the authentic bibliographic databases including PubMed, Scopus, Elsevier, Springer, Bentham Science and other scientific databases. The patents were searched and reviewed from http://www.freepatentsonline.com. Diabetes mellitus is a group of metabolic disorders associated with the endocrine system that resulted in hyperglycemic conditions. Metabolic disorders can cause many complications such as neuropathy, retinopathy, nephropathy, ischemic heart disease, stroke, and microangiopathy. Traditional botanical therapies have been used around the world to treat diabetes. Among several medications and different medicines, various herbs are known to cure and control diabetes; also have no side effects. History has shown that medicinal plants have long been used for traditional healing around the world to treat diabetes. More than 800 plants around the world are shown by ethnobotanical information as traditional remedies for the treatment of diabetes. Several parts of these plants have been evaluated and appreciated for hypoglycemic activity. Medicinal plants have been found to be more effective than conventional drug compounds with no/fewer side effects and relatively inexpensive. In this review paper, we have reviewed plants with anti-diabetic and related beneficial medicinal effects. This review may be helpful for researchers, diabetic patient and decision makers in the field of ethnobotanical sciences. These efforts may also provide treatment to everyone and focus on the role of traditional novel medicine plants that have anti-diabetic abilities. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
Lu, Y; Baggett, H C; Rhodes, J; Thamthitiwat, S; Joseph, L; Gregory, C J
2016-10-01
Pneumonia is a leading cause of mortality and morbidity worldwide with radiographically confirmed pneumonia a key disease burden indicator. This is usually determined by a radiology panel which is assumed to be the best available standard; however, this assumption may introduce bias into pneumonia incidence estimates. To improve estimates of radiographic pneumonia incidence, we applied Bayesian latent class modelling (BLCM) to a large database of hospitalized patients with acute lower respiratory tract illness in Sa Kaeo and Nakhon Phanom provinces, Thailand from 2005 to 2010 with chest radiographs read by both a radiology panel and a clinician. We compared these estimates to those from conventional analysis. For children aged <5 years, estimated radiographically confirmed pneumonia incidence by BLCM was 2394/100 000 person-years (95% credible interval 2185-2574) vs. 1736/100 000 person-years (95% confidence interval 1706-1766) from conventional analysis. For persons aged ⩾5 years, estimated radiographically confirmed pneumonia incidence was similar between BLCM and conventional analysis (235 vs. 215/100 000 person-years). BLCM suggests the incidence of radiographically confirmed pneumonia in young children is substantially larger than estimated from the conventional approach using radiology panels as the reference standard.
Constructing a Geology Ontology Using a Relational Database
NASA Astrophysics Data System (ADS)
Hou, W.; Yang, L.; Yin, S.; Ye, J.; Clarke, K.
2013-12-01
In geology community, the creation of a common geology ontology has become a useful means to solve problems of data integration, knowledge transformation and the interoperation of multi-source, heterogeneous and multiple scale geological data. Currently, human-computer interaction methods and relational database-based methods are the primary ontology construction methods. Some human-computer interaction methods such as the Geo-rule based method, the ontology life cycle method and the module design method have been proposed for applied geological ontologies. Essentially, the relational database-based method is a reverse engineering of abstracted semantic information from an existing database. The key is to construct rules for the transformation of database entities into the ontology. Relative to the human-computer interaction method, relational database-based methods can use existing resources and the stated semantic relationships among geological entities. However, two problems challenge the development and application. One is the transformation of multiple inheritances and nested relationships and their representation in an ontology. The other is that most of these methods do not measure the semantic retention of the transformation process. In this study, we focused on constructing a rule set to convert the semantics in a geological database into a geological ontology. According to the relational schema of a geological database, a conversion approach is presented to convert a geological spatial database to an OWL-based geological ontology, which is based on identifying semantics such as entities, relationships, inheritance relationships, nested relationships and cluster relationships. The semantic integrity of the transformation was verified using an inverse mapping process. In a geological ontology, an inheritance and union operations between superclass and subclass were used to present the nested relationship in a geochronology and the multiple inheritances relationship. Based on a Quaternary database of downtown of Foshan city, Guangdong Province, in Southern China, a geological ontology was constructed using the proposed method. To measure the maintenance of semantics in the conversation process and the results, an inverse mapping from the ontology to a relational database was tested based on a proposed conversation rule. The comparison of schema and entities and the reduction of tables between the inverse database and the original database illustrated that the proposed method retains the semantic information well during the conversation process. An application for abstracting sandstone information showed that semantic relationships among concepts in the geological database were successfully reorganized in the constructed ontology. Key words: geological ontology; geological spatial database; multiple inheritance; OWL Acknowledgement: This research is jointly funded by the Specialized Research Fund for the Doctoral Program of Higher Education of China (RFDP) (20100171120001), NSFC (41102207) and the Fundamental Research Funds for the Central Universities (12lgpy19).
Filipino DNA variation at 12 X-chromosome short tandem repeat markers.
Salvador, Jazelyn M; Apaga, Dame Loveliness T; Delfin, Frederick C; Calacal, Gayvelline C; Dennis, Sheila Estacio; De Ungria, Maria Corazon A
2018-06-08
Demands for solving complex kinship scenarios where only distant relatives are available for testing have risen in the past years. In these instances, other genetic markers such as X-chromosome short tandem repeat (X-STR) markers are employed to supplement autosomal and Y-chromosomal STR DNA typing. However, prior to use, the degree of STR polymorphism in the population requires evaluation through generation of an allele or haplotype frequency population database. This population database is also used for statistical evaluation of DNA typing results. Here, we report X-STR data from 143 unrelated Filipino male individuals who were genotyped via conventional polymerase chain reaction-capillary electrophoresis (PCR-CE) using the 12 X-STR loci included in the Investigator ® Argus X-12 kit (Qiagen) and via massively parallel sequencing (MPS) of seven X-STR loci included in the ForenSeq ™ DNA Signature Prep kit of the MiSeq ® FGx ™ Forensic Genomics System (Illumina). Allele calls between PCR-CE and MPS systems were consistent (100% concordance) across seven overlapping X-STRs. Allele and haplotype frequencies and other parameters of forensic interest were calculated based on length (PCR-CE, 12 X-STRs) and sequence (MPS, seven X-STRs) variations observed in the population. Results of our study indicate that the 12 X-STRs in the PCR-CE system are highly informative for the Filipino population. MPS of seven X-STR loci identified 73 X-STR alleles compared with 55 X-STR alleles that were identified solely by length via PCR-CE. Of the 73 sequence-based alleles observed, six alleles have not been reported in the literature. The population data presented here may serve as a reference Philippine frequency database of X-STRs for forensic casework applications. Copyright © 2018 Elsevier B.V. All rights reserved.
Inácio, Caio Teves; Chalk, Phillip Michael; Magalhães, Alberto M T
2015-01-01
Among the lighter elements having two or more stable isotopes (H, C, N, O, S), δ(15)N appears to be the most promising isotopic marker to differentiate plant products from conventional and organic farms. Organic plant products vary within a range of δ(15)N values of +0.3 to +14.6%, while conventional plant products range from negative to positive values, i.e. -4.0 to +8.7%. The main factors affecting δ(15)N signatures of plants are N fertilizers, biological N2 fixation, plant organs and plant age. Correlations between mode of production and δ(13)C (except greenhouse tomatoes warmed with natural gas) or δ(34)S signatures have not been established, and δ(2)H and δ(18)O are unsuitable markers due to the overriding effect of climate on the isotopic composition of plant-available water. Because there is potential overlap between the δ(15)N signatures of organic and conventionally produced plant products, δ(15)N has seldom been used successfully as the sole criterion for differentiation, but when combined with complementary analytical techniques and appropriate statistical tools, the probability of a correct identification increases. The use of organic fertilizers by conventional farmers or the marketing of organic produce as conventional due to market pressures are additional factors confounding correct identification. The robustness of using δ(15)N to differentiate mode of production will depend on the establishment of databases that have been verified for individual plant products.
National Transportation Atlas Databases : 2001
DOT National Transportation Integrated Search
2001-01-01
The National Transportation Atlas Databases-2001 (NTAD-2001) is a set of national geographic databases of transportation facilities. These databases include geospatial information for transportation modal networks and intermodal terminals and related...
National Transportation Atlas Databases : 2000
DOT National Transportation Integrated Search
2000-01-01
The National Transportation Atlas Databases-2000 (NTAD-2000) is a set of national geographic databases of transportation facilities. These databases include geospatial information for transportation modal networks and intermodal terminals and related...
49 CFR 1572.107 - Other analyses.
Code of Federal Regulations, 2011 CFR
2011-10-01
... applicant poses a security threat based on a search of the following databases: (1) Interpol and other international databases, as appropriate. (2) Terrorist watchlists and related databases. (3) Any other databases...
49 CFR 1572.107 - Other analyses.
Code of Federal Regulations, 2010 CFR
2010-10-01
... applicant poses a security threat based on a search of the following databases: (1) Interpol and other international databases, as appropriate. (2) Terrorist watchlists and related databases. (3) Any other databases...
49 CFR 1572.107 - Other analyses.
Code of Federal Regulations, 2014 CFR
2014-10-01
... applicant poses a security threat based on a search of the following databases: (1) Interpol and other international databases, as appropriate. (2) Terrorist watchlists and related databases. (3) Any other databases...
49 CFR 1572.107 - Other analyses.
Code of Federal Regulations, 2012 CFR
2012-10-01
... applicant poses a security threat based on a search of the following databases: (1) Interpol and other international databases, as appropriate. (2) Terrorist watchlists and related databases. (3) Any other databases...
49 CFR 1572.107 - Other analyses.
Code of Federal Regulations, 2013 CFR
2013-10-01
... applicant poses a security threat based on a search of the following databases: (1) Interpol and other international databases, as appropriate. (2) Terrorist watchlists and related databases. (3) Any other databases...
Defending against Attribute-Correlation Attacks in Privacy-Aware Information Brokering
NASA Astrophysics Data System (ADS)
Li, Fengjun; Luo, Bo; Liu, Peng; Squicciarini, Anna C.; Lee, Dongwon; Chu, Chao-Hsien
Nowadays, increasing needs for information sharing arise due to extensive collaborations among organizations. Organizations desire to provide data access to their collaborators while preserving full control over the data and comprehensive privacy of their users. A number of information systems have been developed to provide efficient and secure information sharing. However, most of the solutions proposed so far are built atop of conventional data warehousing or distributed database technologies.
Flynn, A. N.; Lyndon, C. A.
2013-01-01
A case of Actinomyces hongkongensis pelvic actinomycosis in an adult woman is described. Conventional phenotypic tests failed to identify the Gram-positive bacillus isolated from a fluid aspirate of a pelvic abscess. The bacterium was identified by 16S rRNA gene sequencing and analysis using the SmartGene Integrated Database Network System software. PMID:23698532
Nicholas R. LaBonte; James Jacobs; Aziz Ebrahimi; Shaneka Lawson; Keith Woeste
2018-01-01
High-throughput sequencing of DNA barcodes, such as the internal transcribed spacer (ITS) of the 16s rRNA sequence, has expanded the ability of researchers to investigate the endophytic fungal communities of living plants. With a large and growing database of complete fungal genomes, it may be possible to utilize portions of fungal symbiont genomes outside conventional...
Wei, Yue; Ma, Li-Xin; Yin, Sheng-Jun; An, Jing; Wei, Qi; Yang, Jin-Xiang
2015-01-01
To assess the clinical effects and safety of Huangqi Jianzhong Tang (HQJZ) for the treatment of chronic gastritis (CG), three English databases and four Chinese databases were searched through the inception to January 2015. In randomized controlled trials (RCTs) comparing HQJZ with placebo, no intervention and western medicine were included. A total of 9 RCTs involving 979 participants were identified. The methodological quality of the included trials was generally poor. Meta-analyses demonstrated that HQJZ plus conventional medicine was more effective in improving overall gastroscopy outcome than western medicine alone for treatment of chronic superficial gastritis with the pooling result of overall improvement [OR 3.78 (1.29,11.06), P = 0.02]. In addition, the combination of HQJZ with antibiotics has higher overall effect rate than antibiotics alone for the treatment of CG [OR 2.60 (1.49,4.54), P = 0.0007]. There were no serious adverse events reported in both the intervention and controlled groups. HQJZ has the potential of improvement of the patients' gastroscopy outcomes, Helicobacter pylori clearance rate, traditional Chinese Medicine syndromes, and overall effect rate alone or in combination use with conventional western medicine for chronic atrophic gastritis. However, due to poor methodological quality, the beneficial effect and safeties of HQJZ for CG could not be confirmed. PMID:26819622
Thakkar, Jay; Redfern, Julie; Khan, Ehsan; Atkins, Emily; Ha, Jeffrey; Vo, Kha; Thiagalingam, Aravinda; Chow, Clara K
2018-05-23
The 'Tobacco, Exercise and Diet Messages' (TEXT ME) study was a 6-month, single-centre randomised clinical trial (RCT) that found a text message support program improved levels of cardiovascular risk factors in patients with coronary heart disease (CHD). The current analyses examined whether receipt of text messages influenced participants' engagement with conventional healthcare resources. The TEXT ME study database (N=710) was linked with routinely collected health department databases. Number of doctor consultations, investigations and cardiac medication prescriptions in the two study groups were compared. The most frequently accessed health service was consultations with a General Practitioner (mean 7.1, s.d. 5.4). The numbers of medical consultations, biochemical tests or cardiac-specific investigations were similar between the study groups. There was at least one prescription registered for statin, ACEI/ARBs and β-blockers in 79, 66 and 50% of patients respectively, with similar refill rates in both the study groups. The study identified TEXT ME text messaging program did not increase use of Medicare Benefits Schedule (MBS) and Pharmaceutical Benefits Scheme (PBS) captured healthcare services. The observed benefits of TEXT ME reflect direct effects of intervention independent of conventional healthcare resource engagement.
Fast protein tertiary structure retrieval based on global surface shape similarity.
Sael, Lee; Li, Bin; La, David; Fang, Yi; Ramani, Karthik; Rustamov, Raif; Kihara, Daisuke
2008-09-01
Characterization and identification of similar tertiary structure of proteins provides rich information for investigating function and evolution. The importance of structure similarity searches is increasing as structure databases continue to expand, partly due to the structural genomics projects. A crucial drawback of conventional protein structure comparison methods, which compare structures by their main-chain orientation or the spatial arrangement of secondary structure, is that a database search is too slow to be done in real-time. Here we introduce a global surface shape representation by three-dimensional (3D) Zernike descriptors, which represent a protein structure compactly as a series expansion of 3D functions. With this simplified representation, the search speed against a few thousand structures takes less than a minute. To investigate the agreement between surface representation defined by 3D Zernike descriptor and conventional main-chain based representation, a benchmark was performed against a protein classification generated by the combinatorial extension algorithm. Despite the different representation, 3D Zernike descriptor retrieved proteins of the same conformation defined by combinatorial extension in 89.6% of the cases within the top five closest structures. The real-time protein structure search by 3D Zernike descriptor will open up new possibility of large-scale global and local protein surface shape comparison. 2008 Wiley-Liss, Inc.
NASA Astrophysics Data System (ADS)
Stebel, Kerstin; Prata, Fred; Theys, Nicolas; Tampellini, Lucia; Kamstra, Martijn; Zehner, Claus
2014-05-01
Over the last few years there has been a recognition of the utility of satellite measurements to identify and track volcanic emissions that present a natural hazard to human populations. Mitigation of the volcanic hazard to life and the environment requires understanding of the properties of volcanic emissions, identifying the hazard in near real-time and being able to provide timely and accurate forecasts to affected areas. Amongst the many ways to measure volcanic emissions, satellite remote sensing is capable of providing global quantitative retrievals of important microphysical parameters such as ash mass loading, ash particle effective radius, infrared optical depth, SO2 partial and total column abundance, plume altitude, aerosol optical depth and aerosol absorbing index. The eruption of Eyjafjallajökull in April May, 2010 led to increased research and measurement programs to better characterize properties of volcanic ash and the need to establish a data-base in which to store and access these data was confirmed. The European Space Agency (ESA) has recognized the importance of having a quality controlled data-base of satellite retrievals and has funded an activity called Volcanic Ash Strategic Initiative Team VAST (vast.nilu.no) to develop novel remote sensing retrieval schemes and a data-base, initially focused on several recent hazardous volcanic eruptions. In addition, the data-base will host satellite and validation data sets provided from the ESA projects Support to Aviation Control Service SACS (sacs.aeronomie.be) and Study on an end-to-end system for volcanic ash plume monitoring and prediction SMASH. Starting with data for the eruptions of Eyjafjallajökull, Grímsvötn, and Kasatochi, satellite retrievals for Puyhue-Cordon Caulle, Nabro, Merapi, Okmok, Kasatochi and Sarychev Peak will eventually be ingested. Dispersion model simulations are also being included in the data-base. Several atmospheric dispersion models (FLEXPART, SILAM and WRF-Chem) are used in VAST to simulate the dispersion of volcanic ash and SO2 emitted during an eruption. Source terms and dispersion model results will be given. In time, data from conventional in situ sampling instruments, airborne and ground-based remote sensing platforms and other meta-data (bulk ash and gas properties, volcanic setting, volcanic eruption chronologies, potential impacts etc.) will be added. Important applications of the data-base are illustrated related to the ash/aviation problem and to estimating SO2 fluxes from active volcanoes-as a means to diagnose future unrest. The data-base has the potential to provide the natural hazards community with a dynamic atmospheric volcanic hazards map and will be a valuable tool particularly for aviation.
NASA Technical Reports Server (NTRS)
Maluf, David A.; Tran, Peter B.
2003-01-01
Object-Relational database management system is an integrated hybrid cooperative approach to combine the best practices of both the relational model utilizing SQL queries and the object-oriented, semantic paradigm for supporting complex data creation. In this paper, a highly scalable, information on demand database framework, called NETMARK, is introduced. NETMARK takes advantages of the Oracle 8i object-relational database using physical addresses data types for very efficient keyword search of records spanning across both context and content. NETMARK was originally developed in early 2000 as a research and development prototype to solve the vast amounts of unstructured and semi-structured documents existing within NASA enterprises. Today, NETMARK is a flexible, high-throughput open database framework for managing, storing, and searching unstructured or semi-structured arbitrary hierarchal models, such as XML and HTML.
AgeFactDB--the JenAge Ageing Factor Database--towards data integration in ageing research.
Hühne, Rolf; Thalheim, Torsten; Sühnel, Jürgen
2014-01-01
AgeFactDB (http://agefactdb.jenage.de) is a database aimed at the collection and integration of ageing phenotype data including lifespan information. Ageing factors are considered to be genes, chemical compounds or other factors such as dietary restriction, whose action results in a changed lifespan or another ageing phenotype. Any information related to the effects of ageing factors is called an observation and is presented on observation pages. To provide concise access to the complete information for a particular ageing factor, corresponding observations are also summarized on ageing factor pages. In a first step, ageing-related data were primarily taken from existing databases such as the Ageing Gene Database--GenAge, the Lifespan Observations Database and the Dietary Restriction Gene Database--GenDR. In addition, we have started to include new ageing-related information. Based on homology data taken from the HomoloGene Database, AgeFactDB also provides observation and ageing factor pages of genes that are homologous to known ageing-related genes. These homologues are considered as candidate or putative ageing-related genes. AgeFactDB offers a variety of search and browse options, and also allows the download of ageing factor or observation lists in TSV, CSV and XML formats.
Orris, Greta J.; Cocker, Mark D.; Dunlap, Pamela; Wynn, Jeff C.; Spanski, Gregory T.; Briggs, Deborah A.; Gass, Leila; Bliss, James D.; Bolm, Karen S.; Yang, Chao; Lipin, Bruce R.; Ludington, Stephen; Miller, Robert J.; Słowakiewicz, Mirosław
2014-01-01
This report describes a global, evaporite-related potash deposits and occurrences database and a potash tracts database. Chapter 1 summarizes potash resource history and use. Chapter 2 describes a global potash deposits and occurrences database, which contains more than 900 site records. Chapter 3 describes a potash tracts database, which contains 84 tracts with geology permissive for the presence of evaporite-hosted potash resources, including areas with active evaporite-related potash production, areas with known mineralization that has not been quantified or exploited, and areas with potential for undiscovered potash resources. Chapter 4 describes geographic information system (GIS) data files that include (1) potash deposits and occurrences data, (2) potash tract data, (3) reference databases for potash deposit and tract data, and (4) representative graphics of geologic features related to potash tracts and deposits. Summary descriptive models for stratabound potash-bearing salt and halokinetic potash-bearing salt are included in appendixes A and B, respectively. A glossary of salt- and potash-related terms is contained in appendix C and a list of database abbreviations is given in appendix D. Appendix E describes GIS data files, and appendix F is a guide to using the geodatabase.
Search extension transforms Wiki into a relational system: a case for flavonoid metabolite database.
Arita, Masanori; Suwa, Kazuhiro
2008-09-17
In computer science, database systems are based on the relational model founded by Edgar Codd in 1970. On the other hand, in the area of biology the word 'database' often refers to loosely formatted, very large text files. Although such bio-databases may describe conflicts or ambiguities (e.g. a protein pair do and do not interact, or unknown parameters) in a positive sense, the flexibility of the data format sacrifices a systematic query mechanism equivalent to the widely used SQL. To overcome this disadvantage, we propose embeddable string-search commands on a Wiki-based system and designed a half-formatted database. As proof of principle, a database of flavonoid with 6902 molecular structures from over 1687 plant species was implemented on MediaWiki, the background system of Wikipedia. Registered users can describe any information in an arbitrary format. Structured part is subject to text-string searches to realize relational operations. The system was written in PHP language as the extension of MediaWiki. All modifications are open-source and publicly available. This scheme benefits from both the free-formatted Wiki style and the concise and structured relational-database style. MediaWiki supports multi-user environments for document management, and the cost for database maintenance is alleviated.
Search extension transforms Wiki into a relational system: A case for flavonoid metabolite database
Arita, Masanori; Suwa, Kazuhiro
2008-01-01
Background In computer science, database systems are based on the relational model founded by Edgar Codd in 1970. On the other hand, in the area of biology the word 'database' often refers to loosely formatted, very large text files. Although such bio-databases may describe conflicts or ambiguities (e.g. a protein pair do and do not interact, or unknown parameters) in a positive sense, the flexibility of the data format sacrifices a systematic query mechanism equivalent to the widely used SQL. Results To overcome this disadvantage, we propose embeddable string-search commands on a Wiki-based system and designed a half-formatted database. As proof of principle, a database of flavonoid with 6902 molecular structures from over 1687 plant species was implemented on MediaWiki, the background system of Wikipedia. Registered users can describe any information in an arbitrary format. Structured part is subject to text-string searches to realize relational operations. The system was written in PHP language as the extension of MediaWiki. All modifications are open-source and publicly available. Conclusion This scheme benefits from both the free-formatted Wiki style and the concise and structured relational-database style. MediaWiki supports multi-user environments for document management, and the cost for database maintenance is alleviated. PMID:18822113
Calculation of brain atrophy using computed tomography and a new atrophy measurement tool
NASA Astrophysics Data System (ADS)
Bin Zahid, Abdullah; Mikheev, Artem; Yang, Andrew Il; Samadani, Uzma; Rusinek, Henry
2015-03-01
Purpose: To determine if brain atrophy can be calculated by performing volumetric analysis on conventional computed tomography (CT) scans in spite of relatively low contrast for this modality. Materials & Method: CTs for 73 patients from the local Veteran Affairs database were selected. Exclusion criteria: AD, NPH, tumor, and alcohol abuse. Protocol: conventional clinical acquisition (Toshiba; helical, 120 kVp, X-ray tube current 300mA, slice thickness 3-5mm). Locally developed, automatic algorithm was used to segment intracranial cavity (ICC) using (a) white matter seed (b) constrained growth, limited by inner skull layer and (c) topological connectivity. ICC was further segmented into CSF and brain parenchyma using a threshold of 16 Hu. Results: Age distribution: 25-95yrs; (Mean 67+/-17.5yrs.). Significant correlation was found between age and CSF/ICC(r=0.695, p<0.01 2-tailed). A quadratic model (y=0.06-0.001x+2.56x10-5x2 ; where y=CSF/ICC and x=age) was a better fit to data (r=0.716, p < 0.01). This is in agreement with MRI literature. For example, Smith et al. found annual CSF/ICC increase in 58 - 94.5 y.o. individuals to be 0.2%/year, whereas our data, restricted to the same age group yield 0.3%/year(0.2-0.4%/yrs. 95%C.I.). Slightly increased atrophy among elderly VA patients is attributable to the presence of other comorbidities. Conclusion: Brain atrophy can be reliably calculated using automated software and conventional CT. Compared to MRI, CT is more widely available, cheaper, and less affected by head motion due to ~100 times shorter scan time. Work is in progress to improve the precision of the measurements, possibly leading to assessment of longitudinal changes within the patient.
Pharmacological Approach for Managing Pain in Irritable Bowel Syndrome: A Review Article
Chen, Longtu; Ilham, Sheikh J.; Feng, Bin
2017-01-01
Context Visceral pain is a leading symptom for patients with irritable bowel syndrome (IBS) that affects 10% - 20 % of the world population. Conventional pharmacological treatments to manage IBS-related visceral pain is unsatisfactory. Recently, medications have emerged to treat IBS patients by targeting the gastrointestinal (GI) tract and peripheral nerves to alleviate visceral pain while avoiding adverse effects on the central nervous system (CNS). Several investigational drugs for IBS also target the periphery with minimal CNS effects. Evidence of Acquisition In this paper, reputable internet databases from 1960 - 2016 were searched including Pubmed and ClinicalTrials.org, and 97 original articles analyzed. Search was performed based on the following keywords and combinations: irritable bowel syndrome, clinical trial, pain, visceral pain, narcotics, opioid, chloride channel, neuropathy, primary afferent, intestine, microbiota, gut barrier, inflammation, diarrhea, constipation, serotonin, visceral hypersensitivity, nociceptor, sensitization, hyperalgesia. Results Certain conventional pain managing drugs do not effectively improve IBS symptoms, including NSAIDs, acetaminophen, aspirin, and various narcotics. Anxiolytic and antidepressant drugs (Benzodiazepines, TCAs, SSRI and SNRI) can attenuate pain in IBS patients with relevant comorbidities. Clonidine, gabapentin and pregabalin can moderately improve IBS symptoms. Lubiprostone relieves constipation predominant IBS (IBS-C) while loperamide improves diarrhea predominant IBS (IBS-D). Alosetron, granisetron and ondansetron can generally treat pain in IBS-D patients, of which alosetron needs to be used with caution due to cardiovascular toxicity. The optimal drugs for managing pain in IBS-D and IBS-C appear to be eluxadoline and linaclotide, respectively, both of which target peripheral GI tract. Conclusions Conventional pain managing drugs are in general not suitable for treating IBS pain. Medications that target the GI tract and peripheral nerves have better therapeutic profiles by limiting adverse CNS effects. PMID:28824858
Feldman, Peter D; Hay, Linda K; Deberdt, Walter; Kennedy, John S; Hutchins, David S; Hay, Donald P; Hardy, Thomas A; Hoffmann, Vicki P; Hornbuckle, Kenneth; Breier, Alan
2004-01-01
The objective of this study was to investigate risk of diabetes among elderly patients during treatment with antipsychotic medications. We conducted a longitudinal, retrospective study assessing the incidence of new prescription claims for antihyperglycemic agents during antipsychotic therapy. Prescription claims from the AdvancePCS claim database were followed for 6 to 9 months. Study participants consisted of patients in the United States aged 60+ and receiving antipsychotic monotherapy. The following cohorts were studied: an elderly reference population (no antipsychotics: n = 1,836,799), those receiving haloperidol (n = 6481) or thioridazine (n = 1658); all patients receiving any conventional antipsychotic monotherapy (n = 11,546), clozapine (n = 117), olanzapine (n = 5382), quetiapine (n = 1664), and risperidone (n = 12,244), and all patients receiving any atypical antipsychotic monotherapy (n = 19,407). We used Cox proportional hazards regression to determine the risk ratio of diabetes for antipsychotic cohorts relative to the reference population. Covariates included sex and exposure duration. New antihyperglycemic prescription rates were higher in each antipsychotic cohort than in the reference population. Overall rates were no different between atypical and conventional antipsychotic cohorts. Among individual antipsychotic cohorts, rates were highest among patients treated with thioridazine (95% confidence interval [CI], 3.1- 5.7), lowest with quetiapine (95% CI, 1.3-2.9), and intermediate with haloperidol, olanzapine, and risperidone. Among atypical cohorts, only risperidone users had a significantly higher risk (95% CI, 1.05-1.60; P = 0.016) than for haloperidol. Conclusions about clozapine were hampered by the low number of patients. These data suggest that diabetes risk is elevated among elderly patients receiving antipsychotic treatment. However, causality remains to be demonstrated. As a group, the risk for atypical antipsychotic users was not significantly different than for users of conventional antipsychotics.
Meta-analysis of acupuncture therapy for the treatment of stable angina pectoris.
Zhang, Ze; Chen, Min; Zhang, Li; Zhang, Zhe; Wu, Wensheng; Liu, Jun; Yan, Jun; Yang, Guanlin
2015-01-01
Angina pectoris is a common symptom imperiling patients' life quality. The aim of this study is to evaluate the efficacy and safety of acupuncture for stable angina pectoris. Clinical randomized-controlled trials (RCTs) comparing the efficacy of acupuncture to conventional drugs in patients with stable angina pectoris were searched using the following database of PubMed, Medline, Wanfang and CNKI. Overall odds ratio (ORs) and weighted mean difference (MD) with their 95% confidence intervals (CI) were calculated by using fixed- or random-effect models depending on the heterogeneity of the included trials. Total 8 RCTs, including 640 angina pectoris cases with 372 patients received acupuncture therapy and 268 patients received conventional drugs, were included. Overall, our result showed that acupuncture significantly increased the clinical curative effects in the relief of angina symptoms (OR=2.89, 95% CI=1.87-4.47, P<0.00001) and improved the electrocardiography (OR=1.83, 95% CI=1.23-2.71, P=0.003), indicating that acupuncture therapy was superior to conventional drugs. Although there was no significant difference in overall effective rate relating reduction of nitroglycerin between two groups (OR=2.13, 95% CI=0.90-5.07, P=0.09), a significant reduction on nitroglycerin consumption in acupuncture group was found (MD=-0.44, 95% CI=-0.64, -0.24, P<0.0001). Furthermore, the time to onset of angina relief was longer for acupuncture therapy than for traditional medicines (MD=2.44, 95% CI=1.64-3.24, P<0.00001, min). No adverse effects associated with acupuncture therapy were found. Acupuncture may be an effective therapy for stable angina pectoris. More clinical trials are needed to systematically assess the role of acupuncture in angina pectoris.
Meta-analysis of acupuncture therapy for the treatment of stable angina pectoris
Zhang, Ze; Chen, Min; Zhang, Li; Zhang, Zhe; Wu, Wensheng; Liu, Jun; Yan, Jun; Yang, Guanlin
2015-01-01
Angina pectoris is a common symptom imperiling patients’ life quality. The aim of this study is to evaluate the efficacy and safety of acupuncture for stable angina pectoris. Clinical randomized-controlled trials (RCTs) comparing the efficacy of acupuncture to conventional drugs in patients with stable angina pectoris were searched using the following database of PubMed, Medline, Wanfang and CNKI. Overall odds ratio (ORs) and weighted mean difference (MD) with their 95% confidence intervals (CI) were calculated by using fixed- or random-effect models depending on the heterogeneity of the included trials. Total 8 RCTs, including 640 angina pectoris cases with 372 patients received acupuncture therapy and 268 patients received conventional drugs, were included. Overall, our result showed that acupuncture significantly increased the clinical curative effects in the relief of angina symptoms (OR=2.89, 95% CI=1.87-4.47, P<0.00001) and improved the electrocardiography (OR=1.83, 95% CI=1.23-2.71, P=0.003), indicating that acupuncture therapy was superior to conventional drugs. Although there was no significant difference in overall effective rate relating reduction of nitroglycerin between two groups (OR=2.13, 95% CI=0.90-5.07, P=0.09), a significant reduction on nitroglycerin consumption in acupuncture group was found (MD=-0.44, 95% CI=-0.64, -0.24, P<0.0001). Furthermore, the time to onset of angina relief was longer for acupuncture therapy than for traditional medicines (MD=2.44, 95% CI=1.64-3.24, P<0.00001, min). No adverse effects associated with acupuncture therapy were found. Acupuncture may be an effective therapy for stable angina pectoris. More clinical trials are needed to systematically assess the role of acupuncture in angina pectoris. PMID:26131084
Friedmacher, Florian; Till, Holger
2015-11-01
In recent years, the use of robotic-assisted surgery (RAS) has expanded within pediatric surgery. Although increasing numbers of pediatric RAS case-series have been published, the level of evidence remains unclear, with authors mainly focusing on the comparison with open surgery rather than the corresponding laparoscopic approach. The aim of this study was to critically appraise the published literature comparing pediatric RAS with conventional minimally invasive surgery (MIS) in order to evaluate the current best level of evidence. A systematic literature-based search for studies comparing pediatric RAS with corresponding MIS procedures was performed using multiple electronic databases and sources. The level of evidence was determined using the Oxford Centre for Evidence-based Medicine (OCEBM) criteria. A total of 20 studies met defined inclusion criteria, reporting on five different procedures: fundoplication (n=8), pyeloplasty (n=8), nephrectomy (n=2), gastric banding (n=1), and sleeve gastrectomy (n=1). Included publications comprised 5 systematic reviews and 15 cohort/case-control studies (OCEBM Level 3 and 4, respectively). No studies of OCEBM Level 1 or 2 were identified. Limited evidence indicated reduced operative time (pyeloplasty) and shorter hospital stay (fundoplication) for pediatric RAS, whereas disadvantages were longer operative time (fundoplication, nephrectomy, gastric banding, and sleeve gastrectomy) and higher total costs (fundoplication and sleeve gastrectomy). There were no differences reported for complications, success rates, or short-term outcomes between pediatric RAS and conventional MIS in these procedures. Inconsistency was found in study design and follow-up with large clinical heterogeneity. The best available evidence for pediatric RAS is currently OCEBM Level 3, relating only to fundoplication and pyeloplasty. Therefore, higher-quality studies and comparative data for other RAS procedures in pediatric surgery are required.
Wang, Linhui; Wu, Zhenjie; Li, Mingmin; Cai, Chen; Liu, Bing; Yang, Qing; Sun, Yinghao
2013-06-01
To assess the surgical efficacy and potential advantages of laparoendoscopic single-site adrenalectomy (LESS-AD) compared with conventional laparoscopic adrenalectomy (CL-AD) based on published literature. An online systematic search in electronic databasesM including Pubmed, Embase, and the Cochrane Library, as well as manual bibliography searches were performed. All studies that compared LESS-AD with CL-AD were included. The outcome measures were the patient demographics, tumor size, blood loss, operative time, time to resumption of oral intake, hospital stay, postoperative pain, cosmesis satisfaction score, rates of complication, conversion, and transfusion. A meta-analysis of the results was conducted. A total of 443 patients were included: 171 patients in the LESS-AD group and 272 patients in the CL-AD group (nine studies). There was no significant difference between the two groups in any of the demographic parameters expect for lesion size (age: P=0.24; sex: P=0.35; body mass index: P=0.79; laterality: P=0.76; size: P=0.002). There was no significant difference in estimated blood loss, time to oral intake resumption, and length of stay between the two groups. The LESS-AD patients had a significantly lower postoperative visual analog pain score compared with the CL-AD group, but a longer operative time was noted. Both groups had a comparable cosmetic satisfaction score. The two groups had a comparable rate of complication, conversion, and transfusion. In early experience, LESS-AD appears to be a safe and feasible alternative to its conventional laparoscopic counterpart with decreased postoperative pain noted, albeit with a longer operative time. As a promising and emerging minimally invasive technique, however, the current evidence has not verified other potential advantages (ie, cosmesis, recovery time, convalescence, port-related complications, etc.) of LESS-AD.
Deitch, Iris; Amer, Radgonde; Tomkins-Netzer, Oren; Habot-Wilner, Zohar; Friling, Ronit; Neumann, Ron; Kramer, Michal
2018-04-01
This study aimed to report the clinical outcome of children with uveitis treated with anti-tumor necrosis factor alpha (TNF-α) agents. This included a retrospective cohort study. Children with uveitis treated with infliximab or adalimumab in 2008-2014 at five dedicated uveitis clinics were identified by database search. Their medical records were reviewed for demographic data, clinical presentation, ocular complications, and visual outcome. Systemic side effects and the steroid-sparing effect of treatment were documented. The cohort included 24 patients (43 eyes) of whom 14 received infliximab and 10 received adalimumab after failing conventional immunosuppression therapy. Mean age was 9.3 ± 4.0 years. The most common diagnosis was juvenile idiopathic arthritis-related uveitis (n = 10), followed by Behçet's disease (n = 4), sarcoidosis (n = 1), and ankylosing spondylitis (n = 1); eight had idiopathic uveitis. Ocular manifestations included panuveitis in 20 eyes (46.5%), chronic anterior uveitis in 19 (44.2%), and intermediate uveitis in 4 (9.3%). The duration of biologic treatment ranged from 6 to 72 months. During the 12 months prior to biologic treatment, while on conventional immunosuppressive therapy, mean visual acuity deteriorated from 0.22 to 0.45 logMAR, with a trend of recovery to 0.25 at 3 months after initiation of biologic treatment, remaining stable thereafter. A full corticosteroid-sparing effect was demonstrated in 16 of the 19 patients (84.2%) for whom data were available. Treatment was well tolerated. Treatment of pediatric uveitis with anti-TNF-α agents may improve outcome while providing steroid-sparing effect, when conventional immunosuppression fails. The role of anti-TNF-α agents as first-line treatment should be further investigated in controlled prospective clinical trials.
Identification and analysis of multigene families by comparison of exon fingerprints.
Brown, N P; Whittaker, A J; Newell, W R; Rawlings, C J; Beck, S
1995-06-02
Gene families are often recognised by sequence homology using similarity searching to find relationships, however, genomic sequence data provides gene architectural information not used by conventional search methods. In particular, intron positions and phases are expected to be relatively conserved features, because mis-splicing and reading frame shifts should be selected against. A fast search technique capable of detecting possible weak sequence homologies apparent at the intron/exon level of gene organization is presented for comparing spliceosomal genes and gene fragments. FINEX compares strings of exons delimited by intron/exon boundary positions and intron phases (exon fingerprint) using a global dynamic programming algorithm with a combined intron phase identity and exon size dissimilarity score. Exon fingerprints are typically two orders of magnitude smaller than their nucleic acid sequence counterparts giving rise to fast search times: a ranked search against a library of 6755 fingerprints for a typical three exon fingerprint completes in under 30 seconds on an ordinary workstation, while a worst case largest fingerprint of 52 exons completes in just over one minute. The short "sequence" length of exon fingerprints in comparisons is compensated for by the large exon alphabet compounded of intron phase types and a wide range of exon sizes, the latter contributing the most information to alignments. FINEX performs better in some searches than conventional methods, finding matches with similar exon organization, but low sequence homology. A search using a human serum albumin finds all members of the multigene family in the FINEX database at the top of the search ranking, despite very low amino acid percentage identities between family members. The method should complement conventional sequence searching and alignment techniques, offering a means of identifying otherwise hard to detect homologies where genomic data are available.
The Safety and Efficacy of Approaches to Liver Resection: A Meta-Analysis
Hauch, Adam; Hu, Tian; Buell, Joseph F.; Slakey, Douglas P.; Kandil, Emad
2015-01-01
Background: The aim of this study is to compare the safety and efficacy of conventional laparotomy with those of robotic and laparoscopic approaches to hepatectomy. Database: Independent reviewers conducted a systematic review of publications in PubMed and Embase, with searches limited to comparative articles of laparoscopic hepatectomy with either conventional or robotic liver approaches. Outcomes included total operative time, estimated blood loss, length of hospitalization, resection margins, postoperative complications, perioperative mortality rates, and cost measures. Outcome comparisons were calculated using random-effects models to pool estimates of mean net differences or of the relative risk between group outcomes. Forty-nine articles, representing 3702 patients, comprise this analysis: 1901 (51.35%) underwent a laparoscopic approach, 1741 (47.03%) underwent an open approach, and 60 (1.62%) underwent a robotic approach. There was no difference in total operative times, surgical margins, or perioperative mortality rates among groups. Across all outcome measures, laparoscopic and robotic approaches showed no difference. As compared with the minimally invasive groups, patients undergoing laparotomy had a greater estimated blood loss (pooled mean net change, 152.0 mL; 95% confidence interval, 103.3–200.8 mL), a longer length of hospital stay (pooled mean difference, 2.22 days; 95% confidence interval, 1.78–2.66 days), and a higher total complication rate (odds ratio, 0.5; 95% confidence interval, 0.42–0.57). Conclusion: Minimally invasive approaches to liver resection are as safe as conventional laparotomy, affording less estimated blood loss, shorter lengths of hospitalization, lower perioperative complication rates, and equitable oncologic integrity and postoperative mortality rates. There was no proven advantage of robotic approaches compared with laparoscopic approaches. PMID:25848191
Code of Federal Regulations, 2013 CFR
2013-01-01
... convention committee employees, volunteers and similar personnel, whose responsibilities involve planning... services related to the convention; (iv) Expenses of national committee employees, volunteers or other... convention committee employees, consultants, volunteers and convention officials in recognition of convention...
Code of Federal Regulations, 2012 CFR
2012-01-01
... convention committee employees, volunteers and similar personnel, whose responsibilities involve planning... services related to the convention; (iv) Expenses of national committee employees, volunteers or other... convention committee employees, consultants, volunteers and convention officials in recognition of convention...
Code of Federal Regulations, 2011 CFR
2011-01-01
... convention committee employees, volunteers and similar personnel, whose responsibilities involve planning... services related to the convention; (iv) Expenses of national committee employees, volunteers or other... convention committee employees, consultants, volunteers and convention officials in recognition of convention...
Code of Federal Regulations, 2014 CFR
2014-01-01
... convention committee employees, volunteers and similar personnel, whose responsibilities involve planning... services related to the convention; (iv) Expenses of national committee employees, volunteers or other... convention committee employees, consultants, volunteers and convention officials in recognition of convention...
Relational databases for rare disease study: application to vascular anomalies.
Perkins, Jonathan A; Coltrera, Marc D
2008-01-01
To design a relational database integrating clinical and basic science data needed for multidisciplinary treatment and research in the field of vascular anomalies. Based on data points agreed on by the American Society of Pediatric Otolaryngology (ASPO) Vascular Anomalies Task Force. The database design enables sharing of data subsets in a Health Insurance Portability and Accountability Act (HIPAA)-compliant manner for multisite collaborative trials. Vascular anomalies pose diagnostic and therapeutic challenges. Our understanding of these lesions and treatment improvement is limited by nonstandard terminology, severity assessment, and measures of treatment efficacy. The rarity of these lesions places a premium on coordinated studies among multiple participant sites. The relational database design is conceptually centered on subjects having 1 or more lesions. Each anomaly can be tracked individually along with their treatment outcomes. This design allows for differentiation between treatment responses and untreated lesions' natural course. The relational database design eliminates data entry redundancy and results in extremely flexible search and data export functionality. Vascular anomaly programs in the United States. A relational database correlating clinical findings and photographic, radiologic, histologic, and treatment data for vascular anomalies was created for stand-alone and multiuser networked systems. Proof of concept for independent site data gathering and HIPAA-compliant sharing of data subsets was demonstrated. The collaborative effort by the ASPO Vascular Anomalies Task Force to create the database helped define a common vascular anomaly data set. The resulting relational database software is a powerful tool to further the study of vascular anomalies and the development of evidence-based treatment innovation.
Keeping Track of Our Treasures: Managing Historical Data with Relational Database Software.
ERIC Educational Resources Information Center
Gutmann, Myron P.; And Others
1989-01-01
Describes the way a relational database management system manages a large historical data collection project. Shows that such databases are practical to construct. States that the programing tasks involved are not for beginners, but the rewards of having data organized are worthwhile. (GG)
Scherer, A; Kröpil, P; Heusch, P; Buchbender, C; Sewerin, P; Blondin, D; Lanzman, R S; Miese, F; Ostendorf, B; Bölke, E; Mödder, U; Antoch, G
2011-11-01
Medical curricula are currently being reformed in order to establish superordinated learning objectives, including, e.g., diagnostic, therapeutic and preventive competences. This requires a shifting from traditional teaching methods towards interactive and case-based teaching concepts. Conceptions, initial experiences and student evaluations of a novel radiological course Co-operative Learning In Clinical Radiology (CLICR) are presented in this article. A novel radiological teaching course (CLICR course), which combines different innovative teaching elements, was established and integrated into the medical curriculum. Radiological case vignettes were created for three clinical teaching modules. By using a PC with PACS (Picture Archiving and Communication System) access, web-based databases and the CASUS platform, a problem-oriented, case-based and independent way of learning was supported as an adjunct to the well established radiological courses and lectures. Student evaluations of the novel CLICR course and the radiological block course were compared. Student evaluations of the novel CLICR course were significantly better compared to the conventional radiological block course. Of the participating students 52% gave the highest rating for the novel CLICR course concerning the endpoint overall satisfaction as compared to 3% of students for the conventional block course. The innovative interactive concept of the course and the opportunity to use a web-based database were favorably accepted by the students. Of the students 95% rated the novel course concept as a substantial gain for the medical curriculum and 95% also commented that interactive working with the PACS and a web-based database (82%) promoted learning and understanding. Interactive, case-based teaching concepts such as the presented CLICR course are considered by both students and teachers as useful extensions to the radiological course program. These concepts fit well into competence-oriented curricula.
A DICOM based radiotherapy plan database for research collaboration and reporting
NASA Astrophysics Data System (ADS)
Westberg, J.; Krogh, S.; Brink, C.; Vogelius, I. R.
2014-03-01
Purpose: To create a central radiotherapy (RT) plan database for dose analysis and reporting, capable of calculating and presenting statistics on user defined patient groups. The goal is to facilitate multi-center research studies with easy and secure access to RT plans and statistics on protocol compliance. Methods: RT institutions are able to send data to the central database using DICOM communications on a secure computer network. The central system is composed of a number of DICOM servers, an SQL database and in-house developed software services to process the incoming data. A web site within the secure network allows the user to manage their submitted data. Results: The RT plan database has been developed in Microsoft .NET and users are able to send DICOM data between RT centers in Denmark. Dose-volume histogram (DVH) calculations performed by the system are comparable to those of conventional RT software. A permission system was implemented to ensure access control and easy, yet secure, data sharing across centers. The reports contain DVH statistics for structures in user defined patient groups. The system currently contains over 2200 patients in 14 collaborations. Conclusions: A central RT plan repository for use in multi-center trials and quality assurance was created. The system provides an attractive alternative to dummy runs by enabling continuous monitoring of protocol conformity and plan metrics in a trial.
NGSmethDB 2017: enhanced methylomes and differential methylation.
Lebrón, Ricardo; Gómez-Martín, Cristina; Carpena, Pedro; Bernaola-Galván, Pedro; Barturen, Guillermo; Hackenberg, Michael; Oliver, José L
2017-01-04
The 2017 update of NGSmethDB stores whole genome methylomes generated from short-read data sets obtained by bisulfite sequencing (WGBS) technology. To generate high-quality methylomes, stringent quality controls were integrated with third-part software, adding also a two-step mapping process to exploit the advantages of the new genome assembly models. The samples were all profiled under constant parameter settings, thus enabling comparative downstream analyses. Besides a significant increase in the number of samples, NGSmethDB now includes two additional data-types, which are a valuable resource for the discovery of methylation epigenetic biomarkers: (i) differentially methylated single-cytosines; and (ii) methylation segments (i.e. genome regions of homogeneous methylation). The NGSmethDB back-end is now based on MongoDB, a NoSQL hierarchical database using JSON-formatted documents and dynamic schemas, thus accelerating sample comparative analyses. Besides conventional database dumps, track hubs were implemented, which improved database access, visualization in genome browsers and comparative analyses to third-part annotations. In addition, the database can be also accessed through a RESTful API. Lastly, a Python client and a multiplatform virtual machine allow for program-driven access from user desktop. This way, private methylation data can be compared to NGSmethDB without the need to upload them to public servers. Database website: http://bioinfo2.ugr.es/NGSmethDB. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
You, Leiming; Wu, Jiexin; Feng, Yuchao; Fu, Yonggui; Guo, Yanan; Long, Liyuan; Zhang, Hui; Luan, Yijie; Tian, Peng; Chen, Liangfu; Huang, Guangrui; Huang, Shengfeng; Li, Yuxin; Li, Jie; Chen, Chengyong; Zhang, Yaqing; Chen, Shangwu; Xu, Anlong
2015-01-01
Increasing amounts of genes have been shown to utilize alternative polyadenylation (APA) 3′-processing sites depending on the cell and tissue type and/or physiological and pathological conditions at the time of processing, and the construction of genome-wide database regarding APA is urgently needed for better understanding poly(A) site selection and APA-directed gene expression regulation for a given biology. Here we present a web-accessible database, named APASdb (http://mosas.sysu.edu.cn/utr), which can visualize the precise map and usage quantification of different APA isoforms for all genes. The datasets are deeply profiled by the sequencing alternative polyadenylation sites (SAPAS) method capable of high-throughput sequencing 3′-ends of polyadenylated transcripts. Thus, APASdb details all the heterogeneous cleavage sites downstream of poly(A) signals, and maintains near complete coverage for APA sites, much better than the previous databases using conventional methods. Furthermore, APASdb provides the quantification of a given APA variant among transcripts with different APA sites by computing their corresponding normalized-reads, making our database more useful. In addition, APASdb supports URL-based retrieval, browsing and display of exon-intron structure, poly(A) signals, poly(A) sites location and usage reads, and 3′-untranslated regions (3′-UTRs). Currently, APASdb involves APA in various biological processes and diseases in human, mouse and zebrafish. PMID:25378337
Lobar lung transplantation from deceased donors: A systematic review
Eberlein, Michael; Reed, Robert M; Chahla, Mayy; Bolukbas, Servet; Blevins, Amy; Van Raemdonck, Dirk; Stanzi, Alessia; Inci, Ilhan; Marasco, Silvana; Shigemura, Norihisa; Aigner, Clemens; Deuse, Tobias
2017-01-01
AIM To systematically review reports on deceased-donor-lobar lung transplantation (ddLLTx) and uniformly describe size matching using the donor-to-recipient predicted-total lung-capacity (pTLC) ratio. METHODS We set out to systematically review reports on ddLLTx and uniformly describe size matching using the donor-to-recipient pTLC ratio and to summarize reported one-year survival data of ddLLTx and conventional-LTx. We searched in PubMed, CINAHL via EBSCO, Cochrane Database of Systematic Reviews via Wiley (CDSR), Database of Abstracts of Reviews of Effects via Wiley (DARE), Cochrane Central Register of Controlled Trials via Wiley (CENTRAL), Scopus (which includes EMBASE abstracts), and Web of Science for original reports on ddLLTx. RESULTS Nine observational cohort studies reporting on 301 ddLLTx met our inclusion criteria for systematic review of size matching, and eight for describing one-year-survival. The ddLLTx-group was often characterized by high acuity; however there was heterogeneity in transplant indications and pre-operative characteristics between studies. Data to calculate the pTLC ratio was available for 242 ddLLTx (80%). The mean pTLCratio before lobar resection was 1.25 ± 0.3 and the transplanted pTLCratio after lobar resection was 0.76 ± 0.2. One-year survival in the ddLLTx-group ranged from 50%-100%, compared to 72%-88% in the conventional-LTx group. In the largest study ddLLTx (n = 138) was associated with a lower one-year-survival compared to conventional-LTx (n = 539) (65.1% vs 84.1%, P < 0.001). CONCLUSION Further investigations of optimal donor-to-recipient size matching parameters for ddLLTx could improve outcomes of this important surgical option. PMID:28280698
Lobar lung transplantation from deceased donors: A systematic review.
Eberlein, Michael; Reed, Robert M; Chahla, Mayy; Bolukbas, Servet; Blevins, Amy; Van Raemdonck, Dirk; Stanzi, Alessia; Inci, Ilhan; Marasco, Silvana; Shigemura, Norihisa; Aigner, Clemens; Deuse, Tobias
2017-02-24
To systematically review reports on deceased-donor-lobar lung transplantation (ddLLTx) and uniformly describe size matching using the donor-to-recipient predicted-total lung-capacity (pTLC) ratio. We set out to systematically review reports on ddLLTx and uniformly describe size matching using the donor-to-recipient pTLC ratio and to summarize reported one-year survival data of ddLLTx and conventional-LTx. We searched in PubMed, CINAHL via EBSCO, Cochrane Database of Systematic Reviews via Wiley (CDSR), Database of Abstracts of Reviews of Effects via Wiley (DARE), Cochrane Central Register of Controlled Trials via Wiley (CENTRAL), Scopus (which includes EMBASE abstracts), and Web of Science for original reports on ddLLTx. Nine observational cohort studies reporting on 301 ddLLTx met our inclusion criteria for systematic review of size matching, and eight for describing one-year-survival. The ddLLTx-group was often characterized by high acuity; however there was heterogeneity in transplant indications and pre-operative characteristics between studies. Data to calculate the pTLC ratio was available for 242 ddLLTx (80%). The mean pTLCratio before lobar resection was 1.25 ± 0.3 and the transplanted pTLCratio after lobar resection was 0.76 ± 0.2. One-year survival in the ddLLTx-group ranged from 50%-100%, compared to 72%-88% in the conventional-LTx group. In the largest study ddLLTx ( n = 138) was associated with a lower one-year-survival compared to conventional-LTx ( n = 539) (65.1% vs 84.1%, P < 0.001). Further investigations of optimal donor-to-recipient size matching parameters for ddLLTx could improve outcomes of this important surgical option.
An Animated Introduction to Relational Databases for Many Majors
ERIC Educational Resources Information Center
Dietrich, Suzanne W.; Goelman, Don; Borror, Connie M.; Crook, Sharon M.
2015-01-01
Database technology affects many disciplines beyond computer science and business. This paper describes two animations developed with images and color that visually and dynamically introduce fundamental relational database concepts and querying to students of many majors. The goal is for educators in diverse academic disciplines to incorporate the…
ERIC Educational Resources Information Center
Lundquist, Carol; Frieder, Ophir; Holmes, David O.; Grossman, David
1999-01-01
Describes a scalable, parallel, relational database-drive information retrieval engine. To support portability across a wide range of execution environments, all algorithms adhere to the SQL-92 standard. By incorporating relevance feedback algorithms, accuracy is enhanced over prior database-driven information retrieval efforts. Presents…
Brimhall, Bradley B; Hall, Timothy E; Walczak, Steven
2006-01-01
A hospital laboratory relational database, developed over eight years, has demonstrated significant cost savings and a substantial financial return on investment (ROI). In addition, the database has been used to measurably improve laboratory operations and the quality of patient care.
DOE Office of Scientific and Technical Information (OSTI.GOV)
David Nix, Lisa Simirenko
2006-10-25
The Biolmaging Database (BID) is a relational database developed to store the data and meta-data for the 3D gene expression in early Drosophila embryo development on a cellular level. The schema was written to be used with the MySQL DBMS but with minor modifications can be used on any SQL compliant relational DBMS.
The relational clinical database: a possible solution to the star wars in registry systems.
Michels, D K; Zamieroski, M
1990-12-01
In summary, having data from other service areas available in a relational clinical database could resolve many of the problems existing in today's registry systems. Uniting sophisticated information systems into a centralized database system could definitely be a corporate asset in managing the bottom line.
New tools and methods for direct programmatic access to the dbSNP relational database.
Saccone, Scott F; Quan, Jiaxi; Mehta, Gaurang; Bolze, Raphael; Thomas, Prasanth; Deelman, Ewa; Tischfield, Jay A; Rice, John P
2011-01-01
Genome-wide association studies often incorporate information from public biological databases in order to provide a biological reference for interpreting the results. The dbSNP database is an extensive source of information on single nucleotide polymorphisms (SNPs) for many different organisms, including humans. We have developed free software that will download and install a local MySQL implementation of the dbSNP relational database for a specified organism. We have also designed a system for classifying dbSNP tables in terms of common tasks we wish to accomplish using the database. For each task we have designed a small set of custom tables that facilitate task-related queries and provide entity-relationship diagrams for each task composed from the relevant dbSNP tables. In order to expose these concepts and methods to a wider audience we have developed web tools for querying the database and browsing documentation on the tables and columns to clarify the relevant relational structure. All web tools and software are freely available to the public at http://cgsmd.isi.edu/dbsnpq. Resources such as these for programmatically querying biological databases are essential for viably integrating biological information into genetic association experiments on a genome-wide scale.
ERIC Educational Resources Information Center
Blair, John C., Jr.
1982-01-01
Outlines the important factors to be considered in selecting a database management system for use with a microcomputer and presents a series of guidelines for developing a database. General procedures, report generation, data manipulation, information storage, word processing, data entry, database indexes, and relational databases are among the…
Côté, Richard G; Jones, Philip; Martens, Lennart; Kerrien, Samuel; Reisinger, Florian; Lin, Quan; Leinonen, Rasko; Apweiler, Rolf; Hermjakob, Henning
2007-10-18
Each major protein database uses its own conventions when assigning protein identifiers. Resolving the various, potentially unstable, identifiers that refer to identical proteins is a major challenge. This is a common problem when attempting to unify datasets that have been annotated with proteins from multiple data sources or querying data providers with one flavour of protein identifiers when the source database uses another. Partial solutions for protein identifier mapping exist but they are limited to specific species or techniques and to a very small number of databases. As a result, we have not found a solution that is generic enough and broad enough in mapping scope to suit our needs. We have created the Protein Identifier Cross-Reference (PICR) service, a web application that provides interactive and programmatic (SOAP and REST) access to a mapping algorithm that uses the UniProt Archive (UniParc) as a data warehouse to offer protein cross-references based on 100% sequence identity to proteins from over 70 distinct source databases loaded into UniParc. Mappings can be limited by source database, taxonomic ID and activity status in the source database. Users can copy/paste or upload files containing protein identifiers or sequences in FASTA format to obtain mappings using the interactive interface. Search results can be viewed in simple or detailed HTML tables or downloaded as comma-separated values (CSV) or Microsoft Excel (XLS) files suitable for use in a local database or a spreadsheet. Alternatively, a SOAP interface is available to integrate PICR functionality in other applications, as is a lightweight REST interface. We offer a publicly available service that can interactively map protein identifiers and protein sequences to the majority of commonly used protein databases. Programmatic access is available through a standards-compliant SOAP interface or a lightweight REST interface. The PICR interface, documentation and code examples are available at http://www.ebi.ac.uk/Tools/picr.
Côté, Richard G; Jones, Philip; Martens, Lennart; Kerrien, Samuel; Reisinger, Florian; Lin, Quan; Leinonen, Rasko; Apweiler, Rolf; Hermjakob, Henning
2007-01-01
Background Each major protein database uses its own conventions when assigning protein identifiers. Resolving the various, potentially unstable, identifiers that refer to identical proteins is a major challenge. This is a common problem when attempting to unify datasets that have been annotated with proteins from multiple data sources or querying data providers with one flavour of protein identifiers when the source database uses another. Partial solutions for protein identifier mapping exist but they are limited to specific species or techniques and to a very small number of databases. As a result, we have not found a solution that is generic enough and broad enough in mapping scope to suit our needs. Results We have created the Protein Identifier Cross-Reference (PICR) service, a web application that provides interactive and programmatic (SOAP and REST) access to a mapping algorithm that uses the UniProt Archive (UniParc) as a data warehouse to offer protein cross-references based on 100% sequence identity to proteins from over 70 distinct source databases loaded into UniParc. Mappings can be limited by source database, taxonomic ID and activity status in the source database. Users can copy/paste or upload files containing protein identifiers or sequences in FASTA format to obtain mappings using the interactive interface. Search results can be viewed in simple or detailed HTML tables or downloaded as comma-separated values (CSV) or Microsoft Excel (XLS) files suitable for use in a local database or a spreadsheet. Alternatively, a SOAP interface is available to integrate PICR functionality in other applications, as is a lightweight REST interface. Conclusion We offer a publicly available service that can interactively map protein identifiers and protein sequences to the majority of commonly used protein databases. Programmatic access is available through a standards-compliant SOAP interface or a lightweight REST interface. The PICR interface, documentation and code examples are available at . PMID:17945017
Yang, Yingxin; Ma, Qiu-yan; Yang, Yue; He, Yu-peng; Ma, Chao-ting; Li, Qiang; Jin, Ming; Chen, Wei
2018-01-01
Abstract Background: Primary open angle glaucoma (POAG) is a chronic, progressive optic neuropathy. The aim was to develop an evidence-based clinical practice guideline of Chinese herbal medicine (CHM) for POAG with focus on Chinese medicine pattern differentiation and treatment as well as approved herbal proprietary medicine. Methods: The guideline development group involved in various pieces of expertise in contents and methods. Authors searched electronic databases include CNKI, VIP, Sino-Med, Wanfang data, PubMed, the Cochrane Library, EMBASE, as well as checked China State Food and Drug Administration (SFDA) from the inception of these databases to June 30, 2015. Systematic reviews and randomized controlled trials of Chinese herbal medicine treating adults with POAG were evaluated. Risk of bias tool in the Cochrane Handbook and evidence strength developed by the GRADE group were applied for the evaluation, and recommendations were based on the findings incorporating evidence strength. After several rounds of Expert consensus, the final guideline was endorsed by relevant professional committees. Results: CHM treatment principle and formulae based on pattern differentiation together with approved patent herbal medicines are the main treatments for POAG, and the diagnosis and treatment focusing on blood related patterns is the major domain. Conclusion: CHM therapy alone or combined with other conventional treatment reported in clinical studies together with Expert consensus were recommended for clinical practice. PMID:29595636
Evaluation of handwriting kinematics and pressure for differential diagnosis of Parkinson's disease.
Drotár, Peter; Mekyska, Jiří; Rektorová, Irena; Masarová, Lucia; Smékal, Zdeněk; Faundez-Zanuy, Marcos
2016-02-01
We present the PaHaW Parkinson's disease handwriting database, consisting of handwriting samples from Parkinson's disease (PD) patients and healthy controls. Our goal is to show that kinematic features and pressure features in handwriting can be used for the differential diagnosis of PD. The database contains records from 37 PD patients and 38 healthy controls performing eight different handwriting tasks. The tasks include drawing an Archimedean spiral, repetitively writing orthographically simple syllables and words, and writing of a sentence. In addition to the conventional kinematic features related to the dynamics of handwriting, we investigated new pressure features based on the pressure exerted on the writing surface. To discriminate between PD patients and healthy subjects, three different classifiers were compared: K-nearest neighbors (K-NN), ensemble AdaBoost classifier, and support vector machines (SVM). For predicting PD based on kinematic and pressure features of handwriting, the best performing model was SVM with classification accuracy of Pacc=81.3% (sensitivity Psen=87.4% and specificity of Pspe=80.9%). When evaluated separately, pressure features proved to be relevant for PD diagnosis, yielding Pacc=82.5% compared to Pacc=75.4% using kinematic features. Experimental results showed that an analysis of kinematic and pressure features during handwriting can help assess subtle characteristics of handwriting and discriminate between PD patients and healthy controls. Copyright © 2016 Elsevier B.V. All rights reserved.
Rabal, Obdulia; Link, Wolfgang; Serelde, Beatriz G; Bischoff, James R; Oyarzabal, Julen
2010-04-01
Here we report the development and validation of a complete solution to manage and analyze the data produced by image-based phenotypic screening campaigns of small-molecule libraries. In one step initial crude images are analyzed for multiple cytological features, statistical analysis is performed and molecules that produce the desired phenotypic profile are identified. A naïve Bayes classifier, integrating chemical and phenotypic spaces, is built and utilized during the process to assess those images initially classified as "fuzzy"-an automated iterative feedback tuning. Simultaneously, all this information is directly annotated in a relational database containing the chemical data. This novel fully automated method was validated by conducting a re-analysis of results from a high-content screening campaign involving 33 992 molecules used to identify inhibitors of the PI3K/Akt signaling pathway. Ninety-two percent of confirmed hits identified by the conventional multistep analysis method were identified using this integrated one-step system as well as 40 new hits, 14.9% of the total, originally false negatives. Ninety-six percent of true negatives were properly recognized too. A web-based access to the database, with customizable data retrieval and visualization tools, facilitates the posterior analysis of annotated cytological features which allows identification of additional phenotypic profiles; thus, further analysis of original crude images is not required.
Short dental implants: an emerging concept in implant treatment.
Al-Hashedi, Ashwaq Ali; Taiyeb Ali, Tara Bai; Yunus, Norsiah
2014-06-01
Short implants have been advocated as a treatment option in many clinical situations where the use of conventional implants is limited. This review outlines the effectiveness and clinical outcomes of using short implants as a valid treatment option in the rehabilitation of edentulous atrophic alveolar ridges. Initially, an electronic search was performed on the following databases: Medline, PubMed, Embase, Cochrane Database of Systematic Reviews, and DARE using key words from January 1990 until May 2012. An additional hand search was included for the relevant articles in the following journals: International Journal of Oral and Maxillofacial Implants, Clinical Oral Implants Research, Journal of Clinical Periodontology, International Journal of Periodontics, Journal of Periodontology, and Clinical Implant Dentistry and Related Research. Any relevant papers from the journals' references were hand searched. Articles were included if they provided detailed data on implant length, reported survival rates, mentioned measures for implant failure, were in the English language, involved human subjects, and researched implants inserted in healed atrophic ridges with a follow-up period of at least 1 year after implant-prosthesis loading. Short implants demonstrated a high rate of success in the replacement of missing teeth in especially atrophic alveolar ridges. The advanced technology and improvement of the implant surfaces have encouraged the success of short implants to a comparable level to that of standard implants. However, further randomized controlled clinical trials and prospective studies with longer follow-up periods are needed.
Microlithography and resist technology information at your fingertips via SciFinder
NASA Astrophysics Data System (ADS)
Konuk, Rengin; Macko, John R.; Staggenborg, Lisa
1997-07-01
Finding and retrieving the information you need about microlithography and resist technology in a timely fashion can make or break your competitive edge in today's business environment. Chemical Abstracts Service (CAS) provides the most complete and comprehensive database of the chemical literature in the CAplus, REGISTRY, and CASREACT files including 13 million document references, 15 million substance records and over 1.2 million reactions. This includes comprehensive coverage of positive and negative resist formulations and processing, photoacid generation, silylation, single and multilayer resist systems, photomasks, dry and wet etching, photolithography, electron-beam, ion-beam and x-ray lithography technologies and process control, optical tools, exposure systems, radiation sources and steppers. Journal articles, conference proceedings and patents related to microlithography and resist technology are analyzed and indexed by scientific information analysts with strong technical background in these areas. The full CAS database, which is updated weekly with new information, is now available at your desktop, via a convenient, user-friendly tool called 'SciFinder.' Author, subject and chemical substance searching is simplified by SciFinder's smart search features. Chemical substances can be searched by chemical structure, chemical name, CAS registry number or molecular formula. Drawing chemical structures in SciFinder is easy and does not require compliance with CA conventions. Built-in intelligence of SciFinder enables users to retrieve substances with multiple components, tautomeric forms and salts.
Biological Databases for Human Research
Zou, Dong; Ma, Lina; Yu, Jun; Zhang, Zhang
2015-01-01
The completion of the Human Genome Project lays a foundation for systematically studying the human genome from evolutionary history to precision medicine against diseases. With the explosive growth of biological data, there is an increasing number of biological databases that have been developed in aid of human-related research. Here we present a collection of human-related biological databases and provide a mini-review by classifying them into different categories according to their data types. As human-related databases continue to grow not only in count but also in volume, challenges are ahead in big data storage, processing, exchange and curation. PMID:25712261
Zhou, Zheng; Dai, Cong; Liu, Wei-xin
2015-06-01
TNF-α has an important role in the pathogenesis of ulcerative colitis (UC). It seems that anti-TNF-α therapy is beneficial in the treatment of UC. The aim was to assess the effectiveness of Infliximab and Adalimamab with UC compared with con- ventional therapy. The Pubmed and Embase databases were searched for studies investigating the efficacy of infliximab and adalimumab on UC. Infliximab had a statistically significant effects in induction of clinical response (RR = 1.67; 95% CI 1.12 to 2.50) of UC compared with conventional therapy, but those had not a statistically significant effects in clinical remission (RR = 1.63; 95% CI 0.84 to 3.18) and reduction of colectomy rate (RR = 0.54; 95% CI 0.26 to 1.12) of UC. And adalimumab had a statistically significant effects in induction of clinical remission (RR =1.82; 95% CI 1.24 to 2.67) and clinical response (RR =1.36; 95% CI 1.13 to 1.64) of UC compared with conventional therapy. Our meta-analyses suggested that Infliximab had a statistically significant effects in induction of clinical response of UC compared with conventional therapy and adalimumab had a statistically significant effects in induction of clinical remission and clinical response of UC compared with conventional therapy.
NASA Astrophysics Data System (ADS)
Letu, H.; Ishimoto, H.; Riedi, J.; Nakajima, T. Y.; -Labonnote, L. C.; Baran, A. J.; Nagao, T. M.; Skiguchi, M.
2015-11-01
Various ice particle habits are investigated in conjunction with inferring the optical properties of ice cloud for the Global Change Observation Mission-Climate (GCOM-C) satellite program. A database of the single-scattering properties of five ice particle habits, namely, plates, columns, droxtals, bullet-rosettes, and Voronoi, is developed. The database is based on the specification of the Second Generation Global Imager (SGLI) sensor onboard the GCOM-C satellite, which is scheduled to be launched in 2017 by Japan Aerospace Exploration Agency (JAXA). A combination of the finite-difference time-domain (FDTD) method, Geometric Optics Integral Equation (GOIE) technique, and geometric optics method (GOM) are applied to compute the single-scattering properties of the selected ice particle habits at 36 wavelengths, from the visible-to-infrared spectral region, covering the SGLI channels for the size parameter, which is defined with respect to the equivalent-volume radius sphere, which ranges between 6 and 9000. The database includes the extinction efficiency, absorption efficiency, average geometrical cross-section, single-scattering albedo, asymmetry factor, size parameter of an equivalent volume sphere, maximum distance from the center of mass, particle volume, and six non-zero elements of the scattering phase matrix. The characteristics of the calculated extinction efficiency, single-scattering albedo, and asymmetry factor of the five ice particle habits are compared. Furthermore, the optical thickness and spherical albedo of ice clouds using the five ice particle habit models are retrieved from the Polarization and Directionality of the Earth's Reflectances-3 (POLDER-3) measurements on board the Polarization and Anisotropy of Reflectances for Atmospheric Sciences coupled with Observations from a Lidar (PARASOL). The optimal ice particle habit for retrieving the SGLI ice cloud properties was investigated by adopting the spherical albedo difference (SAD) method. It is found that the SAD, for bullet-rosette particle, with radii of equivalent volume spheres (r~) ranging between 6 to 10 μm, and the Voronoi particle, with r~ ranging between 28 to 38 μm, and 70 to 100 μm, is distributed stably as the scattering angle increases. It is confirmed that the SAD of small bullet rosette and all sizes of voronoi particles has a low angular dependence, indicating that the combination of the bullet-rosette and Voronoi models are sufficient for retrieval of the ice cloud spherical albedo and optical thickness as an effective habit models of the SGLI sensor. Finally, SAD analysis based on the Voronoi habit model with moderate particles (r~ = 30 μm) is compared to the conventional General Habit Mixture (GHM), Inhomogeneous Hexagonal Monocrystal (IHM), 5-plate aggregate and ensemble ice particle model. It is confirmed that the Voronoi habit model has an effect similar to the counterparts of some conventional models on the retrieval of ice cloud properties from space-borne radiometric observations.
Levy, C.; Beauchamp, C.
1996-01-01
This poster describes the methods used and working prototype that was developed from an abstraction of the relational model from the VA's hierarchical DHCP database. Overlaying the relational model on DHCP permits multiple user views of the physical data structure, enhances access to the database by providing a link to commercial (SQL based) software, and supports a conceptual managed care data model based on primary and longitudinal patient care. The goal of this work was to create a relational abstraction of the existing hierarchical database; to construct, using SQL data definition language, user views of the database which reflect the clinical conceptual view of DHCP, and to allow the user to work directly with the logical view of the data using GUI based commercial software of their choosing. The workstation is intended to serve as a platform from which a managed care information model could be implemented and evaluated.
SQL/NF Translator for the Triton Nested Relational Database System
1990-12-01
18as., Ohio .. 9~~ ~~ 1 4- AFIT/GCE/ENG/90D-05 SQL/Nk1 TRANSLATOR FOR THE TRITON NESTED RELATIONAL DATABASE SYSTEM THESIS Craig William Schnepf Captain...FOR THE TRITON NESTED RELATIONAL DATABASE SYSTEM THESIS Presented to the Faculty of the School of Engineering of the Air Force Institute of Technnlogy... systems . The SQL/NF query language used for the nested relationil model is an extension of the popular relational model query language SQL. The query
Park, Ju Young; Woo, Chung Hee; Yoo, Jae Yong
2016-06-01
This study was conducted to identify the educational effects of a blended e-learning program for graduating nursing students on self-efficacy, problem solving, and psychomotor skills for core basic nursing skills. A one-group pretest/posttest quasi-experimental design was used with 79 nursing students in Korea. The subjects took a conventional 2-week lecture-based practical course, together with spending an average of 60 minutes at least twice a week during 2 weeks on the self-guided e-learning content for basic cardiopulmonary resuscitation and defibrillation using Mosby's Nursing Skills database. Self- and examiner-reported data were collected between September and November 2014 and analyzed using descriptive statistics, paired t test, and Pearson correlation. The results showed that subjects who received blended e-learning education had improved problem-solving abilities (t = 2.654) and self-efficacy for nursing practice related to cardiopulmonary resuscitation and defibrillation (t = 3.426). There was also an 80% to 90% rate of excellent postintervention performance for the majority of psychomotor skills, but the location of chest compressions, compression rate per minute, artificial respiration, and verification of patient outcome still showed low levels of performance. In conclusion, blended E-learning, which allows self-directed repetitive learning, may be more effective in enhancing nursing competencies than conventional practice education.
Bee venom acupuncture for rheumatoid arthritis: a systematic review of randomised clinical trials.
Lee, Ju Ah; Son, Mi Ju; Choi, Jiae; Jun, Ji Hee; Kim, Jong-In; Lee, Myeong Soo
2014-11-07
To assess the clinical evidence for bee venom acupuncture (BVA) for rheumatoid arthritis (RA). Systematic review of randomised controlled trials (RCTs). We searched 14 databases up to March 2014 without a language restriction. Patients with RA. BVA involved injecting purified, diluted BV into acupoints. We included trials on BVA used alone or in combination with a conventional therapy versus the conventional therapy alone. Morning stiffness, pain and joint swelling Erythrocyte sedimentation rate (ESR), C reactive protein (CRP), rheumatoid factor, the number of joints affected by RA and adverse effects likely related to RA. A total of 304 potentially relevant studies were identified; only one RCT met our inclusion criteria. Compared with placebo, BVA may more effectively improve joint pain, swollen joint counts, tender joint counts, ESR and CRP but was not shown to improve morning stiffness. There is low-quality evidence, based on one trial, that BVA can significantly reduce pain, morning stiffness, tender joint counts, swollen joint counts and improve the quality of life of patients with RA compared with placebo (normal saline injection) control. However, the number of trials, their quality and the total sample size were too low to draw firm conclusions. PROSPERO 2013: CRD42013005853. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
Green tea polyphenol epigallocatechin-3-gallate (EGCG) as adjuvant in cancer therapy.
Lecumberri, Elena; Dupertuis, Yves Marc; Miralbell, Raymond; Pichard, Claude
2013-12-01
Green tea catechins, especially epigallocatechin-3-gallate (EGCG), have been associated with cancer prevention and treatment. This has resulted in an increased number of studies evaluating the effects derived from the use of this compound in combination with chemo/radiotherapy. This review aims at compiling latest literature on this subject. Keywords including EGCG, cancer, chemotherapy, radiotherapy and side effects, were searched using PubMed and ScienceDirect databases to identify, analyze, and summarize the research literature on this topic. Most of the studies on this subject up to date are preclinical. Relevance of the findings, impact factor, and date of publication were critical parameters for the studies to be included in the review. Additive and synergistic effects of EGCG when combined with conventional cancer therapies have been proposed, and its anti-inflammatory and antioxidant activities have been related to amelioration of cancer therapy side effects. However, antagonistic interactions with certain anticancer drugs might limit its clinical use. The use of EGCG could enhance the effect of conventional cancer therapies through additive or synergistic effects as well as through amelioration of deleterious side effects. Further research, especially at the clinical level, is needed to ascertain the potential role of EGCG as adjuvant in cancer therapy. Copyright © 2013 Elsevier Ltd and European Society for Clinical Nutrition and Metabolism. All rights reserved.
Using a Semi-Realistic Database to Support a Database Course
ERIC Educational Resources Information Center
Yue, Kwok-Bun
2013-01-01
A common problem for university relational database courses is to construct effective databases for instructions and assignments. Highly simplified "toy" databases are easily available for teaching, learning, and practicing. However, they do not reflect the complexity and practical considerations that students encounter in real-world…
Use of a secure Internet Web site for collaborative medical research.
Marshall, W W; Haley, R W
2000-10-11
Researchers who collaborate on clinical research studies from diffuse locations need a convenient, inexpensive, secure way to record and manage data. The Internet, with its World Wide Web, provides a vast network that enables researchers with diverse types of computers and operating systems anywhere in the world to log data through a common interface. Development of a Web site for scientific data collection can be organized into 10 steps, including planning the scientific database, choosing a database management software system, setting up database tables for each collaborator's variables, developing the Web site's screen layout, choosing a middleware software system to tie the database software to the Web site interface, embedding data editing and calculation routines, setting up the database on the central server computer, obtaining a unique Internet address and name for the Web site, applying security measures to the site, and training staff who enter data. Ensuring the security of an Internet database requires limiting the number of people who have access to the server, setting up the server on a stand-alone computer, requiring user-name and password authentication for server and Web site access, installing a firewall computer to prevent break-ins and block bogus information from reaching the server, verifying the identity of the server and client computers with certification from a certificate authority, encrypting information sent between server and client computers to avoid eavesdropping, establishing audit trails to record all accesses into the Web site, and educating Web site users about security techniques. When these measures are carefully undertaken, in our experience, information for scientific studies can be collected and maintained on Internet databases more efficiently and securely than through conventional systems of paper records protected by filing cabinets and locked doors. JAMA. 2000;284:1843-1849.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chan, Elisa K.; Woods, Ryan; McBride, Mary L.
Purpose: The risk of cardiac injury with hypofractionated whole-breast/chest wall radiation therapy (HF-WBI) compared with conventional whole-breast/chest wall radiation therapy (CF-WBI) in women with left-sided breast cancer remains a concern. The purpose of this study was to determine if there is an increase in hospital-related morbidity from cardiac causes with HF-WBI relative to CF-WBI. Methods and Materials: Between 1990 and 1998, 5334 women ≤80 years of age with early-stage breast cancer were treated with postoperative radiation therapy to the breast or chest wall alone. A population-based database recorded baseline patient, tumor, and treatment factors. Hospital administrative records identified baseline cardiacmore » risk factors and other comorbidities. Factors between radiation therapy groups were balanced using a propensity-score model. The first event of a hospital admission for cardiac causes after radiation therapy was determined from hospitalization records. Ten- and 15-year cumulative hospital-related cardiac morbidity after radiation therapy was estimated for left- and right-sided cases using a competing risk approach. Results: The median follow-up was 13.2 years. For left-sided cases, 485 women were treated with CF-WBI, and 2221 women were treated with HF-WBI. Mastectomy was more common in the HF-WBI group, whereas boost was more common in the CF-WBI group. The CF-WBI group had a higher prevalence of diabetes. The 15-year cumulative hospital-related morbidity from cardiac causes (95% confidence interval) was not different between the 2 radiation therapy regimens after propensity-score adjustment: 21% (19-22) with HF-WBI and 21% (17-25) with CF-WBI (P=.93). For right-sided cases, the 15-year cumulative hospital-related morbidity from cardiac causes was also similar between the radiation therapy groups (P=.76). Conclusions: There is no difference in morbidity leading to hospitalization from cardiac causes among women with left-sided early-stage breast cancer treated with HF-WBI or CF-WBI at 15-year follow-up.« less
Starbase Data Tables: An ASCII Relational Database for Unix
NASA Astrophysics Data System (ADS)
Roll, John
2011-11-01
Database management is an increasingly important part of astronomical data analysis. Astronomers need easy and convenient ways of storing, editing, filtering, and retrieving data about data. Commercial databases do not provide good solutions for many of the everyday and informal types of database access astronomers need. The Starbase database system with simple data file formatting rules and command line data operators has been created to answer this need. The system includes a complete set of relational and set operators, fast search/index and sorting operators, and many formatting and I/O operators. Special features are included to enhance the usefulness of the database when manipulating astronomical data. The software runs under UNIX, MSDOS and IRAF.
Linear discriminant analysis based on L1-norm maximization.
Zhong, Fujin; Zhang, Jiashu
2013-08-01
Linear discriminant analysis (LDA) is a well-known dimensionality reduction technique, which is widely used for many purposes. However, conventional LDA is sensitive to outliers because its objective function is based on the distance criterion using L2-norm. This paper proposes a simple but effective robust LDA version based on L1-norm maximization, which learns a set of local optimal projection vectors by maximizing the ratio of the L1-norm-based between-class dispersion and the L1-norm-based within-class dispersion. The proposed method is theoretically proved to be feasible and robust to outliers while overcoming the singular problem of the within-class scatter matrix for conventional LDA. Experiments on artificial datasets, standard classification datasets and three popular image databases demonstrate the efficacy of the proposed method.
Mars entry guidance based on an adaptive reference drag profile
NASA Astrophysics Data System (ADS)
Liang, Zixuan; Duan, Guangfei; Ren, Zhang
2017-08-01
The conventional Mars entry tracks a fixed reference drag profile (FRDP). To improve the landing precision, a novel guidance approach that utilizes an adaptive reference drag profile (ARDP) is presented. The entry flight is divided into two phases. For each phase, a family of drag profiles corresponding to various trajectory lengths is planned. Two update windows are investigated for the reference drag profile. At each window, the ARDP is selected online from the profile database according to the actual range-to-go. The tracking law for the selected drag profile is designed based on the feedback linearization. Guidance approaches using the ARDP and the FRDP are then tested and compared. Simulation results demonstrate that the proposed ARDP approach achieves much higher guidance precision than the conventional FRDP approach.
Primate Info Net Related Databases NCRR PrimateLit: A bibliographic database for primatology Top of any problems with this service. We welcome your feedback. The PrimateLit database is no longer being Resources, National Institutes of Health. The database is a collaborative project of the Wisconsin Primate
DBGC: A Database of Human Gastric Cancer
Wang, Chao; Zhang, Jun; Cai, Mingdeng; Zhu, Zhenggang; Gu, Wenjie; Yu, Yingyan; Zhang, Xiaoyan
2015-01-01
The Database of Human Gastric Cancer (DBGC) is a comprehensive database that integrates various human gastric cancer-related data resources. Human gastric cancer-related transcriptomics projects, proteomics projects, mutations, biomarkers and drug-sensitive genes from different sources were collected and unified in this database. Moreover, epidemiological statistics of gastric cancer patients in China and clinicopathological information annotated with gastric cancer cases were also integrated into the DBGC. We believe that this database will greatly facilitate research regarding human gastric cancer in many fields. DBGC is freely available at http://bminfor.tongji.edu.cn/dbgc/index.do PMID:26566288
ERIC Educational Resources Information Center
Friedman, Debra; Hoffman, Phillip
2001-01-01
Describes creation of a relational database at the University of Washington supporting ongoing academic planning at several levels and affecting the culture of decision making. Addresses getting started; sharing the database; questions, worries, and issues; improving access to high-demand courses; the advising function; management of instructional…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-15
... construct a database of regional small businesses that currently or may in the future participate in DOT direct and DOT funded transportation related contracts, and make this database available to OSDBU, upon request. 2. Utilize the database of regional transportation-related small businesses to match...
1989-02-01
which capture the knowledge of such experts. These Expert Systems, or Knowledge-Based Systems’, differ from the usual computer programming techniques...their applications in the fields of structural design and welding is reviewed. 5.1 Introduction Expert Systems, or KBES, are computer programs using Al...procedurally constructed as conventional computer programs usually are; * The knowledge base of such systems is executable, unlike databases 3 "Ill
Enhanced Virtual Presence for Immersive Visualization of Complex Situations for Mission Rehearsal
1997-06-01
taken. We propose to join both these technologies together in a registration device . The registration device would be small and portable and easily...registering the panning of the camera (or other sensing device ) and also stitch together the shots to automatically generate panoramic files necessary to...database and as the base information changes each of the linked drawings is automatically updated. Filename Format A specific naming convention should be
Gaussian Random Fields Methods for Fork-Join Network with Synchronization Constraints
2014-12-22
substantial efforts were dedicated to the study of the max-plus recursions [21, 3, 12]. More recently, Atar et al. [2] have studied a fork-join...feedback and NES, Atar et al. [2] show that a dynamic priority discipline achieves throughput optimal- ity asymptotically in the conventional heavy...2011) Patient flow in hospitals: a data-based queueing-science perspective. Submitted to Stochastic Systems, 20. [2] R. Atar , A. Mandelbaum and A
ERIC Educational Resources Information Center
American Society for Information Science, Washington, DC.
This document contains abstracts of papers on database design and management which were presented at the 1986 mid-year meeting of the American Society for Information Science (ASIS). Topics considered include: knowledge representation in a bilingual art history database; proprietary database design; relational database design; in-house databases;…
ERIC Educational Resources Information Center
Lynch, Clifford A.
1991-01-01
Describes several aspects of the problem of supporting information retrieval system query requirements in the relational database management system (RDBMS) environment and proposes an extension to query processing called nonmaterialized relations. User interactions with information retrieval systems are discussed, and nonmaterialized relations are…
Hammitt, J K
1990-09-01
Consumer choice between organically (without pesticides) and conventionally grown produce is examined. Exploratory focus-group discussions and questionnaires (N = 43) suggest that individuals who purchase organically grown produce believe it is substantially less hazardous than the conventional alternative and are willing to pay significant premiums to obtain it (a median 50% above the cost of conventional produce). The value of risk reduction implied by this incremental willingness to pay is not high relative to estimates for other risks, since the perceived risk reduction is relatively large. Organic-produce consumers also appear more likely than conventional-produce consumers to mitigate other ingestion-related risks (e.g., contaminated drinking water) but less likely to use automobile seatbelts.
Domain fusion analysis by applying relational algebra to protein sequence and domain databases
Truong, Kevin; Ikura, Mitsuhiko
2003-01-01
Background Domain fusion analysis is a useful method to predict functionally linked proteins that may be involved in direct protein-protein interactions or in the same metabolic or signaling pathway. As separate domain databases like BLOCKS, PROSITE, Pfam, SMART, PRINTS-S, ProDom, TIGRFAMs, and amalgamated domain databases like InterPro continue to grow in size and quality, a computational method to perform domain fusion analysis that leverages on these efforts will become increasingly powerful. Results This paper proposes a computational method employing relational algebra to find domain fusions in protein sequence databases. The feasibility of this method was illustrated on the SWISS-PROT+TrEMBL sequence database using domain predictions from the Pfam HMM (hidden Markov model) database. We identified 235 and 189 putative functionally linked protein partners in H. sapiens and S. cerevisiae, respectively. From scientific literature, we were able to confirm many of these functional linkages, while the remainder offer testable experimental hypothesis. Results can be viewed at . Conclusion As the analysis can be computed quickly on any relational database that supports standard SQL (structured query language), it can be dynamically updated along with the sequence and domain databases, thereby improving the quality of predictions over time. PMID:12734020
Sex Differences in Correlates of Abortion Attitudes among College Students.
ERIC Educational Resources Information Center
Finlay, Barbara Agresti
1981-01-01
Data from a sample of students showed that males' abortion attitudes are related primarily to their degree of conventionality; females' abortion attitudes are related to sex-role conventionality, the value of children in their life plans, the "right to life" issue, and sexual and general conventionality. (Author)
Microbial properties database editor tutorial
USDA-ARS?s Scientific Manuscript database
A Microbial Properties Database Editor (MPDBE) has been developed to help consolidate microbialrelevant data to populate a microbial database and support a database editor by which an authorized user can modify physico-microbial properties related to microbial indicators and pathogens. Physical prop...
NATIVE HEALTH DATABASES: NATIVE HEALTH RESEARCH DATABASE (NHRD)
The Native Health Databases contain bibliographic information and abstracts of health-related articles, reports, surveys, and other resource documents pertaining to the health and health care of American Indians, Alaska Natives, and Canadian First Nations. The databases provide i...
Microbial Properties Database Editor Tutorial
A Microbial Properties Database Editor (MPDBE) has been developed to help consolidate microbial-relevant data to populate a microbial database and support a database editor by which an authorized user can modify physico-microbial properties related to microbial indicators and pat...
A new relational database structure and online interface for the HITRAN database
NASA Astrophysics Data System (ADS)
Hill, Christian; Gordon, Iouli E.; Rothman, Laurence S.; Tennyson, Jonathan
2013-11-01
A new format for the HITRAN database is proposed. By storing the line-transition data in a number of linked tables described by a relational database schema, it is possible to overcome the limitations of the existing format, which have become increasingly apparent over the last few years as new and more varied data are being used by radiative-transfer models. Although the database in the new format can be searched using the well-established Structured Query Language (SQL), a web service, HITRANonline, has been deployed to allow users to make most common queries of the database using a graphical user interface in a web page. The advantages of the relational form of the database to ensuring data integrity and consistency are explored, and the compatibility of the online interface with the emerging standards of the Virtual Atomic and Molecular Data Centre (VAMDC) project is discussed. In particular, the ability to access HITRAN data using a standard query language from other websites, command line tools and from within computer programs is described.
New tools and methods for direct programmatic access to the dbSNP relational database
Saccone, Scott F.; Quan, Jiaxi; Mehta, Gaurang; Bolze, Raphael; Thomas, Prasanth; Deelman, Ewa; Tischfield, Jay A.; Rice, John P.
2011-01-01
Genome-wide association studies often incorporate information from public biological databases in order to provide a biological reference for interpreting the results. The dbSNP database is an extensive source of information on single nucleotide polymorphisms (SNPs) for many different organisms, including humans. We have developed free software that will download and install a local MySQL implementation of the dbSNP relational database for a specified organism. We have also designed a system for classifying dbSNP tables in terms of common tasks we wish to accomplish using the database. For each task we have designed a small set of custom tables that facilitate task-related queries and provide entity-relationship diagrams for each task composed from the relevant dbSNP tables. In order to expose these concepts and methods to a wider audience we have developed web tools for querying the database and browsing documentation on the tables and columns to clarify the relevant relational structure. All web tools and software are freely available to the public at http://cgsmd.isi.edu/dbsnpq. Resources such as these for programmatically querying biological databases are essential for viably integrating biological information into genetic association experiments on a genome-wide scale. PMID:21037260
MIPS: analysis and annotation of proteins from whole genomes in 2005
Mewes, H. W.; Frishman, D.; Mayer, K. F. X.; Münsterkötter, M.; Noubibou, O.; Pagel, P.; Rattei, T.; Oesterheld, M.; Ruepp, A.; Stümpflen, V.
2006-01-01
The Munich Information Center for Protein Sequences (MIPS at the GSF), Neuherberg, Germany, provides resources related to genome information. Manually curated databases for several reference organisms are maintained. Several of these databases are described elsewhere in this and other recent NAR database issues. In a complementary effort, a comprehensive set of >400 genomes automatically annotated with the PEDANT system are maintained. The main goal of our current work on creating and maintaining genome databases is to extend gene centered information to information on interactions within a generic comprehensive framework. We have concentrated our efforts along three lines (i) the development of suitable comprehensive data structures and database technology, communication and query tools to include a wide range of different types of information enabling the representation of complex information such as functional modules or networks Genome Research Environment System, (ii) the development of databases covering computable information such as the basic evolutionary relations among all genes, namely SIMAP, the sequence similarity matrix and the CABiNet network analysis framework and (iii) the compilation and manual annotation of information related to interactions such as protein–protein interactions or other types of relations (e.g. MPCDB, MPPI, CYGD). All databases described and the detailed descriptions of our projects can be accessed through the MIPS WWW server (). PMID:16381839
MIPS: analysis and annotation of proteins from whole genomes in 2005.
Mewes, H W; Frishman, D; Mayer, K F X; Münsterkötter, M; Noubibou, O; Pagel, P; Rattei, T; Oesterheld, M; Ruepp, A; Stümpflen, V
2006-01-01
The Munich Information Center for Protein Sequences (MIPS at the GSF), Neuherberg, Germany, provides resources related to genome information. Manually curated databases for several reference organisms are maintained. Several of these databases are described elsewhere in this and other recent NAR database issues. In a complementary effort, a comprehensive set of >400 genomes automatically annotated with the PEDANT system are maintained. The main goal of our current work on creating and maintaining genome databases is to extend gene centered information to information on interactions within a generic comprehensive framework. We have concentrated our efforts along three lines (i) the development of suitable comprehensive data structures and database technology, communication and query tools to include a wide range of different types of information enabling the representation of complex information such as functional modules or networks Genome Research Environment System, (ii) the development of databases covering computable information such as the basic evolutionary relations among all genes, namely SIMAP, the sequence similarity matrix and the CABiNet network analysis framework and (iii) the compilation and manual annotation of information related to interactions such as protein-protein interactions or other types of relations (e.g. MPCDB, MPPI, CYGD). All databases described and the detailed descriptions of our projects can be accessed through the MIPS WWW server (http://mips.gsf.de).
A VBA Desktop Database for Proposal Processing at National Optical Astronomy Observatories
NASA Astrophysics Data System (ADS)
Brown, Christa L.
National Optical Astronomy Observatories (NOAO) has developed a relational Microsoft Windows desktop database using Microsoft Access and the Microsoft Office programming language, Visual Basic for Applications (VBA). The database is used to track data relating to observing proposals from original receipt through the review process, scheduling, observing, and final statistical reporting. The database has automated proposal processing and distribution of information. It allows NOAO to collect and archive data so as to query and analyze information about our science programs in new ways.
A data mining framework for time series estimation.
Hu, Xiao; Xu, Peng; Wu, Shaozhi; Asgari, Shadnaz; Bergsneider, Marvin
2010-04-01
Time series estimation techniques are usually employed in biomedical research to derive variables less accessible from a set of related and more accessible variables. These techniques are traditionally built from systems modeling approaches including simulation, blind decovolution, and state estimation. In this work, we define target time series (TTS) and its related time series (RTS) as the output and input of a time series estimation process, respectively. We then propose a novel data mining framework for time series estimation when TTS and RTS represent different sets of observed variables from the same dynamic system. This is made possible by mining a database of instances of TTS, its simultaneously recorded RTS, and the input/output dynamic models between them. The key mining strategy is to formulate a mapping function for each TTS-RTS pair in the database that translates a feature vector extracted from RTS to the dissimilarity between true TTS and its estimate from the dynamic model associated with the same TTS-RTS pair. At run time, a feature vector is extracted from an inquiry RTS and supplied to the mapping function associated with each TTS-RTS pair to calculate a dissimilarity measure. An optimal TTS-RTS pair is then selected by analyzing these dissimilarity measures. The associated input/output model of the selected TTS-RTS pair is then used to simulate the TTS given the inquiry RTS as an input. An exemplary implementation was built to address a biomedical problem of noninvasive intracranial pressure assessment. The performance of the proposed method was superior to that of a simple training-free approach of finding the optimal TTS-RTS pair by a conventional similarity-based search on RTS features. 2009 Elsevier Inc. All rights reserved.
MSD-MAP: A Network-Based Systems Biology Platform for Predicting Disease-Metabolite Links.
Wathieu, Henri; Issa, Naiem T; Mohandoss, Manisha; Byers, Stephen W; Dakshanamurthy, Sivanesan
2017-01-01
Cancer-associated metabolites result from cell-wide mechanisms of dysregulation. The field of metabolomics has sought to identify these aberrant metabolites as disease biomarkers, clues to understanding disease mechanisms, or even as therapeutic agents. This study was undertaken to reliably predict metabolites associated with colorectal, esophageal, and prostate cancers. Metabolite and disease biological action networks were compared in a computational platform called MSD-MAP (Multi Scale Disease-Metabolite Association Platform). Using differential gene expression analysis with patient-based RNAseq data from The Cancer Genome Atlas, genes up- or down-regulated in cancer compared to normal tissue were identified. Relational databases were used to map biological entities including pathways, functions, and interacting proteins, to those differential disease genes. Similar relational maps were built for metabolites, stemming from known and in silico predicted metabolite-protein associations. The hypergeometric test was used to find statistically significant relationships between disease and metabolite biological signatures at each tier, and metabolites were assessed for multi-scale association with each cancer. Metabolite networks were also directly associated with various other diseases using a disease functional perturbation database. Our platform recapitulated metabolite-disease links that have been empirically verified in the scientific literature, with network-based mapping of jointly-associated biological activity also matching known disease mechanisms. This was true for colorectal, esophageal, and prostate cancers, using metabolite action networks stemming from both predicted and known functional protein associations. By employing systems biology concepts, MSD-MAP reliably predicted known cancermetabolite links, and may serve as a predictive tool to streamline conventional metabolomic profiling methodologies. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
Wu, Jia-rui; Zhang, Xiao-meng; Zhang, Bing
2015-11-01
To systematically evaluate the clinical efficacy and safety of Potassium Dehydroandrographolide Succinate Injection (PDSI) in the treatment of child epidemic parotitis (EP). Randomized controlled trials (RCTs) regarding PDSI in the treatment of child EP were searched in China National Knowledge Infrastructure, Wanfang Database, Chinese Biomedical Literature Database, PubMed, and Cochrane Library from inception to July 30, 2013. Two reviewers independently retrieved RCTs and extracted information. The Cochrane risk of bias method was used to assess the quality of included studies, and a meta-analysis was conducted with RevMan 5.2 software. A total of 11 studies with 818 participants were included. The quality of the studies was generally low, among which only one study mentioned the random method. The meta-analysis indicated that PDSI was more effective than the conventional therapy with Western medicine for EP in the outcomes of the total effective rate [relative risk (RR)=1.23, 95% confidence interval (CI) [1.14, 1.33], P<0.01], the time of temperature return to normal, the time of detumescence [mean difference (MD)=-2.10, 95% CI [-2.78,-1.41], P<0.01], and the incidence of complications (RR=0.14, 95% CI [0.03, 0.72], P=0.02). There were 6 adverse drug reactions (ADRs) in this systematic review, 2 of which were mainly represented rash and diarrhea in the experiment group, while another 4 ADRs occurred in the control group. Based on the systematic review, PDSI was effectiveness and relatively safety in the treatment of child EP. But further rigorously designed trials are warranted to determine its effectiveness.
Using a Relational Database to Index Infectious Disease Information
Brown, Jay A.
2010-01-01
Mapping medical knowledge into a relational database became possible with the availability of personal computers and user-friendly database software in the early 1990s. To create a database of medical knowledge, the domain expert works like a mapmaker to first outline the domain and then add the details, starting with the most prominent features. The resulting “intelligent database” can support the decisions of healthcare professionals. The intelligent database described in this article contains profiles of 275 infectious diseases. Users can query the database for all diseases matching one or more specific criteria (symptom, endemic region of the world, or epidemiological factor). Epidemiological factors include sources (patients, water, soil, or animals), routes of entry, and insect vectors. Medical and public health professionals could use such a database as a decision-support software tool. PMID:20623018
Pai, Vishwas D.; Engineer, Reena; Patil, Prachi S.; Arya, Supreeta; Desouza, Ashwin L.
2016-01-01
Background To compare extra levator abdomino perineal resection (ELAPER) with conventional abdominoperineal resection (APER) in terms of short-term oncological and clinical outcomes. Methods This is a retrospective review of a prospectively maintained database including all the patients of rectal cancer who underwent APER at Tata Memorial Center between July 1, 2013, and January 31, 2015. Short-term oncological parameters evaluated included circumferential resection margin involvement (CRM), tumor site perforation, and number of nodes harvested. Peri operative outcomes included blood loss, length of hospital stay, postoperative perineal wound complications, and 30-day mortality. The χ2-test was used to compare the results between the two groups. Results Forty-two cases of ELAPER and 78 cases of conventional APER were included in the study. Levator involvement was significantly higher in the ELAPER compared with the conventional group; otherwise, the two groups were comparable in all the aspects. CRM involvement was seen in seven patients (8.9%) in the conventional group compared with three patients (7.14%) in the ELAPER group. Median hospital stay was significantly longer with ELAPER. The univariate analysis of the factors influencing CRM positivity did not show any significance. Conclusions ELAPER should be the preferred approach for low rectal tumors with involvement of levators. For those cases in which levators are not involved, as shown in preoperative magnetic resonance imaging (MRI), the current evidence is insufficient to recommend ELAPER over conventional APER. This stresses the importance of preoperative MRI in determining the best approach for an individual patient. PMID:27284466
Luijendijk, Hendrika J; de Bruin, Niels C; Hulshof, Tessa A; Koolman, Xander
2016-02-01
Numerous large observational studies have shown an increased risk of mortality in elderly users of conventional antipsychotics. Health authorities have warned against use of these drugs. However, terminal illness is a potentially strong confounder of the observational findings. So, the objective of this study was to systematically assess whether terminal illness may have biased the observational association between conventional antipsychotics and risk of mortality in elderly patients. Studies were searched in PubMed, CINAHL, Embase, the references of selected studies and articles referring to selected studies (Web of Science). Inclusion criteria were (i) observational studies that estimated (ii) the risk of all-cause mortality in (iii) new elderly users of (iv) conventional antipsychotics compared with atypical antipsychotics or no use. Two investigators assessed the characteristics of the exposure and reference groups, main results, measured confounders and methods used to adjust for unmeasured confounders. We identified 21 studies. All studies were based on administrative medical and pharmaceutical databases. Sicker and older patients received conventional antipsychotics more often than new antipsychotics. The risk of dying was especially high in the first month of use, and when haloperidol was administered per injection or in high doses. Terminal illness was not measured in any study. Instrumental variables that were used were also confounded by terminal illness. We conclude that terminal illness has not been adjusted for in observational studies that reported an increased risk of mortality risk in elderly users of conventional antipsychotics. As the validity of the evidence is questionable, so is the warning based on it. Copyright © 2015 John Wiley & Sons, Ltd.
The Use of a Relational Database in Qualitative Research on Educational Computing.
ERIC Educational Resources Information Center
Winer, Laura R.; Carriere, Mario
1990-01-01
Discusses the use of a relational database as a data management and analysis tool for nonexperimental qualitative research, and describes the use of the Reflex Plus database in the Vitrine 2001 project in Quebec to study computer-based learning environments. Information systems are also discussed, and the use of a conceptual model is explained.…
A Bioinformatics Workflow for Variant Peptide Detection in Shotgun Proteomics*
Li, Jing; Su, Zengliu; Ma, Ze-Qiang; Slebos, Robbert J. C.; Halvey, Patrick; Tabb, David L.; Liebler, Daniel C.; Pao, William; Zhang, Bing
2011-01-01
Shotgun proteomics data analysis usually relies on database search. However, commonly used protein sequence databases do not contain information on protein variants and thus prevent variant peptides and proteins from been identified. Including known coding variations into protein sequence databases could help alleviate this problem. Based on our recently published human Cancer Proteome Variation Database, we have created a protein sequence database that comprehensively annotates thousands of cancer-related coding variants collected in the Cancer Proteome Variation Database as well as noncancer-specific ones from the Single Nucleotide Polymorphism Database (dbSNP). Using this database, we then developed a data analysis workflow for variant peptide identification in shotgun proteomics. The high risk of false positive variant identifications was addressed by a modified false discovery rate estimation method. Analysis of colorectal cancer cell lines SW480, RKO, and HCT-116 revealed a total of 81 peptides that contain either noncancer-specific or cancer-related variations. Twenty-three out of 26 variants randomly selected from the 81 were confirmed by genomic sequencing. We further applied the workflow on data sets from three individual colorectal tumor specimens. A total of 204 distinct variant peptides were detected, and five carried known cancer-related mutations. Each individual showed a specific pattern of cancer-related mutations, suggesting potential use of this type of information for personalized medicine. Compatibility of the workflow has been tested with four popular database search engines including Sequest, Mascot, X!Tandem, and MyriMatch. In summary, we have developed a workflow that effectively uses existing genomic data to enable variant peptide detection in proteomics. PMID:21389108
NASA Astrophysics Data System (ADS)
Bowen, E.; Martin, P. A.; Eshel, G.
2008-12-01
The adverse environmental effects, especially energy use and resultant GHG emissions, of food production and consumption are becoming more widely appreciated and increasingly well documented. Our insights into the thorny problem of how to mitigate some of those effects, however, are far less evolved. Two of the most commonly advocated strategies are "organic" and "local", referring, respectively, to growing food without major inputs of fossil fuel based synthetic fertilizers and pesticides and to food consumption near its agricultural origin. Indeed, both agrochemical manufacture and transportation of produce to market make up a significant percentage of energy use in agriculture. While there can be unique environmental benefits to each strategy, "organic" and "local" each may potentially result in energy and emissions savings relative to conventionally grown produce. Here, we quantify the potential energy and greenhouse gas emissions savings associated with "organic" and "local". We take note of energy use and actual GHG costs of the major synthetic fertilizers and transportation by various modes routinely employed in agricultural distribution chains, and compare them for ~35 frequently consumed nutritional mainstays. We present new, current, lower-bound energy and greenhouse gas efficiency estimates for these items and compare energy consumption and GHG emissions incurred during producing those food items to consumption and emissions resulting from transporting them, considering travel distances ranging from local to continental and transportation modes ranging from (most efficient) rail to (least efficient) air. In performing those calculations, we demonstrate the environmental superiority of either local or organic over conventional foods, and illuminate the complexities involved in entertaining the timely yet currently unanswered, and previously unanswerable, question of "Which is Environmentally Superior, Organic or Local?". More broadly, we put forth a database that amounts to a general blueprint for rigorous comparative evaluation of any competing diets.
Hu, Xiao-Yang; Chen, Ni-Ni; Chai, Qian-Yun; Yang, Guo-Yan; Trevelyan, Esmé; Lorenc, Ava; Liu, Jian-Ping; Robinson, Nicola
2015-10-26
Low back pain (LBP) is a common musculoskeletal condition often treated using integrative medicine (IM). Most reviews have focused on a single complementary and alternative medicine (CAM) therapy for LBP rather than evaluating wider integrative approaches. This exploratory systematic review aimed to identify randomized controlled trials (RCTs) and provide evidence on the effectiveness, cost effectiveness and adverse effects of integrative treatment for LBP. A literature search was conducted in 12 English and Chinese databases. RCTs evaluating an integrative treatment for musculoskeletal related LBP were included. Reporting, methodological quality and relevant clinical characteristics were assessed and appraised. Metaanalyses were performed for outcomes where trials were sufficiently homogenous. Fifty-six RCTs were identified evaluating integrative treatment for LBP. Although reporting and methodological qualities were poor, meta-analysis showed a favourable effect for integrative treatment over conventional and CAM treatment for back pain and function at 3 months or less follow-up. Two trials investigated costs, reporting £ 5332 per quality adjusted life years with 6 Alexander technique lessons plus exercise at 12 months follow-up; and an increased total costs of $244 when giving an additional up to 15 sessions of CAM package of care at 12 weeks. Sixteen trials mentioned safety; no severe adverse effects were reported. Integrative treatment that combines CAM with conventional therapies appeared to have beneficial effects on pain and function. However, evidence is limited due to heterogeneity, the relatively small numbers available for subgroup analyses and the low methodological quality of the included trials. Identification of studies of true IM was not possible due to lack of reporting of the intervention details (registration No. CRD42013003916).
Sánchez-de-Madariaga, Ricardo; Muñoz, Adolfo; Castro, Antonio L; Moreno, Oscar; Pascual, Mario
2018-01-01
This research shows a protocol to assess the computational complexity of querying relational and non-relational (NoSQL (not only Structured Query Language)) standardized electronic health record (EHR) medical information database systems (DBMS). It uses a set of three doubling-sized databases, i.e. databases storing 5000, 10,000 and 20,000 realistic standardized EHR extracts, in three different database management systems (DBMS): relational MySQL object-relational mapping (ORM), document-based NoSQL MongoDB, and native extensible markup language (XML) NoSQL eXist. The average response times to six complexity-increasing queries were computed, and the results showed a linear behavior in the NoSQL cases. In the NoSQL field, MongoDB presents a much flatter linear slope than eXist. NoSQL systems may also be more appropriate to maintain standardized medical information systems due to the special nature of the updating policies of medical information, which should not affect the consistency and efficiency of the data stored in NoSQL databases. One limitation of this protocol is the lack of direct results of improved relational systems such as archetype relational mapping (ARM) with the same data. However, the interpolation of doubling-size database results to those presented in the literature and other published results suggests that NoSQL systems might be more appropriate in many specific scenarios and problems to be solved. For example, NoSQL may be appropriate for document-based tasks such as EHR extracts used in clinical practice, or edition and visualization, or situations where the aim is not only to query medical information, but also to restore the EHR in exactly its original form. PMID:29608174
Sánchez-de-Madariaga, Ricardo; Muñoz, Adolfo; Castro, Antonio L; Moreno, Oscar; Pascual, Mario
2018-03-19
This research shows a protocol to assess the computational complexity of querying relational and non-relational (NoSQL (not only Structured Query Language)) standardized electronic health record (EHR) medical information database systems (DBMS). It uses a set of three doubling-sized databases, i.e. databases storing 5000, 10,000 and 20,000 realistic standardized EHR extracts, in three different database management systems (DBMS): relational MySQL object-relational mapping (ORM), document-based NoSQL MongoDB, and native extensible markup language (XML) NoSQL eXist. The average response times to six complexity-increasing queries were computed, and the results showed a linear behavior in the NoSQL cases. In the NoSQL field, MongoDB presents a much flatter linear slope than eXist. NoSQL systems may also be more appropriate to maintain standardized medical information systems due to the special nature of the updating policies of medical information, which should not affect the consistency and efficiency of the data stored in NoSQL databases. One limitation of this protocol is the lack of direct results of improved relational systems such as archetype relational mapping (ARM) with the same data. However, the interpolation of doubling-size database results to those presented in the literature and other published results suggests that NoSQL systems might be more appropriate in many specific scenarios and problems to be solved. For example, NoSQL may be appropriate for document-based tasks such as EHR extracts used in clinical practice, or edition and visualization, or situations where the aim is not only to query medical information, but also to restore the EHR in exactly its original form.
8 CFR 204.306 - Classification as an immediate relative based on a Convention adoption.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 8 Aliens and Nationality 1 2010-01-01 2010-01-01 false Classification as an immediate relative based on a Convention adoption. 204.306 Section 204.306 Aliens and Nationality DEPARTMENT OF HOMELAND SECURITY IMMIGRATION REGULATIONS IMMIGRANT PETITIONS Intercountry Adoption of a Convention Adoptee § 204...
ERIC Educational Resources Information Center
Thibodeau, Paul; Durgin, Frank H.
2008-01-01
Three experiments explored whether conceptual mappings in conventional metaphors are productive, by testing whether the comprehension of novel metaphors was facilitated by first reading conceptually related conventional metaphors. The first experiment, a replication and extension of Keysar et al. [Keysar, B., Shen, Y., Glucksberg, S., Horton, W.…
22 CFR 98.2 - Preservation of Convention records.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Preservation of Convention records. 98.2...-CONVENTION RECORD PRESERVATION § 98.2 Preservation of Convention records. Once the Convention has entered into force for the United States, the Secretary and DHS will preserve, or require the preservation of...
Owens, John
2009-01-01
Technological advances in the acquisition of DNA and protein sequence information and the resulting onrush of data can quickly overwhelm the scientist unprepared for the volume of information that must be evaluated and carefully dissected to discover its significance. Few laboratories have the luxury of dedicated personnel to organize, analyze, or consistently record a mix of arriving sequence data. A methodology based on a modern relational-database manager is presented that is both a natural storage vessel for antibody sequence information and a conduit for organizing and exploring sequence data and accompanying annotation text. The expertise necessary to implement such a plan is equal to that required by electronic word processors or spreadsheet applications. Antibody sequence projects maintained as independent databases are selectively unified by the relational-database manager into larger database families that contribute to local analyses, reports, interactive HTML pages, or exported to facilities dedicated to sophisticated sequence analysis techniques. Database files are transposable among current versions of Microsoft, Macintosh, and UNIX operating systems.
NASA Astrophysics Data System (ADS)
Uijt de Haag, Maarten; Campbell, Jacob; van Graas, Frank
2005-05-01
Synthetic Vision Systems (SVS) provide pilots with a virtual visual depiction of the external environment. When using SVS for aircraft precision approach guidance systems accurate positioning relative to the runway with a high level of integrity is required. Precision approach guidance systems in use today require ground-based electronic navigation components with at least one installation at each airport, and in many cases multiple installations to service approaches to all qualifying runways. A terrain-referenced approach guidance system is envisioned to provide precision guidance to an aircraft without the use of ground-based electronic navigation components installed at the airport. This autonomy makes it a good candidate for integration with an SVS. At the Ohio University Avionics Engineering Center (AEC), work has been underway in the development of such a terrain referenced navigation system. When used in conjunction with an Inertial Measurement Unit (IMU) and a high accuracy/resolution terrain database, this terrain referenced navigation system can provide navigation and guidance information to the pilot on a SVS or conventional instruments. The terrain referenced navigation system, under development at AEC, operates on similar principles as other terrain navigation systems: a ground sensing sensor (in this case an airborne laser scanner) gathers range measurements to the terrain; this data is then matched in some fashion with an onboard terrain database to find the most likely position solution and used to update an inertial sensor-based navigator. AEC's system design differs from today's common terrain navigators in its use of a high resolution terrain database (~1 meter post spacing) in conjunction with an airborne laser scanner which is capable of providing tens of thousands independent terrain elevation measurements per second with centimeter-level accuracies. When combined with data from an inertial navigator the high resolution terrain database and laser scanner system is capable of providing near meter-level horizontal and vertical position estimates. Furthermore, the system under development capitalizes on 1) The position and integrity benefits provided by the Wide Area Augmentation System (WAAS) to reduce the initial search space size and; 2) The availability of high accuracy/resolution databases. This paper presents results from flight tests where the terrain reference navigator is used to provide guidance cues for a precision approach.
State analysis requirements database for engineering complex embedded systems
NASA Technical Reports Server (NTRS)
Bennett, Matthew B.; Rasmussen, Robert D.; Ingham, Michel D.
2004-01-01
It has become clear that spacecraft system complexity is reaching a threshold where customary methods of control are no longer affordable or sufficiently reliable. At the heart of this problem are the conventional approaches to systems and software engineering based on subsystem-level functional decomposition, which fail to scale in the tangled web of interactions typically encountered in complex spacecraft designs. Furthermore, there is a fundamental gap between the requirements on software specified by systems engineers and the implementation of these requirements by software engineers. Software engineers must perform the translation of requirements into software code, hoping to accurately capture the systems engineer's understanding of the system behavior, which is not always explicitly specified. This gap opens up the possibility for misinterpretation of the systems engineer's intent, potentially leading to software errors. This problem is addressed by a systems engineering tool called the State Analysis Database, which provides a tool for capturing system and software requirements in the form of explicit models. This paper describes how requirements for complex aerospace systems can be developed using the State Analysis Database.
A carcinogenic potency database of the standardized results of animal bioassays
Gold, Lois Swirsky; Sawyer, Charles B.; Magaw, Renae; Backman, Georganne M.; De Veciana, Margarita; Levinson, Robert; Hooper, N. Kim; Havender, William R.; Bernstein, Leslie; Peto, Richard; Pike, Malcolm C.; Ames, Bruce N.
1984-01-01
The preceding paper described our numerical index of carcinogenic potency, the TD50 and the statistical procedures adopted for estimating it from experimental data. This paper presents the Carcinogenic Potency Database, which includes results of about 3000 long-term, chronic experiments of 770 test compounds. Part II is a discussion of the sources of our data, the rationale for the inclusion of particular experiments and particular target sites, and the conventions adopted in summarizing the literature. Part III is a guide to the plot of results presented in Part IV. A number of appendices are provided to facilitate use of the database. The plot includes information about chronic cancer tests in mammals, such as dose and other aspects of experimental protocol, histopathology and tumor incidence, TD50 and its statistical significance, dose response, author's opinion and literature reference. The plot readily permits comparisons of carcinogenic potency and many other aspects of cancer tests; it also provides quantitative information about negative tests. The range of carcinogenic potency is over 10 million-fold. PMID:6525996
Mansel, Charlotte; Davies, Sharon
2012-10-01
There are currently over 250,000 children between the ages of 10 and 18 years who have their genetic information stored on the National DNA Database. This paper explores the legal and ethical issues surrounding this controversial subject, with particular focus on juvenile capacity and the potential results of criminalizing young children and adolescents. The implications of the adverse legal judgement of the European Court of Human Rights in S and Marper v UK (2008) and the violation of Article 8 of the Convention are discussed. The authors have considered the requirement to balance the rights of the individual, particularly those of minors, against the need to protect the public and have compared the position in Scotland to that of the rest of the UK. The authors conclude that a more ethically acceptable alternative could be the creation of a separate forensic database for children aged 10-18 years, set up to safeguard the interests of those who have not been convicted of any crime.
Monte Carlo simulations of product distributions and contained metal estimates
Gettings, Mark E.
2013-01-01
Estimation of product distributions of two factors was simulated by conventional Monte Carlo techniques using factor distributions that were independent (uncorrelated). Several simulations using uniform distributions of factors show that the product distribution has a central peak approximately centered at the product of the medians of the factor distributions. Factor distributions that are peaked, such as Gaussian (normal) produce an even more peaked product distribution. Piecewise analytic solutions can be obtained for independent factor distributions and yield insight into the properties of the product distribution. As an example, porphyry copper grades and tonnages are now available in at least one public database and their distributions were analyzed. Although both grade and tonnage can be approximated with lognormal distributions, they are not exactly fit by them. The grade shows some nonlinear correlation with tonnage for the published database. Sampling by deposit from available databases of grade, tonnage, and geological details of each deposit specifies both grade and tonnage for that deposit. Any correlation between grade and tonnage is then preserved and the observed distribution of grades and tonnages can be used with no assumption of distribution form.
Milheim, L. E.; Slonecker, E. T.; Roig-Silva, C. M.; Winters, S. G.; Ballew, J. R.
2014-01-01
Increased demands for cleaner burning energy, coupled with the relatively recent technological advances in accessing hydrocarbon-rich geologic formations, have led to an intense effort to find and extract unconventional natural gas from various underground sources around the country. One of these sources, the Marcellus Shale, located in the Allegheny Plateau, is currently undergoing extensive drilling and production. The technology used to extract gas in the Marcellus Shale is known as hydraulic fracturing and has garnered much attention because of its use of large amounts of fresh water, its use of proprietary fluids for the hydraulic-fracturing process, its potential to release contaminants into the environment, and its potential effect on water resources. Nonetheless, development of natural gas extraction wells in the Marcellus Shale is only part of the overall natural gas story in this area of Pennsylvania. Conventional natural gas wells, which sometimes use the same technique for extraction, are commonly located in the same general area as the Marcellus Shale and are frequently developed in clusters across the landscape. The combined effects of these two natural gas extraction methods create potentially serious patterns of disturbance on the landscape. This document quantifies the landscape changes and consequences of natural gas extraction for Cameron, Clarion, Elk, Forest, Jefferson, McKean, Potter, and Warren Counties in Pennsylvania between 2004 and 2010. Patterns of landscape disturbance related to natural gas extraction activities were collected and digitized using National Agriculture Imagery Program (NAIP) imagery for 2004, 2005/2006, 2008, and 2010. The disturbance patterns were then used to measure changes in land cover and land use using the National Land Cover Database (NLCD) of 2001. A series of landscape metrics is also used to quantify these changes and is included in this publication. In this region, natural gas and oil development disturbed approximately 5,255 hectares (ha) (conventional, 2,400 ha; Marcellus, 357 ha; and oil, 1,883 ha) of land of which 3,507 ha were forested land and 610 ha were agricultural land. Eighty percent of that total disturbance was from conventional natural gas and oil development.
Kabekkodu, Soorya N; Faber, John; Fawcett, Tim
2002-06-01
The International Centre for Diffraction Data (ICDD) is responding to the changing needs in powder diffraction and materials analysis by developing the Powder Diffraction File (PDF) in a very flexible relational database (RDB) format. The PDF now contains 136,895 powder diffraction patterns. In this paper, an attempt is made to give an overview of the PDF-4, search/match methods and the advantages of having the PDF-4 in RDB format. Some case studies have been carried out to search for crystallization trends, properties, frequencies of space groups and prototype structures. These studies give a good understanding of the basic structural aspects of classes of compounds present in the database. The present paper also reports data-mining techniques and demonstrates the power of a relational database over the traditional (flat-file) database structures.
Wiley, Laura K.; Sivley, R. Michael; Bush, William S.
2013-01-01
Efficient storage and retrieval of genomic annotations based on range intervals is necessary, given the amount of data produced by next-generation sequencing studies. The indexing strategies of relational database systems (such as MySQL) greatly inhibit their use in genomic annotation tasks. This has led to the development of stand-alone applications that are dependent on flat-file libraries. In this work, we introduce MyNCList, an implementation of the NCList data structure within a MySQL database. MyNCList enables the storage, update and rapid retrieval of genomic annotations from the convenience of a relational database system. Range-based annotations of 1 million variants are retrieved in under a minute, making this approach feasible for whole-genome annotation tasks. Database URL: https://github.com/bushlab/mynclist PMID:23894185
Wiley, Laura K; Sivley, R Michael; Bush, William S
2013-01-01
Efficient storage and retrieval of genomic annotations based on range intervals is necessary, given the amount of data produced by next-generation sequencing studies. The indexing strategies of relational database systems (such as MySQL) greatly inhibit their use in genomic annotation tasks. This has led to the development of stand-alone applications that are dependent on flat-file libraries. In this work, we introduce MyNCList, an implementation of the NCList data structure within a MySQL database. MyNCList enables the storage, update and rapid retrieval of genomic annotations from the convenience of a relational database system. Range-based annotations of 1 million variants are retrieved in under a minute, making this approach feasible for whole-genome annotation tasks. Database URL: https://github.com/bushlab/mynclist.
NASA Technical Reports Server (NTRS)
Maluf, David A.; Tran, Peter B.
2003-01-01
Object-Relational database management system is an integrated hybrid cooperative approach to combine the best practices of both the relational model utilizing SQL queries and the object-oriented, semantic paradigm for supporting complex data creation. In this paper, a highly scalable, information on demand database framework, called NETMARK, is introduced. NETMARK takes advantages of the Oracle 8i object-relational database using physical addresses data types for very efficient keyword search of records spanning across both context and content. NETMARK was originally developed in early 2000 as a research and development prototype to solve the vast amounts of unstructured and semistructured documents existing within NASA enterprises. Today, NETMARK is a flexible, high-throughput open database framework for managing, storing, and searching unstructured or semi-structured arbitrary hierarchal models, such as XML and HTML.
An Extensible Schema-less Database Framework for Managing High-throughput Semi-Structured Documents
NASA Technical Reports Server (NTRS)
Maluf, David A.; Tran, Peter B.; La, Tracy; Clancy, Daniel (Technical Monitor)
2002-01-01
Object-Relational database management system is an integrated hybrid cooperative approach to combine the best practices of both the relational model utilizing SQL queries and the object oriented, semantic paradigm for supporting complex data creation. In this paper, a highly scalable, information on demand database framework, called NETMARK is introduced. NETMARK takes advantages of the Oracle 8i object-relational database using physical addresses data types for very efficient keyword searches of records for both context and content. NETMARK was originally developed in early 2000 as a research and development prototype to solve the vast amounts of unstructured and semi-structured documents existing within NASA enterprises. Today, NETMARK is a flexible, high throughput open database framework for managing, storing, and searching unstructured or semi structured arbitrary hierarchal models, XML and HTML.
Migration of legacy mumps applications to relational database servers.
O'Kane, K C
2001-07-01
An extended implementation of the Mumps language is described that facilitates vendor neutral migration of legacy Mumps applications to SQL-based relational database servers. Implemented as a compiler, this system translates Mumps programs to operating system independent, standard C code for subsequent compilation to fully stand-alone, binary executables. Added built-in functions and support modules extend the native hierarchical Mumps database with access to industry standard, networked, relational database management servers (RDBMS) thus freeing Mumps applications from dependence upon vendor specific, proprietary, unstandardized database models. Unlike Mumps systems that have added captive, proprietary RDMBS access, the programs generated by this development environment can be used with any RDBMS system that supports common network access protocols. Additional features include a built-in web server interface and the ability to interoperate directly with programs and functions written in other languages.
Vivar, Juan C; Pemu, Priscilla; McPherson, Ruth; Ghosh, Sujoy
2013-08-01
Abstract Unparalleled technological advances have fueled an explosive growth in the scope and scale of biological data and have propelled life sciences into the realm of "Big Data" that cannot be managed or analyzed by conventional approaches. Big Data in the life sciences are driven primarily via a diverse collection of 'omics'-based technologies, including genomics, proteomics, metabolomics, transcriptomics, metagenomics, and lipidomics. Gene-set enrichment analysis is a powerful approach for interrogating large 'omics' datasets, leading to the identification of biological mechanisms associated with observed outcomes. While several factors influence the results from such analysis, the impact from the contents of pathway databases is often under-appreciated. Pathway databases often contain variously named pathways that overlap with one another to varying degrees. Ignoring such redundancies during pathway analysis can lead to the designation of several pathways as being significant due to high content-similarity, rather than truly independent biological mechanisms. Statistically, such dependencies also result in correlated p values and overdispersion, leading to biased results. We investigated the level of redundancies in multiple pathway databases and observed large discrepancies in the nature and extent of pathway overlap. This prompted us to develop the application, ReCiPa (Redundancy Control in Pathway Databases), to control redundancies in pathway databases based on user-defined thresholds. Analysis of genomic and genetic datasets, using ReCiPa-generated overlap-controlled versions of KEGG and Reactome pathways, led to a reduction in redundancy among the top-scoring gene-sets and allowed for the inclusion of additional gene-sets representing possibly novel biological mechanisms. Using obesity as an example, bioinformatic analysis further demonstrated that gene-sets identified from overlap-controlled pathway databases show stronger evidence of prior association to obesity compared to pathways identified from the original databases.
Gil, F; Hernández, A F
2015-06-01
Human biomonitoring has become an important tool for the assessment of internal doses of metallic and metalloid elements. These elements are of great significance because of their toxic properties and wide distribution in environmental compartments. Although blood and urine are the most used and accepted matrices for human biomonitoring, other non-conventional samples (saliva, placenta, meconium, hair, nails, teeth, breast milk) may have practical advantages and would provide additional information on health risk. Nevertheless, the analysis of these compounds in biological matrices other than blood and urine has not yet been accepted as a useful tool for biomonitoring. The validation of analytical procedures is absolutely necessary for a proper implementation of non-conventional samples in biomonitoring programs. However, the lack of reliable and useful analytical methodologies to assess exposure to metallic elements, and the potential interference of external contamination and variation in biological features of non-conventional samples are important limitations for setting health-based reference values. The influence of potential confounding factors on metallic concentration should always be considered. More research is needed to ascertain whether or not non-conventional matrices offer definitive advantages over the traditional samples and to broaden the available database for establishing worldwide accepted reference values in non-exposed populations. Copyright © 2015 Elsevier Ltd. All rights reserved.
Intraoral distalizer effects with conventional and skeletal anchorage: a meta-analysis.
Grec, Roberto Henrique da Costa; Janson, Guilherme; Branco, Nuria Castello; Moura-Grec, Patrícia Garcia; Patel, Mayara Paim; Castanha Henriques, José Fernando
2013-05-01
The aims of this meta-analysis were to quantify and to compare the amounts of distalization and anchorage loss of conventional and skeletal anchorage methods in the correction of Class II malocclusion with intraoral distalizers. The literature was searched through 5 electronic databases, and inclusion criteria were applied. Articles that presented pretreatment and posttreatment cephalometric values were preferred. Quality assessments of the studies were performed. The averages and standard deviations of molar and premolar effects were extracted from the studies to perform a meta-analysis. After applying the inclusion and exclusion criteria, 40 studies were included in the systematic review. After the quality analysis, 2 articles were classified as high quality, 27 as medium quality, and 11 as low quality. For the meta-analysis, 6 studies were included, and they showed average molar distalization amounts of 3.34 mm with conventional anchorage and 5.10 mm with skeletal anchorage. The meta-analysis of premolar movement showed estimates of combined effects of 2.30 mm (mesialization) in studies with conventional anchorage and -4.01 mm (distalization) in studies with skeletal anchorage. There was scientific evidence that both anchorage systems are effective for distalization; however, with skeletal anchorage, there was no anchorage loss when direct anchorage was used. Copyright © 2013 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.
Relational Data Bases--Are You Ready?
ERIC Educational Resources Information Center
Marshall, Dorothy M.
1989-01-01
Migrating from a traditional to a relational database technology requires more than traditional project management techniques. An overview of what to consider before migrating to relational database technology is presented. Leadership, staffing, vendor support, hardware, software, and application development are discussed. (MLW)
Constructing a Graph Database for Semantic Literature-Based Discovery.
Hristovski, Dimitar; Kastrin, Andrej; Dinevski, Dejan; Rindflesch, Thomas C
2015-01-01
Literature-based discovery (LBD) generates discoveries, or hypotheses, by combining what is already known in the literature. Potential discoveries have the form of relations between biomedical concepts; for example, a drug may be determined to treat a disease other than the one for which it was intended. LBD views the knowledge in a domain as a network; a set of concepts along with the relations between them. As a starting point, we used SemMedDB, a database of semantic relations between biomedical concepts extracted with SemRep from Medline. SemMedDB is distributed as a MySQL relational database, which has some problems when dealing with network data. We transformed and uploaded SemMedDB into the Neo4j graph database, and implemented the basic LBD discovery algorithms with the Cypher query language. We conclude that storing the data needed for semantic LBD is more natural in a graph database. Also, implementing LBD discovery algorithms is conceptually simpler with a graph query language when compared with standard SQL.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 22 Foreign Relations 1 2010-04-01 2010-04-01 false [Reserved] 99.3 Section 99.3 Foreign Relations DEPARTMENT OF STATE LEGAL AND RELATED SERVICES REPORTING ON CONVENTION AND NON-CONVENTION ADOPTIONS OF EMIGRATING CHILDREN § 99.3 [Reserved] ...
SIDD: A Semantically Integrated Database towards a Global View of Human Disease
Cheng, Liang; Wang, Guohua; Li, Jie; Zhang, Tianjiao; Xu, Peigang; Wang, Yadong
2013-01-01
Background A number of databases have been developed to collect disease-related molecular, phenotypic and environmental features (DR-MPEs), such as genes, non-coding RNAs, genetic variations, drugs, phenotypes and environmental factors. However, each of current databases focused on only one or two DR-MPEs. There is an urgent demand to develop an integrated database, which can establish semantic associations among disease-related databases and link them to provide a global view of human disease at the biological level. This database, once developed, will facilitate researchers to query various DR-MPEs through disease, and investigate disease mechanisms from different types of data. Methodology To establish an integrated disease-associated database, disease vocabularies used in different databases are mapped to Disease Ontology (DO) through semantic match. 4,284 and 4,186 disease terms from Medical Subject Headings (MeSH) and Online Mendelian Inheritance in Man (OMIM) respectively are mapped to DO. Then, the relationships between DR-MPEs and diseases are extracted and merged from different source databases for reducing the data redundancy. Conclusions A semantically integrated disease-associated database (SIDD) is developed, which integrates 18 disease-associated databases, for researchers to browse multiple types of DR-MPEs in a view. A web interface allows easy navigation for querying information through browsing a disease ontology tree or searching a disease term. Furthermore, a network visualization tool using Cytoscape Web plugin has been implemented in SIDD. It enhances the SIDD usage when viewing the relationships between diseases and DR-MPEs. The current version of SIDD (Jul 2013) documents 4,465,131 entries relating to 139,365 DR-MPEs, and to 3,824 human diseases. The database can be freely accessed from: http://mlg.hit.edu.cn/SIDD. PMID:24146757
van Baal, Sjozef; Kaimakis, Polynikis; Phommarinh, Manyphong; Koumbi, Daphne; Cuppens, Harry; Riccardino, Francesca; Macek, Milan; Scriver, Charles R; Patrinos, George P
2007-01-01
Frequency of INherited Disorders database (FINDbase) (http://www.findbase.org) is a relational database, derived from the ETHNOS software, recording frequencies of causative mutations leading to inherited disorders worldwide. Database records include the population and ethnic group, the disorder name and the related gene, accompanied by links to any corresponding locus-specific mutation database, to the respective Online Mendelian Inheritance in Man entries and the mutation together with its frequency in that population. The initial information is derived from the published literature, locus-specific databases and genetic disease consortia. FINDbase offers a user-friendly query interface, providing instant access to the list and frequencies of the different mutations. Query outputs can be either in a table or graphical format, accompanied by reference(s) on the data source. Registered users from three different groups, namely administrator, national coordinator and curator, are responsible for database curation and/or data entry/correction online via a password-protected interface. Databaseaccess is free of charge and there are no registration requirements for data querying. FINDbase provides a simple, web-based system for population-based mutation data collection and retrieval and can serve not only as a valuable online tool for molecular genetic testing of inherited disorders but also as a non-profit model for sustainable database funding, in the form of a 'database-journal'.
Low dose CT image restoration using a database of image patches
NASA Astrophysics Data System (ADS)
Ha, Sungsoo; Mueller, Klaus
2015-01-01
Reducing the radiation dose in CT imaging has become an active research topic and many solutions have been proposed to remove the significant noise and streak artifacts in the reconstructed images. Most of these methods operate within the domain of the image that is subject to restoration. This, however, poses limitations on the extent of filtering possible. We advocate to take into consideration the vast body of external knowledge that exists in the domain of already acquired medical CT images, since after all, this is what radiologists do when they examine these low quality images. We can incorporate this knowledge by creating a database of prior scans, either of the same patient or a diverse corpus of different patients, to assist in the restoration process. Our paper follows up on our previous work that used a database of images. Using images, however, is challenging since it requires tedious and error prone registration and alignment. Our new method eliminates these problems by storing a diverse set of small image patches in conjunction with a localized similarity matching scheme. We also empirically show that it is sufficient to store these patches without anatomical tags since their statistics are sufficiently strong to yield good similarity matches from the database and as a direct effect, produce image restorations of high quality. A final experiment demonstrates that our global database approach can recover image features that are difficult to preserve with conventional denoising approaches.
A Novel Approach: Chemical Relational Databases, and the ...
Mutagenicity and carcinogenicity databases are crucial resources for toxicologists and regulators involved in chemicals risk assessment. Until recently, existing public toxicity databases have been constructed primarily as
Database constraints applied to metabolic pathway reconstruction tools.
Vilaplana, Jordi; Solsona, Francesc; Teixido, Ivan; Usié, Anabel; Karathia, Hiren; Alves, Rui; Mateo, Jordi
2014-01-01
Our group developed two biological applications, Biblio-MetReS and Homol-MetReS, accessing the same database of organisms with annotated genes. Biblio-MetReS is a data-mining application that facilitates the reconstruction of molecular networks based on automated text-mining analysis of published scientific literature. Homol-MetReS allows functional (re)annotation of proteomes, to properly identify both the individual proteins involved in the process(es) of interest and their function. It also enables the sets of proteins involved in the process(es) in different organisms to be compared directly. The efficiency of these biological applications is directly related to the design of the shared database. We classified and analyzed the different kinds of access to the database. Based on this study, we tried to adjust and tune the configurable parameters of the database server to reach the best performance of the communication data link to/from the database system. Different database technologies were analyzed. We started the study with a public relational SQL database, MySQL. Then, the same database was implemented by a MapReduce-based database named HBase. The results indicated that the standard configuration of MySQL gives an acceptable performance for low or medium size databases. Nevertheless, tuning database parameters can greatly improve the performance and lead to very competitive runtimes.
Automating Information Discovery Within the Invisible Web
NASA Astrophysics Data System (ADS)
Sweeney, Edwina; Curran, Kevin; Xie, Ermai
A Web crawler or spider crawls through the Web looking for pages to index, and when it locates a new page it passes the page on to an indexer. The indexer identifies links, keywords, and other content and stores these within its database. This database is searched by entering keywords through an interface and suitable Web pages are returned in a results page in the form of hyperlinks accompanied by short descriptions. The Web, however, is increasingly moving away from being a collection of documents to a multidimensional repository for sounds, images, audio, and other formats. This is leading to a situation where certain parts of the Web are invisible or hidden. The term known as the "Deep Web" has emerged to refer to the mass of information that can be accessed via the Web but cannot be indexed by conventional search engines. The concept of the Deep Web makes searches quite complex for search engines. Google states that the claim that conventional search engines cannot find such documents as PDFs, Word, PowerPoint, Excel, or any non-HTML page is not fully accurate and steps have been taken to address this problem by implementing procedures to search items such as academic publications, news, blogs, videos, books, and real-time information. However, Google still only provides access to a fraction of the Deep Web. This chapter explores the Deep Web and the current tools available in accessing it.
Yang, Xianrui; Xue, Chaoran; He, Yiruo; Zhao, Mengyuan; Luo, Mengqi; Wang, Peiqi; Bai, Ding
2018-01-01
Self-ligating brackets (SLBs) were compared to conventional brackets (CBs) regarding their effectiveness on transversal changes and space closure, as well as the efficiency of alignment and treatment time. All previously published randomized controlled clinical trials (RCTs) dealing with SLBs and CBs were searched via electronic databases, e.g., MEDLINE, Cochrane Central Register of Controlled Trials, EMBASE, World Health Organization International Clinical Trials Registry Platform, Chinese Biomedical Literature Database, and China National Knowledge Infrastructure. In addition, relevant journals were searched manually. Data extraction was performed independently by two reviewers and assessment of the risk of bias was executed using Cochrane Collaboration's tool. Discrepancies were resolved by discussion with a third reviewer. Meta-analyses were conducted using Review Manager (version 5.3). A total of 976 patients in 17 RCTs were included in the study, of which 11 could be produced quantitatively and 2 showed a low risk of bias. Meta-analyses were found to favor CB for mandibular intercanine width expansion, while passive SLBs were more effective in posterior expansion. Moreover, CBs had an apparent advantage during short treatment periods. However, SLBs and CBs did not differ in closing spaces. Based on current clinical evidence obtained from RCTs, SLBs do not show clinical superiority compared to CBs in expanding transversal dimensions, space closure, or orthodontic efficiency. Further high-level studies involving randomized, controlled, clinical trials are warranted to confirm these results.
The effectiveness of corticotomy and piezocision on canine retraction: A systematic review.
Viwattanatipa, Nita; Charnchairerk, Satadarun
2018-05-01
The aim of this systematic review was to evaluate the effectiveness and complications of corticotomy and piezocision in canine retraction. Five electronic databases (PubMed, SCOPUS, Web of Science, Embase, and CENTRAL) were searched for articles published up to July 2017. The databases were searched for randomized control trials (RCTs), with a split-mouth design, using either corticotomy or piezocision. The primary outcome reported for canine retraction was either the amount of tooth movement, rate of tooth movement, or treatment time. The secondary outcome was complications. The selection process was based on the PRISMA guidelines. A risk of bias assessment was also performed. Our search retrieved 530 abstracts. However, only five RCTs were finally included. Corticotomy showed a more significant (i.e., 2 to 4 times faster) increase in the rate of tooth movement than did the conventional method. For piezocision, both accumulative tooth movement and rate of tooth movement were twice faster than those of the conventional method. Corticotomy (with a flap design avoiding marginal bone incision) or flapless piezocision procedures were not detrimental to periodontal health. Nevertheless, piezocision resulted in higher levels of patient satisfaction. The main limitation of this study was the limited number of primary research publications on both techniques. For canine retraction into the immediate premolar extraction site, the rate of canine movement after piezocision was almost comparable to that of corticotomy with only buccal flap elevation.
NASA Technical Reports Server (NTRS)
Pliutau, Denis; Prasad, Narasimha S
2013-01-01
Studies were performed to carry out semi-empirical validation of a new measurement approach we propose for molecular mixing ratios determination. The approach is based on relative measurements in bands of O2 and other molecules and as such may be best described as cross band relative absorption (CoBRA). . The current validation studies rely upon well verified and established theoretical and experimental databases, satellite data assimilations and modeling codes such as HITRAN, line-by-line radiative transfer model (LBLRTM), and the modern-era retrospective analysis for research and applications (MERRA). The approach holds promise for atmospheric mixing ratio measurements of CO2 and a variety of other molecules currently under investigation for several future satellite lidar missions. One of the advantages of the method is a significant reduction of the temperature sensitivity uncertainties which is illustrated with application to the ASCENDS mission for the measurement of CO2 mixing ratios (XCO2). Additional advantages of the method include the possibility to closely match cross-band weighting function combinations which is harder to achieve using conventional differential absorption techniques and the potential for additional corrections for water vapor and other interferences without using the data from numerical weather prediction (NWP) models.
NASA Astrophysics Data System (ADS)
Kumar, Gaurav; Kumar, Ashok
2017-11-01
Structural control has gained significant attention in recent times. The standalone issue of power requirement during an earthquake has already been solved up to a large extent by designing semi-active control systems using conventional linear quadratic control theory, and many other intelligent control algorithms such as fuzzy controllers, artificial neural networks, etc. In conventional linear-quadratic regulator (LQR) theory, it is customary to note that the values of the design parameters are decided at the time of designing the controller and cannot be subsequently altered. During an earthquake event, the response of the structure may increase or decrease, depending the quasi-resonance occurring between the structure and the earthquake. In this case, it is essential to modify the value of the design parameters of the conventional LQR controller to obtain optimum control force to mitigate the vibrations due to the earthquake. A few studies have been done to sort out this issue but in all these studies it was necessary to maintain a database of the earthquake. To solve this problem and to find the optimized design parameters of the LQR controller in real time, a fast Fourier transform and particle swarm optimization based modified linear quadratic regulator method is presented here. This method comprises four different algorithms: particle swarm optimization (PSO), the fast Fourier transform (FFT), clipped control algorithm and the LQR. The FFT helps to obtain the dominant frequency for every time window. PSO finds the optimum gain matrix through the real-time update of the weighting matrix R, thereby, dispensing with the experimentation. The clipped control law is employed to match the magnetorheological (MR) damper force with the desired force given by the controller. The modified Bouc-Wen phenomenological model is taken to recognize the nonlinearities in the MR damper. The assessment of the advised method is done by simulation of a three-story structure having an MR damper at the ground floor level subjected to three different near-fault historical earthquake time histories, and the outcomes are equated with those of simple conventional LQR. The results establish that the advised methodology is more effective than conventional LQR controllers in reducing inter-storey drift, relative displacement, and acceleration response.
Wollbrett, Julien; Larmande, Pierre; de Lamotte, Frédéric; Ruiz, Manuel
2013-04-15
In recent years, a large amount of "-omics" data have been produced. However, these data are stored in many different species-specific databases that are managed by different institutes and laboratories. Biologists often need to find and assemble data from disparate sources to perform certain analyses. Searching for these data and assembling them is a time-consuming task. The Semantic Web helps to facilitate interoperability across databases. A common approach involves the development of wrapper systems that map a relational database schema onto existing domain ontologies. However, few attempts have been made to automate the creation of such wrappers. We developed a framework, named BioSemantic, for the creation of Semantic Web Services that are applicable to relational biological databases. This framework makes use of both Semantic Web and Web Services technologies and can be divided into two main parts: (i) the generation and semi-automatic annotation of an RDF view; and (ii) the automatic generation of SPARQL queries and their integration into Semantic Web Services backbones. We have used our framework to integrate genomic data from different plant databases. BioSemantic is a framework that was designed to speed integration of relational databases. We present how it can be used to speed the development of Semantic Web Services for existing relational biological databases. Currently, it creates and annotates RDF views that enable the automatic generation of SPARQL queries. Web Services are also created and deployed automatically, and the semantic annotations of our Web Services are added automatically using SAWSDL attributes. BioSemantic is downloadable at http://southgreen.cirad.fr/?q=content/Biosemantic.
2013-01-01
Background In recent years, a large amount of “-omics” data have been produced. However, these data are stored in many different species-specific databases that are managed by different institutes and laboratories. Biologists often need to find and assemble data from disparate sources to perform certain analyses. Searching for these data and assembling them is a time-consuming task. The Semantic Web helps to facilitate interoperability across databases. A common approach involves the development of wrapper systems that map a relational database schema onto existing domain ontologies. However, few attempts have been made to automate the creation of such wrappers. Results We developed a framework, named BioSemantic, for the creation of Semantic Web Services that are applicable to relational biological databases. This framework makes use of both Semantic Web and Web Services technologies and can be divided into two main parts: (i) the generation and semi-automatic annotation of an RDF view; and (ii) the automatic generation of SPARQL queries and their integration into Semantic Web Services backbones. We have used our framework to integrate genomic data from different plant databases. Conclusions BioSemantic is a framework that was designed to speed integration of relational databases. We present how it can be used to speed the development of Semantic Web Services for existing relational biological databases. Currently, it creates and annotates RDF views that enable the automatic generation of SPARQL queries. Web Services are also created and deployed automatically, and the semantic annotations of our Web Services are added automatically using SAWSDL attributes. BioSemantic is downloadable at http://southgreen.cirad.fr/?q=content/Biosemantic. PMID:23586394
How desertification research is addressed in Spain? Land versus Soil approaches
NASA Astrophysics Data System (ADS)
Barbero Sierra, Celia; Marques, María Jose; Ruiz, Manuel; Escadafal, Richard; Exbrayat, Williams; Akthar-Schuster, Mariam; El Haddadi, Anass
2013-04-01
This study intend to understand how desertification research is organised in a south Mediterranean country, as is Spain. It is part of a larger work addressing soil and land research and its relationships with stakeholders. This wider work aims to explain the weakness of the United Nation Convention to Combat Desertification (UNCCD), which devoid of a scientific advisory panel. Within this framework, we assume that a fitting coordination between scientific knowledge and a better flow of information between researchers and policy makers is needed in order to slow down and reverse the impacts of land degradation on drylands. With this purpose we conducted an in-depth study at national level in Spain. The initial work focused on a small sample of published references in scientific journals indexed in the Web of Science. It allowed us to identify the most common thematic approaches and working issues, as well as the corresponding institutions and research teams and the relationships between them. The preliminary results of this study pointed out that two prevalent approaches at this national level could be identified. The first one is related to applied science being sensitive to socio-economic issues, and the second one is related to basic science studying the soil in depth, but it is often disconnected from socio-economic factors. We also noticed that the Spanish research teams acknowledge the other Spanish teams in this subject, as frequent co-citations are found in their papers, nevertheless, they do not collaborate. We also realised that the Web of Science database does not collect the wide spectrum of sociology, economics and the human implications of land degradation which use to be included in books or reports related to desertification. A new wider database was built compiling references of Web of Science related to "desertification", "land", "soil", "development" and "Spain" adding references from other socioeconomic databases. In a second stage we used bibliometric techniques through the Tetralogie software and network analysis using UCINET software, to proceed to: 1. Identify the most referred themes based on the keywords provided by the authors and by the Web of Science platform itself. 2. Identify the relationships between the different topics being addressed and their approach to the desertification from a basic scientific vision (soil degradation) and/or from an applied science vision (land degradation). 3. Identify and evaluate the strenght of possible networks and links established between institutions and/or research teams.
Park, Hae-Min; Park, Ju-Hyeong; Kim, Yoon-Woo; Kim, Kyoung-Jin; Jeong, Hee-Jin; Jang, Kyoung-Soon; Kim, Byung-Gee; Kim, Yun-Gon
2013-11-15
In recent years, the improvement of mass spectrometry-based glycomics techniques (i.e. highly sensitive, quantitative and high-throughput analytical tools) has enabled us to obtain a large dataset of glycans. Here we present a database named Xeno-glycomics database (XDB) that contains cell- or tissue-specific pig glycomes analyzed with mass spectrometry-based techniques, including a comprehensive pig glycan information on chemical structures, mass values, types and relative quantities. It was designed as a user-friendly web-based interface that allows users to query the database according to pig tissue/cell types or glycan masses. This database will contribute in providing qualitative and quantitative information on glycomes characterized from various pig cells/organs in xenotransplantation and might eventually provide new targets in the α1,3-galactosyltransferase gene-knock out pigs era. The database can be accessed on the web at http://bioinformatics.snu.ac.kr/xdb.
NASA Astrophysics Data System (ADS)
Viegas, F.; Malon, D.; Cranshaw, J.; Dimitrov, G.; Nowak, M.; Nairz, A.; Goossens, L.; Gallas, E.; Gamboa, C.; Wong, A.; Vinek, E.
2010-04-01
The TAG files store summary event quantities that allow a quick selection of interesting events. This data will be produced at a nominal rate of 200 Hz, and is uploaded into a relational database for access from websites and other tools. The estimated database volume is 6TB per year, making it the largest application running on the ATLAS relational databases, at CERN and at other voluntary sites. The sheer volume and high rate of production makes this application a challenge to data and resource management, in many aspects. This paper will focus on the operational challenges of this system. These include: uploading the data from files to the CERN's and remote sites' databases; distributing the TAG metadata that is essential to guide the user through event selection; controlling resource usage of the database, from the user query load to the strategy of cleaning and archiving of old TAG data.
NASA Astrophysics Data System (ADS)
Dziedzic, Adam; Mulawka, Jan
2014-11-01
NoSQL is a new approach to data storage and manipulation. The aim of this paper is to gain more insight into NoSQL databases, as we are still in the early stages of understanding when to use them and how to use them in an appropriate way. In this submission descriptions of selected NoSQL databases are presented. Each of the databases is analysed with primary focus on its data model, data access, architecture and practical usage in real applications. Furthemore, the NoSQL databases are compared in fields of data references. The relational databases offer foreign keys, whereas NoSQL databases provide us with limited references. An intermediate model between graph theory and relational algebra which can address the problem should be created. Finally, the proposal of a new approach to the problem of inconsistent references in Big Data storage systems is introduced.
IPD—the Immuno Polymorphism Database
Robinson, James; Halliwell, Jason A.; McWilliam, Hamish; Lopez, Rodrigo; Marsh, Steven G. E.
2013-01-01
The Immuno Polymorphism Database (IPD), http://www.ebi.ac.uk/ipd/ is a set of specialist databases related to the study of polymorphic genes in the immune system. The IPD project works with specialist groups or nomenclature committees who provide and curate individual sections before they are submitted to IPD for online publication. The IPD project stores all the data in a set of related databases. IPD currently consists of four databases: IPD-KIR, contains the allelic sequences of killer-cell immunoglobulin-like receptors, IPD-MHC, a database of sequences of the major histocompatibility complex of different species; IPD-HPA, alloantigens expressed only on platelets; and IPD-ESTDAB, which provides access to the European Searchable Tumour Cell-Line Database, a cell bank of immunologically characterized melanoma cell lines. The data is currently available online from the website and FTP directory. This article describes the latest updates and additional tools added to the IPD project. PMID:23180793
Generic Entity Resolution in Relational Databases
NASA Astrophysics Data System (ADS)
Sidló, Csaba István
Entity Resolution (ER) covers the problem of identifying distinct representations of real-world entities in heterogeneous databases. We consider the generic formulation of ER problems (GER) with exact outcome. In practice, input data usually resides in relational databases and can grow to huge volumes. Yet, typical solutions described in the literature employ standalone memory resident algorithms. In this paper we utilize facilities of standard, unmodified relational database management systems (RDBMS) to enhance the efficiency of GER algorithms. We study and revise the problem formulation, and propose practical and efficient algorithms optimized for RDBMS external memory processing. We outline a real-world scenario and demonstrate the advantage of algorithms by performing experiments on insurance customer data.
Domain fusion analysis by applying relational algebra to protein sequence and domain databases.
Truong, Kevin; Ikura, Mitsuhiko
2003-05-06
Domain fusion analysis is a useful method to predict functionally linked proteins that may be involved in direct protein-protein interactions or in the same metabolic or signaling pathway. As separate domain databases like BLOCKS, PROSITE, Pfam, SMART, PRINTS-S, ProDom, TIGRFAMs, and amalgamated domain databases like InterPro continue to grow in size and quality, a computational method to perform domain fusion analysis that leverages on these efforts will become increasingly powerful. This paper proposes a computational method employing relational algebra to find domain fusions in protein sequence databases. The feasibility of this method was illustrated on the SWISS-PROT+TrEMBL sequence database using domain predictions from the Pfam HMM (hidden Markov model) database. We identified 235 and 189 putative functionally linked protein partners in H. sapiens and S. cerevisiae, respectively. From scientific literature, we were able to confirm many of these functional linkages, while the remainder offer testable experimental hypothesis. Results can be viewed at http://calcium.uhnres.utoronto.ca/pi. As the analysis can be computed quickly on any relational database that supports standard SQL (structured query language), it can be dynamically updated along with the sequence and domain databases, thereby improving the quality of predictions over time.
The Data Base and Decision Making in Public Schools.
ERIC Educational Resources Information Center
Hedges, William D.
1984-01-01
Describes generic types of databases--file management systems, relational database management systems, and network/hierarchical database management systems--with their respective strengths and weaknesses; discusses factors to be considered in determining whether a database is desirable; and provides evaluative criteria for use in choosing…
Bergamino, Maurizio; Hamilton, David J; Castelletti, Lara; Barletta, Laura; Castellan, Lucio
2015-03-01
In this study, we describe the development and utilization of a relational database designed to manage the clinical and radiological data of patients with brain tumors. The Brain Tumor Database was implemented using MySQL v.5.0, while the graphical user interface was created using PHP and HTML, thus making it easily accessible through a web browser. This web-based approach allows for multiple institutions to potentially access the database. The BT Database can record brain tumor patient information (e.g. clinical features, anatomical attributes, and radiological characteristics) and be used for clinical and research purposes. Analytic tools to automatically generate statistics and different plots are provided. The BT Database is a free and powerful user-friendly tool with a wide range of possible clinical and research applications in neurology and neurosurgery. The BT Database graphical user interface source code and manual are freely available at http://tumorsdatabase.altervista.org. © The Author(s) 2013.
Subscale Test Methods for Combustion Devices
NASA Technical Reports Server (NTRS)
Anderson, W. E.; Sisco, J. C.; Long, M. R.; Sung, I.-K.
2005-01-01
Stated goals for long-life LRE s have been between 100 and 500 cycles: 1) Inherent technical difficulty of accurately defining the transient and steady state thermochemical environments and structural response (strain); 2) Limited statistical basis on failure mechanisms and effects of design and operational variability; and 3) Very high test costs and budget-driven need to protect test hardware (aversion to test-to-failure). Ambitious goals will require development of new databases: a) Advanced materials, e.g., tailored composites with virtually unlimited property variations; b) Innovative functional designs to exploit full capabilities of advanced materials; and c) Different cycles/operations. Subscale testing is one way to address technical and budget challenges: 1) Prototype subscale combustors exposed to controlled simulated conditions; 2) Complementary to conventional laboratory specimen database development; 3) Instrumented with sensors to measure thermostructural response; and 4) Coupled with analysis
Electrochemical Impedance Sensors for Monitoring Trace Amounts of NO3 in Selected Growing Media.
Ghaffari, Seyed Alireza; Caron, William-O; Loubier, Mathilde; Normandeau, Charles-O; Viens, Jeff; Lamhamedi, Mohammed S; Gosselin, Benoit; Messaddeq, Younes
2015-07-21
With the advent of smart cities and big data, precision agriculture allows the feeding of sensor data into online databases for continuous crop monitoring, production optimization, and data storage. This paper describes a low-cost, compact, and scalable nitrate sensor based on electrochemical impedance spectroscopy for monitoring trace amounts of NO3- in selected growing media. The nitrate sensor can be integrated to conventional microelectronics to perform online nitrate sensing continuously over a wide concentration range from 0.1 ppm to 100 ppm, with a response time of about 1 min, and feed data into a database for storage and analysis. The paper describes the structural design, the Nyquist impedance response, the measurement sensitivity and accuracy, and the field testing of the nitrate sensor performed within tree nursery settings under ISO/IEC 17025 certifications.
Electrochemical Impedance Sensors for Monitoring Trace Amounts of NO3 in Selected Growing Media
Ghaffari, Seyed Alireza; Caron, William-O.; Loubier, Mathilde; Normandeau, Charles-O.; Viens, Jeff; Lamhamedi, Mohammed S.; Gosselin, Benoit; Messaddeq, Younes
2015-01-01
With the advent of smart cities and big data, precision agriculture allows the feeding of sensor data into online databases for continuous crop monitoring, production optimization, and data storage. This paper describes a low-cost, compact, and scalable nitrate sensor based on electrochemical impedance spectroscopy for monitoring trace amounts of NO3− in selected growing media. The nitrate sensor can be integrated to conventional microelectronics to perform online nitrate sensing continuously over a wide concentration range from 0.1 ppm to 100 ppm, with a response time of about 1 min, and feed data into a database for storage and analysis. The paper describes the structural design, the Nyquist impedance response, the measurement sensitivity and accuracy, and the field testing of the nitrate sensor performed within tree nursery settings under ISO/IEC 17025 certifications. PMID:26197322
SQLGEN: a framework for rapid client-server database application development.
Nadkarni, P M; Cheung, K H
1995-12-01
SQLGEN is a framework for rapid client-server relational database application development. It relies on an active data dictionary on the client machine that stores metadata on one or more database servers to which the client may be connected. The dictionary generates dynamic Structured Query Language (SQL) to perform common database operations; it also stores information about the access rights of the user at log-in time, which is used to partially self-configure the behavior of the client to disable inappropriate user actions. SQLGEN uses a microcomputer database as the client to store metadata in relational form, to transiently capture server data in tables, and to allow rapid application prototyping followed by porting to client-server mode with modest effort. SQLGEN is currently used in several production biomedical databases.
Shang, Aijing; Huwiler-Müntener, Karin; Nartey, Linda; Jüni, Peter; Dörig, Stephan; Sterne, Jonathan A C; Pewsner, Daniel; Egger, Matthias
Homoeopathy is widely used, but specific effects of homoeopathic remedies seem implausible. Bias in the conduct and reporting of trials is a possible explanation for positive findings of trials of both homoeopathy and conventional medicine. We analysed trials of homoeopathy and conventional medicine and estimated treatment effects in trials least likely to be affected by bias. Placebo-controlled trials of homoeopathy were identified by a comprehensive literature search, which covered 19 electronic databases, reference lists of relevant papers, and contacts with experts. Trials in conventional medicine matched to homoeopathy trials for disorder and type of outcome were randomly selected from the Cochrane Controlled Trials Register (issue 1, 2003). Data were extracted in duplicate and outcomes coded so that odds ratios below 1 indicated benefit. Trials described as double-blind, with adequate randomisation, were assumed to be of higher methodological quality. Bias effects were examined in funnel plots and meta-regression models. 110 homoeopathy trials and 110 matched conventional-medicine trials were analysed. The median study size was 65 participants (range ten to 1573). 21 homoeopathy trials (19%) and nine (8%) conventional-medicine trials were of higher quality. In both groups, smaller trials and those of lower quality showed more beneficial treatment effects than larger and higher-quality trials. When the analysis was restricted to large trials of higher quality, the odds ratio was 0.88 (95% CI 0.65-1.19) for homoeopathy (eight trials) and 0.58 (0.39-0.85) for conventional medicine (six trials). Biases are present in placebo-controlled trials of both homoeopathy and conventional medicine. When account was taken for these biases in the analysis, there was weak evidence for a specific effect of homoeopathic remedies, but strong evidence for specific effects of conventional interventions. This finding is compatible with the notion that the clinical effects of homoeopathy are placebo effects.
Thomas, Robert Joseph; Mietus, Joseph E; Peng, Chung-Kang; Guo, Dan; Gozal, David; Montgomery-Downs, Hawley; Gottlieb, Daniel J; Wang, Cheng-Yen; Goldberger, Ary L
2014-01-01
The physiologic relationship between slow-wave activity (SWA) (0-4 Hz) on the electroencephalogram (EEG) and high-frequency (0.1-0.4 Hz) cardiopulmonary coupling (CPC) derived from electrocardiogram (ECG) sleep spectrograms is not known. Because high-frequency CPC appears to be a biomarker of stable sleep, we tested the hypothesis that that slow-wave EEG power would show a relatively fixed-time relationship to periods of high-frequency CPC. Furthermore, we speculated that this correlation would be independent of conventional nonrapid eye movement (NREM) sleep stages. We analyzed selected datasets from an archived polysomnography (PSG) database, the Sleep Heart Health Study I (SHHS-I). We employed the cross-correlation technique to measure the degree of which 2 signals are correlated as a function of a time lag between them. Correlation analyses between high-frequency CPC and delta power (computed both as absolute and normalized values) from 3150 subjects with an apnea-hypopnea index (AHI) of ≤5 events per hour of sleep were performed. The overall correlation (r) between delta power and high-frequency coupling (HFC) power was 0.40±0.18 (P=.001). Normalized delta power provided improved correlation relative to absolute delta power. Correlations were somewhat reduced in the second half relative to the first half of the night (r=0.45±0.20 vs r=0.34±0.23). Correlations were only affected by age in the eighth decade. There were no sex differences and only small racial or ethnic differences were noted. These results support a tight temporal relationship between slow wave power, both within and outside conventional slow wave sleep periods, and high frequency cardiopulmonary coupling, an ECG-derived biomarker of "stable" sleep. These findings raise mechanistic questions regarding the cross-system integration of neural and cardiopulmonary control during sleep. Copyright © 2013 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Torgerson, Jordan L.; Clare, Loren P.; Pang, Jackson
2011-01-01
The Interplanetary Overlay Net - working Protocol Accelerator (IONAC) described previously in The Inter - planetary Overlay Networking Protocol Accelerator (NPO-45584), NASA Tech Briefs, Vol. 32, No. 10, (October 2008) p. 106 (http://www.techbriefs.com/component/ content/article/3317) provides functions that implement the Delay Tolerant Networking (DTN) bundle protocol. New missions that require high-speed downlink-only use of DTN can now be accommodated by the unidirectional IONAC-Lite to support high data rate downlink mission applications. Due to constrained energy resources, a conventional software implementation of the DTN protocol can provide only limited throughput for any given reasonable energy consumption rate. The IONAC-Lite DTN Protocol Accelerator is able to reduce this energy consumption by an order of magnitude and increase the throughput capability by two orders of magnitude. In addition, a conventional DTN implementation requires a bundle database with a considerable storage requirement. In very high downlink datarate missions such as near-Earth radar science missions, the storage space utilization needs to be maximized for science data and minimized for communications protocol-related storage needs. The IONAC-Lite DTN Protocol Accelerator is implemented in a reconfigurable hardware device to accomplish exactly what s needed for high-throughput DTN downlink-only scenarios. The following are salient features of the IONAC-Lite implementation: An implementation of the Bundle Protocol for an environment that requires a very high rate bundle egress data rate. The C&DH (command and data handling) subsystem is also expected to be very constrained so the interaction with the C&DH processor and the temporary storage are minimized. Fully pipelined design so that bundle processing database is not required. Implements a lookup table-based approach to eliminate multi-pass processing requirement imposed by the Bundle Protocol header s length field structure and the SDNV (self-delimiting numeric value) data field formatting. 8-bit parallel datapath to support high data-rate missions. Reduced resource utilization implementation for missions that do not require custody transfer features. There was no known implementation of the DTN protocol in a field programmable gate array (FPGA) device prior to the current implementation. The combination of energy and performance optimization that embodies this design makes the work novel.
BDVC (Bimodal Database of Violent Content): A database of violent audio and video
NASA Astrophysics Data System (ADS)
Rivera Martínez, Jose Luis; Mijes Cruz, Mario Humberto; Rodríguez Vázqu, Manuel Antonio; Rodríguez Espejo, Luis; Montoya Obeso, Abraham; García Vázquez, Mireya Saraí; Ramírez Acosta, Alejandro Álvaro
2017-09-01
Nowadays there is a trend towards the use of unimodal databases for multimedia content description, organization and retrieval applications of a single type of content like text, voice and images, instead bimodal databases allow to associate semantically two different types of content like audio-video, image-text, among others. The generation of a bimodal database of audio-video implies the creation of a connection between the multimedia content through the semantic relation that associates the actions of both types of information. This paper describes in detail the used characteristics and methodology for the creation of the bimodal database of violent content; the semantic relationship is stablished by the proposed concepts that describe the audiovisual information. The use of bimodal databases in applications related to the audiovisual content processing allows an increase in the semantic performance only and only if these applications process both type of content. This bimodal database counts with 580 audiovisual annotated segments, with a duration of 28 minutes, divided in 41 classes. Bimodal databases are a tool in the generation of applications for the semantic web.
Toward unification of taxonomy databases in a distributed computer environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kitakami, Hajime; Tateno, Yoshio; Gojobori, Takashi
1994-12-31
All the taxonomy databases constructed with the DNA databases of the international DNA data banks are powerful electronic dictionaries which aid in biological research by computer. The taxonomy databases are, however not consistently unified with a relational format. If we can achieve consistent unification of the taxonomy databases, it will be useful in comparing many research results, and investigating future research directions from existent research results. In particular, it will be useful in comparing relationships between phylogenetic trees inferred from molecular data and those constructed from morphological data. The goal of the present study is to unify the existent taxonomymore » databases and eliminate inconsistencies (errors) that are present in them. Inconsistencies occur particularly in the restructuring of the existent taxonomy databases, since classification rules for constructing the taxonomy have rapidly changed with biological advancements. A repair system is needed to remove inconsistencies in each data bank and mismatches among data banks. This paper describes a new methodology for removing both inconsistencies and mismatches from the databases on a distributed computer environment. The methodology is implemented in a relational database management system, SYBASE.« less
Impact of database quality in knowledge-based treatment planning for prostate cancer.
Wall, Phillip D H; Carver, Robert L; Fontenot, Jonas D
2018-03-13
This article investigates dose-volume prediction improvements in a common knowledge-based planning (KBP) method using a Pareto plan database compared with using a conventional, clinical plan database. Two plan databases were created using retrospective, anonymized data of 124 volumetric modulated arc therapy (VMAT) prostate cancer patients. The clinical plan database (CPD) contained planning data from each patient's clinically treated VMAT plan, which were manually optimized by various planners. The multicriteria optimization database (MCOD) contained Pareto-optimal plan data from VMAT plans created using a standardized multicriteria optimization protocol. Overlap volume histograms, incorporating fractional organ at risk volumes only within the treatment fields, were computed for each patient and used to match new patient anatomy to similar database patients. For each database patient, CPD and MCOD KBP predictions were generated for D 10 , D 30 , D 50 , D 65 , and D 80 of the bladder and rectum in a leave-one-out manner. Prediction achievability was evaluated through a replanning study on a subset of 31 randomly selected database patients using the best KBP predictions, regardless of plan database origin, as planning goals. MCOD predictions were significantly lower than CPD predictions for all 5 bladder dose-volumes and rectum D 50 (P = .004) and D 65 (P < .001), whereas CPD predictions for rectum D 10 (P = .005) and D 30 (P < .001) were significantly less than MCOD predictions. KBP predictions were statistically achievable in the replans for all predicted dose-volumes, excluding D 10 of bladder (P = .03) and rectum (P = .04). Compared with clinical plans, replans showed significant average reductions in D mean for bladder (7.8 Gy; P < .001) and rectum (9.4 Gy; P < .001), while maintaining statistically similar planning target volume, femoral head, and penile bulb dose. KBP dose-volume predictions derived from Pareto plans were more optimal overall than those resulting from manually optimized clinical plans, which significantly improved KBP-assisted plan quality. This work investigates how the plan quality of knowledge databases affects the performance and achievability of dose-volume predictions from a common knowledge-based planning approach for prostate cancer. Bladder and rectum dose-volume predictions derived from a database of standardized Pareto-optimal plans were compared with those derived from clinical plans manually designed by various planners. Dose-volume predictions from the Pareto plan database were significantly lower overall than those from the clinical plan database, without compromising achievability. Copyright © 2018 Elsevier Inc. All rights reserved.
A Systematic Review of Economic Evaluations of Pacemaker Telemonitoring Systems.
López-Villegas, Antonio; Catalán-Matamoros, Daniel; Martín-Saborido, Carlos; Villegas-Tripiana, Irene; Robles-Musso, Emilio
2016-02-01
Over the last decade, telemedicine applied to pacemaker monitoring has undergone extraordinary growth. It is not known if telemonitoring is more or less efficient than conventional monitoring. The aim of this study was to carry out a systematic review analyzing the available evidence on resource use and health outcomes in both follow-up modalities. We searched 11 databases and included studies published up until November 2014. The inclusion criteria were: a) experimental or observational design; b) studies based on complete economic evaluations; c) patients with pacemakers, and d) telemonitoring compared with conventional hospital monitoring. Seven studies met the inclusion criteria, providing information on 2852 patients, with a mean age of 81 years. The main indication for device implantation was atrioventricular block. With telemonitoring, cardiovascular events were detected and treated 2 months earlier than with conventional monitoring, thus reducing length of hospital stay by 34% and reducing routine and emergency hospital visits as well. There were no significant intergroup differences in perceived quality of life or number of adverse events. The cost of telemonitoring was 60% lower than that of conventional hospital monitoring. Compared with conventional monitoring, cardiovascular events were detected earlier and the number or hospitalizations and hospital visits was reduced with pacemaker telemonitoring. In addition, the costs associated with follow-up were lower with telemonitoring. Copyright © 2015 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.
P2P proteomics -- data sharing for enhanced protein identification
2012-01-01
Background In order to tackle the important and challenging problem in proteomics of identifying known and new protein sequences using high-throughput methods, we propose a data-sharing platform that uses fully distributed P2P technologies to share specifications of peer-interaction protocols and service components. By using such a platform, information to be searched is no longer centralised in a few repositories but gathered from experiments in peer proteomics laboratories, which can subsequently be searched by fellow researchers. Methods The system distributively runs a data-sharing protocol specified in the Lightweight Communication Calculus underlying the system through which researchers interact via message passing. For this, researchers interact with the system through particular components that link to database querying systems based on BLAST and/or OMSSA and GUI-based visualisation environments. We have tested the proposed platform with data drawn from preexisting MS/MS data reservoirs from the 2006 ABRF (Association of Biomolecular Resource Facilities) test sample, which was extensively tested during the ABRF Proteomics Standards Research Group 2006 worldwide survey. In particular we have taken the data available from a subset of proteomics laboratories of Spain's National Institute for Proteomics, ProteoRed, a network for the coordination, integration and development of the Spanish proteomics facilities. Results and Discussion We performed queries against nine databases including seven ProteoRed proteomics laboratories, the NCBI Swiss-Prot database and the local database of the CSIC/UAB Proteomics Laboratory. A detailed analysis of the results indicated the presence of a protein that was supported by other NCBI matches and highly scored matches in several proteomics labs. The analysis clearly indicated that the protein was a relatively high concentrated contaminant that could be present in the ABRF sample. This fact is evident from the information that could be derived from the proposed P2P proteomics system, however it is not straightforward to arrive to the same conclusion by conventional means as it is difficult to discard organic contamination of samples. The actual presence of this contaminant was only stated after the ABRF study of all the identifications reported by the laboratories. PMID:22293032
System, method and apparatus for conducting a keyterm search
NASA Technical Reports Server (NTRS)
McGreevy, Michael W. (Inventor)
2004-01-01
A keyterm search is a method of searching a database for subsets of the database that are relevant to an input query. First, a number of relational models of subsets of a database are provided. A query is then input. The query can include one or more keyterms. Next, a gleaning model of the query is created. The gleaning model of the query is then compared to each one of the relational models of subsets of the database. The identifiers of the relevant subsets are then output.
Databases for rRNA gene profiling of microbial communities
Ashby, Matthew
2013-07-02
The present invention relates to methods for performing surveys of the genetic diversity of a population. The invention also relates to methods for performing genetic analyses of a population. The invention further relates to methods for the creation of databases comprising the survey information and the databases created by these methods. The invention also relates to methods for analyzing the information to correlate the presence of nucleic acid markers with desired parameters in a sample. These methods have application in the fields of geochemical exploration, agriculture, bioremediation, environmental analysis, clinical microbiology, forensic science and medicine.
Mutagenicity and carcinogenicity databases are crucial resources for toxicologists and regulators involved in chemicals risk assessment. Until recently, existing public toxicity databases have been constructed primarily as "look-up-tables" of existing data, and most often did no...
MIPS: analysis and annotation of proteins from whole genomes
Mewes, H. W.; Amid, C.; Arnold, R.; Frishman, D.; Güldener, U.; Mannhaupt, G.; Münsterkötter, M.; Pagel, P.; Strack, N.; Stümpflen, V.; Warfsmann, J.; Ruepp, A.
2004-01-01
The Munich Information Center for Protein Sequences (MIPS-GSF), Neuherberg, Germany, provides protein sequence-related information based on whole-genome analysis. The main focus of the work is directed toward the systematic organization of sequence-related attributes as gathered by a variety of algorithms, primary information from experimental data together with information compiled from the scientific literature. MIPS maintains automatically generated and manually annotated genome-specific databases, develops systematic classification schemes for the functional annotation of protein sequences and provides tools for the comprehensive analysis of protein sequences. This report updates the information on the yeast genome (CYGD), the Neurospora crassa genome (MNCDB), the database of complete cDNAs (German Human Genome Project, NGFN), the database of mammalian protein–protein interactions (MPPI), the database of FASTA homologies (SIMAP), and the interface for the fast retrieval of protein-associated information (QUIPOS). The Arabidopsis thaliana database, the rice database, the plant EST databases (MATDB, MOsDB, SPUTNIK), as well as the databases for the comprehensive set of genomes (PEDANT genomes) are described elsewhere in the 2003 and 2004 NAR database issues, respectively. All databases described, and the detailed descriptions of our projects can be accessed through the MIPS web server (http://mips.gsf.de). PMID:14681354
MIPS: analysis and annotation of proteins from whole genomes.
Mewes, H W; Amid, C; Arnold, R; Frishman, D; Güldener, U; Mannhaupt, G; Münsterkötter, M; Pagel, P; Strack, N; Stümpflen, V; Warfsmann, J; Ruepp, A
2004-01-01
The Munich Information Center for Protein Sequences (MIPS-GSF), Neuherberg, Germany, provides protein sequence-related information based on whole-genome analysis. The main focus of the work is directed toward the systematic organization of sequence-related attributes as gathered by a variety of algorithms, primary information from experimental data together with information compiled from the scientific literature. MIPS maintains automatically generated and manually annotated genome-specific databases, develops systematic classification schemes for the functional annotation of protein sequences and provides tools for the comprehensive analysis of protein sequences. This report updates the information on the yeast genome (CYGD), the Neurospora crassa genome (MNCDB), the database of complete cDNAs (German Human Genome Project, NGFN), the database of mammalian protein-protein interactions (MPPI), the database of FASTA homologies (SIMAP), and the interface for the fast retrieval of protein-associated information (QUIPOS). The Arabidopsis thaliana database, the rice database, the plant EST databases (MATDB, MOsDB, SPUTNIK), as well as the databases for the comprehensive set of genomes (PEDANT genomes) are described elsewhere in the 2003 and 2004 NAR database issues, respectively. All databases described, and the detailed descriptions of our projects can be accessed through the MIPS web server (http://mips.gsf.de).
Centrifuge: rapid and sensitive classification of metagenomic sequences.
Kim, Daehwan; Song, Li; Breitwieser, Florian P; Salzberg, Steven L
2016-12-01
Centrifuge is a novel microbial classification engine that enables rapid, accurate, and sensitive labeling of reads and quantification of species on desktop computers. The system uses an indexing scheme based on the Burrows-Wheeler transform (BWT) and the Ferragina-Manzini (FM) index, optimized specifically for the metagenomic classification problem. Centrifuge requires a relatively small index (4.2 GB for 4078 bacterial and 200 archaeal genomes) and classifies sequences at very high speed, allowing it to process the millions of reads from a typical high-throughput DNA sequencing run within a few minutes. Together, these advances enable timely and accurate analysis of large metagenomics data sets on conventional desktop computers. Because of its space-optimized indexing schemes, Centrifuge also makes it possible to index the entire NCBI nonredundant nucleotide sequence database (a total of 109 billion bases) with an index size of 69 GB, in contrast to k-mer-based indexing schemes, which require far more extensive space. © 2016 Kim et al.; Published by Cold Spring Harbor Laboratory Press.
Nuclear Magnetic Resonance Spectroscopy-Based Identification of Yeast.
Himmelreich, Uwe; Sorrell, Tania C; Daniel, Heide-Marie
2017-01-01
Rapid and robust high-throughput identification of environmental, industrial, or clinical yeast isolates is important whenever relatively large numbers of samples need to be processed in a cost-efficient way. Nuclear magnetic resonance (NMR) spectroscopy generates complex data based on metabolite profiles, chemical composition and possibly on medium consumption, which can not only be used for the assessment of metabolic pathways but also for accurate identification of yeast down to the subspecies level. Initial results on NMR based yeast identification where comparable with conventional and DNA-based identification. Potential advantages of NMR spectroscopy in mycological laboratories include not only accurate identification but also the potential of automated sample delivery, automated analysis using computer-based methods, rapid turnaround time, high throughput, and low running costs.We describe here the sample preparation, data acquisition and analysis for NMR-based yeast identification. In addition, a roadmap for the development of classification strategies is given that will result in the acquisition of a database and analysis algorithms for yeast identification in different environments.
Biothermal Model of Patient for Brain Hypothermia Treatment
NASA Astrophysics Data System (ADS)
Wakamatsu, Hidetoshi; Gaohua, Lu
A biothermal model of patient is proposed and verified for the brain hypothermia treatment, since the conventionally applied biothermal models are inappropriate for their unprecedented application. The model is constructed on the basis of the clinical practice of the pertinent therapy and characterized by the mathematical relation with variable ambient temperatures, in consideration of the clinical treatments such as the vital cardiopulmonary regulation. It has geometrically clear representation of multi-segmental core-shell structure, database of physiological and physical parameters with a systemic state equation setting the initial temperature of each compartment. Its step response gives the time constant about 3 hours in agreement with clinical knowledge. As for the essential property of the model, the dynamic temperature of its face-core compartment is realized, which corresponds to the tympanic membrane temperature measured under the practical anesthesia. From the various simulations consistent with the phenomena of clinical practice, it is concluded that the proposed model is appropriate for the theoretical analysis and clinical application to the brain hypothermia treatment.
Campillo-Gimenez, Boris; Garcelon, Nicolas; Jarno, Pascal; Chapplain, Jean Marc; Cuggia, Marc
2013-01-01
The surveillance of Surgical Site Infections (SSI) contributes to the management of risk in French hospitals. Manual identification of infections is costly, time-consuming and limits the promotion of preventive procedures by the dedicated teams. The introduction of alternative methods using automated detection strategies is promising to improve this surveillance. The present study describes an automated detection strategy for SSI in neurosurgery, based on textual analysis of medical reports stored in a clinical data warehouse. The method consists firstly, of enrichment and concept extraction from full-text reports using NOMINDEX, and secondly, text similarity measurement using a vector space model. The text detection was compared to the conventional strategy based on self-declaration and to the automated detection using the diagnosis-related group database. The text-mining approach showed the best detection accuracy, with recall and precision equal to 92% and 40% respectively, and confirmed the interest of reusing full-text medical reports to perform automated detection of SSI.
Using Solid State Drives as a Mid-Tier Cache in Enterprise Database OLTP Applications
NASA Astrophysics Data System (ADS)
Khessib, Badriddine M.; Vaid, Kushagra; Sankar, Sriram; Zhang, Chengliang
When originally introduced, flash based solid state drives (SSD) exhibited a very high random read throughput with low sub-millisecond latencies. However, in addition to their steep prices, SSDs suffered from slow write rates and reliability concerns related to cell wear. For these reasons, they were relegated to a niche status in the consumer and personal computer market. Since then, several architectural enhancements have been introduced that led to a substantial increase in random write operations as well as a reasonable improvement in reliability. From a purely performance point of view, these high I/O rates and improved reliability make the SSDs an ideal choice for enterprise On-Line Transaction Processing (OLTP) applications. However, from a price/performance point of view, the case for SSDs may not be clear. Enterprise class SSD Price/GB, continues to be at least 10x higher than conventional magnetic hard disk drives (HDD) despite considerable drop in Flash chip prices.