Evaluating the Impact of Database Heterogeneity on Observational Study Results
Madigan, David; Ryan, Patrick B.; Schuemie, Martijn; Stang, Paul E.; Overhage, J. Marc; Hartzema, Abraham G.; Suchard, Marc A.; DuMouchel, William; Berlin, Jesse A.
2013-01-01
Clinical studies that use observational databases to evaluate the effects of medical products have become commonplace. Such studies begin by selecting a particular database, a decision that published papers invariably report but do not discuss. Studies of the same issue in different databases, however, can and do generate different results, sometimes with strikingly different clinical implications. In this paper, we systematically study heterogeneity among databases, holding other study methods constant, by exploring relative risk estimates for 53 drug-outcome pairs and 2 widely used study designs (cohort studies and self-controlled case series) across 10 observational databases. When holding the study design constant, our analysis shows that estimated relative risks range from a statistically significant decreased risk to a statistically significant increased risk in 11 of 53 (21%) drug-outcome pairs that use a cohort design and 19 of 53 (36%) drug-outcome pairs that use a self-controlled case series design. This exceeds the proportion of pairs that were consistent across databases in both direction and statistical significance, which was 9 of 53 (17%) for cohort studies and 5 of 53 (9%) for self-controlled case series. Our findings show that clinical studies that use observational databases can be sensitive to the choice of database. More attention is needed to consider how the choice of data source may be affecting results. PMID:23648805
Designing Corporate Databases to Support Technology Innovation
ERIC Educational Resources Information Center
Gultz, Michael Jarett
2012-01-01
Based on a review of the existing literature on database design, this study proposed a unified database model to support corporate technology innovation. This study assessed potential support for the model based on the opinions of 200 technology industry executives, including Chief Information Officers, Chief Knowledge Officers and Chief Learning…
Ryan, Patrick B.; Schuemie, Martijn
2013-01-01
Background: Clinical studies that use observational databases, such as administrative claims and electronic health records, to evaluate the effects of medical products have become commonplace. These studies begin by selecting a particular study design, such as a case control, cohort, or self-controlled design, and different authors can and do choose different designs for the same clinical question. Furthermore, published papers invariably report the study design but do not discuss the rationale for the specific choice. Studies of the same clinical question with different designs, however, can generate different results, sometimes with strikingly different implications. Even within a specific study design, authors make many different analytic choices and these too can profoundly impact results. In this paper, we systematically study heterogeneity due to the type of study design and due to analytic choices within study design. Methods and findings: We conducted our analysis in 10 observational healthcare databases but mostly present our results in the context of the GE Centricity EMR database, an electronic health record database containing data for 11.2 million lives. We considered the impact of three different study design choices on estimates of associations between bisphosphonates and four particular health outcomes for which there is no evidence of an association. We show that applying alternative study designs can yield discrepant results, in terms of direction and significance of association. We also highlight that while traditional univariate sensitivity analysis may not show substantial variation, systematic assessment of all analytical choices within a study design can yield inconsistent results ranging from statistically significant decreased risk to statistically significant increased risk. Our findings show that clinical studies using observational databases can be sensitive both to study design choices and to specific analytic choices within study design. Conclusion: More attention is needed to consider how design choices may be impacting results and, when possible, investigators should examine a wide array of possible choices to confirm that significant findings are consistently identified. PMID:25083251
A Database Design and Development Case: NanoTEK Networks
ERIC Educational Resources Information Center
Ballenger, Robert M.
2010-01-01
This case provides a real-world project-oriented case study for students enrolled in a management information systems, database management, or systems analysis and design course in which database design and development are taught. The case consists of a business scenario to provide background information and details of the unique operating…
Design of Integrated Database on Mobile Information System: A Study of Yogyakarta Smart City App
NASA Astrophysics Data System (ADS)
Nurnawati, E. K.; Ermawati, E.
2018-02-01
An integration database is a database that acts as the data store for multiple applications and thus integrates data across these applications (in contrast to an application database). An integration database needs a schema that takes all its client applications into account. The benefit of a shared schema is that sharing data among applications does not require an extra layer of integration services on the applications. Any changes to data made in a single application are made available to all applications at the time of database commit, thus keeping the applications' data use better synchronized. This study aims to design and build an integrated database that can be used by various applications on a mobile platform within a smart city system. The resulting database can be used by various applications, whether together or separately. The design and development of the database emphasize flexibility, security, and completeness of attributes that can be shared by the various applications to be built. The method used in this study is to choose an appropriate logical database structure (pattern of data) and to build the relational database model. The resulting design was tested with several prototype apps and system performance was analyzed with test data. The integrated database can be used by both administrators and users in an integral and comprehensive platform. This system can help admins, managers, and operators manage the application easily and efficiently. The Android-based app is built on a dynamic client-server model where data are extracted from an external MySQL database, so any change of data in the database is also reflected in the Android applications. The app assists users in searching for information about Yogyakarta (as a smart city), especially on culture, government, hotels, and transportation.
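The core pattern described here, a single schema serving several client applications, can be sketched in a few lines. In the minimal example below (Python with SQLite; the table, app roles and place names are invented for illustration rather than taken from the Yogyakarta system), one app writes to the shared table and another reads from it; a commit by the writer is immediately visible to the reader, with no extra integration layer in between.

```python
import sqlite3

# One shared "integration" database serving several client apps.
# Table, app roles and place names are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE place (
        id       INTEGER PRIMARY KEY,
        category TEXT NOT NULL,      -- 'culture', 'hotel', 'transport', ...
        name     TEXT NOT NULL,
        district TEXT
    )""")

def hotel_app_register(name, district):
    """Writer: the hotel-listing app inserts into the shared schema."""
    conn.execute("INSERT INTO place (category, name, district) "
                 "VALUES ('hotel', ?, ?)", (name, district))
    conn.commit()   # visible to every other app at commit time

def city_guide_search(district):
    """Reader: a different app queries the same table; no sync layer needed."""
    return conn.execute("SELECT category, name FROM place WHERE district = ?",
                        (district,)).fetchall()

hotel_app_register("Hotel Malioboro", "Gedongtengen")
print(city_guide_search("Gedongtengen"))   # [('hotel', 'Hotel Malioboro')]
```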
Building a medical image processing algorithm verification database
NASA Astrophysics Data System (ADS)
Brown, C. Wayne
2000-06-01
The design of a database containing head Computed Tomography (CT) studies is presented, along with a justification for the database's composition. The database will be used to validate software algorithms that screen normal head CT studies from studies that contain pathology. The database is designed to have the following major properties: (1) a size sufficient for statistical viability, (2) inclusion of both normal (no pathology) and abnormal scans, (3) inclusion of scans affected by equipment malfunction, technologist error, and uncooperative patients, (4) inclusion of data sets from multiple scanner manufacturers, (5) inclusion of data sets from different gender and age groups, and (6) three independent diagnoses of each data set. Designed correctly, the database will provide a partial basis for FDA (United States Food and Drug Administration) approval of image processing algorithms for clinical use. Our goal for the database is proof of the viability of screening head CTs for normal anatomy using computer algorithms. To put this work into context, a classification scheme for 'computer aided diagnosis' systems is proposed.
Front-End and Back-End Database Design and Development: Scholar's Academy Case Study
ERIC Educational Resources Information Center
Parks, Rachida F.; Hall, Chelsea A.
2016-01-01
This case study consists of a real database project for a charter school--Scholar's Academy--and provides background information on the school and its cafeteria processing system. Also included are functional requirements and some illustrative data. Students are tasked with the design and development of a database for the purpose of improving the…
The STEP database through the end-users' eyes -- usability study.
Salunke, Smita; Tuleu, Catherine
2015-08-15
The user-designed database of Safety and Toxicity of Excipients for Paediatrics ("STEP") was created to address the drug development community's shared need to access relevant excipient information effortlessly. Usability testing was performed to validate whether the database satisfies the needs of end-users. An evaluation framework was developed to assess usability. Participants performed scenario-based tasks and provided feedback and post-session usability ratings. Failure Mode Effect Analysis (FMEA) was performed to prioritize the problems and improvements to the STEP database design and functionalities. The study revealed several design vulnerabilities. Tasks such as limiting the results, running complex queries, locating data and registering for access were challenging. The three critical attributes identified as affecting the usability of the STEP database were (1) content and presentation, (2) navigation and search features, and (3) potential end-users. The evaluation framework proved to be an effective method for evaluating database effectiveness and user satisfaction. This study provides strong initial support for the usability of the STEP database. Recommendations will be incorporated into refinements of the database to improve its usability and increase user participation toward the advancement of the database.
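FMEA prioritization is usually done by ranking each failure mode with a risk priority number, RPN = severity x occurrence x detection. The abstract does not report the authors' actual scores, so the sketch below uses invented values for the four problem tasks it mentions, purely to illustrate how such a ranking is computed.

```python
# Hypothetical FMEA scoring for usability findings. The scores are
# invented; the paper does not report its severity/occurrence/detection
# values, only that FMEA was used to prioritize problems.
findings = [
    # (problem, severity, occurrence, detection), each on a 1-10 scale
    ("limiting search results", 7, 8, 4),
    ("running complex queries", 8, 6, 5),
    ("locating data on a page", 5, 7, 3),
    ("registering for access",  6, 4, 2),
]

def rpn(severity, occurrence, detection):
    """Risk Priority Number: the usual FMEA prioritization metric."""
    return severity * occurrence * detection

for problem, s, o, d in sorted(findings, key=lambda f: rpn(*f[1:]), reverse=True):
    print(f"RPN {rpn(s, o, d):4d}  {problem}")
```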
Daniell, Nathan; Fraysse, François; Paul, Gunther
2012-01-01
Anthropometry has long been used for a range of ergonomic applications and product design. Although products are often designed for specific cohorts, anthropometric data are typically sourced from large-scale surveys representative of the general population. Additionally, few data are available for emerging markets like China and India. This study measured 80 Chinese males who were representative of a specific cohort targeted for the design of a new product. Thirteen anthropometric measurements were recorded and compared to two large databases representing a general population: a Chinese database and a Western database. Substantial differences were identified between the Chinese males measured in this study and both databases. The subjects were substantially taller, heavier and broader than subjects in the older Chinese database. However, they were still substantially smaller, lighter and thinner than Western males. Data from current Western anthropometric surveys are unlikely to accurately represent the target population for product designers and manufacturers in emerging markets like China.
ERIC Educational Resources Information Center
Deutsch, Donald R.
This report describes a research effort that was carried out over a period of several years to develop and demonstrate a methodology for evaluating proposed Database Management System designs. The major proposition addressed by this study is embodied in the thesis statement: Proposed database management system designs can be evaluated best through…
A case study for a digital seabed database: Bohai Sea engineering geology database
NASA Astrophysics Data System (ADS)
Tianyun, Su; Shikui, Zhai; Baohua, Liu; Ruicai, Liang; Yanpeng, Zheng; Yong, Wang
2006-07-01
This paper discusses the design of an ORACLE-based Bohai Sea engineering geology database, covering requirements analysis, conceptual design, logical design, physical design and security design. In the study, we used the object-oriented Unified Modeling Language (UML) to model the conceptual structure of the database and used the powerful data management functions that the object-relational database ORACLE provides to organize and manage the storage space and improve its security. By this means, the database can provide rapid and highly effective performance in data storage, maintenance and query, satisfying the application requirements of the Bohai Sea Oilfield Paradigm Area Information System.
Relational databases for rare disease study: application to vascular anomalies.
Perkins, Jonathan A; Coltrera, Marc D
2008-01-01
To design a relational database integrating the clinical and basic science data needed for multidisciplinary treatment and research in the field of vascular anomalies. Based on data points agreed on by the American Society of Pediatric Otolaryngology (ASPO) Vascular Anomalies Task Force. The database design enables sharing of data subsets in a Health Insurance Portability and Accountability Act (HIPAA)-compliant manner for multisite collaborative trials. Vascular anomalies pose diagnostic and therapeutic challenges. Our understanding of these lesions and improvement of their treatment are limited by nonstandard terminology, severity assessment, and measures of treatment efficacy. The rarity of these lesions places a premium on coordinated studies among multiple participant sites. The relational database design is conceptually centered on subjects having 1 or more lesions. Each anomaly can be tracked individually along with its treatment outcomes. This design allows for differentiation between treatment responses and the natural course of untreated lesions. The relational database design eliminates data entry redundancy and results in extremely flexible search and data export functionality. Vascular anomaly programs in the United States. A relational database correlating clinical findings and photographic, radiologic, histologic, and treatment data for vascular anomalies was created for stand-alone and multiuser networked systems. Proof of concept for independent site data gathering and HIPAA-compliant sharing of data subsets was demonstrated. The collaborative effort by the ASPO Vascular Anomalies Task Force to create the database helped define a common vascular anomaly data set. The resulting relational database software is a powerful tool to further the study of vascular anomalies and the development of evidence-based treatment innovation.
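The described structure, subjects having one or more lesions with treatments attached to individual lesions, can be illustrated with a minimal relational sketch. This is a guess at the shape of the design from the abstract alone; the column names are invented and the real ASPO-defined data set is far richer.

```python
import sqlite3

# Minimal sketch of the described design: a subject owns one or more
# lesions, and treatments attach to individual lesions, so treated and
# untreated lesions can be compared. Columns are invented.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE subject (
    subject_id INTEGER PRIMARY KEY,
    birth_year INTEGER       -- direct identifiers kept out of shared subsets
);
CREATE TABLE lesion (
    lesion_id  INTEGER PRIMARY KEY,
    subject_id INTEGER NOT NULL REFERENCES subject,
    site       TEXT,
    diagnosis  TEXT
);
CREATE TABLE treatment (
    treatment_id INTEGER PRIMARY KEY,
    lesion_id    INTEGER NOT NULL REFERENCES lesion,
    modality     TEXT,
    outcome      TEXT
);
""")
# Untreated lesions (natural course) are simply lesions with no treatment rows:
rows = db.execute("""
    SELECT l.lesion_id
    FROM lesion l LEFT JOIN treatment t ON t.lesion_id = l.lesion_id
    WHERE t.treatment_id IS NULL""").fetchall()
print(rows)   # empty here; no data loaded in this sketch
```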
Database Software for Social Studies. A MicroSIFT Quarterly Report.
ERIC Educational Resources Information Center
Weaver, Dave
The report describes and evaluates the use of a set of learning tools called database managers and their creation of databases to help teach problem solving skills in social studies. Details include the design, building, and use of databases in a social studies setting, along with advantages and disadvantages of using them. The three types of…
Pemberton, T J; Jakobsson, M; Conrad, D F; Coop, G; Wall, J D; Pritchard, J K; Patel, P I; Rosenberg, N A
2008-07-01
When performing association studies in populations that have not been the focus of large-scale investigations of haplotype variation, it is often helpful to rely on genomic databases in other populations for study design and analysis - such as in the selection of tag SNPs and in the imputation of missing genotypes. One way of improving the use of these databases is to rely on a mixture of database samples that is similar to the population of interest, rather than using the single most similar database sample. We demonstrate the effectiveness of the mixture approach in the application of African, European, and East Asian HapMap samples for tag SNP selection in populations from India, a genetically intermediate region underrepresented in genomic studies of haplotype variation.
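A toy version of the mixture idea: rather than choosing the single closest reference panel, approximate the target population's allele frequencies as a nonnegative weighted blend of several panels. All frequencies below are invented, and a coarse grid search stands in for whatever estimation procedure the authors actually used.

```python
import itertools

# Toy illustration of the mixture approach. All frequencies are invented;
# a coarse grid search stands in for the authors' estimation procedure.
ref = {
    "CEU":     [0.10, 0.40, 0.75, 0.30],   # European panel
    "CHB+JPT": [0.55, 0.20, 0.60, 0.45],   # East Asian panel
    "YRI":     [0.80, 0.15, 0.20, 0.70],   # African panel
}
target = [0.35, 0.28, 0.65, 0.40]          # pilot sample of the study population

def sse(weights):
    """Squared error between the weighted blend and the target frequencies."""
    blend = [sum(w * ref[p][i] for w, p in zip(weights, ref))
             for i in range(len(target))]
    return sum((b - t) ** 2 for b, t in zip(blend, target))

grid = [i / 20 for i in range(21)]         # nonnegative weights summing to 1
best = min(((w1, w2, 1 - w1 - w2)
            for w1, w2 in itertools.product(grid, grid) if w1 + w2 <= 1),
           key=sse)
print({p: round(w, 2) for p, w in zip(ref, best)}, "SSE:", round(sse(best), 4))
```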
Design and Establishment of Quality Model of Fundamental Geographic Information Database
NASA Astrophysics Data System (ADS)
Ma, W.; Zhang, J.; Zhao, Y.; Zhang, P.; Dang, Y.; Zhao, T.
2018-04-01
In order to make the quality evaluation of Fundamental Geographic Information Databases (FGIDB) more comprehensive, objective and accurate, this paper studies and establishes a quality model of FGIDB, formed by the standardization of database construction and quality control, the conformity of data set quality, and the functionality of the database management system. It also designs the overall principles, contents and methods of quality evaluation for FGIDB, providing a basis and reference for carrying out quality control and quality evaluation. This paper designs the quality elements, evaluation items and properties of the FGIDB step by step based on the quality model framework. Organically connected, these quality elements and evaluation items constitute the quality model of the FGIDB. This model is the foundation for stipulating quality requirements and for quality evaluation of the FGIDB, and is of great significance for quality assurance in the design and development stage, for demand formulation in the testing and evaluation stage, and for building the standard system for quality evaluation technology of Fundamental Geographic Information Databases.
Slushie World: An In-Class Access Database Tutorial
ERIC Educational Resources Information Center
Wynn, Donald E., Jr.; Pratt, Renée M. E.
2015-01-01
The Slushie World case study is designed to teach the basics of Microsoft Access and database management over a series of three 75-minute class sessions. Students are asked to build a basic database to track sales and inventory for a small business. Skills to be learned include table creation, data entry and importing, form and report design,…
Implementation of an open adoption research data management system for clinical studies.
Müller, Jan; Heiss, Kirsten Ingmar; Oberhoffer, Renate
2017-07-06
Research institutions need to manage multiple studies with individual data sets, processing rules and different permissions. So far, there is no standard technology that provides an easy-to-use environment to create databases and user interfaces for clinical trials or research studies. Therefore, various software solutions are being used, from custom software explicitly designed for a specific study, to cost-intensive commercial Clinical Trial Management Systems (CTMS), to very basic approaches with self-designed Microsoft® databases. The technology applied to conduct these studies varies tremendously from study to study, making it difficult to evaluate data across studies (meta-analysis) and to keep a defined level of quality in database design, data processing, display and export. Furthermore, the systems used to collect study data are often operated redundantly alongside systems used in patient care. As a consequence, data collection in studies is inefficient and data quality may suffer from unsynchronized datasets, non-normalized database scenarios and manually executed data transfers. With OpenCampus Research we implemented an open adoption software (OAS) solution on an open source basis, which provides a standard environment for state-of-the-art research database management at low cost.
Osteoporosis therapies: evidence from health-care databases and observational population studies.
Silverman, Stuart L
2010-11-01
Osteoporosis is a well-recognized disease with severe consequences if left untreated. Randomized controlled trials are the most rigorous method for determining the efficacy and safety of therapies. Nevertheless, randomized controlled trials underrepresent the real-world patient population and are costly in both time and money. Modern technology has enabled researchers to use information gathered from large health-care or medical-claims databases to assess the practical utilization of available therapies in appropriate patients. Observational database studies lack randomization but, if carefully designed and successfully completed, can provide valuable information that complements results obtained from randomized controlled trials and extends our knowledge to real-world clinical patients. Randomized controlled trials comparing fracture outcomes among osteoporosis therapies are difficult to perform. In this regard, large observational database studies could be useful in identifying clinically important differences among therapeutic options. Database studies can also provide important information with regard to osteoporosis prevalence, health economics, and compliance and persistence with treatment. This article describes the strengths and limitations of both randomized controlled trials and observational database studies, discusses considerations for observational study design, and reviews a wealth of information generated by database studies in the field of osteoporosis.
2013-2014 Food and Nutrient Database for Dietary Studies Items Designated as Fortified
USDA-ARS?s Scientific Manuscript database
The Food and Nutrient Database for Dietary Studies (FNDDS) is used to convert food and beverages consumed in What We Eat in America, National Health and Nutrition Examination Survey (WWEIA, NHANES) into gram amounts and determine their nutrient values. The file of Items Designated as Fortified in F...
Applying cognitive load theory to the redesign of a conventional database systems course
NASA Astrophysics Data System (ADS)
Mason, Raina; Seton, Carolyn; Cooper, Graham
2016-01-01
Cognitive load theory (CLT) was used to redesign a Database Systems course for Information Technology students. The redesign was intended to address poor student performance and low satisfaction, and to provide a more relevant foundation in database design and use for subsequent studies and industry. The original course followed the conventional structure for a database course, covering database design first, then database development. Analysis showed the conventional course content was appropriate but the instructional materials used were too complex, especially for novice students. The redesign of instructional materials applied CLT to remove split attention and redundancy effects, to provide suitable worked examples and sub-goals, and included an extensive re-sequencing of content. The approach was primarily directed towards mid- to lower performing students and results showed a significant improvement for this cohort with the exam failure rate reducing by 34% after the redesign on identical final exams. Student satisfaction also increased and feedback from subsequent study was very positive. The application of CLT to the design of instructional materials is discussed for delivery of technical courses.
WMC Database Evaluation. Case Study Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palounek, Andrea P. T
The WMC Database is ultimately envisioned to hold a collection of experimental data, design information, and information from computational models. This project was a first attempt at using the Database to access experimental data and extract information from it. This evaluation shows that the Database concept is sound and robust, and that the Database, once fully populated, should remain eminently usable for future researchers.
Stang, Paul E; Ryan, Patrick B; Overhage, J Marc; Schuemie, Martijn J; Hartzema, Abraham G; Welebob, Emily
2013-10-01
Researchers using observational data to understand drug effects must make a number of analytic design choices that suit the characteristics of the data and the subject of the study. Review of the published literature suggests that there is a lack of consistency even when addressing the same research question in the same database. To characterize the degree of similarity or difference in the method and analysis choices made by observational database research experts when presented with research study scenarios. On-line survey using research scenarios on drug-effect studies to capture method selection and analysis choices that follow a dependency branching based on response to key questions. Voluntary participants experienced in epidemiological study design solicited for participation through registration on the Observational Medical Outcomes Partnership website, membership in particular professional organizations, or links in relevant newsletters. Description (proportion) of respondents selecting particular methods and making specific analysis choices based on individual drug-outcome scenario pairs. The number of questions/decisions differed based on stem questions of study design, time-at-risk, outcome definition, and comparator. There is little consistency across scenarios, by drug or by outcome of interest, in the decisions made for design and analyses in scenarios using large healthcare databases. The most consistent choice was the cohort study design but variability in the other critical decisions was common. There is great variation among epidemiologists in the design and analytical choices that they make when implementing analyses in observational healthcare databases. These findings confirm that it will be important to generate empiric evidence to inform these decisions and to promote a better understanding of the impact of standardization on research implementation.
The Recovery Care and Treatment Center: A Database Design and Development Case
ERIC Educational Resources Information Center
Harris, Ranida B.; Vaught, Kara L.
2008-01-01
The advantages of active learning methodologies have been suggested and empirically shown by a number of IS educators. Case studies are one such teaching technique that offers students the ability to think analytically, apply material learned, and solve a real-world problem. This paper presents a case study designed to be used in a database design…
Wang, Shirley V; Schneeweiss, Sebastian; Berger, Marc L; Brown, Jeffrey; de Vries, Frank; Douglas, Ian; Gagne, Joshua J; Gini, Rosa; Klungel, Olaf; Mullins, C Daniel; Nguyen, Michael D; Rassen, Jeremy A; Smeeth, Liam; Sturkenboom, Miriam
2017-09-01
Defining a study population and creating an analytic dataset from longitudinal healthcare databases involves many decisions. Our objective was to catalogue scientific decisions underpinning study execution that should be reported to facilitate replication and enable assessment of validity of studies conducted in large healthcare databases. We reviewed key investigator decisions required to operate a sample of macros and software tools designed to create and analyze analytic cohorts from longitudinal streams of healthcare data. A panel of academic, regulatory, and industry experts in healthcare database analytics discussed and added to this list. Evidence generated from large healthcare encounter and reimbursement databases is increasingly being sought by decision-makers. Varied terminology is used around the world for the same concepts. Agreeing on terminology and which parameters from a large catalogue are the most essential to report for replicable research would improve transparency and facilitate assessment of validity. At a minimum, reporting for a database study should provide clarity regarding operational definitions for key temporal anchors and their relation to each other when creating the analytic dataset, accompanied by an attrition table and a design diagram. A substantial improvement in reproducibility, rigor and confidence in real world evidence generated from healthcare databases could be achieved with greater transparency about operational study parameters used to create analytic datasets from longitudinal healthcare databases.
Use of Patient Registries and Administrative Datasets for the Study of Pediatric Cancer
Rice, Henry E.; Englum, Brian R.; Gulack, Brian C.; Adibe, Obinna O.; Tracy, Elizabeth T.; Kreissman, Susan G.; Routh, Jonathan C.
2015-01-01
Analysis of data from large administrative databases and patient registries is increasingly being used to study childhood cancer care, although the value of these data sources remains unclear to many clinicians. Interpretation of large databases requires a thorough understanding of how the dataset was designed, how data were collected, and how to assess data quality. This review will detail the role of administrative databases and registry databases for the study of childhood cancer, tools to maximize information from these datasets, and recommendations to improve the use of these databases for the study of pediatric oncology. PMID:25807938
The Lifeways Cross-Generation Study: design, recruitment and data management considerations.
O'Mahony, D; Fallon, U B; Hannon, F; Kloeckner, K; Avalos, G; Murphy, A W; Kelleher, C C
2007-09-01
The Lifeways Cross-Generation Cohort Study was first established in 2001 and is a unique longitudinal database in Ireland, currently with over three and a half thousand family participants derived from 1124 mothers recruited initially during pregnancy, mainly during 2002. The database comprises (a) baseline self-reported health data for all mothers, a third of fathers and at least one grandparent; (b) clinical hospital data at recruitment; (c) three-year follow-up data from the families' General Practitioners; and (d) linkage to hospital and vaccination databases. Data collection for the five-year follow-up with parents is underway, continuing through 2007. Because there is at present no single national/regional health information system in Ireland, original data instruments were designed to capture data directly from family members and through their hospitals and healthcare providers. A system of relational databases was designed to coordinate data capture for a complex array of study instruments and to facilitate tracking of family members at different time points.
Marklin, Richard W; Saginus, Kyle A; Seeley, Patricia; Freier, Stephen H
2010-12-01
The primary purpose of this study was to determine whether conventional anthropometric databases of the U.S. general population are applicable to the population of U.S. electric utility field-workers. On the basis of anecdotal observations, field-workers for electric power utilities were thought to be generally taller and larger than the general population. However, there were no anthropometric data available on this population, and it was not known whether the conventional anthropometric databases could be used to design for this population. For this study, 3 standing and 11 sitting anthropometric measurements were taken from 187 male field-workers from three electric power utilities located in the upper Midwest of the United States and in Southern California. The mean and percentile anthropometric data from field-workers were compared with seven well-known conventional anthropometric databases for North American males (United States, Canada, and Mexico). In general, the male field-workers were taller and heavier than the people in the reference databases for U.S. males. The field-workers were up to 2.3 cm taller and 10 kg to 18 kg heavier than the averages of the reference databases. This study was justified, as it showed that the conventional anthropometric databases of the general population underestimated the size of electric utility field-workers, particularly with respect to weight. When designing vehicles and tools for electric utility field-workers, designers and ergonomists should consider the population being designed for and the data from this study to maximize safety, minimize risk of injuries, and optimize performance.
CHAD User's Guide: Extracting Human Activity Information from CHAD on the PC
User manual that includes tutorials, what's inside the CHAD databases, background on individual studies in CHAD, using data from individual studies, caveats, problems, notes, and database design and development.
Use of Software Tools in Teaching Relational Database Design.
ERIC Educational Resources Information Center
McIntyre, D. R.; And Others
1995-01-01
Discusses the use of state-of-the-art software tools in teaching a graduate, advanced, relational database design course. Results indicated a positive student response to the prototype of expert systems software and a willingness to utilize this new technology both in their studies and in future work applications. (JKP)
ERIC Educational Resources Information Center
American Society for Information Science, Washington, DC.
This document contains abstracts of papers on database design and management which were presented at the 1986 mid-year meeting of the American Society for Information Science (ASIS). Topics considered include: knowledge representation in a bilingual art history database; proprietary database design; relational database design; in-house databases;…
Reinforcement learning interfaces for biomedical database systems.
Rudowsky, I; Kulyba, O; Kunin, M; Parsons, S; Raphan, T
2006-01-01
Studies of neural function that are carried out in different laboratories and that address different questions use a wide range of descriptors for data storage, depending on the laboratory and the individuals that input the data. A common approach to describe non-textual data that are referenced through a relational database is to use metadata descriptors. We have recently designed such a prototype system, but to maintain efficiency and a manageable metadata table, free formatted fields were designed as table entries. The database interface application utilizes an intelligent agent to improve integrity of operation. The purpose of this study was to investigate how reinforcement learning algorithms can assist the user in interacting with the database interface application that has been developed to improve the performance of the system.
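The abstract does not name the learning algorithm, so the sketch below guesses at the simplest plausible reading: an epsilon-greedy bandit that learns which metadata descriptor to suggest for a free-formatted field, treating user acceptance as the reward. The descriptor names are invented.

```python
import random

# Guess at the simplest plausible reading of the paper's idea: an
# epsilon-greedy bandit that learns which metadata descriptor to suggest
# for a free-formatted field, with user acceptance as the reward.
# Descriptor names are invented for illustration.
class SuggestionAgent:
    def __init__(self, descriptors, epsilon=0.1):
        self.q = {d: 0.0 for d in descriptors}   # estimated acceptance rate
        self.n = {d: 0 for d in descriptors}     # times each was suggested
        self.epsilon = epsilon

    def suggest(self):
        if random.random() < self.epsilon:       # explore occasionally
            return random.choice(list(self.q))
        return max(self.q, key=self.q.get)       # otherwise exploit

    def feedback(self, descriptor, accepted):
        """Incremental mean update; accepted is 1 (kept) or 0 (rejected)."""
        self.n[descriptor] += 1
        self.q[descriptor] += (accepted - self.q[descriptor]) / self.n[descriptor]

agent = SuggestionAgent(["eye_movement", "head_velocity", "gain"])
for _ in range(200):                             # simulated entry sessions
    d = agent.suggest()
    agent.feedback(d, 1 if d == "head_velocity" else 0)
print(agent.q)                                   # learns to favor the accepted one
```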
Medication safety research by observational study design.
Lao, Kim S J; Chui, Celine S L; Man, Kenneth K C; Lau, Wallis C Y; Chan, Esther W; Wong, Ian C K
2016-06-01
Observational studies have been recognised to be essential for investigating the safety profile of medications. Numerous observational studies have been conducted on the platform of large population databases, which provide adequate sample size and follow-up length to detect infrequent and/or delayed clinical outcomes. Cohort and case-control are well-accepted traditional methodologies for hypothesis testing, while within-individual study designs are developing and evolving, addressing previous known methodological limitations to reduce confounding and bias. Respective examples of observational studies of different study designs using medical databases are shown. Methodology characteristics, study assumptions, strengths and weaknesses of each method are discussed in this review.
Factors Influencing Error Recovery in Collections Databases: A Museum Case Study
ERIC Educational Resources Information Center
Marty, Paul F.
2005-01-01
This article offers an analysis of the process of error recovery as observed in the development and use of collections databases in a university museum. It presents results from a longitudinal case study of the development of collaborative systems and practices designed to reduce the number of errors found in the museum's databases as museum…
Preparing College Students To Search Full-Text Databases: Is Instruction Necessary?
ERIC Educational Resources Information Center
Riley, Cheryl; Wales, Barbara
Full-text databases allow Central Missouri State University's clients to access some of the serials that libraries have had to cancel due to escalating subscription costs; EbscoHost, the subject of this study, is one such database. The database is available free to all Missouri residents. A survey was designed consisting of 21 questions intended…
PathwayAccess: CellDesigner plugins for pathway databases.
Van Hemert, John L; Dickerson, Julie A
2010-09-15
CellDesigner provides a user-friendly interface for graphical biochemical pathway description. Many pathway databases are not directly exportable to CellDesigner models. PathwayAccess is an extensible suite of CellDesigner plugins, which connect CellDesigner directly to pathway databases using respective Java application programming interfaces. The process is streamlined for creating new PathwayAccess plugins for specific pathway databases. Three PathwayAccess plugins, MetNetAccess, BioCycAccess and ReactomeAccess, directly connect CellDesigner to the pathway databases MetNetDB, BioCyc and Reactome. PathwayAccess plugins enable CellDesigner users to expose pathway data to analytical CellDesigner functions, curate their pathway databases and visually integrate pathway data from different databases using standard Systems Biology Markup Language and Systems Biology Graphical Notation. Implemented in Java, PathwayAccess plugins run with CellDesigner version 4.0.1 and were tested on Ubuntu Linux, Windows XP and 7, and MacOSX. Source code, binaries, documentation and video walkthroughs are freely available at http://vrac.iastate.edu/~jlv.
Designing an Integrated System of Databases: A Workstation for Information Seekers.
ERIC Educational Resources Information Center
Micco, Mary; Smith, Irma
1987-01-01
Proposes a framework for the design of a full function workstation for information retrieval based on study of information seeking behavior. A large amount of local storage of the CD-ROM jukebox variety and full networking capability to both local and external databases are identified as requirements of the prototype. (MES)
The OAuth 2.0 Web Authorization Protocol for the Internet Addiction Bioinformatics (IABio) Database.
Choi, Jeongseok; Kim, Jaekwon; Lee, Dong Kyun; Jang, Kwang Soo; Kim, Dai-Jin; Choi, In Young
2016-03-01
Internet addiction (IA) has become a widespread and problematic phenomenon as smart devices pervade society. Moreover, internet gaming disorder leads to increases in social expenditures for both individuals and nations alike. Although the prevention and treatment of IA are getting more important, the diagnosis of IA remains problematic. Understanding the neurobiological mechanism of behavioral addictions is essential for the development of specific and effective treatments. Although there are many databases related to other addictions, a database for IA has not been developed yet. In addition, bioinformatics databases, especially genetic databases, require a high level of security and should be designed based on medical information standards. In this respect, our study proposes the OAuth standard protocol for database access authorization. The proposed IA Bioinformatics (IABio) database system is based on internet user authentication, which is a guideline for medical information standards, and uses OAuth 2.0 for access control technology. This study designed and developed the system requirements and configuration. The OAuth 2.0 protocol is expected to establish the security of personal medical information and be applied to genomic research on IA.
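The authorization-code grant of OAuth 2.0 (RFC 6749), which the paper adopts, follows a fixed two-step shape that can be sketched briefly. The endpoint URLs, client credentials and records API below are placeholders, not the actual IABio deployment; only the protocol fields come from the standard.

```python
import requests

# Sketch of the OAuth 2.0 authorization-code grant (RFC 6749) gating
# access to a research database. All URLs and credentials are
# hypothetical placeholders, not the actual IABio deployment.
TOKEN_URL = "https://iabio.example.org/oauth/token"       # hypothetical

def exchange_code_for_token(auth_code, client_id, client_secret, redirect_uri):
    """Step 2 of the grant: swap the one-time code for an access token."""
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": auth_code,
        "redirect_uri": redirect_uri,
        "client_id": client_id,
        "client_secret": client_secret,
    }, timeout=10)
    resp.raise_for_status()
    return resp.json()["access_token"]

def fetch_protected_record(token, record_id):
    """Present the bearer token with every database request."""
    return requests.get(
        f"https://iabio.example.org/api/records/{record_id}",  # hypothetical
        headers={"Authorization": f"Bearer {token}"}, timeout=10)
```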
CHAD USER’S GUIDE: Extracting Human Activity Information from CHAD on the PC
The Consolidated Human Activity Database (CHAD) User Guide offers a short tutorial about CHAD Access; background on the CHAD Databases; background on individual studies in CHAD; and information about using CHAD data, caveats, known problems, notes, and database design and develop...
ERIC Educational Resources Information Center
Cavaleri, Piero
2008-01-01
Purpose: The purpose of this paper is to describe the use of AJAX for searching the Biblioteche Oggi database of bibliographic records. Design/methodology/approach: The paper is a demonstration of how bibliographic database single page interfaces allow the implementation of more user-friendly features for social and collaborative tasks. Findings:…
ERIC Educational Resources Information Center
George, Carole A.
This document describes a study that designed, developed, and evaluated the Pennsylvania school-district database program for use by educational decision makers. The database contains current information developed from data provided by the Pennsylvania Department of Education and describes each of the 500 active school districts in the state. PEP…
Archetype relational mapping - a practical openEHR persistence solution.
Wang, Li; Min, Lingtong; Wang, Rui; Lu, Xudong; Duan, Huilong
2015-11-05
One of the primary obstacles to the widespread adoption of openEHR methodology is the lack of practical persistence solutions for future-proof electronic health record (EHR) systems as described by the openEHR specifications. This paper presents an archetype relational mapping (ARM) persistence solution for the archetype-based EHR systems to support healthcare delivery in the clinical environment. First, the data requirements of the EHR systems are analysed and organized into archetype-friendly concepts. The Clinical Knowledge Manager (CKM) is queried for matching archetypes; when necessary, new archetypes are developed to reflect concepts that are not encompassed by existing archetypes. Next, a template is designed for each archetype to apply constraints related to the local EHR context. Finally, a set of rules is designed to map the archetypes to data tables and provide data persistence based on the relational database. A comparison study was conducted to investigate the differences among the conventional database of an EHR system from a tertiary Class A hospital in China, the generated ARM database, and the Node + Path database. Five data-retrieving tests were designed based on clinical workflow to retrieve exams and laboratory tests. Additionally, two patient-searching tests were designed to identify patients who satisfy certain criteria. The ARM database achieved better performance than the conventional database in three of the five data-retrieving tests, but was less efficient in the remaining two tests. The time difference of query executions conducted by the ARM database and the conventional database is less than 130 %. The ARM database was approximately 6-50 times more efficient than the conventional database in the patient-searching tests, while the Node + Path database requires far more time than the other two databases to execute both the data-retrieving and the patient-searching tests. The ARM approach is capable of generating relational databases using archetypes and templates for archetype-based EHR systems, thus successfully adapting to changes in data requirements. ARM performance is similar to that of conventionally-designed EHR systems, and can be applied in a practical clinical environment. System components such as ARM can greatly facilitate the adoption of openEHR architecture within EHR systems.
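A drastically simplified sketch of the ARM idea, mapping an archetype to a relational table by rule. Real openEHR archetypes are ADL documents with nested structures and constraints; here an "archetype" is reduced to a name plus a list of typed fields, and the type mapping is a guess at one plausible rule rather than the paper's actual rule set.

```python
# Drastically simplified sketch of an archetype-to-table mapping rule.
# Real openEHR archetypes are ADL documents with nested structures and
# constraints; this reduces an "archetype" to a name plus typed fields.
TYPE_MAP = {"DV_TEXT": "TEXT", "DV_QUANTITY": "REAL", "DV_DATE_TIME": "TIMESTAMP"}

def archetype_to_ddl(name, fields):
    """Map one flattened archetype to one relational table."""
    cols = ",\n  ".join(f"{col} {TYPE_MAP[dtype]}" for col, dtype in fields)
    return (f"CREATE TABLE {name} (\n"
            f"  id INTEGER PRIMARY KEY,\n"
            f"  ehr_id TEXT NOT NULL,\n"
            f"  {cols}\n)")

blood_pressure = [("systolic", "DV_QUANTITY"),
                  ("diastolic", "DV_QUANTITY"),
                  ("measured_at", "DV_DATE_TIME")]
print(archetype_to_ddl("obs_blood_pressure", blood_pressure))
```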
Designing a Zoo-Based Endangered Species Database.
ERIC Educational Resources Information Center
Anderson, Christopher L.
1989-01-01
Presented is a class activity that uses the database feature of the Appleworks program to create a database from which students may study endangered species. The use of a local zoo as a base of information about the animals is suggested. Procedures and follow-up activities are included. (CW)
Modeling Real-Time Applications with Reusable Design Patterns
NASA Astrophysics Data System (ADS)
Rekhis, Saoussen; Bouassida, Nadia; Bouaziz, Rafik
Real-Time (RT) applications, which manipulate large volumes of data, need to be managed with RT databases that deal with time-constrained data and time-constrained transactions. In spite of their numerous advantages, RT database development remains a complex task, since developers must study many design issues related to the RT domain. In this paper, we tackle this problem by proposing RT design patterns that allow the modeling of structural and behavioral aspects of RT databases. We show how RT design patterns can provide design assistance through architecture reuse for recurring design problems. In addition, we present a UML profile that represents the patterns and further facilitates their reuse. This profile proposes, on the one hand, UML extensions allowing the modeling of pattern variability in the RT context and, on the other hand, extensions inspired by the MARTE (Modeling and Analysis of Real-Time Embedded systems) profile.
Anthropometry of Brazilian Air Force pilots.
da Silva, Gilvan V; Halpern, Manny; Gordon, Claire C
2017-10-01
Anthropometric data are essential for the design of military equipment including sizing of aircraft cockpits and personal gear. Currently, there are no anthropometric databases specific to Brazilian military personnel. The aim of this study was to create a Brazilian anthropometric database of Air Force pilots. The methods, protocols, descriptions, definitions, landmarks, tools and measurements procedures followed the instructions outlined in Measurer's Handbook: US Army and Marine Corps Anthropometric Surveys, 2010-2011 - NATICK/TR-11/017. The participants were measured countrywide, in all five Brazilian Geographical Regions. Thirty-nine anthropometric measurements related to cockpit design were selected. The results of 2133 males and 206 females aged 16-52 years constitute a set of basic data for cockpit design, space arrangement issues and adjustments, protective gear and equipment design, as well as for digital human modelling. Another important implication is that this study can be considered a starting point for reducing gender bias in women's career as pilots. Practitioner Summary: This paper describes the first large-scale anthropometric survey of the Brazilian Air Force pilots and the development of the related database. This study provides critical data for improving aircraft cockpit design for ergonomics and comprehensive pilot accommodation, protective gear and uniform design, as well as digital human modelling.
Charoute, Hicham; Nahili, Halima; Abidi, Omar; Gabi, Khalid; Rouba, Hassan; Fakiri, Malika; Barakat, Abdelhamid
2014-03-01
National and ethnic mutation databases provide comprehensive information about genetic variations reported in a population or an ethnic group. In this paper, we present the Moroccan Genetic Disease Database (MGDD), a catalogue of genetic data related to diseases identified in the Moroccan population. We used the PubMed, Web of Science and Google Scholar databases to identify available articles published until April 2013. The database is designed and implemented on a three-tier model using the MySQL relational database and the PHP programming language. To date, the database contains 425 mutations and 208 polymorphisms found in 301 genes and 259 diseases. Most Mendelian diseases in the Moroccan population follow an autosomal recessive mode of inheritance (74.17%) and affect endocrine, nutritional and metabolic physiology. The MGDD database provides reference information for researchers, clinicians and health professionals through a user-friendly Web interface. Its content should be useful for improving research in human molecular genetics, disease diagnosis and the design of association studies. MGDD can be publicly accessed at http://mgdd.pasteur.ma.
2009-01-01
Background: Polymerase chain reaction (PCR) is very useful in many areas of molecular biology research. It is commonly observed that PCR success critically depends on the design of an effective primer pair. Current tools for primer design do not adequately address the problem of PCR failure due to mis-priming on target-related sequences and structural variations in the genome. Methods: We have developed an integrated graphical web-based application for primer design, called RExPrimer, written in the Python language. The software uses Primer3 as its core primer-design algorithm. Locally stored sequence information and genomic variant information were hosted on MySQL v5.0 and incorporated into RExPrimer. Results: RExPrimer provides many functionalities for improved PCR primer design. Several databases, namely annotated human SNP databases, an insertion/deletion (indel) polymorphism database, a pseudogene database, and structural genomic variation databases, were integrated into RExPrimer, enabling effective validation of the resulting primers without leaving the website. By incorporating these databases, the primers reported by RExPrimer avoid mis-priming to related sequences (e.g. pseudogenes, segmental duplications) as well as possible PCR failure caused by structural polymorphisms (SNPs, indels, and copy number variation (CNV)). To prevent mismatching caused by unexpected SNPs in the designed primers, in particular at the 3' end (SNP-in-Primer), several SNP databases covering a broad range of population-specific SNP information are used to report SNPs present in the primer sequences. Population-specific SNP information also helps customize primer design for a specific population. Furthermore, RExPrimer offers a user-friendly graphical interface through the use of scalable vector graphic images that intuitively present the resulting primers along with the corresponding gene structure. In this study, we demonstrated the program's effectiveness in successfully generating primers for strongly homologous sequences. Conclusion: The improvements for primer design incorporated into RExPrimer were demonstrated to be effective in designing primers for challenging PCR experiments. The integration of SNP and structural variation databases allows for robust primer design for a variety of PCR applications, irrespective of the sequence complexity in the region of interest. This software is freely available at http://www4a.biotec.or.th/rexprimer. PMID:19958502
Automating Relational Database Design for Microcomputer Users.
ERIC Educational Resources Information Center
Pu, Hao-Che
1991-01-01
Discusses issues involved in automating the relational database design process for microcomputer users and presents a prototype of a microcomputer-based system (RA, Relation Assistant) that is based on expert systems technology and helps avoid database maintenance problems. Relational database design is explained and the importance of easy input…
Earlinet database: new design and new products for a wider use of aerosol lidar data
NASA Astrophysics Data System (ADS)
Mona, Lucia; D'Amico, Giuseppe; Amato, Francesco; Linné, Holger; Baars, Holger; Wandinger, Ulla; Pappalardo, Gelsomina
2018-04-01
The EARLINET database is undergoing a complete reshaping to meet the wide demand for more intuitive products and the even wider demand from new initiatives such as Copernicus, the European Earth observation programme. The new design has been carried out in continuity with the past, to take advantage of the long-term database. In particular, the new structure will provide information suitable for synergy with other instruments, near-real-time (NRT) applications, validation and process studies, and climate applications.
Access to digital library databases in higher education: design problems and infrastructural gaps.
Oswal, Sushil K
2014-01-01
After defining accessibility and usability, the author offers a broad survey of the research studies on digital content databases, which have thus far depended primarily on data drawn from studies conducted by sighted researchers with non-disabled users employing screen readers and low-vision devices. This article aims to produce a detailed description of the difficulties confronted by blind screen reader users with the online library databases which now hold most of the academic, peer-reviewed journal and periodical content essential for research and teaching in higher education. The approach taken here is borrowed from descriptive ethnography, which allows the author to create a complete picture of the accessibility and usability problems faced by an experienced academic user of digital library databases and screen readers. The author provides a detailed analysis of the different aspects of accessibility issues in digital databases under several headings, with a special focus on full-text PDF files. The author emphasizes that long-term studies with actual blind screen reader users, employing both qualitative and computerized research tools, can yield meaningful data for designers and developers to improve these databases to a level where they begin to provide equal access to the blind.
How Many People Search the ERIC Database Each Day?
ERIC Educational Resources Information Center
Rudner, Lawrence
This study estimated the number of people searching the ERIC database each day. The Educational Resources Information Center (ERIC) is a national information system designed to provide ready access to an extensive body of education-related literature. Federal funds traditionally have paid for the development of the database, but not the…
Video Databases: An Emerging Tool in Business Education
ERIC Educational Resources Information Center
MacKinnon, Gregory; Vibert, Conor
2014-01-01
A video database of business-leader interviews has been implemented in the assignment work of students in a Bachelor of Business Administration program at a primarily-undergraduate liberal arts university. This action research study was designed to determine the most suitable assignment work to associate with the database in a Business Strategy…
Comparing Top-Down with Bottom-Up Approaches: Teaching Data Modeling
ERIC Educational Resources Information Center
Kung, Hsiang-Jui; Kung, LeeAnn; Gardiner, Adrian
2013-01-01
Conceptual database design is a difficult task for novice database designers, such as students, and is also therefore particularly challenging for database educators to teach. In the teaching of database design, two general approaches are frequently emphasized: top-down and bottom-up. In this paper, we present an empirical comparison of students'…
DESIGNING ENVIRONMENTAL MONITORING DATABASES FOR STATISTICAL ASSESSMENT
Databases designed for statistical analyses have characteristics that distinguish them from databases intended for general use. EMAP uses a probabilistic sampling design to collect data to produce statistical assessments of environmental conditions. In addition to supporting the ...
A dynamic clinical dental relational database.
Taylor, D; Naguib, R N G; Boulton, S
2004-09-01
The traditional approach to relational database design is based on the logical organization of data into a number of related normalized tables. One assumption is that the nature and structure of the data are known at the design stage. In the case of designing a relational database to store historical dental epidemiological data from individual clinical surveys, the structure of the data is not known until the data are presented for inclusion in the database. This paper addresses the issues involved in the theoretical design of a dynamic clinical database capable of adapting its internal table structure to accommodate clinical survey data, and presents a prototype database application capable of processing, displaying, and querying the dental data.
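One way to read the "dynamic" design is that each survey's table is created at load time from whatever columns the survey data happen to contain. The sketch below implements that reading with SQLite; the survey name and columns are invented, and the real prototype's adaptation logic is surely more involved.

```python
import sqlite3

# Sketch of the "dynamic" idea: a survey's table is created at load time
# from whatever columns the survey data happen to contain. Survey name
# and columns are invented examples.
def load_survey(db, survey_name, rows):
    """rows: list of dicts; the union of their keys defines the table."""
    columns = sorted({key for row in rows for key in row})
    db.execute(f"CREATE TABLE {survey_name} "
               f"({', '.join(c + ' TEXT' for c in columns)})")
    for row in rows:
        db.execute(f"INSERT INTO {survey_name} ({', '.join(columns)}) "
                   f"VALUES ({', '.join('?' for _ in columns)})",
                   [row.get(c) for c in columns])   # missing fields -> NULL

db = sqlite3.connect(":memory:")
load_survey(db, "survey_1998_leeds",
            [{"subject": "A01", "dmft": "3"},
             {"subject": "A02", "dmft": "0", "fluorosis": "mild"}])
print(db.execute("SELECT * FROM survey_1998_leeds").fetchall())
```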
A dedicated database system for handling multi-level data in systems biology.
Pornputtapong, Natapol; Wanichthanarak, Kwanjeera; Nilsson, Avlant; Nookaew, Intawat; Nielsen, Jens
2014-01-01
Advances in high-throughput technologies have enabled extensive generation of multi-level omics data. These data are crucial for systems biology research, though they are complex, heterogeneous, highly dynamic, incomplete and distributed among public databases. This leads to difficulties in data accessibility and often results in errors when data are merged and integrated from varied resources. Therefore, integration and management of systems biological data remain very challenging. To overcome this, we designed and developed a dedicated database system that can serve and solve the vital issues in data management and thereby facilitate data integration, modeling and analysis in systems biology within a single database. In addition, a yeast data repository was implemented as an integrated database environment operated by the database system. Two applications were implemented to demonstrate the extensibility and utilization of the system. Both illustrate how the user can access the database via the web query function and implemented scripts. These scripts are specific to two sample cases: 1) detecting the pheromone pathway in protein interaction networks; and 2) finding metabolic reactions regulated by Snf1 kinase. In this study we present the design of a database system which offers an extensible environment to efficiently capture the majority of biological entities and relations encountered in systems biology. Critical functions and control processes were designed and implemented to ensure consistent, efficient, secure and reliable transactions. The two sample cases on the integrated yeast data clearly demonstrate the value of a single database environment for systems biology research.
[Establishment of a comprehensive database for laryngeal cancer related genes and the miRNAs].
Li, Mengjiao; E, Qimin; Liu, Jialin; Huang, Tingting; Liang, Chuanyu
2015-09-01
To build a comprehensive database of laryngeal cancer related genes and miRNAs by collecting and analyzing the relevant literature, one that differs from current biological information databases with their complex and unwieldy structures by focusing on the theme of genes and miRNAs, and thereby to make research and teaching more convenient and efficient. Based on the B/S architecture, using Apache as the Web server, MySQL for the database design and PHP for the web design, a comprehensive database of laryngeal cancer related genes was established, providing gene tables, protein tables, miRNA tables and clinical information tables for patients with laryngeal cancer. The established database contains 207 laryngeal cancer related genes, 243 proteins, 26 miRNAs, and their particular information such as mutations, methylations, differential expression, and the empirical references of laryngeal cancer relevant molecules. The database can be accessed and operated via the Internet, through which browsing and retrieval of the information are performed. The database is maintained and updated regularly. The database of laryngeal cancer related genes is resource-integrated and user-friendly, providing a genetic information query tool for the study of laryngeal cancer.
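As a rough illustration of the kind of relational schema such a LAMP-stack database might use (the actual MySQL schema is not given in the abstract, so every table and column name below is a guess), consider the following sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE gene (
    gene_id     INTEGER PRIMARY KEY,
    symbol      TEXT NOT NULL,
    mutation    TEXT,    -- reported mutation, if any
    methylation TEXT,    -- reported methylation status
    reference   TEXT     -- supporting literature reference
);
CREATE TABLE protein (
    protein_id  INTEGER PRIMARY KEY,
    name        TEXT NOT NULL,
    gene_id     INTEGER REFERENCES gene(gene_id)
);
CREATE TABLE mirna (
    mirna_id       INTEGER PRIMARY KEY,
    name           TEXT NOT NULL,
    target_gene_id INTEGER REFERENCES gene(gene_id)
);
CREATE TABLE clinical_info (
    patient_id  INTEGER PRIMARY KEY,
    diagnosis   TEXT,
    stage       TEXT
);
""")
```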
Developing and Refining the Taiwan Birth Cohort Study (TBCS): Five Years of Experience
ERIC Educational Resources Information Center
Lung, For-Wey; Chiang, Tung-Liang; Lin, Shio-Jean; Shu, Bih-Ching; Lee, Meng-Chih
2011-01-01
The Taiwan Birth Cohort Study (TBCS) is the first nationwide birth cohort database in Asia designed to establish national norms of children's development. Several challenges during database development and data analysis were identified. Challenges include sampling methods, instrument development and statistical approach to missing data. The…
WASP: a Web-based Allele-Specific PCR assay designing tool for detecting SNPs and mutations
Wangkumhang, Pongsakorn; Chaichoompu, Kridsadakorn; Ngamphiw, Chumpol; Ruangrit, Uttapong; Chanprasert, Juntima; Assawamakin, Anunchai; Tongsima, Sissades
2007-01-01
Background Allele-specific (AS) polymerase chain reaction is a convenient and inexpensive method for genotyping single nucleotide polymorphisms (SNPs) and mutations. It is applied in many recent studies, including population genetics, molecular genetics and pharmacogenomics. Using existing AS primer design tools is a cumbersome process for inexperienced users, since information about the SNP/mutation must be acquired from public databases prior to the design. Furthermore, most of these tools do not offer mismatch enhancement of the designed primers, and the available web applications provide neither a user-friendly graphical input interface nor an intuitive visualization of their primer results. Results This work presents a web-based AS primer design application called WASP. This tool can efficiently design AS primers for human SNPs as well as mutations. To assist scientists with collecting the necessary information about target polymorphisms, it provides a local SNP database containing over 10 million SNPs of various populations, drawn from the public domain databases NCBI dbSNP, HapMap and JSNP. This database is tightly integrated with the tool so that users can design primers for existing SNPs without leaving the site. To guarantee the specificity of AS primers, the proposed system incorporates a primer specificity enhancement technique widely used in experimental protocols. In particular, WASP exploits differential destabilizing effects by introducing one deliberate 'mismatch' at the penultimate (second to last of the 3'-end) base of AS primers to improve the resulting primers. Furthermore, WASP offers a graphical user interface through scalable vector graphics (SVG) drawing that allows users to select SNPs and graphically visualize the designed primers and their conditions. Conclusion WASP offers a tool for designing AS primers for both SNPs and mutations. By integrating a database of known SNPs (queried by gene ID or rs number), it simplifies the otherwise awkward process of obtaining flanking sequences and other related information from public SNP databases. It takes into account the underlying destabilizing effects to ensure the effectiveness of designed primers. With its user-friendly SVG interface, WASP intuitively presents the resulting designed primers, which users can export or further adjust. This software can be freely accessed at . PMID:17697334
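The mismatch-enhancement idea is easy to illustrate. The toy function below is not WASP's actual code: it simply substitutes the penultimate 3'-end base of a primer, and the substitution map is an arbitrary placeholder, whereas in practice the replacement base would be chosen from tables of destabilizing effects.

```python
# Arbitrary substitution map for illustration only; a real tool would pick
# the replacement base according to its destabilizing effect.
SWAP = {"A": "C", "C": "A", "G": "T", "T": "G"}

def add_penultimate_mismatch(primer: str) -> str:
    """Introduce one deliberate mismatch at the second-to-last 3'-end base."""
    bases = list(primer.upper())
    bases[-2] = SWAP[bases[-2]]
    return "".join(bases)

print(add_penultimate_mismatch("ACGTACGTACGA"))  # -> ACGTACGTACTA
```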
Wendling, T; Jung, K; Callahan, A; Schuler, A; Shah, N H; Gallego, B
2018-06-03
There is growing interest in using routinely collected data from health care databases to study the safety and effectiveness of therapies in "real-world" conditions, as it can provide complementary evidence to that of randomized controlled trials. Causal inference from health care databases is challenging because the data are typically noisy, high dimensional, and most importantly, observational. It requires methods that can estimate heterogeneous treatment effects while controlling for confounding in high dimensions. Bayesian additive regression trees, causal forests, causal boosting, and causal multivariate adaptive regression splines are off-the-shelf methods that have shown good performance for estimation of heterogeneous treatment effects in observational studies of continuous outcomes. However, it is not clear how these methods would perform in health care database studies where outcomes are often binary and rare and data structures are complex. In this study, we evaluate these methods in simulation studies that recapitulate key characteristics of comparative effectiveness studies. We focus on the conditional average effect of a binary treatment on a binary outcome using the conditional risk difference as an estimand. To emulate health care database studies, we propose a simulation design where real covariate and treatment assignment data are used and only outcomes are simulated based on nonparametric models of the real outcomes. We apply this design to 4 published observational studies that used records from 2 major health care databases in the United States. Our results suggest that Bayesian additive regression trees and causal boosting consistently provide low bias in conditional risk difference estimates in the context of health care database studies. Copyright © 2018 John Wiley & Sons, Ltd.
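The conditional risk difference estimand can be made concrete with a small sketch. The snippet below is illustrative only and not from the paper: it uses a simple two-outcome-model ("T-learner") approach with gradient boosting on simulated binary data, rather than the BART or causal-boosting methods the study evaluates, and the data-generating process is invented.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))               # covariates
t = rng.integers(0, 2, size=2000)            # binary treatment
p = 1 / (1 + np.exp(-(X[:, 0] + 0.5 * t)))   # true outcome probability
y = rng.binomial(1, p)                       # binary outcome

# Fit one outcome model per treatment arm.
m1 = GradientBoostingClassifier().fit(X[t == 1], y[t == 1])
m0 = GradientBoostingClassifier().fit(X[t == 0], y[t == 0])

# Conditional risk difference: P(y=1 | x, t=1) - P(y=1 | x, t=0)
crd = m1.predict_proba(X)[:, 1] - m0.predict_proba(X)[:, 1]
print(crd[:5])
```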
Development of a Dependency Theory Toolbox for Database Design.
1987-12-01
A large body of theory for designing and studying relational databases exists in the form of published algorithms and theorems. Hand simulating these algorithms, however, can be a tedious and error prone chore. Therefore, a toolbox of algorithms and
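The toolbox itself is not reproduced here, but the flavor of the dependency-theory algorithms it would automate can be shown with the standard attribute-closure computation for functional dependencies, a building block of key-finding and normalization tests. This is a textbook sketch, not code from the report.

```python
def closure(attrs, fds):
    """Closure of a set of attributes under functional dependencies.

    fds is a list of (lhs, rhs) pairs, each a set of attributes.
    """
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            # If lhs is already derivable, everything in rhs is too.
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

fds = [({"A"}, {"B"}), ({"B"}, {"C"})]
assert closure({"A"}, fds) == {"A", "B", "C"}  # A -> B -> C
```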
ERIC Educational Resources Information Center
Li, Rui; Liu, Min
2007-01-01
The purpose of this study is to examine the potential of using computer databases as cognitive tools to share learners' cognitive load and facilitate learning in a multimedia problem-based learning (PBL) environment designed for sixth graders. Two research questions were: (a) can the computer database tool share sixth-graders' cognitive load? and…
MPD3: a useful medicinal plants database for drug designing.
Mumtaz, Arooj; Ashfaq, Usman Ali; Ul Qamar, Muhammad Tahir; Anwar, Farooq; Gulzar, Faisal; Ali, Muhammad Amjad; Saari, Nazamid; Pervez, Muhammad Tariq
2017-06-01
Medicinal plants are the main natural pools for the discovery and development of new drugs. In the modern era of computer-aided drug designing (CADD), prompt efforts are needed to design and construct useful database management systems that allow proper data storage, retrieval and management with a user-friendly interface. An inclusive database holding information about the classification, activity and ready-to-dock library of medicinal plants' phytochemicals is therefore required to assist researchers in the field of CADD. The present work was designed to merge the activities of phytochemicals from medicinal plants, their targets and literature references into a single comprehensive database named the Medicinal Plants Database for Drug Designing (MPD3). The newly designed online and downloadable MPD3 contains information about more than 5000 phytochemicals from around 1000 medicinal plants with 80 different activities, more than 900 literature references and more than 200 targets. The database should prove very useful to researchers engaged in medicinal plants research, CADD and drug discovery/development, offering ease of operation and increased efficiency. MPD3 is a comprehensive database which provides most of the information related to medicinal plants on a single platform. MPD3 is freely available at: http://bioinform.info .
Spatial Designation of Critical Habitats for Endangered and Threatened Species in the United States
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tuttle, Mark A; Singh, Nagendra; Sabesan, Aarthy
Establishing biological reserves or "hot spots" for endangered and threatened species is critical to support real-world species regulatory and management problems. Geographic data on the distribution of endangered and threatened species can be used to improve ongoing efforts for species conservation in the United States. At present no spatial database exists which maps out the locations of endangered species for the US. However, spatial descriptions do exist for the habitat associated with all endangered species, though in a form not readily suitable for use in a geographic information system (GIS). In our study, the principal challenge was extracting spatial data describing these critical habitats for 472 species from over 1000 pages of the federal register. In addition, an appropriate database schema was designed to accommodate the different tiers of information associated with the species along with the confidence of designation; the interpreted location data were geo-referenced to the county enumeration unit, producing a spatial database of endangered species for the whole of the US. The significance of these critical habitat designations, the database schema and the methodologies are discussed.
The methodology of database design in organization management systems
NASA Astrophysics Data System (ADS)
Chudinov, I. L.; Osipova, V. V.; Bobrova, Y. V.
2017-01-01
The paper describes a unified methodology of database design for management information systems. Designing the conceptual information model for the domain area is the most important and labor-intensive stage in database design. Based on the proposed integrated approach to design, the main principles of developing relational databases are provided and users' information needs are considered. According to the methodology, the process of designing the conceptual information model includes three basic stages, which are defined in detail. Finally, the article describes how the results of analyzing users' information needs are applied, and the rationale for the use of classifiers.
47 CFR 0.241 - Authority delegated.
Code of Federal Regulations, 2012 CFR
2012-10-01
... database functions for unlicensed devices operating in the television broadcast bands (TV bands) as set... methods that will be used to designate TV bands database managers, to designate these database managers; to develop procedures that these database managers will use to ensure compliance with the...
47 CFR 0.241 - Authority delegated.
Code of Federal Regulations, 2013 CFR
2013-10-01
... database functions for unlicensed devices operating in the television broadcast bands (TV bands) as set... methods that will be used to designate TV bands database managers, to designate these database managers; to develop procedures that these database managers will use to ensure compliance with the...
47 CFR 0.241 - Authority delegated.
Code of Federal Regulations, 2011 CFR
2011-10-01
... database functions for unlicensed devices operating in the television broadcast bands (TV bands) as set... methods that will be used to designate TV bands database managers, to designate these database managers; to develop procedures that these database managers will use to ensure compliance with the...
A Dynamic Approach to Make CDS/ISIS Databases Interoperable over the Internet Using the OAI Protocol
ERIC Educational Resources Information Center
Jayakanth, F.; Maly, K.; Zubair, M.; Aswath, L.
2006-01-01
Purpose: A dynamic approach to making legacy databases, like CDS/ISIS, interoperable with OAI-compliant digital libraries (DLs). Design/methodology/approach: There are many bibliographic databases that are being maintained using legacy database systems. CDS/ISIS is one such legacy database system. It was designed and developed specifically for…
Relational Database Design in Information Science Education.
ERIC Educational Resources Information Center
Brooks, Terrence A.
1985-01-01
Reports on database management system (DBMS) applications designed by library school students for the university community at the University of Iowa. Three DBMS design issues are examined: synthesis of relations, analysis of relations (normalization procedure), and data dictionary usage. Database planning prior to automation using the data dictionary approach…
Design and implementation of a twin-family database for behavior genetics and genomics studies.
Boomsma, Dorret I; Willemsen, Gonneke; Vink, Jacqueline M; Bartels, Meike; Groot, Paul; Hottenga, Jouke Jan; van Beijsterveldt, C E M Toos; Stroet, Therese; van Dijk, Rob; Wertheim, Rien; Visser, Marco; van der Kleij, Frank
2008-06-01
In this article we describe the design and implementation of a database for extended twin families. The database does not focus on probands or on index twins, as this approach becomes problematic when larger multigenerational families are included, when more than one set of multiples is present within a family, or when families turn out to be part of a larger pedigree. Instead, we present an alternative approach that uses a highly flexible notion of persons and relations. The relations among the subjects in the database have a one-to-many structure, are user-definable and extendible and support arbitrarily complicated pedigrees. Some additional characteristics of the database are highlighted, such as the storage of historical data, predefined expressions for advanced queries, output facilities for individuals and relations among individuals and an easy-to-use multi-step wizard for contacting participants. This solution presents a flexible approach to accommodate pedigrees of arbitrary size, multiple biological and nonbiological relationships among participants and dynamic changes in these relations that occur over time, which can be implemented for any type of multigenerational family study.
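A minimal sketch of the person/relation approach follows; the table and column names are illustrative, not the authors' schema. Because relations are stored as rows rather than as fixed columns, arbitrarily complicated pedigrees, user-defined relation types, and changes over time fall out naturally.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE person (
    person_id INTEGER PRIMARY KEY,
    name      TEXT
);
CREATE TABLE relation_type (       -- user-definable and extendible
    type_id   INTEGER PRIMARY KEY,
    label     TEXT                 -- e.g. 'mother of', 'twin of'
);
CREATE TABLE relation (            -- one-to-many: relations are rows
    from_id    INTEGER REFERENCES person(person_id),
    to_id      INTEGER REFERENCES person(person_id),
    type_id    INTEGER REFERENCES relation_type(type_id),
    valid_from TEXT                -- supports historical (dynamic) changes
);
""")
```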
Zeni, Mary Beth
2012-03-01
The purpose of this study was to evaluate whether paediatric asthma educational intervention studies included in the Cochrane Collaboration database incorporated concepts of health literacy. Inclusion criteria were established to identify review categories in the Cochrane Collaboration database specific to paediatric asthma educational interventions. Articles that met the inclusion criteria were selected from the Cochrane Collaboration database in 2010. The health literacy definition from Healthy People 2010 was used to develop a 4-point a priori rating scale to determine the extent to which a study reported aspects of health literacy in the development of an educational intervention for parents and/or children. Five Cochrane review categories met the inclusion criteria; 75 studies were rated for health literacy content regarding educational interventions with families and children living with asthma. A priori criteria were used for the rating process. While 52 (69%) studies had no information pertaining to health literacy, 23 (31%) reported an aspect of health literacy. Although all studies maintained the rigorous standards of randomized clinical trials, no model of health literacy was reported regarding the design and implementation of interventions. While a more comprehensive health literacy model for the development of educational interventions with families and children may only have become available after the reviewed studies were conducted, general literacy levels still could have been addressed. The findings indicate a need to incorporate health literacy in the design of client-centred educational interventions and in the selection criteria of relevant Cochrane reviews. Inclusion assures that health literacy is as important as randomization and statistical analyses in the research design of educational interventions and may even assure participation of people with literacy challenges. © 2012 The Author. International Journal of Evidence-Based Healthcare © 2012 The Joanna Briggs Institute.
Vieira, Vanessa Pedrosa; De Biase, Noemi; Peccin, Maria Stella; Atallah, Alvaro Nagib
2009-06-01
To evaluate the methodological adequacy of voice and laryngeal study designs published in speech-language pathology and otorhinolaryngology journals indexed for the ISI Web of Knowledge (ISI Web) and the MEDLINE database. A cross-sectional study conducted at the Universidade Federal de São Paulo (Federal University of São Paulo). Two Brazilian speech-language pathology and otorhinolaryngology journals (Pró-Fono and Revista Brasileira de Otorrinolaringologia) and two international speech-language pathology and otorhinolaryngology journals (Journal of Voice, Laryngoscope), all dated between 2000 and 2004, were hand-searched by specialists. Subsequently, voice and larynx publications were separated, and a speech-language pathologist and otorhinolaryngologist classified 374 articles from the four journals according to objective and study design. The predominant objective contained in the articles was that of primary diagnostic evaluation (27%), and the most frequent study design was case series (33.7%). A mere 7.8% of the studies were designed adequately with respect to the stated objectives. There was no statistical difference in the methodological quality of studies indexed for the ISI Web and the MEDLINE database. The studies published in both national journals, indexed for the MEDLINE database, and international journals, indexed for the ISI Web, demonstrate weak methodology, with research poorly designed to meet the proposed objectives. There is much scientific work to be done in order to decrease uncertainty in the field analysed.
Initiation of a Database of CEUS Ground Motions for NGA East
NASA Astrophysics Data System (ADS)
Cramer, C. H.
2007-12-01
The Nuclear Regulatory Commission has funded the first stage of development of a database of central and eastern US (CEUS) broadband and accelerograph records, along the lines of the existing Next Generation Attenuation (NGA) database for active tectonic areas. This database will form the foundation of an NGA East project for the development of CEUS ground-motion prediction equations that include the effects of soils. This initial effort covers the development of a database design and the beginning of data collection to populate the database. It also includes some processing for important source parameters (Brune corner frequency and stress drop) and site parameters (kappa, Vs30). Besides collecting appropriate earthquake recordings and information, existing information about site conditions at recording sites will also be gathered, including geology and geotechnical information. The long-range goal of the database development is to complete the database and make it available in 2010. The database design is centered on CEUS ground motion information needs but is built on the Pacific Earthquake Engineering Research Center's (PEER) NGA experience. Documentation from the PEER NGA website was reviewed and relevant fields incorporated into the CEUS database design. CEUS database tables include ones for earthquake, station, component, record, and references. As was done for NGA, a CEUS ground-motion flat file of key information will be extracted from the CEUS database for use in attenuation relation development. A short report on the CEUS database and several initial design-definition files are available at https://umdrive.memphis.edu:443/xythoswfs/webui/_xy-7843974_docstore1. Comments and suggestions on the database design can be sent to the author. More details will be presented in a poster at the meeting.
Lee, Howard; Chapiro, Julius; Schernthaner, Rüdiger; Duran, Rafael; Wang, Zhijun; Gorodetski, Boris; Geschwind, Jean-François; Lin, MingDe
2015-04-01
The objective of this study was to demonstrate that an intra-arterial liver therapy clinical research database system is a more workflow efficient and robust tool for clinical research than a spreadsheet storage system. The database system could be used to generate clinical research study populations easily with custom search and retrieval criteria. A questionnaire was designed and distributed to 21 board-certified radiologists to assess current data storage problems and clinician reception to a database management system. Based on the questionnaire findings, a customized database and user interface system were created to perform automatic calculations of clinical scores including staging systems such as the Child-Pugh and Barcelona Clinic Liver Cancer, and facilitates data input and output. Questionnaire participants were favorable to a database system. The interface retrieved study-relevant data accurately and effectively. The database effectively produced easy-to-read study-specific patient populations with custom-defined inclusion/exclusion criteria. The database management system is workflow efficient and robust in retrieving, storing, and analyzing data. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.
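As an example of the automatic score calculation such a system performs, a Child-Pugh computation can be sketched as below. The point assignments follow the commonly published criteria, but the code is illustrative and not the system's actual implementation.

```python
def child_pugh(bilirubin_mg_dl, albumin_g_dl, inr, ascites, encephalopathy):
    """Child-Pugh score from five parameters.

    ascites / encephalopathy: 0 = none, 1 = mild / grade I-II,
    2 = moderate-severe / grade III-IV.
    """
    pts = 0
    pts += 1 if bilirubin_mg_dl < 2 else 2 if bilirubin_mg_dl <= 3 else 3
    pts += 1 if albumin_g_dl > 3.5 else 2 if albumin_g_dl >= 2.8 else 3
    pts += 1 if inr < 1.7 else 2 if inr <= 2.3 else 3
    pts += ascites + 1
    pts += encephalopathy + 1
    grade = "A" if pts <= 6 else "B" if pts <= 9 else "C"
    return pts, grade

print(child_pugh(1.5, 3.8, 1.2, 0, 0))  # -> (5, 'A')
```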
47 CFR 0.241 - Authority delegated.
Code of Federal Regulations, 2014 CFR
2014-10-01
... individual database managers; and to perform other functions as needed for the administration of the TV bands... database functions for unlicensed devices operating in the television broadcast bands (TV bands) as set... methods that will be used to designate TV bands database managers, to designate these database managers...
Aerodynamic Optimization of Rocket Control Surface Geometry Using Cartesian Methods and CAD Geometry
NASA Technical Reports Server (NTRS)
Nelson, Andrea; Aftosmis, Michael J.; Nemec, Marian; Pulliam, Thomas H.
2004-01-01
Aerodynamic design is an iterative process involving geometry manipulation and complex computational analysis subject to physical constraints and aerodynamic objectives. A design cycle consists of first establishing the performance of a baseline design, which is usually created with low-fidelity engineering tools, and then progressively optimizing the design to maximize its performance. Optimization techniques have evolved from relying exclusively on designer intuition and insight in traditional trial-and-error methods, to sophisticated local and global search methods. Recent attempts at automating the search through a large design space with formal optimization methods include both database-driven and direct evaluation schemes. Databases are being used in conjunction with surrogate and neural network models as a basis on which to run optimization algorithms. Optimization algorithms are also being driven by the direct evaluation of objectives and constraints using high-fidelity simulations. Surrogate methods use data points obtained from simulations, and possibly gradients evaluated at the data points, to create mathematical approximations of a database. Neural network models work in a similar fashion, using a number of high-fidelity database calculations as training iterations to create a database model. Optimal designs are obtained by coupling an optimization algorithm to the database model. Evaluation of the current best design then either yields a new local optimum and/or increases the fidelity of the approximation model for the next iteration. Surrogate methods have also been developed that iterate on the selection of data points to decrease the uncertainty of the approximation model prior to searching for an optimal design. The database approximation models for each of these cases, however, become computationally expensive with increasing dimensionality. Thus the method of using optimization algorithms to search a database model becomes problematic as the number of design variables is increased.
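The database-driven surrogate loop described above can be sketched in a few lines. This is a generic toy: the stand-in objective, the sample size, and the use of radial basis functions are all assumptions, not the paper's setup.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

def expensive_sim(x):
    """Stand-in for a high-fidelity CFD evaluation."""
    return (x[0] - 0.3) ** 2 + (x[1] + 0.1) ** 2

# Database of previously evaluated design points.
X = np.random.default_rng(1).uniform(-1, 1, size=(40, 2))
y = np.array([expensive_sim(x) for x in X])

# Fit an approximation model to the database, then search it cheaply.
surrogate = RBFInterpolator(X, y)
res = minimize(lambda x: surrogate(x[None, :])[0],
               x0=np.zeros(2), method="Nelder-Mead")
print(res.x)  # candidate optimum, to be verified with a high-fidelity run
```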
Database Design to Ensure Anonymous Study of Medical Errors: A Report from the ASIPS collaborative
Pace, Wilson D.; Staton, Elizabeth W.; Higgins, Gregory S.; Main, Deborah S.; West, David R.; Harris, Daniel M.
2003-01-01
Medical error reporting systems are important information sources for designing strategies to improve the safety of health care. Applied Strategies for Improving Patient Safety (ASIPS) is a multi-institutional, practice-based research project that collects and analyzes data on primary care medical errors and develops interventions to reduce error. The voluntary ASIPS Patient Safety Reporting System captures anonymous and confidential reports of medical errors. Confidential reports, which are quickly de-identified, provide better detail than do anonymous reports; however, concerns exist about the confidentiality of those reports should the database be subject to legal discovery or other security breaches. Standard database elements, for example, serial ID numbers, date/time stamps, and backups, could enable an outsider to link an ASIPS report to a specific medical error. The authors present the design and implementation of a database and administrative system that reduce this risk, facilitate research, and maintain near anonymity of the events, practices, and clinicians. PMID:12925548
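Two of the design ideas the abstract alludes to, non-serial identifiers and coarsened timestamps, can be illustrated with a short hypothetical sketch; this is not the ASIPS implementation.

```python
import secrets
from datetime import datetime

def anonymous_report_id() -> str:
    """Random, non-sequential identifier: no ordering to link reports by."""
    return secrets.token_hex(8)

def coarsen_timestamp(dt: datetime) -> str:
    """Keep month granularity only, so a report cannot be tied to an event."""
    return dt.strftime("%Y-%m")

print(anonymous_report_id(), coarsen_timestamp(datetime(2003, 4, 17, 9, 30)))
```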
Database System Design and Implementation for Marine Air-Traffic-Controller Training
2017-06-01
NAVAL POSTGRADUATE SCHOOL, MONTEREY, CALIFORNIA. Approved for public release; distribution is unlimited. This project focused on the design, development, and implementation of a centralized
Team X Spacecraft Instrument Database Consolidation
NASA Technical Reports Server (NTRS)
Wallenstein, Kelly A.
2005-01-01
In the past decade, many changes have been made to Team X's process of designing each spacecraft, with the purpose of making the overall procedure more efficient over time. One such improvement is the use of information databases from previous missions, designs, and research. By referring to these databases, members of the design team can locate relevant instrument data and significantly reduce the total time they spend on each design. The files in these databases were stored in several different formats with various levels of accuracy. During the past 2 months, efforts have been made in an attempt to combine and organize these files. The main focus was in the Instruments department, where spacecraft subsystems are designed based on mission measurement requirements. A common database was developed for all instrument parameters using Microsoft Excel to minimize the time and confusion experienced when searching through files stored in several different formats and locations. By making this collection of information more organized, the files within them have become more easily searchable. Additionally, the new Excel database offers the option of importing its contents into a more efficient database management system in the future. This potential for expansion enables the database to grow and acquire more search features as needed.
Developing Visualization Support System for Teaching/Learning Database Normalization
ERIC Educational Resources Information Center
Folorunso, Olusegun; Akinwale, AdioTaofeek
2010-01-01
Purpose: In tertiary institution, some students find it hard to learn database design theory, in particular, database normalization. The purpose of this paper is to develop a visualization tool to give students an interactive hands-on experience in database normalization process. Design/methodology/approach: The model-view-controller architecture…
Applications of GIS and database technologies to manage a Karst Feature Database
Gao, Y.; Tipping, R.G.; Alexander, E.C.
2006-01-01
This paper describes the management of a Karst Feature Database (KFD) in Minnesota. Two sets of applications in both GIS and Database Management System (DBMS) have been developed for the KFD of Minnesota. These applications were used to manage and to enhance the usability of the KFD. Structured Query Language (SQL) was used to manipulate transactions of the database and to facilitate the functionality of the user interfaces. The Database Administrator (DBA) authorized users with different access permissions to enhance the security of the database. Database consistency and recovery are accomplished by creating data logs and maintaining backups on a regular basis. The working database provides guidelines and management tools for future studies of karst features in Minnesota. The methodology of designing this DBMS is applicable to develop GIS-based databases to analyze and manage geomorphic and hydrologic datasets at both regional and local scales. The short-term goal of this research is to develop a regional KFD for the Upper Mississippi Valley Karst and the long-term goal is to expand this database to manage and study karst features at national and global scales.
Age 60 Study. Part 1. Bibliographic Database
1994-10-01
seven of these aircraft types participated in a spectacle design study. Experimental spectacles were designed for each pilot and evaluated for... observation flight administered by observers who were uninformed of the details of the experimental design. Students and instructors also completed a critique... intraindividual lability in field-dependence-field-independence, and (4) various measurement, sampling, and experimental design concerns associated
Salary Management System for Small and Medium-sized Enterprises
NASA Astrophysics Data System (ADS)
Hao, Zhang; Guangli, Xu; Yuhuan, Zhang; Yilong, Lei
In small and medium-sized enterprises (SMEs), wage entry, calculation and totalling were done manually in the past; the data volume is quite large, processing speed is low, and errors are easy to make, resulting in low efficiency. The main purpose of this paper is to present the basis of a salary management system: establishing a scientific database and a computerized payroll system, using the computer to replace a great deal of past manual work in order to reduce duplicated staff labor and improve working efficiency. The system combines the actual needs of SMEs and, through in-depth study and practice of the C/S mode, the PowerBuilder 10.0 development tool, databases and the SQL language, completes the needs analysis, database design, and application design and development of a payroll system. Wage, department, unit and personnel database files are included in the system, which provides data management, department management, personnel management and other functions; query, add, delete and modify operations are realized through control and management of the database. The system is reasonably designed with fairly complete functions, and testing shows that it runs stably and meets the basic needs of the work.
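A minimal sketch of the kind of query such a system automates follows; the table and column names are invented, since the paper's actual schema is not shown in the abstract.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE department (dept_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE employee (
    emp_id  INTEGER PRIMARY KEY,
    name    TEXT,
    dept_id INTEGER REFERENCES department(dept_id),
    salary  REAL
);
INSERT INTO department VALUES (1, 'Sales');
INSERT INTO employee VALUES (1, 'Li', 1, 4200), (2, 'Wang', 1, 3900);
""")
# Replace a manual payroll total with a single grouped query.
totals = conn.execute(
    "SELECT d.name, SUM(e.salary) FROM employee e "
    "JOIN department d ON e.dept_id = d.dept_id GROUP BY d.dept_id"
).fetchall()
print(totals)  # [('Sales', 8100.0)]
```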
GraQL: A Query Language for High-Performance Attributed Graph Databases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chavarría-Miranda, Daniel; Castellana, Vito G.; Morari, Alessandro
Graph databases have gained increasing interest in the last few years due to the emergence of data sources that are not easily analyzable in traditional relational models, or for which a graph data model is the natural representation. To understand the design and implementation choices for an attributed graph database backend and query language, we have begun designing our own infrastructure for attributed graph databases. In this paper, we describe the design considerations of our in-memory attributed graph database system, with a particular focus on the data definition and query language components.
Achieving integration in mixed methods designs-principles and practices.
Fetters, Michael D; Curry, Leslie A; Creswell, John W
2013-12-01
Mixed methods research offers powerful tools for investigating complex processes and systems in health and health care. This article describes integration principles and practices at three levels in mixed methods research and provides illustrative examples. Integration at the study design level occurs through three basic mixed method designs-exploratory sequential, explanatory sequential, and convergent-and through four advanced frameworks-multistage, intervention, case study, and participatory. Integration at the methods level occurs through four approaches. In connecting, one database links to the other through sampling. With building, one database informs the data collection approach of the other. When merging, the two databases are brought together for analysis. With embedding, data collection and analysis link at multiple points. Integration at the interpretation and reporting level occurs through narrative, data transformation, and joint display. The fit of integration describes the extent the qualitative and quantitative findings cohere. Understanding these principles and practices of integration can help health services researchers leverage the strengths of mixed methods. © Health Research and Educational Trust.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-30
...] FDA's Public Database of Products With Orphan-Drug Designation: Replacing Non-Informative Code Names... replaced non-informative code names with descriptive identifiers on its public database of products that... on our public database with non-informative code names. After careful consideration of this matter...
A Framework for Mapping User-Designed Forms to Relational Databases
ERIC Educational Resources Information Center
Khare, Ritu
2011-01-01
In the quest for database usability, several applications enable users to design custom forms using a graphical interface, and forward engineer the forms into new databases. The path-breaking aspect of such applications is that users are completely shielded from the technicalities of database creation. Despite this innovation, the process of…
2017-10-01
...the LTRC database, comprised of nodules with a very high pretest probability of malignancy, makes these results encouraging as we are in the process of... working with the investigators to design the study, establish and support access to the clinical data and images of NLST and DECAMP, develop database
Ridyard, Colin H; Hughes, Dyfrig A
2012-01-01
Health economists frequently rely on methods based on patient recall to estimate resource utilization. Access to questionnaires and diaries, however, is often limited. This study examined the feasibility of establishing an open-access Database of Instruments for Resource-Use Measurement, identified relevant fields for data extraction, and outlined its design. An electronic survey was sent to authors of full UK economic evaluations listed in the National Health Service Economic Evaluation Database (2008-2010), authors of monographs of Health Technology Assessments (1998-2010), and subscribers to the JISCMail health economics e-mailing list. The survey included questions on piloting, validation, recall period, and data capture method. Responses were analyzed and data extracted to generate relevant fields for the database. A total of 143 responses to the survey provided data on 54 resource-use instruments for inclusion in the database. All were reliant on patient or carer recall, and a majority (47) were questionnaires. Thirty-seven were designed for self-completion by the patient, carer, or guardian, and the remainder were designed for completion by researchers or health care professionals while interviewing patients. Methods of development were diverse, particularly in areas such as the planning of resource itemization (evident in 25 instruments), piloting (25), and validation (29). On the basis of the present analysis, we developed a Web-enabled Database of Instruments for Resource-Use Measurement, accessible via www.DIRUM.org. This database may serve as a practical resource for health economists, as well as a means to facilitate further research in the area of resource-use data collection. Copyright © 2012 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Advanced transportation system studies. Alternate propulsion subsystem concepts: Propulsion database
NASA Technical Reports Server (NTRS)
Levack, Daniel
1993-01-01
The Advanced Transportation System Studies alternate propulsion subsystem concepts propulsion database interim report is presented. The objective of the database development task is to produce a propulsion database which is easy to use and modify while also being comprehensive in the level of detail available. The database is to be available on the Macintosh computer system. The task is to extend across all three years of the contract. Consequently, a significant fraction of the effort in this first year of the task was devoted to the development of the database structure to ensure a robust base for the following years' efforts. Nonetheless, significant point design propulsion system descriptions and parametric models were also produced. Each of the two propulsion databases, parametric propulsion database and propulsion system database, are described. The descriptions include a user's guide to each code, write-ups for models used, and sample output. The parametric database has models for LOX/H2 and LOX/RP liquid engines, solid rocket boosters using three different propellants, a hybrid rocket booster, and a NERVA derived nuclear thermal rocket engine.
SGDB: a database of synthetic genes re-designed for optimizing protein over-expression.
Wu, Gang; Zheng, Yuanpu; Qureshi, Imran; Zin, Htar Thant; Beck, Tyler; Bulka, Blazej; Freeland, Stephen J
2007-01-01
Here we present the Synthetic Gene Database (SGDB): a relational database that houses sequences and associated experimental information on synthetic (artificially engineered) genes from all peer-reviewed studies published to date. At present, the database comprises information from more than 200 published experiments. This resource not only provides reference material to guide experimentalists in designing new genes that improve protein expression, but also offers a dataset for analysis by bioinformaticians who seek to test ideas regarding the underlying factors that influence gene expression. The SGDB was built on the MySQL database management system. We also offer an XML schema for standardized data description of synthetic genes. Users can access the database at http://www.evolvingcode.net/codon/sgdb/index.php, or batch-download all information as XML files. Moreover, users may visually compare the coding sequences of a synthetic gene and its natural counterpart with an integrated web tool at http://www.evolvingcode.net/codon/sgdb/aligner.php, and discuss questions, findings and related information on an associated e-forum at http://www.evolvingcode.net/forum/viewforum.php?f=27.
New tools and methods for direct programmatic access to the dbSNP relational database.
Saccone, Scott F; Quan, Jiaxi; Mehta, Gaurang; Bolze, Raphael; Thomas, Prasanth; Deelman, Ewa; Tischfield, Jay A; Rice, John P
2011-01-01
Genome-wide association studies often incorporate information from public biological databases in order to provide a biological reference for interpreting the results. The dbSNP database is an extensive source of information on single nucleotide polymorphisms (SNPs) for many different organisms, including humans. We have developed free software that will download and install a local MySQL implementation of the dbSNP relational database for a specified organism. We have also designed a system for classifying dbSNP tables in terms of common tasks we wish to accomplish using the database. For each task we have designed a small set of custom tables that facilitate task-related queries and provide entity-relationship diagrams for each task composed from the relevant dbSNP tables. In order to expose these concepts and methods to a wider audience we have developed web tools for querying the database and browsing documentation on the tables and columns to clarify the relevant relational structure. All web tools and software are freely available to the public at http://cgsmd.isi.edu/dbsnpq. Resources such as these for programmatically querying biological databases are essential for viably integrating biological information into genetic association experiments on a genome-wide scale.
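Usage might look roughly like the following; the connection parameters and the table and column names are placeholders, not the actual dbSNP schema or the project's API.

```python
import mysql.connector  # pip install mysql-connector-python

# Hypothetical local MySQL mirror of dbSNP; credentials and the
# 'snp_location' table are invented for illustration.
conn = mysql.connector.connect(
    host="localhost", user="snp", password="secret", database="dbsnp_human"
)
cur = conn.cursor()
cur.execute(
    "SELECT snp_id, chr, pos FROM snp_location WHERE chr = %s LIMIT 10",
    ("1",),
)
for row in cur.fetchall():
    print(row)
```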
Key features for ATA / ATR database design in missile systems
NASA Astrophysics Data System (ADS)
Özertem, Kemal Arda
2017-05-01
Automatic target acquisition (ATA) and automatic target recognition (ATR) are two vital tasks for missile systems, and a robust detection and recognition algorithm is crucial for overall system performance. A robust target detection and recognition algorithm in turn requires an extensive image database. Automatic target recognition algorithms use the image database in the training and testing steps of the algorithm. This directly affects recognition performance, since training accuracy is driven by the quality of the image database. In addition, the performance of an automatic target detection algorithm can be measured effectively using an image database. There are two main ways to design an ATA / ATR database. The first and easy way is to use a scene generator. A scene generator can model objects by considering their material information, the atmospheric conditions, the detector type and the territory. Designing an image database with a scene generator is inexpensive and allows many different scenarios to be created quickly and easily. The major drawback of a scene generator, however, is its low fidelity, since the images are created virtually. The second and difficult way is to design the database using real-world images. Designing an image database with real-world images is far more costly and time consuming; however, it offers the high fidelity that is critical for missile algorithms. In this paper, critical concepts in ATA / ATR database design with real-world images are discussed. Each concept is discussed from the perspectives of ATA and ATR separately. For the implementation stage, some possible solutions and trade-offs for creating the database are proposed, and all proposed approaches are compared with regard to their pros and cons.
HEDS - EPA DATABASE SYSTEM FOR PUBLIC ACCESS TO HUMAN EXPOSURE DATA
Human Exposure Database System (HEDS) is an Internet-based system developed to provide public access to human-exposure-related data from studies conducted by EPA's National Exposure Research Laboratory (NERL). HEDS was designed to work with the EPA Office of Research and Devel...
NASA Technical Reports Server (NTRS)
Singh, M.
1999-01-01
Ceramic matrix composite (CMC) components are being designed, fabricated, and tested for a number of high temperature, high performance applications in aerospace and ground based systems. The critical need for and the role of reliable and robust databases for the design and manufacturing of ceramic matrix composites are presented. A number of issues related to engineering design, manufacturing technologies, joining, and attachment technologies are also discussed. Examples of various ongoing activities in the areas of composite databases, designing to codes and standards, and design for manufacturing are given.
ERIC Educational Resources Information Center
Irwin, Gretchen; Wessel, Lark; Blackman, Harvey
2012-01-01
This case describes a database redesign project for the United States Department of Agriculture's National Animal Germplasm Program (NAGP). The case provides a valuable context for teaching and practicing database analysis, design, and implementation skills, and can be used as the basis for a semester-long team project. The case demonstrates the…
16 CFR 1102.24 - Designation of confidential information.
Code of Federal Regulations, 2014 CFR
2014-01-01
... ACT REGULATIONS PUBLICLY AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE Procedural... allegedly confidential information is not placed in the database, a request for designation of confidential... publication in the Database until it makes a determination regarding confidential treatment. (e) Assistance...
16 CFR 1102.24 - Designation of confidential information.
Code of Federal Regulations, 2012 CFR
2012-01-01
... ACT REGULATIONS PUBLICLY AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE Procedural... allegedly confidential information is not placed in the database, a request for designation of confidential... publication in the Database until it makes a determination regarding confidential treatment. (e) Assistance...
Drabik, A; Sawicki, P T; Müller, D; Passon, A; Stock, S
2012-08-01
Disease management programmes (DMPs) were implemented in Germany in 2002. Their evaluation is required by law. Beyond the mandatory evaluation, a growing number of published studies evaluate the DMP for diabetes mellitus type 2 in a control-group design. As patients opt into the programme on a voluntary basis, it is necessary to adjust for the inherent selection bias between groups. The aim of this study is to review published studies which evaluate the diabetes DMP using a control-group design, with respect to the methods used. A systematic literature review of electronic databases (PUBMED, Cochrane Library, EMBASE, MEDPILOT) and a hand search of the reference lists of the relevant publications were conducted to identify studies evaluating the DMP diabetes mellitus in a control-group design. 8 studies were included in the systematic literature review. 4 studies gathered retrospective claims data from sickness funds, one used physicians' records, one used prospective data from ambulatory care, and 2 studies were based on a single patient survey. Methods used for adjustment of selection bias included exact matching, matching using propensity score methods, age-adjusted and sex-separated analysis, and adjustment in a regression model/analysis of covariance. One study did not apply adjustment methods. The intervention period ranged from 1 day to 4 years. The outcomes considered (surrogate parameters, diabetes complications, mortality, quality of life, and claims data) depended on the database. In the evaluation of the DMP diabetes mellitus based on a control-group design, neither the database nor the methods used for selection-bias adjustment were consistent across the available studies. The effectiveness of DMPs cannot be judged on the basis of this review due to the heterogeneity of study designs. To allow for a comprehensive programme evaluation, standardised minimum requirements for the evaluation of DMPs in the control-group design are required. © Georg Thieme Verlag KG Stuttgart · New York.
Design of a Multi Dimensional Database for the Archimed DataWarehouse.
Bréant, Claudine; Thurler, Gérald; Borst, François; Geissbuhler, Antoine
2005-01-01
The Archimed data warehouse project started in 1993 at the Geneva University Hospital. It has progressively integrated seven data marts (or domains of activity) archiving medical data such as Admission/Discharge/Transfer (ADT) data, laboratory results, radiology exams, diagnoses, and procedure codes. The objective of the Archimed data warehouse is to facilitate access to an integrated and coherent view of patient medical data in order to support analytical activities such as medical statistics, clinical studies, retrieval of similar cases and data mining processes. This paper discusses three principal design aspects relating to the conception of the data warehouse database: 1) the granularity of the database, which refers to the level of detail or summarization of the data; 2) the database model and architecture, describing how data will be presented to end users and how new data are integrated; and 3) the life cycle of the database, in order to ensure long-term scalability of the environment. Both the organization of patient medical data using a standardized elementary-fact representation and the use of the multidimensional model have proved to be powerful design tools for integrating data coming from the multiple heterogeneous database systems that are part of the transactional Hospital Information System (HIS). Concurrently, building the data warehouse in an incremental way has helped to control the evolution of the data content. These three design aspects bring clarity and performance regarding data access. They also provide long-term scalability to the system and resilience to further changes that may occur in the source systems feeding the data warehouse.
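The multidimensional design can be sketched as a star schema: one fine-grained fact table keyed to dimension tables. The names below are illustrative, not Archimed's actual schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_patient (patient_key INTEGER PRIMARY KEY,
                          sex TEXT, birth_year INTEGER);
CREATE TABLE dim_test    (test_key INTEGER PRIMARY KEY, test_name TEXT);
CREATE TABLE fact_lab_result (   -- finest granularity: one result per row
    patient_key INTEGER REFERENCES dim_patient(patient_key),
    test_key    INTEGER REFERENCES dim_test(test_key),
    result_date TEXT,
    value       REAL
);
INSERT INTO dim_test VALUES (1, 'creatinine');
INSERT INTO fact_lab_result VALUES (1, 1, '2004-05-01', 88.0),
                                   (1, 1, '2004-06-01', 92.0);
""")
# Typical analytical query: average value per test across all patients.
print(conn.execute("""
SELECT t.test_name, AVG(f.value)
FROM fact_lab_result f JOIN dim_test t ON f.test_key = t.test_key
GROUP BY t.test_key
""").fetchall())  # [('creatinine', 90.0)]
```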
Reengineering a database for clinical trials management: lessons for system architects.
Brandt, C A; Nadkarni, P; Marenco, L; Karras, B T; Lu, C; Schacter, L; Fisk, J M; Miller, P L
2000-10-01
This paper describes the process of enhancing Trial/DB, a database system for clinical studies management. The system's enhancements have been driven by the need to maximize the effectiveness of developer personnel in supporting numerous and diverse users, of study designers in setting up new studies, and of administrators in managing ongoing studies. Trial/DB was originally designed to work over a local area network within a single institution, and basic architectural changes were necessary to make it work over the Internet efficiently as well as securely. Further, as its use spread to diverse communities of users, changes were made to let the processes of study design and project management adapt to the working styles of the principal investigators and administrators for each study. The lessons learned in the process should prove instructive for system architects as well as managers of electronic patient record systems.
Design and Implementation of an Intelligence Database.
1984-12-01
In designing SDM, many database applications were analyzed in order to determine the structures that occur and recur in them... automatically, nor is it even known which relations can be converted to DK/NF. In spite of this, DK/NF can be exceedingly useful for practical database... goal of any design process is to produce an output design, Sout, to accurately represent Sin. Further, all the relations in Sout must satisfy
Functional and Database Architecture Design.
1983-09-26
Report submitted to the Office of Naval Research, Department of the Navy, by the Alpha/Omega Group, Inc., Harvard, MA.
Manasa, Justen; Lessells, Richard; Rossouw, Theresa; Naidu, Kevindra; Van Vuuren, Cloete; Goedhals, Dominique; van Zyl, Gert; Bester, Armand; Skingsley, Andrew; Stott, Katharine; Danaviah, Siva; Chetty, Terusha; Singh, Lavanya; Moodley, Pravi; Iwuji, Collins; McGrath, Nuala; Seebregts, Christopher J.; de Oliveira, Tulio
2014-01-01
Substantial amounts of data have been generated from patient management and academic exercises designed to better understand the human immunodeficiency virus (HIV) epidemic and design interventions to control it. A number of specialized databases have been designed to manage huge data sets from HIV cohort, vaccine, host genomic and drug resistance studies. Besides databases from cohort studies, most of the online databases contain limited curated data and are thus sequence repositories. HIV drug resistance has been shown to have a great potential to derail the progress made thus far through antiretroviral therapy. Thus, a lot of resources have been invested in generating drug resistance data for patient management and surveillance purposes. Unfortunately, most of the data currently available relate to subtype B even though >60% of the epidemic is caused by HIV-1 subtype C. A consortium of clinicians, scientists, public health experts and policy makers working in southern Africa came together and formed a network, the Southern African Treatment and Resistance Network (SATuRN), with the aim of increasing curated HIV-1 subtype C and tuberculosis drug resistance data. This article describes the HIV-1 data curation process using the SATuRN Rega database. The data curation is a manual and time-consuming process done by clinical, laboratory and data curation specialists. Access to the highly curated data sets is through applications that are reviewed by the SATuRN executive committee. Examples of research outputs from the analysis of the curated data include trends in the level of transmitted drug resistance in South Africa, analysis of the levels of acquired resistance among patients failing therapy and factors associated with the absence of genotypic evidence of drug resistance among patients failing therapy. All these studies have been important for informing first- and second-line therapy. This database is a free password-protected open source database available on www.bioafrica.net. Database URL: http://www.bioafrica.net/regadb/ PMID:24504151
Cost and Search Result Comparisons of BRS After Dark and Knowledge Index.
ERIC Educational Resources Information Center
Cloud, Gayla Staples; Hambric, Jacqueline
This two-part study was designed (1) to determine differences in the costs of searching BRS After Dark (BRS AD) and Knowledge Index (KI) generally and across ten selected databases, and (2) to determine whether there is a difference in the citations retrieved when the same search is conducted on the same database in both systems. Study methodology…
Searching Databases without Query-Building Aids: Implications for Dyslexic Users
ERIC Educational Resources Information Center
Berget, Gerd; Sandnes, Frode Eika
2015-01-01
Introduction: Few studies document the information searching behaviour of users with cognitive impairments. This paper therefore addresses the effect of dyslexia on information searching in a database with no tolerance for spelling errors and no query-building aids. The purpose was to identify effective search interface design guidelines that…
NASA Technical Reports Server (NTRS)
Finley, Gail T.
1988-01-01
This report covers the study of the relational database implementation in the NASCAD computer program system. The existing system is used primarily for computer aided design. Attention is also directed to a hidden-surface algorithm for final drawing output.
ERIC Educational Resources Information Center
Jensen, Chad D.; Cushing, Christopher C.; Aylward, Brandon S.; Craig, James T.; Sorell, Danielle M.; Steele, Ric G.
2011-01-01
Objective: This study was designed to quantitatively evaluate the effectiveness of motivational interviewing (MI) interventions for adolescent substance use behavior change. Method: Literature searches of electronic databases were undertaken in addition to manual reference searches of identified review articles. Databases searched include…
Efthimiadis, E N; Afifi, M
1996-01-01
OBJECTIVES: This study examined methods of accessing (for indexing and retrieval purposes) medical research on population groups in the major abstracting and indexing services of the health sciences literature. DESIGN: The study of diseases in specific population groups is facilitated by the indexing of both diseases and populations in a database. The MEDLINE, PsycINFO, and Embase databases were selected for the study. The published thesauri for these databases were examined to establish the vocabulary in use. Indexing terms were identified and examined as to their representation in the current literature. Terms were clustered further into groups thought to reflect an end user's perspective and to facilitate subsequent analysis. The medical literature contained in the three online databases was searched with both controlled vocabulary and natural language terms. RESULTS: The three thesauri revealed shallow pre-coordinated hierarchical structures, rather difficult-to-use terms for post-coordination, and a blurring of cultural, genetic, and racial facets of populations. Post-coordination is difficult because of the system-oriented terminology, which is intended mostly for information professionals. The terminology unintentionally restricts access by the end users who lack the knowledge needed to use the thesauri effectively for information retrieval. CONCLUSIONS: Population groups are not represented adequately in the index languages of health sciences databases. Users of these databases need to be alerted to the difficulties that may be encountered in searching for information on population groups. Information and health professionals may not be able to access the literature if they are not familiar with the indexing policies on population groups. Consequently, the study points to a problem that needs to be addressed, through either the redesign of existing systems or the design of new ones to meet the goals of Healthy People 2000 and beyond. PMID:8883987
16 CFR § 1102.24 - Designation of confidential information.
Code of Federal Regulations, 2013 CFR
2013-01-01
... SAFETY ACT REGULATIONS PUBLICLY AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE Procedural... allegedly confidential information is not placed in the database, a request for designation of confidential... publication in the Database until it makes a determination regarding confidential treatment. (e) Assistance...
Centralized database for interconnection system design. [for spacecraft
NASA Technical Reports Server (NTRS)
Billitti, Joseph W.
1989-01-01
A database application called DFACS (Database, Forms and Applications for Cabling and Systems) is described. The objective of DFACS is to improve the speed and accuracy of interconnection system information flow during the design and fabrication stages of a project, while simultaneously supporting both the horizontal (end-to-end wiring) and the vertical (wiring by connector) design stratagems used by the Jet Propulsion Laboratory (JPL) project engineering community. The DFACS architecture is centered around a centralized database and program methodology which emulates the manual design process hitherto used at JPL. DFACS has been tested and successfully applied to existing JPL hardware tasks with a resulting reduction in schedule time and costs.
In-Space Manufacturing Baseline Property Development
NASA Technical Reports Server (NTRS)
Stockman, Tom; Schneider, Judith; Prater, Tracie; Bean, Quincy; Werkheiser, Nicki
2016-01-01
The In-Space Manufacturing (ISM) project at NASA Marshall Space Flight Center currently operates a 3D FDM (fused deposition modeling) printer onboard the International Space Station. In order to enable utilization of this capability by designers, the project needs to establish characteristic material properties for materials produced using the process. This is difficult for additive manufacturing because standards and specifications do not yet exist for these technologies. Because crew time is limited, sample sizes are small, which in turn limits the applicability of traditional design-allowables approaches to developing a materials property database for designers. In this study, various approaches to developing material databases were evaluated for use by designers of space systems who wish to leverage in-space manufacturing capabilities. The study focuses on alternative statistical techniques for baseline property development to support in-space manufacturing.
Space transfer vehicle concepts and requirements study, phase 2
NASA Technical Reports Server (NTRS)
Cannon, Jeffrey H.; Vinopal, Tim; Andrews, Dana; Richards, Bill; Weber, Gary; Paddock, Greg; Maricich, Peter; Bouton, Bruce; Hagen, Jim; Kolesar, Richard
1992-01-01
This final report is a compilation of the Phase 1 and Phase 2 study findings and is intended as a Space Transfer Vehicle (STV) 'users guide' rather than an exhaustive explanation of STV design details. It provides a database for design choices in the general areas of basing, reusability, propulsion, and staging; with selection criteria based on cost, performance, available infrastructure, risk, and technology. The report is organized into the following three parts: (1) design guide; (2) STV Phase 1 Concepts and Requirements Study Summary; and (3) STV Phase 2 Concepts and Requirements Study Summary. The overall objectives of the STV study were to: (1) define preferred STV concepts capable of accommodating future exploration missions in a cost-effective manner; (2) determine the level of technology development required to perform these missions in the most cost effective manner; and (3) develop a decision database of programmatic approaches for the development of an STV concept.
16 CFR 1102.24 - Designation of confidential information.
Code of Federal Regulations, 2011 CFR
2011-01-01
... ACT REGULATIONS PUBLICLY AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE (Eff. Jan. 10, 2011... allegedly confidential information is not placed in the database, a request for designation of confidential... publication in the Database until it makes a determination regarding confidential treatment. (e) Assistance...
[Discussion of the implementation of MIMIC database in emergency medical study].
Li, Kaiyuan; Feng, Cong; Jia, Lijing; Chen, Li; Pan, Fei; Li, Tanshi
2018-05-01
This article introduces the Medical Information Mart for Intensive Care (MIMIC) database and, drawing on its features and on recent studies both domestic and overseas, outlines an approach to emergency and critical care research based on big data. We argue for the feasibility and necessity of introducing medical big data into emergency research, then discuss the role of the MIMIC database in emergency clinical studies, as well as the principles and key points of experimental design and implementation in a medical big data setting. Applying the MIMIC database to emergency medical research opens a new field for the early diagnosis, risk warning and prognosis of critical illness, although limitations remain. To meet the era of big data, an emergency medicine database suited to national conditions is needed; it would give new impetus to the development of emergency medicine.
Organizing a breast cancer database: data management.
Yi, Min; Hunt, Kelly K
2016-06-01
Developing and organizing a breast cancer database can provide data and serve as a valuable research tool for those interested in the etiology, diagnosis, and treatment of cancer. Depending on the research setting, the quality of the data can be a major issue. Assuring that the data collection process does not introduce inaccuracies helps to assure the overall quality of subsequent analyses. Data management involves the planning, development, implementation, and administration of systems for the acquisition, storage, and retrieval of data, while protecting the data with appropriate security controls. A properly designed database provides access to up-to-date, accurate information. Database design is an important component of application design: a database designed with care rewards the developer with a solid foundation on which the rest of the application can be built.
Bitsch, A; Jacobi, S; Melber, C; Wahnschaffe, U; Simetska, N; Mangelsdorf, I
2006-12-01
A database for repeated-dose toxicity data has been developed. Studies were selected for data quality: review documents and risk assessments were used to obtain a pre-screened selection of the available valid data, and chemicals were restricted to rather simple structures so that well-defined chemical categories could be formed. The database consists of three core data sets for each chemical: (1) structural features and physico-chemical data, (2) data on study design, and (3) study results. To allow consistent queries, a high degree of standardization was applied: categories and glossaries were developed for the relevant parameters. At present, the database covers 364 chemicals investigated in 1018 studies, which yielded a total of 6002 specific effects. Standard queries have been developed that allow analysis of the influence of structural features or physico-chemical data on LOELs, target organs and effects. Furthermore, the database can be used as an expert system. First queries have shown it to be a very valuable tool.
Creation of the NaSCoRD Database
DOE Office of Scientific and Technical Information (OSTI.GOV)
Denman, Matthew R.; Jankovsky, Zachary Kyle; Stuart, William
This report was written as part of a United States Department of Energy (DOE), Office of Nuclear Energy, Advanced Reactor Technologies program funded project to re-create the capabilities of the legacy Centralized Reliability Database Organization (CREDO) database. The CREDO database provided a record of component design and performance documentation across various systems that used sodium as a working fluid. Regaining this capability will allow the DOE complex and the domestic sodium reactor industry to better understand how previous systems were designed and built for use in improving the design and operations of future loops. The contents of this report include: an overview of the current state of domestic sodium reliability databases; a summary of the ongoing effort to improve, understand, and process the CREDO information; a summary of the initial efforts to develop a unified sodium reliability database called the Sodium System Component Reliability Database (NaSCoRD); and an explanation of how potential users can access the domestic sodium reliability databases and of the type of information that can be accessed from them.
Teachers as Designers: Multimodal Immersion and Strategic Reading on the Internet
ERIC Educational Resources Information Center
Dalton, Bridget; Smith, Blaine E.
2012-01-01
This study examined teachers' literacy and technology integration in their design of Internet-based lessons for Grade 1-6 students using a tool that scaffolds the design process to focus on Internet resources and reading strategies. Twenty-six teachers' lessons on a public database were analyzed for design orientation, goals, curricular…
Burstyn, I; Kromhout, H; Cruise, P J; Brennan, P
2000-01-01
The objective of this project was to construct a database of exposure measurements which would be used to retrospectively assess the intensity of various exposures in an epidemiological study of cancer risk among asphalt workers. The database was developed as a stand-alone Microsoft Access 2.0 application, which could work in each of the national centres. Exposure data included in the database comprised measurements of exposure levels, plus supplementary information on production characteristics which was analogous to that used to describe companies enrolled in the study. The database has been successfully implemented in eight countries, demonstrating the flexibility and data security features adequate to the task. The database allowed retrieval and consistent coding of 38 data sets of which 34 have never been described in peer-reviewed scientific literature. We were able to collect most of the data intended. As of February 1999 the database consisted of 2007 sets of measurements from persons or locations. The measurements appeared to be free from any obvious bias. The methodology embodied in the creation of the database can be usefully employed to develop exposure assessment tools in epidemiological studies.
ERIC Educational Resources Information Center
Spink, Amanda
1995-01-01
This study uses the human approach to examine the sources and effectiveness of search terms selected during 40 mediated interactive database searches and focuses on determining the retrieval effectiveness of search terms identified by users and intermediaries from retrieved items during term relevance feedback. (Author/JKP)
The Cocoa Shop: A Database Management Case
ERIC Educational Resources Information Center
Pratt, Renée M. E.; Smatt, Cindi T.
2015-01-01
This is an example of a real-world applicable case study, which includes background information on a small local business (i.e., TCS), description of functional business requirements, and sample data. Students are asked to design and develop a database to improve the management of the company's customers, products, and purchases by emphasizing…
77 FR 58383 - Agency Information Collection Activities: Proposed Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-20
...) The Kids' Inpatient Database (KID) is the only all-payer inpatient care database for children in the United States. The KID was specifically designed to permit researchers to study a broad range of conditions and procedures related to child health issues. The KID contains a sample of over 3 million...
Introducing the Infant Bookreading Database (IBDb)
ERIC Educational Resources Information Center
Hudson Kam, Carla L.; Matthewson, Lisa
2017-01-01
Studies on the relationship between bookreading and language development typically lack data about which books are actually read to children. This paper reports on an Internet survey designed to address this data gap. The resulting dataset (the Infant Bookreading Database or IBDb) includes responses from 1,107 caregivers of children aged 0-36…
New tools and methods for direct programmatic access to the dbSNP relational database
Saccone, Scott F.; Quan, Jiaxi; Mehta, Gaurang; Bolze, Raphael; Thomas, Prasanth; Deelman, Ewa; Tischfield, Jay A.; Rice, John P.
2011-01-01
Genome-wide association studies often incorporate information from public biological databases in order to provide a biological reference for interpreting the results. The dbSNP database is an extensive source of information on single nucleotide polymorphisms (SNPs) for many different organisms, including humans. We have developed free software that will download and install a local MySQL implementation of the dbSNP relational database for a specified organism. We have also designed a system for classifying dbSNP tables in terms of common tasks we wish to accomplish using the database. For each task we have designed a small set of custom tables that facilitate task-related queries and provide entity-relationship diagrams for each task composed from the relevant dbSNP tables. In order to expose these concepts and methods to a wider audience we have developed web tools for querying the database and browsing documentation on the tables and columns to clarify the relevant relational structure. All web tools and software are freely available to the public at http://cgsmd.isi.edu/dbsnpq. Resources such as these for programmatically querying biological databases are essential for viably integrating biological information into genetic association experiments on a genome-wide scale. PMID:21037260
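As an illustration of the kind of programmatic access described above, the following is a minimal sketch, assuming a local MySQL mirror of dbSNP and the mysql-connector-python package; the table and column names are assumptions for illustration, not the authors' published schema.

    # Sketch: querying a local MySQL mirror of dbSNP. The table and
    # column names below are assumptions; consult the actual dbSNP
    # schema documentation before use.
    import mysql.connector

    conn = mysql.connector.connect(user="dbsnp", password="secret",
                                   database="dbsnp_human")
    cur = conn.cursor()

    rs_number = 6311  # rs6311, stored here without the "rs" prefix
    cur.execute(
        "SELECT snp_id, avg_heterozygosity FROM SNP WHERE snp_id = %s",
        (rs_number,))
    for row in cur.fetchall():
        print(row)

    cur.close()
    conn.close()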
Rhode Island Water Supply System Management Plan Database (WSSMP-Version 1.0)
Granato, Gregory E.
2004-01-01
In Rhode Island, the availability of water of sufficient quality and quantity to meet current and future environmental and economic needs is vital to life and the State's economy. Water suppliers, the Rhode Island Water Resources Board (RIWRB), and other State agencies responsible for water resources in Rhode Island need information about available resources, the water-supply infrastructure, and water use patterns. These decision makers need historical, current, and future water-resource information. In 1997, the State of Rhode Island formalized a system of Water Supply System Management Plans (WSSMPs) to characterize and document relevant water-supply information. All major water suppliers (those that obtain, transport, purchase, or sell more than 50 million gallons of water per year) are required to prepare, maintain, and carry out WSSMPs. An electronic database for this WSSMP information has been deemed necessary by the RIWRB for water suppliers and State agencies to consistently document, maintain, and interpret the information in these plans. Availability of WSSMP data in standard formats will allow water suppliers and State agencies to improve the understanding of water-supply systems and to plan for future needs or water-supply emergencies. In 2002, however, the Rhode Island General Assembly passed a law that classifies some of the WSSMP information as confidential to protect the water-supply infrastructure from potential terrorist threats. Therefore the WSSMP database was designed for an implementation method that will balance security concerns with the information needs of the RIWRB, suppliers, other State agencies, and the public. A WSSMP database was developed by the U.S. Geological Survey in cooperation with the RIWRB. The database was designed to catalog WSSMP information in a format that would accommodate synthesis of current and future information about Rhode Island's water-supply infrastructure. This report documents the design and implementation of the WSSMP database. All WSSMP information in the database is, ultimately, linked to the individual water suppliers and to a WSSMP 'cycle' (which is currently a 5-year planning cycle for compiling WSSMP information). The database file contains 172 tables - 47 data tables, 61 association tables, 61 domain tables, and 3 example import-link tables. This database is currently implemented in the Microsoft Access database software because it is widely used within and outside of government and is familiar to many existing and potential customers. Design documentation facilitates current use and potential modification for future use of the database. Information within the structure of the WSSMP database file (WSSMPv01.mdb), a data dictionary file (WSSMPDD1.pdf), a detailed database-design diagram (WSSMPPL1.pdf), and this database-design report (OFR2004-1231.pdf) documents the design of the database. This report includes a discussion of each WSSMP data structure with an accompanying database-design diagram. Appendix 1 of this report is an index of the diagrams in the report and on the plate; this index is organized by table name in alphabetical order. Each of these products is included in digital format on the enclosed CD-ROM to facilitate use or modification of the database.
Giffen, Sarah E.
2002-01-01
An environmental database was developed to store water-quality data collected during the 1999 U.S. Geological Survey investigation of the occurrence and distribution of dioxins, furans, and PCBs in the riverbed sediment and fish tissue in the Penobscot River in Maine. The database can be used to store a wide range of detailed information and to perform complex queries on the data it contains. The database also could be used to store data from other historical and any future environmental studies conducted on the Penobscot River and surrounding regions.
[Design of computerised database for clinical and basic management of uveal melanoma].
Bande Rodríguez, M F; Santiago Varela, M; Blanco Teijeiro, M J; Mera Yañez, P; Pardo Perez, M; Capeans Tome, C; Piñeiro Ces, A
2012-09-01
Uveal melanoma is the most common primary intraocular tumour in adults. The objective of this work is to describe how a computerised database with specific clinical and research applications was built for a large group of patients diagnosed with uveal melanoma. For the design of the database, a selection of categories, attributes and values was created, based on the classifications and parameters given in the most relevant articles in the field of uveal melanoma in recent years. The database holds over 250 patient entries with specific information on clinical history, diagnosis, treatment and progress. It enables searches on any parameter of an entry and quick, simple statistical studies of the data. Database models of this kind have become a basic tool for clinical practice, as they are an efficient way of storing, compiling and selectively searching information. When creating a database it is very important to define a common strategy and to use a standard language. Copyright © 2011 Sociedad Española de Oftalmología. Published by Elsevier Espana. All rights reserved.
Iavindrasana, Jimison; Depeursinge, Adrien; Ruch, Patrick; Spahni, Stéphane; Geissbuhler, Antoine; Müller, Henning
2007-01-01
The diagnostic and therapeutic processes, as well as the development of new treatments, are hindered by the fragmentation of the information which underlies them. In a multi-institutional research study database, the clinical information system (CIS) contains the primary data input. A substantial share of the budget of large-scale clinical studies is often spent on data creation and maintenance. The objective of this work is to design a decentralized, scalable, reusable database architecture with lower maintenance costs for managing and integrating the distributed heterogeneous data required as the basis for a large-scale research project. Technical and legal aspects are taken into account based on various use-case scenarios. The architecture contains four layers: data storage and access decentralized at the production source, a connector acting as a proxy between the CIS and the external world, an information mediator serving as a data access point, and the client side. The proposed design will be implemented inside six clinical centers participating in the @neurIST project as part of a larger system on data integration and reuse for aneurism treatment.
PathCase-SB architecture and database design
2011-01-01
Background: Integration of metabolic pathways resources and regulatory metabolic network models, and deploying new tools on the integrated platform can help perform more effective and more efficient systems biology research on understanding the regulation in metabolic networks. Therefore, the tasks of (a) integrating under a single database environment regulatory metabolic networks and existing models, and (b) building tools to help with modeling and analysis are desirable and intellectually challenging computational tasks. Description: PathCase Systems Biology (PathCase-SB) is built and released. The PathCase-SB database provides data and API for multiple user interfaces and software tools. The current PathCase-SB system provides a database-enabled framework and web-based computational tools towards facilitating the development of kinetic models for biological systems. PathCase-SB aims to integrate data of selected biological data sources on the web (currently, BioModels database and KEGG), and to provide more powerful and/or new capabilities via the new web-based integrative framework. This paper describes architecture and database design issues encountered in PathCase-SB's design and implementation, and presents the current design of PathCase-SB's architecture and database. Conclusions: PathCase-SB architecture and database provide a highly extensible and scalable environment with easy and fast (real-time) access to the data in the database. PathCase-SB itself is already being used by researchers across the world. PMID:22070889
NASA Astrophysics Data System (ADS)
Cavaleri, Tiziana; Buscaglia, Paola; Migliorini, Simonetta; Nervo, Marco; Piccablotto, Gabriele; Piccirillo, Anna; Pisani, Marco; Puglisi, Davide; Vaudan, Dario; Zucco, Massimo
2017-06-01
The conservation of artworks requires profound knowledge of pictorial materials, their chemical and physical properties, and their interaction and/or degradation processes. For this reason, pictorial materials databases are widely used to study and investigate cultural heritage. At the Centre for Conservation and Restoration La Venaria Reale, we prepared a set of about 1200 mock-ups with 173 different pigments and/or dyes, used across all historical periods or as products for conservation, four binders, two varnishes and four different materials for underdrawings. In collaboration with the Laboratorio Analisi Scientifiche of Regione Autonoma Valle d'Aosta, the National Institute of Metrological Research and the Department of Architecture and Design of the Polytechnic of Turin, we created a scientific database that is now available online (http://www.centrorestaurovenaria.it/en/areas/diagnostic/pictorial-materials-database), designed as a tool for heritage science and conservation. Here, we focus on materials for pictorial retouching, where hyperspectral imaging, conducted with a prototype of a new technology, provided a list of pigments that could be more suitable for conservation treatments and pictorial retouching. We then present the case study of the industrial painting Notte Barbara (1962) by Pinot Gallizio, where the database's coverage of modern and contemporary art materials proved very useful and where fibre optics reflectance spectroscopy was decisive for pigment identification. Later in this research, the mock-ups will be exploited to study degradation processes, e.g., lightfastness, or the possible formation of interaction products, e.g., metal carboxylates.
Challenges in Database Design with Microsoft Access
ERIC Educational Resources Information Center
Letkowski, Jerzy
2014-01-01
Design, development and explorations of databases are popular topics covered in introductory courses taught at business schools. Microsoft Access is the most popular software used in those courses. Despite quite high complexity of Access, it is considered to be one of the most friendly database programs for beginners. A typical Access textbook…
A Graphical Database Interface for Casual, Naive Users.
ERIC Educational Resources Information Center
Burgess, Clifford; Swigger, Kathleen
1986-01-01
Describes the design of a database interface for infrequent users of computers which consists of a graphical display of a model of a database and a natural language query language. This interface was designed for and tested with physicians at the University of Texas Health Science Center in Dallas. (LRW)
Investigation of IGES for CAD/CAE data transfer
NASA Technical Reports Server (NTRS)
Zobrist, George W.
1989-01-01
In a CAD/CAE facility there is always the possibility that one may want to transfer the design graphics database from the native system to a non-native system. This may occur because of dissimilar systems within an organization or because a new CAD/CAE system is to be purchased. The Initial Graphics Exchange Specification (IGES) was developed to address this scenario. IGES is a neutral database format to and from which the CAD/CAE native database format can be translated. Translating the native design database format to IGES requires a pre-processor, and translating from IGES to the native database format requires a post-processor. IGES represents CAD/CAE product data in a neutral environment to allow interfacing of applications, archiving of the database, interchange of product data between dissimilar CAD/CAE systems, and other applications. The intent here is to present test data on translating design product data from a CAD/CAE system to itself and on translating data initially prepared in IGES format to various native design formats. This information can be utilized in planning potential procurement and developing a design discipline within the CAD/CAE community.
Lunar base Controlled Ecological Life Support System (LCELSS): Preliminary conceptual design study
NASA Technical Reports Server (NTRS)
Schwartzkopf, Steven H.
1991-01-01
The objective of this study was to develop a conceptual design for a self-sufficient LCELSS. The mission need is for a CELSS with a capacity to supply the life support needs for a nominal crew of 30, and a capability for accommodating a range of crew sizes from 4 to 100 people. The work performed in this study was nominally divided into two parts. In the first part, relevant literature was assembled and reviewed. This review identified LCELSS performance requirements and the constraints and advantages confronting the design. It also collected information on the environment of the lunar surface and identified candidate technologies for the life support subsystems and the systems with which the LCELSS interfaced. Information on the operation and performance of these technologies was collected, along with concepts of how they might be incorporated into the LCELSS conceptual design. The data collected on these technologies was stored for incorporation into the study database. Also during part one, the study database structure was formulated and implemented, and an overall systems engineering methodology was developed for carrying out the study.
NSWC Crane Aerospace Cell Test History Database
NASA Technical Reports Server (NTRS)
Brown, Harry; Moore, Bruce
1994-01-01
The Aerospace Cell Test History Database was developed to provide project engineers and scientists ready access to the data obtained from testing of aerospace cell designs at Naval Surface Warfare Center, Crane Division. The database is intended for use by all aerospace engineers and scientists involved in the design of power systems for satellites. Specifically, the database will provide a tool for project engineers to review the progress of their test at Crane and to have ready access to data for evaluation. Additionally, the database will provide a history of test results that designers can draw upon to answer questions about cell performance under certain test conditions and aid in selection of a cell for a satellite battery. Viewgraphs are included.
A novel database of bio-effects from non-ionizing radiation.
Leach, Victor; Weller, Steven; Redmayne, Mary
2018-06-06
A significant amount of electromagnetic field/electromagnetic radiation (EMF/EMR) research is available that examines biological and disease-associated endpoints. The quantity, variety and changing parameters in the available research can be challenging when undertaking a literature review, meta-analysis, preparing a study design, building reference lists or comparing findings between relevant scientific papers. The Oceania Radiofrequency Scientific Advisory Association (ORSAA) has created a comprehensive, non-biased, multi-categorized, searchable database of papers on non-ionizing EMF/EMR to help address these challenges. It is regularly added to, freely accessible online and designed to allow data to be easily retrieved, sorted and analyzed. This paper demonstrates the content and search flexibility of the ORSAA database. Demonstration searches are presented by Effect/No Effect; frequency band(s); in vitro; in vivo; biological effects; study type; and funding source. As of 15 September 2017, the clear majority of the 2653 papers captured in the database examine outcomes in the 300 MHz-3 GHz range. There are three times more biological "Effect" than "No Effect" papers; nearly a third of papers provide no funding statement; industry-funded studies more often than not find "No Effect", while institutionally funded studies commonly reveal "Effects". The country in which a study is conducted and funded also appears to have a dramatic influence on the likely outcome.
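The categorized searches described above can be approximated on an exported copy of such a database. A minimal sketch follows, assuming a hypothetical CSV export with columns outcome, funding and frequency_mhz; none of these names come from the ORSAA system itself.

    # Sketch: an Effect/No Effect x funding-source tally over an
    # exported copy of a paper database. File and column names are
    # hypothetical.
    import pandas as pd

    papers = pd.read_csv("orsaa_export.csv")

    # Restrict to the 300 MHz - 3 GHz band, where most studies fall.
    band = papers[(papers.frequency_mhz >= 300) &
                  (papers.frequency_mhz <= 3000)]

    # Cross-tabulate reported outcome against funding source.
    print(pd.crosstab(band.outcome, band.funding))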
Designing a User Manual to Support an In-House Database.
ERIC Educational Resources Information Center
Kraft, Melissa A.; Pugh, W. Jean
1988-01-01
Describes the steps involved in designing a user manual for an in-house database. Topics covered include goal definition, target audience identification, production scheduling, design and production choices, testing and review, and updating of the manual. (CLB)
A rudimentary database for three-dimensional objects using structural representation
NASA Technical Reports Server (NTRS)
Sowers, James P.
1987-01-01
A database which enables users to store and share the description of three-dimensional objects in a research environment is presented. The main objective of the design is to make it a compact structure that holds sufficient information to reconstruct the object. The database design is based on an object representation scheme which is information preserving, reasonably efficient, and yet economical in terms of the storage requirement. The determination of the needed data for the reconstruction process is guided by the belief that it is faster to do simple computations to generate needed data/information for construction than to retrieve everything from memory. Some recent techniques of three-dimensional representation that influenced the design of the database are discussed. The schema for the database and the structural definition used to define an object are given. The user manual for the software developed to create and maintain the contents of the database is included.
Research on high availability architecture of SQL and NoSQL
NASA Astrophysics Data System (ADS)
Wang, Zhiguo; Wei, Zhiqiang; Liu, Hao
2017-03-01
With the advent of the era of big data, the amount and importance of data have increased dramatically. SQL databases continue to improve in performance and scalability, but more and more companies adopt NoSQL databases, because NoSQL offers a simpler data model and greater extensibility than SQL. Almost all database designers, whether of SQL or NoSQL systems, aim to improve performance and ensure availability through sound architecture that reduces the effects of software and hardware failures, so as to provide a better experience for their customers. This paper discusses highly available MySQL, MongoDB and Redis architectures that have been deployed in practical application environments, and designs a hybrid architecture.
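One widely used form of such a hybrid architecture is a Redis read-through cache in front of MySQL. The following is a minimal sketch of that pattern, assuming the redis and mysql-connector-python packages and a hypothetical users table; it illustrates the general idea, not the specific architecture proposed in the paper.

    # Sketch of one common SQL/NoSQL hybrid: Redis as a read-through
    # cache in front of MySQL. Table and key names are illustrative.
    import json
    import redis
    import mysql.connector

    cache = redis.Redis(host="localhost", port=6379)
    db = mysql.connector.connect(user="app", password="secret",
                                 database="shop")

    def get_user(user_id, ttl=300):
        key = f"user:{user_id}"
        hit = cache.get(key)
        if hit is not None:                      # served from Redis
            return json.loads(hit)
        cur = db.cursor(dictionary=True)
        cur.execute("SELECT id, name FROM users WHERE id = %s",
                    (user_id,))
        row = cur.fetchone()
        if row is not None:                      # populate the cache
            cache.set(key, json.dumps(row), ex=ttl)
        return row

    print(get_user(42))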
Kelly Elder; Don Cline; Angus Goodbody; Paul Houser; Glen E. Liston; Larry Mahrt; Nick Rutter
2009-01-01
A short-term meteorological database has been developed for the Cold Land Processes Experiment (CLPX). This database includes meteorological observations from stations designed and deployed exclusively for CLPX as well as observations available from other sources located in the small regional study area (SRSA) in north-central Colorado. The measured weather parameters...
Griffith, B C; White, H D; Drott, M C; Saye, J D
1986-07-01
This article reports on five separate studies designed for the National Library of Medicine (NLM) to develop and test methodologies for evaluating the products of large databases. The methodologies were tested on literatures of the medical behavioral sciences (MBS). One of these studies examined how well NLM covered MBS monographic literature using CATLINE and OCLC. Another examined MBS journal and serial literature coverage in MEDLINE and other MBS-related databases available through DIALOG. These two studies used 1010 items derived from the reference lists of sixty-one journals, and tested for gaps and overlaps in coverage in the various databases. A third study examined the quality of the indexing NLM provides to MBS literatures and developed a measure of indexing as a system component. The final two studies explored how well MEDLINE retrieved documents on topics submitted by MBS professionals and how online searchers viewed MEDLINE (and other systems and databases) in handling MBS topics. The five studies yielded both broad research outcomes and specific recommendations to NLM.
Video Games for Diabetes Self-Management: Examples and Design Strategies
Lieberman, Debra A.
2012-01-01
The July 2012 issue of the Journal of Diabetes Science and Technology includes a special symposium called “Serious Games for Diabetes, Obesity, and Healthy Lifestyle.” As part of the symposium, this article focuses on health behavior change video games that are designed to improve and support players’ diabetes self-management. Other symposium articles include one that recommends theory-based approaches to the design of health games and identifies areas in which additional research is needed, followed by five research articles presenting studies of the design and effectiveness of games and game technologies that require physical activity in order to play. This article briefly describes 14 diabetes self-management video games, and, when available, cites research findings on their effectiveness. The games were found by searching the Health Games Research online searchable database, three bibliographic databases (ACM Digital Library, PubMed, and Social Sciences Databases of CSA Illumina), and the Google search engine, using the search terms “diabetes” and “game.” Games were selected if they addressed diabetes self-management skills. PMID:22920805
Design of Knowledge Bases for Plant Gene Regulatory Networks.
Mukundi, Eric; Gomez-Cano, Fabio; Ouma, Wilberforce Zachary; Grotewold, Erich
2017-01-01
Developing a knowledge base that contains all the information necessary for the researcher studying gene regulation in a particular organism can be accomplished in four stages. This begins with defining the data scope. We describe here the necessary information and resources, and outline the methods for obtaining data. The second stage consists of designing the schema, which involves defining the entire arrangement of the database in a systematic plan. The third stage is implementation, in which the database is actualized in software according to the predefined schema. The final stage is development, where the database is made available to users in a web-accessible system. The result is a knowledge base that integrates all the information pertaining to gene regulation and is easily expandable and transferable.
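The schema-design stage can be made concrete with a small example. The following is a minimal sketch in SQLite of tables a gene-regulation knowledge base might define; the table and column names are assumptions for illustration, not the authors' schema.

    # Sketch of the schema-design stage for a gene-regulation knowledge
    # base, using SQLite for brevity. Tables and columns are illustrative.
    import sqlite3

    conn = sqlite3.connect("grn_kb.db")
    conn.executescript("""
        CREATE TABLE gene (
            gene_id   TEXT PRIMARY KEY,
            symbol    TEXT NOT NULL,
            organism  TEXT NOT NULL
        );
        CREATE TABLE regulation (
            tf_id     TEXT REFERENCES gene(gene_id),  -- transcription factor
            target_id TEXT REFERENCES gene(gene_id),
            evidence  TEXT,                           -- e.g. ChIP-seq, Y1H
            PRIMARY KEY (tf_id, target_id, evidence)
        );
    """)
    conn.commit()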
A Community Data Model for Hydrologic Observations
NASA Astrophysics Data System (ADS)
Tarboton, D. G.; Horsburgh, J. S.; Zaslavsky, I.; Maidment, D. R.; Valentine, D.; Jennings, B.
2006-12-01
The CUAHSI Hydrologic Information System project is developing information technology infrastructure to support hydrologic science. Hydrologic information science involves the description of hydrologic environments in a consistent way, using data models for information integration. This includes a hydrologic observations data model for the storage and retrieval of hydrologic observations in a relational database designed to facilitate data retrieval for integrated analysis of information collected by multiple investigators. It is intended to provide a standard format to facilitate the effective sharing of information between investigators and to facilitate analysis of information within a single study area or hydrologic observatory, or across hydrologic observatories and regions. The observations data model is designed to store hydrologic observations and sufficient ancillary information (metadata) about the observations to allow them to be unambiguously interpreted and used and provide traceable heritage from raw measurements to usable information. The design is based on the premise that a relational database at the single observation level is most effective for providing querying capability and cross dimension data retrieval and analysis. This premise is being tested through the implementation of a prototype hydrologic observations database, and the development of web services for the retrieval of data from and ingestion of data into the database. These web services hosted by the San Diego Supercomputer center make data in the database accessible both through a Hydrologic Data Access System portal and directly from applications software such as Excel, Matlab and ArcGIS that have Standard Object Access Protocol (SOAP) capability. This paper will (1) describe the data model; (2) demonstrate the capability for representing diverse data in the same database; (3) demonstrate the use of the database from applications software for the performance of hydrologic analysis across different observation types.
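The premise of a relational database at the single-observation level can be sketched briefly. The following is a minimal SQLite sketch modeled loosely on the design described above; the exact published data model differs, and the names here are illustrative.

    # Sketch of the "one row per observation" premise: sites, variables,
    # and a data_values table holding individual observations.
    import sqlite3

    conn = sqlite3.connect("odm_sketch.db")
    conn.executescript("""
        CREATE TABLE sites (
            site_id   INTEGER PRIMARY KEY,
            site_name TEXT, latitude REAL, longitude REAL
        );
        CREATE TABLE variables (
            variable_id INTEGER PRIMARY KEY,
            name TEXT, units TEXT               -- e.g. discharge, m^3/s
        );
        CREATE TABLE data_values (
            value_id    INTEGER PRIMARY KEY,
            data_value  REAL,
            local_date_time TEXT,
            site_id     INTEGER REFERENCES sites(site_id),
            variable_id INTEGER REFERENCES variables(variable_id)
        );
    """)

    # Cross-dimension retrieval: all observations of one variable at
    # one site, ready for analysis.
    rows = conn.execute("""
        SELECT local_date_time, data_value
        FROM data_values
        WHERE site_id = ? AND variable_id = ?
    """, (1, 1)).fetchall()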
Robinson, William P
2017-12-01
Ruptured abdominal aortic aneurysm is one of the most difficult clinical problems in surgical practice, with extraordinarily high morbidity and mortality. During the past 23 years, the literature has become replete with reports regarding ruptured endovascular aneurysm repair. A variety of study designs and databases have been utilized to compare ruptured endovascular aneurysm repair and open surgical repair for ruptured abdominal aortic aneurysm and studies of various designs from different databases have yielded vastly different conclusions. It therefore remains controversial whether ruptured endovascular aneurysm repair improves outcomes after ruptured abdominal aortic aneurysm in comparison to open surgical repair. The purpose of this article is to review the best available evidence comparing ruptured endovascular aneurysm repair and open surgical repair of ruptured abdominal aortic aneurysm, including single institution and multi-institutional retrospective observational studies, large national population-based studies, large national registries of prospectively collected data, and randomized controlled clinical trials. This article will analyze the study designs and databases utilized with their attendant strengths and weaknesses to understand the sometimes vastly different conclusions the studies have reached. This article will attempt to integrate the data to distill some of the lessons that have been learned regarding ruptured endovascular aneurysm repair and identify ongoing needs in this field. Copyright © 2017 Elsevier Inc. All rights reserved.
Jeffries, D J; Donkor, S; Brookes, R H; Fox, A; Hill, P C
2004-09-01
The data requirements of a large multidisciplinary tuberculosis case contact study are complex. We describe an ACCESS-based relational database system that meets our rigorous requirements for data entry and validation, while being user-friendly, flexible, exportable, and easy to install on a network or stand alone system. This includes the development of a double data entry package for epidemiology and laboratory data, semi-automated entry of ELISPOT data directly from the plate reader, and a suite of new programmes for the manipulation and integration of flow cytometry data. The double entered epidemiology and immunology databases are combined into a separate database, providing a near-real-time analysis of immuno-epidemiological data, allowing important trends to be identified early and major decisions about the study to be made and acted on. This dynamic data management model is portable and can easily be applied to other studies.
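The double data entry approach described above can be illustrated with a short sketch that compares two independently keyed files and flags mismatches for review; the file and column names are hypothetical, and the study's actual package was built in Microsoft Access.

    # Sketch: double data entry validation. Two independently keyed
    # copies of the same records are compared field by field.
    import csv

    def load(path, key="subject_id"):
        with open(path, newline="") as f:
            return {row[key]: row for row in csv.DictReader(f)}

    first, second = load("entry1.csv"), load("entry2.csv")
    for sid in sorted(set(first) & set(second)):
        for field in first[sid]:
            if first[sid][field] != second[sid].get(field):
                print(f"{sid}: {field!r} differs: "
                      f"{first[sid][field]!r} vs {second[sid].get(field)!r}")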
HRGFish: A database of hypoxia responsive genes in fishes
NASA Astrophysics Data System (ADS)
Rashid, Iliyas; Nagpure, Naresh Sahebrao; Srivastava, Prachi; Kumar, Ravindra; Pathak, Ajey Kumar; Singh, Mahender; Kushwaha, Basdeo
2017-02-01
Several studies have highlighted changes in gene expression due to the hypoxia response in fishes, but a systematic organization of the information and an analytical platform for such genes have been lacking. In the present study, an attempt was made to develop a database of hypoxia responsive genes in fishes (HRGFish), integrated with analytical tools, using LAMPP technology. Genes reported in the hypoxia response of fishes were compiled through a literature survey, and the database presently covers 818 gene sequences and 35 gene types from 38 fishes. The 3,000 bp upstream fragments covered in this database enable computation of CG dinucleotide frequencies, motif finding for the hypoxia response element, identification of CpG islands and mapping against the zebrafish reference promoter. The database also includes functional annotation of genes and provides tools for analyzing sequences and designing primers for selected gene fragments. This may be the first database on hypoxia response genes in fishes that provides a workbench to the scientific community involved in studying the evolution and ecological adaptation of fish species in relation to hypoxia.
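The CG-dinucleotide and CpG-island computations mentioned above follow standard definitions. A minimal sketch, using the common Gardiner-Garden and Frommer style thresholds in simplified form (not necessarily the database's exact implementation):

    # Sketch: GC content and observed/expected CpG ratio for an
    # upstream fragment. An "island" is often defined as >= 200 bp
    # with GC > 0.5 and obs/exp CpG > 0.6 (simplified here).
    def cg_stats(seq):
        seq = seq.upper()
        n = len(seq)
        c, g = seq.count("C"), seq.count("G")
        cpg = seq.count("CG")
        gc_content = (c + g) / n
        obs_exp = cpg * n / (c * g) if c and g else 0.0
        return gc_content, obs_exp

    print(cg_stats("CGCGTATACGCGAATTCGCG"))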
Database Constraints Applied to Metabolic Pathway Reconstruction Tools
Vilaplana, Jordi; Solsona, Francesc; Teixido, Ivan; Usié, Anabel; Karathia, Hiren; Alves, Rui; Mateo, Jordi
2014-01-01
Our group developed two biological applications, Biblio-MetReS and Homol-MetReS, accessing the same database of organisms with annotated genes. Biblio-MetReS is a data-mining application that facilitates the reconstruction of molecular networks based on automated text-mining analysis of published scientific literature. Homol-MetReS allows functional (re)annotation of proteomes, to properly identify both the individual proteins involved in the process(es) of interest and their function. It also enables the sets of proteins involved in the process(es) in different organisms to be compared directly. The efficiency of these biological applications is directly related to the design of the shared database. We classified and analyzed the different kinds of access to the database. Based on this study, we tried to adjust and tune the configurable parameters of the database server to reach the best performance of the communication data link to/from the database system. Different database technologies were analyzed. We started the study with a public relational SQL database, MySQL. Then, the same database was implemented by a MapReduce-based database named HBase. The results indicated that the standard configuration of MySQL gives an acceptable performance for low or medium size databases. Nevertheless, tuning database parameters can greatly improve the performance and lead to very competitive runtimes. PMID:25202745
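The kind of server tuning described above can be sketched briefly. A minimal example follows, assuming a local MySQL server, the mysql-connector-python package, and a hypothetical genes table; the parameter and its value are illustrative, not the study's settings.

    # Sketch: timing the same query after adjusting a server parameter.
    # Parameter names/values are illustrative, not the study's settings.
    import time
    import mysql.connector

    conn = mysql.connector.connect(user="root", password="secret",
                                   database="organisms")
    cur = conn.cursor()

    # Enlarge the InnoDB buffer pool (a common first tuning step).
    # Dynamic resizing requires MySQL >= 5.7.5; otherwise set it in
    # my.cnf and restart the server.
    cur.execute("SET GLOBAL innodb_buffer_pool_size = 2 * 1024 * 1024 * 1024")

    start = time.perf_counter()
    cur.execute("SELECT COUNT(*) FROM genes WHERE annotation LIKE '%kinase%'")
    print(cur.fetchone(), time.perf_counter() - start, "s")

    cur.close()
    conn.close()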
Mining of high utility-probability sequential patterns from uncertain databases
Zhang, Binbin; Fournier-Viger, Philippe; Li, Ting
2017-01-01
High-utility sequential pattern mining (HUSPM) has become an important issue in the field of data mining. Several HUSPM algorithms have been designed to mine high-utility sequential patterns (HUSPs). They have been applied in several real-life situations, such as consumer behavior analysis and event detection in sensor networks. Nonetheless, most studies on HUSPM have focused on mining HUSPs in precise data. But in real life, uncertainty is an important factor, as data is collected using various types of sensors that are more or less accurate. Hence, data collected in a real-life database can be annotated with existence probabilities. This paper presents a novel pattern mining framework called high utility-probability sequential pattern mining (HUPSPM) for mining high utility-probability sequential patterns (HUPSPs) in uncertain sequence databases. A baseline algorithm with three optional pruning strategies is presented to mine HUPSPs. Moreover, to speed up the mining process, a projection mechanism is designed to create a database projection for each processed sequence, which is smaller than the original database. Thus, the number of unpromising candidates can be greatly reduced, as well as the execution time for mining HUPSPs. Substantial experiments on both real-life and synthetic datasets show that the designed algorithm performs well in terms of runtime, number of candidates, memory usage, and scalability for different minimum utility and minimum probability thresholds. PMID:28742847
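The core threshold test in such a framework can be sketched in a few lines. The following illustrates checking a candidate pattern's total utility and total probability against minimum thresholds; the data layout and naive subsequence matching are illustrative and omit the paper's pruning and projection machinery.

    # Sketch of the HUPSPM threshold test: keep a pattern only if its
    # total utility and total probability both meet minimum thresholds.
    def is_subsequence(pattern, seq):
        it = iter(seq)
        return all(any(item == x for x in it) for item in pattern)

    # Each record: (sequence of (item, utility) pairs, existence probability).
    db = [
        ([("a", 5), ("b", 3), ("c", 2)], 0.9),
        ([("a", 4), ("c", 1)],           0.6),
        ([("b", 2), ("c", 3)],           0.4),
    ]

    def utility_probability(pattern, db):
        util = prob = 0.0
        for seq, p in db:
            items = [i for i, _ in seq]
            if is_subsequence(pattern, items):
                util += sum(u for i, u in seq if i in pattern)
                prob += p
        return util, prob

    u, p = utility_probability(("a", "c"), db)
    print(u, p, u >= 10 and p >= 1.0)   # thresholds are illustrative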
47 CFR 15.715 - TV bands database administrator.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 1 2010-10-01 2010-10-01 false TV bands database administrator. 15.715 Section... Band Devices § 15.715 TV bands database administrator. The Commission will designate one or more entities to administer a TV bands database. Each database administrator shall: (a) Maintain a database that...
Yang, Tsun-Po; Beazley, Claude; Montgomery, Stephen B; Dimas, Antigone S; Gutierrez-Arcelus, Maria; Stranger, Barbara E; Deloukas, Panos; Dermitzakis, Emmanouil T
2010-10-01
Genevar (GENe Expression VARiation) is a database and Java tool designed to integrate multiple datasets, and provides analysis and visualization of associations between sequence variation and gene expression. Genevar allows researchers to investigate expression quantitative trait loci (eQTL) associations within a gene locus of interest in real time. The database and application can be installed on a standard computer in database mode and, in addition, on a server to share discoveries among affiliations or the broader community over the Internet via web services protocols. http://www.sanger.ac.uk/resources/software/genevar.
Database systems for knowledge-based discovery.
Jagarlapudi, Sarma A R P; Kishan, K V Radha
2009-01-01
Several database systems have been developed to provide valuable information in a structured format to users ranging from the bench chemist and biologist to the medical practitioner and pharmaceutical scientist. The advent of information technology and computational power has enhanced the ability to access large volumes of data in the form of a database on which one can perform compilation, searching, archiving, analysis and, finally, knowledge derivation. Although data are of variable types, the tools used for database creation, searching and retrieval are similar. GVK BIO has been developing databases from publicly available scientific literature in specific areas like medicinal chemistry, clinical research, and mechanism-based toxicity, so that these structured databases containing vast data can be used in several areas of research. The databases are classified as reference-centric or compound-centric, depending on how the database systems were designed. Integration of these databases with knowledge-derivation tools would enhance their value toward better drug design and discovery.
The database design of LAMOST based on MYSQL/LINUX
NASA Astrophysics Data System (ADS)
Li, Hui-Xian; Sang, Jian; Wang, Sha; Luo, A.-Li
2006-03-01
The Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) will be set up in the coming years. A fully automated software system for reducing and analyzing the spectra has to be developed along with the telescope, and the database system is an important part of it. The requirements for the LAMOST database, the design of the LAMOST database system based on MySQL/Linux, and performance tests of this system are described in this paper.
Sharma, Vishal K; Fraulin, Frankie Og; Harrop, A Robertson; McPhalen, Donald F
2011-01-01
Databases are useful tools in clinical settings. The authors review the benefits and challenges associated with the development and implementation of an efficient electronic database for the multidisciplinary Vascular Birthmark Clinic at the Alberta Children's Hospital, Calgary, Alberta. The content and structure of the database were designed using the technical expertise of a data analyst from the Calgary Health Region. Relevant clinical and demographic data fields were included with the goal of documenting ongoing care of individual patients, and facilitating future epidemiological studies of this patient population. After completion of this database, 10 challenges encountered during development were retrospectively identified. Practical solutions for these challenges are presented. The challenges identified during the database development process included: identification of relevant data fields; balancing simplicity and user-friendliness with complexity and comprehensive data storage; database expertise versus clinical expertise; software platform selection; linkage of data from the previous spreadsheet to a new data management system; ethics approval for the development of the database and its utilization for research studies; ensuring privacy and limited access to the database; integration of digital photographs into the database; adoption of the database by support staff in the clinic; and maintaining up-to-date entries in the database. There are several challenges involved in the development of a useful and efficient clinical database. Awareness of these potential obstacles, in advance, may simplify the development of clinical databases by others in various surgical settings.
NASA Astrophysics Data System (ADS)
Bartolini, S.; Becerril, L.; Martí, J.
2014-11-01
One of the most important issues in modern volcanology is the assessment of volcanic risk, which will depend - among other factors - on both the quantity and quality of the available data and an optimum storage mechanism. This will require the design of purpose-built databases that take into account data format and availability and afford easy data storage and sharing, and will provide for a more complete risk assessment that combines different analyses but avoids any duplication of information. Data contained in any such database should facilitate spatial and temporal analysis that will (1) produce probabilistic hazard models for future vent opening, (2) simulate volcanic hazards and (3) assess their socio-economic impact. We describe the design of a new spatial database structure, VERDI (Volcanic managEment Risk Database desIgn), which allows different types of data, including geological, volcanological, meteorological, monitoring and socio-economic information, to be manipulated, organized and managed. The central requirement is to ensure that VERDI will serve as a tool for connecting different kinds of data sources, GIS platforms and modeling applications. We present an overview of the database design, its components and the attributes that play an important role in the database model. The potential of the VERDI structure and the possibilities it offers for data organization are shown here through its application to El Hierro (Canary Islands). The VERDI database will provide scientists and decision makers with a useful tool that will assist in conducting volcanic risk assessment and management.
NASA Technical Reports Server (NTRS)
Radovcich, N. A.; Dreim, D.; Okeefe, D. A.; Linner, L.; Pathak, S. K.; Reaser, J. S.; Richardson, D.; Sweers, J.; Conner, F.
1985-01-01
Work performed in the design of a transport aircraft wing for maximum fuel efficiency is documented with emphasis on design criteria, design methodology, and three design configurations. The design database includes complete finite element model description, sizing data, geometry data, loads data, and inertial data. A design process which satisfies the economics and practical aspects of a real design is illustrated. The cooperative study relationship between the contractor and NASA during the course of the contract is also discussed.
ERIC Educational Resources Information Center
Chavez-Gibson, Sarah
2013-01-01
The purpose of this study is to examine in depth the Comprehensive, Powerful, Academic Database (CPAD), a data-based decision-making tool that identifies students at risk of dropping out of school, and how the CPAD assists administrators and teachers at an elementary campus to monitor progress, curriculum, and performance to improve student…
Knowlton, Michelle N; Li, Tongbin; Ren, Yongliang; Bill, Brent R; Ellis, Lynda Bm; Ekker, Stephen C
2008-01-07
The zebrafish is a powerful model vertebrate amenable to high throughput in vivo genetic analyses. Examples include reverse genetic screens using morpholino knockdown, expression-based screening using enhancer trapping and forward genetic screening using transposon insertional mutagenesis. We have created a database to facilitate web-based distribution of data from such genetic studies. The MOrpholino DataBase is a MySQL relational database with an online, PHP interface. Multiple quality control levels allow differential access to data in raw and finished formats. MODBv1 includes sequence information relating to almost 800 morpholinos and their targets and phenotypic data regarding the dose effect of each morpholino (mortality, toxicity and defects). To improve the searchability of this database, we have incorporated a fixed-vocabulary defect ontology that allows for the organization of morpholino effects based on the anatomical structure affected and the defect produced. This also allows comparison between species utilizing Phenotypic Attribute Trait Ontology (PATO) designated terminology. MODB is also cross-linked with ZFIN, allowing full searches between the two databases. MODB offers users the ability to retrieve morpholino data by sequence of morpholino or target, name of target, anatomical structure affected and defect produced. MODB data can be used for functional genomic analysis of morpholino design to maximize efficacy and minimize toxicity. MODB also serves as a template for future sequence-based functional genetic screen databases, and it is currently being used as a model for the creation of a mutagenic insertional transposon database.
NASA Technical Reports Server (NTRS)
Kelley, Steve; Roussopoulos, Nick; Sellis, Timos
1992-01-01
The goal of the Universal Index System (UIS) is to provide an easy-to-use and reliable interface to many different kinds of database systems. The impetus for this system was to simplify database index management for users, thus encouraging the use of indexes. As the idea grew into an actual system design, increasing database performance by facilitating the use of time-saving techniques at the user level became a theme of the project. This final report describes the design and implementation of UIS and its language interfaces, and includes the User's Guide and the Reference Manual.
Teaching Database Modeling and Design: Areas of Confusion and Helpful Hints
ERIC Educational Resources Information Center
Philip, George C.
2007-01-01
This paper identifies several areas of database modeling and design that have been problematic for students and are even likely to confuse faculty. Major contributing factors are the lack of clarity and the inaccuracies that persist in the presentation of some basic database concepts in textbooks. The paper analyzes the problems and discusses ways to…
Computerized Design Synthesis (CDS), A database-driven multidisciplinary design tool
NASA Technical Reports Server (NTRS)
Anderson, D. M.; Bolukbasi, A. O.
1989-01-01
The Computerized Design Synthesis (CDS) system under development at McDonnell Douglas Helicopter Company (MDHC) is targeted to make revolutionary improvements in both response time and resource efficiency in the conceptual and preliminary design of rotorcraft systems. It makes the accumulated design database and supporting technology analysis results readily available to designers and analysts of technology, systems, and production, and makes powerful design synthesis software available in a user-friendly format.
Famulari, Stevie; Witz, Kyla
2015-01-01
Designers, students, teachers, gardeners, farmers, landscape architects, architects, engineers, homeowners, and others have uses for the practice of phytoremediation. This research looks at the creation of a phytoremediation database designed for ease of use by non-scientific users, as well as by students in an educational setting ( http://www.steviefamulari.net/phytoremediation ). During 2012, Environmental Artist & Professor of Landscape Architecture Stevie Famulari, with assistance from Kyla Witz, a landscape architecture student, created an online searchable database designed for high public accessibility. The database is a record of research on plant species that aid in the uptake of contaminants, including metals, organic materials, biodiesels & oils, and radionuclides. The database consists of multiple interconnected indexes categorized by common and scientific plant name, contaminant name, and contaminant type. It includes photographs, hardiness zones, specific plant qualities, full citations to the original research, and other relevant information intended to aid those designing with phytoremediation in searching for potential plants to address their site's needs. The objective of the terminology section is to remove uncertainty for less experienced users and to clarify terms for a more user-friendly experience. Implications of the work, including education and ease of browsing, as well as use of the database in teaching, are discussed.
The Primate Life History Database: A unique shared ecological data resource
Strier, Karen B.; Altmann, Jeanne; Brockman, Diane K.; Bronikowski, Anne M.; Cords, Marina; Fedigan, Linda M.; Lapp, Hilmar; Liu, Xianhua; Morris, William F.; Pusey, Anne E.; Stoinski, Tara S.; Alberts, Susan C.
2011-01-01
Summary: The importance of data archiving, data sharing, and public access to data has received considerable attention. Awareness is growing among scientists that collaborative databases can facilitate these activities. We provide a detailed description of the collaborative life history database developed by our Working Group at the National Evolutionary Synthesis Center (NESCent) to address questions about life history patterns and the evolution of mortality and demographic variability in wild primates. Examples from each of the seven primate species included in our database illustrate the range of data incorporated and the challenges, decision-making processes, and criteria applied to standardize data across diverse field studies. In addition to the descriptive and structural metadata associated with our database, we also describe the process metadata (how the database was designed and delivered) and the technical specifications of the database. Our database provides a useful model for other researchers interested in developing similar types of databases for other organisms, while our process metadata may be helpful to other groups of researchers interested in developing databases for other types of collaborative analyses. PMID:21698066
Teaching Case: Adapting the Access Northwind Database to Support a Database Course
ERIC Educational Resources Information Center
Dyer, John N.; Rogers, Camille
2015-01-01
A common problem encountered when teaching database courses is that few large illustrative databases exist to support teaching and learning. Most database textbooks have small "toy" databases that are chapter objective specific, and thus do not support application over the complete domain of design, implementation and management concepts…
A Living Laboratory for Energy Systems Integration - Continuum Magazine
Research centers across NREL study how to optimize the campus's energy use. The Energy DataBus collects campus energy data at second-by-second intervals, 24 hours per day, and stores it all in one giant database, using a storage solution designed for large, scalable databases similar to those used by companies such as Facebook.
ERIC Educational Resources Information Center
Oduwole, A. A.; Sowole, A. O.
2006-01-01
Purpose: This study examined the utilisation of the Essential Electronic Agricultural Library database (TEEAL) at the University of Agriculture Library, Abeokuta, Nigeria. Design/methodology/approach: Data collection was by questionnaire following a purposive sampling technique. A total of 104 out of 150 (69.3 per cent) responses were received and…
Application of kernel functions for accurate similarity search in large chemical databases.
Wang, Xiaohong; Huan, Jun; Smalter, Aaron; Lushington, Gerald H
2010-04-29
Similarity search in chemical structure databases is an important problem with many applications in chemical genomics, drug design, and efficient chemical probe screening, among others. It is widely believed that structure-based methods provide an efficient way to perform such queries. Recently, various graph kernel functions have been designed to capture the intrinsic similarity of graphs. Though successful in constructing accurate predictive and classification models, graph kernel functions cannot be applied to large chemical compound databases due to the high computational complexity and the difficulties in indexing similarity search for large databases. To bridge graph kernel functions and similarity search in chemical databases, we applied a novel kernel-based similarity measurement, developed by our team, to measure the similarity of graph-represented chemicals. In our method, we utilize a hash table to support the new graph kernel function definition, efficient storage and fast search. We have applied our method, named G-hash, to large chemical databases. Our results show that the G-hash method achieves state-of-the-art performance for k-nearest neighbor (k-NN) classification. Moreover, the similarity measurement and the index structure are scalable to large chemical databases, with smaller indexing size and faster query processing time compared to state-of-the-art indexing methods such as Daylight fingerprints, C-tree and GraphGrep. Efficient similarity query processing for large chemical databases is challenging, since running-time efficiency must be balanced against similarity search accuracy. Our previous similarity search method, G-hash, provides a new way to perform similarity search in chemical databases. An experimental study validates the utility of G-hash in chemical databases.
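The following toy sketch illustrates the general idea behind hash-supported graph kernels: local structural features are hashed into fixed-size count profiles, and the kernel between two compounds is approximated by a dot product over the hash table, which makes storage compact and k-NN queries fast. It is a simplified stand-in with invented feature labels, not the authors' G-hash implementation.

    # Toy illustration of hash-table-backed kernel similarity (not G-hash itself).
    from collections import Counter

    HASH_BITS = 12  # table size 2**12; an assumption for the sketch

    def hashed_profile(node_labels):
        """Map a multiset of local structural features to a sparse count vector."""
        table = Counter()
        for lab in node_labels:
            table[hash(lab) % (1 << HASH_BITS)] += 1
        return table

    def kernel(profile_a, profile_b):
        """Approximate graph kernel: dot product of hashed feature counts."""
        return sum(cnt * profile_b.get(idx, 0) for idx, cnt in profile_a.items())

    # k-NN query over a tiny 'database' of pre-hashed compounds
    db = {"mol1": hashed_profile(["C", "C", "O", "N-ring"]),
          "mol2": hashed_profile(["C", "O", "O", "S"])}
    query = hashed_profile(["C", "C", "O"])
    ranked = sorted(db, key=lambda m: kernel(query, db[m]), reverse=True)
    print(ranked[0])  # nearest neighbor

Because profiles are precomputed and fixed-size, the index grows linearly with the database and each query reduces to cheap sparse dot products.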
Lessons Learned From Developing Reactor Pressure Vessel Steel Embrittlement Database
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Jy-An John
Material behavior caused by neutron irradiation under fission and/or fusion environments can hardly be understood without practical examination. An easily accessible material information system with a large material database, using effective computers, is necessary for the design of nuclear materials and for analyses or simulations of these phenomena. The Embrittlement Data Base (EDB) developed at ORNL is such a comprehensive collection of data. The EDB contains power reactor pressure vessel surveillance data, material test reactor data, foreign reactor data (through bilateral agreements authorized by NRC), and fracture toughness data. The lessons learned from building the EDB program and the associated database management activity regarding Material Database Design Methodology, Architecture and the Embedded QA Protocol are described in this report. The development of the IAEA International Database on Reactor Pressure Vessel Materials (IDRPVM) and a comparison of the EDB and IAEA IDRPVM databases are provided in the report. The recommended database QA protocol and database infrastructure are also stated in the report.
Bergamino, Maurizio; Hamilton, David J; Castelletti, Lara; Barletta, Laura; Castellan, Lucio
2015-03-01
In this study, we describe the development and utilization of a relational database designed to manage the clinical and radiological data of patients with brain tumors. The Brain Tumor Database was implemented using MySQL v.5.0, while the graphical user interface was created using PHP and HTML, thus making it easily accessible through a web browser. This web-based approach allows for multiple institutions to potentially access the database. The BT Database can record brain tumor patient information (e.g. clinical features, anatomical attributes, and radiological characteristics) and be used for clinical and research purposes. Analytic tools to automatically generate statistics and different plots are provided. The BT Database is a free and powerful user-friendly tool with a wide range of possible clinical and research applications in neurology and neurosurgery. The BT Database graphical user interface source code and manual are freely available at http://tumorsdatabase.altervista.org. © The Author(s) 2013.
Ezra Tsur, Elishai
2017-01-01
Databases are imperative for research in bioinformatics and computational biology. Current challenges in database design include data heterogeneity and context-dependent interconnections between data entities. These challenges drove the development of unified data interfaces and specialized databases. The curation of specialized databases is an ever-growing challenge due to the introduction of new data sources and the emergence of new relational connections between established datasets. Here, an open-source framework for the curation of specialized databases is proposed. The framework supports user-designed models of data encapsulation, object persistency and structured interfaces to local and external data sources such as MalaCards, Biomodels and the National Center for Biotechnology Information (NCBI) databases. The proposed framework was implemented using Java as the development environment, EclipseLink as the data persistency agent and Apache Derby as the database manager. Syntactic analysis was based on the J3D, jsoup, Apache Commons and w3c.dom open libraries. Finally, the construction of a specialized database for aneurysm-associated vascular diseases is demonstrated. This database contains 3-dimensional geometries of aneurysms, patients' clinical information, articles, biological models, related diseases and our recently published model of aneurysm rupture risk. The framework is available at: http://nbel-lab.com.
Development of flexible pavement database for local calibration of MEPDG : volume 1.
DOT National Transportation Integrated Search
2011-06-01
The new mechanistic-empirical pavement design guide (MEPDG), based on the National Cooperative Highway Research Program (NCHRP) study 1-37A, replaces the widely used but more empirical 1993 AASHTO Guide for Design of Pavement Structures. The MEPD...
Automated Database Schema Design Using Mined Data Dependencies.
ERIC Educational Resources Information Center
Wong, S. K. M.; Butz, C. J.; Xiang, Y.
1998-01-01
Describes a bottom-up procedure for discovering multivalued dependencies in observed data without knowing a priori the relationships among the attributes. The proposed algorithm is an application of a technique designed for learning conditional independencies in probabilistic reasoning; a prototype system for automated database schema design has…
The SBOL Stack: A Platform for Storing, Publishing, and Sharing Synthetic Biology Designs.
Madsen, Curtis; McLaughlin, James Alastair; Mısırlı, Göksel; Pocock, Matthew; Flanagan, Keith; Hallinan, Jennifer; Wipat, Anil
2016-06-17
Recently, synthetic biologists have developed the Synthetic Biology Open Language (SBOL), a data exchange standard for descriptions of genetic parts, devices, modules, and systems. The goals of this standard are to allow scientists to exchange designs of biological parts and systems, to facilitate the storage of genetic designs in repositories, and to facilitate the description of genetic designs in publications. In order to achieve these goals, the development of an infrastructure to store, retrieve, and exchange SBOL data is necessary. To address this problem, we have developed the SBOL Stack, a Resource Description Framework (RDF) database specifically designed for the storage, integration, and publication of SBOL data. This database allows users to define a library of synthetic parts and designs as a service, to share SBOL data with collaborators, and to store designs of biological systems locally. The database also allows external data sources to be integrated by mapping them to the SBOL data model. The SBOL Stack includes two Web interfaces: the SBOL Stack API and SynBioHub. While the former is designed for developers, the latter allows users to upload new SBOL biological designs, download SBOL documents, search by keyword, and visualize SBOL data. Since the SBOL Stack is based on semantic Web technology, the inherent distributed querying functionality of RDF databases can be used to allow different SBOL stack databases to be queried simultaneously, and therefore, data can be shared between different institutes, centers, or other users.
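Because the SBOL Stack is an RDF database, stored designs can be retrieved with SPARQL. The hedged sketch below uses the SPARQLWrapper Python library against a hypothetical endpoint URL; the SBOL 2 namespace and terms shown are believed correct, but the endpoint address and result handling are assumptions for illustration, not the documented SynBioHub API.

    # Hedged sketch: querying an RDF store like the SBOL Stack with SPARQL.
    # Requires the SPARQLWrapper package; endpoint URL is a placeholder.
    from SPARQLWrapper import SPARQLWrapper, JSON

    endpoint = SPARQLWrapper("http://example.org/sbol/sparql")  # hypothetical URL
    endpoint.setQuery("""
        PREFIX sbol: <http://sbols.org/v2#>
        SELECT ?part ?name WHERE {
            ?part a sbol:ComponentDefinition .
            OPTIONAL { ?part sbol:displayId ?name }
        } LIMIT 10
    """)
    endpoint.setReturnFormat(JSON)
    for row in endpoint.query().convert()["results"]["bindings"]:
        print(row["part"]["value"], row.get("name", {}).get("value"))

The same query could be federated across several SBOL Stack instances, which is the distributed-querying property the abstract highlights.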
Ab initio design of potent anti-MRSA peptides based on database filtering technology.
Mishra, Biswajit; Wang, Guangshun
2012-08-01
To meet the challenge of antibiotic resistance worldwide, a new generation of antimicrobials must be developed. This communication demonstrates ab initio design of potent peptides against methicillin-resistant Staphylococcus aureus (MRSA). Our idea is that the peptide is very likely to be active when the most probable parameters are utilized in each step of the design. We derived the most probable parameters (e.g., amino acid composition, peptide hydrophobic content, and net charge) from the antimicrobial peptide database by developing a database filtering technology (DFT). Different from classic cationic antimicrobial peptides usually with high cationicity, DFTamP1, the first anti-MRSA peptide designed using this technology, is a short peptide with high hydrophobicity but low cationicity. Such a molecular design made the peptide highly potent. Indeed, the peptide caused bacterial surface damage and killed community-associated MRSA USA300 in 60 min. Structural determination of DFTamP1 by NMR spectroscopy revealed a broad hydrophobic surface, providing a basis for its potency against MRSA known to deploy positively charged moieties on the surface as a mechanism for resistance. Our ab initio design combined with database screening led to yet another peptide with enhanced potency. Because of the simple composition, short length, stability to proteases, and membrane targeting, the designed peptides are attractive leads for developing novel anti-MRSA therapeutics. Our database-derived design concept can be applied to the design of peptide mimicries to combat MRSA as well.
Schell, Scott R
2006-02-01
Enforcement of the Health Insurance Portability and Accountability Act (HIPAA) began in April 2003. Designed as a law mandating health insurance availability when coverage was lost, HIPAA imposed sweeping and broad-reaching protections of patient privacy. These changes dramatically altered clinical research by placing sizeable regulatory burdens upon investigators, with the threat of severe and costly federal and civil penalties. This report describes the development of an algorithmic approach to clinical research database design based upon a central key-shared data (CK-SD) model, allowing researchers to easily analyze, distribute, and publish clinical research without disclosure of HIPAA Protected Health Information (PHI). Three clinical database formats (small clinical trial, operating room performance, and genetic microchip array datasets) were modeled using standard structured query language (SQL)-compliant databases. The CK database was created to contain PHI data, whereas a shareable SD database was generated in real time containing relevant clinical outcome information while protecting PHI items. Small (< 100 records), medium (< 50,000 records), and large (> 10^8 records) model databases were created, and the resultant data models were evaluated in consultation with a HIPAA compliance officer. The SD database models complied fully with HIPAA regulations, and the resulting "shared" data could be distributed freely. Unique patient identifiers were not required for treatment or outcome analysis. Age data were resolved to single-integer years, grouping patients aged > 89 years. Admission, discharge, treatment, and follow-up dates were replaced with enrollment year, and follow-up/outcome intervals were calculated, eliminating the original data. Two additional data fields identified as PHI (treating physician and facility) were replaced with integer values, and the original data corresponding to these values were stored in the CK database. Use of the algorithm at the time of database design did not increase cost or design effort. The CK-SD model for clinical database design provides an algorithm for investigators to create, maintain, and share clinical research data compliant with HIPAA regulations. This model is applicable to new projects and large institutional datasets, and should decrease the regulatory effort required for the conduct of clinical research. Application of the design algorithm early in the clinical research enterprise does not increase cost or the effort of data collection.
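A minimal sketch of the CK-SD split described above, with invented field names: PHI is retained in a central-key store, while the shareable record keeps only de-identified derivatives (single-integer ages grouped above 89, enrollment year instead of full dates, computed intervals, and integer surrogates for facility). This follows the rules stated in the abstract but is not the authors' implementation.

    # Hedged sketch of a CK-SD style de-identification step; names invented.
    from datetime import date

    ck_store = {}          # PHI kept locally: {record_key: {...}}
    facility_codes = {}    # PHI string -> integer surrogate

    def code(value, table):
        return table.setdefault(value, len(table) + 1)

    def to_sd(record_key, name, birth, admitted, discharged, facility):
        ck_store[record_key] = {"name": name, "facility": facility}  # CK side
        age = (admitted - birth).days // 365
        return {                                         # shareable SD record
            "age_years": min(age, 90),                   # group ages > 89
            "enrollment_year": admitted.year,            # no full dates kept
            "stay_days": (discharged - admitted).days,   # interval, not dates
            "facility_code": code(facility, facility_codes),
        }

    sd = to_sd(1, "Jane Doe", date(1950, 3, 2), date(2005, 1, 10),
               date(2005, 1, 15), "University Hospital")
    print(sd)  # safe to share; ck_store retains the PHI mapping

Generating the SD record at write time, rather than scrubbing afterwards, mirrors the report's point that applying the algorithm at design time adds no extra effort.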
Calabrese, Edward J; Blain, Robyn
2005-02-01
A relational retrieval database has been developed compiling toxicological studies that assess the occurrence of hormetic dose responses and their quantitative characteristics. This database permits an evaluation of these studies over numerous parameters, including study design, dose-response features and the physical/chemical properties of the agents. The database contains approximately 5600 dose-response relationships satisfying evaluative criteria for hormesis, across approximately 900 agents from a broadly diversified spectrum of chemical classes and physical agents. The assessment reveals that hormetic dose-response relationships occur in males and females of numerous animal models in all principal age groups, as well as across species displaying a broad range of differential susceptibilities to toxic agents. The biological models are extensive, including plants, viruses, bacteria, fungi, insects, fish, birds, rodents, and primates, including humans. The spectrum of endpoints displaying hormetic dose responses is also broad, including growth, longevity, numerous metabolic parameters, disease incidence (including cancer), and various performance endpoints such as cognitive function and immune responses, among others. Quantitative features of the hormetic dose response reveal that the vast majority of cases display a maximum stimulatory response less than two-fold greater than the control, while the width of the stimulatory response is typically less than 100-fold in dose range immediately contiguous with the toxicological NO(A)EL. The database also contains a quantitative evaluation component that differentiates among the various dose responses concerning the strength of the evidence supporting a hormetic conclusion, based on study design features, magnitude of the stimulatory response, statistical significance, and reproducibility of findings.
[Developmental status and prospect of musical electroacupuncture].
Wang, Fan; Xu, Chun-Lan; Dong, Gui-Rong; Dong, Hong-Sheng
2014-12-01
Through a search of domestic and foreign medical journals in the CNKI, Wanfang, VIP and PubMed databases from January 2003 to November 2013, 39 articles regarding musical electroacupuncture (MEA) were analyzed. The results showed that MEA is used clinically to treat neurological and psychiatric disorders; because it combines acupuncture with music therapy and overcomes acupuncture tolerance, MEA is superior to traditional electroacupuncture. However, problems remain: research efficiency is low, and existing study designs have not revealed the mechanism of MEA's superiority or the specificity of the music used. In the future, large-sample, multi-center RCTs should be performed to clarify MEA's clinical efficacy. Guided by the five-element theory of TCM, and with modern science and technology and optimized study designs, research on different musical elements and the characteristics of the musical pulse current, as well as on MEA's correlation with meridians and organs, should be carried out to further explore MEA's mechanisms and broaden the range of its clinical application.
ERIC Educational Resources Information Center
National Center on Outcomes Research, Council on Quality and Leadership, Towson, MD.
This report describes the genesis, definition and use of the Personal Outcomes database, a database designed to assess whether programs and services are being effective in helping individuals with disabilities. The database is based on 25 outcome measures in seven domains, including: (1) identity, which is designed to provide a sense of how people…
Pediatric burns: Kids' Inpatient Database vs the National Burn Repository.
Soleimani, Tahereh; Evans, Tyler A; Sood, Rajiv; Hartman, Brett C; Hadad, Ivan; Tholpady, Sunil S
2016-04-01
Burn injuries are one of the leading causes of morbidity and mortality in young children. The Kids' Inpatient Database (KID) and National Burn Repository (NBR) are two large national databases that can be used to evaluate outcomes and help quality improvement in burn care. Differences in the design of the KID and NBR could lead to differing results, affecting the resulting conclusions and quality improvement programs. This study was designed to validate the use of the KID for burn epidemiologic studies, as an adjunct to the NBR. Using the KID (2003, 2006, and 2009), a total of 17,300 nonelective burn patients younger than 20 years old were identified. Data from 13,828 similar patients were collected from the NBR. Outcome variables were compared between the two databases. Comparisons revealed similar patient distributions by gender, race, and burn size. Inhalation injury was more common among the NBR patients and was associated with increased mortality. The rates of respiratory failure, wound infection, cellulitis, sepsis, and urinary tract infection were higher in the KID. Multiple regression analysis adjusting for potential confounders demonstrated a similar mortality rate but significantly longer length of stay for patients in the NBR. Despite differences in the design and sampling of the KID and NBR, the overall demographic and mortality results are similar. The differences in complication rates and length of stay should be explored by further studies to clarify underlying causes. Investigations into these differences should also better inform strategies to improve burn prevention and treatment. Copyright © 2016 Elsevier Inc. All rights reserved.
The ID Database: Managing the Instructional Development Process
ERIC Educational Resources Information Center
Piña, Anthony A.; Sanford, Barry K.
2017-01-01
Management is evolving as a foundational domain to the field of instructional design and technology. However, there are few tools dedicated to the management of instructional design and development projects and activities. In this article, we describe the development, features and implementation of an instructional design database--built from a…
Teaching Database Design with Constraint-Based Tutors
ERIC Educational Resources Information Center
Mitrovic, Antonija; Suraweera, Pramuditha
2016-01-01
Design tasks are difficult to teach, due to large, unstructured solution spaces, underspecified problems, non-existent problem solving algorithms and stopping criteria. In this paper, we comment on our approach to develop KERMIT, a constraint-based tutor that taught database design. In later work, we re-implemented KERMIT as EER-Tutor, and…
The Design and Implementation of Network Teaching Platform Basing on .NET
NASA Astrophysics Data System (ADS)
Yanna, Ren
This paper addresses the problem that students taught under the traditional teaching model develop poor hands-on skills, and studies in depth the network teaching platforms used in domestic colleges and universities, proposing a design concept for a .NET + C# + SQL network teaching platform for an excellent course and designing the overall structure, function modules and back-end database of the platform. The paper emphasizes the use of MD5 hashing to address data security, and the assessment of student learning using ADO.NET database access technology together with a mathematical formula. The example shows that a network teaching platform developed using Web application technology has high safety and availability, and thus improves students' hands-on skills.
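For reference, the hashing step described above can be sketched in a few lines (shown here with Python's hashlib rather than the platform's C#; illustrative only). MD5 is considered weak by current standards, so a modern design would prefer a salted scheme such as bcrypt.

    # Minimal sketch of MD5-hashed credential storage, as the platform describes.
    import hashlib

    def md5_digest(password: str) -> str:
        # MD5 is shown because the paper uses it; it is not collision-resistant.
        return hashlib.md5(password.encode("utf-8")).hexdigest()

    print(md5_digest("s3cret"))  # 32-character hex digest stored in the database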
47 CFR 15.715 - TV bands database administrator.
Code of Federal Regulations, 2014 CFR
2014-10-01
Television Band Devices, § 15.715 TV bands database administrator: The Commission will designate one or more entities to administer the TV bands database(s). The Commission may, at its discretion, permit the...
47 CFR 15.715 - TV bands database administrator.
Code of Federal Regulations, 2011 CFR
2011-10-01
Television Band Devices, § 15.715 TV bands database administrator: The Commission will designate one or more entities to administer the TV bands database(s). The Commission may, at its discretion, permit the...
47 CFR 15.715 - TV bands database administrator.
Code of Federal Regulations, 2013 CFR
2013-10-01
Television Band Devices, § 15.715 TV bands database administrator: The Commission will designate one or more entities to administer the TV bands database(s). The Commission may, at its discretion, permit the...
47 CFR 15.715 - TV bands database administrator.
Code of Federal Regulations, 2012 CFR
2012-10-01
Television Band Devices, § 15.715 TV bands database administrator: The Commission will designate one or more entities to administer the TV bands database(s). The Commission may, at its discretion, permit the...
Clinical study of the Erlanger silver catheter--data management and biometry.
Martus, P; Geis, C; Lugauer, S; Böswald, M; Guggenbichler, J P
1999-01-01
The clinical evaluation of venous catheters for catheter-induced infections must conform to a strict biometric methodology. The statistical planning of the study (target population, design, degree of blinding), data management (database design, definition of variables, coding), quality assurance (data inspection at several levels) and the biometric evaluation of the Erlanger silver catheter project are described. The three-step data flow included: 1) primary data from the hospital, 2) relational database, 3) files accessible for statistical evaluation. Two different statistical models were compared: analyzing the first catheter only of a patient in the analysis (independent data) and analyzing several catheters from the same patient (dependent data) by means of the generalized estimating equations (GEE) method. The main result of the study was based on the comparison of both statistical models.
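The two statistical models compared in the study can be sketched as follows, on synthetic data: a first-catheter-only logistic regression treating observations as independent, and a GEE with an exchangeable working correlation for repeated catheters within a patient. Variable names and data are invented; this uses the statsmodels library and is not the study's actual analysis.

    # Hedged sketch of the two modeling choices; synthetic data, invented names.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 200
    patient = np.repeat(np.arange(50), 4)         # 4 catheters per patient
    silver = rng.integers(0, 2, n)                # 1 = silver catheter
    x = sm.add_constant(silver.astype(float))
    infection = rng.binomial(1, 0.25 - 0.10 * silver)

    # Model 1: first catheter per patient only (independent observations)
    first = np.arange(0, n, 4)
    logit = sm.Logit(infection[first], x[first]).fit(disp=0)

    # Model 2: all catheters, GEE with exchangeable within-patient correlation
    gee = sm.GEE(infection, x, groups=patient,
                 family=sm.families.Binomial(),
                 cov_struct=sm.cov_struct.Exchangeable()).fit()
    print(logit.params, gee.params)

The GEE approach uses all catheters without pretending they are independent, which is why the study contrasts it with the first-catheter-only analysis.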
Yang, Tsun-Po; Beazley, Claude; Montgomery, Stephen B.; Dimas, Antigone S.; Gutierrez-Arcelus, Maria; Stranger, Barbara E.; Deloukas, Panos; Dermitzakis, Emmanouil T.
2010-01-01
Summary: Genevar (GENe Expression VARiation) is a database and Java tool designed to integrate multiple datasets, and provides analysis and visualization of associations between sequence variation and gene expression. Genevar allows researchers to investigate expression quantitative trait loci (eQTL) associations within a gene locus of interest in real time. The database and application can be installed on a standard computer in database mode and, in addition, on a server to share discoveries among affiliations or the broader community over the Internet via web services protocols. Availability: http://www.sanger.ac.uk/resources/software/genevar Contact: emmanouil.dermitzakis@unige.ch PMID:20702402
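A toy example of the kind of eQTL association Genevar lets researchers investigate: regressing expression on genotype dosage (0/1/2) at a single variant. The data are synthetic and the test is a plain linear regression; Genevar itself is a Java tool backed by a database, so this only illustrates the underlying association, not the tool's code.

    # Toy cis-eQTL association test on synthetic data (not Genevar's code).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    genotype = rng.integers(0, 3, 100)                    # allele dosage 0/1/2
    expression = 0.4 * genotype + rng.normal(0.0, 1.0, 100)

    res = stats.linregress(genotype, expression)          # additive-model test
    print(f"effect {res.slope:.2f}, p = {res.pvalue:.1e}")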
Repetitive Bibliographical Information in Relational Databases.
ERIC Educational Resources Information Center
Brooks, Terrence A.
1988-01-01
Proposes a solution to the problem of loading repetitive bibliographic information in a microcomputer-based relational database management system. The alternative design described is based on a representational redundancy design and normalization theory. (12 references) (Author/CLB)
ERIC Educational Resources Information Center
Cheung, Waiman; Li, Eldon Y.; Yee, Lester W.
2003-01-01
Metadatabase modeling and design integrate process modeling and data modeling methodologies. Both are core topics in the information technology (IT) curriculum. Learning these topics has been an important pedagogical issue to the core studies for management information systems (MIS) and computer science (CSc) students. Unfortunately, the learning…
Michaleff, Zoe A; Costa, Leonardo O P; Moseley, Anne M; Maher, Christopher G; Elkins, Mark R; Herbert, Robert D; Sherrington, Catherine
2011-02-01
Many bibliographic databases index research studies evaluating the effects of health care interventions. One study has concluded that the Physiotherapy Evidence Database (PEDro) has the most complete indexing of reports of randomized controlled trials of physical therapy interventions, but the design of that study may have exaggerated estimates of the completeness of indexing by PEDro. The purpose of this study was to compare the completeness of indexing of reports of randomized controlled trials of physical therapy interventions by 8 bibliographic databases. This study was an audit of bibliographic databases. Prespecified criteria were used to identify 400 reports of randomized controlled trials from the reference lists of systematic reviews published in 2008 that evaluated physical therapy interventions. Eight databases (AMED, CENTRAL, CINAHL, EMBASE, Hooked on Evidence, PEDro, PsycINFO, and PubMed) were searched for each trial report. The proportion of the 400 trial reports indexed by each database was calculated. The proportions of the 400 trial reports indexed by the databases were as follows: CENTRAL, 95%; PEDro, 92%; PubMed, 89%; EMBASE, 88%; CINAHL, 53%; AMED, 50%; Hooked on Evidence, 45%; and PsycINFO, 6%. Almost all of the trial reports (99%) were found in at least 1 database, and 88% were indexed by 4 or more databases. Four trial reports were uniquely indexed by a single database only (2 in CENTRAL and 1 each in PEDro and PubMed). The results are only applicable to searching for English-language published reports of randomized controlled trials evaluating physical therapy interventions. The 4 most comprehensive databases of trial reports evaluating physical therapy interventions were CENTRAL, PEDro, PubMed, and EMBASE. Clinicians seeking quick answers to clinical questions could search any of these databases knowing that all are reasonably comprehensive. PEDro, unlike the other 3 most complete databases, is specific to physical therapy, so studies not relevant to physical therapy are less likely to be retrieved. Researchers could use CENTRAL, PEDro, PubMed, and EMBASE in combination to conduct exhaustive searches for randomized trials in physical therapy.
Design sensitivity analysis using EAL. Part 1: Conventional design parameters
NASA Technical Reports Server (NTRS)
Dopker, B.; Choi, Kyung K.; Lee, J.
1986-01-01
A numerical implementation of design sensitivity analysis of built-up structures is presented, using the versatility and convenience of an existing finite element structural analysis code and its database management system. The finite element code used in the implementation presented is the Engineering Analysis Language (EAL), which is based on a hybrid method of analysis. It was shown that design sensitivity computations can be carried out using the database management system of EAL, without writing a separate program or creating a separate database. Conventional (sizing) design parameters such as the cross-sectional area of beams or the thickness of plates and plane elastic solid components are considered. Compliance, displacement, and stress functionals are considered as performance criteria. The method presented is being extended to implement shape design sensitivity analysis using a domain method and a design component method.
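For orientation, the usual adjoint statement of sizing design sensitivity for a static finite element model reads as follows. This is a general textbook form consistent with the compliance, displacement, and stress functionals mentioned, not a formula quoted from the report.

    % General adjoint sensitivity for the static model K(b) u = f(b)
    % with performance functional psi(u(b), b):
    \begin{align*}
    \frac{d\psi}{db} &= \frac{\partial\psi}{\partial b}
      + \lambda^{T}\!\left(\frac{\partial f}{\partial b}
      - \frac{\partial K}{\partial b}\,u\right),
    \qquad
    K^{T}\lambda = \left(\frac{\partial\psi}{\partial u}\right)^{\!T}.
    \end{align*}

Because the factored stiffness matrix and the solution u already live in the analysis code's database, the extra adjoint solve and derivative assembly can reuse that data, which is the point of carrying out the computation inside EAL's database management system.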
ERIC Educational Resources Information Center
Feinberg, Daniel A.
2017-01-01
This study examined the supports that female students sought out and found of value in an online database design course in a health informatics master's program. A target outcome was to help inform the practice of faculty and administrators in similar programs. Health informatics is a growing field that has faced shortages of qualified workers who…
Glanville, Julie; Eyers, John; Jones, Andrew M; Shemilt, Ian; Wang, Grace; Johansen, Marit; Fiander, Michelle; Rothstein, Hannah
2017-09-01
This article reviews the available evidence and guidance on methods to identify reports of quasi-experimental (QE) studies to inform systematic reviews of health care, public health, international development, education, crime and justice, and social welfare. Research, guidance, and examples of search strategies were identified by searching a range of databases, key guidance documents, selected reviews, conference proceedings, and personal communication. Current practice and research evidence were summarized. Four thousand nine hundred twenty-four records were retrieved by database searches, and additional documents were obtained by other searches. QE studies are challenging to identify efficiently because they have no standardized nomenclature and may be indexed in various ways. Reliable search filters are not available. There is a lack of specific resources devoted to collecting QE studies and little evidence on where best to search. Searches to identify QE studies should search a range of resources and, until indexing improves, use strategies that focus on the topic rather than the study design. Better definitions, better indexing in databases, prospective registers, and reporting guidance are required to improve the retrieval of QE studies and promote systematic reviews of what works based on the evidence from such studies. Copyright © 2017 Elsevier Inc. All rights reserved.
Big Data Research in Neurosurgery: A Critical Look at this Popular New Study Design.
Oravec, Chesney S; Motiwala, Mustafa; Reed, Kevin; Kondziolka, Douglas; Barker, Fred G; Michael, L Madison; Klimo, Paul
2018-05-01
The use of "big data" in neurosurgical research has become increasingly popular. However, using this type of data comes with limitations. This study aimed to shed light on this new approach to clinical research. We compiled a list of commonly used databases that were not specifically created to study neurosurgical procedures, conditions, or diseases. Three North American journals were manually searched for articles published since 2000 utilizing these and other non-neurosurgery-specific databases. A number of data points per article were collected, tallied, and analyzed.A total of 324 articles were identified since 2000 with an exponential increase since 2011 (257/324, 79%). The Journal of Neurosurgery Publishing Group published the greatest total number (n = 200). The National Inpatient Sample was the most commonly used database (n = 136). The average study size was 114 841 subjects (range, 30-4 146 777). The most prevalent topics were vascular (n = 77) and neuro-oncology (n = 66). When categorizing study objective (recognizing that many papers reported more than 1 type of study objective), "Outcomes" was the most common (n = 154). The top 10 institutions by primary or senior author accounted for 45%-50% of all publications. Harvard Medical School was the top institution, using this research technique with 59 representations (31 by primary author and 28 by senior).The increasing use of data from non-neurosurgery-specific databases presents a unique challenge to the interpretation and application of the study conclusions. The limitations of these studies must be more strongly considered in designing and interpreting these studies.
NASA Astrophysics Data System (ADS)
Wang, Jian
2017-01-01
In order to change the traditional PE teaching mode and realize the interconnection, interworking and sharing of PE teaching resources, a distance PE teaching platform based on a broadband network is designed and a PE teaching information resource database is set up. The design of the PE teaching information resource database uses Windows NT 4/2000 Server as the operating system platform and Microsoft SQL Server 7.0 as the RDBMS, with NAS technology for data storage and streaming technology for video service. Analysis of the system design and implementation shows that a dynamic PE teaching information resource sharing platform based on Web Services can realize loosely coupled collaboration as well as dynamic and active integration, and has good integration, openness and encapsulation. The distance PE teaching platform based on Web Services and the design scheme of the PE teaching information resource database can effectively realize the interconnection, interworking and sharing of PE teaching resources and adapt to the demands of informatization in PE teaching.
NASA Technical Reports Server (NTRS)
Huber, P. D.; Gallagher, J. P.
1994-01-01
This report describes the organization, format and content of the NASA Johnson damage tolerant database, which was created to store damage tolerant property data for non-aerospace structural materials. The database is designed to store fracture toughness data (K_IC, K_c, J_IC and CTOD_IC), resistance curve data (K_R vs. Δa_eff and J_R vs. Δa_eff), as well as subcritical crack growth data (a vs. N and da/dN vs. ΔK). The database contains complementary material property data for both stainless and alloy steels, as well as for aluminum, nickel, and titanium alloys which were not incorporated into the Damage Tolerant Design Handbook database.
Scammon, Debra L; Tomoaia-Cotisel, Andrada; Day, Rachel L; Day, Julie; Kim, Jaewhan; Waitzman, Norman J; Farrell, Timothy W; Magill, Michael K
2013-01-01
Objective. To demonstrate the value of mixed methods in the study of practice transformation and illustrate procedures for connecting methods and for merging findings to enhance the meaning derived. Data Source/Study Setting. An integrated network of university-owned, primary care practices at the University of Utah (Community Clinics or CCs). CC has adopted Care by Design, its version of the Patient Centered Medical Home. Study Design. Convergent case study mixed methods design. Data Collection/Extraction Methods. Analysis of archival documents, internal operational reports, in-clinic observations, chart audits, surveys, semistructured interviews, focus groups, the Centers for Medicare and Medicaid Services database, and the Utah All Payer Claims Database. Principal Findings. Each data source enriched our understanding of the change process and of the reasons that certain changes were more difficult than others, both in general and for particular clinics. Mixed methods enabled the generation and testing of hypotheses about change and led to a comprehensive understanding of practice change. Conclusions. Mixed methods are useful in studying practice transformation. Challenges exist but can be overcome with careful planning and persistence. PMID:24279836
Learning about and Practice of Designing Local Data Bases as an Harmonizing Factor.
ERIC Educational Resources Information Center
Neelameghan, A.
This paper provides information workers with some practical approaches to the design, development, and use of local databases that form components of information storage and retrieval systems (ISR) and of automated library operations. Topics discussed include: (1) course objectives for the design and development of local databases for library and…
CHSIR Anthropometric Database, CHSIR Truncated Anthropometric Database, and Boundary Manikins
NASA Technical Reports Server (NTRS)
Rajulu, Sudhakar
2011-01-01
The NASA crew anthropometric dimensions that the Commercial Transportation System (CTS) must accommodate are listed in CCT-REQ-1130 Draft 3.0, with the specific critical anthropometric dimensions for use in vehicle design (and suit design in the event that a pressure suit is part of the commercial partner's design solution).
Design Evolution and Performance Characterization of the GTX Air-Breathing Launch Vehicle Inlet
NASA Technical Reports Server (NTRS)
DeBonis, J. R.; Steffen, C. J., Jr.; Rice, T.; Trefny, C. J.
2002-01-01
The design and analysis of a second version of the inlet for the GTX rocket-based combined-cycle launch vehicle is discussed. The previous design did not achieve its predicted performance levels due to excessive turning of low-momentum corner flows and local over-contraction due to asymmetric end-walls. This design attempts to remove these problems by reducing the spike half-angle from 12 to 10 degrees and by implementing true plane-of-symmetry end-walls. Axisymmetric Reynolds-averaged Navier-Stokes simulations, using both perfect gas and real gas (finite-rate chemistry) assumptions, were performed to aid in the design process and to create a comprehensive database of inlet performance. The inlet design, which operates over the entire air-breathing Mach number range from 0 to 12, and the performance database are presented. The performance database, for use in cycle analysis, includes predictions of mass capture, pressure recovery, throat Mach number, drag force, and heat load for the entire Mach range. Results of the computations are compared with experimental data to validate the performance database.
Data Base Design Using Entity-Relationship Models.
ERIC Educational Resources Information Center
Davis, Kathi Hogshead
1983-01-01
The entity-relationship (ER) approach to database design is defined, and a specific example of an ER model (personnel-payroll) is examined. The requirements for converting ER models into specific database management systems are discussed. (Author/MSE)
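To make the conversion step concrete, the sketch below maps an invented fragment of a personnel-payroll ER model to relational tables: entities become tables, a one-to-many works-in relationship becomes a foreign key, and a weak entity (paycheck) folds the owner's key into its primary key. The names are assumptions for illustration, not the model from the paper.

    # Hedged ER-to-relational sketch for an invented personnel-payroll fragment.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE department (
        dept_id INTEGER PRIMARY KEY,
        name TEXT NOT NULL
    );
    CREATE TABLE employee (
        emp_id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        dept_id INTEGER REFERENCES department(dept_id)  -- works-in relationship
    );
    CREATE TABLE paycheck (                             -- weak entity
        emp_id INTEGER REFERENCES employee(emp_id),
        pay_date TEXT,
        gross REAL,
        PRIMARY KEY (emp_id, pay_date)                  -- owner key + partial key
    );
    """)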
[Establishment of database with standard 3D tooth crowns based on 3DS MAX].
Cheng, Xiaosheng; An, Tao; Liao, Wenhe; Dai, Ning; Yu, Qing; Lu, Peijun
2009-08-01
The database of standard 3D tooth crowns lays the groundwork for dental CAD/CAM systems. In this paper, we design standard tooth crowns in 3DS MAX 9.0 and successfully create a database of these models. First, key lines are collected from standard tooth images; then 3DS MAX 9.0 is used to design the digital tooth models based on these lines, with reference to the standard plaster tooth models during the design process. Testing shows that the standard tooth models designed with this method are accurate and adaptable; furthermore, operations on the models such as deformation and translation are easy to perform. This method provides a new approach to building a database of standard 3D tooth crowns and a basis for dental CAD/CAM systems.
1993-03-25
This work applies Object-Oriented Programming (OOP) and Human-Computer Interface (HCI) design principles; knowledge gained from each topic is incorporated into the design of a form-based interface for database data.
NASA Technical Reports Server (NTRS)
Levack, Daniel J. H.
2000-01-01
The Alternate Propulsion Subsystem Concepts contract had seven tasks defined that are reported under this contract deliverable. The tasks were: F-1A Restart Study, J-2S Restart Study, Propulsion Database Development, SSME Upper Stage Use, CERs for Liquid Propellant Rocket Engines, Advanced Low Cost Engines, and Tripropellant Comparison Study. The two restart studies, F-1A and J-2S, generated program plans for restarting production of each engine. Special emphasis was placed on determining changes to individual parts due to obsolete materials, changes in OSHA and environmental concerns, new processes available, and any configuration changes to the engines. The Propulsion Database Development task developed a database structure and format which is easy to use and modify while also being comprehensive in the level of detail available. The database structure includes extensive engine information and allows for parametric data generation for conceptual engine concepts. The SSME Upper Stage Use task examined the changes needed or desirable to use the SSME as an upper stage engine, both in a second stage and in a translunar injection stage. The CERs for Liquid Engines task developed quantitative parametric cost estimating relationships at the engine and major subassembly level for estimating development and production costs of chemical propulsion liquid rocket engines. The Advanced Low Cost Engines task examined propulsion systems for SSTO applications, including engine concept definition, mission analysis, trade studies, operating point selection, turbomachinery alternatives, life cycle cost, weight definition, and point design conceptual drawings and component design. The task concentrated on bipropellant engines, but also examined tripropellant engines. The Tripropellant Comparison Study task provided an unambiguous comparison among various tripropellant implementation approaches and cycle choices, and then compared them to similarly designed bipropellant engines in the SSTO mission. This volume overviews each of the tasks, giving its objectives, main results, and conclusions. More detailed final task reports are available on each individual task.
Reddy, T.B.K.; Thomas, Alex D.; Stamatis, Dimitri; Bertsch, Jon; Isbandi, Michelle; Jansson, Jakob; Mallajosyula, Jyothi; Pagani, Ioanna; Lobos, Elizabeth A.; Kyrpides, Nikos C.
2015-01-01
The Genomes OnLine Database (GOLD; http://www.genomesonline.org) is a comprehensive online resource to catalog and monitor genetic studies worldwide. GOLD provides up-to-date status on complete and ongoing sequencing projects along with a broad array of curated metadata. Here we report version 5 (v.5) of the database. The newly designed database schema and web user interface supports several new features including the implementation of a four level (meta)genome project classification system and a simplified intuitive web interface to access reports and launch search tools. The database currently hosts information for about 19 200 studies, 56 000 Biosamples, 56 000 sequencing projects and 39 400 analysis projects. More than just a catalog of worldwide genome projects, GOLD is a manually curated, quality-controlled metadata warehouse. The problems encountered in integrating disparate and varying quality data into GOLD are briefly highlighted. GOLD fully supports and follows the Genomic Standards Consortium (GSC) Minimum Information standards. PMID:25348402
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Tengfang; Piette, Mary Ann
2004-08-05
The original scope of work was to obtain and analyze existing and emerging data in four states: California, Florida, New York, and Wisconsin. The goal of this data collection was to deliver a baseline database, or recommendations for such a database, that could contain window and daylighting features and energy performance characteristics of Kindergarten through 12th grade (K-12) school buildings (or of classrooms when available). In particular, data analyses were performed based upon the California Commercial End-Use Survey (CEUS) databases to understand school energy use, features of window glazing, and availability of daylighting in California K-12 schools. The outcomes from this baseline task can be used to assist in establishing a database of school energy performance, assessing applications of existing technologies relevant to window and daylighting design, and identifying future R&D needs. These are in line with the overall project goals as outlined in the proposal.

Through the review and analysis of this data, it is clear that there are many compounding factors impacting energy use in K-12 school buildings in the U.S., and that there are various challenges in understanding the impact of K-12 classroom energy use associated with design features of window glazing and skylights.

First, the energy data in the existing CEUS databases has, at most, provided aggregated electricity and/or gas usage for building establishments that include other school facilities on top of the classroom spaces. Although the percentage of classroom floor area in schools is often available from the databases, there is no additional information that can be used to quantitatively segregate the EUI for classroom spaces. In order to quantify the EUI for classrooms, sub-metered energy usage by classroom must be obtained.

Second, magnitudes of energy use for electric lighting are not attainable from the existing databases, nor are the lighting levels contributed by artificial lighting or daylight. It is impossible to reasonably estimate the lighting energy consumption for classroom areas in the sample of schools studied in this project.

Third, there are many other compounding factors that may influence overall classroom energy use, e.g., ventilation, insulation, system efficiency, occupancy, control, schedules, and weather.

Fourth, although we have examined the school EUI grouped by various factors such as climate zones and window and daylighting design features from the California databases, no statistically significant associations could be identified from the sampled California K-12 schools in the current California CEUS. There are opportunities to expand such analyses by developing and including more powerful CEUS databases in the future.

Finally, a list of parameters is recommended for future database development and for use in future investigation of K-12 classroom energy use, window and skylight design, and possible relations between them. Some of the key parameters include: (1) energy end-use data for lighting systems, classrooms, and schools; (2) building design and operation, including features for windows and daylighting; and (3) other key parameters and information that would be available to investigate overall energy uses, building and systems design, their operation, and services provided.
Efficient hiding of confidential high-utility itemsets with minimal side effects
NASA Astrophysics Data System (ADS)
Lin, Jerry Chun-Wei; Hong, Tzung-Pei; Fournier-Viger, Philippe; Liu, Qiankun; Wong, Jia-Wei; Zhan, Justin
2017-11-01
Privacy preserving data mining (PPDM) is an emerging research problem that has become critical in recent decades. PPDM consists of hiding sensitive information to ensure that it cannot be discovered by data mining algorithms. Several PPDM algorithms have been developed. Most of them are designed for hiding sensitive frequent itemsets or association rules. Hiding sensitive information in a database can have several side effects, such as hiding other non-sensitive information and introducing redundant information. Finding the set of itemsets or transactions to be sanitised that minimises side effects is an NP-hard problem. In this paper, a genetic algorithm (GA) using transaction deletion is designed to hide sensitive high-utility itemsets for privacy preserving utility mining (PPUM). A flexible fitness function with three adjustable weights is used to evaluate the goodness of each chromosome for hiding sensitive high-utility itemsets. To speed up the evolution process, the pre-large concept is adopted in the designed algorithm. It reduces the number of database scans required for verifying the goodness of an evaluated chromosome. Extensive experiments were conducted to compare the performance of the designed GA approach (with/without the pre-large concept) with a GA-based approach relying on transaction insertion and a non-evolutionary algorithm, in terms of execution time, side effects, database integrity and utility integrity. Results demonstrate that the proposed algorithm hides sensitive high-utility itemsets with fewer side effects than previous studies, while preserving high database and utility integrity.
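As an illustration of the weighted, multi-objective fitness idea described in the abstract, the following minimal Python sketch scores a chromosome (a set of transactions to delete) by three weighted side effects. The toy database, weights, threshold, and the choice of third term (the number of deleted transactions) are assumptions for illustration, not the paper's actual formulation.

```python
from itertools import combinations

# Toy utility database: each transaction maps item -> utility (hypothetical).
DB = [
    {"a": 5, "b": 3, "c": 1},
    {"a": 4, "c": 2, "d": 6},
    {"b": 2, "c": 3, "d": 1},
    {"a": 6, "b": 1, "d": 2},
]
MIN_UTIL = 9                       # hypothetical minimum-utility threshold
SENSITIVE = [frozenset("ad")]      # itemsets to hide

def itemset_utility(db, itemset):
    """Total utility of an itemset over the transactions that contain it."""
    return sum(sum(t[i] for i in itemset) for t in db if itemset.issubset(t))

def high_utility_itemsets(db):
    items = {i for t in db for i in t}
    return {frozenset(c)
            for r in range(1, len(items) + 1)
            for c in combinations(items, r)
            if itemset_utility(db, frozenset(c)) >= MIN_UTIL}

def fitness(chromosome, w1=0.6, w2=0.3, w3=0.1):
    """Chromosome = indices of transactions to delete; lower fitness is better."""
    sanitized = [t for k, t in enumerate(DB) if k not in chromosome]
    before, after = high_utility_itemsets(DB), high_utility_itemsets(sanitized)
    hiding_failures = sum(1 for s in SENSITIVE if s in after)
    missing = len((before - after) - set(SENSITIVE))  # lost non-sensitive HUIs
    return w1 * hiding_failures + w2 * missing + w3 * len(chromosome)

# Compare two candidate deletion sets; a GA would evolve populations of these.
print(fitness({1}), fitness({1, 3}))
```

In a full GA, selection, crossover, and mutation would evolve such deletion sets toward lower fitness; the pre-large concept described in the paper would additionally avoid rescanning the whole database for each evaluation.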
Cook, Jenny; McKevitt, Christopher
2018-01-01
Objective To investigate how different lay and professional groups perceive and understand the use of routinely collected general practice patient data for research, public health, service evaluation and commissioning. Design, method, participants and setting We conducted a multimethod, qualitative study. This entailed participant observation of the design and delivery of a series of deliberative engagement events about a local patient database made up of routine primary care data. We also completed semistructured interviews with key professionals involved in the database. Qualitative data were thematically analysed. The research took place in an inner city borough in England. Results Of the community groups who participated in the six engagement events (111 individual citizens), five were health focused. It was difficult to recruit other types of organisations. Participants supported the uses of the database, but it was unclear how well they understood its scope and purpose. They had concerns about transparency, security and the potential misuse of data. Overall, they were more focused on the need for immediate investment in primary care capacity than on data infrastructures to improve future health. The 10 interviewed professionals identified the purpose of the database in different ways, according to their interests. They emphasised the promise of the database as a resource for health research in its own right and through linkage to other datasets. Conclusions Findings demonstrate positive attitudes towards the uses of this local database, but a disconnect between the long-term purposes of the database and participants’ short-term priorities for healthcare quality. Varying understandings of the database, and the potential for it to be used in multiple different ways in the future, underscore the need for systematic and routine public engagement to develop and maintain public awareness. Problems recruiting community groups signal a need to consider how to engage wider audiences more effectively. PMID:29317420
2004-04-01
To develop a large database on the clinical presentation, treatment and prognosis of all clinically diagnosed severe acute respiratory syndrome (SARS) cases in Beijing during the 2003 "crisis", in order to conduct further clinical studies. The database was designed by specialists, under the organization of the Beijing Commanding Center for SARS Treatment and Cure, and included 686 data items in six sub-databases: primary medical-care seeking, vital signs, common symptoms and signs, treatment, laboratory and auxiliary tests, and cost. All hospitals that had received SARS inpatients were involved in the project. Clinical data were transferred and coded by trained doctors, and data entry was carried out by trained nurses, according to a uniform protocol. A series of procedures was carried out before the database was finally established, including programmed logic checking, a digit-by-digit check on a 5% random sample, data linkage for transferred cases, coding of characterized information, database structure standardization, case review by a computer program according to the SARS Clinical Diagnosis Criteria issued by the Ministry of Health, and exclusion of unqualified patients. The database involved 2148 probable SARS cases in accordance with the clinical diagnosis criteria, including 1291 with complete records. All cases and record-complete cases showed an almost identical distribution in sex, age, occupation, residence area and time of onset. The completion rate of data was not significantly different between the two groups except for some items on primary medical-care seeking. Specifically, the data completion rate was 73% - 100% for primary medical-care seeking, 90% for common symptoms and signs, 100% for treatment, 98% for temperature, 90% for pulse, 100% for outcomes and 98% for costs in hospital. The coverage of cases collected in the Beijing Clinical Database of SARS Patients was fairly complete. Cases with complete records could serve as excellent representatives of all cases. The completeness of data was quite satisfactory for the primary clinical items, allowing for further clinical studies.
A UML Profile for Developing Databases that Conform to the Third Manifesto
NASA Astrophysics Data System (ADS)
Eessaar, Erki
The Third Manifesto (TTM) presents the principles of a relational database language that is free of the deficiencies and ambiguities of SQL. There are database management systems that are created according to TTM. Developers need tools that support the development of databases using these database management systems. UML is a widely used visual modeling language. It provides a built-in extension mechanism that makes it possible to extend UML by creating profiles. In this paper, we introduce a UML profile for designing databases that correspond to the rules of TTM. We created the first version of the profile by translating existing profiles for SQL database design. After that, we extended and improved the profile. We implemented the profile using the UML CASE system StarUML™. We present an example of using the new profile. In addition, we describe problems that occurred during the profile development.
Rollover Data Special Study : Final Report.
DOT National Transportation Integrated Search
2011-01-31
This report summarizes research results from the Rollover Data Special Study (RODSS) project. The research encompassed the design of a RODSS database for the National Highway Traffic Safety Administration, review of the RODSS data to evaluate the ...
NASA Technical Reports Server (NTRS)
Saile, Lynn; Lopez, Vilma; Bickham, Grandin; FreiredeCarvalho, Mary; Kerstman, Eric; Byrne, Vicky; Butler, Douglas; Myers, Jerry; Walton, Marlei
2011-01-01
This slide presentation reviews the Integrated Medical Model (IMM) database, an organized evidence base for assessing in-flight crew health risk. The database is a relational database accessible to many users. It quantifies the model inputs by ranking each on a Level of Evidence (LOE), based on the highest value of the data, and a Quality of Evidence (QOE) score that assesses the evidence base for each medical condition. The IMM evidence base has already provided invaluable information for designers and for other uses.
Optics Toolbox: An Intelligent Relational Database System For Optical Designers
NASA Astrophysics Data System (ADS)
Weller, Scott W.; Hopkins, Robert E.
1986-12-01
Optical designers were among the first to use the computer as an engineering tool. Powerful programs have been written to do ray-trace analysis, third-order layout, and optimization. However, newer computing techniques such as database management and expert systems have not been adopted by the optical design community. For the purpose of this discussion we will define a relational database system as a database which allows the user to specify his requirements using logical relations. For example, to search for all lenses in a lens database with an F/number less than two and a half field of view near 28 degrees, you might enter the following:

FNO < 2.0 and FOV of 28 degrees ± 5%

Again for the purpose of this discussion, we will define an expert system as a program which contains expert knowledge, can ask intelligent questions, and can form conclusions based on the answers given and the knowledge which it contains. Most expert systems store this knowledge in the form of rules-of-thumb, which are written in an English-like language and are easily modified by the user. An example rule is:

IF require microscope objective in air and require NA > 0.9 THEN suggest the use of an oil immersion objective

The heart of the expert system is the rule interpreter, sometimes called an inference engine, which reads the rules and forms conclusions based on them. The use of a relational database system containing lens prototypes seems to be a viable prospect. However, it is not clear that expert systems have a place in optical design. In domains such as medical diagnosis and petrology, expert systems are flourishing. These domains are quite different from optical design, however, because optical design is a creative process, and the rules are difficult to write down. We do think that an expert system is feasible in the area of first-order layout, which is sufficiently diagnostic in nature to permit useful rules to be written. This first-order expert would emulate an expert designer as he interacts with a customer for the first time: asking the right questions, forming conclusions, and making suggestions. With these objectives in mind, we have developed the Optics Toolbox. Optics Toolbox is actually two programs in one: a powerful relational database system with twenty-one search parameters, four search modes, and multi-database support, as well as a first-order optical design expert system with a rule interpreter which has full access to the relational database. The system schematic is shown in Figure 1.
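The quoted query and rule can be mimicked in a few lines of modern code. The sketch below is a hypothetical reconstruction, not the actual Toolbox implementation; the lens records, field names, and rule encoding are invented for illustration.

```python
# Toy lens catalog; fields are assumptions, not the Toolbox's actual schema.
lenses = [
    {"name": "LP-101", "fno": 1.8, "fov": 28.5},
    {"name": "LP-102", "fno": 2.4, "fov": 28.0},
    {"name": "LP-103", "fno": 1.4, "fov": 45.0},
]

# Relational-style query: FNO < 2.0 and FOV of 28 degrees +/- 5%
hits = [l for l in lenses if l["fno"] < 2.0 and abs(l["fov"] - 28) <= 0.05 * 28]
print([l["name"] for l in hits])            # -> ['LP-101']

# Minimal rule interpreter: each rule pairs a condition with a suggestion.
rules = [
    (lambda f: f.get("application") == "microscope objective in air"
               and f.get("NA", 0) > 0.9,
     "suggest the use of an oil immersion objective"),
]

def infer(facts):
    """Fire every rule whose condition holds for the given facts."""
    return [conclusion for cond, conclusion in rules if cond(facts)]

print(infer({"application": "microscope objective in air", "NA": 0.95}))
```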
Promise and Limitations of Big Data Research in Plastic Surgery.
Zhu, Victor Zhang; Tuggle, Charles Thompson; Au, Alexander Francis
2016-04-01
The use of "Big Data" in plastic surgery outcomes research has increased dramatically in the last 5 years. This article addresses some of the benefits and limitations of such research. This is a narrative review of large database studies in plastic surgery. There are several benefits to database research as compared with traditional forms of research, such as randomized controlled studies and cohort studies. These include the ease in patient recruitment, reduction in selection bias, and increased generalizability. As such, the types of outcomes research that are particularly suited for database studies include determination of geographic variations in practice, volume outcome analysis, evaluation of how sociodemographic factors affect access to health care, and trend analyses over time. The limitations of database research include data which are limited only to what was captured in the database, high power which can cause clinically insignificant differences to achieve statistical significance, and fishing which can lead to increased type I errors. The National Surgical Quality Improvement Project is an important general surgery database that may be useful for plastic surgeons because it is validated and has a large number of patients after over a decade of collecting data. The Tracking Operations and Outcomes for Plastic Surgeons Program is a newer database specific to plastic surgery. Databases are a powerful tool for plastic surgery outcomes research. It is critically important to understand their benefits and limitations when designing research projects or interpreting studies whose data have been drawn from them. For plastic surgeons, National Surgical Quality Improvement Project has a greater number of publications, but Tracking Operations and Outcomes for Plastic Surgeons Program is the most applicable database for plastic surgery research.
The Research of Computer Aided Farm Machinery Designing Method Based on Ergonomics
NASA Astrophysics Data System (ADS)
Gao, Xiyin; Li, Xinling; Song, Qiang; Zheng, Ying
Along with the development of the agricultural economy, the variety of farm machinery products increases gradually and ergonomics issues are becoming more and more prominent. The widespread application of computer aided machinery design makes farm machinery design intuitive, flexible and convenient. At present, because existing computer aided ergonomics software lacks a human body database suited to farm machinery design in China, ergonomics analyses in farm machinery design show deviations. This article proposes using the open database interface in CATIA to establish a human body database aimed at farm machinery design. By reading the human body data into the ergonomics module of CATIA to produce virtual bodies for practical application, and using the human posture analysis and human activity analysis modules to analyze the ergonomics of farm machinery, a computer aided farm machinery design method based on ergonomics can be realized.
Designing Reliable Cohorts of Cardiac Patients across MIMIC and eICU
Chronaki, Catherine; Shahin, Abdullah; Mark, Roger
2016-01-01
The design of the patient cohort is an essential and fundamental part of any clinical patient study. Knowledge of the electronic health records, the underlying database management system, and the relevant clinical workflows is central to an effective cohort design. However, with technical, semantic, and organizational interoperability limitations, the database queries associated with a patient cohort may need to be reconfigured at every participating site. i2b2 and SHRINE advance the notion of patient cohorts as first-class objects to be shared, aggregated, and recruited for research purposes across clinical sites. This paper reports on initial efforts to assess the integration of the Medical Information Mart for Intensive Care (MIMIC) and Philips eICU, two large-scale anonymized intensive care unit (ICU) databases, using standard terminologies, i.e. LOINC, ICD9-CM and SNOMED-CT. The focus of this work is on lab and microbiology observations and key demographics for patients with a primary cardiovascular ICD9-CM diagnosis. Results and discussion, reflecting on core reference terminology standards, offer insights on efforts to combine detailed intensive care data from multiple ICUs worldwide. PMID:27774488
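A cohort query of the kind described might look like the following sqlite3 sketch. The table and column names loosely echo MIMIC-III conventions (diagnoses_icd, labevents) but are simplified assumptions, and the ICD9-CM range 390-459 (diseases of the circulatory system) stands in for a primary cardiovascular diagnosis.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE diagnoses_icd (subject_id INT, seq_num INT, icd9_code TEXT);
CREATE TABLE labevents    (subject_id INT, loinc_code TEXT, value REAL);
INSERT INTO diagnoses_icd VALUES (1, 1, '410.71'), (2, 1, '250.00'), (2, 2, '428.0');
INSERT INTO labevents VALUES (1, '2160-0', 1.1), (2, '2160-0', 0.9);
""")

-- = None  # placeholder removed; query follows
rows = con.execute("""
SELECT d.subject_id, l.loinc_code, l.value
FROM diagnoses_icd d
JOIN labevents l ON l.subject_id = d.subject_id
WHERE d.seq_num = 1   -- primary diagnosis only
  AND CAST(substr(d.icd9_code, 1, 3) AS INT) BETWEEN 390 AND 459
""").fetchall()
print(rows)   # -> [(1, '2160-0', 1.1)]
```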
The Design of Lexical Database for Indonesian Language
NASA Astrophysics Data System (ADS)
Gunawan, D.; Amalia, A.
2017-03-01
Kamus Besar Bahasa Indonesia (KBBI), the official dictionary of the Indonesian language, provides lists of words with their meanings. The online version can be accessed via the Internet. Another online dictionary is Kateglo. KBBI online and Kateglo only provide an interface for humans. A machine cannot easily retrieve data from the dictionary without using advanced techniques. However, lexical information about words is required in research and application development related to natural language processing, text mining, information retrieval and sentiment analysis. To address this requirement, we need to build a lexical database which provides well-defined structured information about words. A well-known lexical database is WordNet, which provides the relations among words in English. This paper proposes the design of a lexical database for the Indonesian language based on the combination of the KBBI 4th edition, Kateglo and the WordNet structure. Knowledge representation utilizing semantic networks depicts the relations among words and provides the new structure of the lexical database for the Indonesian language. The result of this design can be used as the foundation to build the lexical database for the Indonesian language.
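A WordNet-style lexical database reduces to words plus typed relations between them. The sketch below uses two relation types and a few Indonesian example words; the relation inventory is an assumption modeled on WordNet, not the paper's final schema.

```python
# Typed relations between words: (relation, source, target).
relations = [
    ("synonym",  "besar",  "agung"),     # big / grand
    ("hypernym", "kucing", "hewan"),     # cat -> animal
    ("hypernym", "hewan",  "organisme"), # animal -> organism
]

def related(word, rel):
    return [t for r, s, t in relations if r == rel and s == word]

def hypernym_chain(word):
    """Follow hypernym links upward, as a WordNet-style browser would."""
    chain = []
    while True:
        ups = related(word, "hypernym")
        if not ups:
            return chain
        word = ups[0]
        chain.append(word)

print(related("besar", "synonym"))   # -> ['agung']
print(hypernym_chain("kucing"))      # -> ['hewan', 'organisme']
```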
Generation of an Aerothermal Data Base for the X33 Spacecraft
NASA Technical Reports Server (NTRS)
Roberts, Cathy; Huynh, Loc
1998-01-01
The X-33 experimental program is a cooperative program between industry and NASA, managed by Lockheed-Martin Skunk Works to develop an experimental vehicle to demonstrate new technologies for a single-stage-to-orbit, fully reusable launch vehicle (RLV). One of the new technologies to be demonstrated is an advanced Thermal Protection System (TPS) being designed by BF Goodrich (formerly Rohr, Inc.) with support from NASA. The calculation of an aerothermal database is crucial to identifying the critical design environment data for the TPS. The NASA Ames X-33 team has generated such a database using Computational Fluid Dynamics (CFD) analyses, engineering analysis methods and various programs to compare and interpolate the results from the CFD and the engineering analyses. This database, along with a program used to query the database, is used extensively by several X-33 team members to help them in designing the X-33. This paper will describe the methods used to generate this database, the program used to query the database, and will show some of the aerothermal analysis results for the X-33 aircraft.
A Graphics Design Framework to Visualize Multi-Dimensional Economic Datasets
ERIC Educational Resources Information Center
Chandramouli, Magesh; Narayanan, Badri; Bertoline, Gary R.
2013-01-01
This study implements a prototype graphics visualization framework to visualize multidimensional data. This graphics design framework serves as a "visual analytical database" for visualization and simulation of economic models. One of the primary goals of any kind of visualization is to extract useful information from colossal volumes of…
Automated database design from natural language input
NASA Technical Reports Server (NTRS)
Gomez, Fernando; Segami, Carlos; Delaune, Carl
1995-01-01
Users and programmers of small systems typically do not have the skills needed to design a database schema from an English description of a problem. This paper describes a system that automatically designs databases for such small applications from English descriptions provided by end-users. Although the system has been motivated by the space applications at Kennedy Space Center, and portions of it have been designed with that idea in mind, it can be applied to different situations. The system consists of two major components: a natural language understander and a problem-solver. The paper briefly describes the knowledge representation structures constructed by the natural language understander and then explains the problem-solver in detail.
SM-TF: A structural database of small molecule-transcription factor complexes.
Xu, Xianjin; Ma, Zhiwei; Sun, Hongmin; Zou, Xiaoqin
2016-06-30
Transcription factors (TFs) are the proteins involved in the transcription process, ensuring the correct expression of specific genes. Numerous diseases arise from the dysfunction of specific TFs. In fact, over 30 TFs have been identified as therapeutic targets of about 9% of the approved drugs. In this study, we created a structural database of small molecule-transcription factor (SM-TF) complexes, available online at http://zoulab.dalton.missouri.edu/SM-TF. The 3D structures of the co-bound small molecule and the corresponding binding sites on TFs are provided in the database, serving as a valuable resource to assist structure-based drug design related to TFs. Currently, the SM-TF database contains 934 entries covering 176 TFs from a variety of species. The database is further classified into several subsets by species and organisms. The entries in the SM-TF database are linked to the UniProt database and other sequence-based TF databases. Furthermore, the druggable TFs from human and the corresponding approved drugs are linked to the DrugBank. © 2016 Wiley Periodicals, Inc.
USDA-ARS?s Scientific Manuscript database
The objective of this study was to evaluate by meta-analysis the effect of experimental design on the production response functions obtained when changing crude protein levels in lactating dairy cow diets. The final database of studies that met the selection criteria contained 55 publications with 2...
Interventions for family members caring for an elder with dementia.
Acton, Gayle J; Winter, Mary A
2002-01-01
This chapter reviews 73 published and unpublished research reports of interventions for family members caring for an elder with dementia, by nurse researchers and researchers from other disciplines. Reports were identified through searches of MEDLINE, CINAHL, Social Science Index, PsycINFO, ERIC, Social Work Abstracts, the American Association of Retired Persons database, the CRISP index of the National Institutes of Health, the Cochrane Center database, and Dissertation Abstracts using the following search terms: caregiver, caregiving, dementia, Alzheimer's, intervention study, evaluation study, experimental, and quasi-experimental design. Additional keywords were used to narrow or expand the search as necessary. All nursing research was included in the review, and nonnursing research was included if published between 1991 and 2001. Studies were included if they used a design with a treatment and control group or a one-group, pretest-posttest design (ex post facto designs were included if they used a comparison group). Key findings show that approximately 32% of the study outcomes (e.g., burden, depression, knowledge) changed in the desired direction after intervention. In addition, several problematic issues were identified, including small, diverse samples; lack of intervention specificity; diversity in the length, duration, and intensity of the intervention strategies; and problematic outcome measures.
Short Fiction on Film: A Relational DataBase.
ERIC Educational Resources Information Center
May, Charles
Short Fiction on Film is a database that was created and will run on DataRelator, a relational database manager created by Bill Finzer for the California State Department of Education in 1986. DataRelator was designed for use in teaching students database management skills and to provide teachers with examples of how a database manager might be…
[Design and development of an online system of parasite's images for training and evaluation].
Yuan-Chun, Mao; Sui, Xu; Jie, Wang; Hua-Yun, Zhou; Jun, Cao
2017-08-08
To design and develop an online training and evaluation system for parasitic pathogen recognition. The system was based on the Parasitic Diseases Specimen Image Digitization Construction Database, using MySQL 5.0 as the database development software and PHP 5 as the interface development language. It was mainly used for online training and evaluation of parasitic pathology diagnostic techniques. The system interface was designed to be simple, flexible, and easy for medical staff to operate. It enabled 24-hour access to online training, study and evaluation, thus breaking the time and space constraints of traditional training models. The system provides a shared platform for professional training on parasitic diseases, and a reference for other training tasks.
Data Model and Relational Database Design for Highway Runoff Water-Quality Metadata
Granato, Gregory E.; Tessler, Steven
2001-01-01
A national highway and urban runoff water-quality metadatabase was developed by the U.S. Geological Survey in cooperation with the Federal Highway Administration as part of the National Highway Runoff Water-Quality Data and Methodology Synthesis (NDAMS). The database was designed to catalog available literature and to document results of the synthesis in a format that would facilitate current and future research on highway and urban runoff. This report documents the design and implementation of the NDAMS relational database, which was designed to provide a catalog of available information and the results of an assessment of the available data. All the citations and the metadata collected during the review process are presented in a stratified metadatabase that contains citations for relevant publications, abstracts (or previa), and report-review metadata for a sample of selected reports that document results of runoff quality investigations. The database is referred to as a metadatabase because it contains information about available data sets rather than a record of the original data. The database contains the metadata needed to evaluate and characterize how valid, current, complete, comparable, and technically defensible published and available information may be when evaluated for application to the different data-quality objectives as defined by decision makers. This database is a relational database, in that all information is ultimately linked to a given citation in the catalog of available reports. The main database file contains 86 tables consisting of 29 data tables, 11 association tables, and 46 domain tables. The data tables all link to a particular citation, and each data table is focused on one aspect of the information collected in the literature search and the evaluation of available information. This database is implemented in the Microsoft (MS) Access database software because it is widely used within and outside of government and is familiar to many existing and potential customers. The stratified metadatabase design for the NDAMS program is presented in the MS Access file DBDESIGN.mdb and documented with a data dictionary in the NDAMS_DD.mdb file recorded on the CD-ROM. The data dictionary file includes complete documentation of the table names, table descriptions, and information about each of the 419 fields in the database.
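The citation-anchored design described above can be sketched with two linked tables. The following sqlite3 example is illustrative only; the column names are invented, and the real database has 86 tables rather than two.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")
con.executescript("""
CREATE TABLE citation (
    citation_id INTEGER PRIMARY KEY,
    title TEXT, year INT);
CREATE TABLE report_review (              -- one aspect of the metadata
    review_id INTEGER PRIMARY KEY,
    citation_id INT NOT NULL REFERENCES citation(citation_id),
    qa_qc_documented TEXT);               -- e.g. 'yes'/'no'/'partial'
""")
con.execute("INSERT INTO citation VALUES (1, 'Runoff quality study', 1998)")
con.execute("INSERT INTO report_review VALUES (NULL, 1, 'partial')")

# Every metadata record resolves back to its citation in the catalog.
print(con.execute("""
SELECT c.title, r.qa_qc_documented
FROM report_review r JOIN citation c USING (citation_id)
""").fetchall())
```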
NASA Astrophysics Data System (ADS)
Zhou, Hui
Implementing office and departmental target responsibility systems is an inevitable outcome of higher education reform, and statistical processing of student information is an important part of student performance review. On the basis of an analysis of student evaluation, a student information management database application system is designed in this paper using relational database management system software. In order to implement the functions of student information management, the functional requirements, overall structure, data sheets and fields, data sheet associations and software code are designed in detail.
Pallivalappila, Abdul Rouf; Stewart, Derek; Shetty, Ashalatha; Pande, Binita; McLay, James S.
2013-01-01
Aims. To undertake a systematic review of the recent (2008–2013) primary literature describing the views and experiences of women and healthcare professionals regarding CAM use during pregnancy. Method. Medline, the Cumulative Index to Nursing and Allied Health Literature, the Cochrane Database of Systematic Reviews, and the Allied and Complementary Medicine Database were searched. Studies reporting systemic CAM products (homeopathic preparations, herbal medicines, vitamins and minerals, homeopathy, and special diets) alone or in combination with other nonsystemic CAM modalities (e.g., acupuncture) were included. Results. Database searches retrieved 2,549 citations. Removal of duplicates followed by review of titles and abstracts yielded 32 relevant studies. Twenty-two reported the perspectives of women and their CAM use during pregnancy, while 10 focused on healthcare professionals. The majority of studies had significant flaws in study design and reporting, including a lack of appropriate definitions of CAM and associated modalities, the absence of detailed checklists provided to participants, the use of convenience sampling, and a general lack of scientific robustness in terms of data validity and reliability. Conclusion. To permit generalisability of study findings, there is an urgent need to expand the evidence base assessing CAM use during pregnancy using appropriately designed studies. PMID:24194778
The Application and Future of Big Database Studies in Cardiology: A Single-Center Experience.
Lee, Kuang-Tso; Hour, Ai-Ling; Shia, Ben-Chang; Chu, Pao-Hsien
2017-11-01
As medical research techniques and quality have improved, it is apparent that cardiovascular problems could be better resolved by stricter experimental design. In fact, substantial time and resources must be expended to fulfill the requirements of high-quality studies. Many worthy ideas and hypotheses could not be verified or proven due to ethical or economic limitations. In recent years, various new applications and uses of databases have received increasing attention. Important information regarding certain issues, such as rare cardiovascular diseases, women's heart health, post-marketing analysis of different medications, or a combination of clinical and regional cardiac features, can be obtained by the use of rigorous statistical methods. However, limitations exist in all databases. One key essential for creating and correctly addressing such research is a reliable process for analyzing and interpreting these cardiology databases.
Maritime Situational Awareness Research Infrastructure (MSARI): Requirements and High Level Design
2013-03-01
Exchange Model (NIEM)-Maritime [16], • Rapid Environmental Assessment (REA) database [17], • 2009 United States AIS Database, • PASTA-MARE project...upper/lower cases, plural, etc.) is very consistent and is pertinent for MSARI. The 2009 United States AIS and PASTA-MARE project databases, exclusively...designed for AIS, were found too restrictive for MSARI, where other types of data are stored. However, some lessons learned from the PASTA-MARE...
Development of a medical module for disaster information systems.
Calik, Elif; Atilla, Rıdvan; Kaya, Hilal; Aribaş, Alirıza; Cengiz, Hakan; Dicle, Oğuz
2014-01-01
This study aims to develop a medical module which provides a real-time flow of medical information about the pre-hospital processes of healthcare delivery in disasters, transferring, storing and processing records in electronic media and over the Internet, as part of a disaster information system. In this study, which is framed around providing information flow among professionals in a disaster, coordinating the healthcare team, and transferring complete information to specified people in real time, a Microsoft Access database and the SQL query language were used to build the database application. The system was built on the Microsoft .NET platform using the C# language. The disaster information system medical module was designed to be used in the disaster area, field hospitals, nearby hospitals, temporary inhabiting areas such as tent cities, and dispatch vehicles, providing information flow between medical officials and data centres. For fast recording of disaster victim data, access to the database used by healthcare professionals was provided through analysed process steps and minimal datasets. Database fields were created to allow the entry of new data and searching of data recorded before the disaster. A web application provides access for data entry and database queries through interfaces designed according to the user's login credentials and access level. In this study, the homepage and user interfaces built on the database following system analysis were made available to users through the www.afmedinfo.com website. With this study, a recommendation was made about how to use disaster-based information systems in the field of health. Awareness was developed that a disaster information system should not be perceived only as an early warning system. The contents and distinguishing features of the healthcare practices of disaster information systems were described. A web application was developed supplying a link between the user and the database for data entry and data query through the developed interfaces.
MRNIDX - Marine Data Index: Database Description, Operation, Retrieval, and Display
Paskevich, Valerie F.
1982-01-01
A database referencing the location and content of data stored on magnetic media was designed to assist in the indexing of time-series and spatially dependent marine geophysical data collected or processed by the U.S. Geological Survey. The database was designed and created for input to the Geologic Retrieval and Synopsis Program (GRASP) to allow selective retrieval of information pertaining to the location of data, data format, cruise, geographical bounds and collection dates of the data. This information is then used to locate the stored data for administrative purposes or further processing. Database utilization is divided into three distinct operations. The first is the inventorying of the data and the updating of the database, the second is the retrieval of information from the database, and the third is the graphic display of the geographical boundaries to which the retrieved information pertains.
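The second operation, selective retrieval by location, format, cruise, bounds, and dates, can be sketched as a filter over index records. The cruise identifiers and field names below are hypothetical, not taken from MRNIDX.

```python
from datetime import date

# Toy index records: where each dataset lives and what it covers.
index = [
    {"cruise": "FRNL82-1", "tape": "T-014", "format": "SEG-Y",
     "lat": (40.0, 42.0), "lon": (-71.0, -69.0),
     "start": date(1982, 5, 1), "end": date(1982, 5, 20)},
    {"cruise": "FRNL82-2", "tape": "T-015", "format": "SEG-Y",
     "lat": (35.0, 36.0), "lon": (-76.0, -75.0),
     "start": date(1982, 7, 3), "end": date(1982, 7, 9)},
]

def retrieve(south, north, west, east, after, before):
    """Records whose bounds and dates overlap the requested window."""
    return [r for r in index
            if r["lat"][0] <= north and r["lat"][1] >= south
            and r["lon"][0] <= east and r["lon"][1] >= west
            and r["start"] <= before and r["end"] >= after]

for r in retrieve(39, 43, -72, -68, date(1982, 1, 1), date(1982, 12, 31)):
    print(r["cruise"], r["tape"], r["format"])   # -> FRNL82-1 T-014 SEG-Y
```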
Protocol for developing a Database of Zoonotic disease Research in India (DoZooRI).
Chatterjee, Pranab; Bhaumik, Soumyadeep; Chauhan, Abhimanyu Singh; Kakkar, Manish
2017-12-10
Zoonotic and emerging infectious diseases (EIDs) represent a public health threat that has been acknowledged only recently, although they have been on the rise for the past several decades. On average, one pathogen has emerged or re-emerged on a global scale every year since the Second World War. Low/middle-income countries such as India bear a significant burden of zoonotic diseases and EIDs. We propose that the creation of a database of published, peer-reviewed research will open up avenues for evidence-based policymaking for targeted prevention and control of zoonoses. A large-scale systematic mapping of the published peer-reviewed research conducted in India will be undertaken. All published research will be included in the database, without any quality-based exclusion, to broaden the scope of included studies. Structured search strategies will be developed for priority zoonotic diseases (leptospirosis, rabies, anthrax, brucellosis, cysticercosis, salmonellosis, bovine tuberculosis, Japanese encephalitis and rickettsial infections), and multiple databases will be searched for studies conducted in India. The database will be managed and hosted on a cloud-based platform called Rayyan. Individual studies will be tagged based on key preidentified parameters (disease, study design, study type, location, randomisation status and interventions, host involvement and others, as applicable). The database will incorporate already published studies, obviating the need for additional ethical clearances. The database will be made available online, and in collaboration with multisectoral teams, domains of enquiry will be identified and subsequent research questions will be raised. The database will be queried for these, and the resulting evidence will be analysed and published in peer-reviewed journals.
Hartung, Daniel M; Zarin, Deborah A; Guise, Jeanne-Marie; McDonagh, Marian; Paynter, Robin; Helfand, Mark
2014-04-01
Background: ClinicalTrials.gov requires reporting of result summaries for many drug and device trials. Objective: To evaluate the consistency of reporting of trials that are registered in the ClinicalTrials.gov results database and published in the literature. Data sources: The ClinicalTrials.gov results database and matched publications identified through ClinicalTrials.gov and a manual search of 2 electronic databases. Study selection: A 10% random sample of phase 3 or 4 trials with results in the ClinicalTrials.gov results database, completed before 1 January 2009, with 2 or more groups. Data extraction: One reviewer extracted data about trial design and results from the results database and matching publications. A subsample was independently verified. Results: Of 110 trials with results, most were industry-sponsored, parallel-design drug studies. The most common inconsistency was the number of secondary outcome measures reported (80%). Sixteen trials (15%) reported the primary outcome description inconsistently, and 22 (20%) reported the primary outcome value inconsistently. Thirty-eight trials inconsistently reported the number of individuals with a serious adverse event (SAE); of these, 33 (87%) reported more SAEs in ClinicalTrials.gov. Among the 84 trials that reported SAEs in ClinicalTrials.gov, 11 publications did not mention SAEs, 5 reported them as zero or not occurring, and 21 reported a different number of SAEs. Among 29 trials that reported deaths in ClinicalTrials.gov, 28% differed from the matched publication. Limitation: Small sample that included the earliest results posted to the database. Conclusion: Reporting discrepancies between the ClinicalTrials.gov results database and matching publications are common. Which source contains the more accurate account of results is unclear, although ClinicalTrials.gov may provide a more comprehensive description of adverse events than the publication. Primary funding source: Agency for Healthcare Research and Quality.
A manufacturing database of advanced materials used in spacecraft structures
NASA Technical Reports Server (NTRS)
Bao, Han P.
1994-01-01
Cost savings opportunities over the life cycle of a product are highest in the early exploratory phase when different design alternatives are evaluated not only for their performance characteristics but also their methods of fabrication which really control the ultimate manufacturing costs of the product. In the past, Design-To-Cost methodologies for spacecraft design concentrated on the sizing and weight issues more than anything else at the early so-called 'Vehicle Level' (Ref: DOD/NASA Advanced Composites Design Guide). Given the impact of manufacturing cost, the objective of this study is to identify the principal cost drivers for each materials technology and propose a quantitative approach to incorporating these cost drivers into the family of optimization tools used by the Vehicle Analysis Branch of NASA LaRC to assess various conceptual vehicle designs. The advanced materials being considered include aluminum-lithium alloys, thermoplastic graphite-polyether etherketone composites, graphite-bismaleimide composites, graphite- polyimide composites, and carbon-carbon composites. Two conventional materials are added to the study to serve as baseline materials against which the other materials are compared. These two conventional materials are aircraft aluminum alloys series 2000 and series 7000, and graphite-epoxy composites T-300/934. The following information is available in the database. For each material type, the mechanical, physical, thermal, and environmental properties are first listed. Next the principal manufacturing processes are described. Whenever possible, guidelines for optimum processing conditions for specific applications are provided. Finally, six categories of cost drivers are discussed. They include, design features affecting processing, tooling, materials, fabrication, joining/assembly, and quality assurance issues. It should be emphasized that this database is not an exhaustive database. Its primary use is to make the vehicle designer aware of some of the most important aspects of manufacturing associated with his/her choice of the structural materials. The other objective of this study is to propose a quantitative method to determine a Manufacturing Complexity Factor (MCF) for each material being contemplated. This MCF is derived on the basis of the six cost drivers mentioned above plus a Technology Readiness Factor which is very closely related to the Technology Readiness Level (TRL) as defined in the Access To Space final report. Short of any manufacturing information, our MCF is equivalent to the inverse of TRL. As more manufacturing information is available, our MCF is a better representation (than TRL) of the fabrication processes involved. The most likely application for MCF is in cost modeling for trade studies. On-going work is being pursued to expand the potential applications of MCF.
Schema Versioning for Multitemporal Relational Databases.
ERIC Educational Resources Information Center
De Castro, Cristina; Grandi, Fabio; Scalas, Maria Rita
1997-01-01
Investigates new design options for extended schema versioning support for multitemporal relational databases. Discusses the improved functionalities they may provide. Outlines options and basic motivations for the new design solutions, as well as techniques for the management of proposed schema versioning solutions, includes algorithms and…
Simple Logic for Big Problems: An Inside Look at Relational Databases.
ERIC Educational Resources Information Center
Seba, Douglas B.; Smith, Pat
1982-01-01
Discusses database design concept termed "normalization" (process replacing associations between data with associations in two-dimensional tabular form) which results in formation of relational databases (they are to computers what dictionaries are to spoken languages). Applications of the database in serials control and complex systems…
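The normalization idea summarized above, replacing repeated associations with linked two-dimensional tables, is easy to demonstrate. The sketch below uses a hypothetical serials-control example in the spirit of the article: publisher details that repeat on every row are factored into their own relation.

```python
# Unnormalized: publisher details repeat for every serial they publish.
serials_flat = [
    ("Journal of Testing", "Acme Press", "Boston"),
    ("Annals of Examples", "Acme Press", "Boston"),
    ("Review of Sketches", "Beta Books", "Chicago"),
]

# Normalized: each publisher fact is stored once, linked by publisher_id.
publishers, serials = {}, []
for title, pub, city in serials_flat:
    pid = publishers.setdefault((pub, city), len(publishers) + 1)
    serials.append((title, pid))
publisher_table = {pid: pc for pc, pid in publishers.items()}

# A join reconstructs the original flat view on demand.
rejoined = [(t, *publisher_table[pid]) for t, pid in serials]
assert rejoined == serials_flat
print(serials)          # titles carry only a foreign key
print(publisher_table)  # publisher facts stored exactly once
```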
Emission Database for Global Atmospheric Research (EDGAR).
ERIC Educational Resources Information Center
Olivier, J. G. J.; And Others
1994-01-01
Presents the objective and methodology chosen for the construction of a global emissions source database called EDGAR and the structural design of the database system. The database estimates on a regional and grid basis, 1990 annual emissions of greenhouse gases, and of ozone depleting compounds from all known sources. (LZ)
First Database Course--Keeping It All Organized
ERIC Educational Resources Information Center
Baugh, Jeanne M.
2015-01-01
All Computer Information Systems programs require a database course for their majors. This paper describes an approach to such a course in which real world examples, both design projects and actual database application projects are incorporated throughout the semester. Students are expected to apply the traditional database concepts to actual…
76 FR 56657 - Unlicensed Operation in the TV Broadcast Bands
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-14
... Second Report and Order the Commission decided to designate one or more database administrators from the private sector to create and operate TV band databases. The TV band database administrators will act on behalf of the FCC, but will offer a privately owned and operated service. Each database administrator...
Roche, Nicolas; Reddel, Helen; Martin, Richard; Brusselle, Guy; Papi, Alberto; Thomas, Mike; Postma, Dirkje; Thomas, Vicky; Rand, Cynthia; Chisholm, Alison; Price, David
2014-02-01
Real-world research can use observational or clinical trial designs, in both cases putting emphasis on high external validity, to complement the classical efficacy randomized controlled trials (RCTs) with high internal validity. Real-world research is made necessary by the variety of factors that can play an important role in modulating effectiveness in real life but are often tightly controlled in RCTs, such as comorbidities and concomitant treatments, adherence, inhalation technique, access to care, strength of doctor-caregiver communication, and socio-economic and other organizational factors. Real-world studies belong to two main categories: pragmatic trials and observational studies, which can be prospective or retrospective. Focusing on comparative database observational studies, the process aimed at ensuring high-quality research can be divided into three parts: preparation of research, analyses and reporting, and discussion of results. Key points include a priori planning of data collection and analyses, identification of appropriate database(s), proper outcome definitions, study registration with commitment to publish, bias minimization through matching and adjustment processes accounting for potential confounders, and sensitivity analyses testing the robustness of results. When these conditions are met, observational database studies can reach a sufficient level of evidence to help create guidelines (i.e., clinical and regulatory decision-making).
Applying the vantage PDMS to jack-up drilling ships
NASA Astrophysics Data System (ADS)
Yin, Peng; Chen, Yuan-Ming; Cui, Tong-Kai; Wang, Zi-Shen; Gong, Li-Jiang; Yu, Xiang-Fen
2009-09-01
The plant design management system (PDMS) is an integrated application with a database that is useful when designing complex 3-D industrial projects. It could be used to simplify the most difficult part of a subsea oil extraction project, detailed pipeline design. It could also be used to integrate the design of equipment, structures, HVAC, E-ways, as well as the detailed designs of other specialists. This article mainly examines the applicability of the Vantage PDMS database to pipeline projects involving jack-up drilling ships. It discusses the catalogue (CATA) of the pipeline, the spec-world (SPWL) of the pipeline, the bolt tables (BLTA) and so on. This article explains the main methods for CATA construction as well as problems encountered in the process of construction. The authors point out matters needing attention when using the Vantage PDMS database in the design process and discuss partial solutions to these problems.
Keshtiari, Niloofar; Kuhlmann, Michael; Eslami, Moharram; Klann-Delius, Gisela
2015-03-01
Research on emotional speech often requires valid stimuli for assessing perceived emotion through prosody and lexical content. To date, no comprehensive emotional speech database for Persian is officially available. The present article reports the process of designing, compiling, and evaluating a comprehensive emotional speech database for colloquial Persian. The database contains a set of 90 validated novel Persian sentences classified in five basic emotional categories (anger, disgust, fear, happiness, and sadness), as well as a neutral category. These sentences were validated in two experiments by a group of 1,126 native Persian speakers. The sentences were articulated by two native Persian speakers (one male, one female) in three conditions: (1) congruent (emotional lexical content articulated in a congruent emotional voice), (2) incongruent (neutral sentences articulated in an emotional voice), and (3) baseline (all emotional and neutral sentences articulated in neutral voice). The speech materials comprise about 470 sentences. The validity of the database was evaluated by a group of 34 native speakers in a perception test. Utterances recognized better than five times chance performance (71.4 %) were regarded as valid portrayals of the target emotions. Acoustic analysis of the valid emotional utterances revealed differences in pitch, intensity, and duration, attributes that may help listeners to correctly classify the intended emotion. The database is designed to be used as a reliable material source (for both text and speech) in future cross-cultural or cross-linguistic studies of emotional speech, and it is available for academic research purposes free of charge. To access the database, please contact the first author.
NASA Astrophysics Data System (ADS)
Ehlmann, Bryon K.
Current scientific experiments are often characterized by massive amounts of very complex data and the need for complex data analysis software. Object-oriented database (OODB) systems have the potential of improving the description of the structure and semantics of this data and of integrating the analysis software with the data. This dissertation results from research to enhance OODB functionality and methodology to support scientific databases (SDBs) and, more specifically, to support a nuclear physics experiments database for the Continuous Electron Beam Accelerator Facility (CEBAF). This research to date has identified a number of problems related to the practical application of OODB technology to the conceptual design of the CEBAF experiments database and other SDBs: the lack of a generally accepted OODB design methodology, the lack of a standard OODB model, the lack of a clear conceptual level in existing OODB models, and the limited support in existing OODB systems for many common object relationships inherent in SDBs. To address these problems, the dissertation describes an Object-Relationship Diagram (ORD) and an Object-oriented Database Definition Language (ODDL) that provide tools that allow SDB design and development to proceed systematically and independently of existing OODB systems. These tools define multi-level, conceptual data models for SDB design, which incorporate a simple notation for describing common types of relationships that occur in SDBs. ODDL allows these relationships and other desirable SDB capabilities to be supported by an extended OODB system. A conceptual model of the CEBAF experiments database is presented in terms of ORDs and the ODDL to demonstrate their functionality and use and provide a foundation for future development of experimental nuclear physics software using an OODB approach.
ERIC Educational Resources Information Center
Funk, Mathias; van Diggelen, Migchiel
2017-01-01
In this paper, the authors describe how a study of a large database of written university teacher feedback in the department of Industrial Design led to the development of a new conceptual framework for feedback and the design of a new feedback tool. This paper focuses on the translation of related work in the area of feedback mechanisms for…
First year progress report on the development of the Texas flexible pavement database.
DOT National Transportation Integrated Search
2008-01-01
Comprehensive and reliable databases are essential for the development, validation, and calibration of any pavement design and rehabilitation system. These databases should include material properties, pavement structural characteristics, highway...
Integrating Variances into an Analytical Database
NASA Technical Reports Server (NTRS)
Sanchez, Carlos
2010-01-01
For this project, I enrolled in numerous SATERN courses that taught the basics of database programming. These included Basic Access 2007 Forms, Introduction to Database Systems, Overview of Database Design, and others. My main job was to create an analytical database that can handle many stored forms and makes them easy to interpret and organize. Additionally, I helped improve an existing database and populate it with information. These databases were designed to be used with data from Safety Variances and DCR forms. The research consisted of analyzing the database and comparing the data to find out which entries were repeated the most. If an entry was repeated several times in the database, that would mean that the rule or requirement targeted by that variance had already been bypassed many times, and so the requirement may not really be needed but rather should be changed to allow the variance's conditions permanently. This project was not restricted to the design and development of the database system; it also involved exporting the data from the database to a different format (e.g. Excel or Word) so it could be analyzed in a simpler fashion. Thanks to the change in format, the data was organized in a spreadsheet that made it possible to sort the data by categories or types and helped speed up searches. Once my work with the database was done, the records of variances could be arranged so that they were displayed in numerical order, or one could search for a specific document targeted by the variances and restrict the search to only include variances that modified a specific requirement. A great contributor to my learning was SATERN, NASA's resource for education. Thanks to the SATERN online courses I took over the summer, I was able to learn many new things about computers and databases and also go more in depth into topics I already knew about.
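The repetition analysis described here amounts to a frequency count once the variance records are exported. A minimal sketch, with hypothetical requirement IDs standing in for the NASA-internal forms:

```python
from collections import Counter

# Requirement targeted by each exported variance record (hypothetical IDs).
variances = ["REQ-101", "REQ-204", "REQ-101", "REQ-318", "REQ-101", "REQ-204"]

counts = Counter(variances)
for req, n in counts.most_common():
    if n > 1:  # repeatedly bypassed: candidate for a permanent change
        print(f"{req}: varied {n} times")
```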
Geospatial Database for Strata Objects Based on Land Administration Domain Model (ladm)
NASA Astrophysics Data System (ADS)
Nasorudin, N. N.; Hassan, M. I.; Zulkifli, N. A.; Rahman, A. Abdul
2016-09-01
Recently in our country, the construction of buildings has become more complex, and a strata objects database is becoming more important for registering the real world, as people now own and use multiple levels of space. Furthermore, strata titles are increasingly important and need to be well managed. LADM, also known as ISO 19152, is a standard model for land administration that allows integrated 2D and 3D representation of spatial units. The aim of this paper is to develop a strata objects database using LADM. This paper discusses the current 2D geospatial database and the need for a 3D geospatial database in the future. It also attempts to develop a strata objects database using a standard data model (LADM) and to analyze the developed database against the LADM data model. The current cadastre system in Malaysia, including strata titles, is discussed in this paper. The problems in the 2D geospatial database are listed, and the need for a 3D geospatial database in the future is also discussed. The process of designing a strata objects database comprises conceptual, logical and physical database design. The strata objects database will allow us to find both non-spatial and spatial strata title information and thus show the location of a strata unit. This development of a strata objects database may help to handle strata titles and information.
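LADM relates parties to spatial units through rights, restrictions, and responsibilities (RRRs) attached to basic administrative units, which is what makes a strata (multi-level) unit representable. The sketch below compresses that chain into four classes; the names abbreviate the LA_Party / LA_RRR / LA_BAUnit / LA_SpatialUnit pattern and are not a full ISO 19152 implementation.

```python
from dataclasses import dataclass

@dataclass
class Party:                 # cf. LA_Party
    name: str

@dataclass
class SpatialUnit:           # cf. LA_SpatialUnit
    label: str
    level: int               # 3D aspect: which storey the unit occupies

@dataclass
class BAUnit:                # cf. LA_BAUnit: the unit that rights attach to
    name: str
    spatial_unit: SpatialUnit

@dataclass
class RRR:                   # cf. LA_RRR: right/restriction/responsibility
    kind: str                # e.g. 'ownership'
    party: Party
    baunit: BAUnit

unit = BAUnit("Strata lot 12", SpatialUnit("Parcel A-3, Tower 1", level=3))
right = RRR("ownership", Party("A. Owner"), unit)
print(f"{right.party.name} holds {right.kind} of {right.baunit.name} "
      f"on level {right.baunit.spatial_unit.level}")
```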
Crone, Damien L; Bode, Stefan; Murawski, Carsten; Laham, Simon M
2018-01-01
A major obstacle for the design of rigorous, reproducible studies in moral psychology is the lack of suitable stimulus sets. Here, we present the Socio-Moral Image Database (SMID), the largest standardized moral stimulus set assembled to date, containing 2,941 freely available photographic images, representing a wide range of morally (and affectively) positive, negative and neutral content. The SMID was validated with over 820,525 individual judgments from 2,716 participants, with normative ratings currently available for all images on affective valence and arousal, moral wrongness, and relevance to each of the five moral values posited by Moral Foundations Theory. We present a thorough analysis of the SMID regarding (1) inter-rater consensus, (2) rating precision, and (3) breadth and variability of moral content. Additionally, we provide recommendations for use aimed at efficient study design and reproducibility, and outline planned extensions to the database. We anticipate that the SMID will serve as a useful resource for psychological, neuroscientific and computational (e.g., natural language processing or computer vision) investigations of social, moral and affective processes. The SMID images, along with associated normative data and additional resources are available at https://osf.io/2rqad/.
CEBS: a comprehensive annotated database of toxicological data
Lea, Isabel A.; Gong, Hui; Paleja, Anand; Rashid, Asif; Fostel, Jennifer
2017-01-01
The Chemical Effects in Biological Systems database (CEBS) is a comprehensive and unique toxicology resource that compiles individual and summary animal data from the National Toxicology Program (NTP) testing program and other depositors into a single electronic repository. CEBS has undergone significant updates in recent years and currently contains over 11 000 test articles (exposure agents) and over 8000 studies including all available NTP carcinogenicity, short-term toxicity and genetic toxicity studies. Study data provided to CEBS are manually curated, accessioned and subject to quality assurance review prior to release to ensure high quality. The CEBS database has two main components: data collection and data delivery. To accommodate the breadth of data produced by NTP, the CEBS data collection component is an integrated relational design that allows the flexibility to capture any type of electronic data (to date). The data delivery component of the database comprises a series of dedicated user interface tables containing pre-processed data that support each component of the user interface. The user interface has been updated to include a series of nine Guided Search tools that allow access to NTP summary and conclusion data and larger non-NTP datasets. The CEBS database can be accessed online at http://www.niehs.nih.gov/research/resources/databases/cebs/. PMID:27899660
Integration of NASA/GSFC and USGS Rock Magnetic Databases.
NASA Astrophysics Data System (ADS)
Nazarova, K. A.; Glen, J. M.
2004-05-01
A global Magnetic Petrology Database (MPDB) was developed and continues to be updated at NASA/Goddard Space Flight Center. The purpose of this database is to provide the geomagnetic community with a comprehensive and user-friendly method of accessing magnetic petrology data via the Internet for a more realistic interpretation of satellite (as well as aeromagnetic and ground) lithospheric magnetic anomalies. The MPDB contains data on rocks from localities around the world (about 19,000 samples), including the Ukrainian and Baltic Shields, Kamchatka, Iceland, the Ural Mountains, etc. The MPDB is designed, managed and presented on the web as a research oriented database. Several database applications have been specifically developed for data manipulation and analysis of the MPDB. The geophysics unit at the USGS in Menlo Park has over 17,000 rock-property records, largely from sites within the western U.S. This database contains rock-density and rock-magnetic parameters collected for use in gravity and magnetic field modeling, and paleomagnetic studies. Most of these data were taken from surface outcrops and together they span a broad range of rock types. Measurements were made either in-situ at the outcrop, or in the laboratory on hand samples and paleomagnetic cores acquired in the field. The USGS and NASA/GSFC data will be integrated as part of an effort to provide public access to a single, uniformly maintained database. Due to the large number of data and the very large area sampled, the database can yield rock-property statistics on a broad range of rock types; it is thus applicable to study areas beyond the geographic scope of the database. The intent of this effort is to provide incentive for others to further contribute to the database, and a tool with which the geophysical community can entertain studies formerly precluded.
Design and implementation of the cacao genome database
USDA-ARS?s Scientific Manuscript database
The Cacao Genome Database (CGD, www.cacaogenomedb.org) is being developed to provide a comprehensive data mining resource of genomic, genetic and breeding data for Theobroma cacao. Designed using Chado and a collection of Drupal modules, known as Tripal, CGD currently contains the genetically anchor...
Guidelines for the Effective Use of Entity-Attribute-Value Modeling for Biomedical Databases
Dinu, Valentin; Nadkarni, Prakash
2007-01-01
Purpose: To introduce the goals of EAV database modeling, to describe the situations where Entity-Attribute-Value (EAV) modeling is a useful alternative to conventional relational methods of database modeling, and to describe the fine points of implementation in production systems. Methods: We analyze the following circumstances: (1) data are sparse and have a large number of applicable attributes, but only a small fraction will apply to a given entity; (2) numerous classes of data need to be represented, each class has a limited number of attributes, but the number of instances of each class is very small. We also consider situations calling for a mixed approach where both conventional and EAV design are used for appropriate data classes. Results and Conclusions: In robust production systems, EAV-modeled databases trade a modest data sub-schema for a complex metadata sub-schema. The need to design the metadata effectively makes EAV design potentially more challenging than conventional design. PMID:17098467
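The trade-off described above, a modest data sub-schema paired with a metadata sub-schema that constrains which attributes are valid, can be sketched in a few lines; the table and attribute names below are illustrative assumptions, not from the paper.

    # A minimal EAV sketch: one narrow table holds all facts as
    # entity-attribute-value triples, while a metadata table records which
    # attributes exist, their datatypes, and which entity class they apply to.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE attribute_meta (            -- metadata sub-schema
        attr TEXT PRIMARY KEY,
        datatype TEXT NOT NULL,              -- 'int', 'real', 'text', ...
        applies_to TEXT NOT NULL             -- entity class the attribute is valid for
    );
    CREATE TABLE eav (                       -- data sub-schema: one row per fact
        entity_id INTEGER,
        attr TEXT REFERENCES attribute_meta(attr),
        value TEXT,
        PRIMARY KEY (entity_id, attr)
    );
    """)
    conn.execute("INSERT INTO attribute_meta VALUES ('serum_glucose', 'int', 'lab_result')")
    conn.execute("INSERT INTO eav VALUES (1, 'serum_glucose', '95')")
    # Sparseness costs nothing: attributes that do not apply are simply absent.
    print(conn.execute("SELECT * FROM eav WHERE entity_id = 1").fetchall())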
Park, Jeongbin; Bae, Sangsu
2018-03-15
Following the type II CRISPR-Cas9 system, type V CRISPR-Cpf1 endonucleases have been found to be applicable for genome editing in various organisms in vivo. However, there are as yet no web-based tools capable of optimally selecting guide RNAs (gRNAs) among all possible genome-wide target sites. Here, we present Cpf1-Database, a genome-wide gRNA library design tool for LbCpf1 and AsCpf1, which have DNA recognition sequences of 5'-TTTN-3' at the 5' ends of target sites. Cpf1-Database provides a sophisticated but simple way to design gRNAs for AsCpf1 nucleases on the genome scale. One can easily access the data through a straightforward web interface, and the powerful collections feature makes it possible to design gRNAs for thousands of genes in a short time. Free access at http://www.rgenome.net/cpf1-database/. Contact: sangsubae@hanyang.ac.kr.
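The core site-finding step can be pictured as a scan for 5'-TTTN-3' recognition sequences; the sketch below is a simplification under stated assumptions (20-nt guides, plus strand only), and Cpf1-Database's actual design rules are certainly richer.

    # Hedged sketch: enumerate candidate Cpf1 target sites preceded by a TTTN PAM.
    import re

    def find_cpf1_sites(seq: str, guide_len: int = 20):
        """Yield (position, protospacer) pairs for every TTTN PAM on the + strand."""
        seq = seq.upper()
        for m in re.finditer(r"(?=TTT[ACGT])", seq):      # lookahead: overlapping PAMs
            start = m.start() + 4                         # protospacer begins after TTTN
            target = seq[start:start + guide_len]
            if len(target) == guide_len:
                yield m.start(), target

    for pos, proto in find_cpf1_sites("ACGTTTTACCGGTTAACCGGTTAACCGGTTAA"):
        print(pos, proto)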
Game-Based Learning in Science Education: A Review of Relevant Research
ERIC Educational Resources Information Center
Li, Ming-Chaun; Tsai, Chin-Chung
2013-01-01
The purpose of this study is to review empirical research articles regarding game-based science learning (GBSL) published from 2000 to 2011. Thirty-one articles were identified through the Web of Science and SCOPUS databases. A qualitative content analysis technique was adopted to analyze the research purposes and designs, game design and…
The Biological Macromolecule Crystallization Database and NASA Protein Crystal Growth Archive
Gilliland, Gary L.; Tung, Michael; Ladner, Jane
1996-01-01
The NIST/NASA/CARB Biological Macromolecule Crystallization Database (BMCD), NIST Standard Reference Database 21, contains crystal data and crystallization conditions for biological macromolecules. The database entries include data abstracted from published crystallographic reports. Each entry consists of information describing the biological macromolecule crystallized and crystal data and the crystallization conditions for each crystal form. The BMCD serves as the NASA Protein Crystal Growth Archive in that it contains protocols and results of crystallization experiments undertaken in microgravity (space). These database entries report the results, whether successful or not, from NASA-sponsored protein crystal growth experiments in microgravity and from microgravity crystallization studies sponsored by other international organizations. The BMCD was designed as a tool to assist x-ray crystallographers in the development of protocols to crystallize biological macromolecules, those that have previously been crystallized, and those that have not been crystallized. PMID:11542472
Evaluating Land-Atmosphere Interactions with the North American Soil Moisture Database
NASA Astrophysics Data System (ADS)
Giles, S. M.; Quiring, S. M.; Ford, T.; Chavez, N.; Galvan, J.
2015-12-01
The North American Soil Moisture Database (NASMD) is a high-quality observational soil moisture database that was developed to study land-atmosphere interactions. It includes over 1,800 monitoring stations in the United States, Canada and Mexico. Soil moisture data are collected from multiple sources, quality controlled and integrated into an online database (soilmoisture.tamu.edu). The period of record varies substantially, and only a few of these stations have an observation record extending back into the 1990s. Daily soil moisture observations have been quality controlled using the North American Soil Moisture Database QAQC algorithm. The database is designed to facilitate observationally-driven investigations of land-atmosphere interactions, validation of the accuracy of soil moisture simulations in global land surface models, satellite calibration/validation for SMOS and SMAP, and an improved understanding of how soil moisture influences climate on seasonal to interannual timescales. This paper provides some examples of how the NASMD has been utilized to enhance understanding of land-atmosphere interactions in the U.S. Great Plains.
A Database as a Service for the Healthcare System to Store Physiological Signal Data.
Chang, Hsien-Tsung; Lin, Tsai-Huei
2016-01-01
Wearable devices that measure physiological signals to help develop self-health management habits have become increasingly popular in recent years. These records are conducive to follow-up health and medical care. In this study, based on the characteristics of the observed physiological signal records (a large number of users, a large amount of data, low information variability, data privacy authorization, and data access by designated users), we wish to resolve physiological signal record-relevant issues by utilizing the advantages of the Database as a Service (DaaS) model. Storing a large amount of data using file patterns can reduce database load, allowing users to access data efficiently; the privacy control settings allow users to store data securely. The results of the experiment show that the proposed system has better database access performance than a traditional relational database, with a small difference in database volume, proving that the proposed system can improve data storage performance.
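A conceptual sketch of the file-pattern storage idea described above: the heavy signal payload is written to flat files while the database keeps only metadata and paths. The file layout and schema are assumptions, not the authors' implementation.

    # Bulk samples bypass the DB; the DB indexes who/when/where the file lives.
    import json, sqlite3, pathlib

    store = pathlib.Path("signal_store")
    store.mkdir(exist_ok=True)

    db = sqlite3.connect("daas_meta.db")
    db.execute("""CREATE TABLE IF NOT EXISTS signal_files (
        user_id TEXT, day TEXT, path TEXT,
        PRIMARY KEY (user_id, day))""")

    def store_signals(user_id: str, day: str, samples: list[float]) -> None:
        path = store / f"{user_id}_{day}.json"      # one file per user per day
        path.write_text(json.dumps(samples))        # heavy payload goes to the file
        db.execute("INSERT OR REPLACE INTO signal_files VALUES (?,?,?)",
                   (user_id, day, str(path)))
        db.commit()

    store_signals("u001", "2016-01-01", [72.0, 71.5, 73.2])  # heart-rate samples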
Lansdale, Mark W; Oliff, Lynda; Baguley, Thom S
2005-06-01
The authors investigated whether memory for object locations in pictures could be exploited to address known difficulties of designing query languages for picture databases. M. W. Lansdale's (1998) model of location memory was adapted to 4 experiments observing memory for everyday pictures. These experiments showed that location memory is quantified by 2 parameters: a probability that memory is available and a measure of its precision. Availability is determined by controlled attentional processes, whereas precision is mostly governed by picture composition beyond the viewer's control. Additionally, participants' confidence judgments were good predictors of availability but were insensitive to precision. This research suggests that databases using location memory are feasible. The implications of these findings for database design and for further research and development are discussed. (c) 2005 APA
Schwach, Frank; Bushell, Ellen; Gomes, Ana Rita; Anar, Burcu; Girling, Gareth; Herd, Colin; Rayner, Julian C; Billker, Oliver
2015-01-01
The Plasmodium Genetic Modification (PlasmoGEM) database (http://plasmogem.sanger.ac.uk) provides access to a resource of modular, versatile and adaptable vectors for genome modification of Plasmodium spp. parasites. PlasmoGEM currently consists of >2000 plasmids designed to modify the genome of Plasmodium berghei, a malaria parasite of rodents, which can be requested by non-profit research organisations free of charge. PlasmoGEM vectors are designed with long homology arms for efficient genome integration and carry gene specific barcodes to identify individual mutants. They can be used for a wide array of applications, including protein localisation, gene interaction studies and high-throughput genetic screens. The vector production pipeline is supported by a custom software suite that automates both the vector design process and quality control by full-length sequencing of the finished vectors. The PlasmoGEM web interface allows users to search a database of finished knock-out and gene tagging vectors, view details of their designs, download vector sequence in different formats and view available quality control data as well as suggested genotyping strategies. We also make gDNA library clones and intermediate vectors available for researchers to produce vectors for themselves. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
NVST Data Archiving System Based On FastBit NoSQL Database
NASA Astrophysics Data System (ADS)
Liu, Ying-bo; Wang, Feng; Ji, Kai-fan; Deng, Hui; Dai, Wei; Liang, Bo
2014-06-01
The New Vacuum Solar Telescope (NVST) is a 1-meter vacuum solar telescope that aims to observe the fine structures of active regions on the Sun. The main tasks of the NVST are high resolution imaging and spectral observations, including measurements of the solar magnetic field. The NVST has collected more than 20 million FITS files since it began routine observations in 2012 and produces up to 120 thousand observational files in a day. Given the large number of files, effective archiving and retrieval becomes a critical and urgent problem. In this study, we implement a new data archiving system for the NVST based on the FastBit Not Only Structured Query Language (NoSQL) database. Compared to a relational database (i.e., MySQL; My Structured Query Language), the FastBit database shows distinct advantages in indexing and querying performance. In a large-scale database of 40 million records, multi-field combined queries on the FastBit database run about 15 times faster and fully meet the requirements of the NVST. Our study offers a new approach to massive astronomical data archiving and may contribute to the design of data management systems for other astronomical telescopes.
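FastBit's query speed comes from bitmap indexing; the toy index below illustrates the principle (each field-value pair becomes a precomputed bit vector, and a multi-field combined query reduces to a bitwise AND) without reproducing FastBit's actual compressed-bitmap machinery.

    # Not FastBit itself: a pure-Python toy bitmap index for combined queries.
    from collections import defaultdict

    records = [
        {"type": "HA", "band": "TiO"},   # mock FITS metadata records
        {"type": "HA", "band": "G"},
        {"type": "SP", "band": "TiO"},
    ]

    index: dict[tuple[str, str], int] = defaultdict(int)
    for i, rec in enumerate(records):
        for field, value in rec.items():
            index[(field, value)] |= 1 << i      # set bit i in this value's bitmap

    hits = index[("type", "HA")] & index[("band", "TiO")]     # combined query
    print([i for i in range(len(records)) if hits >> i & 1])  # -> [0]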
DOE Office of Scientific and Technical Information (OSTI.GOV)
Femec, D.A.
This report discusses the sample tracking database in use at the Idaho National Engineering Laboratory (INEL) by the Radiation Measurements Laboratory (RML) and Analytical Radiochemistry. The database was designed in-house to meet the specific needs of the RML and Analytical Radiochemistry. The report consists of two parts, a user's guide and a reference guide. The user's guide presents some of the fundamentals needed by anyone who will be using the database via its user interface. The reference guide describes the design of both the database and the user interface. Briefly mentioned in the reference guide are the code-generating tools, CREATE-SCHEMA and BUILD-SCREEN, written to automatically generate code for the database and its user interface. The appendices contain the input files used by these tools to create code for the sample tracking database. The output files generated by these tools are also included in the appendices.
National Transportation Atlas Databases : 1995
DOT National Transportation Integrated Search
1995-01-01
BTS has compiled the initial version of a geographic atlas database to support research, analysis, and decision making across all modes of transportation. The atlas databases are designed primarily to meet the needs of DOT at the national lev...
A DBMS-based Medical Teleconferencing System
Chun, Jonghoon; Kim, Hanjoon; Lee, Sang-goo; Choi, Jinwook; Cho, Hanik
2001-01-01
This article presents the design of a medical teleconferencing system that is integrated with a multimedia patient database and incorporates easy-to-use tools and functions to effectively support collaborative work between physicians in remote locations. The design provides a virtual workspace that allows physicians to collectively view various kinds of patient data. By integrating the teleconferencing function into this workspace, physicians are able to conduct conferences using the same interface and have real-time access to the database during conference sessions. The authors have implemented a prototype based on this design. The prototype uses a high-speed network test bed and a manually created substitute for the integrated patient database. PMID:11522766
Harris, Eric S. J.; Erickson, Sean D.; Tolopko, Andrew N.; Cao, Shugeng; Craycroft, Jane A.; Scholten, Robert; Fu, Yanling; Wang, Wenquan; Liu, Yong; Zhao, Zhongzhen; Clardy, Jon; Shamu, Caroline E.; Eisenberg, David M.
2011-01-01
Aim of the study. Ethnobotanically-driven drug-discovery programs include data related to many aspects of the preparation of botanical medicines, from initial plant collection to chemical extraction and fractionation. The Traditional Medicine-Collection Tracking System (TM-CTS) was created to organize and store data of this type for an international collaborative project involving the systematic evaluation of commonly used Traditional Chinese Medicinal plants. Materials and Methods. The system was developed using domain-driven design techniques, and is implemented using Java, Hibernate, PostgreSQL, Business Intelligence and Reporting Tools (BIRT), and Apache Tomcat. Results. The TM-CTS relational database schema contains over 70 data types, comprising over 500 data fields. The system incorporates a number of unique features that are useful in the context of ethnobotanical projects such as support for information about botanical collection, method of processing, quality tests for plants with existing pharmacopoeia standards, chemical extraction and fractionation, and historical uses of the plants. The database also accommodates data provided in multiple languages and integration with a database system built to support high throughput screening based drug discovery efforts. It is accessed via a web-based application that provides extensive, multi-format reporting capabilities. Conclusions. This new database system was designed to support a project evaluating the bioactivity of Chinese medicinal plants. The software used to create the database is open source, freely available, and could potentially be applied to other ethnobotanically-driven natural product collection and drug-discovery programs. PMID:21420479
Design and implementation of an audit trail in compliance with US regulations.
Jiang, Keyuan; Cao, Xiang
2011-10-01
Background: Audit trails have been used widely to ensure quality of study data and have been implemented in computerized clinical trials data systems. Increasingly, there is a need to audit access to study participant identifiable information to provide assurance that study participant privacy is protected and confidentiality is maintained. In the United States, several federal regulations specify how the audit trail function should be implemented. Objective: To describe the development and implementation of a comprehensive audit trail system that meets the regulatory requirements of assuring data quality and integrity and protecting participant privacy, and that is also easy to implement and maintain. Methods: The audit trail system was designed and developed after we examined regulatory requirements, data access methods, prevailing application architecture, and good security practices. Results: Our comprehensive audit trail system was developed and implemented at the database level using a commercially available database management software product. It captures both data access and data changes with the correct user identifier. Documentation of access is initiated automatically in response to either data retrieval or data change at the database level. Limitations: Currently, our system has been implemented only on one commercial database management system. Although our audit trail algorithm does not allow for logging aggregate operations, aggregation does not reveal sensitive private participant information. Careful consideration must be given to data items selected for monitoring, because selecting all data items can dramatically increase the requirements for computer disk space; evaluating the criticality and sensitivity of individual data items can control the storage requirements for clinical trial audit trail records. Conclusions: Our audit trail system is capable of logging data access and data change operations to satisfy regulatory requirements. Our approach is applicable to virtually any data that can be stored in a relational database.
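The paper's system was built on a commercial DBMS; the sketch below shows the same database-level pattern (a trigger writing old and new values, plus a session-supplied user identifier, to an audit table) in SQLite, purely for illustration.

    # Minimal database-level change audit: the application writes the current
    # user into a session table; a trigger logs every UPDATE with that user.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
    CREATE TABLE session (user_id TEXT);          -- set by the application at login
    INSERT INTO session VALUES ('investigator01');

    CREATE TABLE vitals (subject_id INTEGER, sbp INTEGER);
    CREATE TABLE audit_log (
        ts TEXT DEFAULT CURRENT_TIMESTAMP,
        user_id TEXT, action TEXT, old_value TEXT, new_value TEXT
    );

    CREATE TRIGGER vitals_update AFTER UPDATE ON vitals
    BEGIN
        INSERT INTO audit_log (user_id, action, old_value, new_value)
        VALUES ((SELECT user_id FROM session), 'UPDATE', OLD.sbp, NEW.sbp);
    END;
    """)
    db.execute("INSERT INTO vitals VALUES (1, 120)")
    db.execute("UPDATE vitals SET sbp = 135 WHERE subject_id = 1")
    print(db.execute("SELECT user_id, action, old_value, new_value FROM audit_log").fetchall())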
Food Composition Database Format and Structure: A User Focused Approach
Clancy, Annabel K.; Woods, Kaitlyn; McMahon, Anne; Probst, Yasmine
2015-01-01
This study aimed to investigate the needs of Australian food composition database users regarding database format and to relate these needs to the formats of databases available globally. Three semi-structured synchronous online focus groups (M = 3, F = 11) and n = 6 female key informant interviews were recorded. Beliefs surrounding the use, training, understanding, benefits and limitations of food composition data and databases were explored. Verbatim transcriptions underwent preliminary coding followed by thematic analysis with NVivo qualitative analysis software to extract the final themes. Schematic analysis was applied to the final themes related to database format. Desktop analysis also examined the format of six key globally available databases. Twenty-four dominant themes were established, of which five related to format: database use, food classification, framework, accessibility and availability, and data derivation. Desktop analysis revealed that food classification systems varied considerably between databases. Microsoft Excel was a common file format used in all databases, and available software varied between countries. Users also recognised that food composition database formats should ideally be designed specifically for the intended use, have a user-friendly food classification system, incorporate accurate data with clear explanation of data derivation, and feature user input. However, such databases are limited by data availability and resources, and further exploration of data sharing options should be considered. Furthermore, users' understanding of the limitations of food composition data and databases is inherent to the correct application of non-specific databases; therefore, further exploration of user FCDB training should also be considered. PMID:26554836
NASA Astrophysics Data System (ADS)
Michel-Sendis, Franco; Martinez-González, Jesus; Gauld, Ian
2017-09-01
SFCOMPO-2.0 is a database of experimental isotopic concentrations measured in destructive radiochemical analysis of spent nuclear fuel (SNF) samples. The database includes corresponding design description of the fuel rods and assemblies, relevant operating conditions and characteristics of the host reactors necessary for modelling and simulation. Aimed at establishing a thorough, reliable, and publicly available resource for code and data validation of safety-related applications, SFCOMPO-2.0 is developed and maintained by the OECD Nuclear Energy Agency (NEA). The SFCOMPO-2.0 database is a Java application which is downloadable from the NEA website.
Design of special purpose database for credit cooperation bank business processing network system
NASA Astrophysics Data System (ADS)
Yu, Yongling; Zong, Sisheng; Shi, Jinfa
2011-12-01
With the popularization of e-finance in cities, its construction is transferring to the vast rural market and developing quickly in depth. Developing a business processing network system suitable for rural credit cooperative banks can make business processing convenient and has good application prospects. In this paper, we analyse the necessity of adopting a special purpose distributed database in a credit cooperation bank system, give the corresponding distributed database system structure, and design the special purpose database and its interface technology. The application in Tongbai Rural Credit Cooperatives has shown that the system has better performance and higher efficiency.
Carroll, Robert; Ramagopalan, Sreeram V.; Cid-Ruzafa, Javier; Lambrelli, Dimitra; McDonald, Laura
2017-01-01
Background: The objective of this study was to investigate the study design characteristics of Post-Authorisation Studies (PAS) requested by the European Medicines Agency which were recorded on the European Union (EU) PAS Register held by the European Network of Centres for Pharmacoepidemiology and Pharmacovigilance (ENCePP). Methods: We undertook a cross-sectional descriptive analysis of all studies registered on the EU PAS Register as of 18 th October 2016. Results: We identified a total of 314 studies on the EU PAS Register, including 81 (26%) finalised, 160 (51%) ongoing and 73 (23%) planned. Of those studies identified, 205 (65%) included risk assessment in their scope, 133 (42%) included drug utilisation and 94 (30%) included effectiveness evaluation. Just over half of the studies (175; 56%) used primary data capture, 135 (43%) used secondary data and 4 (1%) used a hybrid design combining both approaches. Risk assessment and effectiveness studies were more likely to use primary data capture (60% and 85% respectively as compared to 39% and 14% respectively for secondary). The converse was true for drug utilisation studies where 59% were secondary vs. 39% for primary. For type 2 diabetes mellitus, database studies were more commonly used (80% vs 3% chart review, 3% hybrid and 13% primary data capture study designs) whereas for studies in oncology, primary data capture were more likely to be used (85% vs 4% chart review, and 11% database study designs). Conclusions: Results of this analysis show that PAS design varies according to study objectives and therapeutic area. PMID:29188016
Scientific Communication of Geochemical Data and the Use of Computer Databases.
ERIC Educational Resources Information Center
Le Bas, M. J.; Durham, J.
1989-01-01
Describes a scheme in the United Kingdom that coordinates geochemistry publications with a computerized geochemistry database. The database comprises not only data published in the journals but also the remainder of the pertinent data set. The discussion covers the database design; collection, storage and retrieval of data; and plans for future…
76 FR 25344 - Information Collection(s) Being Reviewed by the Federal Communications Commission
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-04
... Second Report and Order the Commission decided to designate one or more database administrators from the private sector to create and operate TV bands databases. The TV band database administrators will act on behalf of the FCC, but will offer a privately owned and operated service. Each database administrator...
Lund, Jennifer L.; Richardson, David B.; Stürmer, Til
2016-01-01
Better understanding of biases related to selective prescribing of, and adherence to, preventive treatments has led to improvements in the design and analysis of pharmacoepidemiologic studies. One influential development has been the “active comparator, new user” study design, which seeks to emulate the design of a head-to-head randomized controlled trial. In this review, we first discuss biases that may affect pharmacoepidemiologic studies and describe their direction and magnitude in a variety of settings. We then present the historical foundations of the active comparator, new user study design and explain how this design conceptually mitigates biases leading to a paradigm shift in pharmacoepidemiology. We offer practical guidance on the implementation of the study design using administrative databases. Finally, we provide an empirical example in which the active comparator, new user study design addresses biases that have previously impeded pharmacoepidemiologic studies. PMID:26954351
SwePep, a database designed for endogenous peptides and mass spectrometry.
Fälth, Maria; Sköld, Karl; Norrman, Mathias; Svensson, Marcus; Fenyö, David; Andren, Per E
2006-06-01
A new database, SwePep, specifically designed for endogenous peptides, has been constructed to significantly speed up the identification process from complex tissue samples utilizing mass spectrometry. In the identification process the experimental peptide masses are compared with the peptide masses stored in the database, both with and without possible post-translational modifications. This intermediate identification step is fast and singles out peptides that are potential endogenous peptides, which can later be confirmed with tandem mass spectrometry data. Successful applications of this methodology are presented. The SwePep database is a relational database developed using MySQL and Java. The database contains 4180 annotated endogenous peptides from different tissues originating from 394 different species, as well as 50 novel peptides from brain tissue identified in our laboratory. Information about the peptides, including mass, isoelectric point, sequence, and precursor protein, is also stored in the database. This new approach holds great potential for removing the bottleneck that occurs during the identification process in the field of peptidomics. The SwePep database is available to the public.
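The intermediate matching step described above can be sketched as a tolerance comparison of an observed mass against stored masses, with and without candidate modification mass shifts; the tolerance, modification list, and peptide masses below are illustrative assumptions, not SwePep's actual values.

    # Match an experimental mass against database peptides +/- PTM shifts (Da).
    MODS = {"none": 0.0, "amidation": -0.98402, "phosphorylation": +79.96633}

    def match_mass(observed: float, db_peptides: dict[str, float], tol_ppm: float = 10.0):
        """Return (peptide, modification) pairs whose modified mass fits the tolerance."""
        hits = []
        for name, mono_mass in db_peptides.items():
            for mod, shift in MODS.items():
                candidate = mono_mass + shift
                if abs(observed - candidate) / candidate * 1e6 <= tol_ppm:
                    hits.append((name, mod))
        return hits

    db_peptides = {"substance P": 1346.7281, "neurotensin": 1671.9097}  # illustrative masses
    print(match_mass(1346.7282, db_peptides))   # -> [('substance P', 'none')]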
Yonker, V A; Young, K P; Beecham, S K; Horwitz, S; Cousin, K
1990-01-01
This study was designed to make a comparative evaluation of the performance of MEDLINE in covering serial literature. Forensic medicine was chosen because it is an interdisciplinary subject area that would test MEDLARS at the periphery of the system. The evaluation of database coverage was based upon articles included in the bibliographies of scholars in the field of forensic medicine. This method was considered appropriate for characterizing work used by researchers in this field. The results of comparing MEDLINE to other databases evoked some concerns about the selective indexing policy of MEDLINE in serving the interests of those working in forensic medicine. PMID:2403829
Developing a Cyberinfrastructure for integrated assessments of environmental contaminants.
Kaur, Taranjit; Singh, Jatinder; Goodale, Wing M; Kramar, David; Nelson, Peter
2005-03-01
The objective of this study was to design and implement prototype software for capturing field data and automating the process for reporting and analyzing the distribution of mercury. The four phase process used to design, develop, deploy and evaluate the prototype software is described. Two different development strategies were used: (1) design of a mobile data collection application intended to capture field data in a meaningful format and automate transfer into user databases, followed by (2) a re-engineering of the original software to develop an integrated database environment with improved methods for aggregating and sharing data. Results demonstrated that innovative use of commercially available hardware and software components can lead to the development of an end-to-end digital cyberinfrastructure that captures, records, stores, transmits, compiles and integrates multi-source data as it relates to mercury.
Research Methods in Healthcare Epidemiology and Antimicrobial Stewardship-Observational Studies.
Snyder, Graham M; Young, Heather; Varman, Meera; Milstone, Aaron M; Harris, Anthony D; Munoz-Price, Silvia
2016-10-01
Observational studies compare outcomes among subjects with and without an exposure of interest, without intervention from study investigators. Observational studies can be designed as a prospective or retrospective cohort study or as a case-control study. In healthcare epidemiology, these observational studies often take advantage of existing healthcare databases, making them more cost-effective than clinical trials and allowing analyses of rare outcomes. This paper addresses the importance of selecting a well-defined study population, highlights key considerations for study design, and offers potential solutions including biostatistical tools that are applicable to observational study designs. Infect Control Hosp Epidemiol 2016;1-6.
Hu, Jingwen; Lee, Jong B.; Yang, King H.; King, Albert I.
2005-01-01
The objective of this study was to investigate the main injury patterns and sources of non-ejected occupants (i.e. no full/partial ejection) during trip-over crashes, using the NASS-CDS database. Specific injury types and sources of the head, chest, and neck were identified. Results from this study suggest that cerebrum injuries, especially subarachnoid hemorrhage, rib fractures, lung injuries, and cervical spine fractures need to be emphasized if cadaveric tests or numerical simulations are designed to study rollover injury mechanisms. The roof has been identified as the major source for head and neck injuries. However, changing the roof design alone is not likely to improve rollover safety. Instead, the belt restraint systems, passive airbags, roof structure, and new innovations need to be considered in a systematic manner to provide enhanced rollover occupant protection. PMID:16179144
The International Outer Planets Watch atmospheres node database of giant-planet images
NASA Astrophysics Data System (ADS)
Hueso, R.; Legarreta, J.; Sánchez-Lavega, A.; Rojas, J. F.; Gómez-Forrellad, J. M.
2011-10-01
The Atmospheres Node of the International Outer Planets Watch (IOPW) aims to encourage observation and study of the atmospheres of the giant planets. One of its main activities is to provide interaction between the professional and amateur astronomical communities by maintaining an online, fully searchable database of images of the giant planets obtained by amateur astronomers and available to both professionals and amateurs [1]. The IOPW database contains about 13,000 image observations of Jupiter and Saturn obtained in the visible range, with a few contributions of Uranus and Neptune. We describe the organization and structure of the database as posted on the Internet and, in particular, the PVOL software (Planetary Virtual Observatory and Laboratory) designed to manage the site and based on concepts from Virtual Observatory projects.
Nichols, A W
2008-11-01
Objective: To identify sports medicine-related clinical trial research articles in the PubMed MEDLINE database published between 1996 and 2005 and to conduct a review and analysis of topics of research, experimental designs, journals of publication and the internationality of authorships. Hypothesis: Sports medicine research is international in scope, with improving study methodology and an evolution of topics. Design: Structured review of articles identified in a search of a large electronic medical database. Data source: PubMed MEDLINE database. Study selection: Sports medicine-related clinical research trials published between 1996 and 2005. Methods: Review and analysis of articles that met the inclusion criteria; articles were examined for study topics, research methods, experimental subject characteristics, journal of publication, lead authors' and journals' countries of origin, and language of publication. Results: The search retrieved 414 articles, of which 379 (345 English language and 34 non-English language) met the inclusion criteria. The number of publications increased steadily during the study period. Randomised clinical trials were the most common study type, and the "diagnosis, management and treatment of sports-related injuries and conditions" was the most popular study topic. The knee, ankle/foot and shoulder were the most frequent anatomical sites of study. Soccer players and runners were the favourite study subjects. The American Journal of Sports Medicine had the highest number of publications and shared the greatest international diversity of authorships with the British Journal of Sports Medicine. The USA, Australia, Germany and the UK produced a good number of the lead authorships. In all, 91% of articles and 88% of journals were published in English. Conclusions: Sports medicine-related research is internationally diverse, clinical trial publications are increasing, and the sophistication of research design may be improving.
Saokaew, Surasak; Sugimoto, Takashi; Kamae, Isao; Pratoomsoot, Chayanin; Chaiyakunapruk, Nathorn
2015-01-01
Health technology assessment (HTA) has been continuously used for value-based healthcare decisions over the last decade. Healthcare databases represent an important source of information for HTA, and their use has surged in Western countries. Although HTA agencies have been established in the Asia-Pacific region, application and understanding of healthcare databases for HTA is rather limited. Thus, we reviewed existing databases to assess their potential for HTA in Thailand, where HTA has been used officially, and Japan, where HTA is going to be officially introduced. Existing healthcare databases in Thailand and Japan were compiled and reviewed. Database characteristics, e.g. name, host, scope/objective, time/sample size, design, data collection method, population/sample, and variables, were described. Databases were assessed for their potential HTA use in terms of safety/efficacy/effectiveness, social/ethical, organization/professional, economic, and epidemiological domains. The request route for each database was also provided. Forty databases, 20 from Thailand and 20 from Japan, were included. These comprised national censuses, surveys, registries, administrative data, and claims databases. All databases could potentially be used for epidemiological studies. In addition, data on mortality, morbidity, disability, adverse events, quality of life, service/technology utilization, length of stay, and economics were also found in some databases. However, access to patient-level data was limited since information about the databases was not available from public sources. Our findings show that existing databases provide valuable information for HTA research, with limitations on accessibility. Mutual dialogue on healthcare database development and usage for HTA within the Asia-Pacific region is needed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reddy, Tatiparthi B. K.; Thomas, Alex D.; Stamatis, Dimitri
The Genomes OnLine Database (GOLD; http://www.genomesonline.org) is a comprehensive online resource to catalog and monitor genetic studies worldwide. GOLD provides up-to-date status on complete and ongoing sequencing projects along with a broad array of curated metadata. Within this paper, we report version 5 (v.5) of the database. The newly designed database schema and web user interface support several new features, including the implementation of a four-level (meta)genome project classification system and a simplified, intuitive web interface to access reports and launch search tools. The database currently hosts information for about 19 200 studies, 56 000 Biosamples, 56 000 sequencing projects and 39 400 analysis projects. More than just a catalog of worldwide genome projects, GOLD is a manually curated, quality-controlled metadata warehouse. The problems encountered in integrating disparate and varying quality data into GOLD are briefly highlighted. Lastly, GOLD fully supports and follows the Genomic Standards Consortium (GSC) Minimum Information standards.
Developing a national strategy to prevent dementia: Leon Thal Symposium 2009.
Khachaturian, Zaven S; Barnes, Deborah; Einstein, Richard; Johnson, Sterling; Lee, Virginia; Roses, Allen; Sager, Mark A; Shankle, William R; Snyder, Peter J; Petersen, Ronald C; Schellenberg, Gerard; Trojanowski, John; Aisen, Paul; Albert, Marilyn S; Breitner, John C S; Buckholtz, Neil; Carrillo, Maria; Ferris, Steven; Greenberg, Barry D; Grundman, Michael; Khachaturian, Ara S; Kuller, Lewis H; Lopez, Oscar L; Maruff, Paul; Mohs, Richard C; Morrison-Bogorad, Marcelle; Phelps, Creighton; Reiman, Eric; Sabbagh, Marwan; Sano, Mary; Schneider, Lon S; Siemers, Eric; Tariot, Pierre; Touchon, Jacques; Vellas, Bruno; Bain, Lisa J
2010-03-01
Among the major impediments to the design of clinical trials for the prevention of Alzheimer's disease (AD), the most critical is the lack of validated biomarkers, assessment tools, and algorithms that would facilitate identification of asymptomatic individuals with elevated risk who might be recruited as study volunteers. Thus, the Leon Thal Symposium 2009 (LTS'09), on October 27-28, 2009 in Las Vegas, Nevada, was convened to explore strategies to surmount the barriers in designing a multisite, comparative study to evaluate and validate various approaches for detecting and selecting asymptomatic people at risk for cognitive disorders/dementia. The deliberations of LTS'09 included presentations and reviews of different approaches (algorithms, biomarkers, or measures) for identifying asymptomatic individuals at elevated risk for AD who would be candidates for longitudinal or prevention studies. The key nested recommendations of LTS'09 included: (1) establishment of a National Database for Longitudinal Studies as a shared research core resource; (2) launch of a large collaborative study that will compare multiple screening approaches and biomarkers to determine the best method for identifying asymptomatic people at risk for AD; (3) initiation of a Global Database that extends the concept of the National Database for Longitudinal Studies for longitudinal studies beyond the United States; and (4) development of an educational campaign that will address public misconceptions about AD and promote healthy brain aging. 2010. Published by Elsevier Inc.
An integrated database-pipeline system for studying single nucleotide polymorphisms and diseases.
Yang, Jin Ok; Hwang, Sohyun; Oh, Jeongsu; Bhak, Jong; Sohn, Tae-Kwon
2008-12-12
Studies on the relationship between disease and genetic variations such as single nucleotide polymorphisms (SNPs) are important because genetic variations can cause disease by influencing important biological regulatory processes. Despite the need to analyze SNP and disease correlations, most existing databases provide information only on functional variants at specific locations on the genome, or deal with only a few genes associated with disease. There is no combined resource that broadly supports gene-, SNP-, and disease-related information and captures the relationships among such data. Therefore, we developed an integrated database-pipeline system for studying SNPs and diseases. To implement the pipeline system, we first unified complicated and redundant disease terms and gene names using the Unified Medical Language System (UMLS) for classification and noun modification, together with the HUGO Gene Nomenclature Committee (HGNC) and NCBI gene databases. Next, we collected and integrated representative databases for three categories of information: for genes and proteins, the NCBI mRNA, UniProt, UCSC Table Track and MitoDat databases; for genetic variants, the dbSNP, JSNP, ALFRED, and HGVbase databases; and for disease, the OMIM, GAD, and HGMD databases. The database-pipeline system provides a disease thesaurus, including genes and SNPs associated with disease. The search results for these categories are available on the web page http://diseasome.kobic.re.kr/, and a genome browser is also available to highlight findings and to permit convenient review of potentially deleterious SNPs among genes strongly associated with specific diseases and clinical phenotypes. Our system is designed to capture the relationships between SNPs associated with disease and disease-causing genes. The integrated database-pipeline provides a list of candidate genes and SNP markers for evaluation in both epidemiological and molecular biological approaches to disease-gene association studies, and researchers can then semi-automatically select data sets for association studies while considering the relationships between genetic variation and disease. The database can also make disease-association studies more economical and facilitate an understanding of the processes that cause disease. Currently, the database contains 14,674 SNP records and 109,715 gene records associated with human diseases, and it is updated at regular intervals.
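Conceptually, the pipeline's output supports queries that walk from a disease term to associated genes and then to SNP markers; the miniature schema below is a guess for illustration, not the actual diseasome.kobic.re.kr design (rs429358 is a known APOE variant used here only as sample data).

    # Illustrative join across the three integrated categories: gene, SNP, disease.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
    CREATE TABLE gene    (gene_id INTEGER PRIMARY KEY, symbol TEXT);
    CREATE TABLE snp     (rsid TEXT PRIMARY KEY, gene_id INTEGER REFERENCES gene(gene_id));
    CREATE TABLE disease (disease_id INTEGER PRIMARY KEY, umls_term TEXT);
    CREATE TABLE gene_disease (gene_id INTEGER, disease_id INTEGER);

    INSERT INTO gene VALUES (1, 'APOE');
    INSERT INTO snp VALUES ('rs429358', 1);
    INSERT INTO disease VALUES (10, 'Alzheimer disease');
    INSERT INTO gene_disease VALUES (1, 10);
    """)
    # Candidate SNP markers for a unified disease term:
    rows = db.execute("""
        SELECT s.rsid, g.symbol
        FROM disease d
        JOIN gene_disease gd ON gd.disease_id = d.disease_id
        JOIN gene g          ON g.gene_id = gd.gene_id
        JOIN snp s           ON s.gene_id = g.gene_id
        WHERE d.umls_term = 'Alzheimer disease'
    """).fetchall()
    print(rows)   # -> [('rs429358', 'APOE')]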
Relation between experimental and non-experimental study designs. HB vaccines: a case study
Jefferson, T.; Demicheli, V.
1999-01-01
STUDY OBJECTIVE: To examine the relation between experimental and non- experimental study design in vaccinology. DESIGN: Assessment of each study design's capability of testing four aspects of vaccine performance, namely immunogenicity (the capacity to stimulate the immune system), duration of immunity conferred, incidence and seriousness of side effects, and number of infections prevented by vaccination. SETTING: Experimental and non-experimental studies on hepatitis B (HB) vaccines in the Cochrane Vaccines Field Database. RESULTS: Experimental and non-experimental vaccine study designs are frequently complementary but some aspects of vaccine quality can only be assessed by one of the types of study. More work needs to be done on the relation between study quality and its significance in terms of effect size. PMID:10326054
Gilderthorp, Rosanna C
2015-03-01
This study aimed to critically review all studies that have set out to evaluate the use of eye movement desensitization and reprocessing (EMDR) for people diagnosed with both intellectual disability (ID) and post-traumatic stress disorder (PTSD). Searches of the online databases Psych Info, The Cochrane Database of Systematic Reviews, The Cochrane Database of Randomized Control Trials, CINAHL, ASSIA and Medline were conducted. Five studies are described and evaluated. Key positive points include the high clinical salience of the studies and their high external validity. Several common methodological criticisms are highlighted, however, including difficulty in the definition of the terms ID and PTSD, lack of control in design and a lack of consideration of ethical implications. Overall, the articles reviewed indicate cause for cautious optimism about the utility of EMDR with this population. The clinical and research implications of this review are discussed. © The Author(s) 2014.
DeitY-TU face database: its design, multiple camera capturing, characteristics, and evaluation
NASA Astrophysics Data System (ADS)
Bhowmik, Mrinal Kanti; Saha, Kankan; Saha, Priya; Bhattacharjee, Debotosh
2014-10-01
The development of the latest face databases is providing researchers with different and realistic problems that play an important role in the development of efficient algorithms for the automatic recognition of human faces. This paper presents the creation of a new visual face database, named the Department of Electronics and Information Technology-Tripura University (DeitY-TU) face database. It contains face images of 524 persons belonging to different non-tribes and Mongolian tribes of north-east India, with their anthropometric measurements for identification. Database images were captured within a room with controlled variations in illumination, expression, and pose, along with variability in age, gender, accessories, make-up, and partial occlusion. Each image contains the combined primary challenges of face recognition, i.e., illumination, expression, and pose. The database also includes some new features: soft biometric traits such as moles, freckles, scars, etc., and facial anthropometric variations that may be helpful to researchers in biometric recognition. It also provides a comparative study of existing two-dimensional face image databases. The database has been tested using two baseline algorithms, linear discriminant analysis and principal component analysis, which may be used by other researchers as control algorithm performance scores.
One approach to design of speech emotion database
NASA Astrophysics Data System (ADS)
Uhrin, Dominik; Chmelikova, Zdenka; Tovarek, Jaromir; Partila, Pavol; Voznak, Miroslav
2016-05-01
This article describes a system for evaluating the credibility of recordings with emotional content. The sound recordings form a Czech-language database for training and testing speech emotion recognition systems, which are designed to detect human emotions in the voice. Information about a person's emotional state is useful to security forces and emergency call services. People in action (soldiers, police officers and firefighters) are often exposed to stress, and information about their emotional state, carried in the voice, helps dispatch adapt control commands for the intervention procedure. Call agents of an emergency call service must recognize the mental state of the caller to adjust the mood of the conversation; in this case, evaluation of the psychological state is the key factor for a successful intervention. A quality database of sound recordings is essential for creating such systems. Quality databases exist, such as the Berlin Database of Emotional Speech or Humaine, but actors created these databases in an audio studio, which means the recordings contain simulated rather than real emotions. Our research aims at creating a database of Czech emotional recordings of real human speech. Collecting sound samples for the database is only one of the tasks; another, no less important, is to evaluate the significance of the recordings from the perspective of emotional states. The design of a methodology for evaluating the credibility of emotional recordings is described in this article. The results describe the advantages and applicability of the developed method.
Recent Developments in Cultural Heritage Image Databases: Directions for User-Centered Design.
ERIC Educational Resources Information Center
Stephenson, Christie
1999-01-01
Examines the Museum Educational Site Licensing (MESL) Project--a cooperative project between seven cultural heritage repositories and seven universities--as well as other developments of cultural heritage image databases for academic use. Reviews recent literature on image indexing and retrieval, interface design, and tool development, urging a…
Research on Design Information Management System for Leather Goods
NASA Astrophysics Data System (ADS)
Lu, Lei; Peng, Wen-li
The idea of setting up a design information management system for leather goods was put forward to solve the problems existing in the current information management of leather goods. The working principles of the design information management system for leather goods were analyzed in detail: first, the approach for acquiring design information of leather goods was introduced; second, the methods for processing design information were introduced; third, the management of design information in the database was studied. Finally, the application of the system was discussed, taking shoe products as an example.
NASA Technical Reports Server (NTRS)
Bebout, Leslie; Keller, R.; Miller, S.; Jahnke, L.; DeVincenzi, D. (Technical Monitor)
2002-01-01
The Ames Exobiology Culture Collection Database (AECC-DB) has been developed as a collaboration between microbial ecologists and information technology specialists. It allows extensive web-based archiving of information about field samples to document microbial co-habitation of specific ecosystem micro-environments. Documentation and archiving continue as pure cultures are isolated, metabolic properties determined, and DNA extracted and sequenced. In this way, metabolic properties and molecular sequences are clearly linked back to specific isolates and the locations of those microbes in the ecosystem of origin. Use of this database system is a significant advance over traditional bookkeeping, in which there is generally little or no information about the environments from which microorganisms were isolated beyond a general ecosystem designation (e.g., hot spring). Within each such ecosystem, however, there is a myriad of microenvironments with very different properties, and determining exactly which microenvironment a given microbe comes from is critical for designing appropriate isolation media and interpreting physiological properties. We are currently using the database to aid in the isolation of a large number of cyanobacterial species and will present results by PIs and students demonstrating the utility of this new approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hendrickson, K; Phillips, M; Fishburn, M
Purpose: To implement a common database structure and user-friendly web-browser based data collection tools across several medical institutions to better support evidence-based clinical decision making and comparative effectiveness research through shared outcomes data. Methods: A consortium of four academic medical centers agreed to implement a federated database, known as Oncospace. Initial implementation has addressed differences between institutions in workflow and in the types and breadth of structured information captured. This requires coordination of data collection from departmental oncology information systems (OIS), treatment planning systems, and hospital electronic medical records in order to include as much as possible of the multi-disciplinary clinical data associated with a patient's care. Results: The original database schema was well designed and required only minor changes to meet institution-specific data requirements. Mobile browser interfaces for data entry and review, for both the OIS and the Oncospace database, were tailored to the workflow of individual institutions. Federation of database queries, the ultimate goal of the project, was tested using artificial patient data. The tests serve as proof of principle that the system as a whole, from data collection and entry to responses to research queries of the federated database, is viable. The resolution of inter-institutional use of patient data for research is still not complete. Conclusions: The migration from unstructured data, mainly in the form of notes and documents, to searchable, structured data is difficult. Making the transition requires the cooperation of many groups within the department and can be greatly facilitated by using the structured data to improve clinical processes and workflow. The original database schema design is critical to providing enough flexibility for multi-institutional use, improving each institution's ability to study outcomes, determine best practices, and support research. The project has demonstrated the feasibility of deploying a federated database environment for research purposes to multiple institutions.
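A federated query in this setting can be pictured as a fan-out that sends the same aggregate question to each member site and combines only summary results; the site names, endpoint logic, and counts below are entirely hypothetical, standing in for authenticated calls to each institution's local Oncospace instance.

    # Conceptual fan-out: ask each site locally, aggregate only summary counts.
    import concurrent.futures

    SITES = ["site_a", "site_b", "site_c", "site_d"]   # hypothetical member sites

    def query_site(site: str) -> dict:
        # Placeholder for an authenticated call to the site's local database;
        # counts here are invented to keep the sketch self-contained.
        local_counts = {"site_a": 124, "site_b": 98, "site_c": 57, "site_d": 81}
        return {"site": site, "n_patients": local_counts[site]}

    with concurrent.futures.ThreadPoolExecutor() as pool:
        results = list(pool.map(query_site, SITES))

    print(sum(r["n_patients"] for r in results), "patients matching the query across sites")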
Development and evaluation of a study design typology for human research.
Carini, Simona; Pollock, Brad H; Lehmann, Harold P; Bakken, Suzanne; Barbour, Edward M; Gabriel, Davera; Hagler, Herbert K; Harper, Caryn R; Mollah, Shamim A; Nahm, Meredith; Nguyen, Hien H; Scheuermann, Richard H; Sim, Ida
2009-11-14
A systematic classification of study designs would be useful for researchers, systematic reviewers, readers, and research administrators, among others. As part of the Human Studies Database Project, we developed the Study Design Typology to standardize the classification of study designs in human research. We then performed a multiple-observer, masked evaluation of active research protocols at four institutions according to a standardized protocol. Thirty-five protocols were each classified by three reviewers into one of nine high-level study designs for interventional and observational research (e.g., N-of-1, Parallel Group, Case Crossover). Rater classification agreement was moderately high for the 35 protocols (Fleiss' kappa = 0.442) and higher still for the 23 quantitative studies (Fleiss' kappa = 0.463). We conclude that our typology shows initial promise for reliably distinguishing study design types for quantitative human research.
A Model Based Mars Climate Database for the Mission Design
NASA Technical Reports Server (NTRS)
2005-01-01
A viewgraph presentation on a model-based climate database is shown. The topics include: 1) Why a model-based climate database?; 2) Mars Climate Database v3.1: who uses it? (approx. 60 users!); 3) The new Mars Climate Database MCD v4.0; 4) MCD v4.0: what's new?; 5) Simulation of water ice clouds; 6) Simulation of the water ice cycle; 7) A new tool for surface pressure prediction; 8) Access to the database MCD 4.0; 9) How to access the database; and 10) New web access.
Multicenter neonatal databases: Trends in research uses.
Creel, Liza M; Gregory, Sean; McNeal, Catherine J; Beeram, Madhava R; Krauss, David R
2017-01-13
In the US, approximately 12.7% of all live births are preterm, 8.2% are low birth weight (LBW), and 1.5% are very low birth weight (VLBW). Although technological advances have improved mortality rates among preterm and LBW infants, improving overall rates of prematurity and LBW remains a national priority. Monitoring short- and long-term outcomes is critical for advancing medical treatment and minimizing morbidities associated with prematurity or LBW; however, studying these infants can be challenging. Several large, multi-center neonatal databases have been developed to improve research on and quality improvement of treatments for and outcomes of premature and LBW infants. The purpose of this systematic review was to describe three multi-center neonatal databases. We conducted a literature search using PubMed and Google Scholar over the period 1990 to August 2014. Studies were included in our review if one of the databases was used as a primary source of data or comparison. Included studies were categorized by year of publication, study design employed, and research focus. A total of 343 studies published between 1991 and 2014 were included. Studies of premature and LBW infants using these databases have increased over time, and they provide evidence for both neonatology and community-based pediatric practice. Research into treatment and outcomes of premature and LBW infants is expanding, partially due to the availability of large, multicenter databases. The consistency of clinical conditions and neonatal outcomes studied since 1990 demonstrates that there are dedicated research agendas and resources that allow for long-term, and potentially replicable, studies within this population.
ERIC Educational Resources Information Center
Klemperer, Katharina; And Others
1989-01-01
Each of three articles describes an academic library's online catalog that includes locally created databases. Topics covered include database and software selection; systems design and development; database producer negotiations; problems encountered during implementation; database loading; training and documentation; and future plans. (CLB)
Electron Effective-Attenuation-Length Database
National Institute of Standards and Technology Data Gateway
SRD 82 NIST Electron Effective-Attenuation-Length Database (PC database, no charge) This database provides values of electron effective attenuation lengths (EALs) in solid elements and compounds at selected electron energies between 50 eV and 2,000 eV. The database was designed mainly to provide EALs (to account for effects of elastic-electron scattering) for applications in surface analysis by Auger-electron spectroscopy (AES) and X-ray photoelectron spectroscopy (XPS).
Strong, Vivian E.; Selby, Luke V.; Sovel, Mindy; Disa, Joseph J.; Hoskins, William; DeMatteo, Ronald; Scardino, Peter; Jaques, David P.
2015-01-01
Background: Studying surgical secondary events is an evolving effort with no established system for database design, standard reporting, or definitions. Using the Clavien-Dindo classification as a guide, in 2001 we developed a Surgical Secondary Events (SSE) database, based on grade of event and required intervention, to begin prospectively recording and analyzing all surgical secondary events. Study Design: Events are prospectively entered into the database by attending surgeons, house staff, and research staff. In 2008 we performed a blinded external audit of 1,498 randomly selected operations to examine the quality and reliability of the data. Results: 1,498 of 4,284 operations performed during the third quarter of 2008 were audited. 79% (n=1,180) of the operations did not have a secondary event, while 21% (n=318) had an identified event. 91% (1,365) of operations were correctly entered into the SSE database. 97% (129/133) of missed secondary events were Grade I or II. Three Grade III (2%) and one Grade IV (1%) secondary events were missed. There were no missed Grade V secondary events. Conclusion: Grade III-IV events are more accurately collected than Grade I-II events. Robust and accurate secondary events data can be collected by clinicians and research staff, and these data can safely be used for quality improvement projects and research. PMID:25319579
Expert system development for commonality analysis in space programs
NASA Technical Reports Server (NTRS)
Yeager, Dorian P.
1987-01-01
This report is a combination of foundational mathematics and software design. A mathematical model of the Commonality Analysis problem was developed and some important properties discovered. The complexity of the problem is described herein and techniques, both deterministic and heuristic, for reducing that complexity are presented. Weaknesses are pointed out in the existing software (System Commonality Analysis Tool) and several improvements are recommended. It is recommended that: (1) an expert system for guiding the design of new databases be developed; (2) a distributed knowledge base be created and maintained for the purpose of encoding the commonality relationships between design items in commonality databases; (3) a software module be produced which automatically generates commonality alternative sets from commonality databases using the knowledge associated with those databases; and (4) a more complete commonality analysis module be written which is capable of generating any type of feasible solution.
NASA Astrophysics Data System (ADS)
Huang, Duruo; Du, Wenqi; Zhu, Hong
2017-10-01
In performance-based seismic design, ground-motion time histories are needed for analyzing dynamic responses of nonlinear structural systems. However, the number of ground-motion data at design level is often limited. In order to analyze seismic performance of structures, ground-motion time histories need to be either selected from recorded strong-motion database or numerically simulated using stochastic approaches. In this paper, a detailed procedure to select proper acceleration time histories from the Next Generation Attenuation (NGA) database for several cities in Taiwan is presented. Target response spectra are initially determined based on a local ground-motion prediction equation under representative deterministic seismic hazard analyses. Then several suites of ground motions are selected for these cities using the Design Ground Motion Library (DGML), a recently proposed interactive ground-motion selection tool. The selected time histories are representatives of the regional seismic hazard and should be beneficial to earthquake studies when comprehensive seismic hazard assessments and site investigations are unavailable. Note that this method is also applicable to site-specific motion selections with the target spectra near the ground surface considering the site effect.
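As a rough illustration of the spectrum-matching step described above, the sketch below ranks candidate records by the mean squared error of their log spectral accelerations against a target spectrum over the period range of interest. The spectra and record names are made-up stand-ins, not NGA data or the DGML algorithm itself.

```python
# Rank candidate ground-motion records by misfit of log spectral
# accelerations to a target spectrum (illustrative values only).
import numpy as np

periods = np.array([0.1, 0.2, 0.5, 1.0, 2.0])          # s
target_sa = np.array([0.80, 0.95, 0.60, 0.35, 0.15])   # g, target spectrum

candidates = {
    "rec_001": np.array([0.75, 1.00, 0.55, 0.30, 0.12]),
    "rec_002": np.array([1.40, 1.60, 0.90, 0.20, 0.05]),
    "rec_003": np.array([0.82, 0.90, 0.63, 0.38, 0.17]),
}

def misfit(sa, target):
    # Mean squared error in log space, a common spectral-shape measure.
    return np.mean((np.log(sa) - np.log(target)) ** 2)

ranked = sorted(candidates, key=lambda k: misfit(candidates[k], target_sa))
print(ranked)  # best-matching records first: ['rec_003', 'rec_001', 'rec_002']
```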
Intelligent communication assistant for databases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jakobson, G.; Shaked, V.; Rowley, S.
1983-01-01
An intelligent communication assistant for databases, called FRED (front end for databases), is explored. FRED is designed to facilitate access to database systems by users of varying levels of experience. FRED is a second generation of natural language front ends for databases and is intended to solve two critical interface problems between end users and databases: connectivity and communication. The authors report their experiences in developing software for natural language query processing, dialog control, and knowledge representation, as well as the direction of future work. 10 references.
Implementation and Evaluation of Microcomputer Systems for the Republic of Turkey’s Naval Ships.
1986-03-01
...important database design tool for both logical and physical database design, such as flowcharts or pseudocode are used for program design. Logical... string manipulation in FORTRAN is difficult but not impossible. BASIC (Beginners All-Purpose Symbolic Instruction Code): BASIC is currently the most...
The Design and Evaluation of a Front-End User Interface for Energy Researchers.
ERIC Educational Resources Information Center
Borgman, Christine L.; And Others
1989-01-01
Reports on the Online Access to Knowledge (OAK) Project, which developed software to support end user access to a Department of Energy database based on the skill levels and needs of energy researchers. The discussion covers issues in development, evaluation, and the study of user behavior in designing an interface tailored to a special…
ERIC Educational Resources Information Center
Lucas, Paul M.
2009-01-01
This study utilized a mixed-method design in order to investigate the alignment of secondary science teachers' instructional methodologies and their homework designs. Surveys were distributed to educators from a Center for Ocean Sciences Excellence Education (COSEE) database. Coding rubrics were developed to categorize the participants' responses…
ERIC Educational Resources Information Center
Crawford, April D.; Zucker, Tricia A.; Williams, Jeffrey M.; Bhavsar, Vibhuti; Landry, Susan H.
2013-01-01
Although coaching is a popular approach for enhancing the quality of Tier 1 instruction, limited research has addressed observational measures specifically designed to focus coaching on evidence-based practices. This study explains the development of the prekindergarten (pre-k) Classroom Observation Tool (COT) designed for use in a data-based…
American Association of University Women: Branch Operations Data Modeling Case
ERIC Educational Resources Information Center
Harris, Ranida B.; Wedel, Thomas L.
2015-01-01
A nationally prominent woman's advocacy organization is featured in this case study. The scenario may be used as a teaching case, an assignment, or a project in systems analysis and design as well as database design classes. Students are required to document the system operations and requirements, apply logical data modeling concepts, and design…
Very large database of lipids: rationale and design.
Martin, Seth S; Blaha, Michael J; Toth, Peter P; Joshi, Parag H; McEvoy, John W; Ahmed, Haitham M; Elshazly, Mohamed B; Swiger, Kristopher J; Michos, Erin D; Kwiterovich, Peter O; Kulkarni, Krishnaji R; Chimera, Joseph; Cannon, Christopher P; Blumenthal, Roger S; Jones, Steven R
2013-11-01
Blood lipids have major cardiovascular and public health implications. Lipid-lowering drugs are prescribed based in part on categorization of patients into normal or abnormal lipid metabolism, yet relatively little emphasis has been placed on: (1) the accuracy of current lipid measures used in clinical practice, (2) the reliability of current categorizations of dyslipidemia states, and (3) the relationship of advanced lipid characterization to other cardiovascular disease biomarkers. To these ends, we developed the Very Large Database of Lipids (NCT01698489), an ongoing database protocol that harnesses deidentified data from the daily operations of a commercial lipid laboratory. The database includes individuals who were referred for clinical purposes for a Vertical Auto Profile (Atherotech Inc., Birmingham, AL), which directly measures cholesterol concentrations of low-density lipoprotein, very low-density lipoprotein, intermediate-density lipoprotein, high-density lipoprotein, their subclasses, and lipoprotein(a). Individual Very Large Database of Lipids studies, ranging from studies of measurement accuracy, to dyslipidemia categorization, to biomarker associations, to characterization of rare lipid disorders, are investigator-initiated and utilize peer-reviewed statistical analysis plans to address a priori hypotheses/aims. In the first database harvest (Very Large Database of Lipids 1.0) from 2009 to 2011, there were 1 340 614 adult and 10 294 pediatric patients; the adult sample had a median age of 59 years (interquartile range, 49-70 years) with even representation by sex. Lipid distributions closely matched those from the population-representative National Health and Nutrition Examination Survey. The second harvest of the database (Very Large Database of Lipids 2.0) is underway. Overall, the Very Large Database of Lipids database provides an opportunity for collaboration and new knowledge generation through careful examination of granular lipid data on a large scale. © 2013 Wiley Periodicals, Inc.
Linder, Suzanne K.; Kamath, Geetanjali R.; Pratt, Gregory F.; Saraykar, Smita S.; Volk, Robert J.
2015-01-01
Objective To compare the effectiveness of two search methods in identifying studies that used the Control Preferences Scale (CPS), a healthcare decision-making instrument commonly used in clinical settings. Study Design & Setting We searched the literature using two methods: 1) keyword searching using variations of “control preferences scale” and 2) cited reference searching using two seminal CPS publications. We searched three bibliographic databases [PubMed, Scopus, Web of Science (WOS)] and one full-text database (Google Scholar). We report precision and sensitivity as measures of effectiveness. Results Keyword searches in bibliographic databases yielded high average precision (90%), but low average sensitivity (16%). PubMed was the most precise, followed closely by Scopus and WOS. The Google Scholar keyword search had low precision (54%) but provided the highest sensitivity (70%). Cited reference searches in all databases yielded moderate sensitivity (45–54%), but precision ranged from 35–75% with Scopus being the most precise. Conclusion Cited reference searches were more sensitive than keyword searches, making it a more comprehensive strategy to identify all studies that use a particular instrument. Keyword searches provide a quick way of finding some but not all relevant articles. Goals, time and resources should dictate the combination of which methods and databases are used. PMID:25554521
Zhang, Jie; Wang, Yuping; Feng, Junhong
2013-01-01
In association rule mining, evaluating an association rule requires repeatedly scanning the database to compare the whole database with the antecedent, the consequent, and the whole rule. In order to decrease the number of comparisons and the time consumed, we present an attribute index strategy. It needs to scan the database only once, to create the attribute index of each attribute. All metrics used to evaluate an association rule then no longer require scanning the database, but acquire data solely by means of the attribute indices. The paper casts association rule mining as a multiobjective problem rather than a single-objective one. In order to make the acquired solutions scatter uniformly toward the Pareto frontier in the objective space, an elitism policy and uniform design are introduced. The paper presents the algorithm of attribute index and uniform design based multiobjective association rule mining with evolutionary algorithm, abbreviated as IUARMMEA. It no longer requires a user-specified minimum support and minimum confidence, but uses a simple attribute index. It uses a well-designed real encoding so as to extend its application scope. Experiments performed on several databases demonstrate that the proposed algorithm has excellent performance and can significantly reduce the number of comparisons and the time consumed. PMID:23766683
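The attribute-index idea above amounts to building an inverted index in a single database scan and answering support and confidence queries with set intersections. A minimal sketch follows (illustrative transactions, not the IUARMMEA algorithm itself):

```python
# One scan builds an inverted index from item to transaction IDs; support
# and confidence then come from set intersections, with no further scans.
from collections import defaultdict

transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"milk", "butter", "bread"},
    {"milk"},
]

# Single pass over the database.
index = defaultdict(set)
for tid, items in enumerate(transactions):
    for item in items:
        index[item].add(tid)

def support(itemset):
    # Transactions containing every item = intersection of their ID sets.
    tids = set.intersection(*(index[i] for i in itemset))
    return len(tids) / len(transactions)

def confidence(antecedent, consequent):
    return support(antecedent | consequent) / support(antecedent)

print(support({"bread", "milk"}))        # 0.5
print(confidence({"bread"}, {"milk"}))   # 0.666...
```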
GeneSCF: a real-time based functional enrichment tool with support for multiple organisms.
Subhash, Santhilal; Kanduri, Chandrasekhar
2016-09-13
High-throughput technologies such as ChIP-sequencing, RNA-sequencing, DNA sequencing, and quantitative metabolomics generate huge volumes of data. Researchers often rely on functional enrichment tools to interpret the biological significance of the affected genes from these high-throughput studies. However, currently available functional enrichment tools need to be updated frequently to adapt to new entries from the functional database repositories. Hence there is a need for a simplified tool that can perform functional enrichment analysis using updated information directly from source databases such as KEGG, Reactome, or Gene Ontology. In this study, we focused on designing a command-line tool called GeneSCF (Gene Set Clustering based on Functional annotations) that can predict the functionally relevant biological information for a set of genes in a real-time, updated manner. It is designed to handle information from more than 4,000 organisms from freely available, prominent functional databases like KEGG, Reactome, and Gene Ontology. We successfully employed our tool on two published datasets to predict the biologically relevant functional information. The core features of this tool were tested on Linux machines without requiring additional dependencies. GeneSCF is more reliable than other enrichment tools because of its ability to use reference functional databases in real time to perform enrichment analysis. It is easy to integrate with other pipelines available for downstream analysis of high-throughput data. More importantly, GeneSCF can run multiple gene lists simultaneously on different organisms, thereby saving time for users. Since the tool is designed to be ready to use, there is no need for any complex compilation or installation procedures.
The effect of wild card designations and rare alleles in forensic DNA database searches.
Tvedebrink, Torben; Bright, Jo-Anne; Buckleton, John S; Curran, James M; Morling, Niels
2015-05-01
Forensic DNA databases are powerful tools used for the identification of persons of interest in criminal investigations. Typically, they consist of two parts: (1) a database containing DNA profiles of known individuals and (2) a database of DNA profiles associated with crime scenes. The risk of adventitious or chance matches between crimes and innocent people increases as the number of profiles within a database grows and more data are shared between various forensic DNA databases, e.g., from different jurisdictions. The DNA profiles obtained from crime scenes are often partial because crime samples may be compromised in quantity or quality. When an individual's profile cannot be resolved from a DNA mixture, ambiguity is introduced. A wild card, F, may be used in place of an allele that has dropped out or when an ambiguous profile is resolved from a DNA mixture. Variant alleles that do not correspond to any marker in the allelic ladder, or that appear above or below the extent of the allelic ladder range, are assigned the designation R for rare allele. R alleles are position specific with respect to the observed/unambiguous allele. The F and R designations are made when the exact genotype has not been determined. The F and R designations are treated as wild cards for searching, which increases the chance of adventitious matches. We investigated the probability of adventitious matches given these two types of wild cards. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
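To see why wild cards widen the net, consider a toy matcher in which an F designation is compatible with any allele at its position. The locus, alleles, and profiles below are invented for illustration; real search engines implement considerably more nuance (including the position-specific R alleles described above).

```python
# Toy wild-card matching: F is compatible with any allele, so it can only
# enlarge the set of database profiles that match the evidence profile.
F = "F"  # wild card for a dropped-out or ambiguous allele

def locus_match(evidence, reference):
    """Can the evidence genotype (possibly containing F) match the reference?"""
    (a, b), (x, y) = evidence, reference
    ok = lambda e, r: e == F or e == r
    # Try both pairings of evidence alleles to reference alleles.
    return (ok(a, x) and ok(b, y)) or (ok(a, y) and ok(b, x))

database = {
    "person_1": {"D18S51": (12, 14)},
    "person_2": {"D18S51": (12, 12)},
    "person_3": {"D18S51": (13, 14)},
}

def matches(evidence):
    return [pid for pid, profile in database.items()
            if all(locus_match(evidence[l], profile[l]) for l in evidence)]

print(matches({"D18S51": (12, 14)}))  # ['person_1'] -- exact genotype
print(matches({"D18S51": (12, F)}))   # ['person_1', 'person_2'] -- wild card widens the net
```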
Relation between experimental and non-experimental study designs. HB vaccines: a case study.
Jefferson, T; Demicheli, V
1999-01-01
To examine the relation between experimental and non-experimental study design in vaccinology. Assessment of each study design's capability of testing four aspects of vaccine performance, namely immunogenicity (the capacity to stimulate the immune system), duration of immunity conferred, incidence and seriousness of side effects, and number of infections prevented by vaccination. Experimental and non-experimental studies on hepatitis B (HB) vaccines in the Cochrane Vaccines Field Database. Experimental and non-experimental vaccine study designs are frequently complementary but some aspects of vaccine quality can only be assessed by one of the types of study. More work needs to be done on the relation between study quality and its significance in terms of effect size.
Mungall, Christopher J; Emmert, David B
2007-07-01
A few years ago, FlyBase undertook to design a new database schema to store Drosophila data. It would fully integrate genomic sequence and annotation data with bibliographic, genetic, phenotypic and molecular data from the literature representing a distillation of the first 100 years of research on this major animal model system. In developing this new integrated schema, FlyBase also made a commitment to ensure that its design was generic, extensible and available as open source, so that it could be employed as the core schema of any model organism data repository, thereby avoiding redundant software development and potentially increasing interoperability. Our question was whether we could create a relational database schema that would be successfully reused. Chado is a relational database schema now being used to manage biological knowledge for a wide variety of organisms, from human to pathogens, especially the classes of information that directly or indirectly can be associated with genome sequences or the primary RNA and protein products encoded by a genome. Biological databases that conform to this schema can interoperate with one another, and with application software from the Generic Model Organism Database (GMOD) toolkit. Chado is distinctive because its design is driven by ontologies. The use of ontologies (or controlled vocabularies) is ubiquitous across the schema, as they are used as a means of typing entities. The Chado schema is partitioned into integrated subschemas (modules), each encapsulating a different biological domain, and each described using representations in appropriate ontologies. To illustrate this methodology, we describe here the Chado modules used for describing genomic sequences. GMOD is a collaboration of several model organism database groups, including FlyBase, to develop a set of open-source software for managing model organism data. The Chado schema is freely distributed under the terms of the Artistic License (http://www.opensource.org/licenses/artistic-license.php) from GMOD (www.gmod.org).
Automated extraction of knowledge for model-based diagnostics
NASA Technical Reports Server (NTRS)
Gonzalez, Avelino J.; Myler, Harley R.; Towhidnejad, Massood; Mckenzie, Frederic D.; Kladke, Robin R.
1990-01-01
The concept of accessing computer-aided design (CAD) databases and automatically extracting a process model is investigated as a possible source for the generation of knowledge bases for model-based reasoning systems. The resulting system, referred to as automated knowledge generation (AKG), uses an object-oriented programming structure and constraint techniques, as well as an internal database of component descriptions, to generate a frame-based structure that describes the model. The procedure has been designed to be general enough to be easily coupled to CAD systems that feature a database capable of providing label and connectivity data from the drawn system. The AKG system is capable of defining knowledge bases in the formats required by various model-based reasoning tools.
Charting a Path to Location Intelligence for STD Control.
Gerber, Todd M; Du, Ping; Armstrong-Brown, Janelle; McNutt, Louise-Anne; Coles, F Bruce
2009-01-01
This article describes the New York State Department of Health's GeoDatabase project, which developed new methods and techniques for designing and building a geocoding and mapping data repository for sexually transmitted disease (STD) control. The GeoDatabase development was supported through the Centers for Disease Control and Prevention's Outcome Assessment through Systems of Integrated Surveillance workgroup. The design and operation of the GeoDatabase relied upon commercial-off-the-shelf tools that other public health programs may also use for disease-control systems. This article provides a blueprint of the structure and software used to build the GeoDatabase and integrate location data from multiple data sources into the everyday activities of STD control programs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campbell, Andrea Beth
2004-07-01
This is a case study of the NuMAC nuclear accountability system developed at a private fuel fabrication facility. The paper investigates nuclear material accountability and safeguards by researching the expert knowledge applied in the system's design and development. Presented is a system developed to detect and deter the theft of weapons-grade nuclear material. Examined is the system architecture, including: design and development issues; stakeholder issues; how the system was built and evolved; software design, database design, and development-tool considerations; and security and computing ethics. (author)
Suchard, Marc A; Zorych, Ivan; Simpson, Shawn E; Schuemie, Martijn J; Ryan, Patrick B; Madigan, David
2013-10-01
The self-controlled case series (SCCS) offers potential as a statistical method for risk identification involving medical products from large-scale observational healthcare data. However, analytic design choices remain in encoding longitudinal health records into the SCCS framework, and its risk identification performance across real-world databases is unknown. To evaluate the performance of SCCS and its design choices as a tool for risk identification in observational healthcare data, we examined the risk identification performance of SCCS across five design choices, using 399 drug-health outcome pairs in five real observational databases (four administrative claims and one electronic health records). In these databases, the pairs involve 165 positive controls and 234 negative controls. We also consider several synthetic databases with known relative risks between drug-outcome pairs. We evaluate risk identification performance by estimating the area under the receiver-operator characteristics curve (AUC), and bias and coverage probability in the synthetic examples. The SCCS achieves strong predictive performance. Twelve of the twenty health outcome-database scenarios return AUCs >0.75 across all drugs. Including all adverse events, instead of just the first per patient, and applying a multivariate adjustment for concomitant drug use are the most important design choices. However, the SCCS as applied here returns relative risk point estimates biased towards the null value of 1, with low coverage probability. The SCCS, recently extended to apply a multivariate adjustment for concomitant drug use, offers promise as a statistical tool for risk identification in large-scale observational healthcare databases. Poor estimator calibration dampens enthusiasm, but ongoing work should correct this shortcoming.
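The AUC used above has a simple Mann-Whitney reading: treat each drug-outcome pair's estimated relative risk as a score and compute the probability that a positive control outranks a negative control. A minimal sketch follows; the RR values are invented for illustration, not the paper's actual estimates.

```python
# Mann-Whitney form of the AUC over positive- and negative-control scores.
def auc(positive_scores, negative_scores):
    wins = ties = 0
    for p in positive_scores:
        for n in negative_scores:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(positive_scores) * len(negative_scores))

rr_positive = [2.1, 1.6, 3.0, 1.2]   # estimated RRs for positive controls (made up)
rr_negative = [0.9, 1.1, 1.3, 1.0]   # estimated RRs for negative controls (made up)
print(auc(rr_positive, rr_negative))  # 0.9375 for these values
```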
Huber, Lara
2011-06-01
In the neurosciences, digital databases are increasingly becoming important tools for rendering and distributing data. This development is due to the growing impact of imaging-based trial design in cognitive neuroscience, including morphological as well as functional imaging technologies. As the case of the 'Laboratory of Neuro Imaging' (LONI) shows, databases are attributed a specific epistemological power: since the 1990s, databasing has been seen to foster the integration of neuroscientific data, although local regimes of data production, manipulation, and interpretation are also challenging this development. Databasing in the neurosciences goes along with the introduction of new structures for integrating local data, hence establishing digital spaces of knowledge (epistemic spaces): at this stage, inherent norms of digital databases are affecting regimes of imaging-based trial design, for example clinical research into Alzheimer's disease.
DockScreen: A database of in silico biomolecular interactions to support computational toxicology
We have developed DockScreen, a database of in silico biomolecular interactions designed to enable rational molecular toxicological insight within a computational toxicology framework. This database is composed of chemical/target (receptor and enzyme) binding scores calculated by...
NBIC: National Ballast Information Clearinghouse
Smithsonian Environmental Research Center / US Coast Guard. Database Manager: Tami Huber; Senior Analyst/Ecologist: Mark Minton; Data Managers: Ashley Arnwine, Jessica Hardee, Amanda Reynolds; Database Design and Programming/Application Programming: Paul Winterbauer
Ethics across the computer science curriculum: privacy modules in an introductory database course.
Appel, Florence
2005-10-01
This paper describes the author's experience of infusing an introductory database course with privacy content, and the ongoing project, entitled Integrating Ethics Into the Database Curriculum, that evolved from that experience. The project, which has received funding from the National Science Foundation, involves the creation of a set of privacy modules that can be implemented systematically by database educators throughout the database design thread of an undergraduate course.
SORTEZ: a relational translator for NCBI's ASN.1 database.
Hart, K W; Searls, D B; Overton, G C
1994-07-01
The National Center for Biotechnology Information (NCBI) has created a database collection that includes several protein and nucleic acid sequence databases, a biosequence-specific subset of MEDLINE, as well as value-added information such as links between similar sequences. Information in the NCBI database is modeled in Abstract Syntax Notation 1 (ASN.1), an Open Systems Interconnection protocol designed for exchanging structured data between software applications rather than as a data model for database systems. While the NCBI database is distributed with an easy-to-use information retrieval system, ENTREZ, the ASN.1 data model currently lacks an ad hoc query language for general-purpose data access. For that reason, we have developed a software package, SORTEZ, that transforms the ASN.1 database (or other databases with nested data structures) to a relational data model and subsequently to a relational database management system (Sybase), where information can be accessed through the relational query language SQL. Because the need to transform data from one data model and schema to another arises naturally in several important contexts (including efficient execution of specific applications, access to multiple databases, and adaptation to database evolution), this work also serves as a practical study of the issues involved in the various stages of database transformation. We show that transformation from the ASN.1 data model to a relational data model can be largely automated, but that schema transformation and data conversion require considerable domain expertise and would greatly benefit from additional support tools.
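The core of such a transformation, mapping nested records onto parent/child tables linked by foreign keys, can be sketched in a few lines. This is a toy illustration under invented names, not SORTEZ's actual mapping of the NCBI ASN.1 model:

```python
# Toy nested-to-relational flattening: scalars stay as columns in the
# current table; nested groups become child tables keyed to the parent row.
from collections import defaultdict

def flatten(record, table, tables, counters, parent_id=None):
    counters[table] += 1
    row_id = counters[table]
    row = {"id": row_id, "parent_id": parent_id}
    for field, value in record.items():
        children = value if isinstance(value, list) else [value]
        if children and all(isinstance(c, dict) for c in children):
            for child in children:            # repeated or singular nested group
                flatten(child, f"{table}_{field}", tables, counters, row_id)
        else:
            row[field] = value                # scalar -> column
    tables[table].append(row)

tables, counters = defaultdict(list), defaultdict(int)
entry = {"accession": "U00001", "organism": "H. sapiens",
         "references": [{"pmid": 123}, {"pmid": 456}]}
flatten(entry, "seq_entry", tables, counters)
print(dict(tables))
# 'seq_entry' holds the scalar columns; 'seq_entry_references' holds one
# row per citation, with parent_id pointing back to the entry row.
```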
Saokaew, Surasak; Sugimoto, Takashi; Kamae, Isao; Pratoomsoot, Chayanin; Chaiyakunapruk, Nathorn
2015-01-01
Background: Health technology assessment (HTA) has been used continuously for value-based healthcare decisions over the last decade. Healthcare databases represent an important source of information for HTA, and their use has surged in Western countries. Although HTA agencies have been established in the Asia-Pacific region, application and understanding of healthcare databases for HTA is rather limited. We therefore reviewed existing databases to assess their potential for HTA in Thailand, where HTA has been used officially, and Japan, where HTA is going to be officially introduced. Method: Existing healthcare databases in Thailand and Japan were compiled and reviewed. Database characteristics, e.g., name of database, host, scope/objective, time/sample size, design, data collection method, population/sample, and variables, were described. Databases were assessed for their potential HTA use in terms of safety/efficacy/effectiveness, social/ethical, organization/professional, economic, and epidemiological domains. The request route for each database was also provided. Results: Forty databases (20 from Thailand and 20 from Japan) were included. These comprised national censuses, surveys, registries, administrative data, and claims databases. All databases could potentially be used for epidemiological studies. In addition, data on mortality, morbidity, disability, adverse events, quality of life, service/technology utilization, length of stay, and economics were also found in some databases. However, access to patient-level data was limited, since information about the databases was not available from public sources. Conclusion: Our findings show that existing databases provide valuable information for HTA research, with limitations on accessibility. Mutual dialogue on healthcare database development and usage for HTA within the Asia-Pacific region is needed. PMID:26560127
A Database Design and Development Case: Home Theater Video
ERIC Educational Resources Information Center
Ballenger, Robert; Pratt, Renee
2012-01-01
This case consists of a business scenario of a small video rental store, Home Theater Video, which provides background information, a description of the functional business requirements, and sample data. The case provides sufficient information to design and develop a moderately complex database to assist Home Theater Video in solving their…
Vapor Compression Cycle Design Program (CYCLE_D)
National Institute of Standards and Technology Data Gateway
SRD 49 NIST Vapor Compression Cycle Design Program (CYCLE_D) (PC database for purchase) The CYCLE_D database package simulates vapor compression refrigeration cycles. It is fully compatible with REFPROP 9.0 and covers 62 single-compound refrigerants. Fluids can be used in mixtures comprising up to five components.
Salemi, Jason L; Salinas-Miranda, Abraham A; Wilson, Roneé E; Salihu, Hamisu M
2015-01-01
Objective To describe the use of a clinically enhanced maternal and child health (MCH) database to strengthen community-engaged research activities, and to support the sustainability of data infrastructure initiatives. Data Sources/Study Setting Population-based, longitudinal database covering over 2.3 million mother–infant dyads during a 12-year period (1998–2009) in Florida. Setting: A community-based participatory research (CBPR) project in a socioeconomically disadvantaged community in central Tampa, Florida. Study Design Case study of the use of an enhanced state database for supporting CBPR activities. Principal Findings A federal data infrastructure award resulted in the creation of an MCH database in which over 92 percent of all birth certificate records for infants born between 1998 and 2009 were linked to maternal and infant hospital encounter-level data. The population-based, longitudinal database was used to supplement data collected from focus groups and community surveys with epidemiological and health care cost data on important MCH disparity issues in the target community. Data were used to facilitate a community-driven, decision-making process in which the most important priorities for intervention were identified. Conclusions Integrating statewide all-payer, hospital-based databases into CBPR can empower underserved communities with a reliable source of health data, and it can promote the sustainability of newly developed data systems. PMID:25879276
The UMD-p53 database: new mutations and analysis tools.
Béroud, Christophe; Soussi, Thierry
2003-03-01
The tumor suppressor gene TP53 (p53) is the most extensively studied gene involved in human cancers. More than 1,400 publications have reported mutations of this gene in 150 cancer types, for a total of 14,971 mutations. To exploit this huge bulk of data, specific analytic tools were highly warranted. We therefore developed a locus-specific database software called UMD-p53. This database compiles all somatic and germline mutations, as well as polymorphisms, of the TP53 gene that have been reported in the published literature since 1989, or unpublished data submitted to the database curators. The database is available at www.umd.necker.fr or at http://p53.curie.fr/. In this paper, we describe recent developments of the UMD-p53 database. These developments include new fields and routines. For example, the analysis of putative acceptor or donor splice sites is now automated and gives new insight into the causal role of "silent mutations." Other routines have also been created, such as the prescreening module, the UV module, and the cancer distribution module. These new improvements will help users not only with molecular epidemiology and pharmacogenetic studies but also with patient-based studies. To achieve these purposes we have designed a procedure to check and validate data in order to reach the highest data quality. Copyright 2003 Wiley-Liss, Inc.
CAPR - Theresa Guerin | Center for Cancer Research
Theresa Guerin oversees animal colony management and provides support in breeding experimental animal cohorts, preparing documentation for CAPR preclinical studies, and assisting in the design of drug treatment plans. She also maintains multiple database resources.
Wind Data and Tools | Wind | NREL
...integrated system design and analysis tools. All software is available for download. Wind-Wildlife Impacts database: a collection of articles, reports, studies, and more that focus on the impacts that...
Web based aphasia test using service oriented architecture (SOA)
NASA Astrophysics Data System (ADS)
Voos, J. A.; Vigliecca, N. S.; Gonzalez, E. A.
2007-11-01
Based on an aphasia test for Spanish speakers that analyzes the patient's basic resources of verbal communication, web-enabled software was developed to automate its execution. A clinical database was designed as a complement, in order to evaluate the antecedents (risk factors, pharmacological and medical backgrounds, neurological or psychiatric symptoms, anatomical and physiological characteristics of brain injury, etc.) that are necessary to carry out a multi-factor statistical analysis in different samples of patients. The automated test was developed following a service oriented architecture and implemented in a web site that contains a test suite, which would allow both integrating the aphasia test with other neuropsychological instruments and increasing the available site information for scientific research. The test design, the database, and the study of its psychometric properties (validity, reliability, and objectivity) were carried out in conjunction with neuropsychological researchers, who participated actively in the software design, based on feedback from the patients and other research subjects.
Analysis and preliminary design of Kunming land use and planning management information system
NASA Astrophysics Data System (ADS)
Li, Li; Chen, Zhenjie
2007-06-01
This article analyzes the Kunming land use planning and management information system in terms of system building objectives and system building requirements, and identifies the system's users, functional requirements, and construction requirements. On this basis, a three-tier system architecture based on C/S and B/S is defined: the user interface layer, the business logic layer, and the data services layer. According to the requirements for the construction of a land use planning and management information database, derived from standards of the Ministry of Land and Resources and the construction program of the Golden Land Project, this paper divides the system databases into a planning document database, a planning implementation database, a working map database, and a system maintenance database. In the design of the system interface, various methods and data formats are used for data transmission and sharing between upper and lower levels. According to the system analysis results, the main modules of the system are designed as follows: planning data management; plan and annual-plan preparation and control; day-to-day planning management; planning revision management; decision-making support; thematic query statistics; public participation in planning; and so on. In addition, the system implementation technologies are discussed in terms of operation mode, development platform, and other aspects.
Reply to Comment by Briere and Elliott.
ERIC Educational Resources Information Center
Nash, Michael R.; And Others
1993-01-01
Nash et al. respond to Briere and Elliott's (this issue) comments regarding their study (this issue) on effects of controlling for family environment when studying sexual abuse sequelae. Cites limitations of Briere and Elliott's survey study database. Agrees with Briere and Elliott in call for longitudinal, multimethod designs for examining…
The CEBAF Element Database and Related Operational Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larrieu, Theodore; Slominski, Christopher; Keesee, Marie
The newly commissioned 12GeV CEBAF accelerator relies on a flexible, scalable and comprehensive database to define the accelerator. This database delivers the configuration for CEBAF operational tools, including hardware checkout, the downloadable optics model, control screens, and much more. The presentation will describe the flexible design of the CEBAF Element Database (CED), its features and assorted use case examples.
The European general thoracic surgery database project.
Falcoz, Pierre Emmanuel; Brunelli, Alessandro
2014-05-01
The European Society of Thoracic Surgeons (ESTS) Database is a free registry created by the ESTS in 2001. The current online version was launched in 2007. It currently runs on a Dendrite platform with extensive data security and frequent backups. The main features are a specialty-specific, procedure-specific, prospectively maintained, periodically audited, web-based electronic database, designed for quality control and performance monitoring, which allows for the collection of all general thoracic procedures. Data collection is the "backbone" of the ESTS database. It includes many risk factors, processes of care, and outcomes, which are specially designed for quality control and performance audit. Users can download and export their own data and use them for internal analyses and quality control audits. The ESTS database represents the gold standard of clinical data collection for European general thoracic surgery. Over the past years, the ESTS database has achieved many accomplishments. In particular, the database hit two major milestones: it now includes more than 235 participating centers and 70,000 surgical procedures. The ESTS database is a snapshot of surgical practice that aims at improving patient care. In other words, data capture should become integral to routine patient care, with the final objective of improving quality of care within Europe.
Bouillon-Pichault, Marion; Jullien, Vincent; Bazzoli, Caroline; Pons, Gérard; Tod, Michel
2011-02-01
The aim of this work was to determine whether optimizing the study design in terms of ages and sampling times for a drug eliminated solely via cytochrome P450 3A4 (CYP3A4) would allow us to accurately estimate the pharmacokinetic parameters throughout the entire childhood timespan, while taking into account age- and weight-related changes. A linear monocompartmental model with first-order absorption was used successively with three different residual error models and previously published pharmacokinetic parameters ("true values"). The optimal ages were established by D-optimization using the CYP3A4 maturation function to create "optimized demographic databases." The post-dose times for each previously selected age were determined by D-optimization using the pharmacokinetic model to create "optimized sparse sampling databases." We simulated concentrations by applying the population pharmacokinetic model to the optimized sparse sampling databases to create optimized concentration databases. The latter were modeled to estimate population pharmacokinetic parameters. We then compared true and estimated parameter values. The established optimal design comprised four age ranges: 0.008 years old (i.e., around 3 days), 0.192 years old (i.e., around 2 months), 1.325 years old, and adults, with the same number of subjects per group and three or four samples per subject, depending on the error model. The population pharmacokinetic parameters that we estimated with this design were precise and unbiased (root mean square error [RMSE] and mean prediction error [MPE] less than 11% for clearance and distribution volume and less than 18% for k(a)), whereas the maturation parameters were unbiased but less precise (MPE < 6% and RMSE < 37%). Based on our results, taking growth and maturation into account a priori in a pediatric pharmacokinetic study is theoretically feasible. However, it requires that very early ages be included in studies, which may present an obstacle to the use of this approach. First-pass effects, alternative elimination routes, and combined elimination pathways should also be investigated.
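For reference, the structural model named above (linear, one-compartment, first-order absorption) has a standard closed-form concentration curve. A minimal sketch follows, with illustrative parameter values rather than the published "true values":

```python
# One-compartment model with first-order absorption (ka != ke):
# C(t) = F*D*ka / (V*(ka - ke)) * (exp(-ke*t) - exp(-ka*t))
import math

def concentration(t, dose, F, ka, cl, v):
    """Plasma concentration at time t after a single oral dose."""
    ke = cl / v                                   # elimination rate constant
    coef = F * dose * ka / (v * (ka - ke))
    return coef * (math.exp(-ke * t) - math.exp(-ka * t))

# Illustrative parameters only (dose in mg, cl in L/h, v in L, t in h).
for t in (0.5, 1, 2, 4, 8):
    print(t, round(concentration(t, dose=100, F=1.0, ka=1.5, cl=5.0, v=30.0), 3))
```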
Design and development of a geo-referenced database to radionuclides in food
NASA Astrophysics Data System (ADS)
Nascimento, L. M. E.; Ferreira, A. C. M.; Gonzalez, S. A.
2018-03-01
The primary purpose of the activities concerning information management for environmental assessment is to provide the scientific community with improved access to environmental data, as well as to support the decision-making loop in case of contamination events due either to accidental or intentional causes. In recent years, geotechnologies have become a key reference in environmental research and monitoring, since they deliver efficient retrieval and subsequent processing of data about natural resources. This study aimed at the development of a georeferenced database (SIGLARA – SIstema Georeferenciado Latino Americano de Radionuclídeos em Alimentos), designed to store data on radionuclides in food, available in three languages (Spanish, Portuguese, and English) and employing free software.
2000-03-01
...languages yet still be able to access the legacy relational databases that businesses have huge investments in. JDBC is a low-level API designed for... consider the return on investment. The system requirements, discussed in Chapter II, are the main source of input to developing the relational...
Open-access evidence database of controlled trials and systematic reviews in youth mental health.
De Silva, Stefanie; Bailey, Alan P; Parker, Alexandra G; Montague, Alice E; Hetrick, Sarah E
2018-06-01
To present an update to an evidence-mapping project that consolidates the evidence base of interventions in youth mental health. To promote dissemination of this resource, the evidence map has been translated into a free online database (https://orygen.org.au/Campus/Expert-Network/Evidence-Finder or https://headspace.org.au/research-database/). Included studies are extensively indexed to facilitate searching. A systematic search for prevention and treatment studies in young people (mean age 6-25 years) is conducted annually using Embase, MEDLINE, PsycINFO and the Cochrane Library. Included studies are restricted to controlled trials and systematic reviews published since 1980. To date, 221 866 publications have been screened, of which 2680 have been included in the database. Updates are conducted annually. This shared resource can be utilized to substantially reduce the amount of time involved with conducting literature searches. It is designed to promote the uptake of evidence-based practice and facilitate research to address gaps in youth mental health. © 2017 John Wiley & Sons Australia, Ltd.
Managing Large Scale Project Analysis Teams through a Web Accessible Database
NASA Technical Reports Server (NTRS)
O'Neil, Daniel A.
2008-01-01
Large scale space programs analyze thousands of requirements while mitigating safety, performance, schedule, and cost risks. These efforts involve a variety of roles with interdependent use cases and goals. For example, study managers and facilitators identify ground rules and assumptions for a collection of studies required for a program or project milestone. Task leaders derive product requirements from the ground rules and assumptions and describe the activities needed to produce the required analytical products. Discipline specialists produce the specified products and load results into a file management system. Organizational and project managers provide the personnel and funds to conduct the tasks. Each role has responsibilities to establish information linkages and provide status reports to management. Projects conduct design and analysis cycles to refine designs to meet the requirements and implement risk mitigation plans. At the program level, integrated design and analysis cycle studies are conducted to eliminate every 'to-be-determined' and develop plans to mitigate every risk. At the agency level, strategic studies analyze different approaches to exploration architectures and campaigns. This paper describes a web-accessible database developed by NASA to coordinate and manage tasks at three organizational levels. Other topics in this paper cover integration technologies and techniques for process modeling and enterprise architectures.
Teaching Data Base Search Strategies.
ERIC Educational Resources Information Center
Hannah, Larry
1987-01-01
Discusses database searching as a method for developing thinking skills, and describes an activity suitable for fifth grade through high school using a president's and vice president's database. Teaching methods are presented, including student team activities, and worksheets designed for the AppleWorks database are included. (LRW)
NASA Technical Reports Server (NTRS)
Brenton, J. C.; Barbre, R. E.; Decker, R. K.; Orcutt, J. M.
2018-01-01
The National Aeronautics and Space Administration's (NASA) Marshall Space Flight Center (MSFC) Natural Environments Branch (EV44) provides atmospheric databases and analysis in support of space vehicle design and day-of-launch operations for NASA and commercial launch vehicle programs launching from the NASA Kennedy Space Center (KSC), co-located with the United States Air Force's Eastern Range (ER) at the Cape Canaveral Air Force Station. The ER complex is one of the most heavily instrumented sites in the United States, with over 31 towers measuring various atmospheric parameters on a continuous basis. An inherent challenge with large datasets is ensuring that erroneous data are removed from databases and thus excluded from launch vehicle design analyses. EV44 has put forth great effort in developing quality control (QC) procedures for individual meteorological instruments; however, no standard QC procedures for all databases currently exist, resulting in QC databases that have inconsistencies in variables, development methodologies, and periods of record. The goal of this activity is to build on previous efforts to develop a standardized set of QC procedures from which to build meteorological databases from KSC and the ER, while maintaining open communication with end users from the launch community to develop ways to improve, adapt, and grow the QC database. Details of the QC procedures are described. As the rate of launches increases with additional launch vehicle programs, it is becoming more important that weather databases are continually updated and checked for data quality before use in launch vehicle design and certification analyses.
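One common class of QC step for tower measurements is the automated flagging of physically implausible values and step changes. The sketch below is a hedged illustration of that idea; the thresholds and fill value are invented, not EV44's actual procedures.

```python
# Flag values outside climatological limits and implausible jumps between
# consecutive samples (illustrative thresholds, not EV44's).
def qc_flags(values, lower, upper, max_step):
    flags = []
    for i, v in enumerate(values):
        if not (lower <= v <= upper):
            flags.append((i, v, "out of range"))
        elif i > 0 and abs(v - values[i - 1]) > max_step:
            flags.append((i, v, "step change"))
    return flags

temps_c = [24.1, 24.3, 24.2, 39.9, 24.4, -99.0]   # -99.0 is a typical fill value
print(qc_flags(temps_c, lower=-10.0, upper=45.0, max_step=5.0))
# [(3, 39.9, 'step change'), (4, 24.4, 'step change'), (5, -99.0, 'out of range')]
```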
Jeddi, Fatemeh Rangraz; Farzandipoor, Mehrdad; Arabfard, Masoud; Hosseini, Azam Haj Mohammad
2014-04-01
The purpose of this study was to investigate the current situation and present a conceptual model for a clinical governance information system, using UML, in two sample hospitals. Although use of information is one of the fundamental components of clinical governance, information management unfortunately receives little attention. A cross-sectional study was conducted from October 2012 to May 2013. Data were gathered through questionnaires and interviews in two sample hospitals. Face and content validity of the questionnaire were confirmed by experts. Data were first collected from a pilot hospital, revisions were made, and the final questionnaire was prepared. Data were analyzed with descriptive statistics using SPSS 16 software. From the scenarios derived from the questionnaires, UML diagrams are presented using Rational Rose 7 software. The results showed that only 32.14 percent of the indicators were calculated in the hospitals. No database had been designed, and 100 percent of the hospitals' clinical governance units required one. The clinical governance units of the hospitals do not have access to all of the indicators needed to perform their mission. Defining processes, drawing models, and creating a database are essential for designing the information system.
Jetton, Jennifer G; Guillet, Ronnie; Askenazi, David J; Dill, Lynn; Jacobs, Judd; Kent, Alison L; Selewski, David T; Abitbol, Carolyn L; Kaskel, Fredrick J; Mhanna, Maroun J; Ambalavanan, Namasivayam; Charlton, Jennifer R
2016-01-01
Acute kidney injury (AKI) affects ~30% of hospitalized neonates. Critical to advancing our understanding of neonatal AKI is collaborative research among neonatologists and nephrologists. The Neonatal Kidney Collaborative (NKC) is an international, multidisciplinary group dedicated to investigating neonatal AKI. The AWAKEN study (Assessment of Worldwide Acute Kidney injury Epidemiology in Neonates) was designed to describe the epidemiology of neonatal AKI, validate the definition of neonatal AKI, identify primary risk factors for neonatal AKI, and investigate the contribution of fluid management to AKI events and short-term outcomes. The NKC was established with at least one pediatric nephrologist and neonatologist from 24 institutions in 4 countries (USA, Canada, Australia, and India). A Steering Committee and four subcommittees were created. The database subcommittee oversaw the development of the web-based database (MediData Rave™) that captured all NICU admissions from 1/1/14 to 3/31/14. Inclusion and exclusion criteria were applied to eliminate neonates with a low likelihood of AKI. Data collection included: (1) baseline demographic information; (2) daily physiologic parameters and care received during the first week of life; (3) weekly "snapshots"; (4) discharge information including growth parameters, final diagnoses, discharge medications, and need for renal replacement therapy; and (5) all serum creatinine values. AWAKEN was proposed as human subjects research. The study design allowed for a waiver of informed consent/parental permission. NKC investigators will disseminate data through peer-reviewed publications and educational conferences. The purpose of this publication is to describe the formation of the NKC, the establishment of the AWAKEN cohort and database, future directions, and a few "lessons learned." The AWAKEN database includes ~325 unique variables and >4 million discrete data points. AWAKEN will be the largest, most inclusive neonatal AKI study to date. In addition to validating the neonatal AKI definition and identifying risk factors for AKI, this study will uncover variations in practice patterns related to fluid provision, renal function monitoring, and involvement of pediatric nephrologists during hospitalization. The AWAKEN study will position the NKC to achieve the long-term goal of improving the lives, health, and well-being of newborns at risk for kidney disease.
South American foF2 database using genetic algorithms
NASA Astrophysics Data System (ADS)
Gularte, Erika; Bilitza, Dieter; Carpintero, Daniel; Jaen, Juliana
2016-07-01
We present the first step towards a new database of the ionospheric parameter foF2 for the South American region. The foF2 parameter, corresponding to the maximum of the ionospheric electron density profile and its main sculptor, is of great interest not only in atmospheric studies but also in the realm of radio propagation. Due to its importance, its large variability, and the difficulty of modeling it in time and space, it has been the subject of intense study for decades. The current databases used by the IRI (International Reference Ionosphere) model, based on Fourier expansions, were built in the 1960s from the ionosondes available at that time; they are therefore still short of South American data. The main goal of this work is to upgrade the database by incorporating the data now available, compiled by the RAPEAS (Red Argentina para el Estudio de la Atmósfera Superior, Argentine Network for the Study of the Upper Atmosphere) network. We also developed an algorithm to study foF2 variability based on the modern technique of genetic algorithms, which has been successfully applied in other disciplines. One of the main advantages of this technique is its ability to work with many variables and with unfavorable samples. The results are compared with the IRI databases, and improvements to the latter are suggested. Finally, it is important to note that the new database is designed so that newly available data can be easily incorporated.
The design of moral education website for college students based on ASP.NET
NASA Astrophysics Data System (ADS)
Sui, Chunling; Du, Ruiqing
2012-01-01
A moral education website offers an available solution to the low transmission speed and small influence area of traditional moral education. The aim of this paper is to illustrate the design of one moral education website and the advantages of using it to support moral teaching. The rationale for a moral education website is discussed at the beginning of the paper, and the development tools are introduced. The system design is illustrated through module design and database design, and how to access data in the SQL Server database is discussed in detail. Finally, a conclusion is drawn based on the discussions in this paper.
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
Ongoing or planned hydro research, results of recent studies, and reviews of new books, publications, and software. Items covered this month include: (1) a recommendation that dam designers give more consideration to earthquake resistance, (2) the development of a new wave rotor design, (3) the development of a small hydro database in China, and (4) an ICOLD bulletin on the optimization of construction costs.
Bode, Stefan; Murawski, Carsten; Laham, Simon M.
2018-01-01
A major obstacle for the design of rigorous, reproducible studies in moral psychology is the lack of suitable stimulus sets. Here, we present the Socio-Moral Image Database (SMID), the largest standardized moral stimulus set assembled to date, containing 2,941 freely available photographic images, representing a wide range of morally (and affectively) positive, negative and neutral content. The SMID was validated with over 820,525 individual judgments from 2,716 participants, with normative ratings currently available for all images on affective valence and arousal, moral wrongness, and relevance to each of the five moral values posited by Moral Foundations Theory. We present a thorough analysis of the SMID regarding (1) inter-rater consensus, (2) rating precision, and (3) breadth and variability of moral content. Additionally, we provide recommendations for use aimed at efficient study design and reproducibility, and outline planned extensions to the database. We anticipate that the SMID will serve as a useful resource for psychological, neuroscientific and computational (e.g., natural language processing or computer vision) investigations of social, moral and affective processes. The SMID images, along with associated normative data and additional resources are available at https://osf.io/2rqad/. PMID:29364985
Pattern database applications from design to manufacturing
NASA Astrophysics Data System (ADS)
Zhuang, Linda; Zhu, Annie; Zhang, Yifan; Sweis, Jason; Lai, Ya-Chieh
2017-03-01
Pattern-based approaches are becoming more common and popular as the industry moves to advanced technology nodes. At the beginning of a new technology node, a library of process weak-point patterns for physical and electrical verification is built up and used to prevent known hotspots from recurring in new designs. The pattern set is then expanded to create test keys for process development in order to verify manufacturing capability and to pre-check new tape-out designs for potential yield detractors. As the database grows, the adoption of pattern-based approaches expands from design flows to technology development and on to mass-production purposes. This paper presents the complete downstream working flows of a design pattern database (PDB). This pattern-based data analysis flow covers applications across different functional teams: generating enhancement kits to improve design manufacturability, populating new test designs based on previous learning, generating analysis data to improve mass-production efficiency, and checking manufacturing equipment in-line to verify machine status consistency across different fab sites.
NASA Astrophysics Data System (ADS)
Lee, Sangho; Suh, Jangwon; Park, Hyeong-Dong
2015-03-01
Boring logs are widely used in geological field studies since the data describe various attributes of underground and surface environments. However, it is difficult to manage multiple boring logs in the field, as conventional management and visualization methods are not suitable for integrating and combining large data sets. We developed an iPad application that enables its user to search boring logs rapidly and visualize them using the augmented reality (AR) technique. For the development of the application, a standard borehole database appropriate for a mobile-based borehole database management system was designed. The application consists of three modules: an AR module, a map module, and a database module. The AR module superimposes borehole data on camera imagery as viewed by the user and provides intuitive visualization of borehole locations. The map module shows the locations of corresponding borehole data on a 2D map with additional map layers. The database module provides data management functions over large borehole databases for the other modules. A field survey was also carried out using more than 100,000 borehole records.
ERIC Educational Resources Information Center
Al-Azawei, Ahmed; Serenelli, Fabio; Lundqvist, Karsten
2016-01-01
The Universal Design for Learning (UDL) framework is increasingly drawing the attention of researchers and educators as an effective solution for filling the gap between learner ability and individual differences. This study aims to analyse the content of twelve papers, where the UDL was adopted. The articles were chosen from several databases and…
Towards the design of novel cuprate-based superconductors
NASA Astrophysics Data System (ADS)
Yee, Chuck-Hou
The rapid maturation of materials databases combined with recent development of theories seeking to quantitatively link chemical properties to superconductivity in the cuprates provide the context to design novel superconductors. In this talk, we describe a framework designed to search for new superconductors, which combines chemical rules-of-thumb, insights of transition temperatures from dynamical mean-field theory, first-principles electronic structure tools, materials databases and structure prediction via evolutionary algorithms. We apply the framework to design a family of copper oxysulfides and evaluate the prospects of superconductivity.
NASA Astrophysics Data System (ADS)
Wang, Lusheng; Yang, Yong; Lin, Guohui
Finding the closest object for a query in a database is a classical problem in computer science. For some modern biological applications, computing the similarity between two objects can be very time consuming; for example, it takes a long time to compute the edit distance between two whole chromosomes or the alignment cost of two 3D protein structures. In this paper, we study the nearest neighbor search problem in metric space, where the pairwise distance between two objects in the database is known, and we want to minimize the number of distances computed on-line between the query and objects in the database in order to find the closest object. We have designed two randomized approaches for indexing metric space databases, where objects are described purely by their distances to each other. Analysis and experiments show that our approaches need to compute distances to only O(log n) objects in order to find the closest object, where n is the total number of objects in the database.
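The abstract does not spell out the randomized schemes themselves, but the pruning idea they build on can be shown with a standard pivot table: precomputed object-pivot distances give a triangle-inequality lower bound that lets most candidates be skipped without computing their distance to the query. A minimal sketch with invented data:

```python
import random

def euclid(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

class PivotIndex:
    """Pivot table over a metric space: stores d(p, x) for random pivots p,
    so the triangle inequality |d(q,p) - d(p,x)| <= d(q,x) can prune
    objects without computing their distance to the query."""
    def __init__(self, objects, n_pivots, dist):
        self.objects, self.dist = objects, dist
        self.pivots = random.sample(objects, n_pivots)
        self.table = [[dist(p, x) for p in self.pivots] for x in objects]

    def nearest(self, q):
        d_qp = [self.dist(q, p) for p in self.pivots]
        computed = len(self.pivots)
        best, best_d = None, float("inf")
        for x, row in zip(self.objects, self.table):
            lower = max(abs(a - b) for a, b in zip(d_qp, row))
            if lower >= best_d:          # pruned: cannot beat current best
                continue
            d = self.dist(q, x)
            computed += 1
            if d < best_d:
                best, best_d = x, d
        return best, best_d, computed

random.seed(1)
data = [(random.random(), random.random()) for _ in range(2000)]
idx = PivotIndex(data, 16, euclid)
_, d, n = idx.nearest((0.5, 0.5))
print(f"nearest at distance {d:.4f} using {n} distance computations")
```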
Ontology to relational database transformation for web application development and maintenance
NASA Astrophysics Data System (ADS)
Mahmudi, Kamal; Inggriani Liem, M. M.; Akbar, Saiful
2018-03-01
Ontology is used as the knowledge representation while a database serves as the fact recorder in a KMS (Knowledge Management System). In most applications, data are managed in a database system, updated through the application, and then transformed into knowledge as needed. Once a domain conceptor defines the knowledge in the ontology, the application and database can be generated from the ontology. Most existing frameworks generate the application from its database; in this research, the ontology is used to generate the application. As the data are updated through the application, a mechanism is designed to trigger an update to the ontology so that the application can be rebuilt based on the newest ontology. With this approach, a knowledge engineer has full flexibility to renew the application based on the latest ontology without depending on a software developer; in many cases, the concept needs to be updated when the data change. The framework was built and tested in a Spring Java environment, and a case study was conducted to prove the concept.
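As a rough illustration of the ontology-to-relational step described above (not the authors' actual transformation rules), the sketch below maps each class of a hypothetical mini-ontology to a table, datatype properties to columns, and object properties to foreign keys:

```python
# Hypothetical mini-ontology: classes with datatype properties and
# object properties (references to other classes).
ontology = {
    "Employee": {"datatype": {"name": "TEXT", "salary": "REAL"},
                 "object": {"worksFor": "Department"}},
    "Department": {"datatype": {"title": "TEXT"}, "object": {}},
}

def to_ddl(onto):
    """Map each ontology class to a table; datatype properties become
    columns, object properties become foreign-key columns."""
    stmts = []
    for cls, props in onto.items():
        cols = [f"{cls.lower()}_id INTEGER PRIMARY KEY"]
        cols += [f"{p} {t}" for p, t in props["datatype"].items()]
        for p, target in props["object"].items():
            cols.append(f"{p}_id INTEGER REFERENCES "
                        f"{target.lower()}({target.lower()}_id)")
        stmts.append(f"CREATE TABLE {cls.lower()} (\n  "
                     + ",\n  ".join(cols) + "\n);")
    return stmts

print("\n".join(to_ddl(ontology)))
```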
Database Software for the 1990s.
ERIC Educational Resources Information Center
Beiser, Karl
1990-01-01
Examines trends in the design of database management systems for microcomputers and predicts developments that may occur in the next decade. Possible developments are discussed in the areas of user interfaces, database programming, library systems, the use of MARC data, CD-ROM applications, artificial intelligence features, HyperCard, and…
Some Reliability Issues in Very Large Databases.
ERIC Educational Resources Information Center
Lynch, Clifford A.
1988-01-01
Describes the unique reliability problems of very large databases that necessitate specialized techniques for hardware problem management. The discussion covers the use of controlled partial redundancy to improve reliability, issues in operating systems and database management systems design, and the impact of disk technology on very large…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frazier, Christopher Rawls; Durfee, Justin David; Bandlow, Alisa
The Contingency Contractor Optimization Tool – Prototype (CCOT-P) database is used to store input and output data for the linear program model described in [1]. The database supports queries to retrieve these data, as well as updates and inserts of new input data.
Content Independence in Multimedia Databases.
ERIC Educational Resources Information Center
de Vries, Arjen P.
2001-01-01
Investigates the role of data management in multimedia digital libraries, and its implications for the design of database management systems. Introduces the notions of content abstraction and content independence. Proposes a blueprint of a new class of database technology, which supports the basic functionality for the management of both content…
The Chemical Aquatic Fate and Effects (CAFE) database, developed by NOAA’s Emergency Response Division (ERD), is a centralized data repository that allows for unrestricted access to fate and effects data. While this database was originally designed to help support decisions...
Design, Development, and Maintenance of the GLOBE Program Website and Database
NASA Technical Reports Server (NTRS)
Brummer, Renate; Matsumoto, Clifford
2004-01-01
This is a 1-year (FY03) proposal to design and develop enhancements, implement improved efficiency and reliability, and provide responsive maintenance for the operational GLOBE (Global Learning and Observations to Benefit the Environment) Program website and database. This proposal is renewable, with a 5% annual inflation factor providing an approximate cost for the out years.
ERIC Educational Resources Information Center
Lansdale, Mark W.; Oliff, Lynda; Baguley, Thom S.
2005-01-01
The authors investigated whether memory for object locations in pictures could be exploited to address known difficulties of designing query languages for picture databases. M. W. Lansdale's (1998) model of location memory was adapted to 4 experiments observing memory for everyday pictures. These experiments showed that location memory is…
Database Design Learning: A Project-Based Approach Organized through a Course Management System
ERIC Educational Resources Information Center
Dominguez, Cesar; Jaime, Arturo
2010-01-01
This paper describes an active method for database design learning through practical tasks development by student teams in a face-to-face course. This method integrates project-based learning, and project management techniques and tools. Some scaffolding is provided at the beginning that forms a skeleton that adapts to a great variety of…
Singh, Vinay Kumar; Ambwani, Sonu; Marla, Soma; Kumar, Anil
2009-10-23
We describe the development of a user-friendly tool that assists in the retrieval of information relating to Cry genes in transgenic crops. The tool also helps detect transformed Cry genes from Bacillus thuringiensis present in transgenic plants by providing suitably designed primers for PCR identification of these genes. The tool, designed on a relational database model, enables easy retrieval of information from the database with simple user queries. It also enables users to access related information about Cry genes present in various databases by interacting with different sources (nucleotide sequences, protein sequences, sequence comparison tools, published literature, conserved domains, and evolutionary and structural data). http://insilicogenomics.in/Cry-btIdentifier/welcome.html.
BIRS - Bioterrorism Information Retrieval System.
Tewari, Ashish Kumar; Rashi; Wadhwa, Gulshan; Sharma, Sanjeev Kumar; Jain, Chakresh Kumar
2013-01-01
Bioterrorism is the intentional use of pathogenic strains of microbes to spread terror in a population. There is a definite need to promote research on the development of vaccines, therapeutics, and diagnostic methods as part of preparedness for any future bioterror attack. BIRS is an open-access database of collective information on the organisms related to bioterrorism. The architecture of the database utilizes current open-source technology, viz. PHP ver 5.3.19, MySQL, and IIS server under the Windows platform. The database stores information on literature, generic information, and unique pathways of about 10 microorganisms involved in bioterrorism. It may serve as a collective repository to accelerate drug discovery and vaccine design against such bioterror agents (microbes). The available data have been validated against various online resources and by literature mining in order to provide the user with a comprehensive information system. The database is freely available at http://www.bioterrorism.biowaves.org.
Generation of comprehensive thoracic oncology database--tool for translational research.
Surati, Mosmi; Robinson, Matthew; Nandi, Suvobroto; Faoro, Leonardo; Demchuk, Carley; Kanteti, Rajani; Ferguson, Benjamin; Gangadhar, Tara; Hensing, Thomas; Hasina, Rifat; Husain, Aliya; Ferguson, Mark; Karrison, Theodore; Salgia, Ravi
2011-01-22
The Thoracic Oncology Program Database Project was created to serve as a comprehensive, verified, and accessible repository for well-annotated cancer specimens and clinical data to be available to researchers within the Thoracic Oncology Research Program. This database also captures a large volume of genomic and proteomic data obtained from various tumor tissue studies. A team of clinical and basic science researchers, a biostatistician, and a bioinformatics expert was convened to design the database. Variables of interest were clearly defined, and their descriptions were written in a standard operating manual to ensure consistency of data annotation. Using one protocol for prospective tissue banking and another for retrospective banking, tumor and normal tissue samples were collected from patients who consented to these protocols. Clinical information such as demographics, cancer characterization, and treatment plans for these patients was abstracted and entered into an Access database. Proteomic and genomic data have been included in the database and linked to the clinical information of the patients described within it. The data from each table were linked using the relationships function in Microsoft Access to allow the database manager to connect clinical and laboratory information during a query. The queried data can then be exported for statistical analysis and hypothesis generation.
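To make the linkage concrete, the sketch below uses an in-memory SQLite database as a simplified stand-in for the Access schema, joining a hypothetical patient table to a hypothetical proteomic-results table the way the relationships function connects clinical and laboratory information during a query. All table and column names are invented.

```python
import sqlite3

# Simplified, hypothetical stand-in for the Access schema described above:
# a clinical table linked to a proteomics table by patient_id.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE patient (patient_id INTEGER PRIMARY KEY,
                      histology TEXT, stage TEXT);
CREATE TABLE proteomic_result (patient_id INTEGER REFERENCES patient,
                               marker TEXT, expression REAL);
INSERT INTO patient VALUES (1, 'adenocarcinoma', 'II'),
                           (2, 'squamous', 'III');
INSERT INTO proteomic_result VALUES (1, 'MET', 2.4), (2, 'MET', 0.7);
""")
rows = con.execute("""
    SELECT p.histology, p.stage, r.marker, r.expression
    FROM patient p JOIN proteomic_result r ON p.patient_id = r.patient_id
    WHERE r.marker = 'MET'
""").fetchall()
print(rows)   # joined clinical + laboratory rows, ready for export
```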
ERIC Educational Resources Information Center
Melloy, Patricia G.
2015-01-01
A two-part laboratory exercise was developed to enhance classroom instruction on the significance of p53 mutations in cancer development. Students were asked to mine key information from an international database of p53 genetic changes related to cancer, the IARC TP53 database. Using this database, students designed several data mining activities…
Higher Education for Sustainable Development: A Systematic Review
ERIC Educational Resources Information Center
Wu, Yen-Chun Jim; Shen, Ju-Peng
2016-01-01
Purpose: This study aims to provide a complete understanding of academic research into higher education for sustainable development (HESD). Design/methodology/approach: This study utilizes a systematic review of four scientific literature databases to outline topics of research during the UN's Decade of Education for Sustainable Development…
Sermon, Jan; Geerts, Paul; Denee, Tom R.; De Vos, Cedric; Malfait, Bart; Lamotte, Mark; Mulder, Cornelis L.
2017-01-01
Achieving greater continuation of treatment is a key element in improving treatment outcomes for schizophrenia patients. However, reported treatment continuation can differ markedly depending on the study design, and in retrospective settings treatment continuation remains poor overall among patients using antipsychotics. This study aimed to document the difference in treatment continuation between four long-acting injectable antipsychotics based on the QuintilesIMS LRx databases (national, longitudinal, panel-based prescription databases of retail pharmacies) in the Netherlands and Belgium. Paliperidone palmitate once monthly, risperidone microspheres, haloperidol decanoate, and olanzapine pamoate were studied. This study demonstrated significantly higher treatment continuation for paliperidone palmitate once monthly compared to risperidone microspheres (p < 0.01) and haloperidol decanoate (p < 0.01) in both countries, significantly higher treatment continuation for paliperidone palmitate once monthly compared to olanzapine pamoate in the Netherlands (p < 0.01), and a general trend towards better treatment continuation versus olanzapine pamoate in Belgium. Analysing the subgroup of patients without previous exposure to long-acting antipsychotic treatment revealed the positive impact of previous exposure on treatment continuation with a subsequent long-acting treatment. Additionally, the probability of restarting the index therapy was higher among patients treated with paliperidone palmitate once monthly than among patients treated with risperidone microspheres or haloperidol decanoate. The data source used and the methodology defined ensured, for the first time, a comparison of treatment continuation in a non-interventional study design for the four long-acting injectable antipsychotics studied. PMID:28614404
1984-12-01
[Garbled report front matter omitted; recoverable details: prepared for the Air Force Office of Scientific Research under Grant No. AFOSR 82-0322, December 1984; unclassified, distribution unlimited.]
Database-Guided Discovery of Potent Peptides to Combat HIV-1 or Superbugs
Wang, Guangshun
2013-01-01
Antimicrobial peptides (AMPs), small host defense proteins, are indispensable for the protection of multicellular organisms such as plants and animals from infection. The number of AMPs discovered per year has increased steadily since the 1980s. Over 2,000 natural AMPs from bacteria, protozoa, fungi, plants, and animals have been registered into the antimicrobial peptide database (APD). The majority of these AMPs (>86%) possess 11–50 amino acids, with a net charge from 0 to +7 and hydrophobic percentages between 31% and 70%. This article summarizes peptide discovery on the basis of the APD. The major methods are the linguistic model, database screening, de novo design, and template-based design. Using these methods, we identified various potent peptides against human immunodeficiency virus type 1 (HIV-1) or methicillin-resistant Staphylococcus aureus (MRSA). While the stepwise-designed anti-HIV peptide is disulfide-linked and rich in arginines, the ab initio designed anti-MRSA peptide is linear and rich in leucines. Thus, there are different requirements for antiviral and antibacterial peptides, which could kill pathogens via different molecular targets. The biased amino acid composition in the database-designed peptides, or in natural peptides such as θ-defensins, requires the use of the improved two-dimensional NMR method for structural determination to avoid the publication of misleading structure and dynamics. In the case of human cathelicidin LL-37, structural determination requires 3D NMR techniques. The high-quality structure of LL-37 provides a solid basis for understanding its interactions with membranes of bacteria and other pathogens. In conclusion, the APD database is a comprehensive platform for storing, classifying, searching, predicting, and designing potent peptides against pathogenic bacteria, viruses, fungi, parasites, and cancer cells. PMID:24276259
Definitions of database files and fields of the Personal Computer-Based Water Data Sources Directory
Green, J. Wayne
1991-01-01
This report describes the data-base files and fields of the personal computer-based Water Data Sources Directory (WDSD). The personal computer-based WDSD was derived from the U.S. Geological Survey (USGS) mainframe computer version. The mainframe version of the WDSD is a hierarchical data-base design. The personal computer-based WDSD is a relational data-base design. This report describes the data-base files and fields of the relational data-base design in dBASE IV (the use of brand names in this abstract is for identification purposes only and does not constitute endorsement by the U.S. Geological Survey) for the personal computer. The WDSD contains information on (1) the type of organization, (2) the major orientation of water-data activities conducted by each organization, (3) the names, addresses, and telephone numbers of offices within each organization from which water data may be obtained, (4) the types of data held by each organization and the geographic locations within which these data have been collected, (5) alternative sources of an organization's data, (6) the designation of liaison personnel in matters related to water-data acquisition and indexing, (7) the volume of water data indexed for the organization, and (8) information about other types of data and services available from the organization that are pertinent to water-resources activities.
Developing a Large Lexical Database for Information Retrieval, Parsing, and Text Generation Systems.
ERIC Educational Resources Information Center
Conlon, Sumali Pin-Ngern; And Others
1993-01-01
Important characteristics of lexical databases and their applications in information retrieval and natural language processing are explained. An ongoing project using various machine-readable sources to build a lexical database is described, and detailed designs of individual entries with examples are included. (Contains 66 references.) (EAM)
17 CFR 38.552 - Elements of an acceptable audit trail program.
Code of Federal Regulations, 2014 CFR
2014-04-01
... of the order shall also be captured. (b) Transaction history database. A designated contract market's audit trail program must include an electronic transaction history database. An adequate transaction history database includes a history of all trades executed via open outcry or via entry into an electronic...
17 CFR 38.552 - Elements of an acceptable audit trail program.
Code of Federal Regulations, 2013 CFR
2013-04-01
... of the order shall also be captured. (b) Transaction history database. A designated contract market's audit trail program must include an electronic transaction history database. An adequate transaction history database includes a history of all trades executed via open outcry or via entry into an electronic...
Enhancing Knowledge Integration: An Information System Capstone Project
ERIC Educational Resources Information Center
Steiger, David M.
2009-01-01
This database project focuses on learning through knowledge integration; i.e., sharing and applying specialized (database) knowledge within a group, and combining it with other business knowledge to create new knowledge. Specifically, the Tiny Tots, Inc. project described below requires students to design, build, and instantiate a database system…
Linking Multiple Databases: Term Project Using "Sentences" DBMS.
ERIC Educational Resources Information Center
King, Ronald S.; Rainwater, Stephen B.
This paper describes a methodology for use in teaching an introductory Database Management System (DBMS) course. Students master basic database concepts through the use of a multiple component project implemented in both relational and associative data models. The associative data model is a new approach for designing multi-user, Web-enabled…
Lee, Jennifer F.; Hesselberth, Jay R.; Meyers, Lauren Ancel; Ellington, Andrew D.
2004-01-01
The aptamer database is designed to contain comprehensive sequence information on aptamers and unnatural ribozymes that have been generated by in vitro selection methods. Such data are not normally collected in ‘natural’ sequence databases, such as GenBank. Besides serving as a storehouse of sequences that may have diagnostic or therapeutic utility, the database serves as a valuable resource for theoretical biologists who describe and explore fitness landscapes. The database is updated monthly and is publicly available at http://aptamer.icmb.utexas.edu/. PMID:14681367
Jaton, Florian
2017-01-01
This article documents the practical efforts of a group of scientists designing an image-processing algorithm for saliency detection. By following the actors of this computer science project, the article shows that the problems often considered to be the starting points of computational models are in fact provisional results of time-consuming, collective and highly material processes that engage habits, desires, skills and values. In the project being studied, problematization processes lead to the constitution of referential databases called ‘ground truths’ that enable both the effective shaping of algorithms and the evaluation of their performances. Working as important common touchstones for research communities in image processing, the ground truths are inherited from prior problematization processes and may be imparted to subsequent ones. The ethnographic results of this study suggest two complementary analytical perspectives on algorithms: (1) an ‘axiomatic’ perspective that understands algorithms as sets of instructions designed to solve given problems computationally in the best possible way, and (2) a ‘problem-oriented’ perspective that understands algorithms as sets of instructions designed to computationally retrieve outputs designed and designated during specific problematization processes. If the axiomatic perspective on algorithms puts the emphasis on the numerical transformations of inputs into outputs, the problem-oriented perspective puts the emphasis on the definition of both inputs and outputs. PMID:28950802
StarView: The object oriented design of the ST DADS user interface
NASA Technical Reports Server (NTRS)
Williams, J. D.; Pollizzi, J. A.
1992-01-01
StarView is the user interface being developed for the Hubble Space Telescope Data Archive and Distribution Service (ST DADS). ST DADS is the data archive for HST observations and a relational database catalog describing the archived data. Users will use StarView to query the catalog and select appropriate datasets for study. StarView sends requests for archived datasets to ST DADS, which processes the requests and returns the data to the user. StarView is designed to be a powerful and extensible user interface. Unique features include an internal relational database to navigate query results, a form definition language that will work with both CRT and X interfaces, a data definition language that will allow StarView to work with any relational database, and the ability to generate adhoc queries without requiring the user to understand the structure of the ST DADS catalog. Ultimately, StarView will allow the user to refine queries in the local database for improved performance and merge in data from external sources for correlation with other query results. The user will be able to create a query from single or multiple forms, merging the selected attributes into a single query. Arbitrary selection of attributes for querying is supported. The user will be able to select how query results are viewed: a standard form or table-row format may be used, and navigation capabilities are provided to aid the user in viewing query results. Object-oriented analysis and design techniques were used in the design of StarView to support the mechanisms and concepts required to implement these features. One such mechanism is the Model-View-Controller (MVC) paradigm. The MVC allows the user to have multiple views of the underlying database, while providing a consistent mechanism for interaction regardless of the view. This approach supports both CRT and X interfaces while providing a common mode of user interaction. Another powerful abstraction is the concept of a Query Model. This concept allows a single query to be built from a single form or multiple forms before it is submitted to ST DADS. Supporting this concept is the adhoc query generator, which allows the user to select and qualify an indeterminate number of attributes from the database. The user does not need any knowledge of how the joins across the various tables are to be resolved: the adhoc generator calculates the joins automatically and generates the correct SQL query.
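The adhoc generator's automatic join computation can be pictured as a path search over the catalog's foreign-key graph. The sketch below is an assumption-laden illustration (the real ST DADS catalog tables and join logic are not described in the abstract): it finds a join path between two tables by breadth-first search and assembles the corresponding WHERE clause.

```python
from collections import deque

# Hypothetical catalog metadata: edges are foreign-key links between
# tables; all table and column names here are invented.
FK_GRAPH = {
    "observation": [("proposal", "observation.prop_id = proposal.prop_id")],
    "proposal": [("observation", "observation.prop_id = proposal.prop_id"),
                 ("investigator", "proposal.pi_id = investigator.pi_id")],
    "investigator": [("proposal", "proposal.pi_id = investigator.pi_id")],
}

def join_path(start, goal):
    """BFS over the FK graph, so a user never specifies joins by hand."""
    queue, seen = deque([(start, [start], [])]), {start}
    while queue:
        table, tables, conds = queue.popleft()
        if table == goal:
            return tables, conds
        for nxt, cond in FK_GRAPH[table]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, tables + [nxt], conds + [cond]))
    return None, None

tables, conds = join_path("observation", "investigator")
print("SELECT ... FROM " + ", ".join(tables)
      + " WHERE " + " AND ".join(conds))
```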
Do Apprentices' Communities of Practice Block Unwelcome Knowledge?
ERIC Educational Resources Information Center
Sligo, Frank; Tilley, Elspeth; Murray, Niki
2011-01-01
Purpose: This study aims to examine how well print-literacy support being provided to New Zealand Modern Apprentices (MAs) is supporting their study and practical work. Design/methodology/approach: The authors undertook a qualitative analysis of a database of 191 MAs in the literacy programme, then in 14 case studies completed 46 interviews with…
Epistemological Trends in Educational Leadership Studies in Israel: 2000-2012
ERIC Educational Resources Information Center
Eyal, Ori; Rom, Noa
2015-01-01
Purpose: The purpose of this paper is to identify the epistemological trends in the Israeli Educational Leadership (EL) scholarship between the years 2000 and 2012. Design/methodology/approach: The 51 studies included in this review were detected through a systematic search in online academic databases. Abstracts of studies identified as being…
The Design and Product of National 1:1000000 Cartographic Data of Topographic Map
NASA Astrophysics Data System (ADS)
Wang, Guizhi
2016-06-01
The National Administration of Surveying, Mapping and Geoinformation launched the project of national fundamental geographic information database dynamic updating in 2012. Within this project, the 1:50000 database is updated once a year, and the 1:250000 database is downsized and linkage-updated on that basis. In 2014, using the latest 1:250000 database, the 1:1000000 digital line graph database was comprehensively updated; at the same time, cartographic data of topographic maps and digital elevation model data were generated. This article mainly introduces the national 1:1000000 cartographic data of topographic maps, including feature content, database structure, database-driven mapping technology, workflow, and so on.
Prevalence of physical inactivity in Iran: a systematic review.
Fakhrzadeh, Hossein; Djalalinia, Shirin; Mirarefin, Mojdeh; Arefirad, Tahereh; Asayesh, Hamid; Safiri, Saeid; Samami, Elham; Mansourian, Morteza; Shamsizadeh, Morteza; Qorbani, Mostafa
2016-01-01
Introduction: Physical inactivity is one of the most important risk factors for chronic diseases, including cardiovascular disease, cancer, and stroke. We aimed to conduct a systematic review of the prevalence of physical inactivity in Iran. Methods: We searched the international databases ISI, PubMed/Medline, and Scopus, and the national databases Irandoc, the Barakat knowledge network system, and the Scientific Information Database (SID). We collected data for the outcome measure of prevalence of physical inactivity by sex, age, province, and year. Quality assessment and data extraction were conducted independently by two research experts. There were no time or language restrictions. Results: We analyzed data on the prevalence of physical inactivity in the Iranian population. Our search strategy yielded 254 records, 185 from international databases and the remaining 69 from national databases. After refining the data, 34 articles that met the eligibility criteria remained for data extraction; of these, 9, 20, 2, and 3 studies were at the national, provincial, regional, and local levels, respectively. Estimates of inactivity ranged from approximately 30% to almost 70%, with considerable variation between sexes and studied subgroups. Conclusion: In Iran, most studies reported a high prevalence of physical inactivity. Our findings reveal heterogeneity in reported values, often arising from differences in study design, measurement tools and methods, target groups, and sub-population sampling. These data do not allow aggregation into a comprehensive inference.
A data model and database for high-resolution pathology analytical image informatics.
Wang, Fusheng; Kong, Jun; Cooper, Lee; Pan, Tony; Kurc, Tahsin; Chen, Wenjin; Sharma, Ashish; Niedermayr, Cristobal; Oh, Tae W; Brat, Daniel; Farris, Alton B; Foran, David J; Saltz, Joel
2011-01-01
The systematic analysis of imaged pathology specimens often results in a vast amount of morphological information at both the cellular and sub-cellular scales. While microscopy scanners and computerized analysis are capable of capturing and analyzing data rapidly, microscopy image data remain underutilized in research and clinical settings. One major obstacle which tends to reduce wider adoption of these new technologies throughout the clinical and scientific communities is the challenge of managing, querying, and integrating the vast amounts of data resulting from the analysis of large digital pathology datasets. This paper presents a data model, referred to as Pathology Analytic Imaging Standards (PAIS), and a database implementation, which are designed to support the data management and query requirements of detailed characterization of micro-anatomic morphology through many interrelated analysis pipelines on whole-slide images and tissue microarrays (TMAs). The work involved: (1) development of a data model capable of efficiently representing and storing virtual slide related image, annotation, markup, and feature information; and (2) development of a database, based on the data model, capable of supporting queries for data retrieval based on analysis and image metadata, queries for comparison of results from different analyses, and spatial queries on segmented regions, features, and classified objects. The work described in this paper is motivated by the challenges associated with characterization of micro-scale features for comparative and correlative analyses involving whole-slide tissue images and TMAs. Technologies for digitizing tissues have advanced significantly in the past decade. Slide scanners are capable of producing high-magnification, high-resolution images from whole slides and TMAs within several minutes. Hence, it is becoming increasingly feasible for basic, clinical, and translational research studies to produce thousands of whole-slide images. Systematic analysis of these large datasets requires efficient data management support for representing and indexing results from hundreds of interrelated analyses generating very large volumes of quantifications, such as shape and texture, and of classifications of the quantified features. We have designed a data model and a database to address the data management requirements of detailed characterization of micro-anatomic morphology through many interrelated analysis pipelines. The data model represents virtual slide related image, annotation, markup and feature information. The database supports a wide range of metadata and spatial queries on images, annotations, markups, and features. We currently have three databases running on a Dell PowerEdge T410 server with the CentOS 5.5 Linux operating system. The database server is IBM DB2 Enterprise Edition 9.7.2. The set of databases consists of 1) a TMA database containing image analysis results from 4740 cases of breast cancer, with 641 MB storage size; 2) an algorithm validation database, which stores markups and annotations from two segmentation algorithms and two parameter sets on 18 selected slides, with 66 GB storage size; and 3) an in silico brain tumor study database comprising results from 307 TCGA slides, with 365 GB storage size. The latter two databases also contain human-generated annotations and markups for regions and nuclei.
Modeling and managing pathology image analysis results in a database provide immediate benefits on the value and usability of data in a research study. The database provides powerful query capabilities, which are otherwise difficult or cumbersome to support with other approaches such as programming languages. Standardized, semantically annotated data representation and interfaces also make it possible to share image data and analysis results more efficiently.
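As an illustration of the kind of spatial query such a database supports, the sketch below stores nucleus markups with bounding boxes in an in-memory SQLite table and retrieves those intersecting a region of interest. This is a deliberately simplified stand-in: the system described above runs on IBM DB2, and a real deployment would use a spatial extender with R-tree indexing rather than plain column comparisons; all names and values are invented.

```python
import sqlite3

# Hypothetical markup table: one row per segmented nucleus, with a
# bounding box standing in for the full segmented boundary geometry.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE markup (nucleus_id INTEGER, slide_id TEXT,
               xmin REAL, ymin REAL, xmax REAL, ymax REAL, area REAL)""")
con.executemany("INSERT INTO markup VALUES (?,?,?,?,?,?,?)",
    [(1, "TCGA-02", 10, 10, 24, 22, 180.0),
     (2, "TCGA-02", 300, 410, 312, 425, 120.5),
     (3, "TCGA-02", 15, 30, 29, 44, 160.2)])

roi = (0, 0, 100, 100)   # region of interest: xmin, ymin, xmax, ymax
hits = con.execute("""
    SELECT nucleus_id, area FROM markup
    WHERE xmax >= ? AND xmin <= ? AND ymax >= ? AND ymin <= ?""",
    (roi[0], roi[2], roi[1], roi[3])).fetchall()
print(hits)   # nuclei whose bounding boxes intersect the ROI
```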
Concepts and data model for a co-operative neurovascular database.
Mansmann, U; Taylor, W; Porter, P; Bernarding, J; Jäger, H R; Lasjaunias, P; Terbrugge, K; Meisel, J
2001-08-01
Problems in the clinical management of neurovascular diseases are very complex, owing to the chronic character of the diseases and a long history of symptoms and diverse treatments. If patients are to benefit from treatment, then treatment decisions have to rely on reliable and accurate knowledge of the natural history of the disease and of the various treatments. Recent developments in statistical methodology and experience with electronic patient records are used to establish an information infrastructure based on a centralized register. A protocol for collecting data on neurovascular diseases is described, together with the technical and logistical aspects of implementing the database. The database is designed as a co-operative tool for audit and research, available to co-operating centres. When a database is linked to systematic patient follow-up, it can be used to study prognosis; careful analysis of patient outcome is valuable for decision-making.
Effects of distributed database modeling on evaluation of transaction rollbacks
NASA Technical Reports Server (NTRS)
Mukkamala, Ravi
1991-01-01
Data distribution, degree of data replication, and transaction access patterns are key factors in determining the performance of distributed database systems. In order to simplify the evaluation of performance measures, database designers and researchers tend to make simplistic assumptions about the system. Here, the effect of modeling assumptions is studied on the evaluation of one such measure, the number of transaction rollbacks in a partitioned distributed database system. Six probabilistic models are developed, and expressions for the number of rollbacks are derived under each of these models; essentially, the models differ in the amount of system information they assume available. The analytical results so obtained are compared to results from simulation, from which it is concluded that most of the probabilistic models yield overly conservative estimates of the number of rollbacks. The effect of transaction commutativity on system throughput is also grossly underestimated when such models are employed.
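The flavor of such an evaluation can be reproduced with a small Monte Carlo experiment: simulate pairs of concurrent transactions drawing random data items and compare the observed conflict (rollback-candidate) rate with a simple uniform-access analytical model. This is an illustrative toy, not one of the paper's six models; all parameters are invented.

```python
import random

random.seed(7)
N_ITEMS, K, TRIALS = 1000, 10, 20000   # database items, items per txn, trials

def conflict():
    """Two concurrent transactions conflict if their item sets overlap."""
    a = set(random.sample(range(N_ITEMS), K))
    b = set(random.sample(range(N_ITEMS), K))
    return bool(a & b)

sim = sum(conflict() for _ in range(TRIALS)) / TRIALS

# Analytical model under uniform access:
# P(no overlap) = prod_{i=0}^{K-1} (N - K - i) / (N - i)
p_no = 1.0
for i in range(K):
    p_no *= (N_ITEMS - K - i) / (N_ITEMS - i)

print(f"simulated conflict rate {sim:.4f} vs analytical {1 - p_no:.4f}")
```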
Measurements of dynamic and resilient moduli of roadway test sites.
DOT National Transportation Integrated Search
2013-12-01
This study developed a material input library of dynamic and resilient moduli of local pavement materials for the Mechanistic Empirical Pavement Design Guide (MEPDG) implementation in Georgia. A database includes: 1) dynamic moduli of asphalt con...
NASA Technical Reports Server (NTRS)
Shearrow, Charles A.
1999-01-01
One of the identified goals of EM3 is to implement virtual manufacturing by the end of the year 2000. To realize this goal of a true virtual manufacturing enterprise, the initial development of a machinability database and its infrastructure must be completed. This will consist of containing the existing EM-NET problems and developing machine, tooling, and common-materials databases. To integrate the virtual manufacturing enterprise with normal day-to-day operations, a parallel virtual manufacturing machinability database, a virtual manufacturing database, a virtual manufacturing paradigm, an implementation/integration procedure, and testable verification models must be constructed. The common and virtual machinability databases will include four distinct areas: machine tools, available tooling, common machine tool loads, and a materials database. The machine tools database will include the machine envelope, special machine attachments, tooling capacity, location within NASA-JSC or with a contractor, and availability/scheduling. The tooling database will include available standard tooling, custom in-house tooling, tool properties, and availability. The common materials database will include material thickness ranges, strengths, types, and their availability. The virtual manufacturing databases will consist of virtual machines and virtual tooling directly related to the common and machinability databases. The items to be completed are the design and construction of the machinability databases, a virtual manufacturing paradigm for NASA-JSC, an implementation timeline, a VNC model of one bridge mill, and troubleshooting of existing software and hardware problems with EN4NET. The final step of this virtual manufacturing project will be to integrate other production sites into the databases, bringing JSC's EM3 into position to become a clearinghouse for NASA's digital manufacturing needs and creating a true virtual manufacturing enterprise.
NASA Technical Reports Server (NTRS)
Brenton, James C.; Barbre. Robert E., Jr.; Decker, Ryan K.; Orcutt, John M.
2018-01-01
The National Aeronautics and Space Administration's (NASA) Marshall Space Flight Center (MSFC) Natural Environments Branch (EV44) has provided atmospheric databases and analysis in support of space vehicle design and day-of-launch operations for NASA and commercial launch vehicle programs launching from the NASA Kennedy Space Center (KSC), co-located on the United States Air Force's Eastern Range (ER) at the Cape Canaveral Air Force Station. The ER complex is one of the most heavily instrumented sites in the United States, with over 31 towers measuring various atmospheric parameters on a continuous basis. An inherent challenge with large sets of data is ensuring that erroneous data are removed from databases, and thus excluded from launch vehicle design analyses. EV44 has put forth great effort in developing quality control (QC) procedures for individual meteorological instruments; however, no standard QC procedures for all databases currently exist, resulting in QC databases that have inconsistencies in variables, methodologies, and periods of record. The goal of this activity is to use the previous efforts by EV44 to develop a standardized set of QC procedures from which to build meteorological databases from KSC and the ER, while maintaining open communication with end users from the launch community to develop ways to improve, adapt, and grow the QC database. Details of the QC procedures are described. As the rate of launches increases with additional launch vehicle programs, it is becoming more important that weather databases are continually updated and checked for data quality before use in launch vehicle design and certification analyses.
Yolasığmaz, Hacı Ahmet; Keleş, Sedat
2009-01-01
In Turkey, the understanding of planning focused on timber production has given way to Multiple Use Management (MUM). Because the whole infrastructure of forestry, with the inventory system leading the way, depends on timber production, some bottlenecks are expected during the transition period. Database design, probably the most important stage in the transition to MUM, together with the digital base maps making up the basis of this infrastructure, constitutes the main point of this article. First, the past forest management philosophy of Turkey is briefly reviewed, and Ecosystem Based Multiple Use Forest Management (EBMUFM) approaches are introduced. The second stage of the EBMUFM process, database design, is described by examining the classical planning infrastructure, and the coverages to be produced and consumed are suggested in the form of lists. At the application stage, two geographical databases, for the years 1984 and 2006, were established with GIS in the Balcı Planning Unit, and the related base maps were produced. The twenty-year change in the planning unit is presented comparatively with regard to stand parameters such as tree species, age class, development stage, canopy closure, mixture, volume, and increment.
DOT National Transportation Integrated Search
2017-03-01
This research explored the second Strategic Highway Research Program (SHRP2) Naturalistic Driving Study (NDS) database for the potential to identify freeway entrance and exit ramps and teen drivers' behavior while traveling those ramps. This is in ...
Reducing Rape-Myth Acceptance in Male College Students: A Meta-Analysis of Intervention Studies.
ERIC Educational Resources Information Center
Flores, Stephen A.; Hartlaub, Mark G.
1998-01-01
Studies evaluating interventions designed to reduce rape-supportive beliefs are examined to identify effective strategies. Searches were conducted on several databases from 1980 to present. Results indicate that human-sexuality courses, workshops, video interventions, and other formats appear to be successful strategies, although these…
National Institute of Standards and Technology Data Gateway
SRD 60 NIST ITS-90 Thermocouple Database (Web, free access) Web version of Standard Reference Database 60 and NIST Monograph 175. The database gives temperature -- electromotive force (emf) reference functions and tables for the letter-designated thermocouple types B, E, J, K, N, R, S and T. These reference functions have been adopted as standards by the American Society for Testing and Materials (ASTM) and the International Electrotechnical Commission (IEC).
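In use, each letter-designated type's reference function is a polynomial in temperature whose coefficients come from the database. The sketch below shows only the evaluation mechanics; the coefficients are illustrative placeholders ONLY, and the actual values (which differ by thermocouple type and temperature range) must be taken from SRD 60 / Monograph 175.

```python
# Hypothetical coefficients c0, c1, c2 for E(t90) in mV -- placeholders,
# NOT the NIST reference values for any real thermocouple type.
COEFFS = [0.0, 4.0e-2, 2.5e-5]

def emf_mv(t_celsius, coeffs=COEFFS):
    """Evaluate E = sum_i c_i * t^i using Horner's method."""
    e = 0.0
    for c in reversed(coeffs):
        e = e * t_celsius + c
    return e

for t in (0.0, 100.0, 500.0):
    print(f"{t:6.1f} C -> {emf_mv(t):7.3f} mV (illustrative only)")
```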
Ureteral endometriosis: A systematic literature review
Palla, Viktoria-Varvara; Karaolanis, Georgios; Katafigiotis, Ioannis; Anastasiou, Ioannis
2017-01-01
Introduction: Ureteral endometriosis is a rare disease affecting women of childbearing age which presents with nonspecific symptoms and may result in severe morbidity. The aim of this study was to review evidence about the incidence, pathogenesis, clinical presentation, diagnosis, and management of ureteral endometriosis. Materials and Methods: The PubMed Central database was searched to identify studies reporting cases of ureteral endometriosis. "Ureter" or "ureteral" and "endometriosis" were used as key words. The database was searched for articles published since 1996, in English, without restrictions regarding study design. Results: From 420 studies obtained through the database search, 104 articles were finally included in this review, covering a total of 1384 patients with ureteral endometriosis. Data regarding age, location, pathological findings, and interventions were extracted. Mean patient age was 38.6 years, whereas the therapeutic arsenal included hormonal, endoscopic, and/or surgical treatment. Conclusions: Ureteral endometriosis represents a diagnostic and therapeutic challenge for clinicians, and high clinical suspicion is needed to identify it. PMID:29021650
Significance of genome-wide association studies in molecular anthropology.
Gupta, Vipin; Khadgawat, Rajesh; Sachdeva, Mohinder Pal
2009-12-01
The successful advent of the genome-wide approach in association studies raises the hopes of human geneticists for solving the genetic maze of complex traits, especially disorders. This approach, replete with applications of cutting-edge technology and supported by big science projects (like the Human Genome Project and, even more importantly, the International HapMap Project) and various important databases (the SNP database, the CNV database, etc.), has had unprecedented success in rapidly uncovering many of the genetic determinants of complex disorders. The reach of this approach into the genetics of classical anthropological variables like height, skin color, and eye color, and into other genome diversity projects, has certainly expanded the horizons of molecular anthropology. Therefore, in this article we propose a genome-wide association approach for molecular anthropological studies, drawing lessons from the exemplary study of the Wellcome Trust Case Control Consortium. We also highlight the importance and uniqueness of Indian population groups in facilitating the design of, and finding optimum solutions for, other genome-wide association-related challenges.
Scalable Database Design of End-Game Model with Decoupled Countermeasure and Threat Information
2017-11-01
[Garbled report front matter omitted; recoverable details: by Decetria Akole and Michael Chen; approved for public release, distribution is unlimited.]
Rapid HIS, RIS, PACS Integration Using Graphical CASE Tools
NASA Astrophysics Data System (ADS)
Taira, Ricky K.; Breant, Claudine M.; Stepczyk, Frank M.; Kho, Hwa T.; Valentino, Daniel J.; Tashima, Gregory H.; Materna, Anthony T.
1994-05-01
We describe the clinical requirements of the integrated federation of databases and present our client-mediator-server design. The main body of the paper describes five important aspects of integrating information systems: (1) global schema design, (2) establishing sessions with remote database servers, (3) development of schema translators, (4) integration of global system triggers, and (5) development of job workflow scripts.
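Aspect (3), the schema translator, can be pictured as a per-source field mapping onto the federation's global schema. The following sketch is purely illustrative (the paper's actual schemas are not given); the field names for the HIS and PACS sources are invented:

```python
# Hypothetical global schema for the federation; each source's translator
# maps its native field names onto these shared attributes.
GLOBAL_FIELDS = {"patient_id", "exam_date", "modality"}

TRANSLATORS = {
    "his": {"pid": "patient_id", "visit_dt": "exam_date", "dept": "modality"},
    "pacs": {"PatientID": "patient_id", "StudyDate": "exam_date",
             "Modality": "modality"},
}

def translate(source, record):
    """Rewrite one source record into global-schema attribute names,
    dropping fields the global schema does not model."""
    mapping = TRANSLATORS[source]
    out = {mapping[k]: v for k, v in record.items() if k in mapping}
    assert set(out) <= GLOBAL_FIELDS
    return out

print(translate("pacs", {"PatientID": "P-17", "StudyDate": "1994-05-01",
                         "Modality": "MR"}))
print(translate("his", {"pid": "P-17", "visit_dt": "1994-05-01",
                        "dept": "RAD"}))
```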
Evaluation models and criteria of the quality of hospital websites: a systematic review study
Jeddi, Fatemeh Rangraz; Gilasi, Hamidreza; Khademi, Sahar
2017-01-01
Introduction: Hospital websites are important tools for establishing communication and exchanging information between patients and staff, and thus should offer an acceptable level of quality. The aim of this study was to identify proper models and criteria for evaluating the quality of hospital websites. Methods: This research was a systematic review. International databases such as Science Direct, Google Scholar, PubMed, ProQuest, Ovid, Elsevier, Springer, and EBSCO, together with regional databases such as Magiran, the Scientific Information Database, Persian Journal Citation Report (PJCR), and IranMedex, were searched. Suitable keywords, including website, evaluation, and quality of website, were used, and full-text papers related to the research were included. The criteria and sub-criteria for the evaluation of website quality were extracted and classified. Results: Various models and criteria for evaluating website quality were identified, among them the WEB-Q-IM, Mile, Minerva, Seruni Luci, and Web-Qual models. The criteria of accessibility, content and apparent features of the websites, the design procedure, the graphics applied in the website, and the pages' attractiveness were mentioned in the majority of studies. Conclusion: The criteria of accessibility, content, design method, security, and confidentiality of personal information are essential criteria in the evaluation of all websites. It is suggested that ease of use, graphics, attractiveness, and other apparent properties of websites be considered as user-friendliness sub-criteria, and that the speed and accessibility of the website be considered as sub-criteria of efficiency. When determining evaluation criteria for website quality, attention to major differences in the specific features of each website is essential. PMID:28465807
New model for distributed multimedia databases and its application to networking of museums
NASA Astrophysics Data System (ADS)
Kuroda, Kazuhide; Komatsu, Naohisa; Komiya, Kazumi; Ikeda, Hiroaki
1998-02-01
This paper proposes a new distributed multimedia database system in which databases storing MPEG-2 videos and/or super-high-definition images are connected through B-ISDNs, and describes an example of networking museums on the basis of the proposed database system. The proposed system introduces a new concept, the 'retrieval manager', which functions as an intelligent controller so that the user can recognize a set of image databases as one logical database. A user terminal issues a retrieval request to the retrieval manager located nearest to it on the network. The retrieved contents are then sent through the B-ISDNs to the user terminal directly from the server that stores the designated contents. In this process, the designated logical database dynamically generates the best combination of retrieval parameters, such as the data transfer path, on the basis of the current state of the system, and the generated parameters are used to select the most suitable data transfer path on the network, so that the retrieval best fits the distributed multimedia database system.
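As a rough illustration of the retrieval-manager concept, the sketch below picks, per request, the server currently holding the requested content whose transfer path is cheapest; the server names and the cost model are invented for illustration:

```python
# Sketch of the 'retrieval manager' idea: choose, per request, the server
# and transfer path whose current cost is lowest. Server names, costs,
# and the cost model are invented, not from the paper.

SERVERS = {
    "museum_a": {"has": {"mpeg2_clip_7"}, "path_cost": 3.0},
    "museum_b": {"has": {"mpeg2_clip_7", "shd_image_2"}, "path_cost": 1.5},
}

def plan_retrieval(content_id):
    """Return the cheapest server currently holding the content."""
    candidates = [(s["path_cost"], name)
                  for name, s in SERVERS.items() if content_id in s["has"]]
    if not candidates:
        raise LookupError(content_id)
    cost, name = min(candidates)
    return name  # contents are then sent directly server -> terminal

print(plan_retrieval("mpeg2_clip_7"))  # museum_b
```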
ARACHNID: A prototype object-oriented database tool for distributed systems
NASA Technical Reports Server (NTRS)
Younger, Herbert; Oreilly, John; Frogner, Bjorn
1994-01-01
This paper discusses the results of a Phase 2 SBIR project sponsored by NASA and performed by MIMD Systems, Inc. A major objective of this project was to develop specific concepts for improved performance in accessing large databases. An object-oriented and distributed approach was used for the general design, while a geographical decomposition was used as a specific solution. The resulting software framework is called ARACHNID. The Faint Source Catalog developed by NASA was the initial database testbed. This is a database of many gigabytes, for which an order-of-magnitude improvement in query speed is being sought. This database contains faint infrared point sources obtained from telescope measurements of the sky. A geographical decomposition of this database is an attractive approach to dividing it into pieces. Each piece can then be searched on an individual processor, with only a weak data linkage between the processors required. As a further demonstration of the concepts implemented in ARACHNID, a tourist information system is discussed. This version of ARACHNID is the commercial result of the project. It is a distributed, networked database application where speed, maintenance, and reliability are important considerations. This paper focuses on the design concepts and technologies that form the basis for ARACHNID.
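The geographical decomposition idea lends itself to a compact sketch: bucket sources into sky tiles so a cone search touches only a handful of tiles, each of which could be assigned to its own processor. The tiling scheme and flat-sky distance below are our simplifications, not ARACHNID's actual design:

```python
# Sketch of geographic decomposition for a point-source catalog: sources
# are bucketed into sky tiles so a cone search touches only a few tiles,
# each of which could live on its own processor.
import math
from collections import defaultdict

TILE_DEG = 10.0  # tile size in degrees

def tile_of(ra, dec):
    return (int(ra // TILE_DEG), int(dec // TILE_DEG))

tiles = defaultdict(list)  # tile -> list of (ra, dec, flux)
for src in [(12.3, 45.6, 0.8), (12.4, 45.7, 1.1), (200.0, -30.0, 2.0)]:
    tiles[tile_of(src[0], src[1])].append(src)

def cone_search(ra0, dec0, radius_deg):
    """Search only the tiles overlapping the cone: the center tile and
    its 8 neighbours (valid while radius < TILE_DEG). Distance uses a
    flat-sky approximation for brevity."""
    t0 = tile_of(ra0, dec0)
    hits = []
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            for ra, dec, flux in tiles.get((t0[0] + di, t0[1] + dj), []):
                if math.hypot(ra - ra0, dec - dec0) <= radius_deg:
                    hits.append((ra, dec, flux))
    return hits

print(cone_search(12.0, 45.0, 1.0))  # finds the two nearby sources only
```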
Feeding Interventions for Children with Cerebral Palsy: A Review of the Evidence
ERIC Educational Resources Information Center
Snider, Laurie; Majnemer, Annette; Darsaklis, Vasiliki
2011-01-01
Aim: To examine the evidence of the effectiveness of different feeding interventions for children with cerebral palsy. Methods: A search of 12 electronic databases identified all relevant studies. For each study, the quality of the methods was assessed according to the study design. A total of 33 articles were retrieved, and 21 studies were…
Using Large Diabetes Databases for Research.
Wild, Sarah; Fischbacher, Colin; McKnight, John
2016-09-01
There are an increasing number of clinical, administrative and trial databases that can be used for research. These are particularly valuable if there are opportunities for linkage to other databases. This paper describes examples of the use of large diabetes databases for research. It reviews the advantages and disadvantages of using large diabetes databases for research and suggests solutions for some challenges. Large, high-quality databases offer potential sources of information for research at relatively low cost. Fundamental issues for using databases for research are the completeness of capture of cases within the population and time period of interest and the accuracy of the diagnosis of diabetes and outcomes of interest. The extent to which people included in the database are representative should be considered if the database is not population based and there is the intention to extrapolate findings to the wider diabetes population. Information on key variables such as date of diagnosis or duration of diabetes may not be available at all, may be inaccurate or may contain a large amount of missing data. Information on key confounding factors is rarely available for the nondiabetic or general population, limiting comparisons with the population of people with diabetes. However, comparisons that allow for differences in the distribution of important demographic factors may be feasible using data for the whole population or a matched cohort study design. In summary, diabetes databases can be used to address important research questions. Understanding the strengths and limitations of this approach is crucial to interpret the findings appropriately. © 2016 Diabetes Technology Society.
Chen, Wei; Lewith, George; Wang, Li-qiong; Ren, Jun; Xiong, Wen-jing; Lu, Fang; Liu, Jian-ping
2014-01-01
Chinese proprietary herbal medicines (CPHMs) have a long history in China for the treatment of the common cold, and many of them are listed in the 'China national essential drug list' by the Chinese Ministry of Health. The aim of this review is to provide a well-rounded assessment of the clinical evidence on the potential benefits and harms of CPHMs for the common cold, based on a systematic literature search, to justify their clinical use and recommendation. We searched CENTRAL, MEDLINE, EMBASE, SinoMed, CNKI, VIP, China Important Conference Papers Database, China Dissertation Database, and online clinical trial registry websites from their inception to 31 March 2013 for clinical studies of CPHMs listed in the 'China national essential drug list' for the common cold. There was no restriction on study design. A total of 33 CPHMs were listed in the 'China national essential drug list 2012' for the treatment of the common cold, but only 7 had supportive clinical evidence. A total of 6 randomised controlled trials (RCTs) and 7 case series (CSs) were included; no other study design was identified. All studies were conducted in China and published in Chinese between 1995 and 2012. All included studies had poor study design and methodological quality, and were graded as very low quality. The use of CPHMs for the common cold is not supported by robust evidence. Further rigorous, well-designed, placebo-controlled randomized trials are needed to substantiate the clinical claims made for CPHMs.
Cybermaterials: materials by design and accelerated insertion of materials
NASA Astrophysics Data System (ADS)
Xiong, Wei; Olson, Gregory B.
2016-02-01
Cybermaterials innovation entails an integration of Materials by Design and accelerated insertion of materials (AIM), which transfers studio ideation into industrial manufacturing. By assembling a hierarchical architecture of integrated computational materials design (ICMD) based on materials genomic fundamental databases, the ICMD mechanistic design models accelerate innovation. We here review progress in the development of linkage models of the process-structure-property-performance paradigm, as well as related design accelerating tools. Extending the materials development capability based on phase-level structural control requires more fundamental investment at the level of the Materials Genome, with focus on improving applicable parametric design models and constructing high-quality databases. Future opportunities in materials genomic research serving both Materials by Design and AIM are addressed.
NASA Technical Reports Server (NTRS)
Nelson, Raymond M.; Willis, Kimberly J.; Daley, William J.; Brumbaugh, Fred R.; Bremer, Jeffrey M.
1992-01-01
All earth-looking photographs acquired by Space Shuttle astronauts are identified, located, and catalogued after each mission. The photographs have been entered into a computerized database at the NASA Johnson Space Center. The database in its two modes - computer and catalog - is organized and presented to provide a scope and level of detail designed to be useful in Earth science activities, resource management, environmental studies, and public affairs. The computerized database can be accessed free through standard communication networks 24 hours a day, and the catalogs are distributed throughout the world. Photograph viewing centers are available in the United States, and photographic copies can be obtained through government-supported centers.
[Data validation methods and discussion on Chinese materia medica resource survey].
Zhang, Yue; Ma, Wei-Feng; Zhang, Xiao-Bo; Zhu, Shou-Dong; Guo, Lan-Ping; Wang, Xing-Xing
2013-07-01
Since the beginning of the fourth national survey of Chinese materia medica resources, 22 provinces have conducted pilot surveys. The survey teams have reported immense amounts of data, which places very high demands on the construction of the database system. To ensure quality, it is necessary to check and validate the data in the database system. Data validation is an important method for ensuring the validity, integrity and accuracy of census data. This paper comprehensively introduces the data validation system of the database for the fourth national survey of Chinese materia medica resources, and further improves the design ideas and programs of data validation. The purpose of this study is to help the survey work proceed smoothly.
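A minimal sketch of the kind of record-level checks such a validation system performs (completeness, range, and plausibility checks), with hypothetical field names and bounds:

```python
# Sketch of record-level validation checks for a census database:
# integrity (required fields) and accuracy (range) checks. Field names
# and geographic bounds are hypothetical.
REQUIRED = ("species_name", "county_code", "longitude", "latitude")

def validate(record):
    errors = []
    for field in REQUIRED:                      # integrity: completeness
        if not record.get(field):
            errors.append(f"missing {field}")
    lon, lat = record.get("longitude"), record.get("latitude")
    if lon is not None and not (73.0 <= lon <= 136.0):   # accuracy: range
        errors.append("longitude outside China")
    if lat is not None and not (18.0 <= lat <= 54.0):
        errors.append("latitude outside China")
    return errors

rec = {"species_name": "Ephedra sinica", "county_code": "",
       "longitude": 110.2, "latitude": 60.1}
print(validate(rec))  # ['missing county_code', 'latitude outside China']
```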
Mathematical models for exploring different aspects of genotoxicity and carcinogenicity databases.
Benigni, R; Giuliani, A
1991-12-01
One great obstacle to understanding and using the information contained in the genotoxicity and carcinogenicity databases is the very size of such databases. Their vastness makes them difficult to read; this leads to inadequate exploitation of the information, which becomes costly in terms of time, labor, and money. In its search for adequate approaches to the problem, the scientific community has, curiously, almost entirely neglected an existent series of very powerful methods of data analysis: the multivariate data analysis techniques. These methods were specifically designed for exploring large data sets. This paper presents the multivariate techniques and reports a number of applications to genotoxicity problems. These studies show how biology and mathematical modeling can be combined and how successful this combination is.
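As a concrete instance of the multivariate techniques the authors advocate, the sketch below runs a principal component analysis over a synthetic chemicals-by-assays matrix; the data are random stand-ins, not a real genotoxicity database:

```python
# Minimal multivariate exploration of an assay matrix via principal
# component analysis (PCA), computed from a singular value decomposition.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 8))        # 50 chemicals x 8 genotoxicity assays

Xc = X - X.mean(axis=0)             # center each assay
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt.T                  # chemicals projected into PC space
explained = s**2 / np.sum(s**2)     # fraction of variance per component

print("variance explained by PC1, PC2:", explained[:2].round(3))
print("first chemical's coordinates:", scores[0, :2].round(3))
```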
Ultra-Structure database design methodology for managing systems biology data and analyses
Maier, Christopher W; Long, Jeffrey G; Hemminger, Bradley M; Giddings, Morgan C
2009-01-01
Background Modern, high-throughput biological experiments generate copious, heterogeneous, interconnected data sets. Research is dynamic, with frequently changing protocols, techniques, instruments, and file formats. Because of these factors, systems designed to manage and integrate modern biological data sets often end up as large, unwieldy databases that become difficult to maintain or evolve. The novel rule-based approach of the Ultra-Structure design methodology presents a potential solution to this problem. By representing both data and processes as formal rules within a database, an Ultra-Structure system constitutes a flexible framework that enables users to explicitly store domain knowledge in both a machine- and human-readable form. End users themselves can change the system's capabilities without programmer intervention, simply by altering database contents; no computer code or schemas need be modified. This provides flexibility in adapting to change, and allows integration of disparate, heterogenous data sets within a small core set of database tables, facilitating joint analysis and visualization without becoming unwieldy. Here, we examine the application of Ultra-Structure to our ongoing research program for the integration of large proteomic and genomic data sets (proteogenomic mapping). Results We transitioned our proteogenomic mapping information system from a traditional entity-relationship design to one based on Ultra-Structure. Our system integrates tandem mass spectrum data, genomic annotation sets, and spectrum/peptide mappings, all within a small, general framework implemented within a standard relational database system. General software procedures driven by user-modifiable rules can perform tasks such as logical deduction and location-based computations. The system is not tied specifically to proteogenomic research, but is rather designed to accommodate virtually any kind of biological research. Conclusion We find Ultra-Structure offers substantial benefits for biological information systems, the largest being the integration of diverse information sources into a common framework. This facilitates systems biology research by integrating data from disparate high-throughput techniques. It also enables us to readily incorporate new data types, sources, and domain knowledge with no change to the database structure or associated computer code. Ultra-Structure may be a significant step towards solving the hard problem of data management and integration in the systems biology era. PMID:19691849
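A toy rendering of the Ultra-Structure idea, domain knowledge stored as rows that one small generic engine animates, so behaviour changes by editing rows rather than code; the table contents are invented, not the authors' ruleforms:

```python
# Toy Ultra-Structure illustration: rules live as data rows, and a
# generic engine performs logical deduction by following them. Changing
# system behaviour means editing rows, not modifying code or schemas.

# A "ruleform" row: (entity, relationship, target).
RULES = [
    ("peptide_hit", "maps_to", "genomic_region"),
    ("genomic_region", "annotated_by", "gene_model"),
]

def deduce(entity, relationship_chain):
    """Generic engine: follow a chain of relationships through the rules."""
    frontier = {entity}
    for rel in relationship_chain:
        frontier = {t for (e, r, t) in RULES if e in frontier and r == rel}
    return frontier

# "What annotates the regions my peptide hits map to?"
print(deduce("peptide_hit", ["maps_to", "annotated_by"]))  # {'gene_model'}
```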
Longitudinal data for interdisciplinary ageing research. Design of the Linnaeus Database.
Malmberg, Gunnar; Nilsson, Lars-Göran; Weinehall, Lars
2010-11-01
To allow for interdisciplinary research on the relations between socioeconomic conditions and health in the ageing population, a new anonymized longitudinal database - the Linnaeus Database - has been developed at the Centre for Population Studies at Umeå University. This paper presents the database and its research potential. Using the Swedish personal numbers the researchers have, in collaboration with Statistics Sweden and the National Board for Health and Welfare, linked individual records from Swedish register data on death causes, hospitalization and various socioeconomic conditions with two databases - Betula and VIP (Västerbottens Intervention Programme) - previously developed by the researchers at Umeå University. Whereas Betula includes rich information about e.g. cognitive functions, VIP contains information about e.g. lifestyle and health indicators. The Linnaeus Database includes annually updated socioeconomic information from Statistics Sweden registers for all registered residents of Sweden for the period 1990 to 2006, in total 12,066,478. The information from the Betula includes 4,500 participants from the city of Umeå and VIP includes data for almost 90,000 participants. Both datasets include cross-sectional as well as longitudinal information. Due to the coverage and rich information, the Linnaeus Database allows for a variety of longitudinal studies on the relations between, for instance, socioeconomic conditions, health, lifestyle, cognition, family networks, migration and working conditions in ageing cohorts. By joining various datasets developed in different disciplinary traditions new possibilities for interdisciplinary research on ageing emerge.
Fashion sketch design by interactive genetic algorithms
NASA Astrophysics Data System (ADS)
Mok, P. Y.; Wang, X. X.; Xu, J.; Kwok, Y. L.
2012-11-01
Computer-aided design is vitally important for modern industry, particularly for the creative industries. The fashion industry faces intense pressure to shorten its product development process. In this paper, a methodology is proposed for sketch design based on interactive genetic algorithms. The sketch design system consists of a sketch design model, a database and a multi-stage sketch design engine. First, a sketch design model is developed, based on fashion design knowledge, to describe fashion product characteristics using parameters. Second, a database is built on the proposed sketch design model to define general style elements. Third, a multi-stage sketch design engine is used to construct the design. Moreover, an interactive genetic algorithm (IGA) is used to accelerate the sketch design process. The experimental results demonstrate that the proposed method is effective in helping laypersons achieve satisfactory fashion design sketches.
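The IGA loop is easy to sketch: the population evolves by crossover and mutation, but fitness comes from a human rating of the rendered sketches. Below, the genome encoding and the simulated rating are stand-ins for the paper's design parameters:

```python
# Skeleton of an interactive genetic algorithm: the fitness function is
# a human rating rather than a computed objective. The genome encoding
# (collar, sleeve, silhouette indices) is a simplified stand-in.
import random

random.seed(1)
GENES = {"collar": 4, "sleeve": 3, "silhouette": 5}   # choices per gene

def random_design():
    return {g: random.randrange(n) for g, n in GENES.items()}

def crossover(a, b):
    return {g: random.choice((a[g], b[g])) for g in GENES}

def mutate(d, rate=0.1):
    return {g: random.randrange(GENES[g]) if random.random() < rate else v
            for g, v in d.items()}

population = [random_design() for _ in range(6)]
for generation in range(3):
    # In a real IGA the user scores rendered sketches; we fake the rating.
    rated = sorted(population, key=lambda d: random.random(), reverse=True)
    parents = rated[:2]                       # keep the user's favourites
    population = parents + [mutate(crossover(*parents)) for _ in range(4)]

print(population[0])  # one evolved design vector
```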
InverPep: A database of invertebrate antimicrobial peptides.
Gómez, Esteban A; Giraldo, Paula; Orduz, Sergio
2017-03-01
The aim of this work was to construct InverPep, a database specialised in experimentally validated antimicrobial peptides (AMPs) from invertebrates. AMP data contained in InverPep were manually curated from other databases and the scientific literature. MySQL was integrated with the development platform Laravel; this framework allows PHP code to be integrated with HTML and was used to design the InverPep web page's interface. InverPep contains 18 separate fields, including InverPep code, phylum and species source, peptide name, sequence, peptide length, secondary structure, molar mass, charge, isoelectric point, hydrophobicity, Boman index, aliphatic index and percentage of hydrophobic amino acids. CALCAMPI, an algorithm to calculate the physicochemical properties of multiple peptides simultaneously, was programmed in PERL. To date, InverPep contains 702 experimentally validated AMPs from invertebrate species. All of the peptides contain information associated with their source, physicochemical properties, secondary structure, biological activity and links to external literature. Most AMPs in InverPep have a length between 10 and 50 amino acids, a positive charge, a Boman index between 0 and 2 kcal/mol, and 30-50% hydrophobic amino acids. InverPep includes 33 AMPs not reported in other databases. In addition, CALCAMPI and a statistical analysis of InverPep data are presented. The InverPep database is available in English and Spanish. InverPep is a useful database for studying invertebrate AMPs, and its information could be used for the design of new peptides. The user-friendly interface of InverPep and its information can be freely accessed via a web-based browser at http://ciencias.medellin.unal.edu.co/gruposdeinvestigacion/prospeccionydisenobiomoleculas/InverPep/public/home_en. Copyright © 2016 International Society for Chemotherapy of Infection and Cancer. Published by Elsevier Ltd. All rights reserved.
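Descriptors like those CALCAMPI reports can be approximated in a few lines. The charge rule and hydrophobic residue set below are crude textbook approximations, not the database's actual formulas:

```python
# Back-of-envelope peptide descriptors of the kind InverPep/CALCAMPI
# stores: length, a rough net charge (basic minus acidic residues), and
# percentage of hydrophobic amino acids.
HYDROPHOBIC = set("AVILMFWYC")   # a common, simplified hydrophobic set

def peptide_stats(seq):
    seq = seq.upper()
    charge = sum(seq.count(a) for a in "KR") - sum(seq.count(a) for a in "DE")
    hydro_pct = 100.0 * sum(1 for a in seq if a in HYDROPHOBIC) / len(seq)
    return {"length": len(seq), "net_charge": charge,
            "hydrophobic_pct": round(hydro_pct, 1)}

# Cecropin-like toy sequence, for illustration only
print(peptide_stats("KWKLFKKIEKVGQNIRDGIIKAGPAVAVVGQATQIAK"))
```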
Adaptive Data-based Predictive Control for Short Take-off and Landing (STOL) Aircraft
NASA Technical Reports Server (NTRS)
Barlow, Jonathan Spencer; Acosta, Diana Michelle; Phan, Minh Q.
2010-01-01
Data-based Predictive Control is an emerging control method that stems from Model Predictive Control (MPC). MPC computes current control action based on a prediction of the system output a number of time steps into the future and is generally derived from a known model of the system. Data-based predictive control has the advantage of deriving predictive models and controller gains from input-output data. Thus, a controller can be designed from the outputs of complex simulation code or a physical system where no explicit model exists. If the output data happens to be corrupted by periodic disturbances, the designed controller will also have the built-in ability to reject these disturbances without the need to know them. When data-based predictive control is implemented online, it becomes a version of adaptive control. The characteristics of adaptive data-based predictive control are particularly appropriate for the control of nonlinear and time-varying systems, such as Short Take-off and Landing (STOL) aircraft. STOL is a capability of interest to NASA because conceptual Cruise Efficient Short Take-off and Landing (CESTOL) transport aircraft offer the ability to reduce congestion in the terminal area by utilizing existing shorter runways at airports, as well as to lower community noise by flying steep approach and climb-out patterns that reduce the noise footprint of the aircraft. In this study, adaptive data-based predictive control is implemented as an integrated flight-propulsion controller for the outer-loop control of a CESTOL-type aircraft. Results show that the controller successfully tracks velocity while attempting to maintain a constant flight path angle, using longitudinal command, thrust and flap setting as the control inputs.
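The core of the data-based approach, fitting a predictor directly to input-output records, can be shown with an ordinary least-squares ARX fit; the model orders and the simulated plant below are invented for illustration:

```python
# Sketch of the data-based idea: identify a one-step predictor directly
# from input-output data by least squares (an ARX fit), with no physical
# model of the plant.
import numpy as np

rng = np.random.default_rng(3)
u = rng.normal(size=300)                      # excitation input
y = np.zeros(300)
for k in range(2, 300):                       # unknown "true" plant
    y[k] = 1.5*y[k-1] - 0.7*y[k-2] + 0.5*u[k-1] + 0.01*rng.normal()

# Regression: y[k] ~ [y[k-1], y[k-2], u[k-1], u[k-2]] @ theta
Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
print("identified coefficients:", theta.round(3))

# One-step-ahead prediction from the purely data-derived model
y_pred = Phi @ theta
print("prediction RMS error:", np.sqrt(np.mean((y_pred - y[2:])**2)).round(4))
```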
Implementing a Dynamic Database-Driven Course Using LAMP
ERIC Educational Resources Information Center
Laverty, Joseph Packy; Wood, David; Turchek, John
2011-01-01
This paper documents the formulation of a database driven open source architecture web development course. The design of a web-based curriculum faces many challenges: a) relative emphasis of client and server-side technologies, b) choice of a server-side language, and c) the cost and efficient delivery of a dynamic web development, database-driven…
The Philip Morris Information Network: A Library Database on an In-House Timesharing System.
ERIC Educational Resources Information Center
DeBardeleben, Marian Z.; And Others
1983-01-01
Outlines a database constructed at Philip Morris Research Center Library which encompasses holdings and circulation and acquisitions records for all items in the library. Host computer (DECSYSTEM-2060), software (BASIC), database design, search methodology, cataloging, and accessibility are noted; sample search, circ-in profile, end user profiles,…
ERIC Educational Resources Information Center
Hoffman, Tony
Sophisticated database management systems (DBMS) for microcomputers are becoming increasingly easy to use, allowing small school districts to develop their own autonomous databases for tracking enrollment and student progress in special education. DBMS applications can be designed for maintenance by district personnel with little technical…
Web Database Development: Implications for Academic Publishing.
ERIC Educational Resources Information Center
Fernekes, Bob
This paper discusses the preliminary planning, design, and development of a pilot project to create an Internet accessible database and search tool for locating and distributing company data and scholarly work. Team members established four project objectives: (1) to develop a Web accessible database and decision tool that creates Web pages on the…
NASA Astrophysics Data System (ADS)
Davis, Justin; Howard, Hillari; Hoover, Richard B.; Sabanayagam, Chandran R.
2010-09-01
Extremophiles are microorganisms that have adapted to severe conditions that were once considered devoid of life. The extreme settings in which these organisms flourish on Earth resemble many extraterrestrial environments. Identification and classification of extremophiles in situ (without the requirement for excessive handling and processing) can provide a basis for designing remotely operated instruments for extraterrestrial life exploration. An important consideration when designing such experiments is to prevent contamination of the environments. We are developing a reference spectral database of autofluorescence from microbial extremophiles using long-UV excitation (408 nm). Aromatic compounds are essential components of living systems, and biological molecules such as aromatic amino acids, nucleotides, porphyrins and vitamins can also exhibit fluorescence under long-UV excitation conditions. Autofluorescence spectra were obtained from a light microscope that additionally allowed observations of microbial geometry and motility. It was observed that all extremophiles studied displayed an autofluorescence peak at around 470 nm, followed by a long decay that was species specific. The autofluorescence database can potentially be used as a reference to identify and classify past or present microbial life in our solar system.
NASA Technical Reports Server (NTRS)
Sabanayagam, Chandran; Howard, Hillari; Hoover, Richard B.
2010-01-01
Extremophiles are microorganisms that have adapted to severe conditions that were once considered devoid of life. The extreme settings in which these organisms flourish on earth resemble many extraterrestrial environments. Identification and classification of extremophiles in situ (without the requirement for excessive handling and processing) can provide a basis for designing remotely operated instruments for extraterrestrial life exploration. An important consideration when designing such experiments is to prevent contamination of the environments. We are developing a reference spectral database of autofluorescence from microbial extremophiles using long-UV excitation (405 nm). Aromatic compounds are essential components of living systems, and biological molecules such as aromatic amino acids, nucleotides, porphyrins and vitamins can also exhibit fluorescence under long-UV excitation conditions. Autofluorescence spectra were obtained from a confocal microscope that additionally allowed observations of microbial geometry and motility. It was observed that all extremophiles studied displayed an autofluorescence peak at around 470 nm, followed by a long decay that was species specific. The autofluorescence database can potentially be used as a reference to identify and classify past or present microbial life in our solar system.
NASA Technical Reports Server (NTRS)
Steeman, Gerald; Connell, Christopher
2000-01-01
Many librarians may feel that dynamic Web pages are out of their reach, financially and technically. Yet we are reminded in library and Web design literature that static home pages are a thing of the past. This paper describes how librarians at the Institute for Defense Analyses (IDA) library developed a database-driven, dynamic intranet site using commercial off-the-shelf applications. Administrative issues include surveying a library users group for interest and needs evaluation; outlining metadata elements; and committing resources, from managing time to populate the database to training in Microsoft FrontPage and Web-to-database design. Technical issues covered include Microsoft Access database fundamentals and lessons learned in the Web-to-database process, including setting up Data Source Names (DSNs), redesigning queries to accommodate the Web interface, and understanding Access 97 query language vs. Structured Query Language (SQL). This paper also offers tips on editing Active Server Pages (ASP) scripting to create desired results. A how-to annotated resource list closes out the paper.
Configuring the Orion Guidance, Navigation, and Control Flight Software for Automated Sequencing
NASA Technical Reports Server (NTRS)
Odegard, Ryan G.; Siliwinski, Tomasz K.; King, Ellis T.; Hart, Jeremy J.
2010-01-01
The Orion Crew Exploration Vehicle is being designed with greater automation capabilities than any other crewed spacecraft in NASA s history. The Guidance, Navigation, and Control (GN&C) flight software architecture is designed to provide a flexible and evolvable framework that accommodates increasing levels of automation over time. Within the GN&C flight software, a data-driven approach is used to configure software. This approach allows data reconfiguration and updates to automated sequences without requiring recompilation of the software. Because of the great dependency of the automation and the flight software on the configuration data, the data management is a vital component of the processes for software certification, mission design, and flight operations. To enable the automated sequencing and data configuration of the GN&C subsystem on Orion, a desktop database configuration tool has been developed. The database tool allows the specification of the GN&C activity sequences, the automated transitions in the software, and the corresponding parameter reconfigurations. These aspects of the GN&C automation on Orion are all coordinated via data management, and the database tool provides the ability to test the automation capabilities during the development of the GN&C software. In addition to providing the infrastructure to manage the GN&C automation, the database tool has been designed with capabilities to import and export artifacts for simulation analysis and documentation purposes. Furthermore, the database configuration tool, currently used to manage simulation data, is envisioned to evolve into a mission planning tool for generating and testing GN&C software sequences and configurations. A key enabler of the GN&C automation design, the database tool allows both the creation and maintenance of the data artifacts, as well as serving the critical role of helping to manage, visualize, and understand the data-driven parameters both during software development and throughout the life of the Orion project.
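The data-driven configuration idea can be sketched as a transition table consumed by a tiny generic sequencer, so that sequences change by editing data rather than recompiling; the mode names and parameters below are hypothetical, not Orion's:

```python
# Toy data-driven sequencer: mode transitions and parameter loads come
# from a table, so changing the mission sequence means editing data,
# not recompiling flight code.
SEQUENCE_DB = [
    # (current_mode, trigger_event, next_mode, parameter_reconfig)
    ("COAST",  "deorbit_burn_cmd", "BURN",    {"guidance_gain": 0.8}),
    ("BURN",   "burn_complete",    "ENTRY",   {"guidance_gain": 1.2}),
    ("ENTRY",  "chute_deploy_alt", "DESCENT", {"nav_filter": "baro"}),
]

class Sequencer:
    def __init__(self, mode, params=None):
        self.mode, self.params = mode, dict(params or {})

    def on_event(self, event):
        for cur, trig, nxt, reconfig in SEQUENCE_DB:
            if cur == self.mode and trig == event:
                self.mode = nxt
                self.params.update(reconfig)   # data-driven reconfiguration
                return
        # unknown events are ignored in this sketch

seq = Sequencer("COAST")
seq.on_event("deorbit_burn_cmd")
print(seq.mode, seq.params)   # BURN {'guidance_gain': 0.8}
```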
Feasibility study for a microwave-powered ozone sniffer aircraft, volume 2
NASA Technical Reports Server (NTRS)
1990-01-01
Using 3-D design techniques and the Advanced Surface Design Software on the Computervision Designer V-X Interactive Graphics System, the aircraft configuration was created. The canard, tail, vertical tail, and main wing were created on the system using Wing Generator, a Computervision based program introduced in Appendix A.2. The individual components of the plane were created separately and were later individually imported to the master database. An isometric view of the final configuration is presented.
NASA Technical Reports Server (NTRS)
1986-01-01
Lockheed Missiles and Space Company's conceptual designs and programmatics for a Space Station Nonhuman Life Sciences Research Facility (LSRF) are presented. Conceptual designs and programmatics encompass an Initial Orbital Capability (IOC) LSRF, a growth or Follow-on Orbital Capability (FOC) LSRF, and the transitional process required to modify the IOC LSRF to the FOC LSRF. The IOC and FOC LSRFs correspond to missions SAAX0307 and SAAX0302 of the Space Station Mission Requirements Database, respectively.
The Structural Ceramics Database: Technical Foundations
Munro, R. G.; Hwang, F. Y.; Hubbard, C. R.
1989-01-01
The development of a computerized database on advanced structural ceramics can play a critical role in fostering the widespread use of ceramics in industry and in advanced technologies. A computerized database may be the most effective means of accelerating technology development by enabling new materials to be incorporated into designs far more rapidly than would have been possible with traditional information transfer processes. Faster, more efficient access to critical data is the basis for creating this technological advantage. Further, a computerized database provides the means for a more consistent treatment of data, greater quality control and product reliability, and improved continuity of research and development programs. A preliminary system has been completed as phase one of an ongoing program to establish the Structural Ceramics Database system. The system is designed to be used on personal computers. Developed in a modular design, the preliminary system is focused on the thermal properties of monolithic ceramics. The initial modules consist of materials specification, thermal expansion, thermal conductivity, thermal diffusivity, specific heat, thermal shock resistance, and a bibliography of data references. Query and output programs also have been developed for use with these modules. The latter program elements, along with the database modules, will be subjected to several stages of testing and refinement in the second phase of this effort. The goal of the refinement process will be the establishment of this system as a user-friendly prototype. Three primary considerations provide the guidelines to the system’s development: (1) The user’s needs; (2) The nature of materials properties; and (3) The requirements of the programming language. The present report discusses the manner and rationale by which each of these considerations leads to specific features in the design of the system. PMID:28053397
van Wieren-de Wijer, Diane B M A; Maitland-van der Zee, Anke-Hilse; de Boer, Anthonius; Stricker, Bruno H Ch; Kroon, Abraham A; de Leeuw, Peter W; Bozkurt, O; Klungel, Olaf H
2009-04-01
To describe the design, recruitment and baseline characteristics of participants in a community pharmacy based pharmacogenetic study of antihypertensive drug treatment. Participants enrolled from the population-based Pharmaco-Morbidity Record Linkage System. We designed a nested case-control study in which we will assess whether specific genetic polymorphisms modify the effect of antihypertensive drugs on the risk of myocardial infarction. In this study, cases (myocardial infarction) and controls were recruited through community pharmacies that participate in PHARMO. The PHARMO database comprises drug dispensing histories of about 2,000,000 subjects from a representative sample of Dutch community pharmacies linked to the national registrations of hospital discharges. In total we selected 31010 patients (2777 cases and 28233 controls) from the PHARMO database, of whom 15973 (1871 cases, 14102 controls) were approached through their community pharmacy. Overall response rate was 36.3% (n = 5791, 794 cases, 4997 controls), whereas 32.1% (n = 5126, 701 cases, 4425 controls) gave informed consent to genotype their DNA. As expected, several cardiovascular risk factors such as smoking, body mass index, hypercholesterolemia, and diabetes mellitus were more common in cases than in controls. Furthermore, cases more often used beta-blockers and calcium-antagonists, whereas controls more often used thiazide diuretics, ACE-inhibitors, and angiotensin-II receptor blockers. We have demonstrated that it is feasible to select patients from a coded database for a pharmacogenetic study and to approach them through community pharmacies, achieving reasonable response rates and without violating privacy rules.
Li, Qing-na; Huang, Xiu-ling; Gao, Rui; Lu, Fang
2012-08-01
Data management has significant impact on the quality control of clinical studies. Every clinical study should have a data management plan to provide overall work instructions and ensure that all of these tasks are completed according to the Good Clinical Data Management Practice (GCDMP). Meanwhile, the data management plan (DMP) is an auditable document requested by regulatory inspectors and must be written in a manner that is realistic and of high quality. The significance of DMP, the minimum standards and the best practices provided by GCDMP, the main contents of DMP based on electronic data capture (EDC) and some key factors of DMP influencing the quality of clinical study were elaborated in this paper. Specifically, DMP generally consists of 15 parts, namely, the approval page, the protocol summary, role and training, timelines, database design, creation, maintenance and security, data entry, data validation, quality control and quality assurance, the management of external data, serious adverse event data reconciliation, coding, database lock, data management reports, the communication plan and the abbreviated terms. Among them, the following three parts are regarded as the key factors: designing a standardized database of the clinical study, entering data in time and cleansing data efficiently. In the last part of this article, the authors also analyzed the problems in clinical research of traditional Chinese medicine using the EDC system and put forward some suggestions for improvement.
Clement, Fiona; Zimmer, Scott; Dixon, Elijah; Ball, Chad G.; Heitman, Steven J.; Swain, Mark; Ghosh, Subrata
2016-01-01
Importance At the turn of the 21st century, studies evaluating the change in incidence of appendicitis over time have reported inconsistent findings. Objectives We compared the differences in the incidence of appendicitis derived from a pathology registry versus an administrative database in order to validate coding in administrative databases and establish temporal trends in the incidence of appendicitis. Design We conducted a population-based comparative cohort study to identify all individuals with appendicitis from 2000 to2008. Setting & Participants Two population-based data sources were used to identify cases of appendicitis: 1) a pathology registry (n = 8,822); and 2) a hospital discharge abstract database (n = 10,453). Intervention & Main Outcome The administrative database was compared to the pathology registry for the following a priori analyses: 1) to calculate the positive predictive value (PPV) of administrative codes; 2) to compare the annual incidence of appendicitis; and 3) to assess differences in temporal trends. Temporal trends were assessed using a generalized linear model that assumed a Poisson distribution and reported as an annual percent change (APC) with 95% confidence intervals (CI). Analyses were stratified by perforated and non-perforated appendicitis. Results The administrative database (PPV = 83.0%) overestimated the incidence of appendicitis (100.3 per 100,000) when compared to the pathology registry (84.2 per 100,000). Codes for perforated appendicitis were not reliable (PPV = 52.4%) leading to overestimation in the incidence of perforated appendicitis in the administrative database (34.8 per 100,000) as compared to the pathology registry (19.4 per 100,000). The incidence of appendicitis significantly increased over time in both the administrative database (APC = 2.1%; 95% CI: 1.3, 2.8) and pathology registry (APC = 4.1; 95% CI: 3.1, 5.0). Conclusion & Relevance The administrative database overestimated the incidence of appendicitis, particularly among perforated appendicitis. Therefore, studies utilizing administrative data to analyze perforated appendicitis should be interpreted cautiously. PMID:27820826
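For readers unfamiliar with the APC metric: fitting log(rate) as linear in calendar year under a Poisson model gives APC = 100*(exp(beta_year) - 1). A sketch with synthetic counts (not the study's data), assuming the statsmodels GLM interface:

```python
# Annual percent change (APC) from a Poisson GLM: log(rate) linear in
# year implies APC = 100*(exp(beta_year) - 1). Counts and populations
# below are synthetic stand-ins for the study's data.
import numpy as np
import statsmodels.api as sm

years = np.arange(2000, 2009)
pop = np.full(years.size, 1_000_000)              # person-years at risk
true_rate = 84.2e-5 * 1.041 ** (years - 2000)     # ~4.1% annual growth
counts = np.random.default_rng(7).poisson(true_rate * pop)

X = sm.add_constant(years - 2000)
fit = sm.GLM(counts, X, family=sm.families.Poisson(),
             offset=np.log(pop)).fit()
beta = fit.params[1]
lo, hi = fit.conf_int()[1]
print(f"APC = {100*(np.exp(beta)-1):.1f}%  "
      f"(95% CI {100*(np.exp(lo)-1):.1f}, {100*(np.exp(hi)-1):.1f})")
```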
Use of patient safety culture instruments in operating rooms: A systematic literature review.
Zhao, Pujng; Li, Yaqin; Li, Zhi; Jia, Pengli; Zhang, Longhao; Zhang, Mingming
2017-05-01
To identify and qualitatively describe, in a literature review, how instruments have been used to evaluate patient safety culture in operating rooms in published studies. Systematic searches of the literature were conducted in major databases including MEDLINE, EMbase, The Cochrane Library, and four Chinese databases: the Chinese Biomedical Literature Database (CBM), Wanfang Data, the Chinese Scientific Journal Database (VIP), and the Chinese Journals Full-text Database (CNKI), for studies published up to March 2016. We summarized and analyzed the country scope, the instrument utilized in each study, the year when the instrument was used, and the operating-room fields covered. Study populations, study settings, and the time span between baseline and follow-up phases were evaluated according to the study design. We identified 1025 references, of which 99 were obtained for full-text assessment; 47 of these studies were deemed relevant and included in the literature review. Most of the studies were from the USA. The most commonly used patient safety culture instrument was the Safety Attitude Questionnaire. All identified instruments were used after 2002 and across many fields. Most included studies on patient safety culture were conducted in teaching hospitals or university hospitals. The study populations in the cross-sectional studies were much larger than those in the before-after studies. The time span between the baseline and follow-up phases of the before-after studies was generally over three months. Although patient safety culture is considered important in health care and patient safety, the number of studies in which patient safety culture has been estimated using these instruments in operating rooms is fairly small. © 2017 Chinese Cochrane Center, West China Hospital of Sichuan University and John Wiley & Sons Australia, Ltd.
Naval Ship Database: Database Design, Implementation, and Schema
2013-09-01
... incoming data. The solution allows database users to store and analyze data collected by navy ships in the Royal Canadian Navy (RCN). The data ... understanding RCN jargon and common practices on a typical RCN vessel. This experience led to the development of several error detection methods to ... data to be stored in the database. Mr. Massel has also collected data pertaining to day-to-day activities on RCN vessels that has been imported into ...
NASA Technical Reports Server (NTRS)
Brenton, James C.; Barbre, Robert E.; Orcutt, John M.; Decker, Ryan K.
2018-01-01
The National Aeronautics and Space Administration's (NASA) Marshall Space Flight Center (MSFC) Natural Environments Branch (EV44) has provided atmospheric databases and analysis in support of space vehicle design and day-of-launch operations for NASA and commercial launch vehicle programs launching from the NASA Kennedy Space Center (KSC), co-located on the United States Air Force's Eastern Range (ER) at the Cape Canaveral Air Force Station. The ER is one of the most heavily instrumented sites in the United States measuring various atmospheric parameters on a continuous basis. An inherent challenge with the large databases that EV44 receives from the ER consists of ensuring erroneous data are removed from the databases, and thus excluded from launch vehicle design analyses. EV44 has put forth great effort in developing quality control (QC) procedures for individual meteorological instruments; however, no standard QC procedures for all databases currently exist resulting in QC databases that have inconsistencies in variables, methodologies, and periods of record. The goal of this activity is to use the previous efforts by EV44 to develop a standardized set of QC procedures from which to build flags within the meteorological databases from KSC and the ER, while maintaining open communication with end users from the launch community to develop ways to improve, adapt and grow the QC database. Details of the QC checks are described. The flagged data points will be plotted in a graphical user interface (GUI) as part of a manual confirmation that the flagged data do indeed need to be removed from the archive. As the rate of launches increases with additional launch vehicle programs, more emphasis is being placed to continually update and check weather databases for data quality before use in launch vehicle design and certification analyses.
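Typical QC checks of the kind described (range, rate-of-change, and stuck-sensor tests) attach flags for later manual review in the GUI rather than deleting data. A minimal sketch with invented thresholds:

```python
# Sketch of automated QC flagging for meteorological records: a range
# check, a rate-of-change check, and a stuck-sensor check, each adding
# a flag rather than silently deleting data. Thresholds are invented.
def qc_flags(samples, max_valid=75.0, max_step=20.0, stuck_run=4):
    """samples: wind speeds (m/s) at a fixed cadence; returns flag lists."""
    flags = [[] for _ in samples]
    run = 1
    for i, v in enumerate(samples):
        if not (0.0 <= v <= max_valid):
            flags[i].append("range")
        if i and abs(v - samples[i-1]) > max_step:
            flags[i].append("step")
        run = run + 1 if i and v == samples[i-1] else 1
        if run >= stuck_run:
            flags[i].append("stuck")
    return flags

data = [5.1, 5.3, 48.0, 5.2, 5.2, 5.2, 5.2, -9999.0]
for v, f in zip(data, qc_flags(data)):
    print(v, f or "ok")
```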
Information sources for obesity prevention policy research: a review of systematic reviews.
Hanneke, Rosie; Young, Sabrina K
2017-08-08
Systematic identification of evidence in health policy can be time-consuming and challenging. This study examines three questions pertaining to systematic reviews on obesity prevention policy, in order to identify the most efficient search methods: (1) What percentage of the primary studies selected for inclusion in the reviews originated in scholarly as opposed to gray literature? (2) How much of the primary scholarly literature in this topic area is indexed in PubMed/MEDLINE? (3) Which databases index the greatest number of primary studies not indexed in PubMed, and are these databases searched consistently across systematic reviews? We identified systematic reviews on obesity prevention policy and explored their search methods and citations. We determined the percentage of scholarly vs. gray literature cited, the most frequently cited journals, and whether each primary study was indexed in PubMed. We searched 21 databases for all primary study articles not indexed in PubMed to determine which database(s) indexed the highest number of these relevant articles. In total, 21 systematic reviews were identified. Ten of the 21 systematic reviews reported searching gray literature, and 12 reviews ultimately included gray literature in their analyses. Scholarly articles accounted for 577 of the 649 total primary study papers. Of these, 495 (76%) were indexed in PubMed. Google Scholar retrieved the highest number of the remaining 82 non-PubMed scholarly articles, followed by Scopus and EconLit. The Journal of the American Dietetic Association was the most-cited journal. Researchers can maximize search efficiency by searching a small yet targeted selection of both scholarly and gray literature resources. A highly sensitive search of PubMed and those databases that index the greatest number of relevant articles not indexed in PubMed, namely multidisciplinary and economics databases, could save considerable time and effort. When combined with a gray literature search and additional search methods, including cited reference searching and consulting with experts, this approach could help maintain broad retrieval of relevant studies while improving search efficiency. Findings also have implications for designing specialized databases for public health research.
Yao, Qingqiang; Wei, Bo; Guo, Yang; Jin, Chengzhe; Du, Xiaotao; Yan, Chao; Yan, Junwei; Hu, Wenhao; Xu, Yan; Zhou, Zhi; Wang, Yijin; Wang, Liming
2015-01-01
This study investigates techniques for the design and construction of polycaprolactone (PCL)-hydroxyapatite (HA) scaffolds based on CT 3D-reconstruction data. Femoral and lumbar spinal specimens of eight male New Zealand white rabbits underwent CT and laser scanning, and the resulting data were used for 3D-printed scaffold processing with PCL-HA powder; eight scaffolds were produced for each group. The CAD-based 3D-printed porous cylindrical stents comprised 16 pieces × 3 groups: an orthogonal scaffold, a Pozi-hole scaffold and a triangular-hole scaffold. The gross forms, fiber scaffold diameters and porosities of the scaffolds were measured, and mechanical testing was performed on eight pieces of each of the three kinds of cylindrical scaffold. The loading force, deformation, maximum-affordable pressure and deformation values were recorded. The pore-connection rate of each scaffold was 100% within each group, and there was no significant difference between the gross and micro-structural parameters of each scaffold and the design values (P > 0.05). There was no significant difference in the loading force, deformation and deformation value under the maximum-affordable pressure of the three different cylinder scaffolds when the load was above 320 N. The combination of CT and CAD reverse-engineering technology can accomplish the design and manufacture of complex bone tissue engineering scaffolds, with no significant difference in the effects of the microstructures on the physical properties of the different porous scaffolds under large loads.
C3I system modification and EMC (electromagnetic compatibility) methodology, volume 1
NASA Astrophysics Data System (ADS)
Wilson, J. L.; Jolly, M. B.
1984-01-01
A methodology (i.e., consistent set of procedures) for assessing the electromagnetic compatibility (EMC) of RF subsystem modifications on C3I aircraft was generated during this study (Volume 1). An IEMCAP (Intrasystem Electromagnetic Compatibility Analysis Program) database for the E-3A (AWACS) C3I aircraft RF subsystem was extracted to support the design of the EMC assessment methodology (Volume 2). Mock modifications were performed on the E-3A database to assess, using a preliminary form of the methodology, the resulting EMC impact. Application of the preliminary assessment methodology to modifications in the E-3A database served to fine tune the form of a final assessment methodology. The resulting final assessment methodology is documented in this report in conjunction with the overall study goals, procedures, and database. It is recommended that a similar EMC assessment methodology be developed for the power subsystem within C3I aircraft. It is further recommended that future EMC assessment methodologies be developed around expert systems (i.e., computer intelligent agents) to control both the excruciating detail and user requirement for transparency.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rayl, K.D.; Gaasterland, T.
This paper presents an overview of the purpose, content, and design of a subset of the currently available biological databases, with an emphasis on protein databases. Databases included in this summary are 3D-ALI, Berlin RNA databank, Blocks, DSSP, EMBL Nucleotide Database, EMP, ENZYME, FSSP, GDB, GenBank, HSSP, LiMB, PDB, PIR, PKCDD, ProSite, and SWISS-PROT. The goal is to provide a starting point for researchers who wish to take advantage of the myriad available databases. Rather than providing a complete explanation of each database, we present its content and form by explaining the details of typical entries. Pointers to more complete "user guides" are included, along with general information on where to search for a new database.
National Institute of Standards and Technology Data Gateway
SRD 100 Database for Simulation of Electron Spectra for Surface Analysis (SESSA) (PC database for purchase). This database has been designed to facilitate quantitative interpretation of Auger-electron and X-ray photoelectron spectra and to improve the accuracy of quantitation in routine analysis. The database contains all physical data needed to perform quantitative interpretation of an electron spectrum for a thin-film specimen of given composition. A simulation module provides an estimate of peak intensities as well as the energy and angular distributions of the emitted electron flux.
1983-10-01
2.7 Multiversion Data 2-18; 2.7.1 Multiversion Timestamping 2-20; 2.7.2 Multiversion Locking 2-20; 2.8 Combining the Techniques 2-22; 3. Database Recovery Algorithms ... See [THOM79, GIFF79] for details. 2.7 Multiversion Data. Let us return to a database system model where each logical data item is stored at one DM ... In a multiversion database each Write wi[x] produces a new copy (or version) of x, denoted xi. Thus, the value of x is a set of versions. For each
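The multiversion-timestamping fragment above is easy to make concrete: each write wi[x] appends a version stamped with its transaction's timestamp, and a read at timestamp ts returns the latest version written at or before ts. A minimal sketch:

```python
# Minimal multiversion timestamping sketch: writes append timestamped
# versions; a read at timestamp ts sees the latest version with
# write_ts <= ts, so readers never block writers.
import bisect

class MVItem:
    def __init__(self):
        self.versions = []          # kept sorted as (write_ts, value)

    def write(self, ts, value):
        bisect.insort(self.versions, (ts, value))

    def read(self, ts):
        visible = None
        for write_ts, value in self.versions:   # sorted ascending by ts
            if write_ts <= ts:
                visible = value
            else:
                break
        if visible is None:
            raise KeyError(f"no version visible at ts={ts}")
        return visible

x = MVItem()
x.write(10, "v1")
x.write(30, "v2")
print(x.read(20))   # 'v1' -- the version visible to a ts=20 reader
print(x.read(35))   # 'v2'
```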
Using the structure-function linkage database to characterize functional domains in enzymes.
Brown, Shoshana; Babbitt, Patricia
2014-12-12
The Structure-Function Linkage Database (SFLD; http://sfld.rbvi.ucsf.edu/) is a Web-accessible database designed to link enzyme sequence, structure, and functional information. This unit describes the protocols by which a user may query the database to predict the function of uncharacterized enzymes and to correct misannotated functional assignments. The information in this unit is especially useful in helping a user discriminate functional capabilities of a sequence that is only distantly related to characterized sequences in publicly available databases. Copyright © 2014 John Wiley & Sons, Inc.
TREATABILITY DATABASE DESCRIPTION
The Drinking Water Treatability Database (TDB) presents referenced information on the control of contaminants in drinking water. It allows drinking water utilities, first responders to spills or emergencies, treatment process designers, research organizations, academics, regulato...
McVicar, Andrew; Greenwood, Christina; Ellis, Carol; LeForis, Chantelle
2016-09-01
Interpretation of the efficacy of reflexology is hindered by inconsistent research designs and complicated by professional views that the criteria of randomized controlled trials (RCTs) are not ideal for researching holistic complementary and alternative medicine practice. The influence of research designs on study outcomes is not known. This integrative review sought to evaluate this possibility. Thirty-seven interventional studies (2000-2014) were identified; they had RCT or non-RCT design and compared reflexology outcomes against a control/comparison group. Viability of integrating RCT and non-RCT studies into a single database was first evaluated by appraisal of 16 reporting fields related to study setting and objectives, sample demographics, methodologic design, and treatment fidelity, and by assessment against Jadad score quality criteria for RCTs. For appraisal, the database was stratified into RCT/non-RCT or Jadad score (3 or more vs. less than 3). Deficits in reporting were identified for blind assignment of participants, dropout/completion rate, and School of Reflexology. For comparison purposes, these fields were excluded from subsequent analysis for evidence of association between design fields and of fields with study outcomes. Thirty-one studies applied psychometric tools and 20 applied biometric tools (14 applied both). A total of 116 measures were used. Type of measure was associated with study objectives (p < 0.001; chi-square), in particular psychometric measures with a collated "behavioral/cognitive" objective. Significant outcomes were more likely (p < 0.001; chi-square) for psychometric than for biometric measures. Neither type of outcome was associated with choice of RCT or non-RCT method, but psychometric responses were associated (p = 0.007) with a nonmassage control strategy. The review supports psychometric responses to reflexology when study design uses a nonmassage control strategy. Findings suggest that an evaluation of outcomes against sham reflexology massage and other forms of massage, as well as a narrower focus of study objective, may clarify whether there is a relationship between study design and efficacy of reflexology.
Enhanced/Synthetic Vision Systems - Human factors research and implications for future systems
NASA Technical Reports Server (NTRS)
Foyle, David C.; Ahumada, Albert J.; Larimer, James; Sweet, Barbara T.
1992-01-01
This paper reviews recent human factors research studies conducted in the Aerospace Human Factors Research Division at NASA Ames Research Center related to the development and usage of Enhanced or Synthetic Vision Systems. Research discussed includes studies of field of view (FOV), representational differences of infrared (IR) imagery, head-up display (HUD) symbology, HUD advanced concept designs, sensor fusion, and sensor/database fusion and evaluation. Implications for the design and usage of Enhanced or Synthetic Vision Systems are discussed.
Cadastral Positioning Accuracy Improvement: a Case Study in Malaysia
NASA Astrophysics Data System (ADS)
Hashim, N. M.; Omar, A. H.; Omar, K. M.; Abdullah, N. M.; Yatim, M. H. M.
2016-09-01
A cadastral map is parcel-based information specifically designed to define the limits of boundaries. In Malaysia, the cadastral map is under the authority of the Department of Surveying and Mapping Malaysia (DSMM). With the growth of spatial technology, especially Geographical Information Systems (GIS), DSMM decided to modernize and reform its cadastral legacy datasets by generating an accurate digital representation of cadastral parcels. These legacy databases are usually derived from paper parcel maps known as certified plans. As a result of this modernization, the new cadastral database is no longer based on single, static parcel paper maps but on a global digital map. Despite the strict modernization process, the reform has raised unexpected issues that remain essential to address. The main focus of this study is to review the issues generated by this transition. The transformed cadastral database should be further treated to minimize inherent errors and to fit it to the new satellite-based coordinate system with high positional accuracy. The results of this review will serve as a foundation for investigating systematic and effective methods for Positional Accuracy Improvement (PAI) in cadastral database modernization.
Piriyapongsa, Jittima; Bootchai, Chaiwat; Ngamphiw, Chumpol; Tongsima, Sissades
2014-01-01
microRNA (miRNA)–promoter interaction resource (microPIR) is a public database containing over 15 million predicted miRNA target sites located within human promoter sequences. These predicted targets are presented along with their related genomic and experimental data, making the microPIR database the most comprehensive repository of miRNA promoter target sites. Here, we describe major updates of the microPIR database including new target predictions in the mouse genome and revised human target predictions. The updated database (microPIR2) now provides ∼80 million human and 40 million mouse predicted target sites. In addition to being a reference database, microPIR2 is a tool for comparative analysis of target sites on the promoters of human–mouse orthologous genes. In particular, this new feature was designed to identify potential miRNA–promoter interactions conserved between species that could be stronger candidates for further experimental validation. We also incorporated additional supporting information to microPIR2 such as nuclear and cytoplasmic localization of miRNAs and miRNA–disease association. Extra search features were also implemented to enable various investigations of targets of interest. Database URL: http://www4a.biotec.or.th/micropir2 PMID:25425035
NASA Technical Reports Server (NTRS)
Smith, Andrew; LaVerde, Bruce; Hunt, Ron; Fulcher, Clay; Towner, Robert; McDonald, Emmett
2012-01-01
The design and theoretical basis of a new database tool that quickly generates vibroacoustic response estimates using a library of transfer functions (TFs) is discussed. During the early stages of a launch vehicle development program, these response estimates can be used to provide vibration environment specification to hardware vendors. The tool accesses TFs from a database, combines the TFs, and multiplies these by input excitations to estimate vibration responses. The database is populated with two sets of uncoupled TFs; the first set representing vibration response of a bare panel, designated H^s, and the second set representing the response of the free-free component equipment by itself, designated H^c. For a particular configuration undergoing analysis, the appropriate H^s and H^c are selected and coupled to generate an integrated TF, designated H^(s+c). This integrated TF is then used with the appropriate input excitations to estimate vibration responses. This simple yet powerful tool enables a user to estimate vibration responses without directly using finite element models, so long as suitable H^s and H^c sets are defined in the database libraries. The paper discusses the preparation of the database tool and provides the assumptions and methodologies necessary to combine H^s and H^c sets into an integrated H^(s+c). An experimental validation of the approach is also presented.
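The lookup-and-combine workflow can be sketched in a few lines of Python. Note that the coupling rule below is a simple placeholder: the paper derives its own method for merging H^s and H^c into H^(s+c), and all values here are mock data.

    import numpy as np

    # Hedged sketch of the lookup-and-combine workflow; couple_tfs is a
    # placeholder, not the paper's actual coupling derivation.

    def couple_tfs(H_s, H_c):
        # Placeholder stand-in for the integrated TF H^(s+c).
        return H_s * H_c

    def response_psd(H_sc, input_psd):
        # Single-input response estimate: |H|^2 times the excitation PSD.
        return np.abs(H_sc) ** 2 * input_psd

    freqs = np.linspace(20.0, 2000.0, 200)        # Hz
    H_s = 1.0 / (1.0 + 1j * freqs / 400.0)        # mock bare-panel TF
    H_c = 1.0 / (1.0 + 1j * freqs / 900.0)        # mock component TF
    input_psd = np.full_like(freqs, 1e-3)         # mock acoustic input
    response = response_psd(couple_tfs(H_s, H_c), input_psd)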
Spine device clinical trials: design and sponsorship.
Cher, Daniel J; Capobianco, Robyn A
2015-05-01
Multicenter prospective randomized clinical trials represent the best evidence to support the safety and effectiveness of medical devices. Industry sponsorship of multicenter clinical trials is purported to lead to bias. To determine what proportion of spine device-related trials are industry-sponsored and the effect of industry sponsorship on trial design. Analysis of data from a publicly available clinical trials database. Clinical trials of spine devices registered on ClinicalTrials.gov, a publicly accessible trial database, were evaluated in terms of design, number and location of study centers, and sample size. The relationship between trial design characteristics and study sponsorship was evaluated using logistic regression and general linear models. One thousand six hundred thirty-eight studies were retrieved from ClinicalTrials.gov using the search term "spine." Of the 367 trials that focused on spine surgery, 200 (54.5%) specifically studied devices for spine surgery and 167 (45.5%) focused on other issues related to spine surgery. Compared with nondevice trials, device trials were far more likely to be sponsored by industry (74% vs. 22.2%, odds ratio (OR) 9.9 [95% confidence interval 6.1-16.3]). Industry-sponsored device trials were more likely multicenter (80% vs. 29%, OR 9.8 [4.8-21.1]) and had approximately four times as many participating study centers (p<.0001) and larger sample sizes. There were very few US-based multicenter randomized trials of spine devices not sponsored by industry. Most device-related spine research is industry-sponsored. Multicenter trials are more likely to be industry-sponsored. These findings suggest that previously published studies showing larger effect sizes in industry-sponsored vs. nonindustry-sponsored studies may be biased as a result of failure to take into account the marked differences in design and purpose. Copyright © 2015 Elsevier Inc. All rights reserved.
Protecting patient privacy by quantifiable control of disclosures in disseminated databases.
Ohno-Machado, Lucila; Silveira, Paulo Sérgio Panse; Vinterbo, Staal
2004-08-01
One of the fundamental rights of patients is to have their privacy protected by health care organizations, so that information that can be used to identify a particular individual is not used to reveal sensitive patient data such as diagnoses, reasons for ordering tests, test results, etc. A common practice is to remove sensitive data from databases that are disseminated to the public, but this can make the disseminated database useless for important public health purposes. If the degree of anonymity of a disseminated data set could be measured, it would be possible to design algorithms that can assure that the desired level of confidentiality is achieved. Privacy protection in disseminated databases can be facilitated by the use of special ambiguation algorithms. Most of these algorithms are aimed at making one individual indistinguishable from one or more of his peers. However, even in databases considered "anonymous", it may still be possible to obtain sensitive information about some individuals or groups of individuals with the use of pattern recognition algorithms. In this article, we study the problem of determining the degree of ambiguation in disseminated databases and discuss its implications in the development and testing of "anonymization" algorithms.
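The paper's own ambiguation measure is not reproduced in this abstract, but a closely related and widely used quantity is k-anonymity: the size of the smallest group of records sharing the same quasi-identifiers. A minimal Python sketch, with illustrative field names:

    from collections import Counter

    # Hedged sketch: k-anonymity as a simple anonymity degree -- the size of
    # the smallest group of records sharing the same quasi-identifiers.

    def k_anonymity(records, quasi_identifiers):
        groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
        return min(groups.values())

    rows = [
        {"zip": "021**", "birth_year": 1970, "sex": "F", "diagnosis": "A"},
        {"zip": "021**", "birth_year": 1970, "sex": "F", "diagnosis": "B"},
        {"zip": "021**", "birth_year": 1982, "sex": "M", "diagnosis": "C"},
    ]
    # k = 1: at least one record is unique on its quasi-identifiers, so this
    # data set is not safe against linkage on zip/birth_year/sex.
    print(k_anonymity(rows, ["zip", "birth_year", "sex"]))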
Menditto, Enrica; Bolufer De Gea, Angela; Cahir, Caitriona; Marengoni, Alessandra; Riegler, Salvatore; Fico, Giuseppe; Costa, Elisio; Monaco, Alessandro; Pecorelli, Sergio; Pani, Luca; Prados-Torres, Alexandra
2016-01-01
Computerized health care databases have been widely described as an excellent opportunity for research. The availability of "big data" has brought about a wave of innovation in health services research projects. Most of the available secondary data sources are restricted to the geographical scope of a given country and present heterogeneous structure and content. Under the umbrella of the European Innovation Partnership on Active and Healthy Ageing, collaborative work conducted by the partners of the group on "adherence to prescription and medical plans" identified the use of observational and large-population databases to monitor medication-taking behavior in the elderly. This article describes the methodology used to gather the information from available databases among the Adherence Action Group partners with the aim of improving data sharing at a European level. A total of six databases belonging to three different European countries (Spain, Republic of Ireland, and Italy) were included in the analysis. Preliminary results suggest that there are some similarities. However, these results should be tested in different contexts and European countries, supporting the idea that large European studies should be designed in order to get the most out of already available databases.
nStudy: A System for Researching Information Problem Solving
ERIC Educational Resources Information Center
Winne, Philip H.; Nesbit, John C.; Popowich, Fred
2017-01-01
A bottleneck in gathering big data about learning is instrumentation designed to record data about processes students use to learn and information on which those processes operate. The software system nStudy fills this gap. nStudy is an extension to the Chrome web browser plus a server side database for logged trace data plus peripheral modules…
ERIC Educational Resources Information Center
van der Meer, Larah; Sigafoos, Jeff; O'Reilly, Mark F.; Lancioni, Giulio E.
2011-01-01
We synthesized studies that assessed preference for using different augmentative and alternative communication (AAC) options. Studies were identified via systematic searches of electronic databases, journals, and reference lists. Studies were evaluated in terms of: (a) participants, (b) setting, (c) communication options assessed, (d) design, (e)…
Social Networking as a Platform for Role-Playing Scientific Case Studies
ERIC Educational Resources Information Center
Geyer, Andrea M.
2014-01-01
This work discusses the design and implementation of two online case studies in a face-to-face general chemistry course. The case studies were integrated into the course to emphasize the need for science literacy in general society, to enhance critical thinking, to introduce database searching, and to improve primary literature reading skills. An…
Generation of signature databases with fast codes
NASA Astrophysics Data System (ADS)
Bradford, Robert A.; Woodling, Arthur E.; Brazzell, James S.
1990-09-01
Using the FASTSIG signature code to generate optical signature databases for the Ground-based Surveillance and Tracking System (GSTS) Program has improved the efficiency of the database generation process. The goal of the current GSTS database is to provide standardized, threat-representative target signatures that can easily be used for acquisition and track studies, discrimination algorithm development, and system simulations. Large databases, with as many as eight interpolation parameters, are required to maintain the fidelity demands of discrimination and to generalize their application to other strategic systems. As the need increases for quick availability of long-wave infrared (LWIR) target signatures for an evolving design-to-threat, FASTSIG has become a database generation alternative to the industry-standard Optical Signatures Code (OSC). FASTSIG, developed in 1985 to meet the unique strategic-systems demands imposed by the discrimination function, has the significant advantage of being a faster-running signature code than the OSC, typically requiring two percent of the CPU time. It uses analytical approximations to model axisymmetric targets, with the fidelity required for discrimination analysis. Access of the signature database is accomplished through use of the waveband integration and interpolation software, INTEG and SIGNAT. This paper gives details of this procedure as well as sample interpolated signatures, and also covers sample verification by comparison to the OSC, in order to establish the fidelity of the FASTSIG-generated database.
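Signature retrieval of this kind amounts to multiparameter table interpolation. A small Python sketch using SciPy illustrates the pattern; the three-parameter grid and values are invented for illustration (FASTSIG databases used up to eight parameters and their own INTEG/SIGNAT software):

    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    # Hedged sketch with an invented 3-parameter signature grid.
    aspect = np.linspace(0.0, 180.0, 19)     # viewing aspect, deg
    rng = np.linspace(100.0, 1000.0, 10)     # range, km
    band = np.array([0.0, 1.0, 2.0])         # LWIR sub-band index
    table = np.random.rand(aspect.size, rng.size, band.size)  # mock signatures

    lookup = RegularGridInterpolator((aspect, rng, band), table)
    signature = lookup([[45.0, 350.0, 1.0]])  # interpolated signature value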
Jeddi, Fatemeh Rangraz; Farzandipoor, Mehrdad; Arabfard, Masoud; Hosseini, Azam Haj Mohammad
2016-01-01
Objective: The purpose of this study was to investigate the situation and present a conceptual model for a clinical governance information system, using UML, in two sample hospitals. Background: Although use of information is one of the fundamental components of clinical governance, little attention is paid to information management. Material and Methods: A cross-sectional study was conducted from October 2012 to May 2013. Data were gathered through questionnaires and interviews in two sample hospitals. Face and content validity of the questionnaire were confirmed by experts. Data were collected from a pilot hospital, revisions were made, and the final questionnaire was prepared. Data were analyzed by descriptive statistics using SPSS 16 software. Results: From the scenarios derived from the questionnaires, UML diagrams were drawn using Rational Rose 7 software. The results showed that only 32.14 percent of the required indicators were calculated in the hospitals. No database had been designed, and 100 percent of the hospitals' clinical governance units required a database. Conclusion: The clinical governance units of the hospitals do not have access to all the indicators needed to perform their mission. Defining processes, drawing models, and creating a database are essential for designing information systems. PMID:27147804
21 CFR 830.320 - Submission of unique device identification information.
Code of Federal Regulations, 2014 CFR
2014-04-01
... Identification Database § 830.320 Submission of unique device identification information. (a) Designation of... Unique Device Identification Database (GUDID) in a format that we can process, review, and archive...
Crawford, April D; Zucker, Tricia A; Williams, Jeffrey M; Bhavsar, Vibhuti; Landry, Susan H
2013-12-01
Although coaching is a popular approach for enhancing the quality of Tier 1 instruction, limited research has addressed observational measures specifically designed to focus coaching on evidence-based practices. This study explains the development of the prekindergarten (pre-k) Classroom Observation Tool (COT) designed for use in a data-based coaching model. We examined psychometric characteristics of the COT and explored how coaches and teachers used the COT goal-setting system. The study included 193 coaches working with 3,909 pre-k teachers in a statewide professional development program. Classrooms served 3 and 4 year olds (n = 56,390) enrolled mostly in Title I, Head Start, and other need-based pre-k programs. Coaches used the COT during a 2-hr observation at the beginning of the academic year. Teachers collected progress-monitoring data on children's language, literacy, and math outcomes three times during the year. Results indicated a theoretically supported eight-factor structure of the COT across language, literacy, and math instructional domains. Overall interrater reliability among coaches was good (.75). Although correlations with an established teacher observation measure were small, significant positive relations between COT scores and children's literacy outcomes indicate promising predictive validity. Patterns of goal-setting behaviors indicate teachers and coaches set an average of 43.17 goals during the academic year, and coaches reported that 80.62% of goals were met. Both coaches and teachers reported the COT was a helpful measure for enhancing quality of Tier 1 instruction. Limitations of the current study and implications for research and data-based coaching efforts are discussed. PsycINFO Database Record (c) 2013 APA, all rights reserved.
A Solution on Identification and Rearing Files in Smallholder Pig Farming
NASA Astrophysics Data System (ADS)
Xiong, Benhai; Fu, Runting; Lin, Zhaohui; Luo, Qingyao; Yang, Liang
In order to meet government supervision of pork production safety as well as consumers' right to know what they buy, this study adopts animal identification, mobile PDA readers, GPRS, and other information technologies; puts forward a data collection method for setting up rearing files of pigs in smallholder pig farming; designs the related metadata structures and a mobile database; develops an embedded mobile PDA system to collect individual pig information and upload it to the remote central database; and finally realizes mobile links to a specific website. The embedded PDA can read, with its mobile reader, both the special pig bar-code ear tag appointed by the Ministry of Agriculture and a general Data Matrix bar-code ear tag designed in this study, and can record all kinds of input data, including bacterins, feed additives, animal drugs, and even some forbidden medicines, and submit them to the central database through GPRS. At the same time, the remote central database can be maintained by mobile PDA over GPRS, finally achieving pork tracking from origin to consumption and tracing in the reverse direction. This study suggests a feasible technical solution for setting up networked electronic pig rearing files in smallholder, farmer-based pig production, and the solution has proved practical through its application in the construction of Tianjin's pork quality traceability system. Although some individual techniques, such as current GPRS transmission speed, have some adverse effects on system performance, these will be resolved with the development of communication technology. Full implementation of the solution around China will provide technical support for supervising the quality and safety of pork production and for meeting consumer demand.
A Data Analysis Expert System For Large Established Distributed Databases
NASA Astrophysics Data System (ADS)
Gnacek, Anne-Marie; An, Y. Kim; Ryan, J. Patrick
1987-05-01
The purpose of this work is to analyze the applicability of artificial intelligence techniques for developing a user-friendly, parallel interface to large isolated, incompatible NASA databases for the purpose of assisting the management decision process. To carry out this work, a survey was conducted to establish the data access requirements of several key NASA user groups. In addition, current NASA database access methods were evaluated. The results of this work are presented in the form of a design for a natural language database interface system, called the Deductively Augmented NASA Management Decision Support System (DANMDS). This design is feasible principally because of recently announced commercial hardware and software product developments which allow cross-vendor compatibility. The goal of the DANMDS system is commensurate with the central dilemma confronting most large companies and institutions in America, the retrieval of information from large, established, incompatible database systems. The DANMDS system implementation would represent a significant first step toward this problem's resolution.
BIRS – Bioterrorism Information Retrieval System
Tewari, Ashish Kumar; Rashi; Wadhwa, Gulshan; Sharma, Sanjeev Kumar; Jain, Chakresh Kumar
2013-01-01
Bioterrorism is the intended use of pathogenic strains of microbes to spread terror in a population. There is a definite need to promote research for the development of vaccines, therapeutics, and diagnostic methods as part of preparedness for any future bioterror attack. BIRS is an open-access database of collective information on organisms related to bioterrorism. The database architecture utilizes current open-source technology, viz. PHP ver. 5.3.19, MySQL, and an IIS server under the Windows platform. The database stores information on literature, generic information, and unique pathways of about 10 microorganisms involved in bioterrorism. It may serve as a collective repository to accelerate drug discovery and vaccine design against such bioterrorism agents (microbes). The available data have been validated against various online resources and by literature mining in order to provide the user with a comprehensive information system. Availability: The database is freely available at http://www.bioterrorism.biowaves.org PMID:23390356
Materials Databases Infrastructure Constructed by First Principles Calculations: A Review
Lin, Lianshan
2015-10-13
First Principles calculations, especially those based on high-throughput Density Functional Theory (DFT), have been widely accepted as major tools in atomic-scale materials design. Emerging supercomputers, along with powerful First Principles calculations, have accumulated hundreds of thousands of crystal and compound records. The exponential growth of computational materials information urges the development of materials databases, which not only provide unlimited storage for the daily increasing data but also remain efficient in data storage, management, query, presentation, and manipulation. This review covers the most cutting-edge materials databases in materials design and their hot applications, such as in fuel cells. By comparing the advantages and drawbacks of these high-throughput First Principles materials databases, an optimized computational framework can be identified to fit the needs of fuel cell applications. The further development of high-throughput DFT materials databases, which in essence accelerate materials innovation, is discussed in the summary as well.
BμG@Sbase—a microbial gene expression and comparative genomic database
Witney, Adam A.; Waldron, Denise E.; Brooks, Lucy A.; Tyler, Richard H.; Withers, Michael; Stoker, Neil G.; Wren, Brendan W.; Butcher, Philip D.; Hinds, Jason
2012-01-01
The reducing cost of high-throughput functional genomic technologies is creating a deluge of high volume, complex data, placing the burden on bioinformatics resources and tool development. The Bacterial Microarray Group at St George's (BμG@S) has been at the forefront of bacterial microarray design and analysis for over a decade and while serving as a hub of a global network of microbial research groups has developed BμG@Sbase, a microbial gene expression and comparative genomic database. BμG@Sbase (http://bugs.sgul.ac.uk/bugsbase/) is a web-browsable, expertly curated, MIAME-compliant database that stores comprehensive experimental annotation and multiple raw and analysed data formats. Consistent annotation is enabled through a structured set of web forms, which guide the user through the process following a set of best practices and controlled vocabulary. The database currently contains 86 expertly curated publicly available data sets (with a further 124 not yet published) and full annotation information for 59 bacterial microarray designs. The data can be browsed and queried using an explorer-like interface; integrating intuitive tree diagrams to present complex experimental details clearly and concisely. Furthermore the modular design of the database will provide a robust platform for integrating other data types beyond microarrays into a more Systems analysis based future. PMID:21948792
Design considerations, architecture, and use of the Mini-Sentinel distributed data system.
Curtis, Lesley H; Weiner, Mark G; Boudreau, Denise M; Cooper, William O; Daniel, Gregory W; Nair, Vinit P; Raebel, Marsha A; Beaulieu, Nicolas U; Rosofsky, Robert; Woodworth, Tiffany S; Brown, Jeffrey S
2012-01-01
We describe the design, implementation, and use of a large, multiorganizational distributed database developed to support the Mini-Sentinel Pilot Program of the US Food and Drug Administration (FDA). As envisioned by the US FDA, this implementation will inform and facilitate the development of an active surveillance system for monitoring the safety of medical products (drugs, biologics, and devices) in the USA. A common data model was designed to address the priorities of the Mini-Sentinel Pilot and to leverage the experience and data of participating organizations and data partners. A review of existing common data models informed the process. Each participating organization designed a process to extract, transform, and load its source data, applying the common data model to create the Mini-Sentinel Distributed Database. Transformed data were characterized and evaluated using a series of programs developed centrally and executed locally by participating organizations. A secure communications portal was designed to facilitate queries of the Mini-Sentinel Distributed Database and transfer of confidential data, analytic tools were developed to facilitate rapid response to common questions, and distributed querying software was implemented to facilitate rapid querying of summary data. As of July 2011, information on 99,260,976 health plan members was included in the Mini-Sentinel Distributed Database. The database includes 316,009,067 person-years of observation time, with members contributing, on average, 27.0 months of observation time. All data partners have successfully executed distributed code and returned findings to the Mini-Sentinel Operations Center. This work demonstrates the feasibility of building a large, multiorganizational distributed data system in which organizations retain possession of their data that are used in an active surveillance system. Copyright © 2012 John Wiley & Sons, Ltd.
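The distributed-query pattern described above can be illustrated with a short Python sketch: one summary query written against a common data model is executed locally at each data partner, and only aggregate results are returned to the operations center. Table and column names are invented, not the Mini-Sentinel schema.

    import sqlite3

    # Hedged sketch of the distributed-query pattern over a common data model.

    SUMMARY_QUERY = """
        SELECT sex, COUNT(*) AS members, SUM(months_enrolled) AS person_months
        FROM enrollment
        GROUP BY sex
    """

    def run_local_query(db_path):
        # Executed at a single data partner; row-level data never leave the site.
        with sqlite3.connect(db_path) as conn:
            return conn.execute(SUMMARY_QUERY).fetchall()

    def pool_results(partner_dbs):
        # The operations center combines the returned summaries.
        totals = {}
        for db in partner_dbs:
            for sex, members, person_months in run_local_query(db):
                m, pm = totals.get(sex, (0, 0))
                totals[sex] = (m + members, pm + (person_months or 0))
        return totals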
We discuss the initial design and application of the National Urban Database and Access Portal Tool (NUDAPT). This new project is sponsored by the USEPA and involves collaborations and contributions from many groups from federal and state agencies, and from private and academic institutions.
Data-Based Decision-Making: Developing a Method for Capturing Teachers' Understanding of CBM Graphs
ERIC Educational Resources Information Center
Espin, Christine A.; Wayman, Miya Miura; Deno, Stanley L.; McMaster, Kristen L.; de Rooij, Mark
2017-01-01
In this special issue, we explore the decision-making aspect of "data-based decision-making". The articles in the issue address a wide range of research questions, designs, methods, and analyses, but all focus on data-based decision-making for students with learning difficulties. In this first article, we introduce the topic of…
The Technology Education Graduate Research Database, 1892-2000. CTTE Monograph.
ERIC Educational Resources Information Center
Reed, Philip A., Ed.
The Technology Education Graduate Research Database (TEGRD) was designed in two parts. The first part was a 384 page bibliography of theses and dissertations from 1892-2000. The second part was an online, searchable database of graduate research completed within technology education from 1892 to the present. The primary goals of the project were:…
An analysis of orthopaedic theses in Turkey: Evidence levels and publication rates.
Koca, Kenan; Ekinci, Safak; Akpancar, Serkan; Gemci, Muhammed Hanifi; Erşen, Ömer; Akyıldız, Faruk
2016-10-01
The aim of this study was to present the characteristics and publication patterns of studies arising from orthopedic theses obtained from the National Thesis Center database, in terms of publication year, study type, topic, and level of evidence, between 1974 and 2014. First, the National Thesis Center database was searched for Orthopedics and Traumatology theses. Theses whose summary or full text was available were included in the study. The topics, study types, and quality of study designs were reviewed. The theses were then searched in the PubMed database. Journals of published theses were classified according to category, scope, and 2014 impact factor. 1508 theses were included in the study. Clinical studies comprised 71.7% of the theses, while 25.6% were non-clinical experimental and 2.7% were observational studies. Clinical studies were Level I in 8.6% (n = 93) and Level II in 5.8% (n = 63) of the theses. A total of 224 theses (14.9%) were published in journals indexed in the PubMed database from 1974 to 2012. Fifty-two (23.2%) were published in SCI journals; 136 theses (60.7%) were published in SCI-E journals, and 36 theses (16%) were published in other journals indexed in PubMed. The quantity and quality of published theses need to be improved, and effective measures should be taken to promote thesis quality. Theses from universities and training hospitals that did not allow open access, and incomplete records in the National Thesis Center database, were major limitations of this study. Copyright © 2016 Turkish Association of Orthopaedics and Traumatology. Production and hosting by Elsevier B.V. All rights reserved.
Effectiveness of Occupational Health and Safety Training: A Systematic Review with Meta-Analysis
ERIC Educational Resources Information Center
Ricci, Federico; Chiesi, Andrea; Bisio, Carlo; Panari, Chiara; Pelosi, Annalisa
2016-01-01
Purpose: This meta-analysis aims to verify the efficacy of occupational health and safety (OHS) training in terms of knowledge, attitude and beliefs, behavior and health. Design/methodology/approach: The authors included studies published in English (2007-2014) selected from ten databases. Eligibility criteria were studies concerned with the…
Scammon, Debra L; Tomoaia-Cotisel, Andrada; Day, Rachel L; Day, Julie; Kim, Jaewhan; Waitzman, Norman J; Farrell, Timothy W; Magill, Michael K
2013-12-01
To demonstrate the value of mixed methods in the study of practice transformation and illustrate procedures for connecting methods and for merging findings to enhance the meaning derived. An integrated network of university-owned primary care practices at the University of Utah (Community Clinics or CC). CC has adopted Care by Design, its version of the Patient-Centered Medical Home. Convergent case study mixed methods design. Analysis of archival documents, internal operational reports, in-clinic observations, chart audits, surveys, semistructured interviews, focus groups, Centers for Medicare and Medicaid Services database, and the Utah All Payer Claims Database. Each data source enriched our understanding of the change process and of the reasons that certain changes were more difficult than others, both in general and for particular clinics. Mixed methods enabled generation and testing of hypotheses about change and led to a comprehensive understanding of practice change. Mixed methods are useful in studying practice transformation. Challenges exist but can be overcome with careful planning and persistence. © Health Research and Educational Trust.
Gallagher, Sarah A; Smith, Angela B; Matthews, Jonathan E; Potter, Clarence W; Woods, Michael E; Raynor, Mathew; Wallen, Eric M; Rathmell, W Kimryn; Whang, Young E; Kim, William Y; Godley, Paul A; Chen, Ronald C; Wang, Andrew; You, Chaochen; Barocas, Daniel A; Pruthi, Raj S; Nielsen, Matthew E; Milowsky, Matthew I
2014-01-01
The management of genitourinary malignancies requires a multidisciplinary care team composed of urologists, medical oncologists, and radiation oncologists. A genitourinary (GU) oncology clinical database is an invaluable resource for patient care and research. Although electronic medical records provide a single web-based record used for clinical care, billing, and scheduling, information is typically stored in a discipline-specific manner and data extraction is often not applicable to a research setting. A GU oncology database may be used for the development of multidisciplinary treatment plans, analysis of disease-specific practice patterns, and identification of patients for research studies. Despite this potential utility, there are many important considerations that must be addressed when developing and implementing a discipline-specific database. The creation of the GU oncology database, covering prostate, bladder, and kidney cancers, and the identification of the necessary variables were facilitated by meetings of stakeholders in medical oncology, urology, and radiation oncology at the University of North Carolina (UNC) at Chapel Hill, with a template data dictionary provided by the Department of Urologic Surgery at Vanderbilt University Medical Center. Utilizing Research Electronic Data Capture (REDCap, version 4.14.5), the UNC Genitourinary OncoLogy Database (UNC GOLD) was designed and implemented. The process of designing and implementing a discipline-specific clinical database requires many important considerations. The primary consideration is determining the relationship between the database and the Institutional Review Board (IRB), given the potential applications for both clinical and research uses. Several other necessary steps include ensuring information technology security and federal regulation compliance; determination of a core complete dataset; creation of standard operating procedures; standardizing entry of free-text fields; use of data exports, queries, and de-identification strategies; inclusion of individual investigators' data; and strategies for prioritizing specific projects and data entry. A discipline-specific database requires buy-in from all stakeholders, meticulous development, and data entry resources to generate a unique platform for housing information that may be used for clinical care and research with IRB approval. The steps and issues identified in the development of UNC GOLD provide a process map for others interested in developing a GU oncology database. Copyright © 2014 Elsevier Inc. All rights reserved.
Classroom Laboratory Report: Using an Image Database System in Engineering Education.
ERIC Educational Resources Information Center
Alam, Javed; And Others
1991-01-01
Describes an image database system assembled using separate computer components that was developed to overcome text-only computer hardware storage and retrieval limitations for a pavement design class. (JJK)
The Hawaiian Algal Database: a laboratory LIMS and online resource for biodiversity data
Wang, Norman; Sherwood, Alison R; Kurihara, Akira; Conklin, Kimberly Y; Sauvage, Thomas; Presting, Gernot G
2009-01-01
Background: Organization and presentation of biodiversity data is greatly facilitated by databases that are specially designed to allow easy data entry and organized data display. Such databases also have the capacity to serve as Laboratory Information Management Systems (LIMS). The Hawaiian Algal Database was designed to showcase specimens collected from the Hawaiian Archipelago, enabling users around the world to compare their specimens with our photographs and DNA sequence data, and to provide lab personnel with an organizational tool for storing various biodiversity data types. Description: We describe the Hawaiian Algal Database, a comprehensive and searchable database containing photographs and micrographs, geo-referenced collecting information, taxonomic checklists and standardized DNA sequence data. All data for individual samples are linked through unique accession numbers. Users can search online for sample information by accession number, numerous levels of taxonomy, or collection site. At the present time the database contains data representing over 2,000 samples of marine, freshwater and terrestrial algae from the Hawaiian Archipelago. These samples are primarily red algae, although other taxa are being added. Conclusion: The Hawaiian Algal Database is a digital repository for Hawaiian algal samples and acts as a LIMS for the laboratory. Users can make use of the online search tool to view and download specimen photographs and micrographs, DNA sequences and relevant habitat data, including georeferenced collecting locations. It is publicly available at . PMID:19728892
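The accession-number linkage described above can be sketched as a minimal relational schema; the tables and fields below are illustrative guesses, not the database's published schema.

    import sqlite3

    # Hedged sketch of accession-number linkage across biodiversity data types.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE specimen (
            accession TEXT PRIMARY KEY,
            taxon     TEXT,
            site      TEXT,
            latitude  REAL,
            longitude REAL
        );
        CREATE TABLE photo (
            accession TEXT REFERENCES specimen(accession),
            file_path TEXT
        );
        CREATE TABLE dna_sequence (
            accession TEXT REFERENCES specimen(accession),
            marker    TEXT,  -- e.g., rbcL
            sequence  TEXT
        );
    """)
    conn.execute("INSERT INTO specimen VALUES ('HA-0001', 'Gracilaria', 'Oahu', 21.3, -157.8)")
    # Every data type for a sample is reachable through its accession number:
    rows = conn.execute("""
        SELECT s.taxon, p.file_path, d.marker
        FROM specimen s
        LEFT JOIN photo p USING (accession)
        LEFT JOIN dna_sequence d USING (accession)
        WHERE s.accession = 'HA-0001'
    """).fetchall()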
Application of China's National Forest Continuous Inventory database.
Xie, Xiaokui; Wang, Qingli; Dai, Limin; Su, Dongkai; Wang, Xinchuang; Qi, Guang; Ye, Yujing
2011-12-01
The maintenance of a timely, reliable and accurate spatial database on current forest ecosystem conditions and changes is essential to characterize and assess forest resources and support sustainable forest management. Information for such a database can be obtained only through a continuous forest inventory. The National Forest Continuous Inventory (NFCI) is the first level of China's three-tiered inventory system. The NFCI is administered by the State Forestry Administration; data are acquired by five inventory institutions around the country. Several important components of the database include land type, forest classification and age-class/age-group. The NFCI database in China is constructed based on 5-year inventory periods, resulting in some of the data not being timely when reports are issued. To address this problem, a forest growth simulation model has been developed to update the database for years between the periodic inventories. In order to aid in forest plan design and management, a three-dimensional virtual reality system of forest landscapes for selected units in the database (compartment or sub-compartment) has also been developed, based on the Virtual Reality Modeling Language. In addition, a transparent internet publishing system for the spatial database, based on open-source WebGIS (UMN MapServer), has been designed and utilized to enhance public understanding and encourage free participation of interested parties in the development, implementation, and planning of sustainable forest management.
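The between-inventory update can be illustrated with a toy growth model in Python; the logistic form and every parameter value are assumptions for illustration only, not the simulation model actually used with the NFCI database.

    # Hedged sketch of updating a stand record between 5-year inventories.

    def update_stand_volume(volume_m3_ha, years, rate=0.05, capacity=400.0):
        v = volume_m3_ha
        for _ in range(years):
            v += rate * v * (1.0 - v / capacity)  # annual logistic increment
        return v

    # Project a compartment surveyed in 2008 to an estimated 2011 value:
    estimated_2011 = update_stand_volume(volume_m3_ha=120.0, years=3)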
An integrated approach to reservoir modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Donaldson, K.
1993-08-01
The purpose of this research is to evaluate the usefulness of the following procedural and analytical methods in investigating the heterogeneity of the oil reserve for the Mississippian Big Injun Sandstone of the Granny Creek field, Clay and Roane counties, West Virginia: (1) relational database, (2) two-dimensional cross sections, (3) true three-dimensional modeling, (4) geohistory analysis, (5) a rule-based expert system, and (6) geographical information systems. The large data set could not be effectively integrated and interpreted without this approach. A relational database was designed to fully integrate three- and four-dimensional data. The database provides an effective means for maintaining and manipulating the data. A two-dimensional cross-section program was designed to correlate stratigraphy, depositional environments, porosity, permeability, and petrographic data. This flexible design allows for additional four-dimensional data. Dynamic Graphics™ …
Kovalskys, Irina; Fisberg, Mauro; Gómez, Georgina; Rigotti, Attilio; Cortés, Lilia Yadira; Yépez, Martha Cecilia; Pareja, Rossina G; Herrera-Cuenca, Marianella; Zimberg, Ioná Z; Tucker, Katherine L; Koletzko, Berthold; Pratt, Michael
2015-09-16
Between-country comparisons of estimated dietary intake are particularly prone to error when different food composition tables are used. The objective of this study was to describe our procedures and rationale for the selection and adaptation of available food composition to a single database to enable cross-country nutritional intake comparisons. Latin American Study of Nutrition and Health (ELANS) is a multicenter cross-sectional study of representative samples from eight Latin American countries. A standard study protocol was designed to investigate dietary intake of 9000 participants enrolled. Two 24-h recalls using the Multiple Pass Method were applied among the individuals of all countries. Data from 24-h dietary recalls were entered into the Nutrition Data System for Research (NDS-R) program after a harmonization process between countries to include local foods and appropriately adapt the NDS-R database. A food matching standardized procedure involving nutritional equivalency of local food reported by the study participants with foods available in the NDS-R database was strictly conducted by each country. Standardization of food and nutrient assessments has the potential to minimize systematic and random errors in nutrient intake estimations in the ELANS project. This study is expected to result in a unique dataset for Latin America, enabling cross-country comparisons of energy, macro- and micro-nutrient intake within this region.
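A helper along the lines of the food-matching step might rank candidate NDS-R entries by closeness of nutrient profile. The sketch below is purely illustrative: ELANS matching was performed by trained staff under a standardized equivalency procedure, and the foods and values shown are invented.

    import math

    # Hedged sketch: rank candidate database entries by closeness of their
    # per-100 g macronutrient profile to a local food.

    KEYS = ("kcal", "protein_g", "fat_g", "carb_g")

    def profile_distance(a, b):
        return math.sqrt(sum((a[k] - b[k]) ** 2 for k in KEYS))

    def rank_candidates(local_food, ndsr_entries):
        return sorted(ndsr_entries, key=lambda e: profile_distance(local_food, e))

    arepa = {"kcal": 219, "protein_g": 4.5, "fat_g": 2.7, "carb_g": 46.0}
    candidates = [
        {"name": "corn tortilla", "kcal": 218, "protein_g": 5.7, "fat_g": 2.9, "carb_g": 44.6},
        {"name": "white bread", "kcal": 266, "protein_g": 8.9, "fat_g": 3.3, "carb_g": 49.4},
    ]
    best_match = rank_candidates(arepa, candidates)[0]["name"]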
Common hyperspectral image database design
NASA Astrophysics Data System (ADS)
Tian, Lixun; Liao, Ningfang; Chai, Ali
2009-11-01
This paper introduces the Common Hyperspectral Image Database (CHIDB), built with a demand-oriented database design method, which comprehensively brings together ground-based spectra, standardized hyperspectral cubes, and spectral analysis to serve a range of applications. The paper presents an integrated approach to retrieving spectral and spatial patterns from remotely sensed imagery using state-of-the-art data mining and advanced database technologies; data mining ideas and functions were incorporated into CHIDB to make it more suitable for agricultural, geological, and environmental applications. A broad range of data from multiple regions of the electromagnetic spectrum is supported, including ultraviolet, visible, near-infrared, thermal infrared, and fluorescence. CHIDB is based on the .NET framework and designed in an MVC architecture comprising five main functional modules: data importer/exporter, image/spectrum viewer, data processor, parameter extractor, and on-line analyzer. The original data are stored in SQL Server 2008 for efficient search, query, and update, and some advanced spectral image processing technologies are used, such as parallel processing in C#. Finally, an application case in agricultural disease detection is presented.
Design of a diagnostic encyclopaedia using AIDA.
van Ginneken, A M; Smeulders, A W; Jansen, W
1987-01-01
Diagnostic Encyclopaedia Workstation (DEW) is the name of a digital encyclopaedia constructed to contain reference knowledge with respect to the pathology of the ovary. Comparing DEW with the common sources of reference knowledge (i.e. books) leads to the following advantages of DEW: it contains more verbal knowledge, pictures and case histories, and it offers information adjusted to the needs of the user. Based on an analysis of the structure of this reference knowledge we have chosen AIDA to develop a relational database and we use a video-disc player to contain the pictorial part of the database. The system consists of a database input version and a read-only run version. The design of the database input version is discussed. Reference knowledge for ovary pathology requires 1-3 Mbytes of memory. At present 15% of this amount is available. The design of the run version is based on an analysis of which information must necessarily be specified to the system by the user to access a desired item of information. Finally, the use of AIDA in constructing DEW is evaluated.
Riffle, Michael; Jaschob, Daniel; Zelter, Alex; Davis, Trisha N
2016-08-05
ProXL is a Web application and accompanying database designed for sharing, visualizing, and analyzing bottom-up protein cross-linking mass spectrometry data with an emphasis on structural analysis and quality control. ProXL is designed to be independent of any particular software pipeline. The import process is simplified by the use of the ProXL XML data format, which shields developers of data importers from the relative complexity of the relational database schema. The database and Web interfaces function equally well for any software pipeline and allow data from disparate pipelines to be merged and contrasted. ProXL includes robust public and private data sharing capabilities, including a project-based interface designed to ensure security and facilitate collaboration among multiple researchers. ProXL provides multiple interactive and highly dynamic data visualizations that facilitate structural-based analysis of the observed cross-links as well as quality control. ProXL is open-source, well-documented, and freely available at https://github.com/yeastrc/proxl-web-app .
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harwood, R.G.; Billington, C.J.; Buitrago, J.
1996-12-01
A Technical Core Group (TCG) was set up in March 1994 to review the design practice provisions for grouted pile-to-sleeve connections, mechanical connections and repairs as part of the international harmonization process for the new ISO Standard, ISO 13819-2, Petroleum and Natural Gas Industries--Offshore Structures, Part 2: Fixed Steel Structures. This paper provides an overview of the development of the proposed new design provisions for grouted connections including the gathering and screening of the data, the evolution of the design formulae, and the evaluation of the resistance factor. Detailed comparisons of the new formulae with current design practice (API, HSE and DnV) are also included. In the development of the new provisions the TCG has been given access to the largest database ever assembled on this topic. This database includes all the major testing programs performed over the last 20 years, and recent UK and Norwegian research projects not previously reported. The limitations in the database are discussed and the areas where future research would be of benefit are highlighted.
Small molecule mimics of DFTamP1, a database designed anti-Staphylococcal peptide
Dong, Yuxiang; Lushnikova, Tamara; Golla, Radha M.; Wang, Xiaofang; Wang, Guangshun
2017-01-01
Antimicrobial peptides (AMPs) are important templates for developing new antimicrobial agents. Previously, we developed a database filtering technology that enabled us to design a potent anti-Staphylococcal peptide DFTamP1. Using this same design approach, we now report the discovery of a new class of bis-indole diimidazolines as AMP small molecule mimics. The best compound killed multiple S. aureus clinical strains in both planktonic and biofilm forms. The compound appeared to target bacterial membranes with antimicrobial activity and membrane permeation ability similar to daptomycin. PMID:28011203
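Database filtering, as the name suggests, derives the most probable design parameters (length, composition, charge) from a set of known active peptides. The Python sketch below shows the general idea on toy sequences, not the curated antimicrobial peptide data the authors filtered.

    from collections import Counter

    # Hedged sketch of the database-filtering idea on toy peptide sequences.

    def most_probable_parameters(peptides):
        lengths = sorted(len(p) for p in peptides)
        median_length = lengths[len(lengths) // 2]
        composition = Counter("".join(peptides))
        dominant_residues = [aa for aa, _ in composition.most_common(3)]
        net_charge = sum(composition[aa] for aa in "KR") - \
                     sum(composition[aa] for aa in "DE")
        mean_net_charge = net_charge / len(peptides)
        return median_length, dominant_residues, mean_net_charge

    toy_set = ["GLLSLLSLLGKLL", "KLLLLLKLWLKLL", "GLFDIIKKIAESF"]
    length, residues, charge = most_probable_parameters(toy_set)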
Database for the degradation risk assessment of groundwater resources (Southern Italy)
NASA Astrophysics Data System (ADS)
Polemio, M.; Dragone, V.; Mitolo, D.
2003-04-01
The risk of quality degradation and declining availability of groundwater resources has been characterised for a wide coastal plain (Basilicata region, Southern Italy), an area covering 40 km along the Ionian Sea and extending 10 km inland. The quality degradation is due to two phenomena: pollution from the discharge of waste water (coming from urban areas) and salt pollution, which is related to, but not only to, seawater intrusion. The lowering of availability is due to overexploitation but also to drought effects. To this purpose, the historical data of 1,130 wells have been collected. The wells, homogeneously distributed over the area, were the source of geological, stratigraphical, hydrogeological and geochemical data. In order to manage space-related information via a GIS, a database system has been devised to encompass all the surveyed wells and the body of information available per well. Geo-databases were designed to comprise the four types of data collected: a database including geometrical, geological and hydrogeological data on wells (WDB), a database devoted to chemical and physical data on groundwater (CDB), a database including the geotechnical parameters (GDB), and a database concerning piezometric and hydrological (rainfall, air temperature, river discharge) data (HDB). The record pertaining to each well is identified in these databases by the progressive number of the well itself. The databases are designed as follows: a) the WDB contains 1,158 records of 31 fields, mainly describing the geometry of the well and the stratigraphy; b) the CDB encompasses data on the 157 wells on which the chemical and physical analyses of groundwater have been carried out; more than one record is associated with these 157 wells, owing to periodic monitoring and analysis; c) the GDB covers 61 wells, to which are associated the geotechnical parameters obtained from soil samples taken at various depths; d) the HDB is designed to permit the analysis of long time series (from 1918) of piezometric data, monitored in more than 60 wells, together with temperature, rainfall and river discharge data. Based on these geo-databases, geostatistical processing of the data has permitted characterisation of the degradation risk of the groundwater resources of this wide coastal aquifer.
Kim, Woo-Yeon; Kang, Sungsoo; Kim, Byoung-Chul; Oh, Jeehyun; Cho, Seongwoong; Bhak, Jong; Choi, Jong-Soon
2008-01-01
Cyanobacteria are model organisms for studying photosynthesis, carbon and nitrogen assimilation, evolution of plant plastids, and adaptability to environmental stresses. Despite many studies on cyanobacteria, there is no web-based database of their regulatory and signaling protein-protein interaction networks to date. We report a database and website SynechoNET that provides predicted protein-protein interactions. SynechoNET shows cyanobacterial domain-domain interactions as well as their protein-level interactions using the model cyanobacterium, Synechocystis sp. PCC 6803. It predicts the protein-protein interactions using public interaction databases that contain mutually complementary and redundant data. Furthermore, SynechoNET provides information on transmembrane topology, signal peptide, and domain structure in order to support the analysis of regulatory membrane proteins. Such biological information can be queried and visualized in user-friendly web interfaces that include the interactive network viewer and search pages by keyword and functional category. SynechoNET is an integrated protein-protein interaction database designed to analyze regulatory membrane proteins in cyanobacteria. It provides a platform for biologists to extend the genomic data of cyanobacteria by predicting interaction partners, membrane association, and membrane topology of Synechocystis proteins. SynechoNET is freely available at http://synechocystis.org/ or directly at http://bioportal.kobic.kr/SynechoNET/.
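The domain-based prediction strategy described above can be sketched compactly: two proteins are predicted to interact when they carry a domain pair present in a reference set of domain-domain interactions. The loci and domain assignments below are placeholders, not SynechoNET data.

    # Hedged sketch of domain-based protein-protein interaction prediction.

    known_ddi = {("PAS", "HisKA"), ("HisKA", "REC")}

    protein_domains = {
        "slr0001": {"PAS", "HisKA"},  # hypothetical Synechocystis loci
        "slr0002": {"REC"},
        "slr0003": {"GGDEF"},
    }

    def predicted_interactions(proteins, ddi):
        names = sorted(proteins)
        pairs = []
        for i, a in enumerate(names):
            for b in names[i + 1:]:
                if any((x, y) in ddi or (y, x) in ddi
                       for x in proteins[a] for y in proteins[b]):
                    pairs.append((a, b))
        return pairs

    print(predicted_interactions(protein_domains, known_ddi))  # [('slr0001', 'slr0002')]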
Torgerson, Carinna M; Quinn, Catherine; Dinov, Ivo; Liu, Zhizhong; Petrosyan, Petros; Pelphrey, Kevin; Haselgrove, Christian; Kennedy, David N; Toga, Arthur W; Van Horn, John Darrell
2015-03-01
Under the umbrella of the National Database for Clinical Trials (NDCT) related to mental illnesses, the National Database for Autism Research (NDAR) seeks to gather, curate, and make openly available neuroimaging data from NIH-funded studies of autism spectrum disorder (ASD). NDAR has recently made its database accessible through the LONI Pipeline workflow design and execution environment to enable large-scale analyses of cortical architecture and function via local, cluster, or "cloud"-based computing resources. This presents a unique opportunity to overcome many of the customary limitations to fostering biomedical neuroimaging as a science of discovery. Providing open access to primary neuroimaging data, workflow methods, and high-performance computing will increase uniformity in data collection protocols, encourage greater reliability of published data, results replication, and broaden the range of researchers now able to perform larger studies than ever before. To illustrate the use of NDAR and LONI Pipeline for performing several commonly performed neuroimaging processing steps and analyses, this paper presents example workflows useful for ASD neuroimaging researchers seeking to begin using this valuable combination of online data and computational resources. We discuss the utility of such database and workflow processing interactivity as a motivation for the sharing of additional primary data in ASD research and elsewhere.
Clinical trial resources on the internet must be designed to reach underrepresented minorities.
Wilson, John J; Mick, Rosemarie; Wei, S Jack; Rustgi, Anil K; Markowitz, Sanford D; Hampshire, Maggie; Metz, James M
2006-01-01
Internet-based clinical trial information services are being developed to increase recruitment to studies. However, there are limited data that evaluate their ability to reach elderly and underrepresented minority populations. This study was designed to evaluate the ability of an established clinical trials registry to reach these populations based on expected Internet use. The study compares general Internet users to participants who enrolled in an Internet-based colorectal cancer clinical trials registry established by OncoLink (www.oncolink.org) and the National Colorectal Cancer Research Alliance. Observed rates of demographic groupings were compared to those established for general Internet users. Two thousand four hundred and thirty-seven participants from the continental United States used the Internet to register for the database. New England, the Mid-Atlantic region, and the Southeast had the highest relative frequency of participation in the database, whereas the Upper Midwest, California, and the South had the lowest rates. Compared to general Internet users, there was an overrepresentation of women (73% vs. 50%) and participants over 55 years old (27% vs. 14%). However, there was an underrepresentation of minorities (10.3% vs. 22%), particularly African Americans (3.1% vs. 8%) and Hispanics (2.8% vs. 9%). The Internet is a growing medium for registration in clinical trials databases. However, even taking into account the selection bias of Internet accessibility, the demographics of general Internet users and of those registering for clinical trials remain widely disparate, most notably in the underrepresentation of minorities. Internet-based educational and recruitment services for clinical trials must be designed to reach these underrepresented minorities to avoid selection biases in future clinical trials.
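A rough sketch of the observed-versus-expected comparison this kind of study reports: testing whether the registry's minority share (10.3% of 2,437 registrants) differs from the 22% expected among general Internet users. A one-sample z-test for a proportion is one standard choice; the study's exact method may differ.

```python
from math import sqrt

n = 2437            # registrants from the continental United States
observed_p = 0.103  # minority share among registrants
expected_p = 0.22   # minority share among general Internet users

# Standard error of the proportion under the expected rate.
se = sqrt(expected_p * (1 - expected_p) / n)
z = (observed_p - expected_p) / se
print(f"z = {z:.1f}")  # strongly negative: minorities are underrepresented
```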
Design and deployment of a large brain-image database for clinical and nonclinical research
NASA Astrophysics Data System (ADS)
Yang, Guo Liang; Lim, Choie Cheio Tchoyoson; Banukumar, Narayanaswami; Aziz, Aamer; Hui, Francis; Nowinski, Wieslaw L.
2004-04-01
An efficient database is an essential component of organizing diverse information on image metadata and patient information for research in medical imaging. This paper describes the design, development, and deployment of a large database system serving as a brain image repository that can be used across different platforms in a variety of medical research. It forms the infrastructure that links hospitals and institutions together and shares data among them. The database contains patient-, pathology-, image-, research- and management-specific data. The functionalities of the database system include image uploading, storage, indexing, downloading, and sharing, as well as database querying and management, with security and data anonymization concerns well taken care of. The database is structured as a multi-tier client-server architecture comprising a relational database management system, a security layer, an application layer, and a user interface. An image source adapter has been developed to handle most of the popular image formats. The database has a user interface based on web browsers and is easy to use. We used the Java programming language for its platform independence and vast function libraries. The brain image database can sort data according to clinically relevant information, which can be used effectively in research from the clinicians' point of view. The database is suitable for validation of algorithms on large populations of cases. Medical images for processing can be identified and organized based on information in image metadata. Clinical research in various pathologies can thus be performed with greater efficiency, and large image repositories can be managed more effectively. A prototype of the system has been installed in a few hospitals and is working to the satisfaction of the clinicians.
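A minimal sketch of the anonymization concern the abstract mentions, at the application layer: before an image-metadata record leaves the server, direct patient identifiers are stripped and replaced by an opaque, stable pseudonym. The field names and salting scheme are illustrative assumptions, not the system's actual design (and the paper's implementation is in Java rather than Python).

```python
import hashlib

def anonymize(record: dict, salt: str = "site-secret") -> dict:
    """Return a copy of an image-metadata record safe for research use."""
    # Derive a stable pseudonym so repeat scans of one patient still link up.
    opaque_id = hashlib.sha256(
        (salt + record["patient_id"]).encode()
    ).hexdigest()[:12]
    # Drop direct identifiers; keep clinically relevant research fields.
    safe = {k: v for k, v in record.items()
            if k not in {"patient_id", "name", "birth_date"}}
    safe["study_key"] = opaque_id
    return safe

print(anonymize({"patient_id": "P123", "name": "Jane Doe",
                 "birth_date": "1970-01-01",
                 "pathology": "glioma", "modality": "MR"}))
```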
Draft secure medical database standard.
Pangalos, George
2002-01-01
Medical database security is a particularly important issue for all healthcare establishments. Medical information systems are intended to support a wide range of pertinent health issues today, for example: assuring the quality of care, supporting effective management of health services institutions, monitoring and containing the cost of care, implementing technology into care without violating social values, ensuring the equity and availability of care, and preserving humanity despite the proliferation of technology. In this context, medical database security aims primarily to support: high availability, accuracy, and consistency of the stored data; medical professional secrecy and confidentiality; and the protection of the privacy of the patient. These properties, though of a technical nature, basically require that the system is actually helpful for medical care and not harmful to patients. These latter properties require in turn not only that fundamental ethical principles are not violated by employing database systems, but that they are effectively enforced by technical means. This document reviews the existing and emerging work on the security of medical database systems. It presents in detail the problems and requirements related to medical database security, and addresses the problems of medical database security policies, secure design methodologies, and implementation techniques. It also describes the current legal framework and regulatory requirements for medical database security. The issue of medical database security guidelines is also examined in detail, and the current national and international efforts in the area are studied, together with an overview of the research work in the area. The document also presents in detail what is, to our knowledge, the most complete set of security guidelines for the development and operation of medical database systems.
Multidimensional Learner Model In Intelligent Learning System
NASA Astrophysics Data System (ADS)
Deliyska, B.; Rozeva, A.
2009-11-01
The learner model in an intelligent learning system (ILS) has to ensure the personalization (individualization) and the adaptability of e-learning in an online learner-centered environment. An ILS is a distributed e-learning system whose modules can be independent and located in different nodes (servers) on the Web. This kind of e-learning is achieved through the resources of the Semantic Web and is designed and developed around a course, a group of courses, or a specialty. An essential part of an ILS is the learner model database, which contains structured data about the learner's profile and temporal status in the learning process of one or more courses. In this paper, the position of the learner model in the ILS is considered and a relational database is designed from the learner's domain ontology. A multidimensional modeling agent for the source database is designed and the resultant learner data cube is presented. The agent's modules are proposed with corresponding algorithms and procedures. Guidelines for multidimensional (OLAP) analysis of the resultant learner model for designing a dynamic learning strategy are highlighted.
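A small sketch of the "learner data cube" idea: aggregating per-learner activity records along course and time dimensions as an OLAP-style pivot. The column names and data are invented for illustration, and pandas stands in for whatever multidimensional engine the paper's agent targets.

```python
import pandas as pd

# Invented per-learner activity records (the relational source database).
records = pd.DataFrame({
    "learner": ["ana", "ana", "ben", "ben"],
    "course":  ["db101", "ai201", "db101", "db101"],
    "week":    [1, 1, 1, 2],
    "score":   [0.8, 0.6, 0.7, 0.9],
})

# Roll the flat records up into a learner x (course, week) cube.
cube = records.pivot_table(index="learner",
                           columns=["course", "week"],
                           values="score", aggfunc="mean")
print(cube)
```

Slicing such a cube by course or by week is what supports the kind of dynamic, per-learner strategy decisions the paper describes.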