Reliability database development for use with an object-oriented fault tree evaluation program
NASA Technical Reports Server (NTRS)
Heger, A. Sharif; Harrington, Robert J.; Koen, Billy V.; Patterson-Hine, F. Ann
1989-01-01
A description is given of the development of a fault-tree analysis method using object-oriented programming. In addition, the authors discuss the programs that have been developed, or are under development, to connect a fault-tree analysis routine to a reliability database. To assess the performance of the routines, a relational database simulating one of the nuclear power industry databases has been constructed. For a realistic assessment of the results of this project, the use of one of the existing nuclear power reliability databases is planned.
Generation of an Aerothermal Data Base for the X-33 Spacecraft
NASA Technical Reports Server (NTRS)
Roberts, Cathy; Huynh, Loc
1998-01-01
The X-33 experimental program is a cooperative program between industry and NASA, managed by Lockheed-Martin Skunk Works to develop an experimental vehicle to demonstrate new technologies for a single-stage-to-orbit, fully reusable launch vehicle (RLV). One of the new technologies to be demonstrated is an advanced Thermal Protection System (TPS) being designed by BF Goodrich (formerly Rohr, Inc.) with support from NASA. The calculation of an aerothermal database is crucial to identifying the critical design environment data for the TPS. The NASA Ames X-33 team has generated such a database using Computational Fluid Dynamics (CFD) analyses, engineering analysis methods and various programs to compare and interpolate the results from the CFD and the engineering analyses. This database, along with a program used to query the database, is used extensively by several X-33 team members to help them in designing the X-33. This paper will describe the methods used to generate this database, the program used to query the database, and will show some of the aerothermal analysis results for the X-33 aircraft.
Correlates of Access to Business Research Databases
ERIC Educational Resources Information Center
Gottfried, John C.
2010-01-01
This study examines potential correlates of business research database access through academic libraries serving top business programs in the United States. Results indicate that greater access to research databases is related to enrollment in graduate business programs, but not to overall enrollment or status as a public or private institution.…
The Marshall Islands Data Management Program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stoker, A.C.; Conrado, C.L.
1995-09-01
This report is a resource document of the methods and procedures currently used in the Data Management Program of the Marshall Islands Dose Assessment and Radioecology Project. Since 1973, over 60,000 environmental samples have been collected. Our program includes relational database design, programming and maintenance; sample and information management; sample tracking; quality control; and data entry, evaluation and reduction. Building a useful scientific database requires careful planning in order to fulfill the requirements of any large research program. Compilation of scientific results requires consolidation of information from several databases, and incorporation of new information as it is generated. The success in combining and organizing all radionuclide analyses, sample information, and statistical results into a readily accessible form is critical to our project.
Bohl, Daniel D; Russo, Glenn S; Basques, Bryce A; Golinvaux, Nicholas S; Fu, Michael C; Long, William D; Grauer, Jonathan N
2014-12-03
There has been an increasing use of national databases to conduct orthopaedic research. Questions regarding the validity and consistency of these studies have not been fully addressed. The purpose of this study was to test for similarity in reported measures between two national databases commonly used for orthopaedic research. A retrospective cohort study of patients undergoing lumbar spinal fusion procedures during 2009 to 2011 was performed in two national databases: the Nationwide Inpatient Sample and the National Surgical Quality Improvement Program. Demographic characteristics, comorbidities, and inpatient adverse events were directly compared between databases. The total numbers of patients included were 144,098 from the Nationwide Inpatient Sample and 8434 from the National Surgical Quality Improvement Program. There were only small differences in demographic characteristics between the two databases. There were large differences between databases in the rates at which specific comorbidities were documented. Non-morbid obesity was documented at rates of 9.33% in the Nationwide Inpatient Sample and 36.93% in the National Surgical Quality Improvement Program (relative risk, 0.25; p < 0.05). Peripheral vascular disease was documented at rates of 2.35% in the Nationwide Inpatient Sample and 0.60% in the National Surgical Quality Improvement Program (relative risk, 3.89; p < 0.05). Similarly, there were large differences between databases in the rates at which specific inpatient adverse events were documented. Sepsis was documented at rates of 0.38% in the Nationwide Inpatient Sample and 0.81% in the National Surgical Quality Improvement Program (relative risk, 0.47; p < 0.05). Acute kidney injury was documented at rates of 1.79% in the Nationwide Inpatient Sample and 0.21% in the National Surgical Quality Improvement Program (relative risk, 8.54; p < 0.05). 
As database studies become more prevalent in orthopaedic surgery, authors, reviewers, and readers should view these studies with caution. This study shows that two commonly used databases can identify demographically similar patients undergoing a common orthopaedic procedure; however, the databases document markedly different rates of comorbidities and inpatient adverse events. The differences are likely the result of the very different mechanisms through which the databases collect their comorbidity and adverse event data. Findings highlight concerns regarding the validity of orthopaedic database research. Copyright © 2014 by The Journal of Bone and Joint Surgery, Incorporated.
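The relative risks quoted above are simple ratios of the documentation rates in the two databases. As a minimal illustration (the paper's own analysis code is not described here), the calculation can be sketched in Python using the rates reported in the abstract:

```python
def relative_risk(rate_a: float, rate_b: float) -> float:
    """Ratio of the rate documented in database A to that in database B."""
    if rate_b == 0:
        raise ValueError("rate_b must be nonzero")
    return rate_a / rate_b

# Documentation rates (percent) reported in the abstract:
rr_obesity = relative_risk(9.33, 36.93)  # NIS vs. NSQIP, non-morbid obesity -> ~0.25
rr_pvd = relative_risk(2.35, 0.60)       # peripheral vascular disease -> ~3.9
                                         # (abstract reports 3.89, presumably from unrounded counts)
```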
Database of Mechanical Properties of Textile Composites
NASA Technical Reports Server (NTRS)
Delbrey, Jerry
1996-01-01
This report describes the approach followed to develop a database for mechanical properties of textile composites. The data in this database are assembled from NASA Advanced Composites Technology (ACT) programs and from data in the public domain. This database meets the data documentation requirements of MIL-HDBK-17, Section 8.1.2, which describes in detail the type and amount of information needed to completely document composite material properties. The database focuses on mechanical properties of textile composites. Properties are available for a range of parameters such as direction, fiber architecture, materials, environmental condition, and failure mode. The composite materials in the database include innovative textile architectures such as the braided, woven, and knitted materials evaluated under the NASA ACT programs. In summary, the database contains results for approximately 3500 coupon-level tests, ten different fiber/resin combinations, and seven different textile architectures. It also includes a limited amount of prepreg tape composite data from ACT programs where side-by-side comparisons were made.
ERIC Educational Resources Information Center
Darancik, Yasemin
2016-01-01
It has been observed that database-driven translation programs are often used, both in and outside the classroom, without awareness of their limitations, and that this causes many problems in foreign language learning and teaching. To draw attention to this problem, this study examines whether such a program yields satisfactory results by making translations from…
Extending the Online Public Access Catalog into the Microcomputer Environment.
ERIC Educational Resources Information Center
Sutton, Brett
1990-01-01
Describes PCBIS, a database program for MS-DOS microcomputers that features a utility for automatically converting online public access catalog search results stored as text files into structured database files that can be searched, sorted, edited, and printed. Topics covered include the general features of the program, record structure, record…
Kamali, Parisa; Zettervall, Sara L; Wu, Winona; Ibrahim, Ahmed M S; Medin, Caroline; Rakhorst, Hinne A; Schermerhorn, Marc L; Lee, Bernard T; Lin, Samuel J
2017-04-01
Research derived from large-volume databases plays an increasing role in the development of clinical guidelines and health policy. In breast cancer research, the Surveillance, Epidemiology and End Results, National Surgical Quality Improvement Program, and Nationwide Inpatient Sample databases are widely used. This study aims to compare the trends in immediate breast reconstruction and identify the drawbacks and benefits of each database. Patients with invasive breast cancer and ductal carcinoma in situ were identified from each database (2005-2012). Trends of immediate breast reconstruction over time were evaluated. Patient demographics and comorbidities were compared. Subgroup analysis of immediate breast reconstruction use per race was conducted. Within the three databases, 1.2 million patients were studied. Immediate breast reconstruction in invasive breast cancer patients increased significantly over time in all databases. A similar significant upward trend was seen in ductal carcinoma in situ patients. Significant differences in immediate breast reconstruction rates were seen among races; and the disparity differed among the three databases. Rates of comorbidities were similar among the three databases. There has been a significant increase in immediate breast reconstruction; however, the extent of the reporting of overall immediate breast reconstruction rates and of racial disparities differs significantly among databases. The Nationwide Inpatient Sample and the National Surgical Quality Improvement Program report similar findings, with the Surveillance, Epidemiology and End Results database reporting results significantly lower in several categories. These findings suggest that use of the Surveillance, Epidemiology and End Results database may not be universally generalizable to the entire U.S.
2004-01-01
A computer program (CalcAnesth) was developed with Visual Basic for the purpose of calculating the doses and prices of injectable medications on the basis of body weight or body surface area. The drug names, concentrations, and prices are loaded from a drug database. This database is a simple text file that the user can easily create or modify. The animal names and body weights can be loaded from a similar database. After the dose and the units are typed into the user interface, the results are automatically displayed. The program is able to open and save anesthetic protocols, and to export or print the results. This CalcAnesth program can be useful in clinical veterinary anesthesiology and research. The rationale for dosing on the basis of body surface area is also discussed in this article. PMID:14979437
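The abstract does not give CalcAnesth's formulas. A hedged sketch of the two dosing modes it describes might look like the following; the Meeh-formula constant k = 10.1 (a value commonly cited for dogs) is an assumption for illustration, not a value from the paper:

```python
def dose_by_weight(weight_kg: float, dose_mg_per_kg: float) -> float:
    """Dose (mg) scaled linearly with body weight."""
    return weight_kg * dose_mg_per_kg

def body_surface_area_m2(weight_kg: float, k: float = 10.1) -> float:
    """Meeh formula: BSA (m^2) = k * weight(kg)^(2/3) / 100.
    k = 10.1 is a commonly cited constant for dogs (assumed here)."""
    return k * weight_kg ** (2.0 / 3.0) / 100.0

def dose_by_bsa(weight_kg: float, dose_mg_per_m2: float, k: float = 10.1) -> float:
    """Dose (mg) scaled with body surface area."""
    return dose_mg_per_m2 * body_surface_area_m2(weight_kg, k)

def drug_price(dose_mg: float, conc_mg_per_ml: float, price_per_ml: float) -> float:
    """Price of the injected volume implied by the dose and drug concentration."""
    return (dose_mg / conc_mg_per_ml) * price_per_ml
```

For example, a 10 kg animal at 2 mg/kg receives 20 mg; an 8 kg animal has a BSA of about 0.404 m² under this assumed constant.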
A World Wide Web (WWW) server database engine for an organelle database, MitoDat.
Lemkin, P F; Chipperfield, M; Merril, C; Zullo, S
1996-03-01
We describe a simple database search engine, "dbEngine", which may be used to quickly create a searchable database on a World Wide Web (WWW) server. Data may be prepared from spreadsheet programs (such as Excel) or from tables exported from relational database systems. This Common Gateway Interface (CGI-BIN) program is used with a WWW server such as those available commercially, or from the National Center for Supercomputing Applications (NCSA) or CERN. Its capabilities include: (i) searching records by combinations of terms connected with ANDs or ORs; (ii) returning search results as hypertext links to other WWW database servers; (iii) mapping lists of literature reference identifiers to the full references; (iv) creating bidirectional hypertext links between pictures and the database. DbEngine has been used to support the MitoDat database (Mendelian and non-Mendelian inheritance associated with the Mitochondrion) on the WWW.
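dbEngine itself is not specified beyond this summary, but the AND/OR term matching it describes can be sketched in a few lines of Python; the record fields and the search API shape are assumptions, and the tab-delimited input stands in for the spreadsheet export the abstract mentions:

```python
import csv
from io import StringIO

def matches(record: dict, terms: list, mode: str = "AND") -> bool:
    """True if the record's fields contain all (AND) or any (OR) of the terms."""
    text = " ".join(str(v).lower() for v in record.values())
    hits = [term.lower() in text for term in terms]
    return all(hits) if mode == "AND" else any(hits)

def search(records, terms, mode="AND"):
    """Return the records satisfying the term combination."""
    return [r for r in records if matches(r, terms, mode)]

# Records loaded from a tab-delimited spreadsheet export (illustrative data):
tsv = "id\tgene\tnote\n1\tMT-CO1\tcytochrome oxidase\n2\tMT-ND1\tNADH dehydrogenase\n"
records = list(csv.DictReader(StringIO(tsv), delimiter="\t"))
hits = search(records, ["mt-", "oxidase"], mode="AND")
```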
[A SAS macro program for batch processing of univariate Cox regression analysis for large databases].
Yang, Rendong; Xiong, Jie; Peng, Yangqin; Peng, Xiaoning; Zeng, Xiaomin
2015-02-01
To realize batch processing of univariate Cox regression analysis for large databases with a SAS macro program. We wrote a SAS macro program that can filter, integrate, and export P values to Excel using SAS 9.2. The program was used for screening survival-correlated RNA molecules of ovarian cancer. The SAS macro program completed the batch processing of univariate Cox regression analysis as well as the selection and export of the results. The SAS macro program has potential applications in reducing the workload of statistical analysis and providing a basis for batch processing of univariate Cox regression analysis.
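The macro itself is SAS code, which is not reproduced in the abstract, but the batch pattern it describes (fit one univariate model per variable, filter by P value, export a spreadsheet-readable file) can be sketched generically in Python. The `fit_fn` hook stands in for an actual Cox regression fit and is an assumption:

```python
import csv

def batch_univariate(data_columns: dict, fit_fn, alpha: float = 0.05) -> dict:
    """Fit one univariate model per variable and keep P values below alpha.
    fit_fn(values) must return the P value for one variable (e.g., a Cox fit)."""
    pvalues = {var: fit_fn(values) for var, values in data_columns.items()}
    return {var: p for var, p in pvalues.items() if p < alpha}

def export_pvalues(pvalues: dict, path: str) -> None:
    """Write the filtered P values, sorted ascending, to a CSV Excel can open."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["variable", "p_value"])
        for var, p in sorted(pvalues.items(), key=lambda kv: kv[1]):
            writer.writerow([var, p])
```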
Comparison of LEWICE and GlennICE in the SLD Regime
NASA Technical Reports Server (NTRS)
Wright, William B.; Potapczuk, Mark G.; Levinson, Laurie H.
2008-01-01
A research project is underway at the NASA Glenn Research Center (GRC) to produce computer software that can accurately predict ice growth under any meteorological conditions for any aircraft surface. This report presents results from two different computer programs. The first program, LEWICE version 3.2.2, has been reported on previously. The second program is GlennICE version 0.1. An extensive, quantitative comparison of the results against the database of ice shapes generated in the GRC Icing Research Tunnel (IRT) has also been performed, including additional data taken to extend the database into the Super-cooled Large Drop (SLD) regime. This paper shows the differences in ice shape between LEWICE 3.2.2, GlennICE, and the experimental data, and provides a description of both programs. Comparisons are then made to recent additions to the SLD database and to selected previous cases. Quantitative comparisons are shown for horn height, horn angle, icing limit, area, and leading-edge thickness. The results show that the predictions of both programs are within the accuracy limits of the experimental data for the majority of cases.
NORTHWEST ENVIRONMENTAL DATABASE (NED) FOR WA, OR, AND ID
This database results from a massive data gathering program initiated by BPA/NPPC in the mid-1980s. Each state now manages the portion of the database within its borders. Data & evaluations were gathered by wildlife/game/fish biologists, and other state, federal, and tribal res...
Murnyak, George R; Spencer, Clark O; Chaney, Ann E; Roberts, Welford C
2002-04-01
During the 1970s, the Army health hazard assessment (HHA) process developed as a medical program to minimize hazards in military materiel during the development process. The HHA Program characterizes health hazards that soldiers and civilians may encounter as they interact with military weapons and equipment. Thus, it is a resource that medical planners and advisors can use to identify and estimate potential hazards that soldiers may encounter as they train and conduct missions. The U.S. Army Center for Health Promotion and Preventive Medicine administers the program, which is integrated with the Army's Manpower and Personnel Integration program. As the HHA Program has matured, an electronic database has been developed to record and monitor the health hazards associated with military equipment and systems. The current database tracks the results of HHAs and provides reporting designed to assist the HHA Program manager in daily activities.
Jagtap, Pratik; Goslinga, Jill; Kooren, Joel A; McGowan, Thomas; Wroblewski, Matthew S; Seymour, Sean L; Griffin, Timothy J
2013-04-01
Large databases (>10^6 sequences) used in metaproteomic and proteogenomic studies present challenges in matching peptide sequences to MS/MS data using database-search programs. Most notably, strict filtering to avoid false-positive matches leads to more false negatives, thus constraining the number of peptide matches. To address this challenge, we developed a two-step method wherein matches derived from a primary search against a large database were used to create a smaller subset database. The second search was performed against a target-decoy version of this subset database merged with a host database. High-confidence peptide sequence matches were then used to infer protein identities. Applying our two-step method for both metaproteomic and proteogenomic analysis resulted in twice the number of high-confidence peptide sequence matches in each case, as compared to the conventional one-step method. The two-step method captured almost all of the same peptides matched by the one-step method, with a majority of the additional matches being false negatives from the one-step method. Furthermore, the two-step method improved results regardless of the database search program used. Our results show that our two-step method maximizes the peptide matching sensitivity for applications requiring large databases, especially valuable for proteogenomics and metaproteomics studies. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
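The two-step idea can be sketched in Python. Here `search_fn` stands in for a real database-search engine, and reversed-sequence decoys are one common construction; both are assumptions for illustration, not details taken from the paper:

```python
def make_decoy(db: dict) -> dict:
    """Reversed-sequence decoys, one per target protein."""
    return {"DECOY_" + pid: seq[::-1] for pid, seq in db.items()}

def two_step_search(spectra, large_db: dict, host_db: dict, search_fn):
    """Two-step search: permissive primary search builds a subset database,
    then a strict target-decoy search runs against subset + host database."""
    # Step 1: primary search against the full large database.
    primary = [m for s in spectra for m in search_fn(s, large_db)]
    # Subset database: proteins with at least one primary match.
    subset_db = {pid: large_db[pid] for pid, _ in primary if pid in large_db}
    # Step 2: search the target-decoy subset merged with the host database.
    target = {**subset_db, **host_db}
    final_db = {**target, **make_decoy(target)}
    matches = [m for s in spectra for m in search_fn(s, final_db)]
    # Decoy hits would feed a false-discovery-rate estimate; drop them here.
    return [m for m in matches if not m[0].startswith("DECOY_")]
```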
Holm, Sven; Russell, Greg; Nourrit, Vincent; McLoughlin, Niall
2017-01-01
A database of retinal fundus images, the DR HAGIS database, is presented. This database consists of 39 high-resolution color fundus images obtained from a diabetic retinopathy screening program in the UK. The NHS screening program uses service providers that employ different fundus and digital cameras. This results in a range of different image sizes and resolutions. Furthermore, patients enrolled in such programs often display other comorbidities in addition to diabetes. Therefore, in an effort to replicate the normal range of images examined by grading experts during screening, the DR HAGIS database consists of images of varying sizes and resolutions and four comorbidity subgroups: collectively defined as the diabetic retinopathy, hypertension, age-related macular degeneration, and Glaucoma image set (DR HAGIS). For each image, the vasculature has been manually segmented to provide a realistic set of images on which to test automatic vessel extraction algorithms. Modified versions of two previously published vessel extraction algorithms were applied to this database to provide some baseline measurements. A method based purely on the intensity of image pixels resulted in a mean segmentation accuracy of 95.83% ([Formula: see text]), whereas an algorithm based on Gabor filters generated an accuracy of 95.71% ([Formula: see text]).
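The reported accuracies are pixelwise agreement with the manual segmentations. A minimal sketch of intensity thresholding and that accuracy metric follows; the threshold value and the toy pixel data are illustrative only, not from the DR HAGIS baselines:

```python
def threshold_segment(pixels, threshold):
    """Classify each pixel as vessel (True) when its intensity exceeds the threshold."""
    return [p > threshold for p in pixels]

def segmentation_accuracy(predicted, truth) -> float:
    """Fraction of pixels on which the prediction and manual segmentation agree."""
    assert len(predicted) == len(truth)
    agree = sum(p == t for p, t in zip(predicted, truth))
    return agree / len(predicted)

pixels = [0.1, 0.8, 0.9, 0.2, 0.7]            # toy intensities
manual = [False, True, True, False, False]    # toy manual segmentation
pred = threshold_segment(pixels, 0.5)
acc = segmentation_accuracy(pred, manual)     # 4 of 5 pixels agree
```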
Go Figure: Computer Database Adds the Personal Touch.
ERIC Educational Resources Information Center
Gaffney, Jean; Crawford, Pat
1992-01-01
A database for recordkeeping for a summer reading club was developed for a public library system using an IBM PC and Microsoft Works. Use of the database resulted in more efficient program management, giving librarians more time to spend with patrons and enabling timely awarding of incentives. (LAE)
Overview of national bird population monitoring programs and databases
Gregory S. Butcher; Bruce Peterjohn; C. John Ralph
1993-01-01
A number of programs have been set up to monitor populations of nongame migratory birds. We review these programs and their purposes and provide information on obtaining data or results from these programs. In addition, we review recommendations for improving these programs.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-29
...; Comment Request Clinical Trials Reporting Program (CTRP) Database (NCI) Summary: Under the provisions of... Collection: Title: Clinical Trials Reporting Program (CTRP) Database. Type of Information Collection Request... Program (CTRP) Database, to serve as a single, definitive source of information about all NCI-supported...
Addition of a breeding database in the Genome Database for Rosaceae
Evans, Kate; Jung, Sook; Lee, Taein; Brutcher, Lisa; Cho, Ilhyung; Peace, Cameron; Main, Dorrie
2013-01-01
Breeding programs produce large datasets that require efficient management systems to keep track of performance, pedigree, geographical and image-based data. With the development of DNA-based screening technologies, more breeding programs perform genotyping in addition to phenotyping for performance evaluation. The integration of breeding data with other genomic and genetic data is instrumental for the refinement of marker-assisted breeding tools, enhances genetic understanding of important crop traits and maximizes access and utility by crop breeders and allied scientists. Development of new infrastructure in the Genome Database for Rosaceae (GDR) was designed and implemented to enable secure and efficient storage, management and analysis of large datasets from the Washington State University apple breeding program and subsequently expanded to fit datasets from other Rosaceae breeders. The infrastructure was built using the software Chado and Drupal, making use of the Natural Diversity module to accommodate large-scale phenotypic and genotypic data. Breeders can search accessions within the GDR to identify individuals with specific trait combinations. Results from Search by Parentage lists individuals with parents in common and results from Individual Variety pages link to all data available on each chosen individual including pedigree, phenotypic and genotypic information. Genotypic data are searchable by markers and alleles; results are linked to other pages in the GDR to enable the user to access tools such as GBrowse and CMap. This breeding database provides users with the opportunity to search datasets in a fully targeted manner and retrieve and compare performance data from multiple selections, years and sites, and to output the data needed for variety release publications and patent applications. The breeding database facilitates efficient program management. 
Storing publicly available breeding data in a database together with genomic and genetic data will further accelerate the cross-utilization of diverse data types by researchers from various disciplines. Database URL: http://www.rosaceae.org/breeders_toolbox PMID:24247530
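As an illustration of the Search by Parentage idea described above (the actual GDR implementation sits on Chado and Drupal and is far more involved; the accession field names here are assumptions):

```python
def search_by_parentage(accessions, parent: str):
    """Return names of accessions that list the given parent in their pedigree."""
    return [a["name"] for a in accessions
            if parent in (a.get("parent1"), a.get("parent2"))]

# Hypothetical breeding selections with two recorded parents each:
accessions = [
    {"name": "Sel-1", "parent1": "Honeycrisp", "parent2": "Gala"},
    {"name": "Sel-2", "parent1": "Fuji", "parent2": "Gala"},
]
shared = search_by_parentage(accessions, "Gala")  # both selections share Gala
```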
A kinetics database and scripts for PHREEQC
NASA Astrophysics Data System (ADS)
Hu, B.; Zhang, Y.; Teng, Y.; Zhu, C.
2017-12-01
Kinetics of geochemical reactions is increasingly used in numerical models to simulate coupled flow, mass transport, and chemical reactions. However, kinetic data are scattered in the literature, and assembling a kinetic dataset for a modeling project is an intimidating task for most modelers. In order to facilitate the application of kinetics in geochemical modeling, we assembled kinetics parameters into a database for the geochemical simulation program PHREEQC (version 3.0). Kinetic data were collected from the literature; the database includes kinetic data for over 70 minerals. The rate equations are also programmed into scripts in the Basic language. Using the new kinetic database, we simulated the reaction path of the albite dissolution process using various rate equations from the literature. The simulations with three different rate equations gave different reaction paths at different time scales. Another application involves a coupled reactive-transport model simulating the advancement of an acid plume at an acid mine drainage site associated with the Bear Creek Uranium tailings pond. Geochemical reactions including calcite, gypsum, and illite were simulated with PHREEQC using the new kinetic database. The simulation results successfully demonstrated the utility of the new kinetic database.
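As a sketch of the kind of rate law such a database encodes, a transition-state-theory style expression dm/dt = -k·A·(1 - Q/K) can be integrated with a simple explicit Euler loop. The rate constant, surface area, and saturation function below are illustrative placeholders, not values from the database:

```python
def dissolve(moles0: float, k: float, area: float, sat_ratio, dt: float, steps: int):
    """Integrate dm/dt = -k * A * (1 - Q/K) with explicit Euler steps.
    sat_ratio(m) returns the saturation ratio Q/K at the current moles m;
    the rate drops to zero as the solution approaches equilibrium (Q/K -> 1)."""
    m = moles0
    history = [m]
    for _ in range(steps):
        rate = k * area * (1.0 - sat_ratio(m))
        m = max(m - rate * dt, 0.0)  # mineral mass cannot go negative
        history.append(m)
    return history

# Far from equilibrium (Q/K ~ 0) the mineral dissolves at a constant rate:
path = dissolve(1.0, k=1e-3, area=10.0, sat_ratio=lambda m: 0.0, dt=1.0, steps=5)
```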
Al-Nasheri, Ahmed; Muhammad, Ghulam; Alsulaiman, Mansour; Ali, Zulfiqar; Mesallam, Tamer A; Farahat, Mohamed; Malki, Khalid H; Bencherif, Mohamed A
2017-01-01
Automatic voice-pathology detection and classification systems may help clinicians to detect the existence of any voice pathologies and the type of pathology from which patients suffer in the early stages. The main aim of this paper is to investigate Multidimensional Voice Program (MDVP) parameters to automatically detect and classify the voice pathologies in multiple databases, and then to find out which parameters performed well in these two processes. Samples of the sustained vowel /a/ of normal and pathological voices were extracted from three different databases, which have three voice pathologies in common. The selected databases in this study represent three distinct languages: (1) the Arabic voice pathology database; (2) the Massachusetts Eye and Ear Infirmary database (English database); and (3) the Saarbruecken Voice Database (German database). A computerized speech lab program was used to extract MDVP parameters as features, and an acoustical analysis was performed. The Fisher discrimination ratio was applied to rank the parameters. A t test was performed to highlight any significant differences in the means of the normal and pathological samples. The experimental results demonstrate a clear difference in the performance of the MDVP parameters using these databases. The highly ranked parameters also differed from one database to another. The best accuracies were obtained by using the three highest ranked MDVP parameters arranged according to the Fisher discrimination ratio: these accuracies were 99.68%, 88.21%, and 72.53% for the Saarbruecken Voice Database, the Massachusetts Eye and Ear Infirmary database, and the Arabic voice pathology database, respectively. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
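The Fisher discrimination ratio used to rank parameters compares class means against within-class variances; a common two-class form is F = (μ₁ − μ₂)² / (σ₁² + σ₂²). Whether the paper uses exactly this form is not stated in the abstract, so the sketch below is an assumption:

```python
from statistics import mean, variance

def fisher_ratio(group_a, group_b) -> float:
    """Two-class Fisher discrimination ratio:
    squared mean difference over the sum of the sample variances."""
    return (mean(group_a) - mean(group_b)) ** 2 / (variance(group_a) + variance(group_b))

# A parameter whose normal and pathological samples separate well scores high:
f_good = fisher_ratio([1.0, 2.0, 3.0], [5.0, 6.0, 7.0])  # well separated
f_poor = fisher_ratio([1.0, 2.0, 3.0], [1.5, 2.5, 3.5])  # heavily overlapping
```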
Implementation of a computer database testing and analysis program.
Rouse, Deborah P
2007-01-01
The author coordinates a computer database testing and analysis program implemented in an associate degree nursing program. Computer database programs help support the test development and analysis process, and critical thinking is measurable and promoted with their use. The reader of this article will learn what is involved in procuring and implementing a computer database testing and analysis program in an academic nursing program. The use of the computerized database for testing and analysis is approached as a method to promote and evaluate nursing students' critical thinking skills and to prepare them for the National Council Licensure Examination.
Folks, Russell D; Savir-Baruch, Bital; Garcia, Ernest V; Verdes, Liudmila; Taylor, Andrew T
2012-12-01
Our objective was to design and implement a clinical history database capable of linking to our database of quantitative results from (99m)Tc-mercaptoacetyltriglycine (MAG3) renal scans and export a data summary for physicians or our software decision support system. For database development, we used a commercial program. Additional software was developed in Interactive Data Language. MAG3 studies were processed using an in-house enhancement of a commercial program. The relational database has 3 parts: a list of all renal scans (the RENAL database), a set of patients with quantitative processing results (the Q2 database), and a subset of patients from Q2 containing clinical data manually transcribed from the hospital information system (the CLINICAL database). To test interobserver variability, a second physician transcriber reviewed 50 randomly selected patients in the hospital information system and tabulated 2 clinical data items: hydronephrosis and presence of a current stent. The CLINICAL database was developed in stages and contains 342 fields comprising demographic information, clinical history, and findings from up to 11 radiologic procedures. A scripted algorithm is used to reliably match records present in both Q2 and CLINICAL. An Interactive Data Language program then combines data from the 2 databases into an XML (extensible markup language) file for use by the decision support system. A text file is constructed and saved for review by physicians. RENAL contains 2,222 records, Q2 contains 456 records, and CLINICAL contains 152 records. The interobserver variability testing found a 95% match between the 2 observers for presence or absence of ureteral stent (κ = 0.52), a 75% match for hydronephrosis based on narrative summaries of hospitalizations and clinical visits (κ = 0.41), and a 92% match for hydronephrosis based on the imaging report (κ = 0.84). 
We have developed a relational database system to integrate the quantitative results of MAG3 image processing with clinical records obtained from the hospital information system. We also have developed a methodology for formatting clinical history for review by physicians and export to a decision support system. We identified several pitfalls, including the fact that important textual information extracted from the hospital information system by knowledgeable transcribers can show substantial interobserver variation, particularly when record retrieval is based on the narrative clinical records.
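The matching-and-export step the authors describe (join the Q2 and CLINICAL databases on a shared key, then emit XML for the decision support system) can be sketched as follows; the key name and field names are assumptions, not the actual 342-field schema:

```python
import xml.etree.ElementTree as ET

def match_records(q2, clinical, key="patient_id"):
    """Pair each quantitative record with its clinical record via a shared key."""
    clinical_by_key = {r[key]: r for r in clinical}
    return [(r, clinical_by_key[r[key]]) for r in q2 if r[key] in clinical_by_key]

def to_xml(pairs) -> str:
    """Serialize matched pairs as XML for a downstream decision support system."""
    root = ET.Element("patients")
    for quant, clin in pairs:
        p = ET.SubElement(root, "patient", id=str(quant["patient_id"]))
        ET.SubElement(p, "quantitative").text = str(quant["result"])
        ET.SubElement(p, "clinical").text = str(clin["history"])
    return ET.tostring(root, encoding="unicode")

# Records present only in Q2 (no transcribed clinical data) are simply skipped:
q2 = [{"patient_id": 7, "result": 0.42}, {"patient_id": 9, "result": 0.88}]
clinical = [{"patient_id": 7, "history": "hydronephrosis; stent present"}]
xml_text = to_xml(match_records(q2, clinical))
```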
Construction and validation of a population-based bone densitometry database.
Leslie, William D; Caetano, Patricia A; Macwilliam, Leonard R; Finlayson, Gregory S
2005-01-01
Utilization of dual-energy X-ray absorptiometry (DXA) for the initial diagnostic assessment of osteoporosis and in monitoring treatment has risen dramatically in recent years. Population-based studies of the impact of DXA and osteoporosis remain challenging because of incomplete and fragmented test data that exist in most regions. Our aim was to create and assess completeness of a database of all clinical DXA services and test results for the province of Manitoba, Canada and to present descriptive data resulting from testing. A regionally based bone density program for the province of Manitoba, Canada was established in 1997. Subsequent DXA services were prospectively captured in a program database. This database was retrospectively populated with earlier DXA results dating back to 1990 (the year that the first DXA scanner was installed) by integrating multiple data sources. A random chart audit was performed to assess completeness and accuracy of this dataset. For comparison, testing rates determined from the DXA database were compared with physician administrative claims data. There was a high level of completeness of this database (>99%) and accurate personal identifier information sufficient for linkage with other health care administrative data (>99%). This contrasted with physician billing data that were found to be markedly incomplete. Descriptive data provide a profile of individuals receiving DXA and their test results. In conclusion, the Manitoba bone density database has great potential as a resource for clinical and health policy research because it is population based with a high level of completeness and accuracy.
Concentrations of indoor pollutants (CIP) database user's manual (Version 4. 0)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Apte, M.G.; Brown, S.R.; Corradi, C.A.
1990-10-01
This is the latest release of the database and the user manual. The user manual is a tutorial and reference for utilizing the CIP Database system. An installation guide is included to cover various hardware configurations. Numerous examples and explanations of the dialogue between the user and the database program are provided. It is hoped that this resource will, along with on-line help and the menu-driven software, make for a quick and easy learning curve. For the purposes of this manual, it is assumed that the user is acquainted with the goals of the CIP Database, which are: (1) to collect existing measurements of concentrations of indoor air pollutants in a user-oriented database and (2) to provide a repository of references citing measured field results openly accessible to a wide audience of researchers, policy makers, and others interested in the issues of indoor air quality. The database software, as distinct from the data, is contained in two files, CIP.EXE and PFIL.COM. CIP.EXE is made up of a number of programs written in dBase III command code and compiled using Clipper into a single, executable file. PFIL.COM is a program written in Turbo Pascal that handles the output of summary text files and is called from CIP.EXE. Version 4.0 of the CIP Database is current through March 1990.
NASA Astrophysics Data System (ADS)
Foster, K.
1994-09-01
This document is a description of a computer program called Format( )MEDIC( )Input. The purpose of this program is to allow the user to quickly reformat wind velocity data in the Model Evaluation Database (MEDb) into a reasonable 'first cut' set of MEDIC input files (MEDIC.nml, StnLoc.Met, and Observ.Met). The user is cautioned that these resulting input files must be reviewed for correctness and completeness. This program will not format MEDb data into a Problem Station Library or Problem Metdata File. A description of how the program reformats the data is provided, along with a description of the required and optional user input and a description of the resulting output files. A description of the MEDb is not provided here but can be found in the RAS Division Model Evaluation Database Description document.
Database interfaces on NASA's heterogeneous distributed database system
NASA Technical Reports Server (NTRS)
Huang, Shou-Hsuan Stephen
1987-01-01
The purpose of Distributed Access View Integrated Database (DAVID) interface module (Module 9: Resident Primitive Processing Package) is to provide data transfer between local DAVID systems and resident Data Base Management Systems (DBMSs). The result of current research is summarized. A detailed description of the interface module is provided. Several Pascal templates were constructed. The Resident Processor program was also developed. Even though it is designed for the Pascal templates, it can be modified for templates in other languages, such as C, without much difficulty. The Resident Processor itself can be written in any programming language. Since Module 5 routines are not ready yet, there is no way to test the interface module. However, simulation shows that the data base access programs produced by the Resident Processor do work according to the specifications.
"Hyperstat": an educational and working tool in epidemiology.
Nicolosi, A
1995-01-01
The work of a researcher in epidemiology is based on studying literature, planning studies, gathering data, analyzing data and writing results. The researcher therefore needs to perform more or less simple calculations, to consult or quote the literature, to consult textbooks about certain issues or procedures, and to look up specific formulas. There are no programs conceived as a workstation to assist the different aspects of the researcher's work in an integrated fashion. A hypertextual system was developed which supports different stages of the epidemiologist's work. It combines database management, statistical analysis or planning, and literature searches. The software was developed on the Apple Macintosh using Hypercard 2.1 as a database and HyperTalk as a programming language. The program is structured in 7 "stacks" or files: Procedures; Statistical Tables; Graphs; References; Text; Formulas; Help. Each stack has its own management system with an automated Table of Contents. Stacks contain "cards" which make up the databases and carry executable programs. The programs are of four kinds: association; statistical procedure; formatting (input/output); database management. The system performs general statistical procedures, procedures applicable to epidemiological studies only (follow-up and case-control), and procedures for clinical trials. All commands are given by clicking the mouse on self-explanatory "buttons". To perform calculations, the user only needs to enter the data into the appropriate cells and then click on the selected procedure's button. The system has a hypertextual structure. The user can go from a procedure to other cards following the preferred order of succession and according to built-in associations. The user can access different levels of knowledge or information from any stack he is consulting or operating.
From every card, the user can go to a selected procedure to perform statistical calculations, to the reference database management system, to the textbook in which all procedures and issues are discussed in detail, to the database of statistical formulas with automated table of contents, to statistical tables with automated table of contents, or to the help module. The program has a very user-friendly interface and leaves the user free to use the same format he would use on paper. The interface does not require special skills. It reflects the Macintosh philosophy of using windows, buttons and mouse. This allows the user to perform complicated calculations, weigh alternatives, and run simulations without losing the "feel" of the data. This program shares many features in common with hypertexts. It has an underlying network database where the nodes consist of text, graphics, executable procedures, and combinations of these; the nodes in the database correspond to windows on the screen; the links between the nodes in the database are visible as "active" text or icons in the windows; the text is read by following links and opening new windows. The program is especially useful as an educational tool, directed to medical and epidemiology students. The combination of computing capabilities with a textbook and databases of formulas and literature references makes the program versatile and attractive as a learning tool. The program is also helpful in the work done at the desk, where the researcher examines results, consults literature, explores different analytic approaches, plans new studies, or writes grant proposals or scientific articles.
MIPS: a database for protein sequences, homology data and yeast genome information.
Mewes, H W; Albermann, K; Heumann, K; Liebl, S; Pfeiffer, F
1997-01-01
The MIPS group (Martinsried Institute for Protein Sequences) at the Max-Planck-Institute for Biochemistry, Martinsried near Munich, Germany, collects, processes and distributes protein sequence data within the framework of the tripartite association of the PIR-International Protein Sequence Database. MIPS contributes nearly 50% of the data input to the PIR-International Protein Sequence Database. The database is distributed on CD-ROM together with PATCHX, an exhaustive supplement of unique, unverified protein sequences from external sources compiled by MIPS. Through its WWW server (http://www.mips.biochem.mpg.de/) MIPS permits internet access to sequence databases, homology data and to yeast genome information. (i) Sequence similarity results from the FASTA program are stored in the FASTA database for all proteins from PIR-International and PATCHX. The database is dynamically maintained and permits instant access to FASTA results. (ii) Starting with FASTA database queries, proteins have been classified into families and superfamilies (PROT-FAM). (iii) The HPT (hashed position tree) data structure developed at MIPS is a new approach for rapid sequence and pattern searching. (iv) MIPS provides access to the sequence and annotation of the complete yeast genome, the functional classification of yeast genes (FunCat) and its graphical display, the 'Genome Browser'. A CD-ROM based on the JAVA programming language providing dynamic interactive access to the yeast genome and the related protein sequences has been compiled and is available on request. PMID:9016498
Ragoussi, Maria-Eleni; Costa, Davide
2017-03-14
For the last 30 years, the NEA Thermochemical Database (TDB) Project (www.oecd-nea.org/dbtdb/) has been developing a chemical thermodynamic database for elements relevant to the safety of radioactive waste repositories, providing data that are vital to support the geochemical modeling of such systems. The recommended data are selected on the basis of strict review procedures and are characterized by their consistency. The results of these efforts are freely available, and have become an international point of reference in the field. As a result, a number of important national initiatives with regard to waste management programs have used the NEA TDB as their basis, both in terms of recommended data and guidelines. In this article we describe the fundamentals and achievements of the project together with the characteristics of some databases developed in national nuclear waste disposal programs that have been influenced by the NEA TDB. We also give some insights on how this work could be seen as an approach to be used in broader areas of environmental interest. Copyright © 2017 Elsevier Ltd. All rights reserved.
Space Station Freedom environmental database system (FEDS) for MSFC testing
NASA Technical Reports Server (NTRS)
Story, Gail S.; Williams, Wendy; Chiu, Charles
1991-01-01
The Water Recovery Test (WRT) at Marshall Space Flight Center (MSFC) is the first demonstration of integrated water recovery systems for potable and hygiene water reuse as envisioned for Space Station Freedom (SSF). In order to satisfy the safety and health requirements placed on the SSF program and facilitate test data assessment, an extensive laboratory analysis database was established to provide a central archive and data retrieval function. The database is required to store analysis results for physical, chemical, and microbial parameters measured from water, air and surface samples collected at various locations throughout the test facility. The Oracle Relational Database Management System (RDBMS) was utilized to implement a secured on-line information system with the ECLSS WRT program as the foundation for this system. The database is supported on a VAX/VMS 8810 series mainframe and is accessible from the Marshall Information Network System (MINS). This paper summarizes the database requirements, system design, interfaces, and future enhancements.
NASA Technical Reports Server (NTRS)
Bohnhoff-Hlavacek, Gail
1992-01-01
One of the objectives of the team supporting the LDEF Systems and Materials Special Investigative Groups is to develop databases of experimental findings. These databases identify the hardware flown, summarize results and conclusions, and provide a system for acknowledging investigators, tracing sources of data, and recording future design suggestions. To date, databases covering the optical experiments and thermal control materials (chromic acid anodized aluminum, silverized Teflon blankets, and paints) have been developed at Boeing. We used the Filemaker Pro software, the database manager for the Macintosh computer produced by the Claris Corporation. It is a flat, text-retrievable database that provides access to the data via an intuitive user interface, without tedious programming. Though this software is available only for the Macintosh computer at this time, copies of the databases can be saved to a format that is readable on a personal computer as well. Further, the data can be exported to more powerful relational databases. This paper describes the contents, capabilities, and use of the LDEF databases and explains how to get copies of the databases for your own research.
Post-Inpatient Brain Injury Rehabilitation Outcomes: Report from the National OutcomeInfo Database.
Malec, James F; Kean, Jacob
2016-07-15
This study examined outcomes for intensive residential and outpatient/community-based post-inpatient brain injury rehabilitation (PBIR) programs compared with supported living programs. The goal of supported living programs was stable functioning (no change). Data were obtained for a large cohort of adults with acquired brain injury (ABI) from the OutcomeInfo national database, a web-based database system developed through National Institutes of Health (NIH) Small Business Technology Transfer (STTR) funding for monitoring progress and outcomes in PBIR programs primarily with the Mayo-Portland Adaptability Inventory (MPAI-4). Rasch-derived MPAI-4 measures for cases from 2008 to 2014 from 9 provider organizations offering programs in 23 facilities throughout the United States were examined. Controlling for age at injury, time in program, and time since injury on admission (chronicity), both intensive residential (n = 205) and outpatient/community-based (n = 2781) programs resulted in significant (approximately 1 standard deviation [SD]) functional improvement on the MPAI-4 Total Score compared with supported living (n = 101) programs (F = 18.184, p < 0.001). Intensive outpatient/community-based programs showed greater improvements on MPAI-4 Ability (F = 14.135, p < 0.001), Adjustment (F = 12.939, p < 0.001), and Participation (F = 16.679, p < 0.001) indices than supported living programs, whereas intensive residential programs showed improvement primarily in Adjustment and Participation. Age at injury and time in program had small effects on outcome; the effect of chronicity was small to moderate. Examination of more chronic cases (>1 year post-injury) showed significant, but smaller (approximately 0.5 SD) change on the MPAI-4 relative to supported living programs (F = 17.562, p < 0.001). Results indicate that intensive residential and outpatient/community-based PBIR programs result in substantial positive functional changes moderated by chronicity.
Post-Inpatient Brain Injury Rehabilitation Outcomes: Report from the National OutcomeInfo Database
Kean, Jacob
2016-01-01
This study examined outcomes for intensive residential and outpatient/community-based post-inpatient brain injury rehabilitation (PBIR) programs compared with supported living programs. The goal of supported living programs was stable functioning (no change). Data were obtained for a large cohort of adults with acquired brain injury (ABI) from the OutcomeInfo national database, a web-based database system developed through National Institutes of Health (NIH) Small Business Technology Transfer (STTR) funding for monitoring progress and outcomes in PBIR programs primarily with the Mayo-Portland Adaptability Inventory (MPAI-4). Rasch-derived MPAI-4 measures for cases from 2008 to 2014 from 9 provider organizations offering programs in 23 facilities throughout the United States were examined. Controlling for age at injury, time in program, and time since injury on admission (chronicity), both intensive residential (n = 205) and outpatient/community-based (n = 2781) programs resulted in significant (approximately 1 standard deviation [SD]) functional improvement on the MPAI-4 Total Score compared with supported living (n = 101) programs (F = 18.184, p < 0.001). Intensive outpatient/community-based programs showed greater improvements on MPAI-4 Ability (F = 14.135, p < 0.001), Adjustment (F = 12.939, p < 0.001), and Participation (F = 16.679, p < 0.001) indices than supported living programs, whereas intensive residential programs showed improvement primarily in Adjustment and Participation. Age at injury and time in program had small effects on outcome; the effect of chronicity was small to moderate. Examination of more chronic cases (>1 year post-injury) showed significant, but smaller (approximately 0.5 SD) change on the MPAI-4 relative to supported living programs (F = 17.562, p < 0.001). Results indicate that intensive residential and outpatient/community-based PBIR programs result in substantial positive functional changes moderated by chronicity.
PMID:26414433
Algorithms for database-dependent search of MS/MS data.
Matthiesen, Rune
2013-01-01
The frequently used bottom-up strategy for identification of proteins and their associated modifications nowadays typically generates thousands of MS/MS spectra that are normally matched automatically against a protein sequence database. Search engines that take MS/MS spectra and a protein sequence database as input are referred to as database-dependent search engines. Many programs, both commercial and freely available, exist for database-dependent search of MS/MS spectra, and most of these programs have excellent user documentation. The aim here is therefore to outline the algorithmic strategy behind different search engines rather than to provide software user manuals. The process of database-dependent search can be divided into search strategy, peptide scoring, protein scoring, and finally protein inference. Most efforts in the literature have gone into comparing results from different software rather than discussing the underlying algorithms. Such practical comparisons can be cluttered by suboptimal implementations, and the observed differences are frequently caused by software parameter settings that have not been set properly to allow an even comparison. In other words, an algorithmic idea can still be worth considering even if the software implementation has been demonstrated to be suboptimal. The aim in this chapter is therefore to split the algorithms for database-dependent searching of MS/MS data into the above steps so that the different algorithmic ideas become more transparent and comparable. Most search engines provide good implementations of the first three data analysis steps mentioned above, whereas the final step of protein inference is much less developed for most search engines and is in many cases performed by external software. The final part of this chapter illustrates how protein inference is built into the VEMS search engine and discusses a stand-alone program, SIR, for protein inference that can import a Mascot search result.
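The peptide-scoring step outlined above can be illustrated with a toy example. The sketch below is not from the chapter and is not any named search engine's scoring function; it simply counts how many theoretical b- and y-ion masses of a candidate peptide fall within a tolerance of an observed peak, the simplest "shared peak count" score. The residue-mass table is abbreviated and all names are illustrative.

```python
# Toy "shared peak count" scoring for database-dependent MS/MS search.
# Monoisotopic residue masses (Da); table abbreviated for the example.
RESIDUE_MASS = {"G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
                "V": 99.06841, "L": 113.08406, "K": 128.09496, "R": 156.10111}
PROTON = 1.00728
WATER = 18.01056

def fragment_masses(peptide):
    """Singly charged b- and y-ion m/z values for a peptide string."""
    masses = [RESIDUE_MASS[aa] for aa in peptide]
    b, y = [], []
    running = 0.0
    for m in masses[:-1]:
        running += m
        b.append(running + PROTON)           # b-ion: N-terminal fragment
    running = WATER
    for m in reversed(masses[1:]):
        running += m
        y.append(running + PROTON)           # y-ion: C-terminal fragment
    return sorted(b + y)

def shared_peak_count(peptide, observed_mz, tol=0.5):
    """Crude score: theoretical fragments within `tol` of any observed peak."""
    return sum(any(abs(f - p) <= tol for p in observed_mz)
               for f in fragment_masses(peptide))
```

In a real engine this score would be computed for every candidate peptide drawn from the sequence database and then normalized statistically; the sketch shows only the matching step.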
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poliakov, Alexander; Couronne, Olivier
2002-11-04
Aligning large vertebrate genomes that are structurally complex poses a variety of problems not encountered on smaller scales. Such genomes are rich in repetitive elements and contain multiple segmental duplications, which increases the difficulty of identifying true orthologous DNA segments in alignments. The sizes of the sequences make many alignment algorithms designed for comparing single proteins extremely inefficient when processing large genomic intervals. We integrated both local and global alignment tools and developed a suite of programs for automatically aligning large vertebrate genomes and identifying conserved non-coding regions in the alignments. Our method uses the BLAT local alignment program to find anchors on the base genome to identify regions of possible homology for a query sequence. These regions are postprocessed to find the best candidates, which are then globally aligned using the AVID global alignment program. In the last step conserved non-coding segments are identified using VISTA. Our methods are fast and the resulting alignments exhibit a high degree of sensitivity, covering more than 90% of known coding exons in the human genome. The GenomeVISTA software is a suite of Perl programs built on a MySQL database platform. The scheduler gets control data from the database, builds a queue of jobs, and dispatches them to a PC cluster for execution. The main program, running on each node of the cluster, processes individual sequences. A Perl library acts as an interface between the database and the above programs. The use of a separate library allows the programs to function independently of the database schema. The library also improves on the standard Perl MySQL database interface package by providing auto-reconnect functionality and improved error handling.
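The auto-reconnect behavior attributed to the Perl library can be sketched generically. The Python class below is a hypothetical transposition, not the GenomeVISTA API: `connect` stands for any zero-argument factory returning a connection-like object, and a failed query triggers a reconnect and retry.

```python
# Hypothetical sketch of an auto-reconnecting database wrapper.
# All names are illustrative; this is not GenomeVISTA code.
import time

class ReconnectingDB:
    def __init__(self, connect, retries=3, delay=0.0):
        self._connect = connect      # factory that opens a new connection
        self._retries = retries
        self._delay = delay
        self._conn = connect()

    def execute(self, query):
        """Run a query, transparently reopening the connection on failure."""
        for attempt in range(self._retries + 1):
            try:
                return self._conn.execute(query)
            except ConnectionError:
                if attempt == self._retries:
                    raise                      # give up after final retry
                time.sleep(self._delay)
                self._conn = self._connect()   # reconnect and try again
```

Hiding the retry loop behind one `execute` call is what lets the long-running scheduler and worker programs survive dropped server connections without schema-specific code.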
Finding Protein and Nucleotide Similarities with FASTA
Pearson, William R.
2016-01-01
The FASTA programs provide a comprehensive set of rapid similarity searching tools ( fasta36, fastx36, tfastx36, fasty36, tfasty36), similar to those provided by the BLAST package, as well as programs for slower, optimal, local and global similarity searches ( ssearch36, ggsearch36) and for searching with short peptides and oligonucleotides ( fasts36, fastm36). The FASTA programs use an empirical strategy for estimating statistical significance that accommodates a range of similarity scoring matrices and gap penalties, improving alignment boundary accuracy and search sensitivity (Unit 3.5). The FASTA programs can produce “BLAST-like” alignment and tabular output, for ease of integration into existing analysis pipelines, and can search small, representative databases, and then report results for a larger set of sequences, using links from the smaller dataset. The FASTA programs work with a wide variety of database formats, including mySQL and postgreSQL databases (Unit 9.4). The programs also provide a strategy for integrating domain and active site annotations into alignments and highlighting the mutational state of functionally critical residues. These protocols describe how to use the FASTA programs to characterize protein and DNA sequences, using protein:protein, protein:DNA, and DNA:DNA comparisons. PMID:27010337
Finding Protein and Nucleotide Similarities with FASTA.
Pearson, William R
2016-03-24
The FASTA programs provide a comprehensive set of rapid similarity searching tools (fasta36, fastx36, tfastx36, fasty36, tfasty36), similar to those provided by the BLAST package, as well as programs for slower, optimal, local, and global similarity searches (ssearch36, ggsearch36), and for searching with short peptides and oligonucleotides (fasts36, fastm36). The FASTA programs use an empirical strategy for estimating statistical significance that accommodates a range of similarity scoring matrices and gap penalties, improving alignment boundary accuracy and search sensitivity. The FASTA programs can produce "BLAST-like" alignment and tabular output, for ease of integration into existing analysis pipelines, and can search small, representative databases, and then report results for a larger set of sequences, using links from the smaller dataset. The FASTA programs work with a wide variety of database formats, including mySQL and postgreSQL databases. The programs also provide a strategy for integrating domain and active site annotations into alignments and highlighting the mutational state of functionally critical residues. These protocols describe how to use the FASTA programs to characterize protein and DNA sequences, using protein:protein, protein:DNA, and DNA:DNA comparisons. Copyright © 2016 John Wiley & Sons, Inc.
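As a small illustration of consuming the "BLAST-like" tabular output mentioned above, the sketch below parses rows into named records. It assumes the conventional 12-column BLAST tabular layout (which fasta36's `-m 8` output option is intended to emulate); verify the column order against your FASTA version's documentation before relying on it.

```python
# Sketch: parse BLAST-like 12-column tabular search output into records.
# Column layout assumed, not taken from the FASTA source.
from typing import NamedTuple

class Hit(NamedTuple):
    query: str; subject: str; pct_identity: float; align_len: int
    mismatches: int; gap_opens: int; q_start: int; q_end: int
    s_start: int; s_end: int; evalue: float; bit_score: float

def parse_tabular(lines):
    """Yield Hit records, skipping comment and blank lines."""
    hits = []
    for line in lines:
        if not line.strip() or line.startswith("#"):
            continue
        f = line.rstrip("\n").split("\t")
        hits.append(Hit(f[0], f[1], float(f[2]), int(f[3]), int(f[4]),
                        int(f[5]), int(f[6]), int(f[7]), int(f[8]),
                        int(f[9]), float(f[10]), float(f[11])))
    return hits
```

A parser like this is the usual glue for feeding search results into the downstream analysis pipelines the abstract refers to.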
Hand-held computer operating system program for collection of resident experience data.
Malan, T K; Haffner, W H; Armstrong, A Y; Satin, A J
2000-11-01
To describe a system for recording resident experience involving hand-held computers with the Palm Operating System (3Com, Inc., Santa Clara, CA). Hand-held personal computers (PCs) are popular, easy to use, inexpensive, portable, and can share data among other operating systems. Residents in our program carry individual hand-held database computers to record Residency Review Committee (RRC) reportable patient encounters. Each resident's data is transferred to a single central relational database compatible with Microsoft Access (Microsoft Corporation, Redmond, WA). Patient data entry and subsequent transfer to a central database is accomplished with commercially available software that requires minimal computer expertise to implement and maintain. The central database can then be used for statistical analysis or to create required RRC resident experience reports. As a result, the data collection and transfer process takes less time for residents and program director alike than paper-based or central computer-based systems. The system of collecting resident encounter data using hand-held computers with the Palm Operating System is easy to use, relatively inexpensive, accurate, and secure. The user-friendly system provides prompt, complete, and accurate data, enhancing the education of residents while facilitating the job of the program director.
MICA: desktop software for comprehensive searching of DNA databases
Stokes, William A; Glick, Benjamin S
2006-01-01
Background Molecular biologists work with DNA databases that often include entire genomes. A common requirement is to search a DNA database to find exact matches for a nondegenerate or partially degenerate query. The software programs available for such purposes are normally designed to run on remote servers, but an appealing alternative is to work with DNA databases stored on local computers. We describe a desktop software program termed MICA (K-Mer Indexing with Compact Arrays) that allows large DNA databases to be searched efficiently using very little memory. Results MICA rapidly indexes a DNA database. On a Macintosh G5 computer, the complete human genome could be indexed in about 5 minutes. The indexing algorithm recognizes all 15 characters of the DNA alphabet and fully captures the information in any DNA sequence, yet for a typical sequence of length L, the index occupies only about 2L bytes. The index can be searched to return a complete list of exact matches for a nondegenerate or partially degenerate query of any length. A typical search of a long DNA sequence involves reading only a small fraction of the index into memory. As a result, searches are fast even when the available RAM is limited. Conclusion MICA is suitable as a search engine for desktop DNA analysis software. PMID:17018144
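The exact-match search idea behind MICA can be illustrated with a toy k-mer index. This dict-based Python sketch is not MICA's compact-array implementation and handles only the unambiguous A/C/G/T characters, but it shows how k-mer anchoring lets a search touch only a small part of the sequence.

```python
# Toy k-mer index for exact-match DNA search (illustrative only).
from collections import defaultdict

def build_index(seq, k=4):
    """Map every k-mer in `seq` to the list of positions where it occurs."""
    index = defaultdict(list)
    for i in range(len(seq) - k + 1):
        index[seq[i:i + k]].append(i)
    return index

def find_exact(index, seq, query, k=4):
    """All start positions of `query` (length >= k) via k-mer anchoring."""
    anchors = index.get(query[:k], [])       # candidate starts from the index
    return [i for i in anchors if seq[i:i + len(query)] == query]
```

The real program additionally packs the index into compact arrays (about 2L bytes for a length-L sequence) and supports the full 15-character degenerate DNA alphabet, neither of which this sketch attempts.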
EPA's ToxCast Program: From Research to Application
A New Paradigm for Toxicity Testing in the 21st Century. In FY 2009, EPA published the toxicity reference database ToxRefDB, which contains results of over 30 years and $2B worth of animal studies for over 400 chemicals. This database is available on EPA’s website, and increases...
Fourment, Mathieu; Gibbs, Mark J
2008-01-01
Background Viruses of the Bunyaviridae have segmented negative-stranded RNA genomes and several of them cause significant disease. Many partial sequences have been obtained from the segments so that GenBank searches give complex results. Sequence databases usually use HTML pages to mediate remote sorting, but this approach can be limiting and may discourage a user from exploring a database. Results The VirusBanker database contains Bunyaviridae sequences and alignments and is presented as two spreadsheets generated by a Java program that interacts with a MySQL database on a server. Sequences are displayed in rows and may be sorted using information that is displayed in columns and includes data relating to the segment, gene, protein, species, strain, sequence length, terminal sequence and date and country of isolation. Bunyaviridae sequences and alignments may be downloaded from the second spreadsheet with titles defined by the user from the columns, or viewed when passed directly to the sequence editor, Jalview. Conclusion VirusBanker allows large datasets of aligned nucleotide and protein sequences from the Bunyaviridae to be compiled and winnowed rapidly using criteria that are formulated heuristically. PMID:18251994
A Statewide Information Databases Program: What Difference Does It Make to Academic Libraries?
ERIC Educational Resources Information Center
Lester, June; Wallace, Danny P.
2004-01-01
The Oklahoma Department of Libraries (ODL) launched Oklahoma's statewide database program in 1997. For the state's academic libraries, the program extended access to information, increased database use, and fostered positive relationships among ODL, academic libraries, and Oklahoma State Regents for Higher Education (OSRHE), creating a more…
A "User-Friendly" Program for Vapor-Liquid Equilibrium.
ERIC Educational Resources Information Center
Da Silva, Francisco A.; And Others
1991-01-01
Described is a computer software package suitable for teaching and research in the area of multicomponent vapor-liquid equilibrium. This program, which has a complete database, can accomplish phase-equilibrium calculations using various models and graph the results. (KR)
SS-Wrapper: a package of wrapper applications for similarity searches on Linux clusters
Wang, Chunlin; Lefkowitz, Elliot J
2004-01-01
Background Large-scale sequence comparison is a powerful tool for biological inference in modern molecular biology. Comparing new sequences to those in annotated databases is a useful source of functional and structural information about these sequences. Using software such as the basic local alignment search tool (BLAST) or HMMPFAM to identify statistically significant matches between newly sequenced segments of genetic material and those in databases is an important task for most molecular biologists. Searching algorithms are intrinsically slow and data-intensive, especially in light of the rapid growth of biological sequence databases due to the emergence of high throughput DNA sequencing techniques. Thus, traditional bioinformatics tools are impractical on PCs and even on dedicated UNIX servers. To take advantage of larger databases and more reliable methods, high performance computation becomes necessary. Results We describe the implementation of SS-Wrapper (Similarity Search Wrapper), a package of wrapper applications that can parallelize similarity search applications on a Linux cluster. Our wrapper utilizes a query segmentation-search (QS-search) approach to parallelize sequence database search applications. It takes into consideration load balancing between each node on the cluster to maximize resource usage. QS-search is designed to wrap many different search tools, such as BLAST and HMMPFAM using the same interface. This implementation does not alter the original program, so newly obtained programs and program updates should be accommodated easily. Benchmark experiments using QS-search to optimize BLAST and HMMPFAM showed that QS-search accelerated the performance of these programs almost linearly in proportion to the number of CPUs used. 
We have also implemented a wrapper that utilizes a database segmentation approach (DS-BLAST) that provides a complementary solution for BLAST searches when the database is too large to fit into the memory of a single node. Conclusions Used together, QS-search and DS-BLAST provide a flexible solution to adapt sequential similarity searching applications in high performance computing environments. Their ease of use and their ability to wrap a variety of database search programs provide an analytical architecture to assist both the seasoned bioinformaticist and the wet-bench biologist. PMID:15511296
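The query-segmentation (QS-search) approach with load balancing can be sketched as follows. This is an illustrative Python fragment, not SS-Wrapper code: it greedily assigns each query sequence to the currently lightest-loaded chunk so that every cluster node receives a similar total residue count.

```python
# Illustrative query segmentation with greedy load balancing.
# `records` is a list of (header, sequence) pairs from a multi-FASTA file.
def split_queries(records, n_chunks):
    """Partition query records into n_chunks with near-equal total length."""
    chunks = [[] for _ in range(n_chunks)]
    loads = [0] * n_chunks
    # Longest-first greedy assignment gives a good balance in practice.
    for header, seq in sorted(records, key=lambda r: -len(r[1])):
        i = loads.index(min(loads))          # lightest-loaded chunk so far
        chunks[i].append((header, seq))
        loads[i] += len(seq)
    return chunks
```

Each chunk would then be written out as its own FASTA file and dispatched to one node, with the per-node search results concatenated afterwards; because the database is identical on every node, scores and E-values are unaffected by the split.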
An Improved Database System for Program Assessment
ERIC Educational Resources Information Center
Haga, Wayne; Morris, Gerard; Morrell, Joseph S.
2011-01-01
This research paper presents a database management system for tracking course assessment data and reporting related outcomes for program assessment. It improves on a database system previously presented by the authors and in use for two years. The database system presented is specific to assessment for ABET (Accreditation Board for Engineering and…
NASA Astrophysics Data System (ADS)
Nakagawa, Y.; Kawahara, S.; Araki, F.; Matsuoka, D.; Ishikawa, Y.; Fujita, M.; Sugimoto, S.; Okada, Y.; Kawazoe, S.; Watanabe, S.; Ishii, M.; Mizuta, R.; Murata, A.; Kawase, H.
2017-12-01
Analyses of large-ensemble data are quite useful for producing probabilistic projections of climate change effects. Ensemble data of "+2K future climate simulations" are currently produced by the Japanese national project "Social Implementation Program on Climate Change Adaptation Technology (SI-CAT)" as part of the database for Policy Decision making for Future climate change (d4PDF; Mizuta et al. 2016) produced by the Program for Risk Information on Climate Change. These data consist of global warming simulations and regional downscaling simulations. Because the data volumes are too large (a few petabytes) to download to a user's local computer, a user-friendly system is required to search and download only the data that satisfy the user's request. Under SI-CAT, we are developing "a database system for near-future climate change projections" that provides functions for users to find the data they need. The system mainly consists of a relational database, a data download function, and a user interface. The relational database, built on PostgreSQL, is the key component among them. Temporally and spatially compressed data are registered in the relational database. As a first step, we developed the relational database for precipitation, temperature, and typhoon track data, according to requests by SI-CAT members. The data download function, based on the Open-source Project for a Network Data Access Protocol (OPeNDAP), lets users download temporally and spatially extracted data based on search results obtained from the relational database. We also developed a web-based user interface for the relational database and the data download function. A prototype of the system is currently in operational testing on our local server. The database system for near-future climate change projections will be released on the Data Integration and Analysis System Program (DIAS) in fiscal year 2017.
Techniques of the database system for near-future climate change projections might also prove useful for simulation and observational data in other research fields. We report the current status of development and some case studies of the system.
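The search-then-subset workflow described above can be illustrated by constructing an OPeNDAP (DAP2) constraint-expression URL from the index ranges a relational-database search returns. The host, dataset path, and variable name below are hypothetical, not the actual SI-CAT/DIAS endpoints.

```python
# Illustrative sketch: turning a search hit (time/space index ranges)
# into an OPeNDAP DAP2 subset request. The base URL and variable name
# are invented for this example.

def opendap_subset_url(base, var, t, y, x):
    """Build a DAP2 constraint expression from inclusive (start, stop)
    index ranges for the time, latitude and longitude dimensions."""
    hyperslab = "".join("[%d:%d]" % (a, b) for a, b in (t, y, x))
    return "%s.dods?%s%s" % (base, var, hyperslab)

url = opendap_subset_url(
    "http://example.org/dap/d4pdf/precip", "prcp",
    t=(0, 23), y=(120, 140), x=(200, 240))
```

Only the requested hyperslab is transferred, which is what makes petabyte-scale archives usable from a local computer.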
Knowledge discovery from structured mammography reports using inductive logic programming.
Burnside, Elizabeth S; Davis, Jesse; Costa, Victor Santos; Dutra, Inês de Castro; Kahn, Charles E; Fine, Jason; Page, David
2005-01-01
The development of large mammography databases provides an opportunity for knowledge discovery and data mining techniques to recognize patterns not previously appreciated. Using a database from a breast imaging practice containing patient risk factors, imaging findings, and biopsy results, we tested whether inductive logic programming (ILP) could discover interesting hypotheses that could subsequently be tested and validated. The ILP algorithm discovered two hypotheses from the data that were 1) judged as interesting by a subspecialty trained mammographer and 2) validated by analysis of the data itself.
FY11 Facility Assessment Study for Aeronautics Test Program
NASA Technical Reports Server (NTRS)
Loboda, John A.; Sydnor, George H.
2013-01-01
This paper presents the approach and results for the Aeronautics Test Program (ATP) FY11 Facility Assessment Project. ATP commissioned assessments in FY07 and FY11 to aid in the understanding of the current condition and reliability of its facilities and their ability to meet current and future (five year horizon) test requirements. The principle output of the assessment was a database of facility unique, prioritized investments projects with budgetary cost estimates. This database was also used to identify trends for the condition of facility systems.
NASA Astrophysics Data System (ADS)
Dabiru, L.; O'Hara, C. G.; Shaw, D.; Katragadda, S.; Anderson, D.; Kim, S.; Shrestha, B.; Aanstoos, J.; Frisbie, T.; Policelli, F.; Keblawi, N.
2006-12-01
The Research Project Knowledge Base (RPKB) is currently being designed and will be implemented in a manner that is fully compatible and interoperable with the enterprise architecture tools developed to support NASA's Applied Sciences Program. Through user needs assessment and collaboration with Stennis Space Center, Goddard Space Flight Center, and NASA's DEVELOP personnel, insight into information needs for the RPKB was gathered from across NASA's scientific communities of practice. To enable efficient, consistent, standard, structured, and managed data entry and compilation of research results, a prototype RPKB has been designed and fully integrated with the existing NASA Earth Science Systems Components database. The RPKB will compile research project and keyword information relevant to the six major science focus areas, the 12 national applications, and the Global Change Master Directory (GCMD). It will include information about projects awarded from NASA research solicitations, project investigators, research publications, NASA data products employed, and model or decision support tools used or developed, as well as new data product information. The RPKB will be developed in a multi-tier architecture comprising a SQL Server relational database back end, middleware, and front-end client interfaces for data entry. The purpose of this project is to intelligently harvest the results of research sponsored by the NASA Applied Sciences Program and related research programs. We present various approaches for a wide spectrum of knowledge discovery of research results, publications, projects, etc., from the NASA Systems Components database and global information systems, and show how these are implemented in a SQL Server database. The application of knowledge discovery is useful for intelligent query answering and multiple-layered database construction.
Using advanced EA tools such as the Earth Science Architecture Tool (ESAT), the RPKB will enable NASA and partner agencies to efficiently identify significant results for new experiment directions, and will help principal investigators formulate experiment directions for new proposals.
Structure elucidation of organic compounds aided by the computer program system SCANNET
NASA Astrophysics Data System (ADS)
Guzowska-Swider, B.; Hippe, Z. S.
1992-12-01
Recognition of chemical structure is a very important problem, currently solved by molecular spectroscopy, particularly IR, UV, NMR and Raman spectroscopy, and mass spectrometry. Nowadays, solution of the problem is frequently computer-aided. SCANNET is a computer program system for structure elucidation of organic compounds, developed by our group. The structure of an unknown substance is recognized by comparing its spectrum with successive reference spectra of standard compounds, i.e. chemical compounds of known chemical structure, stored in a spectral database. The SCANNET system consists of six spectral databases, one for each of the following analytical methods: IR, UV, 13C-NMR, 1H-NMR and Raman spectroscopy, and mass spectrometry. To elucidate a structure, a chemist can use one of these spectral methods or a combination of them and search the appropriate databases. As the result of searching each spectral database, the user obtains a list of chemical substances whose spectra are identical and/or similar to the spectrum input into the computer. The final information obtained from searching the spectral databases is a list of chemical substances whose spectra, for each type of spectroscopy examined, are identical or similar to those of the unknown compound.
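The library-search step such systems perform can be sketched as ranking reference spectra by similarity to the unknown's digitized spectrum. Cosine similarity and the tiny two-entry library below are purely illustrative; SCANNET's actual matching algorithm is not documented here.

```python
import math

# Minimal sketch of spectral library search: rank reference spectra
# (digitized on a common wavenumber grid) by cosine similarity to the
# unknown spectrum. The similarity measure is an assumption, chosen
# for illustration only.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def search(unknown, library, top=5):
    """library: dict name -> list of intensities; returns ranked hit list."""
    ranked = sorted(library.items(), key=lambda kv: -cosine(unknown, kv[1]))
    return [(name, round(cosine(unknown, spec), 3)) for name, spec in ranked[:top]]
```

Running each method's database this way, then intersecting the hit lists, gives the final list of candidates whose spectra match the unknown across all spectroscopies used.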
Yayac, Michael; Javandal, Mitra; Mulcahey, Mary K
2017-01-01
A substantial number of orthopaedic surgeons apply for sports medicine fellowships after residency completion. The Internet is one of the most important resources applicants use to obtain information about fellowship programs, with the program website serving as one of the most influential sources. The American Orthopaedic Society for Sports Medicine (AOSSM), San Francisco Match (SFM), and Arthroscopy Association of North America (AANA) maintain databases of orthopaedic sports medicine fellowship programs. A 2013 study evaluated the content and accessibility of the websites for accredited orthopaedic sports medicine fellowships. To reassess these websites based on the same parameters and compare the results with those of the study published in 2013 to determine whether any improvement has been made in fellowship website content or accessibility. Cross-sectional study. We reviewed all existing websites for the 95 accredited orthopaedic sports medicine fellowships included in the AOSSM, SFM, and AANA databases. Accessibility of the websites was determined by performing a Google search for each program. A total of 89 sports fellowship websites were evaluated for overall content. Websites for the remaining 6 programs could not be identified, so they were not included in content assessment. Of the 95 accredited sports medicine fellowships, 49 (52%) provided links in the AOSSM database, 89 (94%) in the SFM database, and 24 (25%) in the AANA database. Of the 89 websites, 89 (100%) provided a description of the program, 62 (70%) provided selection process information, and 40 (45%) provided a link to the SFM website. Two searches through Google were able to identify links to 88% and 92% of all accredited programs. 
The majority of accredited orthopaedic sports medicine fellowship programs fail to utilize the Internet to its full potential as a resource to provide applicants with detailed information about the program, which could help residents in the selection and ranking process. Orthopaedic sports medicine fellowship websites that are easily accessible through the AOSSM, SFM, AANA, or Google and that provide all relevant information for applicants would simplify the process of deciding where to apply, interview, and ultimately how to rank orthopaedic sports medicine fellowship programs for the Orthopaedic Sports Medicine Fellowship Match.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-24
... Student Database AGENCY: Office of Elementary and Secondary Education, Department of Education. ACTION... entitled ``Migrant Education Bypass Program Student Database (MEBPSD)'' (18-14-06). The Secretary has...
Burnham, J F; Shearer, B S; Wall, J C
1992-01-01
Librarians have used bibliometrics for many years to assess collections and to provide data for making selection and deselection decisions. With the advent of new technology--specifically, CD-ROM databases and reprint file database management programs--new cost-effective procedures can be developed. This paper describes a recent multidisciplinary study conducted by two library faculty members and one allied health faculty member to test a bibliometric method that used the MEDLINE and CINAHL databases on CD-ROM and the Papyrus database management program to produce a new collection development methodology. PMID:1600424
Communication Lower Bounds and Optimal Algorithms for Programs that Reference Arrays - Part 1
2013-05-14
Applications include tensor contractions, the direct N-body algorithm, database join, and computing matrix powers A^k. A geometric model is introduced to derive communication lower bounds for programs that reference arrays. Section 8 summarizes the results and outlines the contents of Part 2 of this paper, which will discuss how to compute the lower bounds.
The Development of a Korean Drug Dosing Database
Kim, Sun Ah; Kim, Jung Hoon; Jang, Yoo Jin; Jeon, Man Ho; Hwang, Joong Un; Jeong, Young Mi; Choi, Kyung Suk; Lee, Iyn Hyang; Jeon, Jin Ok; Lee, Eun Sook; Lee, Eun Kyung; Kim, Hong Bin; Chin, Ho Jun; Ha, Ji Hye; Kim, Young Hoon
2011-01-01
Objectives This report describes the development process of a drug dosing database for ethical drugs approved by the Korea Food & Drug Administration (KFDA). The goal of this study was to develop a computerized system that supports physicians' prescribing decisions, particularly with regard to medication dosing. Methods The advisory committee, composed of doctors, pharmacists, and nurses from the Seoul National University Bundang Hospital, pharmacists familiar with drug databases, KFDA officials, and software developers from BIT Computer Co. Ltd., analyzed the KFDA-approved drug dosing information, defined the fields and properties of the information structure, and designed a management program used to enter dosing information. The management program was developed as a web-based system that allows multiple researchers to input drug dosing information in an organized manner. The process was refined by adding needed input fields and eliminating unnecessary existing fields as the dosing information was entered, resulting in an improved field structure. Results Usage and dosing information for a total of 16,994 drugs sold on the Korean market as of July 2009, excluding those meeting the exclusion criteria (e.g., radioactive drugs, X-ray contrast media), was compiled into a database. Conclusions The drug dosing database was successfully developed, and the dosing information for new drugs can be continually maintained through the management program. This database will be used to develop drug utilization review standards and to provide appropriate dosing information. PMID:22259729
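The kind of prescribing-decision support such a database enables can be sketched as a simple dose-range check. The field names and the single illustrative entry below are invented; they do not reproduce the actual KFDA-derived field structure.

```python
# Hedged sketch of a dosing check driven by a drug dosing database.
# Field names and the sample entry are illustrative assumptions, not
# the real database schema.

DOSING = {
    "acetaminophen": {"max_single_mg": 1000, "max_daily_mg": 4000},
}

def check_prescription(drug, single_mg, times_per_day):
    """Return a list of warnings (empty list means the dose passes)."""
    rule = DOSING[drug]
    warnings = []
    if single_mg > rule["max_single_mg"]:
        warnings.append("single dose exceeds maximum")
    if single_mg * times_per_day > rule["max_daily_mg"]:
        warnings.append("daily total exceeds maximum")
    return warnings
```

A real system would also key the rules on patient factors such as age, weight, and renal function, which is why a structured, maintained field layout matters.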
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gold, Lois Swirsky; Manley, Neela B.; Slone, Thomas H.
2005-04-08
The Carcinogenic Potency Database (CPDB) is a systematic and unifying resource that standardizes the results of chronic, long-term animal cancer tests conducted since the 1950s. The analyses include sufficient information on each experiment to permit research into many areas of carcinogenesis. Both qualitative and quantitative information is reported on positive and negative experiments that meet a set of inclusion criteria. A measure of carcinogenic potency, TD50 (the daily dose rate in mg/kg body weight/day required to induce tumors in half of the test animals that would have remained tumor-free at zero dose), is estimated for each tissue-tumor combination reported. This article is the ninth publication of a chronological plot of the CPDB; it presents results on 560 experiments of 188 chemicals in mice, rats, and hamsters from 185 publications in the general literature updated through 1997, and from 15 Reports of the National Toxicology Program in 1997-1998. The test agents cover a wide variety of uses and chemical classes. The CPDB Web site (http://potency.berkeley.edu/) presents the combined database of all published plots in a variety of formats, as well as summary tables by chemical and by target organ, supplemental materials on dosing and survival, a detailed guide to using the plot formats, and documentation of methods and publications. The overall CPDB, including the results in this article, presents easily accessible results of 6153 experiments on 1485 chemicals from 1426 papers and 429 NCI/NTP (National Cancer Institute/National Toxicology Program) Technical Reports. A tab-separated format of the full CPDB for reading the data into spreadsheets or database applications is available on the Web site.
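The TD50 definition above can be made concrete with a deliberately simplified model. Under a one-hit dose-response, the extra tumor probability is P(d) = 1 − exp(−βd), so the dose inducing tumors in half the otherwise tumor-free animals is TD50 = ln 2 / β. This is an illustration of the concept only; the actual CPDB estimates are survival-adjusted and considerably more elaborate.

```python
import math

# Simplified one-hit illustration of TD50 (not the CPDB's estimator):
#   P(d) = 1 - exp(-beta * d)   extra tumor risk at dose d
#   TD50 = ln(2) / beta         dose giving 50% extra risk

def beta_from_incidence(dose, fraction_with_tumors):
    """Solve 1 - exp(-beta * dose) = fraction for beta."""
    return -math.log(1.0 - fraction_with_tumors) / dose

def td50(beta):
    return math.log(2.0) / beta

b = beta_from_incidence(dose=10.0, fraction_with_tumors=0.5)
# By construction, td50(b) recovers 10.0 mg/kg body weight/day.
```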
ERIC Educational Resources Information Center
Rosenberg, Michael S.; Boyer, K. Lynn; Sindelar, Paul T.; Misra, Sunil K.
2007-01-01
This study describes special education alternative route (AR) teacher preparation programs. The authors developed a national database of programs and collected information on program sponsorship, length and intensity, features, and participant demographics. Most of the 235 programs in the database were in states that had significant shortages of…
BIOPEP database and other programs for processing bioactive peptide sequences.
Minkiewicz, Piotr; Dziuba, Jerzy; Iwaniak, Anna; Dziuba, Marta; Darewicz, Małgorzata
2008-01-01
This review presents the potential for application of computational tools in peptide science based on a sample BIOPEP database and program as well as other programs and databases available via the World Wide Web. The BIOPEP application contains a database of biologically active peptide sequences and a program enabling construction of profiles of the potential biological activity of protein fragments, calculation of quantitative descriptors as measures of the value of proteins as potential precursors of bioactive peptides, and prediction of bonds susceptible to hydrolysis by endopeptidases in a protein chain. Other bioactive and allergenic peptide sequence databases are also presented. Programs enabling the construction of binary and multiple alignments between peptide sequences, the construction of sequence motifs attributed to a given type of bioactivity, searching for potential precursors of bioactive peptides, and the prediction of sites susceptible to proteolytic cleavage in protein chains are available via the Internet as are other approaches concerning secondary structure prediction and calculation of physicochemical features based on amino acid sequence. Programs for prediction of allergenic and toxic properties have also been developed. This review explores the possibilities of cooperation between various programs.
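The activity-profile construction described above can be sketched as scanning a protein sequence for known bioactive fragments and computing the occurrence frequency A = a/N (a = number of matched fragments, N = protein length). The tripeptides VPP and IPP and dipeptide IY are known ACE-inhibitory peptides, but the mini-database below is an illustrative stand-in, not an excerpt of BIOPEP.

```python
# Sketch of a BIOPEP-style activity profile: locate bioactive fragments
# in a protein sequence and compute the occurrence frequency A = a / N.
# The peptide list is a tiny illustrative stand-in for a real database.

ACE_INHIBITORS = ["VPP", "IPP", "IY"]

def profile(protein, peptides):
    """Return ((peptide, 1-based position) hits, occurrence frequency A)."""
    hits = []
    for pep in peptides:
        start = protein.find(pep)
        while start != -1:
            hits.append((pep, start + 1))
            start = protein.find(pep, start + 1)
    A = len(hits) / len(protein)
    return hits, A
```

Profiles like this let the value of a protein as a precursor of bioactive peptides be compared quantitatively across proteins, as the review describes.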
Gold, L S; Slone, T H; Backman, G M; Magaw, R; Da Costa, M; Lopipero, P; Blumenthal, M; Ames, B N
1987-01-01
This paper is the second chronological supplement to the Carcinogenic Potency Database, published earlier in this journal (1,2,4). We report here results of carcinogenesis bioassays published in the general literature between January 1983 and December 1984, and in Technical Reports of the National Cancer Institute/National Toxicology Program between January 1983 and May 1986. This supplement includes results of 525 long-term, chronic experiments of 199 test compounds, and reports the same information about each experiment in the same plot format as the earlier papers: e.g., the species and strain of test animal, the route and duration of compound administration, dose level and other aspects of experimental protocol, histopathology and tumor incidence, TD50 (carcinogenic potency) and its statistical significance, dose response, author's opinion about carcinogenicity, and literature citation. We refer the reader to the 1984 publications for a description of the numerical index of carcinogenic potency (TD50), a guide to the plot of the database, and a discussion of the sources of data, the rationale for the inclusion of particular experiments and particular target sites, and the conventions adopted in summarizing the literature. The three plots of the database are to be used together, since results of experiments published in earlier plots are not repeated. Taken together, the three plots include results for more than 3500 experiments on 975 chemicals. Appendix 14 is an index to all chemicals in the database and indicates which plot(s) each chemical appears in. PMID:3691431
Chaput, Ludovic; Martinez-Sanz, Juan; Saettel, Nicolas; Mouawad, Liliane
2016-01-01
In structure-based virtual screening, the choice of the docking program is essential for the success of hit identification. Benchmarks are meant to help guide this choice, especially when undertaken on a large variety of protein targets. Here, the performance of four popular virtual screening programs, Gold, Glide, Surflex and FlexX, is compared using the Directory of Useful Decoys-Enhanced database (DUD-E), which includes 102 targets with an average of 224 ligands per target and 50 decoys per ligand, generated to avoid biases in the benchmarking. Then, the relationship between program performance and the properties of the targets or the small molecules was investigated. The comparison was based on two metrics, with three different parameters each. The BEDROC scores with α = 80.5 indicated that, on the overall database, Glide succeeded (score > 0.5) for 30 targets, Gold for 27, FlexX for 14 and Surflex for 11. The performance did not depend on the hydrophobicity or the openness of the protein cavities, nor on the families to which the proteins belong. However, despite the care in the construction of the DUD-E database, the small differences that remain between the actives and the decoys likely explain the successes of Gold, Surflex and FlexX. Moreover, the similarity between the actives of a target and its crystal structure ligand seems to be at the basis of the good performance of Glide. When all targets with significant biases are removed from the benchmarking, a subset of 47 targets remains, for which Glide succeeded for only 5 targets, Gold for 4, and FlexX and Surflex for 2. The dramatic drop in the performance of all four programs when the biases are removed shows that we should beware of virtual screening benchmarks, because good performances may be due to the wrong reasons.
Therefore, benchmarking would hardly provide guidelines for virtual screening experiments, despite the tendency that is maintained, i.e., Glide and Gold display better performance than FlexX and Surflex. We recommend always using several programs and combining their results. Graphical Abstract: Summary of the results obtained by virtual screening with the four programs, Glide, Gold, Surflex and FlexX, on the 102 targets of the DUD-E database. The percentage of targets with successful results, i.e., with BEDROC(α = 80.5) > 0.5, is shown in blue when the entire database is considered, and in red when targets with biased chemical libraries are removed.
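The BEDROC metric used above can be sketched directly from its published definition (Truchon and Bayly, J. Chem. Inf. Model. 2007); with α = 80.5 it weights roughly the top 2% of the ranked list. This is a compact illustration, not the benchmark's actual code.

```python
import math

# Sketch of the BEDROC early-recognition metric (Truchon & Bayly, 2007).
# ranks: 1-based positions of the actives in the ranked list of n_total
# compounds. alpha = 80.5 emphasizes roughly the earliest 2% of the list.

def bedroc(ranks, n_total, alpha=80.5):
    n = len(ranks)
    ra = n / n_total
    # RIE: observed exponential-weight sum over its random expectation
    rie = (sum(math.exp(-alpha * r / n_total) for r in ranks) /
           (ra * (1 - math.exp(-alpha)) / (math.exp(alpha / n_total) - 1)))
    # Map RIE onto [0, 1]
    factor = ra * math.sinh(alpha / 2) / (
        math.cosh(alpha / 2) - math.cosh(alpha / 2 - alpha * ra))
    const = 1 / (1 - math.exp(alpha * (1 - ra)))
    return rie * factor + const
```

A program "succeeds" for a target in the benchmark above when this score exceeds 0.5, i.e. when the actives are concentrated very early in the ranking.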
NIST Gas Hydrate Research Database and Web Dissemination Channel.
Kroenlein, K; Muzny, C D; Kazakov, A; Diky, V V; Chirico, R D; Frenkel, M; Sloan, E D
2010-01-01
To facilitate advances in the application of technologies pertaining to gas hydrates, a freely available data resource containing experimentally derived information about those materials was developed. This work was performed by the Thermodynamic Research Center (TRC), paralleling a highly successful database of thermodynamic and transport properties of molecular pure compounds and their mixtures. Population of the gas-hydrates database required development of guided data capture (GDC) software designed to convert experimental data and metadata into a well-organized electronic format, as well as a relational database schema to accommodate all types of numerical data and metadata within the scope of the project. To guarantee utility for the broad gas hydrate research community, TRC worked closely with the Committee on Data for Science and Technology (CODATA) task group for Data on Natural Gas Hydrates, an international data-sharing effort, in developing a gas hydrate markup language (GHML). The fruits of these efforts are disseminated through the NIST Standard Reference Data Program [1] as the Clathrate Hydrate Physical Property Database (SRD #156). A web-based interface for this database, as well as scientific results from the Mallik 2002 Gas Hydrate Production Research Well Program [2], is deployed at http://gashydrates.nist.gov.
Shuttle-Data-Tape XML Translator
NASA Technical Reports Server (NTRS)
Barry, Matthew R.; Osborne, Richard N.
2005-01-01
JSDTImport is a computer program for translating native Shuttle Data Tape (SDT) files from American Standard Code for Information Interchange (ASCII) format into databases in other formats. JSDTImport solves the problem of organizing the SDT content, affording flexibility to enable users to choose how to store the information in a database to better support client and server applications. JSDTImport can be dynamically configured by use of a simple Extensible Markup Language (XML) file. JSDTImport uses this XML file to define how each record and field will be parsed, its layout and definition, and how the resulting database will be structured. JSDTImport also includes a client application programming interface (API) layer that provides abstraction for the data-querying process. The API enables a user to specify the search criteria to apply in gathering all the data relevant to a query. The API can be used to organize the SDT content and translate it into a native XML database. The XML format is structured into efficient sections, enabling excellent query performance by use of the XPath query language. Optionally, the content can be translated into a Structured Query Language (SQL) database for fast, reliable SQL queries on standard database server computers.
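XML-driven record parsing of the kind JSDTImport performs can be sketched as follows. The element and attribute names in this configuration are invented for illustration; the program's real configuration schema is not documented here.

```python
import xml.etree.ElementTree as ET

# Illustrative sketch of XML-configured parsing in the spirit of
# JSDTImport: the XML describes each field's position, width and type,
# and the parser applies that layout to a fixed-width ASCII record.
# Element/attribute names are assumptions, not the real schema.

CONFIG = """
<record name="measurement">
  <field name="msid"  start="0" length="8"  type="str"/>
  <field name="value" start="8" length="10" type="float"/>
</record>
"""

def parse_record(line, config_xml=CONFIG):
    root = ET.fromstring(config_xml)
    out = {}
    for f in root.findall("field"):
        start, length = int(f.get("start")), int(f.get("length"))
        raw = line[start:start + length].strip()
        out[f.get("name")] = float(raw) if f.get("type") == "float" else raw
    return out
```

Changing the record layout then requires only editing the XML file, not the translator itself, which is the flexibility the abstract describes.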
Harris, Eric S. J.; Erickson, Sean D.; Tolopko, Andrew N.; Cao, Shugeng; Craycroft, Jane A.; Scholten, Robert; Fu, Yanling; Wang, Wenquan; Liu, Yong; Zhao, Zhongzhen; Clardy, Jon; Shamu, Caroline E.; Eisenberg, David M.
2011-01-01
Aim of the study. Ethnobotanically-driven drug-discovery programs include data related to many aspects of the preparation of botanical medicines, from initial plant collection to chemical extraction and fractionation. The Traditional Medicine-Collection Tracking System (TM-CTS) was created to organize and store data of this type for an international collaborative project involving the systematic evaluation of commonly used Traditional Chinese Medicinal plants. Materials and Methods. The system was developed using domain-driven design techniques, and is implemented using Java, Hibernate, PostgreSQL, Business Intelligence and Reporting Tools (BIRT), and Apache Tomcat. Results. The TM-CTS relational database schema contains over 70 data types, comprising over 500 data fields. The system incorporates a number of unique features that are useful in the context of ethnobotanical projects such as support for information about botanical collection, method of processing, quality tests for plants with existing pharmacopoeia standards, chemical extraction and fractionation, and historical uses of the plants. The database also accommodates data provided in multiple languages and integration with a database system built to support high throughput screening based drug discovery efforts. It is accessed via a web-based application that provides extensive, multi-format reporting capabilities. Conclusions. This new database system was designed to support a project evaluating the bioactivity of Chinese medicinal plants. The software used to create the database is open source, freely available, and could potentially be applied to other ethnobotanically-driven natural product collection and drug-discovery programs. PMID:21420479
SACD's Support of the Hyper-X Program
NASA Technical Reports Server (NTRS)
Robinson, Jeffrey S.; Martin, John G.
2006-01-01
NASA's highly successful Hyper-X program demonstrated numerous hypersonic air-breathing vehicle technologies, including scramjet performance, advanced materials and hot structures, GN&C, and integrated vehicle performance, resulting in, for the first time ever, acceleration of a vehicle powered by a scramjet engine. The Systems Analysis and Concepts Directorate (SACD) at NASA's Langley Research Center played a major role in the integrated team, providing critical support, analysis, and leadership to the Hyper-X program throughout the program's entire life, and was key to its ultimate success. Engineers in SACD's Vehicle Analysis Branch (VAB) were involved in all stages and aspects of the program, from conceptual design prior to contract award, through preliminary design and hardware development, and into, during, and after each of the three flights. Working closely with other engineers at Langley and Dryden, as well as industry partners, roughly 20 members of SACD were involved throughout the evolution of the Hyper-X program in nearly all disciplines, including lead roles in several areas. Engineers from VAB led the aerodynamic database development, the propulsion database development, and the stage separation analysis and database development effort. Others played major roles in structures, aerothermal analysis, GN&C, trajectory analysis and flight simulation, as well as providing CFD support for aerodynamic, propulsion, and aerothermal analysis.
The Russian effort in establishing large atomic and molecular databases
NASA Astrophysics Data System (ADS)
Presnyakov, Leonid P.
1998-07-01
The database activities in Russia have been developed in connection with UV and soft X-ray spectroscopic studies of extraterrestrial and laboratory (magnetically confined and laser-produced) plasmas. Two forms of database production are used: i) a set of computer programs to calculate radiative and collisional data for a general atom or ion, and ii) development of numeric database systems with the data stored in the computer. The first form is preferable for collisional data. At the Lebedev Physical Institute, an appropriate set of codes has been developed. It includes all electronic processes at collision energies from the threshold up to the relativistic limit. The ion-atom (and ion-ion) collisional data are calculated with recently developed methods. The program for the calculation of level populations and line intensities is used for spectral diagnostics of transparent plasmas. The second form of database production is widely used at the Institute of Physico-Technical Measurements (VNIIFTRI) and the Troitsk Center: the Institute of Spectroscopy and TRINITI. The main results obtained at these centers are reviewed. Plans for future developments, jointly with international collaborations, are discussed.
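The level-population calculation mentioned above can be illustrated with the simplest possible case: a two-level ion in statistical equilibrium, where electron-collisional excitation balances radiative plus collisional de-excitation. The rate values used below are arbitrary illustrative numbers, not data from the Lebedev codes.

```python
# Minimal two-level illustration of a level-population calculation.
# Statistical equilibrium:  n2 / n1 = ne * C12 / (A21 + ne * C21)
# ne: electron density (cm^-3), C12/C21: collisional rate coefficients
# (cm^3 s^-1), A21: spontaneous radiative rate (s^-1).
# All numbers used with this function here are illustrative only.

def population_ratio(ne, c12, c21, a21):
    return ne * c12 / (a21 + ne * c21)

# Low-density (coronal) limit: A21 >> ne*C21, so the ratio grows linearly
# with ne. High-density limit: the ratio saturates at C12 / C21, and line
# intensity ratios lose their density sensitivity.
```

Ratios of lines computed this way as a function of ne and Te are exactly what makes such programs useful for spectral diagnostics of transparent plasmas.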
Fourment, Mathieu; Gibbs, Mark J
2008-02-05
Viruses of the Bunyaviridae have segmented negative-stranded RNA genomes and several of them cause significant disease. Many partial sequences have been obtained from the segments so that GenBank searches give complex results. Sequence databases usually use HTML pages to mediate remote sorting, but this approach can be limiting and may discourage a user from exploring a database. The VirusBanker database contains Bunyaviridae sequences and alignments and is presented as two spreadsheets generated by a Java program that interacts with a MySQL database on a server. Sequences are displayed in rows and may be sorted using information that is displayed in columns and includes data relating to the segment, gene, protein, species, strain, sequence length, terminal sequence and date and country of isolation. Bunyaviridae sequences and alignments may be downloaded from the second spreadsheet with titles defined by the user from the columns, or viewed when passed directly to the sequence editor, Jalview. VirusBanker allows large datasets of aligned nucleotide and protein sequences from the Bunyaviridae to be compiled and winnowed rapidly using criteria that are formulated heuristically.
Chaplin, Beth; Meloni, Seema; Eisen, Geoffrey; Jolayemi, Toyin; Banigbe, Bolanle; Adeola, Juliette; Wen, Craig; Reyes Nieva, Harry; Chang, Charlotte; Okonkwo, Prosper; Kanki, Phyllis
2015-01-01
The implementation of PEPFAR programs in resource-limited settings was accompanied by the need to document patient care on a scale unprecedented in environments where paper-based records were the norm. We describe the development of an electronic medical records system (EMRS) put in place at the beginning of a large HIV/AIDS care and treatment program in Nigeria. Databases were created to record laboratory results, medications prescribed and dispensed, and clinical assessments, using a relational database program. A collection of stand-alone files recorded different elements of patient care, linked together by utilities that aggregated data on national standard indicators and assessed patient care for quality improvement, tracked patients requiring follow-up, generated counts of ART regimens dispensed, and provided 'snapshots' of a patient's response to treatment. A secure server was used to store patient files for backup and transfer. By February 2012, when the program transitioned to local in-country management by APIN, the EMRS was used in 33 hospitals across the country, with 4,947,433 adult, pediatric and PMTCT records that had been created and continued to be available for use in patient care. Ongoing trainings for data managers, along with an iterative process of implementing changes to the databases and forms based on user feedback, were needed. As the program scaled up and the volume of laboratory tests increased, results were produced in a digital format, wherever possible, that could be automatically transferred to the EMRS. Many larger clinics began to link some or all of the databases to local area networks, making them available to a larger group of staff members, or providing the ability to enter information simultaneously where needed. The EMRS improved patient care, enabled efficient reporting to the Government of Nigeria and to U.S. funding agencies, and allowed program managers and staff to conduct quality control audits. 
Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
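The record above describes stand-alone relational files (lab results, medications dispensed, clinical assessments) linked by utilities that aggregate indicator counts and produce per-patient snapshots. A minimal sketch of that pattern follows, using SQLite in place of the original desktop database program; every table name, column, and data row is an invented illustration, not the EMRS schema.

```python
import sqlite3

# Stand-alone tables linked by patient ID, as in the EMRS description.
# All names and rows are hypothetical stand-ins.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE patients   (patient_id TEXT PRIMARY KEY, enrolled TEXT);
CREATE TABLE lab_results(patient_id TEXT, test TEXT, value REAL, drawn TEXT);
CREATE TABLE dispensed  (patient_id TEXT, regimen TEXT, dispensed TEXT);
""")
conn.executemany("INSERT INTO patients VALUES (?, ?)",
                 [("P001", "2006-03-01"), ("P002", "2006-04-15")])
conn.executemany("INSERT INTO lab_results VALUES (?, ?, ?, ?)",
                 [("P001", "CD4", 180, "2006-03-01"),
                  ("P001", "CD4", 320, "2006-09-01"),
                  ("P002", "CD4", 90,  "2006-04-15")])
conn.executemany("INSERT INTO dispensed VALUES (?, ?, ?)",
                 [("P001", "AZT/3TC/NVP", "2006-03-01"),
                  ("P002", "TDF/3TC/EFV", "2006-04-15")])

# Indicator-style aggregation: counts of each ART regimen dispensed.
regimen_counts = dict(conn.execute(
    "SELECT regimen, COUNT(*) FROM dispensed GROUP BY regimen ORDER BY regimen"))

# Per-patient 'snapshot': each patient's most recent CD4 result.
snapshot = conn.execute("""
    SELECT p.patient_id, l.value
    FROM patients p
    JOIN lab_results l ON l.patient_id = p.patient_id
    WHERE l.drawn = (SELECT MAX(drawn) FROM lab_results
                     WHERE patient_id = p.patient_id)
    ORDER BY p.patient_id
""").fetchall()
print(regimen_counts)  # {'AZT/3TC/NVP': 1, 'TDF/3TC/EFV': 1}
print(snapshot)        # [('P001', 320.0), ('P002', 90.0)]
```

The correlated subquery is one simple way to express "latest result per patient"; a production system would also index `lab_results(patient_id, drawn)`.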
TCW: Transcriptome Computational Workbench
Soderlund, Carol; Nelson, William; Willer, Mark; Gang, David R.
2013-01-01
Background The analysis of transcriptome data involves many steps and various programs, along with organization of large amounts of data and results. Without a methodical approach for storage, analysis and query, the resulting ad hoc analysis can lead to human error, loss of data and results, inefficient use of time, and lack of verifiability, repeatability, and extensibility. Methodology The Transcriptome Computational Workbench (TCW) provides Java graphical interfaces for methodical analysis for both single and comparative transcriptome data without the use of a reference genome (e.g. for non-model organisms). The singleTCW interface steps the user through importing transcript sequences (e.g. Illumina) or assembling long sequences (e.g. Sanger, 454, transcripts), annotating the sequences, and performing differential expression analysis using published statistical programs in R. The data, metadata, and results are stored in a MySQL database. The multiTCW interface builds a comparison database by importing sequence and annotation from one or more single TCW databases, executes the ESTscan program to translate the sequences into proteins, and then incorporates one or more clusterings, where the clustering options are to execute the orthoMCL program, compute transitive closure, or import clusters. Both singleTCW and multiTCW allow extensive query and display of the results, where singleTCW displays the alignment of annotation hits to transcript sequences, and multiTCW displays multiple transcript alignments with MUSCLE or pairwise alignments. The query programs can be executed on the desktop for fastest analysis, or from the web for sharing the results. Conclusion It is now affordable to buy a multi-processor machine, and easy to install Java and MySQL. By simply downloading the TCW, the user can interactively analyze, query and view their data. The TCW allows in-depth data mining of the results, which can lead to a better understanding of the transcriptome. 
TCW is freely available from www.agcol.arizona.edu/software/tcw. PMID:23874959
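One of multiTCW's clustering options is to compute the transitive closure of pairwise hits. The abstract does not give TCW's implementation, but the idea can be sketched with a standard union-find; the transcript pairs below are invented for illustration.

```python
def transitive_closure(pairs):
    """Cluster items by transitive closure of pairwise links (union-find)."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in pairs:
        parent[find(a)] = find(b)  # union the two clusters

    clusters = {}
    for x in list(parent):
        clusters.setdefault(find(x), set()).add(x)
    return sorted(map(sorted, clusters.values()))

# Hypothetical pairwise hits between transcripts of two species:
hits = [("spA_t1", "spB_t7"), ("spB_t7", "spA_t3"), ("spA_t2", "spB_t9")]
print(transitive_closure(hits))
# [['spA_t1', 'spA_t3', 'spB_t7'], ['spA_t2', 'spB_t9']]
```

Because closure is transitive, a single weak link merges whole clusters, which is why TCW also offers orthoMCL and imported clusterings as alternatives.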
The purpose of this SOP is to describe the database storage organization, and to describe the sources of data for each database used during the Arizona NHEXAS project and the Border study. Keywords: data; database; organization.
The U.S.-Mexico Border Program is sponsored by t...
NBIC: National Ballast Information Clearinghouse
Smithsonian Environmental Research Center / US Coast Guard. Database Manager: Tami Huber; Senior Analyst/Ecologist: Mark Minton; Data Managers: Ashley Arnwine, Jessica Hardee, Amanda Reynolds; Database Design and Programming / Application Programming: Paul Winterbauer
TabSQL: a MySQL tool to facilitate mapping user data to public databases
2010-01-01
Background With advances in high-throughput genomics and proteomics, it is challenging for biologists to deal with large data files and to map their data to annotations in public databases. Results We developed TabSQL, a MySQL-based application tool, for viewing, filtering and querying data files with large numbers of rows. TabSQL provides functions for downloading and installing table files from public databases including the Gene Ontology database (GO), the Ensembl databases, and genome databases from the UCSC genome bioinformatics site. Any other database that provides tab-delimited flat files can also be imported. The downloaded gene annotation tables can be queried together with users' data in TabSQL using either a graphic interface or command line. Conclusions TabSQL allows queries across the user's data and public databases without programming. It is a convenient tool for biologists to annotate and enrich their data. PMID:20573251
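TabSQL's core operation is a join between a user's tab-delimited data and an imported public annotation table. A sketch of that kind of query follows; TabSQL itself is MySQL-based, so SQLite here, and all genes, fold changes, and GO terms, are stand-ins for illustration only.

```python
import sqlite3

# A user's results table queried together with an imported annotation
# table, in the style of the queries TabSQL generates.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user_hits (gene TEXT, fold_change REAL)")
conn.execute("CREATE TABLE go_annot  (gene TEXT, go_term TEXT)")
conn.executemany("INSERT INTO user_hits VALUES (?, ?)",
                 [("BRCA1", 2.4), ("TP53", -1.8), ("ACTB", 1.1)])
conn.executemany("INSERT INTO go_annot VALUES (?, ?)",
                 [("BRCA1", "GO:0006281 DNA repair"),
                  ("TP53",  "GO:0006915 apoptosis")])

# Annotate the user's strongly changed genes with their GO terms.
annotated = conn.execute("""
    SELECT u.gene, u.fold_change, a.go_term
    FROM user_hits u JOIN go_annot a ON u.gene = a.gene
    WHERE ABS(u.fold_change) > 1.5
    ORDER BY u.gene
""").fetchall()
print(annotated)
# [('BRCA1', 2.4, 'GO:0006281 DNA repair'), ('TP53', -1.8, 'GO:0006915 apoptosis')]
```

TabSQL's contribution is wrapping the download, import, and query steps in a GUI and command line so the biologist never writes SQL by hand.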
Gold, L S; Slone, T H; Backman, G M; Eisenberg, S; Da Costa, M; Wong, M; Manley, N B; Rohrbach, L; Ames, B N
1990-01-01
This paper is the third chronological supplement to the Carcinogenic Potency Database that first appeared in this journal in 1984. We report here results of carcinogenesis bioassays published in the general literature between January 1985 and December 1986, and in Technical Reports of the National Toxicology Program between June 1986 and June 1987. This supplement includes results of 337 long-term, chronic experiments of 121 compounds, and reports the same information about each experiment in the same plot format as the earlier papers, e.g., the species and strain of animal, the route and duration of compound administration, dose level, and other aspects of experimental protocol, histopathology, and tumor incidence, TD50 (carcinogenic potency) and its statistical significance, dose response, opinion of the author about carcinogenicity, and literature citation. The reader needs to refer to the 1984 publication for a guide to the plot of the database, a complete description of the numerical index of carcinogenic potency, and a discussion of the sources of data, the rationale for the inclusion of particular experiments and particular target sites, and the conventions adopted in summarizing the literature. The four plots of the database are to be used together as results published earlier are not repeated. In all, the four plots include results for approximately 4000 experiments on 1050 chemicals. Appendix 14 of this paper is an alphabetical index to all chemicals in the database and indicates which plot(s) each chemical appears in. A combined plot of all results from the four separate papers, that is ordered alphabetically by chemical, is available from the first author, in printed form or on computer tape or diskette. PMID:2351123
Development of a North American paleoclimate pollen-based reconstruction database application
NASA Astrophysics Data System (ADS)
Ladd, Matthew; Mosher, Steven; Viau, Andre
2013-04-01
Recent efforts in synthesizing paleoclimate records across the globe have warranted an effort to standardize the different paleoclimate archives currently available in order to facilitate data-model comparisons and hence improve our estimates of future climate change. The methodology and programs often make it challenging for other researchers to reproduce the results of a reconstruction; there is therefore a need to standardize paleoclimate reconstruction databases in an application specific to proxy data. Here we present a methodology, implemented in the open-source R language and drawing on North American pollen databases (e.g. NAPD, NEOTOMA), in which this application can easily be used to perform new reconstructions and to quickly analyze, output, and plot the data. The application was developed to easily test methodological and spatial/temporal issues that might affect the reconstruction results, and it allows users to spend more time analyzing and interpreting results instead of on data management and processing. Some of the unique features of this R program are its two menu-driven modules, which put the user at ease with the program; the ability to use different pollen sums; a choice among 70 available climate variables; substitution of an appropriate modern climate dataset; a user-friendly regional target domain; temporal resolution criteria; linear interpolation; and many other features for thorough exploratory data analysis. The application program will be available for North American pollen-based reconstructions and will eventually be made available as a package through the CRAN repository by late 2013.
Accessibility and quality of online information for pediatric orthopaedic surgery fellowships.
Davidson, Austin R; Murphy, Robert F; Spence, David D; Kelly, Derek M; Warner, William C; Sawyer, Jeffrey R
2014-12-01
Pediatric orthopaedic fellowship applicants commonly use online-based resources for information on potential programs. Two primary sources are the San Francisco Match (SF Match) database and the Pediatric Orthopaedic Society of North America (POSNA) database. We sought to determine the accessibility and quality of information that could be obtained by using these 2 sources. The online databases of the SF Match and POSNA were reviewed to determine the availability of embedded program links or external links for the included programs. If not available in the SF Match or POSNA data, Web sites for listed programs were located with a Google search. All identified Web sites were analyzed for accessibility, content volume, and content quality. At the time of online review, 50 programs, offering 68 positions, were listed in the SF Match database. Although 46 programs had links included with their information, 36 (72%) of them simply listed http://www.sfmatch.org as their unique Web site. Ten programs (20%) had external links listed, but only 2 (4%) linked directly to the fellowship web page. The POSNA database does not list any links to the 47 programs it lists, which offer 70 positions. On the basis of a Google search of the 50 programs listed in the SF Match database, web pages were found for 35. Of programs with independent web pages, all had a description of the program and 26 (74%) described their application process. Twenty-nine (83%) listed research requirements, 22 (63%) described the rotation schedule, and 12 (34%) discussed the on-call expectations. A contact telephone number and/or email address was provided by 97% of programs. Twenty (57%) listed both the coordinator and fellowship director, 9 (26%) listed the coordinator only, 5 (14%) listed the fellowship director only, and 1 (3%) had no contact information given. 
The SF Match and POSNA databases provide few direct links to fellowship Web sites, and individual program Web sites either do not exist or do not effectively convey information about the programs. Improved accessibility and accurate information online would allow potential applicants to obtain information about pediatric fellowships in a more efficient manner.
DOE technology information management system database study report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Widing, M.A.; Blodgett, D.W.; Braun, M.D.
1994-11-01
To support the missions of the US Department of Energy (DOE) Special Technologies Program, Argonne National Laboratory is defining the requirements for an automated software system that will search electronic databases on technology. This report examines the work done and results to date. Argonne studied existing commercial and government sources of technology databases in five general areas: on-line services, patent database sources, government sources, aerospace technology sources, and general technology sources. First, it conducted a preliminary investigation of these sources to obtain information on the content, cost, frequency of updates, and other aspects of their databases. The Laboratory then performed detailed examinations of at least one source in each area. On this basis, Argonne recommended which databases should be incorporated in DOE's Technology Information Management System.
Just-in-time Database-Driven Web Applications
2003-01-01
"Just-in-time" database-driven Web applications are inexpensive, quickly-developed software that can be put to many uses within a health care organization. Database-driven Web applications garnered 73873 hits on our system-wide intranet in 2002. They enabled collaboration and communication via user-friendly Web browser-based interfaces for both mission-critical and patient-care-critical functions. Nineteen database-driven Web applications were developed. The application categories that comprised 80% of the hits were results reporting (27%), graduate medical education (26%), research (20%), and bed availability (8%). The mean number of hits per application was 3888 (SD = 5598; range, 14-19879). A model is described for just-in-time database-driven Web application development and an example given with a popular HTML editor and database program. PMID:14517109
DOT National Transportation Integrated Search
2001-12-01
The purpose of this research project was to provide a systematic evaluation of the performance of Florida's commuter assistance programs from two perspectives: Impact on the commuting patterns and awareness of the general public; and Impact on the co...
Correlated Attack Modeling (CAM)
2003-10-01
describing attack models to a scenario recognition engine, a prototype of such an engine was developed, using components of the EMERALD intrusion...content. Results – The attacker gains information enabling remote access to database (i.e., privileged login information, database layout to allow...engine that uses attack specifications written in CAML. The implementation integrates two advanced technologies developed in the EMERALD program [27, 31
Scientific information repository assisting reflectance spectrometry in legal medicine.
Belenki, Liudmila; Sterzik, Vera; Bohnert, Michael; Zimmermann, Klaus; Liehr, Andreas W
2012-06-01
Reflectance spectrometry is a fast and reliable method for the characterization of human skin if the spectra are analyzed with respect to a physical model describing the optical properties of human skin. For a field study performed at the Institute of Legal Medicine and the Freiburg Materials Research Center of the University of Freiburg, a scientific information repository has been developed, which is a variant of an electronic laboratory notebook and assists in the acquisition, management, and high-throughput analysis of reflectance spectra in heterogeneous research environments. At the core of the repository is a database management system hosting the master data. It is filled with primary data via a graphical user interface (GUI) programmed in Java, which also enables the user to browse the database and access the results of data analysis. The latter is carried out via Matlab, Python, and C programs, which retrieve the primary data from the scientific information repository, perform the analysis, and store the results in the database for further usage.
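The workflow above is a round trip: analysis programs retrieve primary spectra from the repository's master database, compute derived quantities, and store the results back for further usage. A minimal sketch of that round trip follows, with SQLite standing in for the repository's database management system and a simple mean reflectance standing in for the physical skin model; the schema and numbers are illustrative assumptions.

```python
import sqlite3
import statistics

# Repository stand-in: a table of primary spectra and a table of results.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE spectra (spectrum_id INTEGER, wavelength_nm REAL,"
             " reflectance REAL)")
conn.execute("CREATE TABLE results (spectrum_id INTEGER, mean_reflectance REAL)")
conn.executemany("INSERT INTO spectra VALUES (?, ?, ?)",
                 [(1, 450.0, 0.32), (1, 550.0, 0.41), (1, 650.0, 0.56)])

# 1) Retrieve primary data from the repository.
values = [r for (r,) in conn.execute(
    "SELECT reflectance FROM spectra WHERE spectrum_id = 1")]
# 2) Analyze (a trivial placeholder for the optical model fit).
mean_r = statistics.mean(values)
# 3) Store the derived result back in the database.
conn.execute("INSERT INTO results VALUES (?, ?)", (1, mean_r))
print(round(mean_r, 3))  # 0.43
```

In the actual repository this loop was distributed across Matlab, Python, and C programs, all reading from and writing to the same master database.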
Data Mining the Ogle-II I-band Database for Eclipsing Binary Stars
NASA Astrophysics Data System (ADS)
Ciocca, M.
2013-08-01
The OGLE I-band database is a searchable database of quality photometric data available to the public. During Phase 2 of the experiment, known as "OGLE-II", I-band observations were made over a period of approximately 1,000 days, resulting in over 10^10 measurements of more than 40 million stars. This was accomplished by using a filter with a passband near the standard Cousins Ic. The database of these observations is fully searchable using the MySQL database engine, and provides the magnitude measurements and their uncertainties. In this work, a program of data mining the OGLE I-band database was performed, resulting in the discovery of 42 previously unreported eclipsing binaries. Using the software package Peranso (Vanmunster 2011) to analyze the light curves obtained from OGLE-II, the eclipsing types, the epochs, and the periods of these eclipsing variables were determined to one part in 10^6. A preliminary attempt to model the physical parameters of these binaries was also performed, using the Binary Maker 3 software (Bradstreet and Steelman 2004).
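Peranso offers period-search methods of the phase-dispersion family; the abstract does not state which was used, so the following is only a generic phase-dispersion-minimization sketch on synthetic data (light curve, sampling, and trial grid all invented), not a reconstruction of the study's analysis.

```python
def phase_dispersion(times, mags, period, nbins=10):
    """Sum of within-bin variances of the phase-folded light curve;
    a clean fold at the true period gives a small value."""
    bins = [[] for _ in range(nbins)]
    for t, m in zip(times, mags):
        phase = (t / period) % 1.0
        bins[min(int(phase * nbins), nbins - 1)].append(m)
    total = 0.0
    for b in bins:
        if len(b) > 1:
            mu = sum(b) / len(b)
            total += sum((m - mu) ** 2 for m in b)
    return total

# Synthetic eclipsing-binary-like light curve with a known 2.5-day period:
# magnitude drops by 0.8 mag during the first 10% of each phase cycle.
true_period = 2.5
times = [0.37 * i for i in range(400)]
mags = [15.0 + (0.8 if ((t / true_period) % 1.0) < 0.1 else 0.0)
        for t in times]

trials = [1 + k / 200 for k in range(600)]  # trial periods 1.000 .. 3.995 d
best = min(trials, key=lambda p: phase_dispersion(times, mags, p))
print(best)  # 2.5
```

Real searches refine the grid around the minimum to reach the one-part-in-10^6 precision quoted above, and inspect the folded curve to classify the eclipse type.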
The Perfect Marriage: Integrated Word Processing and Data Base Management Programs.
ERIC Educational Resources Information Center
Pogrow, Stanley
1983-01-01
Discussion of database integration and how it operates includes recommendations on compatible brand name word processing and database management programs, and a checklist for evaluating essential and desirable features of the available programs. (MBR)
SUPERSITES INTEGRATED RELATIONAL DATABASE (SIRD)
As part of EPA's Particulate Matter (PM) Supersites Program (Program), the University of Maryland designed and developed the Supersites Integrated Relational Database (SIRD). Measurement data in SIRD include comprehensive air quality data from the 7 Supersite program locations f...
Okuma, E
1994-01-01
With the introduction of the Cumulative Index to Nursing and Allied Health Literature (CINAHL) on CD-ROM, research was initiated to compare coverage of nursing journals by CINAHL and MEDLINE in this format, expanding on previous comparison of these databases in print and online. The study assessed search results for eight topics in 1989 and 1990 citations in both databases, each produced by SilverPlatter. Results were tallied and analyzed for number of records retrieved, unique and overlapping records, relevance, and appropriateness. An overall precision score was developed. The goal of the research was to develop quantifiable tools to help determine which database to purchase for an academic library serving an undergraduate nursing program.
Carey, George B; Kazantsev, Stephanie; Surati, Mosmi; Rolle, Cleo E; Kanteti, Archana; Sadiq, Ahad; Bahroos, Neil; Raumann, Brigitte; Madduri, Ravi; Dave, Paul; Starkey, Adam; Hensing, Thomas; Husain, Aliya N; Vokes, Everett E; Vigneswaran, Wickii; Armato, Samuel G; Kindler, Hedy L; Salgia, Ravi
2012-01-01
Objective An area of need in cancer informatics is the ability to store images in a comprehensive database as part of translational cancer research. To meet this need, we have implemented a novel tandem database infrastructure that facilitates image storage and utilisation. Background We had previously implemented the Thoracic Oncology Program Database Project (TOPDP) database for our translational cancer research needs. While useful for many research endeavours, it is unable to store images, hence our need to implement an imaging database which could communicate easily with the TOPDP database. Methods The Thoracic Oncology Research Program (TORP) imaging database was designed using the Research Electronic Data Capture (REDCap) platform, which was developed by Vanderbilt University. To demonstrate proof of principle and evaluate utility, we performed a retrospective investigation into tumour response for malignant pleural mesothelioma (MPM) patients treated at the University of Chicago Medical Center with either of two analogous chemotherapy regimens and consented to at least one of two UCMC IRB protocols, 9571 and 13473A. Results A cohort of 22 MPM patients was identified using clinical data in the TOPDP database. After measurements were acquired, two representative CT images and 0–35 histological images per patient were successfully stored in the TORP database, along with clinical and demographic data. Discussion We implemented the TORP imaging database to be used in conjunction with our comprehensive TOPDP database. While it requires an additional effort to use two databases, our database infrastructure facilitates more comprehensive translational research. Conclusions The investigation described herein demonstrates the successful implementation of this novel tandem imaging database infrastructure, as well as the potential utility of investigations enabled by it. 
The data model presented here can be utilised as the basis for further development of other larger, more streamlined databases in the future. PMID:23103606
ERIC Educational Resources Information Center
Hammonds, S. J.
1990-01-01
A technique for the numerical identification of bacteria using normalized likelihoods calculated from a probabilistic database is described, and the principles of the technique are explained. The listing of the computer program is included. Specimen results from the program, and examples of how they should be interpreted, are given. (KR)
ERIC Educational Resources Information Center
Costa, Joana M.; Miranda, Guilhermina L.
2017-01-01
This paper presents the results of a systematic review of the literature, including a meta-analysis, about the effectiveness of the use of Alice software in programming learning when compared to the use of a conventional programming language. Our research included studies published between the years 2000 and 2014 in the main databases. We gathered…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong Hoon Shin; Young Wook Lee; Young Ho Cho
2006-07-01
In the nuclear energy field there are many topics, such as dose evaluation and dose management, with which even practitioners in the field are not fully familiar. Considerable effort has therefore gone into compiling the knowledge and data needed to understand them, and even where data were available, applying them to specific cases remained difficult. Moreover, dose evaluation programs to date have been console-type applications that are not easy for beginners to use. To overcome these difficulties, a window-based integrated program and database management system was developed in our research laboratory. The program, called INSREC, consists of four sub-programs: INSREC-NOM, INSREC-ACT, INSREC-MED, and INSREC-EXI. At the ICONE 11 conference, an INSREC program (ICONE-36203) that evaluates the on/off-site dose of a nuclear power plant in normal operation was introduced. The upgraded INSREC program to be presented at the ICONE 14 conference has three additional codes compared with the previously presented version. These sub-programs can evaluate the on/off-site dose of a nuclear power plant in accident cases; they also provide dose evaluation and management functions for hospitals and an expert system based on knowledge of the nuclear energy/radiation field. INSREC-NOM is composed of a source term evaluation program, an atmospheric diffusion factor evaluation program, an off-site dose evaluation program, and an on-site database program. INSREC-ACT is composed of an on/off-site dose evaluation program and a result analysis program, and INSREC-MED is composed of a workers/patients dose database program and a dose evaluation program for the treatment room. The final sub-program, INSREC-EXI, is composed of a database searching program based on artificial intelligence, an instruction program, and FAQ/Q&A boards.
Each program was developed mainly in Visual C++ with Microsoft Access. To verify reliability, suitable comparison programs were selected: AZAP for on/off-site dose evaluation during normal reactor operation and Stardose for on/off-site dose evaluation in accidents, while the MCNP code was used for dose evaluation and management in the hospital. Each comparison result was acceptable within the error analysis. Based on these verification results, it was concluded that the INSREC program is acceptably reliable for dose calculation and can provide much useful data for the sites. To make INSREC available, a server system was constructed so that users could run the program over the network; user reactions have been satisfactory. In future work, effort will be devoted to improving the user interface, and more data will be provided to more users through database supplementation and management. (authors)
76 FR 77504 - Notice of Submission for OMB Review
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-13
... of Review: Extension. Title of Collection: Charter Schools Program Grant Award Database. OMB Control... collect data necessary for the Charter Schools Program (CSP) Grant Award Database. The CSP is authorized... award information from grantees (State agencies and some schools) for a database of current CSP-funded...
Centralized database for interconnection system design. [for spacecraft
NASA Technical Reports Server (NTRS)
Billitti, Joseph W.
1989-01-01
A database application called DFACS (Database, Forms and Applications for Cabling and Systems) is described. The objective of DFACS is to improve the speed and accuracy of interconnection system information flow during the design and fabrication stages of a project, while simultaneously supporting both the horizontal (end-to-end wiring) and the vertical (wiring by connector) design stratagems used by the Jet Propulsion Laboratory (JPL) project engineering community. The DFACS architecture is centered around a centralized database and program methodology which emulates the manual design process hitherto used at JPL. DFACS has been tested and successfully applied to existing JPL hardware tasks with a resulting reduction in schedule time and costs.
Basner, Jodi E; Theisz, Katrina I; Jensen, Unni S; Jones, C David; Ponomarev, Ilya; Sulima, Pawel; Jo, Karen; Eljanne, Mariam; Espey, Michael G; Franca-Koh, Jonathan; Hanlon, Sean E; Kuhn, Nastaran Z; Nagahara, Larry A; Schnell, Joshua D; Moore, Nicole M
2013-12-01
Development of effective quantitative indicators and methodologies to assess the outcomes of cross-disciplinary collaborative initiatives has the potential to improve scientific program management and scientific output. This article highlights an example of a prospective evaluation that has been developed to monitor and improve progress of the National Cancer Institute Physical Sciences-Oncology Centers (PS-OC) program. Study data, including collaboration information, was captured through progress reports and compiled using the web-based analytic database: Interdisciplinary Team Reporting, Analysis, and Query Resource. Analysis of collaborations was further supported by data from the Thomson Reuters Web of Science database, MEDLINE database, and a web-based survey. Integration of novel and standard data sources was augmented by the development of automated methods to mine investigator pre-award publications, assign investigator disciplines, and distinguish cross-disciplinary publication content. The results highlight increases in cross-disciplinary authorship collaborations from pre- to post-award years among the primary investigators and confirm that a majority of cross-disciplinary collaborations have resulted in publications with cross-disciplinary content that rank in the top third of their field. With these evaluation data, PS-OC Program officials have provided ongoing feedback to participating investigators to improve center productivity and thereby facilitate a more successful initiative. Future analysis will continue to expand these methods and metrics to adapt to new advances in research evaluation and changes in the program.
Discovering Knowledge from Noisy Databases Using Genetic Programming.
ERIC Educational Resources Information Center
Wong, Man Leung; Leung, Kwong Sak; Cheng, Jack C. Y.
2000-01-01
Presents a framework that combines Genetic Programming and Inductive Logic Programming, two approaches in data mining, to induce knowledge from noisy databases. The framework is based on a formalism of logic grammars and is implemented as a data mining system called LOGENPRO (Logic Grammar-based Genetic Programming System). (Contains 34…
Developing a Systematic Patent Search Training Program
ERIC Educational Resources Information Center
Zhang, Li
2009-01-01
This study aims to develop a systematic patent training program using patent analysis and citation analysis techniques applied to patents held by the University of Saskatchewan. The results indicate that the target audience will be researchers in life sciences, and aggregated patent database searching and advanced search techniques should be…
The Steward Observatory asteroid relational database
NASA Technical Reports Server (NTRS)
Sykes, Mark V.; Alvarezdelcastillo, Elizabeth M.
1991-01-01
The Steward Observatory Asteroid Relational Database (SOARD) was created as a flexible tool for undertaking studies of asteroid populations and sub-populations, to probe the biases intrinsic to asteroid databases, to ascertain the completeness of data pertaining to specific problems, to aid in the development of observational programs, and to develop pedagogical materials. To date, SOARD has compiled an extensive list of data available on asteroids and made it accessible through a single menu-driven database program. Users may obtain tailored lists of asteroid properties for any subset of asteroids or output files which are suitable for plotting spectral data on individual asteroids. The program has online help as well as user and programmer documentation manuals. The SOARD already has provided data to fulfill requests by members of the astronomical community. The SOARD continues to grow as data is added to the database and new features are added to the program.
17 CFR 38.552 - Elements of an acceptable audit trail program.
Code of Federal Regulations, 2014 CFR
2014-04-01
... of the order shall also be captured. (b) Transaction history database. A designated contract market's audit trail program must include an electronic transaction history database. An adequate transaction history database includes a history of all trades executed via open outcry or via entry into an electronic...
17 CFR 38.552 - Elements of an acceptable audit trail program.
Code of Federal Regulations, 2013 CFR
2013-04-01
... of the order shall also be captured. (b) Transaction history database. A designated contract market's audit trail program must include an electronic transaction history database. An adequate transaction history database includes a history of all trades executed via open outcry or via entry into an electronic...
A Relational Algebra Query Language for Programming Relational Databases
ERIC Educational Resources Information Center
McMaster, Kirby; Sambasivam, Samuel; Anderson, Nicole
2011-01-01
In this paper, we describe a Relational Algebra Query Language (RAQL) and Relational Algebra Query (RAQ) software product we have developed that allows database instructors to teach relational algebra through programming. Instead of defining query operations using mathematical notation (the approach commonly taken in database textbooks), students…
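The idea of teaching relational algebra through programming, as described above, can be sketched in plain Python. This is an illustrative sketch only, not RAQL's actual syntax: relations are modeled as lists of dicts, and the operators σ (select), π (project), and natural join are ordinary functions.

```python
# Relational-algebra sketch: relations as lists of dicts.
# Hypothetical illustration -- not the RAQL language from the paper.

def select(relation, predicate):
    """sigma: keep rows satisfying the predicate."""
    return [row for row in relation if predicate(row)]

def project(relation, attrs):
    """pi: keep only the named attributes, removing duplicates."""
    seen, out = set(), []
    for row in relation:
        key = tuple((a, row[a]) for a in attrs)
        if key not in seen:
            seen.add(key)
            out.append(dict(key))
    return out

def join(r, s):
    """Natural join on shared attribute names."""
    shared = set(r[0]) & set(s[0]) if r and s else set()
    return [{**a, **b} for a in r for b in s
            if all(a[k] == b[k] for k in shared)]

students = [{"sid": 1, "name": "Ann"}, {"sid": 2, "name": "Bo"}]
enrolled = [{"sid": 1, "course": "DB"}, {"sid": 1, "course": "OS"}]

# pi_name(sigma_{course='DB'}(students |x| enrolled))
db_students = project(select(join(students, enrolled),
                             lambda r: r["course"] == "DB"),
                      ["name"])
print(db_students)  # [{'name': 'Ann'}]
```

Composing the three calls mirrors how a query is written as a nested algebra expression rather than in mathematical notation.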
ERIC Educational Resources Information Center
Myint-U, Athi; O'Donnell, Lydia; Phillips, Dawna
2012-01-01
This technical brief describes updates to a database of dropout prevention programs and policies in 2006/07 created by the Regional Education Laboratory (REL) Northeast and Islands and described in the Issues & Answers report, "Piloting a searchable database of dropout prevention programs in nine low-income urban school districts in the…
Bagger, Frederik Otzen; Sasivarevic, Damir; Sohi, Sina Hadi; Laursen, Linea Gøricke; Pundhir, Sachin; Sønderby, Casper Kaae; Winther, Ole; Rapin, Nicolas; Porse, Bo T.
2016-01-01
Research on human and murine haematopoiesis has resulted in a vast number of gene-expression data sets that can potentially answer questions regarding normal and aberrant blood formation. To researchers and clinicians with limited bioinformatics experience, these data have remained available, yet largely inaccessible. Current databases provide information about gene expression but fail to answer key questions regarding co-regulation, genetic programs or effect on patient survival. To address these shortcomings, we present BloodSpot (www.bloodspot.eu), which includes and greatly extends our previously released database HemaExplorer, a database of gene expression profiles from FACS sorted healthy and malignant haematopoietic cells. A revised interactive interface simultaneously provides a plot of gene expression along with a Kaplan–Meier analysis and a hierarchical tree depicting the relationship between different cell types in the database. The database now includes 23 high-quality curated data sets relevant to normal and malignant blood formation and, in addition, we have assembled and built a unique integrated data set, BloodPool. BloodPool contains more than 2000 samples assembled from six independent studies on acute myeloid leukemia. Furthermore, we have devised a robust sample integration procedure that allows for sensitive comparison of user-supplied patient samples in a well-defined haematopoietic cellular space. PMID:26507857
NASA Astrophysics Data System (ADS)
Kong, Xiang-Zhao; Tutolo, Benjamin M.; Saar, Martin O.
2013-02-01
SUPCRT92 is a widely used software package for calculating the standard thermodynamic properties of minerals, gases, aqueous species, and reactions. However, it is labor-intensive and error-prone to use it directly to produce databases for geochemical modeling programs such as EQ3/6, the Geochemist's Workbench, and TOUGHREACT. DBCreate is a SUPCRT92-based software program written in FORTRAN90/95 and was developed in order to produce the required databases for these programs in a rapid and convenient way. This paper describes the overall structure of the program and provides detailed usage instructions.
Enabling On-Demand Database Computing with MIT SuperCloud Database Management System
2015-09-15
arc.liv.ac.uk/trac/SGE) provides these services and is independent of programming language (C, Fortran, Java, Matlab, etc.) or parallel programming...a MySQL database to store DNS records. The DNS records are controlled via a simple web service interface that allows records to be created
A Tutorial in Creating Web-Enabled Databases with Inmagic DB/TextWorks through ODBC.
ERIC Educational Resources Information Center
Breeding, Marshall
2000-01-01
Explains how to create Web-enabled databases. Highlights include Inmagic's DB/Text WebPublisher product called DB/TextWorks; ODBC (Open Database Connectivity) drivers; Perl programming language; HTML coding; Structured Query Language (SQL); Common Gateway Interface (CGI) programming; and examples of HTML pages and Perl scripts. (LRW)
Martin, Jennifer; Worede, Leah; Islam, Sameer
2016-01-01
Objective. To conduct a systematic review of reports of pharmacy student research programs that describes the programs and resulting publications or presentations. Methods. To be eligible for the review, reports had to be in English and indicate that students were required to collect, analyze data, and report or present findings. The outcome variables were extramural posters/presentations and publications. Results. Database searches resulted in identification of 13 reports for 12 programs. Two-thirds were reports of projects required for a course or for graduation, and the remaining third were elective (participation was optional). Extramural posters resulted from 75% of the programs and publications from 67%. Conclusion. Although reporting on the outcomes of student research programs is limited, three-quarters of the programs indicated that extramural presentations, publications, or both resulted from student research. Additional research is needed to identify relevant outcomes of student research programs in pharmacy. PMID:27667837
Network Configuration of Oracle and Database Programming Using SQL
NASA Technical Reports Server (NTRS)
Davis, Melton; Abdurrashid, Jibril; Diaz, Philip; Harris, W. C.
2000-01-01
A database can be defined as a collection of information organized in such a way that it can be retrieved and used. A database management system (DBMS) can further be defined as the tool that enables us to manage and interact with the database. The Oracle 8 Server is a state-of-the-art information management environment. It is a repository for very large amounts of data, and gives users rapid access to that data. The Oracle 8 Server allows for sharing of data between applications; the information is stored in one place and used by many systems. My research will focus primarily on SQL (Structured Query Language) programming. SQL is the way you define and manipulate data in Oracle's relational database. SQL is the industry standard adopted by all database vendors. When programming with SQL, you work on sets of data (i.e., information is not processed one record at a time).
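The set-at-a-time character of SQL described in this abstract can be illustrated with a short, self-contained sketch. SQLite stands in for the Oracle 8 Server here purely so the example runs anywhere; the table and values are hypothetical.

```python
import sqlite3

# One declarative SQL statement operates on the whole set of matching
# rows at once, rather than looping record by record.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE parts (name TEXT, qty INTEGER)")
con.executemany("INSERT INTO parts VALUES (?, ?)",
                [("bolt", 40), ("nut", 120), ("washer", 5)])

# Restock every low-inventory part in a single set-oriented UPDATE.
con.execute("UPDATE parts SET qty = qty + 100 WHERE qty < 50")

rows = con.execute("SELECT name, qty FROM parts ORDER BY name").fetchall()
print(rows)  # [('bolt', 140), ('nut', 120), ('washer', 105)]
```

The `WHERE qty < 50` clause selects the entire set of affected rows declaratively; no explicit iteration appears in the program.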
Implementation of a data management software system for SSME test history data
NASA Technical Reports Server (NTRS)
Abernethy, Kenneth
1986-01-01
The implementation of a software system for managing Space Shuttle Main Engine (SSME) test/flight historical data is presented. The software system uses the database management system RIM7 for primary data storage and routine data management, but includes several FORTRAN programs, described here, which provide customized access to the RIM7 database. The consolidation, modification, and transfer of data from the database THIST to the RIM7 database THISRM are discussed. The RIM7 utility modules for generating some standard reports from THISRM and performing some routine updating and maintenance are briefly described. The FORTRAN accessing programs described include programs for initial loading of large data sets into the database, capturing data from files for database inclusion, and producing specialized statistical reports which cannot be provided by the RIM7 report generator utility. An expert system tutorial, constructed using the expert system shell product INSIGHT2, is described. Finally, a potential expert system, which would analyze data in the database, is outlined. This system could use INSIGHT2 as well and would take advantage of RIM7's compatibility with the microcomputer database system RBase 5000.
Kurtz, M.; Bennett, T.; Garvin, P.; Manuel, F.; Williams, M.; Langreder, S.
1991-01-01
Because of the rapid evolution of the heart, heart/lung, liver, kidney and kidney/pancreas transplant programs at our institution, and because of a lack of an existing comprehensive database, we were required to develop a computerized management information system capable of supporting both clinical and research requirements of a multifaceted transplant program. SLUMIS (ST. LOUIS UNIVERSITY MULTI-ORGAN INFORMATION SYSTEM) was developed for the following reasons: 1) to comply with the reporting requirements of various transplant registries, 2) for reporting to an increasing number of government agencies and insurance carriers, 3) to obtain updates of our operative experience at regular intervals, 4) to integrate the Histocompatibility and Immunogenetics Laboratory (HLA) for online test result reporting, and 5) to facilitate clinical investigation. PMID:1807741
Morgan, Perri; Humeniuk, Katherine M; Everett, Christine M
2015-09-01
As physician assistant (PA) roles expand and diversify in the United States and around the world, there is a pressing need for research that illuminates how PAs may best be selected, educated, and used in health systems to maximize their potential contributions to health. Physician assistant education programs are well positioned to advance this research by collecting and organizing data on applicants, students, and graduates. Our PA program is creating a permanent longitudinal education database for research that contains extensive student-level data. This database will allow us to conduct research on all phases of PA education, from admission processes through the professional practice of our graduates. In this article, we describe our approach to constructing a longitudinal student-level research database and discuss the strengths and limitations of longitudinal databases for research on education and the practice of PAs. We hope to encourage other PA programs to initiate similar projects so that, in the future, data can be combined for use in multi-institutional research that can contribute to improved education for PA students across programs.
NASA Technical Reports Server (NTRS)
Cotter, Gladys A.
1993-01-01
Foreign competitors are challenging the world leadership of the U.S. aerospace industry, and increasingly tight budgets everywhere make international cooperation in aerospace science necessary. The NASA STI Program has as part of its mission to support NASA R&D, and to that end has developed a knowledge base of aerospace-related information known as the NASA Aerospace Database. The NASA STI Program is already involved in international cooperation with NATO/AGARD/TIP, CENDI, ICSU/ICSTI, and the U.S. Japan Committee on STI. With the new more open political climate, the perceived dearth of foreign information in the NASA Aerospace Database, and the development of the ESA database and DELURA, the German databases, the NASA STI Program is responding by sponsoring workshops on foreign acquisitions and by increasing its cooperation with international partners and with other U.S. agencies. The STI Program looks to the future of improved database access through networking and a GUI; new media; optical disk, video, and full text; and a Technology Focus Group that will keep the NASA STI Program current with technology.
ProteinWorldDB: querying radical pairwise alignments among protein sets from complete genomes.
Otto, Thomas Dan; Catanho, Marcos; Tristão, Cristian; Bezerra, Márcia; Fernandes, Renan Mathias; Elias, Guilherme Steinberger; Scaglia, Alexandre Capeletto; Bovermann, Bill; Berstis, Viktors; Lifschitz, Sergio; de Miranda, Antonio Basílio; Degrave, Wim
2010-03-01
Many analyses in modern biological research are based on comparisons between biological sequences, resulting in functional, evolutionary and structural inferences. When large numbers of sequences are compared, heuristics are often used resulting in a certain lack of accuracy. In order to improve and validate results of such comparisons, we have performed radical all-against-all comparisons of 4 million protein sequences belonging to the RefSeq database, using an implementation of the Smith-Waterman algorithm. This extremely intensive computational approach was made possible with the help of World Community Grid, through the Genome Comparison Project. The resulting database, ProteinWorldDB, which contains coordinates of pairwise protein alignments and their respective scores, is now made available. Users can download, compare and analyze the results, filtered by genomes, protein functions or clusters. ProteinWorldDB is integrated with annotations derived from Swiss-Prot, Pfam, KEGG, NCBI Taxonomy database and gene ontology. The database is a unique and valuable asset, representing a major effort to create a reliable and consistent dataset of cross-comparisons of the whole protein content encoded in hundreds of completely sequenced genomes using a rigorous dynamic programming approach. The database can be accessed through http://proteinworlddb.org
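The Smith-Waterman algorithm mentioned in this abstract is a dynamic-programming method for local sequence alignment. A toy score-only version (linear gap penalty, fixed match/mismatch scores rather than the BLOSUM-style substitution matrices real protein comparisons use) can be sketched as follows:

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Toy Smith-Waterman local alignment score with a linear gap
    penalty. Illustrates the DP recurrence only; production protein
    comparisons use substitution matrices such as BLOSUM62 and
    affine gap penalties."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            # The zero floor is what makes the alignment *local*.
            H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
            best = max(best, H[i][j])
    return best

print(smith_waterman("AAA", "AAG"))  # 4
```

The zero in the recurrence lets an alignment restart anywhere, which is the key difference from the global Needleman-Wunsch algorithm; it is also why all-against-all runs like the one described here are so computationally intensive (quadratic time per sequence pair).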
Which Fecal Immunochemical Test Should I Choose?
Daly, Jeanette M.; Xu, Yinghui; Levy, Barcey T.
2017-01-01
Objectives: To summarize the fecal immunochemical tests (FITs) available in the United States, the 2014 pathology proficiency testing (PT) program FIT results, and the literature related to the test characteristics of FITs available in the United States to detect advanced adenomatous polyps (AAP) and/or colorectal cancer (CRC). Methods: Detailed review of the Food and Drug Administration’s Clinical Laboratory Improvement Amendments (CLIA) database of fecal occult blood tests, the 2014 FIT PT program results, and the literature related to FIT accuracy. Results: A search of the CLIA database identified 65 FITs, with 26 FITs available for purchase in the United States. Thirteen of these FITs were evaluated on a regular basis by PT programs, with an overall sensitivity of 99.1% and specificity of 99.2% for samples spiked with hemoglobin. Automated FITs had better sensitivity and specificity than CLIA-waived FITs for detection of AAP and CRC in human studies using colonoscopy as the gold standard. Conclusion: Although many FITs are available in the United States, few have been tested in proficiency testing programs. Even fewer have data in humans on sensitivity and specificity for AAP or CRC. Our review indicates that automated FITs have the best test characteristics for AAP and CRC. PMID:28447866
Rural Water Quality Database: Educational Program to Collect Information.
ERIC Educational Resources Information Center
Lemley, Ann; Wagenet, Linda
1993-01-01
A New York State project created a water quality database for private drinking water supplies, using the statewide educational program to collect the data. Another goal was to develop this program so rural residents could increase their knowledge of water supply management. (Author)
The establishment of the atmospheric emission inventories of the ESCOMPTE program
NASA Astrophysics Data System (ADS)
François, S.; Grondin, E.; Fayet, S.; Ponche, J.-L.
2005-03-01
Within the frame of the ESCOMPTE program, a spatial emission inventory and an emission database aimed at tropospheric photochemistry intercomparison modeling have been developed under the scientific supervision of the LPCA with the help of the regional air quality network AIRMARAIX. This inventory has been established for all categories of sources (stationary, mobile and biogenic) over a domain of 19,600 km² centered on the cities of Marseilles and Aix-en-Provence in the southeastern part of France, with a spatial resolution of 1 km². A yearly inventory for 1999 has been established, and hourly emission inventories have been produced for 23 days of June and July 2000 and 2001, corresponding to the intensive measurement periods. The 104 chemical species in the inventory have been selected to be relevant to photochemistry modeling according to the available data; the full species list numbers 216, which will allow other future applications of this database. This database is presently the most detailed and complete regional emission database in France. In addition, the database structure and the emission calculation modules have been designed to ensure better sustainability and upgradeability, and are provided with appropriate maintenance software. The general organization and method are summarized, and the results obtained for both yearly and hourly emissions are detailed and discussed. Some comparisons have been performed with existing results for this region to ensure the congruency of the results, confirming the relevance and consistency of the ESCOMPTE emission inventory.
NASA Astrophysics Data System (ADS)
Wiacek, Daniel; Kudla, Ignacy M.; Pozniak, Krzysztof T.; Bunkowski, Karol
2005-02-01
The main task of the RPC (Resistive Plate Chamber) Muon Trigger monitoring system, designed for the CMS (Compact Muon Solenoid) experiment at the LHC at CERN, Geneva, is the visualization of data describing the structure of the electronic trigger system (e.g. geometry and imagery) and the way it operates, and the automatic generation of files with VHDL source code used for programming the FPGA matrices. In the near future, the system will enable the analysis of the condition, operation and efficiency of individual Muon Trigger elements, the registration of information about Muon Trigger devices, and the presentation of previously obtained results in an interactive presentation layer. A broad variety of database and programming concepts for the design of the Muon Trigger monitoring system is presented in this article. The structure and architecture of the system and its principle of operation are described. One of the ideas behind this system is to use object-oriented programming and design techniques to describe real electronics systems through abstract object models stored in a database, and to implement these models in the Java language.
2016-03-24
Corporation found that increases in schedule effort tend to be the reason for increases in the cost of acquiring a new weapons system due to, at a minimum...in-depth finance and schedule data for selected programs (Brown et al., 2015). We also give extra focus on Research, Development, Test & Evaluation...we create and employ an entirely new database. The database we utilize for our research is a database originally built by the RAND Corporation for
ERIC Educational Resources Information Center
Liu, Xia; Liu, Lai C.; Koong, Kai S.; Lu, June
2003-01-01
Analysis of 300 information technology job postings in two Internet databases identified the following skill categories: programming languages (Java, C/C++, and Visual Basic were most frequent); website development (57% sought SQL and HTML skills); databases (nearly 50% required Oracle); networks (only Windows NT or wide-area/local-area networks);…
A manual for a laboratory information management system (LIMS) for light stable isotopes
Coplen, Tyler B.
1997-01-01
The reliability and accuracy of isotopic data can be improved by utilizing database software to (i) store information about samples, (ii) store the results of mass spectrometric isotope-ratio analyses of samples, (iii) calculate analytical results using standardized algorithms stored in a database, (iv) normalize stable isotopic data to international scales using isotopic reference materials, and (v) generate multi-sheet paper templates for convenient sample loading of automated mass-spectrometer sample preparation manifolds. Such a database program is presented herein. Major benefits of this system include (i) an increase in laboratory efficiency, (ii) reduction in the use of paper, (iii) reduction in workload due to the elimination or reduction of retyping of data by laboratory personnel, and (iv) decreased errors in data reported to sample submitters. Such a database provides a complete record of when and how often laboratory reference materials have been analyzed and provides a record of what correction factors have been used through time. It provides an audit trail for stable isotope laboratories. Since the original publication of the manual for LIMS for Light Stable Isotopes, the isotopes ³H, ³He, and ¹⁴C, and the chlorofluorocarbons (CFCs), CFC-11, CFC-12, and CFC-113, have been added to this program.
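The normalization step in item (iv) above is commonly done as a two-point linear mapping between measured and accepted values of two reference materials. The following is a generic sketch of that calculation, not the algorithm stored in the LIMS database itself; the raw values in the example are hypothetical.

```python
def normalize_delta(measured, ref1_meas, ref1_true, ref2_meas, ref2_true):
    """Two-point linear normalization of a measured delta value onto an
    international scale, anchored by two isotopic reference materials
    whose accepted (true) values are known."""
    slope = (ref2_true - ref1_true) / (ref2_meas - ref1_meas)
    return ref1_true + slope * (measured - ref1_meas)

# Hypothetical raw instrument readings for two anchors on the
# VSMOW/SLAP scale (accepted values 0 and -428 permil for 2H):
print(normalize_delta(-200.0, 1.5, 0.0, -420.0, -428.0))
```

By construction the mapping returns each reference material's accepted value exactly, so drift in the raw measurements is corrected for every sample in the run.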
A manual for a Laboratory Information Management System (LIMS) for light stable isotopes
Coplen, Tyler B.
1998-01-01
The reliability and accuracy of isotopic data can be improved by utilizing database software to (i) store information about samples, (ii) store the results of mass spectrometric isotope-ratio analyses of samples, (iii) calculate analytical results using standardized algorithms stored in a database, (iv) normalize stable isotopic data to international scales using isotopic reference materials, and (v) generate multi-sheet paper templates for convenient sample loading of automated mass-spectrometer sample preparation manifolds. Such a database program is presented herein. Major benefits of this system include (i) an increase in laboratory efficiency, (ii) reduction in the use of paper, (iii) reduction in workload due to the elimination or reduction of retyping of data by laboratory personnel, and (iv) decreased errors in data reported to sample submitters. Such a database provides a complete record of when and how often laboratory reference materials have been analyzed and provides a record of what correction factors have been used through time. It provides an audit trail for stable isotope laboratories. Since the original publication of the manual for LIMS for Light Stable Isotopes, the isotopes ³H, ³He, and ¹⁴C, and the chlorofluorocarbons (CFCs), CFC-11, CFC-12, and CFC-113, have been added to this program.
CampusGIS of the University of Cologne: a tool for orientation, navigation, and management
NASA Astrophysics Data System (ADS)
Baaser, U.; Gnyp, M. L.; Hennig, S.; Hoffmeister, D.; Köhn, N.; Laudien, R.; Bareth, G.
2006-10-01
The working group for GIS and Remote Sensing at the Department of Geography at the University of Cologne has established a WebGIS called the CampusGIS of the University of Cologne. The overall task of the CampusGIS is the connection of several existing databases at the University of Cologne with spatial data. These existing databases comprise data about staff, buildings, rooms, lectures, and general infrastructure such as bus stops. This information was not previously linked to its spatial location. Therefore, a GIS-based method was developed to link all the different databases to spatial entities. Following the philosophy of the CampusGIS, an online GUI has been programmed that enables users to search for staff, buildings, or institutions. The query results are linked to the GIS database, which allows the visualization of the spatial location of the searched entity. This system was established in 2005 and has been operational since early 2006. In this contribution, the focus is on further developments. First results are presented of (i) including routing services, (ii) programming GUIs for mobile devices, and (iii) including infrastructure management tools in the CampusGIS. Consequently, the CampusGIS is not only available for spatial information retrieval and orientation; it also serves for on-campus navigation and administrative management.
The Steward Observatory asteroid relational database
NASA Technical Reports Server (NTRS)
Sykes, Mark V.; Alvarezdelcastillo, Elizabeth M.
1992-01-01
The Steward Observatory Asteroid Relational Database (SOARD) was created as a flexible tool for undertaking studies of asteroid populations and sub-populations, to probe the biases intrinsic to asteroid databases, to ascertain the completeness of data pertaining to specific problems, to aid in the development of observational programs, and to develop pedagogical materials. To date SOARD has compiled an extensive list of data available on asteroids and made it accessible through a single menu-driven database program. Users may obtain tailored lists of asteroid properties for any subset of asteroids or output files which are suitable for plotting spectral data on individual asteroids. A browse capability allows the user to explore the contents of any data file. SOARD offers, also, an asteroid bibliography containing about 13,000 references. The program has online help as well as user and programmer documentation manuals. SOARD continues to provide data to fulfill requests by members of the astronomical community and will continue to grow as data is added to the database and new features are added to the program.
Basner, Jodi E.; Theisz, Katrina I.; Jensen, Unni S.; Jones, C. David; Ponomarev, Ilya; Sulima, Pawel; Jo, Karen; Eljanne, Mariam; Espey, Michael G.; Franca-Koh, Jonathan; Hanlon, Sean E.; Kuhn, Nastaran Z.; Nagahara, Larry A.; Schnell, Joshua D.; Moore, Nicole M.
2013-01-01
Development of effective quantitative indicators and methodologies to assess the outcomes of cross-disciplinary collaborative initiatives has the potential to improve scientific program management and scientific output. This article highlights an example of a prospective evaluation that has been developed to monitor and improve progress of the National Cancer Institute Physical Sciences—Oncology Centers (PS-OC) program. Study data, including collaboration information, was captured through progress reports and compiled using the web-based analytic database: Interdisciplinary Team Reporting, Analysis, and Query Resource. Analysis of collaborations was further supported by data from the Thomson Reuters Web of Science database, MEDLINE database, and a web-based survey. Integration of novel and standard data sources was augmented by the development of automated methods to mine investigator pre-award publications, assign investigator disciplines, and distinguish cross-disciplinary publication content. The results highlight increases in cross-disciplinary authorship collaborations from pre- to post-award years among the primary investigators and confirm that a majority of cross-disciplinary collaborations have resulted in publications with cross-disciplinary content that rank in the top third of their field. With these evaluation data, PS-OC Program officials have provided ongoing feedback to participating investigators to improve center productivity and thereby facilitate a more successful initiative. Future analysis will continue to expand these methods and metrics to adapt to new advances in research evaluation and changes in the program. PMID:24808632
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-01
... DEPARTMENT OF HEALTH AND HUMAN SERVICES National Institutes of Health Proposed Collection; Comment Request (60-Day FRN); The Clinical Trials Reporting Program (CTRP) Database (NCI) SUMMARY: In compliance... publication. Proposed Collection: The Clinical Trials Reporting Program (CTRP) Database, 0925-0600, Expiration...
Enhancement to Hitran to Support the NASA EOS Program
NASA Technical Reports Server (NTRS)
Kirby, Kate P.; Rothman, Laurence S.
1998-01-01
The HITRAN molecular database has been enhanced with the object of providing improved capabilities for the EOS program scientists. HITRAN itself is the database of high-resolution line parameters of gaseous species expected to be observed by the EOS program in its remote sensing activities. The database is part of a larger compilation that includes IR cross-sections, aerosol indices of refraction, and software for filtering and plotting portions of the database. These properties have also been improved. The software has been advanced in order to work on multiple platforms. Besides the delivery of the compilation on CD-ROM, the effort has been directed toward making timely access of data and software on the world wide web.
Enhancement to HITRAN to Support the NASA EOS Program
NASA Technical Reports Server (NTRS)
Kirby, Kate P.; Rothman, Laurence S.
1999-01-01
The HITRAN molecular database has been enhanced with the object of providing improved capabilities for the EOS program scientists. HITRAN itself is the database of high-resolution line parameters of gaseous species expected to be observed by the EOS program in its remote sensing activities. The database is part of a larger compilation that includes IR cross-sections, aerosol indices of refraction, and software for filtering and plotting portions of the database. These properties have also been improved. The software has been advanced in order to work on multiple platforms. Besides the delivery of the compilation on CD-ROM, the effort has been directed toward making timely access of data and software on the world wide web.
A VBA Desktop Database for Proposal Processing at National Optical Astronomy Observatories
NASA Astrophysics Data System (ADS)
Brown, Christa L.
National Optical Astronomy Observatories (NOAO) has developed a relational Microsoft Windows desktop database using Microsoft Access and the Microsoft Office programming language, Visual Basic for Applications (VBA). The database is used to track data relating to observing proposals from original receipt through the review process, scheduling, observing, and final statistical reporting. The database has automated proposal processing and distribution of information. It allows NOAO to collect and archive data so as to query and analyze information about our science programs in new ways.
Migration of legacy mumps applications to relational database servers.
O'Kane, K C
2001-07-01
An extended implementation of the Mumps language is described that facilitates vendor neutral migration of legacy Mumps applications to SQL-based relational database servers. Implemented as a compiler, this system translates Mumps programs to operating system independent, standard C code for subsequent compilation to fully stand-alone, binary executables. Added built-in functions and support modules extend the native hierarchical Mumps database with access to industry standard, networked, relational database management servers (RDBMS) thus freeing Mumps applications from dependence upon vendor specific, proprietary, unstandardized database models. Unlike Mumps systems that have added captive, proprietary RDBMS access, the programs generated by this development environment can be used with any RDBMS system that supports common network access protocols. Additional features include a built-in web server interface and the ability to interoperate directly with programs and functions written in other languages.
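The core data-model translation such a migration performs can be illustrated with a small, hypothetical sketch: a hierarchical Mumps global of the form ^PAT(id,field)=value maps naturally onto relational rows (id, field, value). SQLite stands in for the networked RDBMS here, and the global contents are invented for the example.

```python
import sqlite3

# Hypothetical Mumps global ^PAT(id, field) = value, represented as a
# Python dict keyed by the global's subscripts.
globals_ = {
    ("PAT", "1", "NAME"): "SMITH,J",
    ("PAT", "1", "DOB"):  "1970-01-01",
    ("PAT", "2", "NAME"): "JONES,K",
}

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE pat (id TEXT, field TEXT, value TEXT)")
# Each global node becomes one relational row.
con.executemany("INSERT INTO pat VALUES (?, ?, ?)",
                [(i, f, v) for (g, i, f), v in globals_.items()])

names = con.execute(
    "SELECT id, value FROM pat WHERE field = 'NAME' ORDER BY id").fetchall()
print(names)  # [('1', 'SMITH,J'), ('2', 'JONES,K')]
```

Once the data lives in rows like these, any SQL-speaking server can serve it, which is the vendor neutrality the abstract emphasizes.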
NASA Technical Reports Server (NTRS)
Brenton, J. C.; Barbre, R. E.; Decker, R. K.; Orcutt, J. M.
2018-01-01
The National Aeronautics and Space Administration's (NASA) Marshall Space Flight Center (MSFC) Natural Environments Branch (EV44) provides atmospheric databases and analyses in support of space vehicle design and day-of-launch operations for NASA and commercial launch vehicle programs launching from the NASA Kennedy Space Center (KSC), co-located on the United States Air Force's Eastern Range (ER) at the Cape Canaveral Air Force Station. The ER complex is one of the most heavily instrumented sites in the United States, with over 31 towers measuring various atmospheric parameters on a continuous basis. An inherent challenge with large datasets consists of ensuring erroneous data are removed from databases, and thus excluded from launch vehicle design analyses. EV44 has put forth great effort in developing quality control (QC) procedures for individual meteorological instruments; however, no standard QC procedures exist for all databases, resulting in QC databases that have inconsistencies in variables, development methodologies, and periods of record. The goal of this activity is to use the previous efforts to develop a standardized set of QC procedures from which to build meteorological databases from KSC and the ER, while maintaining open communication with end users from the launch community to develop ways to improve, adapt, and grow the QC database. Details of the QC procedures will be described. As the rate of launches increases with additional launch vehicle programs, it is becoming more important that weather databases are continually updated and checked for data quality before use in launch vehicle design and certification analyses.
[Establishment of a regional pelvic trauma database in Hunan Province].
Cheng, Liang; Zhu, Yong; Long, Haitao; Yang, Junxiao; Sun, Buhua; Li, Kanghua
2017-04-28
To establish a database for pelvic trauma in Hunan Province, and to start the work of a multicenter pelvic trauma registry. Methods: To establish the database, the literature relevant to pelvic trauma was screened, experience from established trauma databases in China and abroad was drawn upon, and the actual conditions of pelvic trauma rescue in Hunan Province were considered. The database was built on PostgreSQL using the Java 1.6 programming language. Results: The complex procedure of pelvic trauma rescue was described structurally. The contents of the database include general patient information, injury condition, prehospital rescue, condition on admission, treatment in hospital, status on discharge, diagnosis, classification, complications, trauma scoring, and therapeutic effect. The database can be accessed through the Internet via a browser/server architecture. Its functions include patient information management, data export, history query, progress reporting, video and image management, and personal information management. Conclusion: A whole-life-cycle pelvic trauma database has been established for the first time in China. It is scientific, functional, practical, and user-friendly.
Gold, L S; Manley, N B; Slone, T H; Garfinkel, G B; Rohrbach, L; Ames, B N
1993-01-01
This paper is the fifth plot of the Carcinogenic Potency Database (CPDB) that first appeared in this journal in 1984 (1-5). We report here results of carcinogenesis bioassays published in the general literature between January 1987 and December 1988, and in technical reports of the National Toxicology Program between July 1987 and December 1989. This supplement includes results of 412 long-term, chronic experiments of 147 test compounds and reports the same information about each experiment in the same plot format as the earlier papers: the species and strain of test animal, the route and duration of compound administration, dose level and other aspects of experimental protocol, histopathology and tumor incidence, TD50 (carcinogenic potency) and its statistical significance, dose response, author's opinion about carcinogenicity, and literature citation. We refer the reader to the 1984 publications (1,5,6) for a guide to the plot of the database, a complete description of the numerical index of carcinogenic potency, and a discussion of the sources of data, the rationale for the inclusion of particular experiments and particular target sites, and the conventions adopted in summarizing the literature. The five plots of the database are to be used together, as results of individual experiments that were published earlier are not repeated. In all, the five plots include results of 4487 experiments on 1136 chemicals. Several analyses based on the CPDB that were published earlier are described briefly, and updated results based on all five plots are given for the following earlier analyses: the most potent TD50 value by species, reproducibility of bioassay results, positivity rates, and prediction between species. PMID:8354183
Foot and Ankle Fellowship Websites: An Assessment of Accessibility and Quality.
Hinds, Richard M; Danna, Natalie R; Capo, John T; Mroczek, Kenneth J
2017-08-01
The Internet has been reported to be the first informational resource for many fellowship applicants. The objective of this study was to assess the accessibility of orthopaedic foot and ankle fellowship websites and to evaluate the quality of information provided via program websites. The American Orthopaedic Foot and Ankle Society (AOFAS) and the Fellowship and Residency Electronic Interactive Database (FREIDA) fellowship databases were accessed to generate a comprehensive list of orthopaedic foot and ankle fellowship programs. The databases were reviewed for links to fellowship program websites and compared with program websites accessed from a Google search. Accessible fellowship websites were then analyzed for the quality of recruitment and educational content pertinent to fellowship applicants. Forty-seven orthopaedic foot and ankle fellowship programs were identified. The AOFAS database featured direct links to 7 (15%) fellowship websites with the independent Google search yielding direct links to 29 (62%) websites. No direct website links were provided in the FREIDA database. Thirty-six accessible websites were analyzed for content. Program websites featured a mean 44% (range = 5% to 75%) of the total assessed content. The most commonly presented recruitment and educational content was a program description (94%) and description of fellow operative experience (83%), respectively. There is substantial variability in the accessibility and quality of orthopaedic foot and ankle fellowship websites. Recognition of deficits in accessibility and content quality may assist foot and ankle fellowships in improving program information online. Level IV.
Energy Efficiency Finance Programs: Use Case Analysis to Define Data Needs and Guidelines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thompson, Peter; Larsen, Peter; Kramer, Chris
There are over 200 energy efficiency loan programs—across 49 U.S. states—administered by utilities, state/local government agencies, or private lenders.1 This distributed model has led to significant variation in program design and implementation practices including how data is collected and used. The challenge of consolidating and aggregating data across independently administered programs has been illustrated by a recent pilot of an open source database for energy efficiency financing program data. This project was led by the Environmental Defense Fund (EDF), the Investor Confidence Project, the Clean Energy Finance Center (CEFC), and the University of Chicago. This partnership discussed data collection practices with a number of existing energy efficiency loan programs and identified four programs that were suitable and willing to participate in the pilot database (Diamond 2014).2 The partnership collected information related to ~12,000 loans with an aggregate value of ~$100M across the four programs. Of the 95 data fields collected across the four programs, 30 fields were common between two or more programs and only seven data fields were common across all programs. The results of that pilot study illustrate the inconsistencies in current data definition and collection practices among energy efficiency finance programs and may contribute to certain barriers.
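The field-overlap finding (30 of 95 fields shared by two or more programs, only seven by all four) is at bottom a set computation. A sketch with invented field names, since the pilot's actual field list is not reproduced here:

```python
from collections import Counter

# Hypothetical data fields reported by four loan programs (all names
# invented for illustration).
programs = {
    "A": {"loan_amount", "interest_rate", "term_months", "zip_code", "measure_type"},
    "B": {"loan_amount", "interest_rate", "term_months", "credit_score"},
    "C": {"loan_amount", "interest_rate", "default_flag"},
    "D": {"loan_amount", "interest_rate", "term_months", "utility_name"},
}

# Count how many programs report each field
counts = Counter(f for fields in programs.values() for f in fields)
shared_by_two_or_more = {f for f, n in counts.items() if n >= 2}
shared_by_all = set.intersection(*programs.values())

print(sorted(shared_by_all))        # -> ['interest_rate', 'loan_amount']
print(len(shared_by_two_or_more))   # -> 3
```

The same intersection logic, run over real field inventories, is what exposes how little of the data is directly comparable across independently administered programs.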
Bagger, Frederik Otzen; Sasivarevic, Damir; Sohi, Sina Hadi; Laursen, Linea Gøricke; Pundhir, Sachin; Sønderby, Casper Kaae; Winther, Ole; Rapin, Nicolas; Porse, Bo T
2016-01-04
Research on human and murine haematopoiesis has resulted in a vast number of gene-expression data sets that can potentially answer questions regarding normal and aberrant blood formation. To researchers and clinicians with limited bioinformatics experience, these data have remained available, yet largely inaccessible. Current databases provide information about gene-expression but fail to answer key questions regarding co-regulation, genetic programs or effect on patient survival. To address these shortcomings, we present BloodSpot (www.bloodspot.eu), which includes and greatly extends our previously released database HemaExplorer, a database of gene expression profiles from FACS sorted healthy and malignant haematopoietic cells. A revised interactive interface simultaneously provides a plot of gene expression along with a Kaplan-Meier analysis and a hierarchical tree depicting the relationship between different cell types in the database. The database now includes 23 high-quality curated data sets relevant to normal and malignant blood formation and, in addition, we have assembled and built a unique integrated data set, BloodPool. BloodPool contains more than 2000 samples assembled from six independent studies on acute myeloid leukemia. Furthermore, we have devised a robust sample integration procedure that allows for sensitive comparison of user-supplied patient samples in a well-defined haematopoietic cellular space. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
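The Kaplan-Meier analysis shown alongside each gene plot is computed from (time, event) pairs per patient. A self-contained product-limit sketch (illustrative only, not BloodSpot's actual code):

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate.

    times  -- follow-up time for each patient
    events -- 1 if the event (e.g. death) occurred, 0 if censored
    Returns a list of (time, survival probability) steps at event times.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        n_with_t = sum(1 for tt, _ in data if tt == t)
        if deaths:
            surv *= (n_at_risk - deaths) / n_at_risk
            curve.append((t, surv))
        n_at_risk -= n_with_t
        i += n_with_t
    return curve

# Five patients: deaths at t=2, 5, 10; censoring at t=3 and t=8
print(kaplan_meier([2, 3, 5, 8, 10], [1, 0, 1, 0, 1]))
```

In a database like the one described, the (time, event) pairs would come from the clinical annotations of the patient samples, stratified by the expression level of the queried gene.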
An X-Ray Analysis Database of Photoionization Cross Sections Including Variable Ionization
NASA Technical Reports Server (NTRS)
Wang, Ping; Cohen, David H.; MacFarlane, Joseph J.; Cassinelli, Joseph P.
1997-01-01
Results of research efforts in the following areas are discussed: review of the major theoretical and experimental data on subshell photoionization cross sections and ionization edges of atomic ions to assess the accuracy of the data, and to compile the most reliable of these data in our own database; detailed atomic physics calculations to complement the database for all ions of 17 cosmically abundant elements; reconciling the data from various sources and our own calculations; and fitting cross sections with functional approximations and incorporating these functions into a compact computer code. Also, efforts included adapting an ionization equilibrium code, tabulating results, incorporating them into the overall program, and testing the code (both ionization equilibrium and opacity codes) against existing observational data. The background and scientific applications of this work are discussed. Atomic physics cross section models and calculations are described. Calculation results are compared with available experimental data and other theoretical data. The functional approximations used for fitting cross sections are outlined and applications of the database are discussed.
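Above an ionization edge, a cross section is often approximated by a power law, sigma(E) = a * E^(-p), which can be fitted by linear least squares in log-log space. The following sketch illustrates only that general fitting idea; the report's actual functional forms are more elaborate and are not reproduced here:

```python
import math

def fit_power_law(energies, sigmas):
    """Fit sigma = a * E**(-p) via least squares on log sigma = log a - p log E."""
    xs = [math.log(e) for e in energies]
    ys = [math.log(s) for s in sigmas]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - slope * mx)
    return a, -slope  # p is minus the log-log slope

# Synthetic cross sections following sigma = 5.0 * E**-3 exactly
E = [1.0, 2.0, 4.0, 8.0]
sig = [5.0 * e ** -3 for e in E]
a, p = fit_power_law(E, sig)
print(round(a, 6), round(p, 6))  # -> 5.0 3.0
```

Storing only the fitted coefficients (a, p) per ion and subshell, rather than tabulated points, is what makes such a database compact enough to embed in an opacity code.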
Kendall, Sacha; Redshaw, Sarah; Ward, Stephen; Wayland, Sarah; Sullivan, Elizabeth
2018-03-02
The paper presents a systematic review and metasynthesis of findings from qualitative evaluations of community reentry programs. The programs sought to engage recently released adult prison inmates with either problematic drug use or a mental health disorder. Seven biomedical and social science databases (CINAHL, PubMed, Scopus, ProQuest, Medline, Sociological Abstracts, and Web of Science) and the publisher database Taylor & Francis were searched in 2016, resulting in 2373 potential papers. Abstract reviews left 140 papers, of which 8 were included after detailed review. Major themes and subthemes were identified through grounded-theory inductive analysis of the results from the eight papers. Of the final eight papers, the majority (6) were from the United States. In total, the papers covered 405 interviews and included 121 (30%) females and 284 (70%) males. Findings suggest that the interpersonal skills of case workers, access to social support and housing, and continuity of case worker relationships throughout the pre-release and post-release periods are key social and structural factors in program success. Evaluation of community reentry programs requires qualitative data to contextualize statistical findings and to identify social and structural factors that impact on reducing incarceration and improving participant health. These aspects of program efficacy have implications for reentry program development, staff training, and broader social and health policy and services.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-09
... DEPARTMENT OF HEALTH AND HUMAN SERVICES National Institutes of Health Submission for OMB Review; 30-day Comment Request: The Clinical Trials Reporting Program (CTRP) Database (NCI) SUMMARY: Under... Program (CTRP) Database, 0925-0600, Expiration Date 3/31/2013--REINSTATEMENT WITH CHANGE, National Cancer...
A Summary of the Naval Postgraduate School Research Program
1989-08-30
Contents include: Fundamental Theory for Automatically Combining Changes to Software Systems; Database-System Approach to Software Engineering Environments (SEEs); Multilevel Database Security; Temporal Database Management and Real-Time Database Computers; The Multi-Lingual, Multi-Model, Multi-Backend Database.
NASA Astrophysics Data System (ADS)
Bashev, A.
2012-04-01
Currently there is an enormous number of geoscience databases. Unfortunately, the only users of most of them are their developers. There are several reasons for this: incompatibility, specificity of tasks and objects, and so on. The main obstacles to wide usage of geoscience databases, however, are complexity for developers and complication for users. Complex architecture leads to high costs that block public access; complication prevents users from understanding when and how to use the database. Only databases associated with GoogleMaps avoid these drawbacks, but they can hardly be called "geoscience" databases. Nevertheless, an open and simple geoscience database is necessary, at least for educational purposes (see our abstract for ESSI20/EOS12). We have developed such a database and a web interface for working with it, now accessible at maps.sch192.ru. In this database a result is the value of a parameter (of any kind) at a station with a certain position, associated with metadata: the date the result was obtained, the type of station (lake, soil, etc.), and the contributor who sent the result. Each contributor has a profile, which allows the reliability of the data to be estimated. Results can be displayed on a GoogleMaps space image as points at given positions, coloured according to the value of the parameter. There are default colour scales, and each registered user can create their own. Results can also be extracted to a *.csv file. For both types of representation, one can select the data by date, object type, parameter type, area, and contributor. Data are uploaded in *.csv format with the columns: name of the station; latitude (dd.dddddd); longitude (ddd.dddddd); station type; parameter type; parameter value; date (yyyy-mm-dd). The contributor is recognised on login. This is the minimal set of features required to connect a value of a parameter with a position and see the results.
All complicated data treatment can be conducted in other programs after extracting the filtered data into a *.csv file, which keeps the database understandable for non-experts. The database employs an open data format (*.csv) and widespread tools: PHP as the programming language, MySQL as the database management system, JavaScript for interaction with GoogleMaps, and jQuery UI for the user interface. The database is multilingual: association tables connect translations with elements of the database. In total, the development required about 150 hours. The database still has several problems. The main one is the reliability of the data; properly addressing it needs an expert system for estimating reliability, but elaborating such a system would take more resources than the database itself. The second is the problem of stream selection: how to select stations that are connected with each other (for example, belonging to one water stream) and indicate their sequence. Currently the interface is in English and Russian, but it can easily be translated into other languages. Some problems we have already solved. For example, the problem of "the same station" (sometimes the distance between stations is smaller than the error of their positions): when a new station is added, our application automatically finds existing stations near that place. We have also solved the problem of object and parameter types (how to regard "EC" and "electrical conductivity" as the same parameter) using association tables. If you would like to see the interface in your language, just contact us and we will send you the list of terms and phrases to translate. The main advantage of the database is that it is totally open: everybody can view and extract data from the database and use it for non-commercial purposes free of charge. Registered users can contribute to the database without payment.
We hope that it will be widely used, first of all for educational purposes, but professional scientists could also use it.
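The upload format described above can be handled with a few lines of standard-library code. A minimal sketch, assuming comma-separated values in the listed column order (the live service may differ in details such as the delimiter; station names and values below are invented):

```python
import csv, io

# Three invented upload rows in the described column order:
# station, latitude, longitude, station type, parameter, value, date
sample = """Lake A-1,53.558000,108.165000,lake,EC,120.5,2011-06-15
Lake A-1,53.558000,108.165000,lake,pH,7.8,2011-06-15
Soil pit 3,55.751244,37.618423,soil,EC,85.0,2011-07-02
"""

columns = ["station", "latitude", "longitude", "station_type",
           "parameter", "value", "date"]
rows = [dict(zip(columns, r)) for r in csv.reader(io.StringIO(sample))]

# Filter by parameter type and date range, as the web interface allows
ec_2011 = [r for r in rows
           if r["parameter"] == "EC"
           and "2011-01-01" <= r["date"] <= "2011-12-31"]
print([(r["station"], float(r["value"])) for r in ec_2011])
```

Because the dates are ISO-formatted strings (yyyy-mm-dd), plain string comparison is enough for range selection, which is part of what keeps the format friendly to non-experts.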
NASA Technical Reports Server (NTRS)
Steck, Daniel
2009-01-01
This report documents the generation of an outbound Earth to Moon transfer preliminary database consisting of four cases calculated twice a day for a 19 year period. The database was desired as the first step in order for NASA to rapidly generate Earth to Moon trajectories for the Constellation Program using the Mission Assessment Post Processor. The completed database was created running a flight trajectory and optimization program, called Copernicus, in batch mode with the use of newly created Matlab functions. The database is accurate and has high data resolution. The techniques and scripts developed to generate the trajectory information will also be directly used in generating a comprehensive database.
Using Geocoded Databases in Teaching Urban Historical Geography.
ERIC Educational Resources Information Center
Miller, Roger P.
1986-01-01
Provides information regarding hardware and software requirements for using geocoded databases in urban historical geography. Reviews 11 IBM and Apple Macintosh database programs and describes the pen plotter and digitizing table interface used with the databases. (JDH)
ProteinWorldDB: querying radical pairwise alignments among protein sets from complete genomes
Otto, Thomas Dan; Catanho, Marcos; Tristão, Cristian; Bezerra, Márcia; Fernandes, Renan Mathias; Elias, Guilherme Steinberger; Scaglia, Alexandre Capeletto; Bovermann, Bill; Berstis, Viktors; Lifschitz, Sergio; de Miranda, Antonio Basílio; Degrave, Wim
2010-01-01
Motivation: Many analyses in modern biological research are based on comparisons between biological sequences, resulting in functional, evolutionary and structural inferences. When large numbers of sequences are compared, heuristics are often used resulting in a certain lack of accuracy. In order to improve and validate results of such comparisons, we have performed radical all-against-all comparisons of 4 million protein sequences belonging to the RefSeq database, using an implementation of the Smith–Waterman algorithm. This extremely intensive computational approach was made possible with the help of World Community Grid™, through the Genome Comparison Project. The resulting database, ProteinWorldDB, which contains coordinates of pairwise protein alignments and their respective scores, is now made available. Users can download, compare and analyze the results, filtered by genomes, protein functions or clusters. ProteinWorldDB is integrated with annotations derived from Swiss-Prot, Pfam, KEGG, NCBI Taxonomy database and gene ontology. The database is a unique and valuable asset, representing a major effort to create a reliable and consistent dataset of cross-comparisons of the whole protein content encoded in hundreds of completely sequenced genomes using a rigorous dynamic programming approach. Availability: The database can be accessed through http://proteinworlddb.org Contact: otto@fiocruz.br PMID:20089515
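The Smith-Waterman local-alignment scoring at the heart of the all-against-all comparison can be sketched compactly. This toy version uses simple match/mismatch scores and a linear gap penalty, rather than the substitution matrices and affine gaps a production run would use:

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Return the best local alignment score between sequences a and b."""
    rows, cols = len(a) + 1, len(b) + 1
    # H[i][j] = best score of a local alignment ending at a[i-1], b[j-1]
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            # Local alignment: scores never drop below zero
            H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
            best = max(best, H[i][j])
    return best

print(smith_waterman("HEAGAWGHEE", "PAWHEAE"))
```

The O(len(a) * len(b)) cost per pair is exactly why an exact all-against-all run over 4 million proteins required a volunteer computing grid rather than the usual heuristics.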
Duchman, Kyle R; Gao, Yubo; Miller, Benjamin J
2015-04-01
The current study aims to determine cause-specific survival in patients with Ewing's sarcoma while reporting clinical risk factors for survival. The Surveillance, Epidemiology, and End Results (SEER) Program database was used to identify patients with osseous Ewing's sarcoma from 1991 to 2010. Patient, tumor, and socioeconomic variables were analyzed to determine prognostic factors for survival. There were 1163 patients with Ewing's sarcoma identified in the SEER Program database. The 10-year cause-specific survival for patients with non-metastatic disease at diagnosis was 66.8% and 28.1% for patients with metastatic disease. Black patients demonstrated reduced survival at 10 years with an increased frequency of metastatic disease at diagnosis as compared to patients of other race, while Hispanic patients more frequently presented with tumor size > 10 cm. Univariate analysis revealed that metastatic disease at presentation, tumor size > 10 cm, axial tumor location, patient age ≥ 20 years, black race, and male sex were associated with decreased cause-specific survival at 10 years. Metastatic disease at presentation, axial tumor location, tumor size > 10 cm, and age ≥ 20 years remained significant in the multivariate analysis. Patients with Ewing's sarcoma have decreased cause-specific survival at 10 years when metastatic at presentation, with axial tumor location, tumor size > 10 cm, and patient age ≥ 20 years. Copyright © 2015 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Myint-U, Athi; O'Donnell, Lydia; Osher, David; Petrosino, Anthony; Stueve, Ann
2008-01-01
Despite evidence that some dropout prevention programs have positive effects, whether districts in the region are using such evidence-based programs has not been documented. To generate and share knowledge on dropout programs and policies, this report details a project to create a searchable database with information on target audiences,…
Establishment and Assessment of Plasma Disruption and Warning Databases from EAST
NASA Astrophysics Data System (ADS)
Wang, Bo; Robert, Granetz; Xiao, Bingjia; Li, Jiangang; Yang, Fei; Li, Junjun; Chen, Dalong
2016-12-01
Disruption and disruption-warning databases for the EAST tokamak have been established by a disruption research group. The disruption database, based on Structured Query Language (SQL), comprises 41 disruption parameters, including current quench characteristics, EFIT equilibrium characteristics, kinetic parameters, halo currents, and vertical motion. At present, most disruption databases are based on plasma experiments from non-superconducting tokamak devices. The purposes of the EAST databases are to derive disruption characteristics and statistics for the fully superconducting tokamak EAST, to elucidate the physics underlying tokamak disruptions, to explore the influence of disruptions on superconducting magnets, and to extrapolate toward future burning-plasma devices. In order to quantitatively assess the usefulness of various plasma parameters for predicting disruptions, an SQL database similar to that of Alcator C-Mod has been created for EAST by compiling values of a number of proposed disruption-relevant parameters sampled from all plasma discharges in the 2015 campaign. Detailed statistical results and analysis of the two databases on the EAST tokamak are presented. Supported by the National Magnetic Confinement Fusion Science Program of China (No. 2014GB103000)
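A warning database of this kind, keyed by shot and holding sampled disruption-relevant parameters, can be mocked up in SQL. The sketch below uses sqlite3 and invented shot numbers, parameter names, and values; EAST's actual schema is not reproduced here:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE warning_db (
    shot INTEGER,       -- discharge number
    t REAL,             -- sample time in the discharge (s)
    ip REAL,            -- plasma current (MA)
    li REAL,            -- internal inductance
    disrupted INTEGER   -- 1 if this discharge ended in disruption
)""")
samples = [
    (60000, 2.0, 0.40, 1.1, 0),
    (60001, 2.0, 0.41, 1.6, 1),
    (60002, 2.0, 0.39, 1.2, 0),
    (60003, 2.0, 0.42, 1.7, 1),
]
con.executemany("INSERT INTO warning_db VALUES (?,?,?,?,?)", samples)

# How often does li exceed a trial warning threshold in safe (0)
# versus disrupted (1) discharges?
for flag in (0, 1):
    n, = con.execute(
        "SELECT COUNT(*) FROM warning_db WHERE disrupted=? AND li>1.5",
        (flag,)).fetchone()
    print(flag, n)
```

Queries of exactly this shape (threshold exceedance counted separately over disrupted and non-disrupted discharges) are how a parameter's usefulness as a disruption predictor is quantified.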
Resources | Office of Cancer Genomics
OCG provides a variety of scientific and educational resources for both cancer researchers and members of the general public. These resources are divided into the following types: OCG-Supported Resources: Tools, databases, and reagents generated by initiated and completed OCG programs for researchers, educators, and students. (Note: Databases for current OCG programs are available through program-specific data matrices)
ERIC Educational Resources Information Center
Pfeiffer, Jay J.
Florida's Education and Training Placement Information Program (FETPIP) is a statewide system linking the administrative databases of certain state and federal agencies to collect follow-up data on former students or program participants. The databases that are collected include those of the Florida Department of Corrections; Florida Department of…
The NSO FTS database program and archive (FTSDBM)
NASA Technical Reports Server (NTRS)
Lytle, D. M.
1992-01-01
Data from the NSO Fourier transform spectrometer is being re-archived from half inch tape onto write-once compact disk. In the process, information about each spectrum and a low resolution copy of each spectrum is being saved into an on-line database. FTSDBM is a simple database management program in the NSO external package for IRAF. A command language allows the FTSDBM user to add entries to the database, delete entries, select subsets from the database based on keyword values including ranges of values, create new database files based on these subsets, make keyword lists, examine low resolution spectra graphically, and make disk number/file number lists. Once the archive is complete, FTSDBM will allow the database to be efficiently searched for data of interest to the user and the compact disk format will allow random access to that data.
Process description language: an experiment in robust programming for manufacturing systems
NASA Astrophysics Data System (ADS)
Spooner, Natalie R.; Creak, G. Alan
1998-10-01
Maintaining stable, robust, and consistent software is difficult in the face of the increasing rate of change of customers' preferences, materials, manufacturing techniques, computer equipment, and other characteristic features of manufacturing systems. It is argued that software is commonly difficult to keep up to date because many of the implications of these changing features for software details are obscure. A possible solution is to use a software generation system in which the transformation of system properties into system software is made explicit. The proposed generation system stores the system properties, such as machine properties, product properties, and information on manufacturing techniques, in databases. As a result, this information, on which system control is based, can also be made available to other programs. In particular, artificial intelligence programs, such as fault diagnosis programs, can benefit from using the same information as the control system, rather than a separate database that must be developed and maintained separately to ensure consistency. Experience in developing a simplified model of such a system is presented.
ERIC Educational Resources Information Center
Bigelow, Robert A.
The Delaware Educational Assessment Program publishes annual results from the California Test of Basic Skills by district, school, and grade (1 through 8 and 11). A statewide computer information system was developed to manage the testing program, the massive 10-year longitudinal database, and the information requests received. The Delaware…
Comparison of Programs Used for FIA Inventory Information Dissemination and Spatial Representation
Roger C. Lowe; Chris J. Cieszewski
2005-01-01
Six online applications developed for the interactive display of Forest Inventory and Analysis (FIA) data, in which FIA database information and query results can be viewed as or selected from interactive geographic maps, are compared. The programs evaluated are the U.S. Department of Agriculture Forest Service's online systems; a SAS server-based mapping system...
ERIC Educational Resources Information Center
Terzian, Mary; Moore, Kristin Anderson; Williams-Taylor, Lisa; Nguyen, Hoan
2009-01-01
Child Trends produced this Guide to assist funders, administrators, and practitioners in identifying and navigating online resources to find evidence-based programs that may be appropriate for their target populations and communities. The Guide offers an overview of 21 of these resources--11 searchable online databases, 2 online interactive…
The purpose of this SOP is to define the procedures involved in appending cleaned individual data batches to the master databases. This procedure applies to the Arizona NHEXAS project and the Border study. Keywords: data; database.
The U.S.-Mexico Border Program is sponsored b...
Accelerating semantic graph databases on commodity clusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morari, Alessandro; Castellana, Vito G.; Haglin, David J.
We are developing a full software system for accelerating semantic graph databases on commodity clusters that scales to hundreds of nodes while maintaining constant query throughput. Our framework comprises a SPARQL-to-C++ compiler, a library of parallel graph methods, and a custom multithreaded runtime layer, which provides a Partitioned Global Address Space (PGAS) programming model with fork/join parallelism and automatic load balancing over commodity clusters. We present preliminary results for the compiler and for the runtime.
Sridhar, Vishnu B; Tian, Peifang; Dale, Anders M; Devor, Anna; Saisan, Payam A
2014-01-01
We present a database client software, Neurovascular Network Explorer 1.0 (NNE 1.0), that uses a MATLAB®-based Graphical User Interface (GUI) for interaction with a database of 2-photon single-vessel diameter measurements from our previous publication (Tian et al., 2010). These data are of particular interest for modeling the hemodynamic response. NNE 1.0 is downloaded by the user and then runs either as a MATLAB script or as a standalone program on a Windows platform. The GUI allows browsing the database according to parameters specified by the user, simple manipulation and visualization of the retrieved records (such as averaging and peak normalization), and export of the results. Further, we provide the NNE 1.0 source code. With this source code, the user can build a database of their own experimental results, given the appropriate data structure and naming conventions, and thus share their data in a user-friendly format with other investigators. NNE 1.0 provides an example of a seamless and low-cost solution for sharing experimental data by a regular-size neuroscience laboratory and may serve as a general template, facilitating dissemination of biological results and accelerating data-driven modeling approaches.
Kentucky geotechnical database.
DOT National Transportation Integrated Search
2005-03-01
Development of a comprehensive dynamic, geotechnical database is described. Computer software selected to program the client/server application in windows environment, components and structure of the geotechnical database, and primary factors cons...
The NASA Goddard Group's Source Monitoring Database and Program
NASA Astrophysics Data System (ADS)
Gipson, John; Le Bail, Karine; Ma, Chopo
2014-12-01
Beginning in 2003, the Goddard VLBI group developed a program to purposefully monitor when sources were observed and to increase the observations of "under-observed" sources. The heart of the program consists of a MySQL database that keeps track of, on a session-by-session basis: the number of observations that are scheduled for a source, the number of observations that are successfully correlated, and the number of observations that are used in a session. In addition, there is a table that contains the target number of successful sessions over the last twelve months. Initially this table just contained two categories. Sources in the geodetic catalog had a target of 12 sessions/year; the remaining ICRF-1 defining sources had a target of two sessions/year. All other sources did not have a specific target. As the program evolved, different kinds of sources with different observing targets were added. During the scheduling process, the scheduler has the option of automatically selecting N sources which have not met their target. We discuss the history and present some results of this successful program.
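The session-by-session bookkeeping described above can be sketched with a small relational schema. This is an illustrative sqlite3 sketch (the actual system uses MySQL, and the table and column names here are assumptions, not the Goddard schema):

```python
# Toy version of the source-monitoring bookkeeping: one table of
# per-session observation records, one table of 12-month targets,
# and a query for sources that have not met their target.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE source_sessions (source TEXT, session TEXT, n_used INTEGER);
CREATE TABLE targets (source TEXT PRIMARY KEY, sessions_per_year INTEGER);
""")
con.executemany("INSERT INTO source_sessions VALUES (?,?,?)",
                [("0059+581", "R1A", 40), ("0059+581", "R1B", 35),
                 ("1803+784", "R1A", 12)])
con.executemany("INSERT INTO targets VALUES (?,?)",
                [("0059+581", 12), ("1803+784", 2)])

# Sources with fewer successful sessions than their (assumed) 12-month
# target -- the set a scheduler could draw N sources from automatically.
under = con.execute("""
    SELECT t.source, COUNT(s.session) AS done, t.sessions_per_year
    FROM targets t LEFT JOIN source_sessions s ON s.source = t.source
    GROUP BY t.source
    HAVING done < t.sessions_per_year
    ORDER BY t.source
""").fetchall()
print(under)  # [('0059+581', 2, 12), ('1803+784', 1, 2)]
```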
Amick, G D
1999-01-01
A database containing the names of mass spectral data files generated in a forensic toxicology laboratory, and two Microsoft Visual Basic programs to maintain and search this database, are described. The data files (approximately 0.5 KB each) were collected from six mass spectrometers during routine casework. Data files were archived on 650 MB (74 min) recordable CD-ROMs. Each recordable CD-ROM was given a unique name, and its list of data file names was placed into the database. The present manuscript describes the use of the search and maintenance programs for searching and routine upkeep of the database and the creation of CD-ROMs for archiving of data files.
Meekers, Dominique; Rahaim, Stephen
2005-01-01
Background Over the past two decades, social marketing programs have become an important element of the national family planning and HIV prevention strategies of several developing countries. As yet, there has been no comprehensive empirical assessment of which of several social marketing models is most effective for a given socio-economic context. Such an assessment is urgently needed to inform the design of future social marketing programs and to avoid designing programs around an ineffective model. Methods This study addresses this issue using a database of annual statistics on reproductive health oriented social marketing programs in over 70 countries. In total, the database covers 555 years of program experience with social marketing programs that distribute and promote the use of oral contraceptives and condoms. Specifically, our analysis assesses to what extent the model used by different reproductive health social marketing programs has varied across socio-economic contexts. We then use random effects regression to test in which socio-economic context each of the models is most successful at increasing use of socially marketed oral contraceptives and condoms. Results The results show that there has been a tendency to design reproductive health social marketing programs with a management structure that matches the local context. However, the evidence also shows that this has not always been the case. While socio-economic context clearly influences the effectiveness of some of the social marketing models, program maturity and the size of the target population appear equally important. Conclusions To maximize the effectiveness of future social marketing programs, it is essential that more effort be devoted to ensuring that such programs are designed using the model or approach that is most suitable for the local context. PMID:15676068
Modernization and multiscale databases at the U.S. geological survey
Morrison, J.L.
1992-01-01
The U.S. Geological Survey (USGS) has begun a digital cartographic modernization program. Keys to that program are the creation of a multiscale database, a feature-based file structure that is derived from a spatial data model, and a series of "templates" or rules that specify the relationships between instances of entities in reality and features in the database. The database will initially hold data collected from the USGS standard map products at scales of 1:24,000, 1:100,000, and 1:2,000,000. The spatial data model is called the digital line graph-enhanced model, and the comprehensive rule set consists of collection rules, product generation rules, and conflict resolution rules. This modernization program will affect the USGS mapmaking process because both digital and graphic products will be created from the database. In addition, non-USGS map users will have more flexibility in uses of the databases. These remarks are those of the session discussant made in response to the six papers and the keynote address given in the session. © 1992.
Information Technology Support in the 8000 Directorate
NASA Technical Reports Server (NTRS)
2004-01-01
My summer internship was spent supporting various projects within the Environmental Management Office and Glenn Safety Office. Mentored by Eli Abumeri, I was trained in areas of Information Technology such as servers, printers, scanners, CAD systems, the Web, programming, database management, and ODIN (networking, computers, and phones). I worked closely with the Chemical Sampling and Analysis Team (CSAT) to redesign a database to more efficiently manage and maintain data collected for the Drinking Water Program. This program has been established for over fifteen years here at the Glenn Research Center. It involves the continued testing and retesting of all drinking water dispensers. The quality of the drinking water is of great importance and is determined by comparing the concentration of contaminants in the water with specifications set forth by the Environmental Protection Agency (EPA) in the Safe Drinking Water Act (SDWA) and its 1986 and 1991 amendments. The Drinking Water Program consists of periodic testing of all drinking water fountains and sinks. Each is tested at least once every 2 years for contaminants and naturally occurring species. The EPA's protocol is to collect an initial and a 5-minute draw from each dispenser. The 5-minute draw is what is used for the maximum contaminant level. However, the CSAT has added a 30-second draw, since most individuals do not run the water for 5 minutes prior to drinking. These data are then entered into a relational Microsoft Access database. The database allows for the quick retrieval of any test(s) done on any dispenser. The data can be queried by building number, date, or test type, and test results are documented in an analytical report for employees to read. To aid with the tracking of recycled materials within the lab, my help was enlisted to create a database that could make this process less cumbersome and more efficient. The database records the date of pickup, type of material, weight received, and unit cost per recyclable.
This information can then be used to calculate the dollar amount generated by the recycling of certain materials. This database will ultimately prove useful in determining the amounts of materials consumed by the lab and will help serve as an indicator of potential overuse.
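The recycling-tracking calculation described above (weight received times unit cost, totaled per material) might look like the following sketch; the field names, materials, and prices are illustrative, not the actual database design:

```python
# Toy version of the recycling database's dollar-amount calculation:
# each pickup record carries a date, material, weight, and unit cost,
# and revenue is totaled per material.
from collections import defaultdict

def revenue_by_material(records):
    totals = defaultdict(float)
    for rec in records:
        totals[rec["material"]] += rec["weight_lb"] * rec["unit_cost"]
    return dict(totals)

pickups = [
    {"date": "2004-06-01", "material": "aluminum", "weight_lb": 120.0, "unit_cost": 0.40},
    {"date": "2004-06-15", "material": "aluminum", "weight_lb": 80.0,  "unit_cost": 0.40},
    {"date": "2004-06-15", "material": "paper",    "weight_lb": 500.0, "unit_cost": 0.02},
]
print(revenue_by_material(pickups))  # totals per material, in dollars
```

Summing weights the same way would give the consumption totals the abstract mentions as an indicator of potential overuse.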
Knowledge Discovery in Variant Databases Using Inductive Logic Programming
Nguyen, Hoan; Luu, Tien-Dao; Poch, Olivier; Thompson, Julie D.
2013-01-01
Understanding the effects of genetic variation on the phenotype of an individual is a major goal of biomedical research, especially for the development of diagnostics and effective therapeutic solutions. In this work, we describe the use of a recent knowledge discovery from database (KDD) approach using inductive logic programming (ILP) to automatically extract knowledge about human monogenic diseases. We extracted background knowledge from MSV3d, a database of all human missense variants mapped to 3D protein structure. In this study, we identified 8,117 mutations in 805 proteins with known three-dimensional structures that were known to be involved in human monogenic disease. Our results help to improve our understanding of the relationships between structural, functional or evolutionary features and deleterious mutations. Our inferred rules can also be applied to predict the impact of any single amino acid replacement on the function of a protein. The interpretable rules are available at http://decrypthon.igbmc.fr/kd4v/. PMID:23589683
Haytowitz, David B; Pehrsson, Pamela R
2018-01-01
For nearly 20 years, the National Food and Nutrient Analysis Program (NFNAP) has expanded and improved the quantity and quality of data in the US Department of Agriculture's (USDA) food composition databases (FCDB) through the collection and analysis of nationally representative food samples. NFNAP employs statistically valid sampling plans, the Key Foods approach to identify and prioritize foods and nutrients, comprehensive quality control protocols, and analytical oversight to generate new and updated analytical data for food components. NFNAP has allowed the Nutrient Data Laboratory to keep up with the dynamic US food supply and emerging scientific research. Recently generated results for nationally representative food samples show marked changes compared to previous database values for selected nutrients. Monitoring changes in the composition of foods is critical in keeping FCDB up to date, so that they remain a vital tool in assessing the nutrient intake of national populations, as well as for providing dietary advice. Published by Elsevier Ltd.
Configuration management program plan for Hanford site systems engineering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kellie, C.L.
This plan establishes the integrated management program for the evolving technical baseline developed through the systems engineering process. This configuration management program aligns with the criteria identified in the DOE Standard, DOE-STD-1073-93. Included are specific requirements for control of the systems engineering RDD-100 database, and electronic data incorporated in the database that establishes the Hanford Site Technical Baseline.
ERIC Educational Resources Information Center
Lloyd-Strovas, Jenny D.; Arsuffi, Thomas L.
2016-01-01
We examined the diversity of environmental education (EE) in Texas, USA, by developing a framework to assess EE organizations and programs at a large scale: the Environmental Education Database of Organizations and Programs (EEDOP). This framework consisted of the following characteristics: organization/visitor demographics, pedagogy/curriculum,…
Basques, B A; McLynn, R P; Lukasiewicz, A M; Samuel, A M; Bohl, D D; Grauer, J N
2018-02-01
The aims of this study were to characterize the frequency of missing data in the National Surgical Quality Improvement Program (NSQIP) database and to determine how missing data can influence the results of studies dealing with elderly patients with a fracture of the hip. Patients who underwent surgery for a fracture of the hip between 2005 and 2013 were identified from the NSQIP database, and the percentage of missing data was noted for demographics, comorbidities and laboratory values. These variables were tested for association with 'any adverse event' using multivariate regressions based on common ways of handling missing data. A total of 26,066 patients were identified. The rate of missing data was up to 77.9% for many variables. Multivariate regressions comparing three methods of handling missing data found different risk factors for postoperative adverse events. Only seven of 35 identified risk factors (20%) were common to all three analyses. Missing data is an important issue in national database studies that researchers must consider when evaluating such investigations. Cite this article: Bone Joint J 2018;100-B:226-32. ©2018 The British Editorial Society of Bone & Joint Surgery.
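Three common ways of handling a variable with missing values, of the kind such analyses compare, can be illustrated in a few lines. The variable name and data below are invented for illustration, not NSQIP values, and the study's actual three methods are not specified here:

```python
# Three generic strategies for a partially missing variable:
# (1) complete-case analysis (drop missing),
# (2) treat "missing" as its own category,
# (3) mean imputation.
# Each leads to a different analysis population or covariate coding,
# which is why regressions built on them can disagree on risk factors.

def complete_case(values):
    return [v for v in values if v is not None]

def missing_as_category(values):
    return ["missing" if v is None else v for v in values]

def mean_impute(values):
    known = [v for v in values if v is not None]
    mean = sum(known) / len(known)
    return [mean if v is None else v for v in values]

albumin = [3.5, None, 4.1, None, 3.9]  # hypothetical lab values, 40% missing
print(complete_case(albumin))          # [3.5, 4.1, 3.9]
print(missing_as_category(albumin))    # 'missing' becomes a category level
print(mean_impute(albumin))            # gaps filled with the known mean
```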
Developing a High Level Data Base to Teach Reproductive Endocrinology Using the HyperCard Program.
ERIC Educational Resources Information Center
Friedler, Yael; Shabo, Amnon
1990-01-01
Describes a database courseware using the HyperCard program on the subject of human reproductive endocrinology and feedback mechanisms. Discusses some issues concerning database courseware development. Presents several examples of the courseware display. (Author/YP)
Opportune Landing Site CBR and Low-Density Laboratory Database
2008-05-01
Opportune Landing Site Program. Opportune Landing Site CBR and Low-Density Laboratory Database. Larry S. Danyluk, Sally A. Shoop, Rosa T. Affleck, and Wendy L. Wieder. ERDC/CRREL TR-08-9, May 2008. ...reproduce in-situ density, moisture, and CBR values and therefore do not accurately represent the complete range of these values measured in the field
77 FR 66880 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-07
... the database that stores information for the Lost and Stolen Securities Program. We estimate that 26... Lost and Stolen Securities Program database will be kept confidential. The Commission may not conduct... SECURITIES AND EXCHANGE COMMISSION Submission for OMB Review; Comment Request Upon Written Request...
DOT National Transportation Integrated Search
2006-05-01
Specific objectives of the Peer Exchange were to discuss and exchange information about databases and other software used to support the program cycles managed by state transportation research offices. Elements of the program cycle include: ...
Family and Other Impacts on Retention
1992-04-01
provide the Army with an invaluable database for evaluating and designing policies and programs to enhance Army retention objectives. These programs... policy, as well as other aspects of the military force. Concurrently, continuing economic growth in the private sector will result in higher levels... work on retention and on the broader body of research on job satisfaction and job turnover. More recently, there has been both policy and theoretical
NASA Technical Reports Server (NTRS)
Hess, Elizabeth L.; Wallace-Robinson, Janice; Dickson, Katherine J.; Powers, Janet V.
1992-01-01
A 10-year cumulative bibliography of publications resulting from research supported by the musculoskeletal discipline of the space physiology and countermeasures program of NASA's Life Sciences Division is provided. Primary subjects are bone, mineral, and connective tissue, and muscle. General physiology references are also included. Principal investigators whose research tasks resulted in publication are identified by an asterisk. Publications are identified by a record number corresponding to their entry in the life sciences bibliographic database maintained by the George Washington University.
Flexible Reporting of Clinical Data
Andrews, Robert D.
1987-01-01
Two prototype methods have been developed to aid in the presentation of relevant clinical data: 1) an integrated report that displays results from a patient's computer-stored data and also allows manual entry of data, and 2) a graph program that plots results of multiple kinds of tests. These reports provide a flexible means of displaying data to help evaluate patient treatment. The two methods also explore ways of integrating the display of data from multiple components of the Veterans Administration's (VA) Decentralized Hospital Computer Program (DHCP) database.
NASA Astrophysics Data System (ADS)
Koppers, A. A.; Minnett, R. C.; Tauxe, L.; Constable, C.; Donadini, F.
2008-12-01
The Magnetics Information Consortium (MagIC) is commissioned to implement and maintain an online portal to a relational database populated by rock and paleomagnetic data. The goal of MagIC is to archive all measurements and derived properties for studies of paleomagnetic directions (inclination, declination) and intensities, and for rock magnetic experiments (hysteresis, remanence, susceptibility, anisotropy). Organizing data for presentation in peer-reviewed publications or for ingestion into databases is a time-consuming task, and to facilitate these activities, three tightly integrated tools have been developed: MagIC-PY, the MagIC Console Software, and the MagIC Online Database. A suite of Python scripts is available to help users port their data into the MagIC data format. They allow the user to add important metadata, perform basic interpretations, and average results at the specimen, sample and site levels. These scripts have been validated for use as Open Source software under the UNIX, Linux, PC and Macintosh© operating systems. We have also developed the MagIC Console Software program to assist in collating rock and paleomagnetic data for upload to the MagIC database. The program runs in Microsoft Excel© on both Macintosh© computers and PCs. It performs routine consistency checks on data entries, and assists users in preparing data for uploading into the online MagIC database. The MagIC website is hosted under EarthRef.org at http://earthref.org/MAGIC/ and has two search nodes, one for paleomagnetism and one for rock magnetism. Both nodes provide query building based on location, reference, methods applied, material type and geological age, as well as a visual FlashMap interface to browse and select locations. 
Users can also browse the database by data type (inclination, intensity, VGP, hysteresis, susceptibility) or by data compilation to view all contributions associated with previous databases, such as PINT, GMPDB or TAFI or other user-defined compilations. Query results are displayed in a digestible tabular format allowing the user to descend from locations to sites, samples, specimens and measurements. At each stage, the result set can be saved and, when supported by the data, can be visualized by plotting global location maps, equal area, XY, age, and depth plots, or typical Zijderveld, hysteresis, magnetization and remanence diagrams.
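The "routine consistency checks" the Console Software performs on data entries might include simple bounds checks on directional data. The sketch below is illustrative only and is not the actual MagIC validation code; the field names are assumptions:

```python
# Illustrative consistency check for one paleomagnetic direction record:
# declination must lie in [0, 360) and inclination in [-90, 90].

def check_direction(record):
    """Return a list of problems found in one direction record."""
    problems = []
    dec, inc = record.get("dec"), record.get("inc")
    if dec is None or not (0.0 <= dec < 360.0):
        problems.append("declination missing or outside [0, 360)")
    if inc is None or not (-90.0 <= inc <= 90.0):
        problems.append("inclination missing or outside [-90, 90]")
    return problems

print(check_direction({"dec": 355.2, "inc": 61.0}))   # [] -- record passes
print(check_direction({"dec": 361.0, "inc": -95.0}))  # two problems reported
```

Running such checks before upload is what lets the online database assume clean, structurally valid contributions.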
Critical care procedure logging using handheld computers
Carlos Martinez-Motta, J; Walker, Robin; Stewart, Thomas E; Granton, John; Abrahamson, Simon; Lapinsky, Stephen E
2004-01-01
Introduction We conducted this study to evaluate the feasibility of implementing an internet-linked handheld computer procedure logging system in a critical care training program. Methods Subspecialty trainees in the Interdepartmental Division of Critical Care at the University of Toronto received and were trained in the use of Palm handheld computers loaded with a customized program for logging critical care procedures. The procedures were entered into the handheld device using checkboxes and drop-down lists, and data were uploaded to a central database via the internet. To evaluate the feasibility of this system, we tracked the utilization of this data collection system. Benefits and disadvantages were assessed through surveys. Results All 11 trainees successfully uploaded data to the central database, but only six (55%) continued to upload data on a regular basis. The most common reason cited for not using the system pertained to initial technical problems with data uploading. From 1 July 2002 to 30 June 2003, a total of 914 procedures were logged. Significant variability was noted in the number of procedures logged by individual trainees (range 13–242). The database generated by regular users provided potentially useful information to the training program director regarding the scope and location of procedural training among the different rotations and hospitals. Conclusion A handheld computer procedure logging system can be effectively used in a critical care training program. However, user acceptance was not uniform, and continued training and support are required to increase user acceptance. Such a procedure database may provide valuable information that may be used to optimize trainees' educational experience and to document clinical training experience for licensing and accreditation. PMID:15469577
Determination of resilient modulus values for typical plastic soils in Wisconsin.
DOT National Transportation Integrated Search
2011-09-01
"The objectives of this research are to establish a resilient modulus test results database and to develop : correlations for estimating the resilient modulus of Wisconsin fine-grained soils from basic soil properties. A : laboratory testing program ...
Configuration management program plan for Hanford site systems engineering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoffman, A.G.
This plan establishes the integrated configuration management program for the evolving technical baseline developed through the systems engineering process. This configuration management program aligns with the criteria identified in the DOE Standard, DOE-STD-1073-93. Included are specific requirements for control of the systems engineering RDD-100 database, and electronic data incorporated in the database that establishes the Hanford site technical baseline.
Song, Sun Ok; Jung, Chang Hee; Song, Young Duk; Park, Cheol-Young; Kwon, Hyuk-Sang; Cha, Bong Soo; Park, Joong-Yeol; Lee, Ki-Up
2014-01-01
Background The National Health Insurance Service (NHIS) recently signed an agreement to provide limited open access to its databases within the Korean Diabetes Association for the benefit of Korean subjects with diabetes. Here, we present the history, structure, and contents of the Korean National Health Insurance (NHI) system, and the way to procure its data, for the benefit of Korean researchers. Methods The NHIS in Korea is a single-payer program and is mandatory for all residents in Korea. The three main healthcare programs, the NHI, Medical Aid, and long-term care insurance (LTCI), provide 100% coverage for the Korean population. The NHIS in Korea has adopted a fee-for-service system to pay health providers. Researchers can obtain health information from the four databases of the insured, which contain data on health insurance claims, health check-ups and LTCI. Results Metabolic diseases, as chronic diseases, are increasing as society ages. Because NHIS data are mandatory, serial population data, they can show the time course of disease, help predict disease progression, and support primary and secondary prevention of disease after data mining. Conclusion The NHIS database represents the entire Korean population and can be used as a population-based database. The integrated information technology of the NHIS database makes it a world-leading platform for population-based epidemiology and disease research. PMID:25349827
National Rehabilitation Information Center
Users can search the NARIC website or one of its databases, including projects conducting research and/or development (NIDILRR Program Database) and organizations, agencies, and online resources that support people ...
The Reach Address Database (RAD)
The Reach Address Database (RAD) stores reach address information for each Water Program feature that has been linked to the underlying surface water features (streams, lakes, etc.) in the National Hydrography Dataset (NHD) Plus dataset.
LigandBox: A database for 3D structures of chemical compounds
Kawabata, Takeshi; Sugihara, Yusuke; Fukunishi, Yoshifumi; Nakamura, Haruki
2013-01-01
A database for the 3D structures of available compounds is essential for virtual screening by molecular docking. We have developed the LigandBox database (http://ligandbox.protein.osaka-u.ac.jp/ligandbox/) containing four million available compounds, collected from the catalogues of 37 commercial suppliers, and approved drugs and biochemical compounds taken from the KEGG_DRUG, KEGG_COMPOUND and PDB databases. Each chemical compound in the database has several 3D conformers with hydrogen atoms and atomic charges, which are ready to be docked into receptors using docking programs. The 3D conformations were generated using our molecular simulation program package, myPresto. Various physical properties, such as aqueous solubility (LogS) and carcinogenicity, have also been calculated to characterize the ADME-Tox properties of the compounds. The Web database provides two services for compound searches: a property/chemical ID search and a chemical structure search. The chemical structure search is performed by a combination of a descriptor search and a maximum common substructure (MCS) search, using our program kcombu. By specifying a query chemical structure, users can find similar compounds among the millions of compounds in the database within a few minutes. Our database is expected to assist a wide range of researchers, in the fields of medical science, chemical biology, and biochemistry, who are seeking to discover active chemical compounds by virtual screening. PMID:27493549
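The two-stage structure search described above (a fast descriptor screen that discards implausible candidates, followed by a slower exact comparison such as MCS matching) can be sketched with set-based fingerprints. This is a toy illustration: kcombu uses real chemical descriptors and MCS algorithms, and the feature sets below are invented stand-ins:

```python
# Stage 1 of a two-stage similarity search: a cheap Tanimoto screen over
# feature-set "fingerprints". Stage 2 (exact MCS matching) would run only
# on the survivors, which is what makes million-compound search fast.

def tanimoto(a, b):
    """Similarity of two feature sets (1.0 = identical, 0.0 = disjoint)."""
    return len(a & b) / len(a | b) if a | b else 1.0

def screen(query_fp, database, threshold=0.5):
    return [name for name, fp in database.items()
            if tanimoto(query_fp, fp) >= threshold]

db = {
    "aspirin-like": {"ring", "ester", "acid"},
    "sugar-like":   {"ring", "hydroxyl"},
    "alkane-like":  {"chain"},
}
print(screen({"ring", "acid"}, db))  # ['aspirin-like']
```

The threshold trades recall for speed: a lower value passes more candidates to the expensive MCS stage.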
Inferring Network Controls from Topology Using the Chomp Database
2015-12-03
AFRL-AFOSR-VA-TR-2016-0033. Inferring Network Controls from Topology Using the CHomP Database. John Harer, Duke University. Final Report, 12/03/2015. Grant FA9550-10-1-0436. ...area of Topological Data Analysis (TDA) and its application to dynamical systems. The role of this work in the Complex Networks program is based on
A Recommender System in the Cyber Defense Domain
2014-03-27
The host monitoring software is a Java-based program sending updates to the database on the sensor machine. The host monitoring program gathers information about... A MySQL database located on the sensor machine acts as the storage for the sensors on the network. Snort, Nmap, vulnerability scores, and... The machine with the IDS and the recommender is labeled "sensor". The recommender system code is written in Java and compiled using Java version 1.6.024
1987-12-01
[Figure residue: Figure 2, Intelligent Disk Controller; Figure 5, Processor-Per-Head architecture (application programs, database management system, operating system, disk controller, host).] However, these additional properties have been proven in classical set and relation theory [75]. These additional properties are described here
Database for propagation models
NASA Astrophysics Data System (ADS)
Kantak, Anil V.
1991-07-01
A propagation researcher or a systems engineer who intends to use the results of a propagation experiment is generally faced with various database tasks, such as selecting the computer software and hardware and writing the programs to pass the data through the models of interest. This task is repeated every time a new experiment is conducted, or the same experiment is carried out at a different location and generates different data. Thus the users of these data have to spend a considerable portion of their time learning how to implement the computer hardware and software toward the desired end. This situation could be eased considerably by creating an easily accessible propagation database that holds all the accepted (standardized) propagation phenomena models approved by the propagation research community. Handling the data would also become easier for the user. Such a database can stimulate the growth of propagation research only if it is available to all researchers, so that the results of an experiment conducted by one researcher can be examined independently by another, without different hardware and software being used. The database may be made flexible so that researchers need not be confined to its contents. The database would also spare researchers from documenting the software and hardware tools used in their research, since the propagation research community would already know the database. The following sections show a possible database construction, as well as properties of the database for propagation research.
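The core idea above, passing experimental data through a shared registry of standardized models instead of rewriting software for each experiment, might be sketched as follows. The registry design is an illustrative assumption, and the single free-space path-loss model stands in for the community-approved model set:

```python
# A minimal registry of named propagation models, applied uniformly to
# the same inputs. New standardized models would be added with @register
# without changing the code that runs them.
import math

MODELS = {}

def register(name):
    def wrap(fn):
        MODELS[name] = fn
        return fn
    return wrap

@register("free_space_db")
def free_space(d_km, f_mhz):
    # Free-space path loss in dB (standard textbook form for km and MHz).
    return 32.45 + 20 * math.log10(d_km) + 20 * math.log10(f_mhz)

def run_all(d_km, f_mhz):
    """Pass one (distance, frequency) point through every registered model."""
    return {name: fn(d_km, f_mhz) for name, fn in MODELS.items()}

print(run_all(1.0, 1000.0))  # path loss at 1 km, 1000 MHz
```

Because every researcher runs the same registered models, results from one experiment can be re-examined by another without reimplementation, which is the interoperability argument of the abstract.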
Choosing the Right Database Management Program.
ERIC Educational Resources Information Center
Vockell, Edward L.; Kopenec, Donald
1989-01-01
Provides a comparison of four database management programs commonly used in schools: AppleWorks, the DOS 3.3 and ProDOS versions of PFS, and MECC's Data Handler. Topics discussed include information storage, spelling checkers, editing functions, search strategies, graphs, printout formats, library applications, and HyperCard. (LRW)
NASA Astrophysics Data System (ADS)
Rack, F. R.
2005-12-01
The Integrated Ocean Drilling Program (IODP: 2003-2013 initial phase) is the successor to the Deep Sea Drilling Project (DSDP: 1968-1983) and the Ocean Drilling Program (ODP: 1985-2003). These earlier scientific drilling programs amassed collections of sediment and rock cores (over 300 kilometers stored in four repositories) and data organized in distributed databases and in print or electronic publications. International members of the IODP have established, through memoranda, the right to have access to: (1) all data, samples, scientific and technical results, all engineering plans, data or other information produced under contract to the program; and, (2) all data from geophysical and other site surveys performed in support of the program which are used for drilling planning. The challenge that faces the individual platform operators and management of IODP is to find the right balance and appropriate synergies among the needs, expectations and requirements of stakeholders. The evolving model for IODP database services consists of the management and integration of data collected onboard the various IODP platforms (including downhole logging and syn-cruise site survey information), legacy data from DSDP and ODP, data derived from post-cruise research and publications, and other IODP-relevant information types, to form a common, program-wide IODP information system (e.g., IODP Portal) which will be accessible to both researchers and the public. The JANUS relational database of ODP was introduced in 1997 and the bulk of ODP shipboard data has been migrated into this system, which is comprised of a relational data model consisting of over 450 tables. The JANUS database includes paleontological, lithostratigraphic, chemical, physical, sedimentological, and geophysical data from a global distribution of sites. 
For ODP Legs 100 through 210, and including IODP Expeditions 301 through 308, JANUS has been used to store data from 233,835 meters of recovered core, comprising 38,039 cores and 202,281 core sections stored in repositories, from which 2,299,180 samples have been taken for scientists and other users (http://iodp.tamu.edu/janusweb/general/dbtable.cgi). JANUS and other IODP databases are viewed as components of an evolving distributed network of databases, supported by metadata catalogs and middleware with XML workflows, that are intended to provide access to DSDP/ODP/IODP cores and sample-based data as well as other distributed geoscience data collections (e.g., CHRONOS, PetDB, SedDB). These data resources can be explored through emerging data visualization environments such as GeoWall; CoreWall (http://www.evl.uic.edu/cavern/corewall), a multi-screen display for viewing cores and related data; GeoWall-2 and LambdaVision, very-high-resolution networked environments for data exploration and visualization; and others. The U.S. Implementing Organization (USIO) for the IODP, also known as the JOI Alliance, is a partnership between Joint Oceanographic Institutions (JOI), Texas A&M University, and Lamont-Doherty Earth Observatory of Columbia University. JOI is a consortium of 20 premier oceanographic research institutions that serves the U.S. scientific community by leading large-scale, global research programs in scientific ocean drilling and ocean observing. For more than 25 years, JOI has helped facilitate discovery and advance global understanding of the Earth and its oceans through excellence in program management.
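Summary tables like the core-recovery counts above are straightforward aggregate queries against a relational model. A minimal sketch in SQLite follows; the single-table schema, column names, and numbers are invented for illustration (the real JANUS model spans over 450 tables):

```python
import sqlite3

# Hypothetical, greatly simplified schema; JANUS itself uses many related
# tables with different names.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE core (
    leg INTEGER, site TEXT, core_number INTEGER, recovered_m REAL)""")
conn.executemany(
    "INSERT INTO core VALUES (?, ?, ?, ?)",
    [(199, "1215A", 1, 9.5), (199, "1215A", 2, 8.7), (210, "1276A", 1, 4.2)],
)

# Total recovery per leg -- the kind of summary behind a dbtable.cgi report.
rows = conn.execute(
    "SELECT leg, COUNT(*) AS cores, SUM(recovered_m) AS meters "
    "FROM core GROUP BY leg ORDER BY leg"
).fetchall()
for leg, cores, meters in rows:
    print(leg, cores, meters)
```

The same aggregate pattern scales from three rows to the program's 38,039 cores without changing the query.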
Lucey, K.J.
1990-01-01
The U.S. Geological Survey conducts an external blind sample quality-assurance project for its National Water Quality Laboratory in Denver, Colorado, based on the analysis of reference water samples. Reference samples containing selected inorganic and nutrient constituents are disguised as environmental samples at the Survey's office in Ocala, Florida, and are sent periodically through other Survey offices to the laboratory. The results of this blind sample project indicate the quality of analytical data produced by the laboratory. This report provides instructions on the use of QADATA, an interactive, menu-driven program that allows users to retrieve the results of the blind sample quality-assurance project. The QADATA program, which is available on the U.S. Geological Survey's national computer network, accesses a blind sample database that contains more than 50,000 determinations from the last five water years for approximately 40 constituents at various concentrations. The data can be retrieved from the database for any user-defined time period and for any or all available constituents. After the user defines the retrieval, the program prepares statistical tables, control charts, and precision plots and generates a report which can be transferred to the user's office through the computer network. A discussion of the interpretation of the program output is also included. This quality-assurance information will permit users to document the quality of the analytical results received from the laboratory. The blind sample data are entered into the database within weeks after being produced by the laboratory and can be retrieved to meet the needs of specific projects or programs. (USGS)
NASA Technical Reports Server (NTRS)
Brenton, James C.; Barbre, Robert E.; Orcutt, John M.; Decker, Ryan K.
2018-01-01
The National Aeronautics and Space Administration's (NASA) Marshall Space Flight Center (MSFC) Natural Environments Branch (EV44) has provided atmospheric databases and analysis in support of space vehicle design and day-of-launch operations for NASA and commercial launch vehicle programs launching from the NASA Kennedy Space Center (KSC), co-located on the United States Air Force's Eastern Range (ER) at the Cape Canaveral Air Force Station. The ER is one of the most heavily instrumented sites in the United States, measuring various atmospheric parameters on a continuous basis. An inherent challenge with the large databases that EV44 receives from the ER is ensuring that erroneous data are removed from the databases and thus excluded from launch vehicle design analyses. EV44 has invested considerable effort in developing quality control (QC) procedures for individual meteorological instruments; however, no standard QC procedures currently exist for all databases, resulting in QC'd databases with inconsistencies in variables, methodologies, and periods of record. The goal of this activity is to use the previous efforts by EV44 to develop a standardized set of QC procedures from which to build flags within the meteorological databases from KSC and the ER, while maintaining open communication with end users from the launch community to develop ways to improve, adapt, and grow the QC database. Details of the QC checks are described. The flagged data points will be plotted in a graphical user interface (GUI) as part of a manual confirmation that the flagged data do indeed need to be removed from the archive. As the rate of launches increases with additional launch vehicle programs, more emphasis is being placed on continually updating and checking weather databases for data quality before their use in launch vehicle design and certification analyses.
Software support for Huntington's disease research.
Conneally, P M; Gersting, J M; Gray, J M; Beidleman, K; Wexler, N S; Smith, C L
1991-01-01
Huntington's disease (HD) is a hereditary disorder involving the central nervous system. Its effects are devastating, to the affected person as well as his or her family. The Department of Medical and Molecular Genetics at Indiana University (IU) plays an integral part in Huntington's research by providing computerized repositories of HD family information for researchers and families. The National Huntington's Disease Research Roster, founded in 1979 at IU, and the Huntington's Disease in Venezuela Project database contain information that has proven to be invaluable in the worldwide field of HD research. This paper addresses the types of information stored in each database, the pedigree database program (MEGADATS) used to manage the data, and significant findings that have resulted from access to the data.
Automated extraction of knowledge for model-based diagnostics
NASA Technical Reports Server (NTRS)
Gonzalez, Avelino J.; Myler, Harley R.; Towhidnejad, Massood; Mckenzie, Frederic D.; Kladke, Robin R.
1990-01-01
The concept of accessing computer-aided design (CAD) databases and extracting a process model automatically is investigated as a possible source for the generation of knowledge bases for model-based reasoning systems. The resulting system, referred to as automated knowledge generation (AKG), uses an object-oriented programming structure and constraint techniques, as well as an internal database of component descriptions, to generate a frame-based structure that describes the model. The procedure has been designed to be general enough to be easily coupled to CAD systems that feature a database capable of providing label and connectivity data from the drawn system. The AKG system is capable of defining knowledge bases in formats required by various model-based reasoning tools.
Strategies for Introducing Databasing into Science.
ERIC Educational Resources Information Center
Anderson, Christopher L.
1990-01-01
Outlines techniques used in the context of a sixth grade science class to teach database structure and search strategies for science using the AppleWorks program. Provides templates and questions for class and element databases. (Author/YP)
Vacuum status-display and sector-conditioning programs
NASA Astrophysics Data System (ADS)
Skelly, J.; Yen, S.
1990-08-01
Two programs have been developed for observation and control of the AGS vacuum system, with two notable features: (1) they incorporate a graphical user interface, and (2) they are driven by a relational database which describes the vacuum system. The vacuum system comprises some 440 devices organized into 28 vacuum sectors. The status-display program invites menu selection of a sector, interrogates the relational database for the relevant vacuum devices, acquires live readbacks, and posts a graphical display of their status. The sector-conditioning program likewise invites sector selection, produces the same status display, and also implements process-control logic on the sector devices to pump the sector down from atmospheric pressure to high vacuum over a period of several hours. As additional devices are installed in the vacuum system, they are added to the relational database; these programs then automatically include the new devices.
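The data-driven pattern described above (device inventory kept in a relational table, display logic kept generic) can be sketched as follows; the table layout, device names, and readback values are invented for illustration:

```python
import sqlite3

# Miniature of the AGS pattern: the device list lives in a relational
# table, so the display code never hard-codes device names. Adding a row
# is enough to make a new device appear in the status display.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE vacuum_device (name TEXT, sector INTEGER, kind TEXT)")
db.executemany("INSERT INTO vacuum_device VALUES (?, ?, ?)",
               [("IP-A01", 1, "ion_pump"), ("VG-A01", 1, "gauge"),
                ("IP-B01", 2, "ion_pump")])

def read_back(name):
    # Stand-in for the live hardware readback (pressure in Torr).
    return {"IP-A01": 1.0e-9, "VG-A01": 2.5e-9, "IP-B01": 4.0e-8}[name]

def sector_status(sector):
    # Interrogate the database for the sector's devices, then poll each one.
    names = [r[0] for r in db.execute(
        "SELECT name FROM vacuum_device WHERE sector = ?", (sector,))]
    return {n: read_back(n) for n in names}

print(sector_status(1))
```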
Why Save Your Course as a Relational Database?
ERIC Educational Resources Information Center
Hamilton, Gregory C.; Katz, David L.; Davis, James E.
2000-01-01
Describes a system that stores course materials for computer-based training programs in a relational database called Of Course! Outlines the basic structure of the databases; explains distinctions between Of Course! and other authoring languages; and describes how data is retrieved from the database and presented to the student. (Author/LRW)
First Database Course--Keeping It All Organized
ERIC Educational Resources Information Center
Baugh, Jeanne M.
2015-01-01
All Computer Information Systems programs require a database course for their majors. This paper describes an approach to such a course in which real world examples, both design projects and actual database application projects are incorporated throughout the semester. Students are expected to apply the traditional database concepts to actual…
75 FR 18255 - Passenger Facility Charge Database System for Air Carrier Reporting
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-09
... Facility Charge Database System for Air Carrier Reporting AGENCY: Federal Aviation Administration (FAA... the Passenger Facility Charge (PFC) database system to report PFC quarterly report information. In... developed a national PFC database system in order to more easily track the PFC program on a nationwide basis...
48 CFR 52.227-14 - Rights in Data-General.
Code of Federal Regulations, 2011 CFR
2011-10-01
... database or database means a collection of recorded information in a form capable of, and for the purpose... enable the computer program to be produced, created, or compiled. (2) Does not include computer databases... databases and computer software documentation). This term does not include computer software or financial...
48 CFR 52.227-14 - Rights in Data-General.
Code of Federal Regulations, 2014 CFR
2014-10-01
... database or database means a collection of recorded information in a form capable of, and for the purpose... enable the computer program to be produced, created, or compiled. (2) Does not include computer databases... databases and computer software documentation). This term does not include computer software or financial...
48 CFR 52.227-14 - Rights in Data-General.
Code of Federal Regulations, 2012 CFR
2012-10-01
... database or database means a collection of recorded information in a form capable of, and for the purpose... enable the computer program to be produced, created, or compiled. (2) Does not include computer databases... databases and computer software documentation). This term does not include computer software or financial...
48 CFR 52.227-14 - Rights in Data-General.
Code of Federal Regulations, 2013 CFR
2013-10-01
... database or database means a collection of recorded information in a form capable of, and for the purpose... enable the computer program to be produced, created, or compiled. (2) Does not include computer databases... databases and computer software documentation). This term does not include computer software or financial...
Two Student Self-Management Techniques Applied to Data-Based Program Modification.
ERIC Educational Resources Information Center
Wesson, Caren
Two student self-management techniques, student charting and student selection of instructional activities, were applied to ongoing data-based program modification. Forty-two elementary school resource room students were assigned randomly (within teacher) to one of three treatment conditions: Teacher Chart-Teacher Select Instructional Activities…
Computers and Library Management.
ERIC Educational Resources Information Center
Cooke, Deborah M.; And Others
1985-01-01
This five-article section discusses changes in the management of the school library resulting from use of the computer. Topics covered include data management programs (record keeping, word processing, and bibliographies); practical applications of a database; evaluation of "Circulation Plus" software; ergonomics and computers; and…
Adaptive Neuro-Fuzzy Modeling of UH-60A Pilot Vibration
NASA Technical Reports Server (NTRS)
Kottapalli, Sesi; Malki, Heidar A.; Langari, Reza
2003-01-01
Adaptive neuro-fuzzy relationships have been developed to model the UH-60A Black Hawk pilot floor vertical vibration. A 200-point database that approximates the entire UH-60A helicopter flight envelope is used for training and testing purposes. The NASA/Army Airloads Program flight test database was the source of the 200-point database. The present study is conducted in two parts. The first part involves level flight conditions, and the second part involves the entire (200-point) database including maneuver conditions. The results show that a neuro-fuzzy model can successfully predict the pilot vibration. Also, it is found that the training phase of this neuro-fuzzy model takes only two or three iterations to converge for most cases. Thus, the proposed approach produces a potentially viable model for real-time implementation.
NASA Astrophysics Data System (ADS)
Al-Mishwat, Ali T.
2016-05-01
PHASS99 is a FORTRAN program designed to retrieve and decode radiometric and other physical age information of igneous rocks contained in the international database IGBADAT (Igneous Base Data File). In the database, ages are stored in a proprietary format using mnemonic representations. The program can handle up to 99 ages per igneous rock specimen and caters to forty radiometric age systems. The radiometric age alphanumeric strings assigned to each specimen description in the database consist of four components: the numeric age and its exponential modifier, a four-character mnemonic method identification, a two-character mnemonic name of the analysed material, and the reference number in the rock group bibliography vector. For each specimen, the program searches for radiometric age strings, extracts them, parses them, decodes the different age components, and converts them to high-level English equivalents. IGBADAT and similarly structured files are used for input. The output includes three files: a flat raw ASCII text file containing the retrieved radiometric age information, a generic spreadsheet-compatible file for importing data into spreadsheets, and an error file. PHASS99 builds on the old decoder program TSTPHA (Test Physical Age) and greatly expands its capabilities. PHASS99 is simple, user-friendly, fast, and efficient, and does not require users to have knowledge of programming.
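The four-component decoding step can be illustrated with a small Python sketch. The string layout, mnemonics, and lookup tables below are assumptions made for illustration, since the actual IGBADAT encoding is proprietary:

```python
import re

# Assumed string layout: "<age>E<exp><method><material><ref>", e.g.
# "255E6KARWWR03" -> 255e6 years, K-Ar method, whole rock, reference 3.
# Mnemonics and their expansions are invented examples.
METHODS = {"KARW": "K-Ar", "RBSR": "Rb-Sr", "UPBZ": "U-Pb"}
MATERIALS = {"WR": "whole rock", "BI": "biotite", "ZR": "zircon"}

def decode_age(s):
    m = re.fullmatch(r"(\d+)E(\d)(\w{4})(\w{2})(\d{2})", s)
    if not m:
        return None                      # malformed string -> error file
    age, exp, meth, mat, ref = m.groups()
    return {"age_years": int(age) * 10 ** int(exp),
            "method": METHODS.get(meth, meth),
            "material": MATERIALS.get(mat, mat),
            "ref": int(ref)}

print(decode_age("255E6KARWWR03"))
```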
Therrell, Bradford L
2003-01-01
At birth, patient demographic and health information begin to accumulate in varied databases. There are often multiple sources of the same or similar data. New public health programs are often created without considering data linkages. Recently, newborn hearing screening (NHS) programs and immunization programs have virtually ignored the existence of newborn dried blood spot (DBS) screening databases containing similar demographic data, creating data duplication in their 'new' systems. Some progressive public health departments are developing data warehouses of basic, recurrent patient information and linking these databases to other health program databases where programs and services can benefit from such linkages. Demographic data warehousing saves time (and money) by eliminating duplicative data entry and reducing the chances of data errors. While newborn screening data are usually the first data available, they should not be the only data source considered for early data linkage or for populating a data warehouse. Birth certificate information should also be considered, along with other data sources, for infants who may not have received newborn screening or who may have been born outside of the jurisdiction and not have birth certificate information locally available. A newborn screening serial number provides a convenient identification number for use in the DBS program and for linking with other systems. As a minimum, data linkages should exist between newborn dried blood spot screening, newborn hearing screening, immunizations, birth certificates and birth defect registries.
International Space Station Utilization: Tracking Investigations from Objectives to Results
NASA Technical Reports Server (NTRS)
Ruttley, T. M.; Mayo, Susan; Robinson, J. A.
2011-01-01
Since the first module was assembled on the International Space Station (ISS), on-orbit investigations have been underway across all scientific disciplines. The facilities dedicated to research on ISS have supported over 1100 investigations from over 900 scientists representing over 60 countries. Relatively few of these investigations are tracked through the traditional NASA grants monitoring process, and with ISS National Laboratory use growing, the ISS Program Scientist's Office has been tasked with tracking all ISS investigations from objectives to results. Detailed information regarding each investigation is now collected once, at the first point it is proposed for flight, and is kept in an online database that serves as a single source of information on the core objectives of each investigation. Different fields are used to provide the appropriate level of detail for research planning, astronaut training, and public communications (http://www.nasa.gov/iss-science/). With each successive year, publications of ISS scientific results, which are used to measure the success of the research program, have shown steady increases in all scientific research areas on the ISS. Accurately identifying, collecting, and assessing the research results publications is a challenge and a priority for the ISS research program, and we will discuss the approaches that the ISS Program Science Office employs to meet this challenge. We will also address the online resources available to support outreach and communication of ISS research to the public. Keywords: International Space Station, Database, Tracking, Methods
Policy Gradient Adaptive Dynamic Programming for Data-Based Optimal Control.
Luo, Biao; Liu, Derong; Wu, Huai-Ning; Wang, Ding; Lewis, Frank L
2017-10-01
The model-free optimal control problem of general discrete-time nonlinear systems is considered in this paper, and a data-based policy gradient adaptive dynamic programming (PGADP) algorithm is developed to design an adaptive optimal controller. By using offline and online data rather than a mathematical model of the system, the PGADP algorithm improves the control policy with a gradient descent scheme. The convergence of the PGADP algorithm is proved by demonstrating that the constructed Q-function sequence converges to the optimal Q-function. Based on the PGADP algorithm, the adaptive control method is developed with an actor-critic structure and the method of weighted residuals. Its convergence properties are analyzed, where the approximate Q-function converges to its optimum. Computer simulation results demonstrate the effectiveness of the PGADP-based adaptive control method.
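The core idea of improving a value estimate from data alone with a gradient descent scheme can be sketched as follows. This is not the paper's PGADP algorithm (the actor-critic structure and policy gradient step are omitted); it is a plain Bellman-residual gradient fit of a Q-function for a fixed policy, on an invented linear system whose dynamics appear only through the recorded transitions:

```python
import numpy as np

rng = np.random.default_rng(0)
# Offline data: transitions (x, u, r, x') sampled from an unknown system.
X = rng.uniform(-1, 1, 200)
U = rng.uniform(-1, 1, 200)
Xn = 0.9 * X + 0.5 * U            # "hidden" dynamics: only the data are used
R = X**2 + U**2                   # observed stage cost
gamma, lr = 0.9, 0.1

# Linear-in-parameters Q-function: Q(x,u) = w . [x^2, u^2, xu, 1]
feats = lambda x, u: np.stack([x**2, u**2, x * u, np.ones_like(x)])
Un = np.zeros_like(Xn)            # evaluate the fixed policy u = 0
A = feats(X, U) - gamma * feats(Xn, Un)   # Bellman-residual features

w = np.zeros(4)
for _ in range(2000):
    resid = A.T @ w - R           # Q(x,u) - r - gamma*Q(x',0), per sample
    w -= lr * (A @ resid) / len(R)  # gradient step on mean squared residual
print(np.round(w, 3))
```

Because the residual is linear in w, this objective is convex and the iteration converges; PGADP replaces the fixed policy with a policy improved by gradient ascent at every step.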
IUE Data Analysis Software for Personal Computers
NASA Technical Reports Server (NTRS)
Thompson, R.; Caplinger, J.; Taylor, L.; Lawton, P.
1996-01-01
This report summarizes the work performed for the program titled, "IUE Data Analysis Software for Personal Computers" awarded under Astrophysics Data Program NRA 92-OSSA-15. The work performed was completed over a 2-year period starting in April 1994. As a result of the project, 450 IDL routines and eight database tables are now available for distribution for Power Macintosh computers and Personal Computers running Windows 3.1.
Move Over, Word Processors--Here Come the Databases.
ERIC Educational Resources Information Center
Olds, Henry F., Jr.; Dickenson, Anne
1985-01-01
Discusses the use of beginning, intermediate, and advanced databases for instructional purposes. A table listing seven databases with information on ease of use, smoothness of operation, data capacity, speed, source, and program features is included. (JN)
Ocean Drilling Program: Janus Web Database
ODP and IODP data are stored in the Janus Web Database, with additional data added as time permits (see the Database Overview for available data types). Data are available to everyone.
CANCER PREVENTION AND CONTROL (CP) DATABASE
This database focuses on breast, cervical, skin, and colorectal cancer emphasizing the application of early detection and control program activities and risk reduction efforts. The database provides bibliographic citations and abstracts of various types of materials including jou...
Zhang, Xiaohua; Wong, Sergio E; Lightstone, Felice C
2013-04-30
A mixed parallel scheme that combines message passing interface (MPI) and multithreading was implemented in the AutoDock Vina molecular docking program. The resulting program, named VinaLC, was tested on the petascale high performance computing (HPC) machines at Lawrence Livermore National Laboratory. To exploit the typical cluster-type supercomputers, thousands of docking calculations were dispatched by the master process to run simultaneously on thousands of slave processes, where each docking calculation takes one slave process on one node, and within the node each docking calculation runs via multithreading on multiple CPU cores and shared memory. Input and output of the program and the data handling within the program were carefully designed to deal with large databases and ultimately achieve HPC on a large number of CPU cores. Parallel performance analysis of the VinaLC program shows that the code scales up to more than 15K CPUs with a very low overhead cost of 3.94%. One million flexible compound docking calculations took only 1.4 h to finish on about 15K CPUs. The docking accuracy of VinaLC has been validated against the DUD data set by the re-docking of X-ray ligands and an enrichment study; 64.4% of the top-scoring poses have RMSD values under 2.0 Å. The program has been demonstrated to have good enrichment performance on 70% of the targets in the DUD data set. An analysis of the enrichment factors calculated at various percentages of the screening database indicates VinaLC has very good early recovery of actives. Copyright © 2013 Wiley Periodicals, Inc.
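The master/worker dispatch pattern can be sketched in a few lines; here a Python thread pool stands in for the MPI master and slave processes, and the docking function is a deterministic stand-in (the ligand names and scoring formula are invented):

```python
from concurrent.futures import ThreadPoolExecutor

def dock(ligand_id):
    # Stand-in for one docking calculation; returns (ligand, best_score).
    # Deterministic fake score in place of a real scoring function.
    return ligand_id, -6.0 - (int(ligand_id[4:]) % 40) / 10.0

# The "master" farms one job per ligand out to a pool of workers, then
# collects and ranks the results (more negative score = better pose).
ligands = [f"ZINC{i:08d}" for i in range(40)]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(dock, ligands))
best = min(results, key=results.get)
print(best, results[best])
```

VinaLC does the same thing at scale: MPI ranks replace the pool, and each rank itself multithreads across the cores of its node.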
Definition and maintenance of a telemetry database dictionary
NASA Technical Reports Server (NTRS)
Knopf, William P. (Inventor)
2007-01-01
A telemetry dictionary database includes a component for receiving spreadsheet workbooks of telemetry data over a web-based interface from other computer devices. Another component routes the spreadsheet workbooks to a specified directory on the host processing device. A process then checks the received spreadsheet workbooks for errors, and if no errors are detected the spreadsheet workbooks are routed to another directory to await initiation of a remote database loading process. The loading process first converts the spreadsheet workbooks to comma-separated value (CSV) files. Next, a network connection with the computer system that hosts the telemetry dictionary database is established, and the CSV files are ported to that computer system. This is followed by a remote initiation of a database loading program. Upon completion of loading, a flatfile generation program is manually initiated to generate a flatfile to be used in a mission operations environment by the core ground system.
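The check-then-convert step might look like the following toy sketch; the field names and validation rules are invented (the abstract does not specify them), and a list of dicts stands in for a parsed workbook:

```python
import csv
import io

# Assumed required columns for a telemetry dictionary row.
REQUIRED = ["mnemonic", "subsystem", "units"]

def check(rows):
    # Error pass: only error-free workbooks proceed to conversion.
    errors = []
    for i, row in enumerate(rows):
        for field in REQUIRED:
            if not row.get(field):
                errors.append(f"row {i}: missing {field}")
    return errors

def to_csv(rows):
    # Conversion pass: workbook rows -> CSV text ready to port and load.
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=REQUIRED)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

rows = [{"mnemonic": "BATT_V", "subsystem": "EPS", "units": "V"},
        {"mnemonic": "TANK_P", "subsystem": "PROP", "units": "psia"}]
assert check(rows) == []          # no errors: safe to convert and load
print(to_csv(rows))
```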
Yang, Chunguang G; Granite, Stephen J; Van Eyk, Jennifer E; Winslow, Raimond L
2006-11-01
Protein identification using MS is an important technique in proteomics as well as a major generator of proteomics data. We have designed the protein identification data object model (PDOM) and developed a parser based on this model to facilitate the analysis and storage of these data. The parser works with HTML or XML files saved or exported from MASCOT MS/MS ions search in peptide summary report or MASCOT PMF search in protein summary report. The program creates PDOM objects, eliminates redundancy in the input file, and has the capability to output any PDOM object to a relational database. This program facilitates additional analysis of MASCOT search results and aids the storage of protein identification information. The implementation is extensible and can serve as a template to develop parsers for other search engines. The parser can be used as a stand-alone application or can be driven by other Java programs. It is currently being used as the front end for a system that loads HTML and XML result files of MASCOT searches into a relational database. The source code is freely available at http://www.ccbm.jhu.edu and the program uses only free and open-source Java libraries.
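The parse-objects-load flow can be sketched as below. The XML fragment and its schema are invented stand-ins (the real MASCOT export format differs), but the three stages mirror the parser's data flow: parse the result file, build plain objects, write them to a relational database:

```python
import sqlite3
import xml.etree.ElementTree as ET

# Invented XML fragment of protein hits, standing in for a MASCOT export.
XML = """<hits>
  <protein accession="P69905" score="523"><peptide seq="VGAHAGEYGAEALER"/></protein>
  <protein accession="P68871" score="410"><peptide seq="SAVTALWGKV"/></protein>
</hits>"""

class ProteinHit:                 # minimal stand-in for a PDOM object
    def __init__(self, accession, score, peptides):
        self.accession, self.score, self.peptides = accession, score, peptides

# Stage 1-2: parse the file and build objects.
hits = [ProteinHit(p.get("accession"), int(p.get("score")),
                   [pep.get("seq") for pep in p.findall("peptide")])
        for p in ET.fromstring(XML).findall("protein")]

# Stage 3: output the objects to a relational database.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE hit (accession TEXT, score INTEGER)")
db.executemany("INSERT INTO hit VALUES (?, ?)",
               [(h.accession, h.score) for h in hits])
top = db.execute("SELECT accession FROM hit ORDER BY score DESC").fetchone()[0]
print(top)
```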
DeAngelo, Jacob
1983-01-01
GEOTHERM is a comprehensive system of public databases and software used to store, locate, and evaluate information on the geology, geochemistry, and hydrology of geothermal systems. Three main databases address the general characteristics of geothermal wells and fields, and the chemical properties of geothermal fluids; the last database is currently the most active. System tasks are divided into four areas: (1) data acquisition and entry, involving data entry via word processors and magnetic tape; (2) quality assurance, including the criteria and standards handbook and front-end data-screening programs; (3) operation, involving database backups and information extraction; and (4) user assistance, preparation of such items as application programs, and a quarterly newsletter. The principal task of GEOTHERM is to provide information and research support for the conduct of national geothermal-resource assessments. The principal users of GEOTHERM are those involved with the Geothermal Research Program of the U.S. Geological Survey.
Use of a Relational Database to Support Clinical Research: Application in a Diabetes Program
Lomatch, Diane; Truax, Terry; Savage, Peter
1981-01-01
A database has been established to support conduct of clinical research and monitor delivery of medical care for 1200 diabetic patients as part of the Michigan Diabetes Research and Training Center (MDRTC). Use of an intelligent microcomputer to enter and retrieve the data and use of a relational database management system (DBMS) to store and manage data have provided a flexible, efficient method of achieving both support of small projects and monitoring overall activity of the Diabetes Center Unit (DCU). Simplicity of access to data, efficiency in providing data for unanticipated requests, ease of manipulations of relations, security and “logical data independence” were important factors in choosing a relational DBMS. The ability to interface with an interactive statistical program and a graphics program is a major advantage of this system. Our database currently provides support for the operation and analysis of several ongoing research projects.
Research productivity and scholarly impact of APA-accredited school psychology programs: 2005-2009.
Kranzler, John H; Grapin, Sally L; Daley, Matt L
2011-12-01
This study examined the research productivity and scholarly impact of faculty in APA-accredited school psychology programs using data in the PsycINFO database from 2005 to 2009. We ranked doctoral programs on the basis of authorship credit, number of publications, and number of citations. In addition, we examined the primary publication outlets of school psychology program faculties and the major themes of research during this time period. We compared our results with those of a similar study that examined data from a decade earlier. Limitations and implications of this study are also discussed. Published by Elsevier Ltd.
Implementations of BLAST for parallel computers.
Jülich, A
1995-02-01
The BLAST sequence comparison programs have been ported to a variety of parallel computers: the shared-memory machine Cray Y-MP 8/864 and the distributed-memory architectures Intel iPSC/860 and nCUBE. Additionally, the programs were ported to run on workstation clusters. We explain the parallelization techniques and consider the pros and cons of these methods. The BLAST programs are very well suited for parallelization for a moderate number of processors. We illustrate our results using the program blastp as an example. As input data for blastp, a 799-residue protein query sequence and the protein database PIR were used.
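Database splitting, the usual way to parallelize BLAST-style searches over a moderate number of processors, can be sketched as follows; the sequences are invented and a trivial substring score stands in for BLAST's alignment statistics:

```python
from concurrent.futures import ThreadPoolExecutor

# Toy sequence database and query; real BLAST computes alignments and
# E-values, not substring counts.
DB = [("seqA", "MKTAYIAKQR"), ("seqB", "GGMKTAGG"), ("seqC", "PPPPPP")]
QUERY = "MKTA"

def scan(chunk):
    # Each worker scans its slice of the database independently.
    return [(name, seq.count(QUERY)) for name, seq in chunk if QUERY in seq]

def parallel_search(db, workers=2):
    chunks = [db[i::workers] for i in range(workers)]   # split the database
    with ThreadPoolExecutor(workers) as ex:
        parts = ex.map(scan, chunks)
    # Merge the partial hit lists and rank by score.
    return sorted((h for part in parts for h in part), key=lambda h: -h[1])

print(parallel_search(DB))
```

This is why BLAST parallelizes well for a moderate processor count: the chunks are independent, and only the small merged hit list needs communication.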
Activity computer program for calculating ion irradiation activation
NASA Astrophysics Data System (ADS)
Palmer, Ben; Connolly, Brian; Read, Mark
2017-07-01
A computer program, Activity, was developed to predict the activity and gamma lines of materials irradiated with an ion beam. It uses the TENDL (Koning and Rochman, 2012) [1] proton reaction cross section database, the Stopping and Range of Ions in Matter (SRIM) (Biersack et al., 2010) code, a Nuclear Data Services (NDS) radioactive decay database (Sonzogni, 2006) [2] and an ENDF gamma decay database (Herman and Chadwick, 2006) [3]. An extended version of Bateman's equation is used to calculate the activity at time t, and this equation is solved analytically, with the option to also solve by numeric inverse Laplace Transform as a failsafe. The program outputs the expected activity and gamma lines of the activated material.
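For the simplest case, a two-member decay chain, the analytic Bateman solution is short. A sketch follows; the decay constants and atom counts are arbitrary example values, and Activity itself extends this to longer chains fed by the ion-beam production rates:

```python
import math

def bateman_two(n_a0, lam_a, lam_b, t):
    """Bateman solution for A -> B -> (stable), pure A at t = 0:
    N_B(t) = N_A0 * lam_a/(lam_b - lam_a) * (exp(-lam_a t) - exp(-lam_b t))
    """
    return n_a0 * lam_a / (lam_b - lam_a) * (math.exp(-lam_a * t)
                                             - math.exp(-lam_b * t))

# Example values (arbitrary): 1e20 parent atoms, decay constants in 1/s.
n_b = bateman_two(n_a0=1e20, lam_a=1e-3, lam_b=5e-3, t=300.0)
activity_b = 5e-3 * n_b        # activity = lambda * N, in decays per second
print(n_b, activity_b)
```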
Cell death proteomics database: consolidating proteomics data on cell death.
Arntzen, Magnus Ø; Bull, Vibeke H; Thiede, Bernd
2013-05-03
Programmed cell death is a ubiquitous process of utmost importance for the development and maintenance of multicellular organisms. More than 10 different forms of programmed cell death have been discovered. Several proteomics analyses have been performed to gain insight into the proteins involved in the different forms of programmed cell death. To consolidate these studies, we have developed the cell death proteomics (CDP) database, which comprises data from apoptosis, autophagy, cytotoxic granule-mediated cell death, excitotoxicity, mitotic catastrophe, paraptosis, pyroptosis, and Wallerian degeneration. The CDP database is available as a web-based database to compare protein identifications and quantitative information across different experimental setups. The proteomics data of 73 publications were integrated and unified with protein annotations from UniProt-KB and gene ontology (GO). Currently, more than 6,500 records of more than 3,700 proteins are included in the CDP. Comparing apoptosis and autophagy using overrepresentation analysis of GO terms, the majority of enriched processes were found in both, but some clear differences were also perceived. Furthermore, the analysis revealed differences and similarities of the proteome between autophagosomal and overall autophagy. The CDP database represents a useful tool to consolidate data from proteome analyses of programmed cell death and is available at http://celldeathproteomics.uio.no.
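Overrepresentation analysis of GO terms typically reduces to a one-sided hypergeometric (Fisher) test. A stdlib-only sketch, with invented counts loosely scaled to the CDP's roughly 3,700 proteins (not the paper's actual numbers or method details):

```python
from math import comb

def hypergeom_sf(k, N, K, n):
    """P(X >= k) when drawing n items from N, of which K are 'successes'
    (one-sided Fisher test for enrichment)."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

# Invented example: 3700 proteins in the background, 300 annotated with a
# GO term; a list of 100 apoptosis proteins contains 20 of them. Expected
# by chance: 100 * 300/3700 ~ 8, so 20 should be significantly enriched.
p = hypergeom_sf(k=20, N=3700, K=300, n=100)
print(f"p = {p:.2e}")
```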
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-02
..., Proposed Collection: IMLS Museum Web Database: MuseumsCount.gov AGENCY: Institute of Museum and Library... general public. Information such as name, address, phone, email, Web site, staff size, program details... Museum Web Database: MuseumsCount.gov collection. The 60-day notice for the IMLS Museum Web Database...
Analysis of aggregates and binders used for the ODOT chip seal program.
DOT National Transportation Integrated Search
2010-11-30
This project compared the results of laboratory characterization of chip seal aggregate samples for Oklahoma DOT Divisions 1,2,3,5 and 6 with performance data from the Pavement Management System (PMS) database. Binder evaluation was limited to identi...
32 CFR 240.5 - Responsibilities.
Code of Federal Regulations, 2013 CFR
2013-07-01
... IASP and provide academic scholarships and grants in accordance with 10 U.S.C. 2200 and 7045. (3... graduation from their academic program. (C) Ensure that all students' academic eligibility is maintained... Steering Committee. (3) Maintain databases to support the analysis of performance results. (c) The...
32 CFR 240.5 - Responsibilities.
Code of Federal Regulations, 2014 CFR
2014-07-01
... IASP and provide academic scholarships and grants in accordance with 10 U.S.C. 2200 and 7045. (3... graduation from their academic program. (C) Ensure that all students' academic eligibility is maintained... Steering Committee. (3) Maintain databases to support the analysis of performance results. (c) The...
32 CFR 240.5 - Responsibilities.
Code of Federal Regulations, 2012 CFR
2012-07-01
... IASP and provide academic scholarships and grants in accordance with 10 U.S.C. 2200 and 7045. (3... graduation from their academic program. (C) Ensure that all students' academic eligibility is maintained... Steering Committee. (3) Maintain databases to support the analysis of performance results. (c) The...
1993-03-25
…application of Object-Oriented Programming (OOP) and Human-Computer Interface (HCI) design principles. Knowledge gained from each topic is applied to the design of a form-based interface for database data…
The Structural Ceramics Database: Technical Foundations
Munro, R. G.; Hwang, F. Y.; Hubbard, C. R.
1989-01-01
The development of a computerized database on advanced structural ceramics can play a critical role in fostering the widespread use of ceramics in industry and in advanced technologies. A computerized database may be the most effective means of accelerating technology development by enabling new materials to be incorporated into designs far more rapidly than would have been possible with traditional information transfer processes. Faster, more efficient access to critical data is the basis for creating this technological advantage. Further, a computerized database provides the means for a more consistent treatment of data, greater quality control and product reliability, and improved continuity of research and development programs. A preliminary system has been completed as phase one of an ongoing program to establish the Structural Ceramics Database system. The system is designed to be used on personal computers. Developed in a modular design, the preliminary system is focused on the thermal properties of monolithic ceramics. The initial modules consist of materials specification, thermal expansion, thermal conductivity, thermal diffusivity, specific heat, thermal shock resistance, and a bibliography of data references. Query and output programs also have been developed for use with these modules. The latter program elements, along with the database modules, will be subjected to several stages of testing and refinement in the second phase of this effort. The goal of the refinement process will be the establishment of this system as a user-friendly prototype. Three primary considerations provide the guidelines to the system’s development: (1) The user’s needs; (2) The nature of materials properties; and (3) The requirements of the programming language. The present report discusses the manner and rationale by which each of these considerations leads to specific features in the design of the system. PMID:28053397
Listing of Education in Archaeological Programs: The LEAP Clearinghouse, 1989-1989 Summary Report.
ERIC Educational Resources Information Center
Knoll, Patricia C., Ed.
This catalog incorporates information gathered between 1987 and 1989 for inclusion into the National Park Service's Listing of Education in Archaeological Programs (LEAP) computerized database. This database is a listing of federal, state, local and private projects promoting positive public awareness of U.S. archaeology--prehistoric and historic,…
Unified Database Development Program. Final Report.
ERIC Educational Resources Information Center
Thomas, Everett L., Jr.; Deem, Robert N.
The objective of the unified database (UDB) program was to develop an automated information system that would be useful in the design, development, testing, and support of new Air Force aircraft weapon systems. Primary emphasis was on the development of: (1) a historical logistics data repository system to provide convenient and timely access to…
Vapor Compression Cycle Design Program (CYCLE_D)
National Institute of Standards and Technology Data Gateway
SRD 49 NIST Vapor Compression Cycle Design Program (CYCLE_D) (PC database for purchase) The CYCLE_D database package simulates vapor compression refrigeration cycles. It is fully compatible with REFPROP 9.0 and covers 62 single-compound refrigerants. Fluids can be used in mixtures comprising up to five components.
ERIC Educational Resources Information Center
Noell, George H.
2005-01-01
Analyses were conducted replicating pilot work examining the feasibility of using Louisiana's educational assessment data in concert with the Louisiana Educational Assessment Data System (LEADS) database and other associated databases to assess teacher preparation programs. The degree of matching across years and the degree of matching between…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yung, J; Stefan, W; Reeve, D
2015-06-15
Purpose: Phantom measurements allow the performance of magnetic resonance (MR) systems to be evaluated. American Association of Physicists in Medicine (AAPM) Report No. 100, Acceptance Testing and Quality Assurance Procedures for MR Imaging Facilities, the American College of Radiology (ACR) MR Accreditation Program phantom testing, and the ACR MRI quality control (QC) program documents help to outline specific tests for establishing system performance baselines as well as system stability over time. Analyzing and processing tests from multiple systems can be time-consuming for medical physicists. Besides determining whether tests are within predetermined limits or criteria, monitoring longitudinal trends can also help prevent costly downtime of systems during clinical operation. In this work, a semi-automated QC program was developed to analyze and record measurements in a database that allowed easy access to historical data. Methods: Image analysis was performed on 27 different MR systems of 1.5T and 3.0T field strengths from GE and Siemens manufacturers. Recommended measurements involved the ACR MRI Accreditation Phantom, spherical homogeneous phantoms, and a phantom with a uniform hole pattern. Measurements assessed geometric accuracy and linearity, position accuracy, image uniformity, signal, noise, ghosting, transmit gain, center frequency, and magnetic field drift. The program was designed with open source tools, employing Linux, Apache, a MySQL database, and the Python programming language for the front end and back end. Results: Processing time for each image is <2 seconds. Figures are produced to show the regions of interest (ROIs) used for analysis. Historical data can be reviewed to compare previous-year data and to inspect for trends. Conclusion: An MRI quality assurance and QC program is necessary for maintaining high-quality, ACR MRI Accredited MR programs.
A reviewable database of phantom measurements assists medical physicists with processing and monitoring of large datasets. Longitudinal data can reveal trends that, although within passing criteria, indicate underlying system issues.
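The report's analysis code is not shown; one ACR uniformity metric such a QC pipeline typically computes, percent integral uniformity (PIU), can be sketched as follows (operating on pixel values from a region of interest in a uniform phantom; the actual ACR procedure uses mean signal in small sub-ROIs rather than raw max/min pixels):

```python
def percent_integral_uniformity(roi_pixels):
    """ACR-style percent integral uniformity:
    PIU = 100 * (1 - (Smax - Smin) / (Smax + Smin))."""
    smax, smin = max(roi_pixels), min(roi_pixels)
    return 100.0 * (1.0 - (smax - smin) / (smax + smin))
```

A perfectly uniform ROI scores 100; values below the accreditation threshold for the field strength would be flagged by the QC program.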
The research infrastructure of Chinese foundations, a database for Chinese civil society studies
Ma, Ji; Wang, Qun; Dong, Chao; Li, Huafang
2017-01-01
This paper provides technical details and user guidance on the Research Infrastructure of Chinese Foundations (RICF), a database of Chinese foundations, civil society, and social development in general. The structure of the RICF is deliberately designed and normalized according to the Three Normal Forms. The database schema consists of three major themes: foundations’ basic organizational profile (i.e., basic profile, board member, supervisor, staff, and related party tables), program information (i.e., program information, major program, program relationship, and major recipient tables), and financial information (i.e., financial position, financial activities, cash flow, activity overview, and large donation tables). The RICF’s data quality can be measured by four criteria: data source reputation and credibility, completeness, accuracy, and timeliness. Data records are properly versioned, allowing verification and replication for research purposes. PMID:28742065
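The RICF schema itself is documented in the paper; purely as an illustration of a normalized, three-theme layout of the kind described (table and column names here are hypothetical, not the RICF's), a fragment might look like:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
-- Theme 1: basic organizational profile (one row per foundation)
CREATE TABLE foundation (
    foundation_id INTEGER PRIMARY KEY,
    name          TEXT NOT NULL,
    province      TEXT
);
-- Board members reference the foundation rather than duplicating
-- its profile, as the Three Normal Forms require
CREATE TABLE board_member (
    member_id     INTEGER PRIMARY KEY,
    foundation_id INTEGER NOT NULL REFERENCES foundation(foundation_id),
    member_name   TEXT NOT NULL
);
-- Theme 3: financial information, versioned by fiscal year
CREATE TABLE financial_position (
    foundation_id INTEGER NOT NULL REFERENCES foundation(foundation_id),
    fiscal_year   INTEGER NOT NULL,
    total_assets  REAL,
    PRIMARY KEY (foundation_id, fiscal_year)
);
""")
con.execute("INSERT INTO foundation VALUES (1, 'Example Foundation', 'Beijing')")
con.execute("INSERT INTO financial_position VALUES (1, 2015, 1.0e6)")
row = con.execute(
    "SELECT f.name, p.total_assets FROM foundation f "
    "JOIN financial_position p USING (foundation_id)").fetchone()
```

Keying financial rows on (foundation, fiscal year) is what makes the versioned records verifiable and replicable, as the paper emphasizes.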
Construction of crystal structure prototype database: methods and applications.
Su, Chuanxun; Lv, Jian; Li, Quan; Wang, Hui; Zhang, Lijun; Wang, Yanchao; Ma, Yanming
2017-04-26
Crystal structure prototype data have become a useful source of information for materials discovery in the fields of crystallography, chemistry, physics, and materials science. This work reports the development of a robust and efficient method for assessing the similarity of structures on the basis of their interatomic distances. Using this method, we proposed a simple and unambiguous definition of crystal structure prototype based on hierarchical clustering theory, and constructed the crystal structure prototype database (CSPD) by filtering the known crystallographic structures in a database. Using a similar method, a structure prototype analysis package (SPAP) program was developed to remove similar structures from CALYPSO prediction results and to extract predicted low-energy structures for a separate theoretical structure database. A series of statistics describing the distribution of crystal structure prototypes in the CSPD was compiled to provide important insights for structure prediction and high-throughput calculations. Illustrative examples of the application of the proposed database are given, including the generation of initial structures for structure prediction and determination of the prototype structure in databases. These examples demonstrate the CSPD to be a generally applicable and useful tool for materials discovery.
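The paper's similarity measure is based on interatomic distances; a toy version of the idea (a sorted-distance fingerprint plus greedy single-linkage grouping, not the authors' actual algorithm) can be sketched as:

```python
import math

def distance_fingerprint(coords):
    """Sorted list of all pairwise interatomic distances,
    used as a crude structural fingerprint."""
    dists = []
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            dists.append(math.dist(coords[i], coords[j]))
    return sorted(dists)

def dissimilarity(fp_a, fp_b):
    """RMS difference between two equal-length fingerprints."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(fp_a, fp_b)) / len(fp_a))

def cluster_prototypes(fingerprints, tol):
    """Greedy single-linkage grouping: structures closer than tol
    end up sharing one prototype label."""
    labels = list(range(len(fingerprints)))
    for i in range(len(fingerprints)):
        for j in range(i + 1, len(fingerprints)):
            if dissimilarity(fingerprints[i], fingerprints[j]) < tol:
                old, new = labels[j], labels[i]
                labels = [new if lab == old else lab for lab in labels]
    return labels
```

Each resulting cluster would then be represented by a single prototype structure in the database.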
Construction of crystal structure prototype database: methods and applications
NASA Astrophysics Data System (ADS)
Su, Chuanxun; Lv, Jian; Li, Quan; Wang, Hui; Zhang, Lijun; Wang, Yanchao; Ma, Yanming
2017-04-01
Crystal structure prototype data have become a useful source of information for materials discovery in the fields of crystallography, chemistry, physics, and materials science. This work reports the development of a robust and efficient method for assessing the similarity of structures on the basis of their interatomic distances. Using this method, we proposed a simple and unambiguous definition of crystal structure prototype based on hierarchical clustering theory, and constructed the crystal structure prototype database (CSPD) by filtering the known crystallographic structures in a database. Using a similar method, a structure prototype analysis package (SPAP) program was developed to remove similar structures from CALYPSO prediction results and to extract predicted low-energy structures for a separate theoretical structure database. A series of statistics describing the distribution of crystal structure prototypes in the CSPD was compiled to provide important insights for structure prediction and high-throughput calculations. Illustrative examples of the application of the proposed database are given, including the generation of initial structures for structure prediction and determination of the prototype structure in databases. These examples demonstrate the CSPD to be a generally applicable and useful tool for materials discovery.
Modification of infant hypothyroidism and phenylketonuria screening program using electronic tools.
Taheri, Behjat; Haddadpoor, Asefeh; Mirkhalafzadeh, Mahmood; Mazroei, Fariba; Aghdak, Pezhman; Nasri, Mehran; Bahrami, Gholamreza
2017-01-01
Congenital hypothyroidism and phenylketonuria (PKU) are the most common causes of preventable mental retardation in infants worldwide. Timely diagnosis and treatment of these disorders can have lasting effects on the mental development of newborns. However, there are several problems at different stages of screening programs that, along with imposing heavy costs, can reduce the precision of the screening, increasing the chance of undiagnosed cases, which in turn can have damaging consequences for society. Therefore, given these problems and the importance of information systems in facilitating management and improving the quality of health care, the aim of this study was to improve the screening process of hypothyroidism and PKU in infants with the help of electronic resources. The current study is a qualitative, action research designed to improve the quality of screening, services, performance, implementation effectiveness, and management of the hypothyroidism and PKU screening program in Isfahan province. To this end, web-based software was designed. Programming was carried out using Delphi.net, with SQL Server 2008 used for database management. Given the weaknesses, problems, and limitations of the hypothyroidism and PKU screening program, and the importance of these diseases on a national scale, this study resulted in the design of hypothyroidism and PKU screening software for infants in Isfahan province. The inputs and outputs of the software were designed at three levels: the Health Care Centers in charge of the screening program, the provincial reference lab, and the health and treatment network of Isfahan province.
Features of this software include immediate registration of sample data at the time and location of sampling; the ability of the provincial reference laboratory and the health centers of the different eparchies to instantly observe, monitor, and follow up on samples at any moment; online verification of samples by the reference lab; creation of a daily schedule for the reference lab; and receipt of results directly from the analysis equipment, entered into the database without user input. The implementation of the hypothyroidism screening software increased the quality and efficiency of the screening program, minimized the risk of human error in the process, and solved many of the previous limitations of the program, which were the main goals of its implementation. It also improved the precision and quality of the services provided for these two diseases and the accuracy of data inputs, by making it possible to enter sample data at the place and time of sampling. This in turn enabled management based on precise data, helped develop a comprehensive database, and improved the satisfaction of service recipients.
Grid2: A Program for Rapid Estimation of the Jovian Radiation Environment
NASA Technical Reports Server (NTRS)
Evans, R. W.; Brinza, D. E.
2014-01-01
Grid2 is a program that utilizes the Galileo Interim Radiation Electron model 2 (GIRE2) Jovian radiation model to compute fluences and doses for Jupiter missions. (The successive versions of these two programs have been GIRE and GIRE2, and likewise Grid and Grid2.) While GIRE2 is an important improvement over the original GIRE radiation model, GIRE2 can take a day or more to compute these quantities for a complete mission. Grid2 fits the results of the detailed GIRE2 code with a set of grids in local time and position, thereby greatly speeding up execution of the model: minutes as opposed to days. The Grid2 model covers the time period from 1971 to 2050 and distances of 1.03 to 30 Jovian radii (Rj). It is available as a direct-access database through a FORTRAN interface program. The new database is only slightly larger than the original grid version: 1.5 gigabytes (GB) versus 1.2 GB.
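The speed-up comes from replacing a full model evaluation with a table lookup and interpolation on the precomputed grids. A generic sketch of the idea (plain bilinear interpolation on a unit-spaced grid, not Grid2's actual FORTRAN interface):

```python
def bilinear(grid, x, y):
    """Bilinear interpolation on a regular unit-spaced grid,
    where grid[i][j] holds the tabulated value at coordinates (i, j)."""
    i, j = int(x), int(y)          # lower-left grid cell corner
    fx, fy = x - i, y - j          # fractional position inside the cell
    return ((1 - fx) * (1 - fy) * grid[i][j]
            + fx * (1 - fy) * grid[i + 1][j]
            + (1 - fx) * fy * grid[i][j + 1]
            + fx * fy * grid[i + 1][j + 1])
```

Evaluating the grid this way costs four table reads and a few multiplies per point, independent of the complexity of the model that generated the table.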
Flight-determined engine exhaust characteristics of an F404 engine in an F-18 airplane
NASA Technical Reports Server (NTRS)
Ennix, Kimberly A.; Burcham, Frank W., Jr.; Webb, Lannie D.
1993-01-01
Personnel at the NASA Langley Research Center (NASA-Langley) and the NASA Dryden Flight Research Facility (NASA-Dryden) recently completed a joint acoustic flight test program. Several types of aircraft with high nozzle pressure ratio engines were flown to satisfy a twofold objective. First, assessments were made of subsonic climb-to-cruise noise from flights conducted at varying altitudes in a Mach 0.30 to 0.90 range. Second, using data from flights conducted at constant altitude in a Mach 0.30 to 0.95 range, engineers obtained a high quality noise database. This database was desired to validate the Aircraft Noise Prediction Program and other system noise prediction codes. NASA-Dryden personnel analyzed the engine data from several aircraft that were flown in the test program to determine the exhaust characteristics. The analysis of the exhaust characteristics of the F-18 aircraft is reported. An overview of the flight test planning, instrumentation, test procedures, data analysis, engine modeling codes, and results is presented.
SS-Wrapper: a package of wrapper applications for similarity searches on Linux clusters.
Wang, Chunlin; Lefkowitz, Elliot J
2004-10-28
Large-scale sequence comparison is a powerful tool for biological inference in modern molecular biology. Comparing new sequences to those in annotated databases is a useful source of functional and structural information about these sequences. Using software such as the basic local alignment search tool (BLAST) or HMMPFAM to identify statistically significant matches between newly sequenced segments of genetic material and those in databases is an important task for most molecular biologists. Searching algorithms are intrinsically slow and data-intensive, especially in light of the rapid growth of biological sequence databases due to the emergence of high throughput DNA sequencing techniques. Thus, traditional bioinformatics tools are impractical on PCs and even on dedicated UNIX servers. To take advantage of larger databases and more reliable methods, high performance computation becomes necessary. We describe the implementation of SS-Wrapper (Similarity Search Wrapper), a package of wrapper applications that can parallelize similarity search applications on a Linux cluster. Our wrapper utilizes a query segmentation-search (QS-search) approach to parallelize sequence database search applications. It takes into consideration load balancing between each node on the cluster to maximize resource usage. QS-search is designed to wrap many different search tools, such as BLAST and HMMPFAM using the same interface. This implementation does not alter the original program, so newly obtained programs and program updates should be accommodated easily. Benchmark experiments using QS-search to optimize BLAST and HMMPFAM showed that QS-search accelerated the performance of these programs almost linearly in proportion to the number of CPUs used. We have also implemented a wrapper that utilizes a database segmentation approach (DS-BLAST) that provides a complementary solution for BLAST searches when the database is too large to fit into the memory of a single node. 
Used together, QS-search and DS-BLAST provide a flexible solution to adapt sequential similarity searching applications in high performance computing environments. Their ease of use and their ability to wrap a variety of database search programs provide an analytical architecture to assist both the seasoned bioinformaticist and the wet-bench biologist.
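SS-Wrapper's own code is not shown here; the query-segmentation (QS-search) idea can be sketched as splitting a multi-sequence FASTA query round-robin into per-node chunks (the record parsing below is deliberately naive):

```python
def segment_queries(fasta_text, n_nodes):
    """Split a multi-sequence FASTA string into n_nodes chunks of whole
    records, assigned round-robin as a crude form of load balancing."""
    records = [">" + rec for rec in fasta_text.strip().split(">") if rec]
    chunks = [[] for _ in range(n_nodes)]
    for i, rec in enumerate(records):
        chunks[i % n_nodes].append(rec)  # whole records, never split mid-sequence
    return ["".join(chunk) for chunk in chunks]
```

Each chunk would then be handed to an unmodified BLAST or HMMPFAM process on its node, and the per-chunk reports concatenated, which is why the wrapper can accommodate program updates without changes.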
Evaluation of the national tuberculosis surveillance program in Haiti
Salyer, S. J.; Fitter, D. L.; Milo, R.; Blanton, C.; Ho, J. L.; Geffrard, H.; Morose, W.; Marston, B. J.
2015-01-01
OBJECTIVE To assess the quality of tuberculosis (TB) surveillance in Haiti, including whether underreporting from facilities to the national level contributes to low national case registration. METHODS We collected 2010 and 2012 TB case totals, reviewed laboratory registries, and abstracted individual TB case reports from 32 of 263 anti-tuberculosis treatment facilities randomly selected after stratification/weighting toward higher-volume facilities. We compared site results to national databases maintained by a non-governmental organization partner (International Child Care [ICC]) for 2010 and 2012, and the National TB Program (Programme National de Lutte contre la Tuberculose, PNLT) for 2012 only. RESULTS Case registries were available at 30/32 facilities for 2010 and all 32 for 2012. Totals of 3711 (2010) and 4143 (2012) cases were reported at the facilities. Case totals per site were higher in site registries than in the national databases by 361 (9.7%) (ICC 2010), 28 (0.8%) (ICC 2012), and 31 (0.8%) cases (PNLT 2012). Of abstracted individual cases, respectively 11.8% and 6.8% were not recorded in national databases for 2010 (n = 323) and 2012 (n = 351). CONCLUSIONS The evaluation demonstrated an improvement in reporting registered TB cases to the PNLT in Haiti between 2010 and 2012. Further improvement in case notification will require enhanced case detection and diagnosis. PMID:26260822
Cricket: A Mapped, Persistent Object Store
NASA Technical Reports Server (NTRS)
Shekita, Eugene; Zwilling, Michael
1996-01-01
This paper describes Cricket, a new database storage system that is intended to be used as a platform for design environments and persistent programming languages. Cricket uses the memory management primitives of the Mach operating system to provide the abstraction of a shared, transactional single-level store that can be directly accessed by user applications. In this paper, we present the design and motivation for Cricket. We also present some initial performance results which show that, for its intended applications, Cricket can provide better performance than a general-purpose database storage system.
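Cricket's single-level store relied on Mach memory management primitives; as a loose, portable analogy (not Cricket's design), a memory-mapped file lets application code read and write persistent records as ordinary memory rather than through explicit I/O calls:

```python
import mmap
import os
import struct

# Hypothetical fixed-size record layout for a toy mapped store
PATH, NRECS, RECSIZE = "toy_store.dat", 16, 8

with open(PATH, "w+b") as f:
    f.truncate(NRECS * RECSIZE)                  # reserve the store region
    with mmap.mmap(f.fileno(), NRECS * RECSIZE) as mm:
        # Writing record 3 is a plain memory store, not an explicit write()
        struct.pack_into("<q", mm, 3 * RECSIZE, 42)
        mm.flush()                               # push dirty pages to disk
        (value,) = struct.unpack_from("<q", mm, 3 * RECSIZE)

os.remove(PATH)
```

Cricket adds transactions and recovery on top of the mapped region, which is what distinguishes it from a bare mapped file.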
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coleman, Andre M.; Johnson, Gary E.; Borde, Amy B.
Pacific Northwest National Laboratory (PNNL) conducted this project for the U.S. Army Corps of Engineers, Portland District (Corps). The purpose of the project is to develop a geospatial, web-accessible database (called “Oncor”) for action effectiveness and related data from monitoring and research efforts for the Columbia Estuary Ecosystem Restoration Program (CEERP). The intent is for the Oncor database to enable synthesis and evaluation, the results of which can then be applied in subsequent CEERP decision-making. This is the first annual report in what is expected to be a 3- to 4-year project, which commenced on February 14, 2012.
High-energy physics software parallelization using database techniques
NASA Astrophysics Data System (ADS)
Argante, E.; van der Stok, P. D. V.; Willers, I.
1997-02-01
A programming model for software parallelization, called CoCa, is introduced that copes with problems caused by typical features of high-energy physics software. By basing CoCa on the database transaction paradigm, the complexity induced by the parallelization is largely transparent to the programmer, resulting in a higher level of abstraction than native message passing software. CoCa is implemented on a Meiko CS-2 and on a SUN SPARCcenter 2000 parallel computer. On the CS-2, the performance is comparable with that of native PVM and MPI.
From LDEF to a national Space Environment and Effects (SEE) program: A natural progression
NASA Technical Reports Server (NTRS)
Bowles, David E.; Calloway, Robert L.; Funk, Joan G.; Kinard, William H.; Levine, Arlene S.
1995-01-01
As the LDEF program draws to a close, it leaves in place the fundamental building blocks for a Space Environment and Effects (SEE) program. Results from LDEF data analyses and investigations now form a substantial core of knowledge on the long term effects of the space environment on materials, systems, and structures. In addition, these investigations form the basic structure of a critically-needed SEE archive and database system. An agency-wide effort is required to capture all elements of a SEE program to provide a more comprehensive and focused approach to understanding the space environment, determining the best techniques for both flight and ground-based experimentation, updating the models which predict both the environments and those effects on subsystems and spacecraft, and, finally, ensuring that this multitudinous information is properly maintained and inserted into spacecraft design programs. Many parts and pieces of a SEE program already exist at various locations to fulfill specific needs. The primary purpose of this program, under the direction of the Office of Advanced Concepts and Technology (OACT) in NASA Headquarters, is to take advantage of these parts; apply synergisms where possible; identify and, when possible, fill in gaps; and coordinate and advocate a comprehensive SEE program. The SEE program must coordinate and support the efforts of well-established technical communities wherein the bulk of the work will continue to be done. The SEE program will consist of a NASA-led SEE Steering Committee, consisting of government and industry users, with the responsibility for coordination between technology developers and NASA customers; and Technical Working Groups with primary responsibility for program technical content in response to user needs.
The Technical Working Groups are as follows: Materials and Processes; Plasma and Fields; Ionizing Radiation; Meteoroids and Orbital Debris; Neutral External Contamination; Thermosphere, Thermal, and Solar Conditions; Electromagnetic Effects; Integrated Assessments and Databases. Specific technology development tasks will be solicited through a NASA Research Announcement to be released in May of 1994. The areas in which tasks are solicited include: (1) engineering environment definitions, (2) environments and effects design guidelines, (3) environments and effects assessment models and databases, and (4) flight/ground simulation/technology assessment data.
From LDEF to a national Space Environment and Effects (SEE) program: A natural progression
NASA Astrophysics Data System (ADS)
Bowles, David E.; Calloway, Robert L.; Funk, Joan G.; Kinard, William H.; Levine, Arlene S.
1995-02-01
As the LDEF program draws to a close, it leaves in place the fundamental building blocks for a Space Environment and Effects (SEE) program. Results from LDEF data analyses and investigations now form a substantial core of knowledge on the long term effects of the space environment on materials, systems, and structures. In addition, these investigations form the basic structure of a critically-needed SEE archive and database system. An agency-wide effort is required to capture all elements of a SEE program to provide a more comprehensive and focused approach to understanding the space environment, determining the best techniques for both flight and ground-based experimentation, updating the models which predict both the environments and those effects on subsystems and spacecraft, and, finally, ensuring that this multitudinous information is properly maintained and inserted into spacecraft design programs. Many parts and pieces of a SEE program already exist at various locations to fulfill specific needs. The primary purpose of this program, under the direction of the Office of Advanced Concepts and Technology (OACT) in NASA Headquarters, is to take advantage of these parts; apply synergisms where possible; identify and, when possible, fill in gaps; and coordinate and advocate a comprehensive SEE program. The SEE program must coordinate and support the efforts of well-established technical communities wherein the bulk of the work will continue to be done. The SEE program will consist of a NASA-led SEE Steering Committee, consisting of government and industry users, with the responsibility for coordination between technology developers and NASA customers; and Technical Working Groups with primary responsibility for program technical content in response to user needs.
The Technical Working Groups are as follows: Materials and Processes; Plasma and Fields; Ionizing Radiation; Meteoroids and Orbital Debris; Neutral External Contamination; Thermosphere, Thermal, and Solar Conditions; Electromagnetic Effects; Integrated Assessments and Databases. Specific technology development tasks will be solicited through a NASA Research Announcement to be released in May of 1994. The areas in which tasks are solicited include: (1) engineering environment definitions, (2) environments and effects design guidelines, (3) environments and effects assessment models and databases, and (4) flight/ground simulation/technology assessment data.
Charting a Path to Location Intelligence for STD Control.
Gerber, Todd M; Du, Ping; Armstrong-Brown, Janelle; McNutt, Louise-Anne; Coles, F Bruce
2009-01-01
This article describes the New York State Department of Health's GeoDatabase project, which developed new methods and techniques for designing and building a geocoding and mapping data repository for sexually transmitted disease (STD) control. The GeoDatabase development was supported through the Centers for Disease Control and Prevention's Outcome Assessment through Systems of Integrated Surveillance workgroup. The design and operation of the GeoDatabase relied upon commercial-off-the-shelf tools that other public health programs may also use for disease-control systems. This article provides a blueprint of the structure and software used to build the GeoDatabase and integrate location data from multiple data sources into the everyday activities of STD control programs.
Cheng, Ching-Wu; Leu, Sou-Sen; Cheng, Ying-Mei; Wu, Tsung-Chih; Lin, Chen-Chung
2012-09-01
Construction accident research involves the systematic sorting, classification, and encoding of comprehensive databases of injuries and fatalities. The present study explores the causes and distribution of occupational accidents in the Taiwan construction industry by analyzing such a database using the data mining method known as classification and regression tree (CART). Utilizing a database of 1542 accident cases during the period 2000-2009, the study seeks to establish potential cause-and-effect relationships regarding serious occupational accidents in the industry. The results of this study show that the occurrence rules for falls and collapses in both public and private project construction industries serve as key factors to predict the occurrence of occupational injuries. The results of the study provide a framework for improving the safety practices and training programs that are essential to protecting construction workers from occasional or unexpected accidents. Copyright © 2011 Elsevier Ltd. All rights reserved.
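The core of a CART analysis like the one described is an exhaustive search for the feature/value split that most reduces weighted Gini impurity. A minimal sketch of that step for categorical accident records (an illustration, not the authors' implementation):

```python
def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(rows, labels):
    """Exhaustively try every (feature, value) equality split and return
    the one minimizing the weighted Gini impurity of the two branches."""
    best = (None, None, gini(labels))  # (feature index, value, impurity)
    for f in range(len(rows[0])):
        for v in {row[f] for row in rows}:
            left = [lab for row, lab in zip(rows, labels) if row[f] == v]
            right = [lab for row, lab in zip(rows, labels) if row[f] != v]
            if not left or not right:
                continue
            score = (len(left) * gini(left)
                     + len(right) * gini(right)) / len(labels)
            if score < best[2]:
                best = (f, v, score)
    return best
```

A full CART tree applies this search recursively to each branch; the "occurrence rules" the paper reports correspond to the paths from the root to the leaves.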
National Databases for Neurosurgical Outcomes Research: Options, Strengths, and Limitations.
Karhade, Aditya V; Larsen, Alexandra M G; Cote, David J; Dubois, Heloise M; Smith, Timothy R
2017-08-05
Quality improvement, value-based care delivery, and personalized patient care depend on robust clinical, financial, and demographic data streams of neurosurgical outcomes. The neurosurgical literature lacks a comprehensive review of large national databases. To assess the strengths and limitations of various resources for outcomes research in neurosurgery. A review of the literature was conducted to identify surgical outcomes studies using national data sets. The databases were assessed for the availability of patient demographics and clinical variables, longitudinal follow-up of patients, strengths, and limitations. The number of unique patients contained within each data set ranged from thousands (Quality Outcomes Database [QOD]) to hundreds of millions (MarketScan). Databases with both clinical and financial data included PearlDiver, Premier Healthcare Database, Vizient Clinical Data Base and Resource Manager, and the National Inpatient Sample. Outcomes collected by databases included patient-reported outcomes (QOD); 30-day morbidity, readmissions, and reoperations (National Surgical Quality Improvement Program); and disease incidence and disease-specific survival (Surveillance, Epidemiology, and End Results-Medicare). The strengths of large databases included large numbers of rare pathologies and multi-institutional nationally representative sampling; the limitations of these databases included variable data veracity, variable data completeness, and missing disease-specific variables. The improvement of existing large national databases and the establishment of new registries will be crucial to the future of neurosurgical outcomes research. Copyright © 2017 by the Congress of Neurological Surgeons
NASA Astrophysics Data System (ADS)
Maracle, B. K.; Schuster, P. F.
2008-12-01
The U.S. Geological Survey (USGS) recently concluded a five-year water quality study (2001-2005) of the Yukon River and its major tributaries. One component of the study was to establish a water quality baseline providing a frame of reference for assessing changes in the basin that may result from climate change. As the study neared its conclusion, the USGS began to foster a relationship with the Yukon River Inter-Tribal Watershed Council (YRITWC), which was in the process of building a steward-based Yukon River water quality program. Both the USGS and the YRITWC recognized the importance of a collaboration yielding mutual benefits. Through the guidance, expertise, and training provided by the USGS, the YRITWC developed and implemented a basin-wide water quality program. The YRITWC program began in March 2006 utilizing USGS protocols, techniques, and in-kind services. To date, more than 300 samplings and field measurements at more than 25 locations throughout the basin (twice the size of California) have been completed by more than 50 trained volunteers. The Yukon River Basin baseline water quality database has been extended from 5 to 8 years due to the efforts of the YRITWC-USGS collaboration. Basic field measurements include pH, specific conductance, dissolved oxygen, and water temperature. Samples taken for laboratory analyses include major ions, dissolved organic carbon, greenhouse gases, nutrients, stable isotopes of hydrogen and oxygen, and selected trace elements. Field replicates and blanks were introduced into the program in 2007 for quality assurance. Building toward a long-term dataset is critical to understanding the effects of climate change on river basins. Thus, relaying the importance of long-term water-quality databases is a main focus of the training workshops. Consistencies in data populations between the USGS 5-year database and the YRITWC 3-year database indicate that the protocols and procedures made a successful transition.
This reflects the success of the YRITWC-USGS sponsored water-quality training workshops for water technicians representing more than 18 Tribal Councils and First Nations throughout the Yukon River Basin. The collaborative approach to outreach and education will be described, along with discussion of future opportunities using this model.
Diet History Questionnaire: Database Utility Program
If you need to modify the standard nutrient database, a single nutrient value must be provided by gender and portion size. If you have modified the database to have fewer or greater demographic groups, nutrient values must be included for each group.
Ushijima, Masaru; Mashima, Tetsuo; Tomida, Akihiro; Dan, Shingo; Saito, Sakae; Furuno, Aki; Tsukahara, Satomi; Seimiya, Hiroyuki; Yamori, Takao; Matsuura, Masaaki
2013-03-01
Genome-wide transcriptional expression analysis is a powerful strategy for characterizing the biological activity of anticancer compounds. It is often instructive to identify gene sets involved in the activity of a given drug compound for comparison with different compounds. Currently, however, there is no comprehensive gene expression database and related application system that is: (i) specialized in anticancer agents; (ii) easy to use; and (iii) open to the public. To develop a public gene expression database of antitumor agents, we first examined gene expression profiles in human cancer cells after exposure to 35 compounds, including 25 clinically used anticancer agents. Gene signatures were extracted that classified genes as upregulated or downregulated after exposure to each drug. Hierarchical clustering showed that drugs with similar mechanisms of action, such as genotoxic drugs, were clustered. Connectivity map analysis further revealed that our gene signature data reflected the modes of action of the respective agents. Together with the database, we developed analysis programs that calculate scores for ranking changes in gene expression and for searching statistically significant pathways from the Kyoto Encyclopedia of Genes and Genomes database, in order to analyze the datasets more easily. Our database and the analysis programs are available online at our website (http://scads.jfcr.or.jp/db/cs/). Using these systems, we successfully showed that proteasome inhibitors are selectively classified as endoplasmic reticulum stress inducers and induce atypical endoplasmic reticulum stress. Thus, our public access database and related analysis programs constitute a set of efficient tools to evaluate the mode of action of novel compounds and identify promising anticancer lead compounds. © 2012 Japanese Cancer Association.
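Score-based comparison of up/down gene signatures, of the kind the abstract's analysis programs perform, can be illustrated with a deliberately simplified example. The gene names and the scoring rule below are illustrative only, not the database's actual method.

```python
# Illustrative sketch (invented scoring rule, not the SCADS implementation):
# compare an expression profile against an up/down gene signature by taking
# the mean expression of upregulated genes minus that of downregulated ones.

def signature_score(expression, up_genes, down_genes):
    """Mean expression of up-signature genes minus mean of down-signature genes."""
    up = sum(expression[g] for g in up_genes) / len(up_genes)
    down = sum(expression[g] for g in down_genes) / len(down_genes)
    return up - down

# Hypothetical log-fold-change profile after drug exposure.
profile = {"HSPA5": 2.1, "DDIT3": 1.8, "MYC": -1.2, "CCND1": -0.9}

score = signature_score(profile, up_genes=["HSPA5", "DDIT3"],
                        down_genes=["MYC", "CCND1"])
print(round(score, 2))   # → 3.0
```

A high score indicates the profile matches the signature's direction of change, which is the basis for ranking compounds by similarity of mode of action.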
The Chicago Thoracic Oncology Database Consortium: A Multisite Database Initiative
Carey, George B; Tan, Yi-Hung Carol; Bokhary, Ujala; Itkonen, Michelle; Szeto, Kyle; Wallace, James; Campbell, Nicholas; Hensing, Thomas; Salgia, Ravi
2016-01-01
Objective: An increasing amount of clinical data is available to biomedical researchers, but specifically designed database and informatics infrastructures are needed to handle this data effectively. Multiple research groups should be able to pool and share this data in an efficient manner. The Chicago Thoracic Oncology Database Consortium (CTODC) was created to standardize data collection and facilitate the pooling and sharing of data at institutions throughout Chicago and across the world. We assessed the CTODC by conducting a proof of principle investigation on lung cancer patients who took erlotinib. This study does not look into epidermal growth factor receptor (EGFR) mutations and tyrosine kinase inhibitors, but rather it discusses the development and utilization of the database involved. Methods: We have implemented the Thoracic Oncology Program Database Project (TOPDP) Microsoft Access, the Thoracic Oncology Research Program (TORP) Velos, and the TORP REDCap databases for translational research efforts. Standard operating procedures (SOPs) were created to document the construction and proper utilization of these databases. These SOPs have been made available freely to other institutions that have implemented their own databases patterned on these SOPs. Results: A cohort of 373 lung cancer patients who took erlotinib was identified. The EGFR mutation statuses of patients were analyzed. Out of the 70 patients that were tested, 55 had mutations while 15 did not. In terms of overall survival and duration of treatment, the cohort demonstrated that EGFR-mutated patients had a longer duration of erlotinib treatment and longer overall survival compared to their EGFR wild-type counterparts who received erlotinib. Discussion: The investigation successfully yielded data from all institutions of the CTODC. 
While the investigation identified challenges, such as the difficulty of data transfer and potential duplication of patient data, these issues can be resolved with greater cross-communication between institutions of the consortium. Conclusion: The investigation described herein demonstrates the successful data collection from multiple institutions in the context of a collaborative effort. The data presented here can be utilized as the basis for further collaborative efforts and/or development of larger and more streamlined databases within the consortium. PMID:27092293
Analysis and correction of Landsat 4 and 5 Thematic Mapper Sensor Data
NASA Technical Reports Server (NTRS)
Bernstein, R.; Hanson, W. A.
1985-01-01
Procedures for the correction and registration of Landsat TM image data are examined. The registration of Landsat-4 TM images of San Francisco to Landsat-5 TM images of the same area using the interactive geometric correction program and the cross-correlation technique is described. The geometric correction and cross-correlation results are presented. The corrections of the TM data to a map reference and to a cartographic database are discussed; geometric and cartographic analyses are applied to the registration results.
NASA Technical Reports Server (NTRS)
Powers, Janet V.; Wallace-Robinson, Janice; Dickson, Katherine J.; Hess, Elizabeth
1992-01-01
A 10-year cumulative bibliography of publications resulting from research supported by the Cardiopulmonary Discipline of the Space Physiology and Countermeasures Program of NASA's Life Sciences Division is provided. Primary subjects included in this bibliography are Fluid Shifts, Cardiovascular Fitness, Cardiovascular Physiology, and Pulmonary Physiology. General physiology references are also included. Principal investigators whose research tasks resulted in publication are identified. Publications are identified by a record number corresponding with their entry in the Life Sciences Bibliographic Database, maintained at the George Washington University.
NASA Technical Reports Server (NTRS)
Reid, John; Egge, Robert; McAfee, Nancy
2000-01-01
This document summarizes the feedback gathered during the user-testing phase in the development of an electronic library application: the Aeronautics and Space Access Pages (ASAP). It first provides some historical background on the NASA Scientific and Technical Information (STI) program and its efforts to enhance the services it offers the aerospace community. Following a brief overview of the ASAP project, it reviews the results of an online user survey, and from the lessons learned therein, outlines direction for future development of the project.
Database architectures for Space Telescope Science Institute
NASA Astrophysics Data System (ADS)
Lubow, Stephen
1993-08-01
At STScI nearly all large applications require database support. A general purpose architecture has been developed and is in use that relies upon an extended client-server paradigm. Processing is in general distributed across three processes, each of which generally resides on its own processor. Database queries are evaluated on one such process, called the DBMS server. The DBMS server software is provided by a database vendor. The application issues database queries and is called the application client. This client uses a set of generic DBMS application programming calls through our STDB/NET programming interface. Intermediate between the application client and the DBMS server is the STDB/NET server. This server accepts generic query requests from the application and converts them into the specific requirements of the DBMS server. In addition, it accepts query results from the DBMS server and passes them back to the application. Typically the STDB/NET server is local to the DBMS server, while the application client may be remote. The STDB/NET server provides additional capabilities such as database deadlock restart and performance monitoring. This architecture is currently in use for some major STScI applications, including the ground support system. We are currently investigating means of providing ad hoc query support to users through the above architecture. Such support is critical for providing flexible user interface capabilities. The Universal Relation advocated by Ullman, Kernighan, and others appears to be promising. In this approach, the user sees the entire database as a single table, thereby freeing the user from needing to understand the detailed schema. A software layer provides the translation between the user and detailed schema views of the database. However, many subtle issues arise in making this transformation. We are currently exploring this scheme for use in the Hubble Space Telescope user interface to the data archive system (DADS).
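The generic-to-specific translation role of the STDB/NET server can be sketched in miniature. This is a conceptual illustration only, not STScI code; the class names, dialects, and query shape are all invented.

```python
# Conceptual sketch of an intermediary server in the STDB/NET spirit: the
# application issues a generic query, and the intermediary converts it into
# the specific requirements of one DBMS vendor before forwarding it.

class GenericQuery:
    """Vendor-neutral query request, as an application client would issue."""
    def __init__(self, relation, columns, limit=None):
        self.relation, self.columns, self.limit = relation, columns, limit

class IntermediaryServer:
    """Translates generic requests into a particular vendor's SQL dialect."""
    def __init__(self, dialect):
        self.dialect = dialect

    def translate(self, q):
        sql = f"SELECT {', '.join(q.columns)} FROM {q.relation}"
        if q.limit is not None:
            if self.dialect == "sybase":      # TOP-style row limiting
                sql = sql.replace("SELECT", f"SELECT TOP {q.limit}", 1)
            else:                              # LIMIT-style dialects
                sql += f" LIMIT {q.limit}"
        return sql

q = GenericQuery("observations", ["target", "exposure"], limit=10)
print(IntermediaryServer("sybase").translate(q))
```

Because the application sees only the generic interface, the vendor-specific DBMS server can be swapped without changing application code, which is the architectural benefit the abstract describes.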
NASA Technical Reports Server (NTRS)
Kolb, Mark A.
1988-01-01
The Rubber Airplane program, which combines two symbolic processing techniques with a component-based database of design knowledge, is proposed as a computer aid for conceptual design. Using object-oriented programming, programs are organized around the objects and behavior to be simulated, and using constraint propagation, declarative statements designate mathematical relationships among all the equation variables. It is found that the additional level of organizational structure resulting from the arrangement of the design information in terms of design components provides greater flexibility and convenience.
ISO Key Project: Exploring the Full Range of Quasar/Agn Properties
NASA Technical Reports Server (NTRS)
Wilkes, Belinda; Oliversen, Ronald J. (Technical Monitor)
2003-01-01
While most of the work on this program has been completed, as previously reported, the portion of the program dealing with the subtopic of ISO LWS data analysis and reduction for the LWS Extragalactic Science Team and its leader, Dr. Howard Smith, is still active. This program in fact continues to generate results, and newly available computer modeling has extended the value of the datasets. As a result the team requests a one-year no-cost extension to this program, through 31 December 2004. The essence of the proposal is to perform ISO spectroscopic studies, including data analysis and modeling, of star-formation regions using an ensemble of archival space-based data from the Infrared Space Observatory's Long Wavelength Spectrometer and Short Wavelength Spectrometer, but including as well some other spectroscopic databases. Four kinds of regions are considered in the studies: (1) disks around more evolved objects; (2) young, low or high mass pre-main sequence stars in star-formation regions; (3) star formation in external, bright IR galaxies; and (4) the galactic center. One prime focus of the program is the OH lines in the far infrared. The program has the following goals: 1) Refine the data analysis of ISO observations to obtain deeper and better SNR results on selected sources. The ISO data itself underwent 'pipeline 10' reductions in early 2001, and additional 'hands-on data reduction packages' were supplied by the ISO teams in 2001. The Fabry-Perot database is particularly sensitive to noise and slight calibration errors; 2) Model the atomic and molecular line shapes, in particular the OH lines, using revised Monte-Carlo techniques developed by the SWAS team at the Center for Astrophysics; 3) Attend scientific meetings and workshops; 4) Perform E&PO activities related to infrared astrophysics and/or spectroscopy.
Software support for Huntington's disease research.
Conneally, P. M.; Gersting, J. M.; Gray, J. M.; Beidleman, K.; Wexler, N. S.; Smith, C. L.
1991-01-01
Huntington's disease (HD) is a hereditary disorder involving the central nervous system. Its effects are devastating to the affected person as well as his family. The Department of Medical and Molecular Genetics at Indiana University (IU) plays an integral part in Huntington's research by providing computerized repositories of HD family information for researchers and families. The National Huntington's Disease Research Roster, founded in 1979 at IU, and the Huntington's Disease in Venezuela Project database contain information that has proven invaluable to the worldwide field of HD research. This paper addresses the types of information stored in each database, the pedigree database program (MEGADATS) used to manage the data, and significant findings that have resulted from access to the data. PMID:1839672
Generation of the Ares I-X Flight Test Vehicle Aerodynamic Data Book and Comparison To Flight
NASA Technical Reports Server (NTRS)
Bauer, Steven X.; Krist, Steven E.; Compton, William B.
2011-01-01
A 3.5-year effort to characterize the aerodynamic behavior of the Ares I-X Flight Test Vehicle (AIX FTV) is described in this paper. The AIX FTV was designed to be representative of the Ares I Crew Launch Vehicle (CLV). While there are several differences in the outer mold line from the current revision of the CLV, the overall length, mass distribution, and flight systems of the two vehicles are very similar. This paper briefly touches on each of the aerodynamic databases developed in the program, describing the methodology employed, experimental and computational contributions to the generation of the databases, and how well the databases and underlying computations compare to actual flight test results.
ERIC Educational Resources Information Center
Irwin, Gretchen; Wessel, Lark; Blackman, Harvey
2012-01-01
This case describes a database redesign project for the United States Department of Agriculture's National Animal Germplasm Program (NAGP). The case provides a valuable context for teaching and practicing database analysis, design, and implementation skills, and can be used as the basis for a semester-long team project. The case demonstrates the…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-28
... identifying individual criminal offenders and alleged offenders and consisting only of identifying data and... to 5 U.S.C. 552a(j)(2): DO .220--SIGTARP Hotline Database. DO .221--SIGTARP Correspondence Database. DO .222--SIGTARP Investigative MIS Database. DO .223--SIGTARP Investigative Files Database. DO .224...
The Design and Analysis of a Network Interface for the Multi-Lingual Database System.
1985-12-01
[Report documentation page and front-matter residue (funding codes, appendix and figure listings) omitted as unrecoverable OCR garble.] ... the Multi-Backend Database System (MBDS). In this section, we provide an overview of both the MLDS and the MBDS to enhance the reader's understanding of the ...
Technical implementation of an Internet address database with online maintenance module.
Mischke, K L; Bollmann, F; Ehmer, U
2002-01-01
The article describes the technical implementation and management of the Internet address database of the Center for ZMK (Dental School) at the University of Münster, which is integrated into the "ZMK-Web" website. The editorially maintained system owes its topicality primarily to an electronically organized division of labor, supported by an online maintenance module programmed in JavaScript/PHP, as well as to a database-driven feedback function that offers website visitors configuration-independent direct mail windows, likewise programmed in JavaScript/PHP.
Pan Air Geometry Management System (PAGMS): A data-base management system for PAN AIR geometry data
NASA Technical Reports Server (NTRS)
Hall, J. F.
1981-01-01
A database management system called PAGMS was developed to facilitate data transfer in applications programs that create, modify, plot, or otherwise manipulate PAN AIR type geometry data in preparation for input to the PAN AIR system of computer programs. PAGMS is composed of a series of FORTRAN-callable subroutines which can be accessed directly from applications programs. Currently, only a NOS version of PAGMS has been developed.
Ocean Drilling Program: Privacy Policy
The following is the privacy policy for the www-odp.tamu.edu web site. 1. Cookies are used in the Database portion of the web
Design, Development, and Maintenance of the GLOBE Program Website and Database
NASA Technical Reports Server (NTRS)
Brummer, Renate; Matsumoto, Clifford
2004-01-01
This is a 1-year (FY 03) proposal to design and develop enhancements, implement improved efficiency and reliability, and provide responsive maintenance for the operational GLOBE (Global Learning and Observations to Benefit the Environment) Program website and database. This proposal is renewable, with a 5% annual inflation factor providing an approximate cost for the out years.
Listing of Education in Archaeological Programs: The LEAP Clearinghouse 1990-1991 Summary Report.
ERIC Educational Resources Information Center
Knoll, Patricia C., Ed.
This is the second catalog of the National Park Service's Listing of Education in Archaeological Programs (LEAP). It consists of the information incorporated into the LEAP computerized database between 1990 and 1991. The database is a listing of federal, state, local, and private projects promoting public awareness of U.S. archaeology including…
USDA-ARS?s Scientific Manuscript database
For nearly 20 years, the National Food and Nutrient Analysis Program (NFNAP) has expanded and improved the quantity and quality of data in US Department of Agriculture’s (USDA) food composition databases through the collection and analysis of nationally representative food samples. This manuscript d...
Database Application for a Youth Market Livestock Production Education Program
ERIC Educational Resources Information Center
Horney, Marc R.
2013-01-01
This article offers an example of a database designed to support teaching animal production and husbandry skills in county youth livestock programs. The system was used to manage production goals, animal growth and carcass data, photos and other imagery, and participant records. These were used to produce a variety of customized reports to help…
The Internet as a communication tool for orthopedic spine fellowships in the United States.
Silvestre, Jason; Guzman, Javier Z; Skovrlj, Branko; Overley, Samuel C; Cho, Samuel K; Qureshi, Sheeraz A; Hecht, Andrew C
2015-04-01
Orthopedic residents seeking additional training in spine surgery commonly use the Internet to manage their fellowship applications. Although studies have assessed the accessibility and content of Web sites in other medical specialties, none have looked at orthopedic spine fellowship Web sites (SFWs). The purpose of this study was to evaluate the accessibility of information from commonly used databases and assess the content of SFWs. This was a Web site accessibility and content evaluation study. A comprehensive list of available orthopedic spine fellowship programs was compiled by accessing program lists from the SF Match, North American Spine Society, Fellowship and Residency Electronic Interactive Database (FREIDA), and Orthopaedicsone.com (Ortho1). These databases were assessed for accessibility of information including viable links to SFWs and responsive program contacts. A Google search was used to identify SFWs not readily available on these national databases. SFWs were evaluated based on online education and recruitment content. Evaluators found 45 SFWs of 63 active programs (71%). Available SFWs were often not readily accessible from national program lists, and no program afforded a direct link to their SFW from SF Match. Approximately half of all programs responded via e-mail. Although many programs described surgical experience (91%) and research requirements (87%) during the fellowship, less than half mentioned didactic instruction (46%), journal clubs (41%), and national meetings or courses attended (28%). On average, SFWs contained 45% of the fellow recruitment content assessed. Comparison of SFWs by program characteristics revealed three significant differences. Programs with greater than one fellowship position had greater online education content than programs with a single fellow (p=.022). Spine fellowships affiliated with an orthopedic residency program maintained greater education (p=.006) and recruitment (p=.046) content on their SFWs.
Most orthopedic spine surgery programs underuse the Internet for fellow education and recruitment. The inaccessibility of information and paucity of content on SFWs allow for future opportunity to optimize these resources. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Saltsman, James F.
1992-01-01
This manual presents computer programs for characterizing and predicting fatigue and creep-fatigue resistance of metallic materials in the high-temperature, long-life regime for isothermal and nonisothermal fatigue. The programs use the total strain version of Strainrange Partitioning (TS-SRP). An extensive database has also been developed in a parallel effort. This database is probably the largest source of high-temperature, creep-fatigue test data available in the public domain and can be used with other life prediction methods as well. This user's manual, software, and database are all in the public domain and are available through COSMIC (382 East Broad Street, Athens, GA 30602; (404) 542-3265, FAX (404) 542-4807). Two disks accompany this manual. The first disk contains the source code, executable files, and sample output from these programs. The second disk contains the creep-fatigue data in a format compatible with these programs.
C3I system modification and EMC (electromagnetic compatibility) methodology, volume 1
NASA Astrophysics Data System (ADS)
Wilson, J. L.; Jolly, M. B.
1984-01-01
A methodology (i.e., consistent set of procedures) for assessing the electromagnetic compatibility (EMC) of RF subsystem modifications on C3I aircraft was generated during this study (Volume 1). An IEMCAP (Intrasystem Electromagnetic Compatibility Analysis Program) database for the E-3A (AWACS) C3I aircraft RF subsystem was extracted to support the design of the EMC assessment methodology (Volume 2). Mock modifications were performed on the E-3A database to assess, using a preliminary form of the methodology, the resulting EMC impact. Application of the preliminary assessment methodology to modifications in the E-3A database served to fine tune the form of a final assessment methodology. The resulting final assessment methodology is documented in this report in conjunction with the overall study goals, procedures, and database. It is recommended that a similar EMC assessment methodology be developed for the power subsystem within C3I aircraft. It is further recommended that future EMC assessment methodologies be developed around expert systems (i.e., computer intelligent agents) to control both the excruciating detail and user requirement for transparency.
NASA Technical Reports Server (NTRS)
Wrenn, Gregory A.
2005-01-01
This report describes a database routine called DB90 which is intended for use with scientific and engineering computer programs. The software is written in the Fortran 90/95 programming language standard with file input and output routines written in the C programming language. These routines should be completely portable to any computing platform and operating system that has Fortran 90/95 and C compilers. DB90 allows a program to supply relation names and up to 5 integer key values to uniquely identify each record of each relation. This permits the user to select records or retrieve data in any desired order.
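DB90's addressing scheme, in which a relation name plus up to five integer key values uniquely identifies each record, can be mimicked in a few lines. This is a hypothetical sketch, not the DB90 source, which per the report is Fortran 90/95 with C file I/O.

```python
# Sketch of DB90-style record addressing: records are identified by a
# relation name together with 1 to 5 integer keys, so callers can store
# and retrieve data in any desired order.

class KeyedStore:
    MAX_KEYS = 5   # DB90's documented limit on integer keys per record

    def __init__(self):
        self._records = {}

    def put(self, relation, keys, record):
        if not 1 <= len(keys) <= self.MAX_KEYS:
            raise ValueError("between 1 and 5 integer keys required")
        self._records[(relation, tuple(keys))] = record

    def get(self, relation, keys):
        return self._records[(relation, tuple(keys))]

db = KeyedStore()
# Hypothetical relation and keys, e.g. (material id, test id).
db.put("strain_data", (1, 3), {"strainrange": 0.004, "cycles": 120000})
print(db.get("strain_data", (1, 3))["cycles"])   # → 120000
```

Because each record is addressed directly by its keys rather than by insertion order, retrieval order is entirely up to the caller, which is the property the abstract highlights.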
Pesticides in Drinking Water – The Brazilian Monitoring Program
Barbosa, Auria M. C.; Solano, Marize de L. M.; Umbuzeiro, Gisela de A.
2015-01-01
Brazil is the world largest pesticide consumer; therefore, it is important to monitor the levels of these chemicals in the water used by population. The Ministry of Health coordinates the National Drinking Water Quality Surveillance Program (Vigiagua) with the objective to monitor water quality. Water quality data are introduced in the program by state and municipal health secretariats using a database called Sisagua (Information System of Water Quality Monitoring). Brazilian drinking water norm (Ordinance 2914/2011 from Ministry of Health) includes 27 pesticide active ingredients that need to be monitored every 6 months. This number represents <10% of current active ingredients approved for use in the country. In this work, we analyzed data compiled in Sisagua database in a qualitative and quantitative way. From 2007 to 2010, approximately 169,000 pesticide analytical results were prepared and evaluated, although approximately 980,000 would be expected if all municipalities registered their analyses. This shows that only 9–17% of municipalities registered their data in Sisagua. In this dataset, we observed non-compliance with the minimum sampling number required by the norm, lack of information about detection and quantification limits, insufficient standardization in expression of results, and several inconsistencies, leading to low credibility of pesticide data provided by the system. Therefore, it is not possible to evaluate exposure of total Brazilian population to pesticides via drinking water using the current national database system Sisagua. Lessons learned from this study could provide insights into the monitoring and reporting of pesticide residues in drinking water worldwide. PMID:26581345
Airframe Noise Sub-Component Definition and Model
NASA Technical Reports Server (NTRS)
Golub, Robert A. (Technical Monitor); Sen, Rahul; Hardy, Bruce; Yamamoto, Kingo; Guo, Yue-Ping; Miller, Gregory
2004-01-01
Both in-house and jointly with NASA under the Advanced Subsonic Transport (AST) program, Boeing Commercial Aircraft Company (BCA) had begun work on systematically identifying the specific components responsible for total airframe noise generation and applying the knowledge gained toward the creation of a model for airframe noise prediction. This report documents the continued collection of model-scale and full-scale airframe noise measurements to complement the earlier databases, the development of the subcomponent models, and the generation of a new empirical prediction code. The airframe subcomponent data include measurements from aircraft ranging in size from a Boeing 737 to aircraft larger than a Boeing 747. These results provide the continuity to evaluate the technology developed under the AST program consistent with the guidelines set forth in NASA CR-198298.
Ionospheric characteristics for archiving at the World Data Centers. Technical report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gamache, R.R.; Reinisch, B.W.
1990-12-01
A database structure for archiving ionospheric characteristics at uneven data rates was developed at the July 1989 Ionospheric Informatics Working Group (IIWG) Lowell Workshop on Digital Ionogram Data Formats for World Data Center Archiving. This structure is proposed as a new URSI standard and is being employed by World Data Center A for solar-terrestrial physics for archiving characteristics. Here the database has been slightly refined for the application, and programs were written to generate these database files using as input Digisonde 256 ARTIST data, post-processed by the ULCAR ADEP (ARTIST Data Editing Program) system. The characteristics program as well as supplemental programs developed for this task are described here. The new software will make it possible to archive the ionospheric characteristics from the Geophysics Laboratory high-latitude Digisonde network, the AWS DISS and international Digisonde networks, and other ionospheric sounding networks.
Relational databases for rare disease study: application to vascular anomalies.
Perkins, Jonathan A; Coltrera, Marc D
2008-01-01
To design a relational database integrating clinical and basic science data needed for multidisciplinary treatment and research in the field of vascular anomalies. Based on data points agreed on by the American Society of Pediatric Otolaryngology (ASPO) Vascular Anomalies Task Force. The database design enables sharing of data subsets in a Health Insurance Portability and Accountability Act (HIPAA)-compliant manner for multisite collaborative trials. Vascular anomalies pose diagnostic and therapeutic challenges. Our understanding of these lesions and treatment improvement is limited by nonstandard terminology, severity assessment, and measures of treatment efficacy. The rarity of these lesions places a premium on coordinated studies among multiple participant sites. The relational database design is conceptually centered on subjects having 1 or more lesions. Each anomaly can be tracked individually along with their treatment outcomes. This design allows for differentiation between treatment responses and untreated lesions' natural course. The relational database design eliminates data entry redundancy and results in extremely flexible search and data export functionality. Vascular anomaly programs in the United States. A relational database correlating clinical findings and photographic, radiologic, histologic, and treatment data for vascular anomalies was created for stand-alone and multiuser networked systems. Proof of concept for independent site data gathering and HIPAA-compliant sharing of data subsets was demonstrated. The collaborative effort by the ASPO Vascular Anomalies Task Force to create the database helped define a common vascular anomaly data set. The resulting relational database software is a powerful tool to further the study of vascular anomalies and the development of evidence-based treatment innovation.
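The subject-centered design described above, in which each subject has one or more lesions and each lesion carries its own treatments and outcomes, maps naturally onto three related tables. The following schema sketch uses invented table and column names (the ASPO data set itself is not reproduced here) with Python's built-in sqlite3.

```python
# Hypothetical relational schema sketch: subject -> lesion -> treatment.
# Tracking treatments per lesion lets queries separate treatment responses
# from the natural course of untreated lesions, as the abstract describes.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE subject   (id INTEGER PRIMARY KEY, enrolled TEXT);
CREATE TABLE lesion    (id INTEGER PRIMARY KEY,
                        subject_id INTEGER NOT NULL REFERENCES subject(id),
                        site TEXT, diagnosis TEXT);
CREATE TABLE treatment (id INTEGER PRIMARY KEY,
                        lesion_id INTEGER NOT NULL REFERENCES lesion(id),
                        modality TEXT, outcome TEXT);
""")
con.execute("INSERT INTO subject VALUES (1, '2008-01-01')")
con.execute("INSERT INTO lesion VALUES (1, 1, 'cheek', 'hemangioma')")
con.execute("INSERT INTO lesion VALUES (2, 1, 'neck', 'lymphatic malformation')")
con.execute("INSERT INTO treatment VALUES (1, 2, 'sclerotherapy', 'partial response')")

# Lesions with no treatment rows are following their natural course.
untreated = con.execute("""
    SELECT l.site FROM lesion l
    LEFT JOIN treatment t ON t.lesion_id = l.id
    WHERE t.id IS NULL
""").fetchall()
print(untreated)   # → [('cheek',)]
```

Normalizing lesions and treatments into their own tables also eliminates the data entry redundancy the abstract mentions: each subject, lesion, and treatment is stored exactly once.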
Results from a new die-to-database reticle inspection platform
NASA Astrophysics Data System (ADS)
Broadbent, William; Xiong, Yalin; Giusti, Michael; Walsh, Robert; Dayal, Aditya
2007-03-01
A new die-to-database high-resolution reticle defect inspection system has been developed for the 45nm logic node and extendable to the 32nm node (also the comparable memory nodes). These nodes will use predominantly 193nm immersion lithography although EUV may also be used. According to recent surveys, the predominant reticle types for the 45nm node are 6% simple tri-tone and COG. Other advanced reticle types may also be used for these nodes including: dark field alternating, Mask Enhancer, complex tri-tone, high transmission, CPL, EUV, etc. Finally, aggressive model based OPC will typically be used which will include many small structures such as jogs, serifs, and SRAF (sub-resolution assist features) with accompanying very small gaps between adjacent structures. The current generation of inspection systems is inadequate to meet these requirements. The architecture and performance of a new die-to-database inspection system is described. This new system is designed to inspect the aforementioned reticle types in die-to-database and die-to-die modes. Recent results from internal testing of the prototype systems are shown. The results include standard programmed defect test reticles and advanced 45nm and 32nm node reticles from industry sources. The results show high sensitivity and low false detections being achieved.
A database application for wilderness character monitoring
Ashley Adams; Peter Landres; Simon Kingston
2012-01-01
The National Park Service (NPS) Wilderness Stewardship Division, in collaboration with the Aldo Leopold Wilderness Research Institute and the NPS Inventory and Monitoring Program, developed a database application to facilitate tracking and trend reporting in wilderness character. The Wilderness Character Monitoring Database allows consistent, scientifically based...
NASA Astrophysics Data System (ADS)
Ho, Chris M. W.; Marshall, Garland R.
1993-12-01
SPLICE is a program that processes partial query solutions retrieved from 3D structural databases to generate novel, aggregate ligands. It is designed to interface with the database searching program FOUNDATION, which retrieves fragments containing any combination of a user-specified minimum number of matching query elements. SPLICE eliminates aspects of structures that are physically incapable of binding within the active site. Then a systematic, rule-based procedure is performed upon the remaining fragments to ensure receptor complementarity. All modifications are automated and remain transparent to the user. Ligands are then assembled by linking components into composite structures through overlapping bonds. As a control experiment, FOUNDATION and SPLICE were used to reconstruct a known HIV-1 protease inhibitor after it had been fragmented, reoriented, and added to a sham database of fifty different small molecules. To illustrate the capabilities of this program, a 3D search query containing the pharmacophoric elements of an aspartic proteinase-inhibitor crystal complex was searched using FOUNDATION against a subset of the Cambridge Structural Database. One hundred thirty-one compounds were retrieved, each containing any combination of at least four query elements. Compounds were automatically screened and edited for receptor complementarity. Numerous combinations of fragments were discovered that could be linked to form novel structures containing a greater number of pharmacophoric elements than any single retrieved fragment.
Parenting Interventions for Indigenous Child Psychosocial Functioning: A Scoping Review
ERIC Educational Resources Information Center
Macvean, Michelle; Shlonsky, Aron; Mildon, Robyn; Devine, Ben
2017-01-01
Objectives: To scope evaluations of Indigenous parenting programs designed to improve child psychosocial outcomes. Methods: Electronic databases, gray literature, Indigenous websites and journals, and reference lists were searched. The search was restricted to high-income countries with a history of colonialism. Results: Sixteen studies describing…
System for Performing Single Query Searches of Heterogeneous and Dispersed Databases
NASA Technical Reports Server (NTRS)
Maluf, David A. (Inventor); Okimura, Takeshi (Inventor); Gurram, Mohana M. (Inventor); Tran, Vu Hoang (Inventor); Knight, Christopher D. (Inventor); Trinh, Anh Ngoc (Inventor)
2017-01-01
The present invention is a distributed computer system of heterogeneous databases joined in an information grid and configured with an Application Programming Interface hardware which includes a search engine component for performing user-structured queries on multiple heterogeneous databases in real time. This invention reduces overhead associated with the impedance mismatch that commonly occurs in heterogeneous database queries.
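The mediation pattern described — one user query fanned out to sources with differing schemas — can be sketched with two SQLite stores standing in for the heterogeneous databases. All names and queries here are invented for illustration, not the patented system's API:

```python
import sqlite3

# Two "heterogeneous" stores with different schemas; names are invented.
a = sqlite3.connect(":memory:")
a.execute("CREATE TABLE parts (part_no TEXT, descr TEXT)")
a.execute("INSERT INTO parts VALUES ('P-1', 'valve')")

b = sqlite3.connect(":memory:")
b.execute("CREATE TABLE inventory (id TEXT, description TEXT)")
b.execute("INSERT INTO inventory VALUES ('P-1', 'valve assembly')")

# A thin adapter layer maps one user query onto each source's schema,
# hiding the impedance mismatch from the caller.
adapters = [
    (a, "SELECT part_no, descr FROM parts WHERE descr LIKE ?"),
    (b, "SELECT id, description FROM inventory WHERE description LIKE ?"),
]

def search(term):
    pattern = f"%{term}%"
    hits = []
    for conn, sql in adapters:
        hits.extend(conn.execute(sql, (pattern,)).fetchall())
    return hits

print(search("valve"))  # rows from both sources
```

The caller writes one query; each adapter absorbs its source's schema differences, which is the essence of the mismatch reduction claimed.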
NASA Technical Reports Server (NTRS)
Hiltner, Dale W.
2000-01-01
The TAILSIM program uses a 4th-order Runge-Kutta method to integrate the standard aircraft equations of motion (EOM). The EOM determine three translational and three rotational accelerations about the aircraft's body-axis reference system. The forces and moments that drive the EOM are determined from aerodynamic coefficients, dynamic derivatives, and control inputs. Values for these terms are determined by linear interpolation of tables that are a function of parameters such as angle of attack and surface deflections. Buildup equations combine these terms and dimensionalize them to generate the driving total forces and moments. Features that make TAILSIM applicable to studies of tailplane stall include modeling of the reversible control system, modeling of the pilot performing a load-factor and/or airspeed command task, and modeling of vertical gusts. The reversible control system dynamics can be described as two hinged masses connected by a spring, resulting in a fifth-order system. The pilot model is a standard lead-lag form with a time delay, applied to an integrated pitch-rate and/or airspeed error feedback. The time delay is implemented by a Pade approximation, while the commanded pitch rate is determined by a commanded load factor. Vertical gust inputs include a single 1-cosine gust and a continuous NASA Dryden gust model. These dynamic models, coupled with the use of a nonlinear database, allow the tailplane stall characteristics, elevator response, and resulting aircraft response to be modeled. A useful output capability of the TAILSIM program is the ability to display multiple post-run plot pages for a quick assessment of the time-history response. There are 16 plot pages currently available to the user. Each plot page displays 9 parameters, and each parameter can also be displayed individually in a one-plot-per-page format. For a more refined display of the results, the program can also create files of tabulated data, which can then be used by other plotting programs. The TAILSIM program was written with the assumption that the user would want to change the database tables, the buildup equations, the output parameters, and the pilot-model parameters. A separate database file and input file are automatically read in by the program. The use of an include file to set up all common blocks makes it easy to change parameter names and array sizes.
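The 4th-order Runge-Kutta integration at the core of a simulator like TAILSIM can be sketched generically. The state vector and derivative function below are placeholders, not the program's actual equations of motion:

```python
import math

def rk4_step(f, t, y, h):
    """One classical 4th-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# Toy system: y' = y, whose exact solution at t = 1 is e.
y = [1.0]
t, h = 0.0, 0.01
for _ in range(100):
    y = rk4_step(lambda t, y: [y[0]], t, y, h)
    t += h
print(abs(y[0] - math.e) < 1e-8)  # True
```

In a flight simulator the state vector would hold the three translational and three rotational rates, and `f` would evaluate the buildup equations from the interpolated aerodynamic tables.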
Neurosurgery Residency Websites: A Critical Evaluation.
Skovrlj, Branko; Silvestre, Jason; Ibeh, Chinwe; Abbatematteo, Joseph M; Mocco, J
2015-09-01
To evaluate the accessibility of educational and recruitment content of Neurosurgery Residency Websites (NRWs). Program lists from the Fellowship and Residency Electronic Interactive Database (FREIDA), Electronic Residency Application Service (ERAS), and the American Association of Neurological Surgeons (AANS) were accessed for the 2015 Match. These databases were assessed for accessibility of information and responsive program contacts. Presence of online recruitment and education variables was assessed, and correlations between program characteristics and website comprehensiveness were made. All 103 neurosurgery residency programs had an NRW. The AANS database provided the largest number of viable website links, with 65 (63%). No links existed for 5 (5%) programs. A minority of program contacts responded via e-mail (46%). A minority of recruitment (46%) and educational (49%) variables were available on the NRWs. Larger programs, as defined by the number of yearly residency spots and clinical faculty, maintained greater online content than smaller programs. Similar trends were seen with programs affiliated with a ranked medical school and hospital. Multiple prior studies have demonstrated that medical students applying to neurosurgery rely heavily on residency program websites. As such, the paucity of content on NRWs presents an opportunity to optimize online resources for neurosurgery training. Ensuring that individual programs provide relevant content, making the content easier to find, and adhering to established web design principles could increase the usability of NRWs. Copyright © 2015 Elsevier Inc. All rights reserved.
Evaluating Dermatology Residency Program Websites.
Ashack, Kurt A; Burton, Kyle A; Soh, Jonathan M; Lanoue, Julien; Boyd, Anne H; Milford, Emily E; Dunnick, Cory; Dellavalle, Robert P
2016-03-16
Internet resources play an important role in how medical students access information related to residency programs. Evaluating program websites is necessary in order to provide accurate information for applicants and to identify areas of website improvement for programs. To date, dermatology residency websites (DWS) have not been evaluated. This paper evaluates dermatology residency websites based on the availability of predefined measures. Using the FREIDA (Fellowship and Residency Electronic Interactive Database) Online database, the authors searched for all accredited dermatology program websites. Eligible programs were identified through the FREIDA Online database and had a functioning website. Two authors independently extracted data, with differences resolved by consensus or a third researcher. These data were accessed and archived from July 15 to July 17, 2015. Primary outcomes measured were the presence of content on education, resident and faculty information, program environment, applicant recruitment, schedule, and salary, as well as website quality, evaluated using an online tool (WooRank.com). Of 117 accredited dermatology residencies, 115 had functioning webpages. Of these, 76.5% (75) had direct links on the FREIDA Online database. Most programs contained information on education, faculty, program environment, and applicant recruitment. However, website quality and marketing effectiveness were highly variable; most programs were deemed to need improvements to their webpages. In addition, information on current residents and potential away rotations was lacking from most websites, with only 52.2% (60) and 41.7% (48) of programs providing this content, respectively. A majority of dermatology residency websites contained adequate information on many of the factors we evaluated. However, many were lacking in areas that matter to applicants. We hope this report will encourage dermatology residency programs to improve their websites and provide adequate content to attract the top residents for their respective programs.
Kleinman, Steven; Busch, Michael P; Murphy, Edward L; Shan, Hua; Ness, Paul; Glynn, Simone A.
2014-01-01
Background The Recipient Epidemiology and Donor Evaluation Study -III (REDS-III) is a 7-year multicenter transfusion safety research initiative launched in 2011 by the National Heart, Lung, and Blood Institute. Study design The domestic component involves 4 blood centers, 12 hospitals, a data coordinating center, and a central laboratory. The international component consists of distinct programs in Brazil, China, and South Africa which involve US and in-country investigators. Results REDS-III is using two major methods to address key research priorities in blood banking/transfusion medicine. First, there will be numerous analyses of large “core” databases; the international programs have each constructed a donor/donation database while the domestic program has established a detailed research database that links data from blood donors and their donations, the components made from these donations, and data extracts from the electronic medical records of the recipients of these components. Secondly, there are more than 25 focused research protocols involving transfusion recipients, blood donors, or both that are either in progress or scheduled to begin within the next 3 years. Areas of study include transfusion epidemiology and blood utilization; transfusion outcomes; non-infectious transfusion risks; HIV-related safety issues (particularly in the international programs); emerging infectious agents; blood component quality; donor health and safety; and other donor issues. Conclusions It is intended that REDS-III serve as an impetus for more widespread recipient and linked donor-recipient research in the US as well as to help assure a safe and available blood supply in the US and in international locations. PMID:24188564
The National Nonindigenous Aquatic Species Database
Neilson, Matthew E.; Fuller, Pamela L.
2012-01-01
The U.S. Geological Survey (USGS) Nonindigenous Aquatic Species (NAS) Program maintains a database that monitors, records, and analyzes sightings of nonindigenous aquatic plant and animal species throughout the United States. The program is based at the USGS Wetland and Aquatic Research Center in Gainesville, Florida.The initiative to maintain scientific information on nationwide occurrences of nonindigenous aquatic species began with the Aquatic Nuisance Species Task Force, created by Congress in 1990 to provide timely information to natural resource managers. Since then, the NAS database has been a clearinghouse of information for confirmed sightings of nonindigenous, also known as nonnative, aquatic species throughout the Nation. The database is used to produce email alerts, maps, summary graphs, publications, and other information products to support natural resource managers.
Cenozoic Antarctic DiatomWare/BugCam: An aid for research and teaching
Wise, S.W.; Olney, M.; Covington, J.M.; Egerton, V.M.; Jiang, S.; Ramdeen, D.K.; ,; Schrader, H.; Sims, P.A.; Wood, A.S.; Davis, A.; Davenport, D.R.; Doepler, N.; Falcon, W.; Lopez, C.; Pressley, T.; Swedberg, O.L.; Harwood, D.M.
2007-01-01
Cenozoic Antarctic DiatomWare/BugCam© is an interactive, icon-driven digital-image database/software package that displays over 500 illustrated Cenozoic Antarctic diatom taxa along with original descriptions (including over 100 generic and 20 family-group descriptions). This digital catalog is designed primarily for use by micropaleontologists working in the field (at sea or on the Antarctic continent) where hard-copy literature resources are limited. The new package will also be useful for classroom/lab teaching as well as for any paleontologists making or refining taxonomic identifications at the microscope. The database (Cenozoic Antarctic DiatomWare) is displayed via a custom software program (BugCam) written in Visual Basic for use on PCs running Windows 95 or later operating systems. BugCam is a flexible image display program that utilizes an intuitive thumbnail “tree” structure for navigation through the database. The data are stored in Microsoft Excel spreadsheets, hence no separate relational database program is necessary to run the package.
Geotherm: the U.S. geological survey geothermal information system
Bliss, J.D.; Rapport, A.
1983-01-01
GEOTHERM is a comprehensive system of public databases and software used to store, locate, and evaluate information on the geology, geochemistry, and hydrology of geothermal systems. Three main databases address the general characteristics of geothermal wells and fields, and the chemical properties of geothermal fluids; the last database is currently the most active. System tasks are divided into four areas: (1) data acquisition and entry, involving data entry via word processors and magnetic tape; (2) quality assurance, including the criteria and standards handbook and front-end data-screening programs; (3) operation, involving database backups and information extraction; and (4) user assistance, preparation of such items as application programs, and a quarterly newsletter. The principal task of GEOTHERM is to provide information and research support for the conduct of national geothermal-resource assessments. The principal users of GEOTHERM are those involved with the Geothermal Research Program of the U.S. Geological Survey. Information in the system is available to the public on request. ?? 1983.
Adams, Bruce D; Whitlock, Warren L
2004-04-01
In 1997, the American Heart Association, in association with representatives of the International Committee on Resuscitation (ILCOR), published recommended guidelines for reviewing, reporting, and conducting in-hospital cardiopulmonary resuscitation (CPR) outcomes using the "Utstein style". Using these guidelines, we developed two Microsoft Office based database management programs that may be useful to the resuscitation community. We developed a user-friendly spreadsheet based on MS Office Excel. The user enters patient variables such as name, age, and diagnosis. Then, event resuscitation variables such as time of collapse and CPR team arrival are entered from a "code flow sheet". Finally, outcome variables such as patient condition at different time points are recorded. The program then makes automatic calculations of average response times, survival rates, and other important outcome measurements. Also using the Utstein style, we developed a database program based on MS Office Access. To promote free public access to these programs, we established a website. These programs will help hospitals track, analyze, and present their CPR outcomes data. Clinical CPR researchers might also find the programs useful because they are easily modified and have statistical functions.
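The automatic calculations such a spreadsheet performs can be sketched in a few lines. The field names and the sample values below are illustrative, not taken from the authors' template:

```python
from datetime import datetime

# Illustrative Utstein-style event records: time of collapse, CPR team
# arrival, and survival to discharge.
fmt = "%H:%M"
events = [
    {"collapse": "10:00", "team_arrival": "10:03", "survived": True},
    {"collapse": "14:30", "team_arrival": "14:36", "survived": False},
]

def minutes(start, end):
    """Elapsed minutes between two same-day clock times."""
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).seconds / 60

response_times = [minutes(e["collapse"], e["team_arrival"]) for e in events]
avg_response = sum(response_times) / len(response_times)
survival_rate = sum(e["survived"] for e in events) / len(events)
print(avg_response, survival_rate)  # 4.5 0.5
```

A spreadsheet expresses the same arithmetic as cell formulas; the value of the Utstein style is that every site records the same time points, so these summary measures are comparable across hospitals.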
Nadkarni, P M; Miller, P L
1991-01-01
A parallel program for inter-database sequence comparison was developed on the Intel Hypercube using two models of parallel programming. One version was built using machine-specific Hypercube parallel programming commands. The other version was built using Linda, a machine-independent parallel programming language. The two versions of the program provide a case study comparing these two approaches to parallelization in an important biological application area. Benchmark tests with both programs gave comparable results with a small number of processors. As the number of processors was increased, the Linda version was somewhat less efficient. The Linda version was also run without change on Network Linda, a virtual parallel machine running on a network of desktop workstations.
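The data-parallel pattern described — distributing independent pairwise sequence comparisons across processors — can be sketched with Python's multiprocessing in place of Hypercube or Linda primitives. The scoring function is a deliberately trivial stand-in for a real sequence-comparison algorithm:

```python
from itertools import product
from multiprocessing import Pool

def score(pair):
    """Trivial stand-in for a real comparison score: the number of
    positions at which the two sequences agree."""
    a, b = pair
    return (a, b, sum(x == y for x, y in zip(a, b)))

if __name__ == "__main__":
    db1 = ["ACGT", "AAAA"]
    db2 = ["ACGA", "TTTT"]
    # Every cross-database pair is an independent task, so the whole
    # comparison farms out naturally to a pool of workers.
    with Pool(2) as pool:
        results = pool.map(score, product(db1, db2))
    print(results)
```

The benchmark result in the abstract — comparable speed at small processor counts, with the portable version losing some efficiency as workers increase — is typical of this pattern: coordination overhead grows while per-task work stays fixed.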
ERIC Educational Resources Information Center
Butler, E. Sonny
Much of what librarians do today requires adeptness in creating and manipulating databases. Many new computers bought by libraries every year come packaged with Microsoft Office and include Microsoft Access. This database program features a seamless interface between Microsoft Office's other programs like Word, Excel, and PowerPoint. This book…
Keeping Track of Our Treasures: Managing Historical Data with Relational Database Software.
ERIC Educational Resources Information Center
Gutmann, Myron P.; And Others
1989-01-01
Describes the way a relational database management system manages a large historical data collection project. Shows that such databases are practical to construct. States that the programing tasks involved are not for beginners, but the rewards of having data organized are worthwhile. (GG)
Database Software for the 1990s.
ERIC Educational Resources Information Center
Beiser, Karl
1990-01-01
Examines trends in the design of database management systems for microcomputers and predicts developments that may occur in the next decade. Possible developments are discussed in the areas of user interfaces, database programing, library systems, the use of MARC data, CD-ROM applications, artificial intelligence features, HyperCard, and…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frazier, Christopher Rawls; Durfee, Justin David; Bandlow, Alisa
The Contingency Contractor Optimization Tool – Prototype (CCOT-P) database is used to store input and output data for the linear program model described in [1]. The database supports queries to retrieve these data as well as updates and inserts of new input data.
Urban Neighborhood Information Systems: Crime Prevention and Control Applications.
ERIC Educational Resources Information Center
Pattavina, April; Pierce, Glenn; Saiz, Alan
2002-01-01
Chronicles the need for and development of an interdisciplinary, integrated neighborhood-level database for Boston, Massachusetts, discussing database content and potential applications of this database to a range of criminal justice problems and initiatives (e.g., neighborhood crime patterns, needs assessment, and program planning and…
Usability evaluation of user interface of thesis title review system
NASA Astrophysics Data System (ADS)
Tri, Y.; Erna, A.; Gellysa, U.
2018-03-01
Programs with a user interface that can be accessed online through a website clearly benefit users, who can easily access the programs they need. Usability values serve as benchmarks for the success of a user-accessible program: efficiency, effectiveness, and satisfaction. These usability values also guide further development of the program. Therefore, a usability evaluation was performed on the thesis title review program to be implemented at STT Dumai. It aims to identify which aspects are not yet adequate and need to be improved to increase the performance and utilization of the program. The usability evaluation was measured using the SmartPLS software. The database used consisted of respondent questionnaires, including questions about their experience using the program. The thesis title review program implemented at STT Dumai has an efficiency value of 22.615, an effectiveness value of 20.612, and a satisfaction value of 33.177.
AphasiaBank: a resource for clinicians.
Forbes, Margaret M; Fromm, Davida; Macwhinney, Brian
2012-08-01
AphasiaBank is a shared, multimedia database containing videos and transcriptions of ~180 aphasic individuals and 140 nonaphasic controls performing a uniform set of discourse tasks. The language in the videos is transcribed in Codes for the Human Analysis of Transcripts (CHAT) format and coded for analysis with Computerized Language ANalysis (CLAN) programs, which can perform a wide variety of language analyses. The database and the CLAN programs are freely available to aphasia researchers and clinicians for educational, clinical, and scholarly uses. This article describes the database, suggests some ways in which clinicians and clinician researchers might find these materials useful, and introduces a new language analysis program, EVAL, designed to streamline the transcription and coding processes, while still producing an extensive and useful language profile. Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.
Online access to international aerospace science and technology
NASA Technical Reports Server (NTRS)
Lahr, Thomas F.; Harrison, Laurie K.
1993-01-01
The NASA Aerospace Database contains over 625,000 foreign R&D documents from 1962 to the present from over 60 countries worldwide. In 1991 over 26,000 new non-U.S. entries were added from a variety of innovative exchange programs. An active international acquisitions effort by the NASA STI Program seeks to increase the percentage of foreign data in the coming years, focusing on Japan, the Commonwealth of Independent States, Western Europe, Australia, and Canada. It also has plans to target China, India, Brazil, and Eastern Europe in the future. The authors detail the resources the NASA Aerospace Database offers in the international arena, the methods used to gather this information, and the STI Program's initiatives for maintaining and expanding the percentage of international information in this database.
Bindawas, Saad M; Vennu, Vishal; Moftah, Emad
2017-01-01
To examine the effects of inpatient rehabilitation programs on function and length of stay in older adults with stroke. METHODS: A total of five electronic databases were searched for relevant randomized controlled trials that examined the effects of inpatient rehabilitation programs on functional recovery, as measured by the functional independence measure, and length of stay, measured in days. We included full-text articles written in English, with no time limit. Methodological quality and risk of bias were assessed using the Physiotherapy Evidence Database scale and the Cochrane collaboration tools, respectively. Effect sizes and confidence intervals were estimated using fixed-effect models. RESULTS: Eight randomized controlled trials involving 1,910 patients with stroke were included in the meta-analysis, which showed that patients who participated in the inpatient rehabilitation programs had significantly (p < 0.05) higher functional independence measure scores (effect size = 0.10; 95 percent confidence interval = 0.01, 0.22) and shorter lengths of stay (effect size = 0.14; 95 percent confidence interval = 0.03, 0.22). This systematic review provided evidence that inpatient rehabilitation programs have beneficial effects, improving function and reducing length of stay for older adults with stroke.
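The fixed-effect pooling such a meta-analysis reports can be sketched as inverse-variance weighting. The per-study effect sizes and standard errors below are invented for illustration, not the review's actual data:

```python
import math

# Hypothetical (effect size, standard error) pairs for three studies.
studies = [(0.12, 0.05), (0.08, 0.04), (0.15, 0.10)]

# Fixed-effect model: weight each study by the inverse of its variance.
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
se_pooled = math.sqrt(1 / sum(weights))
ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
print(round(pooled, 3), tuple(round(x, 3) for x in ci))
```

Precise studies (small standard errors) dominate the pooled estimate, which is why a fixed-effect summary can be significant even when each trial's individual effect is modest.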
Care plan program reduces the number of visits for challenging psychiatric patients in the ED.
Abello, Arthur; Brieger, Ben; Dear, Kim; King, Ben; Ziebell, Chris; Ahmed, Atheer; Milling, Truman J
2012-09-01
A small number of patients representing a significant demand on emergency department (ED) services present regularly for a variety of reasons, including psychiatric or behavioral complaints and lack of access to other services. A care plan program was created as a database of ED high users and patients of concern, as identified by ED staff and approved by program administrators to improve care and mitigate ED strain. A list of medical record numbers was assembled by searching the care plan program database for adult patients initially enrolled between the dates of November 1, 2006, and October 21, 2007. Inclusion criteria were the occurrence of a psychiatric International Classification Diseases, Ninth Revision, code in their medical record and a care plan level implying a serious psychiatric disorder causing harmful behavior. Additional data about these patients were acquired using an indigent care tracking database and electronic medical records. Variables collected from these sources were analyzed for changes before and after program enrollment. Of 501 patients in the database in the period studied, 48 patients fulfilled the criteria for the cohort. There was a significant reduction in the number of visits to the ED from the year before program enrollment to the year after enrollment (8.9, before; 5.9, after; P < .05). There was also an increase in psychiatric hospital visits (2%, before; 25%, after; P < .05). An alert program that identifies challenging ED patients with psychiatric conditions and creates a care plan appears to reduce visits and lead to more appropriate use of other resources. Copyright © 2012 Elsevier Inc. All rights reserved.
Contagion and Repeat Offending among Urban Juvenile Delinquents
ERIC Educational Resources Information Center
Mennis, Jeremy; Harris, Philip
2011-01-01
This research investigates the role of repeat offending and spatial contagion in juvenile delinquency recidivism using a database of 7166 male juvenile offenders sent to community-based programs by the Family Court of Philadelphia. Results indicate evidence of repeat offending among juvenile delinquents, particularly for drug offenders. The…
[Food and nutrition security policy in Brazil: an analysis of resource allocation].
Custódio, Marta Battaglia; Yuba, Tânia Yuka; Cyrillo, Denise Cavallini
2013-02-01
To describe the progression and distribution of federal funds for programs and activities that fall within the scope of the guidelines of the Brazilian National Policy on Food and Nutrition Security (PNSAN) in the period from 2004 to 2010. This descriptive study used data from the Transparency Website maintained by the Brazilian Public Sector Internal Control Office. Search results were exported to Excel spreadsheets. To determine the resources allocated to food security initiatives, a database was set up containing all actions developed by the federal government between 2004 and 2010. This database was reviewed and the actions that were not related to PNSAN were discarded. The annual amounts obtained were corrected by the Consumer Price Index and updated for the year 2010. Since actions are part of specific programs, the sum of the resources allocated for all the actions of a program amounted to the resources invested in the program as a whole. The programs were then prioritized according to the amount of resources received in 2010. Of the 5 014 actions receiving federal funds in the study period, 814 were related to PNSAN (229 programs). There was growth in resources allocated for PNSAN programs, reaching US$ 15 billion in 2010 (an 82% increase over the previous year). The largest amount was invested in Bolsa Família, a cash transfer program. Ten programs received 90% of the funds, of which five were linked to food production processes. The amount of resources invested in the PNSAN and in actions and programs that promote food and nutrition security is increasing in Brazil.
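The two computations the study describes — correcting annual amounts by a price index to a 2010 baseline, then summing action-level allocations up to the program level — can be sketched as follows. The figures and index values are invented:

```python
# Hypothetical nominal allocations (millions) and price-index values,
# with 2010 as the reference year.
nominal = {2008: 100.0, 2009: 110.0, 2010: 130.0}
cpi = {2008: 90.0, 2009: 95.0, 2010: 100.0}

# Express every year's amount in constant 2010 currency.
real = {yr: amt * cpi[2010] / cpi[yr] for yr, amt in nominal.items()}

# Year-over-year real growth, analogous to the 82% increase reported.
growth_2010 = (real[2010] - real[2009]) / real[2009]
print({yr: round(v, 1) for yr, v in real.items()}, round(growth_2010, 3))
```

Deflating before comparing years matters: part of any nominal increase is inflation, and only the index-corrected series shows real growth in food-security spending.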
Software for Building Models of 3D Objects via the Internet
NASA Technical Reports Server (NTRS)
Schramer, Tim; Jensen, Jeff
2003-01-01
The Virtual EDF Builder (where EDF signifies Electronic Development Fixture) is a computer program that facilitates the use of the Internet for building and displaying digital models of three-dimensional (3D) objects that ordinarily comprise assemblies of solid models created previously by use of computer-aided-design (CAD) programs. The Virtual EDF Builder resides on a Unix-based server computer. It is used in conjunction with a commercially available Web-based plug-in viewer program that runs on a client computer. The Virtual EDF Builder acts as a translator between the viewer program and a database stored on the server. The translation function includes the provision of uniform resource locator (URL) links to other Web-based computer systems and databases. The Virtual EDF builder can be used in two ways: (1) If the client computer is Unix-based, then it can assemble a model locally; the computational load is transferred from the server to the client computer. (2) Alternatively, the server can be made to build the model, in which case the server bears the computational load and the results are downloaded to the client computer or workstation upon completion.
CERT tribal internship program. Final intern report: Maria Perez, 1994
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1998-09-01
Historically, American Indian Tribes have lacked sufficient numbers of trained, technical personnel from their communities to serve their communities; tribal expertise in the fields of science, business, and engineering is extremely rare, and programs to encourage these disciplines are almost non-existent. Consequently, Tribes have made crucial decisions about their land and other facets of Tribal existence based upon outside technical expertise, such as that provided by the United States government and/or private industries. These outside expert opinions rarely took into account the traditional and cultural values of the Tribes being advised. The purpose of this internship was twofold: create and maintain a working relationship between CERT and Colorado State University (CSU) to plan for the Summit on Tribal human resource development; and evaluate and engage in current efforts to strengthen the Tribal Resource Institute in Business, Engineering and Science (TRIBES) program. The intern lists the following as the project results: positive interactions and productive meetings between CERT and CSU; gathered information from Tribes; CERT database structure modification; experience as a facilitator in participatory methods; preliminary job descriptions for staff of future TRIBES programs; and additions to the intern's personal database of professional contacts and resources.
Designing a Zoo-Based Endangered Species Database.
ERIC Educational Resources Information Center
Anderson, Christopher L.
1989-01-01
Presented is a class activity that uses the database feature of the Appleworks program to create a database from which students may study endangered species. The use of a local zoo as a base of information about the animals is suggested. Procedures and follow-up activities are included. (CW)
Using Statistics for Database Management in an Academic Library.
ERIC Educational Resources Information Center
Hyland, Peter; Wright, Lynne
1996-01-01
Collecting statistical data about database usage by library patrons aids in the management of CD-ROM and database offerings, collection development, and evaluation of training programs. Two approaches to data collection are presented which should be used together: an automated or nonintrusive method which monitors search sessions while the…
77 FR 38277 - Wind and Water Power Program
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-27
..., modeling, and database efforts. This meeting will be a technical discussion to provide those involved in... ecological survey, modeling, and database efforts in the waters off the Mid-Atlantic. The workshop aims to... models and compatible Federal and regional databases. It is not the object of this session to obtain any...
Challenges in Database Design with Microsoft Access
ERIC Educational Resources Information Center
Letkowski, Jerzy
2014-01-01
Design, development and explorations of databases are popular topics covered in introductory courses taught at business schools. Microsoft Access is the most popular software used in those courses. Despite quite high complexity of Access, it is considered to be one of the most friendly database programs for beginners. A typical Access textbook…
Publications of Australian LIS Academics in Databases
ERIC Educational Resources Information Center
Wilson, Concepcion S.; Boell, Sebastian K.; Kennan, Mary Anne; Willard, Patricia
2011-01-01
This paper examines aspects of journal articles published from 1967 to 2008, located in eight databases, and authored or co-authored by academics serving for at least two years in Australian LIS programs from 1959 to 2008. These aspects are: inclusion of publications in databases, publications in journals, authorship characteristics of…
School-based Yoga Programs in the United States: A Survey
Butzer, Bethany; Ebert, Marina; Telles, Shirley; Khalsa, Sat Bir S.
2016-01-01
Context: Substantial interest has begun to emerge around the implementation of yoga interventions in schools. Researchers have found that yoga practices may enhance skills such as self-regulation and prosocial behavior, and lead to improvements in students’ performance. These researchers, therefore, have proposed that contemplative practices have the potential to play a crucial role in enhancing the quality of US public education. Objective: The purpose of the present study was to provide a summary and comparison of school-based yoga programs in the United States. Design: Online, listserv, and database searches were conducted to identify programs, and information was collected regarding each program’s scope of work, curriculum characteristics, teacher-certification and training requirements, implementation models, modes of operation, and geographical regions. Setting: The online, listserv, and database searches took place in Boston, MA, USA, and New Haven, CT, USA. Results: Thirty-six programs were identified that offer yoga in more than 940 schools across the United States, and more than 5400 instructors have been trained by these programs to offer yoga in educational settings. Despite some variability in the exact mode of implementation, training requirements, locations served, and grades covered, the majority of the programs share a common goal of teaching 4 basic elements of yoga: (1) physical postures, (2) breathing exercises, (3) relaxation techniques, and (4) mindfulness and meditation practices. The programs also teach a variety of additional educational, social-emotional, and didactic techniques to enhance students’ mental and physical health and behavior. Conclusions: The fact that the present study was able to find a relatively large number of formal, school-based yoga programs currently being implemented in the United States suggests that the programs may be acceptable and feasible to implement.
The results also suggest that the popularity of school-based yoga programs may continue to grow. PMID:26535474
Software for Managing Inventory of Flight Hardware
NASA Technical Reports Server (NTRS)
Salisbury, John; Savage, Scott; Thomas, Shirman
2003-01-01
The Flight Hardware Support Request System (FHSRS) is a computer program that relieves engineers at Marshall Space Flight Center (MSFC) of most of the non-engineering administrative burden of managing an inventory of flight hardware. The FHSRS can also be adapted to perform similar functions for other organizations. The FHSRS affords a combination of capabilities, including those formerly provided by three separate programs for purchasing, inventorying, and inspecting hardware. The FHSRS provides a Web-based interface with a server computer that supports a relational database of inventory; electronic routing of requests and approvals; and electronic documentation from initial request through implementation of quality criteria, acquisition, receipt, inspection, storage, and final issue of flight materials and components. The database lists both hardware acquired for current projects and residual hardware from previous projects. The increased visibility of residual flight components provided by the FHSRS has dramatically improved the re-utilization of materials in lieu of new procurements, resulting in a cost savings of over $1.7 million. The FHSRS includes subprograms for manipulating the data in the database, reporting the status of a request or an item of hardware, and searching the database on any physical or other technical characteristic of a component or material. The software structure forces normalization of the data to facilitate inquiries and searches for which users have entered mixed or inconsistent values.
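The last point, forcing normalization of user-entered values so that mixed or inconsistent entries remain searchable, can be illustrated with a small sketch. The function name and unit-folding rules below are hypothetical; the abstract does not publish the FHSRS internals.

```python
import re

def normalize_part_value(raw: str) -> str:
    """Normalize a user-entered hardware attribute before storage
    (hypothetical helper; the FHSRS internals are not published here).

    Folding case and whitespace means searches on 'Stainless Steel',
    'stainless  steel', and 'STAINLESS STEEL' all hit the same record.
    """
    value = re.sub(r"\s+", " ", raw.strip().lower())  # fold case/whitespace
    # Fold a few unit spellings to one canonical form (illustrative only).
    for variant in ("inches", "inch"):
        value = value.replace(variant, "in")
    return value

print(normalize_part_value("  Stainless   Steel "))  # stainless steel
```

Applying the same normalization at both insert time and query time is what makes searches on inconsistent values converge.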
Pongor, Lőrinc S; Vera, Roberto; Ligeti, Balázs
2014-01-01
Next generation sequencing (NGS) of metagenomic samples is becoming a standard approach to detect individual species or pathogenic strains of microorganisms. Computer programs used in the NGS community have to balance speed against sensitivity, and as a result, species- or strain-level identification is often inaccurate and low-abundance pathogens can sometimes be missed. We have developed Taxoner, an open-source taxon assignment pipeline that includes a fast aligner (e.g., Bowtie2) and a comprehensive DNA sequence database. We tested the program on simulated datasets as well as experimental data from Illumina, IonTorrent, and Roche 454 sequencing platforms. We found that Taxoner performs as well as, and often better than, BLAST, but requires two orders of magnitude less running time, meaning that it can be run on desktop or laptop computers. Taxoner is slower than approaches that use small marker databases but is more sensitive due to its comprehensive reference database. In addition, it can be easily tuned to specific applications using small tailored databases. When applied to metagenomic datasets, Taxoner can provide a functional summary of the genes mapped and can provide strain-level identification. Taxoner is written in C for Linux operating systems. The code and documentation are available for research applications at http://code.google.com/p/taxoner.
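The core step of a taxon-assignment pipeline of this kind, reducing per-read aligner hits to per-taxon read counts, can be sketched as follows. This illustrates the general approach, not Taxoner's actual C implementation; the tuple format is assumed for illustration.

```python
from collections import Counter

def summarize_taxa(read_hits):
    """Reduce per-read aligner hits to per-taxon read counts.

    read_hits: iterable of (read_id, taxon, alignment_score) tuples,
    e.g. parsed from Bowtie2 SAM output (format assumed here).
    Each read is credited to the taxon of its best-scoring alignment.
    """
    best = {}  # read_id -> (score, taxon)
    for read_id, taxon, score in read_hits:
        if read_id not in best or score > best[read_id][0]:
            best[read_id] = (score, taxon)
    return Counter(taxon for _, taxon in best.values())

hits = [("r1", "E. coli", 40), ("r1", "Shigella", 38),
        ("r2", "E. coli", 42), ("r3", "Salmonella", 35)]
print(summarize_taxa(hits))  # E. coli credited with 2 reads
```

Real pipelines additionally resolve ties (e.g., by lowest common ancestor) and filter low-confidence alignments before counting.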
USGS Mineral Resources Program; national maps and datasets for research and land planning
Nicholson, S.W.; Stoeser, D.B.; Ludington, S.D.; Wilson, Frederic H.
2001-01-01
The U.S. Geological Survey, the Nation’s leader in producing and maintaining earth science data, serves as an advisor to Congress, the Department of the Interior, and many other Federal and State agencies. Nationwide datasets that are easily available and of high quality are critical for addressing a wide range of land-planning, resource, and environmental issues. Four types of digital databases (geological, geophysical, geochemical, and mineral occurrence) are being compiled and upgraded by the Mineral Resources Program on regional and national scales to meet these needs. Where existing data are incomplete, new data are being collected to ensure national coverage. Maps and analyses produced from these databases provide basic information essential for mineral resource assessments and environmental studies, as well as fundamental information for regional and national land-use studies. Maps and analyses produced from the databases are instrumental to ongoing basic research, such as the identification of mineral deposit origins, determination of regional background values of chemical elements with known environmental impact, and study of the relationships of toxic elements and mining practices to human health. As datasets are completed or revised, the information is made available through a variety of media, including the Internet. Much of the available information is the result of cooperative activities with State and other Federal agencies. The upgraded Mineral Resources Program datasets make geologic, geophysical, geochemical, and mineral occurrence information at the state, regional, and national scales available to members of Congress, State and Federal government agencies, researchers in academia, and the general public. The status of the Mineral Resources Program datasets is outlined below.
Vakil, Rachit M.; Chaudhry, Zoobia W.; Doshi, Ruchi S.; Clark, Jeanne M.; Gudzune, Kimberly A.
2017-01-01
Objective: To characterize weight-loss claims and disclaimers present on websites for commercial weight-loss programs and compare them to results from published randomized controlled trials (RCTs). Methods: We performed a content analysis of all homepages and testimonials available on the websites of 24 randomly selected programs. Two team members independently reviewed each page and abstracted information from text and images to capture relevant content, including demographics, weight loss, and disclaimers. We performed a systematic review to evaluate the efficacy of these programs by searching MEDLINE and the Cochrane Database of Systematic Reviews, and abstracted mean weight change from each included RCT. Results: Overall, the amount of weight loss portrayed in the testimonials was extreme across all programs examined (median weight loss ranged from 10.7 to 49.5 kg). Only 10 of the 24 programs had eligible RCTs. Median weight losses reported in testimonials exceeded those achieved by trial participants. Most programs with RCTs (78%) provided disclaimers stating that the testimonial's results were not typical and/or giving a range of typical weight loss. Conclusion: Weight-loss claims within testimonials were higher than results from RCTs. Future studies should examine whether commercial programs' advertising practices influence patients' expectations or satisfaction with modest weight-loss results. PMID:28865085
This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
Nuclear data made easily accessible through the Notre Dame Nuclear Database
NASA Astrophysics Data System (ADS)
Khouw, Timothy; Lee, Kevin; Fasano, Patrick; Mumpower, Matthew; Aprahamian, Ani
2014-09-01
In 1994, the NNDC revolutionized nuclear research by providing a colorful, clickable, searchable database over the internet. Over the last twenty years, web technology has evolved dramatically. Our project, the Notre Dame Nuclear Database, aims to provide a more comprehensive and broadly searchable interactive body of data. The database can be searched by an array of filters, including metadata such as the facility where a measurement was made, the author(s), or the date of publication for the datum of interest. The user interface is built on standard web technologies: HTML for markup, CSS (Cascading Style Sheets) for the site's appearance, and JavaScript for client-side data processing. A command-line interface that interacts with the database directly from a user's local machine is also supported, providing single-command access to data. This is made possible by a standardized API (application programming interface) that relies upon well-defined filtering variables to produce customized search results. We offer an innovative chart of nuclides utilizing scalable vector graphics (SVG) to deliver an unsurpassed level of interactivity supported on all computers and mobile devices. We will present a functional demo of our database at the conference.
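A filter-based API of the kind described might be exercised along these lines. The endpoint and parameter names are hypothetical, since the abstract only states that the API exposes well-defined filtering variables.

```python
from urllib.parse import urlencode

def build_query_url(base_url: str, **filters) -> str:
    """Compose a filtered request against the database's REST-style API.

    The endpoint path and filter names here are assumptions for
    illustration; sorting keys makes the URL deterministic.
    """
    return base_url + "?" + urlencode(sorted(filters.items()))

url = build_query_url("https://example.edu/api/measurements",
                      nuclide="26Al", facility="NSL", author="Aprahamian")
print(url)
```

The same query string could be issued by the command-line client or by the JavaScript front end, which is the point of funneling all access through one API.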
NASA Technical Reports Server (NTRS)
Wallace-Robinson, Janice; Dickson, Katherine J.; Hess, Elizabeth; Powers, Janet V.
1992-01-01
A 10-year cumulative bibliography of publications resulting from research supported by the Regulatory Physiology discipline of the Space Physiology and Countermeasures Program of NASA's Life Sciences Division is provided. Primary subjects included in this bibliography are circadian rhythms, endocrinology, fluid and electrolyte regulation, hematology, immunology, metabolism and nutrition, temperature regulation, and general regulatory physiology. General physiology references are also included. Principal investigators whose research tasks resulted in publication are identified by asterisk. Publications are identified by a record number corresponding with their entry in the Life Sciences Bibliographic Database, maintained at the George Washington University.
NASA Technical Reports Server (NTRS)
Benson, Robert F.; Truhlik, Vladimir; Huang, Xueqin; Wang, Yongli; Bilitza, Dieter
2012-01-01
The topside sounders of the International Satellites for Ionospheric Studies (ISIS) program were designed as analog systems. The resulting ionograms were displayed on 35 mm film for analysis by visual inspection. Each of these satellites, launched between 1962 and 1971, produced data for 10 to 20 years. A number of the original telemetry tapes from this large data set have been converted directly into digital records. Software known as the Topside Ionogram Scaler With True-Height (TOPIST) algorithm has been produced and used for the automatic inversion of the ionogram reflection traces on more than 100,000 ISIS-2 digital topside ionograms into topside vertical electron density profiles, Ne(h). Here we present some topside ionospheric solar cycle variations deduced from the TOPIST database to illustrate the scientific benefit of improving and expanding the topside ionospheric Ne(h) database. The profile improvements will be based on improvements in the TOPIST software motivated by direct comparisons between TOPIST profiles and profiles produced by manual scaling in the early days of the ISIS program. The database expansion will be based on new software designed to overcome limitations in the original digital topside ionogram database caused by difficulties, encountered during the analog-to-digital conversion process, in detecting the ionogram frame sync pulse and/or the frequency markers. This improved and expanded TOPIST topside Ne(h) database will greatly enhance investigations into both short- and long-term ionospheric changes, e.g., the observed topside ionospheric responses to magnetic storms induced by interplanetary magnetic clouds, and solar cycle variations, respectively.
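The inversion of ionogram reflection traces into Ne(h) profiles rests on the standard relation between the O-mode plasma frequency and the local electron density, f_p [Hz] ≈ 8980·sqrt(Ne [cm⁻³]). A minimal conversion looks like this; it is the textbook relation, not TOPIST's actual code, and the full inversion must also account for the magnetic field and integrate along the ray path to recover true height.

```python
def electron_density_cm3(fp_hz: float) -> float:
    """Electron density from the O-mode plasma frequency via the
    textbook relation f_p [Hz] ~= 8980 * sqrt(Ne [cm^-3])."""
    return (fp_hz / 8980.0) ** 2

# An 8.98 MHz O-mode reflection corresponds to ~1e6 electrons per cm^3.
print(electron_density_cm3(8.98e6))
```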
NASA Technical Reports Server (NTRS)
1997-01-01
The Aviation Safety Program initiated by NASA in 1997 has put greater emphasis on safety-related research activities. Ice-contaminated-tailplane stall (ICTS) has been identified by the NASA Lewis Icing Technology Branch as an important activity for aircraft-safety-related research. The ICTS phenomenon is characterized as a sudden, often uncontrollable aircraft nose-down pitching moment, which occurs due to increased angle of attack of the horizontal tailplane resulting in tailplane stall. Typically, this phenomenon occurs when lowering the flaps during final approach while operating in, or recently departing from, icing conditions. Ice formation on the tailplane leading edge can reduce the tailplane angle-of-attack range and cause flow separation, resulting in a significant reduction or complete loss of aircraft pitch control. In 1993, the Federal Aviation Administration (FAA) and NASA embarked upon a four-year research program to address the problem of tailplane stall and to quantify the effect of tailplane ice accretion on aircraft performance and handling characteristics. The goals of this program, which was completed in March 1998, were to collect aerodynamic data for an aircraft tail with and without ice contamination and to develop analytical methods for predicting the effects of tailplane ice contamination. Extensive dry-air and icing-tunnel tests resulted in a database of the aerodynamic effects associated with tailplane ice contamination. Although the FAA/NASA tailplane icing program answered some questions regarding ICTS phenomena, NASA researchers have found many open questions that warrant further investigation. In addition, several aircraft manufacturers have expressed interest in a second research program to expand the database to other tail configurations and to develop experimental and computational methodologies for evaluating the ICTS phenomenon.
In 1998, the icing branch at NASA Lewis initiated a second multi-phase research program for tailplane icing (TIP II) to develop test methodologies and tailplane performance and handling-qualities evaluation tools. The main objectives of this new NASA/industry/academia collaborative research program were: (1) to define and evaluate a sub-scale wind tunnel test methodology for determining tailplane performance degradation due to icing; and (2) to develop an experimental database of tailplane aerodynamic performance, with and without ice contamination, for a range of tailplane configurations. Wind tunnel tests were planned with representative aircraft, i.e., the Learjet 45 and a twin-engine low-speed aircraft. This report summarizes the research performed during the first year of the study and outlines the work tasks for the second year.
Usability study of the EduMod eLearning Program for contouring nodal stations of the head and neck.
Deraniyagala, Rohan; Amdur, Robert J; Boyer, Arthur L; Kaylor, Scott
2015-01-01
A major strategy for improving radiation oncology education and competence evaluation is to develop eLearning programs that reproduce the real work environment. A valuable measure of the quality of an eLearning program is "usability," which is a multidimensional endpoint defined from the end user's perspective. The gold standard for measuring usability is the Software Usability Measurement Inventory (SUMI). The purpose of this study is to use the SUMI to measure usability of an eLearning course that uses innovative software to teach and test contouring of nodal stations of the head and neck. This is a prospective institutional review board-approved study in which all participants gave written informed consent. The study population was radiation oncology residents from 8 different programs across the United States. The subjects had to pass all sections of the same 2 eLearning modules and then complete the SUMI usability evaluation instrument. We reached the accrual goal of 25 participants. Usability results for the EduMod eLearning course, "Nodal Stations of the Head and Neck," were compared with a large database of scores of other major software programs. Results were evaluated in 5 domains: Affect, Helpfulness, Control, Learnability, and Global Usability. In all 5 domains, usability scores for the study modules were higher than the database mean and statistically superior in 4 domains. This is the first study to evaluate usability of an eLearning program related to radiation oncology. Usability of 2 representative modules related to contouring nodal stations of the head and neck was highly favorable, with scores that were superior to the industry standard in multiple domains. These results support the continued development of this type of eLearning program for teaching and testing radiation oncology technical skills. Copyright © 2015 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
Visualization and manipulating the image of a formal data structure (FDS)-based database
NASA Astrophysics Data System (ADS)
Verdiesen, Franc; de Hoop, Sylvia; Molenaar, Martien
1994-08-01
A vector map is a terrain representation with a vector-structured geometry. Molenaar formulated an object-oriented formal data structure (FDS) for 3D single-valued vector maps. This FDS has been implemented in a database (Oracle). In this study we describe a methodology for visualizing an FDS-based database and manipulating the image. A data set retrieved by querying the database is converted into an import file for a drawing application. An objective of this study is that an end user can alter and add terrain objects in the image. The drawing application creates an export file, which is compared with the import file. Differences between these files result in updates to the database, which involve consistency checks. In this study Autocad is used for visualizing and manipulating the image of the data set. A computer program has been written for the data exchange and conversion between Oracle and Autocad. The data structure of the FDS is compared with the data structure of Autocad, and FDS data are converted into an equivalent Autocad structure.
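The import/export comparison step can be sketched as a simple record diff whose output drives the database update. The record format and function name below are illustrative, not taken from the paper.

```python
def diff_records(imported, exported):
    """Diff the import file sent to the drawing application against the
    export file it returns; records are (object_id, geometry) pairs.
    The three result lists drive inserts, deletes, and updates in the
    database, each subject to consistency checks before commit.
    """
    before, after = dict(imported), dict(exported)
    added   = [oid for oid in after if oid not in before]
    removed = [oid for oid in before if oid not in after]
    changed = [oid for oid in after
               if oid in before and before[oid] != after[oid]]
    return added, removed, changed

imp = [("house1", "poly-A"), ("road7", "line-B")]
exp = [("house1", "poly-A2"), ("lake3", "poly-C")]
print(diff_records(imp, exp))  # lake3 added, road7 removed, house1 edited
```

In the actual system the "geometry" payload would be the FDS object converted to Autocad's structure, and the consistency checks would enforce the FDS's topological rules before any update is committed.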
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zamora, Antonio
Advanced Natural Language Processing Tools for Web Information Retrieval, Content Analysis, and Synthesis. The goal of this SBIR was to implement and evaluate several advanced Natural Language Processing (NLP) tools and techniques to enhance the precision and relevance of search results by analyzing and augmenting search queries and by helping to organize the search output obtained from heterogeneous databases and web pages containing textual information of interest to DOE and the scientific-technical user communities in general. The SBIR investigated 1) the incorporation of spelling checkers in search applications, 2) identification of significant phrases and concepts using a combination of linguistic and statistical techniques, and 3) enhancement of the query interface and search retrieval results through the use of semantic resources, such as thesauri. A search program with a flexible query interface was developed to search reference databases with the objective of enhancing search results from web queries or queries of specialized search systems such as DOE's Information Bridge. The DOE ETDE/INIS Joint Thesaurus was processed to create a searchable database. Term frequencies and term co-occurrences were used to enhance web information retrieval by providing algorithmically derived objective criteria to organize relevant documents into clusters containing significant terms. A thesaurus provides an authoritative overview and classification of a field of knowledge. By organizing the results of a search using the thesaurus terminology, the output is more meaningful than when the results are organized only by the terms that co-occur in the retrieved documents, some of which may not be significant. An attempt was made to take advantage of the hierarchy provided by broader and narrower terms, as well as other field-specific information in the thesauri.
The search program uses linguistic morphological routines to find relevant entries regardless of whether terms are stored in singular or plural form. Implementation of additional inflectional-morphology processes for verbs could enhance retrieval further, but this has to be balanced against the possibility of broadening the results too much. In addition to the DOE energy thesaurus, other sources of specialized organized knowledge, such as the Medical Subject Headings (MeSH), the Unified Medical Language System (UMLS), and Wikipedia, were investigated. The supporting role of the NLP thesaurus search program was enhanced by incorporating spelling aid and a part-of-speech tagger to cope with misspellings in the queries, determine the grammatical roles of the query words, and identify nouns for special processing. To improve precision, multiple modes of searching were implemented, including Boolean operators and field-specific searches. Programs to convert a thesaurus or reference file into searchable support files can be deployed easily, and the resulting files are immediately searchable, producing relevance-ranked results with built-in spelling aid, morphological processing, and advanced search logic. Demonstration systems were built for several databases, including the DOE energy thesaurus.
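The singular/plural matching described above can be approximated with a naive suffix-stripping rule. This is a toy stand-in for the program's actual morphological routines, and the thesaurus entries are invented for the example.

```python
def singularize(term: str) -> str:
    """Toy English plural reduction for thesaurus lookup; the paper's
    morphological routines are more elaborate than this."""
    t = term.lower()
    if t.endswith("ies") and len(t) > 3:
        return t[:-3] + "y"          # policies -> policy
    if t.endswith("s") and not t.endswith("ss"):
        return t[:-1]                # reactors -> reactor
    return t

THESAURUS = {"reactor", "isotope", "energy policy"}  # stand-in entries

def lookup(query: str) -> bool:
    """Match a query against thesaurus entries regardless of number."""
    key = " ".join(singularize(w) for w in query.split())
    return key in THESAURUS

print(lookup("Reactors"), lookup("Isotopes"))
```

Normalizing both the stored terms and the query through the same reduction is what lets "Reactors" retrieve the entry stored as "reactor"; the trade-off the abstract notes, broader morphology versus over-broadened results, applies directly to how aggressive this reduction is allowed to be.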
Shaath, M Kareem; Yeranosian, Michael G; Ippolito, Joseph A; Adams, Mark R; Sirkin, Michael S; Reilly, Mark C
2018-05-02
Orthopaedic trauma fellowship applicants use online-based resources when researching information on potential U.S. fellowship programs. The 2 primary sources for identifying programs are the Orthopaedic Trauma Association (OTA) database and the San Francisco Match (SF Match) database. Previous studies in other orthopaedic subspecialty areas have demonstrated considerable discrepancies among fellowship programs. The purpose of this study was to analyze content and availability of information on orthopaedic trauma surgery fellowship web sites. The online databases of the OTA and SF Match were reviewed to determine the availability of embedded program links or external links for the included programs. Thereafter, a Google search was performed for each program individually by typing the program's name, followed by the term "orthopaedic trauma fellowship." All identified fellowship web sites were analyzed for accessibility and content. Web sites were evaluated for comprehensiveness in mentioning key components of the orthopaedic trauma surgery curriculum. By consensus, we refined the final list of variables utilizing the methodology of previous studies on the topic. We identified 54 OTA-accredited fellowship programs, offering 87 positions. The majority (94%) of programs had web sites accessible through a Google search. Of the 51 web sites found, all (100%) described their program. Most commonly, hospital affiliation (88%), operative experiences (76%), and rotation overview (65%) were listed, and, least commonly, interview dates (6%), selection criteria (16%), on-call requirements (20%), and fellow evaluation criteria (20%) were listed. Programs with ≥2 fellows provided more information with regard to education content (p = 0.0001) and recruitment content (p = 0.013). Programs with Accreditation Council for Graduate Medical Education (ACGME) accreditation status also provided greater information with regard to education content (odds ratio, 4.0; p = 0.0001). 
Otherwise, no differences were seen by region, residency affiliation, medical school affiliation, or hospital affiliation. The SF Match and OTA databases provide few direct links to fellowship web sites. Individual program web sites do not effectively and completely convey information about the programs. The Internet is an underused resource for fellow recruitment. The lack of information on these sites allows for future opportunity to optimize this resource.
Bordeianou, Liliana; Cauley, Christy E; Antonelli, Donna; Bird, Sarah; Rattner, David; Hutter, Matthew; Mahmood, Sadiqa; Schnipper, Deborah; Rubin, Marc; Bleday, Ronald; Kenney, Pardon; Berger, David
2017-01-01
Two systems measure surgical site infection rates following colorectal surgeries: the American College of Surgeons National Surgical Quality Improvement Program and the Centers for Disease Control and Prevention National Healthcare Safety Network. The Centers for Medicare & Medicaid Services pay-for-performance initiatives use National Healthcare Safety Network data for hospital comparisons. This study aimed to compare database concordance. This is a multi-institution cohort study of a systemwide Colorectal Surgery Collaborative. The National Surgical Quality Improvement Program requires rigorous, standardized data capture techniques; the National Healthcare Safety Network allows 5 data capture techniques. Standardized surgical site infection rates were compared between databases. The Cohen κ-coefficient was calculated. This study was conducted at Boston-area hospitals. National Healthcare Safety Network or National Surgical Quality Improvement Program patients undergoing colorectal surgery were included. Standardized surgical site infection rates were the primary outcomes of interest. We compared 30-day surgical site infection rates across 3547 (National Surgical Quality Improvement Program) vs 5179 (National Healthcare Safety Network) colorectal procedures (2012-2014). Discrepancies appeared: the National Surgical Quality Improvement Program database of hospital 1 (N = 1480 patients) routinely found surgical site infection rates of approximately 10%, and these rates were routinely deemed "exemplary" or "as expected" (100% of the time). National Healthcare Safety Network data from the same hospital and time period (N = 1881) revealed a similar overall surgical site infection rate (10%), but standardized rates were deemed "worse than national average" 80% of the time. Overall, hospitals using less rigorous capture methods had improved surgical site infection rates for National Healthcare Safety Network compared with standardized National Surgical Quality Improvement Program reports.
The correlation coefficient between standardized infection rates was 0.03 (p = 0.88). During 25 site-time period observations, National Surgical Quality Improvement Program and National Healthcare Safety Network data matched for 52% of observations (13/25). κ = 0.10 (95% CI, -0.1366 to 0.3402; p = 0.403), indicating poor agreement. This study investigated hospitals located in the Northeastern United States only. Variation in Centers for Medicare & Medicaid Services-mandated National Healthcare Safety Network infection surveillance methodology leads to unreliable results, which is apparent when these results are compared with standardized data. High-quality data would improve care quality and compare outcomes among institutions.
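Cohen's κ corrects observed agreement for the agreement expected by chance: κ = (p_o − p_e)/(1 − p_e). The marginals behind the reported κ = 0.10 are not given in the abstract, but with p_o = 13/25 one can back out the implied chance agreement. This is a sketch for intuition, not the study's computation.

```python
def cohen_kappa(p_observed: float, p_expected: float) -> float:
    """Cohen's kappa: agreement between two raters, corrected for
    the agreement expected by chance alone."""
    return (p_observed - p_expected) / (1.0 - p_expected)

def implied_chance_agreement(p_observed: float, kappa: float) -> float:
    """Invert kappa = (p_o - p_e) / (1 - p_e) to recover p_e."""
    return (p_observed - kappa) / (1.0 - kappa)

p_o = 13 / 25                 # 52% of site-time observations matched
p_e = implied_chance_agreement(p_o, 0.10)
print(round(p_e, 4))          # implied chance agreement
print(round(cohen_kappa(p_o, p_e), 2))
```

The takeaway is that 52% raw agreement is barely better than the roughly 47% expected by chance under these assumptions, which is why the study characterizes the agreement as poor.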
ERIC Educational Resources Information Center
Feinberg, Daniel A.
2017-01-01
This study examined the supports that female students sought out and found of value in an online database design course in a health informatics master's program. A target outcome was to help inform the practice of faculty and administrators in similar programs. Health informatics is a growing field that has faced shortages of qualified workers who…
ERIC Educational Resources Information Center
Gray, Peter J.
Ways a microcomputer can be used to establish and maintain an evaluation database and types of data management features possible on a microcomputer are described in this report, which contains step-by-step procedures and numerous examples for establishing a database, manipulating data, and designing and printing reports. Following a brief…
Meekers, Dominique; Rahaim, Stephen
2005-01-27
Over the past two decades, social marketing programs have become an important element of the national family planning and HIV prevention strategies of several developing countries. As yet, there has not been any comprehensive empirical assessment of which of several social marketing models is most effective for a given socio-economic context. Such an assessment is urgently needed to inform the design of future social marketing programs and to avoid designing programs around an ineffective model. This study addresses the issue using a database of annual statistics on reproductive health oriented social marketing programs in over 70 countries. In total, the database covers 555 years of program experience with social marketing programs that distribute and promote the use of oral contraceptives and condoms. Specifically, our analysis assesses to what extent the model used by different reproductive health social marketing programs has varied across socio-economic contexts. We then use random effects regression to test in which socio-economic context each of the models is most successful at increasing use of socially marketed oral contraceptives and condoms. The results show a tendency to design reproductive health social marketing programs with a management structure that matches the local context. However, the evidence also shows that this has not always been the case. While socio-economic context clearly influences the effectiveness of some of the social marketing models, program maturity and the size of the target population appear equally important. To maximize the effectiveness of future social marketing programs, more effort must be devoted to ensuring that such programs are designed using the model or approach most suitable for the local context.
Web-based UMLS concept retrieval by automatic text scanning: a comparison of two methods.
Brandt, C; Nadkarni, P
2001-01-01
The Web is increasingly the medium of choice for multi-user application program delivery. Yet the selection of an appropriate programming environment for rapid prototyping, code portability, and maintainability remains an issue. We summarize our experience with the conversion of a LISP Web application, Search/SR, to a new, functionally identical application, Search/SR-ASP, using a relational database and active server pages (ASP) technology. Our results indicate that easy access to database engines and external objects is almost essential for a development environment to be considered viable for rapid and robust application delivery. While LISP itself is a robust language, its use in Web applications may be hard to justify given that current vendor implementations do not provide such functionality. Alternative, currently available scripting environments for Web development appear to have most of LISP's advantages and few of its disadvantages.
NASA Astrophysics Data System (ADS)
Zyelyk, Ya. I.; Semeniv, O. V.
2015-12-01
The state of the problem of post-launch calibration of satellite electro-optical remote sensors, and its solution in Ukraine, is analyzed. The database has been improved, and dynamic services for user interaction with it from the open geographic information system Quantum GIS have been created to provide information support for calibration activities. A dynamic application under QGIS implementing these services has been developed, supporting data entry, editing, and extraction from the database, using object-oriented programming and modern program design patterns. The functional and algorithmic support of this dynamic software and its interface are also developed.
The Network Configuration of an Object Relational Database Management System
NASA Technical Reports Server (NTRS)
Diaz, Philip; Harris, W. C.
2000-01-01
The networking and implementation of the Oracle Database Management System (ODBMS) require developers to have knowledge of the UNIX operating system as well as all the features of the Oracle Server. The server is an object relational database management system (DBMS). Through distributed processing, work is split between the database server and client application programs: the DBMS handles all the responsibilities of the server, while the workstations running the database application concentrate on the interpretation and display of data.
Peres, Frederico; Claudio, Luz
2013-01-01
The Fogarty International Center of the National Institutes of Health created the International Training and Research Program in Occupational and Environmental Health (ITREOH) in 1995 with the aim of training environmental and occupational health scientists in developing countries. Mount Sinai School of Medicine (MSSM) was a grantee of this program from its inception, partnering with research institutions in Brazil, Chile, and Mexico. This article evaluates Mount Sinai's program to determine whether it has contributed to the specific research capacity needs of the international partners. Information was obtained from: (a) international and regional scientific literature databases; (b) databases from the three participating countries; and (c) the MSSM ITREOH Program Database. Most of the research projects supported by the program were consistent with the themes found to be top priorities for the partner countries based on mortality/morbidity and research themes in the literature. Indirect effects of the training and the subsequent research projects completed by the trained fellows included health policy changes and the development of collaborative international projects. International research training programs such as the MSSM ITREOH, which strengthen scientific research capacity in occupational and environmental health in Latin America, can make a significant impact on the most pressing health issues in the partner countries. Copyright © 2012 Wiley Periodicals, Inc.
A binary linear programming formulation of the graph edit distance.
Justice, Derek; Hero, Alfred
2006-08-01
A binary linear programming formulation of the graph edit distance for unweighted, undirected graphs with vertex attributes is derived and applied to a graph recognition problem. A general formulation for editing graphs is used to derive a graph edit distance that is proven to be a metric, provided the cost function for individual edit operations is a metric. Then, a binary linear program is developed for computing this graph edit distance, and polynomial time methods for determining upper and lower bounds on the solution of the binary program are derived by applying solution methods for standard linear programming and the assignment problem. A recognition problem of comparing a sample input graph to a database of known prototype graphs in the context of a chemical information system is presented as an application of the new method. The costs associated with various edit operations are chosen by using a minimum normalized variance criterion applied to pairwise distances between nearest neighbors in the database of prototypes. The new metric is shown to perform quite well in comparison to existing metrics when applied to a database of chemical graphs.
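The matching formulation underlying such graph edit distances can be illustrated with a brute-force sketch: pad both vertex sets with "epsilon" dummies and minimize cost over bijections between the padded sets. The labels, costs, and graphs below are illustrative, and this is not the abstract's binary linear program, which scales far better than this factorial enumeration:

```python
from itertools import permutations

EPS = None   # dummy "epsilon" vertex modelling insertion/deletion

def graph_edit_distance(labels1, edges1, labels2, edges2,
                        c_sub=1, c_indel=1, c_edge=1):
    """Exact edit distance between two small labelled undirected
    graphs: pad both vertex sets to size n1+n2 with dummies, then
    take the cheapest bijection (brute force, O((n1+n2)!))."""
    n1, n2 = len(labels1), len(labels2)
    n = n1 + n2
    e1 = {frozenset(e) for e in edges1}
    e2 = {frozenset(e) for e in edges2}
    best = float("inf")
    for perm in permutations(range(n)):   # padded-G1 index -> padded-G2 index
        cost = 0
        for i in range(n):
            a = labels1[i] if i < n1 else EPS
            b = labels2[perm[i]] if perm[i] < n2 else EPS
            if a is EPS and b is EPS:
                continue                  # dummy matched to dummy: free
            if a is EPS or b is EPS:
                cost += c_indel           # vertex inserted or deleted
            elif a != b:
                cost += c_sub             # vertex relabelled
        for i in range(n):                # each unordered pair once
            for k in range(i + 1, n):
                if (frozenset((i, k)) in e1) != (frozenset((perm[i], perm[k])) in e2):
                    cost += c_edge        # edge exists in only one graph
        best = min(best, cost)
    return best

# deleting one edge turns a labelled triangle into a path: distance 1
triangle = (["C", "C", "C"], [(0, 1), (1, 2), (0, 2)])
chain    = (["C", "C", "C"], [(0, 1), (1, 2)])
print(graph_edit_distance(*triangle, *chain))   # -> 1
```

With uniform unit costs this padded-bijection minimum is a metric, which is the property the paper proves in general and then relaxes to obtain polynomial-time bounds.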
Ridge 2000 Data Management System
NASA Astrophysics Data System (ADS)
Goodwillie, A. M.; Carbotte, S. M.; Arko, R. A.; Haxby, W. F.; Ryan, W. B.; Chayes, D. N.; Lehnert, K. A.; Shank, T. M.
2005-12-01
Hosted at Lamont by the marine geoscience Data Management group, mgDMS, the NSF-funded Ridge 2000 electronic database, http://www.marine-geo.org/ridge2000/, is a key component of the Ridge 2000 multi-disciplinary program. The database covers each of the three Ridge 2000 Integrated Study Sites: Endeavour Segment, Lau Basin, and 8-11N Segment. It promotes the sharing of information with the broader community, facilitates integration of the suite of information collected at each study site, and enables comparisons between sites. The Ridge 2000 data system provides easy web access to a relational database that is built around a catalogue of cruise metadata. Any web browser can be used to perform a versatile text-based search which returns basic cruise and submersible dive information, sample and data inventories, navigation, and other relevant metadata such as shipboard personnel and links to NSF program awards. In addition, non-proprietary data files, images, and derived products which are hosted locally or in national repositories, as well as science and technical reports, can be freely downloaded. On the Ridge 2000 database page, our Data Link allows users to search the database using a broad range of parameters including data type, cruise ID, chief scientist, and geographical location. The first Ridge 2000 field programs sailed in 2004 and, in addition to numerous data sets collected prior to the Ridge 2000 program, the database currently contains information on fifteen Ridge 2000-funded cruises and almost sixty Alvin dives. Track lines can be viewed using a recently implemented Web Map Service button labelled Map View. The Ridge 2000 database is fully integrated with databases hosted by the mgDMS group for MARGINS and the Antarctic multibeam and seismic reflection data initiatives. Links are provided to partner databases including PetDB, SIOExplorer, and the ODP Janus system.
Inter-operability with existing and new partner repositories continues to be strengthened. One major effort involves the gradual unification of metadata across these partner databases. Standardised electronic metadata forms that can be filled in at sea are available from our web site. Interactive map-based exploration and visualisation of the Ridge 2000 database is provided by GeoMapApp, a freely-available Java(tm) application being developed within the mgDMS group. GeoMapApp includes high-resolution bathymetric grids for the 8-11N EPR segment and allows customised maps and grids to be created for any of the Ridge 2000 ISS. Vent and instrument locations can be plotted and saved as images, and Alvin dive photos are also available.
Automatic detection of anomalies in screening mammograms
2013-01-01
Background Diagnostic performance in breast screening programs may be influenced by the prior probability of disease. Since breast cancer incidence is roughly half a percent in the general population, there is a large probability that the screening exam will be normal; that factor may contribute to false negatives. Screening programs typically exhibit about 83% sensitivity and 91% specificity. This investigation was undertaken to determine if a system could be developed to pre-sort screening images into normal and suspicious bins based on their likelihood to contain disease. Wavelets were investigated as a method to parse the image data, potentially removing confounding information. The development of a classification system based on features extracted from wavelet-transformed mammograms is reported. Methods In the multi-step procedure, images were processed using 2D discrete wavelet transforms to create a set of maps at different size scales. Next, statistical features were computed from each map, and a subset of these features was the input for a concerted set of naïve Bayesian classifiers. The classifier network was constructed to calculate the probability that the parent mammography image contained an abnormality. The abnormalities were not identified, nor were they regionalized. The algorithm was tested on two publicly available databases: the Digital Database for Screening Mammography (DDSM) and the Mammographic Image Analysis Society database (MIAS). These databases contain radiologist-verified images and feature common abnormalities including spiculations, masses, geometric deformations and fibroid tissues. Results The classifier-network designs tested achieved sensitivities and specificities sufficient to be potentially useful in a clinical setting. This first series of tests identified networks with 100% sensitivity and up to 79% specificity for abnormalities.
This performance significantly exceeds the mean sensitivity reported in the literature for the unaided human expert. Conclusions Classifiers based on wavelet-derived features proved to be highly sensitive to a range of pathologies; as a result, Type II errors were nearly eliminated. Pre-sorting the images changed the prior probability in the sorted database from 37% to 74%. PMID:24330643
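The first two stages of the pipeline described above (a 2-D discrete wavelet transform followed by per-band statistical features) can be sketched in outline. This uses the simple Haar wavelet on a made-up image, not the study's filters, data, or classifiers:

```python
import statistics

def haar2d(img):
    """One level of the 2-D Haar discrete wavelet transform for an
    image (list of lists) with even height and width.  Returns the
    approximation band and the horizontal/vertical/diagonal detail
    bands, each at half resolution."""
    # transform each row: pairwise averages, then pairwise differences
    rows = [[(r[2 * j] + r[2 * j + 1]) / 2 for j in range(len(r) // 2)] +
            [(r[2 * j] - r[2 * j + 1]) / 2 for j in range(len(r) // 2)]
            for r in img]
    # transform each column of the row-transformed image the same way
    h = len(rows)
    cols = list(zip(*rows))
    out_cols = [[(c[2 * i] + c[2 * i + 1]) / 2 for i in range(h // 2)] +
                [(c[2 * i] - c[2 * i + 1]) / 2 for i in range(h // 2)]
                for c in cols]
    full = list(zip(*out_cols))          # back to row-major order
    hh2, ww2 = h // 2, len(img[0]) // 2
    approx   = [list(row[:ww2]) for row in full[:hh2]]  # low/low band
    detail_h = [list(row[ww2:]) for row in full[:hh2]]  # detail along width
    detail_v = [list(row[:ww2]) for row in full[hh2:]]  # detail along height
    detail_d = [list(row[ww2:]) for row in full[hh2:]]  # diagonal detail
    return approx, detail_h, detail_v, detail_d

def band_features(band):
    """Statistical features of one sub-band (inputs for a classifier)."""
    vals = [v for row in band for v in row]
    return {"mean": statistics.fmean(vals), "var": statistics.pvariance(vals)}

flat = [[5] * 4 for _ in range(4)]       # a featureless 4x4 "image"
approx, dh, dv, dd = haar2d(flat)
print(band_features(approx))             # -> {'mean': 5.0, 'var': 0.0}
```

In the study's pipeline, features like these from several decomposition levels would feed naïve Bayesian classifiers; that step is omitted here.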
Stalder, Hanspeter; Hug, Corinne; Zanoni, Reto; Vogt, Hans-Rudolf; Peterhans, Ernst; Schweizer, Matthias; Bachofen, Claudia
2016-06-15
Pestiviruses infect a wide variety of animals of the order Artiodactyla, with bovine viral diarrhea virus (BVDV) being an economically important pathogen of livestock globally. BVDV is maintained in the cattle population by infecting fetuses early in gestation and, thus, by generating persistently infected (PI) animals that efficiently transmit the virus throughout their lifetime. In 2008, Switzerland started a national control campaign with the aim of eradicating BVDV from all bovines in the country by searching for and eliminating every PI animal. Unlike previous eradication programs, all animals of the entire population were tested for virus within one year, followed by testing each newborn calf in the subsequent four years. Overall, 3,855,814 animals were tested from 2008 through 2011, 20,553 of which returned an initial BVDV-positive result. We were able to obtain samples from at least 36% of all animals that initially tested positive. We sequenced the 5' untranslated region (UTR) of more than 7400 pestiviral strains and compiled the sequence data in a database together with an array of information on the PI animals, including the location of the farm in which they were born, their dams, and the locations where the animals had lived. To our knowledge, this is the largest database combining viral sequences with animal data for an endemic viral disease. Using unique identification tags, the different datasets within the database were connected to run diverse molecular epidemiological analyses. The large sets of animal and sequence data made it possible to run analyses in both directions, i.e., starting from a likely epidemiological link or starting from related sequences. We present the results of three epidemiological investigations in detail and a compilation of 122 individual investigations that show the usefulness of such a database in a country-wide BVD eradication program. Copyright © 2015 Elsevier B.V. All rights reserved.
Howe, E.A.; de Souza, A.; Lahr, D.L.; Chatwin, S.; Montgomery, P.; Alexander, B.R.; Nguyen, D.-T.; Cruz, Y.; Stonich, D.A.; Walzer, G.; Rose, J.T.; Picard, S.C.; Liu, Z.; Rose, J.N.; Xiang, X.; Asiedu, J.; Durkin, D.; Levine, J.; Yang, J.J.; Schürer, S.C.; Braisted, J.C.; Southall, N.; Southern, M.R.; Chung, T.D.Y.; Brudz, S.; Tanega, C.; Schreiber, S.L.; Bittker, J.A.; Guha, R.; Clemons, P.A.
2015-01-01
BARD, the BioAssay Research Database (https://bard.nih.gov/) is a public database and suite of tools developed to provide access to bioassay data produced by the NIH Molecular Libraries Program (MLP). Data from 631 MLP projects were migrated to a new structured vocabulary designed to capture bioassay data in a formalized manner, with particular emphasis placed on the description of assay protocols. New data can be submitted to BARD with a user-friendly set of tools that assist in the creation of appropriately formatted datasets and assay definitions. Data published through the BARD application program interface (API) can be accessed by researchers using web-based query tools or a desktop client. Third-party developers wishing to create new tools can use the API to produce stand-alone tools or new plug-ins that can be integrated into BARD. The entire BARD suite of tools therefore supports three classes of researcher: those who wish to publish data, those who wish to mine data for testable hypotheses, and those in the developer community who wish to build tools that leverage this carefully curated chemical biology resource. PMID:25477388
... compound (VOC) emissions, and more. U.S. Department of Agriculture (USDA) Water Quality Information Center Databases : online databases that may be related to water and agriculture. National Park Service (NPS) Water Quality Program : NPS ...
Database resources of the National Center for Biotechnology Information
Wheeler, David L.; Barrett, Tanya; Benson, Dennis A.; Bryant, Stephen H.; Canese, Kathi; Chetvernin, Vyacheslav; Church, Deanna M.; DiCuccio, Michael; Edgar, Ron; Federhen, Scott; Feolo, Michael; Geer, Lewis Y.; Helmberg, Wolfgang; Kapustin, Yuri; Khovayko, Oleg; Landsman, David; Lipman, David J.; Madden, Thomas L.; Maglott, Donna R.; Miller, Vadim; Ostell, James; Pruitt, Kim D.; Schuler, Gregory D.; Shumway, Martin; Sequeira, Edwin; Sherry, Steven T.; Sirotkin, Karl; Souvorov, Alexandre; Starchenko, Grigory; Tatusov, Roman L.; Tatusova, Tatiana A.; Wagner, Lukas; Yaschenko, Eugene
2008-01-01
In addition to maintaining the GenBank(R) nucleic acid sequence database, the National Center for Biotechnology Information (NCBI) provides analysis and retrieval resources for the data in GenBank and other biological data available through NCBI's web site. NCBI resources include Entrez, the Entrez Programming Utilities, My NCBI, PubMed, PubMed Central, Entrez Gene, the NCBI Taxonomy Browser, BLAST, BLAST Link, Electronic PCR, OrfFinder, Spidey, Splign, RefSeq, UniGene, HomoloGene, ProtEST, dbMHC, dbSNP, Cancer Chromosomes, Entrez Genome, Genome Project and related tools, the Trace, Assembly, and Short Read Archives, the Map Viewer, Model Maker, Evidence Viewer, Clusters of Orthologous Groups, Influenza Viral Resources, HIV-1/Human Protein Interaction Database, Gene Expression Omnibus, Entrez Probe, GENSAT, Database of Genotype and Phenotype, Online Mendelian Inheritance in Man, Online Mendelian Inheritance in Animals, the Molecular Modeling Database, the Conserved Domain Database, the Conserved Domain Architecture Retrieval Tool and the PubChem suite of small molecule databases. Augmenting the web applications are custom implementations of the BLAST program optimized to search specialized data sets. These resources can be accessed through the NCBI home page at www.ncbi.nlm.nih.gov. PMID:18045790
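The Entrez Programming Utilities mentioned above expose these databases over HTTP. A minimal sketch builds an ESearch request URL and extracts the UID list from a JSON reply of the documented shape; the query term and the sample response below are illustrative, not live results:

```python
import json
from urllib.parse import urlencode

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def esearch_url(db, term, retmax=20):
    """Build an Entrez ESearch request URL asking for a JSON reply."""
    params = urlencode({"db": db, "term": term,
                        "retmax": retmax, "retmode": "json"})
    return f"{EUTILS}/esearch.fcgi?{params}"

def parse_ids(response_text):
    """Pull the UID list out of an ESearch JSON response body."""
    return json.loads(response_text)["esearchresult"]["idlist"]

print(esearch_url("pubmed", "BRCA1[Gene]"))
# an ESearch reply has this documented shape (the UIDs here are illustrative):
sample = '{"esearchresult": {"count": "2", "idlist": ["11237011", "11125122"]}}'
print(parse_ids(sample))                 # -> ['11237011', '11125122']
```

Fetching the URL (e.g. with urllib.request) returns a body of the same shape; the UIDs can then be handed to EFetch or ESummary for full records.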
Generation of comprehensive thoracic oncology database--tool for translational research.
Surati, Mosmi; Robinson, Matthew; Nandi, Suvobroto; Faoro, Leonardo; Demchuk, Carley; Kanteti, Rajani; Ferguson, Benjamin; Gangadhar, Tara; Hensing, Thomas; Hasina, Rifat; Husain, Aliya; Ferguson, Mark; Karrison, Theodore; Salgia, Ravi
2011-01-22
The Thoracic Oncology Program Database Project was created to serve as a comprehensive, verified, and accessible repository for well-annotated cancer specimens and clinical data to be available to researchers within the Thoracic Oncology Research Program. This database also captures a large volume of genomic and proteomic data obtained from various tumor tissue studies. A team of clinical and basic science researchers, a biostatistician, and a bioinformatics expert was convened to design the database. Variables of interest were clearly defined, and their descriptions were written within a standard operating manual to ensure consistency of data annotation. Using one protocol for prospective tissue banking and another for retrospective banking, tumor and normal tissue samples were collected from patients who consented to these protocols. Clinical information such as demographics, cancer characterization, and treatment plans for these patients was abstracted and entered into an Access database. Proteomic and genomic data have been included in the database and have been linked to clinical information for patients described within the database. The data from each table were linked using the relationships function in Microsoft Access to allow the database manager to connect clinical and laboratory information during a query. The queried data can then be exported for statistical analysis and hypothesis generation.
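The table linkage described here (clinical and laboratory tables joined on a patient key at query time) can be sketched with Python's built-in sqlite3 in place of Microsoft Access; the schema and values are invented for illustration, since the project's real variables lived in its standard operating manual:

```python
import sqlite3

# illustrative schema only, standing in for the project's Access tables
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE patient (
        patient_id INTEGER PRIMARY KEY,
        histology  TEXT,
        stage      TEXT
    );
    CREATE TABLE proteomic_result (
        result_id  INTEGER PRIMARY KEY,
        patient_id INTEGER REFERENCES patient(patient_id),
        marker     TEXT,
        level      REAL
    );
""")
con.executemany("INSERT INTO patient VALUES (?, ?, ?)",
                [(1, "adenocarcinoma", "II"), (2, "squamous", "III")])
con.executemany("INSERT INTO proteomic_result VALUES (?, ?, ?, ?)",
                [(10, 1, "MET", 2.4), (11, 2, "MET", 0.7)])

# the stored relationship is exercised at query time by a join
linked = con.execute("""
    SELECT p.histology, r.marker, r.level
    FROM patient p JOIN proteomic_result r USING (patient_id)
    ORDER BY r.level DESC
""").fetchall()
print(linked)   # -> [('adenocarcinoma', 'MET', 2.4), ('squamous', 'MET', 0.7)]
```

The joined rows correspond to the exportable result set the abstract describes feeding into statistical analysis.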
NASA Technical Reports Server (NTRS)
Evans, R. W.; Brinza, D. E.
2014-01-01
Grid2 is a program that utilizes the Galileo Interim Radiation Electron model 2 (GIRE2) to compute fluences and doses for Jupiter missions. (Note: the iterations of these two programs have been GIRE and GIRE2, and likewise Grid and Grid2.) While GIRE2 is an important improvement over the original GIRE radiation model, the GIRE2 model can take a day or more to compute these quantities for a complete mission. Grid2 fits the results of the detailed GIRE2 code with a set of grids in local time and position, thereby greatly speeding up the execution of the model--minutes as opposed to days. The Grid2 model covers the time period from 1971 to 2050 and distances of 1.03 to 30 Jovian radii (Rj). It is available as a direct-access database through a FORTRAN interface program. The new database is only slightly larger than the original grid version: 1.5 gigabytes (GB) versus 1.2 GB.
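The grid-fitting idea (precompute a lookup table once, then interpolate instead of re-running the slow model) can be illustrated generically with bilinear interpolation; the grid axes and values below are invented and bear no relation to actual GIRE2 output:

```python
from bisect import bisect_right

def bilinear(xs, ys, table, x, y):
    """Look up table[i][j], defined at grid points (xs[i], ys[j]),
    by bilinear interpolation at an arbitrary point (x, y)."""
    # locate the grid cell containing the query point (clamped at edges)
    i = min(max(bisect_right(xs, x) - 1, 0), len(xs) - 2)
    j = min(max(bisect_right(ys, y) - 1, 0), len(ys) - 2)
    tx = (x - xs[i]) / (xs[i + 1] - xs[i])   # fractional position in cell
    ty = (y - ys[j]) / (ys[j + 1] - ys[j])
    return ((1 - tx) * (1 - ty) * table[i][j]
            + tx * (1 - ty) * table[i + 1][j]
            + (1 - tx) * ty * table[i][j + 1]
            + tx * ty * table[i + 1][j + 1])

# toy "dose" grid over (distance, local time); the numbers are invented
dist  = [1.0, 10.0, 30.0]
hours = [0.0, 12.0, 24.0]
dose  = [[9.0, 8.0, 9.0],
         [4.0, 3.0, 4.0],
         [1.0, 0.5, 1.0]]
print(bilinear(dist, hours, dose, 5.5, 6.0))   # -> 6.0
```

Each lookup is a handful of arithmetic operations, which is why a fitted grid answers in minutes what the full model computes in days.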
A generic minimization random allocation and blinding system on web.
Cai, Hongwei; Xia, Jielai; Xu, Dezhong; Gao, Donghuai; Yan, Yongping
2006-12-01
Minimization is a dynamic randomization method for clinical trials. Although recommended by many researchers, minimization has seldom been reported in randomized trials, mainly because of controversy surrounding the validity of conventional analyses and its complexity in implementation. However, both the statistical and clinical validity of minimization have been demonstrated in recent studies. A minimization random allocation system integrated with a blinding function, which could facilitate the implementation of this method in general clinical trials, has not been reported. SYSTEM OVERVIEW: The system is a web-based random allocation system using the Pocock and Simon minimization method. It also supports multiple treatment arms within a trial, multiple simultaneous trials, and blinding without further programming. The system was constructed with a generic database schema design method, the Pocock and Simon minimization method, and a blinding method. It was coded in the Microsoft Visual Basic and Active Server Pages (ASP) programming languages, and all datasets were managed with a Microsoft SQL Server database. Some critical programming code is also provided. SIMULATIONS AND RESULTS: Two clinical trials were simulated simultaneously to test the system's applicability. Not only balanced groups but also blinded allocation results were achieved in both trials. Practical considerations for the minimization method, and the benefits, general applicability, and drawbacks of the technique implemented in this system, are discussed. Promising features of the proposed system are also summarized.
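The allocation rule at the core of such a system, Pocock and Simon minimization, can be sketched as follows. The factors, arms, imbalance measure (range of per-arm counts), and biased-coin probability are illustrative choices; the real system adds blinding and persists its counts in SQL Server:

```python
import random

def new_counts(factors, arms):
    """counts[factor][level][arm] = number of prior assignments."""
    return {f: {lvl: {a: 0 for a in arms} for lvl in levels}
            for f, levels in factors.items()}

def minimization_assign(counts, arms, patient, p_best=0.8, rng=random):
    """Pocock-Simon minimization: place `patient` (factor -> level)
    in the arm minimizing total marginal imbalance, using the range
    of the per-arm counts as the imbalance measure and a biased
    coin so the allocation stays unpredictable."""
    scores = []
    for arm in arms:
        total = 0
        for factor, level in patient.items():
            c = counts[factor][level]
            hypothetical = [c[a] + (a == arm) for a in arms]
            total += max(hypothetical) - min(hypothetical)
        scores.append((total, arm))
    scores.sort()                        # lowest imbalance first
    chosen = (scores[0][1] if rng.random() < p_best
              else rng.choice([a for _, a in scores[1:]]))
    for factor, level in patient.items():
        counts[factor][level][chosen] += 1
    return chosen

arms = ["treatment", "control"]
factors = {"sex": ["F", "M"], "site": ["A", "B"]}
counts = new_counts(factors, arms)
rng = random.Random(42)
for patient in [{"sex": "F", "site": "A"}, {"sex": "F", "site": "B"},
                {"sex": "M", "site": "A"}, {"sex": "M", "site": "B"}]:
    print(minimization_assign(counts, arms, patient, rng=rng))
```

With p_best below 1.0 the scheme stays random enough to resist prediction while still steering each prognostic factor toward balance across arms.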
MicroUse: The Database on Microcomputer Applications in Libraries and Information Centers.
ERIC Educational Resources Information Center
Chen, Ching-chih; Wang, Xiaochu
1984-01-01
Describes MicroUse, a microcomputer-based database on microcomputer applications in libraries and information centers which was developed using relational database manager dBASE II. The description includes its system configuration, software utilized, the in-house-developed dBASE programs, multifile structure, basic functions, MicroUse records,…
The Reach Address Database (RAD) stores the reach address of each Water Program feature that has been linked to the underlying surface water features (streams, lakes, etc.) in the National Hydrography Dataset (NHD). (A reach is the portion of a stream between two points of confluence; a confluence is the location where two or more streams flow together.)
Product Descriptions: Database Software for Science. A MicroSIFT Quarterly Report.
ERIC Educational Resources Information Center
Batey, Anne; And Others
Specific programs and software resources are described in this report on database software for science instruction. Materials are reviewed in the categories of: (1) database management (reviewing AppleWorks, Bank Street School Filer, FileVision, Friendly Filer, MECC DataQuest: The Composer, Scholastic PFS:File, PFS:Report); (2) data files…
Microcomputer-Based Access to Machine-Readable Numeric Databases.
ERIC Educational Resources Information Center
Wenzel, Patrick
1988-01-01
Describes the use of microcomputers and relational database management systems to improve access to numeric databases by the Data and Program Library Service at the University of Wisconsin. The internal records management system, in-house reference tools, and plans to extend these tools to the entire campus are discussed. (3 references) (CLB)
ERIC Educational Resources Information Center
Hoffman, Tony
Sophisticated database management systems (DBMS) for microcomputers are becoming increasingly easy to use, allowing small school districts to develop their own autonomous databases for tracking enrollment and student progress in special education. DBMS applications can be designed for maintenance by district personnel with little technical…
Video Databases: An Emerging Tool in Business Education
ERIC Educational Resources Information Center
MacKinnon, Gregory; Vibert, Conor
2014-01-01
A video database of business-leader interviews has been implemented in the assignment work of students in a Bachelor of Business Administration program at a primarily-undergraduate liberal arts university. This action research study was designed to determine the most suitable assignment work to associate with the database in a Business Strategy…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-25
...: Proposed Collection; Comment Request--National Hunger Clearinghouse Database Form AGENCY: Food and... Database Form. Form: FNS 543. OMB Number: 0584-0474. Expiration Date: 8/31/2012. Type of Request: Revision... Clearinghouse includes a database (FNS-543) of non- governmental, grassroots programs that work in the areas of...
An Experimental Investigation of Complexity in Database Query Formulation Tasks
ERIC Educational Resources Information Center
Casterella, Gretchen Irwin; Vijayasarathy, Leo
2013-01-01
Information Technology professionals and other knowledge workers rely on their ability to extract data from organizational databases to respond to business questions and support decision making. Structured query language (SQL) is the standard programming language for querying data in relational databases, and SQL skills are in high demand and are…
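The flavor of query-formulation task studied in such experiments can be illustrated with SQLite and an invented business schema; correctly assembling the join, the grouping, and the HAVING filter is exactly the skill being measured:

```python
import sqlite3

# invented schema: the kind of organizational data such tasks draw on
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customer (cust_id INTEGER PRIMARY KEY, region TEXT);
    CREATE TABLE orders   (order_id INTEGER PRIMARY KEY,
                           cust_id  INTEGER REFERENCES customer(cust_id),
                           total    REAL);
""")
con.executemany("INSERT INTO customer VALUES (?, ?)",
                [(1, "East"), (2, "West"), (3, "East")])
con.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [(100, 1, 250.0), (101, 1, 90.0),
                 (102, 2, 400.0), (103, 3, 80.0)])

# business question: "Which regions average more than 100 per order?"
answer = con.execute("""
    SELECT c.region, AVG(o.total) AS avg_order
    FROM orders o JOIN customer c ON o.cust_id = c.cust_id
    GROUP BY c.region
    HAVING AVG(o.total) > 100
    ORDER BY avg_order DESC
""").fetchall()
print(answer)    # -> [('West', 400.0), ('East', 140.0)]
```

Each added clause (join condition, aggregate, group filter) is a distinct complexity dimension a formulation experiment can manipulate.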
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-15
... construct a database of regional small businesses that currently or may in the future participate in DOT direct and DOT funded transportation related contracts, and make this database available to OSDBU, upon request. 2. Utilize the database of regional transportation-related small businesses to match...
Ocean Drilling Program: Web Site Access Statistics
Atomic Spectroscopic Databases at NIST
NASA Technical Reports Server (NTRS)
Reader, J.; Kramida, A. E.; Ralchenko, Yu.
2006-01-01
We describe recent work at NIST to develop and maintain databases for spectra, transition probabilities, and energy levels of atoms that are astrophysically important. Our programs to critically compile these data as well as to develop a new database to compare plasma calculations for atoms that are not in local thermodynamic equilibrium are also summarized.
ERIC Educational Resources Information Center
English, Diana J.; Brandford, Carol C.; Coghlan, Laura
2000-01-01
Discusses the strengths and weaknesses of administrative databases, issues with their implementation and data analysis, and effective presentation of their data at different levels in child welfare organizations. Focuses on the development and implementation of Washington state's Children's Administration's administrative database, the Case and…
Database resources of the National Center for Biotechnology Information
Wheeler, David L.; Barrett, Tanya; Benson, Dennis A.; Bryant, Stephen H.; Canese, Kathi; Chetvernin, Vyacheslav; Church, Deanna M.; DiCuccio, Michael; Edgar, Ron; Federhen, Scott; Geer, Lewis Y.; Helmberg, Wolfgang; Kapustin, Yuri; Kenton, David L.; Khovayko, Oleg; Lipman, David J.; Madden, Thomas L.; Maglott, Donna R.; Ostell, James; Pruitt, Kim D.; Schuler, Gregory D.; Schriml, Lynn M.; Sequeira, Edwin; Sherry, Stephen T.; Sirotkin, Karl; Souvorov, Alexandre; Starchenko, Grigory; Suzek, Tugba O.; Tatusov, Roman; Tatusova, Tatiana A.; Wagner, Lukas; Yaschenko, Eugene
2006-01-01
In addition to maintaining the GenBank(R) nucleic acid sequence database, the National Center for Biotechnology Information (NCBI) provides analysis and retrieval resources for the data in GenBank and other biological data made available through NCBI's Web site. NCBI resources include Entrez, the Entrez Programming Utilities, MyNCBI, PubMed, PubMed Central, Entrez Gene, the NCBI Taxonomy Browser, BLAST, BLAST Link (BLink), Electronic PCR, OrfFinder, Spidey, Splign, RefSeq, UniGene, HomoloGene, ProtEST, dbMHC, dbSNP, Cancer Chromosomes, Entrez Genomes and related tools, the Map Viewer, Model Maker, Evidence Viewer, Clusters of Orthologous Groups, Retroviral Genotyping Tools, HIV-1/Human Protein Interaction Database, SAGEmap, Gene Expression Omnibus, Entrez Probe, GENSAT, Online Mendelian Inheritance in Man, Online Mendelian Inheritance in Animals, the Molecular Modeling Database, the Conserved Domain Database, the Conserved Domain Architecture Retrieval Tool and the PubChem suite of small molecule databases. Augmenting many of the Web applications are custom implementations of the BLAST program optimized to search specialized datasets. All of the resources can be accessed through the NCBI home page at www.ncbi.nlm.nih.gov. PMID:16381840
Database resources of the National Center for Biotechnology Information.
Sayers, Eric W; Barrett, Tanya; Benson, Dennis A; Bolton, Evan; Bryant, Stephen H; Canese, Kathi; Chetvernin, Vyacheslav; Church, Deanna M; Dicuccio, Michael; Federhen, Scott; Feolo, Michael; Fingerman, Ian M; Geer, Lewis Y; Helmberg, Wolfgang; Kapustin, Yuri; Krasnov, Sergey; Landsman, David; Lipman, David J; Lu, Zhiyong; Madden, Thomas L; Madej, Tom; Maglott, Donna R; Marchler-Bauer, Aron; Miller, Vadim; Karsch-Mizrachi, Ilene; Ostell, James; Panchenko, Anna; Phan, Lon; Pruitt, Kim D; Schuler, Gregory D; Sequeira, Edwin; Sherry, Stephen T; Shumway, Martin; Sirotkin, Karl; Slotta, Douglas; Souvorov, Alexandre; Starchenko, Grigory; Tatusova, Tatiana A; Wagner, Lukas; Wang, Yanli; Wilbur, W John; Yaschenko, Eugene; Ye, Jian
2012-01-01
In addition to maintaining the GenBank® nucleic acid sequence database, the National Center for Biotechnology Information (NCBI) provides analysis and retrieval resources for the data in GenBank and other biological data made available through the NCBI Website. NCBI resources include Entrez, the Entrez Programming Utilities, MyNCBI, PubMed, PubMed Central (PMC), Gene, the NCBI Taxonomy Browser, BLAST, BLAST Link (BLink), Primer-BLAST, COBALT, Splign, RefSeq, UniGene, HomoloGene, ProtEST, dbMHC, dbSNP, dbVar, Epigenomics, Genome and related tools, the Map Viewer, Model Maker, Evidence Viewer, Trace Archive, Sequence Read Archive, BioProject, BioSample, Retroviral Genotyping Tools, HIV-1/Human Protein Interaction Database, Gene Expression Omnibus (GEO), Probe, Online Mendelian Inheritance in Animals (OMIA), the Molecular Modeling Database (MMDB), the Conserved Domain Database (CDD), the Conserved Domain Architecture Retrieval Tool (CDART), Biosystems, Protein Clusters and the PubChem suite of small molecule databases. Augmenting many of the Web applications are custom implementations of the BLAST program optimized to search specialized data sets. All of these resources can be accessed through the NCBI home page at www.ncbi.nlm.nih.gov.
The Israel DNA database--the establishment of a rapid, semi-automated analysis system.
Zamir, Ashira; Dell'Ariccia-Carmon, Aviva; Zaken, Neomi; Oz, Carla
2012-03-01
The Israel Police DNA database, also known as IPDIS (Israel Police DNA Index System), has been operating since February 2007. During that time more than 135,000 reference samples have been uploaded and more than 2000 hits reported. We have developed an effective semi-automated system that includes two automated punchers, three liquid handler robots and four genetic analyzers. An in-house LIMS program enables full tracking of every sample through the entire process of registration, pre-PCR handling, analysis of profiles, uploading to the database, hit reports and ultimately storage. The LIMS is also responsible for the future tracking of samples and their profiles to be expunged from the database according to the Israeli DNA legislation. The database is administered by an in-house developed software program, where reference and evidentiary profiles are uploaded, stored, searched and matched. The DNA database has proven to be an effective investigative tool which has gained the confidence of the Israeli public and on which the Israel National Police force has grown to rely. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Database resources of the National Center for Biotechnology Information
Acland, Abigail; Agarwala, Richa; Barrett, Tanya; Beck, Jeff; Benson, Dennis A.; Bollin, Colleen; Bolton, Evan; Bryant, Stephen H.; Canese, Kathi; Church, Deanna M.; Clark, Karen; DiCuccio, Michael; Dondoshansky, Ilya; Federhen, Scott; Feolo, Michael; Geer, Lewis Y.; Gorelenkov, Viatcheslav; Hoeppner, Marilu; Johnson, Mark; Kelly, Christopher; Khotomlianski, Viatcheslav; Kimchi, Avi; Kimelman, Michael; Kitts, Paul; Krasnov, Sergey; Kuznetsov, Anatoliy; Landsman, David; Lipman, David J.; Lu, Zhiyong; Madden, Thomas L.; Madej, Tom; Maglott, Donna R.; Marchler-Bauer, Aron; Karsch-Mizrachi, Ilene; Murphy, Terence; Ostell, James; O'Sullivan, Christopher; Panchenko, Anna; Phan, Lon; Preuss, Don; Pruitt, Kim D.; Rubinstein, Wendy; Sayers, Eric W.; Schneider, Valerie; Schuler, Gregory D.; Sequeira, Edwin; Sherry, Stephen T.; Shumway, Martin; Sirotkin, Karl; Siyan, Karanjit; Slotta, Douglas; Soboleva, Alexandra; Soussov, Vladimir; Starchenko, Grigory; Tatusova, Tatiana A.; Trawick, Bart W.; Vakatov, Denis; Wang, Yanli; Ward, Minghong; Wilbur, W. John; Yaschenko, Eugene; Zbicz, Kerry
2014-01-01
In addition to maintaining the GenBank® nucleic acid sequence database, the National Center for Biotechnology Information (NCBI, http://www.ncbi.nlm.nih.gov) provides analysis and retrieval resources for the data in GenBank and other biological data made available through the NCBI Web site. NCBI resources include Entrez, the Entrez Programming Utilities, MyNCBI, PubMed, PubMed Central, PubReader, Gene, the NCBI Taxonomy Browser, BLAST, BLAST Link, Primer-BLAST, COBALT, RefSeq, UniGene, HomoloGene, ProtEST, dbMHC, dbSNP, dbVar, Epigenomics, the Genetic Testing Registry, Genome and related tools, the Map Viewer, Trace Archive, Sequence Read Archive, BioProject, BioSample, ClinVar, MedGen, HIV-1/Human Protein Interaction Database, Gene Expression Omnibus, Probe, Online Mendelian Inheritance in Animals, the Molecular Modeling Database, the Conserved Domain Database, the Conserved Domain Architecture Retrieval Tool, Biosystems, Protein Clusters and the PubChem suite of small molecule databases. Augmenting many of the Web applications are custom implementations of the BLAST program optimized to search specialized data sets. All these resources can be accessed through the NCBI home page. PMID:24259429
NASA Technical Reports Server (NTRS)
Johnson, Paul W.
2008-01-01
ePORT (electronic Project Online Risk Tool) provides a systematic approach to using an electronic database program to manage program/project risk management processes. This presentation briefly covers standard risk management procedures, then covers NASA's risk management tool, ePORT, in depth. ePORT is a web-based risk management program that provides a common framework to capture and manage risks, independent of a program's or project's size and budget. By covering the full risk management paradigm and providing standardized evaluation criteria for common management reporting, ePORT improves Product Line, Center and Corporate Management insight, simplifies program/project manager reporting, and maintains an archive of data for historical reference.
GénoPlante-Info (GPI): a collection of databases and bioinformatics resources for plant genomics
Samson, Delphine; Legeai, Fabrice; Karsenty, Emmanuelle; Reboux, Sébastien; Veyrieras, Jean-Baptiste; Just, Jeremy; Barillot, Emmanuel
2003-01-01
Génoplante is a partnership program between public French institutes (INRA, CIRAD, IRD and CNRS) and private companies (Biogemma, Bayer CropScience and Bioplante) that aims at developing genome analysis programs for crop species (corn, wheat, rapeseed, sunflower and pea) and model plants (Arabidopsis and rice). The outputs of these programs form a wealth of information (genomic sequence, transcriptome, proteome, allelic variability, mapping and synteny, and mutation data) and tools (databases, interfaces, analysis software), which are being integrated and made public at the public bioinformatics resource centre of Génoplante: GénoPlante-Info (GPI). This growing body of data and tools is regularly updated and will continue to expand over the coming two years. Access to the GPI databases and tools is available at http://genoplante-info.infobiogen.fr/. PMID:12519976
Spacecraft Orbit Design and Analysis (SODA), version 1.0 user's guide
NASA Technical Reports Server (NTRS)
Stallcup, Scott S.; Davis, John S.
1989-01-01
The Spacecraft Orbit Design and Analysis (SODA) computer program, Version 1.0, is described. SODA is a spaceflight mission planning system which consists of five program modules integrated around a common database and user interface. SODA runs on a VAX/VMS computer with an Evans & Sutherland PS300 graphics workstation. The Boeing RIM (Version 7) relational database management system provides transparent database services. In the current version, three program modules produce an interactive three-dimensional (3D) animation of one or more satellites in planetary orbit. Satellite visibility and sensor coverage capabilities are also provided. One module produces an interactive 3D animation of the solar system. Another module calculates cumulative satellite sensor coverage and revisit time for one or more satellites. Currently, the Earth, Moon, and Mars systems are supported by all modules except the solar system module.
CBS Genome Atlas Database: a dynamic storage for bioinformatic results and sequence data.
Hallin, Peter F; Ussery, David W
2004-12-12
Currently, new bacterial genomes are being published on a monthly basis. With the growing amount of genome sequence data, there is a demand for a flexible and easy-to-maintain structure for storing sequence data and results from bioinformatic analysis. More than 150 sequenced bacterial genomes are now available, and comparisons of properties for taxonomically similar organisms are not readily available to many biologists. In addition to the most basic information, such as AT content, chromosome length, tRNA count and rRNA count, a large number of more complex calculations are needed to perform detailed comparative genomics. DNA structural calculations like curvature and stacking energy, and DNA compositions like base skews, oligo skews and repeats at the local and global level, are just a few of the analyses that are presented on the CBS Genome Atlas Web page. Complex analyses, changing methods and the frequent addition of new models are factors that require a dynamic database layout. Using basic tools like the GNU Make system, csh, Perl and MySQL, we have created a flexible database environment for storing and maintaining such results for a collection of complete microbial genomes. Currently, these results amount to more than 220 pieces of information. The backbone of this solution consists of a program package written in Perl, which enables administrators to synchronize and update the database content. The MySQL database has been connected to the CBS web server via PHP4, to present dynamic web content to users outside the center. This solution is tightly fitted to the existing server infrastructure, and the solutions proposed here can perhaps serve as a template for other research groups facing similar database issues. A web-based user interface which is dynamically linked to the Genome Atlas Database can be accessed via www.cbs.dtu.dk/services/GenomeAtlas/. This paper has a supplemental information page which links to the examples presented: www.cbs.dtu.dk/services/GenomeAtlas/suppl/bioinfdatabase.
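The per-genome composition measures the abstract lists (AT content, base skews) are simple counting statistics. As an illustrative sketch of the kind of values stored per genome in an atlas-style database (the sequences here are hypothetical, not from the CBS collection):

```python
def at_content(seq):
    """Fraction of A and T bases in a DNA sequence."""
    seq = seq.upper()
    return (seq.count("A") + seq.count("T")) / len(seq)

def gc_skew(seq):
    """GC skew, (G - C) / (G + C): a base-composition measure
    computed locally (in windows) or globally over a chromosome."""
    seq = seq.upper()
    g, c = seq.count("G"), seq.count("C")
    return (g - c) / (g + c) if (g + c) else 0.0

print(at_content("ATGCATGC"))  # 0.5
print(gc_skew("GGGC"))         # 0.5
```

Computed over sliding windows rather than whole sequences, the same skew statistic highlights replication origins and other local composition biases.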
Importance of Data Management in a Long-term Biological Monitoring Program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christensen, Sigurd W; Brandt, Craig C; McCracken, Kitty
2011-01-01
The long-term Biological Monitoring and Abatement Program (BMAP) has always needed to collect and retain high-quality data on which to base its assessments of the ecological status of streams and their recovery after remediation. Its formal quality assurance, data processing, and data management components all contribute to this need. The Quality Assurance Program comprehensively addresses requirements from various institutions, funders, and regulators, and includes a data management component. Centralized data management began a few years into the program. An existing relational database was adapted and extended to handle biological data. Data modeling enabled the program's database to process, store, and retrieve its data. The database's main data tables and several key reference tables are described. One of the most important related activities supporting long-term analyses was the establishment of standards for sampling site names, taxonomic identification, flagging, and other components. There are limitations. Some types of program data were not easily accommodated in the central systems, and many possible data-sharing and integration options are not easily accessible to investigators. The implemented relational database supports the transmittal of data to the Oak Ridge Environmental Information System (OREIS) as the permanent repository. From our experience we offer data management advice to other biologically oriented long-term environmental sampling and analysis programs.
NASA Technical Reports Server (NTRS)
Arya, Vinod K.; Halford, Gary R. (Technical Monitor)
2003-01-01
This manual presents the computer program FLAPS for characterizing and predicting the fatigue and creep-fatigue resistance of metallic materials in the high-temperature, long-life regime for isothermal and nonisothermal fatigue. The program uses the Total Strain version of Strainrange Partitioning (TS-SRP) and several other life prediction methods described in this manual. The user should be thoroughly familiar with TS-SRP and these life prediction methods before attempting to use any of these programs. Improper understanding can lead to incorrect use of the method and erroneous life predictions. An extensive database has also been developed in a parallel effort. The database is probably the largest source of high-temperature, creep-fatigue test data available in the public domain and can be used with other life-prediction methods as well. This users' manual, software, and database are all in the public domain and can be obtained by contacting the author. The Compact Disk (CD) accompanying this manual contains an executable file for the FLAPS program, two datasets required for the example problems in the manual, and the creep-fatigue data in a format compatible with these programs.
Nadkarni, P. M.; Miller, P. L.
1991-01-01
A parallel program for inter-database sequence comparison was developed on the Intel Hypercube using two models of parallel programming. One version was built using machine-specific Hypercube parallel programming commands. The other version was built using Linda, a machine-independent parallel programming language. The two versions of the program provide a case study comparing these two approaches to parallelization in an important biological application area. Benchmark tests with both programs gave comparable results with a small number of processors. As the number of processors was increased, the Linda version was somewhat less efficient. The Linda version was also run without change on Network Linda, a virtual parallel machine running on a network of desktop workstations. PMID:1807632
New database for improving virtual system “body-dress”
NASA Astrophysics Data System (ADS)
Yan, J. Q.; Zhang, S. C.; Kuzmichev, V. E.; Adolphe, D. C.
2017-10-01
The aim of this exploration is to develop a new database of solid algorithms and relations between dress fit and fabric mechanical properties and pattern block construction, for improving the realism of the virtual system “body-dress”. In virtual simulation, the system “body-clothing” sometimes shows results that differ noticeably from reality, especially when important changes in pattern block and fabrics are involved. In this research, to enhance the simulation process, diverse fit parameters were proposed: bottom height of dress, angle of front center contours, and air volume and its distribution between dress and dummy. Measurements were made with a ruler, a camera, a 3D body scanner, image-processing software and 3D-modeling software. In the meantime, pattern block indexes were measured and fabric properties were tested by KES. Finally, the correlation and linear regression equations between indexes of fabric properties, pattern blocks and fit parameters were investigated. In this manner, the new database could be extended into programming modules of virtual design for more realistic results.
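The linear regression equations relating fabric properties to fit parameters can be obtained by ordinary least squares. A minimal sketch, with hypothetical data standing in for the study's measurements (the variable names and values are illustrative, not from the paper):

```python
def linfit(xs, ys):
    """Ordinary least-squares fit y = a*x + b; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

# Hypothetical pairs: a fabric mechanical index vs. a fit parameter.
xs = [0.02, 0.04, 0.06, 0.08]
ys = [1.1, 1.5, 1.9, 2.3]
a, b = linfit(xs, ys)
print(round(a, 3), round(b, 3))  # 20.0 0.7
```

With the slope and intercept in hand, the fitted equation can predict a fit parameter from fabric properties inside a virtual-design module.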
Iodine in food- and dietary supplement–composition databases
Pehrsson, Pamela R; Patterson, Kristine Y; Spungen, Judith H; Wirtz, Mark S; Andrews, Karen W; Dwyer, Johanna T; Swanson, Christine A
2016-01-01
The US Food and Drug Administration (FDA) and the Nutrient Data Laboratory (NDL) of the USDA Agricultural Research Service have worked independently on determining the iodine content of foods and dietary supplements and are now harmonizing their efforts. The objective of the current article is to describe the harmonization plan and the results of initial iodine analyses accomplished under that plan. For many years, the FDA’s Total Diet Study (TDS) has measured iodine concentrations in selected foods collected in 4 regions of the country each year. For more than a decade, the NDL has collected and analyzed foods as part of the National Food and Nutrient Analysis Program; iodine analysis is now being added to the program. The NDL recently qualified a commercial laboratory to conduct iodine analysis of foods by an inductively coupled plasma mass spectrometry (ICP-MS) method. Co-analysis of a set of samples by the commercial laboratory using the ICP-MS method and by the FDA laboratory using its standard colorimetric method yielded comparable results. The FDA recently reviewed historical TDS data for trends in the iodine content of selected foods, and the NDL analyzed samples of a limited subset of those foods for iodine. The FDA and the NDL are working to combine their data on iodine in foods and to produce an online database that can be used for estimating iodine intake from foods in the US population. In addition, the NDL continues to analyze dietary supplements for iodine and, in collaboration with the NIH Office of Dietary Supplements, to publish the data online in the Dietary Supplement Ingredient Database. The goal is to provide, through these 2 harmonized databases and the continuing TDS focus on iodine, improved tools for estimating iodine intake in population studies. PMID:27534627
Mapping the literature of athletic training
Delwiche, Frances A.; Hall, Ellen F.
2007-01-01
Purpose: This paper identifies the core literature of athletic training and determines which major databases provide the most thorough intellectual access to this literature. Methods: This study collected all cited references from 2002 to 2004 of three journals widely read by those in the athletic training field. Bradford's Law of Scattering was applied to the resulting list to determine the core journal titles in the discipline. Three major databases were reviewed for extent of their coverage of these core journals. Results: Of the total 8,678 citations, one-third referenced a compact group of 6 journals; another third of the citations referenced an additional 40 titles. The remaining 2,837 citations were scattered across 1,034 additional journal titles. Conclusions: The number and scatter of citations over a three-year period identified forty-six key journals in athletic training. The study results can inform athletic trainers of the core literature in their field, encourage database producers (e.g., MEDLINE, SPORTDiscus, CINAHL) to increase coverage of titles that are not indexed or underindexed, and guide purchasing decisions for libraries serving athletic training programs. PMID:17443253
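Bradford's Law of Scattering, as applied above, ranks journals by citation count and splits them into zones that each account for roughly one third of all citations; the first, compact zone is the core. A minimal sketch with hypothetical citation counts (not the study's data):

```python
def bradford_zones(journal_counts, n_zones=3):
    """Partition journals (title -> citation count) into Bradford zones,
    each holding roughly an equal share of total citations."""
    ranked = sorted(journal_counts.items(), key=lambda kv: kv[1], reverse=True)
    total = sum(c for _, c in ranked)
    target = total / n_zones
    zones, current, cum = [], [], 0
    for title, count in ranked:
        current.append(title)
        cum += count
        # Close a zone once its cumulative citation share is reached.
        if cum >= target * (len(zones) + 1) and len(zones) < n_zones - 1:
            zones.append(current)
            current = []
    zones.append(current)
    return zones

counts = {"J1": 400, "J2": 300, "J3": 100, "J4": 80,
          "J5": 60, "J6": 30, "J7": 20, "J8": 10}
zones = bradford_zones(counts)
print([len(z) for z in zones])  # [1, 1, 6]
```

The widening zones (1, 1, 6 journals here) reproduce in miniature the scatter the study reports: a handful of core titles, then progressively larger groups carrying the same citation volume.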
The COG database: a tool for genome-scale analysis of protein functions and evolution
Tatusov, Roman L.; Galperin, Michael Y.; Natale, Darren A.; Koonin, Eugene V.
2000-01-01
Rational classification of proteins encoded in sequenced genomes is critical for making the genome sequences maximally useful for functional and evolutionary studies. The database of Clusters of Orthologous Groups of proteins (COGs) is an attempt on a phylogenetic classification of the proteins encoded in 21 complete genomes of bacteria, archaea and eukaryotes (http://www.ncbi.nlm.nih.gov/COG ). The COGs were constructed by applying the criterion of consistency of genome-specific best hits to the results of an exhaustive comparison of all protein sequences from these genomes. The database comprises 2091 COGs that include 56–83% of the gene products from each of the complete bacterial and archaeal genomes and ~35% of those from the yeast Saccharomyces cerevisiae genome. The COG database is accompanied by the COGNITOR program that is used to fit new proteins into the COGs and can be applied to functional and phylogenetic annotation of newly sequenced genomes. PMID:10592175
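The consistency criterion described above seeds a COG when genome-specific best hits across three genomes form a closed triangle. A simplified sketch of that check, hardcoded to three hypothetical genomes G1-G3 (the hit table and protein names are invented for illustration):

```python
def find_triangles(best_hit):
    """best_hit[(genome_a, genome_b)][protein] gives the genome-specific
    best hit of `protein` in genome_b. Return protein triples whose best
    hits across G1 -> G2 -> G3 -> G1 close consistently."""
    triangles = []
    for a, b in best_hit.get(("G1", "G2"), {}).items():
        c = best_hit.get(("G2", "G3"), {}).get(b)
        if c is not None and best_hit.get(("G3", "G1"), {}).get(c) == a:
            triangles.append((a, b, c))
    return triangles

# Hypothetical three-genome best-hit table: p1/q1/r1 close a triangle,
# p2's chain does not return to p2.
hits = {
    ("G1", "G2"): {"p1": "q1", "p2": "q2"},
    ("G2", "G3"): {"q1": "r1", "q2": "r9"},
    ("G3", "G1"): {"r1": "p1", "r9": "p7"},
}
print(find_triangles(hits))  # [('p1', 'q1', 'r1')]
```

In the actual construction, triangles sharing an edge are then merged to form multi-genome clusters; this sketch covers only the seeding step.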
GenomeHubs: simple containerized setup of a custom Ensembl database and web server for any species
Kumar, Sujai; Stevens, Lewis; Blaxter, Mark
2017-01-01
Abstract As the generation and use of genomic datasets is becoming increasingly common in all areas of biology, the need for resources to collate, analyse and present data from one or more genome projects is becoming more pressing. The Ensembl platform is a powerful tool to make genome data and cross-species analyses easily accessible through a web interface and a comprehensive application programming interface. Here we introduce GenomeHubs, which provide a containerized environment to facilitate the setup and hosting of custom Ensembl genome browsers. This simplifies mirroring of existing content and import of new genomic data into the Ensembl database schema. GenomeHubs also provide a set of analysis containers to decorate imported genomes with results of standard analyses and functional annotations and support export to flat files, including EMBL format for submission of assemblies and annotations to International Nucleotide Sequence Database Collaboration. Database URL: http://GenomeHubs.org PMID:28605774
An international aerospace information system: A cooperative opportunity
NASA Technical Reports Server (NTRS)
Cotter, Gladys A.; Blados, Walter R.
1992-01-01
Scientific and technical information (STI) is a valuable resource which represents the results of large investments in research and development (R&D), and the expertise of a nation. NASA and its predecessor organizations have developed and managed the preeminent aerospace information system. We see information and information systems changing and becoming more international in scope. In Europe, consistent with joint R&D programs and a view toward a united Europe, we have seen the emergence of a European Aerospace Database concept. In addition, the development of aeronautics and astronautics in individual nations has also led to initiatives for national aerospace databases. Considering recent technological developments in information science and technology, as well as the reality of scarce resources in all nations, it is time to reconsider the mutually beneficial possibilities offered by cooperation and international resource sharing. The paper considers the possibilities offered by cooperation among the various aerospace database efforts, moving toward an international aerospace database initiative that can optimize the cost/benefit equation for all participants.
Yanagita, Satoshi; Imahana, Masato; Suwa, Kazuaki; Sugimura, Hitomi; Nishiki, Masayuki
2016-01-01
The Japanese Society of Radiological Technology (JSRT) standard digital image database contains many useful cases of chest X-ray images and has been used in much state-of-the-art research. However, the pixel values of all the images were simply digitized as relative density values by a scanned-film digitizer. As a result, the pixel values are completely different from the standardized display system input value of digital imaging and communications in medicine (DICOM), called the presentation value (P-value), which maintains visual consistency when images are observed on displays of different luminance. Therefore, we converted all the images in the JSRT standard digital image database to DICOM format and then converted the pixel values to P-values using an original program we developed. Consequently, the JSRT standard digital image database has been modified so that the visual consistency of images is maintained across displays of different luminance.
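The actual density-to-P-value conversion goes through the DICOM Grayscale Standard Display Function and is more involved than can be shown here; as a simplified sketch of the core step, a pixel value can be remapped through a sorted lookup table by linear interpolation. The 4-point table below is a hypothetical stand-in, not the JSRT/DICOM calibration:

```python
from bisect import bisect_left

def to_p_value(pixel, table):
    """Map a relative-density pixel value to a presentation value by
    linear interpolation in a (density, p_value) lookup table sorted
    by density. The table is an assumed placeholder for a mapping
    derived from the DICOM Grayscale Standard Display Function."""
    densities = [d for d, _ in table]
    if pixel <= densities[0]:
        return table[0][1]
    if pixel >= densities[-1]:
        return table[-1][1]
    i = bisect_left(densities, pixel)
    (d0, p0), (d1, p1) = table[i - 1], table[i]
    return p0 + (p1 - p0) * (pixel - d0) / (d1 - d0)

# Hypothetical 4-point calibration table.
lut = [(0, 0), (1023, 255), (2047, 511), (4095, 1023)]
print(to_p_value(512, lut))
```

Applied pixel-by-pixel to an image array, such a mapping is what makes the stored values display-consistent across monitors calibrated to the same standard function.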
2012-01-01
Background The COG database is the most popular collection of orthologous proteins from many different completely sequenced microbial genomes. By definition, a cluster of orthologous groups (COG) within this database exclusively contains proteins that most likely perform the same cellular function. Recently, the COG database was extended by assigning to every protein both the corresponding amino acid and its encoding nucleotide sequence, resulting in the NUCOCOG database. This extended version of the COG database is a valuable resource connecting sequence features with the functionality of the respective proteins. Results Here we present ANCAC, a web tool and MySQL database for the analysis of amino acid, nucleotide, and codon frequencies in COGs on the basis of freely definable phylogenetic patterns. We demonstrate the usefulness of ANCAC by analyzing amino acid frequencies, codon usage, and GC-content in a species- or function-specific context. With respect to amino acids we, at least in part, confirm the cognate bias hypothesis by using ANCAC’s NUCOCOG dataset as the largest one available for that purpose thus far. Conclusions Using the NUCOCOG datasets, ANCAC connects taxonomic, amino acid, and nucleotide sequence information with the functional classification via COGs and provides a GUI for flexible mining for sequence bias. Thereby, to our knowledge, it is the only tool for the analysis of sequence composition in the light of physiological roles and phylogenetic context without requiring substantial programming skills. PMID:22958836
ThermoBuild: Online Method Made Available for Accessing NASA Glenn Thermodynamic Data
NASA Technical Reports Server (NTRS)
McBride, Bonnie; Zehe, Michael J.
2004-01-01
The new Web site program "ThermoBuild" allows users to easily access and use the NASA Glenn Thermodynamic Database of over 2000 solid, liquid, and gaseous species. A convenient periodic table allows users to "build" the molecules of interest and designate the temperature range over which thermodynamic functions are to be displayed. ThermoBuild also allows users to build custom databases for use with NASA's Chemical Equilibrium with Applications (CEA) program or other programs that require the NASA format for thermodynamic properties. The NASA Glenn Research Center has long been a leader in the compilation and dissemination of up-to-date thermodynamic data, primarily for use with the NASA CEA program, but increasingly for use with other computer programs.
Implementation and Evaluation of Microcomputer Systems for the Republic of Turkey’s Naval Ships.
1986-03-01
important database design tool for both logical and physical database design, such as flowcharts or pseudocodes are used for program design. Logical...string manipulation in FORTRAN is difficult but not impossible. BASIC (Beginners All-Purpose Symbolic Instruction Code): Basic is currently the most...63 APPENDIX B GLOSSARY/ACRONYM LIST AC Alternating Current AP Application Program BASIC Beginners All-purpose Symbolic Instruction Code CCP
Database Management in Design Optimization.
1983-10-30
processing program(s) engaged in the task of preparing input data for the (finite-element) analysis and optimization phases primary storage the main...and extraction of data from the database for further processing . It can be divided into two phases: a) The process of selection and identification of ...user wishes to stop the reading or the writing process . The meaning of END depends on the method specified for retrieving data: a) Row-wise - then
The Reach Address Database (RAD) stores reach address information for each Water Program feature that has been linked to the underlying surface water features (streams, lakes, etc) in the National Hydrology Database (NHD) Plus dataset.
Ocean Drilling Program: Mirror Sites
Ocean Drilling Program: TAMU Staff Directory
Ocean Drilling Program: Publication Services: Online Manuscript Submission
Authors should use the submission and review forms available on the IODP-USIO publications web site.
NASA Technical Reports Server (NTRS)
2004-01-01
This is a listing of recent unclassified RTO technical publications processed by the NASA Center for AeroSpace Information from July 1, 2004 through September 30, 2004 available on the NASA Aeronautics and Space Database. Topics covered include: military training; personal active noise reduction; electric combat vehicles.
NASA Technical Reports Server (NTRS)
2000-01-01
This is a quarterly listing of unclassified AGARD and RTO technical publications NASA received and announced in the NASA STI Database. Contents include 1) Sensor Data Fusion and Integration of the Human Element; 2) Planar Optical Measurement Methods for Gas Turbine Components; 3) RTO Highlights 1998, December 1998.
O’Suilleabhain, Padraig E.; Sanghera, Manjit; Patel, Neepa; Khemani, Pravin; Lacritz, Laura H.; Chitnis, Shilpa; Whitworth, Louis A.; Dewey, Richard B.
2016-01-01
Objective To develop a process to improve patient outcomes from deep brain stimulation (DBS) surgery for Parkinson disease (PD), essential tremor (ET), and dystonia. Methods We employed standard quality improvement methodology using the Plan-Do-Study-Act process to improve patient selection, surgical DBS lead implantation, postoperative programming, and ongoing assessment of patient outcomes. Results The result of this quality improvement process was the development of a neuromodulation network. The key aspect of this program is rigorous patient assessment of both motor and non-motor outcomes tracked longitudinally using a REDCap database. We describe how this information is used to identify problems and to initiate Plan-Do-Study-Act cycles to address them. Preliminary outcomes data are presented for the cohort of PD and ET patients who have received surgery since the creation of the neuromodulation network. Conclusions Careful outcomes tracking is essential to ensure quality in a complex therapeutic endeavor like DBS surgery for movement disorders. The REDCap database system is well suited to store outcomes data for the purpose of ongoing quality assurance monitoring. PMID:27711133
Overview of Faculty Development Programs for Interprofessional Education.
Ratka, Anna; Zorek, Joseph A; Meyer, Susan M
2017-06-01
Objectives. To describe characteristics of faculty development programs designed to facilitate interprofessional education, and to compile recommendations for development, delivery, and assessment of such faculty development programs. Methods. MEDLINE, CINAHL, ERIC, and Web of Science databases were searched using three keywords: faculty development, interprofessional education, and health professions. Articles meeting inclusion criteria were analyzed for emergent themes, including program design, delivery, participants, resources, and assessment. Results. Seventeen articles were identified for inclusion, yielding five characteristics of a successful program: institutional support; objectives and outcomes based on interprofessional competencies; focus on consensus-building and group facilitation skills; flexibility based on institution- and participant-specific characteristics; and incorporation of an assessment strategy. Conclusion. The themes and characteristics identified in this literature overview may support development of faculty development programs for interprofessional education. An advanced evidence base for interprofessional education faculty development programs is needed.
50 CFR 660.150 - Mothership (MS) Coop Program.
Code of Federal Regulations, 2012 CFR
2012-10-01
... record in the NMFS permit database. The application will contain the basis of NMFS' calculation. The... registration as listed in the NMFS permit database, or in the identification of the mothership owner or...
50 CFR 660.150 - Mothership (MS) Coop Program.
Code of Federal Regulations, 2013 CFR
2013-10-01
... record in the NMFS permit database. The application will contain the basis of NMFS' calculation. The... registration as listed in the NMFS permit database, or in the identification of the mothership owner or...
Harris, Eric S J; Erickson, Sean D; Tolopko, Andrew N; Cao, Shugeng; Craycroft, Jane A; Scholten, Robert; Fu, Yanling; Wang, Wenquan; Liu, Yong; Zhao, Zhongzhen; Clardy, Jon; Shamu, Caroline E; Eisenberg, David M
2011-05-17
Ethnobotanically driven drug-discovery programs include data related to many aspects of the preparation of botanical medicines, from initial plant collection to chemical extraction and fractionation. The Traditional Medicine Collection Tracking System (TM-CTS) was created to organize and store data of this type for an international collaborative project involving the systematic evaluation of commonly used Traditional Chinese Medicinal plants. The system was developed using domain-driven design techniques, and is implemented using Java, Hibernate, PostgreSQL, Business Intelligence and Reporting Tools (BIRT), and Apache Tomcat. The TM-CTS relational database schema contains over 70 data types, comprising over 500 data fields. The system incorporates a number of unique features that are useful in the context of ethnobotanical projects such as support for information about botanical collection, method of processing, quality tests for plants with existing pharmacopoeia standards, chemical extraction and fractionation, and historical uses of the plants. The database also accommodates data provided in multiple languages and integration with a database system built to support high throughput screening based drug discovery efforts. It is accessed via a web-based application that provides extensive, multi-format reporting capabilities. This new database system was designed to support a project evaluating the bioactivity of Chinese medicinal plants. The software used to create the database is open source, freely available, and could potentially be applied to other ethnobotanically driven natural product collection and drug-discovery programs. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
CRAVE: a database, middleware and visualization system for phenotype ontologies.
Gkoutos, Georgios V; Green, Eain C J; Greenaway, Simon; Blake, Andrew; Mallon, Ann-Marie; Hancock, John M
2005-04-01
A major challenge in modern biology is to link genome sequence information to organismal function. In many organisms this is being done by characterizing phenotypes resulting from mutations. Efficiently expressing phenotypic information requires combinatorial use of ontologies. However, tools are not currently available to visualize combinations of ontologies. Here we describe CRAVE (Concept Relation Assay Value Explorer), a package allowing storage, active updating and visualization of multiple ontologies. CRAVE is a web-accessible Java application that accesses an underlying MySQL database of ontologies via a Java persistent middleware layer (Chameleon). This maps the database tables into discrete Java classes and creates memory-resident, interlinked objects corresponding to the ontology data. These Java objects are accessed via calls through the middleware's application programming interface. CRAVE allows simultaneous display and linking of multiple ontologies and searching using Boolean and advanced searches.
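The middleware pattern described here — mapping ontology tables into memory-resident, interlinked objects — can be sketched briefly. This is a minimal illustration only: Python and sqlite3 stand in for the Java/MySQL/Chameleon stack, and the table and column names are invented.

```python
import sqlite3
from dataclasses import dataclass, field

@dataclass
class Term:
    # Memory-resident object corresponding to one ontology table row.
    term_id: int
    name: str
    children: list = field(default_factory=list)

def load_ontology(conn):
    """Map the term/relation tables into interlinked Term objects."""
    terms = {tid: Term(tid, name)
             for tid, name in conn.execute("SELECT id, name FROM term")}
    for parent, child in conn.execute("SELECT parent_id, child_id FROM is_a"):
        terms[parent].children.append(terms[child])
    return terms

# Tiny in-memory example ontology.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE term (id INTEGER, name TEXT)")
conn.execute("CREATE TABLE is_a (parent_id INTEGER, child_id INTEGER)")
conn.executemany("INSERT INTO term VALUES (?, ?)",
                 [(1, "phenotype"), (2, "behavioural phenotype")])
conn.execute("INSERT INTO is_a VALUES (1, 2)")

terms = load_ontology(conn)
print(terms[1].children[0].name)  # -> behavioural phenotype
```

The point of the pattern is that once the tables are loaded, ontology traversal is in-memory object navigation rather than repeated SQL round-trips.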
BEAUTY-X: enhanced BLAST searches for DNA queries.
Worley, K C; Culpepper, P; Wiese, B A; Smith, R F
1998-01-01
BEAUTY (BLAST Enhanced Alignment Utility) is an enhanced version of the BLAST database search tool that facilitates identification of the functions of matched sequences. Three recent improvements to the BEAUTY program described here make the enhanced output (1) available for DNA queries, (2) available for searches of any protein database, and (3) more up-to-date, with periodic updates of the domain information. BEAUTY searches of the NCBI and EMBL non-redundant protein sequence databases are available from the BCM Search Launcher Web pages (http://gc.bcm.tmc.edu:8088/search-launcher/launcher.html). BEAUTY Post-Processing of submitted search results is available using the BCM Search Launcher Batch Client (version 2.6) (ftp://gc.bcm.tmc.edu/pub/software/search-launcher/). Example figures are available at http://dot.bcm.tmc.edu:9331/papers/beautypp.html (kworley,culpep)@bcm.tmc.edu
Crowd-Sourcing with K-12 citizen scientists: The Continuing Evolution of the GLOBE Program
NASA Astrophysics Data System (ADS)
Murphy, T.; Wegner, K.; Andersen, T. J.
2016-12-01
Twenty years ago, the Internet was still in its infancy, citizen science was a relatively unknown term, and the idea of a global citizen science database was unheard of. Then the Global Learning and Observations to Benefit the Environment (GLOBE) Program was proposed and this all changed. GLOBE was one of the first K-12 citizen science programs on a global scale. An initial large-scale ramp-up of the program was followed by the establishment of a network of partners in countries and within the U.S. Now in the 21st century, the program has over 50 protocols in atmosphere, biosphere, hydrosphere and pedosphere, almost 140 million measurements in the database, a visualization system, collaborations with NASA satellite mission scientists (GPM, SMAP) and other scientists, as well as research projects by GLOBE students. As technology changed over the past two decades, it was integrated into the program's outreach efforts to existing and new members, with the result that the program now has a strong social media presence. In 2016, a new app was launched which opened up GLOBE and data entry to citizen scientists of all ages. The app is aimed at fresh audiences beyond the traditional GLOBE K-12 community. Targeted groups include scouting organizations, museums, 4-H, science learning centers, retirement communities, and others, to broaden participation in the program and increase the amount of data available to students and scientists. Through the 20 years of GLOBE, lessons have been learned about changing the management of this type of large-scale program, the use of technology to enhance and improve the experience for members, and increasing community involvement in the program.
Efficient GO2/GH2 Injector Design: A NASA, Industry and University Cooperative Effort
NASA Technical Reports Server (NTRS)
Tucker, P. K.; Klem, M. D.; Fisher, S. C.; Santoro, R. J.
1997-01-01
Developing new propulsion components in the face of shrinking budgets presents a significant challenge. The technical, schedule and funding issues common to any design/development program are complicated by the ramifications of the continuing decrease in funding for the aerospace industry. As a result, new working arrangements are evolving in the rocket industry. This paper documents a successful NASA, industry, and university cooperative effort to design efficient high performance GO2/GH2 rocket injector elements in the current budget environment. The NASA Reusable Launch Vehicle (RLV) Program initially consisted of three vehicle/engine concepts targeted at achieving single stage to orbit. One of the Rocketdyne propulsion concepts, the RS 2100 engine, used a full-flow staged-combustion cycle. Therefore, the RS 2100 main injector would combust GO2/GH2 propellants. Early in the design phase, but after budget levels and contractual arrangements had been set, the limitations of the current gas/gas injector database were identified. Most of the relevant information was at least twenty years old. Designing high performance injectors to meet the RS 2100 requirements would require the database to be updated and significantly enhanced. However, there was no funding available to address the need for more data. NASA proposed a teaming arrangement to acquire the updated information without additional funds from the RLV Program. A determination of the types and amounts of data needed was made, along with test facilities with capabilities to meet the data requirements, budget constraints, and schedule. After several iterations a program was finalized and a team established to satisfy the program goals. The Gas/Gas Injector Technology (GGIT) Program had the overall goal of increasing the ability of the rocket engine community to design efficient high-performance, durable gas/gas injectors relevant to RLV requirements.
First, the program would provide Rocketdyne with data on preliminary gas/gas injector designs that would enable discrimination among candidate injector designs. Second, the program would enhance the national gas/gas database by obtaining high-quality data that increase the understanding of gas/gas injector physics and are suitable for computational fluid dynamics (CFD) code validation. Third, the program would validate CFD codes for future gas/gas injector design in the RLV program.
ERIC Educational Resources Information Center
McGrew, Kevin; And Others
This research analyzes similarities and differences in how students with disabilities are identified in national databases, through examination of 19 national data collection programs in the U.S. Departments of Education, Commerce, Justice, and Health and Human Services, as well as databases from the National Science Foundation. The study found…
2014-06-01
and Coastal Data Information Program (CDIP). This User's Guide includes step-by-step instructions for accessing the GLOS/GLCFS database via WaveNet... access, processing and analysis tool; part 3 – CDIP database. ERDC/CHL CHETN-xx-14. Vicksburg, MS: U.S. Army Engineer Research and Development Center
Ocean Drilling Program: Information Services: Database Services
7 CFR 400.55 - Qualification for actual production history coverage program.
Code of Federal Regulations, 2013 CFR
2013-01-01
... APH yield is calculated from a database containing a minimum of four yields and will be updated each subsequent crop year. The database may contain a maximum of the 10 most recent crop years and may include... only occur in the database when there are less than four years of actual and/or assigned yields. (b...
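The yield-database rule quoted in the regulation above (a minimum of four yields, at most the 10 most recent crop years) can be illustrated with a rough sketch. This simplifies 7 CFR 400.55 considerably — the actual handling of actual, assigned, and transitional yields is more involved — so treat it as an illustration of the counting rule only.

```python
def aph_yield(actual_yields, transitional_yield):
    """Approximate the APH database rule: keep at most the 10 most
    recent crop years, pad with transitional yields up to four entries,
    then average. A simplification of 7 CFR 400.55, not the full rule."""
    db = list(actual_yields)[-10:]   # at most the 10 most recent crop years
    while len(db) < 4:               # database contains a minimum of four yields
        db.append(transitional_yield)
    return sum(db) / len(db)

# Two actual years padded with two transitional yields.
print(aph_yield([120, 100], 90))  # -> 100.0
```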
7 CFR 400.55 - Qualification for actual production history coverage program.
Code of Federal Regulations, 2011 CFR
2011-01-01
... APH yield is calculated from a database containing a minimum of four yields and will be updated each subsequent crop year. The database may contain a maximum of the 10 most recent crop years and may include... only occur in the database when there are less than four years of actual and/or assigned yields. (b...
7 CFR 400.55 - Qualification for actual production history coverage program.
Code of Federal Regulations, 2010 CFR
2010-01-01
... APH yield is calculated from a database containing a minimum of four yields and will be updated each subsequent crop year. The database may contain a maximum of the 10 most recent crop years and may include... only occur in the database when there are less than four years of actual and/or assigned yields. (b...
76 FR 68811 - Notice of Request for the Revision of Currently Approved Information Collection
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-07
... information collection: 49 U.S.C. 5335(a) and (b) National Transit Database (NTD). DATES: Comments must be... CONTACT: John D. Giorgis, National Transit Database Program Manager, FTA Office of Budget and Policy, (202... Transit Database. (OMB Number: 2132-0008). Background: 49 U.S.C. 5335(a) and (b) requires the Secretary of...
7 CFR 400.55 - Qualification for actual production history coverage program.
Code of Federal Regulations, 2012 CFR
2012-01-01
... APH yield is calculated from a database containing a minimum of four yields and will be updated each subsequent crop year. The database may contain a maximum of the 10 most recent crop years and may include... only occur in the database when there are less than four years of actual and/or assigned yields. (b...
ERIC Educational Resources Information Center
Li, Yiu-On; Leung, Shirley W.
2001-01-01
Discussion of aggregator databases focuses on a project at the Hong Kong Baptist University library to integrate full-text electronic journal titles from three unstable aggregator databases into its online public access catalog (OPAC). Explains the development of the electronic journal computer program (EJCOP) to generate MARC records for…
Common Database Interface for Heterogeneous Software Engineering Tools.
1987-12-01
SUB-GROUP: Database Management Systems; Programming (Computers); Computer Files; Information Transfer; Interfaces. 19. ABSTRACT (Continue on reverse...) Air Force Institute of Technology, Air University. In Partial Fulfillment of the Requirements for the Degree of Master of Science in Information Systems ... Literature ... System 690 Configuration ... Database Functions ... Software Engineering Environments ... Data Manager
Radiation Embrittlement Archive Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klasky, Hilda B; Bass, Bennett Richard; Williams, Paul T
2013-01-01
The Radiation Embrittlement Archive Project (REAP), which is being conducted by the Probabilistic Integrity Safety Assessment (PISA) Program at Oak Ridge National Laboratory under funding from the U.S. Nuclear Regulatory Commission's (NRC) Office of Nuclear Regulatory Research, aims to provide an archival source of information about the effect of neutron radiation on the properties of reactor pressure vessel (RPV) steels. Specifically, this project is an effort to create an Internet-accessible RPV steel embrittlement database. The project's website, https://reap.ornl.gov, provides information in two forms: (1) a document archive with surveillance capsule reports and related technical reports, in PDF format, for the 104 commercial nuclear power plants (NPPs) in the United States, along with similar reports from other countries; and (2) a relational database archive with detailed information extracted from the reports. The REAP project focuses on data collected from surveillance capsule programs for light-water-moderated nuclear power reactor vessels operated in the United States, including data on Charpy V-notch energy testing results, tensile properties, composition, exposure temperatures, neutron flux (rate of irradiation damage), and fluence (fast neutron fluence, a cumulative measure of irradiation for E > 1 MeV). Additionally, REAP contains data from surveillance programs conducted in other countries. REAP is presently being extended to focus on embrittlement data analysis as well. This paper summarizes the current status of the REAP database and highlights opportunities to access the data and to participate in the project.
A data model and database for high-resolution pathology analytical image informatics.
Wang, Fusheng; Kong, Jun; Cooper, Lee; Pan, Tony; Kurc, Tahsin; Chen, Wenjin; Sharma, Ashish; Niedermayr, Cristobal; Oh, Tae W; Brat, Daniel; Farris, Alton B; Foran, David J; Saltz, Joel
2011-01-01
The systematic analysis of imaged pathology specimens often results in a vast amount of morphological information at both the cellular and sub-cellular scales. While microscopy scanners and computerized analysis are capable of capturing and analyzing data rapidly, microscopy image data remain underutilized in research and clinical settings. One major obstacle which tends to reduce wider adoption of these new technologies throughout the clinical and scientific communities is the challenge of managing, querying, and integrating the vast amounts of data resulting from the analysis of large digital pathology datasets. This paper presents a data model, which addresses these challenges, and demonstrates its implementation in a relational database system. This paper describes a data model, referred to as Pathology Analytic Imaging Standards (PAIS), and a database implementation, which are designed to support the data management and query requirements of detailed characterization of micro-anatomic morphology through many interrelated analysis pipelines on whole-slide images and tissue microarrays (TMAs). (1) Development of a data model capable of efficiently representing and storing virtual slide related image, annotation, markup, and feature information. (2) Development of a database, based on the data model, capable of supporting queries for data retrieval based on analysis and image metadata, queries for comparison of results from different analyses, and spatial queries on segmented regions, features, and classified objects. The work described in this paper is motivated by the challenges associated with characterization of micro-scale features for comparative and correlative analyses involving whole-slides tissue images and TMAs. Technologies for digitizing tissues have advanced significantly in the past decade. Slide scanners are capable of producing high-magnification, high-resolution images from whole slides and TMAs within several minutes. 
Hence, it is becoming increasingly feasible for basic, clinical, and translational research studies to produce thousands of whole-slide images. Systematic analysis of these large datasets requires efficient data management support for representing and indexing results from hundreds of interrelated analyses generating very large volumes of quantifications such as shape and texture and of classifications of the quantified features. We have designed a data model and a database to address the data management requirements of detailed characterization of micro-anatomic morphology through many interrelated analysis pipelines. The data model represents virtual slide related image, annotation, markup and feature information. The database supports a wide range of metadata and spatial queries on images, annotations, markups, and features. We currently have three databases running on a Dell PowerEdge T410 server with the CentOS 5.5 Linux operating system. The database server is IBM DB2 Enterprise Edition 9.7.2. The set of databases consists of 1) a TMA database containing image analysis results from 4740 cases of breast cancer, with 641 MB storage size; 2) an algorithm validation database, which stores markups and annotations from two segmentation algorithms and two parameter sets on 18 selected slides, with 66 GB storage size; and 3) an in silico brain tumor study database comprising results from 307 TCGA slides, with 365 GB storage size. The latter two databases also contain human-generated annotations and markups for regions and nuclei. Modeling and managing pathology image analysis results in a database provide immediate benefits on the value and usability of data in a research study. The database provides powerful query capabilities, which are otherwise difficult or cumbersome to support by other approaches such as programming languages.
Standardized, semantic annotated data representation and interfaces also make it possible to more efficiently share image data and analysis results.
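The spatial queries on segmented regions mentioned above can be approximated with a plain relational table of bounding boxes. This is a minimal sketch: the actual PAIS implementation uses DB2 and a much richer schema, and the table and column names here are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE markup (
    id INTEGER, image_id TEXT, algorithm TEXT,
    xmin REAL, ymin REAL, xmax REAL, ymax REAL)""")
conn.executemany("INSERT INTO markup VALUES (?,?,?,?,?,?,?)", [
    (1, "slide-01", "seg-A", 10, 10, 20, 20),
    (2, "slide-01", "seg-A", 100, 100, 140, 130),
    (3, "slide-01", "seg-B", 12, 11, 21, 19),
])

def nuclei_in_region(conn, image_id, x0, y0, x1, y1):
    """Return markups whose bounding boxes intersect the query window."""
    return conn.execute(
        """SELECT id, algorithm FROM markup
           WHERE image_id = ? AND xmin <= ? AND xmax >= ?
             AND ymin <= ? AND ymax >= ?""",
        (image_id, x1, x0, y1, y0)).fetchall()

print(nuclei_in_region(conn, "slide-01", 0, 0, 50, 50))
# -> [(1, 'seg-A'), (3, 'seg-B')]
```

A query like this also supports the comparison use case the paper describes: nearby markups from different segmentation algorithms (here `seg-A` and `seg-B`) come back side by side for the same region.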
Büssow, Konrad; Hoffmann, Steve; Sievert, Volker
2002-12-19
Functional genomics involves the parallel experimentation with large sets of proteins. This requires management of large sets of open reading frames as a prerequisite of the cloning and recombinant expression of these proteins. A Java program, ORFer, was developed for retrieval of protein and nucleic acid sequences and annotations from NCBI GenBank, using the XML sequence format. Annotations retrieved by ORFer include sequence name, organism and also the completeness of the sequence. The program has a graphical user interface, although it can be used in a non-interactive mode. For protein sequences, the program also extracts the open reading frame sequence, if available, and checks its correct translation. ORFer accepts user input in the form of single GenBank GI identifiers or accession numbers, or lists of them. It can be used to extract complete sets of open reading frames and protein sequences from any kind of GenBank sequence entry, including complete genomes or chromosomes. Sequences are either stored with their features in a relational database or can be exported as text files in Fasta or tabulator-delimited format. The ORFer program is freely available at http://www.proteinstrukturfabrik.de/orfer. The ORFer program allows for fast retrieval of DNA sequences, protein sequences and their open reading frames and sequence annotations from GenBank. Furthermore, storage of sequences and features in a relational database is supported. Such a database can supplement a laboratory information management system (LIMS) with appropriate sequence information.
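The translation check ORFer performs on extracted open reading frames can be sketched as follows. The codon table here is deliberately truncated to the four codons used in the toy example; ORFer itself parses GenBank XML and applies the complete standard genetic code.

```python
# Partial codon table, just enough for the example below (a real
# implementation uses all 64 codons of the standard genetic code).
CODONS = {"ATG": "M", "AAA": "K", "GGC": "G", "TAA": "*"}

def translate(orf):
    """Translate an ORF codon by codon, stopping at a stop codon."""
    protein = ""
    for i in range(0, len(orf), 3):
        aa = CODONS[orf[i:i + 3]]
        if aa == "*":          # stop codon terminates translation
            break
        protein += aa
    return protein

def translation_ok(orf, annotated_protein):
    """Check that the extracted ORF really encodes the annotated protein."""
    return translate(orf) == annotated_protein

print(translation_ok("ATGAAAGGCTAA", "MKG"))  # -> True
```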
An Investigation of Teaching and Learning Programs in Pharmacy Education
Strang, Aimee F; Baia, Patricia
2016-01-01
Objective. To investigate published, peer-reviewed literature on pharmacy teaching and learning development programs and to synthesize existing data, examine reported efficacy and identify future areas for research. Methods. Medline and ERIC databases were searched for studies on teaching development programs published between 2001 and 2015. Results. Nineteen publications were included, representing 21 programs. Twenty programs were resident teaching programs, one program described faculty development. The majority of programs spanned one year and delivered instruction on teaching methodologies and assessment measures. All except one program included experiential components. Thirteen publications presented outcomes data; most measured satisfaction and self-perceived improvement. Conclusion. Published literature on teacher development in pharmacy is focused more on training residents than on developing faculty members. Although programs are considered important and highly valued by program directors and participants, little data substantiates that these programs improve teaching. Future research could focus on measurement of program outcomes and documentation of teaching development for existing faculty members. PMID:27293226
Mulcahey, Mary K; Gosselin, Michelle M; Fadale, Paul D
2013-06-19
The Internet is a common source of information for orthopaedic residents applying for sports medicine fellowships, with the web sites of the American Orthopaedic Society for Sports Medicine (AOSSM) and the San Francisco Match serving as central databases. We sought to evaluate the web sites for accredited orthopaedic sports medicine fellowships with regard to content and accessibility. We reviewed the existing web sites of the ninety-five accredited orthopaedic sports medicine fellowships included in the AOSSM and San Francisco Match databases from February to March 2012. A Google search was performed to determine the overall accessibility of program web sites and to supplement information obtained from the AOSSM and San Francisco Match web sites. The study sample consisted of the eighty-seven programs whose web sites connected to information about the fellowship. Each web site was evaluated for its informational value. Of the ninety-five programs, fifty-one (54%) had links listed in the AOSSM database. Three (3%) of all accredited programs had web sites that were linked directly to information about the fellowship. Eighty-eight (93%) had links listed in the San Francisco Match database; however, only five (5%) had links that connected directly to information about the fellowship. Of the eighty-seven programs analyzed in our study, all eighty-seven web sites (100%) provided a description of the program and seventy-six web sites (87%) included information about the application process. Twenty-one web sites (24%) included a list of current fellows. Fifty-six web sites (64%) described the didactic instruction, seventy (80%) described team coverage responsibilities, forty-seven (54%) included a description of cases routinely performed by fellows, forty-one (47%) described the role of the fellow in seeing patients in the office, eleven (13%) included call responsibilities, and seventeen (20%) described a rotation schedule. 
Two Google searches identified direct links for 67% to 71% of all accredited programs. Most accredited orthopaedic sports medicine fellowships lack easily accessible or complete web sites in the AOSSM or San Francisco Match databases. Improvement in the accessibility and quality of information on orthopaedic sports medicine fellowship web sites would facilitate the ability of applicants to obtain useful information.
Optimization of analytical laboratory work using computer networking and databasing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Upp, D.L.; Metcalf, R.A.
1996-06-01
The Health Physics Analysis Laboratory (HPAL) performs around 600,000 analyses for radioactive nuclides each year at Los Alamos National Laboratory (LANL). Analysis matrices range from nasal swipes, air filters, work-area swipes, and liquids to the bottoms of shoes and cat litter. HPAL uses 8 liquid scintillation counters, 8 gas proportional counters, and 9 high-purity germanium detectors in 5 laboratories to perform these analyses. HPAL has developed a computer network between the labs and software to produce analysis results. The software and hardware package includes barcode sample tracking, log-in, chain of custody, analysis calculations, analysis result printing, and utility programs. All data are written to a database, mirrored on a central server, and eventually written to CD-ROM to provide online access to historical results. This system has greatly reduced the work required to produce analysis results while improving the quality of the work performed.
75 FR 12226 - Privacy Act of 1974; Computer Matching Program
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-15
... include: the Federal Pell Grant Program; the Academic Competitiveness Grant Program; the National Science... the VIS database for the purpose of confirming the immigration status of applicants for assistance, as...
Development of expert systems for analyzing electronic documents
NASA Astrophysics Data System (ADS)
Al-Azzawi, Abeer Yassin; Shidlovskiy, S.; Jamal, A. A.
2018-05-01
The paper analyses a database management system (DBMS). Expert systems, databases, and database technology have become an essential component of everyday life in modern society. As databases are widely used in every organization with a computer system, data resource control and data management are very important [1]. A DBMS is the most significant tool developed to serve multiple users in a database environment, consisting of programs that enable users to create and maintain a database. This paper focuses on the development of a database management system for the General Directorate for Education of Diyala in Iraq (GDED) using CLIPS, Java NetBeans and Alfresco, together with system components previously developed at Tomsk State University at the Faculty of Innovative Technology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roberts, D
Purpose: A unified database system was developed to allow accumulation, review and analysis of quality assurance (QA) data for measurement, treatment, imaging and simulation equipment in our department. Recording these data in a database allows a unified and structured approach to review and analysis of data gathered using commercial database tools. Methods: A clinical database was developed to track records of quality assurance operations on linear accelerators, a computed tomography (CT) scanner, a high dose rate (HDR) afterloader and imaging systems such as on-board imaging (OBI) and Calypso in our department. The database was developed using the Microsoft Access database and Visual Basic for Applications (VBA) programming interface. Separate modules were written for accumulation, review and analysis of daily, monthly and annual QA data. All modules were designed to use structured query language (SQL) as the basis of data accumulation and review. The SQL strings are dynamically re-written at run time. The database also features embedded documentation, storage of documents produced during QA activities and the ability to annotate all data within the database. Tests are defined in a set of tables that specify test type, specific value, and schedule. Results: Daily, monthly and annual QA data have been taken in parallel with established procedures to test the MQA system. The database has been used to aggregate data across machines to examine the consistency of machine parameters and operations within the clinic for several months. Conclusion: The MQA application has been developed as an interface to a commercially available SQL engine (JET 5.0) and a standard database back-end. The MQA system has been used for several months for routine data collection. The system is robust, relatively simple to extend and can be migrated to a commercial SQL server.
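The "SQL strings dynamically re-written at run time" idea can be sketched outside of Access/VBA. Here Python and sqlite3 stand in for the Access/JET stack, and the table and column names are invented; the point is that the review query is rebuilt from whichever filters the user has set.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE qa_result (
    machine TEXT, test TEXT, period TEXT, value REAL)""")
conn.executemany("INSERT INTO qa_result VALUES (?,?,?,?)", [
    ("linac1", "output", "daily", 1.002),
    ("linac1", "output", "monthly", 0.998),
    ("linac2", "output", "daily", 1.010),
])

def review_query(machine=None, test=None, period=None):
    """Rebuild the review SQL at run time from whichever filters are set."""
    sql, params = "SELECT machine, value FROM qa_result", []
    clauses = [(col, val) for col, val in
               (("machine", machine), ("test", test), ("period", period))
               if val is not None]
    if clauses:
        sql += " WHERE " + " AND ".join(f"{col} = ?" for col, _ in clauses)
        params = [val for _, val in clauses]
    return conn.execute(sql, params).fetchall()

print(review_query(test="output", period="daily"))
# -> [('linac1', 1.002), ('linac2', 1.01)]
```

Only the column list of the WHERE clause is rewritten; the filter values themselves go in as `?` placeholders, which keeps the dynamically assembled SQL safe from injection.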
NASA Technical Reports Server (NTRS)
Beeson, Harold D.; Davis, Dennis D.; Ross, William L., Sr.; Tapphorn, Ralph M.
2002-01-01
This document represents efforts accomplished at the NASA Johnson Space Center White Sands Test Facility (WSTF) in support of the Enhanced Technology for Composite Overwrapped Pressure Vessels (COPV) Program, a joint research and technology effort among the U.S. Air Force, NASA, and the Aerospace Corporation. WSTF performed testing for several facets of the program. Testing that contributed to the Task 3.0 COPV database extension objective included baseline structural strength, failure mode and safe-life, impact damage tolerance, sustained load/impact effect, and materials compatibility. WSTF was also responsible for establishing impact protection and control requirements under Task 8.0 of the program. This included developing a methodology for establishing an impact control plan. Seven test reports detail the work done at WSTF. As such, this document contributes to the database of information regarding COPV behavior that will ensure performance benefits and safety are maintained throughout vessel service life.
Development of a statewide trauma registry using multiple linked sources of data.
Clark, D. E.
1993-01-01
In order to develop a cost-effective method of injury surveillance and trauma system evaluation in a rural state, computer programs were written linking records from two major hospital trauma registries, a statewide trauma tracking study, hospital discharge abstracts, death certificates, and ambulance run reports. A general-purpose database management system, programming language, and operating system were used. Data from 1991 appeared to be successfully linked using only indirect identifying information. Familiarity with local geography and the idiosyncrasies of each data source were helpful in programming for effective matching of records. For each individual case identified in this way, data from all available sources were then merged and imported into a standard database format. This inexpensive, population-based approach, maintaining flexibility for end-users with some database training, may be adaptable for other regions. There is a need for further improvement and simplification of the record-linkage process for this and similar purposes. PMID:8130556
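Linking records "using only indirect identifying information" can be sketched with a deterministic key. The key chosen here (date of birth, sex, injury date) and the field names are illustrative assumptions, not the registry's actual matching algorithm, which also exploited local geography and source-specific quirks.

```python
def link_key(rec):
    # Indirect identifiers only: no names or ID numbers.
    return (rec["dob"], rec["sex"], rec["injury_date"])

def link(source_a, source_b):
    """Merge records from two sources that share the same linkage key."""
    index = {link_key(r): r for r in source_b}
    linked = []
    for rec in source_a:
        match = index.get(link_key(rec))
        if match:
            linked.append({**match, **rec})   # merged case record
    return linked

registry = [{"dob": "1958-03-07", "sex": "M",
             "injury_date": "1991-06-02", "iss": 25}]
ambulance = [{"dob": "1958-03-07", "sex": "M",
              "injury_date": "1991-06-02", "run_time_min": 38}]

cases = link(registry, ambulance)
print(cases[0]["iss"], cases[0]["run_time_min"])  # -> 25 38
```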
National briefing summaries: Nuclear fuel cycle and waste management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schneider, K.J.; Bradley, D.J.; Fletcher, J.F.
Since 1976, the International Program Support Office (IPSO) at the Pacific Northwest Laboratory (PNL) has collected and compiled publicly available information concerning foreign and international radioactive waste management programs. This National Briefing Summaries is a printout of an electronic database that has been compiled and is maintained by the IPSO staff. The database contains current information concerning the radioactive waste management programs (with supporting information on nuclear power and the nuclear fuel cycle) of most of the nations (except eastern European countries) that now have or are contemplating nuclear power, and of the multinational agencies that are active in radioactive waste management. Information in this document is included for three additional countries (China, Mexico, and the USSR) compared to the prior issue. The database and this document were developed in response to needs of the US Department of Energy.
Ocean Drilling Program: Science Operator Search Engine
NASA Technical Reports Server (NTRS)
Knighton, Donna L.
1992-01-01
A Flight Test Engineering Database Management System (FTE DBMS) was designed and implemented at the NASA Dryden Flight Research Facility. The X-29 Forward Swept Wing Advanced Technology Demonstrator flight research program was chosen for the initial system development and implementation. The FTE DBMS greatly assisted in planning and 'mass production' card preparation for an accelerated X-29 research program. Improved Test Plan tracking and maneuver management for a high flight-rate program were proven, and flight rates of up to three flights per day, two times per week were maintained.
A statistical view of FMRFamide neuropeptide diversity.
Espinoza, E; Carrigan, M; Thomas, S G; Shaw, G; Edison, A S
2000-01-01
FMRFamide-like peptide (FLP) amino acid sequences have been collected and statistically analyzed. FLP amino acid composition as a function of position in the peptide is graphically presented for several major phyla. Results of total amino acid composition and frequencies of pairs of FLP amino acids have been computed and compared with corresponding values from the entire GenBank protein sequence database. The data for pairwise distributions of amino acids should help in future structure-function studies of FLPs. To aid in future peptide discovery, a computer program and search protocol were developed to identify FLPs from the GenBank protein database without the use of keywords.
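A keyword-free sequence search of the kind mentioned above can be sketched with a sequence motif. The motif used here, Phe-[Met/Leu]-Arg-Phe-Gly (the Gly serving as a C-terminal amidation signal), is a simplified assumption for illustration, not the paper's actual protocol.

```python
import re

# Hypothetical FLP motif: F, then M or L, then R, F, and an amidation Gly.
FLP_MOTIF = re.compile(r"F[ML]RFG")

def find_flp_candidates(proteins):
    """Return, for each named sequence containing the motif, its match positions."""
    hits = {}
    for name, seq in proteins.items():
        positions = [m.start() for m in FLP_MOTIF.finditer(seq)]
        if positions:
            hits[name] = positions
    return hits
```

Because precursor proteins often carry several copies of the peptide between basic cleavage sites, reporting all match positions per sequence is more useful than a yes/no flag.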
Bowker, S L; Savu, A; Donovan, L E; Johnson, J A; Kaul, P
2017-06-01
To examine the validity of International Classification of Disease, version 10 (ICD-10) codes for gestational diabetes mellitus in administrative databases (outpatient and inpatient), and in a clinical perinatal database (Alberta Perinatal Health Program), using laboratory data as the 'gold standard'. Women aged 12-54 years with in-hospital, singleton deliveries between 1 October 2008 and 31 March 2010 in Alberta, Canada were included in the study. A gestational diabetes diagnosis was defined in the laboratory data as ≥2 abnormal values on a 75-g oral glucose tolerance test or a 50-g glucose screen ≥10.3 mmol/l. Of 58 338 pregnancies, 2085 (3.6%) met gestational diabetes criteria based on laboratory data. The gestational diabetes rates in outpatient only, inpatient only, outpatient or inpatient combined, and Alberta Perinatal Health Program databases were 5.2% (3051), 4.8% (2791), 5.8% (3367) and 4.8% (2825), respectively. Although the outpatient or inpatient combined data achieved the highest sensitivity (92%) and specificity (97%), it was associated with a positive predictive value of only 57%. The majority of the false-positives (78%), however, had one abnormal value on oral glucose tolerance test, corresponding to a diagnosis of impaired glucose tolerance in pregnancy. The ICD-10 codes for gestational diabetes in administrative databases, especially when outpatient and inpatient databases are combined, can be used to reliably estimate the burden of the disease at the population level. Because impaired glucose tolerance in pregnancy and gestational diabetes may be managed similarly in clinical practice, impaired glucose tolerance in pregnancy is often coded as gestational diabetes. © 2016 Diabetes UK.
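The validation arithmetic behind figures like those above (sensitivity, specificity, predictive values against a laboratory gold standard) reduces to a 2x2 table. A minimal sketch, with made-up counts rather than the study's data:

```python
def validity(tp, fp, fn, tn):
    """Validity metrics from a 2x2 table: tp/fp/fn/tn are counts of
    true positives, false positives, false negatives, true negatives."""
    return {
        "sensitivity": tp / (tp + fn),  # coded positive among lab positives
        "specificity": tn / (tn + fp),  # coded negative among lab negatives
        "ppv": tp / (tp + fp),          # lab positive among coded positives
        "npv": tn / (tn + fn),          # lab negative among coded negatives
    }
```

The pattern reported in the abstract, high sensitivity and specificity but a modest positive predictive value, is what a rare condition produces: when true cases are a small fraction of the population, even a low false-positive rate yields many false positives per true positive.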
Uchiyama, Ikuo; Mihara, Motohiro; Nishide, Hiroyo; Chiba, Hirokazu
2015-01-01
The microbial genome database for comparative analysis (MBGD) (available at http://mbgd.genome.ad.jp/) is a comprehensive ortholog database for flexible comparative analysis of microbial genomes, where the users are allowed to create an ortholog table among any specified set of organisms. Because of the rapid increase in microbial genome data owing to the next-generation sequencing technology, it becomes increasingly challenging to maintain high-quality orthology relationships while allowing the users to incorporate the latest genomic data available into an analysis. Because many of the recently accumulating genomic data are draft genome sequences for which some complete genome sequences of the same or closely related species are available, MBGD now stores draft genome data and allows the users to incorporate them into a user-specific ortholog database using the MyMBGD functionality. In this function, draft genome data are incorporated into an existing ortholog table created only from the complete genome data in an incremental manner to prevent low-quality draft data from affecting clustering results. In addition, to provide high-quality orthology relationships, the standard ortholog table containing all the representative genomes, which is first created by the rapid classification program DomClust, is now refined using DomRefine, a recently developed program for improving domain-level clustering using multiple sequence alignment information. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
Data Model and Relational Database Design for Highway Runoff Water-Quality Metadata
Granato, Gregory E.; Tessler, Steven
2001-01-01
A National highway and urban runoff water-quality metadatabase was developed by the U.S. Geological Survey in cooperation with the Federal Highway Administration as part of the National Highway Runoff Water-Quality Data and Methodology Synthesis (NDAMS). The database was designed to catalog available literature and to document results of the synthesis in a format that would facilitate current and future research on highway and urban runoff. This report documents the design and implementation of the NDAMS relational database, which was designed to provide a catalog of available information and the results of an assessment of the available data. All the citations and the metadata collected during the review process are presented in a stratified metadatabase that contains citations for relevant publications, abstracts (or previa), and report-review metadata for a sample of selected reports that document results of runoff quality investigations. The database is referred to as a metadatabase because it contains information about available data sets rather than a record of the original data. The database contains the metadata needed to evaluate and characterize how valid, current, complete, comparable, and technically defensible published and available information may be when evaluated for application to the different data-quality objectives as defined by decision makers. This database is a relational database, in that all information is ultimately linked to a given citation in the catalog of available reports. The main database file contains 86 tables consisting of 29 data tables, 11 association tables, and 46 domain tables. The data tables all link to a particular citation, and each data table is focused on one aspect of the information collected in the literature search and the evaluation of available information.
This database is implemented in the Microsoft (MS) Access database software because it is widely used within and outside of government and is familiar to many existing and potential customers. The stratified metadatabase design for the NDAMS program is presented in the MS Access file DBDESIGN.mdb and documented with a data dictionary in the NDAMS_DD.mdb file recorded on the CD-ROM. The data dictionary file includes complete documentation of the table names, table descriptions, and information about each of the 419 fields in the database.
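The citation-centred design described above, where every data table ultimately links back to an entry in the citation catalog, can be illustrated in miniature. The table and column names below are invented for illustration (the actual schema has 86 tables), and SQLite stands in for MS Access:

```python
import sqlite3

# Miniature citation-centred metadatabase: the report-review table carries
# a foreign key back to the citation catalog, mirroring the design above.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE citation (
    citation_id INTEGER PRIMARY KEY,
    title       TEXT NOT NULL,
    year        INTEGER
);
CREATE TABLE report_review (
    review_id         INTEGER PRIMARY KEY,
    citation_id       INTEGER NOT NULL REFERENCES citation(citation_id),
    data_quality_note TEXT
);
""")
con.execute("INSERT INTO citation VALUES (1, 'Example runoff study', 1998)")
con.execute("INSERT INTO report_review VALUES (1, 1, 'methods documented')")

# Every metadata row can be traced to its source citation via a join.
rows = con.execute("""
    SELECT c.title, r.data_quality_note
    FROM report_review r JOIN citation c USING (citation_id)
""").fetchall()
```

The same join pattern extends to the association and domain tables: association tables resolve many-to-many links between citations and attributes, while domain tables constrain field values to controlled vocabularies.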
WaveNet: A Web-Based Metocean Data Access, Processing, and Analysis Tool. Part 3 - CDIP Database
2014-06-01
and Analysis Tool; Part 3 – CDIP Database by Zeki Demirbilek, Lihwa Lin, and Derek Wilson. PURPOSE: This Coastal and Hydraulics Engineering...Technical Note (CHETN) describes coupling of the Coastal Data Information Program (CDIP) database to WaveNet, the first module of MetOcnDat (Meteorological...provides a step-by-step procedure to access, process, and analyze wave and wind data from the CDIP database. BACKGROUND: WaveNet addresses a basic
Multi-Resolution Playback of Network Trace Files
2015-06-01
a complete MySQL database, C++ developer tools and the libraries utilized in the development of the system (Boost and Libcrafter), and Wireshark...XE suite has a limit to the allowed size of each database. In order to be scalable, the project had to switch to the MySQL database suite. The...programs that access the database use the MySQL C++ connector, provided by Oracle, and the supplied methods and libraries. 4.4 Flow Generator Chapter 3
Thermal Protection System Imagery Inspection Management System -TIIMS
NASA Technical Reports Server (NTRS)
Goza, Sharon; Melendrez, David L.; Henningan, Marsha; LaBasse, Daniel; Smith, Daniel J.
2011-01-01
TIIMS is used during the inspection phases of every mission to provide quick visual feedback, detailed inspection data, and determination to the mission management team. This system consists of a visual Web page interface, an SQL database, and a graphical image generator. These combine to allow a user to ascertain quickly the status of the inspection process and the current determination of any problem zones. The TIIMS system allows inspection engineers to enter their determinations into a database and to link pertinent images and video to those database entries. The database then assigns criteria to each zone and tile, and via query, sends the information to a graphical image generation program. Using the official TIPS database tile positions and sizes, the graphical image generation program creates images of the current status of the orbiter, coloring zones and tiles based on a predefined key code. These images are then displayed on a Web page using customized JAVA scripts to display the appropriate zone of the orbiter based on the location of the user's cursor. The close-up graphic and database entry for that particular zone can then be seen by selecting the zone. This page contains links into the database to access the images used by the inspection engineers when they make the determinations entered into the database. Status for the inspection zones changes as determinations are refined and shown by the appropriate color code.
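The "predefined key code" step above, mapping each zone's current determination to a display color, is simple to sketch. The zone names, status values, and color key below are hypothetical, not the actual TIPS/TIIMS codes:

```python
# Hypothetical key code: determination status -> display color.
COLOR_KEY = {"cleared": "green", "under_review": "yellow", "problem": "red"}

def zone_colors(determinations):
    """Map each inspection zone to its display color; statuses missing from
    the key fall back to gray so gaps in the data remain visible."""
    return {zone: COLOR_KEY.get(status, "gray")
            for zone, status in determinations.items()}
```

An image generator would then fill each zone polygon with its color; the gray fallback makes unreviewed or mis-coded zones stand out rather than silently disappear.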
Wang, L.; Infante, D.; Esselman, P.; Cooper, A.; Wu, D.; Taylor, W.; Beard, D.; Whelan, G.; Ostroff, A.
2011-01-01
Fisheries management programs, such as the National Fish Habitat Action Plan (NFHAP), urgently need a nationwide spatial framework and database for health assessment and policy development to protect and improve riverine systems. To meet this need, we developed a spatial framework and database using the National Hydrography Dataset Plus (1:100,000 scale; http://www.horizon-systems.com/nhdplus). This framework uses interconfluence river reaches and their local and network catchments as fundamental spatial river units and a series of ecological and political spatial descriptors as hierarchy structures to allow users to extract or analyze information at spatial scales that they define. This database consists of variables describing channel characteristics, network position/connectivity, climate, elevation, gradient, and size. It contains a series of natural and human-induced catchment factors that are known to influence river characteristics. Our framework and database assembles all river reaches and their descriptors in one place for the first time for the conterminous United States. This framework and database provides users with the capability of adding data, conducting analyses, developing management scenarios and regulation, and tracking management progress at a variety of spatial scales. This database provides the essential data needs for achieving the objectives of NFHAP and other management programs. The downloadable beta version database is available at http://ec2-184-73-40-15.compute-1.amazonaws.com/nfhap/main/.
The Space Systems Environmental Test Facility Database (SSETFD), Website Development Status
NASA Technical Reports Server (NTRS)
Snyder, James M.
2008-01-01
The Aerospace Corporation has been developing a database of U.S. environmental test laboratory capabilities utilized by the space systems hardware development community. To date, 19 sites have been visited by The Aerospace Corporation and verbal agreements reached to include their capability descriptions in the database. A website is being developed to make this database accessible by all interested government, civil, university and industry personnel. The website will be accessible by all interested in learning more about the extensive collective capability that the US based space industry has to offer. The Environments, Test & Assessment Department within The Aerospace Corporation will be responsible for overall coordination and maintenance of the database. Several US government agencies are interested in utilizing this database to assist in the source selection process for future spacecraft programs. This paper introduces the website by providing an overview of its development, location and search capabilities. It will show how the aerospace community can apply this new tool as a way to increase the utilization of existing lab facilities, and as a starting point for capital expenditure/upgrade trade studies. The long term result is expected to be increased utilization of existing laboratory capability and reduced overall development cost of space systems hardware. Finally, the paper will present the process for adding new participants, and how the database will be maintained.
Analysis and preliminary design of Kunming land use and planning management information system
NASA Astrophysics Data System (ADS)
Li, Li; Chen, Zhenjie
2007-06-01
This article analyzes the Kunming land use planning and management information system in terms of its system-building objectives and requirements, and identifies the system's users, functional requirements, and construction requirements. On this basis, a three-tier architecture combining C/S and B/S is defined: the user interface layer, the business logic layer, and the data services layer. According to the requirements for constructing a land use planning and management information database, derived from standards of the Ministry of Land and Resources and the construction program of the Golden Land Project, the system databases are divided into a planning document database, a planning implementation database, a working map database, and a system maintenance database. In the design of the system interface, various methods and data formats are used for data transmission and sharing between upper and lower administrative levels. Based on the system analysis results, the main modules of the system are designed as follows: planning data management, planning and annual plan preparation and control, day-to-day planning management, planning revision management, decision-making support, thematic query statistics, public participation in planning, and so on; in addition, the technologies for realizing the system are discussed, covering the system operation mode, development platform, and other aspects.
Hasker, E; Mpanya, A; Makabuza, J; Mbo, F; Lumbala, C; Kumpel, J; Claeys, Y; Kande, V; Ravinetto, R; Menten, J; Lutumba, P; Boelaert, M
2012-09-01
To enable the human African trypanosomiasis (HAT) control program of the Democratic Republic of the Congo to generate data on treatment outcomes, an electronic database was developed. The database was piloted in two provinces, Bandundu and Kasai Oriental. In this study, we analysed routine data from the two provinces for the period 2006-2008. Data were extracted from case declaration cards and monthly reports available at national and provincial HAT coordination units and entered into the database. Data were retrieved for 15 086 of 15 741 cases reported in the two provinces for the period (96%). Compliance with post-treatment follow-up was very poor in both provinces; only 25% had undergone at least one post-treatment follow-up examination, <1% had undergone the required four follow-up examinations. Relapse rates among those presenting for follow-up were high in Kasai (18%) but low in Bandundu (0.3%). High relapse rates in Kasai and poor compliance with post-treatment follow-up in both provinces are important problems that the HAT control program urgently needs to address. Moreover, in analogy to tuberculosis control programs, HAT control programs need to adopt a recording and reporting routine that includes reporting on treatment outcomes. © 2012 Blackwell Publishing Ltd.
Importance of Data Management in a Long-Term Biological Monitoring Program
NASA Astrophysics Data System (ADS)
Christensen, Sigurd W.; Brandt, Craig C.; McCracken, Mary K.
2011-06-01
The long-term Biological Monitoring and Abatement Program (BMAP) has always needed to collect and retain high-quality data on which to base its assessments of ecological status of streams and their recovery after remediation. Its formal quality assurance, data processing, and data management components all contribute to meeting this need. The Quality Assurance Program comprehensively addresses requirements from various institutions, funders, and regulators, and includes a data management component. Centralized data management began a few years into the program when an existing relational database was adapted and extended to handle biological data. The database's main data tables and several key reference tables are described. One of the most important related activities supporting long-term analyses was the establishing of standards for sampling site names, taxonomic identification, flagging, and other components. The implemented relational database supports the transmittal of data to the Oak Ridge Environmental Information System (OREIS) as the permanent repository. We also discuss some limitations to our implementation. Some types of program data were not easily accommodated in the central systems, and many possible data-sharing and integration options are not easily accessible to investigators. From our experience we offer data management advice to other biologically oriented long-term environmental sampling and analysis programs.
Rezapour, Aziz; Jafari, Abdosaleh; Mirmasoudi, Kosha; Talebianpour, Hamid
2017-09-01
Health economic evaluation research plays an important role in selecting cost-effective interventions. The purpose of this study was to assess the quality of published articles in Iranian journals related to economic evaluation in health care programs based on Drummond's checklist in terms of numbers, features, and quality. In the present review study, published articles (Persian and English) in Iranian journals related to economic evaluation in health care programs were searched using electronic databases. In addition, the methodological quality of articles' structure was analyzed by Drummond's standard checklist. Based on the inclusion criteria, the search of databases resulted in 27 articles that fully covered economic evaluation in health care programs. A review of articles in accordance with Drummond's criteria showed that the majority of studies had flaws. The most common methodological weakness in the articles was in terms of cost calculation and valuation. Considering such methodological faults in these studies, it is anticipated that these studies would not provide an appropriate feedback to policy makers to allocate health care resources correctly and select suitable cost-effective interventions. Therefore, researchers are required to comply with the standard guidelines in order to better execute and report on economic evaluation studies. PMID:29234174
A plan for the North American Bat Monitoring Program (NABat)
Loeb, Susan C.; Rodhouse, Thomas J.; Ellison, Laura E.; Lausen, Cori L.; Reichard, Jonathan D.; Irvine, Kathryn M.; Ingersoll, Thomas E.; Coleman, Jeremy; Thogmartin, Wayne E.; Sauer, John R.; Francis, Charles M.; Bayless, Mylea L.; Stanley, Thomas R.; Johnson, Douglas H.
2015-01-01
The purpose of the North American Bat Monitoring Program (NABat) is to create a continent-wide program to monitor bats at local to rangewide scales that will provide reliable data to promote effective conservation decisionmaking and the long-term viability of bat populations across the continent. This is an international, multiagency program. Four approaches will be used to gather monitoring data to assess changes in bat distributions and abundances: winter hibernaculum counts, maternity colony counts, mobile acoustic surveys along road transects, and acoustic surveys at stationary points. These monitoring approaches are described along with methods for identifying species recorded by acoustic detectors. Other chapters describe the sampling design, the database management system (Bat Population Database), and statistical approaches that can be used to analyze data collected through this program.
Dynamic programming re-ranking for PPI interactor and pair extraction in full-text articles
2011-01-01
Background Experimentally verified protein-protein interactions (PPIs) cannot be easily retrieved by researchers unless they are stored in PPI databases. The curation of such databases can be facilitated by employing text-mining systems to identify genes which play the interactor role in PPIs and to map these genes to unique database identifiers (interactor normalization task or INT) and then to return a list of interaction pairs for each article (interaction pair task or IPT). These two tasks are evaluated in terms of the area under curve of the interpolated precision/recall (AUC iP/R) score because the order of identifiers in the output list is important for ease of curation. Results Our INT system developed for the BioCreAtIvE II.5 INT challenge achieved a promising AUC iP/R of 43.5% by using a support vector machine (SVM)-based ranking procedure. Using our new re-ranking algorithm, we have been able to improve system performance (AUC iP/R) by 1.84%. Our experimental results also show that with the re-ranked INT results, our unsupervised IPT system can achieve a competitive AUC iP/R of 23.86%, which outperforms the best BC II.5 INT system by 1.64%. Compared to using only SVM ranked INT results, using re-ranked INT results boosts AUC iP/R by 7.84%. Statistical significance t-test results show that our INT/IPT system with re-ranking outperforms that without re-ranking by a statistically significant difference. Conclusions In this paper, we present a new re-ranking algorithm that considers co-occurrence among identifiers in an article to improve INT and IPT ranking results. Combining the re-ranked INT results with an unsupervised approach to find associations among interactors, the proposed method can boost the IPT performance. We also implement score computation using dynamic programming, which is faster and more efficient than traditional approaches. PMID:21342534
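A much-simplified sketch in the spirit of the re-ranking idea above: an identifier's SVM score is boosted according to its co-occurrence with other high-scoring identifiers in the same article. The scoring form and the lambda weight are assumptions for illustration, not the paper's exact formula.

```python
def rerank(svm_scores, cooccurrence, lam=0.5):
    """Re-rank identifiers by SVM score plus a co-occurrence boost.

    svm_scores:   {identifier: SVM ranking score}
    cooccurrence: {frozenset({id1, id2}): co-occurrence count in the article}
    """
    def boosted(i):
        boost = sum(cooccurrence.get(frozenset({i, j}), 0) * svm_scores[j]
                    for j in svm_scores if j != i)
        return svm_scores[i] + lam * boost

    return sorted(svm_scores, key=boosted, reverse=True)
```

The effect is that an identifier with a middling standalone score rises in the list when it co-occurs often with confidently ranked identifiers, which is exactly the signal interaction-pair extraction wants near the top of the output.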
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grant, C W; Lenderman, J S; Gansemer, J D
This document is an update to the 'ADIS Algorithm Evaluation Project Plan' specified in the Statement of Work for the US-VISIT Identity Matching Algorithm Evaluation Program, as deliverable II.D.1. The original plan was delivered in August 2010. This document modifies the plan to reflect modified deliverables reflecting delays in obtaining a database refresh. This document describes the revised schedule of the program deliverables. The detailed description of the processes used, the statistical analysis processes and the results of the statistical analysis will be described fully in the program deliverables. The US-VISIT Identity Matching Algorithm Evaluation Program is work performed by Lawrence Livermore National Laboratory (LLNL) under IAA HSHQVT-07-X-00002 P00004 from the Department of Homeland Security (DHS).
The Eruption Forecasting Information System (EFIS) database project
NASA Astrophysics Data System (ADS)
Ogburn, Sarah; Harpel, Chris; Pesicek, Jeremy; Wellik, Jay; Pallister, John; Wright, Heather
2016-04-01
The Eruption Forecasting Information System (EFIS) project is a new initiative of the U.S. Geological Survey-USAID Volcano Disaster Assistance Program (VDAP) with the goal of enhancing VDAP's ability to forecast the outcome of volcanic unrest. The EFIS project seeks to: (1) move away from reliance on collective memory toward probability estimation using databases; (2) create databases useful for pattern recognition and for answering common VDAP questions, e.g., how commonly does unrest lead to eruption, and how commonly do phreatic eruptions portend magmatic eruptions and what is the range of antecedence times; (3) create generic probabilistic event trees using global data for different volcano 'types'; (4) create background, volcano-specific probabilistic event trees for frequently active or particularly hazardous volcanoes in advance of a crisis; and (5) quantify and communicate uncertainty in probabilities. A major component of the project is the global EFIS relational database, which contains multiple modules designed to aid in the construction of probabilistic event trees and to answer common questions that arise during volcanic crises. The primary module contains chronologies of volcanic unrest, including the timing of phreatic eruptions, column heights, eruptive products, etc. and will be initially populated using chronicles of eruptive activity from Alaskan volcanic eruptions in the GeoDIVA database (Cameron et al. 2013). This database module allows us to query across other global databases such as the WOVOdat database of monitoring data and the Smithsonian Institution's Global Volcanism Program (GVP) database of eruptive histories and volcano information. The EFIS database is in the early stages of development and population; thus, this contribution also serves as a request for feedback from the community.
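The database-driven probability estimation in goal (1) above amounts to estimating event-tree branch probabilities as fractions of past episodes. A minimal sketch, with a hypothetical record layout:

```python
def branch_probability(episodes, parent, child):
    """Estimate P(child | parent) from a list of episode records.

    episodes: list of dict-like records of past volcanic episodes
    parent, child: predicates selecting the conditioning event and outcome
    Returns None when no episode satisfies the parent condition, so the
    caller can fall back on a prior rather than divide by zero.
    """
    relevant = [e for e in episodes if parent(e)]
    if not relevant:
        return None
    return sum(1 for e in relevant if child(e)) / len(relevant)
```

For example, "how commonly does unrest lead to eruption?" becomes `branch_probability(episodes, lambda e: e["unrest"], lambda e: e["eruption"])`; a fuller treatment would attach an uncertainty interval to the estimate, per goal (5).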
NASA Technical Reports Server (NTRS)
Kolb, Mark A.
1990-01-01
Originally, computer programs for engineering design focused on detailed geometric design. Later, computer programs for algorithmically performing the preliminary design of specific well-defined classes of objects became commonplace. However, due to the need for extreme flexibility, it appears unlikely that conventional programming techniques will prove fruitful in developing computer aids for engineering conceptual design. The use of symbolic processing techniques, such as object-oriented programming and constraint propagation, facilitates such flexibility. Object-oriented programming allows programs to be organized around the objects and behavior to be simulated, rather than around fixed sequences of function and subroutine calls. Constraint propagation allows declarative statements to be understood as designating multi-directional mathematical relationships among all the variables of an equation, rather than as unidirectional assignments to the variable on the left-hand side of the equation, as in conventional computer programs. The research has concentrated on applying these two techniques to the development of a general-purpose computer aid for engineering conceptual design. Object-oriented programming techniques are utilized to implement a user-extensible database of design components. The mathematical relationships which model both geometry and physics of these components are managed via constraint propagation. In addition to this component-based hierarchy, special-purpose data structures are provided for describing component interactions and supporting state-dependent parameters. In order to investigate the utility of this approach, a number of sample design problems from the field of aerospace engineering were implemented using the prototype design tool, Rubber Airplane.
The additional level of organizational structure obtained by representing design knowledge in terms of components is observed to provide greater convenience to the program user, and to result in a database of engineering information which is easier both to maintain and to extend.
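The multi-directional constraints described above can be illustrated with a toy relation a * b = c that solves for whichever variable is still unknown once the other two are set. This is an illustrative sketch of the general technique, not Rubber Airplane's actual machinery.

```python
class ProductConstraint:
    """Multi-directional constraint a * b = c: setting any two of the
    variables derives the third, in whichever direction is needed."""

    def __init__(self):
        self.values = {"a": None, "b": None, "c": None}

    def set(self, name, value):
        self.values[name] = value
        self._propagate()

    def _propagate(self):
        a, b, c = self.values["a"], self.values["b"], self.values["c"]
        if a is not None and b is not None and c is None:
            self.values["c"] = a * b          # forward: c from a, b
        elif a not in (None, 0) and c is not None and b is None:
            self.values["b"] = c / a          # backward: b from a, c
        elif b not in (None, 0) and c is not None and a is None:
            self.values["a"] = c / b          # backward: a from b, c
```

The contrast with a conventional assignment is that `c = a * b` in an ordinary program only ever computes c; here the same declarative relationship answers "what b gives this c?" just as readily, which is what lets a conceptual-design tool work from whichever quantities the designer happens to fix first.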
Ribas, Laia; Pardo, Belén G; Fernández, Carlos; Alvarez-Diós, José Antonio; Gómez-Tato, Antonio; Quiroga, María Isabel; Planas, Josep V; Sitjà-Bobadilla, Ariadna; Martínez, Paulino; Piferrer, Francesc
2013-03-15
Genomic resources for plant and animal species that are under exploitation primarily for human consumption are increasingly important, among other things, for understanding physiological processes and for establishing adequate genetic selection programs. Current available techniques for high-throughput sequencing have been implemented in a number of species, including fish, to obtain a proper description of the transcriptome. The objective of this study was to generate a comprehensive transcriptomic database in turbot, a highly priced farmed fish species in Europe, with potential expansion to other areas of the world, for which there are unsolved production bottlenecks, in order to better understand reproductive- and immune-related functions. This information is essential to implement marker assisted selection programs useful for the turbot industry. Expressed sequence tags were generated by Sanger sequencing of cDNA libraries from different immune-related tissues after several parasitic challenges. The resulting database ("Turbot 2 database") was enlarged with sequences generated from a 454 sequencing run of brain-hypophysis-gonadal axis-derived RNA obtained from turbot at different development stages. The assembly of Sanger and 454 sequences generated 52,427 consensus sequences ("Turbot 3 database"), of which 23,661 were successfully annotated. A total of 1,410 sequences were confirmed to be related to reproduction and key genes involved in sex differentiation and maturation were identified for the first time in turbot (AR, AMH, SRY-related genes, CYP19A, ZPGs, STAR, FSHR, etc.). Similarly, 2,241 sequences were related to the immune system and several novel key immune genes were identified (BCL, TRAF, NCK, CD28 and TOLLIP, among others). The number of genes of many relevant reproduction- and immune-related pathways present in the database was 50-90% of the total gene count of each pathway.
In addition, 1,237 microsatellites and 7,362 single nucleotide polymorphisms (SNPs) were also compiled. Further, 2,976 putative natural antisense transcripts (NATs) including microRNAs were also identified. The combined sequencing strategies employed here significantly increased the turbot genomic resources available, including 34,400 novel sequences. The generated database contains a larger number of genes relevant for reproduction- and immune-associated studies, with an excellent coverage of most genes present in many relevant physiological pathways. This database also allowed the identification of many microsatellites and SNP markers that will be very useful for population and genome screening and a valuable aid in marker assisted selection programs.
CycADS: an annotation database system to ease the development and update of BioCyc databases
Vellozo, Augusto F.; Véron, Amélie S.; Baa-Puyoulet, Patrice; Huerta-Cepas, Jaime; Cottret, Ludovic; Febvay, Gérard; Calevro, Federica; Rahbé, Yvan; Douglas, Angela E.; Gabaldón, Toni; Sagot, Marie-France; Charles, Hubert; Colella, Stefano
2011-01-01
In recent years, genomes from an increasing number of organisms have been sequenced, but their annotation remains a time-consuming process. The BioCyc databases offer a framework for the integrated analysis of metabolic networks. The Pathway Tools software suite allows the automated construction of a database starting from an annotated genome, but it requires prior integration of all annotations into a specific summary file or into a GenBank file. To allow the easy creation and update of a BioCyc database starting from the multiple genome annotation resources available over time, we have developed an ad hoc data management system called the Cyc Annotation Database System (CycADS). CycADS is centred on a specific database model and on a set of Java programs to import, filter and export relevant information. Data from GenBank and other annotation sources (including, for example, KAAS, PRIAM, Blast2GO and PhylomeDB) are collected into a database to be subsequently filtered and extracted to generate a complete annotation file. This file is then used to build an enriched BioCyc database using the PathoLogic program of Pathway Tools. The CycADS pipeline for annotation management was used to build the AcypiCyc database for the pea aphid (Acyrthosiphon pisum), whose genome was recently sequenced. For comparative analyses, the AcypiCyc database webpage also links two other BioCyc metabolic reconstruction databases generated using CycADS: TricaCyc for Tribolium castaneum and DromeCyc for Drosophila melanogaster. Owing to its flexible design, CycADS offers a powerful software tool for the generation and regular updating of enriched BioCyc databases. The CycADS system is particularly suited for metabolic gene annotation and network reconstruction in newly sequenced genomes and, because of the uniform annotation used for metabolic network reconstruction, for comparative analysis of the metabolism of different organisms.
Database URL: http://www.cycadsys.org PMID:21474551
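The collect/filter/export idea behind CycADS (gather annotations for each gene from several sources, then keep one per gene according to a source-priority rule) might be sketched as follows. The source names come from the abstract, but the priority ordering and record layout are illustrative assumptions, and the sketch is in Python rather than CycADS's Java.

```python
# Keep one annotation per gene according to a source-priority rule, a toy
# version of CycADS's collect/filter/export step. Priorities are assumed.
PRIORITY = {"manual": 0, "KAAS": 1, "PRIAM": 2, "Blast2GO": 3}

def merge_annotations(records):
    """records: iterable of (gene_id, source, ec_number) tuples."""
    best = {}
    for gene_id, source, ec in records:
        rank = PRIORITY.get(source, len(PRIORITY))   # unknown sources rank last
        if gene_id not in best or rank < best[gene_id][0]:
            best[gene_id] = (rank, source, ec)
    # "Export" step: gene_id -> (chosen source, EC number).
    return {gene: (source, ec) for gene, (rank, source, ec) in best.items()}
```

The real system keeps all source annotations and filters at export time, so the same collected data can feed differently configured BioCyc builds.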
External Data and Attribute Hyperlink Programs for Promis*e(Registered Trademark)
NASA Technical Reports Server (NTRS)
Derengowski, Rich; Gruel, Andrew
2001-01-01
External Data and Attribute Hyperlink are computer programs that can be added to Promis*e(trademark), a commercial software system that automates routine tasks in the design of electrical control systems, including the drawing of schematic diagrams. The programs were developed under Stennis Space Center's (SSC) Dual Use Technology Development Program to provide capabilities for SSC's BMCS configuration management system, which uses Promis*e. The External Data program enables the storage and management of information in an external database linked to a drawing; changes can be made either in the database or on the drawing. Information that originates outside Promis*e can be stored in custom fields added to the database. Although this information is not available in printed Promis*e drawings, it can be associated with symbols in the drawings and retrieved through the drawings while the software is running. The Attribute Hyperlink program enables hyperlink information to be added as attributes of symbols, forming a direct hyperlink between a schematic diagram and an Internet site or a file on a compact disc, on the user's hard drive, or on another computer on the user's network. The user can then obtain information directly related to the part (e.g., maintenance or troubleshooting information) associated with the hyperlink.
An editor for pathway drawing and data visualization in the Biopathways Workbench.
Byrnes, Robert W; Cotter, Dawn; Maer, Andreia; Li, Joshua; Nadeau, David; Subramaniam, Shankar
2009-10-02
Pathway models serve as the basis for much of systems biology. They are often built using programs designed for the purpose. Constructing new models generally requires simultaneous access to experimental data of diverse types, to databases of well-characterized biological compounds and molecular intermediates, and to reference model pathways. However, few if any software applications provide all such capabilities within a single user interface. The Pathway Editor is a program written in the Java programming language that allows de novo pathway creation and downloading of LIPID MAPS (Lipid Metabolites and Pathways Strategy) and KEGG lipid metabolic pathways, as well as of measured time-dependent changes to lipid components of metabolism. Accessed through Java Web Start, the program downloads pathways from the LIPID MAPS Pathway database as well as from the LIPID MAPS web server, http://www.lipidmaps.org. Data arise from metabolomic (lipidomic), microarray, and protein array experiments performed by the LIPID MAPS consortium of laboratories and are arranged by experiment. Facilities are provided to create, connect, and annotate nodes and processes on a drawing panel with reference to database objects and time-course data. Node and interaction layout, as well as data display, may be configured in pathway diagrams as desired. Users may extend diagrams, and may also read and write data and non-lipidomic KEGG pathways to and from files. Pathway diagrams in XML format, containing database identifiers referencing specific compounds and experiments, can be saved to a local file for subsequent use. The program is built upon a library of classes, referred to as the Biopathways Workbench, that convert between different file formats and database objects. An example of this feature is read/construct/write access to SBML (Systems Biology Markup Language) models contained in the local file system.
Inclusion of access to multiple experimental data types and of pathway diagrams within a single interface, automatic updating through connectivity to an online database, and a focus on annotation, including reference to standardized lipid nomenclature as well as common lipid names, supports the view that the Pathway Editor represents a significant, practicable contribution to current pathway modeling tools.
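The XML pathway diagrams described above (nodes annotated with database identifiers, plus processes connecting them) can be illustrated with a minimal serializer. The element and attribute names here are assumptions for illustration, not the Pathway Editor's actual schema, and the sketch is in Python rather than the program's Java.

```python
import xml.etree.ElementTree as ET

def pathway_to_xml(name, nodes, edges):
    """Serialize a pathway as XML; each node's 'ref' stands in for a
    database identifier (e.g. a LIPID MAPS compound ID)."""
    root = ET.Element("pathway", name=name)
    for node_id, ref in nodes:
        ET.SubElement(root, "node", id=node_id, ref=ref)
    for source, target in edges:
        ET.SubElement(root, "process", source=source, target=target)
    return ET.tostring(root, encoding="unicode")
```

Because the diagram stores identifiers rather than copies of the data, reopening a saved file can re-query the database for current compound and experiment records.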
ERIC Educational Resources Information Center
Kern, Joanne F.
The lack of opportunity for high school sophomores to learn database searching was addressed by the implementation of a computerized magazine article search program. "Reader's Guide to Periodical Literature" on CD-ROM was used to train students in database searching during the time they were assigned to the library to do research papers…
ERIC Educational Resources Information Center
George, Carole A.
This document describes a study that designed, developed, and evaluated the Pennsylvania school-district database program for use by educational decision makers. The database contains current information developed from data provided by the Pennsylvania Department of Education and describes each of the 500 active school districts in the state. PEP…
ERIC Educational Resources Information Center
National Center on Outcomes Research, Council on Quality and Leadership, Towson, MD.
This report describes the genesis, definition and use of the Personal Outcomes database, a database designed to assess whether programs and services are being effective in helping individuals with disabilities. The database is based on 25 outcome measures in seven domains, including: (1) identity, which is designed to provide a sense of how people…
West Virginia yellow-poplar lumber defect database
Lawrence E. Osborn; Charles J. Gatchell; Curt C. Hassler; Curt C. Hassler
1992-01-01
Describes the data collection methods and the format of the new West Virginia yellow-poplar lumber defect database that was developed for use with computer simulation programs. The database contains descriptions of 627 boards, totaling approximately 3,800 board feet, collected in West Virginia in grades FAS, FAS1F, No. 1 Common, No. 2A Common, and No. 2B Common. The...
NUCFRG2: An evaluation of the semiempirical nuclear fragmentation database
NASA Technical Reports Server (NTRS)
Wilson, J. W.; Tripathi, R. K.; Cucinotta, F. A.; Shinn, J. L.; Badavi, F. F.; Chun, S. Y.; Norbury, J. W.; Zeitlin, C. J.; Heilbronn, L.; Miller, J.
1995-01-01
A semiempirical abrasion-ablation model has been successful in generating a large nuclear database for the study of high charge and energy (HZE) ion beams, radiation physics, and galactic cosmic ray shielding. The cross sections that are generated are compared with measured HZE fragmentation data from various experimental groups. A research program for improvement of the database generator is also discussed.
Code of Federal Regulations, 2012 CFR
2012-07-01
... online Vendor Information Pages database forms at http://www.VetBiz.gov, and has been examined by VA's Center for Veterans Enterprise. Such businesses appear in the VIP database as “verified.” (b) Good... database and notify the business by phone and mail. Whenever CVE determines that the applicant submitted...
Code of Federal Regulations, 2011 CFR
2011-07-01
... online Vendor Information Pages database forms at http://www.VetBiz.gov, and has been examined by VA's Center for Veterans Enterprise. Such businesses appear in the VIP database as “verified.” (b) Good... database and notify the business by phone and mail. Whenever CVE determines that the applicant submitted...
Code of Federal Regulations, 2013 CFR
2013-07-01
... online Vendor Information Pages database forms at http://www.VetBiz.gov, and has been examined by VA's Center for Veterans Enterprise. Such businesses appear in the VIP database as “verified.” (b) Good... database and notify the business by phone and mail. Whenever CVE determines that the applicant submitted...
Code of Federal Regulations, 2014 CFR
2014-07-01
... online Vendor Information Pages database forms at http://www.VetBiz.gov, and has been examined by VA's Center for Veterans Enterprise. Such businesses appear in the VIP database as “verified.” (b) Good... database and notify the business by phone and mail. Whenever CVE determines that the applicant submitted...
Code of Federal Regulations, 2010 CFR
2010-07-01
... online Vendor Information Pages database forms at http://www.VetBiz.gov, and has been examined by VA's Center for Veterans Enterprise. Such businesses appear in the VIP database as “verified.” (b) Good... database and notify the business by phone and mail. Whenever CVE determines that the applicant submitted...
ERIC Educational Resources Information Center
Breit-Smith, Allison; Cabell, Sonia Q.; Justice, Laura M.
2010-01-01
Purpose: The present article illustrates how the National Household Education Surveys (NHES; U.S. Department of Education, 2009) database might be used to address questions of relevance to researchers who are concerned with literacy development among young children. Following a general description of the NHES database, a study is provided that…
Building a genome database using an object-oriented approach.
Barbasiewicz, Anna; Liu, Lin; Lang, B Franz; Burger, Gertraud
2002-01-01
GOBASE is a relational database that integrates data associated with mitochondria and chloroplasts. The most important data in GOBASE, i.e., molecular sequences and taxonomic information, are obtained from the public sequence data repository at the National Center for Biotechnology Information (NCBI), and are validated by our experts. Maintaining a curated genomic database comes with a towering labor cost, due to the sheer volume of available genomic sequences and the plethora of annotation errors and omissions in records retrieved from public repositories. Here we describe our approach to increasing automation of the database population process, thereby reducing manual intervention. As a first step, we used the Unified Modeling Language (UML) to construct a list of potential errors. Each case was evaluated independently, and an expert solution was devised and represented as a diagram. Subsequently, the UML diagrams were used as templates for writing object-oriented automation programs in the Java programming language.
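A rule-based validation pass of the kind those UML diagrams encode might look like the following in outline. The field names and rules are illustrative assumptions, and the sketch is in Python rather than the Java used by the GOBASE automation programs.

```python
# Toy validation pass over one genome-record dict fetched from a public
# repository. Field names and rules are assumed for illustration.
def validate_record(record):
    """Return a list of problems found in the record (empty if clean)."""
    problems = []
    seq = record.get("sequence", "")
    if not seq:
        problems.append("missing sequence")
    elif set(seq.upper()) - set("ACGTN"):
        problems.append("non-nucleotide characters in sequence")
    if not record.get("taxon"):
        problems.append("missing taxonomic information")
    if record.get("genome") not in {"mitochondrion", "chloroplast"}:
        problems.append("organelle not mitochondrion or chloroplast")
    return problems
```

Records that pass every rule can be loaded automatically; the rest are queued for a curator, which is how such checks reduce manual intervention without removing expert review.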
ADASS Web Database XML Project
NASA Astrophysics Data System (ADS)
Barg, M. I.; Stobie, E. B.; Ferro, A. J.; O'Neil, E. J.
In the spring of 2000, at the request of the ADASS Program Organizing Committee (POC), we began organizing information from previous ADASS conferences in an effort to create a centralized database. The beginnings of this database originated from data (invited speakers, participants, papers, etc.) extracted from HyperText Markup Language (HTML) documents from past ADASS host sites. Unfortunately, not all HTML documents are well formed, and parsing them proved to be an iterative process. It was evident from the beginning that if these Web documents were organized in a standardized way, such as XML (Extensible Markup Language), the processing of this information across the Web could be made automated, more efficient, and less error-prone. This paper will briefly review the many programming tools available for processing XML, including Java, Perl and Python, and will explore the mapping of relational data from our MySQL database to XML.
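The relational-to-XML mapping the paper explores can be illustrated with a minimal export routine that turns rows into elements. The tag names and row layout below are assumptions for illustration, not the project's actual schema.

```python
import xml.etree.ElementTree as ET

def rows_to_xml(table, rows):
    """Map relational rows (dicts, as fetched from e.g. MySQL) to a flat
    XML document with one <record> element per row."""
    root = ET.Element(table)
    for row in rows:
        rec = ET.SubElement(root, "record")
        for column, value in row.items():
            ET.SubElement(rec, column).text = str(value)
    return ET.tostring(root, encoding="unicode")
```

Once conference data are in a form like this, downstream sites can parse them with any standard XML toolkit instead of scraping hand-written HTML.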
Database resources of the National Center for Biotechnology Information
Sayers, Eric W.; Barrett, Tanya; Benson, Dennis A.; Bolton, Evan; Bryant, Stephen H.; Canese, Kathi; Chetvernin, Vyacheslav; Church, Deanna M.; DiCuccio, Michael; Federhen, Scott; Feolo, Michael; Fingerman, Ian M.; Geer, Lewis Y.; Helmberg, Wolfgang; Kapustin, Yuri; Krasnov, Sergey; Landsman, David; Lipman, David J.; Lu, Zhiyong; Madden, Thomas L.; Madej, Tom; Maglott, Donna R.; Marchler-Bauer, Aron; Miller, Vadim; Karsch-Mizrachi, Ilene; Ostell, James; Panchenko, Anna; Phan, Lon; Pruitt, Kim D.; Schuler, Gregory D.; Sequeira, Edwin; Sherry, Stephen T.; Shumway, Martin; Sirotkin, Karl; Slotta, Douglas; Souvorov, Alexandre; Starchenko, Grigory; Tatusova, Tatiana A.; Wagner, Lukas; Wang, Yanli; Wilbur, W. John; Yaschenko, Eugene; Ye, Jian
2012-01-01
In addition to maintaining the GenBank® nucleic acid sequence database, the National Center for Biotechnology Information (NCBI) provides analysis and retrieval resources for the data in GenBank and other biological data made available through the NCBI Website. NCBI resources include Entrez, the Entrez Programming Utilities, MyNCBI, PubMed, PubMed Central (PMC), Gene, the NCBI Taxonomy Browser, BLAST, BLAST Link (BLink), Primer-BLAST, COBALT, Splign, RefSeq, UniGene, HomoloGene, ProtEST, dbMHC, dbSNP, dbVar, Epigenomics, Genome and related tools, the Map Viewer, Model Maker, Evidence Viewer, Trace Archive, Sequence Read Archive, BioProject, BioSample, Retroviral Genotyping Tools, HIV-1/Human Protein Interaction Database, Gene Expression Omnibus (GEO), Probe, Online Mendelian Inheritance in Animals (OMIA), the Molecular Modeling Database (MMDB), the Conserved Domain Database (CDD), the Conserved Domain Architecture Retrieval Tool (CDART), Biosystems, Protein Clusters and the PubChem suite of small molecule databases. Augmenting many of the Web applications are custom implementations of the BLAST program optimized to search specialized data sets. All of these resources can be accessed through the NCBI home page at www.ncbi.nlm.nih.gov. PMID:22140104
Database resources of the National Center for Biotechnology Information
2013-01-01
In addition to maintaining the GenBank® nucleic acid sequence database, the National Center for Biotechnology Information (NCBI, http://www.ncbi.nlm.nih.gov) provides analysis and retrieval resources for the data in GenBank and other biological data made available through the NCBI web site. NCBI resources include Entrez, the Entrez Programming Utilities, MyNCBI, PubMed, PubMed Central, Gene, the NCBI Taxonomy Browser, BLAST, BLAST Link (BLink), Primer-BLAST, COBALT, Splign, RefSeq, UniGene, HomoloGene, ProtEST, dbMHC, dbSNP, dbVar, Epigenomics, the Genetic Testing Registry, Genome and related tools, the Map Viewer, Model Maker, Evidence Viewer, Trace Archive, Sequence Read Archive, BioProject, BioSample, Retroviral Genotyping Tools, HIV-1/Human Protein Interaction Database, Gene Expression Omnibus, Probe, Online Mendelian Inheritance in Animals, the Molecular Modeling Database, the Conserved Domain Database, the Conserved Domain Architecture Retrieval Tool, Biosystems, Protein Clusters and the PubChem suite of small molecule databases. Augmenting many of the web applications are custom implementations of the BLAST program optimized to search specialized data sets. All of these resources can be accessed through the NCBI home page. PMID:23193264
Database resources of the National Center for Biotechnology Information.
Wheeler, David L; Barrett, Tanya; Benson, Dennis A; Bryant, Stephen H; Canese, Kathi; Chetvernin, Vyacheslav; Church, Deanna M; DiCuccio, Michael; Edgar, Ron; Federhen, Scott; Geer, Lewis Y; Kapustin, Yuri; Khovayko, Oleg; Landsman, David; Lipman, David J; Madden, Thomas L; Maglott, Donna R; Ostell, James; Miller, Vadim; Pruitt, Kim D; Schuler, Gregory D; Sequeira, Edwin; Sherry, Steven T; Sirotkin, Karl; Souvorov, Alexandre; Starchenko, Grigory; Tatusov, Roman L; Tatusova, Tatiana A; Wagner, Lukas; Yaschenko, Eugene
2007-01-01
In addition to maintaining the GenBank nucleic acid sequence database, the National Center for Biotechnology Information (NCBI) provides analysis and retrieval resources for the data in GenBank and other biological data made available through NCBI's Web site. NCBI resources include Entrez, the Entrez Programming Utilities, My NCBI, PubMed, PubMed Central, Entrez Gene, the NCBI Taxonomy Browser, BLAST, BLAST Link(BLink), Electronic PCR, OrfFinder, Spidey, Splign, RefSeq, UniGene, HomoloGene, ProtEST, dbMHC, dbSNP, Cancer Chromosomes, Entrez Genome, Genome Project and related tools, the Trace and Assembly Archives, the Map Viewer, Model Maker, Evidence Viewer, Clusters of Orthologous Groups (COGs), Viral Genotyping Tools, Influenza Viral Resources, HIV-1/Human Protein Interaction Database, Gene Expression Omnibus (GEO), Entrez Probe, GENSAT, Online Mendelian Inheritance in Man (OMIM), Online Mendelian Inheritance in Animals (OMIA), the Molecular Modeling Database (MMDB), the Conserved Domain Database (CDD), the Conserved Domain Architecture Retrieval Tool (CDART) and the PubChem suite of small molecule databases. Augmenting many of the Web applications are custom implementations of the BLAST program optimized to search specialized data sets. These resources can be accessed through the NCBI home page at www.ncbi.nlm.nih.gov.
Database resources of the National Center for Biotechnology Information.
Sayers, Eric W; Barrett, Tanya; Benson, Dennis A; Bryant, Stephen H; Canese, Kathi; Chetvernin, Vyacheslav; Church, Deanna M; DiCuccio, Michael; Edgar, Ron; Federhen, Scott; Feolo, Michael; Geer, Lewis Y; Helmberg, Wolfgang; Kapustin, Yuri; Landsman, David; Lipman, David J; Madden, Thomas L; Maglott, Donna R; Miller, Vadim; Mizrachi, Ilene; Ostell, James; Pruitt, Kim D; Schuler, Gregory D; Sequeira, Edwin; Sherry, Stephen T; Shumway, Martin; Sirotkin, Karl; Souvorov, Alexandre; Starchenko, Grigory; Tatusova, Tatiana A; Wagner, Lukas; Yaschenko, Eugene; Ye, Jian
2009-01-01
In addition to maintaining the GenBank nucleic acid sequence database, the National Center for Biotechnology Information (NCBI) provides analysis and retrieval resources for the data in GenBank and other biological data made available through the NCBI web site. NCBI resources include Entrez, the Entrez Programming Utilities, MyNCBI, PubMed, PubMed Central, Entrez Gene, the NCBI Taxonomy Browser, BLAST, BLAST Link (BLink), Electronic PCR, OrfFinder, Spidey, Splign, RefSeq, UniGene, HomoloGene, ProtEST, dbMHC, dbSNP, Cancer Chromosomes, Entrez Genomes and related tools, the Map Viewer, Model Maker, Evidence Viewer, Clusters of Orthologous Groups (COGs), Retroviral Genotyping Tools, HIV-1/Human Protein Interaction Database, Gene Expression Omnibus (GEO), Entrez Probe, GENSAT, Online Mendelian Inheritance in Man (OMIM), Online Mendelian Inheritance in Animals (OMIA), the Molecular Modeling Database (MMDB), the Conserved Domain Database (CDD), the Conserved Domain Architecture Retrieval Tool (CDART) and the PubChem suite of small molecule databases. Augmenting many of the web applications are custom implementations of the BLAST program optimized to search specialized data sets. All of the resources can be accessed through the NCBI home page at www.ncbi.nlm.nih.gov.
Relationship marketing in health care.
Wagner, H C; Fleming, D; Mangold, W G; LaForge, R W
1994-01-01
Building relationships with patients is critical to the success of many health care organizations. The authors profile the relationship marketing program for a hospital's cardiac center and discuss the key strategic aspects that account for its success: a focus on a specific hospital service, an integrated marketing communication strategy, a specially designed database, and the continuous tracking of results.
Mentoring Program Enhancements Supporting Effective Mentoring of Children of Incarcerated Parents.
Stump, Kathryn N; Kupersmidt, Janis B; Stelter, Rebecca L; Rhodes, Jean E
2018-04-26
Children of incarcerated parents (COIP) are at risk for a range of negative outcomes; however, participating in a mentoring relationship can be a promising intervention for these youth. This study examined the impact of mentoring and mentoring program enhancements on COIP. Secondary data analyses were conducted on an archival database consisting of 70,729 matches from 216 Big Brothers Big Sisters (BBBS) local agencies to establish the differential effects of mentoring on COIP. A subset of 45 BBBS agencies, representing 25,252 matches, participated in a telephone interview about program enhancements for better serving COIP. Results revealed that enhanced program practices, including having specific program goals, providing specialized mentor training, and receiving additional funding resulted in better outcomes for COIP matches. Specifically, specialized mentor training and receiving additional funding for serving matches containing COIP were associated with longer and stronger matches. Having specific goals for serving COIP was associated with higher educational expectations in COIP. Results are discussed in terms of benefits of a relationship-based intervention for addressing the needs of COIP and suggestions for program improvements when mentoring programs are serving this unique population of youth. © Society for Community Research and Action 2018.
d'Acierno, Antonio; Facchiano, Angelo; Marabotti, Anna
2009-06-01
We describe the GALT-Prot database and its related web-based application, developed to collect information about the structural and functional effects of mutations on the human enzyme galactose-1-phosphate uridyltransferase (GALT), which is involved in the genetic disease named galactosemia type I. Besides a list of missense mutations at the gene and protein sequence levels, GALT-Prot reports the results of analyses of mutant GALT structures. In addition to structural information about the wild-type enzyme, the database also includes structures of over 100 single-point mutants simulated by means of a computational procedure, and each mutant was analyzed with several bioinformatics programs in order to investigate the effect of the mutations. The web-based interface allows querying of the database, and several links are provided in order to guarantee a high degree of integration with other resources already present on the web. Moreover, the architecture of the database and the web application is flexible and can be easily adapted to store data related to other proteins with point mutations. GALT-Prot is freely available at http://bioinformatica.isa.cnr.it/GALT/.
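A query over missense-mutation records of the kind GALT-Prot exposes might be sketched as follows. The record fields are illustrative assumptions rather than the database's actual schema, though Q188R and K285N are real, well-known GALT mutations.

```python
# Toy store of missense-mutation records queried by residue position.
# Field names are assumed; they are not the GALT-Prot schema.
MUTATIONS = [
    {"protein_change": "Q188R", "position": 188, "note": "classic galactosemia allele"},
    {"protein_change": "K285N", "position": 285, "note": "classic galactosemia allele"},
]

def mutations_in_range(first, last):
    """Return protein-level changes with residue position in [first, last]."""
    return [m["protein_change"] for m in MUTATIONS
            if first <= m["position"] <= last]
```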
Neural Network Modeling of UH-60A Pilot Vibration
NASA Technical Reports Server (NTRS)
Kottapalli, Sesi
2003-01-01
Full-scale flight-test pilot floor vibration is modeled using neural networks and full-scale wind tunnel test data for low-speed level flight conditions. Neural network connections between the wind tunnel test data and the three flight-test pilot vibration components (vertical, lateral, and longitudinal) are studied. Two full-scale UH-60A Black Hawk databases are used. The first database is the NASA/Army UH-60A Airloads Program flight test database. The second database is the UH-60A rotor-only wind tunnel database that was acquired in the NASA Ames 80- by 120-Foot Wind Tunnel with the Large Rotor Test Apparatus (LRTA). Using neural networks, the flight-test pilot vibration is modeled using the wind tunnel rotating-system hub accelerations, and separately, using the hub loads. The results show that the wind tunnel rotating-system hub accelerations and the operating parameters can represent the flight test pilot vibration. The six components of the wind tunnel N/rev balance-system hub loads and the operating parameters can also represent the flight test pilot vibration. The present neural network connections can significantly increase the value of wind tunnel testing.
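As a stand-in for the paper's neural networks, a linear least-mean-squares fit shows the shape of the mapping problem: several hub-load inputs driving one vibration output. Everything below is synthetic and illustrative; it is not the paper's model, flight data, or wind-tunnel data.

```python
import random

# Fit a linear map from six (synthetic) hub-load components to one
# vibration component by stochastic gradient descent (the LMS rule).
random.seed(0)
TRUE_W = [0.5, -1.2, 0.3, 0.8, -0.4, 1.1]   # hidden "true" relation

def sample():
    x = [random.uniform(-1, 1) for _ in range(6)]
    y = sum(w * xi for w, xi in zip(TRUE_W, x))
    return x, y

def fit(steps=2000, lr=0.1):
    w = [0.0] * 6
    for _ in range(steps):
        x, y = sample()
        err = sum(wi * xi for wi, xi in zip(w, x)) - y
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w

weights = fit()
```

A neural network replaces this linear map with a nonlinear one, which is what lets it capture vibration behavior that no single linear relation can.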
Web-based flood database for Colorado, water years 1867 through 2011
Kohn, Michael S.; Jarrett, Robert D.; Krammes, Gary S.; Mommandi, Amanullah
2013-01-01
In order to provide a centralized repository of flood information for the State of Colorado, the U.S. Geological Survey, in cooperation with the Colorado Department of Transportation, created a Web-based geodatabase for flood information from water years 1867 through 2011 and data for paleofloods occurring in the past 5,000 to 10,000 years. The geodatabase was created using the Environmental Systems Research Institute ArcGIS JavaScript Application Programming Interface 3.2. The database can be accessed at http://cwscpublic2.cr.usgs.gov/projects/coflood/COFloodMap.html. Data on 6,767 flood events at 1,597 individual sites throughout Colorado were compiled to generate the flood database. The data sources of flood information are indirect discharge measurements that were stored in U.S. Geological Survey offices (water years 1867–2011), flood data from indirect discharge measurements referenced in U.S. Geological Survey reports (water years 1884–2011), paleoflood studies from six peer-reviewed journal articles (data on events occurring in the past 5,000 to 10,000 years), and the U.S. Geological Survey National Water Information System peak-discharge database (water years 1883–2010). A number of tests were performed on the flood database to ensure the quality of the data. The Web interface was programmed using the Environmental Systems Research Institute ArcGIS JavaScript Application Programming Interface 3.2, which allows for display, query, georeference, and export of the data in the flood database. The data fields in the flood database used to search and filter the database include hydrologic unit code, U.S. Geological Survey station number, site name, county, drainage area, elevation, data source, date of flood, peak discharge, and field method used to determine discharge. Additional data fields can be viewed and exported, but the data fields described above are the only ones that can be used for queries.
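The field-based queries the web interface supports can be sketched as a simple filter over records. The records and field names below are invented for illustration; they are not taken from the flood database.

```python
# Toy flood records; the real database filters on fields such as county,
# date of flood, and peak discharge (values here are made up).
FLOODS = [
    {"site": "A", "county": "Larimer", "water_year": 1976, "peak_cfs": 8000},
    {"site": "B", "county": "Larimer", "water_year": 1904, "peak_cfs": 1200},
    {"site": "C", "county": "Boulder", "water_year": 1894, "peak_cfs": 9500},
]

def query(county=None, min_peak=None):
    """Return site IDs matching the given filters (None means no filter)."""
    out = FLOODS
    if county is not None:
        out = [r for r in out if r["county"] == county]
    if min_peak is not None:
        out = [r for r in out if r["peak_cfs"] >= min_peak]
    return [r["site"] for r in out]
```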
Content and Accessibility of Shoulder and Elbow Fellowship Web Sites in the United States.
Young, Bradley L; Oladeji, Lasun O; Cichos, Kyle; Ponce, Brent
2016-01-01
Increasing numbers of training physicians are using the Internet to gather information about graduate medical education programs. The content and accessibility of web sites that provide this information have been demonstrated to influence applicants' decisions. Assessments of orthopedic fellowship web sites including sports medicine, pediatrics, hand and spine have found varying degrees of accessibility and material. The purpose of this study was to evaluate the accessibility and content of the American Shoulder and Elbow Surgeons (ASES) fellowship web sites (SEFWs). A complete list of ASES programs was obtained from a database on the ASES web site. The accessibility of each SEFWs was assessed by the existence of a functioning link found in the database and through Google®. Then, the following content areas of each SEFWs were evaluated: fellow education, faculty/previous fellow information, and recruitment. At the time of the study, 17 of the 28 (60.7%) ASES programs had web sites accessible through Google®, and only five (17.9%) had functioning links in the ASES database. Nine programs lacked a web site. Concerning web site content, the majority of SEFWs contained information regarding research opportunities, research requirements, case descriptions, meetings and conferences, teaching responsibilities, attending faculty, the application process, and a program description. Fewer than half of the SEFWs provided information regarding rotation schedules, current fellows, previous fellows, on-call expectations, journal clubs, medical school of current fellows, residency of current fellows, employment of previous fellows, current research, and previous research. A large portion of ASES fellowship programs lacked functioning web sites, and even fewer provided functioning links through the ASES database. Valuable information for potential applicants was largely inadequate across present SEFWs.
Overview of Faculty Development Programs for Interprofessional Education
Zorek, Joseph A.; Meyer, Susan M.
2017-01-01
Objectives. To describe characteristics of faculty development programs designed to facilitate interprofessional education, and to compile recommendations for development, delivery, and assessment of such faculty development programs. Methods. MEDLINE, CINAHL, ERIC, and Web of Science databases were searched using three keywords: faculty development, interprofessional education, and health professions. Articles meeting inclusion criteria were analyzed for emergent themes, including program design, delivery, participants, resources, and assessment. Results. Seventeen articles were identified for inclusion, yielding five characteristics of a successful program: institutional support; objectives and outcomes based on interprofessional competencies; focus on consensus-building and group facilitation skills; flexibility based on institution- and participant-specific characteristics; and incorporation of an assessment strategy. Conclusion. The themes and characteristics identified in this literature overview may support development of faculty development programs for interprofessional education. An advanced evidence base for interprofessional education faculty development programs is needed. PMID:28720924
ACToR Chemical Structure processing using Open Source ...
ACToR (Aggregated Computational Toxicology Resource) is a centralized database repository developed by the National Center for Computational Toxicology (NCCT) at the U.S. Environmental Protection Agency (EPA). Free and open source tools were used to compile toxicity data from over 1,950 public sources. ACToR contains chemical structure information and toxicological data for over 558,000 unique chemicals. The database primarily includes data from NCCT research programs: in vivo toxicity data from ToxRef, human exposure data from ExpoCast, high-throughput screening data from ToxCast, and high-quality chemical structure information from the EPA DSSTox program. The DSSTox database is a chemical structure inventory for the NCCT programs and currently has about 16,000 unique structures. Data from PubChem, ChemSpider, USDA, FDA, NIH and several other public data sources are also included. ACToR has been a resource to various international and national research groups. Most of our recent efforts on ACToR are focused on improving the structural identifiers and physicochemical properties of the chemicals in the database. Organizing this huge collection of data and improving the chemical structure quality of the database has posed some major challenges. Workflows have been developed to process structures, calculate chemical properties and identify relationships between CAS numbers. The structure processing workflow integrates web services (PubChem and NIH NCI Cactus) to d
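One standard identifier-hygiene step that such a structure-processing workflow could include is validating the check digit of each CAS Registry Number. The rule implemented below is the published CAS algorithm; its use here is only an assumed example of the workflow's CAS handling, not EPA's actual code.

```python
def cas_is_valid(cas):
    """Validate a CAS Registry Number's check digit (e.g. '7732-18-5').

    The check digit equals the sum of the other digits, each multiplied
    by its position counted from the right, modulo 10."""
    digits = cas.replace("-", "")
    if not digits.isdigit() or len(digits) < 5:
        return False
    body, check = digits[:-1], int(digits[-1])
    total = sum(int(d) * i for i, d in enumerate(reversed(body), start=1))
    return total % 10 == check
```

Screening half a million identifiers with a check like this catches transcription errors before any structure lookup is attempted.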
NASA Astrophysics Data System (ADS)
Hermanns, R. L.; Zentel, K.-O.; Wenzel, F.; Hövel, M.; Hesse, A.
In order to benefit from synergies and to avoid replication in the field of disaster reduction programs and related scientific projects, it is important to create an overview of the state of the art, the fields of activity, and their key aspects. The German Committee for Disaster Reduction therefore intends to document projects and institutions related to natural disaster prevention in three databases. One database is designed to document scientific programs and projects related to natural hazards. In a first step, data acquisition concentrated on projects carried out by German institutions; in a second step, projects from all other European countries will be archived. The second database focuses on projects on early-warning systems and has no regional limit. Data mining started in November 2001 and will be finished soon. The third database documents operational projects dealing with disaster prevention and concentrates on international projects or internationally funded projects. These databases will be available on the internet by the end of spring 2002 (http://www.dkkv.org) and will be updated continuously. They will allow rapid and concise access to information on various international projects, provide up-to-date descriptions, and facilitate exchange, as all relevant information, including contact addresses, is available to the public. The aim of this contribution is to present the concepts and the work done so far, to invite participation, and to contact other organizations with similar objectives.
Large scale database scrubbing using object oriented software components.
Herting, R L; Barnes, M R
1998-01-01
Now that case managers, quality improvement teams, and researchers use medical databases extensively, the ability to share and disseminate such databases while maintaining patient confidentiality is paramount. A process called scrubbing addresses this problem by removing personally identifying information while keeping the integrity of the medical information intact. Scrubbing entire databases, containing multiple tables, requires that the implicit relationships between data elements in different tables of the database be maintained. To address this issue we developed DBScrub, a Java program that interfaces with any JDBC compliant database and scrubs the database while maintaining the implicit relationships within it. DBScrub uses a small number of highly configurable object-oriented software components to carry out the scrubbing. We describe the structure of these software components and how they maintain the implicit relationships within the database.
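The core idea in the abstract above, replacing identifying values consistently across every table so that implicit relationships survive, can be sketched as follows. This is an illustrative sketch in Python with SQLite, not DBScrub's Java implementation; the table and column names are invented. Each distinct identifier, wherever it occurs, is mapped to the same opaque pseudonym.

```python
import sqlite3

def scrub(conn, columns):
    """Replace identifier values with stable pseudonyms.

    columns: list of (table, column) pairs that hold the identifier.
    The same real value always maps to the same pseudonym, so joins
    across tables still work after scrubbing.
    """
    pseudonyms = {}  # real identifier -> surrogate

    def pseudonym(value):
        if value not in pseudonyms:
            pseudonyms[value] = "PT%04d" % (len(pseudonyms) + 1)
        return pseudonyms[value]

    cur = conn.cursor()
    for table, column in columns:
        # NOTE: table/column names are interpolated for brevity only;
        # real code must validate them against a whitelist.
        rows = cur.execute(
            "SELECT DISTINCT %s FROM %s" % (column, table)).fetchall()
        for (value,) in rows:
            cur.execute(
                "UPDATE %s SET %s = ? WHERE %s = ?" % (table, column, column),
                (pseudonym(value), value))
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE visits (patient TEXT, ward TEXT);
    CREATE TABLE labs   (patient TEXT, test TEXT);
    INSERT INTO visits VALUES ('Alice Smith', 'ICU');
    INSERT INTO labs   VALUES ('Alice Smith', 'CBC');
""")
scrub(conn, [("visits", "patient"), ("labs", "patient")])
a = conn.execute("SELECT patient FROM visits").fetchone()[0]
b = conn.execute("SELECT patient FROM labs").fetchone()[0]
print(a, a == b)  # same pseudonym appears in both tables
```

The important property, and the one the paper emphasizes, is that the mapping is applied database-wide rather than per table.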
Research and Design of Embedded Wireless Meal Ordering System Based on SQLite
NASA Astrophysics Data System (ADS)
Zhang, Jihong; Chen, Xiaoquan
The paper describes the features, internal architecture, and development methods of SQLite, and then presents the design and implementation of a meal ordering system. The system realizes information interaction between users and embedded devices, with SQLite as the database system. The embedded SQLite database manages the data, and wireless communication is achieved using Bluetooth. A system program based on Qt/Embedded and Linux drivers realizes local management of the environmental data.
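The ordering workflow described above can be illustrated with a minimal sketch. The schema and function names here are assumptions for illustration, not the paper's actual design; SQLite's embeddability is the point, so the same SQL would run unchanged on the device.

```python
import sqlite3

# On an embedded device this would be a file-backed database.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE orders (
    id       INTEGER PRIMARY KEY AUTOINCREMENT,
    table_no INTEGER NOT NULL,
    dish     TEXT    NOT NULL,
    qty      INTEGER NOT NULL DEFAULT 1)""")

def place_order(table_no, dish, qty=1):
    # In the paper's system the request would arrive over Bluetooth;
    # here we just insert it directly.
    conn.execute("INSERT INTO orders (table_no, dish, qty) VALUES (?, ?, ?)",
                 (table_no, dish, qty))
    conn.commit()

place_order(7, "fried rice", 2)
place_order(7, "soup")
rows = conn.execute(
    "SELECT dish, qty FROM orders WHERE table_no = ?", (7,)).fetchall()
print(rows)  # [('fried rice', 2), ('soup', 1)]
```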
Enhanced DIII-D Data Management Through a Relational Database
NASA Astrophysics Data System (ADS)
Burruss, J. R.; Peng, Q.; Schachter, J.; Schissel, D. P.; Terpstra, T. B.
2000-10-01
A relational database is being used to serve data about DIII-D experiments. The database is optimized for queries across multiple shots, allowing rapid data mining by SQL-literate researchers. The relational database relates different experiments and datasets, thus providing a big picture of DIII-D operations. Users are encouraged to add their own tables to the database. Summary physics quantities about DIII-D discharges are collected and stored in the database automatically. Metadata about code runs, MDSplus usage, and visualization tool usage are collected, stored in the database, and later analyzed to improve computing. Data in the database may be accessed through programming languages such as C, Java, and IDL, or through ODBC-compliant applications such as Excel and Access. A database-driven web page also provides a convenient means of viewing database quantities through the World Wide Web. Demonstrations will be given at the poster.
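A cross-shot query of the kind the abstract describes might look like the following. The `shots` table and its columns are invented for illustration and are not DIII-D's actual schema; SQLite stands in for the production database.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE shots (shot INTEGER PRIMARY KEY, ip_mega_amps REAL);
    INSERT INTO shots VALUES (100001, 1.2), (100002, 0.8), (100003, 1.5);
""")
# Data mining across shots: find all discharges whose plasma current
# exceeded a threshold, something a per-shot data store makes awkward.
high = conn.execute(
    "SELECT shot FROM shots WHERE ip_mega_amps > 1.0 ORDER BY shot").fetchall()
result = [s for (s,) in high]
print(result)  # [100001, 100003]
```

The value of the relational layout is exactly this: one SQL statement ranges over every shot instead of looping over per-shot files.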
Comprehensive Routing Security Development and Deployment for the Internet
2015-02-01
RPSTIR depends on several other open source packages:
• MySQL: a widely used and popular open source database package, used for the local RPKI database cache. It was chosen for database support because of its ongoing feature enhancement and bug fixes.
• OpenSSL: used for cryptographic libraries for X.509 certificates.
• ODBC MySQL Connector: ODBC (Open Database Connectivity) is a standard programming interface (API) for database access.
The ADAMS interactive interpreter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rietscha, E.R.
1990-12-17
The ADAMS (Advanced DAta Management System) project is exploring next generation database technology. Database management does not follow the usual programming paradigm. Instead, the database dictionary provides an additional name space environment that should be interactively created and tested before writing application code. This document describes the implementation and operation of the ADAMS Interpreter, an interactive interface to the ADAMS data dictionary and runtime system. The Interpreter executes individual statements of the ADAMS Interface Language, providing a fast, interactive mechanism to define and access persistent databases. 5 refs.
Dynamic programming re-ranking for PPI interactor and pair extraction in full-text articles.
Tsai, Richard Tzong-Han; Lai, Po-Ting
2011-02-23
Experimentally verified protein-protein interactions (PPIs) cannot be easily retrieved by researchers unless they are stored in PPI databases. The curation of such databases can be facilitated by employing text-mining systems to identify genes which play the interactor role in PPIs and to map these genes to unique database identifiers (interactor normalization task, or INT), and then to return a list of interaction pairs for each article (interaction pair task, or IPT). These two tasks are evaluated in terms of the area under curve of the interpolated precision/recall (AUC iP/R) score, because the order of identifiers in the output list is important for ease of curation. Our INT system developed for the BioCreAtIvE II.5 INT challenge achieved a promising AUC iP/R of 43.5% by using a support vector machine (SVM)-based ranking procedure. Using our new re-ranking algorithm, we have been able to improve system performance (AUC iP/R) by 1.84%. Our experimental results also show that with the re-ranked INT results, our unsupervised IPT system can achieve a competitive AUC iP/R of 23.86%, which outperforms the best BC II.5 INT system by 1.64%. Compared to using only SVM-ranked INT results, using re-ranked INT results boosts AUC iP/R by 7.84%. t-test results show that our INT/IPT system with re-ranking outperforms the system without re-ranking by a statistically significant margin. In this paper, we present a new re-ranking algorithm that considers co-occurrence among identifiers in an article to improve INT and IPT ranking results. Combining the re-ranked INT results with an unsupervised approach to find associations among interactors, the proposed method can boost IPT performance. We also implement score computation using dynamic programming, which is faster and more efficient than traditional approaches.
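The general shape of co-occurrence re-ranking can be sketched as follows. The scoring formula here is an assumption for illustration, not the paper's algorithm: each identifier's base (SVM) score is boosted in proportion to how often it co-occurs with other highly scored identifiers in the same article.

```python
def rerank(base_scores, cooccur, weight=0.5):
    """Re-rank identifiers by base score plus a co-occurrence boost.

    base_scores: {identifier: svm_score}
    cooccur: {(id_a, id_b): count} with id_a < id_b lexicographically
    """
    def boost(ident):
        return sum(count * base_scores.get(other, 0.0)
                   for (a, b), count in cooccur.items()
                   for other in ((b,) if a == ident
                                 else (a,) if b == ident else ()))
    return sorted(base_scores,
                  key=lambda i: base_scores[i] + weight * boost(i),
                  reverse=True)

# Hypothetical identifiers and scores: Q00987 starts ranked last, but it
# co-occurs three times with the top identifier, so it is promoted.
scores = {"P04637": 0.9, "Q00987": 0.5, "P38398": 0.6}
pairs = {("P04637", "Q00987"): 3}
ranked = rerank(scores, pairs)
print(ranked)  # ['Q00987', 'P04637', 'P38398']
```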
Lazzari, Barbara; Caprera, Andrea; Cestaro, Alessandro; Merelli, Ivan; Del Corvo, Marcello; Fontana, Paolo; Milanesi, Luciano; Velasco, Riccardo; Stella, Alessandra
2009-06-29
Two complete genome sequences are available for Vitis vinifera Pinot noir. Based on the sequence and gene predictions produced by the IASMA, we performed an in silico detection of putative microRNA genes and of their targets, and collected the most reliable microRNA predictions in a web database. The application is available at http://www.itb.cnr.it/ptp/grapemirna/. The program FindMiRNA was used to detect putative microRNA genes in the grape genome. A very high number of predictions was retrieved, calling for validation. Nine parameters were calculated and, based on the grape microRNA dataset available at miRBase, thresholds were defined and applied to FindMiRNA predictions having targets in gene exons. In the resulting subset, predictions were ranked according to precursor positions and sequence similarity, and to target identity. To further validate FindMiRNA predictions, comparisons to the Arabidopsis genome, to the grape Genoscope genome, and to the grape EST collection were performed. Results were stored in a MySQL database and a web interface was prepared to query the database and retrieve predictions of interest. The GrapeMiRNA database encompasses 5,778 microRNA predictions spanning the whole grape genome. Predictions are integrated with information that can be of use in selection procedures. Tools added in the web interface also allow users to inspect predictions according to gene ontology classes and metabolic pathways of targets. The GrapeMiRNA database can be of help in selecting candidate microRNA genes to be validated.
Burnett, Leslie; Barlow-Stewart, Kris; Proos, Anné L; Aizenberg, Harry
2003-05-01
This article describes a generic model for access to samples and information in human genetic databases. The model utilises a "GeneTrustee", a third-party intermediary independent of the subjects and of the investigators or database custodians. The GeneTrustee model has been implemented successfully in various community genetics screening programs and has facilitated research access to genetic databases while protecting the privacy and confidentiality of research subjects. The GeneTrustee model could also be applied to various types of non-conventional genetic databases, including neonatal screening Guthrie card collections, and to forensic DNA samples.
Chesapeake Bay Program Water Quality Database
The Chesapeake Information Management System (CIMS), designed in 1996, is an integrated, accessible information management system for the Chesapeake Bay region. CIMS is an organized, distributed library of information and software tools designed to increase basin-wide public access to Chesapeake Bay information. The information delivered by CIMS includes technical and public information, educational material, environmental indicators, policy documents, and scientific data. Through the use of relational databases, web-based programming, and web-based GIS, a large number of Internet resources have been established, including multiple distributed on-line databases, on-demand graphing and mapping of environmental data, and geographic search tools for environmental information. Also available are baseline monitoring data, summarized data, and environmental indicators that document ecosystem status and trends, confirm linkages between water quality, habitat quality and abundance, and describe the distribution and integrity of biological populations. One of the major features of the CIMS network is the Chesapeake Bay Program's Data Hub, which provides users access to a suite of long-term water quality and living resources databases. Chesapeake Bay mainstem and tidal tributary water quality, benthic macroinvertebrate, toxics, plankton, and fluorescence data can be obtained for a network of over 800 monitoring stations.
20 CFR 411.250 - How will SSA evaluate a PM?
Code of Federal Regulations, 2010 CFR
2010-04-01
... PROGRAM Use of One or More Program Managers To Assist in Administration of the Ticket to Work Program... determine the PM's final rating. (c) These performance evaluations will be made part of our database on...
Data management of a multilaboratory field program using distributed processing. [PRECP
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tichler, J.L.
The PRECP program is a multilaboratory research effort conducted by the US Department of Energy as part of the National Acid Precipitation Assessment Program (NAPAP). The primary objective of PRECP is to provide essential information for the quantitative description of chemical wet deposition as a function of air pollution loadings, geographic location, and atmospheric processing. The program is broken into four closely interrelated sectors: Diagnostic Modeling, Field Measurements, Laboratory Measurements, and Climatological Evaluation. The data management tasks are to: compile databases of the data collected in field studies; verify the contents of data sets; make data available to program participants either on-line or by means of computer tapes; perform requested analyses, graphical displays, and data aggregations; provide an index of what data are available; and provide documentation for field programs, both as part of the computer database and as data reports.
ESTree db: a Tool for Peach Functional Genomics
Lazzari, Barbara; Caprera, Andrea; Vecchietti, Alberto; Stella, Alessandra; Milanesi, Luciano; Pozzi, Carlo
2005-01-01
Background The ESTree db represents a collection of Prunus persica expressed sequence tags (ESTs) and is intended as a resource for peach functional genomics. A total of 6,155 successful EST sequences were obtained from four in-house prepared cDNA libraries from Prunus persica mesocarps at different developmental stages. Another 12,475 peach EST sequences were downloaded from public databases and added to the ESTree db. An automated pipeline was prepared to process EST sequences using public software integrated with in-house developed Perl scripts, and data were collected in a MySQL database. A PHP-based web interface was developed to query the database. Results The ESTree db version as of April 2005 encompasses 18,630 sequences representing eight libraries. Contig assembly was performed with CAP3. Putative single nucleotide polymorphism (SNP) detection was performed with the AutoSNP program, and a search engine was implemented to retrieve results. All the sequences and all the contig consensus sequences were annotated both with blastx against the GenBank nr db and with GOblet against the viridiplantae section of the Gene Ontology db. Links to NiceZyme (Expasy) and to the KEGG metabolic pathways were provided. A local BLAST utility is available. A text search utility allows querying and browsing the database. Statistics on Gene Ontology occurrences were provided to assign sequences to Gene Ontology categories. Conclusion The resulting database is a comprehensive resource of data and links related to peach EST sequences. The Sequence Report and Contig Report pages work as the web interface core structures, giving quick access to data related to each sequence/contig. PMID:16351742
Secure, web-accessible call rosters for academic radiology departments.
Nguyen, A V; Tellis, W M; Avrin, D E
2000-05-01
Traditionally, radiology department call rosters have been posted via paper and bulletin boards. Frequently, changes to these lists are made by multiple people independently, but often not synchronized, resulting in confusion among the house staff and technical staff as to who is on call and when. In addition, multiple and disparate copies exist in different sections of the department, and changes made would not be propagated to all the schedules. To eliminate such difficulties, a paperless call scheduling application was developed. Our call scheduling program allowed Java-enabled web access to a database by designated personnel from each radiology section who have privileges to make the necessary changes. Once a person made a change, everyone accessing the database would see the modification. This eliminates the chaos resulting from people swapping shifts at the last minute and not having the time to record or broadcast the change. Furthermore, all changes to the database were logged. Users are given a log-in name and password and can only edit their section; however, all personnel have access to all sections' schedules. Our applet was written in Java 2 using the latest technology in database access. We access our Interbase database through the DataExpress and DB Swing (Borland, Scotts Valley, CA) components. The result is secure access to the call rosters via the web. There are many advantages to the web-enabled access, mainly the ability for people to make changes and have the changes recorded and propagated in a single virtual location and available to all who need to know.
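The two properties the abstract stresses, a single shared roster and a log of every change, can be sketched as follows. The schema and names are illustrative assumptions, not the authors' Java/Interbase implementation.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE roster (day TEXT, section TEXT, person TEXT,
                         PRIMARY KEY (day, section));
    CREATE TABLE changelog (day TEXT, section TEXT,
                            old_person TEXT, new_person TEXT);
    INSERT INTO roster VALUES ('2000-05-01', 'neuro', 'Dr. A');
""")

def reassign(day, section, new_person):
    """Swap a shift in the single shared roster and log the change."""
    (old,) = conn.execute(
        "SELECT person FROM roster WHERE day=? AND section=?",
        (day, section)).fetchone()
    conn.execute("UPDATE roster SET person=? WHERE day=? AND section=?",
                 (new_person, day, section))
    conn.execute("INSERT INTO changelog VALUES (?, ?, ?, ?)",
                 (day, section, old, new_person))
    conn.commit()

reassign('2000-05-01', 'neuro', 'Dr. B')
on_call = conn.execute("SELECT person FROM roster").fetchone()[0]
logged = conn.execute("SELECT old_person FROM changelog").fetchone()[0]
print(on_call, logged)  # current assignee and the person swapped out
```

Because there is one database rather than paper copies per section, everyone querying the roster sees the swap immediately, and the log answers "who changed what" after the fact.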
12 CFR 517.1 - Purpose and scope.
Code of Federal Regulations, 2014 CFR
2014-01-01
..., customized training, relocation services, information systems technology (computer systems, database... Businesses Outreach Program (Outreach Program) is to ensure that firms owned and operated by minorities...
12 CFR 517.1 - Purpose and scope.
Code of Federal Regulations, 2013 CFR
2013-01-01
..., customized training, relocation services, information systems technology (computer systems, database... Businesses Outreach Program (Outreach Program) is to ensure that firms owned and operated by minorities...
NASA Technical Reports Server (NTRS)
Freeman, Delman C., Jr.; Reubush, Daivd E.; McClinton, Charles R.; Rausch, Vincent L.; Crawford, J. Larry
1997-01-01
This paper provides an overview of NASA's Hyper-X Program; a focused hypersonic technology effort designed to move hypersonic, airbreathing vehicle technology from the laboratory environment to the flight environment. This paper presents an overview of the flight test program, research objectives, approach, schedule and status. Substantial experimental database and concept validation have been completed. The program is currently concentrating on the first, Mach 7, vehicle development, verification and validation in preparation for wind-tunnel testing in 1998 and flight testing in 1999. Parallel to this effort the Mach 5 and 10 vehicle designs are being finalized. Detailed analytical and experimental evaluation of the Mach 7 vehicle at the flight conditions is nearing completion, and will provide a database for validation of design methods once flight test data are available.
Relational Database Technology: An Overview.
ERIC Educational Resources Information Center
Melander, Nicole
1987-01-01
Describes the development of relational database technology as it applies to educational settings. Discusses some of the new tools and models being implemented in an effort to provide educators with technologically advanced ways of answering questions about education programs and data. (TW)
75 FR 61765 - Agency Information Collection Activities: Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-06
... September 17, 2010 (FR Doc. 201-023260), on page 57037, regarding the Black Lung Clinics Program Database... hours Database 15 1 15 20 300 Dated: September 28, 2010. Sahira Rafiullah, Director, Division of Policy...
Aging assessment of large electric motors in nuclear power plants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Villaran, M.; Subudhi, M.
1996-03-01
Large electric motors serve as the prime movers to drive high capacity pumps, fans, compressors, and generators in a variety of nuclear plant systems. This study examined the stressors that cause degradation and aging in large electric motors operating in various plant locations and environments. The operating history of these machines in nuclear plant service was studied by review and analysis of failure reports in the NPRDS and LER databases. This was supplemented by a review of motor designs, and their nuclear and balance-of-plant applications, in order to characterize the failure mechanisms that cause degradation, aging, and failure in large electric motors. A generic failure modes and effects analysis for large squirrel cage induction motors was performed to identify the degradation and aging mechanisms affecting various components of these large motors, the failure modes that result, and their effects upon the function of the motor. The effects of large motor failures upon the systems in which they are operating, and on the plant as a whole, were analyzed from failure reports in the databases. The effectiveness of the industry's large motor maintenance programs was assessed based upon the failure reports in the databases and reviews of plant maintenance procedures and programs.
NASA Astrophysics Data System (ADS)
Parviainen, Ville; Joenväärä, Sakari; Peltoniemi, Hannu; Mattila, Pirkko; Renkonen, Risto
2009-04-01
Mass spectrometry-based proteomics has become one of the main methods in protein-protein interaction research. Several high-throughput studies have established an interaction landscape of exponentially growing baker's yeast cultures. However, many protein-protein interactions are likely to change under different environmental conditions. In order to examine the dynamic nature of protein interactions, we isolated the protein complexes of the mannose-1-phosphate guanyltransferase PSA1 from Saccharomyces cerevisiae at four different time points during batch cultivation. We used the tandem affinity purification (TAP) method to purify the complexes and subjected the tryptic peptides to LC-MS/MS. The resulting peak lists were analyzed with two different methods: the database-dependent protein identification program X!Tandem and the de novo sequencing program Lutefisk. We observed significant changes in the interactome of PSA1 during the batch cultivation and identified altogether 74 proteins interacting with PSA1, of which only six were found to interact at all time points. All the other proteins showed a more dynamic binding activity. In this study we also demonstrate the benefit of using both database-dependent and de novo methods in protein interaction research to enhance both the quality and the quantity of observations.
FOUNTAIN: A JAVA open-source package to assist large sequencing projects
Buerstedde, Jean-Marie; Prill, Florian
2001-01-01
Background Better automation, lower cost per reaction, and a heightened interest in comparative genomics have led to a dramatic increase in DNA sequencing activities. Although the large sequencing projects of specialized centers are supported by in-house bioinformatics groups, many smaller laboratories face difficulties managing the appropriate processing and storage of their sequencing output. The challenges include documentation of clones, templates, and sequencing reactions, and the storage, annotation, and analysis of the large number of generated sequences. Results We describe here a new program, named FOUNTAIN, for the management of large sequencing projects. FOUNTAIN uses the JAVA computer language and data storage in a relational database. Starting with a collection of sequencing objects (clones), the program generates and stores information related to the different stages of the sequencing project, using a web browser interface for user input. The generated sequences are subsequently imported and annotated based on BLAST searches against the public databases. In addition, simple algorithms to cluster sequences and determine putative polymorphic positions are implemented. Conclusions A simple but flexible and scalable software package is presented to facilitate data generation and storage for large sequencing projects. FOUNTAIN is open source and largely platform- and database-independent, and we wish it to be improved and extended in a community effort. PMID:11591214
NASA Astrophysics Data System (ADS)
Choi, Sang-Hwa; Kim, Sung Dae; Park, Hyuk Min; Lee, SeungHa
2016-04-01
We established and have operated an integrated data system for managing, archiving, and sharing marine geology and geophysics data around Korea, produced from various research projects and programs at the Korea Institute of Ocean Science & Technology (KIOST). First of all, to keep the data system consistent through continuous data updates, we set up standard operating procedures (SOPs) for data archiving, data processing and conversion, data quality control, data uploading, database maintenance, etc. The system comprises two databases: ARCHIVE DB, which stores archived data in the original forms and formats supplied by data providers, and GIS DB, which manages all other compiled, processed, and reproduced data and information for data services and GIS application services. Oracle 11g was adopted as the relational database management system, and open source GIS techniques were applied for GIS services: OpenLayers for the user interface, GeoServer for the application server, and PostGIS on PostgreSQL for the GIS database. For the sake of convenient use of geophysical data in SEG Y format, a viewer program was developed and embedded in this system. Users can search data through the GIS user interface and save the results as a report.
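The spatial search at the heart of such a GIS interface reduces to a bounding-box query. The sketch below uses plain SQLite and invented survey names and coordinates as a stand-in for the system's PostGIS/GeoServer stack, which would use true spatial indexes rather than plain `BETWEEN` filters.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE surveys (name TEXT, lat REAL, lon REAL);
    INSERT INTO surveys VALUES ('line-01', 34.5, 128.2),
                               ('line-02', 37.1, 129.9),
                               ('line-03', 33.2, 126.5);
""")
# Bounding box a user might draw on the map: lat 33-35 N, lon 126-129 E.
bbox = (33.0, 35.0, 126.0, 129.0)
hits = conn.execute(
    "SELECT name FROM surveys "
    "WHERE lat BETWEEN ? AND ? AND lon BETWEEN ? AND ? ORDER BY name",
    bbox).fetchall()
found = [n for (n,) in hits]
print(found)  # ['line-01', 'line-03']
```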
Periodic inventory system in cafeteria using linear programming
NASA Astrophysics Data System (ADS)
Usop, Mohd Fais; Ishak, Ruzana; Hamdan, Ahmad Ridhuan
2017-11-01
Inventory management is an important factor in running a business and plays a big role in managing the stock of a cafeteria. If inventories are not managed wisely, the profit of the cafeteria suffers. The purpose of this study is therefore to find a solution for inventory management in cafeterias. Most cafeterias in Malaysia do not manage their stock well. This study proposes a database system for inventory management and develops an inventory model for cafeteria management. A new database system to improve stock management on a weekly basis is provided, using a linear programming model to obtain the optimal range of inventory needed for selected categories. Data collected using the periodic inventory system at the end of each week over a three-month period were analyzed using the Food Stock-take Database. The inventory model was developed from the collected data according to the inventory categories in the cafeteria. Results showed the effectiveness of using the periodic inventory system, which will be very helpful to cafeteria management in organizing inventory. Moreover, the findings of this study can reduce the cost of operation and increase profit.
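The periodic-review bookkeeping described above can be sketched as a simple order-up-to computation. All category names, targets, and counts below are invented for illustration; the study's actual linear programming model, which would choose the target levels themselves, is not reproduced here.

```python
def weekly_orders(target_levels, counted_stock):
    """After the end-of-week count, order enough of each category to
    restore its target level; never order a negative quantity."""
    return {cat: max(0, target_levels[cat] - counted_stock.get(cat, 0))
            for cat in target_levels}

# Hypothetical targets (e.g. from an LP model) and an end-of-week count.
targets = {"rice_kg": 50, "oil_l": 20, "eggs_tray": 15}
counted = {"rice_kg": 12, "oil_l": 25, "eggs_tray": 4}
orders = weekly_orders(targets, counted)
print(orders)  # {'rice_kg': 38, 'oil_l': 0, 'eggs_tray': 11}
```

Note the oil category: the count exceeds the target, so nothing is ordered, which is exactly the overstocking the study aims to avoid.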
MetNetAPI: A flexible method to access and manipulate biological network data from MetNet
2010-01-01
Background Convenient programmatic access to different biological databases allows automated integration of scientific knowledge. Many databases support a function to download files or data snapshots, or a webservice that offers "live" data. However, the functionality that a database offers cannot be represented in a static data download file, and webservices may consume considerable computational resources from the host server. Results MetNetAPI is a versatile Application Programming Interface (API) to the MetNetDB database. It abstracts, captures and retains operations away from a biological network repository and website. A range of database functions, previously only available online, can be immediately (and independently from the website) applied to a dataset of interest. Data is available in four layers: molecular entities, localized entities (linked to a specific organelle), interactions, and pathways. Navigation between these layers is intuitive (e.g. one can request the molecular entities in a pathway, as well as request in what pathways a specific entity participates). Data retrieval can be customized: Network objects allow the construction of new and integration of existing pathways and interactions, which can be uploaded back to our server. In contrast to webservices, the computational demand on the host server is limited to processing data-related queries only. Conclusions An API provides several advantages to a systems biology software platform. MetNetAPI illustrates an interface with a central repository of data that represents the complex interrelationships of a metabolic and regulatory network. As an alternative to data-dumps and webservices, it allows access to a current and "live" database and exposes analytical functions to application developers. Yet it only requires limited resources on the server-side (thin server/fat client setup). 
The API is available for Java, Microsoft .NET and R programming environments and offers flexible query and broad data-retrieval methods. Data retrieval can be customized to client needs and the API offers a framework to construct and manipulate user-defined networks. The design principles can be used as a template to build programmable interfaces for other biological databases. The API software and tutorials are available at http://www.metnetonline.org/api. PMID:21083943
Teleeducation and telepathology for open and distance education.
Szymas, J
2000-01-01
Our experience in creating and using a telepathology system and multimedia database for education is described. This program package currently runs in the Department of Pathology of the University Medical School in Poznan. It is used for self-education, tests, services, and examinations in pathology, i.e., for dental students and for medical students, in terms of self-education and individual examination services. The system is implemented on IBM PC-compatible microcomputers and works in the Netware 5.1 network system. Some modules are available through the Internet. The program package described here comprises the TELEMIC system for telepathology, ASSISTANT, which administers the databases, and EXAMINATOR, which is the executive program. The multi-user module allows students to work in several working areas on different, randomly chosen sets of problems at the same time. The possibility of working in exercise mode with image files and questions is an attractive way of self-education. The standard format of the notation files enables the results to be processed by commercial statistics packages, in order to estimate the scale of answers and to find correlations between the obtained results. The method of multi-criterion grading excludes unlimited mutual compensation of the criteria, differentiates the importance of particular courses, and introduces quality criteria. The package is part of the integrated management information system of the department of pathology. Applications to other telepathology systems are presented.